zerocopy/
lib.rs

// Copyright 2018 The Fuchsia Authors
//
// Licensed under the 2-Clause BSD License <LICENSE-BSD or
// https://opensource.org/license/bsd-2-clause>, Apache License, Version 2.0
// <LICENSE-APACHE or https://www.apache.org/licenses/LICENSE-2.0>, or the MIT
// license <LICENSE-MIT or https://opensource.org/licenses/MIT>, at your option.
// This file may not be copied, modified, or distributed except according to
// those terms.

// After updating the following doc comment, make sure to run the following
// command to update `README.md` based on its contents:
//
//   cargo -q run --manifest-path tools/Cargo.toml -p generate-readme > README.md

//! *<span style="font-size: 100%; color:grey;">Need more out of zerocopy?
//! Submit a [customer request issue][customer-request-issue]!</span>*
//!
//! ***<span style="font-size: 140%">Fast, safe, <span
//! style="color:red;">compile error</span>. Pick two.</span>***
//!
//! Zerocopy makes zero-cost memory manipulation effortless. We write `unsafe`
//! so you don't have to.
//!
//! *Thanks for using zerocopy 0.8! For an overview of what's changed from 0.7,
//! check out our [release notes][release-notes], which include a step-by-step
//! guide for upgrading from 0.7.*
//!
//! *Have questions? Need help? Ask the maintainers on [GitHub][github-q-a] or
//! on [Discord][discord]!*
//!
//! [customer-request-issue]: https://github.com/google/zerocopy/issues/new/choose
//! [release-notes]: https://github.com/google/zerocopy/discussions/1680
//! [github-q-a]: https://github.com/google/zerocopy/discussions/categories/q-a
//! [discord]: https://discord.gg/MAvWH2R6zk
//!
//! # Overview
//!
//! ##### Conversion Traits
//!
//! Zerocopy provides four derivable traits for zero-cost conversions:
//! - [`TryFromBytes`] indicates that a type may safely be converted from
//!   certain byte sequences (conditional on runtime checks)
//! - [`FromZeros`] indicates that a sequence of zero bytes represents a valid
//!   instance of a type
//! - [`FromBytes`] indicates that a type may safely be converted from an
//!   arbitrary byte sequence
//! - [`IntoBytes`] indicates that a type may safely be converted *to* a byte
//!   sequence
//!
//! These traits support sized types, slices, and [slice DSTs][slice-dsts].
//!
//! [slice-dsts]: KnownLayout#dynamically-sized-types
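//!
//! For example, a type deriving these traits can be reinterpreted to and from
//! raw bytes without copying (a minimal sketch; `PacketHeader` and its fields
//! are illustrative, not part of zerocopy's API):
//!
//! ```
//! use zerocopy::{FromBytes, IntoBytes};
//! # use zerocopy_derive::*;
//!
//! #[derive(FromBytes, IntoBytes, Immutable, KnownLayout)]
//! #[repr(C)]
//! struct PacketHeader {
//!     src_port: [u8; 2],
//!     dst_port: [u8; 2],
//! }
//!
//! let bytes = [0u8, 80, 1, 187];
//! // Zero-copy parse: view `bytes` as a `PacketHeader` reference.
//! let header = PacketHeader::ref_from_bytes(&bytes[..]).unwrap();
//! assert_eq!(header.src_port, [0, 80]);
//! // Zero-copy serialize: view the header as its underlying bytes.
//! assert_eq!(header.as_bytes(), &bytes[..]);
//! ```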
//!
//! ##### Marker Traits
//!
//! Zerocopy provides three derivable marker traits that do not provide any
//! functionality themselves, but are required to call certain methods provided
//! by the conversion traits:
//! - [`KnownLayout`] indicates that zerocopy can reason about certain layout
//!   qualities of a type
//! - [`Immutable`] indicates that a type is free from interior mutability,
//!   except by ownership or an exclusive (`&mut`) borrow
//! - [`Unaligned`] indicates that a type's alignment requirement is 1
//!
//! You should generally derive these marker traits whenever possible.
//!
//! ##### Conversion Macros
//!
//! Zerocopy provides six macros for safe casting between types:
//!
//! - ([`try_`][try_transmute])[`transmute`] (conditionally) converts a value of
//!   one type to a value of another type of the same size
//! - ([`try_`][try_transmute_mut])[`transmute_mut`] (conditionally) converts a
//!   mutable reference of one type to a mutable reference of another type of
//!   the same size
//! - ([`try_`][try_transmute_ref])[`transmute_ref`] (conditionally) converts a
//!   mutable or immutable reference of one type to an immutable reference of
//!   another type of the same size
//!
//! These macros perform *compile-time* size and alignment checks, meaning that
//! unconditional casts have zero cost at runtime. Conditional casts do not need
//! to validate size or alignment at runtime, but do need to validate contents.
//!
//! These macros cannot be used in generic contexts. For generic conversions,
//! use the methods defined by the [conversion traits](#conversion-traits).
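//!
//! For instance, `transmute!` reinterprets a value as another type of the same
//! size, with the size equality checked at compile time (a minimal sketch):
//!
//! ```
//! use zerocopy::transmute;
//!
//! let two_dimensional: [[u8; 4]; 2] = [[0, 1, 2, 3], [4, 5, 6, 7]];
//! // Same size (8 bytes), so this compiles; a size mismatch would not.
//! let one_dimensional: [u8; 8] = transmute!(two_dimensional);
//! assert_eq!(one_dimensional, [0, 1, 2, 3, 4, 5, 6, 7]);
//! ```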
//!
//! ##### Byteorder-Aware Numerics
//!
//! Zerocopy provides byte-order aware integer types that support these
//! conversions; see the [`byteorder`] module. These types are especially useful
//! for network parsing.
//!
//! # Cargo Features
//!
//! - **`alloc`**
//!   By default, `zerocopy` is `no_std`. When the `alloc` feature is enabled,
//!   the `alloc` crate is added as a dependency, and some allocation-related
//!   functionality is added.
//!
//! - **`std`**
//!   By default, `zerocopy` is `no_std`. When the `std` feature is enabled, the
//!   `std` crate is added as a dependency (i.e., `no_std` is disabled), and
//!   support for some `std` types is added. `std` implies `alloc`.
//!
//! - **`derive`**
//!   Provides derives for the core marker traits via the `zerocopy-derive`
//!   crate. These derives are re-exported from `zerocopy`, so it is not
//!   necessary to depend on `zerocopy-derive` directly.
//!
//!   However, you may experience better compile times if you instead directly
//!   depend on both `zerocopy` and `zerocopy-derive` in your `Cargo.toml`,
//!   since doing so will allow Rust to compile these crates in parallel. To do
//!   so, do *not* enable the `derive` feature, and list both dependencies in
//!   your `Cargo.toml` with the same leading non-zero version number; e.g.:
//!
//!   ```toml
//!   [dependencies]
//!   zerocopy = "0.X"
//!   zerocopy-derive = "0.X"
//!   ```
//!
//!   To avoid the risk of [duplicate import errors][duplicate-import-errors] if
//!   one of your dependencies enables zerocopy's `derive` feature, import
//!   derives as `use zerocopy_derive::*` rather than by name (e.g., `use
//!   zerocopy_derive::FromBytes`).
//!
//! - **`simd`**
//!   When the `simd` feature is enabled, `FromZeros`, `FromBytes`, and
//!   `IntoBytes` impls are emitted for all stable SIMD types which exist on the
//!   target platform. Note that the layout of SIMD types is not yet stabilized,
//!   so these impls may be removed in the future if layout changes make them
//!   invalid. For more information, see the Unsafe Code Guidelines Reference
//!   page on the [layout of packed SIMD vectors][simd-layout].
//!
//! - **`simd-nightly`**
//!   Enables the `simd` feature and adds support for SIMD types which are only
//!   available on nightly. Since these types are unstable, support for any type
//!   may be removed at any point in the future.
//!
//! - **`float-nightly`**
//!   Adds support for the unstable `f16` and `f128` types. These types are
//!   not yet fully implemented and may not be supported on all platforms.
//!
//! [duplicate-import-errors]: https://github.com/google/zerocopy/issues/1587
//! [simd-layout]: https://rust-lang.github.io/unsafe-code-guidelines/layout/packed-simd-vectors.html
//!
//! # Security Ethos
//!
//! Zerocopy is expressly designed for use in security-critical contexts. We
//! strive to ensure that zerocopy code is sound under Rust's current memory
//! model, and *any future memory model*. We ensure this by:
//! - **...not 'guessing' about Rust's semantics.**
//!   We annotate `unsafe` code with a precise rationale for its soundness that
//!   cites a relevant section of Rust's official documentation. When Rust's
//!   documented semantics are unclear, we work with the Rust Operational
//!   Semantics Team to clarify Rust's documentation.
//! - **...rigorously testing our implementation.**
//!   We run tests using [Miri], ensuring that zerocopy is sound across a wide
//!   array of supported target platforms of varying endianness and pointer
//!   width, and across both current and experimental memory models of Rust.
//! - **...formally proving the correctness of our implementation.**
//!   We apply formal verification tools like [Kani][kani] to prove zerocopy's
//!   correctness.
//!
//! For more information, see our full [soundness policy].
//!
//! [Miri]: https://github.com/rust-lang/miri
//! [Kani]: https://github.com/model-checking/kani
//! [soundness policy]: https://github.com/google/zerocopy/blob/main/POLICIES.md#soundness
//!
//! # Relationship to Project Safe Transmute
//!
//! [Project Safe Transmute] is an official initiative of the Rust Project to
//! develop language-level support for safer transmutation. The Project consults
//! with crates like zerocopy to identify aspects of safer transmutation that
//! would benefit from compiler support, and has developed an [experimental,
//! compiler-supported analysis][mcp-transmutability] which determines whether,
//! for a given type, any value of that type may be soundly transmuted into
//! another type. Once this functionality is sufficiently mature, zerocopy
//! intends to replace its internal transmutability analysis (implemented by our
//! custom derives) with the compiler-supported one. This change will likely be
//! an implementation detail that is invisible to zerocopy's users.
//!
//! Project Safe Transmute will not replace the need for most of zerocopy's
//! higher-level abstractions. The experimental compiler analysis is a tool for
//! checking the soundness of `unsafe` code, not a tool to avoid writing
//! `unsafe` code altogether. For the foreseeable future, crates like zerocopy
//! will still be required in order to provide higher-level abstractions on top
//! of the building block provided by Project Safe Transmute.
//!
//! [Project Safe Transmute]: https://rust-lang.github.io/rfcs/2835-project-safe-transmute.html
//! [mcp-transmutability]: https://github.com/rust-lang/compiler-team/issues/411
//!
//! # MSRV
//!
//! See our [MSRV policy].
//!
//! [MSRV policy]: https://github.com/google/zerocopy/blob/main/POLICIES.md#msrv
//!
//! # Changelog
//!
//! Zerocopy uses [GitHub Releases].
//!
//! [GitHub Releases]: https://github.com/google/zerocopy/releases
//!
//! # Thanks
//!
//! Zerocopy is maintained by engineers at Google and Amazon with help from
//! [many wonderful contributors][contributors]. Thank you to everyone who has
//! lent a hand in making Rust a little more secure!
//!
//! [contributors]: https://github.com/google/zerocopy/graphs/contributors

// Sometimes we want to use lints which were added after our MSRV.
// `unknown_lints` is `warn` by default and we deny warnings in CI, so without
// this attribute, any unknown lint would cause a CI failure when testing with
// our MSRV.
#![allow(unknown_lints, non_local_definitions, unreachable_patterns)]
#![deny(renamed_and_removed_lints)]
#![deny(
    anonymous_parameters,
    deprecated_in_future,
    late_bound_lifetime_arguments,
    missing_copy_implementations,
    missing_debug_implementations,
    missing_docs,
    path_statements,
    patterns_in_fns_without_body,
    rust_2018_idioms,
    trivial_numeric_casts,
    unreachable_pub,
    unsafe_op_in_unsafe_fn,
    unused_extern_crates,
    // We intentionally choose not to deny `unused_qualifications`. When items
    // are added to the prelude (e.g., `core::mem::size_of`), this has the
    // consequence of making some uses trigger this lint on the latest toolchain
    // (e.g., `mem::size_of`), but fixing it (e.g. by replacing with `size_of`)
    // does not work on older toolchains.
    //
    // We tested a more complicated fix in #1413, but ultimately decided that,
    // since this lint is just a minor style lint, the complexity isn't worth it
    // - it's fine to occasionally have unused qualifications slip through,
    // especially since these do not affect our user-facing API in any way.
    variant_size_differences
)]
#![cfg_attr(
    __ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS,
    deny(fuzzy_provenance_casts, lossy_provenance_casts)
)]
#![deny(
    clippy::all,
    clippy::alloc_instead_of_core,
    clippy::arithmetic_side_effects,
    clippy::as_underscore,
    clippy::assertions_on_result_states,
    clippy::as_conversions,
    clippy::correctness,
    clippy::dbg_macro,
    clippy::decimal_literal_representation,
    clippy::double_must_use,
    clippy::get_unwrap,
    clippy::indexing_slicing,
    clippy::missing_inline_in_public_items,
    clippy::missing_safety_doc,
    clippy::must_use_candidate,
    clippy::must_use_unit,
    clippy::obfuscated_if_else,
    clippy::perf,
    clippy::print_stdout,
    clippy::return_self_not_must_use,
    clippy::std_instead_of_core,
    clippy::style,
    clippy::suspicious,
    clippy::todo,
    clippy::undocumented_unsafe_blocks,
    clippy::unimplemented,
    clippy::unnested_or_patterns,
    clippy::unwrap_used,
    clippy::use_debug
)]
// `clippy::incompatible_msrv` (implied by `clippy::suspicious`): This sometimes
// has false positives, and we test on our MSRV in CI, so it doesn't help us
// anyway.
#![allow(clippy::needless_lifetimes, clippy::type_complexity, clippy::incompatible_msrv)]
#![deny(
    rustdoc::bare_urls,
    rustdoc::broken_intra_doc_links,
    rustdoc::invalid_codeblock_attributes,
    rustdoc::invalid_html_tags,
    rustdoc::invalid_rust_codeblocks,
    rustdoc::missing_crate_level_docs,
    rustdoc::private_intra_doc_links
)]
// In test code, it makes sense to weight more heavily towards concise, readable
// code over correct or debuggable code.
#![cfg_attr(any(test, kani), allow(
    // In tests, you get line numbers and have access to source code, so panic
    // messages are less important. You also often unwrap a lot, which would
    // make expect'ing instead very verbose.
    clippy::unwrap_used,
    // In tests, there's no harm to "panic risks" - the worst that can happen is
    // that your test will fail, and you'll fix it. By contrast, panic risks in
    // production code introduce the possibility of code panicking unexpectedly
    // "in the field".
    clippy::arithmetic_side_effects,
    clippy::indexing_slicing,
))]
#![cfg_attr(not(any(test, kani, feature = "std")), no_std)]
#![cfg_attr(
    all(feature = "simd-nightly", any(target_arch = "x86", target_arch = "x86_64")),
    feature(stdarch_x86_avx512)
)]
#![cfg_attr(
    all(feature = "simd-nightly", target_arch = "arm"),
    feature(stdarch_arm_dsp, stdarch_arm_neon_intrinsics)
)]
#![cfg_attr(
    all(feature = "simd-nightly", any(target_arch = "powerpc", target_arch = "powerpc64")),
    feature(stdarch_powerpc)
)]
#![cfg_attr(feature = "float-nightly", feature(f16, f128))]
#![cfg_attr(doc_cfg, feature(doc_cfg))]
#![cfg_attr(
    __ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS,
    feature(layout_for_ptr, coverage_attribute)
)]

// This is a hack to allow zerocopy-derive derives to work in this crate. They
// assume that zerocopy is linked as an extern crate, so they access items from
// it as `zerocopy::Xxx`. This makes that still work.
#[cfg(any(feature = "derive", test))]
extern crate self as zerocopy;

#[doc(hidden)]
#[macro_use]
pub mod util;

pub mod byte_slice;
pub mod byteorder;
mod deprecated;
// This module is `pub` so that zerocopy's error types and error handling
// documentation is grouped together in a cohesive module. In practice, we
// expect most users to use the re-export of `error`'s items to avoid identifier
// stuttering.
pub mod error;
mod impls;
#[doc(hidden)]
pub mod layout;
mod macros;
#[doc(hidden)]
pub mod pointer;
mod r#ref;
// TODO(#252): If we make this pub, come up with a better name.
mod wrappers;

pub use crate::byte_slice::*;
pub use crate::byteorder::*;
pub use crate::error::*;
pub use crate::r#ref::*;
pub use crate::wrappers::*;

use core::{
    cell::{Cell, UnsafeCell},
    cmp::Ordering,
    fmt::{self, Debug, Display, Formatter},
    hash::Hasher,
    marker::PhantomData,
    mem::{self, ManuallyDrop, MaybeUninit as CoreMaybeUninit},
    num::{
        NonZeroI128, NonZeroI16, NonZeroI32, NonZeroI64, NonZeroI8, NonZeroIsize, NonZeroU128,
        NonZeroU16, NonZeroU32, NonZeroU64, NonZeroU8, NonZeroUsize, Wrapping,
    },
    ops::{Deref, DerefMut},
    ptr::{self, NonNull},
    slice,
};

#[cfg(feature = "std")]
use std::io;

use crate::pointer::invariant::{self, BecauseExclusive};

#[cfg(any(feature = "alloc", test, kani))]
extern crate alloc;
#[cfg(any(feature = "alloc", test))]
use alloc::{boxed::Box, vec::Vec};
use util::MetadataOf;

#[cfg(any(feature = "alloc", test))]
use core::alloc::Layout;

// Used by `TryFromBytes::is_bit_valid`.
#[doc(hidden)]
pub use crate::pointer::{invariant::BecauseImmutable, Maybe, Ptr};
// Used by `KnownLayout`.
#[doc(hidden)]
pub use crate::layout::*;

// For each trait polyfill, as soon as the corresponding feature is stable, the
// polyfill import will be unused because method/function resolution will prefer
// the inherent method/function over a trait method/function. Thus, we suppress
// the `unused_imports` warning.
//
// See the documentation on `util::polyfills` for more information.
#[allow(unused_imports)]
use crate::util::polyfills::{self, NonNullExt as _, NumExt as _};

#[rustversion::nightly]
#[cfg(all(test, not(__ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS)))]
const _: () = {
    #[deprecated = "some tests may be skipped due to missing RUSTFLAGS=\"--cfg __ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS\""]
    const _WARNING: () = ();
    #[warn(deprecated)]
    _WARNING
};

// These exist so that code which was written against the old names will get
// less confusing error messages when they upgrade to a more recent version of
// zerocopy. On our MSRV toolchain, the error messages read, for example:
//
//   error[E0603]: trait `FromZeroes` is private
//       --> examples/deprecated.rs:1:15
//        |
//   1    | use zerocopy::FromZeroes;
//        |               ^^^^^^^^^^ private trait
//        |
//   note: the trait `FromZeroes` is defined here
//       --> /Users/josh/workspace/zerocopy/src/lib.rs:1845:5
//        |
//   1845 | use FromZeros as FromZeroes;
//        |     ^^^^^^^^^^^^^^^^^^^^^^^
//
// The "note" provides enough context to make it easy to figure out how to fix
// the error.
#[allow(unused)]
use {FromZeros as FromZeroes, IntoBytes as AsBytes, Ref as LayoutVerified};

/// Implements [`KnownLayout`].
///
/// This derive analyzes various aspects of a type's layout that are needed for
/// some of zerocopy's APIs. It can be applied to structs, enums, and unions;
/// e.g.:
///
/// ```
/// # use zerocopy_derive::KnownLayout;
/// #[derive(KnownLayout)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(KnownLayout)]
/// enum MyEnum {
/// #   V00,
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(KnownLayout)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// # Limitations
///
/// This derive cannot currently be applied to unsized structs without an
/// explicit `repr` attribute.
///
/// Some invocations of this derive run afoul of a [known bug] in Rust's type
/// privacy checker. For example, this code:
///
/// ```compile_fail,E0446
/// use zerocopy::*;
/// # use zerocopy_derive::*;
///
/// #[derive(KnownLayout)]
/// #[repr(C)]
/// pub struct PublicType {
///     leading: Foo,
///     trailing: Bar,
/// }
///
/// #[derive(KnownLayout)]
/// struct Foo;
///
/// #[derive(KnownLayout)]
/// struct Bar;
/// ```
///
/// ...results in a compilation error:
///
/// ```text
/// error[E0446]: private type `Bar` in public interface
///  --> examples/bug.rs:3:10
///    |
/// 3  | #[derive(KnownLayout)]
///    |          ^^^^^^^^^^^ can't leak private type
/// ...
/// 14 | struct Bar;
///    | ---------- `Bar` declared as private
///    |
///    = note: this error originates in the derive macro `KnownLayout` (in Nightly builds, run with -Z macro-backtrace for more info)
/// ```
///
/// This issue arises when `#[derive(KnownLayout)]` is applied to `repr(C)`
/// structs whose trailing field type is less public than the enclosing struct.
///
/// To work around this, mark the trailing field type `pub` and annotate it with
/// `#[doc(hidden)]`; e.g.:
///
/// ```no_run
/// use zerocopy::*;
/// # use zerocopy_derive::*;
///
/// #[derive(KnownLayout)]
/// #[repr(C)]
/// pub struct PublicType {
///     leading: Foo,
///     trailing: Bar,
/// }
///
/// #[derive(KnownLayout)]
/// struct Foo;
///
/// #[doc(hidden)]
/// #[derive(KnownLayout)]
/// pub struct Bar; // <- `Bar` is now also `pub`
/// ```
///
/// [known bug]: https://github.com/rust-lang/rust/issues/45713
#[cfg(any(feature = "derive", test))]
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
pub use zerocopy_derive::KnownLayout;

/// Indicates that zerocopy can reason about certain aspects of a type's layout.
///
/// This trait is required by many of zerocopy's APIs. It supports sized types,
/// slices, and [slice DSTs](#dynamically-sized-types).
///
/// # Implementation
///
/// **Do not implement this trait yourself!** Instead, use
/// [`#[derive(KnownLayout)]`][derive]; e.g.:
///
/// ```
/// # use zerocopy_derive::KnownLayout;
/// #[derive(KnownLayout)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(KnownLayout)]
/// enum MyEnum {
/// #   V00,
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(KnownLayout)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// This derive performs a sophisticated analysis to deduce the layout
/// characteristics of types. You **must** implement this trait via the derive.
///
/// # Dynamically-sized types
///
/// `KnownLayout` supports slice-based dynamically sized types ("slice DSTs").
///
/// A slice DST is a type whose trailing field is either a slice or another
/// slice DST, rather than a type with fixed size. For example:
///
/// ```
/// #[repr(C)]
/// struct PacketHeader {
/// # /*
///     ...
/// # */
/// }
///
/// #[repr(C)]
/// struct Packet {
///     header: PacketHeader,
///     body: [u8],
/// }
/// ```
///
/// It can be useful to think of slice DSTs as a generalization of slices - in
/// other words, a normal slice is just the special case of a slice DST with
/// zero leading fields. In particular:
/// - Like slices, slice DSTs can have different lengths at runtime
/// - Like slices, slice DSTs cannot be passed by-value, but only by reference
///   or via other indirection such as `Box`
/// - Like slices, a reference (or `Box`, or other pointer type) to a slice DST
///   encodes the number of elements in the trailing slice field
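///
/// The zero-leading-field special case can be observed directly with ordinary
/// slices, whose references carry their element count as pointer metadata (a
/// minimal sketch using only `core`):
///
/// ```
/// use core::mem::size_of_val;
///
/// let short: &[u16] = &[1, 2];
/// let long: &[u16] = &[1, 2, 3, 4];
/// // The same unsized type, `[u16]`, has different sizes at runtime...
/// assert_eq!(size_of_val(short), 4);
/// assert_eq!(size_of_val(long), 8);
/// // ...because each reference encodes its trailing element count.
/// assert_eq!(long.len(), 4);
/// ```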
///
/// ## Slice DST layout
///
/// Just like other composite Rust types, the layout of a slice DST is not
/// well-defined unless it is specified using an explicit `#[repr(...)]`
/// attribute such as `#[repr(C)]`. [Other representations are
/// supported][reprs], but in this section, we'll use `#[repr(C)]` as our
/// example.
///
/// A `#[repr(C)]` slice DST is laid out [just like sized `#[repr(C)]`
/// types][repr-c-structs], but the presence of a variable-length field
/// introduces the possibility of *dynamic padding*. In particular, it may be
/// necessary to add trailing padding *after* the trailing slice field in order
/// to satisfy the outer type's alignment, and the amount of padding required
/// may be a function of the length of the trailing slice field. This is just a
/// natural consequence of the normal `#[repr(C)]` rules applied to slice DSTs,
/// but it can result in surprising behavior. For example, consider the
/// following type:
///
/// ```
/// #[repr(C)]
/// struct Foo {
///     a: u32,
///     b: u8,
///     z: [u16],
/// }
/// ```
///
/// Assuming that `u32` has alignment 4 (this is not true on all platforms),
/// then `Foo` has alignment 4 as well. Here is the smallest possible value for
/// `Foo`:
///
/// ```text
/// byte offset | 01234567
///       field | aaaab---
///                    ><
/// ```
///
/// In this value, `z` has length 0. Abiding by `#[repr(C)]`, the lowest offset
/// that we can place `z` at is 5, but since `z` has alignment 2, we need to
/// round up to offset 6. This means that there is one byte of padding between
/// `b` and `z`, then 0 bytes of `z` itself (denoted `><` in this diagram), and
/// then two bytes of padding after `z` in order to satisfy the overall
/// alignment of `Foo`. The size of this instance is 8 bytes.
///
/// What about if `z` has length 1?
///
/// ```text
/// byte offset | 01234567
///       field | aaaab-zz
/// ```
///
/// In this instance, `z` has length 1, and thus takes up 2 bytes. That means
/// that we no longer need padding after `z` in order to satisfy `Foo`'s
/// alignment. We've now seen two different values of `Foo` with two different
/// lengths of `z`, but they both have the same size - 8 bytes.
///
/// What about if `z` has length 2?
///
/// ```text
/// byte offset | 012345678901
///       field | aaaab-zzzz--
/// ```
///
/// Now `z` has length 2, and thus takes up 4 bytes. This brings our un-padded
/// size to 10, and so we now need another 2 bytes of padding after `z` to
/// satisfy `Foo`'s alignment.
///
/// Again, all of this is just a logical consequence of the `#[repr(C)]` rules
/// applied to slice DSTs, but it can be surprising that the amount of trailing
/// padding becomes a function of the trailing slice field's length, and thus
/// can only be computed at runtime.
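///
/// The size computation above can be reproduced in plain Rust (a minimal
/// sketch; the offset 6, element size 2, and alignment 4 are the values
/// derived for `Foo` above, not something queried from zerocopy):
///
/// ```
/// /// Size of a `Foo` with `len` trailing `u16` elements: the slice starts
/// /// at offset 6, each element is 2 bytes, and the total is rounded up to
/// /// `Foo`'s alignment of 4.
/// fn size_of_foo(len: usize) -> usize {
///     (6 + 2 * len + 3) / 4 * 4
/// }
///
/// assert_eq!(size_of_foo(0), 8);  // 1 byte of padding before `z`, 2 after
/// assert_eq!(size_of_foo(1), 8);  // no trailing padding needed
/// assert_eq!(size_of_foo(2), 12); // 2 bytes of trailing padding again
/// ```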
///
/// [reprs]: https://doc.rust-lang.org/reference/type-layout.html#representations
/// [repr-c-structs]: https://doc.rust-lang.org/reference/type-layout.html#reprc-structs
///
/// ## What is a valid size?
///
/// There are two places in zerocopy's API where we refer to "a valid size" of
/// a type. In normal casts or conversions, where the source is a byte slice,
/// we need to know whether the source byte slice is a valid size of the
/// destination type. In prefix or suffix casts, we need to know whether *there
/// exists* a valid size of the destination type which fits in the source byte
/// slice and, if so, what the largest such size is.
///
/// As outlined above, a slice DST's size is defined by the number of elements
/// in its trailing slice field. However, there is not necessarily a 1-to-1
/// mapping between trailing slice field length and overall size. As we saw in
/// the previous section with the type `Foo`, instances with both 0 and 1
/// elements in the trailing `z` field result in a `Foo` whose size is 8 bytes.
///
/// When we say "x is a valid size of `T`", we mean one of two things:
/// - If `T: Sized`, then we mean that `x == size_of::<T>()`
/// - If `T` is a slice DST, then we mean that there exists a `len` such that
///   the instance of `T` with `len` trailing slice elements has size `x`
///
/// When we say "largest possible size of `T` that fits in a byte slice", we
/// mean one of two things:
/// - If `T: Sized`, then we mean `size_of::<T>()` if the byte slice is at
///   least `size_of::<T>()` bytes long
/// - If `T` is a slice DST, then we mean to consider all values, `len`, such
///   that the instance of `T` with `len` trailing slice elements fits in the
///   byte slice, and to choose the largest such `len`, if any
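///
/// For a slice DST like `Foo`, that largest `len` can be computed directly (a
/// minimal sketch; it again hard-codes `Foo`'s slice offset of 6, element size
/// of 2, and alignment of 4 as derived above):
///
/// ```
/// /// Largest `len` such that a `Foo` with `len` trailing elements fits in
/// /// `bytes` bytes, or `None` if not even a zero-length `Foo` fits.
/// fn max_len_of_foo_in(bytes: usize) -> Option<usize> {
///     // Start from the most elements whose raw bytes fit after offset 6...
///     let mut len = bytes.checked_sub(6)? / 2;
///     // ...then shrink until the size, rounded up to alignment 4, fits too.
///     while (6 + 2 * len + 3) / 4 * 4 > bytes {
///         len = len.checked_sub(1)?;
///     }
///     Some(len)
/// }
///
/// assert_eq!(max_len_of_foo_in(5), None);     // the fixed prefix doesn't fit
/// assert_eq!(max_len_of_foo_in(8), Some(1));  // sizes 8 hold `len` 0 and 1
/// assert_eq!(max_len_of_foo_in(11), Some(1)); // `len == 2` needs 12 bytes
/// assert_eq!(max_len_of_foo_in(12), Some(3)); // 6 + 2 * 3 == 12 exactly
/// ```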
///
/// # Safety
///
/// This trait does not convey any safety guarantees to code outside this crate.
///
/// You must not rely on the `#[doc(hidden)]` internals of `KnownLayout`. Future
/// releases of zerocopy may make backwards-breaking changes to these items,
/// including changes that only affect soundness, which may cause code which
/// uses those items to silently become unsound.
///
#[cfg_attr(feature = "derive", doc = "[derive]: zerocopy_derive::KnownLayout")]
#[cfg_attr(
    not(feature = "derive"),
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.KnownLayout.html"),
)]
#[cfg_attr(
    zerocopy_diagnostic_on_unimplemented_1_78_0,
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(KnownLayout)]` to `{Self}`")
)]
pub unsafe trait KnownLayout {
    // The `Self: Sized` bound makes it so that `KnownLayout` can still be
    // object safe. It's not currently object safe thanks to `const LAYOUT`, and
    // it likely won't be in the future, but there's no reason not to be
    // forwards-compatible with object safety.
    #[doc(hidden)]
    fn only_derive_is_allowed_to_implement_this_trait()
    where
        Self: Sized;

    /// The type of metadata stored in a pointer to `Self`.
    ///
    /// This is `()` for sized types and `usize` for slice DSTs.
    type PointerMetadata: PointerMetadata;

    /// A maybe-uninitialized analog of `Self`
    ///
    /// # Safety
    ///
    /// `Self::LAYOUT` and `Self::MaybeUninit::LAYOUT` are identical.
    /// `Self::MaybeUninit` admits uninitialized bytes in all positions.
    #[doc(hidden)]
    type MaybeUninit: ?Sized + KnownLayout<PointerMetadata = Self::PointerMetadata>;

    /// The layout of `Self`.
    ///
    /// # Safety
    ///
    /// Callers may assume that `LAYOUT` accurately reflects the layout of
    /// `Self`. In particular:
    /// - `LAYOUT.align` is equal to `Self`'s alignment
    /// - If `Self: Sized`, then `LAYOUT.size_info == SizeInfo::Sized { size }`
    ///   where `size == size_of::<Self>()`
    /// - If `Self` is a slice DST, then `LAYOUT.size_info ==
    ///   SizeInfo::SliceDst(slice_layout)` where:
    ///   - The size, `size`, of an instance of `Self` with `elems` trailing
    ///     slice elements is equal to `slice_layout.offset +
    ///     slice_layout.elem_size * elems` rounded up to the nearest multiple
    ///     of `LAYOUT.align`
    ///   - For such an instance, any bytes in the range `[slice_layout.offset +
    ///     slice_layout.elem_size * elems, size)` are padding and must not be
    ///     assumed to be initialized
    #[doc(hidden)]
    const LAYOUT: DstLayout;

    /// SAFETY: The returned pointer has the same address and provenance as
    /// `bytes`. If `Self` is a DST, the returned pointer's referent has `elems`
    /// elements in its trailing slice.
    #[doc(hidden)]
    fn raw_from_ptr_len(bytes: NonNull<u8>, meta: Self::PointerMetadata) -> NonNull<Self>;

    /// Extracts the metadata from a pointer to `Self`.
    ///
    /// # Safety
    ///
    /// `pointer_to_metadata` always returns the correct metadata stored in
    /// `ptr`.
    #[doc(hidden)]
    fn pointer_to_metadata(ptr: *mut Self) -> Self::PointerMetadata;

792    /// Computes the length of the byte range addressed by `ptr`.
793    ///
    /// Returns `None` if the resulting length would not fit in a `usize`.
795    ///
796    /// # Safety
797    ///
798    /// Callers may assume that `size_of_val_raw` always returns the correct
799    /// size.
800    ///
801    /// Callers may assume that, if `ptr` addresses a byte range whose length
    /// fits in a `usize`, this will return `Some`.
803    #[doc(hidden)]
804    #[must_use]
805    #[inline(always)]
806    fn size_of_val_raw(ptr: NonNull<Self>) -> Option<usize> {
807        let meta = Self::pointer_to_metadata(ptr.as_ptr());
808        // SAFETY: `size_for_metadata` promises to only return `None` if the
809        // resulting size would not fit in a `usize`.
810        meta.size_for_metadata(Self::LAYOUT)
811    }
812}
813
814/// Efficiently produces the [`TrailingSliceLayout`] of `T`.
815#[inline(always)]
816pub(crate) fn trailing_slice_layout<T>() -> TrailingSliceLayout
817where
818    T: ?Sized + KnownLayout<PointerMetadata = usize>,
819{
820    trait LayoutFacts {
821        const SIZE_INFO: TrailingSliceLayout;
822    }
823
824    impl<T: ?Sized> LayoutFacts for T
825    where
826        T: KnownLayout<PointerMetadata = usize>,
827    {
828        const SIZE_INFO: TrailingSliceLayout = match T::LAYOUT.size_info {
829            crate::SizeInfo::Sized { .. } => const_panic!("unreachable"),
830            crate::SizeInfo::SliceDst(info) => info,
831        };
832    }
833
834    T::SIZE_INFO
835}
836
837/// The metadata associated with a [`KnownLayout`] type.
838#[doc(hidden)]
839pub trait PointerMetadata: Copy + Eq + Debug {
840    /// Constructs a `Self` from an element count.
841    ///
842    /// If `Self = ()`, this returns `()`. If `Self = usize`, this returns
843    /// `elems`. No other types are currently supported.
844    fn from_elem_count(elems: usize) -> Self;
845
846    /// Computes the size of the object with the given layout and pointer
847    /// metadata.
848    ///
849    /// # Panics
850    ///
851    /// If `Self = ()`, `layout` must describe a sized type. If `Self = usize`,
852    /// `layout` must describe a slice DST. Otherwise, `size_for_metadata` may
853    /// panic.
854    ///
855    /// # Safety
856    ///
857    /// `size_for_metadata` promises to only return `None` if the resulting size
858    /// would not fit in a `usize`.
859    fn size_for_metadata(&self, layout: DstLayout) -> Option<usize>;
860}
861
862impl PointerMetadata for () {
863    #[inline]
864    #[allow(clippy::unused_unit)]
865    fn from_elem_count(_elems: usize) -> () {}
866
867    #[inline]
868    fn size_for_metadata(&self, layout: DstLayout) -> Option<usize> {
869        match layout.size_info {
870            SizeInfo::Sized { size } => Some(size),
871            // NOTE: This branch is unreachable, but we return `None` rather
872            // than `unreachable!()` to avoid generating panic paths.
873            SizeInfo::SliceDst(_) => None,
874        }
875    }
876}
877
878impl PointerMetadata for usize {
879    #[inline]
880    fn from_elem_count(elems: usize) -> usize {
881        elems
882    }
883
884    #[inline]
885    fn size_for_metadata(&self, layout: DstLayout) -> Option<usize> {
886        match layout.size_info {
887            SizeInfo::SliceDst(TrailingSliceLayout { offset, elem_size }) => {
888                let slice_len = elem_size.checked_mul(*self)?;
889                let without_padding = offset.checked_add(slice_len)?;
890                without_padding.checked_add(util::padding_needed_for(without_padding, layout.align))
891            }
892            // NOTE: This branch is unreachable, but we return `None` rather
893            // than `unreachable!()` to avoid generating panic paths.
894            SizeInfo::Sized { .. } => None,
895        }
896    }
897}
898
899// SAFETY: Delegates safety to `DstLayout::for_slice`.
900unsafe impl<T> KnownLayout for [T] {
901    #[allow(clippy::missing_inline_in_public_items, dead_code)]
902    #[cfg_attr(
903        all(coverage_nightly, __ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS),
904        coverage(off)
905    )]
906    fn only_derive_is_allowed_to_implement_this_trait()
907    where
908        Self: Sized,
909    {
910    }
911
912    type PointerMetadata = usize;
913
914    // SAFETY: `CoreMaybeUninit<T>::LAYOUT` and `T::LAYOUT` are identical
915    // because `CoreMaybeUninit<T>` has the same size and alignment as `T` [1].
916    // Consequently, `[CoreMaybeUninit<T>]::LAYOUT` and `[T]::LAYOUT` are
    // identical, because they both lack a fixed-size prefix and because they
918    // inherit the alignments of their inner element type (which are identical)
919    // [2][3].
920    //
921    // `[CoreMaybeUninit<T>]` admits uninitialized bytes at all positions
922    // because `CoreMaybeUninit<T>` admits uninitialized bytes at all positions
923    // and because the inner elements of `[CoreMaybeUninit<T>]` are laid out
924    // back-to-back [2][3].
925    //
926    // [1] Per https://doc.rust-lang.org/1.81.0/std/mem/union.MaybeUninit.html#layout-1:
927    //
928    //   `MaybeUninit<T>` is guaranteed to have the same size, alignment, and ABI as
929    //   `T`
930    //
931    // [2] Per https://doc.rust-lang.org/1.82.0/reference/type-layout.html#slice-layout:
932    //
933    //   Slices have the same layout as the section of the array they slice.
934    //
935    // [3] Per https://doc.rust-lang.org/1.82.0/reference/type-layout.html#array-layout:
936    //
937    //   An array of `[T; N]` has a size of `size_of::<T>() * N` and the same
938    //   alignment of `T`. Arrays are laid out so that the zero-based `nth`
939    //   element of the array is offset from the start of the array by `n *
940    //   size_of::<T>()` bytes.
941    type MaybeUninit = [CoreMaybeUninit<T>];
942
943    const LAYOUT: DstLayout = DstLayout::for_slice::<T>();
944
945    // SAFETY: `.cast` preserves address and provenance. The returned pointer
946    // refers to an object with `elems` elements by construction.
947    #[inline(always)]
948    fn raw_from_ptr_len(data: NonNull<u8>, elems: usize) -> NonNull<Self> {
949        // TODO(#67): Remove this allow. See NonNullExt for more details.
950        #[allow(unstable_name_collisions)]
951        NonNull::slice_from_raw_parts(data.cast::<T>(), elems)
952    }
953
954    #[inline(always)]
955    fn pointer_to_metadata(ptr: *mut [T]) -> usize {
956        #[allow(clippy::as_conversions)]
957        let slc = ptr as *const [()];
958
959        // SAFETY:
960        // - `()` has alignment 1, so `slc` is trivially aligned.
961        // - `slc` was derived from a non-null pointer.
962        // - The size is 0 regardless of the length, so it is sound to
963        //   materialize a reference regardless of location.
        // - By invariant, `ptr` has valid provenance.
965        let slc = unsafe { &*slc };
966
967        // This is correct because the preceding `as` cast preserves the number
968        // of slice elements. [1]
969        //
970        // [1] Per https://doc.rust-lang.org/reference/expressions/operator-expr.html#pointer-to-pointer-cast:
971        //
972        //   For slice types like `[T]` and `[U]`, the raw pointer types `*const
973        //   [T]`, `*mut [T]`, `*const [U]`, and `*mut [U]` encode the number of
974        //   elements in this slice. Casts between these raw pointer types
975        //   preserve the number of elements. ... The same holds for `str` and
976        //   any compound type whose unsized tail is a slice type, such as
977        //   struct `Foo(i32, [u8])` or `(u64, Foo)`.
978        slc.len()
979    }
980}
981
982#[rustfmt::skip]
983impl_known_layout!(
984    (),
985    u8, i8, u16, i16, u32, i32, u64, i64, u128, i128, usize, isize, f32, f64,
986    bool, char,
987    NonZeroU8, NonZeroI8, NonZeroU16, NonZeroI16, NonZeroU32, NonZeroI32,
988    NonZeroU64, NonZeroI64, NonZeroU128, NonZeroI128, NonZeroUsize, NonZeroIsize
989);
990#[rustfmt::skip]
991#[cfg(feature = "float-nightly")]
992impl_known_layout!(
993    #[cfg_attr(doc_cfg, doc(cfg(feature = "float-nightly")))]
994    f16,
995    #[cfg_attr(doc_cfg, doc(cfg(feature = "float-nightly")))]
996    f128
997);
998#[rustfmt::skip]
999impl_known_layout!(
1000    T         => Option<T>,
1001    T: ?Sized => PhantomData<T>,
1002    T         => Wrapping<T>,
1003    T         => CoreMaybeUninit<T>,
1004    T: ?Sized => *const T,
1005    T: ?Sized => *mut T,
1006    T: ?Sized => &'_ T,
1007    T: ?Sized => &'_ mut T,
1008);
1009impl_known_layout!(const N: usize, T => [T; N]);
1010
1011safety_comment! {
1012    /// SAFETY:
1013    /// `str` has the same representation as `[u8]`. `ManuallyDrop<T>` [1],
1014    /// `UnsafeCell<T>` [2], and `Cell<T>` [3] have the same representation as
1015    /// `T`.
1016    ///
1017    /// [1] Per https://doc.rust-lang.org/1.85.0/std/mem/struct.ManuallyDrop.html:
1018    ///
1019    ///   `ManuallyDrop<T>` is guaranteed to have the same layout and bit
1020    ///   validity as `T`
1021    ///
1022    /// [2] Per https://doc.rust-lang.org/1.85.0/core/cell/struct.UnsafeCell.html#memory-layout:
1023    ///
1024    ///   `UnsafeCell<T>` has the same in-memory representation as its inner
1025    ///   type `T`.
1026    ///
1027    /// [3] Per https://doc.rust-lang.org/1.85.0/core/cell/struct.Cell.html#memory-layout:
1028    ///
1029    ///   `Cell<T>` has the same in-memory representation as `T`.
1030    unsafe_impl_known_layout!(#[repr([u8])] str);
1031    unsafe_impl_known_layout!(T: ?Sized + KnownLayout => #[repr(T)] ManuallyDrop<T>);
1032    unsafe_impl_known_layout!(T: ?Sized + KnownLayout => #[repr(T)] UnsafeCell<T>);
1033    unsafe_impl_known_layout!(T: ?Sized + KnownLayout => #[repr(T)] Cell<T>);
1034}
1035
1036safety_comment! {
1037    /// SAFETY:
1038    /// - By consequence of the invariant on `T::MaybeUninit` that `T::LAYOUT`
1039    ///   and `T::MaybeUninit::LAYOUT` are equal, `T` and `T::MaybeUninit`
1040    ///   have the same:
1041    ///   - Fixed prefix size
1042    ///   - Alignment
1043    ///   - (For DSTs) trailing slice element size
    /// - By consequence of the above, the referents of `T::MaybeUninit` and
    ///   `T` require the same kind of pointer metadata, and thus it is valid
    ///   to perform an `as` cast from `*mut T` to `*mut T::MaybeUninit`; this
    ///   operation preserves referent size (i.e., `size_of_val_raw`).
1048    unsafe_impl_known_layout!(T: ?Sized + KnownLayout => #[repr(T::MaybeUninit)] MaybeUninit<T>);
1049}
1050
1051mod split_at {
1052    use super::*;
1053    #[cfg(doc)]
1054    use invariant::Exclusive;
1055
1056    /// Types that can be split in two.
1057    ///
1058    /// # Implementation
1059    ///
1060    /// **Do not implement this trait yourself!** Instead, use
1061    /// [`#[derive(SplitAt)]`][derive]; e.g.:
1062    ///
1063    /// ```
1064    /// # use zerocopy_derive::{SplitAt, KnownLayout};
1065    /// #[derive(SplitAt, KnownLayout)]
1066    /// #[repr(C)]
1067    /// struct MyStruct<T: ?Sized> {
1068    /// # /*
1069    ///     ...,
1070    /// # */
1071    ///     // `SplitAt` types must have at least one field.
1072    ///     field: T,
1073    /// }
1074    /// ```
1075    ///
1076    /// This derive performs a sophisticated, compile-time safety analysis to
1077    /// determine whether a type is `SplitAt`.
1078    ///
1079    /// # Safety
1080    ///
1081    /// This trait does not convey any safety guarantees to code outside this crate.
1082    ///
1083    /// You must not rely on the `#[doc(hidden)]` internals of `SplitAt`. Future
1084    /// releases of zerocopy may make backwards-breaking changes to these items,
1085    /// including changes that only affect soundness, which may cause code which
1086    /// uses those items to silently become unsound.
1087    ///
1088    #[cfg_attr(feature = "derive", doc = "[derive]: zerocopy_derive::SplitAt")]
1089    #[cfg_attr(
1090        not(feature = "derive"),
1091        doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.SplitAt.html"),
1092    )]
1093    #[cfg_attr(
1094        zerocopy_diagnostic_on_unimplemented_1_78_0,
1095        diagnostic::on_unimplemented(note = "Consider adding `#[derive(SplitAt)]` to `{Self}`")
1096    )]
1097    // # Safety
1098    //
1099    // The trailing slice is well-aligned for its element type.
1100    pub unsafe trait SplitAt: KnownLayout<PointerMetadata = usize> {
1101        /// The element type of the trailing slice.
1102        type Elem;
1103
1104        #[doc(hidden)]
1105        fn only_derive_is_allowed_to_implement_this_trait()
1106        where
1107            Self: Sized;
1108
1109        /// Unsafely splits `self` in two.
1110        ///
1111        /// # Safety
1112        ///
1113        /// The caller promises that `l_len` is not greater than the length of
1114        /// `self`'s trailing slice.
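        ///
        /// # Examples
        ///
        /// An illustrative sketch using a plain slice, which implements
        /// `SplitAt` with `Elem = T` (its trailing slice is the slice itself):
        ///
        /// ```
        /// use zerocopy::SplitAt;
        ///
        /// let bytes: &[u8] = &[1, 2, 3, 4, 5][..];
        /// // SAFETY: `2` is not greater than the length of `bytes`'s trailing
        /// // slice.
        /// let (left, right) = unsafe { SplitAt::split_at_unchecked(bytes, 2) };
        /// assert_eq!(left, [1, 2]);
        /// assert_eq!(right, [3, 4, 5]);
        /// ```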
1115        #[inline]
1116        #[must_use]
1117        unsafe fn split_at_unchecked(&self, l_len: usize) -> (&Self, &[Self::Elem])
1118        where
1119            Self: Immutable,
1120        {
1121            // SAFETY: `&self` is an instance of `&Self` for which the caller has
1122            // promised that `l_len` is not greater than the length of `self`'s
1123            // trailing slice.
1124            let l_len = unsafe { MetadataOf::new_unchecked(l_len) };
1125            let ptr = Ptr::from_ref(self);
1126            // SAFETY:
1127            // 0. The caller promises that `l_len` is not greater than the length of
1128            //    `self`'s trailing slice.
1129            // 1. `ptr`'s aliasing is `Shared` and does not permit interior
1130            //    mutation because `Self: Immutable`.
1131            let (left, right) = unsafe { ptr_split_at_unchecked(ptr, l_len) };
1132            (left.as_ref(), right.as_ref())
1133        }
1134
1135        /// Attempts to split `self` in two.
1136        ///
1137        /// Returns `None` if `l_len` is greater than the length of `self`'s
1138        /// trailing slice.
1139        ///
1140        /// # Examples
1141        ///
1142        /// ```
1143        /// use zerocopy::{SplitAt, FromBytes};
1144        /// # use zerocopy_derive::*;
1145        ///
1146        /// #[derive(SplitAt, FromBytes, KnownLayout, Immutable)]
1147        /// #[repr(C)]
1148        /// struct Packet {
1149        ///     length: u8,
1150        ///     body: [u8],
1151        /// }
1152        ///
1153        /// // These bytes encode a `Packet`.
1154        /// let bytes = &[4, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
1155        ///
1156        /// let packet = Packet::ref_from_bytes(bytes).unwrap();
1157        ///
1158        /// assert_eq!(packet.length, 4);
1159        /// assert_eq!(packet.body, [1, 2, 3, 4, 5, 6, 7, 8, 9]);
1160        ///
1161        /// let (packet, rest) = packet.split_at(packet.length as usize).unwrap();
1162        /// assert_eq!(packet.length, 4);
1163        /// assert_eq!(packet.body, [1, 2, 3, 4]);
1164        /// assert_eq!(rest, [5, 6, 7, 8, 9]);
1165        /// ```
1166        #[inline]
1167        #[must_use = "has no side effects"]
1168        fn split_at(&self, l_len: usize) -> Option<(&Self, &[Self::Elem])>
1169        where
1170            Self: Immutable,
1171        {
1172            if l_len <= Ptr::from_ref(self).len() {
1173                // SAFETY: We have checked that `l_len` is not greater than the
1174                // length of `self`'s trailing slice.
1175                Some(unsafe { self.split_at_unchecked(l_len) })
1176            } else {
1177                None
1178            }
1179        }
1180
1181        /// Unsafely splits `self` in two.
1182        ///
1183        /// # Safety
1184        ///
1185        /// The caller promises that:
1186        /// 0. `l_len` is not greater than the length of `self`'s trailing slice.
1187        /// 1. The trailing padding bytes of the left portion will not overlap
1188        ///    the right portion. For some dynamically sized types, the padding
1189        ///    that appears after the trailing slice field [is a dynamic
1190        ///    function of the trailing slice
1191        ///    length](KnownLayout#slice-dst-layout). Thus, for some types, this
1192        ///    condition is dependent on the value of `l_len`.
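        ///
        /// # Examples
        ///
        /// An illustrative sketch using a plain slice; both conditions hold
        /// because `2` is in bounds and `[u8]` has no trailing padding:
        ///
        /// ```
        /// use zerocopy::SplitAt;
        ///
        /// let bytes: &mut [u8] = &mut [1, 2, 3, 4, 5][..];
        /// // SAFETY: `2` is not greater than the length of `bytes`'s trailing
        /// // slice, and `[u8]` has no trailing padding.
        /// let (left, right) = unsafe { SplitAt::split_at_mut_unchecked(bytes, 2) };
        /// left[0] = 10;
        /// right[0] = 30;
        /// assert_eq!(left, [10, 2]);
        /// assert_eq!(right, [30, 4, 5]);
        /// ```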
1193        #[inline]
1194        #[must_use]
1195        unsafe fn split_at_mut_unchecked(
1196            &mut self,
1197            l_len: usize,
1198        ) -> (&mut Self, &mut [Self::Elem]) {
1199            // SAFETY: `&mut self` is an instance of `&mut Self` for which the
1200            // caller has promised that `l_len` is not greater than the length of
1201            // `self`'s trailing slice.
1202            let l_len = unsafe { MetadataOf::new_unchecked(l_len) };
1203            let ptr = Ptr::from_mut(self);
1204            // SAFETY:
1205            // 0. The caller promises that `l_len` is not greater than the length of
1206            //    `self`'s trailing slice.
1207            // 1. `ptr`'s aliasing is `Exclusive`; the caller promises that
1208            //    `l_len.padding_needed_for() == 0`.
1209            let (left, right) = unsafe { ptr_split_at_unchecked(ptr, l_len) };
1210            (left.as_mut(), right.as_mut())
1211        }
1212
1213        /// Attempts to split `self` in two.
1214        ///
1215        /// Returns `None` if `l_len` is greater than the length of `self`'s
1216        /// trailing slice, or if the given `l_len` would result in [the trailing
1217        /// padding](KnownLayout#slice-dst-layout) of the left portion overlapping
1218        /// the right portion.
        ///
1221        /// # Examples
1222        ///
1223        /// ```
1224        /// use zerocopy::{SplitAt, FromBytes};
1225        /// # use zerocopy_derive::*;
1226        ///
1227        /// #[derive(SplitAt, FromBytes, KnownLayout, Immutable, IntoBytes)]
1228        /// #[repr(C)]
1229        /// struct Packet<B: ?Sized> {
1230        ///     length: u8,
1231        ///     body: B,
1232        /// }
1233        ///
1234        /// // These bytes encode a `Packet`.
        /// let bytes = &mut [4, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
1236        ///
1237        /// let packet = Packet::<[u8]>::mut_from_bytes(bytes).unwrap();
1238        ///
1239        /// assert_eq!(packet.length, 4);
1240        /// assert_eq!(packet.body, [1, 2, 3, 4, 5, 6, 7, 8, 9]);
1241        ///
1242        /// {
1243        ///     let (packet, rest) = packet.split_at_mut(packet.length as usize).unwrap();
1244        ///     assert_eq!(packet.length, 4);
1245        ///     assert_eq!(packet.body, [1, 2, 3, 4]);
1246        ///     assert_eq!(rest, [5, 6, 7, 8, 9]);
1247        ///
1248        ///     rest.fill(0);
1249        /// }
1250        ///
1251        /// assert_eq!(packet.length, 4);
1252        /// assert_eq!(packet.body, [1, 2, 3, 4, 0, 0, 0, 0, 0]);
1253        /// ```
1254        #[inline]
1255        fn split_at_mut(&mut self, l_len: usize) -> Option<(&mut Self, &mut [Self::Elem])> {
1256            match MetadataOf::new_in_bounds(self, l_len) {
1257                Some(l_len) if l_len.padding_needed_for() == 0 => {
1258                    // SAFETY: We have ensured both that:
1259                    // 0. `l_len <= self.len()` (by post-condition on
1260                    //    `MetadataOf::new_in_bounds`)
1261                    // 1. `l_len.padding_needed_for() == 0` (by guard on match arm)
1262                    Some(unsafe { self.split_at_mut_unchecked(l_len.get()) })
1263                }
1264                _ => None,
1265            }
1266        }
1267    }
1268
1269    // SAFETY: `[T]`'s trailing slice is `[T]`, which is trivially aligned.
1270    unsafe impl<T> SplitAt for [T] {
1271        type Elem = T;
1272
1273        #[inline]
1274        #[allow(dead_code)]
1275        fn only_derive_is_allowed_to_implement_this_trait()
1276        where
1277            Self: Sized,
1278        {
1279        }
1280    }
1281
1282    /// Splits `T` in two.
1283    ///
1284    /// # Safety
1285    ///
1286    /// The caller promises that:
1287    /// 0. `l_len.get()` is not greater than the length of `ptr`'s trailing
1288    ///    slice.
    /// 1. If `I::Aliasing` is [`Exclusive`] or `T` permits interior mutation,
1290    ///    then `l_len.padding_needed_for() == 0`.
1291    #[inline(always)]
1292    unsafe fn ptr_split_at_unchecked<'a, T, I, R>(
1293        ptr: Ptr<'a, T, I>,
1294        l_len: MetadataOf<T>,
1295    ) -> (Ptr<'a, T, I>, Ptr<'a, [T::Elem], I>)
1296    where
1297        I: invariant::Invariants,
1298        T: ?Sized + pointer::Read<I::Aliasing, R> + SplitAt,
1299    {
1300        let inner = ptr.as_inner();
1301
        // SAFETY: The caller promises that `l_len.get()` is not greater than
        // the length of `ptr`'s trailing slice.
1304        let (left, right) = unsafe { inner.split_at_unchecked(l_len) };
1305
1306        // Lemma 0: `left` and `right` conform to the aliasing invariant
1307        // `I::Aliasing`. Proof: If `I::Aliasing` is `Exclusive` or `T` permits
1308        // interior mutation, the caller promises that
1309        // `l_len.padding_needed_for() == 0`. Consequently, by post-condition on
1310        // `PtrInner::split_at_unchecked`, there is no trailing padding after
1311        // `left`'s final element that would overlap into `right`. If
1312        // `I::Aliasing` is shared and `T` forbids interior mutation, then
1313        // overlap between their referents is permissible.
1314
1315        // SAFETY:
        // 0. `left` conforms to the aliasing invariant of `I::Aliasing`, by
        //    Lemma 0.
        // 1. `left` conforms to the alignment invariant of `I::Alignment`,
        //    because the referents of `left` and `ptr` have the same address
        //    and type (and, thus, alignment requirement).
        // 2. `left` conforms to the validity invariant of `I::Validity`,
        //    because neither the type nor the bytes of `left`'s referent have
        //    been changed.
1323        let left = unsafe { Ptr::from_inner(left) };
1324
1325        // SAFETY:
        // 0. `right` conforms to the aliasing invariant of `I::Aliasing`, by
        //    Lemma 0.
        // 1. `right` conforms to the alignment invariant of `I::Alignment`,
        //    because if `ptr` has `I::Alignment = Aligned`, then by invariant
        //    on `T: SplitAt`, the trailing slice of `ptr` (from which `right`
        //    is derived) will also be well-aligned.
1332        // 2. `right` conforms to the validity invariant of `I::Validity`,
1333        //    because `right: [T::Elem]` is derived from the trailing slice of
1334        //    `ptr`, which, by contract on `T: SplitAt::Elem`, has type
1335        //    `[T::Elem]`.
1336        let right = unsafe { Ptr::from_inner(right) };
1337
1338        (left, right)
1339    }
1340
1341    #[cfg(test)]
1342    mod tests {
1343        #[cfg(feature = "derive")]
1344        #[test]
1345        fn test_split_at() {
1346            use crate::{FromBytes, Immutable, IntoBytes, KnownLayout, SplitAt};
1347
1348            #[derive(FromBytes, KnownLayout, SplitAt, IntoBytes, Immutable)]
1349            #[repr(C)]
1350            struct SliceDst<const OFFSET: usize> {
1351                prefix: [u8; OFFSET],
1352                trailing: [u8],
1353            }
1354
1355            #[allow(clippy::as_conversions)]
1356            fn test_split_at<const OFFSET: usize, const BUFFER_SIZE: usize>() {
1357                // Test `split_at`
1358                let n: usize = BUFFER_SIZE - OFFSET;
1359                let arr = [1; BUFFER_SIZE];
1360                let dst = SliceDst::<OFFSET>::ref_from_bytes(&arr[..]).unwrap();
1361                for i in 0..=n {
1362                    let (l, r) = dst.split_at(i).unwrap();
1363                    let l_sum: u8 = l.trailing.iter().sum();
1364                    let r_sum: u8 = r.iter().sum();
1365                    assert_eq!(l_sum, i as u8);
1366                    assert_eq!(r_sum, (n - i) as u8);
1367                    assert_eq!(l_sum + r_sum, n as u8);
1368                }
1369
1370                // Test `split_at_mut`
1371                let n: usize = BUFFER_SIZE - OFFSET;
1372                let mut arr = [1; BUFFER_SIZE];
1373                let dst = SliceDst::<OFFSET>::mut_from_bytes(&mut arr[..]).unwrap();
1374                for i in 0..=n {
1375                    let (l, r) = dst.split_at_mut(i).unwrap();
1376                    let l_sum: u8 = l.trailing.iter().sum();
1377                    let r_sum: u8 = r.iter().sum();
1378                    assert_eq!(l_sum, i as u8);
1379                    assert_eq!(r_sum, (n - i) as u8);
1380                    assert_eq!(l_sum + r_sum, n as u8);
1381                }
1382            }
1383
1384            test_split_at::<0, 16>();
1385            test_split_at::<1, 17>();
1386            test_split_at::<2, 18>();
1387        }
1388    }
1389}
1390
1391pub use split_at::SplitAt;
1392
1393/// Analyzes whether a type is [`FromZeros`].
1394///
1395/// This derive analyzes, at compile time, whether the annotated type satisfies
1396/// the [safety conditions] of `FromZeros` and implements `FromZeros` and its
1397/// supertraits if it is sound to do so. This derive can be applied to structs,
1398/// enums, and unions; e.g.:
1399///
1400/// ```
1401/// # use zerocopy_derive::{FromZeros, Immutable};
1402/// #[derive(FromZeros)]
1403/// struct MyStruct {
1404/// # /*
1405///     ...
1406/// # */
1407/// }
1408///
1409/// #[derive(FromZeros)]
1410/// #[repr(u8)]
1411/// enum MyEnum {
1412/// #   Variant0,
1413/// # /*
1414///     ...
1415/// # */
1416/// }
1417///
1418/// #[derive(FromZeros, Immutable)]
1419/// union MyUnion {
1420/// #   variant: u8,
1421/// # /*
1422///     ...
1423/// # */
1424/// }
1425/// ```
1426///
1427/// [safety conditions]: trait@FromZeros#safety
1428///
1429/// # Analysis
1430///
1431/// *This section describes, roughly, the analysis performed by this derive to
1432/// determine whether it is sound to implement `FromZeros` for a given type.
1433/// Unless you are modifying the implementation of this derive, or attempting to
1434/// manually implement `FromZeros` for a type yourself, you don't need to read
1435/// this section.*
1436///
1437/// If a type has the following properties, then this derive can implement
1438/// `FromZeros` for that type:
1439///
1440/// - If the type is a struct, all of its fields must be `FromZeros`.
1441/// - If the type is an enum:
1442///   - It must have a defined representation (`repr`s `C`, `u8`, `u16`, `u32`,
1443///     `u64`, `usize`, `i8`, `i16`, `i32`, `i64`, or `isize`).
///   - It must have a variant with a discriminant/tag of `0`. See [the
///     reference] for a description of how discriminant values are specified.
///   - The fields of that variant must be `FromZeros`.
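///
/// For example, under these rules the following enum is eligible, because its
/// `B` variant has a discriminant of `0` (an illustrative sketch):
///
/// ```
/// # use zerocopy_derive::FromZeros;
/// #[derive(FromZeros)]
/// #[repr(i8)]
/// enum MyEnum {
///     A = -1,
///     B = 0,
///     C, // implicitly `1`
/// }
/// ```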
1448///
1449/// This analysis is subject to change. Unsafe code may *only* rely on the
1450/// documented [safety conditions] of `FromZeros`, and must *not* rely on the
1451/// implementation details of this derive.
1452///
1453/// [the reference]: https://doc.rust-lang.org/reference/items/enumerations.html#custom-discriminant-values-for-fieldless-enumerations
1454///
1455/// ## Why isn't an explicit representation required for structs?
1456///
/// Neither this derive nor the [safety conditions] of `FromZeros` requires
/// that structs be marked with `#[repr(C)]`.
1459///
/// Per the [Rust reference][reference],
1461///
1462/// > The representation of a type can change the padding between fields, but
1463/// > does not change the layout of the fields themselves.
1464///
1465/// [reference]: https://doc.rust-lang.org/reference/type-layout.html#representations
1466///
1467/// Since the layout of structs only consists of padding bytes and field bytes,
1468/// a struct is soundly `FromZeros` if:
1469/// 1. its padding is soundly `FromZeros`, and
1470/// 2. its fields are soundly `FromZeros`.
1471///
/// The first condition is always satisfied: padding bytes do not have
1473/// any validity constraints. A [discussion] of this question in the Unsafe Code
1474/// Guidelines Working Group concluded that it would be virtually unimaginable
1475/// for future versions of rustc to add validity constraints to padding bytes.
1476///
1477/// [discussion]: https://github.com/rust-lang/unsafe-code-guidelines/issues/174
1478///
1479/// Whether a struct is soundly `FromZeros` therefore solely depends on whether
1480/// its fields are `FromZeros`.
1481// TODO(#146): Document why we don't require an enum to have an explicit `repr`
1482// attribute.
1483#[cfg(any(feature = "derive", test))]
1484#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
1485pub use zerocopy_derive::FromZeros;
1486
1487/// Analyzes whether a type is [`Immutable`].
1488///
1489/// This derive analyzes, at compile time, whether the annotated type satisfies
1490/// the [safety conditions] of `Immutable` and implements `Immutable` if it is
1491/// sound to do so. This derive can be applied to structs, enums, and unions;
1492/// e.g.:
1493///
1494/// ```
1495/// # use zerocopy_derive::Immutable;
1496/// #[derive(Immutable)]
1497/// struct MyStruct {
1498/// # /*
1499///     ...
1500/// # */
1501/// }
1502///
1503/// #[derive(Immutable)]
1504/// enum MyEnum {
1505/// #   Variant0,
1506/// # /*
1507///     ...
1508/// # */
1509/// }
1510///
1511/// #[derive(Immutable)]
1512/// union MyUnion {
1513/// #   variant: u8,
1514/// # /*
1515///     ...
1516/// # */
1517/// }
1518/// ```
1519///
1520/// # Analysis
1521///
1522/// *This section describes, roughly, the analysis performed by this derive to
1523/// determine whether it is sound to implement `Immutable` for a given type.
1524/// Unless you are modifying the implementation of this derive, you don't need
1525/// to read this section.*
1526///
1527/// If a type has the following properties, then this derive can implement
1528/// `Immutable` for that type:
1529///
1530/// - All fields must be `Immutable`.
1531///
1532/// This analysis is subject to change. Unsafe code may *only* rely on the
1533/// documented [safety conditions] of `Immutable`, and must *not* rely on the
1534/// implementation details of this derive.
1535///
1536/// [safety conditions]: trait@Immutable#safety
1537#[cfg(any(feature = "derive", test))]
1538#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
1539pub use zerocopy_derive::Immutable;
1540
1541/// Types which are free from interior mutability.
1542///
1543/// `T: Immutable` indicates that `T` does not permit interior mutation, except
1544/// by ownership or an exclusive (`&mut`) borrow.
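///
/// For example, `u8` is `Immutable`, but `core::cell::Cell<u8>` is not, since
/// a `Cell` permits mutation through a shared reference (an illustrative
/// sketch):
///
/// ```compile_fail,E0277
/// use core::cell::Cell;
///
/// fn assert_immutable<T: zerocopy::Immutable>() {}
///
/// assert_immutable::<u8>(); // OK
/// assert_immutable::<Cell<u8>>(); // ERROR: `Cell<u8>` is not `Immutable`
/// ```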
1545///
1546/// # Implementation
1547///
1548/// **Do not implement this trait yourself!** Instead, use
1549/// [`#[derive(Immutable)]`][derive] (requires the `derive` Cargo feature);
1550/// e.g.:
1551///
1552/// ```
1553/// # use zerocopy_derive::Immutable;
1554/// #[derive(Immutable)]
1555/// struct MyStruct {
1556/// # /*
1557///     ...
1558/// # */
1559/// }
1560///
1561/// #[derive(Immutable)]
1562/// enum MyEnum {
1563/// # /*
1564///     ...
1565/// # */
1566/// }
1567///
1568/// #[derive(Immutable)]
1569/// union MyUnion {
1570/// #   variant: u8,
1571/// # /*
1572///     ...
1573/// # */
1574/// }
1575/// ```
1576///
1577/// This derive performs a sophisticated, compile-time safety analysis to
1578/// determine whether a type is `Immutable`.
1579///
1580/// # Safety
1581///
1582/// Unsafe code outside of this crate must not make any assumptions about `T`
1583/// based on `T: Immutable`. We reserve the right to relax the requirements for
1584/// `Immutable` in the future, and if unsafe code outside of this crate makes
1585/// assumptions based on `T: Immutable`, future relaxations may cause that code
1586/// to become unsound.
1587///
1588// # Safety (Internal)
1589//
1590// If `T: Immutable`, unsafe code *inside of this crate* may assume that, given
1591// `t: &T`, `t` does not contain any [`UnsafeCell`]s at any byte location
1592// within the byte range addressed by `t`. This includes ranges of length 0
1593// (e.g., `UnsafeCell<()>` and `[UnsafeCell<u8>; 0]`). If a type implements
1594// `Immutable` which violates this assumptions, it may cause this crate to
1595// exhibit [undefined behavior].
1596//
1597// [`UnsafeCell`]: core::cell::UnsafeCell
1598// [undefined behavior]: https://raphlinus.github.io/programming/rust/2018/08/17/undefined-behavior.html
1599#[cfg_attr(
1600    feature = "derive",
1601    doc = "[derive]: zerocopy_derive::Immutable",
1602    doc = "[derive-analysis]: zerocopy_derive::Immutable#analysis"
1603)]
1604#[cfg_attr(
1605    not(feature = "derive"),
1606    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.Immutable.html"),
1607    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.Immutable.html#analysis"),
1608)]
1609#[cfg_attr(
1610    zerocopy_diagnostic_on_unimplemented_1_78_0,
1611    diagnostic::on_unimplemented(note = "Consider adding `#[derive(Immutable)]` to `{Self}`")
1612)]
1613pub unsafe trait Immutable {
1614    // The `Self: Sized` bound makes it so that `Immutable` is still object
1615    // safe.
1616    #[doc(hidden)]
1617    fn only_derive_is_allowed_to_implement_this_trait()
1618    where
1619        Self: Sized;
1620}

/// Implements [`TryFromBytes`].
///
/// This derive synthesizes the runtime checks required to determine whether a
/// sequence of initialized bytes corresponds to a valid instance of a type.
/// This derive can be applied to structs, enums, and unions; e.g.:
///
/// ```
/// # use zerocopy_derive::{TryFromBytes, Immutable};
/// #[derive(TryFromBytes)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(TryFromBytes)]
/// #[repr(u8)]
/// enum MyEnum {
/// #   V00,
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(TryFromBytes, Immutable)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// # Portability
///
/// To ensure consistent endianness for enums with multi-byte representations,
/// explicitly specify and convert each discriminant using `.to_le()` or
/// `.to_be()`; e.g.:
///
/// ```
/// # use zerocopy_derive::TryFromBytes;
/// // `DataStoreVersion` is encoded in little-endian.
/// #[derive(TryFromBytes)]
/// #[repr(u32)]
/// pub enum DataStoreVersion {
///     /// Version 1 of the data store.
///     V1 = 9u32.to_le(),
///
///     /// Version 2 of the data store.
///     V2 = 10u32.to_le(),
/// }
/// ```
///
/// [safety conditions]: trait@TryFromBytes#safety
#[cfg(any(feature = "derive", test))]
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
pub use zerocopy_derive::TryFromBytes;

/// Types for which some bit patterns are valid.
///
/// A memory region of the appropriate length which contains initialized bytes
/// can be viewed as a `TryFromBytes` type so long as the runtime value of those
/// bytes corresponds to a [*valid instance*] of that type. For example,
/// [`bool`] is `TryFromBytes`, so zerocopy can transmute a [`u8`] into a
/// [`bool`] so long as it first checks that the value of the [`u8`] is `0` or
/// `1`.
///
/// # Implementation
///
/// **Do not implement this trait yourself!** Instead, use
/// [`#[derive(TryFromBytes)]`][derive]; e.g.:
///
/// ```
/// # use zerocopy_derive::{TryFromBytes, Immutable};
/// #[derive(TryFromBytes)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(TryFromBytes)]
/// #[repr(u8)]
/// enum MyEnum {
/// #   V00,
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(TryFromBytes, Immutable)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// This derive ensures that the runtime check of whether bytes correspond to a
/// valid instance is sound. You **must** implement this trait via the derive.
///
/// # What is a "valid instance"?
///
/// In Rust, each type has *bit validity*, which refers to the set of bit
/// patterns which may appear in an instance of that type. It is impossible for
/// safe Rust code to produce values which violate bit validity (i.e., values
/// outside of the "valid" set of bit patterns). If `unsafe` code produces an
/// invalid value, this is considered [undefined behavior].
///
/// Rust's bit validity rules are currently being decided, which means that some
/// types have three classes of bit patterns: those which are definitely valid
/// and whose validity is documented in the language; those which may or may not
/// be considered valid at some point in the future; and those which are
/// definitely invalid.
///
/// Zerocopy takes a conservative approach, and only considers a bit pattern to
/// be valid if its validity is a documented guarantee provided by the
/// language.
///
/// For most use cases, Rust's current guarantees align with programmers'
/// intuitions about what ought to be valid. As a result, zerocopy's
/// conservatism should not affect most users.
///
/// If you are negatively affected by lack of support for a particular type,
/// we encourage you to let us know by [filing an issue][github-repo].
///
/// # `TryFromBytes` is not symmetrical with [`IntoBytes`]
///
/// There are some types which implement both `TryFromBytes` and [`IntoBytes`],
/// but for which `TryFromBytes` is not guaranteed to accept all byte sequences
/// produced by `IntoBytes`. In other words, for some `T: TryFromBytes +
/// IntoBytes`, there exist values `t: T` for which
/// `T::try_ref_from_bytes(t.as_bytes())` returns `Err`. Code should not
/// generally assume that values produced by `IntoBytes` will necessarily be
/// accepted as valid by `TryFromBytes`.
///
/// # Safety
///
/// On its own, `T: TryFromBytes` does not make any guarantees about the layout
/// or representation of `T`. It merely provides the ability to perform a
/// validity check at runtime via methods like [`try_ref_from_bytes`].
///
/// You must not rely on the `#[doc(hidden)]` internals of `TryFromBytes`.
/// Future releases of zerocopy may make backwards-breaking changes to these
/// items, including changes that only affect soundness, which may cause code
/// which uses those items to silently become unsound.
///
/// [undefined behavior]: https://raphlinus.github.io/programming/rust/2018/08/17/undefined-behavior.html
/// [github-repo]: https://github.com/google/zerocopy
/// [`try_ref_from_bytes`]: TryFromBytes::try_ref_from_bytes
/// [*valid instance*]: #what-is-a-valid-instance
#[cfg_attr(feature = "derive", doc = "[derive]: zerocopy_derive::TryFromBytes")]
#[cfg_attr(
    not(feature = "derive"),
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.TryFromBytes.html"),
)]
#[cfg_attr(
    zerocopy_diagnostic_on_unimplemented_1_78_0,
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(TryFromBytes)]` to `{Self}`")
)]
pub unsafe trait TryFromBytes {
    // The `Self: Sized` bound makes it so that `TryFromBytes` is still object
    // safe.
    #[doc(hidden)]
    fn only_derive_is_allowed_to_implement_this_trait()
    where
        Self: Sized;
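/// For example, [`bool`]'s bit validity is documented: `0` represents `false`,
/// `1` represents `true`, and every other byte value is invalid. A brief
/// sketch of how this surfaces at runtime:
///
/// ```
/// use zerocopy::TryFromBytes;
///
/// // `1` is a documented-valid encoding of `true`...
/// assert_eq!(bool::try_ref_from_bytes(&[1u8][..]).unwrap(), &true);
/// // ...while `2` is not a valid `bool`, so it is rejected.
/// assert!(bool::try_ref_from_bytes(&[2u8][..]).is_err());
/// ```
///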

    /// Does a given memory range contain a valid instance of `Self`?
    ///
    /// # Safety
    ///
    /// Unsafe code may assume that, if `is_bit_valid(candidate)` returns
    /// `true`, `*candidate` contains a valid `Self`.
    ///
    /// # Panics
    ///
    /// `is_bit_valid` may panic. Callers are responsible for ensuring that any
    /// `unsafe` code remains sound even in the face of `is_bit_valid`
    /// panicking. (We support user-defined validation routines; since these
    /// routines are not required to be `unsafe`, there is no way to ensure
    /// that they do not panic.)
    ///
    /// Besides user-defined validation routines panicking, `is_bit_valid` will
    /// either panic or fail to compile if called on a pointer with [`Shared`]
    /// aliasing when `Self: !Immutable`.
    ///
    /// [`UnsafeCell`]: core::cell::UnsafeCell
    /// [`Shared`]: invariant::Shared
    #[doc(hidden)]
    fn is_bit_valid<A: invariant::Reference>(candidate: Maybe<'_, Self, A>) -> bool;

    /// Attempts to interpret the given `source` as a `&Self`.
    ///
    /// If the bytes of `source` are a valid instance of `Self`, this method
    /// returns a reference to those bytes interpreted as a `Self`. If the
    /// length of `source` is not a [valid size of `Self`][valid-size], or if
    /// `source` is not appropriately aligned, or if `source` is not a valid
    /// instance of `Self`, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][ConvertError::from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. Attempting to use this method on such types
    /// results in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: u16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let _ = ZSTy::try_ref_from_bytes(0u16.as_bytes()); // ⚠ Compile Error!
    /// ```
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`.
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the byte sequence `0xC0C0`.
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// let bytes = &[0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5][..];
    ///
    /// let packet = Packet::try_ref_from_bytes(bytes).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[0, 1], [2, 3], [4, 5]]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &[0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5][..];
    /// assert!(Packet::try_ref_from_bytes(bytes).is_err());
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn try_ref_from_bytes(source: &[u8]) -> Result<&Self, TryCastError<&[u8], Self>>
    where
        Self: KnownLayout + Immutable,
    {
        static_assert_dst_is_not_zst!(Self);
        match Ptr::from_ref(source).try_cast_into_no_leftover::<Self, BecauseImmutable>(None) {
            Ok(source) => {
                // This call may panic. If that happens, it doesn't cause any soundness
                // issues, as we have not generated any invalid state which we need to
                // fix before returning.
                //
                // Note that one panic or post-monomorphization error condition is
                // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
                // pointer when `Self: !Immutable`. Since `Self: Immutable`, this panic
                // condition will not happen.
                match source.try_into_valid() {
                    Ok(valid) => Ok(valid.as_ref()),
                    Err(e) => {
                        Err(e.map_src(|src| src.as_bytes::<BecauseImmutable>().as_ref()).into())
                    }
                }
            }
            Err(e) => Err(e.map_src(Ptr::as_ref).into()),
        }
    }

    /// Attempts to interpret the prefix of the given `source` as a `&Self`.
    ///
    /// This method computes the [largest possible size of `Self`][valid-size]
    /// that can fit in the leading bytes of `source`. If that prefix is a valid
    /// instance of `Self`, this method returns a reference to those bytes
    /// interpreted as `Self`, and a reference to the remaining bytes. If there
    /// are insufficient bytes, or if `source` is not appropriately aligned, or
    /// if those bytes are not a valid instance of `Self`, this returns `Err`.
    /// If [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
    /// alignment error][ConvertError::from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. Attempting to use this method on such types
    /// results in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: u16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let _ = ZSTy::try_ref_from_prefix(0u16.as_bytes()); // ⚠ Compile Error!
    /// ```
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`.
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the byte sequence `0xC0C0`.
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// // These are more bytes than are needed to encode a `Packet`.
    /// let bytes = &[0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
    ///
    /// let (packet, suffix) = Packet::try_ref_from_prefix(bytes).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[0, 1], [2, 3], [4, 5]]);
    /// assert_eq!(suffix, &[6u8][..]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &[0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
    /// assert!(Packet::try_ref_from_prefix(bytes).is_err());
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn try_ref_from_prefix(source: &[u8]) -> Result<(&Self, &[u8]), TryCastError<&[u8], Self>>
    where
        Self: KnownLayout + Immutable,
    {
        static_assert_dst_is_not_zst!(Self);
        try_ref_from_prefix_suffix(source, CastType::Prefix, None)
    }

    /// Attempts to interpret the suffix of the given `source` as a `&Self`.
    ///
    /// This method computes the [largest possible size of `Self`][valid-size]
    /// that can fit in the trailing bytes of `source`. If that suffix is a
    /// valid instance of `Self`, this method returns a reference to those bytes
    /// interpreted as `Self`, and a reference to the preceding bytes. If there
    /// are insufficient bytes, or if the suffix of `source` would not be
    /// appropriately aligned, or if the suffix is not a valid instance of
    /// `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned], you
    /// can [infallibly discard the alignment error][ConvertError::from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. Attempting to use this method on such types
    /// results in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: u16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let _ = ZSTy::try_ref_from_suffix(0u16.as_bytes()); // ⚠ Compile Error!
    /// ```
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`.
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the byte sequence `0xC0C0`.
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// // These are more bytes than are needed to encode a `Packet`.
    /// let bytes = &[0, 0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
    ///
    /// let (prefix, packet) = Packet::try_ref_from_suffix(bytes).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
    /// assert_eq!(prefix, &[0u8][..]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 77, 240, 0xC0, 0x10][..];
    /// assert!(Packet::try_ref_from_suffix(bytes).is_err());
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn try_ref_from_suffix(source: &[u8]) -> Result<(&[u8], &Self), TryCastError<&[u8], Self>>
    where
        Self: KnownLayout + Immutable,
    {
        static_assert_dst_is_not_zst!(Self);
        try_ref_from_prefix_suffix(source, CastType::Suffix, None).map(swap)
    }

    /// Attempts to interpret the given `source` as a `&mut Self` without
    /// copying.
    ///
    /// If the bytes of `source` are a valid instance of `Self`, this method
    /// returns a mutable reference to those bytes interpreted as a `Self`. If
    /// the length of `source` is not a [valid size of `Self`][valid-size], or
    /// if `source` is not appropriately aligned, or if `source` is not a valid
    /// instance of `Self`, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][ConvertError::from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. Attempting to use this method on such types
    /// results in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let mut source = [85, 85];
    /// let _ = ZSTy::try_mut_from_bytes(&mut source[..]); // ⚠ Compile Error!
    /// ```
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`.
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the byte sequence `0xC0C0`.
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// let bytes = &mut [0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5][..];
    ///
    /// let packet = Packet::try_mut_from_bytes(bytes).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[0, 1], [2, 3], [4, 5]]);
    ///
    /// packet.temperature = 111;
    ///
    /// assert_eq!(bytes, [0xC0, 0xC0, 240, 111, 0, 1, 2, 3, 4, 5]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &mut [0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
    /// assert!(Packet::try_mut_from_bytes(bytes).is_err());
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn try_mut_from_bytes(source: &mut [u8]) -> Result<&mut Self, TryCastError<&mut [u8], Self>>
    where
        Self: KnownLayout + IntoBytes,
    {
        static_assert_dst_is_not_zst!(Self);
        match Ptr::from_mut(source).try_cast_into_no_leftover::<Self, BecauseExclusive>(None) {
            Ok(source) => {
                // This call may panic. If that happens, it doesn't cause any soundness
                // issues, as we have not generated any invalid state which we need to
                // fix before returning.
                //
                // Note that one panic or post-monomorphization error condition is
                // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
                // pointer when `Self: !Immutable`. Since this `Ptr` has `Exclusive`
                // aliasing, this panic condition will not happen.
                match source.try_into_valid() {
                    Ok(source) => Ok(source.as_mut()),
                    Err(e) => {
                        Err(e.map_src(|src| src.as_bytes::<BecauseExclusive>().as_mut()).into())
                    }
                }
            }
            Err(e) => Err(e.map_src(Ptr::as_mut).into()),
        }
    }

    /// Attempts to interpret the prefix of the given `source` as a `&mut
    /// Self`.
    ///
    /// This method computes the [largest possible size of `Self`][valid-size]
    /// that can fit in the leading bytes of `source`. If that prefix is a valid
    /// instance of `Self`, this method returns a mutable reference to those
    /// bytes interpreted as `Self`, and a mutable reference to the remaining
    /// bytes. If there are insufficient bytes, or if `source` is not
    /// appropriately aligned, or if the bytes are not a valid instance of
    /// `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned], you
    /// can [infallibly discard the alignment error][ConvertError::from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. Attempting to use this method on such types
    /// results in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let mut source = [85, 85];
    /// let _ = ZSTy::try_mut_from_prefix(&mut source[..]); // ⚠ Compile Error!
    /// ```
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`.
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the byte sequence `0xC0C0`.
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// // These are more bytes than are needed to encode a `Packet`.
    /// let bytes = &mut [0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
    ///
    /// let (packet, suffix) = Packet::try_mut_from_prefix(bytes).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[0, 1], [2, 3], [4, 5]]);
    /// assert_eq!(suffix, &[6u8][..]);
    ///
    /// packet.temperature = 111;
    /// suffix[0] = 222;
    ///
    /// assert_eq!(bytes, [0xC0, 0xC0, 240, 111, 0, 1, 2, 3, 4, 5, 222]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &mut [0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
    /// assert!(Packet::try_mut_from_prefix(bytes).is_err());
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn try_mut_from_prefix(
        source: &mut [u8],
    ) -> Result<(&mut Self, &mut [u8]), TryCastError<&mut [u8], Self>>
    where
        Self: KnownLayout + IntoBytes,
    {
        static_assert_dst_is_not_zst!(Self);
        try_mut_from_prefix_suffix(source, CastType::Prefix, None)
    }
2289
2290    /// Attempts to interpret the suffix of the given `source` as a `&mut
2291    /// Self`.
2292    ///
2293    /// This method computes the [largest possible size of `Self`][valid-size]
2294    /// that can fit in the trailing bytes of `source`. If that suffix is a
2295    /// valid instance of `Self`, this method returns a reference to those bytes
2296    /// interpreted as `Self`, and a reference to the preceding bytes. If there
2297    /// are insufficient bytes, or if the suffix of `source` would not be
2298    /// appropriately aligned, or if the suffix is not a valid instance of
2299    /// `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned], you
2300    /// can [infallibly discard the alignment error][ConvertError::from].
2301    ///
2302    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
2303    ///
2304    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
2305    /// [self-unaligned]: Unaligned
2306    /// [slice-dst]: KnownLayout#dynamically-sized-types
2307    ///
2308    /// # Compile-Time Assertions
2309    ///
2310    /// This method cannot yet be used on unsized types whose dynamically-sized
2311    /// component is zero-sized. Attempting to use this method on such types
2312    /// results in a compile-time assertion error; e.g.:
2313    ///
2314    /// ```compile_fail,E0080
2315    /// use zerocopy::*;
2316    /// # use zerocopy_derive::*;
2317    ///
2318    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2319    /// #[repr(C, packed)]
2320    /// struct ZSTy {
2321    ///     leading_sized: u16,
2322    ///     trailing_dst: [()],
2323    /// }
2324    ///
2325    /// let mut source = [85, 85];
2326    /// let _ = ZSTy::try_mut_from_suffix(&mut source[..]); // âš  Compile Error!
2327    /// ```
2328    ///
2329    /// # Examples
2330    ///
2331    /// ```
2332    /// use zerocopy::TryFromBytes;
2333    /// # use zerocopy_derive::*;
2334    ///
2335    /// // The only valid value of this type is the byte `0xC0`
2336    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2337    /// #[repr(u8)]
2338    /// enum C0 { xC0 = 0xC0 }
2339    ///
2340    /// // The only valid value of this type is the bytes `0xC0C0`.
2341    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2342    /// #[repr(C)]
2343    /// struct C0C0(C0, C0);
2344    ///
2345    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2346    /// #[repr(C, packed)]
2347    /// struct Packet {
2348    ///     magic_number: C0C0,
2349    ///     mug_size: u8,
2350    ///     temperature: u8,
2351    ///     marshmallows: [[u8; 2]],
2352    /// }
2353    ///
2354    /// // These are more bytes than are needed to encode a `Packet`.
2355    /// let bytes = &mut [0, 0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
2356    ///
2357    /// let (prefix, packet) = Packet::try_mut_from_suffix(bytes).unwrap();
2358    ///
2359    /// assert_eq!(packet.mug_size, 240);
2360    /// assert_eq!(packet.temperature, 77);
2361    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
2362    /// assert_eq!(prefix, &[0u8][..]);
2363    ///
2364    /// prefix[0] = 111;
2365    /// packet.temperature = 222;
2366    ///
2367    /// assert_eq!(bytes, [111, 0xC0, 0xC0, 240, 222, 2, 3, 4, 5, 6, 7]);
2368    ///
2369    /// // These bytes are not a valid instance of `Packet`.
2370    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 77, 240, 0xC0, 0x10][..];
2371    /// assert!(Packet::try_mut_from_suffix(bytes).is_err());
2372    /// ```
2373    #[must_use = "has no side effects"]
2374    #[inline]
2375    fn try_mut_from_suffix(
2376        source: &mut [u8],
2377    ) -> Result<(&mut [u8], &mut Self), TryCastError<&mut [u8], Self>>
2378    where
2379        Self: KnownLayout + IntoBytes,
2380    {
2381        static_assert_dst_is_not_zst!(Self);
2382        try_mut_from_prefix_suffix(source, CastType::Suffix, None).map(swap)
2383    }
2384
2385    /// Attempts to interpret the given `source` as a `&Self` with a DST length
2386    /// equal to `count`.
2387    ///
2388    /// This method attempts to return a reference to `source` interpreted as a
2389    /// `Self` with `count` trailing elements. If the length of `source` is not
2390    /// equal to the size of `Self` with `count` elements, if `source` is not
2391    /// appropriately aligned, or if `source` does not contain a valid instance
2392    /// of `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned],
2393    /// you can [infallibly discard the alignment error][ConvertError::from].
2394    ///
2395    /// [self-unaligned]: Unaligned
2396    /// [slice-dst]: KnownLayout#dynamically-sized-types
2397    ///
2398    /// # Examples
2399    ///
2400    /// ```
2401    /// # #![allow(non_camel_case_types)] // For C0::xC0
2402    /// use zerocopy::TryFromBytes;
2403    /// # use zerocopy_derive::*;
2404    ///
2405    /// // The only valid value of this type is the byte `0xC0`
2406    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2407    /// #[repr(u8)]
2408    /// enum C0 { xC0 = 0xC0 }
2409    ///
2410    /// // The only valid value of this type is the bytes `0xC0C0`.
2411    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2412    /// #[repr(C)]
2413    /// struct C0C0(C0, C0);
2414    ///
2415    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2416    /// #[repr(C)]
2417    /// struct Packet {
2418    ///     magic_number: C0C0,
2419    ///     mug_size: u8,
2420    ///     temperature: u8,
2421    ///     marshmallows: [[u8; 2]],
2422    /// }
2423    ///
2424    /// let bytes = &[0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
2425    ///
2426    /// let packet = Packet::try_ref_from_bytes_with_elems(bytes, 3).unwrap();
2427    ///
2428    /// assert_eq!(packet.mug_size, 240);
2429    /// assert_eq!(packet.temperature, 77);
2430    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
2431    ///
2432    /// // These bytes are not a valid instance of `Packet`.
2433    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 77, 240, 0xC0, 0xC0][..];
2434    /// assert!(Packet::try_ref_from_bytes_with_elems(bytes, 3).is_err());
2435    /// ```
2436    ///
2437    /// Since an explicit `count` is provided, this method supports types with
2438    /// zero-sized trailing slice elements. Methods such as [`try_ref_from_bytes`]
2439    /// which do not take an explicit count do not support such types.
2440    ///
2441    /// ```
2442    /// use core::num::NonZeroU16;
2443    /// use zerocopy::*;
2444    /// # use zerocopy_derive::*;
2445    ///
2446    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
2447    /// #[repr(C)]
2448    /// struct ZSTy {
2449    ///     leading_sized: NonZeroU16,
2450    ///     trailing_dst: [()],
2451    /// }
2452    ///
2453    /// let src = 0xCAFEu16.as_bytes();
2454    /// let zsty = ZSTy::try_ref_from_bytes_with_elems(src, 42).unwrap();
2455    /// assert_eq!(zsty.trailing_dst.len(), 42);
2456    /// ```
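    ///
    /// When the element count is not known statically, it can be computed from
    /// the input length before calling this method. A brief sketch (the
    /// `Words` type and its layout arithmetic are illustrative, not part of
    /// zerocopy):
    ///
    /// ```
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct Words {
    ///     header: u16,
    ///     data: [[u8; 2]],
    /// }
    ///
    /// // Use a `u16` source so that the bytes are aligned for `Words`.
    /// let src = [0x0001u16, 0x0302, 0x0504];
    /// let bytes = src.as_bytes();
    ///
    /// // 2 header bytes, then 2 bytes per trailing element.
    /// let count = (bytes.len() - 2) / 2;
    /// let words = Words::try_ref_from_bytes_with_elems(bytes, count).unwrap();
    /// assert_eq!(words.data.len(), 2);
    /// ```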
2457    ///
2458    /// [`try_ref_from_bytes`]: TryFromBytes::try_ref_from_bytes
2459    #[must_use = "has no side effects"]
2460    #[inline]
2461    fn try_ref_from_bytes_with_elems(
2462        source: &[u8],
2463        count: usize,
2464    ) -> Result<&Self, TryCastError<&[u8], Self>>
2465    where
2466        Self: KnownLayout<PointerMetadata = usize> + Immutable,
2467    {
2468        match Ptr::from_ref(source).try_cast_into_no_leftover::<Self, BecauseImmutable>(Some(count))
2469        {
2470            Ok(source) => {
2471                // This call may panic. If that happens, it doesn't cause any soundness
2472                // issues, as we have not generated any invalid state which we need to
2473                // fix before returning.
2474                //
2475                // Note that one panic or post-monomorphization error condition is
2476                // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
2477                // pointer when `Self: !Immutable`. Since `Self: Immutable`, this panic
2478                // condition will not happen.
2479                match source.try_into_valid() {
2480                    Ok(source) => Ok(source.as_ref()),
2481                    Err(e) => {
2482                        Err(e.map_src(|src| src.as_bytes::<BecauseImmutable>().as_ref()).into())
2483                    }
2484                }
2485            }
2486            Err(e) => Err(e.map_src(Ptr::as_ref).into()),
2487        }
2488    }
2489
2490    /// Attempts to interpret the prefix of the given `source` as a `&Self` with
2491    /// a DST length equal to `count`.
2492    ///
2493    /// This method attempts to return a reference to the prefix of `source`
2494    /// interpreted as a `Self` with `count` trailing elements, and a reference
2495    /// to the remaining bytes. If the length of `source` is less than the size
2496    /// of `Self` with `count` elements, if `source` is not appropriately
2497    /// aligned, or if the prefix of `source` does not contain a valid instance
2498    /// of `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned],
2499    /// you can [infallibly discard the alignment error][ConvertError::from].
2500    ///
2501    /// [self-unaligned]: Unaligned
2502    /// [slice-dst]: KnownLayout#dynamically-sized-types
2503    ///
2504    /// # Examples
2505    ///
2506    /// ```
2507    /// # #![allow(non_camel_case_types)] // For C0::xC0
2508    /// use zerocopy::TryFromBytes;
2509    /// # use zerocopy_derive::*;
2510    ///
2511    /// // The only valid value of this type is the byte `0xC0`
2512    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2513    /// #[repr(u8)]
2514    /// enum C0 { xC0 = 0xC0 }
2515    ///
2516    /// // The only valid value of this type is the bytes `0xC0C0`.
2517    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2518    /// #[repr(C)]
2519    /// struct C0C0(C0, C0);
2520    ///
2521    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2522    /// #[repr(C)]
2523    /// struct Packet {
2524    ///     magic_number: C0C0,
2525    ///     mug_size: u8,
2526    ///     temperature: u8,
2527    ///     marshmallows: [[u8; 2]],
2528    /// }
2529    ///
2530    /// let bytes = &[0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7, 8][..];
2531    ///
2532    /// let (packet, suffix) = Packet::try_ref_from_prefix_with_elems(bytes, 3).unwrap();
2533    ///
2534    /// assert_eq!(packet.mug_size, 240);
2535    /// assert_eq!(packet.temperature, 77);
2536    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
2537    /// assert_eq!(suffix, &[8u8][..]);
2538    ///
2539    /// // These bytes are not a valid instance of `Packet`.
2540    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 77, 240, 0xC0, 0xC0][..];
2541    /// assert!(Packet::try_ref_from_prefix_with_elems(bytes, 3).is_err());
2542    /// ```
2543    ///
2544    /// Since an explicit `count` is provided, this method supports types with
2545    /// zero-sized trailing slice elements. Methods such as [`try_ref_from_prefix`]
2546    /// which do not take an explicit count do not support such types.
2547    ///
2548    /// ```
2549    /// use core::num::NonZeroU16;
2550    /// use zerocopy::*;
2551    /// # use zerocopy_derive::*;
2552    ///
2553    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
2554    /// #[repr(C)]
2555    /// struct ZSTy {
2556    ///     leading_sized: NonZeroU16,
2557    ///     trailing_dst: [()],
2558    /// }
2559    ///
2560    /// let src = 0xCAFEu16.as_bytes();
2561    /// let (zsty, _) = ZSTy::try_ref_from_prefix_with_elems(src, 42).unwrap();
2562    /// assert_eq!(zsty.trailing_dst.len(), 42);
2563    /// ```
2564    ///
2565    /// [`try_ref_from_prefix`]: TryFromBytes::try_ref_from_prefix
2566    #[must_use = "has no side effects"]
2567    #[inline]
2568    fn try_ref_from_prefix_with_elems(
2569        source: &[u8],
2570        count: usize,
2571    ) -> Result<(&Self, &[u8]), TryCastError<&[u8], Self>>
2572    where
2573        Self: KnownLayout<PointerMetadata = usize> + Immutable,
2574    {
2575        try_ref_from_prefix_suffix(source, CastType::Prefix, Some(count))
2576    }
2577
2578    /// Attempts to interpret the suffix of the given `source` as a `&Self` with
2579    /// a DST length equal to `count`.
2580    ///
2581    /// This method attempts to return a reference to the suffix of `source`
2582    /// interpreted as a `Self` with `count` trailing elements, and a reference
2583    /// to the preceding bytes. If the length of `source` is less than the size
2584    /// of `Self` with `count` elements, if the suffix of `source` is not
2585    /// appropriately aligned, or if the suffix of `source` does not contain a
2586    /// valid instance of `Self`, this returns `Err`. If [`Self:
2587    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
2588    /// error][ConvertError::from].
2589    ///
2590    /// [self-unaligned]: Unaligned
2591    /// [slice-dst]: KnownLayout#dynamically-sized-types
2592    ///
2593    /// # Examples
2594    ///
2595    /// ```
2596    /// # #![allow(non_camel_case_types)] // For C0::xC0
2597    /// use zerocopy::TryFromBytes;
2598    /// # use zerocopy_derive::*;
2599    ///
2600    /// // The only valid value of this type is the byte `0xC0`
2601    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2602    /// #[repr(u8)]
2603    /// enum C0 { xC0 = 0xC0 }
2604    ///
2605    /// // The only valid value of this type is the bytes `0xC0C0`.
2606    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2607    /// #[repr(C)]
2608    /// struct C0C0(C0, C0);
2609    ///
2610    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2611    /// #[repr(C)]
2612    /// struct Packet {
2613    ///     magic_number: C0C0,
2614    ///     mug_size: u8,
2615    ///     temperature: u8,
2616    ///     marshmallows: [[u8; 2]],
2617    /// }
2618    ///
2619    /// let bytes = &[123, 0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
2620    ///
2621    /// let (prefix, packet) = Packet::try_ref_from_suffix_with_elems(bytes, 3).unwrap();
2622    ///
2623    /// assert_eq!(packet.mug_size, 240);
2624    /// assert_eq!(packet.temperature, 77);
2625    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
2626    /// assert_eq!(prefix, &[123u8][..]);
2627    ///
2628    /// // These bytes are not a valid instance of `Packet`.
2629    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 77, 240, 0xC0, 0xC0][..];
2630    /// assert!(Packet::try_ref_from_suffix_with_elems(bytes, 3).is_err());
2631    /// ```
2632    ///
2633    /// Since an explicit `count` is provided, this method supports types with
2634    /// zero-sized trailing slice elements. Methods such as [`try_ref_from_suffix`]
2635    /// which do not take an explicit count do not support such types.
2636    ///
2637    /// ```
2638    /// use core::num::NonZeroU16;
2639    /// use zerocopy::*;
2640    /// # use zerocopy_derive::*;
2641    ///
2642    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
2643    /// #[repr(C)]
2644    /// struct ZSTy {
2645    ///     leading_sized: NonZeroU16,
2646    ///     trailing_dst: [()],
2647    /// }
2648    ///
2649    /// let src = 0xCAFEu16.as_bytes();
2650    /// let (_, zsty) = ZSTy::try_ref_from_suffix_with_elems(src, 42).unwrap();
2651    /// assert_eq!(zsty.trailing_dst.len(), 42);
2652    /// ```
2653    ///
2654    /// [`try_ref_from_suffix`]: TryFromBytes::try_ref_from_suffix
2655    #[must_use = "has no side effects"]
2656    #[inline]
2657    fn try_ref_from_suffix_with_elems(
2658        source: &[u8],
2659        count: usize,
2660    ) -> Result<(&[u8], &Self), TryCastError<&[u8], Self>>
2661    where
2662        Self: KnownLayout<PointerMetadata = usize> + Immutable,
2663    {
2664        try_ref_from_prefix_suffix(source, CastType::Suffix, Some(count)).map(swap)
2665    }
2666
2667    /// Attempts to interpret the given `source` as a `&mut Self` with a DST
2668    /// length equal to `count`.
2669    ///
2670    /// This method attempts to return a reference to `source` interpreted as a
2671    /// `Self` with `count` trailing elements. If the length of `source` is not
2672    /// equal to the size of `Self` with `count` elements, if `source` is not
2673    /// appropriately aligned, or if `source` does not contain a valid instance
2674    /// of `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned],
2675    /// you can [infallibly discard the alignment error][ConvertError::from].
2676    ///
2677    /// [self-unaligned]: Unaligned
2678    /// [slice-dst]: KnownLayout#dynamically-sized-types
2679    ///
2680    /// # Examples
2681    ///
2682    /// ```
2683    /// # #![allow(non_camel_case_types)] // For C0::xC0
2684    /// use zerocopy::TryFromBytes;
2685    /// # use zerocopy_derive::*;
2686    ///
2687    /// // The only valid value of this type is the byte `0xC0`
2688    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2689    /// #[repr(u8)]
2690    /// enum C0 { xC0 = 0xC0 }
2691    ///
2692    /// // The only valid value of this type is the bytes `0xC0C0`.
2693    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2694    /// #[repr(C)]
2695    /// struct C0C0(C0, C0);
2696    ///
2697    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2698    /// #[repr(C, packed)]
2699    /// struct Packet {
2700    ///     magic_number: C0C0,
2701    ///     mug_size: u8,
2702    ///     temperature: u8,
2703    ///     marshmallows: [[u8; 2]],
2704    /// }
2705    ///
2706    /// let bytes = &mut [0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
2707    ///
2708    /// let packet = Packet::try_mut_from_bytes_with_elems(bytes, 3).unwrap();
2709    ///
2710    /// assert_eq!(packet.mug_size, 240);
2711    /// assert_eq!(packet.temperature, 77);
2712    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
2713    ///
2714    /// packet.temperature = 111;
2715    ///
2716    /// assert_eq!(bytes, [0xC0, 0xC0, 240, 111, 2, 3, 4, 5, 6, 7]);
2717    ///
2718    /// // These bytes are not a valid instance of `Packet`.
2719    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 77, 240, 0xC0, 0xC0][..];
2720    /// assert!(Packet::try_mut_from_bytes_with_elems(bytes, 3).is_err());
2721    /// ```
2722    ///
2723    /// Since an explicit `count` is provided, this method supports types with
2724    /// zero-sized trailing slice elements. Methods such as [`try_mut_from_bytes`]
2725    /// which do not take an explicit count do not support such types.
2726    ///
2727    /// ```
2728    /// use core::num::NonZeroU16;
2729    /// use zerocopy::*;
2730    /// # use zerocopy_derive::*;
2731    ///
2732    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2733    /// #[repr(C, packed)]
2734    /// struct ZSTy {
2735    ///     leading_sized: NonZeroU16,
2736    ///     trailing_dst: [()],
2737    /// }
2738    ///
2739    /// let mut src = 0xCAFEu16;
2740    /// let src = src.as_mut_bytes();
2741    /// let zsty = ZSTy::try_mut_from_bytes_with_elems(src, 42).unwrap();
2742    /// assert_eq!(zsty.trailing_dst.len(), 42);
2743    /// ```
2744    ///
2745    /// [`try_mut_from_bytes`]: TryFromBytes::try_mut_from_bytes
2746    #[must_use = "has no side effects"]
2747    #[inline]
2748    fn try_mut_from_bytes_with_elems(
2749        source: &mut [u8],
2750        count: usize,
2751    ) -> Result<&mut Self, TryCastError<&mut [u8], Self>>
2752    where
2753        Self: KnownLayout<PointerMetadata = usize> + IntoBytes,
2754    {
2755        match Ptr::from_mut(source).try_cast_into_no_leftover::<Self, BecauseExclusive>(Some(count))
2756        {
2757            Ok(source) => {
2758                // This call may panic. If that happens, it doesn't cause any soundness
2759                // issues, as we have not generated any invalid state which we need to
2760                // fix before returning.
2761                //
2762                // Note that one panic or post-monomorphization error condition is
2763                // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
2764    // pointer when `Self: !Immutable`. Since `source` is an exclusive
2765    // pointer, this panic condition will not happen.
2766                match source.try_into_valid() {
2767                    Ok(source) => Ok(source.as_mut()),
2768                    Err(e) => {
2769                        Err(e.map_src(|src| src.as_bytes::<BecauseExclusive>().as_mut()).into())
2770                    }
2771                }
2772            }
2773            Err(e) => Err(e.map_src(Ptr::as_mut).into()),
2774        }
2775    }
2776
2777    /// Attempts to interpret the prefix of the given `source` as a `&mut Self`
2778    /// with a DST length equal to `count`.
2779    ///
2780    /// This method attempts to return a reference to the prefix of `source`
2781    /// interpreted as a `Self` with `count` trailing elements, and a reference
2782    /// to the remaining bytes. If the length of `source` is less than the size
2783    /// of `Self` with `count` elements, if `source` is not appropriately
2784    /// aligned, or if the prefix of `source` does not contain a valid instance
2785    /// of `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned],
2786    /// you can [infallibly discard the alignment error][ConvertError::from].
2787    ///
2788    /// [self-unaligned]: Unaligned
2789    /// [slice-dst]: KnownLayout#dynamically-sized-types
2790    ///
2791    /// # Examples
2792    ///
2793    /// ```
2794    /// # #![allow(non_camel_case_types)] // For C0::xC0
2795    /// use zerocopy::TryFromBytes;
2796    /// # use zerocopy_derive::*;
2797    ///
2798    /// // The only valid value of this type is the byte `0xC0`
2799    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2800    /// #[repr(u8)]
2801    /// enum C0 { xC0 = 0xC0 }
2802    ///
2803    /// // The only valid value of this type is the bytes `0xC0C0`.
2804    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2805    /// #[repr(C)]
2806    /// struct C0C0(C0, C0);
2807    ///
2808    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2809    /// #[repr(C, packed)]
2810    /// struct Packet {
2811    ///     magic_number: C0C0,
2812    ///     mug_size: u8,
2813    ///     temperature: u8,
2814    ///     marshmallows: [[u8; 2]],
2815    /// }
2816    ///
2817    /// let bytes = &mut [0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7, 8][..];
2818    ///
2819    /// let (packet, suffix) = Packet::try_mut_from_prefix_with_elems(bytes, 3).unwrap();
2820    ///
2821    /// assert_eq!(packet.mug_size, 240);
2822    /// assert_eq!(packet.temperature, 77);
2823    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
2824    /// assert_eq!(suffix, &[8u8][..]);
2825    ///
2826    /// packet.temperature = 111;
2827    /// suffix[0] = 222;
2828    ///
2829    /// assert_eq!(bytes, [0xC0, 0xC0, 240, 111, 2, 3, 4, 5, 6, 7, 222]);
2830    ///
2831    /// // These bytes are not a valid instance of `Packet`.
2832    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 77, 240, 0xC0, 0xC0][..];
2833    /// assert!(Packet::try_mut_from_prefix_with_elems(bytes, 3).is_err());
2834    /// ```
2835    ///
2836    /// Since an explicit `count` is provided, this method supports types with
2837    /// zero-sized trailing slice elements. Methods such as [`try_mut_from_prefix`]
2838    /// which do not take an explicit count do not support such types.
2839    ///
2840    /// ```
2841    /// use core::num::NonZeroU16;
2842    /// use zerocopy::*;
2843    /// # use zerocopy_derive::*;
2844    ///
2845    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2846    /// #[repr(C, packed)]
2847    /// struct ZSTy {
2848    ///     leading_sized: NonZeroU16,
2849    ///     trailing_dst: [()],
2850    /// }
2851    ///
2852    /// let mut src = 0xCAFEu16;
2853    /// let src = src.as_mut_bytes();
2854    /// let (zsty, _) = ZSTy::try_mut_from_prefix_with_elems(src, 42).unwrap();
2855    /// assert_eq!(zsty.trailing_dst.len(), 42);
2856    /// ```
2857    ///
2858    /// [`try_mut_from_prefix`]: TryFromBytes::try_mut_from_prefix
2859    #[must_use = "has no side effects"]
2860    #[inline]
2861    fn try_mut_from_prefix_with_elems(
2862        source: &mut [u8],
2863        count: usize,
2864    ) -> Result<(&mut Self, &mut [u8]), TryCastError<&mut [u8], Self>>
2865    where
2866        Self: KnownLayout<PointerMetadata = usize> + IntoBytes,
2867    {
2868        try_mut_from_prefix_suffix(source, CastType::Prefix, Some(count))
2869    }
2870
2871    /// Attempts to interpret the suffix of the given `source` as a `&mut Self`
2872    /// with a DST length equal to `count`.
2873    ///
2874    /// This method attempts to return a reference to the suffix of `source`
2875    /// interpreted as a `Self` with `count` trailing elements, and a reference
2876    /// to the preceding bytes. If the length of `source` is less than the size
2877    /// of `Self` with `count` elements, if the suffix of `source` is not
2878    /// appropriately aligned, or if the suffix of `source` does not contain a
2879    /// valid instance of `Self`, this returns `Err`. If [`Self:
2880    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
2881    /// error][ConvertError::from].
2882    ///
2883    /// [self-unaligned]: Unaligned
2884    /// [slice-dst]: KnownLayout#dynamically-sized-types
2885    ///
2886    /// # Examples
2887    ///
2888    /// ```
2889    /// # #![allow(non_camel_case_types)] // For C0::xC0
2890    /// use zerocopy::TryFromBytes;
2891    /// # use zerocopy_derive::*;
2892    ///
2893    /// // The only valid value of this type is the byte `0xC0`
2894    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2895    /// #[repr(u8)]
2896    /// enum C0 { xC0 = 0xC0 }
2897    ///
2898    /// // The only valid value of this type is the bytes `0xC0C0`.
2899    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2900    /// #[repr(C)]
2901    /// struct C0C0(C0, C0);
2902    ///
2903    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2904    /// #[repr(C, packed)]
2905    /// struct Packet {
2906    ///     magic_number: C0C0,
2907    ///     mug_size: u8,
2908    ///     temperature: u8,
2909    ///     marshmallows: [[u8; 2]],
2910    /// }
2911    ///
2912    /// let bytes = &mut [123, 0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
2913    ///
2914    /// let (prefix, packet) = Packet::try_mut_from_suffix_with_elems(bytes, 3).unwrap();
2915    ///
2916    /// assert_eq!(packet.mug_size, 240);
2917    /// assert_eq!(packet.temperature, 77);
2918    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
2919    /// assert_eq!(prefix, &[123u8][..]);
2920    ///
2921    /// prefix[0] = 111;
2922    /// packet.temperature = 222;
2923    ///
2924    /// assert_eq!(bytes, [111, 0xC0, 0xC0, 240, 222, 2, 3, 4, 5, 6, 7]);
2925    ///
2926    /// // These bytes are not a valid instance of `Packet`.
2927    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 77, 240, 0xC0, 0xC0][..];
2928    /// assert!(Packet::try_mut_from_suffix_with_elems(bytes, 3).is_err());
2929    /// ```
2930    ///
2931    /// Since an explicit `count` is provided, this method supports types with
2932    /// zero-sized trailing slice elements. Methods such as [`try_mut_from_suffix`]
2933    /// which do not take an explicit count do not support such types.
2934    ///
2935    /// ```
2936    /// use core::num::NonZeroU16;
2937    /// use zerocopy::*;
2938    /// # use zerocopy_derive::*;
2939    ///
2940    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2941    /// #[repr(C, packed)]
2942    /// struct ZSTy {
2943    ///     leading_sized: NonZeroU16,
2944    ///     trailing_dst: [()],
2945    /// }
2946    ///
2947    /// let mut src = 0xCAFEu16;
2948    /// let src = src.as_mut_bytes();
2949    /// let (_, zsty) = ZSTy::try_mut_from_suffix_with_elems(src, 42).unwrap();
2950    /// assert_eq!(zsty.trailing_dst.len(), 42);
2951    /// ```
2952    ///
2953    /// [`try_mut_from_suffix`]: TryFromBytes::try_mut_from_suffix
2954    #[must_use = "has no side effects"]
2955    #[inline]
2956    fn try_mut_from_suffix_with_elems(
2957        source: &mut [u8],
2958        count: usize,
2959    ) -> Result<(&mut [u8], &mut Self), TryCastError<&mut [u8], Self>>
2960    where
2961        Self: KnownLayout<PointerMetadata = usize> + IntoBytes,
2962    {
2963        try_mut_from_prefix_suffix(source, CastType::Suffix, Some(count)).map(swap)
2964    }
2965
2966    /// Attempts to read the given `source` as a `Self`.
2967    ///
2968    /// If `source.len() != size_of::<Self>()` or the bytes are not a valid
2969    /// instance of `Self`, this returns `Err`.
2970    ///
2971    /// # Examples
2972    ///
2973    /// ```
2974    /// use zerocopy::TryFromBytes;
2975    /// # use zerocopy_derive::*;
2976    ///
2977    /// // The only valid value of this type is the byte `0xC0`
2978    /// #[derive(TryFromBytes)]
2979    /// #[repr(u8)]
2980    /// enum C0 { xC0 = 0xC0 }
2981    ///
2982    /// // The only valid value of this type is the bytes `0xC0C0`.
2983    /// #[derive(TryFromBytes)]
2984    /// #[repr(C)]
2985    /// struct C0C0(C0, C0);
2986    ///
2987    /// #[derive(TryFromBytes)]
2988    /// #[repr(C)]
2989    /// struct Packet {
2990    ///     magic_number: C0C0,
2991    ///     mug_size: u8,
2992    ///     temperature: u8,
2993    /// }
2994    ///
2995    /// let bytes = &[0xC0, 0xC0, 240, 77][..];
2996    ///
2997    /// let packet = Packet::try_read_from_bytes(bytes).unwrap();
2998    ///
2999    /// assert_eq!(packet.mug_size, 240);
3000    /// assert_eq!(packet.temperature, 77);
3001    ///
3002    /// // These bytes are not a valid instance of `Packet`.
3003    /// let bytes = &mut [0x10, 0xC0, 240, 77][..];
3004    /// assert!(Packet::try_read_from_bytes(bytes).is_err());
3005    /// ```
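    ///
    /// Unlike the `try_ref_from_*` conversions, this method copies out of
    /// `source`, so it never fails due to alignment; only size and validity
    /// errors are possible. For example, using zerocopy's `TryFromBytes` impl
    /// for `char`, whose validity is checked at runtime:
    ///
    /// ```
    /// use zerocopy::TryFromBytes;
    ///
    /// // Use native-endian bytes so the example is endian-agnostic.
    /// let bytes = u32::from('A').to_ne_bytes();
    /// let c = char::try_read_from_bytes(&bytes[..]).unwrap();
    /// assert_eq!(c, 'A');
    ///
    /// // `0xD800` is a surrogate, which is not a valid `char`.
    /// let bad = 0xD800u32.to_ne_bytes();
    /// assert!(char::try_read_from_bytes(&bad[..]).is_err());
    /// ```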
3006    #[must_use = "has no side effects"]
3007    #[inline]
3008    fn try_read_from_bytes(source: &[u8]) -> Result<Self, TryReadError<&[u8], Self>>
3009    where
3010        Self: Sized,
3011    {
3012        let candidate = match CoreMaybeUninit::<Self>::read_from_bytes(source) {
3013            Ok(candidate) => candidate,
3014            Err(e) => {
3015                return Err(TryReadError::Size(e.with_dst()));
3016            }
3017        };
3018    // SAFETY: `candidate` was copied from `source: &[u8]`, so all of
3019        // its bytes are initialized.
3020        unsafe { try_read_from(source, candidate) }
3021    }
3022
3023    /// Attempts to read a `Self` from the prefix of the given `source`.
3024    ///
3025    /// This attempts to read a `Self` from the first `size_of::<Self>()` bytes
3026    /// of `source`, returning that `Self` and any remaining bytes. If
3027    /// `source.len() < size_of::<Self>()` or the bytes are not a valid instance
3028    /// of `Self`, it returns `Err`.
3029    ///
3030    /// # Examples
3031    ///
3032    /// ```
3033    /// use zerocopy::TryFromBytes;
3034    /// # use zerocopy_derive::*;
3035    ///
3036    /// // The only valid value of this type is the byte `0xC0`
3037    /// #[derive(TryFromBytes)]
3038    /// #[repr(u8)]
3039    /// enum C0 { xC0 = 0xC0 }
3040    ///
3041    /// // The only valid value of this type is the bytes `0xC0C0`.
3042    /// #[derive(TryFromBytes)]
3043    /// #[repr(C)]
3044    /// struct C0C0(C0, C0);
3045    ///
3046    /// #[derive(TryFromBytes)]
3047    /// #[repr(C)]
3048    /// struct Packet {
3049    ///     magic_number: C0C0,
3050    ///     mug_size: u8,
3051    ///     temperature: u8,
3052    /// }
3053    ///
3054    /// // These are more bytes than are needed to encode a `Packet`.
3055    /// let bytes = &[0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
3056    ///
3057    /// let (packet, suffix) = Packet::try_read_from_prefix(bytes).unwrap();
3058    ///
3059    /// assert_eq!(packet.mug_size, 240);
3060    /// assert_eq!(packet.temperature, 77);
3061    /// assert_eq!(suffix, &[0u8, 1, 2, 3, 4, 5, 6][..]);
3062    ///
3063    /// // These bytes are not a valid instance of `Packet`.
3064    /// let bytes = &[0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
3065    /// assert!(Packet::try_read_from_prefix(bytes).is_err());
3066    /// ```
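    ///
    /// Because the remaining bytes are returned, this method can be applied
    /// repeatedly to parse a sequence of back-to-back records. A sketch (the
    /// `Pair` type is illustrative):
    ///
    /// ```
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, Debug, PartialEq)]
    /// #[repr(C)]
    /// struct Pair {
    ///     a: u8,
    ///     b: u8,
    /// }
    ///
    /// let mut rest = &[1u8, 2, 3, 4][..];
    /// let mut pairs = Vec::new();
    /// // Each iteration consumes `size_of::<Pair>()` bytes; the loop stops
    /// // once too few bytes remain.
    /// while let Ok((pair, suffix)) = Pair::try_read_from_prefix(rest) {
    ///     pairs.push(pair);
    ///     rest = suffix;
    /// }
    /// assert_eq!(pairs, [Pair { a: 1, b: 2 }, Pair { a: 3, b: 4 }]);
    /// ```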
3067    #[must_use = "has no side effects"]
3068    #[inline]
3069    fn try_read_from_prefix(source: &[u8]) -> Result<(Self, &[u8]), TryReadError<&[u8], Self>>
3070    where
3071        Self: Sized,
3072    {
3073        let (candidate, suffix) = match CoreMaybeUninit::<Self>::read_from_prefix(source) {
3074            Ok(candidate) => candidate,
3075            Err(e) => {
3076                return Err(TryReadError::Size(e.with_dst()));
3077            }
3078        };
3079    // SAFETY: `candidate` was copied from `source: &[u8]`, so all of
3080        // its bytes are initialized.
3081        unsafe { try_read_from(source, candidate).map(|slf| (slf, suffix)) }
3082    }

    /// Attempts to read a `Self` from the suffix of the given `source`.
    ///
    /// This attempts to read a `Self` from the last `size_of::<Self>()` bytes
    /// of `source`, returning that `Self` and any preceding bytes. If
    /// `source.len() < size_of::<Self>()` or the bytes are not a valid instance
    /// of `Self`, it returns `Err`.
    ///
    /// # Examples
    ///
    /// ```
    /// # #![allow(non_camel_case_types)] // For C0::xC0
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`.
    /// #[derive(TryFromBytes)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes)]
    /// #[repr(C)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    /// }
    ///
    /// // These are more bytes than are needed to encode a `Packet`.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 0xC0, 0xC0, 240, 77][..];
    ///
    /// let (prefix, packet) = Packet::try_read_from_suffix(bytes).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(prefix, &[0u8, 1, 2, 3, 4, 5][..]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 0x10, 0xC0, 240, 77][..];
    /// assert!(Packet::try_read_from_suffix(bytes).is_err());
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn try_read_from_suffix(source: &[u8]) -> Result<(&[u8], Self), TryReadError<&[u8], Self>>
    where
        Self: Sized,
    {
        let (prefix, candidate) = match CoreMaybeUninit::<Self>::read_from_suffix(source) {
            Ok(candidate) => candidate,
            Err(e) => {
                return Err(TryReadError::Size(e.with_dst()));
            }
        };
        // SAFETY: `candidate` was copied from `source: &[u8]`, so all of its
        // bytes are initialized.
        unsafe { try_read_from(source, candidate).map(|slf| (prefix, slf)) }
    }
}

#[inline(always)]
fn try_ref_from_prefix_suffix<T: TryFromBytes + KnownLayout + Immutable + ?Sized>(
    source: &[u8],
    cast_type: CastType,
    meta: Option<T::PointerMetadata>,
) -> Result<(&T, &[u8]), TryCastError<&[u8], T>> {
    match Ptr::from_ref(source).try_cast_into::<T, BecauseImmutable>(cast_type, meta) {
        Ok((source, prefix_suffix)) => {
            // This call may panic. If that happens, it doesn't cause any soundness
            // issues, as we have not generated any invalid state which we need to
            // fix before returning.
            //
            // Note that one panic or post-monomorphization error condition is
            // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
            // pointer when `T: !Immutable`. Since `T: Immutable`, this panic
            // condition will not happen.
            match source.try_into_valid() {
                Ok(valid) => Ok((valid.as_ref(), prefix_suffix.as_ref())),
                Err(e) => Err(e.map_src(|src| src.as_bytes::<BecauseImmutable>().as_ref()).into()),
            }
        }
        Err(e) => Err(e.map_src(Ptr::as_ref).into()),
    }
}

#[inline(always)]
fn try_mut_from_prefix_suffix<T: IntoBytes + TryFromBytes + KnownLayout + ?Sized>(
    candidate: &mut [u8],
    cast_type: CastType,
    meta: Option<T::PointerMetadata>,
) -> Result<(&mut T, &mut [u8]), TryCastError<&mut [u8], T>> {
    match Ptr::from_mut(candidate).try_cast_into::<T, BecauseExclusive>(cast_type, meta) {
        Ok((candidate, prefix_suffix)) => {
            // This call may panic. If that happens, it doesn't cause any soundness
            // issues, as we have not generated any invalid state which we need to
            // fix before returning.
            //
            // Note that one panic or post-monomorphization error condition is
            // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
            // pointer when `T: !Immutable`. Since `candidate` has `Exclusive`
            // aliasing, this panic condition will not happen.
            match candidate.try_into_valid() {
                Ok(valid) => Ok((valid.as_mut(), prefix_suffix.as_mut())),
                Err(e) => Err(e.map_src(|src| src.as_bytes::<BecauseExclusive>().as_mut()).into()),
            }
        }
        Err(e) => Err(e.map_src(Ptr::as_mut).into()),
    }
}

#[inline(always)]
fn swap<T, U>((t, u): (T, U)) -> (U, T) {
    (u, t)
}

/// # Safety
///
/// All bytes of `candidate` must be initialized.
#[inline(always)]
unsafe fn try_read_from<S, T: TryFromBytes>(
    source: S,
    mut candidate: CoreMaybeUninit<T>,
) -> Result<T, TryReadError<S, T>> {
    // We use `from_mut` despite not mutating via `c_ptr` so that we don't need
    // to add a `T: Immutable` bound.
    let c_ptr = Ptr::from_mut(&mut candidate);
    // SAFETY: `c_ptr` has no uninitialized sub-ranges because it is derived
    // from `candidate`, which the caller promises is entirely initialized.
    // Since `candidate` is a `MaybeUninit`, it has no validity requirements,
    // and so no values written to an `Initialized` `c_ptr` can violate its
    // validity. Since `c_ptr` has `Exclusive` aliasing, no mutations may happen
    // except via `c_ptr` so long as it is live, so we don't need to worry about
    // the fact that `c_ptr` may have more restricted validity than `candidate`.
    let c_ptr = unsafe { c_ptr.assume_validity::<invariant::Initialized>() };
    let c_ptr = c_ptr.transmute();

    // Since we don't have `T: KnownLayout`, we hack around that by using
    // `Wrapping<T>`, which implements `KnownLayout` even if `T` doesn't.
    //
    // This call may panic. If that happens, it doesn't cause any soundness
    // issues, as we have not generated any invalid state which we need to fix
    // before returning.
    //
    // Note that one panic or post-monomorphization error condition is calling
    // `try_into_valid` (and thus `is_bit_valid`) with a shared pointer when
    // `T: !Immutable`. Since `c_ptr` has `Exclusive` aliasing, this panic
    // condition will not happen.
    if !Wrapping::<T>::is_bit_valid(c_ptr.forget_aligned()) {
        return Err(ValidityError::new(source).into());
    }

    fn _assert_same_size_and_validity<T>()
    where
        Wrapping<T>: pointer::TransmuteFrom<T, invariant::Valid, invariant::Valid>,
        T: pointer::TransmuteFrom<Wrapping<T>, invariant::Valid, invariant::Valid>,
    {
    }

    _assert_same_size_and_validity::<T>();

    // SAFETY: We just validated that `candidate` contains a valid
    // `Wrapping<T>`, which has the same size and bit validity as `T`, as
    // guaranteed by the preceding type assertion.
    Ok(unsafe { candidate.assume_init() })
}

/// Types for which a sequence of bytes all set to zero represents a valid
/// instance of the type.
///
/// Any memory region of the appropriate length which is guaranteed to contain
/// only zero bytes can be viewed as any `FromZeros` type with no runtime
/// overhead. This is useful whenever memory is known to be in a zeroed state,
/// such as memory returned from some allocation routines.
///
/// # Warning: Padding bytes
///
/// Note that, when a value is moved or copied, only the non-padding bytes of
/// that value are guaranteed to be preserved. It is unsound to assume that
/// values written to padding bytes are preserved after a move or copy. For more
/// details, see the [`FromBytes` docs][frombytes-warning-padding-bytes].
///
/// [frombytes-warning-padding-bytes]: FromBytes#warning-padding-bytes
///
/// # Implementation
///
/// **Do not implement this trait yourself!** Instead, use
/// [`#[derive(FromZeros)]`][derive]; e.g.:
///
/// ```
/// # use zerocopy_derive::{FromZeros, Immutable};
/// #[derive(FromZeros)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(FromZeros)]
/// #[repr(u8)]
/// enum MyEnum {
/// #   Variant0,
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(FromZeros, Immutable)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// This derive performs a sophisticated, compile-time safety analysis to
/// determine whether a type is `FromZeros`.
///
/// # Safety
///
/// *This section describes what is required in order for `T: FromZeros`, and
/// what unsafe code may assume of such types. If you don't plan on implementing
/// `FromZeros` manually, and you don't plan on writing unsafe code that
/// operates on `FromZeros` types, then you don't need to read this section.*
///
/// If `T: FromZeros`, then unsafe code may assume that it is sound to produce a
/// `T` whose bytes are all initialized to zero. If a type is marked as
/// `FromZeros` which violates this contract, it may cause undefined behavior.
///
/// `#[derive(FromZeros)]` only permits [types which satisfy these
/// requirements][derive-analysis].
///
#[cfg_attr(
    feature = "derive",
    doc = "[derive]: zerocopy_derive::FromZeros",
    doc = "[derive-analysis]: zerocopy_derive::FromZeros#analysis"
)]
#[cfg_attr(
    not(feature = "derive"),
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.FromZeros.html"),
    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.FromZeros.html#analysis"),
)]
#[cfg_attr(
    zerocopy_diagnostic_on_unimplemented_1_78_0,
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(FromZeros)]` to `{Self}`")
)]
pub unsafe trait FromZeros: TryFromBytes {
    // The `Self: Sized` bound makes it so that `FromZeros` is still object
    // safe.
    #[doc(hidden)]
    fn only_derive_is_allowed_to_implement_this_trait()
    where
        Self: Sized;

    /// Overwrites `self` with zeros.
    ///
    /// Sets every byte in `self` to 0. While this is similar to doing `*self =
    /// Self::new_zeroed()`, it differs in that `zero` does not semantically
    /// drop the current value and replace it with a new one — it simply
    /// modifies the bytes of the existing value.
    ///
    /// # Examples
    ///
    /// ```
    /// # use zerocopy::FromZeros;
    /// # use zerocopy_derive::*;
    /// #
    /// #[derive(FromZeros)]
    /// #[repr(C)]
    /// struct PacketHeader {
    ///     src_port: [u8; 2],
    ///     dst_port: [u8; 2],
    ///     length: [u8; 2],
    ///     checksum: [u8; 2],
    /// }
    ///
    /// let mut header = PacketHeader {
    ///     src_port: 100u16.to_be_bytes(),
    ///     dst_port: 200u16.to_be_bytes(),
    ///     length: 300u16.to_be_bytes(),
    ///     checksum: 400u16.to_be_bytes(),
    /// };
    ///
    /// header.zero();
    ///
    /// assert_eq!(header.src_port, [0, 0]);
    /// assert_eq!(header.dst_port, [0, 0]);
    /// assert_eq!(header.length, [0, 0]);
    /// assert_eq!(header.checksum, [0, 0]);
    /// ```
    #[inline(always)]
    fn zero(&mut self) {
        let slf: *mut Self = self;
        let len = mem::size_of_val(self);
        // SAFETY:
        // - `self` is guaranteed by the type system to be valid for writes of
        //   size `size_of_val(self)`.
        // - `u8`'s alignment is 1, and thus `self` is guaranteed to be aligned
        //   as required by `u8`.
        // - Since `Self: FromZeros`, the all-zeros instance is a valid instance
        //   of `Self`.
        //
        // TODO(#429): Add references to docs and quotes.
        unsafe { ptr::write_bytes(slf.cast::<u8>(), 0, len) };
    }

    /// Creates an instance of `Self` from zeroed bytes.
    ///
    /// # Examples
    ///
    /// ```
    /// # use zerocopy::FromZeros;
    /// # use zerocopy_derive::*;
    /// #
    /// #[derive(FromZeros)]
    /// #[repr(C)]
    /// struct PacketHeader {
    ///     src_port: [u8; 2],
    ///     dst_port: [u8; 2],
    ///     length: [u8; 2],
    ///     checksum: [u8; 2],
    /// }
    ///
    /// let header: PacketHeader = FromZeros::new_zeroed();
    ///
    /// assert_eq!(header.src_port, [0, 0]);
    /// assert_eq!(header.dst_port, [0, 0]);
    /// assert_eq!(header.length, [0, 0]);
    /// assert_eq!(header.checksum, [0, 0]);
    /// ```
    #[must_use = "has no side effects"]
    #[inline(always)]
    fn new_zeroed() -> Self
    where
        Self: Sized,
    {
        // SAFETY: `FromZeros` says that the all-zeros bit pattern is legal.
        unsafe { mem::zeroed() }
    }

    /// Creates a `Box<Self>` from zeroed bytes.
    ///
    /// This function is useful for allocating large values on the heap and
    /// zero-initializing them, without ever creating a temporary instance of
    /// `Self` on the stack. For example, `<[u8; 1048576]>::new_box_zeroed()`
    /// will allocate `[u8; 1048576]` directly on the heap; it does not require
    /// storing `[u8; 1048576]` in a temporary variable on the stack.
    ///
    /// On systems that use a heap implementation that supports allocating from
    /// pre-zeroed memory, using `new_box_zeroed` (or related functions) may
    /// have performance benefits.
    ///
    /// # Errors
    ///
    /// Returns an error on allocation failure. Allocation failure is guaranteed
    /// never to cause a panic or an abort.
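    ///
    /// # Examples
    ///
    /// A minimal example (assuming the `alloc` feature is enabled):
    ///
    /// ```
    /// # use zerocopy::FromZeros;
    /// // `[u8; 1024]` is `FromZeros`, so we can heap-allocate a zeroed
    /// // instance without constructing one on the stack first.
    /// let buf: Box<[u8; 1024]> = <[u8; 1024]>::new_box_zeroed().unwrap();
    /// assert!(buf.iter().all(|&b| b == 0));
    /// ```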
    #[must_use = "has no side effects (other than allocation)"]
    #[cfg(any(feature = "alloc", test))]
    #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
    #[inline]
    fn new_box_zeroed() -> Result<Box<Self>, AllocError>
    where
        Self: Sized,
    {
        // If `T` is a ZST, then return a proper boxed instance of it. There is
        // no allocation, but `Box` does require a correct dangling pointer.
        let layout = Layout::new::<Self>();
        if layout.size() == 0 {
            // Construct the `Box` from a dangling pointer to avoid calling
            // `Self::new_zeroed`. This ensures that stack space is never
            // allocated for `Self` even on lower opt-levels where this branch
            // might not get optimized out.

            // SAFETY: Per [1], when `T` is a ZST, `Box<T>`'s only validity
            // requirements are that the pointer is non-null and sufficiently
            // aligned. Per [2], `NonNull::dangling` produces a pointer which
            // is sufficiently aligned. Since the produced pointer is a
            // `NonNull`, it is non-null.
            //
            // [1] Per https://doc.rust-lang.org/nightly/std/boxed/index.html#memory-layout:
            //
            //   For zero-sized values, the `Box` pointer has to be non-null and sufficiently aligned.
            //
            // [2] Per https://doc.rust-lang.org/std/ptr/struct.NonNull.html#method.dangling:
            //
            //   Creates a new `NonNull` that is dangling, but well-aligned.
            return Ok(unsafe { Box::from_raw(NonNull::dangling().as_ptr()) });
        }

        // TODO(#429): Add a "SAFETY" comment and remove this `allow`.
        #[allow(clippy::undocumented_unsafe_blocks)]
        let ptr = unsafe { alloc::alloc::alloc_zeroed(layout).cast::<Self>() };
        if ptr.is_null() {
            return Err(AllocError);
        }
        // TODO(#429): Add a "SAFETY" comment and remove this `allow`.
        #[allow(clippy::undocumented_unsafe_blocks)]
        Ok(unsafe { Box::from_raw(ptr) })
    }

    /// Creates a `Box<Self>` (a boxed slice) from zeroed bytes.
    ///
    /// This function is useful for allocating large slices on the heap and
    /// zero-initializing them, without ever creating a temporary instance of
    /// `Self` on the stack. For example,
    /// `<[u8]>::new_box_zeroed_with_elems(1048576)` will allocate the slice
    /// directly on the heap; it does not require storing the slice on the
    /// stack.
    ///
    /// On systems that use a heap implementation that supports allocating from
    /// pre-zeroed memory, using `new_box_zeroed_with_elems` may have
    /// performance benefits.
    ///
    /// If the elements of `Self` are zero-sized, then this function will return
    /// a `Box<Self>` that has the correct `len`. Such a box cannot contain any
    /// actual information, but its `len()` property will report the correct
    /// value.
    ///
    /// # Errors
    ///
    /// Returns an error on allocation failure. Allocation failure is
    /// guaranteed never to cause a panic or an abort.
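    ///
    /// # Examples
    ///
    /// A minimal example (assuming the `alloc` feature is enabled):
    ///
    /// ```
    /// # use zerocopy::FromZeros;
    /// // Allocate a zeroed `Box<[u16]>` with 8 elements directly on the heap.
    /// let slice: Box<[u16]> = <[u16]>::new_box_zeroed_with_elems(8).unwrap();
    /// assert_eq!(slice.len(), 8);
    /// assert!(slice.iter().all(|&elem| elem == 0));
    /// ```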
    #[must_use = "has no side effects (other than allocation)"]
    #[cfg(feature = "alloc")]
    #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
    #[inline]
    fn new_box_zeroed_with_elems(count: usize) -> Result<Box<Self>, AllocError>
    where
        Self: KnownLayout<PointerMetadata = usize>,
    {
        // SAFETY: `alloc::alloc::alloc_zeroed` is a valid argument of
        // `new_box`. The referent of the pointer returned by `alloc_zeroed`
        // (and, consequently, the `Box` derived from it) is a valid instance of
        // `Self`, because `Self` is `FromZeros`.
        unsafe { crate::util::new_box(count, alloc::alloc::alloc_zeroed) }
    }

    #[deprecated(since = "0.8.0", note = "renamed to `FromZeros::new_box_zeroed_with_elems`")]
    #[doc(hidden)]
    #[cfg(feature = "alloc")]
    #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
    #[must_use = "has no side effects (other than allocation)"]
    #[inline(always)]
    fn new_box_slice_zeroed(len: usize) -> Result<Box<[Self]>, AllocError>
    where
        Self: Sized,
    {
        <[Self]>::new_box_zeroed_with_elems(len)
    }

    /// Creates a `Vec<Self>` from zeroed bytes.
    ///
    /// This function is useful for allocating large `Vec`s and
    /// zero-initializing them, without ever creating a temporary instance of
    /// `[Self; _]` (or many temporary instances of `Self`) on the stack. For
    /// example, `u8::new_vec_zeroed(1048576)` will allocate directly on the
    /// heap; it does not require storing intermediate values on the stack.
    ///
    /// On systems that use a heap implementation that supports allocating from
    /// pre-zeroed memory, using `new_vec_zeroed` may have performance benefits.
    ///
    /// If `Self` is a zero-sized type, then this function will return a
    /// `Vec<Self>` that has the correct `len`. Such a `Vec` cannot contain any
    /// actual information, but its `len()` property will report the correct
    /// value.
    ///
    /// # Errors
    ///
    /// Returns an error on allocation failure. Allocation failure is
    /// guaranteed never to cause a panic or an abort.
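    ///
    /// # Examples
    ///
    /// A minimal example (assuming the `alloc` feature is enabled):
    ///
    /// ```
    /// # use zerocopy::FromZeros;
    /// let v: Vec<u32> = u32::new_vec_zeroed(4).unwrap();
    /// assert_eq!(v, [0, 0, 0, 0]);
    /// ```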
    #[must_use = "has no side effects (other than allocation)"]
    #[cfg(feature = "alloc")]
    #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
    #[inline(always)]
    fn new_vec_zeroed(len: usize) -> Result<Vec<Self>, AllocError>
    where
        Self: Sized,
    {
        <[Self]>::new_box_zeroed_with_elems(len).map(Into::into)
    }

    /// Extends a `Vec<Self>` by pushing `additional` new items onto the end of
    /// the vector. The new items are initialized with zeros.
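    ///
    /// # Examples
    ///
    /// A minimal example (assuming the `alloc` feature is enabled and the
    /// toolchain is new enough for this method to be available):
    ///
    /// ```
    /// # use zerocopy::FromZeros;
    /// let mut v = vec![100u8, 200];
    /// u8::extend_vec_zeroed(&mut v, 3).unwrap();
    /// assert_eq!(v, [100, 200, 0, 0, 0]);
    /// ```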
    #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
    #[cfg(feature = "alloc")]
    #[cfg_attr(doc_cfg, doc(cfg(all(rust = "1.57.0", feature = "alloc"))))]
    #[inline(always)]
    fn extend_vec_zeroed(v: &mut Vec<Self>, additional: usize) -> Result<(), AllocError>
    where
        Self: Sized,
    {
        // PANICS: We pass `v.len()` for `position`, so the `position > v.len()`
        // panic condition is not satisfied.
        <Self as FromZeros>::insert_vec_zeroed(v, v.len(), additional)
    }

    /// Inserts `additional` new items into `Vec<Self>` at `position`. The new
    /// items are initialized with zeros.
    ///
    /// # Panics
    ///
    /// Panics if `position > v.len()`.
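    ///
    /// # Examples
    ///
    /// A minimal example (assuming the `alloc` feature is enabled and the
    /// toolchain is new enough for this method to be available):
    ///
    /// ```
    /// # use zerocopy::FromZeros;
    /// let mut v = vec![1u8, 2, 3];
    /// // Insert two zeroed items at index 1.
    /// u8::insert_vec_zeroed(&mut v, 1, 2).unwrap();
    /// assert_eq!(v, [1, 0, 0, 2, 3]);
    /// ```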
    #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
    #[cfg(feature = "alloc")]
    #[cfg_attr(doc_cfg, doc(cfg(all(rust = "1.57.0", feature = "alloc"))))]
    #[inline]
    fn insert_vec_zeroed(
        v: &mut Vec<Self>,
        position: usize,
        additional: usize,
    ) -> Result<(), AllocError>
    where
        Self: Sized,
    {
        assert!(position <= v.len());
        // We only conditionally compile on versions on which `try_reserve` is
        // stable; the Clippy lint is a false positive.
        v.try_reserve(additional).map_err(|_| AllocError)?;
        // SAFETY: The `try_reserve` call guarantees that these cannot overflow:
        // * `ptr.add(position)`
        // * `position + additional`
        // * `v.len() + additional`
        //
        // `v.len() - position` cannot overflow because we asserted that
        // `position <= v.len()`.
        unsafe {
            // This is a potentially overlapping copy.
            let ptr = v.as_mut_ptr();
            #[allow(clippy::arithmetic_side_effects)]
            ptr.add(position).copy_to(ptr.add(position + additional), v.len() - position);
            ptr.add(position).write_bytes(0, additional);
            #[allow(clippy::arithmetic_side_effects)]
            v.set_len(v.len() + additional);
        }

        Ok(())
    }
}

/// Analyzes whether a type is [`FromBytes`].
///
/// This derive analyzes, at compile time, whether the annotated type satisfies
/// the [safety conditions] of `FromBytes` and implements `FromBytes` and its
/// supertraits if it is sound to do so. This derive can be applied to structs,
/// enums, and unions; e.g.:
///
/// ```
/// # use zerocopy_derive::{FromBytes, FromZeros, Immutable};
/// #[derive(FromBytes)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(FromBytes)]
/// #[repr(u8)]
/// enum MyEnum {
/// #   V00, V01, V02, V03, V04, V05, V06, V07, V08, V09, V0A, V0B, V0C, V0D, V0E,
/// #   V0F, V10, V11, V12, V13, V14, V15, V16, V17, V18, V19, V1A, V1B, V1C, V1D,
/// #   V1E, V1F, V20, V21, V22, V23, V24, V25, V26, V27, V28, V29, V2A, V2B, V2C,
/// #   V2D, V2E, V2F, V30, V31, V32, V33, V34, V35, V36, V37, V38, V39, V3A, V3B,
/// #   V3C, V3D, V3E, V3F, V40, V41, V42, V43, V44, V45, V46, V47, V48, V49, V4A,
/// #   V4B, V4C, V4D, V4E, V4F, V50, V51, V52, V53, V54, V55, V56, V57, V58, V59,
/// #   V5A, V5B, V5C, V5D, V5E, V5F, V60, V61, V62, V63, V64, V65, V66, V67, V68,
/// #   V69, V6A, V6B, V6C, V6D, V6E, V6F, V70, V71, V72, V73, V74, V75, V76, V77,
/// #   V78, V79, V7A, V7B, V7C, V7D, V7E, V7F, V80, V81, V82, V83, V84, V85, V86,
/// #   V87, V88, V89, V8A, V8B, V8C, V8D, V8E, V8F, V90, V91, V92, V93, V94, V95,
/// #   V96, V97, V98, V99, V9A, V9B, V9C, V9D, V9E, V9F, VA0, VA1, VA2, VA3, VA4,
/// #   VA5, VA6, VA7, VA8, VA9, VAA, VAB, VAC, VAD, VAE, VAF, VB0, VB1, VB2, VB3,
/// #   VB4, VB5, VB6, VB7, VB8, VB9, VBA, VBB, VBC, VBD, VBE, VBF, VC0, VC1, VC2,
/// #   VC3, VC4, VC5, VC6, VC7, VC8, VC9, VCA, VCB, VCC, VCD, VCE, VCF, VD0, VD1,
/// #   VD2, VD3, VD4, VD5, VD6, VD7, VD8, VD9, VDA, VDB, VDC, VDD, VDE, VDF, VE0,
/// #   VE1, VE2, VE3, VE4, VE5, VE6, VE7, VE8, VE9, VEA, VEB, VEC, VED, VEE, VEF,
/// #   VF0, VF1, VF2, VF3, VF4, VF5, VF6, VF7, VF8, VF9, VFA, VFB, VFC, VFD, VFE,
/// #   VFF,
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(FromBytes, Immutable)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// [safety conditions]: trait@FromBytes#safety
///
/// # Analysis
///
/// *This section describes, roughly, the analysis performed by this derive to
/// determine whether it is sound to implement `FromBytes` for a given type.
/// Unless you are modifying the implementation of this derive, or attempting to
/// manually implement `FromBytes` for a type yourself, you don't need to read
/// this section.*
///
/// If a type has the following properties, then this derive can implement
/// `FromBytes` for that type:
///
/// - If the type is a struct, all of its fields must be `FromBytes`.
/// - If the type is an enum:
///   - It must have a defined representation (`repr`s `C`, `u8`, `u16`, `u32`,
///     `u64`, `usize`, `i8`, `i16`, `i32`, `i64`, or `isize`).
///   - The maximum number of discriminants must be used (so that every possible
///     bit pattern is a valid one). Be very careful when using the `C`,
///     `usize`, or `isize` representations, as their size is
///     platform-dependent.
///   - Its fields must be `FromBytes`.
///
/// This analysis is subject to change. Unsafe code may *only* rely on the
/// documented [safety conditions] of `FromBytes`, and must *not* rely on the
/// implementation details of this derive.
///
/// ## Why isn't an explicit representation required for structs?
///
/// Neither this derive, nor the [safety conditions] of `FromBytes`, requires
/// that structs are marked with `#[repr(C)]`.
///
/// Per the [Rust reference][reference],
///
/// > The representation of a type can change the padding between fields, but
/// > does not change the layout of the fields themselves.
///
/// [reference]: https://doc.rust-lang.org/reference/type-layout.html#representations
///
/// Since the layout of structs only consists of padding bytes and field bytes,
/// a struct is soundly `FromBytes` if:
/// 1. its padding is soundly `FromBytes`, and
/// 2. its fields are soundly `FromBytes`.
///
/// The first condition is always satisfied: padding bytes do not have any
/// validity constraints. A [discussion] of this question in the Unsafe Code
/// Guidelines Working Group concluded that it would be virtually unimaginable
/// for future versions of rustc to add validity constraints to padding bytes.
///
/// [discussion]: https://github.com/rust-lang/unsafe-code-guidelines/issues/174
///
/// Whether a struct is soundly `FromBytes` therefore solely depends on whether
/// its fields are `FromBytes`.
// TODO(#146): Document why we don't require an enum to have an explicit `repr`
// attribute.
#[cfg(any(feature = "derive", test))]
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
pub use zerocopy_derive::FromBytes;

/// Types for which any bit pattern is valid.
///
/// Any memory region of the appropriate length which contains initialized bytes
/// can be viewed as any `FromBytes` type with no runtime overhead. This is
/// useful for efficiently parsing bytes as structured data.
///
/// # Warning: Padding bytes
///
/// Note that, when a value is moved or copied, only the non-padding bytes of
/// that value are guaranteed to be preserved. It is unsound to assume that
/// values written to padding bytes are preserved after a move or copy. For
/// example, the following is unsound:
///
/// ```rust,no_run
/// use core::mem::{size_of, transmute};
/// use zerocopy::FromZeros;
/// # use zerocopy_derive::*;
///
/// // Assume `Foo` is a type with padding bytes.
/// #[derive(FromZeros, Default)]
/// struct Foo {
/// # /*
///     ...
/// # */
/// }
///
/// let mut foo: Foo = Foo::default();
/// FromZeros::zero(&mut foo);
/// // UNSOUND: Although `FromZeros::zero` writes zeros to all bytes of `foo`,
/// // those writes are not guaranteed to be preserved in padding bytes when
/// // `foo` is moved, so this may expose padding bytes as `u8`s.
/// let foo_bytes: [u8; size_of::<Foo>()] = unsafe { transmute(foo) };
/// ```
///
/// # Implementation
///
/// **Do not implement this trait yourself!** Instead, use
/// [`#[derive(FromBytes)]`][derive]; e.g.:
///
/// ```
/// # use zerocopy_derive::{FromBytes, Immutable};
/// #[derive(FromBytes)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(FromBytes)]
/// #[repr(u8)]
/// enum MyEnum {
/// #   V00, V01, V02, V03, V04, V05, V06, V07, V08, V09, V0A, V0B, V0C, V0D, V0E,
/// #   V0F, V10, V11, V12, V13, V14, V15, V16, V17, V18, V19, V1A, V1B, V1C, V1D,
/// #   V1E, V1F, V20, V21, V22, V23, V24, V25, V26, V27, V28, V29, V2A, V2B, V2C,
/// #   V2D, V2E, V2F, V30, V31, V32, V33, V34, V35, V36, V37, V38, V39, V3A, V3B,
/// #   V3C, V3D, V3E, V3F, V40, V41, V42, V43, V44, V45, V46, V47, V48, V49, V4A,
/// #   V4B, V4C, V4D, V4E, V4F, V50, V51, V52, V53, V54, V55, V56, V57, V58, V59,
/// #   V5A, V5B, V5C, V5D, V5E, V5F, V60, V61, V62, V63, V64, V65, V66, V67, V68,
/// #   V69, V6A, V6B, V6C, V6D, V6E, V6F, V70, V71, V72, V73, V74, V75, V76, V77,
/// #   V78, V79, V7A, V7B, V7C, V7D, V7E, V7F, V80, V81, V82, V83, V84, V85, V86,
/// #   V87, V88, V89, V8A, V8B, V8C, V8D, V8E, V8F, V90, V91, V92, V93, V94, V95,
/// #   V96, V97, V98, V99, V9A, V9B, V9C, V9D, V9E, V9F, VA0, VA1, VA2, VA3, VA4,
/// #   VA5, VA6, VA7, VA8, VA9, VAA, VAB, VAC, VAD, VAE, VAF, VB0, VB1, VB2, VB3,
/// #   VB4, VB5, VB6, VB7, VB8, VB9, VBA, VBB, VBC, VBD, VBE, VBF, VC0, VC1, VC2,
/// #   VC3, VC4, VC5, VC6, VC7, VC8, VC9, VCA, VCB, VCC, VCD, VCE, VCF, VD0, VD1,
/// #   VD2, VD3, VD4, VD5, VD6, VD7, VD8, VD9, VDA, VDB, VDC, VDD, VDE, VDF, VE0,
/// #   VE1, VE2, VE3, VE4, VE5, VE6, VE7, VE8, VE9, VEA, VEB, VEC, VED, VEE, VEF,
/// #   VF0, VF1, VF2, VF3, VF4, VF5, VF6, VF7, VF8, VF9, VFA, VFB, VFC, VFD, VFE,
/// #   VFF,
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(FromBytes, Immutable)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// This derive performs a sophisticated, compile-time safety analysis to
/// determine whether a type is `FromBytes`.
///
/// # Safety
///
/// *This section describes what is required in order for `T: FromBytes`, and
/// what unsafe code may assume of such types. If you don't plan on implementing
/// `FromBytes` manually, and you don't plan on writing unsafe code that
/// operates on `FromBytes` types, then you don't need to read this section.*
///
/// If `T: FromBytes`, then unsafe code may assume that it is sound to produce a
/// `T` whose bytes are initialized to any sequence of valid `u8`s (in other
/// words, any byte value which is not uninitialized). If a type is marked as
/// `FromBytes` which violates this contract, it may cause undefined behavior.
///
/// `#[derive(FromBytes)]` only permits [types which satisfy these
/// requirements][derive-analysis].
///
#[cfg_attr(
    feature = "derive",
    doc = "[derive]: zerocopy_derive::FromBytes",
    doc = "[derive-analysis]: zerocopy_derive::FromBytes#analysis"
)]
#[cfg_attr(
    not(feature = "derive"),
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.FromBytes.html"),
    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.FromBytes.html#analysis"),
)]
#[cfg_attr(
    zerocopy_diagnostic_on_unimplemented_1_78_0,
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(FromBytes)]` to `{Self}`")
)]
pub unsafe trait FromBytes: FromZeros {
    // The `Self: Sized` bound makes it so that `FromBytes` is still object
    // safe.
    #[doc(hidden)]
    fn only_derive_is_allowed_to_implement_this_trait()
    where
        Self: Sized;

3859    /// Interprets the given `source` as a `&Self`.
3860    ///
3861    /// This method attempts to return a reference to `source` interpreted as a
3862    /// `Self`. If the length of `source` is not a [valid size of
3863    /// `Self`][valid-size], or if `source` is not appropriately aligned, this
3864    /// returns `Err`. If [`Self: Unaligned`][self-unaligned], you can
3865    /// [infallibly discard the alignment error][size-error-from].
3866    ///
3867    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
3868    ///
3869    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
3870    /// [self-unaligned]: Unaligned
3871    /// [size-error-from]: error/struct.SizeError.html#method.from-1
3872    /// [slice-dst]: KnownLayout#dynamically-sized-types
3873    ///
3874    /// # Compile-Time Assertions
3875    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. See [`ref_from_bytes_with_elems`], which does
    /// support such types. Attempting to use this method on such types results
    /// in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: u16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let _ = ZSTy::ref_from_bytes(0u16.as_bytes()); // ⚠ Compile Error!
    /// ```
    ///
    /// [`ref_from_bytes_with_elems`]: FromBytes::ref_from_bytes_with_elems
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct PacketHeader {
    ///     src_port: [u8; 2],
    ///     dst_port: [u8; 2],
    ///     length: [u8; 2],
    ///     checksum: [u8; 2],
    /// }
    ///
    /// #[derive(FromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct Packet {
    ///     header: PacketHeader,
    ///     body: [u8],
    /// }
    ///
    /// // These bytes encode a `Packet`.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11][..];
    ///
    /// let packet = Packet::ref_from_bytes(bytes).unwrap();
    ///
    /// assert_eq!(packet.header.src_port, [0, 1]);
    /// assert_eq!(packet.header.dst_port, [2, 3]);
    /// assert_eq!(packet.header.length, [4, 5]);
    /// assert_eq!(packet.header.checksum, [6, 7]);
    /// assert_eq!(packet.body, [8, 9, 10, 11]);
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn ref_from_bytes(source: &[u8]) -> Result<&Self, CastError<&[u8], Self>>
    where
        Self: KnownLayout + Immutable,
    {
        static_assert_dst_is_not_zst!(Self);
        match Ptr::from_ref(source).try_cast_into_no_leftover::<_, BecauseImmutable>(None) {
            Ok(ptr) => Ok(ptr.recall_validity().as_ref()),
            Err(err) => Err(err.map_src(|src| src.as_ref())),
        }
    }

    /// Interprets the prefix of the given `source` as a `&Self` without
    /// copying.
    ///
    /// This method computes the [largest possible size of `Self`][valid-size]
    /// that can fit in the leading bytes of `source`, then attempts to return
    /// both a reference to those bytes interpreted as a `Self`, and a reference
    /// to the remaining bytes. If there are insufficient bytes, or if `source`
    /// is not appropriately aligned, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][size-error-from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. See [`ref_from_prefix_with_elems`], which does
    /// support such types. Attempting to use this method on such types results
    /// in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: u16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let _ = ZSTy::ref_from_prefix(0u16.as_bytes()); // ⚠ Compile Error!
    /// ```
    ///
    /// [`ref_from_prefix_with_elems`]: FromBytes::ref_from_prefix_with_elems
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct PacketHeader {
    ///     src_port: [u8; 2],
    ///     dst_port: [u8; 2],
    ///     length: [u8; 2],
    ///     checksum: [u8; 2],
    /// }
    ///
    /// #[derive(FromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct Packet {
    ///     header: PacketHeader,
    ///     body: [[u8; 2]],
    /// }
    ///
    /// // These are more bytes than are needed to encode a `Packet`.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14][..];
    ///
    /// let (packet, suffix) = Packet::ref_from_prefix(bytes).unwrap();
    ///
    /// assert_eq!(packet.header.src_port, [0, 1]);
    /// assert_eq!(packet.header.dst_port, [2, 3]);
    /// assert_eq!(packet.header.length, [4, 5]);
    /// assert_eq!(packet.header.checksum, [6, 7]);
    /// assert_eq!(packet.body, [[8, 9], [10, 11], [12, 13]]);
    /// assert_eq!(suffix, &[14u8][..]);
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn ref_from_prefix(source: &[u8]) -> Result<(&Self, &[u8]), CastError<&[u8], Self>>
    where
        Self: KnownLayout + Immutable,
    {
        static_assert_dst_is_not_zst!(Self);
        ref_from_prefix_suffix(source, None, CastType::Prefix)
    }

    /// Interprets the suffix of the given `source` as a `&Self` without
    /// copying.
    ///
    /// This method computes the [largest possible size of `Self`][valid-size]
    /// that can fit in the trailing bytes of `source`, then attempts to return
    /// both a reference to those bytes interpreted as a `Self`, and a reference
    /// to the preceding bytes. If there are insufficient bytes, or if that
    /// suffix of `source` is not appropriately aligned, this returns `Err`. If
    /// [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
    /// alignment error][size-error-from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. See [`ref_from_suffix_with_elems`], which does
    /// support such types. Attempting to use this method on such types results
    /// in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: u16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let _ = ZSTy::ref_from_suffix(0u16.as_bytes()); // ⚠ Compile Error!
    /// ```
    ///
    /// [`ref_from_suffix_with_elems`]: FromBytes::ref_from_suffix_with_elems
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct PacketTrailer {
    ///     frame_check_sequence: [u8; 4],
    /// }
    ///
    /// // These are more bytes than are needed to encode a `PacketTrailer`.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
    ///
    /// let (prefix, trailer) = PacketTrailer::ref_from_suffix(bytes).unwrap();
    ///
    /// assert_eq!(prefix, &[0, 1, 2, 3, 4, 5][..]);
    /// assert_eq!(trailer.frame_check_sequence, [6, 7, 8, 9]);
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn ref_from_suffix(source: &[u8]) -> Result<(&[u8], &Self), CastError<&[u8], Self>>
    where
        Self: Immutable + KnownLayout,
    {
        static_assert_dst_is_not_zst!(Self);
        ref_from_prefix_suffix(source, None, CastType::Suffix).map(swap)
    }

    /// Interprets the given `source` as a `&mut Self`.
    ///
    /// This method attempts to return a reference to `source` interpreted as a
    /// `Self`. If the length of `source` is not a [valid size of
    /// `Self`][valid-size], or if `source` is not appropriately aligned, this
    /// returns `Err`. If [`Self: Unaligned`][self-unaligned], you can
    /// [infallibly discard the alignment error][size-error-from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. See [`mut_from_bytes_with_elems`], which does
    /// support such types. Attempting to use this method on such types results
    /// in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let mut source = [85, 85];
    /// let _ = ZSTy::mut_from_bytes(&mut source[..]); // ⚠ Compile Error!
    /// ```
    ///
    /// [`mut_from_bytes_with_elems`]: FromBytes::mut_from_bytes_with_elems
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, IntoBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct PacketHeader {
    ///     src_port: [u8; 2],
    ///     dst_port: [u8; 2],
    ///     length: [u8; 2],
    ///     checksum: [u8; 2],
    /// }
    ///
    /// // These bytes encode a `PacketHeader`.
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7][..];
    ///
    /// let header = PacketHeader::mut_from_bytes(bytes).unwrap();
    ///
    /// assert_eq!(header.src_port, [0, 1]);
    /// assert_eq!(header.dst_port, [2, 3]);
    /// assert_eq!(header.length, [4, 5]);
    /// assert_eq!(header.checksum, [6, 7]);
    ///
    /// header.checksum = [0, 0];
    ///
    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 0, 0]);
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn mut_from_bytes(source: &mut [u8]) -> Result<&mut Self, CastError<&mut [u8], Self>>
    where
        Self: IntoBytes + KnownLayout,
    {
        static_assert_dst_is_not_zst!(Self);
        match Ptr::from_mut(source).try_cast_into_no_leftover::<_, BecauseExclusive>(None) {
            Ok(ptr) => Ok(ptr.recall_validity().as_mut()),
            Err(err) => Err(err.map_src(|src| src.as_mut())),
        }
    }

    /// Interprets the prefix of the given `source` as a `&mut Self` without
    /// copying.
    ///
    /// This method computes the [largest possible size of `Self`][valid-size]
    /// that can fit in the leading bytes of `source`, then attempts to return
    /// both a reference to those bytes interpreted as a `Self`, and a reference
    /// to the remaining bytes. If there are insufficient bytes, or if `source`
    /// is not appropriately aligned, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][size-error-from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. See [`mut_from_prefix_with_elems`], which does
    /// support such types. Attempting to use this method on such types results
    /// in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let mut source = [85, 85];
    /// let _ = ZSTy::mut_from_prefix(&mut source[..]); // ⚠ Compile Error!
    /// ```
    ///
    /// [`mut_from_prefix_with_elems`]: FromBytes::mut_from_prefix_with_elems
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, IntoBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct PacketHeader {
    ///     src_port: [u8; 2],
    ///     dst_port: [u8; 2],
    ///     length: [u8; 2],
    ///     checksum: [u8; 2],
    /// }
    ///
    /// // These are more bytes than are needed to encode a `PacketHeader`.
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
    ///
    /// let (header, body) = PacketHeader::mut_from_prefix(bytes).unwrap();
    ///
    /// assert_eq!(header.src_port, [0, 1]);
    /// assert_eq!(header.dst_port, [2, 3]);
    /// assert_eq!(header.length, [4, 5]);
    /// assert_eq!(header.checksum, [6, 7]);
    /// assert_eq!(body, &[8, 9][..]);
    ///
    /// header.checksum = [0, 0];
    /// body.fill(1);
    ///
    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 0, 0, 1, 1]);
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn mut_from_prefix(
        source: &mut [u8],
    ) -> Result<(&mut Self, &mut [u8]), CastError<&mut [u8], Self>>
    where
        Self: IntoBytes + KnownLayout,
    {
        static_assert_dst_is_not_zst!(Self);
        mut_from_prefix_suffix(source, None, CastType::Prefix)
    }

    /// Interprets the suffix of the given `source` as a `&mut Self` without
    /// copying.
    ///
    /// This method computes the [largest possible size of `Self`][valid-size]
    /// that can fit in the trailing bytes of `source`, then attempts to return
    /// both a reference to those bytes interpreted as a `Self`, and a reference
    /// to the preceding bytes. If there are insufficient bytes, or if that
    /// suffix of `source` is not appropriately aligned, this returns `Err`. If
    /// [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
    /// alignment error][size-error-from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. See [`mut_from_suffix_with_elems`], which does
    /// support such types. Attempting to use this method on such types results
    /// in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let mut source = [85, 85];
    /// let _ = ZSTy::mut_from_suffix(&mut source[..]); // ⚠ Compile Error!
    /// ```
    ///
    /// [`mut_from_suffix_with_elems`]: FromBytes::mut_from_suffix_with_elems
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, IntoBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct PacketTrailer {
    ///     frame_check_sequence: [u8; 4],
    /// }
    ///
    /// // These are more bytes than are needed to encode a `PacketTrailer`.
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
    ///
    /// let (prefix, trailer) = PacketTrailer::mut_from_suffix(bytes).unwrap();
    ///
    /// assert_eq!(prefix, &[0u8, 1, 2, 3, 4, 5][..]);
    /// assert_eq!(trailer.frame_check_sequence, [6, 7, 8, 9]);
    ///
    /// prefix.fill(0);
    /// trailer.frame_check_sequence.fill(1);
    ///
    /// assert_eq!(bytes, [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]);
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn mut_from_suffix(
        source: &mut [u8],
    ) -> Result<(&mut [u8], &mut Self), CastError<&mut [u8], Self>>
    where
        Self: IntoBytes + KnownLayout,
    {
        static_assert_dst_is_not_zst!(Self);
        mut_from_prefix_suffix(source, None, CastType::Suffix).map(swap)
    }

    /// Interprets the given `source` as a `&Self` with a DST length equal to
    /// `count`.
    ///
    /// This method attempts to return a reference to `source` interpreted as a
    /// `Self` with `count` trailing elements. If the length of `source` is not
    /// equal to the size of `Self` with `count` elements, or if `source` is not
    /// appropriately aligned, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][size-error-from].
    ///
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// # #[derive(Debug, PartialEq, Eq)]
    /// #[derive(FromBytes, Immutable)]
    /// #[repr(C)]
    /// struct Pixel {
    ///     r: u8,
    ///     g: u8,
    ///     b: u8,
    ///     a: u8,
    /// }
    ///
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7][..];
    ///
    /// let pixels = <[Pixel]>::ref_from_bytes_with_elems(bytes, 2).unwrap();
    ///
    /// assert_eq!(pixels, &[
    ///     Pixel { r: 0, g: 1, b: 2, a: 3 },
    ///     Pixel { r: 4, g: 5, b: 6, a: 7 },
    /// ]);
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`ref_from_bytes`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let src = &[85, 85][..];
    /// let zsty = ZSTy::ref_from_bytes_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`ref_from_bytes`]: FromBytes::ref_from_bytes
    #[must_use = "has no side effects"]
    #[inline]
    fn ref_from_bytes_with_elems(
        source: &[u8],
        count: usize,
    ) -> Result<&Self, CastError<&[u8], Self>>
    where
        Self: KnownLayout<PointerMetadata = usize> + Immutable,
    {
        let source = Ptr::from_ref(source);
        let maybe_slf = source.try_cast_into_no_leftover::<_, BecauseImmutable>(Some(count));
        match maybe_slf {
            Ok(slf) => Ok(slf.recall_validity().as_ref()),
            Err(err) => Err(err.map_src(|s| s.as_ref())),
        }
    }

    /// Interprets the prefix of the given `source` as a DST `&Self` with length
    /// equal to `count`.
    ///
    /// This method attempts to return a reference to the prefix of `source`
    /// interpreted as a `Self` with `count` trailing elements, and a reference
    /// to the remaining bytes. If there are insufficient bytes, or if `source`
    /// is not appropriately aligned, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][size-error-from].
    ///
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// # #[derive(Debug, PartialEq, Eq)]
    /// #[derive(FromBytes, Immutable)]
    /// #[repr(C)]
    /// struct Pixel {
    ///     r: u8,
    ///     g: u8,
    ///     b: u8,
    ///     a: u8,
    /// }
    ///
    /// // These are more bytes than are needed to encode two `Pixel`s.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
    ///
    /// let (pixels, suffix) = <[Pixel]>::ref_from_prefix_with_elems(bytes, 2).unwrap();
    ///
    /// assert_eq!(pixels, &[
    ///     Pixel { r: 0, g: 1, b: 2, a: 3 },
    ///     Pixel { r: 4, g: 5, b: 6, a: 7 },
    /// ]);
    ///
    /// assert_eq!(suffix, &[8, 9]);
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`ref_from_prefix`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let src = &[85, 85][..];
    /// let (zsty, _) = ZSTy::ref_from_prefix_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`ref_from_prefix`]: FromBytes::ref_from_prefix
    #[must_use = "has no side effects"]
    #[inline]
    fn ref_from_prefix_with_elems(
        source: &[u8],
        count: usize,
    ) -> Result<(&Self, &[u8]), CastError<&[u8], Self>>
    where
        Self: KnownLayout<PointerMetadata = usize> + Immutable,
    {
        ref_from_prefix_suffix(source, Some(count), CastType::Prefix)
    }

    /// Interprets the suffix of the given `source` as a DST `&Self` with length
    /// equal to `count`.
    ///
    /// This method attempts to return a reference to the suffix of `source`
    /// interpreted as a `Self` with `count` trailing elements, and a reference
    /// to the preceding bytes. If there are insufficient bytes, or if that
    /// suffix of `source` is not appropriately aligned, this returns `Err`. If
    /// [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
    /// alignment error][size-error-from].
    ///
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// # #[derive(Debug, PartialEq, Eq)]
    /// #[derive(FromBytes, Immutable)]
    /// #[repr(C)]
    /// struct Pixel {
    ///     r: u8,
    ///     g: u8,
    ///     b: u8,
    ///     a: u8,
    /// }
    ///
    /// // These are more bytes than are needed to encode two `Pixel`s.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
    ///
    /// let (prefix, pixels) = <[Pixel]>::ref_from_suffix_with_elems(bytes, 2).unwrap();
    ///
    /// assert_eq!(prefix, &[0, 1]);
    ///
    /// assert_eq!(pixels, &[
    ///     Pixel { r: 2, g: 3, b: 4, a: 5 },
    ///     Pixel { r: 6, g: 7, b: 8, a: 9 },
    /// ]);
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`ref_from_suffix`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let src = &[85, 85][..];
    /// let (_, zsty) = ZSTy::ref_from_suffix_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`ref_from_suffix`]: FromBytes::ref_from_suffix
    #[must_use = "has no side effects"]
    #[inline]
    fn ref_from_suffix_with_elems(
        source: &[u8],
        count: usize,
    ) -> Result<(&[u8], &Self), CastError<&[u8], Self>>
    where
        Self: KnownLayout<PointerMetadata = usize> + Immutable,
    {
        ref_from_prefix_suffix(source, Some(count), CastType::Suffix).map(swap)
    }

    /// Interprets the given `source` as a `&mut Self` with a DST length equal
    /// to `count`.
    ///
    /// This method attempts to return a reference to `source` interpreted as a
    /// `Self` with `count` trailing elements. If the length of `source` is not
    /// equal to the size of `Self` with `count` elements, or if `source` is not
    /// appropriately aligned, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][size-error-from].
    ///
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// # #[derive(Debug, PartialEq, Eq)]
    /// #[derive(KnownLayout, FromBytes, IntoBytes, Immutable)]
    /// #[repr(C)]
    /// struct Pixel {
    ///     r: u8,
    ///     g: u8,
    ///     b: u8,
    ///     a: u8,
    /// }
    ///
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7][..];
    ///
    /// let pixels = <[Pixel]>::mut_from_bytes_with_elems(bytes, 2).unwrap();
    ///
    /// assert_eq!(pixels, &[
    ///     Pixel { r: 0, g: 1, b: 2, a: 3 },
    ///     Pixel { r: 4, g: 5, b: 6, a: 7 },
    /// ]);
    ///
    /// pixels[1] = Pixel { r: 0, g: 0, b: 0, a: 0 };
    ///
    /// assert_eq!(bytes, [0, 1, 2, 3, 0, 0, 0, 0]);
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`mut_from_bytes`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, IntoBytes, Immutable, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let src = &mut [85, 85][..];
    /// let zsty = ZSTy::mut_from_bytes_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`mut_from_bytes`]: FromBytes::mut_from_bytes
    #[must_use = "has no side effects"]
    #[inline]
    fn mut_from_bytes_with_elems(
        source: &mut [u8],
        count: usize,
    ) -> Result<&mut Self, CastError<&mut [u8], Self>>
    where
        Self: IntoBytes + KnownLayout<PointerMetadata = usize> + Immutable,
    {
        let source = Ptr::from_mut(source);
        let maybe_slf = source.try_cast_into_no_leftover::<_, BecauseImmutable>(Some(count));
        match maybe_slf {
            Ok(slf) => Ok(slf
                .recall_validity::<_, (_, (_, (BecauseExclusive, BecauseExclusive)))>()
                .as_mut()),
            Err(err) => Err(err.map_src(|s| s.as_mut())),
        }
    }

    /// Interprets the prefix of the given `source` as a `&mut Self` with DST
    /// length equal to `count`.
    ///
    /// This method attempts to return a reference to the prefix of `source`
    /// interpreted as a `Self` with `count` trailing elements, and a reference
    /// to the remaining bytes. If there are insufficient bytes, or if `source`
    /// is not appropriately aligned, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][size-error-from].
    ///
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// # #[derive(Debug, PartialEq, Eq)]
    /// #[derive(KnownLayout, FromBytes, IntoBytes, Immutable)]
    /// #[repr(C)]
    /// struct Pixel {
    ///     r: u8,
    ///     g: u8,
    ///     b: u8,
    ///     a: u8,
    /// }
    ///
    /// // These are more bytes than are needed to encode two `Pixel`s.
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
    ///
    /// let (pixels, suffix) = <[Pixel]>::mut_from_prefix_with_elems(bytes, 2).unwrap();
    ///
    /// assert_eq!(pixels, &[
    ///     Pixel { r: 0, g: 1, b: 2, a: 3 },
    ///     Pixel { r: 4, g: 5, b: 6, a: 7 },
    /// ]);
    ///
    /// assert_eq!(suffix, &[8, 9]);
    ///
    /// pixels[1] = Pixel { r: 0, g: 0, b: 0, a: 0 };
    /// suffix.fill(1);
    ///
    /// assert_eq!(bytes, [0, 1, 2, 3, 0, 0, 0, 0, 1, 1]);
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`mut_from_prefix`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, IntoBytes, Immutable, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let src = &mut [85, 85][..];
    /// let (zsty, _) = ZSTy::mut_from_prefix_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`mut_from_prefix`]: FromBytes::mut_from_prefix
    #[must_use = "has no side effects"]
    #[inline]
    fn mut_from_prefix_with_elems(
4719        source: &mut [u8],
4720        count: usize,
4721    ) -> Result<(&mut Self, &mut [u8]), CastError<&mut [u8], Self>>
4722    where
4723        Self: IntoBytes + KnownLayout<PointerMetadata = usize>,
4724    {
4725        mut_from_prefix_suffix(source, Some(count), CastType::Prefix)
4726    }
4727
4728    /// Interprets the suffix of the given `source` as a `&mut Self` with DST
4729    /// length equal to `count`.
4730    ///
4731    /// This method attempts to return a reference to the suffix of `source`
4732    /// interpreted as a `Self` with `count` trailing elements, and a reference
4733    /// to the remaining bytes. If there are insufficient bytes, or if that
4734    /// suffix of `source` is not appropriately aligned, this returns `Err`. If
4735    /// [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
4736    /// alignment error][size-error-from].
4737    ///
4738    /// [self-unaligned]: Unaligned
4739    /// [size-error-from]: error/struct.SizeError.html#method.from-1
4740    ///
4741    /// # Examples
4742    ///
4743    /// ```
4744    /// use zerocopy::FromBytes;
4745    /// # use zerocopy_derive::*;
4746    ///
4747    /// # #[derive(Debug, PartialEq, Eq)]
4748    /// #[derive(FromBytes, IntoBytes, Immutable)]
4749    /// #[repr(C)]
4750    /// struct Pixel {
4751    ///     r: u8,
4752    ///     g: u8,
4753    ///     b: u8,
4754    ///     a: u8,
4755    /// }
4756    ///
4757    /// // These are more bytes than are needed to encode two `Pixel`s.
4758    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
4759    ///
4760    /// let (prefix, pixels) = <[Pixel]>::mut_from_suffix_with_elems(bytes, 2).unwrap();
4761    ///
4762    /// assert_eq!(prefix, &[0, 1]);
4763    ///
4764    /// assert_eq!(pixels, &[
4765    ///     Pixel { r: 2, g: 3, b: 4, a: 5 },
4766    ///     Pixel { r: 6, g: 7, b: 8, a: 9 },
4767    /// ]);
4768    ///
4769    /// prefix.fill(9);
4770    /// pixels[1] = Pixel { r: 0, g: 0, b: 0, a: 0 };
4771    ///
4772    /// assert_eq!(bytes, [9, 9, 2, 3, 4, 5, 0, 0, 0, 0]);
4773    /// ```
4774    ///
4775    /// Since an explicit `count` is provided, this method supports types with
4776    /// zero-sized trailing slice elements. Methods such as [`mut_from_suffix`]
4777    /// which do not take an explicit count do not support such types.
4778    ///
4779    /// ```
4780    /// use zerocopy::*;
4781    /// # use zerocopy_derive::*;
4782    ///
4783    /// #[derive(FromBytes, IntoBytes, Immutable, KnownLayout)]
4784    /// #[repr(C, packed)]
4785    /// struct ZSTy {
4786    ///     leading_sized: [u8; 2],
4787    ///     trailing_dst: [()],
4788    /// }
4789    ///
4790    /// let src = &mut [85, 85][..];
4791    /// let (_, zsty) = ZSTy::mut_from_suffix_with_elems(src, 42).unwrap();
4792    /// assert_eq!(zsty.trailing_dst.len(), 42);
4793    /// ```
4794    ///
4795    /// [`mut_from_suffix`]: FromBytes::mut_from_suffix
4796    #[must_use = "has no side effects"]
4797    #[inline]
4798    fn mut_from_suffix_with_elems(
4799        source: &mut [u8],
4800        count: usize,
4801    ) -> Result<(&mut [u8], &mut Self), CastError<&mut [u8], Self>>
4802    where
4803        Self: IntoBytes + KnownLayout<PointerMetadata = usize>,
4804    {
4805        mut_from_prefix_suffix(source, Some(count), CastType::Suffix).map(swap)
4806    }
4807
4808    /// Reads a copy of `Self` from the given `source`.
4809    ///
4810    /// If `source.len() != size_of::<Self>()`, `read_from_bytes` returns `Err`.
4811    ///
4812    /// # Examples
4813    ///
4814    /// ```
4815    /// use zerocopy::FromBytes;
4816    /// # use zerocopy_derive::*;
4817    ///
4818    /// #[derive(FromBytes)]
4819    /// #[repr(C)]
4820    /// struct PacketHeader {
4821    ///     src_port: [u8; 2],
4822    ///     dst_port: [u8; 2],
4823    ///     length: [u8; 2],
4824    ///     checksum: [u8; 2],
4825    /// }
4826    ///
4827    /// // These bytes encode a `PacketHeader`.
4828    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7][..];
4829    ///
4830    /// let header = PacketHeader::read_from_bytes(bytes).unwrap();
4831    ///
4832    /// assert_eq!(header.src_port, [0, 1]);
4833    /// assert_eq!(header.dst_port, [2, 3]);
4834    /// assert_eq!(header.length, [4, 5]);
4835    /// assert_eq!(header.checksum, [6, 7]);
4836    /// ```
4837    #[must_use = "has no side effects"]
4838    #[inline]
4839    fn read_from_bytes(source: &[u8]) -> Result<Self, SizeError<&[u8], Self>>
4840    where
4841        Self: Sized,
4842    {
4843        match Ref::<_, Unalign<Self>>::sized_from(source) {
4844            Ok(r) => Ok(Ref::read(&r).into_inner()),
4845            Err(CastError::Size(e)) => Err(e.with_dst()),
4846            Err(CastError::Alignment(_)) => {
4847                // SAFETY: `Unalign<Self>` is trivially aligned, so
4848                // `Ref::sized_from` cannot fail due to unmet alignment
4849                // requirements.
4850                unsafe { core::hint::unreachable_unchecked() }
4851            }
4852            Err(CastError::Validity(i)) => match i {},
4853        }
4854    }
4855
4856    /// Reads a copy of `Self` from the prefix of the given `source`.
4857    ///
4858    /// This attempts to read a `Self` from the first `size_of::<Self>()` bytes
4859    /// of `source`, returning that `Self` and any remaining bytes. If
4860    /// `source.len() < size_of::<Self>()`, it returns `Err`.
4861    ///
4862    /// # Examples
4863    ///
4864    /// ```
4865    /// use zerocopy::FromBytes;
4866    /// # use zerocopy_derive::*;
4867    ///
4868    /// #[derive(FromBytes)]
4869    /// #[repr(C)]
4870    /// struct PacketHeader {
4871    ///     src_port: [u8; 2],
4872    ///     dst_port: [u8; 2],
4873    ///     length: [u8; 2],
4874    ///     checksum: [u8; 2],
4875    /// }
4876    ///
4877    /// // These are more bytes than are needed to encode a `PacketHeader`.
4878    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
4879    ///
4880    /// let (header, body) = PacketHeader::read_from_prefix(bytes).unwrap();
4881    ///
4882    /// assert_eq!(header.src_port, [0, 1]);
4883    /// assert_eq!(header.dst_port, [2, 3]);
4884    /// assert_eq!(header.length, [4, 5]);
4885    /// assert_eq!(header.checksum, [6, 7]);
4886    /// assert_eq!(body, [8, 9]);
4887    /// ```
4888    #[must_use = "has no side effects"]
4889    #[inline]
4890    fn read_from_prefix(source: &[u8]) -> Result<(Self, &[u8]), SizeError<&[u8], Self>>
4891    where
4892        Self: Sized,
4893    {
4894        match Ref::<_, Unalign<Self>>::sized_from_prefix(source) {
4895            Ok((r, suffix)) => Ok((Ref::read(&r).into_inner(), suffix)),
4896            Err(CastError::Size(e)) => Err(e.with_dst()),
4897            Err(CastError::Alignment(_)) => {
4898                // SAFETY: `Unalign<Self>` is trivially aligned, so
4899                // `Ref::sized_from_prefix` cannot fail due to unmet alignment
4900                // requirements.
4901                unsafe { core::hint::unreachable_unchecked() }
4902            }
4903            Err(CastError::Validity(i)) => match i {},
4904        }
4905    }
4906
4907    /// Reads a copy of `Self` from the suffix of the given `source`.
4908    ///
4909    /// This attempts to read a `Self` from the last `size_of::<Self>()` bytes
4910    /// of `source`, returning that `Self` and any preceding bytes. If
4911    /// `source.len() < size_of::<Self>()`, it returns `Err`.
4912    ///
4913    /// # Examples
4914    ///
4915    /// ```
4916    /// use zerocopy::FromBytes;
4917    /// # use zerocopy_derive::*;
4918    ///
4919    /// #[derive(FromBytes)]
4920    /// #[repr(C)]
4921    /// struct PacketTrailer {
4922    ///     frame_check_sequence: [u8; 4],
4923    /// }
4924    ///
4925    /// // These are more bytes than are needed to encode a `PacketTrailer`.
4926    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
4927    ///
4928    /// let (prefix, trailer) = PacketTrailer::read_from_suffix(bytes).unwrap();
4929    ///
4930    /// assert_eq!(prefix, [0, 1, 2, 3, 4, 5]);
4931    /// assert_eq!(trailer.frame_check_sequence, [6, 7, 8, 9]);
4932    /// ```
4933    #[must_use = "has no side effects"]
4934    #[inline]
4935    fn read_from_suffix(source: &[u8]) -> Result<(&[u8], Self), SizeError<&[u8], Self>>
4936    where
4937        Self: Sized,
4938    {
4939        match Ref::<_, Unalign<Self>>::sized_from_suffix(source) {
4940            Ok((prefix, r)) => Ok((prefix, Ref::read(&r).into_inner())),
4941            Err(CastError::Size(e)) => Err(e.with_dst()),
4942            Err(CastError::Alignment(_)) => {
4943                // SAFETY: `Unalign<Self>` is trivially aligned, so
4944                // `Ref::sized_from_suffix` cannot fail due to unmet alignment
4945                // requirements.
4946                unsafe { core::hint::unreachable_unchecked() }
4947            }
4948            Err(CastError::Validity(i)) => match i {},
4949        }
4950    }
4951
4952    /// Reads a copy of `Self` from an `io::Read`.
4953    ///
4954    /// This is useful for interfacing with operating system byte sources (files,
4955    /// sockets, etc.).
4956    ///
4957    /// # Examples
4958    ///
4959    /// ```no_run
4960    /// use zerocopy::{byteorder::big_endian::*, FromBytes};
4961    /// use std::fs::File;
4962    /// # use zerocopy_derive::*;
4963    ///
4964    /// #[derive(FromBytes)]
4965    /// #[repr(C)]
4966    /// struct BitmapFileHeader {
4967    ///     signature: [u8; 2],
4968    ///     size: U32,
4969    ///     reserved: U64,
4970    ///     offset: U64,
4971    /// }
4972    ///
4973    /// let mut file = File::open("image.bin").unwrap();
4974    /// let header = BitmapFileHeader::read_from_io(&mut file).unwrap();
4975    /// ```
4976    #[cfg(feature = "std")]
4977    #[inline(always)]
4978    fn read_from_io<R>(mut src: R) -> io::Result<Self>
4979    where
4980        Self: Sized,
4981        R: io::Read,
4982    {
4983        // NOTE(#2319, #2320): We do `buf.zero()` separately rather than
4984        // constructing `let buf = CoreMaybeUninit::zeroed()` because, if `Self`
4985        // contains padding bytes, then a typed copy of `CoreMaybeUninit<Self>`
4986        // will not necessarily preserve zeros written to those padding byte
4987        // locations, and so `buf` could contain uninitialized bytes.
4988        let mut buf = CoreMaybeUninit::<Self>::uninit();
4989        buf.zero();
4990
4991        let ptr = Ptr::from_mut(&mut buf);
4992        // SAFETY: After `buf.zero()`, `buf` consists entirely of initialized,
4993        // zeroed bytes. Since `MaybeUninit` has no validity requirements, `ptr`
4994        // cannot be used to write values which will violate `buf`'s bit
4995        // validity. Since `ptr` has `Exclusive` aliasing, nothing other than
4996        // `ptr` may be used to mutate `ptr`'s referent, and so its bit validity
4997        // cannot be violated even though `buf` may have more permissive bit
4998        // validity than `ptr`.
4999        let ptr = unsafe { ptr.assume_validity::<invariant::Initialized>() };
5000        let ptr = ptr.as_bytes::<BecauseExclusive>();
5001        src.read_exact(ptr.as_mut())?;
5002        // SAFETY: `buf` entirely consists of initialized bytes, and `Self` is
5003        // `FromBytes`.
5004        Ok(unsafe { buf.assume_init() })
5005    }
5006
5007    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::ref_from_bytes`")]
5008    #[doc(hidden)]
5009    #[must_use = "has no side effects"]
5010    #[inline(always)]
5011    fn ref_from(source: &[u8]) -> Option<&Self>
5012    where
5013        Self: KnownLayout + Immutable,
5014    {
5015        Self::ref_from_bytes(source).ok()
5016    }
5017
5018    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::mut_from_bytes`")]
5019    #[doc(hidden)]
5020    #[must_use = "has no side effects"]
5021    #[inline(always)]
5022    fn mut_from(source: &mut [u8]) -> Option<&mut Self>
5023    where
5024        Self: KnownLayout + IntoBytes,
5025    {
5026        Self::mut_from_bytes(source).ok()
5027    }
5028
5029    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::ref_from_prefix_with_elems`")]
5030    #[doc(hidden)]
5031    #[must_use = "has no side effects"]
5032    #[inline(always)]
5033    fn slice_from_prefix(source: &[u8], count: usize) -> Option<(&[Self], &[u8])>
5034    where
5035        Self: Sized + Immutable,
5036    {
5037        <[Self]>::ref_from_prefix_with_elems(source, count).ok()
5038    }
5039
5040    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::ref_from_suffix_with_elems`")]
5041    #[doc(hidden)]
5042    #[must_use = "has no side effects"]
5043    #[inline(always)]
5044    fn slice_from_suffix(source: &[u8], count: usize) -> Option<(&[u8], &[Self])>
5045    where
5046        Self: Sized + Immutable,
5047    {
5048        <[Self]>::ref_from_suffix_with_elems(source, count).ok()
5049    }
5050
5051    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::mut_from_prefix_with_elems`")]
5052    #[doc(hidden)]
5053    #[must_use = "has no side effects"]
5054    #[inline(always)]
5055    fn mut_slice_from_prefix(source: &mut [u8], count: usize) -> Option<(&mut [Self], &mut [u8])>
5056    where
5057        Self: Sized + IntoBytes,
5058    {
5059        <[Self]>::mut_from_prefix_with_elems(source, count).ok()
5060    }
5061
5062    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::mut_from_suffix_with_elems`")]
5063    #[doc(hidden)]
5064    #[must_use = "has no side effects"]
5065    #[inline(always)]
5066    fn mut_slice_from_suffix(source: &mut [u8], count: usize) -> Option<(&mut [u8], &mut [Self])>
5067    where
5068        Self: Sized + IntoBytes,
5069    {
5070        <[Self]>::mut_from_suffix_with_elems(source, count).ok()
5071    }
5072
5073    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::read_from_bytes`")]
5074    #[doc(hidden)]
5075    #[must_use = "has no side effects"]
5076    #[inline(always)]
5077    fn read_from(source: &[u8]) -> Option<Self>
5078    where
5079        Self: Sized,
5080    {
5081        Self::read_from_bytes(source).ok()
5082    }
5083}
5084
5085/// Interprets the given affix of the given bytes as a `&T`.
5086///
5087/// This function computes the largest possible size of `T` that can fit in the
5088/// prefix or suffix bytes of `source`, then attempts to return both a reference
5089/// to those bytes interpreted as a `T`, and a reference to the excess bytes.
5090/// If there are insufficient bytes, or if that affix of `source` is not
5091/// appropriately aligned, this returns `Err`.
5092#[inline(always)]
5093fn ref_from_prefix_suffix<T: FromBytes + KnownLayout + Immutable + ?Sized>(
5094    source: &[u8],
5095    meta: Option<T::PointerMetadata>,
5096    cast_type: CastType,
5097) -> Result<(&T, &[u8]), CastError<&[u8], T>> {
5098    let (slf, prefix_suffix) = Ptr::from_ref(source)
5099        .try_cast_into::<_, BecauseImmutable>(cast_type, meta)
5100        .map_err(|err| err.map_src(|s| s.as_ref()))?;
5101    Ok((slf.recall_validity().as_ref(), prefix_suffix.as_ref()))
5102}
5103
5104/// Interprets the given affix of the given bytes as a `&mut T` without
5105/// copying.
5106///
5107/// This function computes the largest possible size of `T` that can fit in the
5108/// prefix or suffix bytes of `source`, then attempts to return both a reference
5109/// to those bytes interpreted as a `T`, and a reference to the excess bytes.
5110/// If there are insufficient bytes, or if that affix of `source` is not
5111/// appropriately aligned, this returns `Err`.
5112#[inline(always)]
5113fn mut_from_prefix_suffix<T: FromBytes + IntoBytes + KnownLayout + ?Sized>(
5114    source: &mut [u8],
5115    meta: Option<T::PointerMetadata>,
5116    cast_type: CastType,
5117) -> Result<(&mut T, &mut [u8]), CastError<&mut [u8], T>> {
5118    let (slf, prefix_suffix) = Ptr::from_mut(source)
5119        .try_cast_into::<_, BecauseExclusive>(cast_type, meta)
5120        .map_err(|err| err.map_src(|s| s.as_mut()))?;
5121    Ok((slf.recall_validity().as_mut(), prefix_suffix.as_mut()))
5122}
5123
5124/// Analyzes whether a type is [`IntoBytes`].
5125///
5126/// This derive analyzes, at compile time, whether the annotated type satisfies
5127/// the [safety conditions] of `IntoBytes` and implements `IntoBytes` if it is
5128/// sound to do so. This derive can be applied to structs and enums (see below
5129/// for union support); e.g.:
5130///
5131/// ```
5132/// # use zerocopy_derive::{IntoBytes};
5133/// #[derive(IntoBytes)]
5134/// #[repr(C)]
5135/// struct MyStruct {
5136/// # /*
5137///     ...
5138/// # */
5139/// }
5140///
5141/// #[derive(IntoBytes)]
5142/// #[repr(u8)]
5143/// enum MyEnum {
5144/// #   Variant,
5145/// # /*
5146///     ...
5147/// # */
5148/// }
5149/// ```
5150///
5151/// [safety conditions]: trait@IntoBytes#safety
5152///
5153/// # Error Messages
5154///
5155/// On Rust toolchains prior to 1.78.0, due to the way that the custom derive
5156/// for `IntoBytes` is implemented, you may get an error like this:
5157///
5158/// ```text
5159/// error[E0277]: the trait bound `(): PaddingFree<Foo, true>` is not satisfied
5160///   --> lib.rs:23:10
5161///    |
5162///  1 | #[derive(IntoBytes)]
5163///    |          ^^^^^^^^^ the trait `PaddingFree<Foo, true>` is not implemented for `()`
5164///    |
5165///    = help: the following implementations were found:
5166///                   <() as PaddingFree<T, false>>
5167/// ```
5168///
5169/// This error indicates that the type being annotated has padding bytes, which
5170/// is illegal for `IntoBytes` types. Consider reducing the alignment of some
5171/// fields by using types in the [`byteorder`] module, wrapping field types in
5172/// [`Unalign`], adding explicit struct fields where those padding bytes would
5173/// be, or using `#[repr(packed)]`. See the Rust Reference's page on [type
5174/// layout] for more information about type layout and padding.
5175///
5176/// [type layout]: https://doc.rust-lang.org/reference/type-layout.html
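///
/// For example, in this sketch (the `Header` type is hypothetical, not from the
/// zerocopy docs), the bytes that would otherwise be implicit padding are made
/// an explicit field, so every byte of the struct belongs to some field:
///
/// ```
/// use core::mem::size_of;
///
/// #[repr(C)]
/// struct Header {
///     flags: u8,
///     _padding: [u8; 3], // explicit bytes where implicit padding would go
///     len: u32,
/// }
///
/// // All 8 bytes are field bytes; the compiler inserts no implicit padding.
/// assert_eq!(size_of::<Header>(), 8);
/// ```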
5177///
5178/// # Unions
5179///
5180/// Currently, union bit validity is [up in the air][union-validity], and so
5181/// zerocopy does not support `#[derive(IntoBytes)]` on unions by default.
5182/// However, implementing `IntoBytes` on a union type is likely sound on all
5183/// existing Rust toolchains - it's just that it may become unsound in the
5184/// future. You can opt-in to `#[derive(IntoBytes)]` support on unions by
5185/// passing the unstable `zerocopy_derive_union_into_bytes` cfg:
5186///
5187/// ```shell
5188/// $ RUSTFLAGS='--cfg zerocopy_derive_union_into_bytes' cargo build
5189/// ```
5190///
5191/// However, it is your responsibility to ensure that this derive is sound on
5192/// the specific versions of the Rust toolchain you are using! We make no
5193/// stability or soundness guarantees regarding this cfg, and may remove it at
5194/// any point.
5195///
5196/// We are actively working with Rust to stabilize the necessary language
5197/// guarantees to support this in a forwards-compatible way, which will enable
5198/// us to remove the cfg gate. As part of this effort, we need to know how much
5199/// demand there is for this feature. If you would like to use `IntoBytes` on
5200/// unions, [please let us know][discussion].
5201///
5202/// [union-validity]: https://github.com/rust-lang/unsafe-code-guidelines/issues/438
5203/// [discussion]: https://github.com/google/zerocopy/discussions/1802
5204///
5205/// # Analysis
5206///
5207/// *This section describes, roughly, the analysis performed by this derive to
5208/// determine whether it is sound to implement `IntoBytes` for a given type.
5209/// Unless you are modifying the implementation of this derive, or attempting to
5210/// manually implement `IntoBytes` for a type yourself, you don't need to read
5211/// this section.*
5212///
5213/// If a type has the following properties, then this derive can implement
5214/// `IntoBytes` for that type:
5215///
5216/// - If the type is a struct, its fields must be [`IntoBytes`]. Additionally:
5217///     - if the type is `repr(transparent)` or `repr(packed)`, it is
5218///       [`IntoBytes`] if its fields are [`IntoBytes`]; else,
5219///     - if the type is `repr(C)` with at most one field, it is [`IntoBytes`]
5220///       if its field is [`IntoBytes`]; else,
5221///     - if the type has no generic parameters, it is [`IntoBytes`] if the type
5222///       is sized and has no padding bytes; else,
5223///     - if the type is `repr(C)`, its fields must be [`Unaligned`].
5224/// - If the type is an enum:
5225///   - It must have a defined representation (`repr`s `C`, `u8`, `u16`, `u32`,
5226///     `u64`, `usize`, `i8`, `i16`, `i32`, `i64`, or `isize`).
5227///   - It must have no padding bytes.
5228///   - Its fields must be [`IntoBytes`].
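///
/// For instance (a sketch using a hypothetical `Opcode` enum), a fieldless enum
/// with a defined `u8` representation has no padding and occupies exactly one
/// byte:
///
/// ```
/// use core::mem::size_of;
///
/// #[repr(u8)]
/// enum Opcode {
///     Read = 0,
///     Write = 1,
/// }
///
/// // The `u8` representation fixes the discriminant to a single byte.
/// assert_eq!(size_of::<Opcode>(), 1);
/// ```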
5229///
5230/// This analysis is subject to change. Unsafe code may *only* rely on the
5231/// documented [safety conditions] of `IntoBytes`, and must *not* rely on the
5232/// implementation details of this derive.
5235#[cfg(any(feature = "derive", test))]
5236#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
5237pub use zerocopy_derive::IntoBytes;
5238
5239/// Types that can be converted to an immutable slice of initialized bytes.
5240///
5241/// Any `IntoBytes` type can be converted to a slice of initialized bytes of the
5242/// same size. This is useful for efficiently serializing structured data as raw
5243/// bytes.
5244///
5245/// # Implementation
5246///
5247/// **Do not implement this trait yourself!** Instead, use
5248/// [`#[derive(IntoBytes)]`][derive]; e.g.:
5249///
5250/// ```
5251/// # use zerocopy_derive::IntoBytes;
5252/// #[derive(IntoBytes)]
5253/// #[repr(C)]
5254/// struct MyStruct {
5255/// # /*
5256///     ...
5257/// # */
5258/// }
5259///
5260/// #[derive(IntoBytes)]
5261/// #[repr(u8)]
5262/// enum MyEnum {
5263/// #   Variant0,
5264/// # /*
5265///     ...
5266/// # */
5267/// }
5268/// ```
5269///
5270/// This derive performs a sophisticated, compile-time safety analysis to
5271/// determine whether a type is `IntoBytes`. See the [derive
5272/// documentation][derive] for guidance on how to interpret error messages
5273/// produced by the derive's analysis.
5274///
5275/// # Safety
5276///
5277/// *This section describes what is required in order for `T: IntoBytes`, and
5278/// what unsafe code may assume of such types. If you don't plan on implementing
5279/// `IntoBytes` manually, and you don't plan on writing unsafe code that
5280/// operates on `IntoBytes` types, then you don't need to read this section.*
5281///
5282/// If `T: IntoBytes`, then unsafe code may assume that it is sound to treat any
5283/// `t: T` as an immutable `[u8]` of length `size_of_val(&t)`. If a type is
5284/// marked as `IntoBytes` which violates this contract, it may cause undefined
5285/// behavior.
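///
/// As a sketch of what this guarantee licenses (using `u32`, which is
/// `IntoBytes`), unsafe code may view a value's memory as initialized bytes:
///
/// ```
/// use core::slice;
///
/// let x: u32 = 0x01020304;
/// // SAFETY: `u32: IntoBytes`, so all four bytes of `x` are initialized, and
/// // `x` outlives the borrow.
/// let bytes = unsafe { slice::from_raw_parts(&x as *const u32 as *const u8, 4) };
/// assert_eq!(bytes, x.to_ne_bytes());
/// ```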
5286///
5287/// `#[derive(IntoBytes)]` only permits [types which satisfy these
5288/// requirements][derive-analysis].
5289///
5290#[cfg_attr(
5291    feature = "derive",
5292    doc = "[derive]: zerocopy_derive::IntoBytes",
5293    doc = "[derive-analysis]: zerocopy_derive::IntoBytes#analysis"
5294)]
5295#[cfg_attr(
5296    not(feature = "derive"),
5297    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.IntoBytes.html"),
5298    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.IntoBytes.html#analysis"),
5299)]
5300#[cfg_attr(
5301    zerocopy_diagnostic_on_unimplemented_1_78_0,
5302    diagnostic::on_unimplemented(note = "Consider adding `#[derive(IntoBytes)]` to `{Self}`")
5303)]
5304pub unsafe trait IntoBytes {
5305    // The `Self: Sized` bound makes it so that this function doesn't prevent
5306    // `IntoBytes` from being object safe. Note that other `IntoBytes` methods
5307    // prevent object safety, but those provide a benefit in exchange for object
5308    // safety. If at some point we remove those methods, change their type
5309    // signatures, or move them out of this trait so that `IntoBytes` is object
5310    // safe again, it's important that this function not prevent object safety.
5311    #[doc(hidden)]
5312    fn only_derive_is_allowed_to_implement_this_trait()
5313    where
5314        Self: Sized;
5315
5316    /// Gets the bytes of this value.
5317    ///
5318    /// # Examples
5319    ///
5320    /// ```
5321    /// use zerocopy::IntoBytes;
5322    /// # use zerocopy_derive::*;
5323    ///
5324    /// #[derive(IntoBytes, Immutable)]
5325    /// #[repr(C)]
5326    /// struct PacketHeader {
5327    ///     src_port: [u8; 2],
5328    ///     dst_port: [u8; 2],
5329    ///     length: [u8; 2],
5330    ///     checksum: [u8; 2],
5331    /// }
5332    ///
5333    /// let header = PacketHeader {
5334    ///     src_port: [0, 1],
5335    ///     dst_port: [2, 3],
5336    ///     length: [4, 5],
5337    ///     checksum: [6, 7],
5338    /// };
5339    ///
5340    /// let bytes = header.as_bytes();
5341    ///
5342    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 6, 7]);
5343    /// ```
5344    #[must_use = "has no side effects"]
5345    #[inline(always)]
5346    fn as_bytes(&self) -> &[u8]
5347    where
5348        Self: Immutable,
5349    {
5350        // Note that this method does not have a `Self: Sized` bound;
5351        // `size_of_val` works for unsized values too.
5352        let len = mem::size_of_val(self);
5353        let slf: *const Self = self;
5354
5355        // SAFETY:
5356        // - `slf.cast::<u8>()` is valid for reads for `len * size_of::<u8>()`
5357        //   many bytes because...
5358        //   - `slf` is the same pointer as `self`, and `self` is a reference
5359        //     which points to an object whose size is `len`. Thus...
5360        //     - The entire region of `len` bytes starting at `slf` is contained
5361        //       within a single allocation.
5362        //     - `slf` is non-null.
5363        //   - `slf` is trivially aligned to `align_of::<u8>() == 1`.
5364        // - `Self: IntoBytes` ensures that all of the bytes of `slf` are
5365        //   initialized.
5366        // - Since `slf` is derived from `self`, and `self` is an immutable
5367        //   reference, the only other references to this memory region that
5368        //   could exist are other immutable references, and those don't allow
5369        //   mutation. `Self: Immutable` prohibits types which contain
5370        //   `UnsafeCell`s, which are the only types for which this rule
5371        //   wouldn't be sufficient.
5372        // - The total size of the resulting slice is no larger than
5373        //   `isize::MAX` because no allocation produced by safe code can be
5374        //   larger than `isize::MAX`.
5375        //
5376        // TODO(#429): Add references to docs and quotes.
5377        unsafe { slice::from_raw_parts(slf.cast::<u8>(), len) }
5378    }
5379
5380    /// Gets the bytes of this value mutably.
5381    ///
5382    /// # Examples
5383    ///
5384    /// ```
5385    /// use zerocopy::IntoBytes;
5386    /// # use zerocopy_derive::*;
5387    ///
5388    /// # #[derive(Eq, PartialEq, Debug)]
5389    /// #[derive(FromBytes, IntoBytes, Immutable)]
5390    /// #[repr(C)]
5391    /// struct PacketHeader {
5392    ///     src_port: [u8; 2],
5393    ///     dst_port: [u8; 2],
5394    ///     length: [u8; 2],
5395    ///     checksum: [u8; 2],
5396    /// }
5397    ///
5398    /// let mut header = PacketHeader {
5399    ///     src_port: [0, 1],
5400    ///     dst_port: [2, 3],
5401    ///     length: [4, 5],
5402    ///     checksum: [6, 7],
5403    /// };
5404    ///
5405    /// let bytes = header.as_mut_bytes();
5406    ///
5407    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 6, 7]);
5408    ///
5409    /// bytes.reverse();
5410    ///
5411    /// assert_eq!(header, PacketHeader {
5412    ///     src_port: [7, 6],
5413    ///     dst_port: [5, 4],
5414    ///     length: [3, 2],
5415    ///     checksum: [1, 0],
5416    /// });
5417    /// ```
5418    #[must_use = "has no side effects"]
5419    #[inline(always)]
5420    fn as_mut_bytes(&mut self) -> &mut [u8]
5421    where
5422        Self: FromBytes,
5423    {
5424        // Note that this method does not have a `Self: Sized` bound;
5425        // `size_of_val` works for unsized values too.
5426        let len = mem::size_of_val(self);
5427        let slf: *mut Self = self;
5428
5429        // SAFETY:
5430        // - `slf.cast::<u8>()` is valid for reads and writes for `len *
5431        //   size_of::<u8>()` many bytes because...
5432        //   - `slf` is the same pointer as `self`, and `self` is a reference
5433        //     which points to an object whose size is `len`. Thus...
5434        //     - The entire region of `len` bytes starting at `slf` is contained
5435        //       within a single allocation.
5436        //     - `slf` is non-null.
5437        //   - `slf` is trivially aligned to `align_of::<u8>() == 1`.
5438        // - `Self: IntoBytes` ensures that all of the bytes of `slf` are
5439        //   initialized.
5440        // - `Self: FromBytes` ensures that no write to this memory region
5441        //   could result in it containing an invalid `Self`.
5442        // - Since `slf` is derived from `self`, and `self` is a mutable
5443        //   reference, no other references to this memory region can exist.
5444        // - The total size of the resulting slice is no larger than
5445        //   `isize::MAX` because no allocation produced by safe code can be
5446        //   larger than `isize::MAX`.
5447        //
5448        // TODO(#429): Add references to docs and quotes.
5449        unsafe { slice::from_raw_parts_mut(slf.cast::<u8>(), len) }
5450    }
5451
5452    /// Writes a copy of `self` to `dst`.
5453    ///
5454    /// If `dst.len() != size_of_val(self)`, `write_to` returns `Err`.
5455    ///
5456    /// # Examples
5457    ///
5458    /// ```
5459    /// use zerocopy::IntoBytes;
5460    /// # use zerocopy_derive::*;
5461    ///
5462    /// #[derive(IntoBytes, Immutable)]
5463    /// #[repr(C)]
5464    /// struct PacketHeader {
5465    ///     src_port: [u8; 2],
5466    ///     dst_port: [u8; 2],
5467    ///     length: [u8; 2],
5468    ///     checksum: [u8; 2],
5469    /// }
5470    ///
5471    /// let header = PacketHeader {
5472    ///     src_port: [0, 1],
5473    ///     dst_port: [2, 3],
5474    ///     length: [4, 5],
5475    ///     checksum: [6, 7],
5476    /// };
5477    ///
5478    /// let mut bytes = [0, 0, 0, 0, 0, 0, 0, 0];
5479    ///
5480    /// header.write_to(&mut bytes[..]);
5481    ///
5482    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 6, 7]);
5483    /// ```
5484    ///
5485    /// If too many or too few target bytes are provided, `write_to` returns
5486    /// `Err` and leaves the target bytes unmodified:
5487    ///
5488    /// ```
5489    /// # use zerocopy::IntoBytes;
5490    /// # let header = u128::MAX;
5491    /// let mut excessive_bytes = &mut [0u8; 128][..];
5492    ///
5493    /// let write_result = header.write_to(excessive_bytes);
5494    ///
5495    /// assert!(write_result.is_err());
5496    /// assert_eq!(excessive_bytes, [0u8; 128]);
5497    /// ```
5498    #[must_use = "callers should check the return value to see if the operation succeeded"]
5499    #[inline]
5500    fn write_to(&self, dst: &mut [u8]) -> Result<(), SizeError<&Self, &mut [u8]>>
5501    where
5502        Self: Immutable,
5503    {
5504        let src = self.as_bytes();
5505        if dst.len() == src.len() {
5506            // SAFETY: Within this branch of the conditional, we have ensured
5507            // that `dst.len()` is equal to `src.len()`. Neither the size of the
5508            // source nor the size of the destination change between the above
5509            // size check and the invocation of `copy_unchecked`.
5510            unsafe { util::copy_unchecked(src, dst) }
5511            Ok(())
5512        } else {
5513            Err(SizeError::new(self))
5514        }
5515    }
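The size discipline of `write_to` (succeed only on an exact length match, otherwise leave the destination untouched) can be expressed with safe stdlib calls alone; a minimal sketch, with the illustrative helper name `write_all_exact`:

```rust
// Succeed only when `dst` is exactly as long as `src`; otherwise leave
// `dst` unmodified, mirroring `IntoBytes::write_to`'s error behavior.
fn write_all_exact(src: &[u8], dst: &mut [u8]) -> Result<(), ()> {
    if dst.len() == src.len() {
        // `copy_from_slice` panics only on a length mismatch, checked above.
        dst.copy_from_slice(src);
        Ok(())
    } else {
        Err(())
    }
}

fn main() {
    let mut dst = [0u8; 4];
    assert!(write_all_exact(&[1, 2, 3, 4], &mut dst).is_ok());
    assert_eq!(dst, [1, 2, 3, 4]);
    assert!(write_all_exact(&[1, 2], &mut dst).is_err());
    assert_eq!(dst, [1, 2, 3, 4]); // unmodified on error
}
```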
5516
5517    /// Writes a copy of `self` to the prefix of `dst`.
5518    ///
5519    /// `write_to_prefix` writes `self` to the first `size_of_val(self)` bytes
5520    /// of `dst`. If `dst.len() < size_of_val(self)`, it returns `Err`.
5521    ///
5522    /// # Examples
5523    ///
5524    /// ```
5525    /// use zerocopy::IntoBytes;
5526    /// # use zerocopy_derive::*;
5527    ///
5528    /// #[derive(IntoBytes, Immutable)]
5529    /// #[repr(C)]
5530    /// struct PacketHeader {
5531    ///     src_port: [u8; 2],
5532    ///     dst_port: [u8; 2],
5533    ///     length: [u8; 2],
5534    ///     checksum: [u8; 2],
5535    /// }
5536    ///
5537    /// let header = PacketHeader {
5538    ///     src_port: [0, 1],
5539    ///     dst_port: [2, 3],
5540    ///     length: [4, 5],
5541    ///     checksum: [6, 7],
5542    /// };
5543    ///
5544    /// let mut bytes = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0];
5545    ///
5546    /// header.write_to_prefix(&mut bytes[..]);
5547    ///
5548    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 6, 7, 0, 0]);
5549    /// ```
5550    ///
5551    /// If insufficient target bytes are provided, `write_to_prefix` returns
5552    /// `Err` and leaves the target bytes unmodified:
5553    ///
5554    /// ```
5555    /// # use zerocopy::IntoBytes;
5556    /// # let header = u128::MAX;
5557    /// let mut insufficient_bytes = &mut [0, 0][..];
5558    ///
5559    /// let write_result = header.write_to_prefix(insufficient_bytes);
5560    ///
5561    /// assert!(write_result.is_err());
5562    /// assert_eq!(insufficient_bytes, [0, 0]);
5563    /// ```
5564    #[must_use = "callers should check the return value to see if the operation succeeded"]
5565    #[inline]
5566    fn write_to_prefix(&self, dst: &mut [u8]) -> Result<(), SizeError<&Self, &mut [u8]>>
5567    where
5568        Self: Immutable,
5569    {
5570        let src = self.as_bytes();
5571        match dst.get_mut(..src.len()) {
5572            Some(dst) => {
5573                // SAFETY: Within this branch of the `match`, we have ensured
5574                // through fallible subslicing that `dst.len()` is equal to
5575                // `src.len()`. Neither the size of the source nor the size of
5576                // the destination change between the above subslicing operation
5577                // and the invocation of `copy_unchecked`.
5578                unsafe { util::copy_unchecked(src, dst) }
5579                Ok(())
5580            }
5581            None => Err(SizeError::new(self)),
5582        }
5583    }
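The fallible subslicing that `write_to_prefix` relies on is plain `get_mut(..n)`: it returns `None` when the destination is too short, so no partial write can happen. A sketch with the hypothetical helper `write_prefix`:

```rust
// Copy `src` into the first `src.len()` bytes of `dst`, failing cleanly
// when `dst` is too short, as `IntoBytes::write_to_prefix` does.
fn write_prefix(src: &[u8], dst: &mut [u8]) -> Result<(), ()> {
    match dst.get_mut(..src.len()) {
        Some(prefix) => {
            prefix.copy_from_slice(src);
            Ok(())
        }
        None => Err(()),
    }
}

fn main() {
    let mut dst = [0u8; 4];
    assert!(write_prefix(&[9, 9], &mut dst).is_ok());
    assert_eq!(dst, [9, 9, 0, 0]);
    assert!(write_prefix(&[1; 8], &mut dst).is_err());
    assert_eq!(dst, [9, 9, 0, 0]); // unmodified on error
}
```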
5584
5585    /// Writes a copy of `self` to the suffix of `dst`.
5586    ///
5587    /// `write_to_suffix` writes `self` to the last `size_of_val(self)` bytes of
5588    /// `dst`. If `dst.len() < size_of_val(self)`, it returns `Err`.
5589    ///
5590    /// # Examples
5591    ///
5592    /// ```
5593    /// use zerocopy::IntoBytes;
5594    /// # use zerocopy_derive::*;
5595    ///
5596    /// #[derive(IntoBytes, Immutable)]
5597    /// #[repr(C)]
5598    /// struct PacketHeader {
5599    ///     src_port: [u8; 2],
5600    ///     dst_port: [u8; 2],
5601    ///     length: [u8; 2],
5602    ///     checksum: [u8; 2],
5603    /// }
5604    ///
5605    /// let header = PacketHeader {
5606    ///     src_port: [0, 1],
5607    ///     dst_port: [2, 3],
5608    ///     length: [4, 5],
5609    ///     checksum: [6, 7],
5610    /// };
5611    ///
5612    /// let mut bytes = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0];
5613    ///
5614    /// header.write_to_suffix(&mut bytes[..]);
5615    ///
5616    /// assert_eq!(bytes, [0, 0, 0, 1, 2, 3, 4, 5, 6, 7]);
5624    /// ```
5625    ///
5626    /// If insufficient target bytes are provided, `write_to_suffix` returns
5627    /// `Err` and leaves the target bytes unmodified:
5628    ///
5629    /// ```
5630    /// # use zerocopy::IntoBytes;
5631    /// # let header = u128::MAX;
5632    /// let mut insufficient_bytes = &mut [0, 0][..];
5633    ///
5634    /// let write_result = header.write_to_suffix(insufficient_bytes);
5635    ///
5636    /// assert!(write_result.is_err());
5637    /// assert_eq!(insufficient_bytes, [0, 0]);
5638    /// ```
5639    #[must_use = "callers should check the return value to see if the operation succeeded"]
5640    #[inline]
5641    fn write_to_suffix(&self, dst: &mut [u8]) -> Result<(), SizeError<&Self, &mut [u8]>>
5642    where
5643        Self: Immutable,
5644    {
5645        let src = self.as_bytes();
5646        let start = if let Some(start) = dst.len().checked_sub(src.len()) {
5647            start
5648        } else {
5649            return Err(SizeError::new(self));
5650        };
5651        let dst = if let Some(dst) = dst.get_mut(start..) {
5652            dst
5653        } else {
5654            // get_mut() should never return None here. We return a `SizeError`
5655            // rather than .unwrap() because in the event the branch is not
5656            // optimized away, returning a value is generally lighter-weight
5657            // than panicking.
5658            return Err(SizeError::new(self));
5659        };
5660        // SAFETY: Through fallible subslicing of `dst`, we have ensured that
5661        // `dst.len()` is equal to `src.len()`. Neither the size of the source
5662        // nor the size of the destination change between the above subslicing
5663        // operation and the invocation of `copy_unchecked`.
5664        unsafe {
5665            util::copy_unchecked(src, dst);
5666        }
5667        Ok(())
5668    }
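The offset arithmetic in `write_to_suffix` hinges on `checked_sub`, which rejects a too-short destination without panicking or wrapping. A sketch with the hypothetical helper `write_suffix`:

```rust
// Copy `src` into the last `src.len()` bytes of `dst`, rejecting a short
// `dst` via `checked_sub`, as `IntoBytes::write_to_suffix` does.
fn write_suffix(src: &[u8], dst: &mut [u8]) -> Result<(), ()> {
    let start = dst.len().checked_sub(src.len()).ok_or(())?;
    // `dst[start..]` has length exactly `src.len()` by construction.
    dst[start..].copy_from_slice(src);
    Ok(())
}

fn main() {
    let mut dst = [0u8; 4];
    assert!(write_suffix(&[7, 8], &mut dst).is_ok());
    assert_eq!(dst, [0, 0, 7, 8]);
    assert!(write_suffix(&[1; 8], &mut dst).is_err());
    assert_eq!(dst, [0, 0, 7, 8]); // unmodified on error
}
```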
5669
5670    /// Writes a copy of `self` to an `io::Write`.
5671    ///
5672    /// This is a shorthand for `dst.write_all(self.as_bytes())`, and is useful
5673    /// for interfacing with operating system byte sinks (files, sockets, etc.).
5674    ///
5675    /// # Examples
5676    ///
5677    /// ```no_run
5678    /// use zerocopy::{byteorder::big_endian::U16, FromBytes, IntoBytes};
5679    /// use std::fs::File;
5680    /// # use zerocopy_derive::*;
5681    ///
5682    /// #[derive(FromBytes, IntoBytes, Immutable, KnownLayout)]
5683    /// #[repr(C, packed)]
5684    /// struct GrayscaleImage {
5685    ///     height: U16,
5686    ///     width: U16,
5687    ///     pixels: [U16],
5688    /// }
5689    ///
5690    /// let image = GrayscaleImage::ref_from_bytes(&[0, 0, 0, 0][..]).unwrap();
5691    /// let mut file = File::create("image.bin").unwrap();
5692    /// image.write_to_io(&mut file).unwrap();
5693    /// ```
5694    ///
5695    /// If the write fails, `write_to_io` returns `Err` and a partial write may
5696    /// have occurred; e.g.:
5697    ///
5698    /// ```
5699    /// # use zerocopy::IntoBytes;
5700    ///
5701    /// let src = u128::MAX;
5702    /// let mut dst = [0u8; 2];
5703    ///
5704    /// let write_result = src.write_to_io(&mut dst[..]);
5705    ///
5706    /// assert!(write_result.is_err());
5707    /// assert_eq!(dst, [255, 255]);
5708    /// ```
5709    #[cfg(feature = "std")]
5710    #[inline(always)]
5711    fn write_to_io<W>(&self, mut dst: W) -> io::Result<()>
5712    where
5713        Self: Immutable,
5714        W: io::Write,
5715    {
5716        dst.write_all(self.as_bytes())
5717    }
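Because `Vec<u8>` implements `io::Write`, the `write_all` shorthand that `write_to_io` wraps can be exercised without touching the filesystem. The helper name `write_bytes_to_io` is illustrative:

```rust
use std::io::{self, Write};

// The same shape as `IntoBytes::write_to_io`, but taking the bytes
// explicitly rather than via `self.as_bytes()`.
fn write_bytes_to_io<W: Write>(bytes: &[u8], mut dst: W) -> io::Result<()> {
    dst.write_all(bytes)
}

fn main() {
    let mut sink = Vec::new(); // `Vec<u8>: io::Write`
    write_bytes_to_io(&42u32.to_le_bytes(), &mut sink).unwrap();
    assert_eq!(sink, 42u32.to_le_bytes());
}
```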
5718
5719    #[deprecated(since = "0.8.0", note = "`IntoBytes::as_bytes_mut` was renamed to `as_mut_bytes`")]
5720    #[doc(hidden)]
5721    #[inline]
5722    fn as_bytes_mut(&mut self) -> &mut [u8]
5723    where
5724        Self: FromBytes,
5725    {
5726        self.as_mut_bytes()
5727    }
5728}
5729
5730/// Analyzes whether a type is [`Unaligned`].
5731///
5732/// This derive analyzes, at compile time, whether the annotated type satisfies
5733/// the [safety conditions] of `Unaligned` and implements `Unaligned` if it is
5734/// sound to do so. This derive can be applied to structs, enums, and unions;
5735/// e.g.:
5736///
5737/// ```
5738/// # use zerocopy_derive::Unaligned;
5739/// #[derive(Unaligned)]
5740/// #[repr(C)]
5741/// struct MyStruct {
5742/// # /*
5743///     ...
5744/// # */
5745/// }
5746///
5747/// #[derive(Unaligned)]
5748/// #[repr(u8)]
5749/// enum MyEnum {
5750/// #   Variant0,
5751/// # /*
5752///     ...
5753/// # */
5754/// }
5755///
5756/// #[derive(Unaligned)]
5757/// #[repr(packed)]
5758/// union MyUnion {
5759/// #   variant: u8,
5760/// # /*
5761///     ...
5762/// # */
5763/// }
5764/// ```
5765///
5766/// # Analysis
5767///
5768/// *This section describes, roughly, the analysis performed by this derive to
5769/// determine whether it is sound to implement `Unaligned` for a given type.
5770/// Unless you are modifying the implementation of this derive, or attempting to
5771/// manually implement `Unaligned` for a type yourself, you don't need to read
5772/// this section.*
5773///
5774/// If a type has the following properties, then this derive can implement
5775/// `Unaligned` for that type:
5776///
5777/// - If the type is a struct or union:
5778///   - If `repr(align(N))` is provided, `N` must equal 1.
5779///   - If the type is `repr(C)` or `repr(transparent)`, all fields must be
5780///     [`Unaligned`].
5781///   - If the type is not `repr(C)` or `repr(transparent)`, it must be
5782///     `repr(packed)` or `repr(packed(1))`.
5783/// - If the type is an enum:
5784///   - If `repr(align(N))` is provided, `N` must equal 1.
5785///   - It must be a field-less enum (meaning that all variants have no fields).
5786///   - It must be `repr(i8)` or `repr(u8)`.
5787///
5788/// [safety conditions]: trait@Unaligned#safety
5789#[cfg(any(feature = "derive", test))]
5790#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
5791pub use zerocopy_derive::Unaligned;
5792
5793/// Types with no alignment requirement.
5794///
5795/// If `T: Unaligned`, then `align_of::<T>() == 1`.
5796///
5797/// # Implementation
5798///
5799/// **Do not implement this trait yourself!** Instead, use
5800/// [`#[derive(Unaligned)]`][derive]; e.g.:
5801///
5802/// ```
5803/// # use zerocopy_derive::Unaligned;
5804/// #[derive(Unaligned)]
5805/// #[repr(C)]
5806/// struct MyStruct {
5807/// # /*
5808///     ...
5809/// # */
5810/// }
5811///
5812/// #[derive(Unaligned)]
5813/// #[repr(u8)]
5814/// enum MyEnum {
5815/// #   Variant0,
5816/// # /*
5817///     ...
5818/// # */
5819/// }
5820///
5821/// #[derive(Unaligned)]
5822/// #[repr(packed)]
5823/// union MyUnion {
5824/// #   variant: u8,
5825/// # /*
5826///     ...
5827/// # */
5828/// }
5829/// ```
5830///
5831/// This derive performs a sophisticated, compile-time safety analysis to
5832/// determine whether a type is `Unaligned`.
5833///
5834/// # Safety
5835///
5836/// *This section describes what is required in order for `T: Unaligned`, and
5837/// what unsafe code may assume of such types. If you don't plan on implementing
5838/// `Unaligned` manually, and you don't plan on writing unsafe code that
5839/// operates on `Unaligned` types, then you don't need to read this section.*
5840///
5841/// If `T: Unaligned`, then unsafe code may assume that it is sound to produce a
5842/// reference to `T` at any memory location regardless of alignment. If a type
5843/// is marked as `Unaligned` which violates this contract, it may cause
5844/// undefined behavior.
5845///
5846/// `#[derive(Unaligned)]` only permits [types which satisfy these
5847/// requirements][derive-analysis].
5848///
5849#[cfg_attr(
5850    feature = "derive",
5851    doc = "[derive]: zerocopy_derive::Unaligned",
5852    doc = "[derive-analysis]: zerocopy_derive::Unaligned#analysis"
5853)]
5854#[cfg_attr(
5855    not(feature = "derive"),
5856    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.Unaligned.html"),
5857    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.Unaligned.html#analysis"),
5858)]
5859#[cfg_attr(
5860    zerocopy_diagnostic_on_unimplemented_1_78_0,
5861    diagnostic::on_unimplemented(note = "Consider adding `#[derive(Unaligned)]` to `{Self}`")
5862)]
5863pub unsafe trait Unaligned {
5864    // The `Self: Sized` bound makes it so that `Unaligned` is still object
5865    // safe.
5866    #[doc(hidden)]
5867    fn only_derive_is_allowed_to_implement_this_trait()
5868    where
5869        Self: Sized;
5870}
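The guarantee behind `Unaligned` is `align_of::<T>() == 1`; a short sketch of why that is what makes references at arbitrary byte offsets sound:

```rust
use std::mem::align_of;

fn main() {
    // Types like these could soundly be `Unaligned`: they have no
    // alignment requirement.
    assert_eq!(align_of::<u8>(), 1);
    assert_eq!(align_of::<[u8; 16]>(), 1);
    // `u32` could not: a `&u32` must satisfy a multi-byte alignment.
    assert!(align_of::<u32>() > 1);
    // For alignment-1 types, a reference into the middle of a byte buffer
    // is fine at any offset:
    let buf = [1u8, 2, 3, 4, 5];
    let b: &u8 = &buf[3];
    assert_eq!(*b, 4);
}
```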
5871
5872/// Derives an optimized implementation of [`Hash`] for types that implement
5873/// [`IntoBytes`] and [`Immutable`].
5874///
5875/// The standard library's derive for `Hash` generates a recursive descent
5876/// into the fields of the type it is applied to. Instead, the implementation
5877/// derived by this macro makes a single call to [`Hasher::write()`] for both
5878/// [`Hash::hash()`] and [`Hash::hash_slice()`], feeding the hasher the bytes
5879/// of the type or slice all at once.
5880///
5881/// [`Hash`]: core::hash::Hash
5882/// [`Hash::hash()`]: core::hash::Hash::hash()
5883/// [`Hash::hash_slice()`]: core::hash::Hash::hash_slice()
5884#[cfg(any(feature = "derive", test))]
5885#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
5886pub use zerocopy_derive::ByteHash;
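The optimization `ByteHash` performs can be sketched by hand with the stdlib hasher: instead of a recursive, field-by-field descent, feed the value's bytes to the hasher in a single `Hasher::write` call. Here `to_ne_bytes` stands in for `IntoBytes`, and the helper names are illustrative:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Field-by-field hashing, as the standard derive would generate: each
// field makes its own call into the hasher.
fn hash_fieldwise(a: u16, b: u16) -> u64 {
    let mut h = DefaultHasher::new();
    a.hash(&mut h);
    b.hash(&mut h);
    h.finish()
}

// Byte-wise hashing, as a `ByteHash`-style impl would do: one `write`
// over the whole byte representation.
fn hash_bytewise(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    h.write(bytes);
    h.finish()
}

fn main() {
    // `DefaultHasher::new()` uses fixed keys, so hashing is deterministic
    // within a process.
    assert_eq!(hash_bytewise(&[1, 2, 3, 4]), hash_bytewise(&[1, 2, 3, 4]));
    assert_eq!(hash_fieldwise(1, 2), hash_fieldwise(1, 2));
}
```

Note that the two strategies generally produce *different* digests for the same logical value; the derive's claim is about speed, not digest compatibility with the standard derive.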
5887
5888/// Derives an optimized implementation of [`PartialEq`] and [`Eq`] for types
5889/// that implement [`IntoBytes`] and [`Immutable`].
5890///
5891/// The standard library's derive for [`PartialEq`] generates a recursive
5892/// descent into the fields of the type it is applied to. Instead, the
5893/// implementation derived by this macro performs a single slice comparison of
5894/// the bytes of the two values being compared.
5895#[cfg(any(feature = "derive", test))]
5896#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
5897pub use zerocopy_derive::ByteEq;
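The optimization `ByteEq` performs, sketched by hand: compare the two values' byte representations with a single slice comparison instead of field-by-field `PartialEq`. `to_ne_bytes` stands in for `IntoBytes`, and the helper name is illustrative:

```rust
// A `ByteEq`-style equality check: one comparison over all bytes.
fn byte_eq(a: u32, b: u32) -> bool {
    a.to_ne_bytes() == b.to_ne_bytes()
}

fn main() {
    assert!(byte_eq(7, 7));
    assert!(!byte_eq(7, 8));
}
```

This is only sound when equal bytes imply equal values and vice versa, which is why the derive requires `IntoBytes` and `Immutable`.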
5898
5899/// Derives [`SplitAt`].
5900///
5901/// This derive can be applied to structs; e.g.:
5902///
5903/// ```
5904/// # use zerocopy_derive::{KnownLayout, SplitAt};
5905/// #[derive(SplitAt, KnownLayout)]
5906/// #[repr(C)]
5907/// struct MyStruct {
5908/// # /*
5909///     ...
5910/// # */
5911/// #   trailing: [u8],
5912/// }
5912/// ```
5913#[cfg(any(feature = "derive", test))]
5914#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
5915pub use zerocopy_derive::SplitAt;
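For intuition, the operation `SplitAt` generalizes to slice-based DSTs is ordinary slice splitting, which the stdlib provides for plain slices:

```rust
fn main() {
    let bytes = [1u8, 2, 3, 4, 5];
    // Split into a prefix of length 2 and the remaining suffix.
    let (prefix, suffix) = bytes.split_at(2);
    assert_eq!(prefix, [1, 2]);
    assert_eq!(suffix, [3, 4, 5]);
}
```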
5916
5917#[cfg(feature = "alloc")]
5918#[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
5919#[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
5920mod alloc_support {
5921    use super::*;
5922
5923    /// Extends a `Vec<T>` by pushing `additional` new items onto the end of the
5924    /// vector. The new items are initialized with zeros.
5925    #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
5926    #[doc(hidden)]
5927    #[deprecated(since = "0.8.0", note = "moved to `FromZeros`")]
5928    #[inline(always)]
5929    pub fn extend_vec_zeroed<T: FromZeros>(
5930        v: &mut Vec<T>,
5931        additional: usize,
5932    ) -> Result<(), AllocError> {
5933        <T as FromZeros>::extend_vec_zeroed(v, additional)
5934    }
5935
5936    /// Inserts `additional` new items into `Vec<T>` at `position`. The new
5937    /// items are initialized with zeros.
5938    ///
5939    /// # Panics
5940    ///
5941    /// Panics if `position > v.len()`.
5942    #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
5943    #[doc(hidden)]
5944    #[deprecated(since = "0.8.0", note = "moved to `FromZeros`")]
5945    #[inline(always)]
5946    pub fn insert_vec_zeroed<T: FromZeros>(
5947        v: &mut Vec<T>,
5948        position: usize,
5949        additional: usize,
5950    ) -> Result<(), AllocError> {
5951        <T as FromZeros>::insert_vec_zeroed(v, position, additional)
5952    }
5953}
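The zero-extension pattern behind `extend_vec_zeroed` can be sketched for a plain `Vec<u8>`: reserve fallibly (`Vec::try_reserve`, stable since Rust 1.57, matching the `cfg` gate above), then grow with zeros. The helper name is illustrative:

```rust
// Fallibly extend `v` with `additional` zero bytes, reporting allocation
// failure as an error rather than aborting.
fn extend_vec_zeroed_u8(v: &mut Vec<u8>, additional: usize) -> Result<(), ()> {
    v.try_reserve(additional).map_err(|_| ())?;
    // The reservation succeeded, so this resize cannot reallocate-fail.
    v.resize(v.len() + additional, 0);
    Ok(())
}

fn main() {
    let mut v = vec![1u8, 2];
    extend_vec_zeroed_u8(&mut v, 3).unwrap();
    assert_eq!(v, [1, 2, 0, 0, 0]);
}
```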
5954
5955#[cfg(feature = "alloc")]
5956#[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
5957#[doc(hidden)]
5958pub use alloc_support::*;
5959
5960#[cfg(test)]
5961#[allow(clippy::assertions_on_result_states, clippy::unreadable_literal)]
5962mod tests {
5963    use static_assertions::assert_impl_all;
5964
5965    use super::*;
5966    use crate::util::testutil::*;
5967
5968    // An unsized type.
5969    //
5970    // This is used to test the custom derives of our traits. The `[u8]` type
5971    // gets a hand-rolled impl, so it doesn't exercise our custom derives.
5972    #[derive(Debug, Eq, PartialEq, FromBytes, IntoBytes, Unaligned, Immutable)]
5973    #[repr(transparent)]
5974    struct Unsized([u8]);
5975
5976    impl Unsized {
5977        fn from_mut_slice(slc: &mut [u8]) -> &mut Unsized {
5978    // SAFETY: This is *probably* sound - since the layouts of `[u8]` and
5979            // `Unsized` are the same, so are the layouts of `&mut [u8]` and
5980            // `&mut Unsized`. [1] Even if it turns out that this isn't actually
5981            // guaranteed by the language spec, we can just change this since
5982            // it's in test code.
5983            //
5984            // [1] https://github.com/rust-lang/unsafe-code-guidelines/issues/375
5985            unsafe { mem::transmute(slc) }
5986        }
5987    }
5988
5989    #[test]
5990    fn test_known_layout() {
5991        // Test that `$ty` and `ManuallyDrop<$ty>` have the expected layout.
5992        // Test that `PhantomData<$ty>` has the same layout as `()` regardless
5993        // of `$ty`.
5994        macro_rules! test {
5995            ($ty:ty, $expect:expr) => {
5996                let expect = $expect;
5997                assert_eq!(<$ty as KnownLayout>::LAYOUT, expect);
5998                assert_eq!(<ManuallyDrop<$ty> as KnownLayout>::LAYOUT, expect);
5999                assert_eq!(<PhantomData<$ty> as KnownLayout>::LAYOUT, <() as KnownLayout>::LAYOUT);
6000            };
6001        }
6002
6003        let layout = |offset, align, _trailing_slice_elem_size| DstLayout {
6004            align: NonZeroUsize::new(align).unwrap(),
6005            size_info: match _trailing_slice_elem_size {
6006                None => SizeInfo::Sized { size: offset },
6007                Some(elem_size) => SizeInfo::SliceDst(TrailingSliceLayout { offset, elem_size }),
6008            },
6009        };
6010
6011        test!((), layout(0, 1, None));
6012        test!(u8, layout(1, 1, None));
6013        // Use `align_of` because `u64` alignment may be smaller than 8 on some
6014        // platforms.
6015        test!(u64, layout(8, mem::align_of::<u64>(), None));
6016        test!(AU64, layout(8, 8, None));
6017
6018        test!(Option<&'static ()>, usize::LAYOUT);
6019
6020        test!([()], layout(0, 1, Some(0)));
6021        test!([u8], layout(0, 1, Some(1)));
6022        test!(str, layout(0, 1, Some(1)));
6023    }
6024
6025    #[cfg(feature = "derive")]
6026    #[test]
6027    fn test_known_layout_derive() {
6028        // In this and other files (`late_compile_pass.rs`,
6029        // `mid_compile_pass.rs`, and `struct.rs`), we test success and failure
6030        // modes of `derive(KnownLayout)` for the following combination of
6031        // properties:
6032        //
6033        // +------------+--------------------------------------+-----------+
6034        // |            |      trailing field properties       |           |
6035        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
6036        // |------------+----------+----------------+----------+-----------|
6037        // |          N |        N |              N |        N |      KL00 |
6038        // |          N |        N |              N |        Y |      KL01 |
6039        // |          N |        N |              Y |        N |      KL02 |
6040        // |          N |        N |              Y |        Y |      KL03 |
6041        // |          N |        Y |              N |        N |      KL04 |
6042        // |          N |        Y |              N |        Y |      KL05 |
6043        // |          N |        Y |              Y |        N |      KL06 |
6044        // |          N |        Y |              Y |        Y |      KL07 |
6045        // |          Y |        N |              N |        N |      KL08 |
6046        // |          Y |        N |              N |        Y |      KL09 |
6047        // |          Y |        N |              Y |        N |      KL10 |
6048        // |          Y |        N |              Y |        Y |      KL11 |
6049        // |          Y |        Y |              N |        N |      KL12 |
6050        // |          Y |        Y |              N |        Y |      KL13 |
6051        // |          Y |        Y |              Y |        N |      KL14 |
6052        // |          Y |        Y |              Y |        Y |      KL15 |
6053        // +------------+----------+----------------+----------+-----------+
6054
6055        struct NotKnownLayout<T = ()> {
6056            _t: T,
6057        }
6058
6059        #[derive(KnownLayout)]
6060        #[repr(C)]
6061        struct AlignSize<const ALIGN: usize, const SIZE: usize>
6062        where
6063            elain::Align<ALIGN>: elain::Alignment,
6064        {
6065            _align: elain::Align<ALIGN>,
6066            size: [u8; SIZE],
6067        }
6068
6069        type AU16 = AlignSize<2, 2>;
6070        type AU32 = AlignSize<4, 4>;
6071
6072        fn _assert_kl<T: ?Sized + KnownLayout>(_: &T) {}
6073
6074        let sized_layout = |align, size| DstLayout {
6075            align: NonZeroUsize::new(align).unwrap(),
6076            size_info: SizeInfo::Sized { size },
6077        };
6078
6079        let unsized_layout = |align, elem_size, offset| DstLayout {
6080            align: NonZeroUsize::new(align).unwrap(),
6081            size_info: SizeInfo::SliceDst(TrailingSliceLayout { offset, elem_size }),
6082        };
6083
6084        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
6085        // |          N |        N |              N |        Y |      KL01 |
6086        #[allow(dead_code)]
6087        #[derive(KnownLayout)]
6088        struct KL01(NotKnownLayout<AU32>, NotKnownLayout<AU16>);
6089
6090        let expected = DstLayout::for_type::<KL01>();
6091
6092        assert_eq!(<KL01 as KnownLayout>::LAYOUT, expected);
6093        assert_eq!(<KL01 as KnownLayout>::LAYOUT, sized_layout(4, 8));
6094
6095        // ...with `align(N)`:
6096        #[allow(dead_code)]
6097        #[derive(KnownLayout)]
6098        #[repr(align(64))]
6099        struct KL01Align(NotKnownLayout<AU32>, NotKnownLayout<AU16>);
6100
6101        let expected = DstLayout::for_type::<KL01Align>();
6102
6103        assert_eq!(<KL01Align as KnownLayout>::LAYOUT, expected);
6104        assert_eq!(<KL01Align as KnownLayout>::LAYOUT, sized_layout(64, 64));
6105
6106        // ...with `packed`:
6107        #[allow(dead_code)]
6108        #[derive(KnownLayout)]
6109        #[repr(packed)]
6110        struct KL01Packed(NotKnownLayout<AU32>, NotKnownLayout<AU16>);
6111
6112        let expected = DstLayout::for_type::<KL01Packed>();
6113
6114        assert_eq!(<KL01Packed as KnownLayout>::LAYOUT, expected);
6115        assert_eq!(<KL01Packed as KnownLayout>::LAYOUT, sized_layout(1, 6));
6116
6117        // ...with `packed(N)`:
6118        #[allow(dead_code)]
6119        #[derive(KnownLayout)]
6120        #[repr(packed(2))]
6121        struct KL01PackedN(NotKnownLayout<AU32>, NotKnownLayout<AU16>);
6122
6123        assert_impl_all!(KL01PackedN: KnownLayout);
6124
6125        let expected = DstLayout::for_type::<KL01PackedN>();
6126
6127        assert_eq!(<KL01PackedN as KnownLayout>::LAYOUT, expected);
6128        assert_eq!(<KL01PackedN as KnownLayout>::LAYOUT, sized_layout(2, 6));
6129
6130        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
6131        // |          N |        N |              Y |        Y |      KL03 |
6132        #[allow(dead_code)]
6133        #[derive(KnownLayout)]
6134        struct KL03(NotKnownLayout, u8);
6135
6136        let expected = DstLayout::for_type::<KL03>();
6137
6138        assert_eq!(<KL03 as KnownLayout>::LAYOUT, expected);
6139        assert_eq!(<KL03 as KnownLayout>::LAYOUT, sized_layout(1, 1));
6140
6141        // ... with `align(N)`
6142        #[allow(dead_code)]
6143        #[derive(KnownLayout)]
6144        #[repr(align(64))]
6145        struct KL03Align(NotKnownLayout<AU32>, u8);
6146
6147        let expected = DstLayout::for_type::<KL03Align>();
6148
6149        assert_eq!(<KL03Align as KnownLayout>::LAYOUT, expected);
6150        assert_eq!(<KL03Align as KnownLayout>::LAYOUT, sized_layout(64, 64));
6151
6152        // ... with `packed`:
6153        #[allow(dead_code)]
6154        #[derive(KnownLayout)]
6155        #[repr(packed)]
6156        struct KL03Packed(NotKnownLayout<AU32>, u8);
6157
6158        let expected = DstLayout::for_type::<KL03Packed>();
6159
6160        assert_eq!(<KL03Packed as KnownLayout>::LAYOUT, expected);
6161        assert_eq!(<KL03Packed as KnownLayout>::LAYOUT, sized_layout(1, 5));
6162
6163        // ... with `packed(N)`
6164        #[allow(dead_code)]
6165        #[derive(KnownLayout)]
6166        #[repr(packed(2))]
6167        struct KL03PackedN(NotKnownLayout<AU32>, u8);
6168
6169        assert_impl_all!(KL03PackedN: KnownLayout);
6170
6171        let expected = DstLayout::for_type::<KL03PackedN>();
6172
6173        assert_eq!(<KL03PackedN as KnownLayout>::LAYOUT, expected);
6174        assert_eq!(<KL03PackedN as KnownLayout>::LAYOUT, sized_layout(2, 6));
6175
6176        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
6177        // |          N |        Y |              N |        Y |      KL05 |
6178        #[allow(dead_code)]
6179        #[derive(KnownLayout)]
6180        struct KL05<T>(u8, T);
6181
6182        fn _test_kl05<T>(t: T) -> impl KnownLayout {
6183            KL05(0u8, t)
6184        }
6185
6186        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
6187        // |          N |        Y |              Y |        Y |      KL07 |
6188        #[allow(dead_code)]
6189        #[derive(KnownLayout)]
6190        struct KL07<T: KnownLayout>(u8, T);
6191
6192        fn _test_kl07<T: KnownLayout>(t: T) -> impl KnownLayout {
6193            let _ = KL07(0u8, t);
6194        }
6195
6196        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
6197        // |          Y |        N |              Y |        N |      KL10 |
6198        #[allow(dead_code)]
6199        #[derive(KnownLayout)]
6200        #[repr(C)]
6201        struct KL10(NotKnownLayout<AU32>, [u8]);
6202
6203        let expected = DstLayout::new_zst(None)
6204            .extend(DstLayout::for_type::<NotKnownLayout<AU32>>(), None)
6205            .extend(<[u8] as KnownLayout>::LAYOUT, None)
6206            .pad_to_align();
6207
6208        assert_eq!(<KL10 as KnownLayout>::LAYOUT, expected);
6209        assert_eq!(<KL10 as KnownLayout>::LAYOUT, unsized_layout(4, 1, 4));
6210
6211        // ...with `align(N)`:
6212        #[allow(dead_code)]
6213        #[derive(KnownLayout)]
6214        #[repr(C, align(64))]
6215        struct KL10Align(NotKnownLayout<AU32>, [u8]);
6216
6217        let repr_align = NonZeroUsize::new(64);
6218
6219        let expected = DstLayout::new_zst(repr_align)
6220            .extend(DstLayout::for_type::<NotKnownLayout<AU32>>(), None)
6221            .extend(<[u8] as KnownLayout>::LAYOUT, None)
6222            .pad_to_align();
6223
6224        assert_eq!(<KL10Align as KnownLayout>::LAYOUT, expected);
6225        assert_eq!(<KL10Align as KnownLayout>::LAYOUT, unsized_layout(64, 1, 4));
6226
6227        // ...with `packed`:
6228        #[allow(dead_code)]
6229        #[derive(KnownLayout)]
6230        #[repr(C, packed)]
6231        struct KL10Packed(NotKnownLayout<AU32>, [u8]);
6232
6233        let repr_packed = NonZeroUsize::new(1);
6234
6235        let expected = DstLayout::new_zst(None)
6236            .extend(DstLayout::for_type::<NotKnownLayout<AU32>>(), repr_packed)
6237            .extend(<[u8] as KnownLayout>::LAYOUT, repr_packed)
6238            .pad_to_align();
6239
6240        assert_eq!(<KL10Packed as KnownLayout>::LAYOUT, expected);
6241        assert_eq!(<KL10Packed as KnownLayout>::LAYOUT, unsized_layout(1, 1, 4));
6242
6243        // ...with `packed(N)`:
6244        #[allow(dead_code)]
6245        #[derive(KnownLayout)]
6246        #[repr(C, packed(2))]
6247        struct KL10PackedN(NotKnownLayout<AU32>, [u8]);
6248
6249        let repr_packed = NonZeroUsize::new(2);
6250
6251        let expected = DstLayout::new_zst(None)
6252            .extend(DstLayout::for_type::<NotKnownLayout<AU32>>(), repr_packed)
6253            .extend(<[u8] as KnownLayout>::LAYOUT, repr_packed)
6254            .pad_to_align();
6255
6256        assert_eq!(<KL10PackedN as KnownLayout>::LAYOUT, expected);
6257        assert_eq!(<KL10PackedN as KnownLayout>::LAYOUT, unsized_layout(2, 1, 4));
6258
6259        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
6260        // |          Y |        N |              Y |        Y |      KL11 |
6261        #[allow(dead_code)]
6262        #[derive(KnownLayout)]
6263        #[repr(C)]
6264        struct KL11(NotKnownLayout<AU64>, u8);
6265
6266        let expected = DstLayout::new_zst(None)
6267            .extend(DstLayout::for_type::<NotKnownLayout<AU64>>(), None)
6268            .extend(<u8 as KnownLayout>::LAYOUT, None)
6269            .pad_to_align();
6270
6271        assert_eq!(<KL11 as KnownLayout>::LAYOUT, expected);
6272        assert_eq!(<KL11 as KnownLayout>::LAYOUT, sized_layout(8, 16));
6273
6274        // ...with `align(N)`:
6275        #[allow(dead_code)]
6276        #[derive(KnownLayout)]
6277        #[repr(C, align(64))]
6278        struct KL11Align(NotKnownLayout<AU64>, u8);
6279
6280        let repr_align = NonZeroUsize::new(64);
6281
6282        let expected = DstLayout::new_zst(repr_align)
6283            .extend(DstLayout::for_type::<NotKnownLayout<AU64>>(), None)
6284            .extend(<u8 as KnownLayout>::LAYOUT, None)
6285            .pad_to_align();
6286
6287        assert_eq!(<KL11Align as KnownLayout>::LAYOUT, expected);
6288        assert_eq!(<KL11Align as KnownLayout>::LAYOUT, sized_layout(64, 64));
6289
6290        // ...with `packed`:
6291        #[allow(dead_code)]
6292        #[derive(KnownLayout)]
6293        #[repr(C, packed)]
6294        struct KL11Packed(NotKnownLayout<AU64>, u8);
6295
6296        let repr_packed = NonZeroUsize::new(1);
6297
6298        let expected = DstLayout::new_zst(None)
6299            .extend(DstLayout::for_type::<NotKnownLayout<AU64>>(), repr_packed)
6300            .extend(<u8 as KnownLayout>::LAYOUT, repr_packed)
6301            .pad_to_align();
6302
6303        assert_eq!(<KL11Packed as KnownLayout>::LAYOUT, expected);
6304        assert_eq!(<KL11Packed as KnownLayout>::LAYOUT, sized_layout(1, 9));
6305
6306        // ...with `packed(N)`:
6307        #[allow(dead_code)]
6308        #[derive(KnownLayout)]
6309        #[repr(C, packed(2))]
6310        struct KL11PackedN(NotKnownLayout<AU64>, u8);
6311
6312        let repr_packed = NonZeroUsize::new(2);
6313
6314        let expected = DstLayout::new_zst(None)
6315            .extend(DstLayout::for_type::<NotKnownLayout<AU64>>(), repr_packed)
6316            .extend(<u8 as KnownLayout>::LAYOUT, repr_packed)
6317            .pad_to_align();
6318
6319        assert_eq!(<KL11PackedN as KnownLayout>::LAYOUT, expected);
6320        assert_eq!(<KL11PackedN as KnownLayout>::LAYOUT, sized_layout(2, 10));
6321
6322        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
6323        // |          Y |        Y |              Y |        N |      KL14 |
6324        #[allow(dead_code)]
6325        #[derive(KnownLayout)]
6326        #[repr(C)]
6327        struct KL14<T: ?Sized + KnownLayout>(u8, T);
6328
6329        fn _test_kl14<T: ?Sized + KnownLayout>(kl: &KL14<T>) {
6330            _assert_kl(kl)
6331        }
6332
6333        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
6334        // |          Y |        Y |              Y |        Y |      KL15 |
6335        #[allow(dead_code)]
6336        #[derive(KnownLayout)]
6337        #[repr(C)]
6338        struct KL15<T: KnownLayout>(u8, T);
6339
6340        fn _test_kl15<T: KnownLayout>(t: T) -> impl KnownLayout {
6341            let _ = KL15(0u8, t);
6342        }
6343
6344        // Test a variety of combinations of field types:
6345        //  - ()
6346        //  - u8
6347        //  - AU16
6348        //  - [()]
6349        //  - [u8]
6350        //  - [AU16]
6351
6352        #[allow(clippy::upper_case_acronyms, dead_code)]
6353        #[derive(KnownLayout)]
6354        #[repr(C)]
6355        struct KLTU<T, U: ?Sized>(T, U);
6356
6357        assert_eq!(<KLTU<(), ()> as KnownLayout>::LAYOUT, sized_layout(1, 0));
6358
6359        assert_eq!(<KLTU<(), u8> as KnownLayout>::LAYOUT, sized_layout(1, 1));
6360
6361        assert_eq!(<KLTU<(), AU16> as KnownLayout>::LAYOUT, sized_layout(2, 2));
6362
6363        assert_eq!(<KLTU<(), [()]> as KnownLayout>::LAYOUT, unsized_layout(1, 0, 0));
6364
6365        assert_eq!(<KLTU<(), [u8]> as KnownLayout>::LAYOUT, unsized_layout(1, 1, 0));
6366
6367        assert_eq!(<KLTU<(), [AU16]> as KnownLayout>::LAYOUT, unsized_layout(2, 2, 0));
6368
6369        assert_eq!(<KLTU<u8, ()> as KnownLayout>::LAYOUT, sized_layout(1, 1));
6370
6371        assert_eq!(<KLTU<u8, u8> as KnownLayout>::LAYOUT, sized_layout(1, 2));
6372
6373        assert_eq!(<KLTU<u8, AU16> as KnownLayout>::LAYOUT, sized_layout(2, 4));
6374
6375        assert_eq!(<KLTU<u8, [()]> as KnownLayout>::LAYOUT, unsized_layout(1, 0, 1));
6376
6377        assert_eq!(<KLTU<u8, [u8]> as KnownLayout>::LAYOUT, unsized_layout(1, 1, 1));
6378
6379        assert_eq!(<KLTU<u8, [AU16]> as KnownLayout>::LAYOUT, unsized_layout(2, 2, 2));
6380
6381        assert_eq!(<KLTU<AU16, ()> as KnownLayout>::LAYOUT, sized_layout(2, 2));
6382
6383        assert_eq!(<KLTU<AU16, u8> as KnownLayout>::LAYOUT, sized_layout(2, 4));
6384
6385        assert_eq!(<KLTU<AU16, AU16> as KnownLayout>::LAYOUT, sized_layout(2, 4));
6386
6387        assert_eq!(<KLTU<AU16, [()]> as KnownLayout>::LAYOUT, unsized_layout(2, 0, 2));
6388
6389        assert_eq!(<KLTU<AU16, [u8]> as KnownLayout>::LAYOUT, unsized_layout(2, 1, 2));
6390
6391        assert_eq!(<KLTU<AU16, [AU16]> as KnownLayout>::LAYOUT, unsized_layout(2, 2, 2));
6392
6393        // Test a variety of field counts.
6394
6395        #[derive(KnownLayout)]
6396        #[repr(C)]
6397        struct KLF0;
6398
6399        assert_eq!(<KLF0 as KnownLayout>::LAYOUT, sized_layout(1, 0));
6400
6401        #[derive(KnownLayout)]
6402        #[repr(C)]
6403        struct KLF1([u8]);
6404
6405        assert_eq!(<KLF1 as KnownLayout>::LAYOUT, unsized_layout(1, 1, 0));
6406
6407        #[derive(KnownLayout)]
6408        #[repr(C)]
6409        struct KLF2(NotKnownLayout<u8>, [u8]);
6410
6411        assert_eq!(<KLF2 as KnownLayout>::LAYOUT, unsized_layout(1, 1, 1));
6412
6413        #[derive(KnownLayout)]
6414        #[repr(C)]
6415        struct KLF3(NotKnownLayout<u8>, NotKnownLayout<AU16>, [u8]);
6416
6417        assert_eq!(<KLF3 as KnownLayout>::LAYOUT, unsized_layout(2, 1, 4));
6418
6419        #[derive(KnownLayout)]
6420        #[repr(C)]
6421        struct KLF4(NotKnownLayout<u8>, NotKnownLayout<AU16>, NotKnownLayout<AU32>, [u8]);
6422
6423        assert_eq!(<KLF4 as KnownLayout>::LAYOUT, unsized_layout(4, 1, 8));
6424    }
6425
6426    #[test]
6427    fn test_object_safety() {
6428        fn _takes_no_cell(_: &dyn Immutable) {}
6429        fn _takes_unaligned(_: &dyn Unaligned) {}
6430    }
6431
6432    #[test]
6433    fn test_from_zeros_only() {
6434        // Test types that implement `FromZeros` but not `FromBytes`.
6435
6436        assert!(!bool::new_zeroed());
6437        assert_eq!(char::new_zeroed(), '\0');
6438
6439        #[cfg(feature = "alloc")]
6440        {
6441            assert_eq!(bool::new_box_zeroed(), Ok(Box::new(false)));
6442            assert_eq!(char::new_box_zeroed(), Ok(Box::new('\0')));
6443
6444            assert_eq!(
6445                <[bool]>::new_box_zeroed_with_elems(3).unwrap().as_ref(),
6446                [false, false, false]
6447            );
6448            assert_eq!(
6449                <[char]>::new_box_zeroed_with_elems(3).unwrap().as_ref(),
6450                ['\0', '\0', '\0']
6451            );
6452
6453            assert_eq!(bool::new_vec_zeroed(3).unwrap().as_ref(), [false, false, false]);
6454            assert_eq!(char::new_vec_zeroed(3).unwrap().as_ref(), ['\0', '\0', '\0']);
6455        }
6456
6457        let mut string = "hello".to_string();
6458        let s: &mut str = string.as_mut();
6459        assert_eq!(s, "hello");
6460        s.zero();
6461        assert_eq!(s, "\0\0\0\0\0");
6462    }
6463
6464    #[test]
6465    fn test_zst_count_preserved() {
6466        // Test that, when an explicit count is provided for a type with a
6467        // ZST trailing slice element, that count is preserved. This is
6468        // important since, for such types, all element counts result in objects
6469        // of the same size, and so the correct behavior is ambiguous. However,
6470        // preserving the count as requested by the user is the behavior that we
6471        // document publicly.
6472
6473        // FromZeros methods
6474        #[cfg(feature = "alloc")]
6475        assert_eq!(<[()]>::new_box_zeroed_with_elems(3).unwrap().len(), 3);
6476        #[cfg(feature = "alloc")]
6477        assert_eq!(<()>::new_vec_zeroed(3).unwrap().len(), 3);
6478
6479        // FromBytes methods
6480        assert_eq!(<[()]>::ref_from_bytes_with_elems(&[][..], 3).unwrap().len(), 3);
6481        assert_eq!(<[()]>::ref_from_prefix_with_elems(&[][..], 3).unwrap().0.len(), 3);
6482        assert_eq!(<[()]>::ref_from_suffix_with_elems(&[][..], 3).unwrap().1.len(), 3);
6483        assert_eq!(<[()]>::mut_from_bytes_with_elems(&mut [][..], 3).unwrap().len(), 3);
6484        assert_eq!(<[()]>::mut_from_prefix_with_elems(&mut [][..], 3).unwrap().0.len(), 3);
6485        assert_eq!(<[()]>::mut_from_suffix_with_elems(&mut [][..], 3).unwrap().1.len(), 3);
6486    }
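The ambiguity described above comes from how Rust lays out zero-sized slice elements: every element count produces an object of the same (zero) size, so the count lives only in the fat-pointer metadata. A std-only sketch of that fact (illustrative; no zerocopy APIs involved):

```rust
use core::mem::size_of_val;

fn main() {
    // Two `[()]` slices of different lengths occupy the same number of
    // bytes (zero), so the length cannot be recovered from the size...
    let a: &[()] = &[(); 3];
    let b: &[()] = &[(); 7];
    assert_eq!(size_of_val(a), 0);
    assert_eq!(size_of_val(b), 0);
    // ...but it is preserved in each slice's fat-pointer metadata.
    assert_eq!(a.len(), 3);
    assert_eq!(b.len(), 7);
}
```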
6487
6488    #[test]
6489    fn test_read_write() {
6490        const VAL: u64 = 0x12345678;
6491        #[cfg(target_endian = "big")]
6492        const VAL_BYTES: [u8; 8] = VAL.to_be_bytes();
6493        #[cfg(target_endian = "little")]
6494        const VAL_BYTES: [u8; 8] = VAL.to_le_bytes();
6495        const ZEROS: [u8; 8] = [0u8; 8];
6496
6497        // Test `FromBytes::{read_from_bytes, read_from_prefix, read_from_suffix}`.
6498
6499        assert_eq!(u64::read_from_bytes(&VAL_BYTES[..]), Ok(VAL));
6500        // The first 8 bytes are from `VAL_BYTES` and the second 8 bytes are all
6501        // zeros.
6502        let bytes_with_prefix: [u8; 16] = transmute!([VAL_BYTES, [0; 8]]);
6503        assert_eq!(u64::read_from_prefix(&bytes_with_prefix[..]), Ok((VAL, &ZEROS[..])));
6504        assert_eq!(u64::read_from_suffix(&bytes_with_prefix[..]), Ok((&VAL_BYTES[..], 0)));
6505        // The first 8 bytes are all zeros and the second 8 bytes are from
6506        // `VAL_BYTES`.
6507        let bytes_with_suffix: [u8; 16] = transmute!([[0; 8], VAL_BYTES]);
6508        assert_eq!(u64::read_from_prefix(&bytes_with_suffix[..]), Ok((0, &VAL_BYTES[..])));
6509        assert_eq!(u64::read_from_suffix(&bytes_with_suffix[..]), Ok((&ZEROS[..], VAL)));
6510
6511        // Test `IntoBytes::{write_to, write_to_prefix, write_to_suffix}`.
6512
6513        let mut bytes = [0u8; 8];
6514        assert_eq!(VAL.write_to(&mut bytes[..]), Ok(()));
6515        assert_eq!(bytes, VAL_BYTES);
6516        let mut bytes = [0u8; 16];
6517        assert_eq!(VAL.write_to_prefix(&mut bytes[..]), Ok(()));
6518        let want: [u8; 16] = transmute!([VAL_BYTES, [0; 8]]);
6519        assert_eq!(bytes, want);
6520        let mut bytes = [0u8; 16];
6521        assert_eq!(VAL.write_to_suffix(&mut bytes[..]), Ok(()));
6522        let want: [u8; 16] = transmute!([[0; 8], VAL_BYTES]);
6523        assert_eq!(bytes, want);
6524    }
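For an integer type, the reads and writes exercised above boil down to native-endian byte copies. A std-only sketch of the equivalent behavior (illustrative; the zerocopy methods additionally handle size checking and error reporting):

```rust
use core::convert::TryInto;

fn main() {
    let val: u64 = 0x12345678;
    // `write_to` on a `u64` produces the value's native-endian bytes.
    let bytes = val.to_ne_bytes();
    // `read_from_bytes` on those bytes recovers the value.
    assert_eq!(u64::from_ne_bytes(bytes), val);

    // `read_from_prefix` on a 16-byte buffer reads the first 8 bytes and
    // returns the remainder; here we emulate just the read.
    let mut buf = [0u8; 16];
    buf[..8].copy_from_slice(&bytes);
    let prefix: [u8; 8] = buf[..8].try_into().unwrap();
    assert_eq!(u64::from_ne_bytes(prefix), val);
    assert_eq!(&buf[8..], &[0u8; 8]);
}
```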
6525
6526    #[test]
6527    #[cfg(feature = "std")]
6528    fn test_read_io_with_padding_soundness() {
6529        // This test is designed to exhibit potential UB in
6530        // `FromBytes::read_from_io` (see #2319, #2320).
6531
6532        // On most platforms (where `align_of::<u16>() == 2`), `WithPadding`
6533        // will have inter-field padding between `x` and `y`.
6534        #[derive(FromBytes)]
6535        #[repr(C)]
6536        struct WithPadding {
6537            x: u8,
6538            y: u16,
6539        }
6540        struct ReadsInRead;
6541        impl std::io::Read for ReadsInRead {
6542            fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
6543                // This body branches on every byte of `buf`, ensuring that it
6544                // exhibits UB if any byte of `buf` is uninitialized.
6545                if buf.iter().all(|&x| x == 0) {
6546                    Ok(buf.len())
6547                } else {
6548                    buf.iter_mut().for_each(|x| *x = 0);
6549                    Ok(buf.len())
6550                }
6551            }
6552        }
6553        assert!(matches!(WithPadding::read_from_io(ReadsInRead), Ok(WithPadding { x: 0, y: 0 })));
6554    }
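The padding being tested can be computed directly from `repr(C)` layout rules: the `u16` field must start at an offset that is a multiple of its alignment, leaving `align_of::<u16>() - 1` padding bytes after the `u8`. A std-only mirror of the `WithPadding` shape (illustrative; the real type also derives `FromBytes`):

```rust
use core::mem::{align_of, size_of};

// Standalone copy of the `WithPadding` shape from the test above.
#[repr(C)]
struct WithPadding {
    x: u8,
    y: u16,
}

fn main() {
    // `x` sits at offset 0; `y` is placed at the next multiple of
    // `align_of::<u16>()`, so the gap between them is padding. Those
    // padding bytes are never initialized by writing the fields, which
    // is why handing the struct's raw memory to `Read::read` is unsound.
    let padding = size_of::<WithPadding>() - (size_of::<u8>() + size_of::<u16>());
    assert_eq!(padding, align_of::<u16>() - 1);
}
```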
6555
6556    #[test]
6557    #[cfg(feature = "std")]
6558    fn test_read_write_io() {
6559        let mut long_buffer = [0, 0, 0, 0];
6560        assert!(matches!(u16::MAX.write_to_io(&mut long_buffer[..]), Ok(())));
6561        assert_eq!(long_buffer, [255, 255, 0, 0]);
6562        assert!(matches!(u16::read_from_io(&long_buffer[..]), Ok(u16::MAX)));
6563
6564        let mut short_buffer = [0, 0];
6565        assert!(u32::MAX.write_to_io(&mut short_buffer[..]).is_err());
6566        assert_eq!(short_buffer, [255, 255]);
6567        assert!(u32::read_from_io(&short_buffer[..]).is_err());
6568    }
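The `write_to_io` behavior checked above, a partial write into the short buffer followed by an error, matches `Write::write_all` semantics on a `&mut [u8]` sink: it copies as many bytes as fit, then fails once the sink is full. A std-only sketch (illustrative only):

```rust
use std::io::Write;

fn main() {
    // A 2-byte sink cannot hold the 4 bytes of a `u32`, but `write_all`
    // still copies the first 2 bytes before reporting failure.
    let mut short = [0u8; 2];
    {
        let mut sink: &mut [u8] = &mut short[..];
        assert!(sink.write_all(&u32::MAX.to_ne_bytes()).is_err());
    }
    assert_eq!(short, [255, 255]);
}
```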
6569
6570    #[test]
6571    fn test_try_from_bytes_try_read_from() {
6572        assert_eq!(<bool as TryFromBytes>::try_read_from_bytes(&[0]), Ok(false));
6573        assert_eq!(<bool as TryFromBytes>::try_read_from_bytes(&[1]), Ok(true));
6574
6575        assert_eq!(<bool as TryFromBytes>::try_read_from_prefix(&[0, 2]), Ok((false, &[2][..])));
6576        assert_eq!(<bool as TryFromBytes>::try_read_from_prefix(&[1, 2]), Ok((true, &[2][..])));
6577
6578        assert_eq!(<bool as TryFromBytes>::try_read_from_suffix(&[2, 0]), Ok((&[2][..], false)));
6579        assert_eq!(<bool as TryFromBytes>::try_read_from_suffix(&[2, 1]), Ok((&[2][..], true)));
6580
6581        // If we don't pass enough bytes, it fails.
6582        assert!(matches!(
6583            <u8 as TryFromBytes>::try_read_from_bytes(&[]),
6584            Err(TryReadError::Size(_))
6585        ));
6586        assert!(matches!(
6587            <u8 as TryFromBytes>::try_read_from_prefix(&[]),
6588            Err(TryReadError::Size(_))
6589        ));
6590        assert!(matches!(
6591            <u8 as TryFromBytes>::try_read_from_suffix(&[]),
6592            Err(TryReadError::Size(_))
6593        ));
6594
6595        // If we pass too many bytes, it fails.
6596        assert!(matches!(
6597            <u8 as TryFromBytes>::try_read_from_bytes(&[0, 0]),
6598            Err(TryReadError::Size(_))
6599        ));
6600
6601        // If we pass an invalid value, it fails.
6602        assert!(matches!(
6603            <bool as TryFromBytes>::try_read_from_bytes(&[2]),
6604            Err(TryReadError::Validity(_))
6605        ));
6606        assert!(matches!(
6607            <bool as TryFromBytes>::try_read_from_prefix(&[2, 0]),
6608            Err(TryReadError::Validity(_))
6609        ));
6610        assert!(matches!(
6611            <bool as TryFromBytes>::try_read_from_suffix(&[0, 2]),
6612            Err(TryReadError::Validity(_))
6613        ));
6614
6615        // Reading from a misaligned buffer should still succeed. Since `AU64`'s
6616        // alignment is 8, and since we read from two adjacent addresses one
6617        // byte apart, it is guaranteed that at least one of them (though
6618        // possibly both) will be misaligned.
6619        let bytes: [u8; 9] = [0, 0, 0, 0, 0, 0, 0, 0, 0];
6620        assert_eq!(<AU64 as TryFromBytes>::try_read_from_bytes(&bytes[..8]), Ok(AU64(0)));
6621        assert_eq!(<AU64 as TryFromBytes>::try_read_from_bytes(&bytes[1..9]), Ok(AU64(0)));
6622
6623        assert_eq!(
6624            <AU64 as TryFromBytes>::try_read_from_prefix(&bytes[..8]),
6625            Ok((AU64(0), &[][..]))
6626        );
6627        assert_eq!(
6628            <AU64 as TryFromBytes>::try_read_from_prefix(&bytes[1..9]),
6629            Ok((AU64(0), &[][..]))
6630        );
6631
6632        assert_eq!(
6633            <AU64 as TryFromBytes>::try_read_from_suffix(&bytes[..8]),
6634            Ok((&[][..], AU64(0)))
6635        );
6636        assert_eq!(
6637            <AU64 as TryFromBytes>::try_read_from_suffix(&bytes[1..9]),
6638            Ok((&[][..], AU64(0)))
6639        );
6640    }
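The misaligned reads above can succeed because the `try_read_from_*` methods copy the input bytes to an aligned location instead of reinterpreting the input pointer in place. A std-only sketch of that distinction (illustrative only):

```rust
fn main() {
    // A 9-byte buffer: for an 8-aligned type, at most one of the reads at
    // offsets 0 and 1 can be aligned, so at least one is misaligned.
    let bytes = [0u8; 9];

    // Reinterpreting `&bytes[1..]` as a `&u64` in place would require
    // 8-byte alignment, but a byte-wise copy into a local is always fine,
    // whatever the source alignment.
    let mut copied = [0u8; 8];
    copied.copy_from_slice(&bytes[1..9]);
    assert_eq!(u64::from_ne_bytes(copied), 0);
}
```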
6641
6642    #[test]
6643    fn test_ref_from_mut_from() {
6644        // Test `FromBytes::{ref_from, mut_from}_{bytes,prefix,suffix}` success cases.
6645        // Exhaustive coverage for these methods is provided by the `Ref` tests
6646        // above, to which these helper methods defer.
6647
6648        let mut buf =
6649            Align::<[u8; 16], AU64>::new([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]);
6650
6651        assert_eq!(
6652            AU64::ref_from_bytes(&buf.t[8..]).unwrap().0.to_ne_bytes(),
6653            [8, 9, 10, 11, 12, 13, 14, 15]
6654        );
6655        let suffix = AU64::mut_from_bytes(&mut buf.t[8..]).unwrap();
6656        suffix.0 = 0x0101010101010101;
6657        // `[u8; 9]` is deliberately not half the size of the full buffer, which would
6658        // catch `from_prefix` having the same implementation as `from_suffix` (issues #506, #511).
6659        assert_eq!(
6660            <[u8; 9]>::ref_from_suffix(&buf.t[..]).unwrap(),
6661            (&[0, 1, 2, 3, 4, 5, 6][..], &[7u8, 1, 1, 1, 1, 1, 1, 1, 1])
6662        );
6663        let (prefix, suffix) = AU64::mut_from_suffix(&mut buf.t[1..]).unwrap();
6664        assert_eq!(prefix, &mut [1u8, 2, 3, 4, 5, 6, 7][..]);
6665        suffix.0 = 0x0202020202020202;
6666        let (prefix, suffix) = <[u8; 10]>::mut_from_suffix(&mut buf.t[..]).unwrap();
6667        assert_eq!(prefix, &mut [0u8, 1, 2, 3, 4, 5][..]);
6668        suffix[0] = 42;
6669        assert_eq!(
6670            <[u8; 9]>::ref_from_prefix(&buf.t[..]).unwrap(),
6671            (&[0u8, 1, 2, 3, 4, 5, 42, 7, 2], &[2u8, 2, 2, 2, 2, 2, 2][..])
6672        );
6673        <[u8; 2]>::mut_from_prefix(&mut buf.t[..]).unwrap().0[1] = 30;
6674        assert_eq!(buf.t, [0, 30, 2, 3, 4, 5, 42, 7, 2, 2, 2, 2, 2, 2, 2, 2]);
6675    }
6676
6677    #[test]
6678    fn test_ref_from_mut_from_error() {
6679        // Test `FromBytes::{ref_from, mut_from}_{bytes,prefix,suffix}` error cases.
6680
6681        // Fail because the buffer is too large.
6682        let mut buf = Align::<[u8; 16], AU64>::default();
6683        // `buf.t` should be aligned to 8, so only the length check should fail.
6684        assert!(AU64::ref_from_bytes(&buf.t[..]).is_err());
6685        assert!(AU64::mut_from_bytes(&mut buf.t[..]).is_err());
6686        assert!(<[u8; 8]>::ref_from_bytes(&buf.t[..]).is_err());
6687        assert!(<[u8; 8]>::mut_from_bytes(&mut buf.t[..]).is_err());
6688
6689        // Fail because the buffer is too small.
6690        let mut buf = Align::<[u8; 4], AU64>::default();
6691        assert!(AU64::ref_from_bytes(&buf.t[..]).is_err());
6692        assert!(AU64::mut_from_bytes(&mut buf.t[..]).is_err());
6693        assert!(<[u8; 8]>::ref_from_bytes(&buf.t[..]).is_err());
6694        assert!(<[u8; 8]>::mut_from_bytes(&mut buf.t[..]).is_err());
6695        assert!(AU64::ref_from_prefix(&buf.t[..]).is_err());
6696        assert!(AU64::mut_from_prefix(&mut buf.t[..]).is_err());
6697        assert!(AU64::ref_from_suffix(&buf.t[..]).is_err());
6698        assert!(AU64::mut_from_suffix(&mut buf.t[..]).is_err());
6699        assert!(<[u8; 8]>::ref_from_prefix(&buf.t[..]).is_err());
6700        assert!(<[u8; 8]>::mut_from_prefix(&mut buf.t[..]).is_err());
6701        assert!(<[u8; 8]>::ref_from_suffix(&buf.t[..]).is_err());
6702        assert!(<[u8; 8]>::mut_from_suffix(&mut buf.t[..]).is_err());
6703
6704        // Fail because the alignment is insufficient.
6705        let mut buf = Align::<[u8; 13], AU64>::default();
6706        assert!(AU64::ref_from_bytes(&buf.t[1..]).is_err());
6707        assert!(AU64::mut_from_bytes(&mut buf.t[1..]).is_err());
6710        assert!(AU64::ref_from_prefix(&buf.t[1..]).is_err());
6711        assert!(AU64::mut_from_prefix(&mut buf.t[1..]).is_err());
6712        assert!(AU64::ref_from_suffix(&buf.t[..]).is_err());
6713        assert!(AU64::mut_from_suffix(&mut buf.t[..]).is_err());
6714    }
6715
6716    #[test]
6717    fn test_to_methods() {
6718        /// Run a series of tests by calling `IntoBytes` methods on `t`.
6719        ///
6720        /// `bytes` is the expected byte sequence returned from `t.as_bytes()`
6721        /// before `t` has been modified. `post_mutation` is the expected
6722        /// sequence returned from `t.as_bytes()` after `t.as_mut_bytes()[0]`
6723        /// has had its bits flipped (by applying `^= 0xFF`).
6724        ///
6725        /// `N` is the size of `t` in bytes.
6726        fn test<T: FromBytes + IntoBytes + Immutable + Debug + Eq + ?Sized, const N: usize>(
6727            t: &mut T,
6728            bytes: &[u8],
6729            post_mutation: &T,
6730        ) {
6731            // Test that we can access the underlying bytes, and that we get the
6732            // right bytes and the right number of bytes.
6733            assert_eq!(t.as_bytes(), bytes);
6734
6735            // Test that changes to the underlying byte slices are reflected in
6736            // the original object.
6737            t.as_mut_bytes()[0] ^= 0xFF;
6738            assert_eq!(t, post_mutation);
6739            t.as_mut_bytes()[0] ^= 0xFF;
6740
6741            // `write_to` rejects slices that are too small or too large.
6742            assert!(t.write_to(&mut vec![0; N - 1][..]).is_err());
6743            assert!(t.write_to(&mut vec![0; N + 1][..]).is_err());
6744
6745            // `write_to` works as expected.
6746            let mut bytes = [0; N];
6747            assert_eq!(t.write_to(&mut bytes[..]), Ok(()));
6748            assert_eq!(bytes, t.as_bytes());
6749
6750            // `write_to_prefix` rejects slices that are too small.
6751            assert!(t.write_to_prefix(&mut vec![0; N - 1][..]).is_err());
6752
6753            // `write_to_prefix` works with exact-sized slices.
6754            let mut bytes = [0; N];
6755            assert_eq!(t.write_to_prefix(&mut bytes[..]), Ok(()));
6756            assert_eq!(bytes, t.as_bytes());
6757
6758            // `write_to_prefix` works with too-large slices, and any bytes past
6759            // the prefix aren't modified.
6760            let mut too_many_bytes = vec![0; N + 1];
6761            too_many_bytes[N] = 123;
6762            assert_eq!(t.write_to_prefix(&mut too_many_bytes[..]), Ok(()));
6763            assert_eq!(&too_many_bytes[..N], t.as_bytes());
6764            assert_eq!(too_many_bytes[N], 123);
6765
6766            // `write_to_suffix` rejects slices that are too small.
6767            assert!(t.write_to_suffix(&mut vec![0; N - 1][..]).is_err());
6768
6769            // `write_to_suffix` works with exact-sized slices.
6770            let mut bytes = [0; N];
6771            assert_eq!(t.write_to_suffix(&mut bytes[..]), Ok(()));
6772            assert_eq!(bytes, t.as_bytes());
6773
6774            // `write_to_suffix` works with too-large slices, and any bytes
6775            // before the suffix aren't modified.
6776            let mut too_many_bytes = vec![0; N + 1];
6777            too_many_bytes[0] = 123;
6778            assert_eq!(t.write_to_suffix(&mut too_many_bytes[..]), Ok(()));
6779            assert_eq!(&too_many_bytes[1..], t.as_bytes());
6780            assert_eq!(too_many_bytes[0], 123);
6781        }
6782
6783        #[derive(Debug, Eq, PartialEq, FromBytes, IntoBytes, Immutable)]
6784        #[repr(C)]
6785        struct Foo {
6786            a: u32,
6787            b: Wrapping<u32>,
6788            c: Option<NonZeroU32>,
6789        }
6790
6791        let expected_bytes: Vec<u8> = if cfg!(target_endian = "little") {
6792            vec![1, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0]
6793        } else {
6794            vec![0, 0, 0, 1, 0, 0, 0, 2, 0, 0, 0, 0]
6795        };
6796        let post_mutation_expected_a =
6797            if cfg!(target_endian = "little") { 0x00_00_00_FE } else { 0xFF_00_00_01 };
6798        test::<_, 12>(
6799            &mut Foo { a: 1, b: Wrapping(2), c: None },
6800            expected_bytes.as_bytes(),
6801            &Foo { a: post_mutation_expected_a, b: Wrapping(2), c: None },
6802        );
6803        test::<_, 3>(
6804            Unsized::from_mut_slice(&mut [1, 2, 3]),
6805            &[1, 2, 3],
6806            Unsized::from_mut_slice(&mut [0xFE, 2, 3]),
6807        );
6808    }
6809
6810    #[test]
6811    fn test_array() {
6812        #[derive(FromBytes, IntoBytes, Immutable)]
6813        #[repr(C)]
6814        struct Foo {
6815            a: [u16; 33],
6816        }
6817
6818        let foo = Foo { a: [0xFFFF; 33] };
6819        let expected = [0xFFu8; 66];
6820        assert_eq!(foo.as_bytes(), &expected[..]);
6821    }
6822
6823    #[test]
6824    fn test_new_zeroed() {
6825        assert!(!bool::new_zeroed());
6826        assert_eq!(u64::new_zeroed(), 0);
6827        // This test exists in order to exercise unsafe code, especially when
6828        // running under Miri.
6829        #[allow(clippy::unit_cmp)]
6830        {
6831            assert_eq!(<()>::new_zeroed(), ());
6832        }
6833    }
6834
6835    #[test]
6836    fn test_transparent_packed_generic_struct() {
6837        #[derive(IntoBytes, FromBytes, Unaligned)]
6838        #[repr(transparent)]
6839        #[allow(dead_code)] // We never construct this type
6840        struct Foo<T> {
6841            _t: T,
6842            _phantom: PhantomData<()>,
6843        }
6844
6845        assert_impl_all!(Foo<u32>: FromZeros, FromBytes, IntoBytes);
6846        assert_impl_all!(Foo<u8>: Unaligned);
6847
6848        #[derive(IntoBytes, FromBytes, Unaligned)]
6849        #[repr(C, packed)]
6850        #[allow(dead_code)] // We never construct this type
6851        struct Bar<T, U> {
6852            _t: T,
6853            _u: U,
6854        }
6855
6856        assert_impl_all!(Bar<u8, AU64>: FromZeros, FromBytes, IntoBytes, Unaligned);
6857    }
6858
6859    #[cfg(feature = "alloc")]
6860    mod alloc {
6861        use super::*;
6862
6863        #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
6864        #[test]
6865        fn test_extend_vec_zeroed() {
6866            // Test extending when there is an existing allocation.
6867            let mut v = vec![100u16, 200, 300];
6868            FromZeros::extend_vec_zeroed(&mut v, 3).unwrap();
6869            assert_eq!(v.len(), 6);
6870            assert_eq!(&*v, &[100, 200, 300, 0, 0, 0]);
6871            drop(v);
6872
6873            // Test extending when there is no existing allocation.
6874            let mut v: Vec<u64> = Vec::new();
6875            FromZeros::extend_vec_zeroed(&mut v, 3).unwrap();
6876            assert_eq!(v.len(), 3);
6877            assert_eq!(&*v, &[0, 0, 0]);
6878            drop(v);
6879        }
6880
6881        #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
6882        #[test]
6883        fn test_extend_vec_zeroed_zst() {
6884            // Test extending when there is an existing (fake) allocation.
6885            let mut v = vec![(), (), ()];
6886            <()>::extend_vec_zeroed(&mut v, 3).unwrap();
6887            assert_eq!(v.len(), 6);
6888            assert_eq!(&*v, &[(), (), (), (), (), ()]);
6889            drop(v);
6890
6891            // Test extending when there is no existing (fake) allocation.
6892            let mut v: Vec<()> = Vec::new();
6893            <()>::extend_vec_zeroed(&mut v, 3).unwrap();
6894            assert_eq!(&*v, &[(), (), ()]);
6895            drop(v);
6896        }
6897
6898        #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
6899        #[test]
6900        fn test_insert_vec_zeroed() {
6901            // Insert at start (no existing allocation).
6902            let mut v: Vec<u64> = Vec::new();
6903            u64::insert_vec_zeroed(&mut v, 0, 2).unwrap();
6904            assert_eq!(v.len(), 2);
6905            assert_eq!(&*v, &[0, 0]);
6906            drop(v);
6907
6908            // Insert at start.
6909            let mut v = vec![100u64, 200, 300];
6910            u64::insert_vec_zeroed(&mut v, 0, 2).unwrap();
6911            assert_eq!(v.len(), 5);
6912            assert_eq!(&*v, &[0, 0, 100, 200, 300]);
6913            drop(v);
6914
6915            // Insert at middle.
6916            let mut v = vec![100u64, 200, 300];
6917            u64::insert_vec_zeroed(&mut v, 1, 1).unwrap();
6918            assert_eq!(v.len(), 4);
6919            assert_eq!(&*v, &[100, 0, 200, 300]);
6920            drop(v);
6921
6922            // Insert at end.
6923            let mut v = vec![100u64, 200, 300];
6924            u64::insert_vec_zeroed(&mut v, 3, 1).unwrap();
6925            assert_eq!(v.len(), 4);
6926            assert_eq!(&*v, &[100, 200, 300, 0]);
6927            drop(v);
6928        }
6929
6930        #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
6931        #[test]
6932        fn test_insert_vec_zeroed_zst() {
6933            // Insert at start (no existing fake allocation).
6934            let mut v: Vec<()> = Vec::new();
6935            <()>::insert_vec_zeroed(&mut v, 0, 2).unwrap();
6936            assert_eq!(v.len(), 2);
6937            assert_eq!(&*v, &[(), ()]);
6938            drop(v);
6939
6940            // Insert at start.
6941            let mut v = vec![(), (), ()];
6942            <()>::insert_vec_zeroed(&mut v, 0, 2).unwrap();
6943            assert_eq!(v.len(), 5);
6944            assert_eq!(&*v, &[(), (), (), (), ()]);
6945            drop(v);
6946
6947            // Insert at middle.
6948            let mut v = vec![(), (), ()];
6949            <()>::insert_vec_zeroed(&mut v, 1, 1).unwrap();
6950            assert_eq!(v.len(), 4);
6951            assert_eq!(&*v, &[(), (), (), ()]);
6952            drop(v);
6953
6954            // Insert at end.
6955            let mut v = vec![(), (), ()];
6956            <()>::insert_vec_zeroed(&mut v, 3, 1).unwrap();
6957            assert_eq!(v.len(), 4);
6958            assert_eq!(&*v, &[(), (), (), ()]);
6959            drop(v);
6960        }
6961
6962        #[test]
6963        fn test_new_box_zeroed() {
6964            assert_eq!(u64::new_box_zeroed(), Ok(Box::new(0)));
6965        }
6966
6967        #[test]
6968        fn test_new_box_zeroed_array() {
6969            drop(<[u32; 0x1000]>::new_box_zeroed());
6970        }
6971
6972        #[test]
6973        fn test_new_box_zeroed_zst() {
6974            // This test exists in order to exercise unsafe code, especially
6975            // when running under Miri.
6976            #[allow(clippy::unit_cmp)]
6977            {
6978                assert_eq!(<()>::new_box_zeroed(), Ok(Box::new(())));
6979            }
6980        }
6981
6982        #[test]
6983        fn test_new_box_zeroed_with_elems() {
6984            let mut s: Box<[u64]> = <[u64]>::new_box_zeroed_with_elems(3).unwrap();
6985            assert_eq!(s.len(), 3);
6986            assert_eq!(&*s, &[0, 0, 0]);
6987            s[1] = 3;
6988            assert_eq!(&*s, &[0, 3, 0]);
6989        }
6990
6991        #[test]
6992        fn test_new_box_zeroed_with_elems_empty() {
6993            let s: Box<[u64]> = <[u64]>::new_box_zeroed_with_elems(0).unwrap();
6994            assert_eq!(s.len(), 0);
6995        }
6996
6997        #[test]
6998        fn test_new_box_zeroed_with_elems_zst() {
6999            let mut s: Box<[()]> = <[()]>::new_box_zeroed_with_elems(3).unwrap();
7000            assert_eq!(s.len(), 3);
7001            assert!(s.get(10).is_none());
7002            // This test exists in order to exercise unsafe code, especially
7003            // when running under Miri.
7004            #[allow(clippy::unit_cmp)]
7005            {
7006                assert_eq!(s[1], ());
7007            }
7008            s[2] = ();
7009        }
7010
7011        #[test]
7012        fn test_new_box_zeroed_with_elems_zst_empty() {
7013            let s: Box<[()]> = <[()]>::new_box_zeroed_with_elems(0).unwrap();
7014            assert_eq!(s.len(), 0);
7015        }
7016
7017        #[test]
7018        fn new_box_zeroed_with_elems_errors() {
7019            assert_eq!(<[u16]>::new_box_zeroed_with_elems(usize::MAX), Err(AllocError));
7020
7021            let max = <usize as core::convert::TryFrom<_>>::try_from(isize::MAX).unwrap();
7022            assert_eq!(
7023                <[u16]>::new_box_zeroed_with_elems((max / mem::size_of::<u16>()) + 1),
7024                Err(AllocError)
7025            );
7026        }
7027    }
7028}