zerocopy/
lib.rs

1// Copyright 2018 The Fuchsia Authors
2//
3// Licensed under the 2-Clause BSD License <LICENSE-BSD or
4// https://opensource.org/license/bsd-2-clause>, Apache License, Version 2.0
5// <LICENSE-APACHE or https://www.apache.org/licenses/LICENSE-2.0>, or the MIT
6// license <LICENSE-MIT or https://opensource.org/licenses/MIT>, at your option.
7// This file may not be copied, modified, or distributed except according to
8// those terms.
9
10// After updating the following doc comment, make sure to run the following
11// command to update `README.md` based on its contents:
12//
13//   cargo -q run --manifest-path tools/Cargo.toml -p generate-readme > README.md
14
15//! *<span style="font-size: 100%; color:grey;">Need more out of zerocopy?
16//! Submit a [customer request issue][customer-request-issue]!</span>*
17//!
18//! ***<span style="font-size: 140%">Fast, safe, <span
19//! style="color:red;">compile error</span>. Pick two.</span>***
20//!
21//! Zerocopy makes zero-cost memory manipulation effortless. We write `unsafe`
22//! so you don't have to.
23//!
24//! *Thanks for using zerocopy 0.8! For an overview of what changes from 0.7,
25//! check out our [release notes][release-notes], which include a step-by-step
26//! guide for upgrading from 0.7.*
27//!
28//! *Have questions? Need help? Ask the maintainers on [GitHub][github-q-a] or
29//! on [Discord][discord]!*
30//!
31//! [customer-request-issue]: https://github.com/google/zerocopy/issues/new/choose
32//! [release-notes]: https://github.com/google/zerocopy/discussions/1680
33//! [github-q-a]: https://github.com/google/zerocopy/discussions/categories/q-a
34//! [discord]: https://discord.gg/MAvWH2R6zk
35//!
36//! # Overview
37//!
38//! ##### Conversion Traits
39//!
40//! Zerocopy provides four derivable traits for zero-cost conversions:
41//! - [`TryFromBytes`] indicates that a type may safely be converted from
42//!   certain byte sequences (conditional on runtime checks)
43//! - [`FromZeros`] indicates that a sequence of zero bytes represents a valid
44//!   instance of a type
45//! - [`FromBytes`] indicates that a type may safely be converted from an
46//!   arbitrary byte sequence
47//! - [`IntoBytes`] indicates that a type may safely be converted *to* a byte
48//!   sequence
49//!
50//! These traits support sized types, slices, and [slice DSTs][slice-dsts].
51//!
52//! [slice-dsts]: KnownLayout#dynamically-sized-types
53//!
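//! For example, a minimal sketch (using the derives, available via the
//! `derive` feature or a direct dependency on `zerocopy-derive`) that reads a
//! fixed-size header out of raw bytes and writes it back:
//!
//! ```
//! use zerocopy::*;
//! # use zerocopy_derive::*;
//!
//! #[derive(FromBytes, IntoBytes, KnownLayout, Immutable)]
//! #[repr(C)]
//! struct UdpHeader {
//!     src_port: [u8; 2],
//!     dst_port: [u8; 2],
//!     length: [u8; 2],
//!     checksum: [u8; 2],
//! }
//!
//! // An 8-byte buffer is exactly one `UdpHeader`.
//! let bytes = [0u8, 80, 0, 53, 0, 8, 0, 0];
//! let header = UdpHeader::ref_from_bytes(&bytes[..]).unwrap();
//! assert_eq!(header.dst_port, [0, 53]);
//! // ...and back to bytes, at no runtime cost.
//! assert_eq!(header.as_bytes(), &bytes[..]);
//! ```
//!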
54//! ##### Marker Traits
55//!
56//! Zerocopy provides three derivable marker traits that do not provide any
57//! functionality themselves, but are required to call certain methods provided
58//! by the conversion traits:
59//! - [`KnownLayout`] indicates that zerocopy can reason about certain layout
60//!   qualities of a type
61//! - [`Immutable`] indicates that a type is free from interior mutability,
62//!   except by ownership or an exclusive (`&mut`) borrow
63//! - [`Unaligned`] indicates that a type's alignment requirement is 1
64//!
65//! You should generally derive these marker traits whenever possible.
66//!
67//! ##### Conversion Macros
68//!
69//! Zerocopy provides six macros for safe casting between types:
70//!
71//! - ([`try_`][try_transmute])[`transmute`] (conditionally) converts a value of
72//!   one type to a value of another type of the same size
73//! - ([`try_`][try_transmute_mut])[`transmute_mut`] (conditionally) converts a
74//!   mutable reference of one type to a mutable reference of another type of
75//!   the same size
76//! - ([`try_`][try_transmute_ref])[`transmute_ref`] (conditionally) converts a
77//!   mutable or immutable reference of one type to an immutable reference of
78//!   another type of the same size
79//!
80//! These macros perform *compile-time* size and alignment checks, meaning that
81//! unconditional casts have zero cost at runtime. Conditional casts do not need
82//! to validate size or alignment at runtime, but do need to validate contents.
83//!
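//! For example, a small sketch of an unconditional, same-size cast:
//!
//! ```
//! use zerocopy::transmute;
//!
//! // `[u8; 4]` and `u32` have the same size, so this conversion is checked
//! // entirely at compile time.
//! let bytes: [u8; 4] = u32::MAX.to_ne_bytes();
//! let word: u32 = transmute!(bytes);
//! assert_eq!(word, u32::MAX);
//! ```
//!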
84//! These macros cannot be used in generic contexts. For generic conversions,
85//! use the methods defined by the [conversion traits](#conversion-traits).
86//!
87//! ##### Byteorder-Aware Numerics
88//!
89//! Zerocopy provides byte-order aware integer types that support these
90//! conversions; see the [`byteorder`] module. These types are especially useful
91//! for network parsing.
92//!
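//! For example, a brief sketch using the big-endian `U32` type:
//!
//! ```
//! use zerocopy::IntoBytes;
//! use zerocopy::byteorder::{BigEndian, U32};
//!
//! let length = U32::<BigEndian>::new(0x0102_0304);
//! // The in-memory representation is big-endian regardless of the host's
//! // native byte order.
//! assert_eq!(length.as_bytes(), &[0x01, 0x02, 0x03, 0x04][..]);
//! assert_eq!(length.get(), 0x0102_0304);
//! ```
//!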
93//! # Cargo Features
94//!
95//! - **`alloc`**
96//!   By default, `zerocopy` is `no_std`. When the `alloc` feature is enabled,
97//!   the `alloc` crate is added as a dependency, and some allocation-related
98//!   functionality is added.
99//!
100//! - **`std`**
101//!   By default, `zerocopy` is `no_std`. When the `std` feature is enabled, the
102//!   `std` crate is added as a dependency (i.e., `no_std` is disabled), and
103//!   support for some `std` types is added. `std` implies `alloc`.
104//!
105//! - **`derive`**
106//!   Provides derives for the core marker traits via the `zerocopy-derive`
107//!   crate. These derives are re-exported from `zerocopy`, so it is not
108//!   necessary to depend on `zerocopy-derive` directly.
109//!
110//!   However, you may experience better compile times if you instead directly
111//!   depend on both `zerocopy` and `zerocopy-derive` in your `Cargo.toml`,
112//!   since doing so will allow Rust to compile these crates in parallel. To do
113//!   so, do *not* enable the `derive` feature, and list both dependencies in
114//!   your `Cargo.toml` with the same leading non-zero version number; e.g.:
115//!
116//!   ```toml
117//!   [dependencies]
118//!   zerocopy = "0.X"
119//!   zerocopy-derive = "0.X"
120//!   ```
121//!
122//!   To avoid the risk of [duplicate import errors][duplicate-import-errors] if
123//!   one of your dependencies enables zerocopy's `derive` feature, import
124//!   derives as `use zerocopy_derive::*` rather than by name (e.g., `use
125//!   zerocopy_derive::FromBytes`).
126//!
127//! - **`simd`**
128//!   When the `simd` feature is enabled, `FromZeros`, `FromBytes`, and
129//!   `IntoBytes` impls are emitted for all stable SIMD types which exist on the
130//!   target platform. Note that the layout of SIMD types is not yet stabilized,
131//!   so these impls may be removed in the future if layout changes make them
132//!   invalid. For more information, see the Unsafe Code Guidelines Reference
133//!   page on the [layout of packed SIMD vectors][simd-layout].
134//!
135//! - **`simd-nightly`**
136//!   Enables the `simd` feature and adds support for SIMD types which are only
137//!   available on nightly. Since these types are unstable, support for any type
138//!   may be removed at any point in the future.
139//!
140//! - **`float-nightly`**
141//!   Adds support for the unstable `f16` and `f128` types. These types are
142//!   not yet fully implemented and may not be supported on all platforms.
143//!
144//! [duplicate-import-errors]: https://github.com/google/zerocopy/issues/1587
145//! [simd-layout]: https://rust-lang.github.io/unsafe-code-guidelines/layout/packed-simd-vectors.html
146//!
147//! # Security Ethos
148//!
149//! Zerocopy is expressly designed for use in security-critical contexts. We
150//! strive to ensure that zerocopy code is sound under Rust's current
151//! memory model, and *any future memory model*. We ensure this by:
152//! - **...not 'guessing' about Rust's semantics.**
153//!   We annotate `unsafe` code with a precise rationale for its soundness that
154//!   cites a relevant section of Rust's official documentation. When Rust's
155//!   documented semantics are unclear, we work with the Rust Operational
156//!   Semantics Team to clarify Rust's documentation.
157//! - **...rigorously testing our implementation.**
158//!   We run tests using [Miri], ensuring that zerocopy is sound across a wide
159//!   array of supported target platforms of varying endianness and pointer
160//!   width, and across both current and experimental memory models of Rust.
161//! - **...formally proving the correctness of our implementation.**
162//!   We apply formal verification tools like [Kani][kani] to prove zerocopy's
163//!   correctness.
164//!
165//! For more information, see our full [soundness policy].
166//!
167//! [Miri]: https://github.com/rust-lang/miri
168//! [Kani]: https://github.com/model-checking/kani
169//! [soundness policy]: https://github.com/google/zerocopy/blob/main/POLICIES.md#soundness
170//!
171//! # Relationship to Project Safe Transmute
172//!
173//! [Project Safe Transmute] is an official initiative of the Rust Project to
174//! develop language-level support for safer transmutation. The Project consults
175//! with crates like zerocopy to identify aspects of safer transmutation that
176//! would benefit from compiler support, and has developed an [experimental,
177//! compiler-supported analysis][mcp-transmutability] which determines whether,
178//! for a given type, any value of that type may be soundly transmuted into
179//! another type. Once this functionality is sufficiently mature, zerocopy
180//! intends to replace its internal transmutability analysis (implemented by our
181//! custom derives) with the compiler-supported one. This change will likely be
182//! an implementation detail that is invisible to zerocopy's users.
183//!
184//! Project Safe Transmute will not replace the need for most of zerocopy's
185//! higher-level abstractions. The experimental compiler analysis is a tool for
186//! checking the soundness of `unsafe` code, not a tool to avoid writing
187//! `unsafe` code altogether. For the foreseeable future, crates like zerocopy
188//! will still be required in order to provide higher-level abstractions on top
189//! of the building block provided by Project Safe Transmute.
190//!
191//! [Project Safe Transmute]: https://rust-lang.github.io/rfcs/2835-project-safe-transmute.html
192//! [mcp-transmutability]: https://github.com/rust-lang/compiler-team/issues/411
193//!
194//! # MSRV
195//!
196//! See our [MSRV policy].
197//!
198//! [MSRV policy]: https://github.com/google/zerocopy/blob/main/POLICIES.md#msrv
199//!
200//! # Changelog
201//!
202//! Zerocopy uses [GitHub Releases].
203//!
204//! [GitHub Releases]: https://github.com/google/zerocopy/releases
205//!
206//! # Thanks
207//!
208//! Zerocopy is maintained by engineers at Google and Amazon with help from
209//! [many wonderful contributors][contributors]. Thank you to everyone who has
210//! lent a hand in making Rust a little more secure!
211//!
212//! [contributors]: https://github.com/google/zerocopy/graphs/contributors
213
214// Sometimes we want to use lints which were added after our MSRV.
215// `unknown_lints` is `warn` by default and we deny warnings in CI, so without
216// this attribute, any unknown lint would cause a CI failure when testing with
217// our MSRV.
218#![allow(unknown_lints, non_local_definitions, unreachable_patterns)]
219#![deny(renamed_and_removed_lints)]
220#![deny(
221    anonymous_parameters,
222    deprecated_in_future,
223    late_bound_lifetime_arguments,
224    missing_copy_implementations,
225    missing_debug_implementations,
226    missing_docs,
227    path_statements,
228    patterns_in_fns_without_body,
229    rust_2018_idioms,
230    trivial_numeric_casts,
231    unreachable_pub,
232    unsafe_op_in_unsafe_fn,
233    unused_extern_crates,
234    // We intentionally choose not to deny `unused_qualifications`. When items
235    // are added to the prelude (e.g., `core::mem::size_of`), this has the
236    // consequence of making some uses trigger this lint on the latest toolchain
237    // (e.g., `mem::size_of`), but fixing it (e.g. by replacing with `size_of`)
238    // does not work on older toolchains.
239    //
240    // We tested a more complicated fix in #1413, but ultimately decided that,
241    // since this lint is just a minor style lint, the complexity isn't worth it
242    // - it's fine to occasionally have unused qualifications slip through,
243    // especially since these do not affect our user-facing API in any way.
244    variant_size_differences
245)]
246#![cfg_attr(
247    __ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS,
248    deny(fuzzy_provenance_casts, lossy_provenance_casts)
249)]
250#![deny(
251    clippy::all,
252    clippy::alloc_instead_of_core,
253    clippy::arithmetic_side_effects,
254    clippy::as_underscore,
255    clippy::assertions_on_result_states,
256    clippy::as_conversions,
257    clippy::correctness,
258    clippy::dbg_macro,
259    clippy::decimal_literal_representation,
260    clippy::double_must_use,
261    clippy::get_unwrap,
262    clippy::indexing_slicing,
263    clippy::missing_inline_in_public_items,
264    clippy::missing_safety_doc,
265    clippy::must_use_candidate,
266    clippy::must_use_unit,
267    clippy::obfuscated_if_else,
268    clippy::perf,
269    clippy::print_stdout,
270    clippy::return_self_not_must_use,
271    clippy::std_instead_of_core,
272    clippy::style,
273    clippy::suspicious,
274    clippy::todo,
275    clippy::undocumented_unsafe_blocks,
276    clippy::unimplemented,
277    clippy::unnested_or_patterns,
278    clippy::unwrap_used,
279    clippy::use_debug
280)]
281// `clippy::incompatible_msrv` (implied by `clippy::suspicious`): This sometimes
282// has false positives, and we test on our MSRV in CI, so it doesn't help us
283// anyway.
284#![allow(clippy::needless_lifetimes, clippy::type_complexity, clippy::incompatible_msrv)]
285#![deny(
286    rustdoc::bare_urls,
287    rustdoc::broken_intra_doc_links,
288    rustdoc::invalid_codeblock_attributes,
289    rustdoc::invalid_html_tags,
290    rustdoc::invalid_rust_codeblocks,
291    rustdoc::missing_crate_level_docs,
292    rustdoc::private_intra_doc_links
293)]
294// In test code, it makes sense to weight more heavily towards concise, readable
295// code over correct or debuggable code.
296#![cfg_attr(any(test, kani), allow(
297    // In tests, you get line numbers and have access to source code, so panic
298    // messages are less important. You also often unwrap a lot, which would
299    // make expect'ing instead very verbose.
300    clippy::unwrap_used,
301    // In tests, there's no harm to "panic risks" - the worst that can happen is
302    // that your test will fail, and you'll fix it. By contrast, panic risks in
303    // production code introduce the possibility of code panicking
304    // unexpectedly "in the field".
305    clippy::arithmetic_side_effects,
306    clippy::indexing_slicing,
307))]
308#![cfg_attr(not(any(test, kani, feature = "std")), no_std)]
309#![cfg_attr(
310    all(feature = "simd-nightly", any(target_arch = "x86", target_arch = "x86_64")),
311    feature(stdarch_x86_avx512)
312)]
313#![cfg_attr(
314    all(feature = "simd-nightly", target_arch = "arm"),
315    feature(stdarch_arm_dsp, stdarch_arm_neon_intrinsics)
316)]
317#![cfg_attr(
318    all(feature = "simd-nightly", any(target_arch = "powerpc", target_arch = "powerpc64")),
319    feature(stdarch_powerpc)
320)]
321#![cfg_attr(feature = "float-nightly", feature(f16, f128))]
322#![cfg_attr(doc_cfg, feature(doc_cfg))]
323#![cfg_attr(
324    __ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS,
325    feature(layout_for_ptr, coverage_attribute)
326)]
327
328// This is a hack to allow zerocopy-derive derives to work in this crate. They
329// assume that zerocopy is linked as an extern crate, so they access items from
330// it as `zerocopy::Xxx`. This makes that still work.
331#[cfg(any(feature = "derive", test))]
332extern crate self as zerocopy;
333
334#[doc(hidden)]
335#[macro_use]
336pub mod util;
337
338pub mod byte_slice;
339pub mod byteorder;
340mod deprecated;
341
342#[doc(hidden)]
343pub mod doctests;
344
345// This module is `pub` so that zerocopy's error types and error handling
346// documentation is grouped together in a cohesive module. In practice, we
347// expect most users to use the re-export of `error`'s items to avoid identifier
348// stuttering.
349pub mod error;
350mod impls;
351#[doc(hidden)]
352pub mod layout;
353mod macros;
354#[doc(hidden)]
355pub mod pointer;
356mod r#ref;
357mod split_at;
358// FIXME(#252): If we make this pub, come up with a better name.
359mod wrappers;
360
361pub use crate::byte_slice::*;
362pub use crate::byteorder::*;
363pub use crate::error::*;
364pub use crate::r#ref::*;
365pub use crate::split_at::{Split, SplitAt};
366pub use crate::wrappers::*;
367
368use core::{
369    cell::{Cell, UnsafeCell},
370    cmp::Ordering,
371    fmt::{self, Debug, Display, Formatter},
372    hash::Hasher,
373    marker::PhantomData,
374    mem::{self, ManuallyDrop, MaybeUninit as CoreMaybeUninit},
375    num::{
376        NonZeroI128, NonZeroI16, NonZeroI32, NonZeroI64, NonZeroI8, NonZeroIsize, NonZeroU128,
377        NonZeroU16, NonZeroU32, NonZeroU64, NonZeroU8, NonZeroUsize, Wrapping,
378    },
379    ops::{Deref, DerefMut},
380    ptr::{self, NonNull},
381    slice,
382};
383
384#[cfg(feature = "std")]
385use std::io;
386
387use crate::pointer::invariant::{self, BecauseExclusive};
388
389#[cfg(any(feature = "alloc", test, kani))]
390extern crate alloc;
391#[cfg(any(feature = "alloc", test))]
392use alloc::{boxed::Box, vec::Vec};
393use util::MetadataOf;
394
395#[cfg(any(feature = "alloc", test))]
396use core::alloc::Layout;
397
398// Used by `TryFromBytes::is_bit_valid`.
399#[doc(hidden)]
400pub use crate::pointer::{invariant::BecauseImmutable, Maybe, Ptr};
401// Used by `KnownLayout`.
402#[doc(hidden)]
403pub use crate::layout::*;
404
405// For each trait polyfill, as soon as the corresponding feature is stable, the
406// polyfill import will be unused because method/function resolution will prefer
407// the inherent method/function over a trait method/function. Thus, we suppress
408// the `unused_imports` warning.
409//
410// See the documentation on `util::polyfills` for more information.
411#[allow(unused_imports)]
412use crate::util::polyfills::{self, NonNullExt as _, NumExt as _};
413
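// When tests are compiled on a nightly toolchain without the internal cfg set,
// emit a warning (by referencing a `#[deprecated]` const) so that it is
// obvious that some nightly-only tests will be skipped.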
414#[rustversion::nightly]
415#[cfg(all(test, not(__ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS)))]
416const _: () = {
417    #[deprecated = "some tests may be skipped due to missing RUSTFLAGS=\"--cfg __ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS\""]
418    const _WARNING: () = ();
419    #[warn(deprecated)]
420    _WARNING
421};
422
423// These exist so that code which was written against the old names will get
424// less confusing error messages when they upgrade to a more recent version of
425// zerocopy. On our MSRV toolchain, the error messages read, for example:
426//
427//   error[E0603]: trait `FromZeroes` is private
428//       --> examples/deprecated.rs:1:15
429//        |
430//   1    | use zerocopy::FromZeroes;
431//        |               ^^^^^^^^^^ private trait
432//        |
433//   note: the trait `FromZeroes` is defined here
434//       --> /Users/josh/workspace/zerocopy/src/lib.rs:1845:5
435//        |
436//   1845 | use FromZeros as FromZeroes;
437//        |     ^^^^^^^^^^^^^^^^^^^^^^^
438//
439// The "note" provides enough context to make it easy to figure out how to fix
440// the error.
441#[allow(unused)]
442use {FromZeros as FromZeroes, IntoBytes as AsBytes, Ref as LayoutVerified};
443
444/// Implements [`KnownLayout`].
445///
446/// This derive analyzes various aspects of a type's layout that are needed for
447/// some of zerocopy's APIs. It can be applied to structs, enums, and unions;
448/// e.g.:
449///
450/// ```
451/// # use zerocopy_derive::KnownLayout;
452/// #[derive(KnownLayout)]
453/// struct MyStruct {
454/// # /*
455///     ...
456/// # */
457/// }
458///
459/// #[derive(KnownLayout)]
460/// enum MyEnum {
461/// #   V00,
462/// # /*
463///     ...
464/// # */
465/// }
466///
467/// #[derive(KnownLayout)]
468/// union MyUnion {
469/// #   variant: u8,
470/// # /*
471///     ...
472/// # */
473/// }
474/// ```
475///
476/// # Limitations
477///
478/// This derive cannot currently be applied to unsized structs without an
479/// explicit `repr` attribute.
480///
481/// Some invocations of this derive run afoul of a [known bug] in Rust's type
482/// privacy checker. For example, this code:
483///
484/// ```compile_fail,E0446
485/// use zerocopy::*;
486/// # use zerocopy_derive::*;
487///
488/// #[derive(KnownLayout)]
489/// #[repr(C)]
490/// pub struct PublicType {
491///     leading: Foo,
492///     trailing: Bar,
493/// }
494///
495/// #[derive(KnownLayout)]
496/// struct Foo;
497///
498/// #[derive(KnownLayout)]
499/// struct Bar;
500/// ```
501///
502/// ...results in a compilation error:
503///
504/// ```text
505/// error[E0446]: private type `Bar` in public interface
506///  --> examples/bug.rs:3:10
507///    |
508/// 3  | #[derive(KnownLayout)]
509///    |          ^^^^^^^^^^^ can't leak private type
510/// ...
511/// 14 | struct Bar;
512///    | ---------- `Bar` declared as private
513///    |
514///    = note: this error originates in the derive macro `KnownLayout` (in Nightly builds, run with -Z macro-backtrace for more info)
515/// ```
516///
517/// This issue arises when `#[derive(KnownLayout)]` is applied to `repr(C)`
518/// structs whose trailing field type is less public than the enclosing struct.
519///
520/// To work around this, mark the trailing field type `pub` and annotate it with
521/// `#[doc(hidden)]`; e.g.:
522///
523/// ```no_run
524/// use zerocopy::*;
525/// # use zerocopy_derive::*;
526///
527/// #[derive(KnownLayout)]
528/// #[repr(C)]
529/// pub struct PublicType {
530///     leading: Foo,
531///     trailing: Bar,
532/// }
533///
534/// #[derive(KnownLayout)]
535/// struct Foo;
536///
537/// #[doc(hidden)]
538/// #[derive(KnownLayout)]
539/// pub struct Bar; // <- `Bar` is now also `pub`
540/// ```
541///
542/// [known bug]: https://github.com/rust-lang/rust/issues/45713
543#[cfg(any(feature = "derive", test))]
544#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
545pub use zerocopy_derive::KnownLayout;
546
547/// Indicates that zerocopy can reason about certain aspects of a type's layout.
548///
549/// This trait is required by many of zerocopy's APIs. It supports sized types,
550/// slices, and [slice DSTs](#dynamically-sized-types).
551///
552/// # Implementation
553///
554/// **Do not implement this trait yourself!** Instead, use
555/// [`#[derive(KnownLayout)]`][derive]; e.g.:
556///
557/// ```
558/// # use zerocopy_derive::KnownLayout;
559/// #[derive(KnownLayout)]
560/// struct MyStruct {
561/// # /*
562///     ...
563/// # */
564/// }
565///
566/// #[derive(KnownLayout)]
567/// enum MyEnum {
568/// # /*
569///     ...
570/// # */
571/// }
572///
573/// #[derive(KnownLayout)]
574/// union MyUnion {
575/// #   variant: u8,
576/// # /*
577///     ...
578/// # */
579/// }
580/// ```
581///
582/// This derive performs a sophisticated analysis to deduce the layout
583/// characteristics of types. You **must** implement this trait via the derive.
584///
585/// # Dynamically-sized types
586///
587/// `KnownLayout` supports slice-based dynamically sized types ("slice DSTs").
588///
589/// A slice DST is a type whose trailing field is either a slice or another
590/// slice DST, rather than a type with fixed size. For example:
591///
592/// ```
593/// #[repr(C)]
594/// struct PacketHeader {
595/// # /*
596///     ...
597/// # */
598/// }
599///
600/// #[repr(C)]
601/// struct Packet {
602///     header: PacketHeader,
603///     body: [u8],
604/// }
605/// ```
606///
607/// It can be useful to think of slice DSTs as a generalization of slices - in
608/// other words, a normal slice is just the special case of a slice DST with
609/// zero leading fields. In particular:
610/// - Like slices, slice DSTs can have different lengths at runtime
611/// - Like slices, slice DSTs cannot be passed by-value, but only by reference
612///   or via other indirection such as `Box`
613/// - Like slices, a reference (or `Box`, or other pointer type) to a slice DST
614///   encodes the number of elements in the trailing slice field
615///
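/// For example, a sketch (using zerocopy's derives) of parsing such a type
/// from a byte slice, where the number of trailing elements is inferred from
/// the length of the source:
///
/// ```
/// use zerocopy::*;
/// # use zerocopy_derive::*;
///
/// #[derive(KnownLayout, FromBytes, Immutable)]
/// #[repr(C)]
/// struct Packet {
///     length: u8,
///     body: [u8],
/// }
///
/// let bytes = &[5, 0, 1, 2, 3][..];
/// let packet = Packet::ref_from_bytes(bytes).unwrap();
/// assert_eq!(packet.length, 5);
/// assert_eq!(packet.body, [0, 1, 2, 3]);
/// ```
///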
616/// ## Slice DST layout
617///
618/// Just like other composite Rust types, the layout of a slice DST is not
619/// well-defined unless it is specified using an explicit `#[repr(...)]`
620/// attribute such as `#[repr(C)]`. [Other representations are
621/// supported][reprs], but in this section, we'll use `#[repr(C)]` as our
622/// example.
623///
624/// A `#[repr(C)]` slice DST is laid out [just like sized `#[repr(C)]`
625/// types][repr-c-structs], but the presence of a variable-length field
626/// introduces the possibility of *dynamic padding*. In particular, it may be
627/// necessary to add trailing padding *after* the trailing slice field in order
628/// to satisfy the outer type's alignment, and the amount of padding required
629/// may be a function of the length of the trailing slice field. This is just a
630/// natural consequence of the normal `#[repr(C)]` rules applied to slice DSTs,
631/// but it can result in surprising behavior. For example, consider the
632/// following type:
633///
634/// ```
635/// #[repr(C)]
636/// struct Foo {
637///     a: u32,
638///     b: u8,
639///     z: [u16],
640/// }
641/// ```
642///
643/// Assuming that `u32` has alignment 4 (this is not true on all platforms),
644/// then `Foo` has alignment 4 as well. Here is the smallest possible value for
645/// `Foo`:
646///
647/// ```text
648/// byte offset | 01234567
649///       field | aaaab---
650///                    ><
651/// ```
652///
653/// In this value, `z` has length 0. Abiding by `#[repr(C)]`, the lowest offset
654/// that we can place `z` at is 5, but since `z` has alignment 2, we need to
655/// round up to offset 6. This means that there is one byte of padding between
656/// `b` and `z`, then 0 bytes of `z` itself (denoted `><` in this diagram), and
657/// then two bytes of padding after `z` in order to satisfy the overall
658/// alignment of `Foo`. The size of this instance is 8 bytes.
659///
660/// What about if `z` has length 1?
661///
662/// ```text
663/// byte offset | 01234567
664///       field | aaaab-zz
665/// ```
666///
667/// In this instance, `z` has length 1, and thus takes up 2 bytes. That means
668/// that we no longer need padding after `z` in order to satisfy `Foo`'s
669/// alignment. We've now seen two different values of `Foo` with two different
670/// lengths of `z`, but they both have the same size - 8 bytes.
671///
672/// What about if `z` has length 2?
673///
674/// ```text
675/// byte offset | 012345678901
676///       field | aaaab-zzzz--
677/// ```
678///
679/// Now `z` has length 2, and thus takes up 4 bytes. This brings our un-padded
680/// size to 10, and so we now need another 2 bytes of padding after `z` to
681/// satisfy `Foo`'s alignment.
682///
683/// Again, all of this is just a logical consequence of the `#[repr(C)]` rules
684/// applied to slice DSTs, but it can be surprising that the amount of trailing
685/// padding becomes a function of the trailing slice field's length, and thus
686/// can only be computed at runtime.
687///
688/// [reprs]: https://doc.rust-lang.org/reference/type-layout.html#representations
689/// [repr-c-structs]: https://doc.rust-lang.org/reference/type-layout.html#reprc-structs
690///
691/// ## What is a valid size?
692///
693/// There are two places in zerocopy's API that we refer to "a valid size" of a
694/// type. In normal casts or conversions, where the source is a byte slice, we
695/// need to know whether the source byte slice is a valid size of the
696/// destination type. In prefix or suffix casts, we need to know whether *there
697/// exists* a valid size of the destination type which fits in the source byte
698/// slice and, if so, what the largest such size is.
699///
700/// As outlined above, a slice DST's size is defined by the number of elements
701/// in its trailing slice field. However, there is not necessarily a 1-to-1
702/// mapping between trailing slice field length and overall size. As we saw in
703/// the previous section with the type `Foo`, instances with both 0 and 1
704/// elements in the trailing `z` field result in a `Foo` whose size is 8 bytes.
705///
706/// When we say "x is a valid size of `T`", we mean one of two things:
707/// - If `T: Sized`, then we mean that `x == size_of::<T>()`
708/// - If `T` is a slice DST, then we mean that there exists a `len` such that the instance of
709///   `T` with `len` trailing slice elements has size `x`
710///
711/// When we say "largest possible size of `T` that fits in a byte slice", we
712/// mean one of two things:
713/// - If `T: Sized`, then we mean `size_of::<T>()` if the byte slice is at least
714///   `size_of::<T>()` bytes long
715/// - If `T` is a slice DST, then we mean to consider all values, `len`, such
716///   that the instance of `T` with `len` trailing slice elements fits in the
717///   byte slice, and to choose the largest such `len`, if any
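///
/// For example (a sketch using a hypothetical `foo_size` helper, not part of
/// zerocopy's API), the valid sizes of the `Foo` type from the previous
/// section follow directly from its layout - offset 6, element size 2, and
/// (assumed) alignment 4:
///
/// ```
/// // Size of `Foo` with `len` trailing `u16` elements: round `6 + 2 * len`
/// // up to `Foo`'s alignment of 4.
/// fn foo_size(len: usize) -> usize {
///     let unpadded = 6 + 2 * len;
///     (unpadded + 3) & !3
/// }
///
/// assert_eq!(foo_size(0), 8); // 0 and 1 elements both produce a size of 8,
/// assert_eq!(foo_size(1), 8); // so 8 is a valid size of `Foo`...
/// assert_eq!(foo_size(2), 12); // ...but 10 is not.
/// ```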
718///
720/// # Safety
721///
722/// This trait does not convey any safety guarantees to code outside this crate.
723///
724/// You must not rely on the `#[doc(hidden)]` internals of `KnownLayout`. Future
725/// releases of zerocopy may make backwards-breaking changes to these items,
726/// including changes that only affect soundness, which may cause code which
727/// uses those items to silently become unsound.
728///
729#[cfg_attr(feature = "derive", doc = "[derive]: zerocopy_derive::KnownLayout")]
730#[cfg_attr(
731    not(feature = "derive"),
732    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.KnownLayout.html"),
733)]
734#[cfg_attr(
735    zerocopy_diagnostic_on_unimplemented_1_78_0,
736    diagnostic::on_unimplemented(note = "Consider adding `#[derive(KnownLayout)]` to `{Self}`")
737)]
738pub unsafe trait KnownLayout {
739    // The `Self: Sized` bound makes it so that `KnownLayout` can still be
740    // object safe. It's not currently object safe thanks to `const LAYOUT`, and
741    // it likely won't be in the future, but there's no reason not to be
742    // forwards-compatible with object safety.
743    #[doc(hidden)]
744    fn only_derive_is_allowed_to_implement_this_trait()
745    where
746        Self: Sized;
747
748    /// The type of metadata stored in a pointer to `Self`.
749    ///
750    /// This is `()` for sized types and `usize` for slice DSTs.
751    type PointerMetadata: PointerMetadata;
752
753    /// A maybe-uninitialized analog of `Self`
754    ///
755    /// # Safety
756    ///
757    /// `Self::LAYOUT` and `Self::MaybeUninit::LAYOUT` are identical.
758    /// `Self::MaybeUninit` admits uninitialized bytes in all positions.
759    #[doc(hidden)]
760    type MaybeUninit: ?Sized + KnownLayout<PointerMetadata = Self::PointerMetadata>;
761
762    /// The layout of `Self`.
763    ///
764    /// # Safety
765    ///
766    /// Callers may assume that `LAYOUT` accurately reflects the layout of
767    /// `Self`. In particular:
768    /// - `LAYOUT.align` is equal to `Self`'s alignment
769    /// - If `Self: Sized`, then `LAYOUT.size_info == SizeInfo::Sized { size }`
770    ///   where `size == size_of::<Self>()`
771    /// - If `Self` is a slice DST, then `LAYOUT.size_info ==
772    ///   SizeInfo::SliceDst(slice_layout)` where:
773    ///   - The size, `size`, of an instance of `Self` with `elems` trailing
774    ///     slice elements is equal to `slice_layout.offset +
775    ///     slice_layout.elem_size * elems` rounded up to the nearest multiple
776    ///     of `LAYOUT.align`
777    ///   - For such an instance, any bytes in the range `[slice_layout.offset +
778    ///     slice_layout.elem_size * elems, size)` are padding and must not be
779    ///     assumed to be initialized
780    #[doc(hidden)]
781    const LAYOUT: DstLayout;
782
783    /// SAFETY: The returned pointer has the same address and provenance as
784    /// `bytes`. If `Self` is a DST, the returned pointer's referent has `elems`
785    /// elements in its trailing slice.
786    #[doc(hidden)]
787    fn raw_from_ptr_len(bytes: NonNull<u8>, meta: Self::PointerMetadata) -> NonNull<Self>;
788
789    /// Extracts the metadata from a pointer to `Self`.
790    ///
791    /// # Safety
792    ///
793    /// `pointer_to_metadata` always returns the correct metadata stored in
794    /// `ptr`.
795    #[doc(hidden)]
796    fn pointer_to_metadata(ptr: *mut Self) -> Self::PointerMetadata;
797
798    /// Computes the length of the byte range addressed by `ptr`.
799    ///
800    /// Returns `None` if the resulting length would not fit in a `usize`.
801    ///
802    /// # Safety
803    ///
804    /// Callers may assume that `size_of_val_raw` always returns the correct
805    /// size.
806    ///
807    /// Callers may assume that, if `ptr` addresses a byte range whose length
808    /// fits in a `usize`, this will return `Some`.
809    #[doc(hidden)]
810    #[must_use]
811    #[inline(always)]
812    fn size_of_val_raw(ptr: NonNull<Self>) -> Option<usize> {
813        let meta = Self::pointer_to_metadata(ptr.as_ptr());
814        // SAFETY: `size_for_metadata` promises to only return `None` if the
815        // resulting size would not fit in a `usize`.
816        meta.size_for_metadata(Self::LAYOUT)
817    }
818
819    #[doc(hidden)]
820    #[must_use]
821    #[inline(always)]
822    fn raw_dangling() -> NonNull<Self> {
823        let meta = Self::PointerMetadata::from_elem_count(0);
824        Self::raw_from_ptr_len(NonNull::dangling(), meta)
825    }
826}
827
828/// Efficiently produces the [`TrailingSliceLayout`] of `T`.
829#[inline(always)]
830pub(crate) fn trailing_slice_layout<T>() -> TrailingSliceLayout
831where
832    T: ?Sized + KnownLayout<PointerMetadata = usize>,
833{
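    // Routing the extraction through an associated `const` on a private helper
    // trait guarantees that the `match` on `T::LAYOUT.size_info` is evaluated
    // at compile time for each `T`, rather than branching at runtime.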
834    trait LayoutFacts {
835        const SIZE_INFO: TrailingSliceLayout;
836    }
837
838    impl<T: ?Sized> LayoutFacts for T
839    where
840        T: KnownLayout<PointerMetadata = usize>,
841    {
842        const SIZE_INFO: TrailingSliceLayout = match T::LAYOUT.size_info {
843            crate::SizeInfo::Sized { .. } => const_panic!("unreachable"),
844            crate::SizeInfo::SliceDst(info) => info,
845        };
846    }
847
848    T::SIZE_INFO
849}
850
851/// The metadata associated with a [`KnownLayout`] type.
852#[doc(hidden)]
853pub trait PointerMetadata: Copy + Eq + Debug {
854    /// Constructs a `Self` from an element count.
855    ///
856    /// If `Self = ()`, this returns `()`. If `Self = usize`, this returns
857    /// `elems`. No other types are currently supported.
858    fn from_elem_count(elems: usize) -> Self;
859
860    /// Computes the size of the object with the given layout and pointer
861    /// metadata.
862    ///
863    /// # Panics
864    ///
865    /// If `Self = ()`, `layout` must describe a sized type. If `Self = usize`,
866    /// `layout` must describe a slice DST. Otherwise, `size_for_metadata` may
867    /// panic.
868    ///
869    /// # Safety
870    ///
871    /// `size_for_metadata` promises to only return `None` if the resulting size
872    /// would not fit in a `usize`.
873    fn size_for_metadata(self, layout: DstLayout) -> Option<usize>;
874}
875
876impl PointerMetadata for () {
877    #[inline]
878    #[allow(clippy::unused_unit)]
879    fn from_elem_count(_elems: usize) -> () {}
880
881    #[inline]
882    fn size_for_metadata(self, layout: DstLayout) -> Option<usize> {
883        match layout.size_info {
884            SizeInfo::Sized { size } => Some(size),
885            // NOTE: This branch is unreachable, but we return `None` rather
886            // than `unreachable!()` to avoid generating panic paths.
887            SizeInfo::SliceDst(_) => None,
888        }
889    }
890}
891
892impl PointerMetadata for usize {
893    #[inline]
894    fn from_elem_count(elems: usize) -> usize {
895        elems
896    }
897
898    #[inline]
899    fn size_for_metadata(self, layout: DstLayout) -> Option<usize> {
900        match layout.size_info {
901            SizeInfo::SliceDst(TrailingSliceLayout { offset, elem_size }) => {
902                let slice_len = elem_size.checked_mul(self)?;
903                let without_padding = offset.checked_add(slice_len)?;
904                without_padding.checked_add(util::padding_needed_for(without_padding, layout.align))
905            }
906            // NOTE: This branch is unreachable, but we return `None` rather
907            // than `unreachable!()` to avoid generating panic paths.
908            SizeInfo::Sized { .. } => None,
909        }
910    }
911}
912
913// SAFETY: Delegates safety to `DstLayout::for_slice`.
914unsafe impl<T> KnownLayout for [T] {
915    #[allow(clippy::missing_inline_in_public_items, dead_code)]
916    #[cfg_attr(
917        all(coverage_nightly, __ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS),
918        coverage(off)
919    )]
920    fn only_derive_is_allowed_to_implement_this_trait()
921    where
922        Self: Sized,
923    {
924    }
925
926    type PointerMetadata = usize;
927
928    // SAFETY: `CoreMaybeUninit<T>::LAYOUT` and `T::LAYOUT` are identical
929    // because `CoreMaybeUninit<T>` has the same size and alignment as `T` [1].
930    // Consequently, `[CoreMaybeUninit<T>]::LAYOUT` and `[T]::LAYOUT` are
931    // identical, because they both lack a fixed-sized prefix and because they
932    // inherit the alignments of their inner element type (which are identical)
933    // [2][3].
934    //
935    // `[CoreMaybeUninit<T>]` admits uninitialized bytes at all positions
936    // because `CoreMaybeUninit<T>` admits uninitialized bytes at all positions
937    // and because the inner elements of `[CoreMaybeUninit<T>]` are laid out
938    // back-to-back [2][3].
939    //
940    // [1] Per https://doc.rust-lang.org/1.81.0/std/mem/union.MaybeUninit.html#layout-1:
941    //
942    //   `MaybeUninit<T>` is guaranteed to have the same size, alignment, and ABI as
943    //   `T`
944    //
945    // [2] Per https://doc.rust-lang.org/1.82.0/reference/type-layout.html#slice-layout:
946    //
947    //   Slices have the same layout as the section of the array they slice.
948    //
949    // [3] Per https://doc.rust-lang.org/1.82.0/reference/type-layout.html#array-layout:
950    //
951    //   An array of `[T; N]` has a size of `size_of::<T>() * N` and the same
952    //   alignment of `T`. Arrays are laid out so that the zero-based `nth`
953    //   element of the array is offset from the start of the array by `n *
954    //   size_of::<T>()` bytes.
955    type MaybeUninit = [CoreMaybeUninit<T>];
956
957    const LAYOUT: DstLayout = DstLayout::for_slice::<T>();
958
959    // SAFETY: `.cast` preserves address and provenance. The returned pointer
960    // refers to an object with `elems` elements by construction.
961    #[inline(always)]
962    fn raw_from_ptr_len(data: NonNull<u8>, elems: usize) -> NonNull<Self> {
963        // FIXME(#67): Remove this allow. See NonNullExt for more details.
964        #[allow(unstable_name_collisions)]
965        NonNull::slice_from_raw_parts(data.cast::<T>(), elems)
966    }
967
968    #[inline(always)]
969    fn pointer_to_metadata(ptr: *mut [T]) -> usize {
970        #[allow(clippy::as_conversions)]
971        let slc = ptr as *const [()];
972
973        // SAFETY:
974        // - `()` has alignment 1, so `slc` is trivially aligned.
975        // - `slc` was derived from a non-null pointer.
976        // - The size is 0 regardless of the length, so it is sound to
977        //   materialize a reference regardless of location.
978        // - By invariant, `self.ptr` has valid provenance.
979        let slc = unsafe { &*slc };
980
981        // This is correct because the preceding `as` cast preserves the number
982        // of slice elements. [1]
983        //
984        // [1] Per https://doc.rust-lang.org/reference/expressions/operator-expr.html#pointer-to-pointer-cast:
985        //
986        //   For slice types like `[T]` and `[U]`, the raw pointer types `*const
987        //   [T]`, `*mut [T]`, `*const [U]`, and `*mut [U]` encode the number of
988        //   elements in this slice. Casts between these raw pointer types
989        //   preserve the number of elements. ... The same holds for `str` and
990        //   any compound type whose unsized tail is a slice type, such as
991        //   struct `Foo(i32, [u8])` or `(u64, Foo)`.
992        slc.len()
993    }
994}
995
996#[rustfmt::skip]
997impl_known_layout!(
998    (),
999    u8, i8, u16, i16, u32, i32, u64, i64, u128, i128, usize, isize, f32, f64,
1000    bool, char,
1001    NonZeroU8, NonZeroI8, NonZeroU16, NonZeroI16, NonZeroU32, NonZeroI32,
1002    NonZeroU64, NonZeroI64, NonZeroU128, NonZeroI128, NonZeroUsize, NonZeroIsize
1003);
1004#[rustfmt::skip]
1005#[cfg(feature = "float-nightly")]
1006impl_known_layout!(
1007    #[cfg_attr(doc_cfg, doc(cfg(feature = "float-nightly")))]
1008    f16,
1009    #[cfg_attr(doc_cfg, doc(cfg(feature = "float-nightly")))]
1010    f128
1011);
1012#[rustfmt::skip]
1013impl_known_layout!(
1014    T         => Option<T>,
1015    T: ?Sized => PhantomData<T>,
1016    T         => Wrapping<T>,
1017    T         => CoreMaybeUninit<T>,
1018    T: ?Sized => *const T,
1019    T: ?Sized => *mut T,
1020    T: ?Sized => &'_ T,
1021    T: ?Sized => &'_ mut T,
1022);
1023impl_known_layout!(const N: usize, T => [T; N]);
1024
1025// SAFETY: `str` has the same representation as `[u8]`. `ManuallyDrop<T>` [1],
1026// `UnsafeCell<T>` [2], and `Cell<T>` [3] have the same representation as `T`.
1027//
1028// [1] Per https://doc.rust-lang.org/1.85.0/std/mem/struct.ManuallyDrop.html:
1029//
1030//   `ManuallyDrop<T>` is guaranteed to have the same layout and bit validity as
1031//   `T`
1032//
1033// [2] Per https://doc.rust-lang.org/1.85.0/core/cell/struct.UnsafeCell.html#memory-layout:
1034//
1035//   `UnsafeCell<T>` has the same in-memory representation as its inner type
1036//   `T`.
1037//
1038// [3] Per https://doc.rust-lang.org/1.85.0/core/cell/struct.Cell.html#memory-layout:
1039//
1040//   `Cell<T>` has the same in-memory representation as `T`.
1041const _: () = unsafe {
1042    unsafe_impl_known_layout!(
1043        #[repr([u8])]
1044        str
1045    );
1046    unsafe_impl_known_layout!(T: ?Sized + KnownLayout => #[repr(T)] ManuallyDrop<T>);
1047    unsafe_impl_known_layout!(T: ?Sized + KnownLayout => #[repr(T)] UnsafeCell<T>);
1048    unsafe_impl_known_layout!(T: ?Sized + KnownLayout => #[repr(T)] Cell<T>);
1049};
1050
1051// SAFETY:
1052// - By consequence of the invariant on `T::MaybeUninit` that `T::LAYOUT` and
1053//   `T::MaybeUninit::LAYOUT` are equal, `T` and `T::MaybeUninit` have the same:
1054//   - Fixed prefix size
1055//   - Alignment
1056//   - (For DSTs) trailing slice element size
1057// - By consequence of the above, `T::MaybeUninit` and `T` require the same
1058//   kind of pointer metadata, and thus it is valid to perform an `as` cast
1059//   from `*mut T` to `*mut T::MaybeUninit`, and this operation preserves
1060//   referent size (i.e., `size_of_val_raw`).
1061const _: () = unsafe {
1062    unsafe_impl_known_layout!(T: ?Sized + KnownLayout => #[repr(T::MaybeUninit)] MaybeUninit<T>)
1063};
1064
1065/// Analyzes whether a type is [`FromZeros`].
1066///
1067/// This derive analyzes, at compile time, whether the annotated type satisfies
1068/// the [safety conditions] of `FromZeros` and implements `FromZeros` and its
1069/// supertraits if it is sound to do so. This derive can be applied to structs,
1070/// enums, and unions; e.g.:
1071///
1072/// ```
1073/// # use zerocopy_derive::{FromZeros, Immutable};
1074/// #[derive(FromZeros)]
1075/// struct MyStruct {
1076/// # /*
1077///     ...
1078/// # */
1079/// }
1080///
1081/// #[derive(FromZeros)]
1082/// #[repr(u8)]
1083/// enum MyEnum {
1084/// #   Variant0,
1085/// # /*
1086///     ...
1087/// # */
1088/// }
1089///
1090/// #[derive(FromZeros, Immutable)]
1091/// union MyUnion {
1092/// #   variant: u8,
1093/// # /*
1094///     ...
1095/// # */
1096/// }
1097/// ```
1098///
1099/// [safety conditions]: trait@FromZeros#safety
1100///
1101/// # Analysis
1102///
1103/// *This section describes, roughly, the analysis performed by this derive to
1104/// determine whether it is sound to implement `FromZeros` for a given type.
1105/// Unless you are modifying the implementation of this derive, or attempting to
1106/// manually implement `FromZeros` for a type yourself, you don't need to read
1107/// this section.*
1108///
1109/// If a type has the following properties, then this derive can implement
1110/// `FromZeros` for that type:
1111///
1112/// - If the type is a struct, all of its fields must be `FromZeros`.
1113/// - If the type is an enum:
1114///   - It must have a defined representation (`repr`s `C`, `u8`, `u16`, `u32`,
1115///     `u64`, `usize`, `i8`, `i16`, `i32`, `i64`, or `isize`).
1116///   - It must have a variant with a discriminant/tag of `0`. See
1117///     [the reference] for a description of how discriminant values are
1118///     specified.
1119///   - The fields of that variant must be `FromZeros`.
1120///
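/// For example, a sketch of an enum accepted by this analysis - it has a
/// defined representation and a variant with a discriminant of `0`:
///
/// ```
/// # use zerocopy_derive::FromZeros;
/// #[derive(FromZeros)]
/// #[repr(u8)]
/// enum Status {
///     Idle = 0,
///     Busy = 1,
/// }
/// ```
///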
1121/// This analysis is subject to change. Unsafe code may *only* rely on the
1122/// documented [safety conditions] of `FromZeros`, and must *not* rely on the
1123/// implementation details of this derive.
1124///
1125/// [the reference]: https://doc.rust-lang.org/reference/items/enumerations.html#custom-discriminant-values-for-fieldless-enumerations
1126///
1127/// ## Why isn't an explicit representation required for structs?
1128///
1129/// Neither this derive, nor the [safety conditions] of `FromZeros`, requires
1130/// that structs are marked with `#[repr(C)]`.
1131///
1132/// Per the [Rust reference][reference],
1133///
1134/// > The representation of a type can change the padding between fields, but
1135/// > does not change the layout of the fields themselves.
1136///
1137/// [reference]: https://doc.rust-lang.org/reference/type-layout.html#representations
1138///
1139/// Since the layout of structs only consists of padding bytes and field bytes,
1140/// a struct is soundly `FromZeros` if:
1141/// 1. its padding is soundly `FromZeros`, and
1142/// 2. its fields are soundly `FromZeros`.
1143///
1144/// The answer to the first question is always yes: padding bytes do not have
1145/// any validity constraints. A [discussion] of this question in the Unsafe Code
1146/// Guidelines Working Group concluded that it would be virtually unimaginable
1147/// for future versions of rustc to add validity constraints to padding bytes.
1148///
1149/// [discussion]: https://github.com/rust-lang/unsafe-code-guidelines/issues/174
1150///
1151/// Whether a struct is soundly `FromZeros` therefore solely depends on whether
1152/// its fields are `FromZeros`.
1153// FIXME(#146): Document why we don't require an enum to have an explicit `repr`
1154// attribute.
1155#[cfg(any(feature = "derive", test))]
1156#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
1157pub use zerocopy_derive::FromZeros;
1158
1159/// Analyzes whether a type is [`Immutable`].
1160///
1161/// This derive analyzes, at compile time, whether the annotated type satisfies
1162/// the [safety conditions] of `Immutable` and implements `Immutable` if it is
1163/// sound to do so. This derive can be applied to structs, enums, and unions;
1164/// e.g.:
1165///
1166/// ```
1167/// # use zerocopy_derive::Immutable;
1168/// #[derive(Immutable)]
1169/// struct MyStruct {
1170/// # /*
1171///     ...
1172/// # */
1173/// }
1174///
1175/// #[derive(Immutable)]
1176/// enum MyEnum {
1177/// #   Variant0,
1178/// # /*
1179///     ...
1180/// # */
1181/// }
1182///
1183/// #[derive(Immutable)]
1184/// union MyUnion {
1185/// #   variant: u8,
1186/// # /*
1187///     ...
1188/// # */
1189/// }
1190/// ```
1191///
1192/// # Analysis
1193///
1194/// *This section describes, roughly, the analysis performed by this derive to
1195/// determine whether it is sound to implement `Immutable` for a given type.
1196/// Unless you are modifying the implementation of this derive, you don't need
1197/// to read this section.*
1198///
1199/// If a type has the following properties, then this derive can implement
1200/// `Immutable` for that type:
1201///
1202/// - All fields must be `Immutable`.
1203///
1204/// This analysis is subject to change. Unsafe code may *only* rely on the
1205/// documented [safety conditions] of `Immutable`, and must *not* rely on the
1206/// implementation details of this derive.
1207///
1208/// [safety conditions]: trait@Immutable#safety
1209#[cfg(any(feature = "derive", test))]
1210#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
1211pub use zerocopy_derive::Immutable;
1212
1213/// Types which are free from interior mutability.
1214///
1215/// `T: Immutable` indicates that `T` does not permit interior mutation, except
1216/// by ownership or an exclusive (`&mut`) borrow.
1217///
1218/// # Implementation
1219///
1220/// **Do not implement this trait yourself!** Instead, use
1221/// [`#[derive(Immutable)]`][derive] (requires the `derive` Cargo feature);
1222/// e.g.:
1223///
1224/// ```
1225/// # use zerocopy_derive::Immutable;
1226/// #[derive(Immutable)]
1227/// struct MyStruct {
1228/// # /*
1229///     ...
1230/// # */
1231/// }
1232///
1233/// #[derive(Immutable)]
1234/// enum MyEnum {
1235/// # /*
1236///     ...
1237/// # */
1238/// }
1239///
1240/// #[derive(Immutable)]
1241/// union MyUnion {
1242/// #   variant: u8,
1243/// # /*
1244///     ...
1245/// # */
1246/// }
1247/// ```
1248///
1249/// This derive performs a sophisticated, compile-time safety analysis to
1250/// determine whether a type is `Immutable`.
1251///
1252/// # Safety
1253///
1254/// Unsafe code outside of this crate must not make any assumptions about `T`
1255/// based on `T: Immutable`. We reserve the right to relax the requirements for
1256/// `Immutable` in the future, and if unsafe code outside of this crate makes
1257/// assumptions based on `T: Immutable`, future relaxations may cause that code
1258/// to become unsound.
1259///
1260// # Safety (Internal)
1261//
1262// If `T: Immutable`, unsafe code *inside of this crate* may assume that, given
1263// `t: &T`, `t` does not contain any [`UnsafeCell`]s at any byte location
1264// within the byte range addressed by `t`. This includes ranges of length 0
1265// (e.g., `UnsafeCell<()>` and `[UnsafeCell<u8>; 0]`). If a type which
1266// violates this assumption implements `Immutable`, it may cause this crate
1267// to exhibit [undefined behavior].
1268//
1269// [`UnsafeCell`]: core::cell::UnsafeCell
1270// [undefined behavior]: https://raphlinus.github.io/programming/rust/2018/08/17/undefined-behavior.html
1271#[cfg_attr(
1272    feature = "derive",
1273    doc = "[derive]: zerocopy_derive::Immutable",
1274    doc = "[derive-analysis]: zerocopy_derive::Immutable#analysis"
1275)]
1276#[cfg_attr(
1277    not(feature = "derive"),
1278    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.Immutable.html"),
1279    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.Immutable.html#analysis"),
1280)]
1281#[cfg_attr(
1282    zerocopy_diagnostic_on_unimplemented_1_78_0,
1283    diagnostic::on_unimplemented(note = "Consider adding `#[derive(Immutable)]` to `{Self}`")
1284)]
1285pub unsafe trait Immutable {
1286    // The `Self: Sized` bound makes it so that `Immutable` is still object
1287    // safe.
1288    #[doc(hidden)]
1289    fn only_derive_is_allowed_to_implement_this_trait()
1290    where
1291        Self: Sized;
1292}
1293
1294/// Implements [`TryFromBytes`].
1295///
1296/// This derive synthesizes the runtime checks required to check whether a
1297/// sequence of initialized bytes corresponds to a valid instance of a type.
1298/// This derive can be applied to structs, enums, and unions; e.g.:
1299///
1300/// ```
1301/// # use zerocopy_derive::{TryFromBytes, Immutable};
1302/// #[derive(TryFromBytes)]
1303/// struct MyStruct {
1304/// # /*
1305///     ...
1306/// # */
1307/// }
1308///
1309/// #[derive(TryFromBytes)]
1310/// #[repr(u8)]
1311/// enum MyEnum {
1312/// #   V00,
1313/// # /*
1314///     ...
1315/// # */
1316/// }
1317///
1318/// #[derive(TryFromBytes, Immutable)]
1319/// union MyUnion {
1320/// #   variant: u8,
1321/// # /*
1322///     ...
1323/// # */
1324/// }
1325/// ```
1326///
1327/// # Portability
1328///
1329/// To ensure consistent endianness for enums with multi-byte representations,
1330/// explicitly specify and convert each discriminant using `.to_le()` or
1331/// `.to_be()`; e.g.:
1332///
1333/// ```
1334/// # use zerocopy_derive::TryFromBytes;
1335/// // `DataStoreVersion` is encoded in little-endian.
1336/// #[derive(TryFromBytes)]
1337/// #[repr(u32)]
1338/// pub enum DataStoreVersion {
1339///     /// Version 1 of the data store.
1340///     V1 = 9u32.to_le(),
1341///
1342///     /// Version 2 of the data store.
1343///     V2 = 10u32.to_le(),
1344/// }
1345/// ```
1346///
1347/// [safety conditions]: trait@TryFromBytes#safety
1348#[cfg(any(feature = "derive", test))]
1349#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
1350pub use zerocopy_derive::TryFromBytes;
1351
1352/// Types for which some bit patterns are valid.
1353///
1354/// A memory region of the appropriate length which contains initialized bytes
1355/// can be viewed as a `TryFromBytes` type so long as the runtime value of those
1356/// bytes corresponds to a [*valid instance*] of that type. For example,
1357/// [`bool`] is `TryFromBytes`, so zerocopy can transmute a [`u8`] into a
1358/// [`bool`] so long as it first checks that the value of the [`u8`] is `0` or
1359/// `1`.
1360///
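/// A brief sketch of that check in action, using `try_read_from_bytes`:
///
/// ```
/// use zerocopy::TryFromBytes;
///
/// // `1` is a valid `bool`, but `3` is not.
/// assert!(bool::try_read_from_bytes(&[1][..]).is_ok());
/// assert!(bool::try_read_from_bytes(&[3][..]).is_err());
/// ```
///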
1361/// # Implementation
1362///
1363/// **Do not implement this trait yourself!** Instead, use
1364/// [`#[derive(TryFromBytes)]`][derive]; e.g.:
1365///
1366/// ```
1367/// # use zerocopy_derive::{TryFromBytes, Immutable};
1368/// #[derive(TryFromBytes)]
1369/// struct MyStruct {
1370/// # /*
1371///     ...
1372/// # */
1373/// }
1374///
1375/// #[derive(TryFromBytes)]
1376/// #[repr(u8)]
1377/// enum MyEnum {
1378/// #   V00,
1379/// # /*
1380///     ...
1381/// # */
1382/// }
1383///
1384/// #[derive(TryFromBytes, Immutable)]
1385/// union MyUnion {
1386/// #   variant: u8,
1387/// # /*
1388///     ...
1389/// # */
1390/// }
1391/// ```
1392///
1393/// This derive ensures that the runtime check of whether bytes correspond to a
1394/// valid instance is sound. You **must** implement this trait via the derive.
1395///
1396/// # What is a "valid instance"?
1397///
1398/// In Rust, each type has *bit validity*, which refers to the set of bit
1399/// patterns which may appear in an instance of that type. It is impossible for
1400/// safe Rust code to produce values which violate bit validity (i.e., values
1401/// outside of the "valid" set of bit patterns). If `unsafe` code produces an
1402/// invalid value, this is considered [undefined behavior].
1403///
1404/// Rust's bit validity rules are currently being decided, which means that some
1405/// types have three classes of bit patterns: those which are definitely valid,
1406/// and whose validity is documented in the language; those which may or may not
1407/// be considered valid at some point in the future; and those which are
1408/// definitely invalid.
1409///
1410/// Zerocopy takes a conservative approach, and only considers a bit pattern to
1411/// be valid if its validity is a documented guarantee provided by the
1412/// language.
1413///
1414/// For most use cases, Rust's current guarantees align with programmers'
1415/// intuitions about what ought to be valid. As a result, zerocopy's
1416/// conservatism should not affect most users.
1417///
1418/// If you are negatively affected by lack of support for a particular type,
1419/// we encourage you to let us know by [filing an issue][github-repo].
1420///
1421/// # `TryFromBytes` is not symmetrical with [`IntoBytes`]
1422///
1423/// There are some types which implement both `TryFromBytes` and [`IntoBytes`],
1424/// but for which `TryFromBytes` is not guaranteed to accept all byte sequences
1425/// produced by `IntoBytes`. In other words, for some `T: TryFromBytes +
1426/// IntoBytes`, there exist values of `t: T` such that
1427/// `TryFromBytes::try_ref_from_bytes(t.as_bytes())` returns `Err`. Code should
1428/// not generally assume that values produced by `IntoBytes` will necessarily be
1429/// accepted as valid by `TryFromBytes`.
1430///
1431/// # Safety
1432///
1433/// On its own, `T: TryFromBytes` does not make any guarantees about the layout
1434/// or representation of `T`. It merely provides the ability to perform a
1435/// validity check at runtime via methods like [`try_ref_from_bytes`].
1436///
1437/// You must not rely on the `#[doc(hidden)]` internals of `TryFromBytes`.
1438/// Future releases of zerocopy may make backwards-breaking changes to these
1439/// items, including changes that only affect soundness, which may cause code
1440/// which uses those items to silently become unsound.
1441///
1442/// [undefined behavior]: https://raphlinus.github.io/programming/rust/2018/08/17/undefined-behavior.html
1443/// [github-repo]: https://github.com/google/zerocopy
1444/// [`try_ref_from_bytes`]: TryFromBytes::try_ref_from_bytes
1445/// [*valid instance*]: #what-is-a-valid-instance
1446#[cfg_attr(feature = "derive", doc = "[derive]: zerocopy_derive::TryFromBytes")]
1447#[cfg_attr(
1448    not(feature = "derive"),
1449    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.TryFromBytes.html"),
1450)]
1451#[cfg_attr(
1452    zerocopy_diagnostic_on_unimplemented_1_78_0,
1453    diagnostic::on_unimplemented(note = "Consider adding `#[derive(TryFromBytes)]` to `{Self}`")
1454)]
1455pub unsafe trait TryFromBytes {
1456    // The `Self: Sized` bound makes it so that `TryFromBytes` is still object
1457    // safe.
1458    #[doc(hidden)]
1459    fn only_derive_is_allowed_to_implement_this_trait()
1460    where
1461        Self: Sized;
1462
1463    /// Does a given memory range contain a valid instance of `Self`?
1464    ///
1465    /// # Safety
1466    ///
1467    /// Unsafe code may assume that, if `is_bit_valid(candidate)` returns true,
1468    /// `*candidate` contains a valid `Self`.
1469    ///
1470    /// # Panics
1471    ///
1472    /// `is_bit_valid` may panic. Callers are responsible for ensuring that any
1473    /// `unsafe` code remains sound even in the face of `is_bit_valid`
1474    /// panicking. (We support user-defined validation routines; so long as
1475    /// these routines are not required to be `unsafe`, there is no way to
1476    /// ensure that these do not generate panics.)
1477    ///
1478    /// Besides user-defined validation routines panicking, `is_bit_valid` will
1479    /// either panic or fail to compile if called on a pointer with [`Shared`]
1480    /// aliasing when `Self: !Immutable`.
1481    ///
1482    /// [`UnsafeCell`]: core::cell::UnsafeCell
1483    /// [`Shared`]: invariant::Shared
1484    #[doc(hidden)]
1485    fn is_bit_valid<A: invariant::Reference>(candidate: Maybe<'_, Self, A>) -> bool;
1486
1487    /// Attempts to interpret the given `source` as a `&Self`.
1488    ///
1489    /// If the bytes of `source` are a valid instance of `Self`, this method
1490    /// returns a reference to those bytes interpreted as a `Self`. If the
1491    /// length of `source` is not a [valid size of `Self`][valid-size], or if
1492    /// `source` is not appropriately aligned, or if `source` is not a valid
1493    /// instance of `Self`, this returns `Err`. If [`Self:
1494    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
1495    /// error][ConvertError::from].
1496    ///
1497    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
1498    ///
1499    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
1500    /// [self-unaligned]: Unaligned
1501    /// [slice-dst]: KnownLayout#dynamically-sized-types
1502    ///
1503    /// # Compile-Time Assertions
1504    ///
1505    /// This method cannot yet be used on unsized types whose dynamically-sized
1506    /// component is zero-sized. Attempting to use this method on such types
1507    /// results in a compile-time assertion error; e.g.:
1508    ///
1509    /// ```compile_fail,E0080
1510    /// use zerocopy::*;
1511    /// # use zerocopy_derive::*;
1512    ///
1513    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
1514    /// #[repr(C)]
1515    /// struct ZSTy {
1516    ///     leading_sized: u16,
1517    ///     trailing_dst: [()],
1518    /// }
1519    ///
1520    /// let _ = ZSTy::try_ref_from_bytes(0u16.as_bytes()); // ⚠ Compile Error!
1521    /// ```
1522    ///
1523    /// # Examples
1524    ///
1525    /// ```
1526    /// use zerocopy::TryFromBytes;
1527    /// # use zerocopy_derive::*;
1528    ///
1529    /// // The only valid value of this type is the byte `0xC0`
1530    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
1531    /// #[repr(u8)]
1532    /// enum C0 { xC0 = 0xC0 }
1533    ///
1534    /// // The only valid value of this type is the byte sequence `0xC0C0`.
1535    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
1536    /// #[repr(C)]
1537    /// struct C0C0(C0, C0);
1538    ///
1539    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
1540    /// #[repr(C)]
1541    /// struct Packet {
1542    ///     magic_number: C0C0,
1543    ///     mug_size: u8,
1544    ///     temperature: u8,
1545    ///     marshmallows: [[u8; 2]],
1546    /// }
1547    ///
1548    /// let bytes = &[0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5][..];
1549    ///
1550    /// let packet = Packet::try_ref_from_bytes(bytes).unwrap();
1551    ///
1552    /// assert_eq!(packet.mug_size, 240);
1553    /// assert_eq!(packet.temperature, 77);
1554    /// assert_eq!(packet.marshmallows, [[0, 1], [2, 3], [4, 5]]);
1555    ///
1556    /// // These bytes are not a valid instance of `Packet`.
1557    /// let bytes = &[0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5][..];
1558    /// assert!(Packet::try_ref_from_bytes(bytes).is_err());
1559    /// ```
1560    #[must_use = "has no side effects"]
1561    #[inline]
1562    fn try_ref_from_bytes(source: &[u8]) -> Result<&Self, TryCastError<&[u8], Self>>
1563    where
1564        Self: KnownLayout + Immutable,
1565    {
1566        static_assert_dst_is_not_zst!(Self);
1567        match Ptr::from_ref(source).try_cast_into_no_leftover::<Self, BecauseImmutable>(None) {
1568            Ok(source) => {
1569                // This call may panic. If that happens, it doesn't cause any soundness
1570                // issues, as we have not generated any invalid state which we need to
1571                // fix before returning.
1572                //
1573                // Note that one panic or post-monomorphization error condition is
1574                // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
1575                // pointer when `Self: !Immutable`. Since `Self: Immutable`, this panic
1576                // condition will not happen.
1577                match source.try_into_valid() {
1578                    Ok(valid) => Ok(valid.as_ref()),
1579                    Err(e) => {
1580                        Err(e.map_src(|src| src.as_bytes::<BecauseImmutable>().as_ref()).into())
1581                    }
1582                }
1583            }
1584            Err(e) => Err(e.map_src(Ptr::as_ref).into()),
1585        }
1586    }
1587
1588    /// Attempts to interpret the prefix of the given `source` as a `&Self`.
1589    ///
1590    /// This method computes the [largest possible size of `Self`][valid-size]
1591    /// that can fit in the leading bytes of `source`. If that prefix is a valid
1592    /// instance of `Self`, this method returns a reference to those bytes
1593    /// interpreted as `Self`, and a reference to the remaining bytes. If there
1594    /// are insufficient bytes, or if `source` is not appropriately aligned, or
1595    /// if those bytes are not a valid instance of `Self`, this returns `Err`.
1596    /// If [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
1597    /// alignment error][ConvertError::from].
1598    ///
1599    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
1600    ///
1601    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
1602    /// [self-unaligned]: Unaligned
1603    /// [slice-dst]: KnownLayout#dynamically-sized-types
1604    ///
1605    /// # Compile-Time Assertions
1606    ///
1607    /// This method cannot yet be used on unsized types whose dynamically-sized
1608    /// component is zero-sized. Attempting to use this method on such types
1609    /// results in a compile-time assertion error; e.g.:
1610    ///
1611    /// ```compile_fail,E0080
1612    /// use zerocopy::*;
1613    /// # use zerocopy_derive::*;
1614    ///
1615    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
1616    /// #[repr(C)]
1617    /// struct ZSTy {
1618    ///     leading_sized: u16,
1619    ///     trailing_dst: [()],
1620    /// }
1621    ///
1622    /// let _ = ZSTy::try_ref_from_prefix(0u16.as_bytes()); // ⚠ Compile Error!
1623    /// ```
1624    ///
1625    /// # Examples
1626    ///
1627    /// ```
1628    /// use zerocopy::TryFromBytes;
1629    /// # use zerocopy_derive::*;
1630    ///
1631    /// // The only valid value of this type is the byte `0xC0`
1632    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
1633    /// #[repr(u8)]
1634    /// enum C0 { xC0 = 0xC0 }
1635    ///
1636    /// // The only valid value of this type is the bytes `0xC0C0`.
1637    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
1638    /// #[repr(C)]
1639    /// struct C0C0(C0, C0);
1640    ///
1641    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
1642    /// #[repr(C)]
1643    /// struct Packet {
1644    ///     magic_number: C0C0,
1645    ///     mug_size: u8,
1646    ///     temperature: u8,
1647    ///     marshmallows: [[u8; 2]],
1648    /// }
1649    ///
1650    /// // These are more bytes than are needed to encode a `Packet`.
1651    /// let bytes = &[0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
1652    ///
1653    /// let (packet, suffix) = Packet::try_ref_from_prefix(bytes).unwrap();
1654    ///
1655    /// assert_eq!(packet.mug_size, 240);
1656    /// assert_eq!(packet.temperature, 77);
1657    /// assert_eq!(packet.marshmallows, [[0, 1], [2, 3], [4, 5]]);
1658    /// assert_eq!(suffix, &[6u8][..]);
1659    ///
1660    /// // These bytes are not a valid instance of `Packet`.
1661    /// let bytes = &[0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
1662    /// assert!(Packet::try_ref_from_prefix(bytes).is_err());
1663    /// ```
1664    #[must_use = "has no side effects"]
1665    #[inline]
1666    fn try_ref_from_prefix(source: &[u8]) -> Result<(&Self, &[u8]), TryCastError<&[u8], Self>>
1667    where
1668        Self: KnownLayout + Immutable,
1669    {
1670        static_assert_dst_is_not_zst!(Self);
1671        try_ref_from_prefix_suffix(source, CastType::Prefix, None)
1672    }
1673
1674    /// Attempts to interpret the suffix of the given `source` as a `&Self`.
1675    ///
1676    /// This method computes the [largest possible size of `Self`][valid-size]
1677    /// that can fit in the trailing bytes of `source`. If that suffix is a
1678    /// valid instance of `Self`, this method returns a reference to those bytes
1679    /// interpreted as `Self`, and a reference to the preceding bytes. If there
1680    /// are insufficient bytes, or if the suffix of `source` would not be
1681    /// appropriately aligned, or if the suffix is not a valid instance of
1682    /// `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned], you
1683    /// can [infallibly discard the alignment error][ConvertError::from].
1684    ///
1685    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
1686    ///
1687    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
1688    /// [self-unaligned]: Unaligned
1689    /// [slice-dst]: KnownLayout#dynamically-sized-types
1690    ///
1691    /// # Compile-Time Assertions
1692    ///
1693    /// This method cannot yet be used on unsized types whose dynamically-sized
1694    /// component is zero-sized. Attempting to use this method on such types
1695    /// results in a compile-time assertion error; e.g.:
1696    ///
1697    /// ```compile_fail,E0080
1698    /// use zerocopy::*;
1699    /// # use zerocopy_derive::*;
1700    ///
1701    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
1702    /// #[repr(C)]
1703    /// struct ZSTy {
1704    ///     leading_sized: u16,
1705    ///     trailing_dst: [()],
1706    /// }
1707    ///
1708    /// let _ = ZSTy::try_ref_from_suffix(0u16.as_bytes()); // ⚠ Compile Error!
1709    /// ```
1710    ///
1711    /// # Examples
1712    ///
1713    /// ```
1714    /// use zerocopy::TryFromBytes;
1715    /// # use zerocopy_derive::*;
1716    ///
1717    /// // The only valid value of this type is the byte `0xC0`
1718    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
1719    /// #[repr(u8)]
1720    /// enum C0 { xC0 = 0xC0 }
1721    ///
1722    /// // The only valid value of this type is the bytes `0xC0C0`.
1723    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
1724    /// #[repr(C)]
1725    /// struct C0C0(C0, C0);
1726    ///
1727    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
1728    /// #[repr(C)]
1729    /// struct Packet {
1730    ///     magic_number: C0C0,
1731    ///     mug_size: u8,
1732    ///     temperature: u8,
1733    ///     marshmallows: [[u8; 2]],
1734    /// }
1735    ///
1736    /// // These are more bytes than are needed to encode a `Packet`.
1737    /// let bytes = &[0, 0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
1738    ///
1739    /// let (prefix, packet) = Packet::try_ref_from_suffix(bytes).unwrap();
1740    ///
1741    /// assert_eq!(packet.mug_size, 240);
1742    /// assert_eq!(packet.temperature, 77);
1743    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
1744    /// assert_eq!(prefix, &[0u8][..]);
1745    ///
1746    /// // These bytes are not a valid instance of `Packet`.
1747    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 77, 240, 0xC0, 0x10][..];
1748    /// assert!(Packet::try_ref_from_suffix(bytes).is_err());
1749    /// ```
1750    #[must_use = "has no side effects"]
1751    #[inline]
1752    fn try_ref_from_suffix(source: &[u8]) -> Result<(&[u8], &Self), TryCastError<&[u8], Self>>
1753    where
1754        Self: KnownLayout + Immutable,
1755    {
1756        static_assert_dst_is_not_zst!(Self);
1757        try_ref_from_prefix_suffix(source, CastType::Suffix, None).map(swap)
1758    }
1759
1760    /// Attempts to interpret the given `source` as a `&mut Self` without
1761    /// copying.
1762    ///
1763    /// If the bytes of `source` are a valid instance of `Self`, this method
1764    /// returns a reference to those bytes interpreted as a `Self`. If the
1765    /// length of `source` is not a [valid size of `Self`][valid-size], or if
1766    /// `source` is not appropriately aligned, or if `source` is not a valid
1767    /// instance of `Self`, this returns `Err`. If [`Self:
1768    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
1769    /// error][ConvertError::from].
1770    ///
1771    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
1772    ///
1773    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
1774    /// [self-unaligned]: Unaligned
1775    /// [slice-dst]: KnownLayout#dynamically-sized-types
1776    ///
1777    /// # Compile-Time Assertions
1778    ///
1779    /// This method cannot yet be used on unsized types whose dynamically-sized
1780    /// component is zero-sized. Attempting to use this method on such types
1781    /// results in a compile-time assertion error; e.g.:
1782    ///
1783    /// ```compile_fail,E0080
1784    /// use zerocopy::*;
1785    /// # use zerocopy_derive::*;
1786    ///
1787    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
1788    /// #[repr(C, packed)]
1789    /// struct ZSTy {
1790    ///     leading_sized: [u8; 2],
1791    ///     trailing_dst: [()],
1792    /// }
1793    ///
1794    /// let mut source = [85, 85];
1795    /// let _ = ZSTy::try_mut_from_bytes(&mut source[..]); // ⚠ Compile Error!
1796    /// ```
1797    ///
1798    /// # Examples
1799    ///
1800    /// ```
1801    /// use zerocopy::TryFromBytes;
1802    /// # use zerocopy_derive::*;
1803    ///
1804    /// // The only valid value of this type is the byte `0xC0`
1805    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
1806    /// #[repr(u8)]
1807    /// enum C0 { xC0 = 0xC0 }
1808    ///
1809    /// // The only valid value of this type is the bytes `0xC0C0`.
1810    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
1811    /// #[repr(C)]
1812    /// struct C0C0(C0, C0);
1813    ///
1814    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
1815    /// #[repr(C, packed)]
1816    /// struct Packet {
1817    ///     magic_number: C0C0,
1818    ///     mug_size: u8,
1819    ///     temperature: u8,
1820    ///     marshmallows: [[u8; 2]],
1821    /// }
1822    ///
1823    /// let bytes = &mut [0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5][..];
1824    ///
1825    /// let packet = Packet::try_mut_from_bytes(bytes).unwrap();
1826    ///
1827    /// assert_eq!(packet.mug_size, 240);
1828    /// assert_eq!(packet.temperature, 77);
1829    /// assert_eq!(packet.marshmallows, [[0, 1], [2, 3], [4, 5]]);
1830    ///
1831    /// packet.temperature = 111;
1832    ///
1833    /// assert_eq!(bytes, [0xC0, 0xC0, 240, 111, 0, 1, 2, 3, 4, 5]);
1834    ///
1835    /// // These bytes are not a valid instance of `Packet`.
1836    /// let bytes = &mut [0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
1837    /// assert!(Packet::try_mut_from_bytes(bytes).is_err());
1838    /// ```
1839    #[must_use = "has no side effects"]
1840    #[inline]
1841    fn try_mut_from_bytes(source: &mut [u8]) -> Result<&mut Self, TryCastError<&mut [u8], Self>>
1842    where
1843        Self: KnownLayout + IntoBytes,
1844    {
1845        static_assert_dst_is_not_zst!(Self);
1846        match Ptr::from_mut(source).try_cast_into_no_leftover::<Self, BecauseExclusive>(None) {
1847            Ok(source) => {
1848                // This call may panic. If that happens, it doesn't cause any soundness
1849                // issues, as we have not generated any invalid state which we need to
1850                // fix before returning.
1851                //
1852                // Note that one panic or post-monomorphization error condition is
1853                // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
1854                // pointer when `Self: !Immutable`. Since the pointer here has exclusive
1855                // aliasing, this panic condition will not happen.
1856                match source.try_into_valid() {
1857                    Ok(source) => Ok(source.as_mut()),
1858                    Err(e) => {
1859                        Err(e.map_src(|src| src.as_bytes::<BecauseExclusive>().as_mut()).into())
1860                    }
1861                }
1862            }
1863            Err(e) => Err(e.map_src(Ptr::as_mut).into()),
1864        }
1865    }
1866
1867    /// Attempts to interpret the prefix of the given `source` as a `&mut
1868    /// Self`.
1869    ///
1870    /// This method computes the [largest possible size of `Self`][valid-size]
1871    /// that can fit in the leading bytes of `source`. If that prefix is a valid
1872    /// instance of `Self`, this method returns a reference to those bytes
1873    /// interpreted as `Self`, and a reference to the remaining bytes. If there
1874    /// are insufficient bytes, or if `source` is not appropriately aligned, or
1875    /// if the bytes are not a valid instance of `Self`, this returns `Err`. If
1876    /// [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
1877    /// alignment error][ConvertError::from].
1878    ///
1879    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
1880    ///
1881    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
1882    /// [self-unaligned]: Unaligned
1883    /// [slice-dst]: KnownLayout#dynamically-sized-types
1884    ///
1885    /// # Compile-Time Assertions
1886    ///
1887    /// This method cannot yet be used on unsized types whose dynamically-sized
1888    /// component is zero-sized. Attempting to use this method on such types
1889    /// results in a compile-time assertion error; e.g.:
1890    ///
1891    /// ```compile_fail,E0080
1892    /// use zerocopy::*;
1893    /// # use zerocopy_derive::*;
1894    ///
1895    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
1896    /// #[repr(C, packed)]
1897    /// struct ZSTy {
1898    ///     leading_sized: [u8; 2],
1899    ///     trailing_dst: [()],
1900    /// }
1901    ///
1902    /// let mut source = [85, 85];
1903    /// let _ = ZSTy::try_mut_from_prefix(&mut source[..]); // ⚠ Compile Error!
1904    /// ```
1905    ///
1906    /// # Examples
1907    ///
1908    /// ```
1909    /// use zerocopy::TryFromBytes;
1910    /// # use zerocopy_derive::*;
1911    ///
1912    /// // The only valid value of this type is the byte `0xC0`
1913    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
1914    /// #[repr(u8)]
1915    /// enum C0 { xC0 = 0xC0 }
1916    ///
1917    /// // The only valid value of this type is the bytes `0xC0C0`.
1918    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
1919    /// #[repr(C)]
1920    /// struct C0C0(C0, C0);
1921    ///
1922    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
1923    /// #[repr(C, packed)]
1924    /// struct Packet {
1925    ///     magic_number: C0C0,
1926    ///     mug_size: u8,
1927    ///     temperature: u8,
1928    ///     marshmallows: [[u8; 2]],
1929    /// }
1930    ///
1931    /// // These are more bytes than are needed to encode a `Packet`.
1932    /// let bytes = &mut [0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
1933    ///
1934    /// let (packet, suffix) = Packet::try_mut_from_prefix(bytes).unwrap();
1935    ///
1936    /// assert_eq!(packet.mug_size, 240);
1937    /// assert_eq!(packet.temperature, 77);
1938    /// assert_eq!(packet.marshmallows, [[0, 1], [2, 3], [4, 5]]);
1939    /// assert_eq!(suffix, &[6u8][..]);
1940    ///
1941    /// packet.temperature = 111;
1942    /// suffix[0] = 222;
1943    ///
1944    /// assert_eq!(bytes, [0xC0, 0xC0, 240, 111, 0, 1, 2, 3, 4, 5, 222]);
1945    ///
1946    /// // These bytes are not a valid instance of `Packet`.
1947    /// let bytes = &mut [0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
1948    /// assert!(Packet::try_mut_from_prefix(bytes).is_err());
1949    /// ```
1950    #[must_use = "has no side effects"]
1951    #[inline]
1952    fn try_mut_from_prefix(
1953        source: &mut [u8],
1954    ) -> Result<(&mut Self, &mut [u8]), TryCastError<&mut [u8], Self>>
1955    where
1956        Self: KnownLayout + IntoBytes,
1957    {
1958        static_assert_dst_is_not_zst!(Self);
1959        try_mut_from_prefix_suffix(source, CastType::Prefix, None)
1960    }
1961
1962    /// Attempts to interpret the suffix of the given `source` as a `&mut
1963    /// Self`.
1964    ///
1965    /// This method computes the [largest possible size of `Self`][valid-size]
1966    /// that can fit in the trailing bytes of `source`. If that suffix is a
1967    /// valid instance of `Self`, this method returns a reference to those bytes
1968    /// interpreted as `Self`, and a reference to the preceding bytes. If there
1969    /// are insufficient bytes, or if the suffix of `source` would not be
1970    /// appropriately aligned, or if the suffix is not a valid instance of
1971    /// `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned], you
1972    /// can [infallibly discard the alignment error][ConvertError::from].
1973    ///
1974    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
1975    ///
1976    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
1977    /// [self-unaligned]: Unaligned
1978    /// [slice-dst]: KnownLayout#dynamically-sized-types
1979    ///
1980    /// # Compile-Time Assertions
1981    ///
1982    /// This method cannot yet be used on unsized types whose dynamically-sized
1983    /// component is zero-sized. Attempting to use this method on such types
1984    /// results in a compile-time assertion error; e.g.:
1985    ///
1986    /// ```compile_fail,E0080
1987    /// use zerocopy::*;
1988    /// # use zerocopy_derive::*;
1989    ///
1990    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
1991    /// #[repr(C, packed)]
1992    /// struct ZSTy {
1993    ///     leading_sized: u16,
1994    ///     trailing_dst: [()],
1995    /// }
1996    ///
1997    /// let mut source = [85, 85];
1998    /// let _ = ZSTy::try_mut_from_suffix(&mut source[..]); // ⚠ Compile Error!
1999    /// ```
2000    ///
2001    /// # Examples
2002    ///
2003    /// ```
2004    /// use zerocopy::TryFromBytes;
2005    /// # use zerocopy_derive::*;
2006    ///
2007    /// // The only valid value of this type is the byte `0xC0`
2008    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2009    /// #[repr(u8)]
2010    /// enum C0 { xC0 = 0xC0 }
2011    ///
2012    /// // The only valid value of this type is the bytes `0xC0C0`.
2013    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2014    /// #[repr(C)]
2015    /// struct C0C0(C0, C0);
2016    ///
2017    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2018    /// #[repr(C, packed)]
2019    /// struct Packet {
2020    ///     magic_number: C0C0,
2021    ///     mug_size: u8,
2022    ///     temperature: u8,
2023    ///     marshmallows: [[u8; 2]],
2024    /// }
2025    ///
2026    /// // These are more bytes than are needed to encode a `Packet`.
2027    /// let bytes = &mut [0, 0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
2028    ///
2029    /// let (prefix, packet) = Packet::try_mut_from_suffix(bytes).unwrap();
2030    ///
2031    /// assert_eq!(packet.mug_size, 240);
2032    /// assert_eq!(packet.temperature, 77);
2033    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
2034    /// assert_eq!(prefix, &[0u8][..]);
2035    ///
2036    /// prefix[0] = 111;
2037    /// packet.temperature = 222;
2038    ///
2039    /// assert_eq!(bytes, [111, 0xC0, 0xC0, 240, 222, 2, 3, 4, 5, 6, 7]);
2040    ///
2041    /// // These bytes are not a valid instance of `Packet`.
2042    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 77, 240, 0xC0, 0x10][..];
2043    /// assert!(Packet::try_mut_from_suffix(bytes).is_err());
2044    /// ```
2045    #[must_use = "has no side effects"]
2046    #[inline]
2047    fn try_mut_from_suffix(
2048        source: &mut [u8],
2049    ) -> Result<(&mut [u8], &mut Self), TryCastError<&mut [u8], Self>>
2050    where
2051        Self: KnownLayout + IntoBytes,
2052    {
2053        static_assert_dst_is_not_zst!(Self);
2054        try_mut_from_prefix_suffix(source, CastType::Suffix, None).map(swap)
2055    }
2056
2057    /// Attempts to interpret the given `source` as a `&Self` with a DST length
2058    /// equal to `count`.
2059    ///
2060    /// This method attempts to return a reference to `source` interpreted as a
2061    /// `Self` with `count` trailing elements. If the length of `source` is not
2062    /// equal to the size of `Self` with `count` elements, if `source` is not
2063    /// appropriately aligned, or if `source` does not contain a valid instance
2064    /// of `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned],
2065    /// you can [infallibly discard the alignment error][ConvertError::from].
2066    ///
2067    /// [self-unaligned]: Unaligned
2068    /// [slice-dst]: KnownLayout#dynamically-sized-types
2069    ///
2070    /// # Examples
2071    ///
2072    /// ```
2073    /// # #![allow(non_camel_case_types)] // For C0::xC0
2074    /// use zerocopy::TryFromBytes;
2075    /// # use zerocopy_derive::*;
2076    ///
2077    /// // The only valid value of this type is the byte `0xC0`
2078    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2079    /// #[repr(u8)]
2080    /// enum C0 { xC0 = 0xC0 }
2081    ///
2082    /// // The only valid value of this type is the bytes `0xC0C0`.
2083    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2084    /// #[repr(C)]
2085    /// struct C0C0(C0, C0);
2086    ///
2087    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2088    /// #[repr(C)]
2089    /// struct Packet {
2090    ///     magic_number: C0C0,
2091    ///     mug_size: u8,
2092    ///     temperature: u8,
2093    ///     marshmallows: [[u8; 2]],
2094    /// }
2095    ///
2096    /// let bytes = &[0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
2097    ///
2098    /// let packet = Packet::try_ref_from_bytes_with_elems(bytes, 3).unwrap();
2099    ///
2100    /// assert_eq!(packet.mug_size, 240);
2101    /// assert_eq!(packet.temperature, 77);
2102    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
2103    ///
2104    /// // These bytes are not a valid instance of `Packet`.
2105    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 77, 240, 0xC0, 0xC0][..];
2106    /// assert!(Packet::try_ref_from_bytes_with_elems(bytes, 3).is_err());
2107    /// ```
2108    ///
2109    /// Since an explicit `count` is provided, this method supports types with
2110    /// zero-sized trailing slice elements. Methods such as [`try_ref_from_bytes`]
2111    /// which do not take an explicit count do not support such types.
2112    ///
2113    /// ```
2114    /// use core::num::NonZeroU16;
2115    /// use zerocopy::*;
2116    /// # use zerocopy_derive::*;
2117    ///
2118    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
2119    /// #[repr(C)]
2120    /// struct ZSTy {
2121    ///     leading_sized: NonZeroU16,
2122    ///     trailing_dst: [()],
2123    /// }
2124    ///
2125    /// let src = 0xCAFEu16.as_bytes();
2126    /// let zsty = ZSTy::try_ref_from_bytes_with_elems(src, 42).unwrap();
2127    /// assert_eq!(zsty.trailing_dst.len(), 42);
2128    /// ```
2129    ///
2130    /// [`try_ref_from_bytes`]: TryFromBytes::try_ref_from_bytes
2131    #[must_use = "has no side effects"]
2132    #[inline]
2133    fn try_ref_from_bytes_with_elems(
2134        source: &[u8],
2135        count: usize,
2136    ) -> Result<&Self, TryCastError<&[u8], Self>>
2137    where
2138        Self: KnownLayout<PointerMetadata = usize> + Immutable,
2139    {
2140        match Ptr::from_ref(source).try_cast_into_no_leftover::<Self, BecauseImmutable>(Some(count))
2141        {
2142            Ok(source) => {
2143                // This call may panic. If that happens, it doesn't cause any soundness
2144                // issues, as we have not generated any invalid state which we need to
2145                // fix before returning.
2146                //
2147                // Note that one panic or post-monomorphization error condition is
2148                // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
2149                // pointer when `Self: !Immutable`. Since `Self: Immutable`, this panic
2150                // condition will not happen.
2151                match source.try_into_valid() {
2152                    Ok(source) => Ok(source.as_ref()),
2153                    Err(e) => {
2154                        Err(e.map_src(|src| src.as_bytes::<BecauseImmutable>().as_ref()).into())
2155                    }
2156                }
2157            }
2158            Err(e) => Err(e.map_src(Ptr::as_ref).into()),
2159        }
2160    }
2161
2162    /// Attempts to interpret the prefix of the given `source` as a `&Self` with
2163    /// a DST length equal to `count`.
2164    ///
2165    /// This method attempts to return a reference to the prefix of `source`
2166    /// interpreted as a `Self` with `count` trailing elements, and a reference
2167    /// to the remaining bytes. If the length of `source` is less than the size
2168    /// of `Self` with `count` elements, if `source` is not appropriately
2169    /// aligned, or if the prefix of `source` does not contain a valid instance
2170    /// of `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned],
2171    /// you can [infallibly discard the alignment error][ConvertError::from].
2172    ///
2173    /// [self-unaligned]: Unaligned
2174    /// [slice-dst]: KnownLayout#dynamically-sized-types
2175    ///
2176    /// # Examples
2177    ///
2178    /// ```
2179    /// # #![allow(non_camel_case_types)] // For C0::xC0
2180    /// use zerocopy::TryFromBytes;
2181    /// # use zerocopy_derive::*;
2182    ///
2183    /// // The only valid value of this type is the byte `0xC0`
2184    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2185    /// #[repr(u8)]
2186    /// enum C0 { xC0 = 0xC0 }
2187    ///
2188    /// // The only valid value of this type is the bytes `0xC0C0`.
2189    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2190    /// #[repr(C)]
2191    /// struct C0C0(C0, C0);
2192    ///
2193    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2194    /// #[repr(C)]
2195    /// struct Packet {
2196    ///     magic_number: C0C0,
2197    ///     mug_size: u8,
2198    ///     temperature: u8,
2199    ///     marshmallows: [[u8; 2]],
2200    /// }
2201    ///
2202    /// let bytes = &[0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7, 8][..];
2203    ///
2204    /// let (packet, suffix) = Packet::try_ref_from_prefix_with_elems(bytes, 3).unwrap();
2205    ///
2206    /// assert_eq!(packet.mug_size, 240);
2207    /// assert_eq!(packet.temperature, 77);
2208    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
2209    /// assert_eq!(suffix, &[8u8][..]);
2210    ///
2211    /// // These bytes are not a valid instance of `Packet`.
2212    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 77, 240, 0xC0, 0xC0][..];
2213    /// assert!(Packet::try_ref_from_prefix_with_elems(bytes, 3).is_err());
2214    /// ```
2215    ///
2216    /// Since an explicit `count` is provided, this method supports types with
2217    /// zero-sized trailing slice elements. Methods such as [`try_ref_from_prefix`]
2218    /// which do not take an explicit count do not support such types.
2219    ///
2220    /// ```
2221    /// use core::num::NonZeroU16;
2222    /// use zerocopy::*;
2223    /// # use zerocopy_derive::*;
2224    ///
2225    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
2226    /// #[repr(C)]
2227    /// struct ZSTy {
2228    ///     leading_sized: NonZeroU16,
2229    ///     trailing_dst: [()],
2230    /// }
2231    ///
2232    /// let src = 0xCAFEu16.as_bytes();
2233    /// let (zsty, _) = ZSTy::try_ref_from_prefix_with_elems(src, 42).unwrap();
2234    /// assert_eq!(zsty.trailing_dst.len(), 42);
2235    /// ```
2236    ///
2237    /// [`try_ref_from_prefix`]: TryFromBytes::try_ref_from_prefix
2238    #[must_use = "has no side effects"]
2239    #[inline]
2240    fn try_ref_from_prefix_with_elems(
2241        source: &[u8],
2242        count: usize,
2243    ) -> Result<(&Self, &[u8]), TryCastError<&[u8], Self>>
2244    where
2245        Self: KnownLayout<PointerMetadata = usize> + Immutable,
2246    {
2247        try_ref_from_prefix_suffix(source, CastType::Prefix, Some(count))
2248    }
2249
2250    /// Attempts to interpret the suffix of the given `source` as a `&Self` with
2251    /// a DST length equal to `count`.
2252    ///
2253    /// This method attempts to return a reference to the suffix of `source`
2254    /// interpreted as a `Self` with `count` trailing elements, and a reference
2255    /// to the preceding bytes. If the length of `source` is less than the size
2256    /// of `Self` with `count` elements, if the suffix of `source` is not
2257    /// appropriately aligned, or if the suffix of `source` does not contain a
2258    /// valid instance of `Self`, this returns `Err`. If [`Self:
2259    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
2260    /// error][ConvertError::from].
2261    ///
2262    /// [self-unaligned]: Unaligned
2263    /// [slice-dst]: KnownLayout#dynamically-sized-types
2264    ///
2265    /// # Examples
2266    ///
2267    /// ```
2268    /// # #![allow(non_camel_case_types)] // For C0::xC0
2269    /// use zerocopy::TryFromBytes;
2270    /// # use zerocopy_derive::*;
2271    ///
2272    /// // The only valid value of this type is the byte `0xC0`
2273    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2274    /// #[repr(u8)]
2275    /// enum C0 { xC0 = 0xC0 }
2276    ///
2277    /// // The only valid value of this type is the bytes `0xC0C0`.
2278    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2279    /// #[repr(C)]
2280    /// struct C0C0(C0, C0);
2281    ///
2282    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2283    /// #[repr(C)]
2284    /// struct Packet {
2285    ///     magic_number: C0C0,
2286    ///     mug_size: u8,
2287    ///     temperature: u8,
2288    ///     marshmallows: [[u8; 2]],
2289    /// }
2290    ///
2291    /// let bytes = &[123, 0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
2292    ///
2293    /// let (prefix, packet) = Packet::try_ref_from_suffix_with_elems(bytes, 3).unwrap();
2294    ///
2295    /// assert_eq!(packet.mug_size, 240);
2296    /// assert_eq!(packet.temperature, 77);
2297    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
2298    /// assert_eq!(prefix, &[123u8][..]);
2299    ///
2300    /// // These bytes are not a valid instance of `Packet`.
2301    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 77, 240, 0xC0, 0xC0][..];
2302    /// assert!(Packet::try_ref_from_suffix_with_elems(bytes, 3).is_err());
2303    /// ```
2304    ///
2305    /// Since an explicit `count` is provided, this method supports types with
2306    /// zero-sized trailing slice elements. Methods such as [`try_ref_from_suffix`]
2307    /// which do not take an explicit count do not support such types.
2308    ///
2309    /// ```
2310    /// use core::num::NonZeroU16;
2311    /// use zerocopy::*;
2312    /// # use zerocopy_derive::*;
2313    ///
2314    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
2315    /// #[repr(C)]
2316    /// struct ZSTy {
2317    ///     leading_sized: NonZeroU16,
2318    ///     trailing_dst: [()],
2319    /// }
2320    ///
2321    /// let src = 0xCAFEu16.as_bytes();
2322    /// let (_, zsty) = ZSTy::try_ref_from_suffix_with_elems(src, 42).unwrap();
2323    /// assert_eq!(zsty.trailing_dst.len(), 42);
2324    /// ```
2325    ///
2326    /// [`try_ref_from_suffix`]: TryFromBytes::try_ref_from_suffix
2327    #[must_use = "has no side effects"]
2328    #[inline]
2329    fn try_ref_from_suffix_with_elems(
2330        source: &[u8],
2331        count: usize,
2332    ) -> Result<(&[u8], &Self), TryCastError<&[u8], Self>>
2333    where
2334        Self: KnownLayout<PointerMetadata = usize> + Immutable,
2335    {
2336        try_ref_from_prefix_suffix(source, CastType::Suffix, Some(count)).map(swap)
2337    }
2338
2339    /// Attempts to interpret the given `source` as a `&mut Self` with a DST
2340    /// length equal to `count`.
2341    ///
2342    /// This method attempts to return a reference to `source` interpreted as a
2343    /// `Self` with `count` trailing elements. If the length of `source` is not
2344    /// equal to the size of `Self` with `count` elements, if `source` is not
2345    /// appropriately aligned, or if `source` does not contain a valid instance
2346    /// of `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned],
2347    /// you can [infallibly discard the alignment error][ConvertError::from].
2348    ///
2349    /// [self-unaligned]: Unaligned
2350    /// [slice-dst]: KnownLayout#dynamically-sized-types
2351    ///
2352    /// # Examples
2353    ///
2354    /// ```
2355    /// # #![allow(non_camel_case_types)] // For C0::xC0
2356    /// use zerocopy::TryFromBytes;
2357    /// # use zerocopy_derive::*;
2358    ///
2359    /// // The only valid value of this type is the byte `0xC0`
2360    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2361    /// #[repr(u8)]
2362    /// enum C0 { xC0 = 0xC0 }
2363    ///
2364    /// // The only valid value of this type is the bytes `0xC0C0`.
2365    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2366    /// #[repr(C)]
2367    /// struct C0C0(C0, C0);
2368    ///
2369    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2370    /// #[repr(C, packed)]
2371    /// struct Packet {
2372    ///     magic_number: C0C0,
2373    ///     mug_size: u8,
2374    ///     temperature: u8,
2375    ///     marshmallows: [[u8; 2]],
2376    /// }
2377    ///
2378    /// let bytes = &mut [0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
2379    ///
2380    /// let packet = Packet::try_mut_from_bytes_with_elems(bytes, 3).unwrap();
2381    ///
2382    /// assert_eq!(packet.mug_size, 240);
2383    /// assert_eq!(packet.temperature, 77);
2384    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
2385    ///
2386    /// packet.temperature = 111;
2387    ///
2388    /// assert_eq!(bytes, [0xC0, 0xC0, 240, 111, 2, 3, 4, 5, 6, 7]);
2389    ///
2390    /// // These bytes are not a valid instance of `Packet`.
2391    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 77, 240, 0xC0, 0xC0][..];
2392    /// assert!(Packet::try_mut_from_bytes_with_elems(bytes, 3).is_err());
2393    /// ```
2394    ///
2395    /// Since an explicit `count` is provided, this method supports types with
2396    /// zero-sized trailing slice elements. Methods such as [`try_mut_from_bytes`]
2397    /// which do not take an explicit count do not support such types.
2398    ///
2399    /// ```
2400    /// use core::num::NonZeroU16;
2401    /// use zerocopy::*;
2402    /// # use zerocopy_derive::*;
2403    ///
2404    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2405    /// #[repr(C, packed)]
2406    /// struct ZSTy {
2407    ///     leading_sized: NonZeroU16,
2408    ///     trailing_dst: [()],
2409    /// }
2410    ///
2411    /// let mut src = 0xCAFEu16;
2412    /// let src = src.as_mut_bytes();
2413    /// let zsty = ZSTy::try_mut_from_bytes_with_elems(src, 42).unwrap();
2414    /// assert_eq!(zsty.trailing_dst.len(), 42);
2415    /// ```
2416    ///
2417    /// [`try_mut_from_bytes`]: TryFromBytes::try_mut_from_bytes
2418    #[must_use = "has no side effects"]
2419    #[inline]
2420    fn try_mut_from_bytes_with_elems(
2421        source: &mut [u8],
2422        count: usize,
2423    ) -> Result<&mut Self, TryCastError<&mut [u8], Self>>
2424    where
2425        Self: KnownLayout<PointerMetadata = usize> + IntoBytes,
2426    {
2427        match Ptr::from_mut(source).try_cast_into_no_leftover::<Self, BecauseExclusive>(Some(count))
2428        {
2429            Ok(source) => {
2430                // This call may panic. If that happens, it doesn't cause any soundness
2431                // issues, as we have not generated any invalid state which we need to
2432                // fix before returning.
2433                //
2434                // Note that one panic or post-monomorphization error condition is
2435                // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
2436                // pointer when `Self: !Immutable`. Since the pointer here has exclusive
2437                // aliasing, this panic condition will not happen.
2438                match source.try_into_valid() {
2439                    Ok(source) => Ok(source.as_mut()),
2440                    Err(e) => {
2441                        Err(e.map_src(|src| src.as_bytes::<BecauseExclusive>().as_mut()).into())
2442                    }
2443                }
2444            }
2445            Err(e) => Err(e.map_src(Ptr::as_mut).into()),
2446        }
2447    }
2448
2449    /// Attempts to interpret the prefix of the given `source` as a `&mut Self`
2450    /// with a DST length equal to `count`.
2451    ///
2452    /// This method attempts to return a reference to the prefix of `source`
2453    /// interpreted as a `Self` with `count` trailing elements, and a reference
2454    /// to the remaining bytes. If the length of `source` is less than the size
2455    /// of `Self` with `count` elements, if `source` is not appropriately
2456    /// aligned, or if the prefix of `source` does not contain a valid instance
2457    /// of `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned],
2458    /// you can [infallibly discard the alignment error][ConvertError::from].
2459    ///
2460    /// [self-unaligned]: Unaligned
2461    /// [slice-dst]: KnownLayout#dynamically-sized-types
2462    ///
2463    /// # Examples
2464    ///
2465    /// ```
2466    /// # #![allow(non_camel_case_types)] // For C0::xC0
2467    /// use zerocopy::TryFromBytes;
2468    /// # use zerocopy_derive::*;
2469    ///
2470    /// // The only valid value of this type is the byte `0xC0`
2471    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2472    /// #[repr(u8)]
2473    /// enum C0 { xC0 = 0xC0 }
2474    ///
2475    /// // The only valid value of this type is the bytes `0xC0C0`.
2476    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2477    /// #[repr(C)]
2478    /// struct C0C0(C0, C0);
2479    ///
2480    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2481    /// #[repr(C, packed)]
2482    /// struct Packet {
2483    ///     magic_number: C0C0,
2484    ///     mug_size: u8,
2485    ///     temperature: u8,
2486    ///     marshmallows: [[u8; 2]],
2487    /// }
2488    ///
2489    /// let bytes = &mut [0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7, 8][..];
2490    ///
2491    /// let (packet, suffix) = Packet::try_mut_from_prefix_with_elems(bytes, 3).unwrap();
2492    ///
2493    /// assert_eq!(packet.mug_size, 240);
2494    /// assert_eq!(packet.temperature, 77);
2495    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
2496    /// assert_eq!(suffix, &[8u8][..]);
2497    ///
2498    /// packet.temperature = 111;
2499    /// suffix[0] = 222;
2500    ///
2501    /// assert_eq!(bytes, [0xC0, 0xC0, 240, 111, 2, 3, 4, 5, 6, 7, 222]);
2502    ///
2503    /// // These bytes are not a valid instance of `Packet`.
2504    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 77, 240, 0xC0, 0xC0][..];
2505    /// assert!(Packet::try_mut_from_prefix_with_elems(bytes, 3).is_err());
2506    /// ```
2507    ///
2508    /// Since an explicit `count` is provided, this method supports types with
2509    /// zero-sized trailing slice elements. Methods such as [`try_mut_from_prefix`]
2510    /// which do not take an explicit count do not support such types.
2511    ///
2512    /// ```
2513    /// use core::num::NonZeroU16;
2514    /// use zerocopy::*;
2515    /// # use zerocopy_derive::*;
2516    ///
2517    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2518    /// #[repr(C, packed)]
2519    /// struct ZSTy {
2520    ///     leading_sized: NonZeroU16,
2521    ///     trailing_dst: [()],
2522    /// }
2523    ///
2524    /// let mut src = 0xCAFEu16;
2525    /// let src = src.as_mut_bytes();
2526    /// let (zsty, _) = ZSTy::try_mut_from_prefix_with_elems(src, 42).unwrap();
2527    /// assert_eq!(zsty.trailing_dst.len(), 42);
2528    /// ```
2529    ///
2530    /// [`try_mut_from_prefix`]: TryFromBytes::try_mut_from_prefix
2531    #[must_use = "has no side effects"]
2532    #[inline]
2533    fn try_mut_from_prefix_with_elems(
2534        source: &mut [u8],
2535        count: usize,
2536    ) -> Result<(&mut Self, &mut [u8]), TryCastError<&mut [u8], Self>>
2537    where
2538        Self: KnownLayout<PointerMetadata = usize> + IntoBytes,
2539    {
2540        try_mut_from_prefix_suffix(source, CastType::Prefix, Some(count))
2541    }
2542
2543    /// Attempts to interpret the suffix of the given `source` as a `&mut Self`
2544    /// with a DST length equal to `count`.
2545    ///
2546    /// This method attempts to return a reference to the suffix of `source`
2547    /// interpreted as a `Self` with `count` trailing elements, and a reference
2548    /// to the preceding bytes. If the length of `source` is less than the size
2549    /// of `Self` with `count` elements, if the suffix of `source` is not
2550    /// appropriately aligned, or if the suffix of `source` does not contain a
2551    /// valid instance of `Self`, this returns `Err`. If [`Self:
2552    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
2553    /// error][ConvertError::from].
2554    ///
2555    /// [self-unaligned]: Unaligned
2556    /// [slice-dst]: KnownLayout#dynamically-sized-types
2557    ///
2558    /// # Examples
2559    ///
2560    /// ```
2561    /// # #![allow(non_camel_case_types)] // For C0::xC0
2562    /// use zerocopy::TryFromBytes;
2563    /// # use zerocopy_derive::*;
2564    ///
2565    /// // The only valid value of this type is the byte `0xC0`
2566    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2567    /// #[repr(u8)]
2568    /// enum C0 { xC0 = 0xC0 }
2569    ///
2570    /// // The only valid value of this type is the bytes `0xC0C0`.
2571    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2572    /// #[repr(C)]
2573    /// struct C0C0(C0, C0);
2574    ///
2575    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2576    /// #[repr(C, packed)]
2577    /// struct Packet {
2578    ///     magic_number: C0C0,
2579    ///     mug_size: u8,
2580    ///     temperature: u8,
2581    ///     marshmallows: [[u8; 2]],
2582    /// }
2583    ///
2584    /// let bytes = &mut [123, 0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
2585    ///
2586    /// let (prefix, packet) = Packet::try_mut_from_suffix_with_elems(bytes, 3).unwrap();
2587    ///
2588    /// assert_eq!(packet.mug_size, 240);
2589    /// assert_eq!(packet.temperature, 77);
2590    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
2591    /// assert_eq!(prefix, &[123u8][..]);
2592    ///
2593    /// prefix[0] = 111;
2594    /// packet.temperature = 222;
2595    ///
2596    /// assert_eq!(bytes, [111, 0xC0, 0xC0, 240, 222, 2, 3, 4, 5, 6, 7]);
2597    ///
2598    /// // These bytes are not a valid instance of `Packet`.
2599    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 77, 240, 0xC0, 0xC0][..];
2600    /// assert!(Packet::try_mut_from_suffix_with_elems(bytes, 3).is_err());
2601    /// ```
2602    ///
2603    /// Since an explicit `count` is provided, this method supports types with
2604    /// zero-sized trailing slice elements. Methods such as [`try_mut_from_suffix`]
2605    /// which do not take an explicit count do not support such types.
2606    ///
2607    /// ```
2608    /// use core::num::NonZeroU16;
2609    /// use zerocopy::*;
2610    /// # use zerocopy_derive::*;
2611    ///
2612    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2613    /// #[repr(C, packed)]
2614    /// struct ZSTy {
2615    ///     leading_sized: NonZeroU16,
2616    ///     trailing_dst: [()],
2617    /// }
2618    ///
2619    /// let mut src = 0xCAFEu16;
2620    /// let src = src.as_mut_bytes();
2621    /// let (_, zsty) = ZSTy::try_mut_from_suffix_with_elems(src, 42).unwrap();
2622    /// assert_eq!(zsty.trailing_dst.len(), 42);
2623    /// ```
2624    ///
2625    /// [`try_mut_from_suffix`]: TryFromBytes::try_mut_from_suffix
2626    #[must_use = "has no side effects"]
2627    #[inline]
2628    fn try_mut_from_suffix_with_elems(
2629        source: &mut [u8],
2630        count: usize,
2631    ) -> Result<(&mut [u8], &mut Self), TryCastError<&mut [u8], Self>>
2632    where
2633        Self: KnownLayout<PointerMetadata = usize> + IntoBytes,
2634    {
2635        try_mut_from_prefix_suffix(source, CastType::Suffix, Some(count)).map(swap)
2636    }
2637
2638    /// Attempts to read the given `source` as a `Self`.
2639    ///
2640    /// If `source.len() != size_of::<Self>()` or the bytes are not a valid
2641    /// instance of `Self`, this returns `Err`.
2642    ///
2643    /// # Examples
2644    ///
2645    /// ```
2646    /// use zerocopy::TryFromBytes;
2647    /// # use zerocopy_derive::*;
2648    ///
2649    /// // The only valid value of this type is the byte `0xC0`
2650    /// #[derive(TryFromBytes)]
2651    /// #[repr(u8)]
2652    /// enum C0 { xC0 = 0xC0 }
2653    ///
2654    /// // The only valid value of this type is the bytes `0xC0C0`.
2655    /// #[derive(TryFromBytes)]
2656    /// #[repr(C)]
2657    /// struct C0C0(C0, C0);
2658    ///
2659    /// #[derive(TryFromBytes)]
2660    /// #[repr(C)]
2661    /// struct Packet {
2662    ///     magic_number: C0C0,
2663    ///     mug_size: u8,
2664    ///     temperature: u8,
2665    /// }
2666    ///
2667    /// let bytes = &[0xC0, 0xC0, 240, 77][..];
2668    ///
2669    /// let packet = Packet::try_read_from_bytes(bytes).unwrap();
2670    ///
2671    /// assert_eq!(packet.mug_size, 240);
2672    /// assert_eq!(packet.temperature, 77);
2673    ///
2674    /// // These bytes are not a valid instance of `Packet`.
2675    /// let bytes = &mut [0x10, 0xC0, 240, 77][..];
2676    /// assert!(Packet::try_read_from_bytes(bytes).is_err());
2677    /// ```
2678    #[must_use = "has no side effects"]
2679    #[inline]
2680    fn try_read_from_bytes(source: &[u8]) -> Result<Self, TryReadError<&[u8], Self>>
2681    where
2682        Self: Sized,
2683    {
2684        let candidate = match CoreMaybeUninit::<Self>::read_from_bytes(source) {
2685            Ok(candidate) => candidate,
2686            Err(e) => {
2687                return Err(TryReadError::Size(e.with_dst()));
2688            }
2689        };
2690        // SAFETY: `candidate` was copied from `source: &[u8]`, so all of
2691        // its bytes are initialized.
2692        unsafe { try_read_from(source, candidate) }
2693    }
2694
2695    /// Attempts to read a `Self` from the prefix of the given `source`.
2696    ///
2697    /// This attempts to read a `Self` from the first `size_of::<Self>()` bytes
2698    /// of `source`, returning that `Self` and any remaining bytes. If
2699    /// `source.len() < size_of::<Self>()` or the bytes are not a valid instance
2700    /// of `Self`, it returns `Err`.
2701    ///
2702    /// # Examples
2703    ///
2704    /// ```
2705    /// use zerocopy::TryFromBytes;
2706    /// # use zerocopy_derive::*;
2707    ///
2708    /// // The only valid value of this type is the byte `0xC0`
2709    /// #[derive(TryFromBytes)]
2710    /// #[repr(u8)]
2711    /// enum C0 { xC0 = 0xC0 }
2712    ///
2713    /// // The only valid value of this type is the bytes `0xC0C0`.
2714    /// #[derive(TryFromBytes)]
2715    /// #[repr(C)]
2716    /// struct C0C0(C0, C0);
2717    ///
2718    /// #[derive(TryFromBytes)]
2719    /// #[repr(C)]
2720    /// struct Packet {
2721    ///     magic_number: C0C0,
2722    ///     mug_size: u8,
2723    ///     temperature: u8,
2724    /// }
2725    ///
2726    /// // These are more bytes than are needed to encode a `Packet`.
2727    /// let bytes = &[0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
2728    ///
2729    /// let (packet, suffix) = Packet::try_read_from_prefix(bytes).unwrap();
2730    ///
2731    /// assert_eq!(packet.mug_size, 240);
2732    /// assert_eq!(packet.temperature, 77);
2733    /// assert_eq!(suffix, &[0u8, 1, 2, 3, 4, 5, 6][..]);
2734    ///
2735    /// // These bytes are not a valid instance of `Packet`.
2736    /// let bytes = &[0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
2737    /// assert!(Packet::try_read_from_prefix(bytes).is_err());
2738    /// ```
2739    #[must_use = "has no side effects"]
2740    #[inline]
2741    fn try_read_from_prefix(source: &[u8]) -> Result<(Self, &[u8]), TryReadError<&[u8], Self>>
2742    where
2743        Self: Sized,
2744    {
2745        let (candidate, suffix) = match CoreMaybeUninit::<Self>::read_from_prefix(source) {
2746            Ok(candidate) => candidate,
2747            Err(e) => {
2748                return Err(TryReadError::Size(e.with_dst()));
2749            }
2750        };
2751        // SAFETY: `candidate` was copied from `source: &[u8]`, so all of
2752        // its bytes are initialized.
2753        unsafe { try_read_from(source, candidate).map(|slf| (slf, suffix)) }
2754    }
2755
2756    /// Attempts to read a `Self` from the suffix of the given `source`.
2757    ///
2758    /// This attempts to read a `Self` from the last `size_of::<Self>()` bytes
2759    /// of `source`, returning that `Self` and any preceding bytes. If
2760    /// `source.len() < size_of::<Self>()` or the bytes are not a valid instance
2761    /// of `Self`, it returns `Err`.
2762    ///
2763    /// # Examples
2764    ///
2765    /// ```
2766    /// # #![allow(non_camel_case_types)] // For C0::xC0
2767    /// use zerocopy::TryFromBytes;
2768    /// # use zerocopy_derive::*;
2769    ///
2770    /// // The only valid value of this type is the byte `0xC0`
2771    /// #[derive(TryFromBytes)]
2772    /// #[repr(u8)]
2773    /// enum C0 { xC0 = 0xC0 }
2774    ///
2775    /// // The only valid value of this type is the bytes `0xC0C0`.
2776    /// #[derive(TryFromBytes)]
2777    /// #[repr(C)]
2778    /// struct C0C0(C0, C0);
2779    ///
2780    /// #[derive(TryFromBytes)]
2781    /// #[repr(C)]
2782    /// struct Packet {
2783    ///     magic_number: C0C0,
2784    ///     mug_size: u8,
2785    ///     temperature: u8,
2786    /// }
2787    ///
2788    /// // These are more bytes than are needed to encode a `Packet`.
2789    /// let bytes = &[0, 1, 2, 3, 4, 5, 0xC0, 0xC0, 240, 77][..];
2790    ///
2791    /// let (prefix, packet) = Packet::try_read_from_suffix(bytes).unwrap();
2792    ///
2793    /// assert_eq!(packet.mug_size, 240);
2794    /// assert_eq!(packet.temperature, 77);
2795    /// assert_eq!(prefix, &[0u8, 1, 2, 3, 4, 5][..]);
2796    ///
2797    /// // These bytes are not a valid instance of `Packet`.
2798    /// let bytes = &[0, 1, 2, 3, 4, 5, 0x10, 0xC0, 240, 77][..];
2799    /// assert!(Packet::try_read_from_suffix(bytes).is_err());
2800    /// ```
2801    #[must_use = "has no side effects"]
2802    #[inline]
2803    fn try_read_from_suffix(source: &[u8]) -> Result<(&[u8], Self), TryReadError<&[u8], Self>>
2804    where
2805        Self: Sized,
2806    {
2807        let (prefix, candidate) = match CoreMaybeUninit::<Self>::read_from_suffix(source) {
2808            Ok(candidate) => candidate,
2809            Err(e) => {
2810                return Err(TryReadError::Size(e.with_dst()));
2811            }
2812        };
2813        // SAFETY: `candidate` was copied from `source: &[u8]`, so all of
2814        // its bytes are initialized.
2815        unsafe { try_read_from(source, candidate).map(|slf| (prefix, slf)) }
2816    }
2817}
2818
2819#[inline(always)]
2820fn try_ref_from_prefix_suffix<T: TryFromBytes + KnownLayout + Immutable + ?Sized>(
2821    source: &[u8],
2822    cast_type: CastType,
2823    meta: Option<T::PointerMetadata>,
2824) -> Result<(&T, &[u8]), TryCastError<&[u8], T>> {
2825    match Ptr::from_ref(source).try_cast_into::<T, BecauseImmutable>(cast_type, meta) {
2826        Ok((source, prefix_suffix)) => {
2827            // This call may panic. If that happens, it doesn't cause any soundness
2828            // issues, as we have not generated any invalid state which we need to
2829            // fix before returning.
2830            //
2831            // Note that one panic or post-monomorphization error condition is
2832            // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
2833            // pointer when `T: !Immutable`. Since `T: Immutable`, this panic
2834            // condition will not happen.
2835            match source.try_into_valid() {
2836                Ok(valid) => Ok((valid.as_ref(), prefix_suffix.as_ref())),
2837                Err(e) => Err(e.map_src(|src| src.as_bytes::<BecauseImmutable>().as_ref()).into()),
2838            }
2839        }
2840        Err(e) => Err(e.map_src(Ptr::as_ref).into()),
2841    }
2842}
2843
2844#[inline(always)]
2845fn try_mut_from_prefix_suffix<T: IntoBytes + TryFromBytes + KnownLayout + ?Sized>(
2846    candidate: &mut [u8],
2847    cast_type: CastType,
2848    meta: Option<T::PointerMetadata>,
2849) -> Result<(&mut T, &mut [u8]), TryCastError<&mut [u8], T>> {
2850    match Ptr::from_mut(candidate).try_cast_into::<T, BecauseExclusive>(cast_type, meta) {
2851        Ok((candidate, prefix_suffix)) => {
2852            // This call may panic. If that happens, it doesn't cause any soundness
2853            // issues, as we have not generated any invalid state which we need to
2854            // fix before returning.
2855            //
2856            // Note that one panic or post-monomorphization error condition is
2857            // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
2858            // pointer when `T: !Immutable`. Since `candidate` has exclusive
2859            // aliasing, this panic condition will not happen.
2860            match candidate.try_into_valid() {
2861                Ok(valid) => Ok((valid.as_mut(), prefix_suffix.as_mut())),
2862                Err(e) => Err(e.map_src(|src| src.as_bytes::<BecauseExclusive>().as_mut()).into()),
2863            }
2864        }
2865        Err(e) => Err(e.map_src(Ptr::as_mut).into()),
2866    }
2867}
2868
2869#[inline(always)]
2870fn swap<T, U>((t, u): (T, U)) -> (U, T) {
2871    (u, t)
2872}
2873
2874/// # Safety
2875///
2876/// All bytes of `candidate` must be initialized.
2877#[inline(always)]
2878unsafe fn try_read_from<S, T: TryFromBytes>(
2879    source: S,
2880    mut candidate: CoreMaybeUninit<T>,
2881) -> Result<T, TryReadError<S, T>> {
2882    // We use `from_mut` despite not mutating via `c_ptr` so that we don't need
2883    // to add a `T: Immutable` bound.
2884    let c_ptr = Ptr::from_mut(&mut candidate);
2885    // SAFETY: `c_ptr` has no uninitialized sub-ranges because it derived from
2886    // `candidate`, which the caller promises is entirely initialized. Since
2887    // `candidate` is a `MaybeUninit`, it has no validity requirements, and so
2888    // no values written to an `Initialized` `c_ptr` can violate its validity.
2889    // Since `c_ptr` has `Exclusive` aliasing, no mutations may happen except
2890    // via `c_ptr` so long as it is live, so we don't need to worry about the
2891    // fact that `c_ptr` may have more restricted validity than `candidate`.
2892    let c_ptr = unsafe { c_ptr.assume_validity::<invariant::Initialized>() };
2893    let c_ptr = c_ptr.transmute();
2894
2895    // Since we don't have `T: KnownLayout`, we hack around that by using
2896    // `Wrapping<T>`, which implements `KnownLayout` even if `T` doesn't.
2897    //
2898    // This call may panic. If that happens, it doesn't cause any soundness
2899    // issues, as we have not generated any invalid state which we need to fix
2900    // before returning.
2901    //
2902    // Note that one panic or post-monomorphization error condition is calling
2903    // `is_bit_valid` with a shared pointer when `T: !Immutable`. Since
2904    // `c_ptr` has `Exclusive` aliasing, this panic condition will not
2905    // happen.
2906    if !Wrapping::<T>::is_bit_valid(c_ptr.forget_aligned()) {
2907        return Err(ValidityError::new(source).into());
2908    }
2909
2910    fn _assert_same_size_and_validity<T>()
2911    where
2912        Wrapping<T>: pointer::TransmuteFrom<T, invariant::Valid, invariant::Valid>,
2913        T: pointer::TransmuteFrom<Wrapping<T>, invariant::Valid, invariant::Valid>,
2914    {
2915    }
2916
2917    _assert_same_size_and_validity::<T>();
2918
2919    // SAFETY: We just validated that `candidate` contains a valid
2920    // `Wrapping<T>`, which has the same size and bit validity as `T`, as
2921    // guaranteed by the preceding type assertion.
2922    Ok(unsafe { candidate.assume_init() })
2923}
2924
2925/// Types for which a sequence of `0` bytes is a valid instance.
2926///
2927/// Any memory region of the appropriate length which is guaranteed to contain
2928/// only zero bytes can be viewed as any `FromZeros` type with no runtime
2929/// overhead. This is useful whenever memory is known to be in a zeroed state,
2930/// such as memory returned from some allocation routines.
2931///
2932/// # Warning: Padding bytes
2933///
2934/// Note that, when a value is moved or copied, only the non-padding bytes of
2935/// that value are guaranteed to be preserved. It is unsound to assume that
2936/// values written to padding bytes are preserved after a move or copy. For more
2937/// details, see the [`FromBytes` docs][frombytes-warning-padding-bytes].
2938///
2939/// [frombytes-warning-padding-bytes]: FromBytes#warning-padding-bytes
2940///
2941/// # Implementation
2942///
2943/// **Do not implement this trait yourself!** Instead, use
2944/// [`#[derive(FromZeros)]`][derive]; e.g.:
2945///
2946/// ```
2947/// # use zerocopy_derive::{FromZeros, Immutable};
2948/// #[derive(FromZeros)]
2949/// struct MyStruct {
2950/// # /*
2951///     ...
2952/// # */
2953/// }
2954///
2955/// #[derive(FromZeros)]
2956/// #[repr(u8)]
2957/// enum MyEnum {
2958/// #   Variant0,
2959/// # /*
2960///     ...
2961/// # */
2962/// }
2963///
2964/// #[derive(FromZeros, Immutable)]
2965/// union MyUnion {
2966/// #   variant: u8,
2967/// # /*
2968///     ...
2969/// # */
2970/// }
2971/// ```
2972///
2973/// This derive performs a sophisticated, compile-time safety analysis to
2974/// determine whether a type is `FromZeros`.
2975///
2976/// # Safety
2977///
2978/// *This section describes what is required in order for `T: FromZeros`, and
2979/// what unsafe code may assume of such types. If you don't plan on implementing
2980/// `FromZeros` manually, and you don't plan on writing unsafe code that
2981/// operates on `FromZeros` types, then you don't need to read this section.*
2982///
2983/// If `T: FromZeros`, then unsafe code may assume that it is sound to produce a
2984/// `T` whose bytes are all initialized to zero. If a type is marked as
2985/// `FromZeros` which violates this contract, it may cause undefined behavior.
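///
/// For example, unsafe code may rely on this guarantee roughly as follows.
/// This is only a sketch of the reasoning; the helper `make_zeroed` is
/// hypothetical and is not part of zerocopy's API:
///
/// ```
/// use zerocopy::FromZeros;
///
/// fn make_zeroed<T: FromZeros>() -> T {
///     // SAFETY: `T: FromZeros` guarantees that the all-zeros bit pattern is a
///     // valid instance of `T`.
///     unsafe { core::mem::zeroed() }
/// }
///
/// assert_eq!(make_zeroed::<u32>(), 0);
/// ```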
2986///
2987/// `#[derive(FromZeros)]` only permits [types which satisfy these
2988/// requirements][derive-analysis].
2989///
2990#[cfg_attr(
2991    feature = "derive",
2992    doc = "[derive]: zerocopy_derive::FromZeros",
2993    doc = "[derive-analysis]: zerocopy_derive::FromZeros#analysis"
2994)]
2995#[cfg_attr(
2996    not(feature = "derive"),
2997    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.FromZeros.html"),
2998    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.FromZeros.html#analysis"),
2999)]
3000#[cfg_attr(
3001    zerocopy_diagnostic_on_unimplemented_1_78_0,
3002    diagnostic::on_unimplemented(note = "Consider adding `#[derive(FromZeros)]` to `{Self}`")
3003)]
3004pub unsafe trait FromZeros: TryFromBytes {
3005    // The `Self: Sized` bound makes it so that `FromZeros` is still object
3006    // safe.
3007    #[doc(hidden)]
3008    fn only_derive_is_allowed_to_implement_this_trait()
3009    where
3010        Self: Sized;
3011
3012    /// Overwrites `self` with zeros.
3013    ///
3014    /// Sets every byte in `self` to 0. While this is similar to doing `*self =
3015    /// Self::new_zeroed()`, it differs in that `zero` does not semantically
3016    /// drop the current value and replace it with a new one — it simply
3017    /// modifies the bytes of the existing value.
3018    ///
3019    /// # Examples
3020    ///
3021    /// ```
3022    /// # use zerocopy::FromZeros;
3023    /// # use zerocopy_derive::*;
3024    /// #
3025    /// #[derive(FromZeros)]
3026    /// #[repr(C)]
3027    /// struct PacketHeader {
3028    ///     src_port: [u8; 2],
3029    ///     dst_port: [u8; 2],
3030    ///     length: [u8; 2],
3031    ///     checksum: [u8; 2],
3032    /// }
3033    ///
3034    /// let mut header = PacketHeader {
3035    ///     src_port: 100u16.to_be_bytes(),
3036    ///     dst_port: 200u16.to_be_bytes(),
3037    ///     length: 300u16.to_be_bytes(),
3038    ///     checksum: 400u16.to_be_bytes(),
3039    /// };
3040    ///
3041    /// header.zero();
3042    ///
3043    /// assert_eq!(header.src_port, [0, 0]);
3044    /// assert_eq!(header.dst_port, [0, 0]);
3045    /// assert_eq!(header.length, [0, 0]);
3046    /// assert_eq!(header.checksum, [0, 0]);
3047    /// ```
3048    #[inline(always)]
3049    fn zero(&mut self) {
3050        let slf: *mut Self = self;
3051        let len = mem::size_of_val(self);
3052        // SAFETY:
3053        // - `self` is guaranteed by the type system to be valid for writes of
3054        //   size `size_of_val(self)`.
3055        // - `u8`'s alignment is 1, and thus `self` is guaranteed to be aligned
3056        //   as required by `u8`.
3057        // - Since `Self: FromZeros`, the all-zeros instance is a valid instance
3058        //   of `Self`.
3059        //
3060        // FIXME(#429): Add references to docs and quotes.
3061        unsafe { ptr::write_bytes(slf.cast::<u8>(), 0, len) };
3062    }
3063
3064    /// Creates an instance of `Self` from zeroed bytes.
3065    ///
3066    /// # Examples
3067    ///
3068    /// ```
3069    /// # use zerocopy::FromZeros;
3070    /// # use zerocopy_derive::*;
3071    /// #
3072    /// #[derive(FromZeros)]
3073    /// #[repr(C)]
3074    /// struct PacketHeader {
3075    ///     src_port: [u8; 2],
3076    ///     dst_port: [u8; 2],
3077    ///     length: [u8; 2],
3078    ///     checksum: [u8; 2],
3079    /// }
3080    ///
3081    /// let header: PacketHeader = FromZeros::new_zeroed();
3082    ///
3083    /// assert_eq!(header.src_port, [0, 0]);
3084    /// assert_eq!(header.dst_port, [0, 0]);
3085    /// assert_eq!(header.length, [0, 0]);
3086    /// assert_eq!(header.checksum, [0, 0]);
3087    /// ```
3088    #[must_use = "has no side effects"]
3089    #[inline(always)]
3090    fn new_zeroed() -> Self
3091    where
3092        Self: Sized,
3093    {
3094        // SAFETY: `FromZeros` says that the all-zeros bit pattern is legal.
3095        unsafe { mem::zeroed() }
3096    }
3097
3098    /// Creates a `Box<Self>` from zeroed bytes.
3099    ///
3100    /// This function is useful for allocating large values on the heap and
3101    /// zero-initializing them, without ever creating a temporary instance of
3102    /// `Self` on the stack. For example, `<[u8; 1048576]>::new_box_zeroed()`
3103    /// will allocate `[u8; 1048576]` directly on the heap; it does not require
3104    /// storing `[u8; 1048576]` in a temporary variable on the stack.
3105    ///
3106    /// On systems that use a heap implementation that supports allocating from
3107    /// pre-zeroed memory, using `new_box_zeroed` (or related functions) may
3108    /// have performance benefits.
3109    ///
3110    /// # Errors
3111    ///
3112    /// Returns an error on allocation failure. Allocation failure is guaranteed
3113    /// never to cause a panic or an abort.
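    ///
    /// # Examples
    ///
    /// A minimal usage sketch, reusing the illustrative `PacketHeader` type
    /// from the examples above (requires the `alloc` feature):
    ///
    /// ```
    /// # use zerocopy::FromZeros;
    /// # use zerocopy_derive::*;
    /// #
    /// #[derive(FromZeros)]
    /// #[repr(C)]
    /// struct PacketHeader {
    ///     src_port: [u8; 2],
    ///     dst_port: [u8; 2],
    ///     length: [u8; 2],
    ///     checksum: [u8; 2],
    /// }
    ///
    /// // The header is allocated directly on the heap, already zeroed.
    /// let header = PacketHeader::new_box_zeroed().unwrap();
    ///
    /// assert_eq!(header.src_port, [0, 0]);
    /// assert_eq!(header.checksum, [0, 0]);
    /// ```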
3114    #[must_use = "has no side effects (other than allocation)"]
3115    #[cfg(any(feature = "alloc", test))]
3116    #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
3117    #[inline]
3118    fn new_box_zeroed() -> Result<Box<Self>, AllocError>
3119    where
3120        Self: Sized,
3121    {
3122        // If `Self` is a ZST, then return a proper boxed instance of it. There is
3123        // no allocation, but `Box` does require a correct dangling pointer.
3124        let layout = Layout::new::<Self>();
3125        if layout.size() == 0 {
3126            // Construct the `Box` from a dangling pointer to avoid calling
3127            // `Self::new_zeroed`. This ensures that stack space is never
3128            // allocated for `Self` even on lower opt-levels where this branch
3129            // might not get optimized out.
3130
3131            // SAFETY: Per [1], when `T` is a ZST, `Box<T>`'s only validity
3132            // requirements are that the pointer is non-null and sufficiently
3133            // aligned. Per [2], `NonNull::dangling` produces a pointer which
3134            // is sufficiently aligned. Since the produced pointer is a
3135            // `NonNull`, it is non-null.
3136            //
3137            // [1] Per https://doc.rust-lang.org/nightly/std/boxed/index.html#memory-layout:
3138            //
3139            //   For zero-sized values, the `Box` pointer has to be non-null and sufficiently aligned.
3140            //
3141            // [2] Per https://doc.rust-lang.org/std/ptr/struct.NonNull.html#method.dangling:
3142            //
3143            //   Creates a new `NonNull` that is dangling, but well-aligned.
3144            return Ok(unsafe { Box::from_raw(NonNull::dangling().as_ptr()) });
3145        }
3146
3147        // FIXME(#429): Add a "SAFETY" comment and remove this `allow`.
3148        #[allow(clippy::undocumented_unsafe_blocks)]
3149        let ptr = unsafe { alloc::alloc::alloc_zeroed(layout).cast::<Self>() };
3150        if ptr.is_null() {
3151            return Err(AllocError);
3152        }
3153        // FIXME(#429): Add a "SAFETY" comment and remove this `allow`.
3154        #[allow(clippy::undocumented_unsafe_blocks)]
3155        Ok(unsafe { Box::from_raw(ptr) })
3156    }
3157
3158    /// Creates a `Box<Self>` from zeroed bytes with `count` trailing slice elements.
3159    ///
3160    /// This function is useful for allocating large values of `Self` on the
3161    /// heap and zero-initializing them, without ever creating a temporary
3162    /// instance of `Self` on the stack. For example,
3163    /// `<[u8]>::new_box_zeroed_with_elems(1048576)` will allocate the slice
3164    /// directly on the heap; it does not require storing the slice on the stack.
3165    ///
3166    /// On systems that use a heap implementation that supports allocating from
3167    /// pre-zeroed memory, using `new_box_zeroed_with_elems` may have performance
3168    /// benefits.
3169    ///
3170    /// If `Self`'s trailing slice element is a zero-sized type, then this
3171    /// function will return a `Box<Self>` whose trailing slice has length
3172    /// `count`. Such a slice cannot contain any actual information, but its
3173    /// `len()` will report the correct value.
3174    ///
3175    /// # Errors
3176    ///
3177    /// Returns an error on allocation failure. Allocation failure is
3178    /// guaranteed never to cause a panic or an abort.
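    ///
    /// # Examples
    ///
    /// A short sketch of allocating a zeroed boxed slice (requires the `alloc`
    /// feature):
    ///
    /// ```
    /// # use zerocopy::FromZeros;
    /// // Allocate a `Box<[u16]>` with 16 zeroed elements directly on the heap.
    /// let nums = <[u16]>::new_box_zeroed_with_elems(16).unwrap();
    ///
    /// assert_eq!(nums.len(), 16);
    /// assert!(nums.iter().all(|n| *n == 0));
    /// ```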
3179    #[must_use = "has no side effects (other than allocation)"]
3180    #[cfg(feature = "alloc")]
3181    #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
3182    #[inline]
3183    fn new_box_zeroed_with_elems(count: usize) -> Result<Box<Self>, AllocError>
3184    where
3185        Self: KnownLayout<PointerMetadata = usize>,
3186    {
3187        // SAFETY: `alloc::alloc::alloc_zeroed` is a valid argument of
3188        // `new_box`. The referent of the pointer returned by `alloc_zeroed`
3189        // (and, consequently, the `Box` derived from it) is a valid instance of
3190        // `Self`, because `Self` is `FromZeros`.
3191        unsafe { crate::util::new_box(count, alloc::alloc::alloc_zeroed) }
3192    }
3193
3194    #[deprecated(since = "0.8.0", note = "renamed to `FromZeros::new_box_zeroed_with_elems`")]
3195    #[doc(hidden)]
3196    #[cfg(feature = "alloc")]
3197    #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
3198    #[must_use = "has no side effects (other than allocation)"]
3199    #[inline(always)]
3200    fn new_box_slice_zeroed(len: usize) -> Result<Box<[Self]>, AllocError>
3201    where
3202        Self: Sized,
3203    {
3204        <[Self]>::new_box_zeroed_with_elems(len)
3205    }
3206
3207    /// Creates a `Vec<Self>` from zeroed bytes.
3208    ///
3209    /// This function is useful for allocating large `Vec`s and
3210    /// zero-initializing them, without ever creating a temporary instance of
3211    /// `[Self; _]` (or many temporary instances of `Self`) on the stack. For
3212    /// example, `u8::new_vec_zeroed(1048576)` will allocate directly on the
3213    /// heap; it does not require storing intermediate values on the stack.
3214    ///
3215    /// On systems that use a heap implementation that supports allocating from
3216    /// pre-zeroed memory, using `new_vec_zeroed` may have performance benefits.
3217    ///
3218    /// If `Self` is a zero-sized type, then this function will return a
3219    /// `Vec<Self>` that has the correct `len`. Such a `Vec` cannot contain any
3220    /// actual information, but its `len()` method will report the correct
3221    /// value.
3222    ///
3223    /// # Errors
3224    ///
3225    /// Returns an error on allocation failure. Allocation failure is
3226    /// guaranteed never to cause a panic or an abort.
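    ///
    /// # Examples
    ///
    /// A short sketch (requires the `alloc` feature):
    ///
    /// ```
    /// # use zerocopy::FromZeros;
    /// let v = u8::new_vec_zeroed(4).unwrap();
    /// assert_eq!(v, [0, 0, 0, 0]);
    /// ```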
3227    #[must_use = "has no side effects (other than allocation)"]
3228    #[cfg(feature = "alloc")]
3229    #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
3230    #[inline(always)]
3231    fn new_vec_zeroed(len: usize) -> Result<Vec<Self>, AllocError>
3232    where
3233        Self: Sized,
3234    {
3235        <[Self]>::new_box_zeroed_with_elems(len).map(Into::into)
3236    }
3237
3238    /// Extends a `Vec<Self>` by pushing `additional` new items onto the end of
3239    /// the vector. The new items are initialized with zeros.
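    ///
    /// # Examples
    ///
    /// A short sketch (requires the `alloc` feature and Rust 1.57.0 or later,
    /// since the implementation relies on `Vec::try_reserve`):
    ///
    /// ```
    /// # use zerocopy::FromZeros;
    /// let mut v = vec![100u8, 200];
    /// u8::extend_vec_zeroed(&mut v, 3).unwrap();
    /// assert_eq!(v, [100, 200, 0, 0, 0]);
    /// ```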
3240    #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
3241    #[cfg(feature = "alloc")]
3242    #[cfg_attr(doc_cfg, doc(cfg(all(rust = "1.57.0", feature = "alloc"))))]
3243    #[inline(always)]
3244    fn extend_vec_zeroed(v: &mut Vec<Self>, additional: usize) -> Result<(), AllocError>
3245    where
3246        Self: Sized,
3247    {
3248        // PANICS: We pass `v.len()` for `position`, so the `position > v.len()`
3249        // panic condition is not satisfied.
3250        <Self as FromZeros>::insert_vec_zeroed(v, v.len(), additional)
3251    }
3252
3253    /// Inserts `additional` new items into `Vec<Self>` at `position`. The new
3254    /// items are initialized with zeros.
3255    ///
3256    /// # Panics
3257    ///
3258    /// Panics if `position > v.len()`.
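    ///
    /// # Examples
    ///
    /// A short sketch (requires the `alloc` feature and Rust 1.57.0 or later,
    /// since the implementation relies on `Vec::try_reserve`):
    ///
    /// ```
    /// # use zerocopy::FromZeros;
    /// let mut v = vec![100u8, 200];
    /// // Insert three zeroed elements between `100` and `200`.
    /// u8::insert_vec_zeroed(&mut v, 1, 3).unwrap();
    /// assert_eq!(v, [100, 0, 0, 0, 200]);
    /// ```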
3259    #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
3260    #[cfg(feature = "alloc")]
3261    #[cfg_attr(doc_cfg, doc(cfg(all(rust = "1.57.0", feature = "alloc"))))]
3262    #[inline]
3263    fn insert_vec_zeroed(
3264        v: &mut Vec<Self>,
3265        position: usize,
3266        additional: usize,
3267    ) -> Result<(), AllocError>
3268    where
3269        Self: Sized,
3270    {
3271        assert!(position <= v.len());
3272        // We only conditionally compile on versions on which `try_reserve` is
3273        // stable; the Clippy lint is a false positive.
3274        v.try_reserve(additional).map_err(|_| AllocError)?;
3275        // SAFETY: The `try_reserve` call guarantees that these cannot overflow:
3276        // * `ptr.add(position)`
3277        // * `position + additional`
3278        // * `v.len() + additional`
3279        //
3280        // `v.len() - position` cannot overflow because we asserted that
3281        // `position <= v.len()`.
3282        unsafe {
3283            // This is a potentially overlapping copy.
3284            let ptr = v.as_mut_ptr();
3285            #[allow(clippy::arithmetic_side_effects)]
3286            ptr.add(position).copy_to(ptr.add(position + additional), v.len() - position);
3287            ptr.add(position).write_bytes(0, additional);
3288            #[allow(clippy::arithmetic_side_effects)]
3289            v.set_len(v.len() + additional);
3290        }
3291
3292        Ok(())
3293    }
3294}
3295
3296/// Analyzes whether a type is [`FromBytes`].
3297///
3298/// This derive analyzes, at compile time, whether the annotated type satisfies
3299/// the [safety conditions] of `FromBytes` and implements `FromBytes` and its
3300/// supertraits if it is sound to do so. This derive can be applied to structs,
3301/// enums, and unions;
3302/// e.g.:
3303///
3304/// ```
3305/// # use zerocopy_derive::{FromBytes, FromZeros, Immutable};
3306/// #[derive(FromBytes)]
3307/// struct MyStruct {
3308/// # /*
3309///     ...
3310/// # */
3311/// }
3312///
3313/// #[derive(FromBytes)]
3314/// #[repr(u8)]
3315/// enum MyEnum {
3316/// #   V00, V01, V02, V03, V04, V05, V06, V07, V08, V09, V0A, V0B, V0C, V0D, V0E,
3317/// #   V0F, V10, V11, V12, V13, V14, V15, V16, V17, V18, V19, V1A, V1B, V1C, V1D,
3318/// #   V1E, V1F, V20, V21, V22, V23, V24, V25, V26, V27, V28, V29, V2A, V2B, V2C,
3319/// #   V2D, V2E, V2F, V30, V31, V32, V33, V34, V35, V36, V37, V38, V39, V3A, V3B,
3320/// #   V3C, V3D, V3E, V3F, V40, V41, V42, V43, V44, V45, V46, V47, V48, V49, V4A,
3321/// #   V4B, V4C, V4D, V4E, V4F, V50, V51, V52, V53, V54, V55, V56, V57, V58, V59,
3322/// #   V5A, V5B, V5C, V5D, V5E, V5F, V60, V61, V62, V63, V64, V65, V66, V67, V68,
3323/// #   V69, V6A, V6B, V6C, V6D, V6E, V6F, V70, V71, V72, V73, V74, V75, V76, V77,
3324/// #   V78, V79, V7A, V7B, V7C, V7D, V7E, V7F, V80, V81, V82, V83, V84, V85, V86,
3325/// #   V87, V88, V89, V8A, V8B, V8C, V8D, V8E, V8F, V90, V91, V92, V93, V94, V95,
3326/// #   V96, V97, V98, V99, V9A, V9B, V9C, V9D, V9E, V9F, VA0, VA1, VA2, VA3, VA4,
3327/// #   VA5, VA6, VA7, VA8, VA9, VAA, VAB, VAC, VAD, VAE, VAF, VB0, VB1, VB2, VB3,
3328/// #   VB4, VB5, VB6, VB7, VB8, VB9, VBA, VBB, VBC, VBD, VBE, VBF, VC0, VC1, VC2,
3329/// #   VC3, VC4, VC5, VC6, VC7, VC8, VC9, VCA, VCB, VCC, VCD, VCE, VCF, VD0, VD1,
3330/// #   VD2, VD3, VD4, VD5, VD6, VD7, VD8, VD9, VDA, VDB, VDC, VDD, VDE, VDF, VE0,
3331/// #   VE1, VE2, VE3, VE4, VE5, VE6, VE7, VE8, VE9, VEA, VEB, VEC, VED, VEE, VEF,
3332/// #   VF0, VF1, VF2, VF3, VF4, VF5, VF6, VF7, VF8, VF9, VFA, VFB, VFC, VFD, VFE,
3333/// #   VFF,
3334/// # /*
3335///     ...
3336/// # */
3337/// }
3338///
3339/// #[derive(FromBytes, Immutable)]
3340/// union MyUnion {
3341/// #   variant: u8,
3342/// # /*
3343///     ...
3344/// # */
3345/// }
3346/// ```
3347///
3348/// [safety conditions]: trait@FromBytes#safety
3349///
3350/// # Analysis
3351///
3352/// *This section describes, roughly, the analysis performed by this derive to
3353/// determine whether it is sound to implement `FromBytes` for a given type.
3354/// Unless you are modifying the implementation of this derive, or attempting to
3355/// manually implement `FromBytes` for a type yourself, you don't need to read
3356/// this section.*
3357///
3358/// If a type has the following properties, then this derive can implement
3359/// `FromBytes` for that type:
3360///
3361/// - If the type is a struct, all of its fields must be `FromBytes`.
3362/// - If the type is an enum:
3363///   - It must have a defined representation (`repr`s `C`, `u8`, `u16`, `u32`,
3364///     `u64`, `usize`, `i8`, `i16`, `i32`, `i64`, or `isize`).
3365///   - The maximum number of discriminants must be used (so that every possible
3366///     bit pattern is a valid one). Be very careful when using the `C`,
3367///     `usize`, or `isize` representations, as their size is
3368///     platform-dependent.
3369///   - Its fields must be `FromBytes`.
3370///
3371/// This analysis is subject to change. Unsafe code may *only* rely on the
3372/// documented [safety conditions] of `FromBytes`, and must *not* rely on the
3373/// implementation details of this derive.
3374///
3375/// ## Why isn't an explicit representation required for structs?
3376///
3377/// Neither this derive, nor the [safety conditions] of `FromBytes`, requires
3378/// that structs are marked with `#[repr(C)]`.
3379///
3380/// Per the [Rust reference][reference],
3381///
3382/// > The representation of a type can change the padding between fields, but
3383/// > does not change the layout of the fields themselves.
3384///
3385/// [reference]: https://doc.rust-lang.org/reference/type-layout.html#representations
3386///
3387/// Since the layout of structs only consists of padding bytes and field bytes,
3388/// a struct is soundly `FromBytes` if:
3389/// 1. its padding is soundly `FromBytes`, and
3390/// 2. its fields are soundly `FromBytes`.
3391///
3392/// The first condition is always satisfied: padding bytes do not have
3393/// any validity constraints. A [discussion] of this topic in the Unsafe Code
3394/// Guidelines Working Group concluded that it would be virtually unimaginable
3395/// for future versions of rustc to add validity constraints to padding bytes.
3396///
3397/// [discussion]: https://github.com/rust-lang/unsafe-code-guidelines/issues/174
3398///
3399/// Whether a struct is soundly `FromBytes` therefore solely depends on whether
3400/// its fields are `FromBytes`.
3401// FIXME(#146): Document why we don't require an enum to have an explicit `repr`
3402// attribute.
3403#[cfg(any(feature = "derive", test))]
3404#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
3405pub use zerocopy_derive::FromBytes;
3406
3407/// Types for which any bit pattern is valid.
3408///
3409/// Any memory region of the appropriate length which contains initialized bytes
3410/// can be viewed as any `FromBytes` type with no runtime overhead. This is
3411/// useful for efficiently parsing bytes as structured data.
3412///
3413/// # Warning: Padding bytes
3414///
3415/// Note that, when a value is moved or copied, only the non-padding bytes of
3416/// that value are guaranteed to be preserved. It is unsound to assume that
3417/// values written to padding bytes are preserved after a move or copy. For
3418/// example, the following is unsound:
3419///
3420/// ```rust,no_run
3421/// use core::mem::{size_of, transmute};
3422/// use zerocopy::FromZeros;
3423/// # use zerocopy_derive::*;
3424///
3425/// // Assume `Foo` is a type with padding bytes.
3426/// #[derive(FromZeros, Default)]
3427/// struct Foo {
3428/// # /*
3429///     ...
3430/// # */
3431/// }
3432///
3433/// let mut foo: Foo = Foo::default();
3434/// FromZeros::zero(&mut foo);
3435/// // UNSOUND: Although `FromZeros::zero` writes zeros to all bytes of `foo`,
3436/// // those writes are not guaranteed to be preserved in padding bytes when
3437/// // `foo` is moved, so this may expose padding bytes as `u8`s.
3438/// let foo_bytes: [u8; size_of::<Foo>()] = unsafe { transmute(foo) };
3439/// ```
3440///
3441/// # Implementation
3442///
3443/// **Do not implement this trait yourself!** Instead, use
3444/// [`#[derive(FromBytes)]`][derive]; e.g.:
3445///
3446/// ```
3447/// # use zerocopy_derive::{FromBytes, Immutable};
3448/// #[derive(FromBytes)]
3449/// struct MyStruct {
3450/// # /*
3451///     ...
3452/// # */
3453/// }
3454///
3455/// #[derive(FromBytes)]
3456/// #[repr(u8)]
3457/// enum MyEnum {
3458/// #   V00, V01, V02, V03, V04, V05, V06, V07, V08, V09, V0A, V0B, V0C, V0D, V0E,
3459/// #   V0F, V10, V11, V12, V13, V14, V15, V16, V17, V18, V19, V1A, V1B, V1C, V1D,
3460/// #   V1E, V1F, V20, V21, V22, V23, V24, V25, V26, V27, V28, V29, V2A, V2B, V2C,
3461/// #   V2D, V2E, V2F, V30, V31, V32, V33, V34, V35, V36, V37, V38, V39, V3A, V3B,
3462/// #   V3C, V3D, V3E, V3F, V40, V41, V42, V43, V44, V45, V46, V47, V48, V49, V4A,
3463/// #   V4B, V4C, V4D, V4E, V4F, V50, V51, V52, V53, V54, V55, V56, V57, V58, V59,
3464/// #   V5A, V5B, V5C, V5D, V5E, V5F, V60, V61, V62, V63, V64, V65, V66, V67, V68,
3465/// #   V69, V6A, V6B, V6C, V6D, V6E, V6F, V70, V71, V72, V73, V74, V75, V76, V77,
3466/// #   V78, V79, V7A, V7B, V7C, V7D, V7E, V7F, V80, V81, V82, V83, V84, V85, V86,
3467/// #   V87, V88, V89, V8A, V8B, V8C, V8D, V8E, V8F, V90, V91, V92, V93, V94, V95,
3468/// #   V96, V97, V98, V99, V9A, V9B, V9C, V9D, V9E, V9F, VA0, VA1, VA2, VA3, VA4,
3469/// #   VA5, VA6, VA7, VA8, VA9, VAA, VAB, VAC, VAD, VAE, VAF, VB0, VB1, VB2, VB3,
3470/// #   VB4, VB5, VB6, VB7, VB8, VB9, VBA, VBB, VBC, VBD, VBE, VBF, VC0, VC1, VC2,
3471/// #   VC3, VC4, VC5, VC6, VC7, VC8, VC9, VCA, VCB, VCC, VCD, VCE, VCF, VD0, VD1,
3472/// #   VD2, VD3, VD4, VD5, VD6, VD7, VD8, VD9, VDA, VDB, VDC, VDD, VDE, VDF, VE0,
3473/// #   VE1, VE2, VE3, VE4, VE5, VE6, VE7, VE8, VE9, VEA, VEB, VEC, VED, VEE, VEF,
3474/// #   VF0, VF1, VF2, VF3, VF4, VF5, VF6, VF7, VF8, VF9, VFA, VFB, VFC, VFD, VFE,
3475/// #   VFF,
3476/// # /*
3477///     ...
3478/// # */
3479/// }
3480///
3481/// #[derive(FromBytes, Immutable)]
3482/// union MyUnion {
3483/// #   variant: u8,
3484/// # /*
3485///     ...
3486/// # */
3487/// }
3488/// ```
3489///
3490/// This derive performs a sophisticated, compile-time safety analysis to
3491/// determine whether a type is `FromBytes`.
3492///
3493/// # Safety
3494///
3495/// *This section describes what is required in order for `T: FromBytes`, and
3496/// what unsafe code may assume of such types. If you don't plan on implementing
3497/// `FromBytes` manually, and you don't plan on writing unsafe code that
3498/// operates on `FromBytes` types, then you don't need to read this section.*
3499///
3500/// If `T: FromBytes`, then unsafe code may assume that it is sound to produce a
3501/// `T` whose bytes are initialized to any sequence of valid `u8`s (in other
3502/// words, any byte value which is not uninitialized). If a type is marked as
3503/// `FromBytes` which violates this contract, it may cause undefined behavior.
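///
/// For example, unsafe code may rely on this guarantee roughly as follows.
/// This is only a sketch of the reasoning; the helper `read_prefix` is
/// hypothetical and is not part of zerocopy's API:
///
/// ```
/// use zerocopy::FromBytes;
///
/// fn read_prefix<T: FromBytes>(bytes: &[u8]) -> Option<T> {
///     if bytes.len() < core::mem::size_of::<T>() {
///         return None;
///     }
///     // SAFETY: `T: FromBytes` guarantees that any sequence of initialized
///     // bytes is a valid `T`, and we just confirmed that `bytes` contains at
///     // least `size_of::<T>()` bytes. `read_unaligned` tolerates an unaligned
///     // source pointer.
///     Some(unsafe { core::ptr::read_unaligned(bytes.as_ptr().cast::<T>()) })
/// }
///
/// let n: Option<u32> = read_prefix(&[1, 0, 0, 0, 9, 9]);
/// assert_eq!(n, Some(u32::from_ne_bytes([1, 0, 0, 0])));
/// ```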
3504///
3505/// `#[derive(FromBytes)]` only permits [types which satisfy these
3506/// requirements][derive-analysis].
3507///
3508#[cfg_attr(
3509    feature = "derive",
3510    doc = "[derive]: zerocopy_derive::FromBytes",
3511    doc = "[derive-analysis]: zerocopy_derive::FromBytes#analysis"
3512)]
3513#[cfg_attr(
3514    not(feature = "derive"),
3515    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.FromBytes.html"),
3516    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.FromBytes.html#analysis"),
3517)]
3518#[cfg_attr(
3519    zerocopy_diagnostic_on_unimplemented_1_78_0,
3520    diagnostic::on_unimplemented(note = "Consider adding `#[derive(FromBytes)]` to `{Self}`")
3521)]
3522pub unsafe trait FromBytes: FromZeros {
3523    // The `Self: Sized` bound makes it so that `FromBytes` is still object
3524    // safe.
3525    #[doc(hidden)]
3526    fn only_derive_is_allowed_to_implement_this_trait()
3527    where
3528        Self: Sized;
3529
3530    /// Interprets the given `source` as a `&Self`.
3531    ///
3532    /// This method attempts to return a reference to `source` interpreted as a
3533    /// `Self`. If the length of `source` is not a [valid size of
3534    /// `Self`][valid-size], or if `source` is not appropriately aligned, this
3535    /// returns `Err`. If [`Self: Unaligned`][self-unaligned], you can
3536    /// [infallibly discard the alignment error][size-error-from].
3537    ///
3538    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
3539    ///
3540    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
3541    /// [self-unaligned]: Unaligned
3542    /// [size-error-from]: error/struct.SizeError.html#method.from-1
3543    /// [slice-dst]: KnownLayout#dynamically-sized-types
3544    ///
3545    /// # Compile-Time Assertions
3546    ///
3547    /// This method cannot yet be used on unsized types whose dynamically-sized
3548    /// component is zero-sized. Attempting to use this method on such types
3549    /// results in a compile-time assertion error; e.g.:
3550    ///
3551    /// ```compile_fail,E0080
3552    /// use zerocopy::*;
3553    /// # use zerocopy_derive::*;
3554    ///
3555    /// #[derive(FromBytes, Immutable, KnownLayout)]
3556    /// #[repr(C)]
3557    /// struct ZSTy {
3558    ///     leading_sized: u16,
3559    ///     trailing_dst: [()],
3560    /// }
3561    ///
3562    /// let _ = ZSTy::ref_from_bytes(0u16.as_bytes()); // ⚠ Compile Error!
3563    /// ```
3564    ///
3565    /// # Examples
3566    ///
3567    /// ```
3568    /// use zerocopy::FromBytes;
3569    /// # use zerocopy_derive::*;
3570    ///
3571    /// #[derive(FromBytes, KnownLayout, Immutable)]
3572    /// #[repr(C)]
3573    /// struct PacketHeader {
3574    ///     src_port: [u8; 2],
3575    ///     dst_port: [u8; 2],
3576    ///     length: [u8; 2],
3577    ///     checksum: [u8; 2],
3578    /// }
3579    ///
3580    /// #[derive(FromBytes, KnownLayout, Immutable)]
3581    /// #[repr(C)]
3582    /// struct Packet {
3583    ///     header: PacketHeader,
3584    ///     body: [u8],
3585    /// }
3586    ///
3587    /// // These bytes encode a `Packet`.
3588    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11][..];
3589    ///
3590    /// let packet = Packet::ref_from_bytes(bytes).unwrap();
3591    ///
3592    /// assert_eq!(packet.header.src_port, [0, 1]);
3593    /// assert_eq!(packet.header.dst_port, [2, 3]);
3594    /// assert_eq!(packet.header.length, [4, 5]);
3595    /// assert_eq!(packet.header.checksum, [6, 7]);
3596    /// assert_eq!(packet.body, [8, 9, 10, 11]);
3597    /// ```
3598    #[must_use = "has no side effects"]
3599    #[inline]
3600    fn ref_from_bytes(source: &[u8]) -> Result<&Self, CastError<&[u8], Self>>
3601    where
3602        Self: KnownLayout + Immutable,
3603    {
3604        static_assert_dst_is_not_zst!(Self);
3605        match Ptr::from_ref(source).try_cast_into_no_leftover::<_, BecauseImmutable>(None) {
3606            Ok(ptr) => Ok(ptr.recall_validity().as_ref()),
3607            Err(err) => Err(err.map_src(|src| src.as_ref())),
3608        }
3609    }
3610
3611    /// Interprets the prefix of the given `source` as a `&Self` without
3612    /// copying.
3613    ///
3614    /// This method computes the [largest possible size of `Self`][valid-size]
3615    /// that can fit in the leading bytes of `source`, then attempts to return
3616    /// both a reference to those bytes interpreted as a `Self`, and a reference
3617    /// to the remaining bytes. If there are insufficient bytes, or if `source`
3618    /// is not appropriately aligned, this returns `Err`. If [`Self:
3619    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
3620    /// error][size-error-from].
3621    ///
3622    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
3623    ///
3624    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
3625    /// [self-unaligned]: Unaligned
3626    /// [size-error-from]: error/struct.SizeError.html#method.from-1
3627    /// [slice-dst]: KnownLayout#dynamically-sized-types
3628    ///
3629    /// # Compile-Time Assertions
3630    ///
3631    /// This method cannot yet be used on unsized types whose dynamically-sized
3632    /// component is zero-sized. See [`ref_from_prefix_with_elems`], which does
3633    /// support such types. Attempting to use this method on such types results
3634    /// in a compile-time assertion error; e.g.:
3635    ///
3636    /// ```compile_fail,E0080
3637    /// use zerocopy::*;
3638    /// # use zerocopy_derive::*;
3639    ///
3640    /// #[derive(FromBytes, Immutable, KnownLayout)]
3641    /// #[repr(C)]
3642    /// struct ZSTy {
3643    ///     leading_sized: u16,
3644    ///     trailing_dst: [()],
3645    /// }
3646    ///
3647    /// let _ = ZSTy::ref_from_prefix(0u16.as_bytes()); // ⚠ Compile Error!
3648    /// ```
3649    ///
3650    /// [`ref_from_prefix_with_elems`]: FromBytes::ref_from_prefix_with_elems
3651    ///
3652    /// # Examples
3653    ///
3654    /// ```
3655    /// use zerocopy::FromBytes;
3656    /// # use zerocopy_derive::*;
3657    ///
3658    /// #[derive(FromBytes, KnownLayout, Immutable)]
3659    /// #[repr(C)]
3660    /// struct PacketHeader {
3661    ///     src_port: [u8; 2],
3662    ///     dst_port: [u8; 2],
3663    ///     length: [u8; 2],
3664    ///     checksum: [u8; 2],
3665    /// }
3666    ///
3667    /// #[derive(FromBytes, KnownLayout, Immutable)]
3668    /// #[repr(C)]
3669    /// struct Packet {
3670    ///     header: PacketHeader,
3671    ///     body: [[u8; 2]],
3672    /// }
3673    ///
3674    /// // These are more bytes than are needed to encode a `Packet`.
3675    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14][..];
3676    ///
3677    /// let (packet, suffix) = Packet::ref_from_prefix(bytes).unwrap();
3678    ///
3679    /// assert_eq!(packet.header.src_port, [0, 1]);
3680    /// assert_eq!(packet.header.dst_port, [2, 3]);
3681    /// assert_eq!(packet.header.length, [4, 5]);
3682    /// assert_eq!(packet.header.checksum, [6, 7]);
3683    /// assert_eq!(packet.body, [[8, 9], [10, 11], [12, 13]]);
3684    /// assert_eq!(suffix, &[14u8][..]);
3685    /// ```
3686    #[must_use = "has no side effects"]
3687    #[inline]
3688    fn ref_from_prefix(source: &[u8]) -> Result<(&Self, &[u8]), CastError<&[u8], Self>>
3689    where
3690        Self: KnownLayout + Immutable,
3691    {
3692        static_assert_dst_is_not_zst!(Self);
3693        ref_from_prefix_suffix(source, None, CastType::Prefix)
3694    }
3695
3696    /// Interprets the suffix of the given bytes as a `&Self`.
3697    ///
3698    /// This method computes the [largest possible size of `Self`][valid-size]
3699    /// that can fit in the trailing bytes of `source`, then attempts to return
3700    /// both a reference to those bytes interpreted as a `Self`, and a reference
3701    /// to the preceding bytes. If there are insufficient bytes, or if that
3702    /// suffix of `source` is not appropriately aligned, this returns `Err`. If
3703    /// [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
3704    /// alignment error][size-error-from].
3705    ///
3706    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
3707    ///
3708    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
3709    /// [self-unaligned]: Unaligned
3710    /// [size-error-from]: error/struct.SizeError.html#method.from-1
3711    /// [slice-dst]: KnownLayout#dynamically-sized-types
3712    ///
3713    /// # Compile-Time Assertions
3714    ///
3715    /// This method cannot yet be used on unsized types whose dynamically-sized
3716    /// component is zero-sized. See [`ref_from_suffix_with_elems`], which does
3717    /// support such types. Attempting to use this method on such types results
3718    /// in a compile-time assertion error; e.g.:
3719    ///
3720    /// ```compile_fail,E0080
3721    /// use zerocopy::*;
3722    /// # use zerocopy_derive::*;
3723    ///
3724    /// #[derive(FromBytes, Immutable, KnownLayout)]
3725    /// #[repr(C)]
3726    /// struct ZSTy {
3727    ///     leading_sized: u16,
3728    ///     trailing_dst: [()],
3729    /// }
3730    ///
3731    /// let _ = ZSTy::ref_from_suffix(0u16.as_bytes()); // ⚠ Compile Error!
3732    /// ```
3733    ///
3734    /// [`ref_from_suffix_with_elems`]: FromBytes::ref_from_suffix_with_elems
3735    ///
3736    /// # Examples
3737    ///
3738    /// ```
3739    /// use zerocopy::FromBytes;
3740    /// # use zerocopy_derive::*;
3741    ///
3742    /// #[derive(FromBytes, Immutable, KnownLayout)]
3743    /// #[repr(C)]
3744    /// struct PacketTrailer {
3745    ///     frame_check_sequence: [u8; 4],
3746    /// }
3747    ///
3748    /// // These are more bytes than are needed to encode a `PacketTrailer`.
3749    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
3750    ///
3751    /// let (prefix, trailer) = PacketTrailer::ref_from_suffix(bytes).unwrap();
3752    ///
3753    /// assert_eq!(prefix, &[0, 1, 2, 3, 4, 5][..]);
3754    /// assert_eq!(trailer.frame_check_sequence, [6, 7, 8, 9]);
3755    /// ```
3756    #[must_use = "has no side effects"]
3757    #[inline]
3758    fn ref_from_suffix(source: &[u8]) -> Result<(&[u8], &Self), CastError<&[u8], Self>>
3759    where
3760        Self: Immutable + KnownLayout,
3761    {
3762        static_assert_dst_is_not_zst!(Self);
3763        ref_from_prefix_suffix(source, None, CastType::Suffix).map(swap)
3764    }
3765
3766    /// Interprets the given `source` as a `&mut Self`.
3767    ///
3768    /// This method attempts to return a reference to `source` interpreted as a
3769    /// `Self`. If the length of `source` is not a [valid size of
3770    /// `Self`][valid-size], or if `source` is not appropriately aligned, this
3771    /// returns `Err`. If [`Self: Unaligned`][self-unaligned], you can
3772    /// [infallibly discard the alignment error][size-error-from].
3773    ///
3774    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
3775    ///
3776    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
3777    /// [self-unaligned]: Unaligned
3778    /// [size-error-from]: error/struct.SizeError.html#method.from-1
3779    /// [slice-dst]: KnownLayout#dynamically-sized-types
3780    ///
3781    /// # Compile-Time Assertions
3782    ///
3783    /// This method cannot yet be used on unsized types whose dynamically-sized
3784    /// component is zero-sized. See [`mut_from_prefix_with_elems`], which does
3785    /// support such types. Attempting to use this method on such types results
3786    /// in a compile-time assertion error; e.g.:
3787    ///
3788    /// ```compile_fail,E0080
3789    /// use zerocopy::*;
3790    /// # use zerocopy_derive::*;
3791    ///
3792    /// #[derive(FromBytes, Immutable, IntoBytes, KnownLayout)]
3793    /// #[repr(C, packed)]
3794    /// struct ZSTy {
3795    ///     leading_sized: [u8; 2],
3796    ///     trailing_dst: [()],
3797    /// }
3798    ///
3799    /// let mut source = [85, 85];
3800    /// let _ = ZSTy::mut_from_bytes(&mut source[..]); // ⚠ Compile Error!
3801    /// ```
3802    ///
3803    /// [`mut_from_prefix_with_elems`]: FromBytes::mut_from_prefix_with_elems
3804    ///
3805    /// # Examples
3806    ///
3807    /// ```
3808    /// use zerocopy::FromBytes;
3809    /// # use zerocopy_derive::*;
3810    ///
3811    /// #[derive(FromBytes, IntoBytes, KnownLayout, Immutable)]
3812    /// #[repr(C)]
3813    /// struct PacketHeader {
3814    ///     src_port: [u8; 2],
3815    ///     dst_port: [u8; 2],
3816    ///     length: [u8; 2],
3817    ///     checksum: [u8; 2],
3818    /// }
3819    ///
3820    /// // These bytes encode a `PacketHeader`.
3821    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7][..];
3822    ///
3823    /// let header = PacketHeader::mut_from_bytes(bytes).unwrap();
3824    ///
3825    /// assert_eq!(header.src_port, [0, 1]);
3826    /// assert_eq!(header.dst_port, [2, 3]);
3827    /// assert_eq!(header.length, [4, 5]);
3828    /// assert_eq!(header.checksum, [6, 7]);
3829    ///
3830    /// header.checksum = [0, 0];
3831    ///
3832    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 0, 0]);
3833    /// ```
3834    #[must_use = "has no side effects"]
3835    #[inline]
3836    fn mut_from_bytes(source: &mut [u8]) -> Result<&mut Self, CastError<&mut [u8], Self>>
3837    where
3838        Self: IntoBytes + KnownLayout,
3839    {
3840        static_assert_dst_is_not_zst!(Self);
3841        match Ptr::from_mut(source).try_cast_into_no_leftover::<_, BecauseExclusive>(None) {
3842            Ok(ptr) => Ok(ptr.recall_validity::<_, (_, (_, _))>().as_mut()),
3843            Err(err) => Err(err.map_src(|src| src.as_mut())),
3844        }
3845    }
3846
3847    /// Interprets the prefix of the given `source` as a `&mut Self` without
3848    /// copying.
3849    ///
3850    /// This method computes the [largest possible size of `Self`][valid-size]
3851    /// that can fit in the leading bytes of `source`, then attempts to return
3852    /// both a reference to those bytes interpreted as a `Self`, and a reference
3853    /// to the remaining bytes. If there are insufficient bytes, or if `source`
3854    /// is not appropriately aligned, this returns `Err`. If [`Self:
3855    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
3856    /// error][size-error-from].
3857    ///
3858    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
3859    ///
3860    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
3861    /// [self-unaligned]: Unaligned
3862    /// [size-error-from]: error/struct.SizeError.html#method.from-1
3863    /// [slice-dst]: KnownLayout#dynamically-sized-types
3864    ///
3865    /// # Compile-Time Assertions
3866    ///
3867    /// This method cannot yet be used on unsized types whose dynamically-sized
3868    /// component is zero-sized. See [`mut_from_suffix_with_elems`], which does
3869    /// support such types. Attempting to use this method on such types results
3870    /// in a compile-time assertion error; e.g.:
3871    ///
3872    /// ```compile_fail,E0080
3873    /// use zerocopy::*;
3874    /// # use zerocopy_derive::*;
3875    ///
3876    /// #[derive(FromBytes, Immutable, IntoBytes, KnownLayout)]
3877    /// #[repr(C, packed)]
3878    /// struct ZSTy {
3879    ///     leading_sized: [u8; 2],
3880    ///     trailing_dst: [()],
3881    /// }
3882    ///
3883    /// let mut source = [85, 85];
3884    /// let _ = ZSTy::mut_from_prefix(&mut source[..]); // ⚠ Compile Error!
3885    /// ```
3886    ///
3887    /// [`mut_from_suffix_with_elems`]: FromBytes::mut_from_suffix_with_elems
3888    ///
3889    /// # Examples
3890    ///
3891    /// ```
3892    /// use zerocopy::FromBytes;
3893    /// # use zerocopy_derive::*;
3894    ///
3895    /// #[derive(FromBytes, IntoBytes, KnownLayout, Immutable)]
3896    /// #[repr(C)]
3897    /// struct PacketHeader {
3898    ///     src_port: [u8; 2],
3899    ///     dst_port: [u8; 2],
3900    ///     length: [u8; 2],
3901    ///     checksum: [u8; 2],
3902    /// }
3903    ///
3904    /// // These are more bytes than are needed to encode a `PacketHeader`.
3905    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
3906    ///
3907    /// let (header, body) = PacketHeader::mut_from_prefix(bytes).unwrap();
3908    ///
3909    /// assert_eq!(header.src_port, [0, 1]);
3910    /// assert_eq!(header.dst_port, [2, 3]);
3911    /// assert_eq!(header.length, [4, 5]);
3912    /// assert_eq!(header.checksum, [6, 7]);
3913    /// assert_eq!(body, &[8, 9][..]);
3914    ///
3915    /// header.checksum = [0, 0];
3916    /// body.fill(1);
3917    ///
3918    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 0, 0, 1, 1]);
3919    /// ```
3920    #[must_use = "has no side effects"]
3921    #[inline]
3922    fn mut_from_prefix(
3923        source: &mut [u8],
3924    ) -> Result<(&mut Self, &mut [u8]), CastError<&mut [u8], Self>>
3925    where
3926        Self: IntoBytes + KnownLayout,
3927    {
3928        static_assert_dst_is_not_zst!(Self);
3929        mut_from_prefix_suffix(source, None, CastType::Prefix)
3930    }
3931
3932    /// Interprets the suffix of the given `source` as a `&mut Self` without
3933    /// copying.
3934    ///
3935    /// This method computes the [largest possible size of `Self`][valid-size]
3936    /// that can fit in the trailing bytes of `source`, then attempts to return
3937    /// both a reference to those bytes interpreted as a `Self`, and a reference
3938    /// to the preceding bytes. If there are insufficient bytes, or if that
3939    /// suffix of `source` is not appropriately aligned, this returns `Err`. If
3940    /// [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
3941    /// alignment error][size-error-from].
3942    ///
3943    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
3944    ///
3945    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
3946    /// [self-unaligned]: Unaligned
3947    /// [size-error-from]: error/struct.SizeError.html#method.from-1
3948    /// [slice-dst]: KnownLayout#dynamically-sized-types
3949    ///
3950    /// # Compile-Time Assertions
3951    ///
3952    /// This method cannot yet be used on unsized types whose dynamically-sized
3953    /// component is zero-sized. Attempting to use this method on such types
3954    /// results in a compile-time assertion error; e.g.:
3955    ///
3956    /// ```compile_fail,E0080
3957    /// use zerocopy::*;
3958    /// # use zerocopy_derive::*;
3959    ///
3960    /// #[derive(FromBytes, Immutable, IntoBytes, KnownLayout)]
3961    /// #[repr(C, packed)]
3962    /// struct ZSTy {
3963    ///     leading_sized: [u8; 2],
3964    ///     trailing_dst: [()],
3965    /// }
3966    ///
3967    /// let mut source = [85, 85];
3968    /// let _ = ZSTy::mut_from_suffix(&mut source[..]); // ⚠ Compile Error!
3969    /// ```
3970    ///
3971    /// # Examples
3972    ///
3973    /// ```
3974    /// use zerocopy::FromBytes;
3975    /// # use zerocopy_derive::*;
3976    ///
3977    /// #[derive(FromBytes, IntoBytes, KnownLayout, Immutable)]
3978    /// #[repr(C)]
3979    /// struct PacketTrailer {
3980    ///     frame_check_sequence: [u8; 4],
3981    /// }
3982    ///
3983    /// // These are more bytes than are needed to encode a `PacketTrailer`.
3984    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
3985    ///
3986    /// let (prefix, trailer) = PacketTrailer::mut_from_suffix(bytes).unwrap();
3987    ///
3988    /// assert_eq!(prefix, &[0u8, 1, 2, 3, 4, 5][..]);
3989    /// assert_eq!(trailer.frame_check_sequence, [6, 7, 8, 9]);
3990    ///
3991    /// prefix.fill(0);
3992    /// trailer.frame_check_sequence.fill(1);
3993    ///
3994    /// assert_eq!(bytes, [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]);
3995    /// ```
3996    #[must_use = "has no side effects"]
3997    #[inline]
3998    fn mut_from_suffix(
3999        source: &mut [u8],
4000    ) -> Result<(&mut [u8], &mut Self), CastError<&mut [u8], Self>>
4001    where
4002        Self: IntoBytes + KnownLayout,
4003    {
4004        static_assert_dst_is_not_zst!(Self);
4005        mut_from_prefix_suffix(source, None, CastType::Suffix).map(swap)
4006    }
4007
4008    /// Interprets the given `source` as a `&Self` with a DST length equal to
4009    /// `count`.
4010    ///
4011    /// This method attempts to return a reference to `source` interpreted as a
4012    /// `Self` with `count` trailing elements. If the length of `source` is not
4013    /// equal to the size of `Self` with `count` elements, or if `source` is not
4014    /// appropriately aligned, this returns `Err`. If [`Self:
4015    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
4016    /// error][size-error-from].
4017    ///
4018    /// [self-unaligned]: Unaligned
4019    /// [size-error-from]: error/struct.SizeError.html#method.from-1
4020    ///
4021    /// # Examples
4022    ///
4023    /// ```
4024    /// use zerocopy::FromBytes;
4025    /// # use zerocopy_derive::*;
4026    ///
4027    /// # #[derive(Debug, PartialEq, Eq)]
4028    /// #[derive(FromBytes, Immutable)]
4029    /// #[repr(C)]
4030    /// struct Pixel {
4031    ///     r: u8,
4032    ///     g: u8,
4033    ///     b: u8,
4034    ///     a: u8,
4035    /// }
4036    ///
4037    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7][..];
4038    ///
4039    /// let pixels = <[Pixel]>::ref_from_bytes_with_elems(bytes, 2).unwrap();
4040    ///
4041    /// assert_eq!(pixels, &[
4042    ///     Pixel { r: 0, g: 1, b: 2, a: 3 },
4043    ///     Pixel { r: 4, g: 5, b: 6, a: 7 },
4044    /// ]);
4045    ///
4046    /// ```
4047    ///
4048    /// Since an explicit `count` is provided, this method supports types with
4049    /// zero-sized trailing slice elements. Methods such as [`ref_from_bytes`]
4050    /// which do not take an explicit count do not support such types.
4051    ///
4052    /// ```
4053    /// use zerocopy::*;
4054    /// # use zerocopy_derive::*;
4055    ///
4056    /// #[derive(FromBytes, Immutable, KnownLayout)]
4057    /// #[repr(C)]
4058    /// struct ZSTy {
4059    ///     leading_sized: [u8; 2],
4060    ///     trailing_dst: [()],
4061    /// }
4062    ///
4063    /// let src = &[85, 85][..];
4064    /// let zsty = ZSTy::ref_from_bytes_with_elems(src, 42).unwrap();
4065    /// assert_eq!(zsty.trailing_dst.len(), 42);
4066    /// ```
4067    ///
4068    /// [`ref_from_bytes`]: FromBytes::ref_from_bytes
4069    #[must_use = "has no side effects"]
4070    #[inline]
4071    fn ref_from_bytes_with_elems(
4072        source: &[u8],
4073        count: usize,
4074    ) -> Result<&Self, CastError<&[u8], Self>>
4075    where
4076        Self: KnownLayout<PointerMetadata = usize> + Immutable,
4077    {
4078        let source = Ptr::from_ref(source);
4079        let maybe_slf = source.try_cast_into_no_leftover::<_, BecauseImmutable>(Some(count));
4080        match maybe_slf {
4081            Ok(slf) => Ok(slf.recall_validity().as_ref()),
4082            Err(err) => Err(err.map_src(|s| s.as_ref())),
4083        }
4084    }
4085
4086    /// Interprets the prefix of the given `source` as a DST `&Self` with length
4087    /// equal to `count`.
4088    ///
4089    /// This method attempts to return a reference to the prefix of `source`
4090    /// interpreted as a `Self` with `count` trailing elements, and a reference
4091    /// to the remaining bytes. If there are insufficient bytes, or if `source`
4092    /// is not appropriately aligned, this returns `Err`. If [`Self:
4093    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
4094    /// error][size-error-from].
4095    ///
4096    /// [self-unaligned]: Unaligned
4097    /// [size-error-from]: error/struct.SizeError.html#method.from-1
4098    ///
4099    /// # Examples
4100    ///
4101    /// ```
4102    /// use zerocopy::FromBytes;
4103    /// # use zerocopy_derive::*;
4104    ///
4105    /// # #[derive(Debug, PartialEq, Eq)]
4106    /// #[derive(FromBytes, Immutable)]
4107    /// #[repr(C)]
4108    /// struct Pixel {
4109    ///     r: u8,
4110    ///     g: u8,
4111    ///     b: u8,
4112    ///     a: u8,
4113    /// }
4114    ///
4115    /// // These are more bytes than are needed to encode two `Pixel`s.
4116    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
4117    ///
4118    /// let (pixels, suffix) = <[Pixel]>::ref_from_prefix_with_elems(bytes, 2).unwrap();
4119    ///
4120    /// assert_eq!(pixels, &[
4121    ///     Pixel { r: 0, g: 1, b: 2, a: 3 },
4122    ///     Pixel { r: 4, g: 5, b: 6, a: 7 },
4123    /// ]);
4124    ///
4125    /// assert_eq!(suffix, &[8, 9]);
4126    /// ```
4127    ///
4128    /// Since an explicit `count` is provided, this method supports types with
4129    /// zero-sized trailing slice elements. Methods such as [`ref_from_prefix`]
4130    /// which do not take an explicit count do not support such types.
4131    ///
4132    /// ```
4133    /// use zerocopy::*;
4134    /// # use zerocopy_derive::*;
4135    ///
4136    /// #[derive(FromBytes, Immutable, KnownLayout)]
4137    /// #[repr(C)]
4138    /// struct ZSTy {
4139    ///     leading_sized: [u8; 2],
4140    ///     trailing_dst: [()],
4141    /// }
4142    ///
4143    /// let src = &[85, 85][..];
4144    /// let (zsty, _) = ZSTy::ref_from_prefix_with_elems(src, 42).unwrap();
4145    /// assert_eq!(zsty.trailing_dst.len(), 42);
4146    /// ```
4147    ///
4148    /// [`ref_from_prefix`]: FromBytes::ref_from_prefix
4149    #[must_use = "has no side effects"]
4150    #[inline]
4151    fn ref_from_prefix_with_elems(
4152        source: &[u8],
4153        count: usize,
4154    ) -> Result<(&Self, &[u8]), CastError<&[u8], Self>>
4155    where
4156        Self: KnownLayout<PointerMetadata = usize> + Immutable,
4157    {
4158        ref_from_prefix_suffix(source, Some(count), CastType::Prefix)
4159    }
4160
4161    /// Interprets the suffix of the given `source` as a DST `&Self` with length
4162    /// equal to `count`.
4163    ///
4164    /// This method attempts to return a reference to the suffix of `source`
4165    /// interpreted as a `Self` with `count` trailing elements, and a reference
4166    /// to the preceding bytes. If there are insufficient bytes, or if that
4167    /// suffix of `source` is not appropriately aligned, this returns `Err`. If
4168    /// [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
4169    /// alignment error][size-error-from].
4170    ///
4171    /// [self-unaligned]: Unaligned
4172    /// [size-error-from]: error/struct.SizeError.html#method.from-1
4173    ///
4174    /// # Examples
4175    ///
4176    /// ```
4177    /// use zerocopy::FromBytes;
4178    /// # use zerocopy_derive::*;
4179    ///
4180    /// # #[derive(Debug, PartialEq, Eq)]
4181    /// #[derive(FromBytes, Immutable)]
4182    /// #[repr(C)]
4183    /// struct Pixel {
4184    ///     r: u8,
4185    ///     g: u8,
4186    ///     b: u8,
4187    ///     a: u8,
4188    /// }
4189    ///
4190    /// // These are more bytes than are needed to encode two `Pixel`s.
4191    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
4192    ///
4193    /// let (prefix, pixels) = <[Pixel]>::ref_from_suffix_with_elems(bytes, 2).unwrap();
4194    ///
4195    /// assert_eq!(prefix, &[0, 1]);
4196    ///
4197    /// assert_eq!(pixels, &[
4198    ///     Pixel { r: 2, g: 3, b: 4, a: 5 },
4199    ///     Pixel { r: 6, g: 7, b: 8, a: 9 },
4200    /// ]);
4201    /// ```
4202    ///
4203    /// Since an explicit `count` is provided, this method supports types with
4204    /// zero-sized trailing slice elements. Methods such as [`ref_from_suffix`]
4205    /// which do not take an explicit count do not support such types.
4206    ///
4207    /// ```
4208    /// use zerocopy::*;
4209    /// # use zerocopy_derive::*;
4210    ///
4211    /// #[derive(FromBytes, Immutable, KnownLayout)]
4212    /// #[repr(C)]
4213    /// struct ZSTy {
4214    ///     leading_sized: [u8; 2],
4215    ///     trailing_dst: [()],
4216    /// }
4217    ///
4218    /// let src = &[85, 85][..];
4219    /// let (_, zsty) = ZSTy::ref_from_suffix_with_elems(src, 42).unwrap();
4220    /// assert_eq!(zsty.trailing_dst.len(), 42);
4221    /// ```
4222    ///
4223    /// [`ref_from_suffix`]: FromBytes::ref_from_suffix
4224    #[must_use = "has no side effects"]
4225    #[inline]
4226    fn ref_from_suffix_with_elems(
4227        source: &[u8],
4228        count: usize,
4229    ) -> Result<(&[u8], &Self), CastError<&[u8], Self>>
4230    where
4231        Self: KnownLayout<PointerMetadata = usize> + Immutable,
4232    {
4233        ref_from_prefix_suffix(source, Some(count), CastType::Suffix).map(swap)
4234    }
4235
4236    /// Interprets the given `source` as a `&mut Self` with a DST length equal
4237    /// to `count`.
4238    ///
4239    /// This method attempts to return a reference to `source` interpreted as a
4240    /// `Self` with `count` trailing elements. If the length of `source` is not
4241    /// equal to the size of `Self` with `count` elements, or if `source` is not
4242    /// appropriately aligned, this returns `Err`. If [`Self:
4243    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
4244    /// error][size-error-from].
4245    ///
4246    /// [self-unaligned]: Unaligned
4247    /// [size-error-from]: error/struct.SizeError.html#method.from-1
4248    ///
4249    /// # Examples
4250    ///
4251    /// ```
4252    /// use zerocopy::FromBytes;
4253    /// # use zerocopy_derive::*;
4254    ///
4255    /// # #[derive(Debug, PartialEq, Eq)]
4256    /// #[derive(KnownLayout, FromBytes, IntoBytes, Immutable)]
4257    /// #[repr(C)]
4258    /// struct Pixel {
4259    ///     r: u8,
4260    ///     g: u8,
4261    ///     b: u8,
4262    ///     a: u8,
4263    /// }
4264    ///
4265    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7][..];
4266    ///
4267    /// let pixels = <[Pixel]>::mut_from_bytes_with_elems(bytes, 2).unwrap();
4268    ///
4269    /// assert_eq!(pixels, &[
4270    ///     Pixel { r: 0, g: 1, b: 2, a: 3 },
4271    ///     Pixel { r: 4, g: 5, b: 6, a: 7 },
4272    /// ]);
4273    ///
4274    /// pixels[1] = Pixel { r: 0, g: 0, b: 0, a: 0 };
4275    ///
4276    /// assert_eq!(bytes, [0, 1, 2, 3, 0, 0, 0, 0]);
4277    /// ```
4278    ///
4279    /// Since an explicit `count` is provided, this method supports types with
4280    /// zero-sized trailing slice elements. Methods such as [`mut_from_bytes`]
4281    /// which do not take an explicit count do not support such types.
4282    ///
4283    /// ```
4284    /// use zerocopy::*;
4285    /// # use zerocopy_derive::*;
4286    ///
4287    /// #[derive(FromBytes, IntoBytes, Immutable, KnownLayout)]
4288    /// #[repr(C, packed)]
4289    /// struct ZSTy {
4290    ///     leading_sized: [u8; 2],
4291    ///     trailing_dst: [()],
4292    /// }
4293    ///
4294    /// let src = &mut [85, 85][..];
4295    /// let zsty = ZSTy::mut_from_bytes_with_elems(src, 42).unwrap();
4296    /// assert_eq!(zsty.trailing_dst.len(), 42);
4297    /// ```
4298    ///
4299    /// [`mut_from_bytes`]: FromBytes::mut_from_bytes
4300    #[must_use = "has no side effects"]
4301    #[inline]
4302    fn mut_from_bytes_with_elems(
4303        source: &mut [u8],
4304        count: usize,
4305    ) -> Result<&mut Self, CastError<&mut [u8], Self>>
4306    where
4307        Self: IntoBytes + KnownLayout<PointerMetadata = usize> + Immutable,
4308    {
4309        let source = Ptr::from_mut(source);
4310        let maybe_slf = source.try_cast_into_no_leftover::<_, BecauseImmutable>(Some(count));
4311        match maybe_slf {
4312            Ok(slf) => Ok(slf
4313                .recall_validity::<_, (_, (_, (BecauseExclusive, BecauseExclusive)))>()
4314                .as_mut()),
4315            Err(err) => Err(err.map_src(|s| s.as_mut())),
4316        }
4317    }
4318
4319    /// Interprets the prefix of the given `source` as a `&mut Self` with DST
4320    /// length equal to `count`.
4321    ///
4322    /// This method attempts to return a reference to the prefix of `source`
4323    /// interpreted as a `Self` with `count` trailing elements, and a reference
4324    /// to the remaining bytes. If there are insufficient bytes, or if `source`
4325    /// is not appropriately aligned, this returns `Err`. If [`Self:
4326    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
4327    /// error][size-error-from].
4328    ///
4329    /// [self-unaligned]: Unaligned
4330    /// [size-error-from]: error/struct.SizeError.html#method.from-1
4331    ///
4332    /// # Examples
4333    ///
4334    /// ```
4335    /// use zerocopy::FromBytes;
4336    /// # use zerocopy_derive::*;
4337    ///
4338    /// # #[derive(Debug, PartialEq, Eq)]
4339    /// #[derive(KnownLayout, FromBytes, IntoBytes, Immutable)]
4340    /// #[repr(C)]
4341    /// struct Pixel {
4342    ///     r: u8,
4343    ///     g: u8,
4344    ///     b: u8,
4345    ///     a: u8,
4346    /// }
4347    ///
4348    /// // These are more bytes than are needed to encode two `Pixel`s.
4349    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
4350    ///
4351    /// let (pixels, suffix) = <[Pixel]>::mut_from_prefix_with_elems(bytes, 2).unwrap();
4352    ///
4353    /// assert_eq!(pixels, &[
4354    ///     Pixel { r: 0, g: 1, b: 2, a: 3 },
4355    ///     Pixel { r: 4, g: 5, b: 6, a: 7 },
4356    /// ]);
4357    ///
4358    /// assert_eq!(suffix, &[8, 9]);
4359    ///
4360    /// pixels[1] = Pixel { r: 0, g: 0, b: 0, a: 0 };
4361    /// suffix.fill(1);
4362    ///
4363    /// assert_eq!(bytes, [0, 1, 2, 3, 0, 0, 0, 0, 1, 1]);
4364    /// ```
4365    ///
4366    /// Since an explicit `count` is provided, this method supports types with
4367    /// zero-sized trailing slice elements. Methods such as [`mut_from_prefix`]
4368    /// which do not take an explicit count do not support such types.
4369    ///
4370    /// ```
4371    /// use zerocopy::*;
4372    /// # use zerocopy_derive::*;
4373    ///
4374    /// #[derive(FromBytes, IntoBytes, Immutable, KnownLayout)]
4375    /// #[repr(C, packed)]
4376    /// struct ZSTy {
4377    ///     leading_sized: [u8; 2],
4378    ///     trailing_dst: [()],
4379    /// }
4380    ///
4381    /// let src = &mut [85, 85][..];
4382    /// let (zsty, _) = ZSTy::mut_from_prefix_with_elems(src, 42).unwrap();
4383    /// assert_eq!(zsty.trailing_dst.len(), 42);
4384    /// ```
4385    ///
4386    /// [`mut_from_prefix`]: FromBytes::mut_from_prefix
4387    #[must_use = "has no side effects"]
4388    #[inline]
4389    fn mut_from_prefix_with_elems(
4390        source: &mut [u8],
4391        count: usize,
4392    ) -> Result<(&mut Self, &mut [u8]), CastError<&mut [u8], Self>>
4393    where
4394        Self: IntoBytes + KnownLayout<PointerMetadata = usize>,
4395    {
4396        mut_from_prefix_suffix(source, Some(count), CastType::Prefix)
4397    }
4398
4399    /// Interprets the suffix of the given `source` as a `&mut Self` with DST
4400    /// length equal to `count`.
4401    ///
4402    /// This method attempts to return a reference to the suffix of `source`
4403    /// interpreted as a `Self` with `count` trailing elements, and a reference
4404    /// to the preceding bytes. If there are insufficient bytes, or if that
4405    /// suffix of `source` is not appropriately aligned, this returns `Err`. If
4406    /// [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
4407    /// alignment error][size-error-from].
4408    ///
4409    /// [self-unaligned]: Unaligned
4410    /// [size-error-from]: error/struct.SizeError.html#method.from-1
4411    ///
4412    /// # Examples
4413    ///
4414    /// ```
4415    /// use zerocopy::FromBytes;
4416    /// # use zerocopy_derive::*;
4417    ///
4418    /// # #[derive(Debug, PartialEq, Eq)]
4419    /// #[derive(FromBytes, IntoBytes, Immutable)]
4420    /// #[repr(C)]
4421    /// struct Pixel {
4422    ///     r: u8,
4423    ///     g: u8,
4424    ///     b: u8,
4425    ///     a: u8,
4426    /// }
4427    ///
4428    /// // These are more bytes than are needed to encode two `Pixel`s.
4429    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
4430    ///
4431    /// let (prefix, pixels) = <[Pixel]>::mut_from_suffix_with_elems(bytes, 2).unwrap();
4432    ///
4433    /// assert_eq!(prefix, &[0, 1]);
4434    ///
4435    /// assert_eq!(pixels, &[
4436    ///     Pixel { r: 2, g: 3, b: 4, a: 5 },
4437    ///     Pixel { r: 6, g: 7, b: 8, a: 9 },
4438    /// ]);
4439    ///
4440    /// prefix.fill(9);
4441    /// pixels[1] = Pixel { r: 0, g: 0, b: 0, a: 0 };
4442    ///
4443    /// assert_eq!(bytes, [9, 9, 2, 3, 4, 5, 0, 0, 0, 0]);
4444    /// ```
4445    ///
4446    /// Since an explicit `count` is provided, this method supports types with
4447    /// zero-sized trailing slice elements. Methods such as [`mut_from_suffix`]
4448    /// which do not take an explicit count do not support such types.
4449    ///
4450    /// ```
4451    /// use zerocopy::*;
4452    /// # use zerocopy_derive::*;
4453    ///
4454    /// #[derive(FromBytes, IntoBytes, Immutable, KnownLayout)]
4455    /// #[repr(C, packed)]
4456    /// struct ZSTy {
4457    ///     leading_sized: [u8; 2],
4458    ///     trailing_dst: [()],
4459    /// }
4460    ///
4461    /// let src = &mut [85, 85][..];
4462    /// let (_, zsty) = ZSTy::mut_from_suffix_with_elems(src, 42).unwrap();
4463    /// assert_eq!(zsty.trailing_dst.len(), 42);
4464    /// ```
4465    ///
4466    /// [`mut_from_suffix`]: FromBytes::mut_from_suffix
4467    #[must_use = "has no side effects"]
4468    #[inline]
4469    fn mut_from_suffix_with_elems(
4470        source: &mut [u8],
4471        count: usize,
4472    ) -> Result<(&mut [u8], &mut Self), CastError<&mut [u8], Self>>
4473    where
4474        Self: IntoBytes + KnownLayout<PointerMetadata = usize>,
4475    {
4476        mut_from_prefix_suffix(source, Some(count), CastType::Suffix).map(swap)
4477    }
4478
4479    /// Reads a copy of `Self` from the given `source`.
4480    ///
4481    /// If `source.len() != size_of::<Self>()`, `read_from_bytes` returns `Err`.
4482    ///
4483    /// # Examples
4484    ///
4485    /// ```
4486    /// use zerocopy::FromBytes;
4487    /// # use zerocopy_derive::*;
4488    ///
4489    /// #[derive(FromBytes)]
4490    /// #[repr(C)]
4491    /// struct PacketHeader {
4492    ///     src_port: [u8; 2],
4493    ///     dst_port: [u8; 2],
4494    ///     length: [u8; 2],
4495    ///     checksum: [u8; 2],
4496    /// }
4497    ///
4498    /// // These bytes encode a `PacketHeader`.
4499    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7][..];
4500    ///
4501    /// let header = PacketHeader::read_from_bytes(bytes).unwrap();
4502    ///
4503    /// assert_eq!(header.src_port, [0, 1]);
4504    /// assert_eq!(header.dst_port, [2, 3]);
4505    /// assert_eq!(header.length, [4, 5]);
4506    /// assert_eq!(header.checksum, [6, 7]);
4507    /// ```
4508    #[must_use = "has no side effects"]
4509    #[inline]
4510    fn read_from_bytes(source: &[u8]) -> Result<Self, SizeError<&[u8], Self>>
4511    where
4512        Self: Sized,
4513    {
4514        match Ref::<_, Unalign<Self>>::sized_from(source) {
4515            Ok(r) => Ok(Ref::read(&r).into_inner()),
4516            Err(CastError::Size(e)) => Err(e.with_dst()),
4517            Err(CastError::Alignment(_)) => {
4518                // SAFETY: `Unalign<Self>` is trivially aligned, so
4519                // `Ref::sized_from` cannot fail due to unmet alignment
4520                // requirements.
4521                unsafe { core::hint::unreachable_unchecked() }
4522            }
4523            Err(CastError::Validity(i)) => match i {},
4524        }
4525    }
4526
4527    /// Reads a copy of `Self` from the prefix of the given `source`.
4528    ///
4529    /// This attempts to read a `Self` from the first `size_of::<Self>()` bytes
4530    /// of `source`, returning that `Self` and any remaining bytes. If
4531    /// `source.len() < size_of::<Self>()`, it returns `Err`.
4532    ///
4533    /// # Examples
4534    ///
4535    /// ```
4536    /// use zerocopy::FromBytes;
4537    /// # use zerocopy_derive::*;
4538    ///
4539    /// #[derive(FromBytes)]
4540    /// #[repr(C)]
4541    /// struct PacketHeader {
4542    ///     src_port: [u8; 2],
4543    ///     dst_port: [u8; 2],
4544    ///     length: [u8; 2],
4545    ///     checksum: [u8; 2],
4546    /// }
4547    ///
4548    /// // These are more bytes than are needed to encode a `PacketHeader`.
4549    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
4550    ///
4551    /// let (header, body) = PacketHeader::read_from_prefix(bytes).unwrap();
4552    ///
4553    /// assert_eq!(header.src_port, [0, 1]);
4554    /// assert_eq!(header.dst_port, [2, 3]);
4555    /// assert_eq!(header.length, [4, 5]);
4556    /// assert_eq!(header.checksum, [6, 7]);
4557    /// assert_eq!(body, [8, 9]);
4558    /// ```
4559    #[must_use = "has no side effects"]
4560    #[inline]
4561    fn read_from_prefix(source: &[u8]) -> Result<(Self, &[u8]), SizeError<&[u8], Self>>
4562    where
4563        Self: Sized,
4564    {
4565        match Ref::<_, Unalign<Self>>::sized_from_prefix(source) {
4566            Ok((r, suffix)) => Ok((Ref::read(&r).into_inner(), suffix)),
4567            Err(CastError::Size(e)) => Err(e.with_dst()),
4568            Err(CastError::Alignment(_)) => {
4569                // SAFETY: `Unalign<Self>` is trivially aligned, so
4570                // `Ref::sized_from_prefix` cannot fail due to unmet alignment
4571                // requirements.
4572                unsafe { core::hint::unreachable_unchecked() }
4573            }
4574            Err(CastError::Validity(i)) => match i {},
4575        }
4576    }
4577
4578    /// Reads a copy of `Self` from the suffix of the given `source`.
4579    ///
4580    /// This attempts to read a `Self` from the last `size_of::<Self>()` bytes
4581    /// of `source`, returning that `Self` and any preceding bytes. If
4582    /// `source.len() < size_of::<Self>()`, it returns `Err`.
4583    ///
4584    /// # Examples
4585    ///
4586    /// ```
4587    /// use zerocopy::FromBytes;
4588    /// # use zerocopy_derive::*;
4589    ///
4590    /// #[derive(FromBytes)]
4591    /// #[repr(C)]
4592    /// struct PacketTrailer {
4593    ///     frame_check_sequence: [u8; 4],
4594    /// }
4595    ///
4596    /// // These are more bytes than are needed to encode a `PacketTrailer`.
4597    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
4598    ///
4599    /// let (prefix, trailer) = PacketTrailer::read_from_suffix(bytes).unwrap();
4600    ///
4601    /// assert_eq!(prefix, [0, 1, 2, 3, 4, 5]);
4602    /// assert_eq!(trailer.frame_check_sequence, [6, 7, 8, 9]);
4603    /// ```
4604    #[must_use = "has no side effects"]
4605    #[inline]
4606    fn read_from_suffix(source: &[u8]) -> Result<(&[u8], Self), SizeError<&[u8], Self>>
4607    where
4608        Self: Sized,
4609    {
4610        match Ref::<_, Unalign<Self>>::sized_from_suffix(source) {
4611            Ok((prefix, r)) => Ok((prefix, Ref::read(&r).into_inner())),
4612            Err(CastError::Size(e)) => Err(e.with_dst()),
4613            Err(CastError::Alignment(_)) => {
4614                // SAFETY: `Unalign<Self>` is trivially aligned, so
4615                // `Ref::sized_from_suffix` cannot fail due to unmet alignment
4616                // requirements.
4617                unsafe { core::hint::unreachable_unchecked() }
4618            }
4619            Err(CastError::Validity(i)) => match i {},
4620        }
4621    }
4622
4623    /// Reads a copy of `Self` from an `io::Read`.
4624    ///
4625    /// This is useful for interfacing with operating system byte sources (files,
4626    /// sockets, etc.).
4627    ///
4628    /// # Examples
4629    ///
4630    /// ```no_run
4631    /// use zerocopy::{byteorder::big_endian::*, FromBytes};
4632    /// use std::fs::File;
4633    /// # use zerocopy_derive::*;
4634    ///
4635    /// #[derive(FromBytes)]
4636    /// #[repr(C)]
4637    /// struct BitmapFileHeader {
4638    ///     signature: [u8; 2],
4639    ///     size: U32,
4640    ///     reserved: U64,
4641    ///     offset: U64,
4642    /// }
4643    ///
4644    /// let mut file = File::open("image.bin").unwrap();
4645    /// let header = BitmapFileHeader::read_from_io(&mut file).unwrap();
4646    /// ```
4647    #[cfg(feature = "std")]
4648    #[inline(always)]
4649    fn read_from_io<R>(mut src: R) -> io::Result<Self>
4650    where
4651        Self: Sized,
4652        R: io::Read,
4653    {
4654        // NOTE(#2319, #2320): We do `buf.zero()` separately rather than
4655        // constructing `let buf = CoreMaybeUninit::zeroed()` because, if `Self`
4656        // contains padding bytes, then a typed copy of `CoreMaybeUninit<Self>`
4657        // will not necessarily preserve zeros written to those padding byte
4658        // locations, and so `buf` could contain uninitialized bytes.
4659        let mut buf = CoreMaybeUninit::<Self>::uninit();
4660        buf.zero();
4661
4662        let ptr = Ptr::from_mut(&mut buf);
4663        // SAFETY: After `buf.zero()`, `buf` consists entirely of initialized,
4664        // zeroed bytes. Since `MaybeUninit` has no validity requirements, `ptr`
4665        // cannot be used to write values which will violate `buf`'s bit
4666        // validity. Since `ptr` has `Exclusive` aliasing, nothing other than
4667        // `ptr` may be used to mutate `ptr`'s referent, and so its bit validity
4668        // cannot be violated even though `buf` may have more permissive bit
4669        // validity than `ptr`.
4670        let ptr = unsafe { ptr.assume_validity::<invariant::Initialized>() };
4671        let ptr = ptr.as_bytes::<BecauseExclusive>();
4672        src.read_exact(ptr.as_mut())?;
4673        // SAFETY: `buf` entirely consists of initialized bytes, and `Self` is
4674        // `FromBytes`.
4675        Ok(unsafe { buf.assume_init() })
4676    }
4677
4678    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::ref_from_bytes`")]
4679    #[doc(hidden)]
4680    #[must_use = "has no side effects"]
4681    #[inline(always)]
4682    fn ref_from(source: &[u8]) -> Option<&Self>
4683    where
4684        Self: KnownLayout + Immutable,
4685    {
4686        Self::ref_from_bytes(source).ok()
4687    }
4688
4689    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::mut_from_bytes`")]
4690    #[doc(hidden)]
4691    #[must_use = "has no side effects"]
4692    #[inline(always)]
4693    fn mut_from(source: &mut [u8]) -> Option<&mut Self>
4694    where
4695        Self: KnownLayout + IntoBytes,
4696    {
4697        Self::mut_from_bytes(source).ok()
4698    }
4699
4700    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::ref_from_prefix_with_elems`")]
4701    #[doc(hidden)]
4702    #[must_use = "has no side effects"]
4703    #[inline(always)]
4704    fn slice_from_prefix(source: &[u8], count: usize) -> Option<(&[Self], &[u8])>
4705    where
4706        Self: Sized + Immutable,
4707    {
4708        <[Self]>::ref_from_prefix_with_elems(source, count).ok()
4709    }
4710
4711    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::ref_from_suffix_with_elems`")]
4712    #[doc(hidden)]
4713    #[must_use = "has no side effects"]
4714    #[inline(always)]
4715    fn slice_from_suffix(source: &[u8], count: usize) -> Option<(&[u8], &[Self])>
4716    where
4717        Self: Sized + Immutable,
4718    {
4719        <[Self]>::ref_from_suffix_with_elems(source, count).ok()
4720    }
4721
4722    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::mut_from_prefix_with_elems`")]
4723    #[doc(hidden)]
4724    #[must_use = "has no side effects"]
4725    #[inline(always)]
4726    fn mut_slice_from_prefix(source: &mut [u8], count: usize) -> Option<(&mut [Self], &mut [u8])>
4727    where
4728        Self: Sized + IntoBytes,
4729    {
4730        <[Self]>::mut_from_prefix_with_elems(source, count).ok()
4731    }
4732
4733    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::mut_from_suffix_with_elems`")]
4734    #[doc(hidden)]
4735    #[must_use = "has no side effects"]
4736    #[inline(always)]
4737    fn mut_slice_from_suffix(source: &mut [u8], count: usize) -> Option<(&mut [u8], &mut [Self])>
4738    where
4739        Self: Sized + IntoBytes,
4740    {
4741        <[Self]>::mut_from_suffix_with_elems(source, count).ok()
4742    }
4743
4744    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::read_from_bytes`")]
4745    #[doc(hidden)]
4746    #[must_use = "has no side effects"]
4747    #[inline(always)]
4748    fn read_from(source: &[u8]) -> Option<Self>
4749    where
4750        Self: Sized,
4751    {
4752        Self::read_from_bytes(source).ok()
4753    }
4754}
4755
4756/// Interprets the given affix of `source` as a `&T`.
4757///
4758/// If `meta` is `None`, this function computes the largest possible size of `T`
4759/// that fits in the affix; otherwise, it uses the given element count. It then
4760/// attempts to return both a reference to the affix interpreted as a `T` and a
4761/// reference to the excess bytes. If there are insufficient bytes, or if that
4762/// affix of `source` is not appropriately aligned, this returns `Err`.
4763#[inline(always)]
4764fn ref_from_prefix_suffix<T: FromBytes + KnownLayout + Immutable + ?Sized>(
4765    source: &[u8],
4766    meta: Option<T::PointerMetadata>,
4767    cast_type: CastType,
4768) -> Result<(&T, &[u8]), CastError<&[u8], T>> {
4769    let (slf, prefix_suffix) = Ptr::from_ref(source)
4770        .try_cast_into::<_, BecauseImmutable>(cast_type, meta)
4771        .map_err(|err| err.map_src(|s| s.as_ref()))?;
4772    Ok((slf.recall_validity().as_ref(), prefix_suffix.as_ref()))
4773}
4774
4775/// Interprets the given affix of `source` as a `&mut T` without copying.
4776///
4777/// If `meta` is `None`, this function computes the largest possible size of `T`
4778/// that fits in the affix; otherwise, it uses the given element count. It then
4779/// attempts to return both a mutable reference to the affix interpreted as a
4780/// `T` and a mutable reference to the excess bytes. If there are insufficient
4781/// bytes, or if that affix of `source` is not appropriately aligned, this
4782/// returns `Err`.
4783#[inline(always)]
4784fn mut_from_prefix_suffix<T: FromBytes + IntoBytes + KnownLayout + ?Sized>(
4785    source: &mut [u8],
4786    meta: Option<T::PointerMetadata>,
4787    cast_type: CastType,
4788) -> Result<(&mut T, &mut [u8]), CastError<&mut [u8], T>> {
4789    let (slf, prefix_suffix) = Ptr::from_mut(source)
4790        .try_cast_into::<_, BecauseExclusive>(cast_type, meta)
4791        .map_err(|err| err.map_src(|s| s.as_mut()))?;
4792    Ok((slf.recall_validity::<_, (_, (_, _))>().as_mut(), prefix_suffix.as_mut()))
4793}
4794
4795/// Analyzes whether a type is [`IntoBytes`].
4796///
4797/// This derive analyzes, at compile time, whether the annotated type satisfies
4798/// the [safety conditions] of `IntoBytes` and implements `IntoBytes` if it is
4799/// sound to do so. This derive can be applied to structs and enums (see below
4800/// for union support); e.g.:
4801///
4802/// ```
4803/// # use zerocopy_derive::{IntoBytes};
4804/// #[derive(IntoBytes)]
4805/// #[repr(C)]
4806/// struct MyStruct {
4807/// # /*
4808///     ...
4809/// # */
4810/// }
4811///
4812/// #[derive(IntoBytes)]
4813/// #[repr(u8)]
4814/// enum MyEnum {
4815/// #   Variant,
4816/// # /*
4817///     ...
4818/// # */
4819/// }
4820/// ```
4821///
4822/// [safety conditions]: trait@IntoBytes#safety
4823///
4824/// # Error Messages
4825///
4826/// On Rust toolchains prior to 1.78.0, due to the way that the custom derive
4827/// for `IntoBytes` is implemented, you may get an error like this:
4828///
4829/// ```text
4830/// error[E0277]: the trait bound `(): PaddingFree<Foo, true>` is not satisfied
4831///   --> lib.rs:23:10
4832///    |
4833///  1 | #[derive(IntoBytes)]
4834///    |          ^^^^^^^^^ the trait `PaddingFree<Foo, true>` is not implemented for `()`
4835///    |
4836///    = help: the following implementations were found:
4837///                   <() as PaddingFree<T, false>>
4838/// ```
4839///
4840/// This error indicates that the type being annotated has padding bytes, which
4841/// is illegal for `IntoBytes` types. Consider reducing the alignment of some
4842/// fields by using types in the [`byteorder`] module, wrapping field types in
4843/// [`Unalign`], adding explicit struct fields where those padding bytes would
4844/// be, or using `#[repr(packed)]`. See the Rust Reference's page on [type
4845/// layout] for more information about type layout and padding.
4846///
4847/// [type layout]: https://doc.rust-lang.org/reference/type-layout.html
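///
/// For example, one possible fix for a hypothetical padded struct `Foo` is to
/// replace its higher-alignment fields with their [`byteorder`] equivalents,
/// which have alignment 1:
///
/// ```
/// use zerocopy::byteorder::native_endian::U32;
/// # use zerocopy_derive::IntoBytes;
///
/// // Hypothetical example: with a bare `u32`, `repr(C)` would insert three
/// // padding bytes after `flag` and the derive would fail. `U32` has alignment
/// // 1, so this layout is padding-free.
/// #[derive(IntoBytes)]
/// #[repr(C)]
/// struct Foo {
///     flag: u8,
///     count: U32,
/// }
/// ```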
4848///
4849/// # Unions
4850///
4851/// Currently, union bit validity is [up in the air][union-validity], and so
4852/// zerocopy does not support `#[derive(IntoBytes)]` on unions by default.
4853/// However, implementing `IntoBytes` on a union type is likely sound on all
4854/// existing Rust toolchains - it's just that it may become unsound in the
4855/// future. You can opt-in to `#[derive(IntoBytes)]` support on unions by
4856/// passing the unstable `zerocopy_derive_union_into_bytes` cfg:
4857///
4858/// ```shell
4859/// $ RUSTFLAGS='--cfg zerocopy_derive_union_into_bytes' cargo build
4860/// ```
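///
/// With that cfg set, a derive like the following hypothetical example is
/// accepted (the block is marked `ignore` because it compiles only when the
/// cfg is passed):
///
/// ```ignore
/// # use zerocopy_derive::IntoBytes;
/// // Hypothetical example: both fields are `IntoBytes` and have the same size
/// // as the union, so the union has no padding bytes.
/// #[derive(IntoBytes)]
/// #[repr(C)]
/// union SignedOrUnsigned {
///     signed: i8,
///     unsigned: u8,
/// }
/// ```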
4861///
4862/// However, it is your responsibility to ensure that this derive is sound on
4863/// the specific versions of the Rust toolchain you are using! We make no
4864/// stability or soundness guarantees regarding this cfg, and may remove it at
4865/// any point.
4866///
4867/// We are actively working with Rust to stabilize the necessary language
4868/// guarantees to support this in a forwards-compatible way, which will enable
4869/// us to remove the cfg gate. As part of this effort, we need to know how much
4870/// demand there is for this feature. If you would like to use `IntoBytes` on
4871/// unions, [please let us know][discussion].
4872///
4873/// [union-validity]: https://github.com/rust-lang/unsafe-code-guidelines/issues/438
4874/// [discussion]: https://github.com/google/zerocopy/discussions/1802
4875///
4876/// # Analysis
4877///
4878/// *This section describes, roughly, the analysis performed by this derive to
4879/// determine whether it is sound to implement `IntoBytes` for a given type.
4880/// Unless you are modifying the implementation of this derive, or attempting to
4881/// manually implement `IntoBytes` for a type yourself, you don't need to read
4882/// this section.*
4883///
4884/// If a type has the following properties, then this derive can implement
4885/// `IntoBytes` for that type:
4886///
4887/// - If the type is a struct, its fields must be [`IntoBytes`]. Additionally:
4888///     - if the type is `repr(transparent)` or `repr(packed)`, it is
4889///       [`IntoBytes`] if its fields are [`IntoBytes`]; else,
4890///     - if the type is `repr(C)` with at most one field, it is [`IntoBytes`]
4891///       if its field is [`IntoBytes`]; else,
4892///     - if the type has no generic parameters, it is [`IntoBytes`] if the type
4893///       is sized and has no padding bytes; else,
4894///     - if the type is `repr(C)`, its fields must be [`Unaligned`].
4895/// - If the type is an enum:
4896///   - It must have a defined representation (`repr`s `C`, `u8`, `u16`, `u32`,
4897///     `u64`, `usize`, `i8`, `i16`, `i32`, `i64`, or `isize`).
4898///   - It must have no padding bytes.
4899///   - Its fields must be [`IntoBytes`].
4900///
4901/// This analysis is subject to change. Unsafe code may *only* rely on the
4902/// documented [safety conditions] of `IntoBytes`, and must *not* rely on the
4903/// implementation details of this derive.
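///
/// For instance, a `repr(C)` struct with a generic parameter falls into the
/// last of the struct cases above: the emitted impl requires each field to be
/// [`IntoBytes`] and [`Unaligned`], which rules out padding. (This is a
/// hypothetical example, not a complete description of the analysis.)
///
/// ```
/// # use zerocopy_derive::IntoBytes;
/// // Hypothetical example: because `Record` is generic, the derive cannot
/// // check for padding directly; instead, the emitted impl is bounded on each
/// // field type being `IntoBytes + Unaligned`.
/// #[derive(IntoBytes)]
/// #[repr(C)]
/// struct Record<T> {
///     tag: u8,
///     value: T,
/// }
/// ```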
4904///
4905/// [Rust Reference]: https://doc.rust-lang.org/reference/type-layout.html
4906#[cfg(any(feature = "derive", test))]
4907#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
4908pub use zerocopy_derive::IntoBytes;
4909
4910/// Types that can be converted to an immutable slice of initialized bytes.
4911///
4912/// Any `IntoBytes` type can be converted to a slice of initialized bytes of the
4913/// same size. This is useful for efficiently serializing structured data as raw
4914/// bytes.
4915///
4916/// # Implementation
4917///
4918/// **Do not implement this trait yourself!** Instead, use
4919/// [`#[derive(IntoBytes)]`][derive]; e.g.:
4920///
4921/// ```
4922/// # use zerocopy_derive::IntoBytes;
4923/// #[derive(IntoBytes)]
4924/// #[repr(C)]
4925/// struct MyStruct {
4926/// # /*
4927///     ...
4928/// # */
4929/// }
4930///
4931/// #[derive(IntoBytes)]
4932/// #[repr(u8)]
4933/// enum MyEnum {
4934/// #   Variant0,
4935/// # /*
4936///     ...
4937/// # */
4938/// }
4939/// ```
4940///
4941/// This derive performs a sophisticated, compile-time safety analysis to
4942/// determine whether a type is `IntoBytes`. See the [derive
4943/// documentation][derive] for guidance on how to interpret error messages
4944/// produced by the derive's analysis.
4945///
4946/// # Safety
4947///
4948/// *This section describes what is required in order for `T: IntoBytes`, and
4949/// what unsafe code may assume of such types. If you don't plan on implementing
4950/// `IntoBytes` manually, and you don't plan on writing unsafe code that
4951/// operates on `IntoBytes` types, then you don't need to read this section.*
4952///
4953/// If `T: IntoBytes`, then unsafe code may assume that it is sound to treat any
4954/// `t: T` as an immutable `[u8]` of length `size_of_val(t)`. If a type is
4955/// marked as `IntoBytes` which violates this contract, it may cause undefined
4956/// behavior.
4957///
4958/// `#[derive(IntoBytes)]` only permits [types which satisfy these
4959/// requirements][derive-analysis].
4960///
4961#[cfg_attr(
4962    feature = "derive",
4963    doc = "[derive]: zerocopy_derive::IntoBytes",
4964    doc = "[derive-analysis]: zerocopy_derive::IntoBytes#analysis"
4965)]
4966#[cfg_attr(
4967    not(feature = "derive"),
4968    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.IntoBytes.html"),
4969    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.IntoBytes.html#analysis"),
4970)]
4971#[cfg_attr(
4972    zerocopy_diagnostic_on_unimplemented_1_78_0,
4973    diagnostic::on_unimplemented(note = "Consider adding `#[derive(IntoBytes)]` to `{Self}`")
4974)]
4975pub unsafe trait IntoBytes {
4976    // The `Self: Sized` bound makes it so that this function doesn't prevent
4977    // `IntoBytes` from being object safe. Note that other `IntoBytes` methods
4978    // prevent object safety, but those provide a benefit in exchange for object
4979    // safety. If at some point we remove those methods, change their type
4980    // signatures, or move them out of this trait so that `IntoBytes` is object
4981    // safe again, it's important that this function not prevent object safety.
4982    #[doc(hidden)]
4983    fn only_derive_is_allowed_to_implement_this_trait()
4984    where
4985        Self: Sized;
4986
4987    /// Gets the bytes of this value.
4988    ///
4989    /// # Examples
4990    ///
4991    /// ```
4992    /// use zerocopy::IntoBytes;
4993    /// # use zerocopy_derive::*;
4994    ///
4995    /// #[derive(IntoBytes, Immutable)]
4996    /// #[repr(C)]
4997    /// struct PacketHeader {
4998    ///     src_port: [u8; 2],
4999    ///     dst_port: [u8; 2],
5000    ///     length: [u8; 2],
5001    ///     checksum: [u8; 2],
5002    /// }
5003    ///
5004    /// let header = PacketHeader {
5005    ///     src_port: [0, 1],
5006    ///     dst_port: [2, 3],
5007    ///     length: [4, 5],
5008    ///     checksum: [6, 7],
5009    /// };
5010    ///
5011    /// let bytes = header.as_bytes();
5012    ///
5013    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 6, 7]);
5014    /// ```
5015    #[must_use = "has no side effects"]
5016    #[inline(always)]
5017    fn as_bytes(&self) -> &[u8]
5018    where
5019        Self: Immutable,
5020    {
5021        // Note that this method does not have a `Self: Sized` bound;
5022        // `size_of_val` works for unsized values too.
5023        let len = mem::size_of_val(self);
5024        let slf: *const Self = self;
5025
5026        // SAFETY:
5027        // - `slf.cast::<u8>()` is valid for reads for `len * size_of::<u8>()`
5028        //   many bytes because...
5029        //   - `slf` is the same pointer as `self`, and `self` is a reference
5030        //     which points to an object whose size is `len`. Thus...
5031        //     - The entire region of `len` bytes starting at `slf` is contained
5032        //       within a single allocation.
5033        //     - `slf` is non-null.
5034        //   - `slf` is trivially aligned to `align_of::<u8>() == 1`.
5035        // - `Self: IntoBytes` ensures that all of the bytes of `slf` are
5036        //   initialized.
5037        // - Since `slf` is derived from `self`, and `self` is an immutable
5038        //   reference, the only other references to this memory region that
5039        //   could exist are other immutable references, and those don't allow
5040        //   mutation. `Self: Immutable` prohibits types which contain
5041        //   `UnsafeCell`s, which are the only types for which this rule
5042        //   wouldn't be sufficient.
5043        // - The total size of the resulting slice is no larger than
5044        //   `isize::MAX` because no allocation produced by safe code can be
5045        //   larger than `isize::MAX`.
5046        //
5047        // FIXME(#429): Add references to docs and quotes.
5048        unsafe { slice::from_raw_parts(slf.cast::<u8>(), len) }
5049    }
5050
5051    /// Gets the bytes of this value mutably.
5052    ///
5053    /// # Examples
5054    ///
5055    /// ```
5056    /// use zerocopy::IntoBytes;
5057    /// # use zerocopy_derive::*;
5058    ///
5059    /// # #[derive(Eq, PartialEq, Debug)]
5060    /// #[derive(FromBytes, IntoBytes, Immutable)]
5061    /// #[repr(C)]
5062    /// struct PacketHeader {
5063    ///     src_port: [u8; 2],
5064    ///     dst_port: [u8; 2],
5065    ///     length: [u8; 2],
5066    ///     checksum: [u8; 2],
5067    /// }
5068    ///
5069    /// let mut header = PacketHeader {
5070    ///     src_port: [0, 1],
5071    ///     dst_port: [2, 3],
5072    ///     length: [4, 5],
5073    ///     checksum: [6, 7],
5074    /// };
5075    ///
5076    /// let bytes = header.as_mut_bytes();
5077    ///
5078    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 6, 7]);
5079    ///
5080    /// bytes.reverse();
5081    ///
5082    /// assert_eq!(header, PacketHeader {
5083    ///     src_port: [7, 6],
5084    ///     dst_port: [5, 4],
5085    ///     length: [3, 2],
5086    ///     checksum: [1, 0],
5087    /// });
5088    /// ```
5089    #[must_use = "has no side effects"]
5090    #[inline(always)]
5091    fn as_mut_bytes(&mut self) -> &mut [u8]
5092    where
5093        Self: FromBytes,
5094    {
5095        // Note that this method does not have a `Self: Sized` bound;
5096        // `size_of_val` works for unsized values too.
5097        let len = mem::size_of_val(self);
5098        let slf: *mut Self = self;
5099
5100        // SAFETY:
5101        // - `slf.cast::<u8>()` is valid for reads and writes for `len *
5102        //   size_of::<u8>()` many bytes because...
5103        //   - `slf` is the same pointer as `self`, and `self` is a reference
5104        //     which points to an object whose size is `len`. Thus...
5105        //     - The entire region of `len` bytes starting at `slf` is contained
5106        //       within a single allocation.
5107        //     - `slf` is non-null.
5108        //   - `slf` is trivially aligned to `align_of::<u8>() == 1`.
5109        // - `Self: IntoBytes` ensures that all of the bytes of `slf` are
5110        //   initialized.
5111        // - `Self: FromBytes` ensures that no write to this memory region
5112        //   could result in it containing an invalid `Self`.
5113        // - Since `slf` is derived from `self`, and `self` is a mutable
5114        //   reference, no other references to this memory region can exist.
5115        // - The total size of the resulting slice is no larger than
5116        //   `isize::MAX` because no allocation produced by safe code can be
5117        //   larger than `isize::MAX`.
5118        //
5119        // FIXME(#429): Add references to docs and quotes.
5120        unsafe { slice::from_raw_parts_mut(slf.cast::<u8>(), len) }
5121    }
5122
5123    /// Writes a copy of `self` to `dst`.
5124    ///
5125    /// If `dst.len() != size_of_val(self)`, `write_to` returns `Err`.
5126    ///
5127    /// # Examples
5128    ///
5129    /// ```
5130    /// use zerocopy::IntoBytes;
5131    /// # use zerocopy_derive::*;
5132    ///
5133    /// #[derive(IntoBytes, Immutable)]
5134    /// #[repr(C)]
5135    /// struct PacketHeader {
5136    ///     src_port: [u8; 2],
5137    ///     dst_port: [u8; 2],
5138    ///     length: [u8; 2],
5139    ///     checksum: [u8; 2],
5140    /// }
5141    ///
5142    /// let header = PacketHeader {
5143    ///     src_port: [0, 1],
5144    ///     dst_port: [2, 3],
5145    ///     length: [4, 5],
5146    ///     checksum: [6, 7],
5147    /// };
5148    ///
5149    /// let mut bytes = [0, 0, 0, 0, 0, 0, 0, 0];
5150    ///
5151    /// header.write_to(&mut bytes[..]);
5152    ///
5153    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 6, 7]);
5154    /// ```
5155    ///
5156    /// If too many or too few target bytes are provided, `write_to` returns
5157    /// `Err` and leaves the target bytes unmodified:
5158    ///
5159    /// ```
5160    /// # use zerocopy::IntoBytes;
5161    /// # let header = u128::MAX;
5162    /// let mut excessive_bytes = &mut [0u8; 128][..];
5163    ///
5164    /// let write_result = header.write_to(excessive_bytes);
5165    ///
5166    /// assert!(write_result.is_err());
5167    /// assert_eq!(excessive_bytes, [0u8; 128]);
5168    /// ```
5169    #[must_use = "callers should check the return value to see if the operation succeeded"]
5170    #[inline]
5171    #[allow(clippy::mut_from_ref)] // False positive: `&self -> &mut [u8]`
5172    fn write_to(&self, dst: &mut [u8]) -> Result<(), SizeError<&Self, &mut [u8]>>
5173    where
5174        Self: Immutable,
5175    {
5176        let src = self.as_bytes();
5177        if dst.len() == src.len() {
5178            // SAFETY: Within this branch of the conditional, we have ensured
5179            // that `dst.len()` is equal to `src.len()`. Neither the size of the
5180            // source nor the size of the destination change between the above
5181            // size check and the invocation of `copy_unchecked`.
5182            unsafe { util::copy_unchecked(src, dst) }
5183            Ok(())
5184        } else {
5185            Err(SizeError::new(self))
5186        }
5187    }
5188
5189    /// Writes a copy of `self` to the prefix of `dst`.
5190    ///
5191    /// `write_to_prefix` writes `self` to the first `size_of_val(self)` bytes
5192    /// of `dst`. If `dst.len() < size_of_val(self)`, it returns `Err`.
5193    ///
5194    /// # Examples
5195    ///
5196    /// ```
5197    /// use zerocopy::IntoBytes;
5198    /// # use zerocopy_derive::*;
5199    ///
5200    /// #[derive(IntoBytes, Immutable)]
5201    /// #[repr(C)]
5202    /// struct PacketHeader {
5203    ///     src_port: [u8; 2],
5204    ///     dst_port: [u8; 2],
5205    ///     length: [u8; 2],
5206    ///     checksum: [u8; 2],
5207    /// }
5208    ///
5209    /// let header = PacketHeader {
5210    ///     src_port: [0, 1],
5211    ///     dst_port: [2, 3],
5212    ///     length: [4, 5],
5213    ///     checksum: [6, 7],
5214    /// };
5215    ///
5216    /// let mut bytes = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0];
5217    ///
5218    /// header.write_to_prefix(&mut bytes[..]);
5219    ///
5220    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 6, 7, 0, 0]);
5221    /// ```
5222    ///
5223    /// If insufficient target bytes are provided, `write_to_prefix` returns
5224    /// `Err` and leaves the target bytes unmodified:
5225    ///
5226    /// ```
5227    /// # use zerocopy::IntoBytes;
5228    /// # let header = u128::MAX;
5229    /// let mut insufficient_bytes = &mut [0, 0][..];
5230    ///
5231    /// let write_result = header.write_to_prefix(insufficient_bytes);
5232    ///
5233    /// assert!(write_result.is_err());
5234    /// assert_eq!(insufficient_bytes, [0, 0]);
5235    /// ```
5236    #[must_use = "callers should check the return value to see if the operation succeeded"]
5237    #[inline]
5238    #[allow(clippy::mut_from_ref)] // False positive: `&self -> &mut [u8]`
5239    fn write_to_prefix(&self, dst: &mut [u8]) -> Result<(), SizeError<&Self, &mut [u8]>>
5240    where
5241        Self: Immutable,
5242    {
5243        let src = self.as_bytes();
5244        match dst.get_mut(..src.len()) {
5245            Some(dst) => {
5246                // SAFETY: Within this branch of the `match`, we have ensured
5247                // through fallible subslicing that `dst.len()` is equal to
5248                // `src.len()`. Neither the size of the source nor the size of
5249                // the destination change between the above subslicing operation
5250                // and the invocation of `copy_unchecked`.
5251                unsafe { util::copy_unchecked(src, dst) }
5252                Ok(())
5253            }
5254            None => Err(SizeError::new(self)),
5255        }
5256    }
5257
5258    /// Writes a copy of `self` to the suffix of `dst`.
5259    ///
5260    /// `write_to_suffix` writes `self` to the last `size_of_val(self)` bytes of
5261    /// `dst`. If `dst.len() < size_of_val(self)`, it returns `Err`.
5262    ///
5263    /// # Examples
5264    ///
5265    /// ```
5266    /// use zerocopy::IntoBytes;
5267    /// # use zerocopy_derive::*;
5268    ///
5269    /// #[derive(IntoBytes, Immutable)]
5270    /// #[repr(C)]
5271    /// struct PacketHeader {
5272    ///     src_port: [u8; 2],
5273    ///     dst_port: [u8; 2],
5274    ///     length: [u8; 2],
5275    ///     checksum: [u8; 2],
5276    /// }
5277    ///
5278    /// let header = PacketHeader {
5279    ///     src_port: [0, 1],
5280    ///     dst_port: [2, 3],
5281    ///     length: [4, 5],
5282    ///     checksum: [6, 7],
5283    /// };
5284    ///
5285    /// let mut bytes = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0];
5286    ///
5287    /// header.write_to_suffix(&mut bytes[..]);
5288    ///
5289    /// assert_eq!(bytes, [0, 0, 0, 1, 2, 3, 4, 5, 6, 7]);
5290    ///
5291    /// let mut insufficient_bytes = &mut [0, 0][..];
5292    ///
5293    /// let write_result = header.write_to_suffix(insufficient_bytes);
5294    ///
5295    /// assert!(write_result.is_err());
5296    /// assert_eq!(insufficient_bytes, [0, 0]);
5297    /// ```
5298    ///
5299    /// If insufficient target bytes are provided, `write_to_suffix` returns
5300    /// `Err` and leaves the target bytes unmodified:
5301    ///
5302    /// ```
5303    /// # use zerocopy::IntoBytes;
5304    /// # let header = u128::MAX;
5305    /// let mut insufficient_bytes = &mut [0, 0][..];
5306    ///
5307    /// let write_result = header.write_to_suffix(insufficient_bytes);
5308    ///
5309    /// assert!(write_result.is_err());
5310    /// assert_eq!(insufficient_bytes, [0, 0]);
5311    /// ```
5312    #[must_use = "callers should check the return value to see if the operation succeeded"]
5313    #[inline]
5314    #[allow(clippy::mut_from_ref)] // False positive: `&self -> &mut [u8]`
5315    fn write_to_suffix(&self, dst: &mut [u8]) -> Result<(), SizeError<&Self, &mut [u8]>>
5316    where
5317        Self: Immutable,
5318    {
5319        let src = self.as_bytes();
5320        let start = if let Some(start) = dst.len().checked_sub(src.len()) {
5321            start
5322        } else {
5323            return Err(SizeError::new(self));
5324        };
5325        let dst = if let Some(dst) = dst.get_mut(start..) {
5326            dst
5327        } else {
5328            // get_mut() should never return None here. We return a `SizeError`
5329            // rather than .unwrap() because in the event the branch is not
5330            // optimized away, returning a value is generally lighter-weight
5331            // than panicking.
5332            return Err(SizeError::new(self));
5333        };
5334        // SAFETY: Through fallible subslicing of `dst`, we have ensured that
5335        // `dst.len()` is equal to `src.len()`. Neither the size of the source
5336        // nor the size of the destination change between the above subslicing
5337        // operation and the invocation of `copy_unchecked`.
5338        unsafe {
5339            util::copy_unchecked(src, dst);
5340        }
5341        Ok(())
5342    }
5343
5344    /// Writes a copy of `self` to an `io::Write`.
5345    ///
5346    /// This is a shorthand for `dst.write_all(self.as_bytes())`, and is useful
5347    /// for interfacing with operating system byte sinks (files, sockets, etc.).
5348    ///
5349    /// # Examples
5350    ///
5351    /// ```no_run
5352    /// use zerocopy::{byteorder::big_endian::U16, FromBytes, IntoBytes};
5353    /// use std::fs::File;
5354    /// # use zerocopy_derive::*;
5355    ///
5356    /// #[derive(FromBytes, IntoBytes, Immutable, KnownLayout)]
5357    /// #[repr(C, packed)]
5358    /// struct GrayscaleImage {
5359    ///     height: U16,
5360    ///     width: U16,
5361    ///     pixels: [U16],
5362    /// }
5363    ///
5364    /// let image = GrayscaleImage::ref_from_bytes(&[0, 0, 0, 0][..]).unwrap();
5365    /// let mut file = File::create("image.bin").unwrap();
5366    /// image.write_to_io(&mut file).unwrap();
5367    /// ```
5368    ///
5369    /// If the write fails, `write_to_io` returns `Err` and a partial write may
5370    /// have occurred; e.g.:
5371    ///
5372    /// ```
5373    /// # use zerocopy::IntoBytes;
5374    ///
5375    /// let src = u128::MAX;
5376    /// let mut dst = [0u8; 2];
5377    ///
5378    /// let write_result = src.write_to_io(&mut dst[..]);
5379    ///
5380    /// assert!(write_result.is_err());
5381    /// assert_eq!(dst, [255, 255]);
5382    /// ```
5383    #[cfg(feature = "std")]
5384    #[inline(always)]
5385    fn write_to_io<W>(&self, mut dst: W) -> io::Result<()>
5386    where
5387        Self: Immutable,
5388        W: io::Write,
5389    {
5390        dst.write_all(self.as_bytes())
5391    }
5392
5393    #[deprecated(since = "0.8.0", note = "`IntoBytes::as_bytes_mut` was renamed to `as_mut_bytes`")]
5394    #[doc(hidden)]
5395    #[inline]
5396    fn as_bytes_mut(&mut self) -> &mut [u8]
5397    where
5398        Self: FromBytes,
5399    {
5400        self.as_mut_bytes()
5401    }
5402}
5403
5404/// Analyzes whether a type is [`Unaligned`].
5405///
5406/// This derive analyzes, at compile time, whether the annotated type satisfies
5407/// the [safety conditions] of `Unaligned` and implements `Unaligned` if it is
5408/// sound to do so. This derive can be applied to structs, enums, and unions;
5409/// e.g.:
5410///
5411/// ```
5412/// # use zerocopy_derive::Unaligned;
5413/// #[derive(Unaligned)]
5414/// #[repr(C)]
5415/// struct MyStruct {
5416/// # /*
5417///     ...
5418/// # */
5419/// }
5420///
5421/// #[derive(Unaligned)]
5422/// #[repr(u8)]
5423/// enum MyEnum {
5424/// #   Variant0,
5425/// # /*
5426///     ...
5427/// # */
5428/// }
5429///
5430/// #[derive(Unaligned)]
5431/// #[repr(packed)]
5432/// union MyUnion {
5433/// #   variant: u8,
5434/// # /*
5435///     ...
5436/// # */
5437/// }
5438/// ```
5439///
5440/// # Analysis
5441///
5442/// *This section describes, roughly, the analysis performed by this derive to
5443/// determine whether it is sound to implement `Unaligned` for a given type.
5444/// Unless you are modifying the implementation of this derive, or attempting to
5445/// manually implement `Unaligned` for a type yourself, you don't need to read
5446/// this section.*
5447///
5448/// If a type has the following properties, then this derive can implement
5449/// `Unaligned` for that type:
5450///
5451/// - If the type is a struct or union:
5452///   - If `repr(align(N))` is provided, `N` must equal 1.
5453///   - If the type is `repr(C)` or `repr(transparent)`, all fields must be
5454///     [`Unaligned`].
5455///   - If the type is not `repr(C)` or `repr(transparent)`, it must be
5456///     `repr(packed)` or `repr(packed(1))`.
5457/// - If the type is an enum:
5458///   - If `repr(align(N))` is provided, `N` must equal 1.
5459///   - It must be a field-less enum (meaning that all variants have no fields).
5460///   - It must be `repr(i8)` or `repr(u8)`.
5461///
5462/// [safety conditions]: trait@Unaligned#safety
5463#[cfg(any(feature = "derive", test))]
5464#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
5465pub use zerocopy_derive::Unaligned;
5466
5467/// Types with no alignment requirement.
5468///
5469/// If `T: Unaligned`, then `align_of::<T>() == 1`.
5470///
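/// For example, a `repr(packed)` type has alignment 1, so it can derive
/// `Unaligned` (a hypothetical example):
///
/// ```
/// # use zerocopy_derive::Unaligned;
/// #[derive(Unaligned)]
/// #[repr(packed)]
/// struct Header {
///     magic: u32,
///     len: u16,
/// }
///
/// // Hypothetical example: `repr(packed)` lowers the struct's alignment to 1.
/// assert_eq!(core::mem::align_of::<Header>(), 1);
/// ```
///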
5471/// # Implementation
5472///
5473/// **Do not implement this trait yourself!** Instead, use
5474/// [`#[derive(Unaligned)]`][derive]; e.g.:
5475///
5476/// ```
5477/// # use zerocopy_derive::Unaligned;
5478/// #[derive(Unaligned)]
5479/// #[repr(C)]
5480/// struct MyStruct {
5481/// # /*
5482///     ...
5483/// # */
5484/// }
5485///
5486/// #[derive(Unaligned)]
5487/// #[repr(u8)]
5488/// enum MyEnum {
5489/// #   Variant0,
5490/// # /*
5491///     ...
5492/// # */
5493/// }
5494///
5495/// #[derive(Unaligned)]
5496/// #[repr(packed)]
5497/// union MyUnion {
5498/// #   variant: u8,
5499/// # /*
5500///     ...
5501/// # */
5502/// }
5503/// ```
5504///
5505/// This derive performs a sophisticated, compile-time safety analysis to
5506/// determine whether a type is `Unaligned`.
5507///
5508/// # Safety
5509///
5510/// *This section describes what is required in order for `T: Unaligned`, and
5511/// what unsafe code may assume of such types. If you don't plan on implementing
5512/// `Unaligned` manually, and you don't plan on writing unsafe code that
5513/// operates on `Unaligned` types, then you don't need to read this section.*
5514///
5515/// If `T: Unaligned`, then unsafe code may assume that it is sound to produce a
5516/// reference to `T` at any memory location regardless of alignment. If a type
5517/// is marked as `Unaligned` which violates this contract, it may cause
5518/// undefined behavior.
5519///
5520/// `#[derive(Unaligned)]` only permits [types which satisfy these
5521/// requirements][derive-analysis].
5522///
5523#[cfg_attr(
5524    feature = "derive",
5525    doc = "[derive]: zerocopy_derive::Unaligned",
5526    doc = "[derive-analysis]: zerocopy_derive::Unaligned#analysis"
5527)]
5528#[cfg_attr(
5529    not(feature = "derive"),
5530    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.Unaligned.html"),
5531    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.Unaligned.html#analysis"),
5532)]
5533#[cfg_attr(
5534    zerocopy_diagnostic_on_unimplemented_1_78_0,
5535    diagnostic::on_unimplemented(note = "Consider adding `#[derive(Unaligned)]` to `{Self}`")
5536)]
5537pub unsafe trait Unaligned {
5538    // The `Self: Sized` bound makes it so that `Unaligned` is still object
5539    // safe.
5540    #[doc(hidden)]
5541    fn only_derive_is_allowed_to_implement_this_trait()
5542    where
5543        Self: Sized;
5544}
5545
5546/// Derives an optimized [`Hash`] implementation.
5547///
5548/// This derive can be applied to structs and enums implementing both
5549/// [`Immutable`] and [`IntoBytes`]; e.g.:
5550///
5551/// ```
5552/// # use zerocopy_derive::{ByteHash, Immutable, IntoBytes};
5553/// #[derive(ByteHash, Immutable, IntoBytes)]
5554/// #[repr(C)]
5555/// struct MyStruct {
5556/// # /*
5557///     ...
5558/// # */
5559/// }
5560///
5561/// #[derive(ByteHash, Immutable, IntoBytes)]
5562/// #[repr(u8)]
5563/// enum MyEnum {
5564/// #   Variant,
5565/// # /*
5566///     ...
5567/// # */
5568/// }
5569/// ```
5570///
5571/// The standard library's [`derive(Hash)`][derive@Hash] produces hashes by
5572/// individually hashing each field and combining the results. Instead, the
5573/// implementations of [`Hash::hash()`] and [`Hash::hash_slice()`] generated by
5574/// `derive(ByteHash)` convert the entirety of `self` to a byte slice and hash
5575/// it in a single call to [`Hasher::write()`]. This may have performance
5576/// advantages.
5577///
5578/// [`Hash`]: core::hash::Hash
5579/// [`Hash::hash()`]: core::hash::Hash::hash()
5580/// [`Hash::hash_slice()`]: core::hash::Hash::hash_slice()
/// [`Hasher::write()`]: core::hash::Hasher::write()
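///
/// Because `derive(ByteHash)` emits an ordinary [`Hash`] impl, such types can
/// be used in standard hashed collections; e.g. (a hypothetical example, using
/// [`ByteEq`] for the matching `Eq` impl):
///
/// ```
/// # use zerocopy_derive::{ByteEq, ByteHash, Immutable, IntoBytes};
/// use std::collections::HashSet;
///
/// // Hypothetical example type; any `Immutable + IntoBytes` type works.
/// #[derive(ByteHash, ByteEq, Immutable, IntoBytes)]
/// #[repr(C)]
/// struct Point {
///     x: u16,
///     y: u16,
/// }
///
/// let mut set = HashSet::new();
/// set.insert(Point { x: 1, y: 2 });
/// assert!(set.contains(&Point { x: 1, y: 2 }));
/// ```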
5581#[cfg(any(feature = "derive", test))]
5582#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
5583pub use zerocopy_derive::ByteHash;
5584
5585/// Derives optimized [`PartialEq`] and [`Eq`] implementations.
5586///
5587/// This derive can be applied to structs and enums implementing both
5588/// [`Immutable`] and [`IntoBytes`]; e.g.:
5589///
5590/// ```
5591/// # use zerocopy_derive::{ByteEq, Immutable, IntoBytes};
5592/// #[derive(ByteEq, Immutable, IntoBytes)]
5593/// #[repr(C)]
5594/// struct MyStruct {
5595/// # /*
5596///     ...
5597/// # */
5598/// }
5599///
5600/// #[derive(ByteEq, Immutable, IntoBytes)]
5601/// #[repr(u8)]
5602/// enum MyEnum {
5603/// #   Variant,
5604/// # /*
5605///     ...
5606/// # */
5607/// }
5608/// ```
5609///
5610/// The standard library's [`derive(Eq, PartialEq)`][derive@PartialEq] computes
5611/// equality by individually comparing each field. Instead, the implementation
5612/// of [`PartialEq::eq`] emitted by `derive(ByteEq)` converts the entirety of
5613/// `self` and `other` to byte slices and compares those slices for equality.
5614/// This may have performance advantages.
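///
/// For example (a hypothetical type; equality is computed over its raw bytes):
///
/// ```
/// # use zerocopy_derive::{ByteEq, Immutable, IntoBytes};
/// #[derive(ByteEq, Immutable, IntoBytes)]
/// #[repr(C)]
/// struct Point {
///     x: u16,
///     y: u16,
/// }
///
/// assert!(Point { x: 1, y: 2 } == Point { x: 1, y: 2 });
/// assert!(Point { x: 1, y: 2 } != Point { x: 2, y: 1 });
/// ```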
5615#[cfg(any(feature = "derive", test))]
5616#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
5617pub use zerocopy_derive::ByteEq;
5618
5619/// Implements [`SplitAt`].
5620///
5621/// This derive can be applied to structs; e.g.:
5622///
5623/// ```
5624/// # use zerocopy_derive::{KnownLayout, SplitAt};
5625/// #[derive(SplitAt, KnownLayout)]
5626/// #[repr(C)]
5627/// struct MyStruct {
5628///     prefix: u8,
5629///     suffix: [u8],
5630/// }
5631/// ```
5633#[cfg(any(feature = "derive", test))]
5634#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
5635pub use zerocopy_derive::SplitAt;
5636
5637#[cfg(feature = "alloc")]
5638#[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
5639#[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
5640mod alloc_support {
5641    use super::*;
5642
5643    /// Extends a `Vec<T>` by pushing `additional` new items onto the end of the
5644    /// vector. The new items are initialized with zeros.
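    ///
    /// This function is a deprecated alias. A minimal usage sketch of the
    /// replacement, [`FromZeros::extend_vec_zeroed`] (the vector contents are
    /// illustrative):
    ///
    /// ```
    /// use zerocopy::FromZeros;
    ///
    /// let mut v = vec![1u8, 2, 3];
    /// FromZeros::extend_vec_zeroed(&mut v, 2).unwrap();
    /// assert_eq!(&*v, &[1, 2, 3, 0, 0]);
    /// ```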
5645    #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
5646    #[doc(hidden)]
5647    #[deprecated(since = "0.8.0", note = "moved to `FromZeros`")]
5648    #[inline(always)]
5649    pub fn extend_vec_zeroed<T: FromZeros>(
5650        v: &mut Vec<T>,
5651        additional: usize,
5652    ) -> Result<(), AllocError> {
5653        <T as FromZeros>::extend_vec_zeroed(v, additional)
5654    }
5655
5656    /// Inserts `additional` new items into `Vec<T>` at `position`. The new
5657    /// items are initialized with zeros.
5658    ///
5659    /// # Panics
5660    ///
5661    /// Panics if `position > v.len()`.
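    ///
    /// This function is a deprecated alias. A minimal usage sketch of the
    /// replacement, [`FromZeros::insert_vec_zeroed`] (the vector contents are
    /// illustrative):
    ///
    /// ```
    /// use zerocopy::FromZeros;
    ///
    /// let mut v = vec![1u8, 2, 3];
    /// FromZeros::insert_vec_zeroed(&mut v, 1, 2).unwrap();
    /// assert_eq!(&*v, &[1, 0, 0, 2, 3]);
    /// ```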
5662    #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
5663    #[doc(hidden)]
5664    #[deprecated(since = "0.8.0", note = "moved to `FromZeros`")]
5665    #[inline(always)]
5666    pub fn insert_vec_zeroed<T: FromZeros>(
5667        v: &mut Vec<T>,
5668        position: usize,
5669        additional: usize,
5670    ) -> Result<(), AllocError> {
5671        <T as FromZeros>::insert_vec_zeroed(v, position, additional)
5672    }
5673}
5674
5675#[cfg(feature = "alloc")]
5676#[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
5677#[doc(hidden)]
5678pub use alloc_support::*;
5679
5680#[cfg(test)]
5681#[allow(clippy::assertions_on_result_states, clippy::unreadable_literal)]
5682mod tests {
5683    use static_assertions::assert_impl_all;
5684
5685    use super::*;
5686    use crate::util::testutil::*;
5687
5688    // An unsized type.
5689    //
5690    // This is used to test the custom derives of our traits. The `[u8]` type
5691    // gets a hand-rolled impl, so it doesn't exercise our custom derives.
5692    #[derive(Debug, Eq, PartialEq, FromBytes, IntoBytes, Unaligned, Immutable)]
5693    #[repr(transparent)]
5694    struct Unsized([u8]);
5695
5696    impl Unsized {
5697        fn from_mut_slice(slc: &mut [u8]) -> &mut Unsized {
            // SAFETY: This is *probably* sound - since the layouts of `[u8]`
            // and `Unsized` are the same, so are the layouts of `&mut [u8]`
            // and `&mut Unsized`. [1] Even if it turns out that this isn't
            // actually guaranteed by the language spec, we can just change
            // this since it's in test code.
5703            //
5704            // [1] https://github.com/rust-lang/unsafe-code-guidelines/issues/375
5705            unsafe { mem::transmute(slc) }
5706        }
5707    }
5708
5709    #[test]
5710    fn test_known_layout() {
5711        // Test that `$ty` and `ManuallyDrop<$ty>` have the expected layout.
5712        // Test that `PhantomData<$ty>` has the same layout as `()` regardless
5713        // of `$ty`.
5714        macro_rules! test {
5715            ($ty:ty, $expect:expr) => {
5716                let expect = $expect;
5717                assert_eq!(<$ty as KnownLayout>::LAYOUT, expect);
5718                assert_eq!(<ManuallyDrop<$ty> as KnownLayout>::LAYOUT, expect);
5719                assert_eq!(<PhantomData<$ty> as KnownLayout>::LAYOUT, <() as KnownLayout>::LAYOUT);
5720            };
5721        }
5722
5723        let layout = |offset, align, _trailing_slice_elem_size| DstLayout {
5724            align: NonZeroUsize::new(align).unwrap(),
5725            size_info: match _trailing_slice_elem_size {
5726                None => SizeInfo::Sized { size: offset },
5727                Some(elem_size) => SizeInfo::SliceDst(TrailingSliceLayout { offset, elem_size }),
5728            },
5729        };
5730
5731        test!((), layout(0, 1, None));
5732        test!(u8, layout(1, 1, None));
5733        // Use `align_of` because `u64` alignment may be smaller than 8 on some
5734        // platforms.
5735        test!(u64, layout(8, mem::align_of::<u64>(), None));
5736        test!(AU64, layout(8, 8, None));
5737
5738        test!(Option<&'static ()>, usize::LAYOUT);
5739
5740        test!([()], layout(0, 1, Some(0)));
5741        test!([u8], layout(0, 1, Some(1)));
5742        test!(str, layout(0, 1, Some(1)));
5743    }
5744
5745    #[cfg(feature = "derive")]
5746    #[test]
5747    fn test_known_layout_derive() {
5748        // In this and other files (`late_compile_pass.rs`,
5749        // `mid_compile_pass.rs`, and `struct.rs`), we test success and failure
5750        // modes of `derive(KnownLayout)` for the following combination of
5751        // properties:
5752        //
5753        // +------------+--------------------------------------+-----------+
5754        // |            |      trailing field properties       |           |
5755        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5756        // |------------+----------+----------------+----------+-----------|
5757        // |          N |        N |              N |        N |      KL00 |
5758        // |          N |        N |              N |        Y |      KL01 |
5759        // |          N |        N |              Y |        N |      KL02 |
5760        // |          N |        N |              Y |        Y |      KL03 |
5761        // |          N |        Y |              N |        N |      KL04 |
5762        // |          N |        Y |              N |        Y |      KL05 |
5763        // |          N |        Y |              Y |        N |      KL06 |
5764        // |          N |        Y |              Y |        Y |      KL07 |
5765        // |          Y |        N |              N |        N |      KL08 |
5766        // |          Y |        N |              N |        Y |      KL09 |
5767        // |          Y |        N |              Y |        N |      KL10 |
5768        // |          Y |        N |              Y |        Y |      KL11 |
5769        // |          Y |        Y |              N |        N |      KL12 |
5770        // |          Y |        Y |              N |        Y |      KL13 |
5771        // |          Y |        Y |              Y |        N |      KL14 |
5772        // |          Y |        Y |              Y |        Y |      KL15 |
5773        // +------------+----------+----------------+----------+-----------+
5774
5775        struct NotKnownLayout<T = ()> {
5776            _t: T,
5777        }
5778
5779        #[derive(KnownLayout)]
5780        #[repr(C)]
5781        struct AlignSize<const ALIGN: usize, const SIZE: usize>
5782        where
5783            elain::Align<ALIGN>: elain::Alignment,
5784        {
5785            _align: elain::Align<ALIGN>,
5786            size: [u8; SIZE],
5787        }
5788
5789        type AU16 = AlignSize<2, 2>;
5790        type AU32 = AlignSize<4, 4>;
5791
5792        fn _assert_kl<T: ?Sized + KnownLayout>(_: &T) {}
5793
5794        let sized_layout = |align, size| DstLayout {
5795            align: NonZeroUsize::new(align).unwrap(),
5796            size_info: SizeInfo::Sized { size },
5797        };
5798
5799        let unsized_layout = |align, elem_size, offset| DstLayout {
5800            align: NonZeroUsize::new(align).unwrap(),
5801            size_info: SizeInfo::SliceDst(TrailingSliceLayout { offset, elem_size }),
5802        };
5803
5804        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5805        // |          N |        N |              N |        Y |      KL01 |
5806        #[allow(dead_code)]
5807        #[derive(KnownLayout)]
5808        struct KL01(NotKnownLayout<AU32>, NotKnownLayout<AU16>);
5809
5810        let expected = DstLayout::for_type::<KL01>();
5811
5812        assert_eq!(<KL01 as KnownLayout>::LAYOUT, expected);
5813        assert_eq!(<KL01 as KnownLayout>::LAYOUT, sized_layout(4, 8));
5814
5815        // ...with `align(N)`:
5816        #[allow(dead_code)]
5817        #[derive(KnownLayout)]
5818        #[repr(align(64))]
5819        struct KL01Align(NotKnownLayout<AU32>, NotKnownLayout<AU16>);
5820
5821        let expected = DstLayout::for_type::<KL01Align>();
5822
5823        assert_eq!(<KL01Align as KnownLayout>::LAYOUT, expected);
5824        assert_eq!(<KL01Align as KnownLayout>::LAYOUT, sized_layout(64, 64));
5825
5826        // ...with `packed`:
5827        #[allow(dead_code)]
5828        #[derive(KnownLayout)]
5829        #[repr(packed)]
5830        struct KL01Packed(NotKnownLayout<AU32>, NotKnownLayout<AU16>);
5831
5832        let expected = DstLayout::for_type::<KL01Packed>();
5833
5834        assert_eq!(<KL01Packed as KnownLayout>::LAYOUT, expected);
5835        assert_eq!(<KL01Packed as KnownLayout>::LAYOUT, sized_layout(1, 6));
5836
5837        // ...with `packed(N)`:
5838        #[allow(dead_code)]
5839        #[derive(KnownLayout)]
5840        #[repr(packed(2))]
5841        struct KL01PackedN(NotKnownLayout<AU32>, NotKnownLayout<AU16>);
5842
5843        assert_impl_all!(KL01PackedN: KnownLayout);
5844
5845        let expected = DstLayout::for_type::<KL01PackedN>();
5846
5847        assert_eq!(<KL01PackedN as KnownLayout>::LAYOUT, expected);
5848        assert_eq!(<KL01PackedN as KnownLayout>::LAYOUT, sized_layout(2, 6));
5849
5850        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5851        // |          N |        N |              Y |        Y |      KL03 |
5852        #[allow(dead_code)]
5853        #[derive(KnownLayout)]
5854        struct KL03(NotKnownLayout, u8);
5855
5856        let expected = DstLayout::for_type::<KL03>();
5857
5858        assert_eq!(<KL03 as KnownLayout>::LAYOUT, expected);
5859        assert_eq!(<KL03 as KnownLayout>::LAYOUT, sized_layout(1, 1));
5860
5861        // ... with `align(N)`
5862        #[allow(dead_code)]
5863        #[derive(KnownLayout)]
5864        #[repr(align(64))]
5865        struct KL03Align(NotKnownLayout<AU32>, u8);
5866
5867        let expected = DstLayout::for_type::<KL03Align>();
5868
5869        assert_eq!(<KL03Align as KnownLayout>::LAYOUT, expected);
5870        assert_eq!(<KL03Align as KnownLayout>::LAYOUT, sized_layout(64, 64));
5871
5872        // ... with `packed`:
5873        #[allow(dead_code)]
5874        #[derive(KnownLayout)]
5875        #[repr(packed)]
5876        struct KL03Packed(NotKnownLayout<AU32>, u8);
5877
5878        let expected = DstLayout::for_type::<KL03Packed>();
5879
5880        assert_eq!(<KL03Packed as KnownLayout>::LAYOUT, expected);
5881        assert_eq!(<KL03Packed as KnownLayout>::LAYOUT, sized_layout(1, 5));
5882
5883        // ... with `packed(N)`
5884        #[allow(dead_code)]
5885        #[derive(KnownLayout)]
5886        #[repr(packed(2))]
5887        struct KL03PackedN(NotKnownLayout<AU32>, u8);
5888
5889        assert_impl_all!(KL03PackedN: KnownLayout);
5890
5891        let expected = DstLayout::for_type::<KL03PackedN>();
5892
5893        assert_eq!(<KL03PackedN as KnownLayout>::LAYOUT, expected);
5894        assert_eq!(<KL03PackedN as KnownLayout>::LAYOUT, sized_layout(2, 6));
5895
5896        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5897        // |          N |        Y |              N |        Y |      KL05 |
5898        #[allow(dead_code)]
5899        #[derive(KnownLayout)]
5900        struct KL05<T>(u8, T);
5901
5902        fn _test_kl05<T>(t: T) -> impl KnownLayout {
5903            KL05(0u8, t)
5904        }
5905
5906        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5907        // |          N |        Y |              Y |        Y |      KL07 |
5908        #[allow(dead_code)]
5909        #[derive(KnownLayout)]
5910        struct KL07<T: KnownLayout>(u8, T);
5911
5912        fn _test_kl07<T: KnownLayout>(t: T) -> impl KnownLayout {
5913            let _ = KL07(0u8, t);
5914        }
5915
5916        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5917        // |          Y |        N |              Y |        N |      KL10 |
5918        #[allow(dead_code)]
5919        #[derive(KnownLayout)]
5920        #[repr(C)]
5921        struct KL10(NotKnownLayout<AU32>, [u8]);
5922
5923        let expected = DstLayout::new_zst(None)
5924            .extend(DstLayout::for_type::<NotKnownLayout<AU32>>(), None)
5925            .extend(<[u8] as KnownLayout>::LAYOUT, None)
5926            .pad_to_align();
5927
5928        assert_eq!(<KL10 as KnownLayout>::LAYOUT, expected);
5929        assert_eq!(<KL10 as KnownLayout>::LAYOUT, unsized_layout(4, 1, 4));
5930
5931        // ...with `align(N)`:
5932        #[allow(dead_code)]
5933        #[derive(KnownLayout)]
5934        #[repr(C, align(64))]
5935        struct KL10Align(NotKnownLayout<AU32>, [u8]);
5936
5937        let repr_align = NonZeroUsize::new(64);
5938
5939        let expected = DstLayout::new_zst(repr_align)
5940            .extend(DstLayout::for_type::<NotKnownLayout<AU32>>(), None)
5941            .extend(<[u8] as KnownLayout>::LAYOUT, None)
5942            .pad_to_align();
5943
5944        assert_eq!(<KL10Align as KnownLayout>::LAYOUT, expected);
5945        assert_eq!(<KL10Align as KnownLayout>::LAYOUT, unsized_layout(64, 1, 4));
5946
5947        // ...with `packed`:
5948        #[allow(dead_code)]
5949        #[derive(KnownLayout)]
5950        #[repr(C, packed)]
5951        struct KL10Packed(NotKnownLayout<AU32>, [u8]);
5952
5953        let repr_packed = NonZeroUsize::new(1);
5954
5955        let expected = DstLayout::new_zst(None)
5956            .extend(DstLayout::for_type::<NotKnownLayout<AU32>>(), repr_packed)
5957            .extend(<[u8] as KnownLayout>::LAYOUT, repr_packed)
5958            .pad_to_align();
5959
5960        assert_eq!(<KL10Packed as KnownLayout>::LAYOUT, expected);
5961        assert_eq!(<KL10Packed as KnownLayout>::LAYOUT, unsized_layout(1, 1, 4));
5962
5963        // ...with `packed(N)`:
5964        #[allow(dead_code)]
5965        #[derive(KnownLayout)]
5966        #[repr(C, packed(2))]
5967        struct KL10PackedN(NotKnownLayout<AU32>, [u8]);
5968
5969        let repr_packed = NonZeroUsize::new(2);
5970
5971        let expected = DstLayout::new_zst(None)
5972            .extend(DstLayout::for_type::<NotKnownLayout<AU32>>(), repr_packed)
5973            .extend(<[u8] as KnownLayout>::LAYOUT, repr_packed)
5974            .pad_to_align();
5975
5976        assert_eq!(<KL10PackedN as KnownLayout>::LAYOUT, expected);
5977        assert_eq!(<KL10PackedN as KnownLayout>::LAYOUT, unsized_layout(2, 1, 4));
5978
5979        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5980        // |          Y |        N |              Y |        Y |      KL11 |
5981        #[allow(dead_code)]
5982        #[derive(KnownLayout)]
5983        #[repr(C)]
5984        struct KL11(NotKnownLayout<AU64>, u8);
5985
5986        let expected = DstLayout::new_zst(None)
5987            .extend(DstLayout::for_type::<NotKnownLayout<AU64>>(), None)
5988            .extend(<u8 as KnownLayout>::LAYOUT, None)
5989            .pad_to_align();
5990
5991        assert_eq!(<KL11 as KnownLayout>::LAYOUT, expected);
5992        assert_eq!(<KL11 as KnownLayout>::LAYOUT, sized_layout(8, 16));
5993
5994        // ...with `align(N)`:
5995        #[allow(dead_code)]
5996        #[derive(KnownLayout)]
5997        #[repr(C, align(64))]
5998        struct KL11Align(NotKnownLayout<AU64>, u8);
5999
6000        let repr_align = NonZeroUsize::new(64);
6001
6002        let expected = DstLayout::new_zst(repr_align)
6003            .extend(DstLayout::for_type::<NotKnownLayout<AU64>>(), None)
6004            .extend(<u8 as KnownLayout>::LAYOUT, None)
6005            .pad_to_align();
6006
6007        assert_eq!(<KL11Align as KnownLayout>::LAYOUT, expected);
6008        assert_eq!(<KL11Align as KnownLayout>::LAYOUT, sized_layout(64, 64));
6009
6010        // ...with `packed`:
6011        #[allow(dead_code)]
6012        #[derive(KnownLayout)]
6013        #[repr(C, packed)]
6014        struct KL11Packed(NotKnownLayout<AU64>, u8);
6015
6016        let repr_packed = NonZeroUsize::new(1);
6017
6018        let expected = DstLayout::new_zst(None)
6019            .extend(DstLayout::for_type::<NotKnownLayout<AU64>>(), repr_packed)
6020            .extend(<u8 as KnownLayout>::LAYOUT, repr_packed)
6021            .pad_to_align();
6022
6023        assert_eq!(<KL11Packed as KnownLayout>::LAYOUT, expected);
6024        assert_eq!(<KL11Packed as KnownLayout>::LAYOUT, sized_layout(1, 9));
6025
6026        // ...with `packed(N)`:
6027        #[allow(dead_code)]
6028        #[derive(KnownLayout)]
6029        #[repr(C, packed(2))]
6030        struct KL11PackedN(NotKnownLayout<AU64>, u8);
6031
6032        let repr_packed = NonZeroUsize::new(2);
6033
6034        let expected = DstLayout::new_zst(None)
6035            .extend(DstLayout::for_type::<NotKnownLayout<AU64>>(), repr_packed)
6036            .extend(<u8 as KnownLayout>::LAYOUT, repr_packed)
6037            .pad_to_align();
6038
6039        assert_eq!(<KL11PackedN as KnownLayout>::LAYOUT, expected);
6040        assert_eq!(<KL11PackedN as KnownLayout>::LAYOUT, sized_layout(2, 10));
6041
6042        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
6043        // |          Y |        Y |              Y |        N |      KL14 |
6044        #[allow(dead_code)]
6045        #[derive(KnownLayout)]
6046        #[repr(C)]
6047        struct KL14<T: ?Sized + KnownLayout>(u8, T);
6048
6049        fn _test_kl14<T: ?Sized + KnownLayout>(kl: &KL14<T>) {
6050            _assert_kl(kl)
6051        }
6052
6053        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
6054        // |          Y |        Y |              Y |        Y |      KL15 |
6055        #[allow(dead_code)]
6056        #[derive(KnownLayout)]
6057        #[repr(C)]
6058        struct KL15<T: KnownLayout>(u8, T);
6059
6060        fn _test_kl15<T: KnownLayout>(t: T) -> impl KnownLayout {
6061            let _ = KL15(0u8, t);
6062        }
6063
6064        // Test a variety of combinations of field types:
6065        //  - ()
6066        //  - u8
6067        //  - AU16
6068        //  - [()]
6069        //  - [u8]
6070        //  - [AU16]
6071
6072        #[allow(clippy::upper_case_acronyms, dead_code)]
6073        #[derive(KnownLayout)]
6074        #[repr(C)]
6075        struct KLTU<T, U: ?Sized>(T, U);
6076
6077        assert_eq!(<KLTU<(), ()> as KnownLayout>::LAYOUT, sized_layout(1, 0));
6078
6079        assert_eq!(<KLTU<(), u8> as KnownLayout>::LAYOUT, sized_layout(1, 1));
6080
6081        assert_eq!(<KLTU<(), AU16> as KnownLayout>::LAYOUT, sized_layout(2, 2));
6082
6083        assert_eq!(<KLTU<(), [()]> as KnownLayout>::LAYOUT, unsized_layout(1, 0, 0));
6084
6085        assert_eq!(<KLTU<(), [u8]> as KnownLayout>::LAYOUT, unsized_layout(1, 1, 0));
6086
6087        assert_eq!(<KLTU<(), [AU16]> as KnownLayout>::LAYOUT, unsized_layout(2, 2, 0));
6088
6089        assert_eq!(<KLTU<u8, ()> as KnownLayout>::LAYOUT, sized_layout(1, 1));
6090
6091        assert_eq!(<KLTU<u8, u8> as KnownLayout>::LAYOUT, sized_layout(1, 2));
6092
6093        assert_eq!(<KLTU<u8, AU16> as KnownLayout>::LAYOUT, sized_layout(2, 4));
6094
6095        assert_eq!(<KLTU<u8, [()]> as KnownLayout>::LAYOUT, unsized_layout(1, 0, 1));
6096
6097        assert_eq!(<KLTU<u8, [u8]> as KnownLayout>::LAYOUT, unsized_layout(1, 1, 1));
6098
6099        assert_eq!(<KLTU<u8, [AU16]> as KnownLayout>::LAYOUT, unsized_layout(2, 2, 2));
6100
6101        assert_eq!(<KLTU<AU16, ()> as KnownLayout>::LAYOUT, sized_layout(2, 2));
6102
6103        assert_eq!(<KLTU<AU16, u8> as KnownLayout>::LAYOUT, sized_layout(2, 4));
6104
6105        assert_eq!(<KLTU<AU16, AU16> as KnownLayout>::LAYOUT, sized_layout(2, 4));
6106
6107        assert_eq!(<KLTU<AU16, [()]> as KnownLayout>::LAYOUT, unsized_layout(2, 0, 2));
6108
6109        assert_eq!(<KLTU<AU16, [u8]> as KnownLayout>::LAYOUT, unsized_layout(2, 1, 2));
6110
6111        assert_eq!(<KLTU<AU16, [AU16]> as KnownLayout>::LAYOUT, unsized_layout(2, 2, 2));
6112
6113        // Test a variety of field counts.
6114
6115        #[derive(KnownLayout)]
6116        #[repr(C)]
6117        struct KLF0;
6118
6119        assert_eq!(<KLF0 as KnownLayout>::LAYOUT, sized_layout(1, 0));
6120
6121        #[derive(KnownLayout)]
6122        #[repr(C)]
6123        struct KLF1([u8]);
6124
6125        assert_eq!(<KLF1 as KnownLayout>::LAYOUT, unsized_layout(1, 1, 0));
6126
6127        #[derive(KnownLayout)]
6128        #[repr(C)]
6129        struct KLF2(NotKnownLayout<u8>, [u8]);
6130
6131        assert_eq!(<KLF2 as KnownLayout>::LAYOUT, unsized_layout(1, 1, 1));
6132
6133        #[derive(KnownLayout)]
6134        #[repr(C)]
6135        struct KLF3(NotKnownLayout<u8>, NotKnownLayout<AU16>, [u8]);
6136
6137        assert_eq!(<KLF3 as KnownLayout>::LAYOUT, unsized_layout(2, 1, 4));
6138
6139        #[derive(KnownLayout)]
6140        #[repr(C)]
6141        struct KLF4(NotKnownLayout<u8>, NotKnownLayout<AU16>, NotKnownLayout<AU32>, [u8]);
6142
6143        assert_eq!(<KLF4 as KnownLayout>::LAYOUT, unsized_layout(4, 1, 8));
6144    }
6145
6146    #[test]
6147    fn test_object_safety() {
6148        fn _takes_no_cell(_: &dyn Immutable) {}
6149        fn _takes_unaligned(_: &dyn Unaligned) {}
6150    }
6151
6152    #[test]
6153    fn test_from_zeros_only() {
6154        // Test types that implement `FromZeros` but not `FromBytes`.
6155
6156        assert!(!bool::new_zeroed());
6157        assert_eq!(char::new_zeroed(), '\0');
6158
6159        #[cfg(feature = "alloc")]
6160        {
6161            assert_eq!(bool::new_box_zeroed(), Ok(Box::new(false)));
6162            assert_eq!(char::new_box_zeroed(), Ok(Box::new('\0')));
6163
6164            assert_eq!(
6165                <[bool]>::new_box_zeroed_with_elems(3).unwrap().as_ref(),
6166                [false, false, false]
6167            );
6168            assert_eq!(
6169                <[char]>::new_box_zeroed_with_elems(3).unwrap().as_ref(),
6170                ['\0', '\0', '\0']
6171            );
6172
6173            assert_eq!(bool::new_vec_zeroed(3).unwrap().as_ref(), [false, false, false]);
6174            assert_eq!(char::new_vec_zeroed(3).unwrap().as_ref(), ['\0', '\0', '\0']);
6175        }
6176
6177        let mut string = "hello".to_string();
6178        let s: &mut str = string.as_mut();
6179        assert_eq!(s, "hello");
6180        s.zero();
6181        assert_eq!(s, "\0\0\0\0\0");
6182    }
6183
6184    #[test]
6185    fn test_zst_count_preserved() {
        // Test that, when an explicit count is provided for a type with a
6187        // ZST trailing slice element, that count is preserved. This is
6188        // important since, for such types, all element counts result in objects
6189        // of the same size, and so the correct behavior is ambiguous. However,
6190        // preserving the count as requested by the user is the behavior that we
6191        // document publicly.
6192
6193        // FromZeros methods
6194        #[cfg(feature = "alloc")]
6195        assert_eq!(<[()]>::new_box_zeroed_with_elems(3).unwrap().len(), 3);
6196        #[cfg(feature = "alloc")]
6197        assert_eq!(<()>::new_vec_zeroed(3).unwrap().len(), 3);
6198
6199        // FromBytes methods
6200        assert_eq!(<[()]>::ref_from_bytes_with_elems(&[][..], 3).unwrap().len(), 3);
6201        assert_eq!(<[()]>::ref_from_prefix_with_elems(&[][..], 3).unwrap().0.len(), 3);
6202        assert_eq!(<[()]>::ref_from_suffix_with_elems(&[][..], 3).unwrap().1.len(), 3);
6203        assert_eq!(<[()]>::mut_from_bytes_with_elems(&mut [][..], 3).unwrap().len(), 3);
6204        assert_eq!(<[()]>::mut_from_prefix_with_elems(&mut [][..], 3).unwrap().0.len(), 3);
6205        assert_eq!(<[()]>::mut_from_suffix_with_elems(&mut [][..], 3).unwrap().1.len(), 3);
6206    }
6207
6208    #[test]
6209    fn test_read_write() {
6210        const VAL: u64 = 0x12345678;
6211        #[cfg(target_endian = "big")]
6212        const VAL_BYTES: [u8; 8] = VAL.to_be_bytes();
6213        #[cfg(target_endian = "little")]
6214        const VAL_BYTES: [u8; 8] = VAL.to_le_bytes();
6215        const ZEROS: [u8; 8] = [0u8; 8];
6216
6217        // Test `FromBytes::{read_from, read_from_prefix, read_from_suffix}`.
6218
6219        assert_eq!(u64::read_from_bytes(&VAL_BYTES[..]), Ok(VAL));
6220        // The first 8 bytes are from `VAL_BYTES` and the second 8 bytes are all
6221        // zeros.
6222        let bytes_with_prefix: [u8; 16] = transmute!([VAL_BYTES, [0; 8]]);
6223        assert_eq!(u64::read_from_prefix(&bytes_with_prefix[..]), Ok((VAL, &ZEROS[..])));
6224        assert_eq!(u64::read_from_suffix(&bytes_with_prefix[..]), Ok((&VAL_BYTES[..], 0)));
6225        // The first 8 bytes are all zeros and the second 8 bytes are from
6226        // `VAL_BYTES`
6227        let bytes_with_suffix: [u8; 16] = transmute!([[0; 8], VAL_BYTES]);
6228        assert_eq!(u64::read_from_prefix(&bytes_with_suffix[..]), Ok((0, &VAL_BYTES[..])));
6229        assert_eq!(u64::read_from_suffix(&bytes_with_suffix[..]), Ok((&ZEROS[..], VAL)));
6230
6231        // Test `IntoBytes::{write_to, write_to_prefix, write_to_suffix}`.
6232
6233        let mut bytes = [0u8; 8];
6234        assert_eq!(VAL.write_to(&mut bytes[..]), Ok(()));
6235        assert_eq!(bytes, VAL_BYTES);
6236        let mut bytes = [0u8; 16];
6237        assert_eq!(VAL.write_to_prefix(&mut bytes[..]), Ok(()));
6238        let want: [u8; 16] = transmute!([VAL_BYTES, [0; 8]]);
6239        assert_eq!(bytes, want);
6240        let mut bytes = [0u8; 16];
6241        assert_eq!(VAL.write_to_suffix(&mut bytes[..]), Ok(()));
6242        let want: [u8; 16] = transmute!([[0; 8], VAL_BYTES]);
6243        assert_eq!(bytes, want);
6244    }
6245
6246    #[test]
6247    #[cfg(feature = "std")]
6248    fn test_read_io_with_padding_soundness() {
6249        // This test is designed to exhibit potential UB in
6250        // `FromBytes::read_from_io`. (see #2319, #2320).
6251
6252        // On most platforms (where `align_of::<u16>() == 2`), `WithPadding`
6253        // will have inter-field padding between `x` and `y`.
6254        #[derive(FromBytes)]
6255        #[repr(C)]
6256        struct WithPadding {
6257            x: u8,
6258            y: u16,
6259        }
6260        struct ReadsInRead;
6261        impl std::io::Read for ReadsInRead {
6262            fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
6263                // This body branches on every byte of `buf`, ensuring that it
6264                // exhibits UB if any byte of `buf` is uninitialized.
6265                if buf.iter().all(|&x| x == 0) {
6266                    Ok(buf.len())
6267                } else {
6268                    buf.iter_mut().for_each(|x| *x = 0);
6269                    Ok(buf.len())
6270                }
6271            }
6272        }
6273        assert!(matches!(WithPadding::read_from_io(ReadsInRead), Ok(WithPadding { x: 0, y: 0 })));
6274    }
6275
6276    #[test]
6277    #[cfg(feature = "std")]
6278    fn test_read_write_io() {
6279        let mut long_buffer = [0, 0, 0, 0];
6280        assert!(matches!(u16::MAX.write_to_io(&mut long_buffer[..]), Ok(())));
6281        assert_eq!(long_buffer, [255, 255, 0, 0]);
6282        assert!(matches!(u16::read_from_io(&long_buffer[..]), Ok(u16::MAX)));
6283
6284        let mut short_buffer = [0, 0];
6285        assert!(u32::MAX.write_to_io(&mut short_buffer[..]).is_err());
6286        assert_eq!(short_buffer, [255, 255]);
6287        assert!(u32::read_from_io(&short_buffer[..]).is_err());
6288    }
6289
6290    #[test]
6291    fn test_try_from_bytes_try_read_from() {
6292        assert_eq!(<bool as TryFromBytes>::try_read_from_bytes(&[0]), Ok(false));
6293        assert_eq!(<bool as TryFromBytes>::try_read_from_bytes(&[1]), Ok(true));
6294
6295        assert_eq!(<bool as TryFromBytes>::try_read_from_prefix(&[0, 2]), Ok((false, &[2][..])));
6296        assert_eq!(<bool as TryFromBytes>::try_read_from_prefix(&[1, 2]), Ok((true, &[2][..])));
6297
6298        assert_eq!(<bool as TryFromBytes>::try_read_from_suffix(&[2, 0]), Ok((&[2][..], false)));
6299        assert_eq!(<bool as TryFromBytes>::try_read_from_suffix(&[2, 1]), Ok((&[2][..], true)));
6300
6301        // If we don't pass enough bytes, it fails.
6302        assert!(matches!(
6303            <u8 as TryFromBytes>::try_read_from_bytes(&[]),
6304            Err(TryReadError::Size(_))
6305        ));
6306        assert!(matches!(
6307            <u8 as TryFromBytes>::try_read_from_prefix(&[]),
6308            Err(TryReadError::Size(_))
6309        ));
6310        assert!(matches!(
6311            <u8 as TryFromBytes>::try_read_from_suffix(&[]),
6312            Err(TryReadError::Size(_))
6313        ));
6314
6315        // If we pass too many bytes, it fails.
6316        assert!(matches!(
6317            <u8 as TryFromBytes>::try_read_from_bytes(&[0, 0]),
6318            Err(TryReadError::Size(_))
6319        ));
6320
6321        // If we pass an invalid value, it fails.
6322        assert!(matches!(
6323            <bool as TryFromBytes>::try_read_from_bytes(&[2]),
6324            Err(TryReadError::Validity(_))
6325        ));
6326        assert!(matches!(
6327            <bool as TryFromBytes>::try_read_from_prefix(&[2, 0]),
6328            Err(TryReadError::Validity(_))
6329        ));
6330        assert!(matches!(
6331            <bool as TryFromBytes>::try_read_from_suffix(&[0, 2]),
6332            Err(TryReadError::Validity(_))
6333        ));
6334
6335        // Reading from a misaligned buffer should still succeed. Since `AU64`'s
6336        // alignment is 8, and since we read from two adjacent addresses one
6337        // byte apart, it is guaranteed that at least one of them (though
6338        // possibly both) will be misaligned.
6339        let bytes: [u8; 9] = [0, 0, 0, 0, 0, 0, 0, 0, 0];
6340        assert_eq!(<AU64 as TryFromBytes>::try_read_from_bytes(&bytes[..8]), Ok(AU64(0)));
6341        assert_eq!(<AU64 as TryFromBytes>::try_read_from_bytes(&bytes[1..9]), Ok(AU64(0)));
6342
6343        assert_eq!(
6344            <AU64 as TryFromBytes>::try_read_from_prefix(&bytes[..8]),
6345            Ok((AU64(0), &[][..]))
6346        );
6347        assert_eq!(
6348            <AU64 as TryFromBytes>::try_read_from_prefix(&bytes[1..9]),
6349            Ok((AU64(0), &[][..]))
6350        );
6351
6352        assert_eq!(
6353            <AU64 as TryFromBytes>::try_read_from_suffix(&bytes[..8]),
6354            Ok((&[][..], AU64(0)))
6355        );
6356        assert_eq!(
6357            <AU64 as TryFromBytes>::try_read_from_suffix(&bytes[1..9]),
6358            Ok((&[][..], AU64(0)))
6359        );
6360    }
6361
6362    #[test]
6363    fn test_ref_from_mut_from() {
        // Test `FromBytes::{ref_from, mut_from}{,_prefix,_suffix}` success cases.
        // Exhaustive coverage for these methods is provided by the `Ref` tests
        // above, to which these helper methods defer.
6367
6368        let mut buf =
6369            Align::<[u8; 16], AU64>::new([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]);
6370
6371        assert_eq!(
6372            AU64::ref_from_bytes(&buf.t[8..]).unwrap().0.to_ne_bytes(),
6373            [8, 9, 10, 11, 12, 13, 14, 15]
6374        );
6375        let suffix = AU64::mut_from_bytes(&mut buf.t[8..]).unwrap();
6376        suffix.0 = 0x0101010101010101;
        // The `[u8; 9]` is a non-half size of the full buffer, which would catch
6378        // `from_prefix` having the same implementation as `from_suffix` (issues #506, #511).
6379        assert_eq!(
6380            <[u8; 9]>::ref_from_suffix(&buf.t[..]).unwrap(),
6381            (&[0, 1, 2, 3, 4, 5, 6][..], &[7u8, 1, 1, 1, 1, 1, 1, 1, 1])
6382        );
6383        let (prefix, suffix) = AU64::mut_from_suffix(&mut buf.t[1..]).unwrap();
6384        assert_eq!(prefix, &mut [1u8, 2, 3, 4, 5, 6, 7][..]);
6385        suffix.0 = 0x0202020202020202;
6386        let (prefix, suffix) = <[u8; 10]>::mut_from_suffix(&mut buf.t[..]).unwrap();
6387        assert_eq!(prefix, &mut [0u8, 1, 2, 3, 4, 5][..]);
6388        suffix[0] = 42;
6389        assert_eq!(
6390            <[u8; 9]>::ref_from_prefix(&buf.t[..]).unwrap(),
6391            (&[0u8, 1, 2, 3, 4, 5, 42, 7, 2], &[2u8, 2, 2, 2, 2, 2, 2][..])
6392        );
6393        <[u8; 2]>::mut_from_prefix(&mut buf.t[..]).unwrap().0[1] = 30;
6394        assert_eq!(buf.t, [0, 30, 2, 3, 4, 5, 42, 7, 2, 2, 2, 2, 2, 2, 2, 2]);
6395    }
6396
6397    #[test]
6398    fn test_ref_from_mut_from_error() {
        // Test `FromBytes::{ref_from, mut_from}{,_prefix,_suffix}` error cases.
6400
6401        // Fail because the buffer is too large.
6402        let mut buf = Align::<[u8; 16], AU64>::default();
6403        // `buf.t` should be aligned to 8, so only the length check should fail.
6404        assert!(AU64::ref_from_bytes(&buf.t[..]).is_err());
6405        assert!(AU64::mut_from_bytes(&mut buf.t[..]).is_err());
6406        assert!(<[u8; 8]>::ref_from_bytes(&buf.t[..]).is_err());
6407        assert!(<[u8; 8]>::mut_from_bytes(&mut buf.t[..]).is_err());
6408
6409        // Fail because the buffer is too small.
6410        let mut buf = Align::<[u8; 4], AU64>::default();
6411        assert!(AU64::ref_from_bytes(&buf.t[..]).is_err());
6412        assert!(AU64::mut_from_bytes(&mut buf.t[..]).is_err());
6413        assert!(<[u8; 8]>::ref_from_bytes(&buf.t[..]).is_err());
6414        assert!(<[u8; 8]>::mut_from_bytes(&mut buf.t[..]).is_err());
6415        assert!(AU64::ref_from_prefix(&buf.t[..]).is_err());
6416        assert!(AU64::mut_from_prefix(&mut buf.t[..]).is_err());
6417        assert!(AU64::ref_from_suffix(&buf.t[..]).is_err());
6418        assert!(AU64::mut_from_suffix(&mut buf.t[..]).is_err());
6419        assert!(<[u8; 8]>::ref_from_prefix(&buf.t[..]).is_err());
6420        assert!(<[u8; 8]>::mut_from_prefix(&mut buf.t[..]).is_err());
6421        assert!(<[u8; 8]>::ref_from_suffix(&buf.t[..]).is_err());
6422        assert!(<[u8; 8]>::mut_from_suffix(&mut buf.t[..]).is_err());
6423
6424        // Fail because the alignment is insufficient.
6425        let mut buf = Align::<[u8; 13], AU64>::default();
6426        assert!(AU64::ref_from_bytes(&buf.t[1..]).is_err());
6427        assert!(AU64::mut_from_bytes(&mut buf.t[1..]).is_err());
6430        assert!(AU64::ref_from_prefix(&buf.t[1..]).is_err());
6431        assert!(AU64::mut_from_prefix(&mut buf.t[1..]).is_err());
6432        assert!(AU64::ref_from_suffix(&buf.t[..]).is_err());
6433        assert!(AU64::mut_from_suffix(&mut buf.t[..]).is_err());
6434    }
6435
6436    #[test]
6437    fn test_to_methods() {
6438        /// Run a series of tests by calling `IntoBytes` methods on `t`.
6439        ///
6440        /// `bytes` is the expected byte sequence returned from `t.as_bytes()`
        /// before `t` has been modified. `post_mutation` is the expected value
        /// of `t` after `t.as_mut_bytes()[0]` has had its bits flipped (by
        /// applying `^= 0xFF`).
6444        ///
6445        /// `N` is the size of `t` in bytes.
6446        fn test<T: FromBytes + IntoBytes + Immutable + Debug + Eq + ?Sized, const N: usize>(
6447            t: &mut T,
6448            bytes: &[u8],
6449            post_mutation: &T,
6450        ) {
6451            // Test that we can access the underlying bytes, and that we get the
6452            // right bytes and the right number of bytes.
6453            assert_eq!(t.as_bytes(), bytes);
6454
6455            // Test that changes to the underlying byte slices are reflected in
6456            // the original object.
6457            t.as_mut_bytes()[0] ^= 0xFF;
6458            assert_eq!(t, post_mutation);
6459            t.as_mut_bytes()[0] ^= 0xFF;
6460
6461            // `write_to` rejects slices that are too small or too large.
6462            assert!(t.write_to(&mut vec![0; N - 1][..]).is_err());
6463            assert!(t.write_to(&mut vec![0; N + 1][..]).is_err());
6464
6465            // `write_to` works as expected.
6466            let mut bytes = [0; N];
6467            assert_eq!(t.write_to(&mut bytes[..]), Ok(()));
6468            assert_eq!(bytes, t.as_bytes());
6469
6470            // `write_to_prefix` rejects slices that are too small.
6471            assert!(t.write_to_prefix(&mut vec![0; N - 1][..]).is_err());
6472
6473            // `write_to_prefix` works with exact-sized slices.
6474            let mut bytes = [0; N];
6475            assert_eq!(t.write_to_prefix(&mut bytes[..]), Ok(()));
6476            assert_eq!(bytes, t.as_bytes());
6477
6478            // `write_to_prefix` works with too-large slices, and any bytes past
6479            // the prefix aren't modified.
6480            let mut too_many_bytes = vec![0; N + 1];
6481            too_many_bytes[N] = 123;
6482            assert_eq!(t.write_to_prefix(&mut too_many_bytes[..]), Ok(()));
6483            assert_eq!(&too_many_bytes[..N], t.as_bytes());
6484            assert_eq!(too_many_bytes[N], 123);
6485
6486            // `write_to_suffix` rejects slices that are too small.
6487            assert!(t.write_to_suffix(&mut vec![0; N - 1][..]).is_err());
6488
6489            // `write_to_suffix` works with exact-sized slices.
6490            let mut bytes = [0; N];
6491            assert_eq!(t.write_to_suffix(&mut bytes[..]), Ok(()));
6492            assert_eq!(bytes, t.as_bytes());
6493
6494            // `write_to_suffix` works with too-large slices, and any bytes
6495            // before the suffix aren't modified.
6496            let mut too_many_bytes = vec![0; N + 1];
6497            too_many_bytes[0] = 123;
6498            assert_eq!(t.write_to_suffix(&mut too_many_bytes[..]), Ok(()));
6499            assert_eq!(&too_many_bytes[1..], t.as_bytes());
6500            assert_eq!(too_many_bytes[0], 123);
6501        }
6502
6503        #[derive(Debug, Eq, PartialEq, FromBytes, IntoBytes, Immutable)]
6504        #[repr(C)]
6505        struct Foo {
6506            a: u32,
6507            b: Wrapping<u32>,
6508            c: Option<NonZeroU32>,
6509        }
6510
6511        let expected_bytes: Vec<u8> = if cfg!(target_endian = "little") {
6512            vec![1, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0]
6513        } else {
6514            vec![0, 0, 0, 1, 0, 0, 0, 2, 0, 0, 0, 0]
6515        };
6516        let post_mutation_expected_a =
6517            if cfg!(target_endian = "little") { 0x00_00_00_FE } else { 0xFF_00_00_01 };
6518        test::<_, 12>(
6519            &mut Foo { a: 1, b: Wrapping(2), c: None },
6520            expected_bytes.as_bytes(),
6521            &Foo { a: post_mutation_expected_a, b: Wrapping(2), c: None },
6522        );
6523        test::<_, 3>(
6524            Unsized::from_mut_slice(&mut [1, 2, 3]),
6525            &[1, 2, 3],
6526            Unsized::from_mut_slice(&mut [0xFE, 2, 3]),
6527        );
6528    }
6529
6530    #[test]
6531    fn test_array() {
6532        #[derive(FromBytes, IntoBytes, Immutable)]
6533        #[repr(C)]
6534        struct Foo {
6535            a: [u16; 33],
6536        }
6537
6538        let foo = Foo { a: [0xFFFF; 33] };
6539        let expected = [0xFFu8; 66];
6540        assert_eq!(foo.as_bytes(), &expected[..]);
6541    }
6542
6543    #[test]
6544    fn test_new_zeroed() {
6545        assert!(!bool::new_zeroed());
6546        assert_eq!(u64::new_zeroed(), 0);
6547        // This test exists in order to exercise unsafe code, especially when
6548        // running under Miri.
6549        #[allow(clippy::unit_cmp)]
6550        {
6551            assert_eq!(<()>::new_zeroed(), ());
6552        }
6553    }
6554
6555    #[test]
6556    fn test_transparent_packed_generic_struct() {
6557        #[derive(IntoBytes, FromBytes, Unaligned)]
6558        #[repr(transparent)]
6559        #[allow(dead_code)] // We never construct this type
6560        struct Foo<T> {
6561            _t: T,
6562            _phantom: PhantomData<()>,
6563        }
6564
6565        assert_impl_all!(Foo<u32>: FromZeros, FromBytes, IntoBytes);
6566        assert_impl_all!(Foo<u8>: Unaligned);
6567
6568        #[derive(IntoBytes, FromBytes, Unaligned)]
6569        #[repr(C, packed)]
6570        #[allow(dead_code)] // We never construct this type
6571        struct Bar<T, U> {
6572            _t: T,
6573            _u: U,
6574        }
6575
6576        assert_impl_all!(Bar<u8, AU64>: FromZeros, FromBytes, IntoBytes, Unaligned);
6577    }
6578
6579    #[cfg(feature = "alloc")]
6580    mod alloc {
6581        use super::*;
6582
6583        #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
6584        #[test]
6585        fn test_extend_vec_zeroed() {
6586            // Test extending when there is an existing allocation.
6587            let mut v = vec![100u16, 200, 300];
6588            FromZeros::extend_vec_zeroed(&mut v, 3).unwrap();
6589            assert_eq!(v.len(), 6);
6590            assert_eq!(&*v, &[100, 200, 300, 0, 0, 0]);
6591            drop(v);
6592
6593            // Test extending when there is no existing allocation.
6594            let mut v: Vec<u64> = Vec::new();
6595            FromZeros::extend_vec_zeroed(&mut v, 3).unwrap();
6596            assert_eq!(v.len(), 3);
6597            assert_eq!(&*v, &[0, 0, 0]);
6598            drop(v);
6599        }
6600
6601        #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
6602        #[test]
6603        fn test_extend_vec_zeroed_zst() {
6604            // Test extending when there is an existing (fake) allocation.
6605            let mut v = vec![(), (), ()];
6606            <()>::extend_vec_zeroed(&mut v, 3).unwrap();
6607            assert_eq!(v.len(), 6);
6608            assert_eq!(&*v, &[(), (), (), (), (), ()]);
6609            drop(v);
6610
6611            // Test extending when there is no existing (fake) allocation.
6612            let mut v: Vec<()> = Vec::new();
6613            <()>::extend_vec_zeroed(&mut v, 3).unwrap();
6614            assert_eq!(&*v, &[(), (), ()]);
6615            drop(v);
6616        }
6617
6618        #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
6619        #[test]
6620        fn test_insert_vec_zeroed() {
6621            // Insert at start (no existing allocation).
6622            let mut v: Vec<u64> = Vec::new();
6623            u64::insert_vec_zeroed(&mut v, 0, 2).unwrap();
6624            assert_eq!(v.len(), 2);
6625            assert_eq!(&*v, &[0, 0]);
6626            drop(v);
6627
6628            // Insert at start.
6629            let mut v = vec![100u64, 200, 300];
6630            u64::insert_vec_zeroed(&mut v, 0, 2).unwrap();
6631            assert_eq!(v.len(), 5);
6632            assert_eq!(&*v, &[0, 0, 100, 200, 300]);
6633            drop(v);
6634
6635            // Insert at middle.
6636            let mut v = vec![100u64, 200, 300];
6637            u64::insert_vec_zeroed(&mut v, 1, 1).unwrap();
6638            assert_eq!(v.len(), 4);
6639            assert_eq!(&*v, &[100, 0, 200, 300]);
6640            drop(v);
6641
6642            // Insert at end.
6643            let mut v = vec![100u64, 200, 300];
6644            u64::insert_vec_zeroed(&mut v, 3, 1).unwrap();
6645            assert_eq!(v.len(), 4);
6646            assert_eq!(&*v, &[100, 200, 300, 0]);
6647            drop(v);
6648        }
6649
6650        #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
6651        #[test]
6652        fn test_insert_vec_zeroed_zst() {
6653            // Insert at start (no existing fake allocation).
6654            let mut v: Vec<()> = Vec::new();
6655            <()>::insert_vec_zeroed(&mut v, 0, 2).unwrap();
6656            assert_eq!(v.len(), 2);
6657            assert_eq!(&*v, &[(), ()]);
6658            drop(v);
6659
6660            // Insert at start.
6661            let mut v = vec![(), (), ()];
6662            <()>::insert_vec_zeroed(&mut v, 0, 2).unwrap();
6663            assert_eq!(v.len(), 5);
6664            assert_eq!(&*v, &[(), (), (), (), ()]);
6665            drop(v);
6666
6667            // Insert at middle.
6668            let mut v = vec![(), (), ()];
6669            <()>::insert_vec_zeroed(&mut v, 1, 1).unwrap();
6670            assert_eq!(v.len(), 4);
6671            assert_eq!(&*v, &[(), (), (), ()]);
6672            drop(v);
6673
6674            // Insert at end.
6675            let mut v = vec![(), (), ()];
6676            <()>::insert_vec_zeroed(&mut v, 3, 1).unwrap();
6677            assert_eq!(v.len(), 4);
6678            assert_eq!(&*v, &[(), (), (), ()]);
6679            drop(v);
6680        }
6681
6682        #[test]
6683        fn test_new_box_zeroed() {
6684            assert_eq!(u64::new_box_zeroed(), Ok(Box::new(0)));
6685        }
6686
6687        #[test]
6688        fn test_new_box_zeroed_array() {
6689            drop(<[u32; 0x1000]>::new_box_zeroed());
6690        }
6691
6692        #[test]
6693        fn test_new_box_zeroed_zst() {
6694            // This test exists in order to exercise unsafe code, especially
6695            // when running under Miri.
6696            #[allow(clippy::unit_cmp)]
6697            {
6698                assert_eq!(<()>::new_box_zeroed(), Ok(Box::new(())));
6699            }
6700        }
6701
6702        #[test]
6703        fn test_new_box_zeroed_with_elems() {
6704            let mut s: Box<[u64]> = <[u64]>::new_box_zeroed_with_elems(3).unwrap();
6705            assert_eq!(s.len(), 3);
6706            assert_eq!(&*s, &[0, 0, 0]);
6707            s[1] = 3;
6708            assert_eq!(&*s, &[0, 3, 0]);
6709        }
6710
6711        #[test]
6712        fn test_new_box_zeroed_with_elems_empty() {
6713            let s: Box<[u64]> = <[u64]>::new_box_zeroed_with_elems(0).unwrap();
6714            assert_eq!(s.len(), 0);
6715        }
6716
6717        #[test]
6718        fn test_new_box_zeroed_with_elems_zst() {
6719            let mut s: Box<[()]> = <[()]>::new_box_zeroed_with_elems(3).unwrap();
6720            assert_eq!(s.len(), 3);
6721            assert!(s.get(10).is_none());
6722            // This test exists in order to exercise unsafe code, especially
6723            // when running under Miri.
6724            #[allow(clippy::unit_cmp)]
6725            {
6726                assert_eq!(s[1], ());
6727            }
6728            s[2] = ();
6729        }
6730
6731        #[test]
6732        fn test_new_box_zeroed_with_elems_zst_empty() {
6733            let s: Box<[()]> = <[()]>::new_box_zeroed_with_elems(0).unwrap();
6734            assert_eq!(s.len(), 0);
6735        }
6736
6737        #[test]
6738        fn new_box_zeroed_with_elems_errors() {
6739            assert_eq!(<[u16]>::new_box_zeroed_with_elems(usize::MAX), Err(AllocError));
6740
6741            let max = <usize as core::convert::TryFrom<_>>::try_from(isize::MAX).unwrap();
6742            assert_eq!(
6743                <[u16]>::new_box_zeroed_with_elems((max / mem::size_of::<u16>()) + 1),
6744                Err(AllocError)
6745            );
6746        }
6747    }
6748}