Struct Mpsc
pub struct Mpsc { /* private fields */ }
A paired mpsc transport which runs on fuchsia-async by default.
Implementations
Trait Implementations
impl HasExecutor for Mpsc
fn executor(&self) -> <Mpsc as HasExecutor>::Executor
Returns a reference to the executor for this transport.
type Executor = FuchsiaAsync
The executor to spawn on. It must be able to run this transport.
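As a brief illustration of this bound, the sketch below is generic over any type implementing `HasExecutor` and relies only on the `executor` method and `Executor` associated type shown above; the helper name is hypothetical, and nothing is assumed about the executor's own API.

```rust
// Minimal sketch (hypothetical helper): obtain the executor associated with a
// transport. For `Mpsc`, the returned value is a `FuchsiaAsync` executor.
fn executor_of<T: HasExecutor>(transport: &T) -> T::Executor {
    transport.executor()
}
```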
impl Transport for Mpsc
type Shared = <Mpsc as Transport>::Shared
The shared part of the transport. It is provided by shared reference while sending and receiving. For an MPSC, this would contain a sender.
type Exclusive = <Mpsc as Transport>::Exclusive
The exclusive part of the transport. It is provided by mutable reference only while receiving. For an MPSC, this would contain a receiver.
type SendBuffer = <Mpsc as Transport>::SendBuffer
The buffer type for sending.
type SendFutureState = <Mpsc as Transport>::SendFutureState
The future state for send operations.
type RecvFutureState = <Mpsc as Transport>::RecvFutureState
The future state for receive operations.
type RecvBuffer = <Mpsc as Transport>::RecvBuffer
The buffer type for receivers.
fn split(self) -> (<Mpsc as Transport>::Shared, <Mpsc as Transport>::Exclusive)
Splits the transport into shared and exclusive pieces.
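For orientation, a minimal sketch of splitting is shown below. It is generic over any `Transport` and uses only the items documented in this impl; the helper name is hypothetical, and how an `Mpsc` value is obtained in the first place is not shown on this page.

```rust
// Minimal sketch (hypothetical helper): consume a transport and keep its two
// halves separately. The shared half is used by shared reference while sending
// and receiving; the exclusive half is needed mutably only while receiving.
fn into_halves<T: Transport>(transport: T) -> (T::Shared, T::Exclusive) {
    T::split(transport)
}
```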
fn acquire(shared: &<Mpsc as Transport>::Shared) -> <Mpsc as Transport>::SendBuffer
Acquires an empty send buffer for the transport.
fn begin_send(
    shared: &<Mpsc as Transport>::Shared,
    buffer: <Mpsc as Transport>::SendBuffer,
) -> <Mpsc as Transport>::SendFutureState
Begins sending a SendBuffer over this transport.
fn poll_send(
    future: Pin<&mut <Mpsc as Transport>::SendFutureState>,
    cx: &mut Context<'_>,
    shared: &<Mpsc as Transport>::Shared,
) -> Poll<Result<(), Option<<Mpsc as Transport>::Error>>>
Polls a SendFutureState for completion with the shared part of the transport.
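Putting acquire, begin_send, and poll_send together, a hedged sketch of driving one send to completion might look like the following. It relies only on the `Transport` items documented on this page plus std's `poll_fn` and `pin!`; the function name is hypothetical, and encoding a message into the buffer is left as an unspecified step.

```rust
use std::future::poll_fn;
use std::pin::pin;

// Minimal sketch (hypothetical helper): acquire a send buffer and drive a
// single send operation to completion using the shared half of a transport.
async fn send_once<T: Transport>(shared: &T::Shared) -> Result<(), Option<T::Error>> {
    // Acquire an empty send buffer from the shared half.
    let buffer = T::acquire(shared);

    // ... fill `buffer` with an encoded message here (not shown) ...

    // Begin the send, then poll its future state until it resolves.
    let mut state = pin!(T::begin_send(shared, buffer));
    poll_fn(|cx| T::poll_send(state.as_mut(), cx, shared)).await
}
```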
fn begin_recv(
    shared: &<Mpsc as Transport>::Shared,
    exclusive: &mut <Mpsc as Transport>::Exclusive,
) -> <Mpsc as Transport>::RecvFutureState
Begins receiving a RecvBuffer over this transport.
fn poll_recv(
    future: Pin<&mut <Mpsc as Transport>::RecvFutureState>,
    cx: &mut Context<'_>,
    shared: &<Mpsc as Transport>::Shared,
    exclusive: &mut <Mpsc as Transport>::Exclusive,
) -> Poll<Result<<Mpsc as Transport>::RecvBuffer, Option<<Mpsc as Transport>::Error>>>
Polls a RecvFutureState for completion with a receiver.
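Similarly, a hedged sketch of one receive operation, built only from begin_recv and poll_recv above (again using std's `poll_fn` and `pin!`; the helper name is hypothetical):

```rust
use std::future::poll_fn;
use std::pin::pin;

// Minimal sketch (hypothetical helper): drive a single receive operation to
// completion and return the received buffer.
async fn recv_once<T: Transport>(
    shared: &T::Shared,
    exclusive: &mut T::Exclusive,
) -> Result<T::RecvBuffer, Option<T::Error>> {
    // Begin the receive, then poll its future state until a buffer arrives
    // or the transport reports an error/closure.
    let mut state = pin!(T::begin_recv(shared, exclusive));
    poll_fn(|cx| T::poll_recv(state.as_mut(), cx, shared, exclusive)).await
}
```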
Auto Trait Implementations
impl Freeze for Mpsc
impl !RefUnwindSafe for Mpsc
impl Send for Mpsc
impl !Sync for Mpsc
impl Unpin for Mpsc
impl !UnwindSafe for Mpsc
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
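As a small illustration of this blanket impl (standard library behaviour, not specific to Mpsc; the helper name is hypothetical):

```rust
use std::borrow::BorrowMut;

// Every `T` implements `BorrowMut<T>`, so a value can always be mutably
// borrowed as itself through the trait.
fn reset_via_borrow<T: BorrowMut<T> + Default>(value: &mut T) {
    *value.borrow_mut() = T::default();
}
```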