merged with latest changes and made some additional corrections

Carl Fredrik Samson
2020-04-10 20:39:35 +02:00
19 changed files with 488 additions and 487 deletions


@@ -150,7 +150,7 @@
<div id="content" class="content">
<main>
<h1><a class="header" href="#implementing-futures---main-example" id="implementing-futures---main-example">Implementing Futures - main example</a></h1>
<p>We'll create our own <code>Futures</code> together with a fake reactor and a simple
<p>We'll create our own Futures together with a fake reactor and a simple
executor which allows you to edit, run and play around with the code right here
in your browser.</p>
<p>I'll walk you through the example, but if you want to check it out closer, you
@@ -189,8 +189,8 @@ a <code>Future</code> has resolved and should be polled again.</p>
fn block_on&lt;F: Future&gt;(mut future: F) -&gt; F::Output {
// the first thing we do is to construct a `Waker` which we'll pass on to
// the `reactor` so it can wake us up when an event is ready.
let mywaker = Arc::new(MyWaker{ thread: thread::current() });
// the `reactor` so it can wake us up when an event is ready.
let mywaker = Arc::new(MyWaker{ thread: thread::current() });
let waker = waker_into_waker(Arc::into_raw(mywaker));
// The context struct is just a wrapper for a `Waker` object. Maybe in the
@@ -208,7 +208,7 @@ fn block_on&lt;F: Future&gt;(mut future: F) -&gt; F::Output {
// an event occurs, or a thread has a &quot;spurious wakeup&quot; (an unexpected wakeup
// that can happen for no good reason).
let val = loop {
match Future::poll(pinned, &amp;mut cx) {
// when the Future is ready we're finished
@@ -224,23 +224,23 @@ fn block_on&lt;F: Future&gt;(mut future: F) -&gt; F::Output {
<p>In all the examples you'll see in this chapter I've chosen to comment the code
extensively. I find it easier to follow along that way so I'll not repeat myself
here and focus only on some important aspects that might need further explanation.</p>
<p>Now that you've read so much about <code>Generators</code> and <code>Pin</code> already this should
<p>Now that you've read so much about <code>Generator</code>s and <code>Pin</code> already this should
be rather easy to understand. A <code>Future</code> is a state machine, and every <code>await</code>
point is a <code>yield</code> point. We can borrow data across <code>await</code> points, and we meet the
exact same challenges as we do when borrowing across <code>yield</code> points.</p>
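<p>As a small illustration (my own sketch, not part of the example code), the
<code>async fn</code> below holds a reference across an <code>await</code> point, which is exactly
the situation that forces the generated state machine to be self-referential:</p>
<pre><code class="language-rust">async fn wait_for_something() {} // stand-in for some leaf future

async fn example() -&gt; i32 {
    let data = vec![1, 2, 3];
    let first = &amp;data[0];       // a borrow...
    wait_for_something().await; // ...held across an `await` point
    // The state machine must keep both `data` and the reference into it
    // alive while we're suspended here, just like with `yield`.
    *first
}
</code></pre>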
<blockquote>
<p><code>Context</code> is just a wrapper around the <code>Waker</code>. At the time of writing this
book it's nothing more. In the future it might be possible that the <code>Context</code>
object will do more than just wrapping a <code>Future</code> so having this extra
object will do more than just wrap a <code>Waker</code>, so having this extra
abstraction gives some flexibility.</p>
</blockquote>
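<p>To make this concrete, here is a minimal sketch using only the standard library:
a <code>Context</code> is created from a <code>Waker</code>, and the only thing it hands back out is
that same <code>Waker</code>:</p>
<pre><code class="language-rust">use std::task::{Context, Waker};

fn demo(waker: &amp;Waker) {
    // A `Context` is constructed from a reference to a `Waker`...
    let cx = Context::from_waker(waker);
    // ...and all we can get back out of it is that `Waker`.
    cx.waker().wake_by_ref();
}
</code></pre>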
<p>As explained in the <a href="./3_generators_pin.html">chapter about generators</a>, we use
<code>Pin</code> and the guarantees that give us to allow <code>Futures</code> to have self
<code>Pin</code> and the guarantees it gives us to allow <code>Future</code>s to have self
references.</p>
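<p>In practice that means a future has to be pinned before it can be polled. A
minimal sketch of the safe way to do that is to pin it to the heap:</p>
<pre><code class="language-rust">use std::future::Future;
use std::pin::Pin;

// `poll` takes `Pin&lt;&amp;mut Self&gt;`, so we need a pinned pointer to the future.
// `Box::pin` gives us one safely; pinning to the stack requires `unsafe`.
fn pin_it&lt;F: Future&gt;(fut: F) -&gt; Pin&lt;Box&lt;F&gt;&gt; {
    Box::pin(fut)
}
</code></pre>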
<h2><a class="header" href="#the-future-implementation" id="the-future-implementation">The <code>Future</code> implementation</a></h2>
<p>Futures have a well-defined interface, which means they can be used across the
entire ecosystem. </p>
<p>We can chain these <code>Futures</code> so that once a <strong>leaf-future</strong> is
entire ecosystem.</p>
<p>We can chain these <code>Future</code>s so that once a <strong>leaf-future</strong> is
ready we'll perform a set of operations until either the task is finished or we
reach yet another <strong>leaf-future</strong> which we'll wait for and yield control to the
scheduler.</p>
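<p>For reference, the interface itself is tiny. The <code>Future</code> trait in <code>std::future</code>
is defined like this:</p>
<pre><code class="language-rust">pub trait Future {
    type Output;
    fn poll(self: Pin&lt;&amp;mut Self&gt;, cx: &amp;mut Context&lt;'_&gt;) -&gt; Poll&lt;Self::Output&gt;;
}
</code></pre>
<p>Everything an executor needs in order to drive any future to completion goes
through that one <code>poll</code> method.</p>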
@@ -365,8 +365,8 @@ is pretty normal, and makes this easy and safe to work with. Cloning a <code>Wak
is just increasing the refcount in this case.</p>
<p>Dropping a <code>Waker</code> is as easy as decreasing the refcount. Now, in special
cases we could choose to not use an <code>Arc</code>. So this low-level method is there
to allow such cases. </p>
<p>Indeed, if we only used <code>Arc</code> there is no reason for us to go through all the
to allow such cases.</p>
<p>Indeed, if we only used <code>Arc</code> there would be no reason for us to go through all the
trouble of creating our own <code>vtable</code> and a <code>RawWaker</code>. We could just implement
a normal trait.</p>
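<p>Just to show the idea (this is only a sketch, not something we use in the
example), a &quot;normal trait&quot; version of our waker could look like this:</p>
<pre><code class="language-rust">use std::sync::Arc;
use std::thread;

// A hypothetical trait-based alternative to the RawWaker vtable.
trait Wake {
    fn wake(self: Arc&lt;Self&gt;);
}

struct MyWaker {
    thread: thread::Thread,
}

impl Wake for MyWaker {
    fn wake(self: Arc&lt;Self&gt;) {
        // No manual refcounting and no vtable: just unpark the executor thread.
        self.thread.unpark();
    }
}
</code></pre>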
<p>Fortunately, in the future this will probably be possible in the standard
@@ -382,7 +382,7 @@ and call park/unpark on it.</p>
<ol>
<li>A future could call <code>unpark</code> on the executor thread from a different thread</li>
<li>Our <code>executor</code> thinks that data is ready and wakes up and polls the future</li>
<li>The future is not ready yet when polled, but at that exact same time the
<li>The future is not ready yet when polled, but at that exact same time the
<code>Reactor</code> gets an event and calls <code>wake()</code> which also unparks our thread.</li>
<li>This could happen before we go to sleep again since these processes
run in parallel.</li>
@@ -407,12 +407,12 @@ to have an example to run.</p>
<p>Since concurrency mostly makes sense when interacting with the outside world (or
at least some peripheral), we need something to actually abstract over this
interaction in an asynchronous way.</p>
<p>This is the <code>Reactors</code> job. Most often you'll see reactors in Rust use a library
<p>This is the reactor's job. Most often you'll see reactors in Rust use a library
called <a href="https://github.com/tokio-rs/mio">Mio</a>, which provides non blocking APIs and event notification for
several platforms.</p>
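<p>Just to give a feel for what that looks like, here is a rough sketch of
registering interest in a socket with a recent version of Mio (the exact API
differs between versions, so treat this as an outline rather than exact code):</p>
<pre><code class="language-rust">use mio::net::TcpStream;
use mio::{Events, Interest, Poll, Token};

fn main() -&gt; std::io::Result&lt;()&gt; {
    let mut poll = Poll::new()?; // the OS-backed event queue
    let mut events = Events::with_capacity(128);

    let addr = &quot;127.0.0.1:8080&quot;.parse().unwrap();
    let mut stream = TcpStream::connect(addr)?; // non-blocking connect

    // Tell the OS we want to be notified when the socket becomes writable.
    poll.registry()
        .register(&amp;mut stream, Token(0), Interest::WRITABLE)?;

    // Block until at least one event is ready, then wake the right task.
    poll.poll(&amp;mut events, None)?;
    for event in events.iter() {
        println!(&quot;got an event for token {:?}&quot;, event.token());
    }
    Ok(())
}
</code></pre>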
<p>The reactor will typically give you something like a <code>TcpStream</code> (or any other
resource) which you'll use to create an I/O request. What you get in return is a
<code>Future</code>. </p>
<code>Future</code>.</p>
<blockquote>
<p>If our reactor did some real I/O work, our <code>Task</code> would instead represent
a non-blocking <code>TcpStream</code> which registers interest with the global <code>Reactor</code>.
@@ -574,7 +574,7 @@ which you can edit and change the way you like.</p>
# task::{Context, Poll, RawWaker, RawWakerVTable, Waker}, mem,
# thread::{self, JoinHandle}, time::{Duration, Instant}, collections::HashMap
# };
#
#
fn main() {
// This is just to make it easier for us to see when our Future was resolved
let start = Instant::now();
@@ -636,32 +636,32 @@ fn main() {
# };
# val
# }
#
#
# // ====================== FUTURE IMPLEMENTATION ==============================
# #[derive(Clone)]
# struct MyWaker {
# thread: thread::Thread,
# }
#
#
# #[derive(Clone)]
# pub struct Task {
# id: usize,
# reactor: Arc&lt;Mutex&lt;Box&lt;Reactor&gt;&gt;&gt;,
# data: u64,
# }
#
#
# fn mywaker_wake(s: &amp;MyWaker) {
# let waker_ptr: *const MyWaker = s;
# let waker_arc = unsafe { Arc::from_raw(waker_ptr) };
# waker_arc.thread.unpark();
# }
#
#
# fn mywaker_clone(s: &amp;MyWaker) -&gt; RawWaker {
# let arc = unsafe { Arc::from_raw(s) };
# std::mem::forget(arc.clone()); // increase ref count
# RawWaker::new(Arc::into_raw(arc) as *const (), &amp;VTABLE)
# }
#
#
# const VTABLE: RawWakerVTable = unsafe {
# RawWakerVTable::new(
# |s| mywaker_clone(&amp;*(s as *const MyWaker)), // clone
@@ -670,18 +670,18 @@ fn main() {
# |s| drop(Arc::from_raw(s as *const MyWaker)), // decrease refcount
# )
# };
#
#
# fn waker_into_waker(s: *const MyWaker) -&gt; Waker {
# let raw_waker = RawWaker::new(s as *const (), &amp;VTABLE);
# unsafe { Waker::from_raw(raw_waker) }
# }
#
#
# impl Task {
# fn new(reactor: Arc&lt;Mutex&lt;Box&lt;Reactor&gt;&gt;&gt;, data: u64, id: usize) -&gt; Self {
# Task { id, reactor, data }
# }
# }
#
#
# impl Future for Task {
# type Output = usize;
# fn poll(self: Pin&lt;&amp;mut Self&gt;, cx: &amp;mut Context&lt;'_&gt;) -&gt; Poll&lt;Self::Output&gt; {
@@ -698,7 +698,7 @@ fn main() {
# }
# }
# }
#
#
# // =============================== REACTOR ===================================
# enum TaskState {
# Ready,
@@ -716,7 +716,7 @@ fn main() {
# Close,
# Timeout(u64, usize),
# }
#
#
# impl Reactor {
# fn new() -&gt; Arc&lt;Mutex&lt;Box&lt;Self&gt;&gt;&gt; {
# let (tx, rx) = channel::&lt;Event&gt;();
@@ -767,7 +767,7 @@ fn main() {
# }
# self.dispatcher.send(Event::Timeout(duration, id)).unwrap();
# }
#
#
# fn close(&amp;mut self) {
# self.dispatcher.send(Event::Close).unwrap();
# }
@@ -779,7 +779,7 @@ fn main() {
# }).unwrap_or(false)
# }
# }
#
#
# impl Drop for Reactor {
# fn drop(&amp;mut self) {
# self.handle.take().map(|h| h.join().unwrap()).unwrap();
@@ -797,7 +797,7 @@ two things:</p>
<p>The <code>async</code> keyword can be used on functions as in <code>async fn(...)</code> or on a
block as in <code>async { ... }</code>. Both will turn your function, or block, into a
<code>Future</code>.</p>
<p>These <code>Futures</code> are rather simple. Imagine our generator from a few chapters
<p>These Futures are rather simple. Imagine our generator from a few chapters
back. Every <code>await</code> point is like a <code>yield</code> point.</p>
<p>Instead of <code>yielding</code> a value we pass in, we yield the result of calling <code>poll</code> on
the next <code>Future</code> we're awaiting.</p>
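<p>As a tiny illustration (my own, not part of the example), both forms below
produce a value that implements <code>Future</code> and does nothing until it is polled:</p>
<pre><code class="language-rust">// Calling an `async fn` does NOT run the body; it just returns a `Future`.
async fn add_one(x: u32) -&gt; u32 {
    x + 1
}

fn main() {
    // An `async` block is the same idea: an expression that evaluates to a `Future`.
    let fut = async {
        let a = add_one(1).await; // an await point, just like a yield point
        a + 1
    };
    // Nothing has run yet. `fut` has to be polled to completion by an
    // executor, for example a `block_on` like the one we wrote earlier.
    let _ = fut;
}
</code></pre>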
@@ -811,7 +811,7 @@ to <code>spawn</code> them so the executor starts running them concurrently.</p>
<pre><code class="language-ignore">Future got 1 at time: 1.00.
Future got 2 at time: 3.00.
</code></pre>
<p>If these <code>Futures</code> were executed asynchronously we would expect to see:</p>
<p>If these Futures were executed asynchronously we would expect to see:</p>
<pre><code class="language-ignore">Future got 1 at time: 1.00.
Future got 2 at time: 2.00.
</code></pre>
@@ -828,7 +828,7 @@ the concept of Futures by now helping you along the way.</p>
how they implement different ways of running Futures to completion.</p>
<p><a href="./conclusion.html#building-a-better-exectuor">If I were you I would read this next, and try to implement it for our example.</a>.</p>
<p>That's actually it for now. There is probably much more to learn, but this is enough
for today. </p>
for today.</p>
<p>I hope exploring Futures and async in general gets easier after this read, and I
really do hope that you continue to explore further.</p>
<p>Don't forget the exercises in the last chapter 😊.</p>