spellcheck intro + 3 first chapters

This commit is contained in:
Carl Fredrik Samson
2020-02-02 18:54:28 +01:00
parent 2d8465e9d1
commit 49fe0ad893
17 changed files with 106 additions and 83 deletions

View File

@@ -3,9 +3,8 @@
> **Relevant for:**
>
> - High level introduction to concurrency in Rust
> - Knowing what Rust provides and what it doesn't when working with async code
> - Understanding why we need runtimes
> - Knowing that Rust has `Futures 1.0` and `Futures 3.0`, and how to deal with them
> - Getting pointers to further reading on concurrency in general
Before we start implementing our `Futures`, we'll go through some background
@@ -21,15 +20,17 @@ Let's get some of the common roadblocks out of the way first.
Async in Rust is different from most other languages in the sense that Rust
has a very lightweight runtime.
Languages like C#, JavaScript, Java and Go already include a runtime
for handling concurrency. So if you come from one of those languages, this will
seem a bit strange to you.
In Rust you will have to make an active choice about which runtime to use.
### What Rust's standard library takes care of
1. The definition of an interruptible task
2. An efficient technique to start, suspend, resume and store tasks which are
executed concurrently.
3. A defined way to wake up a suspended task
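If it helps to see these three pieces in code, here's a minimal sketch using
only the standard library (the `Ready` type is just made up for illustration):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// 1. The "interruptible task": anything implementing the `Future` trait.
struct Ready(i32);

impl Future for Ready {
    type Output = i32;

    // 2. Starting, suspending and resuming happens through repeated calls to
    //    `poll`: returning `Poll::Pending` suspends the task, `Poll::Ready`
    //    finishes it.
    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<i32> {
        // 3. A task that returns `Pending` is expected to store the `Waker`
        //    from `_cx.waker()` and call `wake()` when it can make progress.
        Poll::Ready(self.0)
    }
}

fn main() {
    // Actually running a future requires an executor, which std does not provide.
    let _fut = Ready(42);
}
```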
That's really what Rust's standard library does. As you can see, there is no definition
@@ -48,18 +49,19 @@ an event queue and so on.
Executors accept one or more asynchronous tasks called `Futures` and take
care of actually running the code we write, suspending the tasks when they're
waiting for I/O and resuming them.
In theory, we could choose one `Reactor` and one `Executor` that have nothing
to do with each other besides that one creates leaf `Futures` and the other one
runs them, but in reality today you'll most often get both in a `Runtime`.
There are mainly two such runtimes today: [async_std][async_std] and [tokio][tokio].
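Just to give a taste of what using such a runtime looks like, here's a minimal
sketch assuming a `tokio` dependency (the exact version and feature flags below
are assumptions, so check tokio's docs for your setup):

```rust
// Assumed dependency: tokio = { version = "1", features = ["full"] }
use std::time::Duration;

#[tokio::main] // sets up the runtime: an executor plus a reactor
async fn main() {
    // `sleep` returns a leaf future driven by the runtime's timer/reactor;
    // the executor suspends our task here and resumes it when the timer fires.
    tokio::time::sleep(Duration::from_millis(100)).await;
    println!("done");
}
```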
Quite a bit of the complexity attributed to `Futures` is actually complexity
rooted in runtimes. Creating an efficient runtime is hard.
Learning how to use one correctly can require quite a bit of effort as well,
but you'll see that there are several similarities between these kinds of
runtimes, so learning one makes learning the next much easier.
The difference between Rust and other languages is that you have to make an
active choice when it comes to picking a runtime. Most often you'll just use
@@ -80,9 +82,10 @@ to know in advance.
A good sign is that if you're required to use combinators like `and_then`,
then you're using `Futures 1.0`.
While they're not directly compatible, there is a tool that lets you relatively
easily convert a `Future 1.0` to a `Future 3.0` and vice versa. You can find
all you need in the [`futures-rs`][futures_rs] crate and all the
[information you need here][compat_info].
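As a rough sketch of what that bridging can look like (the crate versions and
feature flags below are assumptions, so check the compatibility docs linked
above for the details):

```rust
// Assumed dependencies:
// futures   = { version = "0.3", features = ["compat"] }
// futures01 = { package = "futures", version = "0.1" }
use futures::compat::Future01CompatExt; // turns a `Futures 1.0` future into a `Futures 3.0` one
use futures01::Future as _; // brings the old `and_then` combinator into scope

async fn bridged() -> Result<i32, ()> {
    // Old combinator-style future ("Futures 1.0").
    let old_style = futures01::future::ok::<i32, ()>(41)
        .and_then(|n| futures01::future::ok(n + 1));

    // `.compat()` lets us `await` it like any other "Futures 3.0" future.
    old_style.compat().await
}

fn main() {
    println!("{:?}", futures::executor::block_on(bridged())); // Ok(42)
}
```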
## First things first
@@ -96,13 +99,12 @@ try to give a high level overview that will make it easier to learn Rusts
* [Async Basics - Strategies for handling I/O](https://cfsamson.github.io/book-exploring-async-basics/5_strategies_for_handling_io.html)
* [Async Basics - Epoll, Kqueue and IOCP](https://cfsamson.github.io/book-exploring-async-basics/6_epoll_kqueue_iocp.html)
Learning these concepts by studying futures makes it much harder than it
needs to be, so go on and read these chapters if you feel a bit unsure.
I'll be right here when you're back.
However, if you feel that you have the basics covered, then let's get moving!
[async_std]: https://github.com/async-rs/async-std
[tokio]: https://github.com/tokio-rs/tokio

View File

@@ -8,7 +8,7 @@
## Trait objects and dynamic dispatch
One of the most confusing things we encounter when implementing our own `Futures`
is how we implement a `Waker`. Creating a `Waker` involves creating a `vtable`
which allows us to use dynamic dispatch to call methods on a _type erased_ trait
object we construct ourselves.
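To give a sense of what that involves, here's a minimal no-op sketch using the
standard library's raw waker types (this is not the `Waker` we'll build later,
it just shows the moving parts):

```rust
use std::task::{RawWaker, RawWakerVTable, Waker};

// A hand-rolled vtable for a waker that does nothing when woken.
fn noop_raw_waker() -> RawWaker {
    unsafe fn no_op(_: *const ()) {}
    unsafe fn clone(_: *const ()) -> RawWaker {
        noop_raw_waker()
    }

    // The vtable holds function pointers for clone, wake, wake_by_ref and drop.
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
    RawWaker::new(std::ptr::null(), &VTABLE)
}

fn main() {
    // `Waker::from_raw` is unsafe because we promise that our vtable functions
    // uphold the contract documented on `RawWakerVTable`.
    let waker = unsafe { Waker::from_raw(noop_raw_waker()) };
    waker.wake_by_ref(); // dispatches through our vtable and does nothing
}
```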
@@ -44,7 +44,7 @@ As you see from the output after running this, the sizes of the references varie
Many are 8 bytes (which is the pointer size on 64-bit systems), but some are 16
bytes.
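If you want to check this yourself, a quick sketch along these lines (assuming
a 64-bit target) shows the difference:

```rust
use std::mem::size_of;

trait SomeTrait {}

fn main() {
    println!("&i32:           {}", size_of::<&i32>());           // 8: a plain pointer
    println!("&[i32]:         {}", size_of::<&[i32]>());         // 16: pointer + length
    println!("&dyn SomeTrait: {}", size_of::<&dyn SomeTrait>()); // 16: pointer + vtable pointer
}
```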
The 16-byte pointers are called "fat pointers" since they carry extra
information.
**Example `&[i32]`:**
@@ -54,16 +54,16 @@ information.
**Example `&dyn SomeTrait`:**
This is the type of fat pointer we'll concern ourselves with going forward.
`&dyn SomeTrait` is a reference to a trait, or what Rust calls a _trait object_.
The layout for a pointer to a _trait object_ looks like this:
- The first 8 bytes point to the `data` for the trait object
- The second 8 bytes point to the `vtable` for the trait object
The reason for this is to allow us to refer to an object we know nothing about
except that it implements the methods defined by our trait. To accomplish this
we use _dynamic dispatch_.
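As a rough illustration of those two parts (the exact layout of a trait object
reference is not guaranteed by the language, so treat this as a demonstration
only):

```rust
trait Animal {
    fn speak(&self) -> &'static str;
}

struct Dog;
impl Animal for Dog {
    fn speak(&self) -> &'static str {
        "woof"
    }
}

fn main() {
    let dog = Dog;
    let obj: &dyn Animal = &dog;

    // Reinterpret the fat pointer as its two components: (data, vtable).
    // This relies on an unspecified layout and is for illustration only.
    let (data, vtable): (*const (), *const ()) = unsafe { std::mem::transmute(obj) };
    println!("data ptr:   {:p}", data);
    println!("vtable ptr: {:p}", vtable);
    println!("{}", obj.speak()); // dynamic dispatch through the vtable
}
```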
Let's explain this in code instead of words by implementing our own trait
object from these parts:

View File

@@ -3,21 +3,20 @@
>**Relevant for:**
>
>- Understanding how the async/await syntax works, since generators are how `await` is implemented
>- Knowing why we need `Pin`
>- Understanding why Rust's async model is very efficient
>
>The motivation for `Generators` can be found in [RFC#2033][rfc2033]. It's very
>well written and I can recommend reading through it (it talks as much about
>async/await as it does about generators).
The second difficult part is understanding Generators and the `Pin` type. Since
they're related, we'll start off by exploring generators first. By doing that
we'll soon get to see why we need to be able to "pin" some data to a fixed
location in memory and get an introduction to `Pin` as well.
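Just to give a taste up front, here's a tiny generator sketch. It needs a
nightly compiler, and the exact feature names and `resume` signature have
changed over time (the feature has since been renamed to "coroutines"), so
treat this as illustrative:

```rust
#![feature(generators, generator_trait)]
use std::ops::{Generator, GeneratorState};
use std::pin::Pin;

fn main() {
    let mut generator = || {
        yield 1;
        yield 2;
        "done"
    };

    // A generator must be pinned before it can be resumed. This simple one
    // holds no references across yield points, so `Pin::new` is enough.
    let mut pinned = Pin::new(&mut generator);

    loop {
        match pinned.as_mut().resume(()) {
            GeneratorState::Yielded(n) => println!("yielded {}", n),
            GeneratorState::Complete(msg) => {
                println!("completed with {:?}", msg);
                break;
            }
        }
    }
}
```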
Basically, there were three main options discussed when designing how Rust would
handle concurrency:
1. Stackful coroutines, better known as green threads.
2. Using combinators.
@@ -29,9 +28,11 @@ I've written about green threads before. Go check out
[Green Threads Explained in 200 lines of Rust][greenthreads] if you're interested.
Green threads use the same mechanisms as an OS does by creating a thread for
each task, setting up a stack, saving the CPU's state and jumping from one
task (thread) to another by doing a "context switch".
We yield control to the scheduler (which is a central part of the runtime in
such a system) which then continues running a different task.
Rust had green threads once, but they were removed before it hit 1.0. The state
of execution is stored in each stack so in such a solution there would be no need

View File

@@ -72,8 +72,15 @@ There are many great resources for further study. In addition to the RFCs and
articles I've already linked to in the book, here are some of my suggestions:
* [The official Async book](https://rust-lang.github.io/async-book/01_getting_started/01_chapter.html)
* [The async_std book](https://book.async.rs/)
* [Aaron Turon: Designing futures for Rust](https://aturon.github.io/blog/2016/09/07/futures-design/)
* [Steve Klabnik's presentation: Rust's journey to Async/Await](https://www.infoq.com/presentations/rust-2019/)
* [The Tokio Blog](https://tokio.rs/blog/2019-10-scheduler/)
* [Stjepan's blog with a series where he implements an Executor](https://stjepang.github.io/)
* [Jon Gjengset's video on The Why, What and How of Pinning in Rust](https://youtu.be/DkMwYxfSYNQ)

View File

@@ -8,7 +8,7 @@ The goal is to get a better understanding of `Futures` by implementing a toy
We'll start off a bit differently than most other explanations. Instead of
deferring some of the details about what's special about futures in Rust we
try to tackle that head on first. We'll be as brief as possible, but as thorough
as needed. This way, most questions will be answered and explored up front.
We'll end up with futures that can run on any executor like `tokio` and `async_std`.
@@ -27,8 +27,11 @@ of all, this book will focus on `Futures` and `async/await` specifically and
not in the context of any specific runtime.
Secondly, I've always found small runnable examples very exciting to learn from.
Thanks to [Mdbook][mdbook] the examples can even be edited and explored further
by uncommenting certain lines or adding new ones yourself. I use that quite a
bit throughout, so keep an eye out when reading through editable code segments.
It's all code that you can download, play with and learn from.
We'll end up with an understandable example including a `Future`
implementation, an `Executor` and a `Reactor` in less than 200 lines of code.