Renamed event `Simple` to `Timeout`
@@ -19,15 +19,16 @@ pretty simple. I promise.
Let's get some of the common roadblocks out of the way first.

Async in Rust is different from most other languages in the sense that Rust
-has an extremely lightweight runtime.
+has a very lightweight runtime.

-In languages like C#, JavaScript, Java and GO, the runtime is already there. So
-if you come from one of those languages this will seem a bit strange to you.
+Languages like C#, JavaScript, Java, and Go already include a runtime
+for handling concurrency. So if you come from one of those languages this will
+seem a bit strange to you.

### What Rust's standard library takes care of

1. The definition of an interruptible task
-2. An extremely efficient technique to start, suspend, resume and store tasks
+2. An efficient technique to start, suspend, resume and store tasks
which are executed concurrently.
3. A defined way to wake up a suspended task
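As a rough sketch of how those three items map onto types in `std` (this is just the shape of the machinery, not code from the book; an executor is what actually calls `poll`, and we build one later):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// 1. An interruptible task is anything implementing `std::future::Future`.
// 2. Starting, suspending and resuming happens through `poll`, which either
//    finishes with `Poll::Ready` or suspends with `Poll::Pending`.
// 3. Waking a suspended task is done through the `std::task::Waker` that is
//    handed to `poll` inside the `Context`.
struct AlwaysReady;

impl Future for AlwaysReady {
    type Output = &'static str;

    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Self::Output> {
        Poll::Ready("done")
    }
}
```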
@@ -80,7 +81,7 @@ A good sign is that if you're required to use combinators like `and_then` then
you're using `Futures 1.0`.

While not directly compatible, there is a tool that lets you relatively easily
-convert a `Future 1.0` to a `Future 3.0` and vice a verca. You can find all you
+convert a `Future 1.0` to a `Future 3.0` and vice versa. You can find all you
need in the [`futures-rs`][futures_rs] crate and all [information you need here][compat_info].
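As a rough sketch of what that conversion can look like (assuming `futures = { version = "0.3", features = ["compat"] }` and the old crate renamed to `futures01` in `Cargo.toml`; check the compat documentation linked above for the exact details):

```rust
use futures::compat::Future01CompatExt; // 0.1 -> 0.3
use futures::future::TryFutureExt;      // 0.3 -> 0.1 (via `compat`)

async fn await_old_style_future() -> Result<i32, ()> {
    // A `Futures 1.0` future coming from an older library.
    let old = futures01::future::ok::<i32, ()>(42);
    // `compat()` wraps it so it can be `.await`ed like a `Future 3.0`.
    old.compat().await
}

fn back_to_old_style() -> impl futures01::Future<Item = i32, Error = ()> {
    // Going the other way: a `Future 3.0` returning a `Result` can be wrapped
    // back into a `Futures 1.0` future.
    futures::future::ready(Ok::<i32, ()>(42)).compat()
}

fn main() {
    let val = futures::executor::block_on(await_old_style_future());
    println!("{:?}", val); // Ok(42)
    // `back_to_old_style()` would be handed to a `Futures 1.0` based runtime.
    let _old_style = back_to_old_style();
}
```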

## First things first

@@ -49,8 +49,8 @@ information.

**Example `&[i32]`:**

-* The first 8 bytes is the actual pointer to the first element in the array (or part of an array the slice refers to)
-* The second 8 bytes is the length of the slice.
+- The first 8 bytes is the actual pointer to the first element in the array (or part of an array the slice refers to)
+- The second 8 bytes is the length of the slice.

**Example `&dyn SomeTrait`:**

@@ -59,8 +59,8 @@ This is the type of fat pointer we'll concern ourselves about going forward.

The layout for a pointer to a _trait object_ looks like this:

-* The first 8 bytes points to the `data` for the trait object
-* The second 8 bytes points to the `vtable` for the trait object
+- The first 8 bytes points to the `data` for the trait object
+- The second 8 bytes points to the `vtable` for the trait object

The reason for this is to allow us to refer to an object we know nothing about
except that it implements the methods defined by our trait. To accomplish this we use _dynamic dispatch_.
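A quick way to convince yourself of these layouts is to check the sizes (a small sketch, assuming a 64-bit target where pointers and `usize` are 8 bytes):

```rust
use std::mem::size_of;

trait SomeTrait {}

fn main() {
    // A plain reference is a single (thin) pointer: 8 bytes.
    assert_eq!(size_of::<&i32>(), 8);
    // A slice reference is a fat pointer: pointer + length = 16 bytes.
    assert_eq!(size_of::<&[i32]>(), 16);
    // A trait object reference is also a fat pointer: data pointer + vtable pointer.
    assert_eq!(size_of::<&dyn SomeTrait>(), 16);
    println!("all fat pointer sizes check out");
}
```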
@@ -71,6 +71,7 @@ object from these parts:
>This is an example of _editable_ code. You can change everything in the example
and try to run it. If you want to go back, press the undo symbol. Keep an eye
out for these as we go forward. Many examples will be editable.

```rust, editable
// A reference to a trait object is a fat pointer: (data_ptr, vtable_ptr)
trait Test {
@@ -4,7 +4,7 @@
>
>- Understanding how the async/await syntax works since it's how `await` is implemented
>- Why we need `Pin`
->- Why Rusts async model is extremely efficient
+>- Why Rust's async model is very efficient
>
>The motivation for `Generators` can be found in [RFC#2033][rfc2033]. It's very
>well written and I can recommend reading through it (it talks as much about
@@ -17,7 +17,7 @@ we need to be able to "pin" some data to a fixed location in memory and
get an introduction to `Pin` as well.

Basically, there were three main options that were discussed when Rust was
-desiging how the language would handle concurrency:
+designing how the language would handle concurrency:

1. Stackful coroutines, better known as green threads.
2. Using combinators.
@@ -53,6 +53,7 @@ let future = Connection::connect(conn_str).and_then(|conn| {
let rows: Result<Vec<SomeStruct>, SomeLibraryError> = block_on(future).unwrap();

```

While an effective solution, there are mainly three downsides I'll focus on:

1. The error messages produced could be extremely long and arcane
@@ -74,7 +75,7 @@ the needed state increases with each added step.

This is the model used in Rust today. It has a few notable advantages:

-1. It's easy to convert normal Rust code to a stackless corotuine using using
+1. It's easy to convert normal Rust code to a stackless coroutine using
async/await as keywords (it can even be done using a macro).
2. No need for context switching and saving/restoring CPU state
3. No need to handle dynamic stack allocation
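To make the idea of a stackless coroutine a bit more concrete before we look at the real thing, here is a rough hand-written sketch of such a state machine. It is heavily simplified and not the compiler's actual output, and the `main` here assumes the `futures` crate just to have something drive it:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// The state that must survive across the suspension point (`a`) lives in the
// enum itself, not on a separate call stack.
enum AddLater {
    // First poll: remember `a` and suspend.
    Start { a: i32 },
    // Second poll: resume with the stored state and finish.
    Resumed { a: i32 },
    Done,
}

impl Future for AddLater {
    type Output = i32;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<i32> {
        // `AddLater` only holds an `i32`, so it's `Unpin` and `get_mut` is safe.
        let this = self.get_mut();
        match std::mem::replace(this, AddLater::Done) {
            AddLater::Start { a } => {
                *this = AddLater::Resumed { a };
                // Ask to be polled again, then suspend.
                cx.waker().wake_by_ref();
                Poll::Pending
            }
            AddLater::Resumed { a } => Poll::Ready(a + 1),
            AddLater::Done => panic!("polled after completion"),
        }
    }
}

fn main() {
    let out = futures::executor::block_on(AddLater::Start { a: 4 });
    println!("{}", out); // 5
}
```

The point is simply that a suspension is a `return` of `Poll::Pending` plus an updated state, not a saved call stack.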
@@ -209,7 +210,7 @@ limitation just slip and call it a day yet.
Instead of discussing it in theory, let's look at some code.

> We'll use the optimized version of the state machines which is used in Rust today. For a more
-> in deapth explanation see [Tyler Mandry's execellent article: How Rust optimizes async/await][optimizing-await]
+> in-depth explanation see [Tyler Mandry's excellent article: How Rust optimizes async/await][optimizing-await]

```rust,noplaypen,ignore
let a = 4;
@@ -505,8 +506,6 @@ they did their unsafe implementation.
Now, the code which is created and the need for `Pin` to allow for borrowing
across `yield` points should be pretty clear.

[rfc2033]: https://github.com/rust-lang/rfcs/blob/master/text/2033-experimental-coroutines.md
[greenthreads]: https://cfsamson.gitbook.io/green-threads-explained-in-200-lines-of-rust/
[rfc1823]: https://github.com/rust-lang/rfcs/pull/1823

@@ -51,9 +51,9 @@ this book, creating an reading fields of a self referential struct still require
8. When Pinning, you can pin a value to memory either on the stack or
on the heap.

-1. Pinning a `MustStay` pointer to the stack requires `unsafe`
+9. Pinning a `MustStay` pointer to the stack requires `unsafe`

-2. Pinning a `MustStay` pointer to the heap does not require `unsafe`. There is a shortcut for doing this using `Box::pin`.
+10. Pinning a `MustStay` pointer to the heap does not require `unsafe`. There is a shortcut for doing this using `Box::pin`.
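A small sketch of points 9 and 10, using a hypothetical `MustStay` type that opts out of `Unpin` via `PhantomPinned` (not the book's exact example):

```rust
use std::marker::PhantomPinned;
use std::pin::Pin;

struct MustStay {
    data: String,
    _pin: PhantomPinned, // opts the type out of `Unpin`
}

fn main() {
    // Pinning to the heap: safe, via the `Box::pin` shortcut.
    let on_heap: Pin<Box<MustStay>> = Box::pin(MustStay {
        data: "pinned on the heap".to_string(),
        _pin: PhantomPinned,
    });
    println!("{}", on_heap.data);

    // Pinning to the stack: we must promise the value is never moved again,
    // which the compiler can't verify for us, hence `unsafe`.
    let on_stack = MustStay {
        data: "pinned on the stack".to_string(),
        _pin: PhantomPinned,
    };
    let pinned: Pin<&MustStay> = unsafe { Pin::new_unchecked(&on_stack) };
    println!("{}", pinned.data);
}
```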

> Unsafe code does not mean it's literally "unsafe", it only relieves the
> guarantees you normally get from the compiler. An `unsafe` implementation can
@@ -125,7 +125,7 @@ a: test1, b: test1

Next we swap the data stored at the memory location which `test1` is pointing to
-with the data stored at the memory location `test2` is pointing to and vice a verca.
+with the data stored at the memory location `test2` is pointing to and vice versa.

We should expect that printing the fields of `test2` should display the same as
`test1` (since the object we printed before the swap has moved there now).
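A minimal sketch of why such a swap is a problem (not the exact `Test` struct used in the book, just a struct holding a raw pointer into itself):

```rust
use std::mem;

// A struct that stores a raw pointer into itself.
struct SelfRef {
    value: String,
    ptr_to_value: *const String,
}

fn main() {
    let mut t1 = SelfRef { value: "test1".to_string(), ptr_to_value: std::ptr::null() };
    t1.ptr_to_value = &t1.value;

    let mut t2 = SelfRef { value: "test2".to_string(), ptr_to_value: std::ptr::null() };
    t2.ptr_to_value = &t2.value;

    mem::swap(&mut t1, &mut t2);

    // `t2.value` is now "test1", but `t2.ptr_to_value` still points at the old
    // memory location, which now holds t1's data: value and pointer disagree.
    unsafe {
        println!("value: {}, pointer reads: {}", t2.value, *t2.ptr_to_value);
    }
}
```

This prints `value: test1, pointer reads: test2`: the value moved, but the self-referencing pointer did not follow it.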
@@ -9,7 +9,7 @@ can always [clone the repository][example_repo] and play around with the code yo
are two branches. The `basic_example` is this code, and the `basic_example_commented`
is this example with extensive comments.

-> If you want to follow along as we go through, initalize a new cargo project
+> If you want to follow along as we go through, initialize a new cargo project
> by creating a new folder and running `cargo init` inside it. Everything we write
> here will be in `main.rs`

@@ -72,6 +72,7 @@ fn block_on<F: Future>(mut future: F) -> F::Output {
val
}
```

In all the examples here I've chosen to comment the code extensively. I find it
easier to follow that way than dividing it up into many paragraphs.
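For reference, a minimal `block_on` can be written with nothing but the standard library. This is only a sketch using `std::task::Wake` and thread parking, not the implementation from the example repository:

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

// A waker that unparks the thread running `block_on`.
struct ThreadWaker(Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

fn block_on<F: Future>(mut future: F) -> F::Output {
    // Pin the future to this stack frame; we never move it afterwards.
    let mut future = unsafe { Pin::new_unchecked(&mut future) };
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);

    loop {
        match future.as_mut().poll(&mut cx) {
            Poll::Ready(val) => return val,
            // Not ready yet: sleep until someone calls `wake` on our waker.
            Poll::Pending => thread::park(),
        }
    }
}

fn main() {
    // `async` blocks are futures too, so this works for a trivial case.
    let out = block_on(async { 40 + 2 });
    println!("{}", out);
}
```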
@@ -289,11 +290,11 @@ struct Reactor {
}

// We just have two kinds of events: a "timeout" event called
-// `Simple` and a `Close` event to close down our reactor.
+// `Timeout` and a `Close` event to close down our reactor.
#[derive(Debug)]
enum Event {
Close,
-Simple(Waker, u64, usize),
+Timeout(Waker, u64, usize),
}

impl Reactor {
@@ -315,7 +316,7 @@ impl Reactor {
match event {
// If we get a close event we break out of the loop we're in
Event::Close => break,
-Event::Simple(waker, duration, id) => {
+Event::Timeout(waker, duration, id) => {

// When we get an event we simply spawn a new thread...
let event_handle = thread::spawn(move || {
@@ -354,7 +355,7 @@ impl Reactor {
// registering an event is as simple as sending an `Event` through
// the channel.
self.dispatcher
-.send(Event::Simple(waker, duration, data))
+.send(Event::Timeout(waker, duration, data))
.unwrap();
}

@@ -603,11 +604,11 @@ fn main() {
# }
#
# // We just have two kinds of events: a "timeout" event called
-# // `Simple` and a `Close` event to close down our reactor.
+# // `Timeout` and a `Close` event to close down our reactor.
# #[derive(Debug)]
# enum Event {
# Close,
-# Simple(Waker, u64, usize),
+# Timeout(Waker, u64, usize),
# }
#
# impl Reactor {
@@ -629,7 +630,7 @@ fn main() {
# match event {
# // If we get a close event we break out of the loop we're in
# Event::Close => break,
-# Event::Simple(waker, duration, id) => {
+# Event::Timeout(waker, duration, id) => {
#
# // When we get an event we simply spawn a new thread...
# let event_handle = thread::spawn(move || {
@@ -668,7 +669,7 @@ fn main() {
# // registering an event is as simple as sending an `Event` through
# // the channel.
# self.dispatcher
-# .send(Event::Simple(waker, duration, data))
+# .send(Event::Timeout(waker, duration, data))
# .unwrap();
# }
#

@@ -13,7 +13,7 @@ It will still take some time for the ecosystem to migrate over to `Futures 3.0`
but since the advantages are so huge, there will not be a split between libraries
using `Futures 1.0` and libraries using `Futures 3.0` for long.

-# Reader excercises
+# Reader exercises

So our implementation has taken some obvious shortcuts and could use some improvement. Actually digging into the code and trying things yourself is a good way to learn. Here are some relatively simple and good exercises:

@@ -51,6 +51,21 @@ What about CPU intensive tasks? Right now they'll prevent our executor thread fr

In both `async_std` and `tokio` this method is called `spawn_blocking`. A good place to start is to read the documentation and the code they use to implement it.
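A sketch of what using it looks like with `tokio` (assuming `tokio = { version = "1", features = ["full"] }` in `Cargo.toml`; `async_std`'s version is used the same way):

```rust
#[tokio::main]
async fn main() {
    // The CPU-heavy work runs on a dedicated blocking thread pool, so the
    // executor threads stay free to drive other futures in the meantime.
    let sum = tokio::task::spawn_blocking(|| (0u64..10_000_000).sum::<u64>())
        .await
        .unwrap();

    println!("sum = {}", sum);
}
```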

+## Building a better executor
+
+Right now, we can only run one future at a time. Most runtimes have a `spawn`
+function which lets you start off a future and `await` it later so you
+can run multiple futures concurrently.
+
+As I'm writing this [@stjepan](https://github.com/stjepang) is writing a blog
+series about implementing your own executors, and he just released a post
+on how to accomplish just this, which you can visit [here](https://stjepang.github.io/2020/01/31/build-your-own-executor.html).
+He knows what he's talking about, so I recommend following that.
+
+In the [bonus_spawn]() branch of the example repository you can also find an
+extremely simplified (and worse) way of accomplishing the same in only a
+few lines of code.

## Further reading

There are many great resources for further study. In addition to the RFCs and
@@ -134,7 +134,7 @@ struct Reactor {
#[derive(Debug)]
enum Event {
Close,
-Simple(Waker, u64, usize),
+Timeout(Waker, u64, usize),
}

impl Reactor {
@@ -149,7 +149,7 @@ impl Reactor {
let rl_clone = rl_clone.clone();
match event {
Event::Close => break,
-Event::Simple(waker, duration, id) => {
+Event::Timeout(waker, duration, id) => {
let event_handle = thread::spawn(move || {
thread::sleep(Duration::from_secs(duration));
rl_clone.lock().map(|mut rl| rl.push(id)).unwrap();
@@ -175,7 +175,7 @@ impl Reactor {

fn register(&mut self, duration: u64, waker: Waker, data: usize) {
self.dispatcher
-.send(Event::Simple(waker, duration, data))
+.send(Event::Timeout(waker, duration, data))
.unwrap();
}