diff --git a/src/0_background_information.md b/src/0_background_information.md
index 9dbed29..c7cb5e5 100644
--- a/src/0_background_information.md
+++ b/src/0_background_information.md
@@ -29,7 +29,7 @@ The runtime we use to handle concurrency for us is the operating system itself.

 **Drawbacks:**
 - OS level threads come with a rather large stack. If you have many tasks
-  waiting simultaneously (like you would in a web-server under heavy load) you'll
+  waiting simultaneously (like you would in a web server under heavy load) you'll
   run out of memory pretty fast.
 - There are a lot of syscalls involved. This can be pretty costly when the
   number of tasks is high.
@@ -93,7 +93,7 @@ runtime.

 ## Green threads

-Green threads use the same mechanism as an OS does by creating a thread for
+Green threads use the same mechanism as an OS does: creating a thread for
 each task, setting up a stack, saving the CPU's state, and jumping from
 one task(thread) to another by doing a "context switch".

@@ -130,7 +130,7 @@ second as you read this.
 **Drawbacks:**

 1. The stacks might need to grow. Solving this is not easy and will have a cost.
-2. You need to save all the CPU state on every switch.
+2. You need to save the CPU state on every switch.
 3. It's not a _zero cost abstraction_ (Rust had green threads early on and
    this was one of the reasons they were removed).
 4. Complicated to implement correctly if you want to support many different
@@ -483,7 +483,7 @@ as timers but could represent any kind of resource that we'll have to wait for.

 You might start to wonder by now, when are we going to talk about Futures?

-Well, we're getting there. You see Promises, Futures and other names for
+Well, we're getting there. You see Promises, Futures, and other names for
 deferred computations are often used interchangeably.
 There are formal differences between them, but we won't cover those
diff --git a/src/1_futures_in_rust.md b/src/1_futures_in_rust.md
index 1afd849..6d946b0 100644
--- a/src/1_futures_in_rust.md
+++ b/src/1_futures_in_rust.md
@@ -18,7 +18,7 @@ future.
 Async in Rust uses a `Poll` based approach, in which an asynchronous task will
 have three phases.

-1. **The Poll phase.** A Future is polled which result in the task progressing until
+1. **The Poll phase.** A Future is polled, which results in the task progressing until
    a point where it can no longer make progress. We often refer to the part of the
    runtime which polls a Future as an executor.
 2. **The Wait phase.** An event source, most often referred to as a reactor,
@@ -35,7 +35,7 @@ pretty different from one another.
 ### Leaf futures

-Runtimes create _leaf futures_ which represents a resource like a socket.
+Runtimes create _leaf futures_ which represent a resource like a socket.

 ```rust, ignore, noplaypen
 // stream is a **leaf-future**
@@ -54,7 +54,7 @@ completion alone as you'll understand by reading the next paragraph.
 ### Non-leaf-futures

-Non-leaf-futures is the kind of futures we as _users_ of a runtime write
+Non-leaf-futures are the kind of futures we as _users_ of a runtime write
 ourselves using the `async` keyword to create a **task** which can be run on the
 executor.

@@ -76,30 +76,30 @@ let non_leaf = async {
 The key to these tasks is that they're able to yield control to the runtime's
 scheduler and then resume execution again where it left off at a later point.

-In contrast to leaf futures, these kind of futures does not themselves represent
+In contrast to leaf futures, these kinds of futures do not themselves represent
 an I/O resource. When we poll these futures we either run some code or we yield
 to the scheduler while waiting for some resource to signal us that it's ready so
 we can resume where we left off.
 ## Runtimes

-Languages like C#, JavaScript, Java, GO and many others comes with a runtime
+Languages like C#, JavaScript, Java, Go, and many others come with a runtime
 for handling concurrency. So if you come from one of those languages this will
 seem a bit strange to you.

 Rust is different from these languages in the sense that Rust doesn't come with
-a runtime for handling concurrency, so you need to use a library which provide
+a runtime for handling concurrency, so you need to use a library which provides
 this for you.

 Quite a bit of complexity attributed to Futures is actually complexity rooted
-in runtimes. Creating an efficient runtime is hard.
+in runtimes; creating an efficient runtime is hard.

 Learning how to use one correctly requires quite a bit of effort as well, but
-you'll see that there are several similarities between these kind of runtimes so
+you'll see that there are several similarities between these kinds of runtimes, so
 learning one makes learning the next much easier.

 The difference between Rust and other languages is that you have to make an
-active choice when it comes to picking a runtime. Most often, in other languages
+active choice when it comes to picking a runtime. Most often in other languages,
 you'll just use the one provided for you.

 **An async runtime can be divided into two parts:**
@@ -125,10 +125,10 @@ The two most popular runtimes for Futures as of writing this is:
    future through the `Future` trait.
 2. An ergonomic way of creating tasks which can be suspended and resumed through
    the `async` and `await` keywords.
-3. A defined interface wake up a suspended task through the `Waker` type.
+3. A defined interface to wake up a suspended task through the `Waker` type.

 That's really what Rusts standard library does. As you see there is no definition
-of non-blocking I/O, how these tasks are created or how they're run.
+of non-blocking I/O, how these tasks are created, or how they're run.
 ## I/O vs CPU intensive tasks
@@ -185,19 +185,19 @@ to the thread-pool most runtimes provide.
 Most executors have a way to accomplish #1 using methods like `spawn_blocking`.

 These methods send the task to a thread-pool created by the runtime where you
-can either perform CPU-intensive tasks or "blocking" tasks which is not supported
+can either perform CPU-intensive tasks or "blocking" tasks which are not supported
 by the runtime.

 Now, armed with this knowledge you are already on a good way for understanding
-Futures, but we're not gonna stop yet, there is lots of details to cover.
+Futures, but we're not gonna stop yet; there are lots of details to cover.

-Take a break or a cup of coffe and get ready as we go for a deep dive in the next chapters.
+Take a break or a cup of coffee and get ready as we go for a deep dive in the next chapters.

 ## Bonus section

 If you find the concepts of concurrency and async programming confusing in
 general, I know where you're coming from and I have written some resources to
-try to give a high level overview that will make it easier to learn Rusts
+try to give a high-level overview that will make it easier to learn Rust's
 Futures afterwards:

 * [Async Basics - The difference between concurrency and parallelism](https://cfsamson.github.io/book-exploring-async-basics/1_concurrent_vs_parallel.html)
diff --git a/src/2_waker_context.md b/src/2_waker_context.md
index ae2c1ab..bc0741b 100644
--- a/src/2_waker_context.md
+++ b/src/2_waker_context.md
@@ -26,7 +26,7 @@ extend the ecosystem with new leaf-level tasks.
 ## The Context type

-As the docs state as of now this type only wrapps a `Waker`, but it gives some
+As the docs state, as of now this type only wraps a `Waker`, but it gives some
 flexibility for future evolutions of the API in Rust. The context can for example hold
 task-local storage and provide space for debugging hooks in later iterations.
@@ -35,7 +35,7 @@ task-local storage and provide space for debugging hooks in later iterations.
 One of the most confusing things we encounter when implementing our own `Future`s
 is how we implement a `Waker` . Creating a `Waker` involves creating a `vtable`
 which allows us to use dynamic dispatch to call methods on a _type erased_ trait
-object we construct our selves.
+object we construct ourselves.

 >If you want to know more about dynamic dispatch in Rust I can recommend an
 article written by Adam Schwalm called [Exploring Dynamic Dispatch in Rust](https://alschwalm.com/blog/static/2017/03/07/exploring-dynamic-dispatch-in-rust/).
@@ -105,7 +105,7 @@ trait Test {
     fn mul(&self) -> i32;
 }

-// This will represent our home brewn fat pointer to a trait object
+// This will represent our home-brewed fat pointer to a trait object
 #[repr(C)]
 struct FatPointer<'a> {
     /// A reference is a pointer to an instantiated `Data` instance
diff --git a/src/3_generators_async_await.md b/src/3_generators_async_await.md
index d8b479d..a215a27 100644
--- a/src/3_generators_async_await.md
+++ b/src/3_generators_async_await.md
@@ -56,7 +56,7 @@ let rows: Result, SomeLibraryError> = block_on(future);
 1. The error messages produced could be extremely long and arcane
 2. Not optimal memory usage
-3. Did not allow to borrow across combinator steps.
+3. Did not allow borrowing across combinator steps.

 Point #3, is actually a major drawback with `Futures 0.1`.

@@ -306,8 +306,8 @@ to make this work, we'll have to let the compiler know that _we_ control this co
 That means turning to unsafe.

-Let's try to write an implementation that will compiler using `unsafe`. As you'll
-see we end up in a _self referential struct_. A struct which holds references
+Let's try to write an implementation that will compile using `unsafe`. As you'll
+see we end up with a _self-referential struct_: a struct which holds references
 into itself. As you'll notice, this compiles just fine!
@@ -369,7 +369,7 @@ impl Generator for GeneratorA {
 }
 ```

-Remember that our example is the generator we crated which looked like this:
+Remember that our example is the generator we created which looked like this:

 ```rust,noplaypen,ignore
 let mut gen = move || {
@@ -585,7 +585,7 @@ let mut fut = async {
 };
 ```

-The difference is that Futures has different states than what a `Generator` would
+The difference is that Futures have different states than what a `Generator` would
 have.

 An async block will return a `Future` instead of a `Generator`, however, the way
diff --git a/src/4_pin.md b/src/4_pin.md
index 79d0afb..3e6d393 100644
--- a/src/4_pin.md
+++ b/src/4_pin.md
@@ -9,7 +9,7 @@
 >
 > `Pin` was suggested in [RFC#2349][rfc2349]

-Let's jump strait to it. Pinning is one of those subjects which is hard to wrap
+Let's jump straight to it. Pinning is one of those subjects which is hard to wrap
 your head around in the start, but once you unlock a mental model for it it gets
 significantly easier to reason about.

@@ -35,7 +35,7 @@ for the names that were chosen.
 Naming is not easy, and I considered renaming `Unpin` and `!Unpin` in this book
 to make them easier to reason about.

 However, an experienced member of the Rust community convinced me that that there
-is just too many nuances and edge-cases to consider which is easily overlooked when
+are just too many nuances and edge-cases to consider which are easily overlooked when
 naively giving these markers different names, and I'm convinced that we'll just
 have to get used to them and use them as is.

@@ -430,7 +430,7 @@ us from swapping the pinned pointers.
 > It's important to note that stack pinning will always depend on the current
 > stack frame we're in, so we can't create a self referential object in one
-> stack frame and return it since any pointers we take to "self" is invalidated.
+> stack frame and return it since any pointers we take to "self" are invalidated.
 >
 > It also puts a lot of responsibility in your hands if you pin an object to the
 > stack. A mistake that is easy to make is, forgetting to shadow the original variable
diff --git a/src/6_future_example.md b/src/6_future_example.md
index 56b2a50..895028c 100644
--- a/src/6_future_example.md
+++ b/src/6_future_example.md
@@ -486,7 +486,7 @@ fn main() {
         fut2.await;
     };

-    // This executor will block the main thread until the futures is resolved
+    // This executor will block the main thread until the futures are resolved
     block_on(mainfut);
 }
 # // ============================= EXECUTOR ====================================
diff --git a/src/conclusion.md b/src/conclusion.md
index 2de9c60..39dd40f 100644
--- a/src/conclusion.md
+++ b/src/conclusion.md
@@ -32,7 +32,7 @@ thing in a slightly different way to get some inspiration.
 ### Building a better exectuor

-Right now, we can only run one and one future. Most runtimes has a `spawn`
+Right now, we can only run one future at a time. Most runtimes have a `spawn`
 function which let's you start off a future and `await` it later so you can run
 multiple futures concurrently.