Renamed event `Simple` to `Timeout`
@@ -19,15 +19,16 @@ pretty simple. I promise.
Let's get some of the common roadblocks out of the way first.

Async in Rust is different from most other languages in the sense that Rust
-has an extremely lightweight runtime.
+has a very lightweight runtime.

-In languages like C#, JavaScript, Java and GO, the runtime is already there. So
-if you come from one of those languages this will seem a bit strange to you.
+Languages like C#, JavaScript, Java, and Go already include a runtime for
+handling concurrency, so if you come from one of those languages this will
+seem a bit strange to you.

### What Rust's standard library takes care of

1. The definition of an interruptible task
-2. An extremely efficient technique to start, suspend, resume and store tasks
+2. An efficient technique to start, suspend, resume and store tasks
which are executed concurrently.
3. A defined way to wake up a suspended task
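The three points above map to concrete items in the standard library: the `Future` trait defines what an interruptible task is, its `poll` method is how a task is started and resumed, and `Waker` is the defined way to wake a suspended task back up. For reference, this is essentially the trait as it appears in `std::future` (attributes omitted):

```rust
use std::pin::Pin;
use std::task::{Context, Poll};

// Essentially the definition found in `std::future`.
pub trait Future {
    type Output;

    // `poll` either finishes with `Poll::Ready(Self::Output)` or returns
    // `Poll::Pending` after arranging for the `Waker` inside `Context` to be
    // called once progress can be made again.
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output>;
}
```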
@@ -80,7 +81,7 @@ A good sign is that if you're required to use combinators like `and_then` then
you're using `Futures 1.0`.

While not directly compatible, there is a tool that lets you relatively easily
-convert a `Future 1.0` to a `Future 3.0` and vice a verca. You can find all you
+convert a `Future 1.0` to a `Future 3.0` and vice versa. You can find all you
need in the [`futures-rs`][futures_rs] crate and all [information you need here][compat_info].

## First things first
@@ -47,20 +47,20 @@ bytes.
The 16 byte sized pointers are called "fat pointers" since they carry extra
information.

**Example `&[i32]`:**

-* The first 8 bytes is the actual pointer to the first element in the array (or part of an array the slice refers to)
-* The second 8 bytes is the length of the slice.
+- The first 8 bytes is the actual pointer to the first element in the array (or part of an array the slice refers to)
+- The second 8 bytes is the length of the slice.

**Example `&dyn SomeTrait`:**

This is the type of fat pointer we'll concern ourselves with going forward.
`&dyn SomeTrait` is a reference to a trait, or what Rust calls _trait objects_.

The layout for a pointer to a _trait object_ looks like this:

-* The first 8 bytes points to the `data` for the trait object
-* The second 8 bytes points to the `vtable` for the trait object
+- The first 8 bytes points to the `data` for the trait object
+- The second 8 bytes points to the `vtable` for the trait object

The reason for this is to allow us to refer to an object we know nothing about
except that it implements the methods defined by our trait. To accomplish this we use _dynamic dispatch_.
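A quick way to see the size difference for yourself — a small sketch; the printed sizes assume a 64-bit target:

```rust
use std::mem::size_of;

trait SomeTrait {}

fn main() {
    // A plain reference is a single (thin) pointer: 8 bytes on a 64-bit target.
    println!("&i32:           {}", size_of::<&i32>());
    // Slice and trait-object references are fat pointers: pointer + length,
    // and pointer + vtable pointer respectively, so 16 bytes.
    println!("&[i32]:         {}", size_of::<&[i32]>());
    println!("&dyn SomeTrait: {}", size_of::<&dyn SomeTrait>());
}
```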
@@ -71,11 +71,12 @@ object from these parts:
>This is an example of _editable_ code. You can change everything in the example
and try to run it. If you want to go back, press the undo symbol. Keep an eye
out for these as we go forward. Many examples will be editable.

```rust, editable
// A reference to a trait object is a fat pointer: (data_ptr, vtable_ptr)
trait Test {
    fn add(&self) -> i32;
    fn sub(&self) -> i32;
    fn mul(&self) -> i32;
}

@@ -135,4 +136,4 @@ fn main() {

The reason we go through this will be clear later on: when we implement our
own `Waker` we'll actually set up a `vtable` like we do here, and knowing what
it is will make this much less mysterious.
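To make that connection concrete: a `Waker` is exactly such a (data pointer, vtable pointer) pair. As a small preview — not the book's implementation, just the smallest thing that shows a vtable in action — here is a do-nothing `Waker` built from `RawWaker` and `RawWakerVTable` in the standard library:

```rust
use std::task::{RawWaker, RawWakerVTable, Waker};

// A "no-op" vtable: every slot is a function that does nothing useful.
fn raw_waker() -> RawWaker {
    fn no_op(_: *const ()) {}
    fn clone(_: *const ()) -> RawWaker {
        raw_waker()
    }

    // The vtable holds the clone, wake, wake_by_ref and drop functions.
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);

    // The "fat pointer": a data pointer (null here, we carry no state) + vtable.
    RawWaker::new(std::ptr::null(), &VTABLE)
}

fn main() {
    // Safety: our vtable functions uphold the (trivial) contract of doing nothing.
    let waker = unsafe { Waker::from_raw(raw_waker()) };
    waker.wake_by_ref();
    waker.wake(); // consumes the waker
}
```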
@@ -4,7 +4,7 @@
>
>- Understanding how the async/await syntax works since it's how `await` is implemented
>- Why we need `Pin`
->- Why Rusts async model is extremely efficient
+>- Why Rust's async model is very efficient
>
>The motivation for `Generators` can be found in [RFC#2033][rfc2033]. It's very
>well written and I can recommend reading through it (it talks as much about
@@ -16,8 +16,8 @@ exploring generators first. By doing that we'll soon get to see why
we need to be able to "pin" some data to a fixed location in memory and
get an introduction to `Pin` as well.

Basically, there were three main options that were discussed when Rust was
-desiging how the language would handle concurrency:
+designing how the language would handle concurrency:

1. Stackful coroutines, better known as green threads.
2. Using combinators.
@@ -53,6 +53,7 @@ let future = Connection::connect(conn_str).and_then(|conn| {
let rows: Result<Vec<SomeStruct>, SomeLibraryError> = block_on(future).unwrap();

```

While an effective solution, there are mainly three downsides I'll focus on:

1. The error messages produced could be extremely long and arcane
@@ -74,7 +75,7 @@ the needed state increases with each added step.

This is the model used in Rust today. It has a few notable advantages:

-1. It's easy to convert normal Rust code to a stackless corotuine using using
+1. It's easy to convert normal Rust code to a stackless coroutine using
async/await as keywords (it can even be done using a macro).
2. No need for context switching and saving/restoring CPU state
3. No need to handle dynamic stack allocation
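To make the first point above concrete, here is a sketch of how the combinator chain from the earlier example reads once async/await is available. `Connection`, `SomeStruct`, and `SomeLibraryError` are stand-ins mirroring that made-up library, not a real API:

```rust
// Stub types standing in for the hypothetical library in the combinator example.
#[derive(Debug)]
struct SomeStruct;
#[derive(Debug)]
struct SomeLibraryError;
struct Connection;

impl Connection {
    async fn connect(_conn_str: &str) -> Result<Connection, SomeLibraryError> {
        Ok(Connection)
    }
    async fn query(&self, _query: &str) -> Result<Vec<SomeStruct>, SomeLibraryError> {
        Ok(vec![SomeStruct])
    }
}

// The same chain of steps, written as ordinary sequential-looking code.
// The compiler turns this into a state machine (a stackless coroutine) for us.
async fn get_rows(conn_str: &str) -> Result<Vec<SomeStruct>, SomeLibraryError> {
    let conn = Connection::connect(conn_str).await?;
    let rows = conn.query("some query").await?;
    Ok(rows)
}
```

Just like the combinator version, you'd drive `get_rows` to completion with an executor's `block_on`.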
@@ -209,7 +210,7 @@ limitation just slip and call it a day yet.
Instead of discussing it in theory, let's look at some code.

> We'll use the optimized version of the state machines which is used in Rust today. For a more
-> in deapth explanation see [Tyler Mandry's execellent article: How Rust optimizes async/await][optimizing-await]
+> in-depth explanation see [Tyler Mandry's excellent article: How Rust optimizes async/await][optimizing-await]

```rust,noplaypen,ignore
let a = 4;
@@ -503,9 +504,7 @@ the value afterwards it will violate the guarantee they promise to uphold when
they did their unsafe implementation.

Now, the code which is created and the need for `Pin` to allow for borrowing
across `yield` points should be pretty clear.

-
-
[rfc2033]: https://github.com/rust-lang/rfcs/blob/master/text/2033-experimental-coroutines.md
[greenthreads]: https://cfsamson.gitbook.io/green-threads-explained-in-200-lines-of-rust/
src/4_pin.md
@@ -32,10 +32,10 @@ It just makes it so much easier to understand them.

2. Getting a `&mut T` to a pinned pointer requires unsafe if `T: MustStay`. In other words: requiring a pinned pointer to a type which is `MustStay` prevents the _user_ of that API from moving that value unless it chooses to write `unsafe` code.

3. Pinning does nothing special with that memory like putting it into some "read only" memory or anything fancy. It only tells the compiler that some operations on this value should be forbidden.

4. Most standard library types implement `CanMove`. The same goes for most
"normal" types you encounter in Rust. `Futures` and `Generators` are two
exceptions.

5. The main use case for `Pin` is to allow self referential types, the whole
@@ -46,14 +46,14 @@ cases in the API which are being explored.
Moving such a type can cause the universe to crash. As of the time of writing
this book, creating and reading fields of a self referential struct still requires `unsafe`.

7. You're not really meant to be implementing `MustStay`, but you can on nightly with a feature flag, or by adding `std::marker::PhantomPinned` to your type.

8. When pinning, you can pin a value to memory either on the stack or
on the heap.

-1. Pinning a `MustStay` pointer to the stack requires `unsafe`
+9. Pinning a `MustStay` pointer to the stack requires `unsafe`

-2. Pinning a `MustStay` pointer to the heap does not require `unsafe`. There is a shortcut for doing this using `Box::pin`.
+10. Pinning a `MustStay` pointer to the heap does not require `unsafe`. There is a shortcut for doing this using `Box::pin`.

> Unsafe code does not mean it's literally "unsafe", it only relieves the
> guarantees you normally get from the compiler. An `unsafe` implementation can
@@ -125,7 +125,7 @@ a: test1, b: test1

Next we swap the data stored at the memory location which `test1` is pointing to
-with the data stored at the memory location `test2` is pointing to and vice a verca.
+with the data stored at the memory location `test2` is pointing to and vice versa.

We should expect that printing the fields of `test2` should display the same as
`test1` (since the object we printed before the swap has moved there now).
@@ -207,7 +207,7 @@ pub fn main() {
```

Now, what we've done here is pinning a stack address. That will always be
`unsafe` if our type implements `!Unpin` (aka `MustStay`).

We use some tricks here, including requiring an `init`. If we want to fix that
and let users avoid `unsafe` we need to pin our data on the heap instead.
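A small sketch tying these points together (the struct and field names here are illustrative, not the book's code): pinning a `CanMove` (`Unpin`) value is entirely safe, and pinning a `MustStay` (`!Unpin`) value to the heap with `Box::pin` avoids both `unsafe` and the `init` trick:

```rust
use std::marker::PhantomPinned;
use std::pin::Pin;

// Opting out of `Unpin` ("CanMove") by embedding `PhantomPinned`,
// which makes the type `!Unpin` ("MustStay").
struct MustStayPut {
    data: String,
    _pin: PhantomPinned,
}

fn main() {
    // `u8` is `Unpin`, so pinning it is safe and we can get the `&mut` back out.
    let mut x: u8 = 5;
    let pinned = Pin::new(&mut x);
    *pinned.get_mut() += 1;
    assert_eq!(x, 6);

    // For a `!Unpin` type, `Box::pin` pins to the heap without any `unsafe`,
    // and the value is guaranteed not to move afterwards.
    let pinned: Pin<Box<MustStayPut>> = Box::pin(MustStayPut {
        data: "stays put".to_string(),
        _pin: PhantomPinned,
    });
    // Reading through the pin is fine; only getting a `&mut MustStayPut` back
    // out again is what requires `unsafe`.
    println!("{}", pinned.data);
}
```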
@@ -9,7 +9,7 @@ can always [clone the repository][example_repo] and play around with the code yo
are two branches. The `basic_example` is this code, and the `basic_example_commented`
is this example with extensive comments.

-> If you want to follow along as we go through, initalize a new cargo project
+> If you want to follow along as we go through, initialize a new cargo project
> by creating a new folder and running `cargo init` inside it. Everything we write
> here will be in `main.rs`
@@ -72,6 +72,7 @@ fn block_on<F: Future>(mut future: F) -> F::Output {
    val
}
```

In all the examples here I've chosen to comment the code extensively. I find it
easier to follow that way than dividing it up into many paragraphs.
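Once this `block_on` is in place, using it can look like the tiny sketch below. It relies only on the signature above (`fn block_on<F: Future>(mut future: F) -> F::Output`) and assumes the rest of the example's code is in scope:

```rust
fn main() {
    // Any `Future` will do; an `async` block is the simplest one to make.
    let future = async { 40 + 2 };

    // `block_on` drives the future to completion on the current thread
    // and hands back its output.
    let result = block_on(future);
    assert_eq!(result, 42);
}
```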
@@ -97,7 +98,7 @@ allow `Futures` to have self references.

In Rust we call an interruptible task a `Future`. Futures have a well-defined interface, which means they can be used across the entire ecosystem. We can chain
these `Futures` so that once a "leaf future" is ready we'll perform a set of
operations.

These operations can spawn new leaf futures themselves.
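A sketch of that distinction — the names here are made up for illustration. A "leaf" future is something a runtime hands you and is what actually returns `Pending`/`Ready`, while the `async` functions we write are non-leaf futures that chain work on top of it:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// Stand-in for a leaf future a runtime would provide (a socket read, a timer, ...).
struct SomeLeafFuture;

impl Future for SomeLeafFuture {
    type Output = u32;

    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Self::Output> {
        // A real leaf future would store the `Waker` from `_cx` and return
        // `Poll::Pending` until its resource is ready; we pretend it's ready now.
        Poll::Ready(42)
    }
}

// A non-leaf future: ordinary code that continues once the leaf is ready,
// and could go on to await further leaf futures.
async fn do_work() -> u32 {
    let value = SomeLeafFuture.await;
    value + 1
}
```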
@@ -289,11 +290,11 @@ struct Reactor {
}

-// We just have two kind of events. A timeout event, a "timeout" event called
-// `Simple` and a `Close` event to close down our reactor.
+// We just have two kinds of events: a "timeout" event called
+// `Timeout` and a `Close` event to close down our reactor.
#[derive(Debug)]
enum Event {
    Close,
-    Simple(Waker, u64, usize),
+    Timeout(Waker, u64, usize),
}

impl Reactor {
@@ -315,7 +316,7 @@ impl Reactor {
match event {
    // If we get a close event we break out of the loop we're in
    Event::Close => break,
-    Event::Simple(waker, duration, id) => {
+    Event::Timeout(waker, duration, id) => {

        // When we get an event we simply spawn a new thread...
        let event_handle = thread::spawn(move || {
@@ -354,7 +355,7 @@ impl Reactor {
    // registering an event is as simple as sending an `Event` through
    // the channel.
    self.dispatcher
-        .send(Event::Simple(waker, duration, data))
+        .send(Event::Timeout(waker, duration, data))
        .unwrap();
}
@@ -603,11 +604,11 @@ fn main() {
# }
#
-# // We just have two kind of events. A timeout event, a "timeout" event called
-# // `Simple` and a `Close` event to close down our reactor.
+# // We just have two kinds of events: a "timeout" event called
+# // `Timeout` and a `Close` event to close down our reactor.
# #[derive(Debug)]
# enum Event {
#     Close,
-#     Simple(Waker, u64, usize),
+#     Timeout(Waker, u64, usize),
# }
#
# impl Reactor {
@@ -629,7 +630,7 @@ fn main() {
# match event {
#     // If we get a close event we break out of the loop we're in
#     Event::Close => break,
-#     Event::Simple(waker, duration, id) => {
+#     Event::Timeout(waker, duration, id) => {
#
#         // When we get an event we simply spawn a new thread...
#         let event_handle = thread::spawn(move || {
@@ -668,7 +669,7 @@ fn main() {
#     // registering an event is as simple as sending an `Event` through
#     // the channel.
#     self.dispatcher
-#         .send(Event::Simple(waker, duration, data))
+#         .send(Event::Timeout(waker, duration, data))
#         .unwrap();
# }
#
@@ -13,7 +13,7 @@ It will still take some time for the ecosystem to migrate over to `Futures 3.0`
but since the advantages are so huge, it will not be a split between libraries
using `Futures 1.0` and libraries using `Futures 3.0` for long.

-# Reader excercises
+# Reader exercises

So our implementation has taken some obvious shortcuts and could use some improvement. Actually digging into the code and trying things yourself is a good way to learn. Here are some relatively simple and good exercises:
@@ -51,6 +51,21 @@ What about CPU intensive tasks? Right now they'll prevent our executor thread fr

In both `async_std` and `tokio` this method is called `spawn_blocking`. A good place to start is to read the documentation and the code they use to implement it.

+## Building a better executor
+
+Right now, we can only run one future at a time. Most runtimes have a `spawn`
+function which lets you start off a future and `await` it later so you
+can run multiple futures concurrently.
+
+As I'm writing this [@stjepan](https://github.com/stjepang) is writing a blog
+series about implementing your own executors, and he just released a post
+on how to accomplish just this, which you can read [here](https://stjepang.github.io/2020/01/31/build-your-own-executor.html).
+He knows what he's talking about so I recommend following that.
+
+In the [bonus_spawn]() branch of the example repository you can also find an
+extremely simplified (and worse) way of accomplishing the same in only a
+few lines of code.
+
## Further reading

There are many great resources for further study. In addition to the RFCs and
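Picking up the `spawn` idea from the section added above, here is a hedged sketch of what it buys you, using `tokio` as the example runtime (this assumes a dependency along the lines of `tokio = { version = "1", features = ["full"] }`; any runtime with a `spawn` function works the same way):

```rust
#[tokio::main]
async fn main() {
    // `spawn` starts each future immediately and returns a `JoinHandle`,
    // so both tasks make progress concurrently instead of one after the other.
    let a = tokio::spawn(async { 1 + 1 });
    let b = tokio::spawn(async { 2 + 2 });

    // We `await` the handles later to collect the results.
    let (a, b) = (a.await.unwrap(), b.await.unwrap());
    assert_eq!(a + b, 6);
}
```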
@@ -134,7 +134,7 @@ struct Reactor {
#[derive(Debug)]
enum Event {
    Close,
-    Simple(Waker, u64, usize),
+    Timeout(Waker, u64, usize),
}

impl Reactor {
@@ -149,7 +149,7 @@ impl Reactor {
let rl_clone = rl_clone.clone();
match event {
    Event::Close => break,
-    Event::Simple(waker, duration, id) => {
+    Event::Timeout(waker, duration, id) => {
        let event_handle = thread::spawn(move || {
            thread::sleep(Duration::from_secs(duration));
            rl_clone.lock().map(|mut rl| rl.push(id)).unwrap();
@@ -175,7 +175,7 @@ impl Reactor {

fn register(&mut self, duration: u64, waker: Waker, data: usize) {
    self.dispatcher
-        .send(Event::Simple(waker, duration, data))
+        .send(Event::Timeout(waker, duration, data))
        .unwrap();
}