EP 193 · Concurrency's Future · Jun 20, 2022 · Members

Video #193: Concurrency's Future: Sendable and Actors


Episode: Video #193 Date: Jun 20, 2022 Access: Members Only 🔒 URL: https://www.pointfree.co/episodes/ep193-concurrency-s-future-sendable-and-actors


Description

When working with concurrent code, you must contend with data synchronization and data races. While the tools of the past made it difficult to reason about these issues, Swift’s new tools make it a breeze, including the Sendable protocol, @Sendable closures, and actors.

Video

Cloudflare Stream video ID: dcfef88831b7a1e4485f937f43e6391b Local file: video_193_concurrency-s-future-sendable-and-actors.mp4 *(download with --video 193)*

References

Transcript

0:05

So we have now seen that the concepts of asynchrony are deeply baked into the Swift language. If you want to perform asynchronous work you need to be in an asynchronous context, which is something that the compiler explicitly knows about. You either need to implement your function with the async keyword applied to it, which means the caller is responsible for providing the asynchronous context, or you need to spin up a new task using the Task initializer. These two styles of providing an asynchronous context are very different, but we will dive into that topic in a moment.

0:29

Before that, there was another topic we delved into for both threads and dispatch queues, and that is data synchronization and data races. We saw that if we accessed mutable state from multiple threads or queues, then we leave ourselves open to data races, where two threads simultaneously read and write the same value. When this happens we get unexpected results, such as incrementing a counter 1,000 times from 1,000 different threads producing a count that is slightly less than 1,000. This happens when one thread writes the count in between the moment when another thread reads the count and then writes to it. In that case the second write will be based on an out-of-date value.

1:06

Let’s see what new tools Swift gives us to solve this problem.

Sendable and @Sendable

1:11

First, let’s try adapting the solution that we previously came up with to fix this data race. Previously we protected access to the count variable during the increment mutation using a lock:

```swift
class Counter {
  let lock = NSLock()
  var count = 0

  func increment() {
    self.lock.lock()
    defer { self.lock.unlock() }
    self.count += 1
  }
}
```

1:22

This prevents multiple threads from interleaving during the steps of reading the count, incrementing the count, and then writing the new count.

1:31

We can construct one of these counters, invoke the increment method from 1,000 newly created tasks, and then finally wait for some time and print the result of the count:

```swift
let counter = Counter()
for _ in 0..<workCount {
  Task {
    counter.increment()
  }
}
Thread.sleep(forTimeInterval: 2)
print("counter.count", counter.count)
```

```
counter.count 1000
```

1:43

If we run this many times we will always get 1000 printed to the console, so it seems to work as we expect.

1:55

So, we might just call it quits here and say that’s all there is to solving data races. But luckily Swift provides a lot more tools to make this situation a lot nicer.

2:04

First off, Swift can detect that what we are doing is not 100% guaranteed to be correct. Sure, we’ve done the work to properly lock access to the mutable state when operating on it, but as we saw in past episodes this is super tricky to get right. There are many seemingly reasonable things we can do with mutable state that turn out to be completely unreasonable once we allow the state to be accessed and updated from multiple threads. The truth is, it’s nearly impossible to be sure that multithreaded code works the way we expect.

2:31

Swift can now catch the moments where we write code that is not guaranteed to be safe to run in concurrent contexts. Sometimes this manifests itself as errors from the compiler, other times it’s warnings, and someday many of the warnings will actually become errors.

2:43

For example, Swift explicitly prohibits capturing mutable variables inside asynchronous contexts:

```swift
func doSomething() {
  var count = 0
  Task {
    print(count)
  }
}
```

Reference to captured var ‘count’ in concurrently-executing code

3:01

This just simply isn’t allowed, and for good reason. If you could capture mutable variables in this task, should mutations from outside the task be visible on the inside?

```swift
var count = 0
Task {
  try await Task.sleep(nanoseconds: NSEC_PER_SEC)
  print(count)  // 0? 1?
}
count = 1
```

3:22

And should mutations on the inside be visible on the outside?

```swift
count = 0
Task {
  count = 1
}
Thread.sleep(forTimeInterval: 1)
print(count)  // 0? 1?
```

3:34

Both of these situations are really confusing since they do not read linearly from top-to-bottom. Instead, things that happen lower in the code would somehow be able to affect things higher in the code. Not only that, if we allow mutable captures then we open ourselves to race conditions.

3:48

In fact, mutable captures are allowed in plain escaping closures, which means it’s possible to introduce a race condition without the compiler saying a peep. For example, the detachNewThread static method on the Thread class takes an escaping closure and hence will happily capture a mutable value:

```swift
var count = 0
for _ in 0..<workCount {
  Thread.detachNewThread {
    count += 1
  }
}
Thread.sleep(forTimeInterval: 2)
print("count", count)
```

4:23

If we run this we will see it does seem to count fewer than 1,000 times:

```
count 999
```

4:30

It seems that mutating a little local variable has fewer thread collision problems than when we wrapped the count inside a Counter class, which we saw typically counted only around 980 times out of 1,000. This is probably because mutating this local value compiles down to fewer CPU instructions compared to invoking a method on a class, and so there are fewer chances for threads to interleave in problematic ways. But again, this just shows how tricky multithreaded programming can be.

4:55

Outlawing mutable captures in concurrent contexts makes it so that we don’t even have to answer these questions. So this is one example of how the compiler can prevent us from writing code that seems reasonable, but is actually very wrong.

5:07

Although mutable captures are not allowed, immutable captures are just fine:

```swift
let count = 0
Task {
  print(count)
}
```

5:17

There is no risk of a race condition in this code and Swift knows this, so it compiles just fine. Further, even if count is a var but we explicitly capture it in a capture list for the task, this too will compile just fine:

```swift
var count = 0
Task { [count] in
  print(count)
}
```

5:31

By explicitly capturing it we are making it known that we are only grabbing the value at the moment of creating this closure. It’s a completely new immutable variable inside the closure, untethered to the variable on the outside.
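The same snapshot behavior can be seen without any concurrency at all, since capture lists work on ordinary closures too. A minimal sketch (the variable and closure names here are ours, not from the episode):

```swift
var count = 0

// The capture list copies the current value of `count` into the closure.
let snapshot = { [count] in count }

count = 100

// The closure still sees the value from the moment it was created,
// untethered from the outer variable.
print(snapshot())  // 0
print(count)       // 100
```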

5:44

There are more ways the compiler can help us find these kinds of race conditions, but currently these tools are gated behind a Swift flag because their messaging and behavior is still being tweaked by the compiler team.

5:54

We can enable the flag in our Package.swift file:

```swift
.executableTarget(
  name: "concurrency",
  dependencies: [],
  swiftSettings: [
    .unsafeFlags([
      "-Xfrontend", "-warn-concurrency",
    ]),
  ]
),
```

6:07

With that change we are already getting a warning on the line of code where we call the increment method:

```swift
for _ in 0..<workCount {
  Task {
    counter.increment()
  }
}
```

Capture of ‘counter’ with non-sendable type ‘Counter’ in a @Sendable closure

6:23

This warning is telling us that something is wrong with our asynchronous code. We personally think that it’s safe to invoke the increment method from multiple threads, but the compiler doesn’t know that and we haven’t proven it to the compiler yet.

6:34

This warning will someday be an error in Swift so that you aren’t even allowed to write this code even if you think it’s 100% correct. You will have to prove to the compiler it is correct before you are allowed to compile it.

6:44

The error specifically says that we are accessing a non-sendable type from a closure that is marked as @Sendable. Let’s first focus on the concept of “sendable” types and then we will take a look at the @Sendable attribute.

The Sendable protocol

6:59

Sendable types are types that conform to the Sendable protocol. It’s a protocol with no requirements:

```swift
/// The Sendable protocol indicates that values of the given type can
/// be safely used in concurrent code.
public protocol Sendable {
}
```

7:04

It therefore seems trivial to conform to this protocol, but the compiler does additional work to confirm that types truly can conform to it.

7:12

The protocol is meant to represent values that can be safely passed across concurrent boundaries. As we’ve seen before, a class holding a piece of mutable state is not typically safe to pass to multiple closures running concurrently. You have to put in the extra work to make it safe by using a lock internally.

7:27

On the other hand, some types are always safe to pass across concurrent boundaries. As we saw a moment ago it is OK to explicitly capture mutable values, thus giving up their mutability:

```swift
var count = 0
Task { [count] in
  print(count)
}
```

7:38

And it is OK to pass certain immutable values across concurrent boundaries, such as plain integers:

```swift
let count = 0
Task {
  print(count)
}
```

7:43

This is compiling without errors because the Int type conforms to the Sendable protocol. It’s hard to see this conformance explicitly in Xcode or the documentation because it appears to be hidden from us, but we can look at the open-source code to see that indeed all integer types conform to the Sendable protocol.

8:09

And in fact the vast majority of types in the standard library are sendable specifically because they are just simple value types. So things like booleans, strings, arrays of sendables, dictionaries of sendables, and more can all be passed across concurrent boundaries.
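As a quick sketch of that (the values here are our own, not from the episode), capturing immutable standard library values in a task compiles without any concurrency warnings:

```swift
let names: [String] = ["Blob", "Blob Jr."]
let scores: [String: Int] = ["Blob": 9000]

Task {
  // [String] and [String: Int] are Sendable because String and Int are,
  // so these immutable captures can safely cross the concurrent boundary.
  print(names.count, scores["Blob"] ?? 0)
}
```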

8:21

More generally, any value type whose fields are all sendable can be passed across concurrent boundaries:

```swift
struct User {
  var id: Int
  var name: String
}

let user = User(id: 42, name: "Blob")
Task {
  print(user)
}
```

8:39

This example is particularly interesting because we haven’t explicitly marked the User type as being sendable. It seems there is some compiler magic that is automatically applying the conformance, though we can do it explicitly if we want:

```swift
struct User: Sendable {
  …
}
```

8:51

But we don’t need this, so let’s remove it for now.

8:54

So we can largely stay in the sendable world as long as we are creating simple value types that are composed of other sendable value types. This tends to be the vast majority of data types we create for our applications, which is great.

9:05

But, as our data types grow more complex we may accidentally fall out of the purview of automatic sendable conformance. For example, suppose we did something seemingly innocuous like adding an attributed string to our model for the bio of the user:

```swift
struct User {
  var id: Int
  var name: String
  var bio: AttributedString
}

let user = User(id: 42, name: "Blob", bio: "")
Task {
  print(user)
}
```

Capture of ‘user’ with non-sendable type ‘User’ in a @Sendable closure

9:19

Looks like we somehow lost the Sendable conformance on our User type.

9:24

It turns out that AttributedString is not sendable. This may be just an oversight in Foundation, as it seems they are still in the process of auditing types for sendability, or there may be a real reason why it is not sendable. Perhaps somewhere deep in the bowels of its implementation it accesses some shared mutable data that is hidden from us.

9:41

Either way, we are no longer proving to the compiler that the User type is safe to use across concurrent boundaries, hence the warning. If we try to make our type explicitly conform to the Sendable protocol we will get a more localized warning of what exactly is wrong:

```swift
struct User: Sendable {
  var id: Int
  var name: String
  var bio: AttributedString
}
```

Stored property ‘bio’ of ‘Sendable’-conforming struct ‘User’ has non-sendable type ‘AttributedString’

9:54

So, let’s not store this new field in our struct.

9:57

So it seems like a lot of types can become sendable quite easily, but so far we have only dealt with value types. Reference types can also be made sendable, although it’s a lot harder to accomplish. For example, let’s make our User struct into a class:

```swift
class User: Sendable {
  var id: Int
  var name: String

  init(id: Int, name: String) {
    self.id = id
    self.name = name
  }
}
```

Non-final class ‘User’ cannot conform to ‘Sendable’; use ‘@unchecked Sendable’
Stored property ‘id’ of ‘Sendable’-conforming class ‘User’ is mutable

10:17

The first warning is saying that non-final classes cannot generally be inferred to be sendable because a subclass could be introduced that does some non-sendable-friendly things, such as introducing mutable state.

10:39

So, let’s mark our class as final to prevent subclassing:

```swift
final class User: Sendable {
  …
}
```

10:44

The next warning is saying that we cannot store mutable fields in a class if we want it to be sendable. So let’s make our fields `let`s:

```swift
final class User: Sendable {
  let id: Int
  let name: String

  init(id: Int, name: String) {
    self.id = id
    self.name = name
  }
}
```

10:53

Now all of the warnings go away which means we have guarantees from the Swift compiler that using the User type across concurrent boundaries will not lead to race conditions. Of course we have severely limited its capabilities. It can no longer change its internal state at all, which makes it behave somewhat similarly to the struct version we had, but that’s the cost of doing business with multithreaded code.

11:14

Sometimes we can’t prove to the compiler that our type is truly sendable, and we have to take matters into our own hands and operate outside the purview of the compiler. For example, consider the Counter class that used locking under the hood in order to protect against data races:

```swift
class Counter {
  let lock = NSLock()
  var count = 0

  func increment() {
    self.lock.lock()
    defer { self.lock.unlock() }
    self.count += 1
  }
}

let counter = Counter()
Task {
  counter.increment()
}
```

Capture of ‘counter’ with non-sendable type ‘Counter’ in a @Sendable closure

11:28

Swift doesn’t know that we have taken the steps to make this safe to use from multiple threads, so it has no choice but to warn us. Further, if we try to force Counter to be sendable, we just get warnings in other places:

```swift
class Counter: Sendable {
  let lock = NSLock()
  var count = 0

  func increment() {
    self.lock.lock()
    defer { self.lock.unlock() }
    self.count += 1
  }
}
```

Non-final class ‘Counter’ cannot conform to ‘Sendable’; use ‘@unchecked Sendable’
Stored property ‘lock’ of ‘Sendable’-conforming class ‘Counter’ has non-sendable type ‘NSLock’
Stored property ‘count’ of ‘Sendable’-conforming class ‘Counter’ is mutable

11:41

We can’t make NSLock sendable because we don’t control that type, although it does seem like a type that should be sendable. Perhaps Apple just hasn’t audited it yet. And we don’t want to make count a let because its whole point is to be mutable.

11:54

We personally feel quite confident this type is safe to use from multiple threads, and so we can tell the compiler to just trust us that it’s actually sendable by marking the class as unchecked:

```swift
class Counter: @unchecked Sendable {
  let lock = NSLock()
  var count = 0

  func increment() {
    self.lock.lock()
    defer { self.lock.unlock() }
    self.count += 1
  }
}
```

12:05

This now compiles without warnings, but we should know any time we use @unchecked that we are operating outside the purview of the compiler. It’s absolutely possible that we could make changes to this counter that make it no longer safe to pass across concurrent boundaries, but Swift will not be able to detect that.

@Sendable closures

12:22

Luckily for us Swift has a tool to help with this, but before moving on to that we need to discuss the other sendable concept, which is the @Sendable attribute that can be applied to closures.

12:37

Recall that before we had the Sendable conformance on the Counter class we were getting the following warning:

```swift
class Counter /* : @unchecked Sendable */ {
  …
}

let counter = Counter()
Task {
  counter.increment()
}
```

Capture of ‘counter’ with non-sendable type ‘Counter’ in a @Sendable closure

12:59

We now understand quite well what sendable means, but what does this @Sendable mean? An @Sendable closure is an indication that the closure is going to be used in a concurrent context, and so it’s not legitimate to use just any kind of closure.

13:15

In order to best understand @Sendable we need to back up a little bit and discuss @escaping . The @escaping attribute has been in Swift for a long time, and it restricts how you are allowed to use a closure that is passed into a function.

13:29

Consider a function that takes a closure as an argument:

```swift
func perform(work: () -> Void) {
}
```

13:36

When written as such, there is really only one thing you can do with work, and that is invoke it within the lifetime of the perform scope:

```swift
func perform(work: () -> Void) {
  work()
}
```

13:47

Or you can invoke it multiple times:

```swift
func perform(work: () -> Void) {
  work()
  work()
  work()
  work()
}
```

13:51

Or you can sprinkle in other bits of work before or after it:

```swift
func perform(work: () -> Void) {
  print("Begin")
  work()
  print("Middle")
  work()
  print("Middle")
  work()
  print("Middle")
  work()
  print("End")
}
```

14:03

And then when we invoke perform and pass a closure it will do what we expect:

```swift
perform {
  print("Hello")
}
```

```
Begin
Hello
Middle
Hello
Middle
Hello
Middle
Hello
End
```

14:19

If you want to do something a little more interesting with this work closure you will most likely butt heads with the compiler. For example, suppose we wanted to download that 1MB file we’ve played around with a few times, and once it’s done we invoke the work closure:

```swift
func perform(work: () -> Void) {
  print("Begin")
  URLSession.shared.dataTask(
    with: .init(string: "http://ipv4.download.thinkbroadband.com/1MB.zip")!
  ) { _, _, _ in
    work()
  }
  print("End")
}
```

Escaping closure captures non-escaping parameter ‘work’

14:39

As soon as we do that we get a compiler error complaining about work not being escaping.

14:50

Even something as simple as invoking work after a small delay doesn’t work:

```swift
func perform(work: () -> Void) {
  print("Begin")
  // URLSession.shared.dataTask(
  //   with: .init(string: "http://ipv4.download.thinkbroadband.com/1MB.zip")!
  // ) { _, _, _ in
  //   work()
  // }
  DispatchQueue(label: "delay").asyncAfter(deadline: .now() + 1) {
    work()
  }
  print("End")
}
```

Escaping closure captures non-escaping parameter ‘work’

15:04

An escaping closure is one that needs to be captured and referenced after the scope of the function ends.

15:11

Both of the methods dataTask and asyncAfter return immediately after they are invoked and then will invoke work at some time later. This means that “Begin” and “End” will both be printed before the work closure is even invoked. So, the work closure needs to live longer than the function’s execution, and that is why Swift is complaining.

15:25

But, then the question is: why does Swift care if this closure lives longer than the function? Is it really so important to distinguish between these two types of closures?

15:33

Well, the answer is yes! Without the distinction between escaping and non-escaping closures we can write a lot of seemingly reasonable code that would be capable of doing some very unreasonable things.

15:44

For example, suppose you wanted to implement a function that took an inout value that was incremented after one second:

```swift
func incrementAfterOneSecond(value: inout Int) {
  DispatchQueue(label: "delay").asyncAfter(deadline: .now() + 1) {
    value += 1
  }
}
```

16:02

This fails to compile right now, but let’s ignore that for a moment. Suppose this was perfectly valid Swift code. Then how do we expect the following code to behave:

```swift
var count = 0
incrementAfterOneSecond(value: &count)
assert(count == 0)
Thread.sleep(forTimeInterval: 2)
assert(count == 1)
```

16:29

We fire off the incrementAfterOneSecond function and can observe that right after we expect the count to stay at 0. But if we sleep for 2 seconds so that the delay can finish, what do we expect? Do we really expect the count to magically update itself to 1? That would be really bizarre and counter to how we think value types should behave. We didn’t perform any mutation between the two asserts yet somehow the value was changed. That’s sounding more like a reference type than a value type.

16:58

This kind of “spooky action from a distance” is precisely what value types were created to avoid, and this is precisely why this code does not compile. The Swift compiler is preventing us from writing code that simply does not make sense. It is not valid to pass a mutable inout variable across an escaping boundary.

17:19

Only non-inout values are allowed to cross escaping boundaries, which means if you really do want to implement this function you have no choice but to turn to a reference type like class:

```swift
class Counter {
  var count = 0
}

func incrementAfterOneSecond(counter: Counter) {
  DispatchQueue(label: "delay").asyncAfter(deadline: .now() + 1) {
    counter.count += 1
  }
}

let counter = Counter()
incrementAfterOneSecond(counter: counter)
assert(counter.count == 0)
Thread.sleep(forTimeInterval: 2)
assert(counter.count == 1)
```

17:42

This compiles just fine, and even the assertions pass. It’s very strange code that doesn’t read linearly from top-to-bottom, which makes it hard to understand, but it is possible to do.

18:02

So, going back to our perform function, the only way to get it to compile is to explicitly mark the work closure as @escaping:

```swift
func perform(work: @escaping () -> Void) {
  print("Begin")
  DispatchQueue(label: "delay").asyncAfter(deadline: .now() + 1) {
    work()
  }
  print("End")
}

perform {
  print("Hello")
}
```

```
Begin
End
Hello
```

18:38

And now this greatly restricts what kinds of closures are allowed to be passed to perform since it is going to be used asynchronously. In particular, the work closure is not allowed to capture a mutable inout reference. Of course this prints in a strange order because execution breezes right past the invocation of asyncAfter, and only after a second passes is the work executed, but now at least we have something in the types telling us this is possible. The fact that we are annotating the work closure with @escaping means we can expect that it will be invoked outside the lifetime of the perform function.

19:24

Interestingly, Swift’s new async keyword gives us the ability to pass non-escaping closures into certain kinds of asynchronous contexts that would typically need escaping. That sounds counterintuitive and possibly dangerous, after all we just saw the weird things that can happen when trying to pass non-escaping closures to escaping contexts. The reason this is possible is because Swift’s async keyword pinpoints one specific kind of asynchrony, and for this one kind it’s completely safe to do.

19:54

For example, suppose we upgraded our perform function to be async so that we could leverage Task.sleep for delaying the work rather than dispatch queue’s asyncAfter function:

```swift
func perform(work: @escaping () -> Void) async throws {
  print("Begin")
  try await Task.sleep(nanoseconds: NSEC_PER_SEC)
  work()
  print("End")
}
```

20:11

And we can invoke this by wrapping it in a task:

```swift
Task {
  try await perform {
    print("Hello")
  }
}
```

20:17

And interestingly when we run this we see that the print statements happen in the right order:

```
Begin
Hello
End
```

20:28

But now the work closure isn’t being escaped at all. It is invoked only in the lifetime of the function scope, so we can now even remove the @escaping attribute:

```swift
func perform(work: () -> Void) async throws {
  print("Begin")
  try await Task.sleep(nanoseconds: NSEC_PER_SEC)
  work()
  print("End")
}
```

20:51

So even though this function is asynchronous, it seems to behave more similarly to the first version of the function when everything was completely synchronous:

```swift
func _perform(work: () -> Void) {
  print("Begin")
  work()
  print("End")
}
```

21:09

We can even invoke the work multiple times with little sleeps between each invocation:

```swift
func perform(work: () -> Void) async throws {
  print("Begin")
  try await Task.sleep(nanoseconds: NSEC_PER_SEC)
  work()
  try await Task.sleep(nanoseconds: NSEC_PER_SEC)
  work()
  try await Task.sleep(nanoseconds: NSEC_PER_SEC)
  work()
  try await Task.sleep(nanoseconds: NSEC_PER_SEC)
  work()
  print("End")
}
```

21:23

We can even perform other kinds of asynchronous work, such as making a network request and then invoking work when it completes:

```swift
func perform(work: () -> Void) async throws {
  print("Begin")
  _ = try await URLSession.shared.data(
    from: .init(string: "http://ipv4.download.thinkbroadband.com/1MB.zip")!
  )
  work()
  print("End")
}
```

21:38

We are doing all of this without an escaping closure.

21:50

We can even access a mutable inout value:

```swift
func perform(value: inout Int, work: () -> Void) async throws {
  print("Begin")
  let (data, _) = try await URLSession.shared.data(
    from: .init(string: "http://ipv4.download.thinkbroadband.com/1MB.zip")!
  )
  work()
  value += data.count
  print("End")
}

Task {
  var count = 0
  try await perform(value: &count) {
    print("Hello")
  }
  print("count", count)
}
```

```
Begin
Hello
End
count 1048576
```

22:19

This still compiles, even though it may seem a little scary. But remember, Swift knows this is an asynchronous function, and it knows the function will complete once all of the asynchronous work is done, therefore there is no need for an escaping closure.

22:44

This is pretty cool. Because the Swift compiler has more information about how this function works on the inside and how it can be invoked, it can completely elide the need for escaping, thus allowing us to use more types of closures when calling this function.

22:59

So, this is what the @escaping attribute gives us. It’s a way to signal to callers of a function that this closure may be referenced at a time after the function finishes its execution. Capturing this information in the type system allows the compiler to catch potential problems in our code where we might accidentally do something that doesn’t play nicely across asynchronous boundaries.

23:19

The @Sendable attribute is very similar, except instead of protecting you from passing unsafe closures to asynchronous contexts it protects you from passing unsafe closures to concurrent contexts. Asynchronous work is just work that will be done at a later time. Concurrent work is work that may be performed multiple times, simultaneously from multiple threads.
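To make that distinction concrete, here is a small sketch (the names `later` and `fanOut` are ours, not from the episode). An async function defers its work but still runs it once, so a plain closure suffices; a function that spins off tasks may run its closure many times simultaneously, so the closure must be @Sendable:

```swift
// Asynchronous: the work happens at a later time, but only once and not
// concurrently, so a plain non-escaping closure is enough.
func later(work: () -> Void) async {
  await Task.yield()
  work()
}

// Concurrent: the work may run multiple times, simultaneously, from
// multiple threads, so it must be both @escaping and @Sendable.
func fanOut(work: @escaping @Sendable () -> Void) {
  for _ in 0..<3 {
    Task { work() }
  }
}
```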

23:40

So, let’s see what kind of new problems can crop up when dealing with concurrent code, and see what the compiler has to say about it.

23:46

Consider the version of the perform async function that invokes the work closure a few times with a sleep before each:

```swift
func perform(work: () -> Void) async throws {
  print("Begin")
  try await Task.sleep(nanoseconds: NSEC_PER_SEC)
  work()
  try await Task.sleep(nanoseconds: NSEC_PER_SEC)
  work()
  try await Task.sleep(nanoseconds: NSEC_PER_SEC)
  work()
  print("End")
}
```

23:57

This is completely safe and valid Swift code. The function can only be called from an asynchronous context and so work does not need to be marked as @escaping.

24:08

However, what if we wanted to run these units of work concurrently rather than serially one after another? We could spin up a task for each:

```swift
func perform(work: () -> Void) async throws {
  print("Begin")
  Task {
    try await Task.sleep(nanoseconds: NSEC_PER_SEC)
    work()
  }
  Task {
    try await Task.sleep(nanoseconds: NSEC_PER_SEC)
    work()
  }
  …
  print("End")
}
```

Escaping closure captures non-escaping parameter ‘work’
Capture of ‘work’ with non-sendable type ‘() -> Void’ in a @Sendable closure

24:27

But we are immediately faced with some compiler warnings and errors. The errors are expected considering what we now know about escaping closures. The initializer of Task takes an escaping closure because of course it wants to be able to invoke the closure after the task has been created. It has no choice but to be escaping. So let’s do that:

```swift
func perform(work: @escaping () -> Void) async throws {
  …
}
```

Capture of ‘work’ with non-sendable type ‘() -> Void’ in a @Sendable closure

24:47

The warnings are saying that we are using something that is not @Sendable inside a context that requires @Sendable . This is very similar to the @escaping error, where we are using something that is not escaping in a context that requires @escaping . The only reason this is a warning and not an error is because typical Swift code is going to have a lot of these warnings since the problems are subtle and ubiquitous, and so Swift wants a soft landing to fixing these problems. However, in Swift 6 these warnings will become errors, like @escaping , and so we will have no choice but to fix them.

25:28

To see why this warning is a good thing, and why we would even want it to be an error someday in the future, let’s see what kind of seemingly reasonable code we can write that turns out to be completely unreasonable. Let’s for a moment assume that this function compiles as-is and try to use it.

25:45

First of all, currently the function is marked as async, but we aren’t actually making use of that asynchronous context inside the function. We are spinning up new tasks, which provide their own asynchronous context, so let’s go back to this being a plain synchronous function:

```swift
func perform(work: @escaping () -> Void) {
  …
}
```

25:59

We can invoke this function by supplying a closure:

```swift
Task {
  try await perform {
    print("Hello")
  }
}
```

26:04

And this compiles just fine because we aren’t doing anything that is unfriendly to escaping closures.

26:10

There is more we can do in this closure that is still within the limits of what escaping closures allow for, such as mutating a variable outside the scope of the closure. To see this let’s try it out inside a new task:

```swift
Task {
  var count = 0
  perform {
    print("Hello")
    count += 1
  }
}
```

26:29

This compiles just fine without any warnings as long as we ignore the compiler error we have in perform .

26:33

However, this code is not safe and will lead to some really surprising results if we allow it. For example, what if we invoked this function a whole bunch of times and mutated count inside each closure? Like, say, 1,000 times:

```swift
Task {
  var count = 0
  for _ in 0..<workCount {
    perform {
      count += 1
    }
  }
  try await Task.sleep(nanoseconds: NSEC_PER_SEC * 2)
  print(count)
}
```

```
2973
```

27:26

There’s nothing about the signature of perform that lets us know concurrent things are happening on the inside, but we, as the implementors of the function, do know this. And if we let 1,000 tasks mutate the count value from multiple threads we will inevitably have race conditions like we witnessed with threads and dispatch queues.

27:33

So, without the compiler knowing about code that is safe to run concurrently, it is possible to write seemingly reasonable code that is completely wrong.

27:42

Let’s now add the @Sendable attribute to our perform function since the Task initializer demands it, and see how that trickles down to other parts of the codebase:

```swift
func perform(work: @escaping @Sendable () -> Void) {
  …
}
```

27:48

Now the warnings in perform go away, but our previously compiling code fails to compile:

```swift
Task {
  var count = 0
  for _ in 0..<workCount {
    perform {
      print("Hello", count)
      count += 1
    }
  }
}
```

Reference to captured var ‘count’ in concurrently-executing code
Mutation of captured var ‘count’ in concurrently-executing code

28:09

And for good reason. Swift now knows enough about the intended use of the work closure, in particular that it will be used to perform asynchronous and concurrent work, that it can outlaw certain types of closures from being passed to perform. You are no longer allowed to capture mutable variables or mutate variables from outside the closure, even though this was a perfectly valid thing to do with escaping closures.

28:33

So, we have no choice but to remove access to this mutable state in order to make the closure safe to use from concurrent contexts, and hence sendable:

perform {
  print("Hello")
}

28:44

So, just as the Sendable protocol allows us to prove to the compiler that values of a specific type are safe to be passed across concurrent boundaries, the @Sendable attribute allows us to prove to the compiler that functions can be safely used from multiple concurrent contexts.

29:02

If you are interacting with some API that takes a closure as an argument, and that closure is marked as @Sendable , you have to prove to the compiler that your closure doesn’t do any funny business in order to pass it along. In particular, this means it can only capture values that conform to the Sendable protocol, and it can only capture by value, hence no mutable captures.
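To make these rules concrete, here is a small sketch. The runConcurrently function is hypothetical, standing in for any API that demands a @Sendable closure:

```swift
// Hypothetical API that requires a @Sendable closure.
func runConcurrently(_ work: @escaping @Sendable () -> Void) {
  Task { work() }
  Task { work() }
}

// OK: capturing an immutable, Sendable value. It is captured by value.
let message = "Hello"
runConcurrently {
  print(message)
}

// Capturing a mutable variable is rejected by the compiler:
var count = 0
runConcurrently {
  // count += 1  // Mutation of captured var 'count' in concurrently-executing code
  // _ = count   // Reference to captured var 'count' in concurrently-executing code
}
count += 1  // Mutating it outside the closure is of course still allowed.
```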

29:21

If you can prove to the compiler that your closure is @Sendable , then the function you are invoking can be free to use that closure in any concurrent way it wants without leading to a race condition, and that’s incredibly powerful.

29:37

You can also use @Sendable to help make types that hold onto closures conform to the Sendable protocol. For example, suppose we were designing a lightweight dependency that abstracts over access to a database. If we follow the design we’ve discussed many times on Point-Free, we might end up with a struct that has a few endpoints for performing database operations:

struct User {}

struct DatabaseClient {
  var fetchUsers: () async throws -> [User]
  var createUser: (User) async throws -> Void
}

extension DatabaseClient {
  static let live = Self(
    fetchUsers: { fatalError() },
    createUser: { _ in fatalError() }
  )
}

30:40

Unfortunately this type is not sendable, and so cannot be passed across concurrent boundaries:

func perform(
  client: DatabaseClient,
  work: @escaping @Sendable () -> Void
) {
  Task {
    _ = try await client.fetchUsers()
    …
  }
  Task {
    _ = try await client.fetchUsers()
    …
  }
  …
}

Capture of ‘client’ with non-sendable type ‘DatabaseClient’ in a @Sendable closure

31:02

If we try to explicitly make DatabaseClient sendable we will see the problem:

struct DatabaseClient: Sendable {
  var fetchUsers: () async throws -> [User]
  var createUser: (User) async throws -> Void
}

Stored property ‘fetchUsers’ of ‘Sendable’-conforming struct ‘DatabaseClient’ has non-sendable type ‘() async throws -> [User]’
Stored property ‘createUser’ of ‘Sendable’-conforming struct ‘DatabaseClient’ has non-sendable type ‘(User) async throws -> Void’

31:13

The client struct cannot be proven to the compiler to be sendable because we don’t know enough about what kinds of closures can be used. Right now they are just any kind of closure, which means they could be reading and writing to some shared mutable variable.

31:28

If we force these closures to be @Sendable, thus heavily restricting what kinds of closures can be used with this client, then we will finally be able to prove to the compiler that DatabaseClient is a sendable type:

struct DatabaseClient: Sendable {
  var fetchUsers: @Sendable () async throws -> [User]
  var createUser: @Sendable (User) async throws -> Void
}

31:46

And now everything compiles with no warnings.

Actors

32:04

It’s pretty incredible to see just how deeply ingrained the sendable concept is in the language. In order to conform to the Sendable protocol or pass a function as a @Sendable closure we need to prove to the compiler that our types and functions are safe to use from multiple concurrent contexts, and once that is done we can have stronger guarantees that we are avoiding race conditions.

32:26

The compiler is holding our hand every step of the way. As soon as we add a field to the type or capture a variable in a closure that breaks sendability, we are instantly notified by the compiler that something went wrong. So far all the examples we have explored have been quite simple so that we can get an understanding of how the concept is used, but in a really complex code base you may make a seemingly innocent change to a type or a function and unwittingly break its ability to be sendable. Having the compiler check your work behind you can be incredibly powerful, and often fixing the problem can force you to just write better code in general, much like how static types can force you to write better code.

33:05

Now, as we mentioned a moment ago, we have technically made our Counter class sendable, but really all we did was use locks inside the class to protect data access, and then we forced the compiler to accept it as sendable-compliant by using @unchecked Sendable .

33:20

Let’s see why this is problematic and what we can do to fix it.

33:25

Right now our Counter class is compiling without warnings, but the use of @unchecked Sendable should be a huge red flag that we are operating outside the purview of the compiler. Sometimes it’s absolutely necessary to use @unchecked Sendable , like when interfacing with old code, but at the end of the day we are just telling the compiler to trust us that everything is kosher. The compiler isn’t proving anything. It’s absolutely possible for us to make changes to this counter that make it no longer safe to pass across concurrent boundaries, and Swift will not be able to detect that.

33:54

To see how this can happen, let’s add a small new feature to the Counter class. We will add a decrement endpoint for counting down, and we will introduce a maximum field to hold the maximum value the counter has ever held. It needs to be updated in the increment method, but nothing needs to happen in decrement:

class Counter: @unchecked Sendable {
  let lock = NSLock()
  var count = 0
  var maximum = 0

  func increment() {
    self.lock.lock()
    defer { self.lock.unlock() }
    self.count += 1
    self.maximum = max(self.count, self.maximum)
  }

  func decrement() {
    self.lock.lock()
    defer { self.lock.unlock() }
    self.count -= 1
  }
}

34:34

If we accidentally update the maximum field outside of the lock, say if the method was refactored to not use defer , then we have introduced a race condition:

func increment() {
  self.lock.lock()
  self.count += 1
  self.lock.unlock()
  self.maximum = max(self.count, self.maximum)
}

34:44

This class is no longer safe to pass across concurrent boundaries. If we fire up enough threads to invoke the increment method, we will eventually get to a place where the maximum field is less than the count field, because threads interleaved and we wrote back stale data. This is very subtle, and the compiler is not holding our hand to let us know that something bad happened.

35:02

This is why there is an entirely new kind of data type in Swift 5.5 that allows you to protect a piece of mutable state from these kinds of data races. And this data type is deeply ingrained into the language so that the compiler can know when you are using it in a way that could potentially lead to data races.

35:17

This kind of type is called an “actor”, and the concept lives right alongside structs, enums, and classes:

actor Counter {
}

35:25

Structs and enums are Swift’s tools for modeling value types: data that holds multiple values at once, or a single choice among several cases. Classes are reference types, representing data that has identity and can be passed around by reference. Actors are also reference types, but ones that further synchronize access to their data and methods.

35:43

We can implement this actor much like how we first tried implementing the Counter class:

actor CounterActor {
  var count = 0

  func increment() {
    self.count += 1
  }
}

35:58

This is exactly how we wanted to implement the Counter class but quickly found out that there was the potential for data races when invoking the increment method.

36:05

The other features of the counter type can also be implemented easily:

actor CounterActor {
  var count = 0
  var maximum = 0

  func increment() {
    self.count += 1
    self.maximum = max(self.count, self.maximum)
  }

  func decrement() {
    self.count -= 1
  }
}

36:19

Notice that we don’t have any locks or dispatch queues, and we don’t need to maintain a private underscored piece of mutable state just so that we can lock access to it in a computed property. We also don’t have to worry about setting the maximum value outside the lock because the entire method is synchronized. Overall this type is much simpler than the class-based counter type.

36:39

So, this looks great so far, but how can this seemingly simple type protect us from data races? Isn’t it possible to just fire up a bunch of threads and hammer on the increment method?

36:46

Well, let’s try:

let counter = CounterActor()

for _ in 0..<workCount {
  Thread.detachNewThread {
    counter.increment()
  }
}

Actor-isolated instance method ‘increment()’ can not be referenced from a non-isolated context

36:53

Looks like we are not even allowed to call the increment method.

36:57

This is an example of Swift helping us by not letting us do something that could lead to data races. We are not allowed to call actor methods from just any context, because the whole point of actors is to protect the data they hold. If one thread tries to invoke increment while another thread is in the middle of running increment , then the first thread needs to somehow “wait” until the second thread finishes.

37:16

Now, we don’t actually want to “wait” in the sense of holding up the first thread until the second thread finishes. As we saw previously, the Swift concurrency runtime manages a very small pool of cooperative threads, so blocking even a single one of them for any amount of time wastes roughly 10% of the system’s concurrent capacity. And imagine if you had many threads blocked. You could accidentally starve the entire system.

37:37

This is why there is no mention of locks in the actor. We don’t want to literally lock, holding up one thread while another does its work. Instead, synchronization in actors is handled using the same asynchronous tools we explored in the previous episodes. You can only invoke the increment method if you are in an asynchronous context, in which case you can simply await the call:

for _ in 0..<workCount {
  Task {
    await counter.increment()
  }
}

38:04

It may seem strange that we have to await invoking the increment method, especially since the method is not even declared as async:

func increment() {
  …
}

38:12

As far as the actor is concerned, the method is perfectly synchronous. It can make any changes it wants to its mutable state without worrying about other threads, because the actor fully synchronizes access to its data and methods.
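Putting both perspectives side by side, condensing the code from above:

```swift
actor CounterActor {
  var count = 0

  func increment() {
    // Inside the actor: fully synchronous, no await needed,
    // because we are already running on the actor's isolated context.
    self.count += 1
  }
}

let counter = CounterActor()
Task {
  // Outside the actor: the call site must be able to suspend
  // while other tasks finish their work, so it is marked with await.
  await counter.increment()
}
```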

38:21

But as far as the outside world is concerned, this method cannot be called synchronously, because there may be multiple tasks trying to invoke increment at once, and the actor needs to do extra work to synchronize access. The way the actor can do this efficiently is by operating in an asynchronous context. This means that if one task wants to invoke increment while another task is already in the middle of running it, the system can suspend the first task, thanks to the asynchronous context, thus freeing up its thread to be used by other tasks. Then, once the second task has finished running increment , our task can be resumed, potentially on a completely different thread.

38:55

Let’s make sure this does what we expect. We want to give these tasks a little bit of time to do their work and then print the counter’s count so that we can make sure it equals 1,000:

Thread.sleep(forTimeInterval: 1)
print("counter.count", counter.count)

Actor-isolated property ‘count’ can not be referenced from a non-isolated context

39:09

But even accessing a property on the actor is not allowed. This is because it’s possible to try reading the count while another task is in the middle of updating it, which could lead us to getting an out-of-date value.

39:20

This is technically a problem we even had when dealing with the counter class in threads and queues. We never synchronized access to the count property, which means we could have been reading the value while some other thread was in the middle of a write, and hence gotten the wrong value.

39:33

We again need to execute this in an asynchronous context in order for the actor to synchronize access:

Thread.sleep(forTimeInterval: 1)
Task {
  await print("counter.count", counter.count)
}

39:44

This gives the actor the ability to suspend if it needs to wait for other tasks to exit the increment method.

39:49

If we run this a few times we see we always get 1,000: counter.count 1000

40:01

We can even amp it up to 10,000 tasks and we get 10,000 consistently: counter.count 10000

40:10

This is absolutely awesome. We get all of the benefits of locking, but without needing to explicitly use locks anywhere, without accidentally holding up an entire thread while another thread does its work, and all the while the compiler is holding our hand to make sure we invoke this method in the right context. It’s absolutely amazing to see.
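As an aside, sleeping for a second is just a guess at how long the tasks need. A task group, one of the structured tools this series turns to next, lets us await all of the increments before reading the count. This is only a sketch of that alternative, not what the episode does:

```swift
func runExperiment() async {
  let counter = CounterActor()

  // Spawn 1,000 child tasks and wait for every one of them to finish.
  await withTaskGroup(of: Void.self) { group in
    for _ in 0..<1_000 {
      group.addTask { await counter.increment() }
    }
  }

  // Every increment has completed by the time we get here,
  // so no Thread.sleep guesswork is needed.
  print("counter.count", await counter.count)
}
```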

40:26

Let’s also test that the decrement method of the actor is synchronized. We will fire up another 10,000 tasks calling the decrement method, and at the end of that the count should be back to 0:

let counter = CounterActor()

for _ in 0..<workCount {
  Task {
    await counter.increment()
  }
  Task {
    await counter.decrement()
  }
}

Thread.sleep(forTimeInterval: 1)
Task {
  await print("counter.count", counter.count)
}

counter.count 0

40:49

Heck, let’s amp it up to 100,000: counter.count 0

40:55

Amazingly it works. We are firing up 200,000 tasks, all running concurrently, all fighting each other to increment or decrement the count, and we still get a completely consistent answer at the end.

41:08

We can even print the current thread from the increment and decrement methods to show that we aren’t exploding the number of threads, and that the increments and decrements are interleaving with each other:

Increment <NSThread: 0x101304cb0>{number = 2, name = (null)}
Increment <NSThread: 0x101304cb0>{number = 2, name = (null)}
Decrement <NSThread: 0x101304cb0>{number = 2, name = (null)}
Decrement <NSThread: 0x101304cb0>{number = 2, name = (null)}
Decrement <NSThread: 0x1015040e0>{number = 3, name = (null)}
…
Increment <NSThread: 0x1015042e0>{number = 4, name = (null)}
Decrement <NSThread: 0x101205590>{number = 9, name = (null)}
Decrement <NSThread: 0x101304cb0>{number = 2, name = (null)}
Increment <NSThread: 0x101704080>{number = 11, name = (null)}
Decrement <NSThread: 0x101012270>{number = 6, name = (null)}
Increment <NSThread: 0x1015046a0>{number = 10, name = (null)}

counter.count 0

41:37

Super impressive. And as long as we can trust Swift’s concurrency runtime we can have a lot of confidence that this code works the way we expect. It’s extremely simple, and it’s basically written in the same style we would employ if we were writing synchronous code. We don’t have to fuss around with locks.

41:54

We alluded to this a moment ago, but although actors synchronize outside access to their mutable state, we don’t see any of that from within the actor. Notice that in increment we don’t need to do any awaiting, because we are already in a fully synchronized context:

func increment() {
  self.count += 1
  self.maximum = max(self.maximum, self.count)
}

42:10

And if we introduce another method on the actor, like moving the max computation into a helper:

actor CounterActor {
  …

  private func computeMaximum() {
    self.maximum = max(self.maximum, self.count)
  }
}

42:27

We are able to call this method from within another method without awaiting:

func increment() {
  self.count += 1
  self.computeMaximum()
}

42:28

This means that working from within an actor can be quite simple and ergonomic, and it’s only when the outside world needs to deal with the actor that we have to worry about working in an asynchronous context.
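One related tool, not covered in the episode but worth knowing: an actor member that only touches immutable state can be marked nonisolated, which lets the outside world call it without await. A minimal sketch:

```swift
import Foundation

actor CounterActor {
  // Immutable state is inherently safe to read from anywhere.
  let id = UUID()
  var count = 0

  // nonisolated opts this property out of actor isolation; the compiler
  // verifies that it never touches the actor's mutable state.
  nonisolated var label: String {
    "Counter \(self.id)"
  }
}

let counter = CounterActor()
print(counter.label)  // No await required.
```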

42:37

Speaking of the maximum, let’s see what its value is after all of these tasks run, but let’s go back down to just 1,000 tasks for simplicity:

let workCount = 1_000
…
Thread.sleep(forTimeInterval: 1)
Task {
  await print("counter.count", counter.count)
  await print("counter.maximum", counter.maximum)
}

If we run this we get something reasonable:

counter.count 0
counter.maximum 3

42:47

But if we run again we get something different:

counter.count 0
counter.maximum 8

And again something different:

counter.count 0
counter.maximum 6

43:03

Does this mean we have a race condition in our code and that somehow the actor isn’t protecting us?

43:08

Well, no, actually. There is no race condition here. This is just an example of something that is non-deterministic by its very nature. We have 1,000 increment tasks and 1,000 decrement tasks running concurrently, and the order in which they run is not deterministic. Sometimes we may get a long stretch of consecutive increment tasks running, allowing the maximum to climb a little higher, and other times we may get a more balanced alternation of incrementing and decrementing tasks. There really is no way to know, and that’s why this value can change.

43:38

It’s worth stressing again that this is not a race condition. It’s not deterministic, but the max computation does accurately represent the maximum value the counter saw during its lifetime. Whereas if we did the same in the Counter class without properly locking the maximum field’s mutation we would get a value that did not actually represent the largest value the counter class encountered during its lifetime.

43:59

This is yet another example of how difficult multithreaded programming can be. Just because we have extremely powerful tools for preventing data races doesn’t mean we have removed the possibilities of non-determinism creeping into our code. Just by virtue of the fact that we are firing off a bunch of concurrent tasks at once we have no way to avoid introducing some non-determinism into the system based on how the system is going to schedule and prioritize all of those tasks. If we don’t want that kind of non-determinism then we shouldn’t be performing concurrent work.

44:28

But the issue of non-determinism is completely separate from the issue of data races, and Swift’s tools are tuned to address data races, not non-determinism.

Next time: structured vs. unstructured

44:36

We’ve now seen how Swift’s new concurrency tools compare to many of the other tools on Apple’s platforms, including threads, operation queues, dispatch queues and the Combine framework. And in pretty much every category that we considered, Swift’s new concurrency tools blew the old tools out of the water:

44:52

First, the concepts of asynchrony and concurrency are now baked directly into the language rather than bolted on as a library. Swift can now express when a function needs to perform asynchronous work, using the new async keyword, and Swift can express types and functions that can be used concurrently, using the new Sendable protocol and @Sendable attribute.

45:11

Second, although we don’t explicitly manage something like a thread pool or an execution queue, somehow Swift allows spinning up many thousands of concurrent tasks without exploding the number of threads created. In fact, a max of only 10 threads seems to be created for our computers.

45:27

Third, tasks have all the features that threads and queues had, such as priority, cooperative cancellation and storage, but in each case tasks massively improve the situation over the older tools. Cancellation is deeply ingrained into the system, so that cancelling a top-level task trickles down to its child tasks, and task-local storage is now inherited from parent task to child task, allowing you to nest task locals in complex yet understandable ways.

45:55

Fourth, although Swift’s concurrency runtime limits us to a small number of threads in the cooperative thread pool, Swift does give us the tools that help us not clog up that pool. Using things like non-blocking asynchronous functions and Task.yield we suspend our functions to allow other tasks to use our thread, and then once we are ready to resume a thread will be automatically provided to us.
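For example, a CPU-bound loop can periodically yield so that it doesn’t monopolize one of the pool’s few threads. A sketch, not code from the episode:

```swift
func crunchNumbers() async -> Int {
  var total = 0
  for n in 1...1_000_000 {
    total += n
    // Every so often, suspend and give other tasks a turn
    // on this cooperative thread; we resume when one frees up.
    if n % 10_000 == 0 {
      await Task.yield()
    }
  }
  return total
}
```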

46:19

Fifth, and perhaps most exciting, Swift now provides a first class type for synchronizing and isolating mutable data in such a way that the compiler understands when you might have used it incorrectly. They’re called actors, and they allow you to largely write code that looks like simple, synchronous code, but under the hood it is locking and unlocking access to the mutable data.

46:43

Already it’s pretty impressive for Swift to accomplish so much so quickly. But there’s even more. Swift’s new concurrency tools allow us to write our asynchronous and concurrent code in a style that is substantially different from how we wrote it with threads and queues. In fact, some features of Swift concurrency are so unique that there is simply nothing in the older concurrency tools, such as threads and queues, to compare them to.

47:08

So, we’d like to take one more episode in this series on concurrency to discuss the amazing features that don’t quite fit into our narrative of looking at concurrency through the lens of the past.

47:19

And we will begin by discussing the concept of structured concurrency. Well really, let’s back up a bit and talk about structured programming in general so that we know why structured concurrency is such a big deal.

47:29

Most modern, popular languages are primarily “structured programming languages”, so there’s a very good chance that you have never really programmed in an “unstructured” way. To put it simply, structured programming is a paradigm that aims to make programs read linearly from top-to-bottom. Doing so can help you compartmentalize parts of the program as black boxes so that you don’t have to be intimately familiar with all of their details at all times. The bread-and-butter of structured programming are tools like conditionals, loops, function calls and recursion.

48:01

This may seem very intuitive and obvious to you, but back in the 1950s it wasn’t so clear. At that time human-readable programming languages were still quite nascent, and so those languages had tools that made a lot of sense for how the code ran at a low level on the machine, but were difficult for humans to fully understand.

48:20

An example of such a tool is the jump command. It allows you to redirect the flow of execution of the program to any other part of the program. Swift doesn’t have this tool, at least not in full generality, but let’s look at what it could have looked like…next time!

References

NSOperation
Mattt • Jul 14, 2014
In life, there’s always work to be done. Every day brings with it a steady stream of tasks and chores to fill the working hours of our existence. Productivity is, as in life as it is in programming, a matter of scheduling and prioritizing and multi-tasking work in order to keep up appearances.
https://nshipster.com/nsoperation/

libdispatch efficiency tips
Thomas Clement • Apr 26, 2018
The libdispatch is one of the most misused API due to the way it was presented to us when it was introduced and for many years after that, and due to the confusing documentation and API. This page is a compilation of important things to know if you’re going to use this library. Many references are available at the end of this document pointing to comments from Apple’s very own libdispatch maintainer (Pierre Habouzit).
https://gist.github.com/tclementdev/6af616354912b0347cdf6db159c37057

Modernizing Grand Central Dispatch Usage
Apple • Jun 5, 2017
macOS 10.13 and iOS 11 have reinvented how Grand Central Dispatch and the Darwin kernel collaborate, enabling your applications to run concurrent workloads more efficiently. Learn how to modernize your code to take advantage of these improvements and make optimal use of hardware resources.
https://developer.apple.com/videos/play/wwdc2017/706/

What went wrong with the libdispatch. A tale of caution for the future of concurrency.
Thomas Clement • Nov 23, 2020
https://tclementdev.com/posts/what_went_wrong_with_the_libdispatch.html

Introducing Swift Atomics
Karoy Lorentey • Oct 1, 2020
I’m delighted to announce Swift Atomics, a new open source package that enables direct use of low-level atomic operations in Swift code. The goal of this library is to enable intrepid systems programmers to start building synchronization constructs (such as concurrent data structures) directly in Swift.
https://www.swift.org/blog/swift-atomics/

Downloads

Sample code: 0193-concurrency-pt4