EP 192 · Concurrency's Future · Jun 13, 2022 · Members

Video #192: Concurrency's Future: Tasks and Cooperation


Episode: Video #192 Date: Jun 13, 2022 Access: Members Only 🔒 URL: https://www.pointfree.co/episodes/ep192-concurrency-s-future-tasks-and-cooperation


Description

Let’s look at what the future of concurrency looks like in Swift. A recent release of Swift came with a variety of tools for concurrency. Let’s examine its fundamental unit, the task, in depth, and explore how tasks “cooperate” in your applications.

Video

Cloudflare Stream video ID: c121ef0040d78fd9b5b638e314b7130e Local file: video_192_concurrency-s-future-tasks-and-cooperation.mp4 *(download with --video 192)*

References

Transcript

0:36

So, now that we are intimately familiar with what concurrency tools Apple has provided to us in the past and present, let’s look at what the future of concurrency looks like in Swift.

0:46

As we all know, Swift 5.5 was released 9 months ago with a variety of tools for concurrency. These tools are in many ways simpler and more robust than the tools we just covered, and they solve a lot of the problems we encountered. Best of all, the tools provide a fully integrated solution to data races, and it’s really amazing to see. Once these features are fully baked into the language you will seldom have to think of asynchrony in terms of threads or reactive streams, and instead you will be able to write code that largely looks the same as if you were working entirely with synchronous processes.

1:21

So, let’s repeat the program we have put forth when exploring threads, operation queues and dispatch queues, but this time with a focus on Swift’s modern concurrency tools. These tools are quite a bit different from the threads and queues we previously explored because they are deeply integrated with the language itself, and not just a library built with the language.

Task basics

1:58

The fundamental unit for creating an asynchronous context is known as Task , and it can be created in a way similar to threads and dispatch work items: Task { }

2:10

If in here we print the current thread we are on: Task { print(Thread.current) }

2:13

And if we run this we see that a new thread was spun up to run this task: <NSThread: 0x1062040f0>{number = 2, name = (null)}

2:18

This is starting to feel quite similar to what we experienced with threads. In particular, it resembles the detachNewThread class method on Thread , which creates a new thread and eagerly starts its execution. It also feels a little different from operation queues and dispatch queues, where we needed to create both a queue for executing work and then create an operation or work item to execute.

2:38

It kind of feels like the Task type has taken a step backwards by again conflating how work is done with what work is to be done. However, this is not really the case. There is actually an object somewhere in the background that controls how tasks are executed, but Swift does a really good job of hiding it from us for the most part, and we usually don’t need to think about it.

2:56

Although this task interface seems similar to threads, it is actually very, very different. If we assign this task to a variable: let task = Task { }

3:04

And then check out its type: let task: Task<(), Never> = Task { }

3:10

We will see it has two generics, currently set to Void and Never . The first is the type of value that will be produced from the task after the asynchronous work is finished. Right now it’s void to represent that it doesn’t produce anything of interest. And the second generic is the type of error that can be thrown inside the closure. Since Swift does not support typed throws (yet) this generic will always be either Never to represent that it cannot fail, or Error to represent that any kind of error can be thrown.

3:34

We can construct a task with different generics by returning a value from the task closure: let task: Task<Int, Never> = Task { return 42 }

3:42

Or throwing an error: let task: Task<Int, Error> = Task { struct SomeError: Error {} throw SomeError() return 42 }
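Although not shown above, the value a task produces can be retrieved later by awaiting its value property, with try required when the failure type is Error . Here is a minimal, self-contained sketch (the succeeding and failing variable names, and the trailing Thread.sleep to keep the executable alive, are our own additions):

```swift
import Foundation

struct SomeError: Error {}

// A task that succeeds with an Int, and one that throws.
let succeeding: Task<Int, Never> = Task { 42 }
let failing: Task<Int, Error> = Task { throw SomeError() }

Task {
  // Awaiting `value` suspends until the task's closure finishes.
  let answer = await succeeding.value
  print("answer:", answer) // answer: 42

  // A throwing task surfaces its error when awaited with `try`.
  do {
    _ = try await failing.value
  } catch {
    print("caught:", error)
  }
}

// Keep the executable alive long enough for the tasks to finish.
Thread.sleep(forTimeInterval: 1)
```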

3:58

Next, we can see that the closure provided to this task initializer has a very special form: extension Task where Failure == Error { public init( priority: TaskPriority? = nil, operation: @escaping @Sendable () async throws -> Success ) }

4:03

It has this strange @Sendable annotation, which is something we’ll be getting deep into soon enough. And this closure signature is also marked with the keyword async . This is a new keyword in Swift 5.5 that publicly declares when a function will perform asynchronous work in its implementation. The compiler enforces special restrictions on the invocation of such functions. In particular, you must provide an asynchronous context in order to call it.

4:28

For example, suppose we had a function declared as async like so: func doSomethingAsync() async { }

4:37

We cannot simply invoke this function directly if we are not in an asynchronous context: doSomethingAsync() ‘async’ call in a function that does not support concurrency

4:44

On the other hand, we can perform this work in the Task we created a moment ago, we just have to prefix it with await to make it explicit that there is something asynchronous about the function invocation: Task { await doSomethingAsync() }

4:56

This is possible specifically because the task initializer creates a brand new asynchronous context to work in, and then executes our async closure in that context.

5:06

The use of await here creates what is known as a “suspension point.” While invoking the asynchronous work in the doSomethingAsync function we can completely halt the execution of our current task, and even potentially give up our current thread for other tasks to make use of. And then once the function finishes doing its work the task can resume its execution, even potentially on a completely different thread. It is an incredibly important concept for cooperative concurrency, and something we will be exploring deeply soon.
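To make suspension points a little more concrete, here is a sketch of a version of doSomethingAsync that suspends explicitly via Task.yield() , which unconditionally pauses the current task so other tasks can run (the body and print statements are our own invention for illustration):

```swift
import Foundation

// An async function with an explicit suspension point. At the
// `await Task.yield()` the task may pause, give up its thread, and
// later resume, potentially on a different thread.
func doSomethingAsync() async {
  print("before suspension:", Thread.current)
  await Task.yield() // suspension point
  print("after resumption:", Thread.current)
}

Task {
  await doSomethingAsync()
}

// Keep the executable alive long enough for the task to finish.
Thread.sleep(forTimeInterval: 1)
```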

5:32

We can also call out to async functions from other async functions: func doSomethingElseAsync() async { await doSomethingAsync() } Here it is also perfectly fine to invoke another asynchronous function because we are in an asynchronous context.

5:46

This is in stark contrast to what we witnessed with threads and queues where there was no compiler-level distinction between asynchronous and synchronous code. Sure we could create new threads and queues in order to perform asynchronous work, but to Swift all of those APIs just took regular old closures, and then somewhere deep in Apple’s libraries it handled the details of creating pthreads to execute the work.

6:07

But here we are getting a compile-time distinction between synchronous code and asynchronous code. This is incredibly powerful, and is also analogous to error handling in Swift, where there is a compile time distinction between functions that can throw errors and ones that cannot.

6:20

For example, if you have a function that can throw an error you must explicitly mark it as throws : func doSomethingThatCanFail() throws {}

6:28

You cannot invoke these functions without being provided a context that is allowed to fail: try doSomethingThatCanFail()

6:35

But if we force a non-failable context, such as a new function, we will see the problem: func doSomething() { try doSomethingThatCanFail() } Errors thrown from here are not handled

6:50

To fix this you either have to mark the surrounding function as throwing in order to provide a failable context: func doSomethingElseThatCanFail() throws { try doSomethingThatCanFail() }

6:57

Or we can open up a do scope, which is analogous to Task . Where a Task initializer allows us to spin up some asynchronous work in a synchronous context, a do block allows us to spin up a new failable context to perform possibly throwing work: func doSomething() { do { try doSomethingThatCanFail() } catch { } }

7:22

It’s incredibly powerful to have this kind of information available to the compiler. It can prevent us from doing asynchronous or failable work in contexts where it is not appropriate to do that kind of work. And when we get caught on a compiler error, it either means we need to start up a new asynchronous or failable context, if that’s appropriate, or we need to somehow make the parent context into an asynchronous or failable one.

7:43

It’s worth mentioning that decorating functions with these little keywords can be thought of as a sugar-fied version of a function that returns tasks and results. For example, a throwing function like this: (A) throws -> B

7:58

Can be thought of as a function that drops the throws keyword and just returns a result instead: (A) -> Result<B, Error>

8:06

We’ve also discussed many times in the past that another Swift keyword is just syntactic sugar, and that’s inout . Functions that take an inout argument: (inout A) -> B

8:15

Can be rewritten to drop the inout keyword and return a tuple of B and A : (A) -> (B, A)

8:23

The same is true of asynchronous functions. An async function like this: (A) async -> B

8:30

Can be thought of as a function that drops the async keyword and just returns a task instead: (A) -> Task<B, Never>
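As a quick sketch of this equivalence (the double and doubleTask functions here are hypothetical), the same computation can be expressed in either shape, and calling code can move between them:

```swift
import Foundation

// The sugared form: (Int) async -> Int.
func double(_ n: Int) async -> Int { n * 2 }

// The de-sugared form: (Int) -> Task<Int, Never>.
func doubleTask(_ n: Int) -> Task<Int, Never> {
  Task { await double(n) }
}

Task {
  let direct = await double(21)
  let viaTask = await doubleTask(21).value
  print(direct, viaTask) // 42 42
}

// Keep the executable alive long enough for the task to finish.
Thread.sleep(forTimeInterval: 1)
```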

8:37

In fact, the generics for Task and Result have the same names: struct Task<Success, Failure> enum Result<Success, Failure>

8:45

Further, tasks themselves have a de-sugared form that we have studied quite a bit in past Point-Free episodes where it can be expressed as a function that takes a function as an argument: (A) -> ((B) -> Void) -> Void

9:09

This is a fully curried function, where it accepts one argument and returns a function. We can “uncurry” this so that it takes two arguments: (A, (B) -> Void) -> Void

9:19

And this should be a familiar shape to anyone who has done asynchronous programming in the past. This is the fundamental shape of a completion handler. You pass the function a value of type A and a function that accepts B ’s so that the function can invoke that callback whenever it wants.

9:33

Most of Apple’s asynchronous APIs have this shape, such as URLSession ’s dataTask method: dataTask: (URL, (Data?, URLResponse?, Error?) -> Void) -> Void

9:44

Or MKLocalSearch ’s start method: start: ((MKLocalSearch.Response?, Error?) -> Void) -> Void

9:52

And many, many others.

9:55

By having first class support for asynchrony in the language we can replace convoluted, confusing signatures like this: (A, (B) -> Void) -> Void

10:01

With simple signatures like this: (A) async -> B
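Swift also ships a bridge between these two shapes: withCheckedContinuation wraps a callback-based function in an async one. Here is a sketch, where legacyFetch is a hypothetical stand-in for a callback-shaped API:

```swift
import Foundation

// A callback-shaped API: (A, (B) -> Void) -> Void.
func legacyFetch(_ n: Int, completion: @escaping (Int) -> Void) {
  DispatchQueue.global().async { completion(n * 2) }
}

// The async shape: (A) async -> B, built from the callback shape.
func fetch(_ n: Int) async -> Int {
  await withCheckedContinuation { continuation in
    legacyFetch(n) { result in
      // Resuming the continuation hands the value back to the
      // suspended async caller.
      continuation.resume(returning: result)
    }
  }
}

Task {
  let result = await fetch(21)
  print("result:", result) // result: 42
}

// Keep the executable alive long enough for the task to finish.
Thread.sleep(forTimeInterval: 1)
```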

10:04

So we have now seen 3 examples of how Swift provides sugar for common patterns, such as failures, asynchrony and in-place mutation. By having these features baked into the language at a deep level the ergonomics and features of these tools can be greatly improved.

10:18

Let’s keep digging deeper. If we spin up a bunch of tasks and print their respective threads we will see they spin up a bunch of different threads: Task { print("1", Thread.current) } Task { print("2", Thread.current) } Task { print("3", Thread.current) } Task { print("4", Thread.current) } Task { print("5", Thread.current) } 2 <NSThread: 0x1062040f0>{number = 3, name = (null)} 1 <NSThread: 0x106004bd0>{number = 4, name = (null)} 3 <NSThread: 0x106105c90>{number = 5, name = (null)} 4 <NSThread: 0x106004430>{number = 6, name = (null)} 5 <NSThread: 0x106007f40>{number = 7, name = (null)}

10:29

And like we saw with threads and operation queues, the order of these executions is non-deterministic. Swift does not guarantee the order in which these tasks will start. So, we are back to an asynchrony model that defaults to concurrent, unlike dispatch queues.

10:48

Now, is it true that every time we create a task we are secretly also creating a thread? If that were the case, what would even be the benefit of tasks over the regular Thread API?

10:58

Well, luckily that is not the case. We can create 1,000 tasks and only a small handful of threads will be spun up to handle their work: for n in 0..<workCount { Task { print(n, Thread.current) } } 1 <NSThread: 0x1011caaa0>{number = 2, name = (null)} 0 <NSThread: 0x101304100>{number = 3, name = (null)} 7 <NSThread: 0x101304100>{number = 3, name = (null)} 11 <NSThread: 0x101304100>{number = 3, name = (null)} 13 <NSThread: 0x101304100>{number = 3, name = (null)} 14 <NSThread: 0x101304100>{number = 3, name = (null)} … 988 <NSThread: 0x1011caaa0>{number = 2, name = (null)} 916 <NSThread: 0x101304100>{number = 3, name = (null)} 804 <NSThread: 0x1015040f0>{number = 7, name = (null)} 923 <NSThread: 0x100710190>{number = 11, name = (null)} 989 <NSThread: 0x10070e380>{number = 5, name = (null)}

11:07

Looks like only about 10 threads were ever created.

11:15

So tasks seem to be capable of solving the thread explosion problem by using a pool of threads, all without having to manage an auxiliary object like an operation queue or dispatch queue. That’s already an improvement over threads and queues.

11:26

Tasks also have the ability to schedule work after waiting some time. Threads accomplished this with a sleep function that was unfortunately blocking and so would tie up thread resources, and queues accomplished this by scheduling work to be executed on a future date.

11:40

There is a throwing, asynchronous static function on Task called sleep that takes the number of nanoseconds you want to sleep: try await Task.sleep(nanoseconds: <#UInt64#>)

11:47

It’s not super ergonomic to have to use nanoseconds, but there are improvements coming very soon to Swift that will allow you to specify these durations in friendlier units.
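Until then, a common workaround is a small extension that converts seconds to nanoseconds. This sleep(seconds:) helper is our own invention, not a standard API:

```swift
import Foundation

// A hypothetical convenience that wraps the nanosecond-based sleep
// in a seconds-based one.
extension Task where Success == Never, Failure == Never {
  static func sleep(seconds: Double) async throws {
    try await Task.sleep(nanoseconds: UInt64(seconds * 1_000_000_000))
  }
}

Task {
  try await Task.sleep(seconds: 0.25)
  print("woke up after a quarter second")
}

// Keep the executable alive long enough for the task to finish.
Thread.sleep(forTimeInterval: 1)
```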

11:54

Although this method is called “sleep”, it is quite different from the sleep we have used many times on Thread . This sleep does pause the current task for an amount of time, but it does not hold up the thread. Remember that a task does not correspond to a thread, because as we just saw, 1,000 tasks were serviced by only 10 threads.

12:11

We can spin up 1,000 tasks and put each one to sleep for a very long time: for n in 0..<workCount { Task { try await Task.sleep(nanoseconds: NSEC_PER_SEC * 1000) } }

12:20

When we await the sleep function we are able to suspend the execution of this task, thus freeing up its thread to be used by other tasks. If we run this and pause the executable we will see a small number of threads have been created, but they all have just a single stack frame with a cryptic function name: Thread 2#0 0x00000001a05ef604 in __workq_kernreturn () Thread 3#0 0x00000001a05ef604 in __workq_kernreturn () Thread 4#0 0x00000001a05ef604 in __workq_kernreturn () Thread 5#0 0x00000001a05ef604 in __workq_kernreturn () Thread 6#0 0x00000001a05ef604 in __workq_kernreturn () Thread 7#0 0x00000001a05ef604 in __workq_kernreturn () Thread 8#0 0x00000001a05ef604 in __workq_kernreturn () Thread 9#0 0x00000001a05ef604 in __workq_kernreturn () Thread 10#0 0x00000001a05ef604 in __workq_kernreturn () Thread 11#0 0x00000001a05ef604 in __workq_kernreturn () Thread 12#0 0x00000001a05ef604 in __workq_kernreturn ()

12:32

This __workq_kernreturn function is how Grand Central Dispatch parks a thread while it waits for work to be scheduled on it. So, although some threads have been created, this is a very lightweight way to let them rest without doing any work, and most importantly these threads are free for other tasks to perform their work on them.

12:47

This means the sleep function on Task works cooperatively so that other tasks can have a chance to do their work on the cooperative thread pool, and then once the time passes our task will be resumed. We aren’t blocking up any of the threads in the pool, which would be disastrous considering there are so few threads in the pool.

13:04

Interestingly, after each suspension point our task can theoretically be resumed on any thread, not just the one we started on. To see this we can alternate printing the current thread and sleeping: for n in 0..<workCount { Task { let current = Thread.current try await Task.sleep(nanoseconds: 1_000_000) if current != Thread.current { print(n, "Thread changed from", current, "to", Thread.current) } } } 944 Thread changed from <NSThread: 0x10648a790>{number = 2, name = (null)} to <NSThread: 0x1063044c0>{number = 3, name = (null)} 945 Thread changed from <NSThread: 0x1063044c0>{number = 3, name = (null)} to <NSThread: 0x10648a790>{number = 2, name = (null)} 997 Thread changed from <NSThread: 0x10648a790>{number = 2, name = (null)} to <NSThread: 0x1063044c0>{number = 3, name = (null)} 979 Thread changed from <NSThread: 0x1063044c0>{number = 3, name = (null)} to <NSThread: 0x10648a790>{number = 2, name = (null)} 920 Thread changed from <NSThread: 0x10648a790>{number = 2, name = (null)} to <NSThread: 0x1063044c0>{number = 3, name = (null)} 896 Thread changed from <NSThread: 0x1063044c0>{number = 3, name = (null)} to <NSThread: 0x10648a790>{number = 2, name = (null)} …

13:41

The thread changes quite a bit, and so we do see it’s possible for suspended tasks to be resumed on a different thread. This means we should not make any assumptions about adjacent lines of code executing on the same thread. This seems quite strange, but these tools are specifically designed to largely keep us from thinking about asynchrony and concurrency in terms of threads.

Task priority and cancellation

14:48

Like all other forms of concurrency discussed so far, tasks also support the concept of priority, and it’s a granular value of finitely many possibilities: Task(priority: .low) { print("low") } Task(priority: .high) { print("high") } high low

15:14

This allows you to tell the system how important this task is, and the runtime may be able to give this task more or less time depending on various factors.
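The priority a task is running at can also be inspected from inside the task via the static Task.currentPriority . A small sketch (the prints and trailing sleep are our own additions):

```swift
import Foundation

// Each task can read the priority it was created with.
Task(priority: .low) {
  print("low task priority:", Task.currentPriority)
}
Task(priority: .high) {
  print("high task priority:", Task.currentPriority)
}

// Keep the executable alive long enough for the tasks to finish.
Thread.sleep(forTimeInterval: 1)
```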

15:30

Next, tasks support cancellation just as every form of concurrency seen thus far have, and it looks very similar to everything we have seen so far. In order to make use of cancellation we need to get a handle on the actual task we want to cancel, which means we need to assign a variable: let task = Task { print(Thread.current) }

15:51

And then at any time we can cancel it: task.cancel()

15:57

However, if we run this we will see that the print statement still executes even though we canceled: <NSThread: 0x106204290>{number = 2, name = (null)}

16:03

This is a little different from threads, where cancelling a thread right after creating it could short-circuit the thread ever starting. We don’t know that this was 100% always the case, but it seemed pretty consistent. So it seems tasks are a bit more eager, in that they start up even if we cancel right after creating them.

16:28

The real way to short-circuit the work of a cancelled task is to manually check if the task has been cancelled, which is known as cooperative cancellation. This means cancelling the task does not just immediately stop execution, which would be dangerous if we opened resources that need to be closed, and instead it is up to us to be a good citizen and regularly check if we were canceled so we can early out.

16:51

We can check for cancellation by accessing the static isCancelled boolean on the Task type: let task = Task { guard !Task.isCancelled else { print("Cancelled!") return } print(Thread.current) } task.cancel()

17:13

Now when we run this we will see that only “Cancelled!” is printed to the console, not the current thread.

17:16

This static boolean seems quite magical. It represents the cancellation state of the current task. Even though it seems like a global value it is actually local to just the task we are operating in. This is similar to how cancellation worked with regular threads too.

17:39

It’s worth mentioning that there is no Task.current static like there was Thread.current . This is because it’s possible to not be operating inside the context of a task, whereas you are always operating within a thread no matter what.

17:57

There is still a way to get the current task, but you have to invoke a function with a closure and that closure is handed the current task if it exists. Task { withUnsafeCurrentTask { task in print(task) } } Optional(Swift.UnsafeCurrentTask(_task: (Opaque Value)))

18:20

At the root level of the executable there is no task context, so there it will be nil : withUnsafeCurrentTask { task in print(task) } nil

18:26

We are not aware of a lot of use cases for this because the things we most care about for tasks are available in other ways, such as its cancellation state. However, it may be useful someday.

18:34

So, it seems that cancellation of tasks is quite similar to threads and dispatch queues, but cancellation of tasks is far more deeply ingrained into the language. First of all, if your asynchronous context happens to also be a failable context, then you can check for cancellation via a throwing function rather than a boolean: let task = Task { try Task.checkCancellation() print(Thread.current) } task.cancel()

18:59

If the task has been cancelled when we try to invoke checkCancellation , the function will throw, thus short-circuiting the rest of the function. This can be a lot more ergonomic to invoke than guarding for Task.isCancelled .
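As a sketch of how this plays out in longer-running work, a loop can call checkCancellation between units of work so that an outside cancel() short-circuits it at the next check. The processChunks function here is hypothetical:

```swift
import Foundation

// A hypothetical long-running job that cooperatively checks for
// cancellation between units of work.
func processChunks() async throws -> Int {
  var processed = 0
  for _ in 1...1_000 {
    try Task.checkCancellation() // throws CancellationError if cancelled
    processed += 1
    await Task.yield() // a suspension point between chunks
  }
  return processed
}

let task = Task {
  do {
    let count = try await processChunks()
    print("finished all", count, "chunks")
  } catch {
    print("stopped early:", error)
  }
}
task.cancel()

// Keep the executable alive long enough to observe the result.
Thread.sleep(forTimeInterval: 1)
```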

19:12

So, it’s nice that cooperative cancellation integrates nicely with failable contexts, but the integration of cancellation goes even deeper. We saw in past episodes that you can sleep threads or schedule work to be done in the future on dispatch queues, but that those mechanisms are not automatically canceled when the parent thread or work item is cancelled.

19:29

So, let’s sleep the task for 1 second and then cancel the task after 0.1 seconds: let task = Task { let start = Date() defer { print("Task finished in", Date().timeIntervalSince(start)) } try await Task.sleep(nanoseconds: NSEC_PER_SEC) print(Thread.current) } Thread.sleep(forTimeInterval: 0.1) task.cancel()

19:53

When we run this we see that the task took only about 0.1 seconds to execute: Task finished in 0.10534894466400146

20:01

This means that the Task.sleep method was somehow able to detect when cancellation happened, early out of its sleep, and throw an error so that we could immediately short-circuit the rest of our task’s work. This is in stark contrast to how threads and dispatch queues worked, and shows just how deeply the concept of cooperative cancellation is baked into the system.

20:30

And this cancellation behavior goes deeper than the surface-level of this single task we are spinning up. If instead of sleeping directly in the task we called out to some other asynchronous function that did sleeping on the inside: func doSomething() async throws { try await Task.sleep(nanoseconds: NSEC_PER_SEC) } let task = Task { let start = Date() defer { print("Task finished in", Date().timeIntervalSince(start)) } try await doSomething() print(Thread.current) }

20:49

This behaves the same: Task finished in 0.10534894466400146

21:02

This is amazing to see. Remember that there was no equivalent of this at all with threads, operation queues or dispatch queues. All the work was on us to observe the cancellation state of some unit of work in order to figure out if we should cancel ourselves. That’s a pain to do in practice, but here it’s completely natural with tasks.

21:21

So already this is pretty incredible, but it gets even better. Not only does Task.sleep have this kind of deep integration with cancellation, but other asynchronous APIs do too. Suppose that instead of sleeping in the task we make a network request. We can simulate a network request that takes a few seconds to do its job by just downloading a large-ish file, like say 1MB: let task = Task { let start = Date() defer { print("Task finished in", Date().timeIntervalSince(start)) } let (data, _) = try await URLSession.shared.data( from: URL(string: "http://ipv4.download.thinkbroadband.com/1MB.zip")! ) print(Thread.current, "network request finished", data.count) }

22:11

Let’s first wait a decent amount of time before cancelling the task just to make sure that this request can complete successfully: Thread.sleep(forTimeInterval: 5)

22:15

When we run this we see that the task starts, gets past the first cancellation check, then after a moment finishes the network request, and finally the task finishes: <NSThread: 0x10c209b60>{number = 2, name = (null)} network request finished 1048576 Task finished in 3.1200979948043823

22:26

Now let’s wait for a smaller amount of time so that we can cancel the request while it is inflight: Thread.sleep(forTimeInterval: 0.5) task.cancel()

22:30

Now when we run this we see that the “Network request finished” message is never printed to the console: Task finished in 0.5094579458236694

22:40

So this is pretty incredible. Task cancellation is so deeply ingrained into the system that even network requests made with URLSession can detect cancellation, short-circuit their work, and interrupt the execution of the parent task. Ostensibly URLSession is also smart enough to break the network connection early so that it doesn’t have to complete the network request, which saves resources.

Task locals

23:03

So we are already seeing that tasks and Swift’s new concurrency features are dramatically improving upon what threads and dispatch queues give us. The language now understands what is asynchronous code and what isn’t, which provides guard rails for us to know when it is appropriate to do asynchronous work. And the concept of cancellation is heavily embedded throughout the entire system so that cancelling something at the top can trickle down and be observed from all the children, and conversely the child can interrupt the execution flow of what is happening at the top.

23:32

There is one other useful feature that threads and dispatch queues have. For threads it was called “thread dictionaries” and for dispatch queues it was called “specifics.” This concept allows us to implicitly carry information along with a thread or queue so that it could be accessed from deep within an application without having to pass it through every layer along the way. This is incredibly powerful, but it was also a little lacking.

23:56

For thread dictionaries the type you deal with is basically an [AnyHashable: Any] dictionary which means you lose a lot of type safety and have to do force casts everywhere. Dispatch queue specifics improved on the type safety by allowing you to statically describe the key for the value, as well as the concrete type of the value.

24:15

Further, threads did not have the concept of spinning up a new thread and having that new thread inherit the dictionary from the current thread. That can be incredibly handy for when you need to spin off additional asynchronous work inside a thread but also want the thread storage to be accessible from the new thread. Dispatch queues also improved upon this, in which if you have a new queue target an existing queue, then the queue’s specifics transfer over to the new queue.

24:40

But, even though that was improved, it still wasn’t super ergonomic. You needed to explicitly pass queues around so that you could target them. There was no concept of a “current” queue that was just ubiquitously available. Let’s see what tasks have to say about this concept.

25:08

Recall that previously, to explore the concepts of thread dictionaries and queue specifics, we theorized a web server application that was represented as a function that took a URL request as an argument and returned a URL response.

25:18

Let’s write out the function exactly as we did for threads and dispatch queues previously: func response(for request: URLRequest) -> HTTPURLResponse { // TODO: do some work to actually generate a response return .init() }

25:21

And then when a request comes in we could fire up a task so that we can perform some work on a separate thread, and in there invoke the response(for:) function: Task { response(for: .init(url: .init(string: "https://www.pointfree.co")!)) }

25:38

However, we now have an opportunity to improve this. Now that the notion of asynchrony is baked directly into Swift, we can make the response function an asynchronous function: func response(for request: URLRequest) async -> HTTPURLResponse { // TODO: do some work to actually generate a response return .init() }

25:48

Even better, we could allow response to also throw an error in order to represent something going wrong on the server: func response(for request: URLRequest) async throws -> HTTPURLResponse { // TODO: do some work to actually generate a response return .init() }

25:54

That’ll push the responsibility of creating an asynchronous and failable context for this function to the caller of the function: Task { _ = try await response( for: .init(url: URL(string: "http://pointfree.co")!) ) }

26:05

With this little toy server function set up we will want to implement its real logic. To do this we will probably want to perform database requests, network requests, and all types of other asynchronous work. As we described in the previous two episodes, it can be handy to log various messages when performing this work, but if we do so naively we will just get a whole mess of logs on our server that will be very difficult to make sense of.

26:27

One thing we can do to make things nicer is to associate a “request id” with each request so that when we are looking through logs we can group together multiple logged statements based on which request caused it. This request id needs to be accessible from many parts of our application so that we can feel free to log it at any point, but typically that would force us to pass the request id through many layers of our application.

26:49

Luckily that’s not necessary. Tasks have a feature known as “task local values” that allow us to associate a value with a task and then it can be effortlessly retrieved from anywhere that runs in the context of that task.

27:01

Let’s play around with task locals in the abstract before applying them directly to our server demo. This will give us an opportunity to explore some of the more subtle aspects of task locals without being bogged down by unnecessary details.

27:12

To begin, you define a new type that will hold the values you want to store in a task: enum MyLocals { }

27:22

And then you define static variables inside this type using the @TaskLocal property wrapper. Say we wanted to store an integer id: enum MyLocals { @TaskLocal static var id: Int! }

27:35

Because the variable must be static we are forced to give a default value. Here we have chosen to make the value an uninitialized, implicitly unwrapped optional so that we are forced to properly set it up before using it. If we use it incorrectly it will trap, and that’s a good thing because it means we have done something very wrong.

27:49

Alternatively we could give the id a default value, but then we have to decide what is an appropriate default, which can be difficult. In our case, should we use 0 ? Or -1 ? Or -Int.max ? enum MyLocals { @TaskLocal static var id = -1 }

28:03

None seem like the right choice. We personally prefer the implicitly unwrapped optional as it should be a loud failure whenever you access an uninitialized task local.

28:12

It’s also worth mentioning that this little MyLocals namespace could be used to house other task locals that we may want throughout the application, including things like dependencies: enum MyLocals { @TaskLocal static var id: Int! // @TaskLocal var api: APIClient // @TaskLocal var database: DatabaseClient // @TaskLocal var stripe: StripeClient }

28:26

Alternatively you could also define a single struct to hold all of these values and then have a single @TaskLocal .

28:32

With our task locals defined we can access them by reaching directly into the MyLocals type: print(MyLocals.id) // nil

28:44

But of course right now the value is going to be uninitialized.

28:47

To set the value of id we go through a method on the property wrapper called withValue , which takes the value you want to update the local with, as well as a closure: MyLocals.$id.withValue( <#valueDuringOperation: Int?#>, operation: <#() throws -> R#> )

29:03

This may seem like a strange API, especially when compared to threads and dispatch queues, where we were allowed to reach right into the storage and mutate it directly.

29:11

But, this API design is similar to what we saw when getting the current task too: withUnsafeCurrentTask { task in }

29:18

You will see this pattern over and over in Swift concurrency. These APIs are designed in the continuation style, where you pass a callback to the function so that the function can invoke your callback whenever it wants. This gives the function great control over the execution environment in which your callback is invoked. In the case of task locals, the local variable is only set during the lifetime of the closure you provide. After the execution of withValue the task local will go back to nil . The name of the argument even hints at this: valueDuringOperation . The value will only be updated while the operation is being executed, and then it will go back to whatever value it held before.

29:55

We can even look at the signature of withValue to see that the operation is not escaping, which means it must be invoked during the lifetime of the call to withValue : @discardableResult final func withValue<R>( _ valueDuringOperation: Value, operation: () throws -> R ) rethrows -> R

30:04

This is in stark contrast with how thread dictionaries and queue specifics worked. With those APIs we could just reach right into the storage and mutate it however we want. With task locals we are seemingly more restricted, but this restriction allows for some really powerful features.

30:18

In order to see those features, let’s update the id to a new value and then print that id inside the operation closure: print("before:", MyLocals.id) MyLocals.$id.withValue(42) { print("withValue:", MyLocals.id!) } print("after:", MyLocals.id) before: nil withValue: 42 after: nil

30:45

So we can see that the local value was changed only inside the operation. As soon as the operation finished executing the id went back to nil .

30:52

However, even though the id is only changed for the duration of this operation, there are ways to make the change last for much longer. Whenever you start a new task it automatically inherits all of the task locals available at that moment. For example: print("before:", MyLocals.id) MyLocals.$id.withValue(42) { print("withValue:", MyLocals.id!) Task { print("Task:", MyLocals.id!) } } print("after:", MyLocals.id) before: nil withValue: 42 after: nil Task: 42

31:10

The inner task is able to print 42 even though its closure has escaped the operation closure we passed to withValue .

31:19

We can even sleep the task before accessing the task local in another sub-task, and it still has 42: print("before:", MyLocals.id) MyLocals.$id.withValue(42) { print("withValue:", MyLocals.id!) Task { try await Task.sleep(nanoseconds: NSEC_PER_SEC) Task { print("Task:", MyLocals.id!) } } } print("after:", MyLocals.id) before: nil withValue: 42 after: nil Task: 42

31:46

This is possible specifically because the moment we create the task it captures all of the current task locals, and so then it doesn’t matter that later the withValue operation ends and the id local reverts back to nil .
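One subtlety worth calling out: this inheritance applies to tasks created with the regular Task initializer. A detached task, created with Task.detached, deliberately opts out of inheriting the surrounding context, including task locals. A small sketch, reusing the MyLocals type from above:

```swift
MyLocals.$id.withValue(42) {
  Task {
    // A regular task captures the current task locals at creation.
    print("Task:", MyLocals.id as Any)      // Optional(42)
  }
  Task.detached {
    // A detached task starts from a clean slate: no inherited locals.
    print("Detached:", MyLocals.id as Any)  // nil
  }
}
```

So if a task local mysteriously comes back nil, a detached task somewhere in the chain is a likely suspect.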

31:59

We can even call out to other async functions and they too get the captured task locals: func doSomething() async { print("doSomething:", MyLocals.id!) } print("before:", MyLocals.id) MyLocals.$id.withValue(42) { print("withValue:", MyLocals.id!) Task { try await Task.sleep(nanoseconds: NSEC_PER_SEC) Task { print("Task:", MyLocals.id!) await doSomething() } } } print("after:", MyLocals.id) before: nil withValue: 42 after: nil Task: 42 doSomething: 42

32:24

We can also nest withValue in order to create even smaller and more focused scoped changes to task locals: print("before:", MyLocals.id) MyLocals.$id.withValue(42) { print("withValue:", MyLocals.id!) Task { MyLocals.$id.withValue(1729) { Task { try await Task.sleep(nanoseconds: 2 * NSEC_PER_SEC) print("Task 2:", MyLocals.id!) } } try await Task.sleep(nanoseconds: NSEC_PER_SEC) print("Task:", MyLocals.id!) await doSomething() } } print("after:", MyLocals.id) before: nil withValue: 42 after: nil Task: 42 doSomething: 42 Task 2: 1729

32:57

This is pretty amazing. Because task locals were designed with this concept of setting their value for a particular scoped lifetime, they are capable of traveling much deeper into the system than was previously possible with threads and queues. We can now be a lot more confident that when we reach for a task local it will be there waiting for us.

33:14

So, armed with everything we have just learned about task locals, let’s apply this to the server demo we have explored a few times in past episodes. We will begin with a new type to hold our locals: enum RequestData { }

33:26

And then we define our task locals inside this type as statics with the @TaskLocal property wrapper: enum RequestData { @TaskLocal static var requestId: UUID! }

33:40

Let’s make things a little interesting and also hold onto the date at which the request started: enum RequestData { @TaskLocal static var requestId: UUID! @TaskLocal static var startDate: Date! }

33:48

With our task local variables defined we can now make use of them. We want to set the task locals just before invoking the response function. Since we have two locals to set we need to nest calls to withValue : RequestData.$requestId.withValue(UUID()) { RequestData.$startDate.withValue(Date()) { Task { try await response( .init(url: .init(string: "http://pointfree.co")!) ) } } }
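If the nesting bothers us, we could wrap the two withValue calls in a small helper of our own. This helper is our invention, not a standard library API, and it assumes the RequestData type and response function defined in this demo:

```swift
import Foundation

// Sets up fresh request data for the duration of `operation`.
func withRequestData<R>(_ operation: () throws -> R) rethrows -> R {
  try RequestData.$requestId.withValue(UUID()) {
    try RequestData.$startDate.withValue(Date()) {
      try operation()
    }
  }
}

// Usage: the nesting is now hidden inside the helper.
withRequestData {
  Task {
    try await response(
      .init(url: .init(string: "http://pointfree.co")!)
    )
  }
}
```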

34:25

Once the task local values are set we can access them quite easily from within our asynchronous work: @Sendable func response( _ request: URLRequest ) async throws -> HTTPURLResponse { let requestId = RequestData.requestId! let start = RequestData.startDate! defer { print( requestId, "Request finished in", Date().timeIntervalSince(start) ) } print(requestId, "Making database query") try await Task.sleep(nanoseconds: 500_000_000) print(requestId, "Finished database query") print(requestId, "Making network request") try await Task.sleep(nanoseconds: 500_000_000) print(requestId, "Finished network request") // TODO: return real response return .init() }

34:51

Running this shows that we really are able to access the newly set task locals directly in the asynchronous context: 4571A4A2-B101-4471-BFC2-96EDCC3B18F7 Making database query 4571A4A2-B101-4471-BFC2-96EDCC3B18F7 Finished database query 4571A4A2-B101-4471-BFC2-96EDCC3B18F7 Making network request 4571A4A2-B101-4471-BFC2-96EDCC3B18F7 Finished network request 4571A4A2-B101-4471-BFC2-96EDCC3B18F7 Request finished in 1.1019489765167236

35:09

Even better, if we decide to move the database query and network request to their own asynchronous functions we can feel free to access the task local right in there: func databaseQuery() async throws { let requestId = RequestData.requestId! print(requestId, "Making database query") try await Task.sleep(nanoseconds: 500_000_000) print(requestId, "Finished database query") } func networkRequest() async throws { let requestId = RequestData.requestId! print(requestId, "Making network request") try await Task.sleep(nanoseconds: 500_000_000) print(requestId, "Finished network request") }

35:24

The task locals will be properly propagated to these new async functions. We just have to invoke and await them from the response: func response(_ request: URLRequest) async throws -> HTTPURLResponse { // TODO: do the work to turn request into a response try await databaseQuery() try await networkRequest() print( RequestData.requestId!, "Request finished", Date().timeIntervalSince(RequestData.startDate) ) // TODO: return real response return .init() }

35:42

Now technically the two try await s are being run serially, whereas in previous episodes we made them run concurrently, but we will look at that a bit later. That caveat aside, this code still runs exactly as it did before.
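As a preview of how that concurrency might be restored, Swift’s `async let` creates child tasks that run concurrently while still inheriting the parent’s task locals. A sketch, assuming the databaseQuery and networkRequest functions defined above:

```swift
func response(_ request: URLRequest) async throws -> HTTPURLResponse {
  // Start both units of work concurrently as child tasks.
  async let database: Void = databaseQuery()
  async let network: Void = networkRequest()

  // Await both; an error thrown by either child surfaces here.
  _ = try await (database, network)

  // TODO: return real response
  return .init()
}
```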

35:56

Even cooler, we can spin up a new task inside this response async context to do some asynchronous work that we don’t want holding up the response function, and even that task will get the locals. For example, suppose that we want to track some analytics in this response, but that we don’t actually care about awaiting its result. It’s more of a fire-and-forget.

36:14

We can do this like so: Task { print(RequestData.requestId!, "Track analytics") } try await databaseQuery() try await networkRequest() DBFD9EAC-6CF7-425A-9F4B-98F33A6CD13E Making database query DBFD9EAC-6CF7-425A-9F4B-98F33A6CD13E Track analytics DBFD9EAC-6CF7-425A-9F4B-98F33A6CD13E Finished database query DBFD9EAC-6CF7-425A-9F4B-98F33A6CD13E Making network request DBFD9EAC-6CF7-425A-9F4B-98F33A6CD13E Finished network request DBFD9EAC-6CF7-425A-9F4B-98F33A6CD13E Request finished 1.0934910774230957

36:26

And amazingly, even in the newly spun-up task that tracks analytics, we were able to access the request ID.

36:30

So it seems that task local values permeate deeply throughout the system. They not only cross from one asynchronous context to another, but they even travel to new tasks spun up inside an existing task context.

36:41

This makes task locals a lot more understandable, useful and dependable than what we saw with thread dictionaries and dispatch queue specifics. We can be sure that when we set a task local it will be accessible from deep within our application, and we can be confident that it will hold the correct value.

Task cooperation

36:57

So far it seems that Swift’s new concurrency tools one-up all the features we explored for threads and dispatch queues in the past two episodes. They are simpler to use, have tighter integration with the language, and are more deeply woven into every facet of the runtime.

37:14

But there’s still more to compare with threads and dispatch queues. Recall that in previous episodes we explored what it looked like when we tried to perform many units of work concurrently. We saw that for threads this led to an explosion of threads and that meant all the threads were fighting for time on the CPU, and queues were able to keep the number of threads low by using a pool, but then it was easy to clog up the pool. Let’s see what this looks like with tasks.

37:52

First, let’s recall the setup for threads. We looped from 0 up until a constant to detach a whole bunch of threads and then kept those threads busy with an infinite loop: for _ in 0..<workCount { Thread.detachNewThread { while true {} } }

38:15

This literally creates 1,000 threads, which means we are going to use up a ton of resources. As we mentioned previously, this could be up to half a gig of memory from the stack and 8 gigs of virtual memory.
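The back-of-the-envelope arithmetic behind those numbers can be written out directly. The per-thread figures below are approximate platform defaults, as discussed in the earlier episodes, not exact measurements:

```swift
let threads = 1_000
let stackPerThreadKB = 512           // default stack for a secondary thread
let virtualPerThreadMB = 8           // approximate virtual reservation per thread

let totalStackMB = threads * stackPerThreadKB / 1_024
let totalVirtualGB = Double(threads * virtualPerThreadMB) / 1_024

print(totalStackMB, "MB of stack")   // 500 MB: the "half a gig"
print(totalVirtualGB, "GB virtual")  // about 7.8 GB: the "8 gigs"
```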

38:29

Then, after creating these 1,000 threads and keeping them busy we detached another thread to compute the 50,000th prime, which typically only takes about 20 milliseconds: Thread.detachNewThread { print("Starting prime thread") nthPrime(50_000) }

38:54

But with the amount of thread contention we have created with our 1,000 threads we found that this computation sometimes took as long as 4 seconds to compute, even though it should only take around 20 milliseconds.

39:12

We did a similar exercise with operation queues and dispatch queues and saw that spinning up 1,000 units of work just completely blocked up the thread pool. We couldn’t get any work done whatsoever because the infinite loops completely tied up all the threads in the pool. So on the one hand it was nice that queues managed a smaller set of threads for us, but on the other hand it means that without cooperation we can easily clog up the pool.

39:39

Well, let’s see what happens when we naively convert this code to use tasks instead. We can loop over the range from 0 to workCount , spin up a new task, and then tie it up with an infinite loop: for _ in 0..<workCount { Task { while true {} } }

39:51

And then right after that we will spin up another task to perform the prime computation: Task { print("Starting prime task") nthPrime(primeN) }

39:56

If we run this we will see…well nothing prints.

40:05

We can run again and then pause execution immediately after and we’ll see that there are 10 threads alive in the cooperative pool, and all 10 are completely blocked by the infinite loop.
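The width of that cooperative pool roughly tracks the machine’s core count, which we can inspect directly; the 10 threads observed above correspond to a 10-core machine:

```swift
import Foundation

// Number of processor cores currently available to the process.
print(ProcessInfo.processInfo.activeProcessorCount)
```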

40:28

This seems like a step back for us wanting to write code that runs many jobs simultaneously. Why would Swift limit us in this way?

40:31

Well, at the end of the day our computers only have a finite number of cores, and although they allow us to create many more threads than cores, we have seen that doing so without restraint can easily explode the number of threads and resources used, and cause threads to fight over execution time.

40:48

So, Swift takes the stance that it will not pretend we can spin up any number of threads to get the job done, and instead creates a smaller, more reasonable number of threads. Swift further asks that anyone performing asynchronous work cooperate in their usage of this shared thread pool, allowing other asynchronous jobs running in the system to perform their work.

41:12

So, the question is: how do we cooperate?

41:14

Well, Swift gives us some powerful tools. First and foremost, we should never block in an asynchronous context for long periods of time. Doing so ties up one of the 10 threads devoted to Swift’s concurrency runtime, which is hugely problematic: each time you block you could be using up 10% of the resources Swift devotes to concurrency.

41:46

The primary way we prevent blocking is to use non-blocking APIs. We’ve already experienced two such tools in this episode.

41:49

This is in stark contrast with the Thread.sleep method, which just parked on a thread and didn’t allow anyone else to use it, wasting resources.
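To make the contrast concrete, here are the two sleeps side by side in a small sketch; the one-second duration is arbitrary:

```swift
import Foundation

// Non-blocking: suspends the *task* and frees the underlying thread,
// so other tasks in the cooperative pool can run on it in the meantime.
Task {
  try await Task.sleep(nanoseconds: NSEC_PER_SEC)
  print("task woke up")
}

// Blocking: parks the *thread* for the entire second, and nothing else
// can run on it until the sleep ends.
Thread.detachNewThread {
  Thread.sleep(forTimeInterval: 1)
  print("thread woke up")
}
```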

41:57

We can even do more complicated non-blocking, asynchronous work, like downloading a one-megabyte data file. Let’s fire up 1,000 tasks to do so: for n in 0..<workCount { Task { let (data, _) = try await URLSession.shared.data( from: URL( string: "http://ipv4.download.thinkbroadband.com/1MB.zip" )! ) print(n, data.count, Thread.current) } }

42:26

Running this, we will see the prime calculation return very quickly while the downloads trickle in.

42:39

Apple ships a whole bunch of other non-blocking APIs besides sleeping and network requests, such as accessing the file system, asking for location permissions, and more. As long as you make use of these non-blocking APIs by working in an asynchronous context and using the await keyword you will generally be in good shape.

42:55

However, sometimes you really need to do some heavy, computationally expensive work that will tie up a thread for some amount of time. In such cases Swift provides a tool that allows you to occasionally give up your task’s thread so that other tasks can get time in the cooperative thread pool. It’s a static method on Task called yield , and it’s asynchronous, so you must be in an asynchronous context to invoke it and it must be invoked with await : await Task.yield()

43:31

This creates a new suspension point, which frees up the task’s thread so that other tasks can make use of it. Then, at some later point, control will be restored to this task so that it can continue executing.

43:52

This is an amazing tool for cooperation. It allows you to perform intense CPU work without monopolizing an entire thread from the cooperative pool. We will use this in our little experimental demo by yielding inside the infinite loop: for _ in 1...workCount { Task.detached { while true { await Task.yield() } } }

44:03

When we run this it now runs almost instantly: 50000th prime 611953 time 0.025207375

44:11

The 50,000th prime wasn’t hampered at all by all of those tasks. Once those tasks yielded it gave the nthPrime function time to do its work, and it finished that work as quickly as when we ran the baseline in isolation. Around 25 milliseconds.
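Yielding on every single iteration, as in the demo above, is the bluntest form of cooperation. For real CPU-bound work a common pattern is to yield only every so often, amortizing the cost of suspension. Here is a sketch of a cooperative prime computation; the chunk size of 1,000 is an arbitrary choice, and `isPrime` is our own naive helper, not the episode’s nthPrime:

```swift
// Naive trial-division primality check, just for the sketch.
func isPrime(_ p: Int) -> Bool {
  guard p >= 2 else { return false }
  guard p >= 4 else { return true }  // 2 and 3 are prime
  for i in 2...Int(Double(p).squareRoot()) {
    if p.isMultiple(of: i) { return false }
  }
  return true
}

// Computes the nth prime, periodically yielding so that other tasks in
// the cooperative pool get time on this thread.
func cooperativeNthPrime(_ n: Int) async -> Int {
  var count = 0
  var candidate = 1
  while count < n {
    candidate += 1
    if isPrime(candidate) { count += 1 }
    if candidate.isMultiple(of: 1_000) {
      await Task.yield()  // give other tasks a turn
    }
  }
  return candidate
}
```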

44:28

Next time: Sendable and Actors

44:28

So we have now seen that the concepts of asynchrony are deeply baked into the Swift language. If you want to perform asynchronous work you need to be in an asynchronous context, which is something that the compiler explicitly knows about. You either need to implement your function with the async keyword applied to it, which means the caller is responsible for providing the asynchronous context, or you need to spin up a new task using the Task initializer. These two styles of providing an asynchronous context are very different, but we will dive into that topic in a moment.
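The two styles can be put side by side in a small sketch; the function names here are purely illustrative:

```swift
// Style 1: an async function. The *caller* must supply the asynchronous
// context, either by being async itself or by spinning up a task.
func fetchGreeting() async -> String {
  "Hello"
}

// Style 2: the Task initializer. It manufactures a brand-new asynchronous
// context, which lets us kick off async work from synchronous code,
// such as a button action.
func buttonTapped() {
  Task {
    let greeting = await fetchGreeting()
    print(greeting)
  }
}
```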

44:42

Before that, there was another topic we delved into for both threads and dispatch queues, and that is data synchronization and data races. We saw that if we accessed mutable state from multiple threads or queues, then we left ourselves open to data races, where two threads simultaneously read and write the same value. When this happens we get unexpected results, such as incrementing a counter 1,000 times from 1,000 different threads and ending up with a count slightly less than 1,000. This happens when one thread writes the count in between the moment another thread reads it and writes it back. In that case the second write clobbers the count with an out-of-date value.
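That counter experiment can be sketched like so; the sleep at the end is a crude way to let the detached threads finish before reading the result:

```swift
import Foundation

class Counter {
  var count = 0
}

let counter = Counter()
for _ in 0..<1_000 {
  Thread.detachNewThread {
    // Read-modify-write is not atomic: two threads can read the same
    // value, and one of the increments is lost.
    counter.count += 1
  }
}
Thread.sleep(forTimeInterval: 1)
print(counter.count)  // typically slightly less than 1,000
```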

45:20

Let’s see what new tools Swift gives us to solve this problem…next time!

References

NSOperation
Mattt • Jul 14, 2014
In life, there’s always work to be done. Every day brings with it a steady stream of tasks and chores to fill the working hours of our existence. Productivity is, as in life as it is in programming, a matter of scheduling and prioritizing and multi-tasking work in order to keep up appearances.
https://nshipster.com/nsoperation/

libdispatch efficiency tips
Thomas Clement • Apr 26, 2018
The libdispatch is one of the most misused APIs due to the way it was presented to us when it was introduced and for many years after that, and due to the confusing documentation and API. This page is a compilation of important things to know if you’re going to use this library. Many references are available at the end of this document pointing to comments from Apple’s very own libdispatch maintainer (Pierre Habouzit).
https://gist.github.com/tclementdev/6af616354912b0347cdf6db159c37057

Modernizing Grand Central Dispatch Usage
Apple • Jun 5, 2017
macOS 10.13 and iOS 11 have reinvented how Grand Central Dispatch and the Darwin kernel collaborate, enabling your applications to run concurrent workloads more efficiently. Learn how to modernize your code to take advantage of these improvements and make optimal use of hardware resources.
https://developer.apple.com/videos/play/wwdc2017/706/

What went wrong with the libdispatch. A tale of caution for the future of concurrency.
Thomas Clement • Nov 23, 2020
https://tclementdev.com/posts/what_went_wrong_with_the_libdispatch.html

Introducing Swift Atomics
Karoy Lorentey • Oct 1, 2020
I’m delighted to announce Swift Atomics, a new open source package that enables direct use of low-level atomic operations in Swift code. The goal of this library is to enable intrepid systems programmers to start building synchronization constructs (such as concurrent data structures) directly in Swift.
https://www.swift.org/blog/swift-atomics/

Downloads

Sample code: 0192-concurrency-pt3