Video #194: Concurrency's Future: Structured and Unstructured
Episode: Video #194 Date: Jun 27, 2022 Access: Members Only 🔒 URL: https://www.pointfree.co/episodes/ep194-concurrency-s-future-structured-and-unstructured

Description
There are amazing features of Swift concurrency that don’t quite fit into our narrative of examining it through the lens of past concurrency tools. Instead, we’ll examine them through the lens of a past programming paradigm, structured programming, and see what it has to say about structured concurrency.
Video
Cloudflare Stream video ID: ed82f5feb573c080c58897e6a06ccf22 Local file: video_194_concurrency-s-future-structured-and-unstructured.mp4 *(download with --video 194)*
References
- Discussions
- Task makes use of special compiler attribute
- NSOperation
- libdispatch efficiency tips
- Modernizing Grand Central Dispatch Usage
- What went wrong with the libdispatch. A tale of caution for the future of concurrency.
- Introducing Swift Atomics
- 0194-concurrency-pt5
- Brandon Williams
- Stephen Celis
- Mastodon
- GitHub
- CC BY-NC-SA 4.0
- source code
- MIT License
Transcript
— 0:05
This is yet another example of how difficult multithreaded programming can be. Just because we have extremely powerful tools for preventing data races doesn’t mean we have removed the possibilities of non-determinism creeping into our code. Just by virtue of the fact that we are firing off a bunch of concurrent tasks at once we have no way to avoid introducing some non-determinism into the system based on how the system is going to schedule and prioritize all of those tasks. If we don’t want that kind of non-determinism then we shouldn’t be performing concurrent work.
— 0:34
But the issue of non-determinism is completely separate from the issues of data races, and Swift’s tools are tuned to address the data races, not the non-determinism.
— 0:42
We’ve now seen how Swift’s new concurrency tools compare to many of the other tools on Apple’s platforms, including threads, operation queues, dispatch queues and the Combine framework. And in pretty much every category that we considered, Swift’s new concurrency tools blew the old tools out of the water:
— 0:58
First, the concepts of asynchrony and concurrency are now baked directly into the language rather than bolted on as a library. Swift can now express when a function needs to perform asynchronous work, using the new async keyword, and Swift can express types and functions that can be used concurrently, using the new Sendable protocol and @Sendable attribute.
— 1:17
Second, although we don’t explicitly manage something like a thread pool or an execution queue, Swift somehow allows spinning up many thousands of concurrent tasks without exploding the number of threads created. In fact, a maximum of only about 10 threads seems to be created on our machines.
— 1:33
Third, tasks have all the features that threads and queues had, such as priority, cooperative cancellation and storage, but in each case tasks massively improve the situation over the older tools. Cancellation is deeply ingrained into the system so that the cancellation of a top-level task trickles down to its child tasks, and task storage is now inherited from parent task to child task, allowing you to nest locals in complex yet understandable ways.
— 2:01
Fourth, although Swift’s concurrency runtime limits us to a small number of threads in the cooperative thread pool, Swift does give us the tools that help us not clog up that pool. Using things like non-blocking asynchronous functions and Task.yield we suspend our functions to allow other tasks to use our thread, and then once we are ready to resume a thread will be automatically provided to us.
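As a minimal sketch of that cooperation (the `crunch` function and its yield cadence are our own invention, not code from the episode): a long-running task can periodically suspend with `Task.yield` so it doesn’t monopolize one of the pool’s few threads.

```swift
import Dispatch

// Hypothetical long-running work that cooperates with the pool:
// every 100 iterations it suspends via Task.yield so other tasks
// can be scheduled on this thread.
func crunch() async -> Int {
  var total = 0
  for n in 1...1_000 {
    total += n
    if n.isMultiple(of: 100) {
      await Task.yield()  // suspension point: frees the thread
    }
  }
  return total
}

let semaphore = DispatchSemaphore(value: 0)
Task {
  let total = await crunch()
  print(total)  // 500500
  semaphore.signal()
}
semaphore.wait()
```

Once the task resumes after a yield, the runtime automatically provides it a thread again, just as described above.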
— 2:25
Fifth, and perhaps most exciting, Swift now provides a first class type for synchronizing and isolating mutable data in such a way that the compiler understands when you might have used it incorrectly. They’re called actors, and they allow you to largely write code that looks like simple, synchronous code, but under the hood it is locking and unlocking access to the mutable data.
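As a hedged sketch of what that looks like (this `Counter` actor is our own minimal example, not code from the episode): marking a type as an actor makes the compiler serialize all access to its mutable state, with no manual locking.

```swift
import Dispatch

// A minimal actor: the compiler isolates `count`, so concurrent
// calls to increment() are serialized automatically.
actor Counter {
  private(set) var count = 0
  func increment() {
    count += 1
  }
}

let counter = Counter()
let semaphore = DispatchSemaphore(value: 0)
Task {
  // Hammer the actor from 1,000 concurrent child tasks.
  await withTaskGroup(of: Void.self) { group in
    for _ in 1...1_000 {
      group.addTask { await counter.increment() }
    }
  }
  print(await counter.count)  // 1000
  semaphore.signal()
}
semaphore.wait()
```

The call sites look like plain method calls with an `await`, yet the compiler flags any access that could race, which is exactly the point being made above.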
— 2:49
Already it’s pretty impressive for Swift to accomplish so much so quickly. But there’s even more. Swift’s new concurrency tools allow us to write our asynchronous and concurrent code in a style that is substantially different from how we wrote code with threads and queues. There are features of Swift concurrency that are so unique that there’s simply nothing in the older concurrency tools, such as threads and queues, that we can compare them to.
— 3:14
So, we’d like to take one more episode in this series on concurrency to discuss the amazing features that don’t quite fit into our narrative of looking at concurrency through the lens of the past.
— 3:25
And we will begin by discussing the concept of structured concurrency. Well really, let’s back up a bit and talk about structured programming in general so that we know why structured concurrency is such a big deal.
— 3:35
Most modern, popular languages are primarily “structured programming languages”, so there’s a very good chance that you have never really programmed in an “unstructured” way. To put it simply, structured programming is a paradigm that aims to make programs read linearly from top-to-bottom. Doing so can help you compartmentalize parts of the program as black boxes so that you don’t have to be intimately familiar with all of their details at all times. The bread and butter of structured programming is tools like conditionals, loops, function calls and recursion.
— 4:07
This may seem very intuitive and obvious to you, but back in the 1950s it wasn’t so clear. At that time human-readable programming languages were still quite nascent, and so those languages had tools that made a lot of sense for how the code was run at a low level on the machine, but were difficult for humans to fully understand.
— 4:26
An example of such a tool is the jump command. It allows you to redirect the flow of execution of the program to any other part of the program. Swift doesn’t have this tool, at least not in full generality, but let’s look at what it could have looked like.
Structured programming
— 4:38
Suppose for a moment that Swift did not have for loops, but did have that jump statement we mentioned a moment ago. We could replicate what for loops give us by using the jump, for example, printing all the even integers between 0 and 100:

```swift
var x = 0

top:
if x.isMultiple(of: 2) {
  print(x)
}
x += 1
if x <= 100 {
  continue top
}
```

Here we label a particular line of our code, and then theorize the use of continue that allows us to redirect execution flow back to that label.
— 6:00
Again this probably seems like a bizarre way to write a program, but back then it was completely natural because jump is what is used in low-level Assembly Language.
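For comparison, here is the structured Swift we actually get to write for that same program: the loop body is a proper scope, and no labels or jumps are needed (the `evens` array is our own addition, included only so the result can be inspected):

```swift
// The structured equivalent of the jump-based program: print the
// even integers between 0 and 100 with an ordinary for loop.
var evens: [Int] = []
for x in 0...100 where x.isMultiple(of: 2) {
  print(x)
  evens.append(x)
}
// evens now holds 51 values: 0, 2, 4, …, 100
```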
— 6:10
Things get even more bizarre if we tried to do nested iteration in order to print all pairs of even numbers between 0 and 100:

```swift
var x = 0

outer:
var y = 0

inner:
if x.isMultiple(of: 2) && y.isMultiple(of: 2) {
  print(x, y)
}
y += 1
if y <= 100 {
  continue inner
}
x += 1
if x <= 100 {
  continue outer
}
```
— 6:55
When compared to the equivalent for loop style there’s no contest as to which is more readable:

```swift
for x in 0...100 {
  for y in 0...100 {
    if x.isMultiple(of: 2) && y.isMultiple(of: 2) {
      print(x, y)
    }
  }
}
```
— 7:22
However, the downsides to the jump statement go far beyond the aesthetics of the code. The fact of the matter is that a jump statement can magically transport execution to literally any line of your entire code base, which means that in order to understand any line of code you must understand all of the other lines of code that could have led to that line.
— 7:44
This is different from function calls in the languages we are used to writing in. Sure, invoking a function seems to transport execution to that function:

```swift
func add(_ lhs: Int, _ rhs: Int) -> Int {
  lhs + rhs
}

add(3, 4)
```
— 7:58
But it is well-defined what the add function has access to, in particular its arguments and any globals or variables in the outer scope, and flow of execution is immediately returned to the caller when the function is done. Both of these features are necessary for us to be able to treat the add function as a black box from the outside and for us to not have to worry about any part of the code base on the inside.
— 8:24
Jump statements have neither of these features.
— 8:27
And if that wasn’t already the nail in the coffin for jump statements, things get worse. The flow of execution in a program with jump statements is so unpredictable and difficult to understand that it completely limits the programming language’s ability to implement features we’d like to have.
— 8:42
For example, take the lowly defer statement. Its only purpose is to execute a little bit of logic at the end of a scope. It can be a great way to give high visibility to some terminal logic, such as closing an opened resource, instrumenting the executed code, and more. In our nested loop example it gives us the ability to print when the inner and outer loops finish without hiding that code after the loops:

```swift
defer { print("Outer loop finished") }
for x in 0...100 {
  defer { print("Inner loop finished for", x) }
  for y in 0...100 {
    if x.isMultiple(of: 2) && y.isMultiple(of: 2) {
      print(x, y)
    }
  }
}
```
— 9:21
Unfortunately, with completely unhindered access to jump statements Swift couldn’t possibly have this feature. If it did have some concept of defer, it would have no choice but to execute it just before any jump happens:

```swift
var x = 0
defer { print("Deferred work") }

outer:
var y = 0

inner:
if x.isMultiple(of: 2) && y.isMultiple(of: 2) {
  print(x, y)
}
y += 1
if y <= 100 {
  continue inner  // "Deferred work"
}
x += 1
if x <= 100 {
  continue outer  // "Deferred work"
}
```
— 9:44
We don’t have the granularity to say what scope a defer statement belongs to because, well, there are no scopes. It’s just one long, flat list of instructions.
— 9:53
So we have no choice but to insert the print statements right after the conditionals that check if we should jump back to the top of the loop:

```swift
var x = 0

outer:
var y = 0

inner:
if x.isMultiple(of: 2) && y.isMultiple(of: 2) {
  print(x, y)
}
y += 1
if y <= 100 {
  continue inner
}
print("Inner loop finished", x)
x += 1
if x <= 100 {
  continue outer
}
print("Outer loop finished")
```
— 10:10
And not only would Swift have to shed some features if it adopted the jump statement, but it would also be very difficult, if not impossible, to write safe code.
— 10:19
For example, something we have now discussed a number of times in this series of episodes is a counter class that uses a lock under the hood in order to synchronize incrementing an internal piece of mutable state:

```swift
final class Counter {
  let lock = NSLock()
  var count = 0

  func increment() {
    self.lock.lock()
    defer { self.lock.unlock() }
    self.count += 1
  }
}
```
— 10:32
Here we are using a defer statement in order to guarantee that everything done inside the increment method is covered by the lock. It’s nice to use defer, but it isn’t absolutely necessary. We can of course do this:

```swift
func increment() {
  self.lock.lock()
  self.count += 1
  self.lock.unlock()
}
```
— 10:44
But even with that change the code is not safe in a world of jump statements. We could still jump off to some other part of the code base and never return:

```swift
func increment() {
  self.lock.lock()
  self.count += 1
  continue somewhereElse
  self.lock.unlock()
}
```
— 10:54
Which would mean our lock is never unlocked, and so any later call to this method will block forever.
— 10:59
This is only the tip of the iceberg of what can go wrong in programming languages with unrestricted jump statements. And this is why structured programming languages were a big area of research that ultimately produced languages that look more familiar to us today.
— 11:12
And although Swift does not offer jump statements, at least not in the completely unfettered way that unstructured programming languages do, it does still have some tools that leave the world of fully structured programming. We’ve already even seen a few of these tools.
— 11:31
For example, consider the times we detached a new thread:

```swift
Thread.detachNewThread {
  print(Thread.current)
}
```
— 11:36
This creates a completely new execution flow that is untethered from the execution flow that started it. If we put print statements before and after the detachment:

```swift
print("Before")
Thread.detachNewThread {
  print(Thread.current)
}
print("After")
```
— 11:49
We will see that the thread is printed after the “After” output:

```
Before
After
<NSThread: 0x10071c760>{number = 2, name = (null)}
```
— 11:55
This clearly means that execution of this little snippet does not flow linearly from top-to-bottom. The thread closure goes off and does its own thing regardless of what is happening in the current thread.
— 12:04
This lack of top-to-bottom execution means that the tools we know and love from Swift are going to be subtly broken. For example, if we add a defer statement before spinning up the thread, then of course the defer is not going to execute when the thread finishes:

```swift
do {
  defer { print("Finished") }
  print("Before")
  Thread.detachNewThread {
    print(Thread.current)
  }
  print("After")
}
```

```
Before
After
Finished
<NSThread: 0x100710d70>{number = 2, name = (null)}
```
— 12:25
That couldn’t possibly work since the thread starts a totally new execution flow.
— 12:32
Similarly, if we lock before starting the thread and unlock after, then it is not guaranteed that we will be locked inside the thread’s execution:

```swift
do {
  let lock = NSLock()
  lock.lock()
  print("Before")
  Thread.detachNewThread {
    // Not locked in here
    print(Thread.current)
  }
  print("After")
  lock.unlock()
}
```
— 12:45
And threads aren’t the only way we can accidentally escape the structured programming world. It also happens with operation queues:

```swift
print("Before")
let queue = OperationQueue()
queue.addOperation {
  print(Thread.current)
}
print("After")
```

And dispatch queues:

```swift
print("Before")
let queue = DispatchQueue(label: "queue")
queue.async {
  print(Thread.current)
}
print("After")
```
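What both of those APIs have in common is that they take an escaping closure. A tiny self-contained sketch (the `schedule` function is hypothetical, not a real API) of how an escaping closure outlives the call it was passed to:

```swift
// A stored escaping closure survives the function call that
// received it, so its work runs later, outside the caller's scope.
var stored: (() -> Void)?

func schedule(_ work: @escaping () -> Void) {
  stored = work  // the closure escapes here
}

var log: [String] = []
schedule { log.append("closure ran") }
log.append("schedule returned")
stored?()  // the escaped closure finally runs
print(log)  // ["schedule returned", "closure ran"]
```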
— 12:49
And more generally, it has nothing to do with threads or queues and has everything to do with escaping closures. Remember that an escaping closure is one that can be captured and used beyond the lifetime of the function that it is passed to. That is the essence of what we are grappling with here. Escaping closures allow you to spin up a completely new execution flow that is untethered from the execution flow that started the work.
Structured concurrency
— 13:14
So, now that we understand what structured programming gave us over unstructured programming, what can structured concurrency give us over unstructured concurrency?
— 13:23
Turns out a lot.
— 13:25
By being able to write concurrent code in a style that looks like regular structured programming we will get access to all the tools we are used to from Swift, just in a concurrent context. This means we can write our programs from top-to-bottom even though they are asynchronous and concurrent, we can use tools like defer , and we can have a notion of scopes in our code so that it’s easy to run terminal code such as resource clean up.
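As a small sketch of that claim (our own example, not the episode’s code): defer inside an async function runs when the scope exits, even across suspension points.

```swift
import Dispatch

// defer behaves in async functions exactly as in synchronous ones:
// it fires when the scope ends, even if the function suspended.
var events: [String] = []

func fetchNumber() async -> Int {
  defer { events.append("cleaned up") }
  await Task.yield()  // a suspension point
  events.append("fetched")
  return 42
}

let semaphore = DispatchSemaphore(value: 0)
Task {
  let n = await fetchNumber()
  events.append("got \(n)")
  semaphore.signal()
}
semaphore.wait()
print(events)  // ["fetched", "cleaned up", "got 42"]
```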
— 13:48
Let’s check it out.
— 13:55
To explore this, let’s take another look at that theoretical server function we have played around with a number of times. Recall that we first have a few task-local values that we want to set when a request first comes into the server, such as a request ID and start date, so that we can have access to those values throughout the entire request-to-response lifecycle:

```swift
enum RequestData {
  @TaskLocal static var requestId: UUID!
  @TaskLocal static var startDate: Date!
}
```
— 14:40
Then we had a few async functions for simulating the idea of performing a database query and network request, and we performed some logging using the request ID:

```swift
func databaseQuery() async throws {
  let requestId = RequestData.requestId!
  print(requestId, "Making database query")
  try await Task.sleep(nanoseconds: 500_000_000)
  print(requestId, "Finished database query")
}

func networkRequest() async throws {
  let requestId = RequestData.requestId!
  print(requestId, "Making network request")
  try await Task.sleep(nanoseconds: 500_000_000)
  print(requestId, "Finished network request")
}
```
— 14:51
Then we implemented a server as a simple function that transforms an incoming request into an outgoing response. Right now it just invokes each of the database and request async functions and then returns a stubbed response:

```swift
func response(_ request: URLRequest) async throws -> HTTPURLResponse {
  // TODO: do the work to turn request into a response
  let requestId = RequestData.requestId!
  let start = RequestData.startDate!
  defer {
    print(requestId, "Request finished in", Date().timeIntervalSince(start))
  }
  try await databaseQuery()
  try await networkRequest()
  // TODO: return real response
  return .init()
}
```
— 15:06
As we noted previously, this function is still performing its asynchronous work in a serial manner. The database query must complete before the network request can start. And because of this the response takes longer than it would had that work been parallelized, which we will do in a moment.
— 15:22
To see this slower response time we set up all of our task locals and then fire off the response in a task:

```swift
RequestData.$requestId.withValue(UUID()) {
  RequestData.$startDate.withValue(Date()) {
    Task {
      _ = try await response(.init(url: .init(string: "http://pointfree.co")!))
    }
  }
}
```
— 15:29
Running this shows that the response takes about a second because each of the async functions takes half a second:

```
2605A4AD-2B98-44F2-8494-AE640DE40937 Making database query
2605A4AD-2B98-44F2-8494-AE640DE40937 Finished database query
2605A4AD-2B98-44F2-8494-AE640DE40937 Making network request
2605A4AD-2B98-44F2-8494-AE640DE40937 Finished network request
2605A4AD-2B98-44F2-8494-AE640DE40937 Request finished in 1.103769063949585
```
— 15:41
Since the database query and network request are fully independent of each other we have the potential for speeding up this code by running each in parallel.
— 15:48
Now let’s first attempt this with just the tools we know about. As we’ve seen before, we can fire up a new task explicitly in order to perform some asynchronous work that is untethered to the scope from which it was started. So, maybe we just need to start up two tasks, one for each async function, in order to run them in parallel:

```swift
Task {
  try await databaseQuery()
}
Task {
  try await networkRequest()
}
```
— 16:06
Well, this does indeed run the functions in parallel, but we have also lost some of the structured aspect of this code. The task initializer is non-blocking, and so execution flow breezes right past it while the two new tasks simultaneously get their own execution flows. This code does not read linearly from top-to-bottom.
— 16:23
We can see this concretely by running it to see that the response seemingly finishes in just a fraction of a second:

```
F6D4ECB0-48DF-44CD-BA35-8BBC28E92690 Request finished in 0.006371021270751953
F6D4ECB0-48DF-44CD-BA35-8BBC28E92690 Making database query
F6D4ECB0-48DF-44CD-BA35-8BBC28E92690 Making network request
F6D4ECB0-48DF-44CD-BA35-8BBC28E92690 Finished database query
F6D4ECB0-48DF-44CD-BA35-8BBC28E92690 Finished network request
```
— 16:34
This is only happening because the timing is no longer taking into account how long the database and network requests take to execute.
— 16:40
On the one hand this behavior is to be expected. As we saw with structured programming previously, any time you use escaping closures you are exiting the world of structured programming. Escaping closures allow you to spin off a new execution flow that is independent of the one you are currently working on.
— 16:55
So, this behavior isn’t surprising, but on the other hand it would be really nice to be able to stay in the structured programming world even when needing to perform two tasks in parallel. Luckily there’s a tool that helps us bridge back to the structured programming world. The Task type comes with a property to access the value returned by the task, which of course must be done by awaiting:

```swift
let task = Task {
  try await Task.sleep(nanoseconds: NSEC_PER_SEC)
  return 42
}
let number = try await task.value
```
— 17:22
So, what we can do is assign the tasks we are creating to variables, and then await each of them:

```swift
let databaseTask = Task {
  try await databaseQuery()
}
let networkTask = Task {
  try await networkRequest()
}

try await databaseTask.value
try await networkTask.value
```
— 17:33
This may look serial since we are awaiting the network task after awaiting the database task, but really the two tasks are running in parallel so that by the time the database task is finished the network task has already been running and so will finish faster.
— 17:47
We can see this by running the executable and seeing that it only takes a little over half a second:

```
B2A20E2F-6389-4A0A-9BB0-9F6C2AB1FF86 Making database query
B2A20E2F-6389-4A0A-9BB0-9F6C2AB1FF86 Making network request
B2A20E2F-6389-4A0A-9BB0-9F6C2AB1FF86 Finished database query
B2A20E2F-6389-4A0A-9BB0-9F6C2AB1FF86 Finished network request
B2A20E2F-6389-4A0A-9BB0-9F6C2AB1FF86 Request finished in 0.552590012550354
```
— 18:01
So, it’s pretty cool that even though we had to escape the structured programming world, there are tools to bring us back.
— 18:07
However, the code isn’t as succinct as it could be. It seems strange that we have to spin up new tasks just so that we can wait for both of them to complete later. But there are other problems beyond just aesthetics with this code, and it has to do with cancellation.
— 18:20
It turns out that not only does creating new tasks exit the structured programming world, but those tasks also stop participating in cooperative cancellation. For example, suppose we cancel the response task 0.1 seconds after we start it:

```swift
RequestData.$requestId.withValue(UUID()) {
  RequestData.$startDate.withValue(Date()) {
    let task = Task {
      _ = try await response(.init(url: .init(string: "http://pointfree.co")!))
    }
    Thread.sleep(forTimeInterval: 0.1)
    task.cancel()
  }
}
```
— 18:42
We would hope this cancels the database and network request and short circuits all later work so that it can return quickly, but that’s not the case:

```
C73CF46E-4725-4907-9E3E-1E2E8FA76597 Making database query
C73CF46E-4725-4907-9E3E-1E2E8FA76597 Making network request
C73CF46E-4725-4907-9E3E-1E2E8FA76597 Finished network request
C73CF46E-4725-4907-9E3E-1E2E8FA76597 Finished database query
C73CF46E-4725-4907-9E3E-1E2E8FA76597 Request finished in 0.552880048751831
```
— 18:52
This function still took about half a second to execute even though the task was cancelled after just 0.1 seconds.
— 18:56
We can even put some extra print statements inside the database and network request to see what their cancellation state is:

```swift
func databaseQuery() async throws {
  let requestId = RequestData.requestId!
  defer { print(requestId, "databaseQuery", "isCancelled", Task.isCancelled) }
  …
}

func networkRequest() async throws {
  let requestId = RequestData.requestId!
  defer { print(requestId, "networkRequest", "isCancelled", Task.isCancelled) }
  …
}
```

```
55B0199A-67C4-4F71-8712-FE3A98B612A3 Making database query
55B0199A-67C4-4F71-8712-FE3A98B612A3 Making network request
55B0199A-67C4-4F71-8712-FE3A98B612A3 Finished database query
55B0199A-67C4-4F71-8712-FE3A98B612A3 databaseQuery isCancelled false
55B0199A-67C4-4F71-8712-FE3A98B612A3 Finished network request
55B0199A-67C4-4F71-8712-FE3A98B612A3 networkRequest isCancelled false
55B0199A-67C4-4F71-8712-FE3A98B612A3 Request finished in 0.5559689998626709
```
— 19:11
So it seems that isCancelled is false in both functions.
— 19:16
This seems to run contrary to some of the things we explored in previous episodes. Previously we saw that cancelling the parent task trickled down to the child units of work. However, back then we were doing a simple await on the function call, and now we are spinning up new tasks.
— 19:31
It turns out that cancelling a task does not cancel any new tasks spun up inside it, even if you are awaiting their results. You might be surprised that a task isn’t cancelled when its parent task is, but this is just how Apple designed structured concurrency: once you enter the unstructured world by spinning off a task, you must explicitly manage its cancellation.
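A condensed sketch of that behavior (the names are hypothetical, not the episode’s code): cancelling an outer task leaves an unstructured inner Task untouched, even though the outer task is awaiting it.

```swift
import Dispatch

// The outer task is cancelled immediately, but the unstructured
// inner Task it spawned never observes that cancellation.
let outer = Task { () -> Bool in
  let inner = Task { () -> Bool in
    try? await Task.sleep(nanoseconds: 200_000_000)
    return Task.isCancelled  // reports the *inner* task's state
  }
  return await inner.value
}
outer.cancel()

let semaphore = DispatchSemaphore(value: 0)
Task {
  let innerWasCancelled = await outer.value
  print(innerWasCancelled)  // false: the inner task kept running
  semaphore.signal()
}
semaphore.wait()
```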
— 19:50
It is possible not only to bridge our unstructured code back into structured code, but also to recover its cancellation behavior. The tool is called withTaskCancellationHandler, and it allows you to tap into the moment that our current asynchronous context is cancelled so that we can perform extra work:

```swift
withTaskCancellationHandler(
  handler: <#() -> Void#>,
  operation: <#() async throws -> T#>
)
```
— 20:09
First you supply a closure that represents the work you want to perform when cancellation is detected, and the second closure is the actual asynchronous work you want to execute. If the parent asynchronous context is cancelled while performing this asynchronous work, the handler will be invoked.
— 20:23
So, what we need to do is explicitly cancel the database and network request tasks when cancellation is detected:

```swift
try await withTaskCancellationHandler {
  databaseTask.cancel()
  networkTask.cancel()
} operation: {
  try await databaseTask.value
  try await networkTask.value
}
```
— 20:44
And now when we run this code we see that the database and network request functions do properly detect when cancellation happens, and the entire response takes only about 0.1 seconds since that’s how much time we wait before cancelling:

```
6BE67EBA-FB88-41BE-B097-A2131DFB786C Making database query
6BE67EBA-FB88-41BE-B097-A2131DFB786C Making network request
6BE67EBA-FB88-41BE-B097-A2131DFB786C databaseQuery isCancelled true
6BE67EBA-FB88-41BE-B097-A2131DFB786C networkRequest isCancelled true
6BE67EBA-FB88-41BE-B097-A2131DFB786C Request finished in 0.10507309436798096
```
— 20:51
So it works, but the code keeps getting longer and stranger. It kinda still reads linearly from top-to-bottom, but also not really. We have to first upfront declare our tasks, then invoke this weird withTaskCancellationHandler function, then separately provide closures for handling the cancellation logic and the operation logic.
— 21:13
Sometimes this withTaskCancellationHandler function really is necessary to use, but luckily for us there is an even simpler tool to use for this specific situation. There is a tool called async let that allows you to run multiple asynchronous units of work in parallel, while the code still reads linearly from top-to-bottom and cancellation happens as you would expect.
— 21:32
You start by declaring each unit of work that you want to run in parallel using the new async let combo of keywords without using await:

```swift
async let databaseResponse = databaseQuery()
async let networkResponse = networkRequest()
```
— 21:49
Currently our database and network functions don’t return anything of interest, but typically they would, and so let’s pretend they are giving us back some real data types:

```swift
struct User { var id: Int }

func fetchUser() async throws -> User {
  …
  return User(id: 42)
}

struct StripeSubscription { var id: Int }

func fetchSubscription() async throws -> StripeSubscription {
  …
  return StripeSubscription(id: 1729)
}
```
— 22:32
Then we would use async let more simply:

```swift
async let user = fetchUser()
async let subscription = fetchSubscription()
```
— 22:43
And if we try to use these variables in any way, such as passing them to a function or accessing a field:

```swift
print(user)
user.id
```
— 22:50
We are immediately confronted with compiler errors:

```
Expression is ‘async’ but is not marked with ‘await’
Reading ‘async let’ can throw but is not marked with ‘try’
```
— 22:52
It is not valid to directly use a variable that has been bound with async let. You must use await, and further must use try await if the value was obtained via a throwing process:

```swift
try await user.id
```
— 23:08
But, even though we are awaiting this user so that we can grab its id, the Stripe subscription request is still going in parallel. We aren’t holding up that work just because we need access to the id.
— 23:17
We can even access both values simultaneously and await just a single time, for example if we wanted to bundle both values into a response struct that could be encoded to JSON and sent back to the client:

```swift
struct User: Encodable { var id: Int }
…
struct StripeSubscription: Encodable { var id: Int }
…
struct Response: Encodable {
  let user: User
  let subscription: StripeSubscription
}

try await JSONEncoder().encode(Response(user: user, subscription: subscription))
```
— 24:05
It’s pretty incredible to see just how nicely this reads linearly from top-to-bottom:

```swift
async let user = fetchUser()
async let subscription = fetchSubscription()

let jsonData = try await JSONEncoder().encode(
  Response(
    user: user,
    subscription: subscription
  )
)
```
— 24:11
And now that we have some data to return from our server, we can finally return a real response by bundling it up alongside the URLResponse in a tuple:

```swift
func response(for request: URLRequest) async throws -> (Data, HTTPURLResponse) {
  …
  return (jsonData, .init())
}
```
— 24:59
We are performing concurrent work by fetching the user and subscription at the same time, and it’s not until we actually need to make use of the results that we need to await. And even then we can await just a single time while both tasks run in parallel.
— 25:14
The async let construct works really well for when we need to run a statically known number of units of work in parallel and in a structured manner, but there’s another tool for dealing with an unknown number of units of work.
— 25:25
It’s called task group, and it allows you to suspend while a dynamic number of tasks do their work, and then resumes once all tasks are finished. You can even accumulate the output of each child task into a final output.
— 25:42
You begin by invoking the withTaskGroup function:

```swift
withTaskGroup(
  of: <#Sendable#>,
  returning: <#GroupResult.Type#>,
  body: <#(inout TaskGroup<Sendable>) async -> GroupResult#>
)
```
— 25:57
The first argument is the type of value returned from each child task the group spins up to do work, the second argument is the type of value that will ultimately be returned from running the group of tasks, and the final argument is a closure where you actually do the work of adding tasks to the group.
— 26:22
To give this a spin, suppose that we wanted to fire up 1,000 tasks that simulate some complex process for procuring an integer, and then we want to sum up those 1,000 integers.
— 26:33
First, let’s see what this would look like in the world of bare threads. We can easily spin up 1,000 new threads that each simulate some hard work by sleeping for a bit, and then when that finishes increment some shared mutable state:

```swift
var sum = 0
for n in 1...1_000 {
  Thread.detachNewThread {
    Thread.sleep(forTimeInterval: 1)
    sum += n
  }
}
```
— 27:07
But, before we can print this value we have to manually sleep for a bit to wait for all the threads to finish, since threads don’t come with a nice way to wait for them all:

```swift
Thread.sleep(forTimeInterval: 2)
print(sum)
```
— 27:19
Running this gives us some number:

```
492540
```

But is it right?
— 27:27
If we run it again we get a completely different answer:

```
494177
```

So clearly it is not right.
— 27:42
This is happening because, as we’ve seen before, there is a race condition in this code. It is not safe to access and mutate the sum value from multiple threads. Technically we should be introducing a class with some internal locking in order to fix this problem, but we aren’t going to do that.
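For completeness, here is a sketch of the lock-based fix being skipped — a class wrapping the sum in an NSLock, in the style of the Counter class from earlier (the `LockedSum` name and the use of `concurrentPerform` for deterministic completion are our own, not the episode’s):

```swift
import Foundation

// Serialize access to the running total with an NSLock, mirroring
// the locking Counter class shown earlier.
final class LockedSum {
  private let lock = NSLock()
  private var value = 0

  func add(_ n: Int) {
    lock.lock()
    defer { lock.unlock() }
    value += n
  }

  var current: Int {
    lock.lock()
    defer { lock.unlock() }
    return value
  }
}

let sum = LockedSum()
// concurrentPerform blocks until all 1,000 iterations finish,
// so no manual sleeping is needed before reading the result.
DispatchQueue.concurrentPerform(iterations: 1_000) { i in
  sum.add(i + 1)
}
print(sum.current)  // 500500
```

This removes the race, but at the cost of manual lock bookkeeping, which is exactly what task groups let us avoid.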
— 27:52
Instead, let’s look at how to solve this problem using task groups. The value we want each child task to compute is an integer, as is the sum we return, so we can fill in the first and second arguments:

```swift
withTaskGroup(
  of: Int.self,
  returning: Int.self,
  body: <#(inout TaskGroup<Sendable>) async -> GroupResult#>
)
```
— 28:18
The third argument is a closure that is passed an inout task group, and it’s even async so that you can perform asynchronous work in the process of trying to add tasks to the group:

```swift
await withTaskGroup(of: Int.self) { (group: inout TaskGroup<Int>) async in
}
```
— 28:22
Then in this closure you can add tasks to the group via the addTask method: group.addTask(operation: <#() async -> Int#>) And you can add as many tasks as you want to the group.
— 28:29
Note that the task we are adding is an async function that takes no arguments but must return an integer. It knows that an integer must be returned because the group is of type TaskGroup<Int> , and that was determined by the fact that we specified we wanted a task group of Int . So this is a bit different from the thread example: we are not mutating some shared mutable state, but rather just returning the single value we procure from the task.
— 28:47
So, let’s loop over 1,000 integers and add a task that sleeps for a second and then returns an integer: for n in 1...1_000 { group.addTask { try? await Task.sleep(nanoseconds: NSEC_PER_SEC) return n } }
— 29:04
All of these tasks will be run in parallel, and we can wait for them all to finish: await group.waitForAll()
— 29:17
Or even better, we can iterate over their outputs, so as soon as one finishes we will get access to the value it emitted. Remember that since the tasks are run in parallel there is no guarantee of the order they will emit, and this can be an important distinction.
— 29:37
The way we get access to each child task’s output as it is emitted is to iterate over the group using for await : var sum = 0 for await int in group { sum += int } return sum
— 29:53
The task group conforms to a protocol known as AsyncSequence , which is analogous to the Sequence protocol in Swift, except its next() method is allowed to suspend to perform asynchronous work. This is what allows for the for await syntax.
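To make that concrete, here is a minimal hand-rolled conformance of our own (not from the episode): an async sequence that counts up to a limit, and whose next() could suspend if it needed to.

```swift
// Illustrative AsyncSequence: emits 1 through `limit`.
struct UpTo: AsyncSequence {
    typealias Element = Int
    let limit: Int

    struct AsyncIterator: AsyncIteratorProtocol {
        var current = 1
        let limit: Int
        // Because next() is async it is allowed to suspend here,
        // e.g. to await a network response before producing a value.
        mutating func next() async -> Int? {
            guard current <= limit else { return nil }
            defer { current += 1 }
            return current
        }
    }

    func makeAsyncIterator() -> AsyncIterator {
        AsyncIterator(limit: limit)
    }
}
```

Such a sequence is consumed with the same syntax used on the task group: for await n in UpTo(limit: 3) { print(n) }.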
— 30:09
This is now compiling: let sum = await withTaskGroup(of: Int.self, returning: Int.self) { group in … } Thread.sleep(forTimeInterval: 2) print(sum)
— 30:30
And if we run it we get something different from our threaded code: 500500
— 30:35
But if we run it again we get the same answer: 500500
— 30:40
And in fact this is the correct answer. It can be shown that the sum of integers from 1 to n is equal to “n*(n+1)/2”, and so for 1,000 that is 1000*1001/2, which is exactly 500,500.
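The closed form is easy to sanity-check against a brute-force sum:

```swift
// Gauss's formula: 1 + 2 + … + n == n * (n + 1) / 2.
func gaussSum(_ n: Int) -> Int {
    n * (n + 1) / 2
}

let bruteForce = (1...1_000).reduce(0, +)
print(gaussSum(1_000), bruteForce) // 500500 500500
```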
— 31:01
So this code works without any race conditions and we didn’t even have to introduce an actor in order to isolate access to shared mutable state. Thanks to the way task group was designed we get the ability to accumulate the results of 1,000 tasks in a very simple manner.
— 31:25
Task groups also participate in cooperative cancellation in a deep way. If the asynchronous context that started the task group is cancelled, then we can check the isCancelled flag inside the task at any point to short-circuit later logic. We can even use a different method on group to completely bypass adding the task if it detects the parent task has already been cancelled: group.addTaskUnlessCancelled { … }
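As a small sketch of that method’s behavior (our own example, not episode code), we can stand in for a cancelled parent by calling cancelAll() on the group before trying to add work; addTaskUnlessCancelled then declines to spawn the child and returns false:

```swift
import Foundation

// Once the group is cancelled, addTaskUnlessCancelled adds nothing
// and reports that via its Bool result.
func tryAddAfterCancel() async -> Bool {
    await withTaskGroup(of: Int.self, returning: Bool.self) { group in
        group.cancelAll() // stand-in for the parent task being cancelled
        return group.addTaskUnlessCancelled { 42 }
    }
}

let done = DispatchSemaphore(value: 0)
Task {
    let added = await tryAddAfterCancel()
    print("task added:", added) // task added: false
    done.signal()
}
done.wait()
```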
— 31:40
However, if you want cancellation to “just work” automatically like we have seen with throwing asynchronous units of work, then we have to switch to a throwing task group: try await withThrowingTaskGroup(of: Int.self) { group -> Int in … }
— 31:59
And now our sleep can try so that it throws if it detects cancellation: group.addTask { try await Task.sleep(nanoseconds: NSEC_PER_SEC) return n }
— 32:02
And then we can wrap the work up in a task so that we can cancel it after a small amount of time: let sum = try await withThrowingTaskGroup(of: Int.self, returning: Int.self) { group in … } print("sum", sum)
— 32:06
And if the task that spins up this task group is cancelled, cancellation trickles down to all the child tasks.
— 32:23
So task groups have allowed us to spin up a dynamic number of tasks to perform lots of work in parallel, and then gather all of the output from those tasks into a final accumulated result. And it’s important to note that this construct is still very much in the world of “structured programming” because this code reads linearly from top-to-bottom. Sure, we have to split the work up into two steps due to the design of task group, where we first add all the tasks and then later combine all the tasks’ output into a single output, but it still reads linearly.
— 32:53
So, it’s pretty amazing to see the tools that Swift comes with to keep us in the structured programming world even though we are doing asynchronous and concurrent work. As we have seen, there are some tools out there that eject us from the world of structured programming, such as when we spin up a new Task from scratch, but we should strive to stay in the structured world for as long as possible. As long as we are using async functions with await statements, or async let s, or, as we’ve just seen, task groups, we can be confident that we will remain in the structured world.
— 33:22
The tricky part is what do we do when we don’t have an asynchronous context available? It seems like the only choice we have is to spin up a task, which as we now know will not be structured with respect to the code that starts up the task.
— 33:31
Well, unfortunately in such cases we really have no alternative but to spin up a new unstructured task. However, as the language and frameworks mature, there will be fewer situations in which we need to do this. With each release of Swift and iOS, Apple is adding more async-friendly APIs that provide a structured, asynchronous context for us to work in so that we can perform structured concurrent work.
— 33:56
For example, SwiftUI now exposes a view modifier that allows running an asynchronous task when the view appears, and the task will be cancelled when the view disappears: import SwiftUI Text("Hi") .task { let sum = try? await withThrowingTaskGroup( of: Int.self, returning: Int.self ) { group in … } print("sum", sum) }
— 34:38
As long as we use only the tools of structured programming we can be sure that cancellation of this top level task will trickle down to all the child tasks, including the database query and network request that are run in parallel inside the response function.
— 34:47
As another example, Swift also allows us to implement the entry point of executables in such a way that they are automatically provided with an asynchronous context. As we noted before, we can’t perform asynchronous work at the top-level of the main.swift file: try await Task.sleep(nanoseconds: NSEC_PER_SEC) ‘async’ call in a function that does not support concurrency
— 35:15
This will be allowed in Swift 5.7, but till then, we can refactor the entry point into a struct that is annotated with @main and has a single static function that is invoked when the executable starts, and that function is allowed to be async and throwing: @main struct Main { static func main() async throws { try await Task.sleep(nanoseconds: NSEC_PER_SEC) print("done!") } } done! Program ended with exit code: 0
— 35:53
The cool thing about immediately having an asynchronous context available, and that context defining the lifetime of the executable, is that we no longer need to add sleeps to the main thread so that work can be performed. That was hacky and imprecise. Now we can just sequentially run one async unit of work after another, and the executable will complete as soon as all work is done.
— 36:11
These two examples show that as Apple’s frameworks become more deeply integrated with Swift’s concurrency tools you will seldom, if ever, have to spin up unstructured tasks. In fact, we highly, highly recommend that you think long and hard about ways to avoid doing so. If you find yourself creating a new task it is worth thinking about other ways you could have an asynchronous context provided to you from a parent scope. Sometimes it may not be possible, but if you do figure it out, it just means that your code will read more easily from top-to-bottom, and cancellation will magically just work as you expect. Unstructured concurrency
— 36:44
So, although we prefer to remain in the structured programming world for as long as possible, we do know that sometimes it’s just not going to be possible. And although spinning up a new task does eject you from the structured programming world, it does inherit some things from the parent context that make it easier to use and more understandable. Let’s take a moment to explore these topics so that we can wield unstructured tasks to the best of our ability.
— 37:30
First, as we’ve mentioned before, unstructured tasks inherit the task locals from the current task context: enum MyLocals { @TaskLocal static var id: Int! } print("before:", MyLocals.id) MyLocals.$id.withValue(42) { print("withValue:", MyLocals.id!) Task { print("Task:", MyLocals.id!) } } print("after:", MyLocals.id) before: nil withValue: 42 after: nil Task: 42
— 37:48
This is incredibly powerful and allows us to push data deep inside our application without having to literally pass it through every layer of function, method or initializer.
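For instance (a sketch of our own, with a hypothetical requestID local), a function several calls deep can read the value without any of the intermediate functions threading it through:

```swift
// Hypothetical task local; none of the functions below take it as a parameter.
enum Locals {
    @TaskLocal static var requestID: String?
}

func outer() -> String { middle() }
func middle() -> String { inner() }
func inner() -> String {
    // Reads whatever the nearest enclosing withValue scope bound.
    Locals.requestID ?? "none"
}

let seen = Locals.$requestID.withValue("abc-123") { outer() }
print(seen)    // abc-123
print(outer()) // none (outside the withValue scope)
```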
— 37:59
Tasks inherit other things too, such as priority. We can see this by nesting a few tasks and printing out the tasks’ priorities: print(Task.currentPriority) Task(priority: .low) { print(Task.currentPriority) Task { print(Task.currentPriority) } } 33 17 17
— 38:24
This shows that the outer context has priority 33, which is high, while the low-priority task and the task nested inside it both have priority 17, which is low. This means the innermost task inherited its priority from the task that created it, not from the outer context.
— 38:32
So now we see that unstructured tasks inherit the task locals and priority of the caller. There is a third thing tasks inherit, and that’s the actor context of the caller. To understand what this means, let’s go back to the counter actor we used when exploring data races: actor Counter { var count = 0 func increment() { self.count += 1 } func decrement() { self.count -= 1 } }
— 38:48
Suppose we wanted to add a really silly feature to this counter so that if you decrement below 0 it will increment back up, but will do so after a small delay. We could of course make decrement be async and then add a small sleep if we detect it dips below 0 before adding 1: func decrement() async throws { self.count -= 1 try await Task.sleep(nanoseconds: NSEC_PER_SEC/2) if self.count < 0 { self.count += 1 } }
— 39:23
However, this may not actually be what we want. By adding the sleep directly to decrement we would force the caller to suspend for the half second it takes the actor to perform its adjustment. We may instead want the caller to be able to breeze on by after decrementing, and then sometime later the actor automatically makes its adjustment.
— 39:41
This sounds like the perfect use case for an unstructured task. We specifically want to leave the structured world so that we can start a new execution flow for handling this logic. And so we may try to do it like this: func decrement() { self.count -= 1 Task { try await Task.sleep(nanoseconds: NSEC_PER_SEC/2) if self.count < 0 { self.count += 1 } } }
— 40:00
And this compiles. But perhaps it’s a little surprising that it compiles.
— 40:05
After all, so far whenever we have tried accessing methods and properties on actors we have been forced to await them: let counter = Counter() counter.count counter.decrement() Expression is ‘async’ but is not marked with ’await’ Expression is ‘async’ but is not marked with ‘await’
— 40:17
The only exception was when writing code directly in the actor: func increment() { self.count += 1 }
— 40:20
Now technically the task code is inside the actor, but as we’ve noted before, the closure used in the task initializer is an @escaping and @Sendable closure, which means for all intents and purposes it really is its own execution context. How on earth is it possible that we are able to reach out from this escaped context and access the actor’s properties without having to await for synchronization?
— 40:41
This is possible specifically because this task inherits its actor’s context. It is allowed to interact with the actor as if it were code written directly in a method on the actor, all without doing any awaiting. The isolation and synchronization are handled automatically for us. This is incredibly useful and important to understand, especially at times when we need to know what actor we are running on, such as when dealing with UI APIs.
— 41:04
We can even give this feature a spin by firing up 1,000 tasks to hammer on the decrement endpoint, and then after waiting a bit of time we can confirm that the count was restored back to 0: Task { let counter = Counter() for _ in 0..<workCount { Task { await counter.decrement() } } Thread.sleep(forTimeInterval: 1) print(await counter.count) } 0
— 41:23
This is very cool. There are a lot of opportunities for race conditions in this code, not only when we hammer on the decrement endpoint, but also once the small delay passes and we increment back up. But the compiler is keeping us in check that we are not accidentally accessing mutable data in a non-isolated way, and the actor synchronizes access to the data automatically, and we can write our code in a very natural way without worrying about locks.
— 41:47
So, now we know that tasks created with this initializer inherit their priority, task locals and actor from the caller, which means that even if they are unstructured they still do have nice properties that make them easier to understand.
— 42:00
There is another way to create tasks that fully detaches you from the current context. It doesn’t inherit the priority, task locals or actor: Task.detached { }
— 42:11
For example, if we use a detached task in our decrement method: func decrement() { self.count -= 1 Task.detached { try await Task.sleep(nanoseconds: NSEC_PER_SEC/2) if self.count < 0 { self.count += 1 } } }
— 42:14
We will see that it no longer compiles because it no longer operates in the same context as the Counter actor: Expression is ‘async’ but is not marked with ‘await’ Actor-isolated property ‘count’ can not be mutated from a Sendable closure
— 42:20
The only way to fix this is to invoke the increment method and await it: func decrement() { self.count -= 1 Task.detached { try await Task.sleep(nanoseconds: NSEC_PER_SEC/2) if await self.count < 0 { await self.increment() } } }
— 42:32
Detached tasks also do not inherit priority, as can be seen here: print(Task.currentPriority) Task(priority: .low) { print(Task.currentPriority) Task.detached { print(Task.currentPriority) } } try await Task.sleep(nanoseconds: NSEC_PER_SEC) TaskPriority(rawValue: 33) TaskPriority(rawValue: 17) TaskPriority(rawValue: 21)
— 42:46
The detached task has a priority of 21, which is the default priority.
— 42:46
Detached tasks also do not inherit task local values: enum MyLocals { @TaskLocal static var id: Int! } print("before:", MyLocals.id) MyLocals.$id.withValue(42) { print("withValue:", MyLocals.id!) Task.detached { print("Task:", MyLocals.id!) } } print("after:", MyLocals.id) before: nil withValue: 42 after: nil Thread 2: Swift runtime failure: Unexpectedly found nil while unwrapping an Optional value
— 43:13
So, we now see that tasks created with the regular Task initializer do exit us from the structured world, but bring along a lot of the caller’s niceties too, such as priority, task locals and actor. On the other hand, detached tasks also exit the world of structured programming, but they leave behind their priority, task locals and actor, and start with a clean slate.
— 43:34
It’s worth noting that even some of the tools for structured concurrency in Swift do not inherit everything from the current task. In particular, @Sendable closures, async let , and task groups do not inherit the current actor context.
— 43:44
For example, suppose that we did something silly like accessing an actor’s property inside a synchronous closure that is executed immediately: func increment() { self.count += 1 let count = { self.count }() }
— 44:05
This is completely fine to do. However, if we force the closure to be @Sendable , which means the compiler must assume that it will be used in a concurrent fashion at some point, we get a compiler error. func increment() { … let count = { @Sendable in self.count }() } Actor-isolated property ‘count’ can not be referenced from a Sendable closure
— 44:17
The compiler no longer thinks we are accessing self.count in an isolated manner because the sendable closure is no longer being operated in the context of the Counter actor. In order to do this we must provide an asynchronous context so that we can await access to self.count : func increment() async { … let count = await { @Sendable in await self.count }() }
— 44:41
Now you may be wondering how it is that tasks seem to inherit the actor context. After all, the initializer on Task takes a @Sendable closure as its argument.
— 44:50
It turns out that Task makes use of special compiler attributes that are not exposed publicly and that specifically make it inherit the actor context: public init( priority: TaskPriority? = nil, @_inheritActorContext @_implicitSelfCapture operation: __owned @Sendable @escaping () async -> Success ) {
— 45:11
So the fact that tasks inherit actor context is just a special feature of tasks.
— 45:15
The async let construct also loses its actor context. To see this we can modify this perfectly valid code: let count = { self.count }() to be an async let : async let count = { self.count }() Actor-isolated property ‘count’ can not be referenced from a non-isolated context
— 45:30
We get a compiler error saying that self.count is being accessed in a non-isolated way. Since async let ’s whole purpose is to run code concurrently, this closure is being implicitly updated to be @Sendable . The only way to fix this is to await accessing the count so that the actor can isolate access: func increment() async { … async let count = { await self.count }() }
— 45:56
Task groups also do not inherit their actor context. To see this, let’s implement a silly method on Counter that spins up 1,000 tasks, and randomly either increments or decrements the counter: func increment() async { … await withTaskGroup(of: Void.self) { group in for _ in 1...1_000 { group.addTask { if Bool.random() { self.count += 1 } else { self.count -= 1 } } } } } Actor-isolated property ‘count’ can not be mutated from a Sendable closure Actor-isolated property ‘count’ can not be mutated from a Sendable closure
— 46:34
This does not compile because the closure passed to addTask does not inherit the current actor. This means we need to use the child task’s asynchronous context to interact with the actor: group.addTask { if Bool.random() { await self.increment() } else { await self.decrement() } }
— 46:53
Although task groups don’t inherit the actor, they do inherit priority and task locals, but we won’t show that right now. @MainActor
— 46:59
So we are now all very familiar with the concepts of structured and unstructured programming. Swift is mostly a structured programming language, and the most common time we leave the structured world is when dealing with escaping closures. But luckily for us Swift also provides tools for writing asynchronous and concurrent code that is structured. This means we have a well-defined notion of scoping in our code, and asynchronous tasks have a lifetime tied to the scope, which makes for very readable and understandable code.
— 47:26
There should only be a few times when we need to reach for unstructured concurrency tools, and in the fullness of time we may never have to reach for those tools.
— 47:34
There’s one last topic we want to cover. Although using Swift’s concurrency tools largely prevents us from having to think about things like threads, there are still a few times it’s important. Sometimes we need to interact with a system that requires access to happen on a particular thread or thread pool. By far the most common example of this is UI work, which typically requires us to be on the main thread.
— 47:58
Let’s look at why this is surprisingly complicated when it comes to asynchrony in Swift, and see what tools Swift provides to fix the problem.
— 48:08
Given all the pitfalls we saw with unstructured programming, let’s first go back to using a structured entry point into the application: @main struct Main { static func main() async throws { } }
— 48:29
As we’ve seen before, it’s possible for tasks to be resumed after a suspension point on pretty much any thread. It doesn’t have to necessarily be the one you were on before the suspension point. Usually when working with tasks, you shouldn’t even need to think about what thread you’re working on, but what do we do for those times that we really do need to execute on a particular thread, like the main thread?
— 48:50
For example, suppose we had an observable object that held an integer counter as well as an async endpoint that simulates some kind of complex computation, for which we will just use a sleep for now: class ViewModel: ObservableObject { @Published var count = 0 func perform() async throws { try await Task.sleep(nanoseconds: NSEC_PER_SEC) self.count = .random(in: 1...1_000) } }
— 48:59
Sadly this compiles just fine with no warnings even though it is completely wrong. The problem is that we have no idea what thread is going to run the line that mutates self.count , but SwiftUI requires that published properties be mutated on the main thread: class ViewModel: ObservableObject { @Published var count = 0 func perform() async throws { try await Task.sleep(nanoseconds: NSEC_PER_SEC) if !Thread.current.isMainThread { print("🟣 Mutating @Published property on non-main thread.") } self.count = .random(in: 1...1_000) } } Task { let viewModel = ViewModel() try await viewModel.perform() } 🟣 Mutating @Published property on non-main thread.
— 49:30
So we may think we need to dispatch back to the main queue since that is the tool we are familiar with: class ViewModel: ObservableObject { @Published var count = 0 func perform() async throws { try await Task.sleep(nanoseconds: NSEC_PER_SEC) DispatchQueue.main.async { self.count = .random(in: 1...1_000) } } }
— 50:06
However this causes a warning which someday will be an error: Capture of ‘self’ with non-sendable type ‘ViewModel’ in a @Sendable closure
— 50:08
The problem is that ViewModel is not a sendable type, and as far as we know cannot easily be made into one since it’s a class with mutable data. Further, the async method on DispatchQueue requires the closure you pass to it to be sendable, and hence it cannot capture non-sendable things.
— 50:23
So, this is not safe concurrent code. But there are other problems. By mixing in a new form of concurrency that is different from async we are ejecting ourselves out of a lot of the niceties that Swift’s native concurrency tools give us.
— 50:41
For example, we lose all of our task locals by dispatching directly to the main queue: MyLocals.$id.withValue(42) { DispatchQueue.main.async { … self.count = MyLocals.id } } Thread 1: Swift runtime failure: Unexpectedly found nil while unwrapping an Optional value
— 51:07
We get a crash! That’s a big bummer. The reason this is happening is that the task local is only set for the lifetime of the withValue closure, which remember is non-escaping and so is called immediately and synchronously.
— 51:17
We can see this more concretely by adding another print statement: MyLocals.$id.withValue(42) { defer { print("withValue scope ended") } DispatchQueue.main.async { … print("On the main thread") … } } withValue scope ended On the main thread
— 51:30
So we see that we are trying to access the task local after the scope of the withValue function has already ended.
— 51:41
All of this is reason enough for us to look for another way of forcing work to be done on the main thread. Just as threads have the concept of a “main thread”, and dispatch queues have the concept of a “main queue”, actors have the concept of a “main actor” and it’s an actor type in the standard library literally called MainActor : @globalActor final public actor MainActor: GlobalActor { … }
— 52:00
Let’s ignore all the global actor stuff for a moment and just focus on the fact that it is indeed an actor.
— 52:06
The main actor type comes with a special endpoint for running a synchronous closure on the main thread: MainActor.run { … } Expression is ‘async’ but is not marked with ‘await’
— 52:24
However, in order to invoke this static method we must await it because it needs to coordinate with the isolated main thread context and that could take some time if someone else is already executing work on the main thread: await MainActor.run { … }
— 52:43
But now we have a warning where we mutate the count: await MainActor.run { self.count = .random(in: 1...1_000) } Capture of ‘self’ with non-sendable type ‘ViewModel’ in a @Sendable closure
— 52:47
While it is true that MainActor.run is the best way to synchronously run code on the main thread, it still is not appropriate to pass non-sendable data across this boundary. If you invoked MainActor.run multiple times you have a chance for a race condition: await MainActor.run { self.count = .random(in: 1...1_000) } await MainActor.run { self.count = .random(in: 1...1_000) }
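The same interleaving concern applies to any actor: two separate awaits open a window for other work to slip in between them, whereas a single synchronous call keeps the whole update atomic with respect to the actor. A sketch with a plain actor of our own (not the episode’s view model):

```swift
import Foundation

actor Store {
    var count = 0
    // Both mutations happen in one hop to the actor; no other task
    // can interleave between them.
    func resetAndIncrement() {
        count = 0
        count += 1
    }
}

let store = Store()
let done = DispatchSemaphore(value: 0)
Task {
    // Compare: two separate `await` calls into the actor would allow
    // other actor work to run in the gap between them.
    await store.resetAndIncrement()
    let result = await store.count
    print(result) // 1
    done.signal()
}
done.wait()
```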
— 53:11
There is another way to use MainActor that mitigates these issues. You can actually use MainActor as an attribute for decorating an entire function or method and then every line in that scope will be executed on the main actor, and hence the main thread: @MainActor func perform() async throws { try await Task.sleep(nanoseconds: NSEC_PER_SEC) MyLocals.$id.withValue(42) { defer { print("withValue scope ended") } if !Thread.current.isMainThread { print("🟣 Mutating @Published property on non-main thread.") } self.count = MyLocals.id print(self.count) } } 42 withValue scope ended
— 54:08
Now it’s important to note that just because the function is marked as @MainActor it doesn’t mean everything is performed on the main thread. Any suspension points are still allowed to be executed by other actors, and hence other threads. We can perform completely asynchronous and concurrent work in this function even though the whole thing is marked as @MainActor .
— 54:28
Also, sleeping in this function does not block up the main thread. Other tasks are allowed to execute on the main actor while we are sleeping, and we will see this very explicitly in a moment.
— 54:34
However, if we perform intense, blocking CPU work on the main thread we will block it up, like say computing the 2,000,000th prime: @MainActor func perform() async throws { nthPrime(2_000_000) … }
— 54:49
Because there is no suspension point here, there is no way for us to unblock the main thread and let other tasks execute.
— 54:51
You can even mark the entire view model class as MainActor : @MainActor class ViewModel: ObservableObject { … }
— 54:54
This implicitly marks all initializers, methods and computed properties as @MainActor . It even makes the entire class Sendable , though at first that doesn’t seem safe at all for use in concurrent contexts. After all, it is not a final class and it contains mutable data.
— 55:10
However, by declaring it as @MainActor we know that all interactions with it will be serialized to the main thread, and that makes it safe to use across concurrent boundaries.
— 55:19
We can see this concretely by trying to access self from a @Sendable closure context, such as a new task: @MainActor class ViewModel: ObservableObject { @Published var count = 0 func perform() async throws { Task { self.count += 1 } } }
— 55:28
Without the @MainActor attribute this class is no longer sendable, and hence we get a warning: class ViewModel: ObservableObject { @Published var count = 0 func perform() async throws { Task { self.count += 1 } } } Capture of ‘self’ with non-sendable type ‘ViewModel’ in a @Sendable closure
— 55:33
Even when you are in a @MainActor context there is still a way to escape it. Recall that tasks created with its initializer automatically inherit the actor context, and so we expect this to print the main thread: @MainActor func perform() async throws { Task { print(Thread.current) } } <_NSMainThread: 0x10600a9b0>{number = 1, name = main}
— 55:50
And it does.
— 55:51
However, if we detach a task then we lose its actor context, and it goes back to executing on a background thread: @MainActor func perform() async throws { Task.detached { print(Thread.current) } } <NSThread: 0x1061040e0>{number = 2, name = (null)}
— 55:58
There are other concurrency constructs that do not inherit the actor context, such as async let and task groups. If we were to introduce these tools into code running on the main actor, their work would run on non-main threads.
— 56:10
With all of these tools in our arsenal we can finally demonstrate something that at first seems counterintuitive. Although throughout all 5 episodes of this series we have seemed to obsess over threads, constantly printing them out in order to see what work is running where, it turns out you can go really, really far without ever thinking about threads. In fact, you can do pretty much everything we’ve discussed on just a single thread.
— 56:33
That may seem bizarre, but there are actually a lot of environments that are naturally single-threaded, such as WebAssembly and some Raspberry Pis, and so having a story for asynchrony and concurrency in such situations can be extremely powerful.
— 56:49
To explore this, let’s fire up a bunch of concurrent work, and force it all to run on the main thread. We’ll start up a group with one task that prints every quarter second: await withThrowingTaskGroup(of: Void.self) { group in group.addTask { @MainActor in while true { try await Task.sleep(nanoseconds: NSEC_PER_SEC / 4) print(Thread.current, "Timer ticked") } } }
— 57:25
Then another task to compute the 2,000,000th prime, which takes quite a bit of time to do: group.addTask { @MainActor in nthPrime(2_000_000) }
— 57:33
And then tasks for downloading 1,000 large files: for n in 0..<workCount { group.addTask { @MainActor in _ = try await URLSession.shared .data(from: .init(string: "http://ipv4.download.thinkbroadband.com/1MB.zip")!) print(Thread.current, "Download finished", n) } }
— 57:54
Well, if we run this we will see that nothing prints for a long time. About 6 seconds later we will finally get some logs in the console, and that’s because the nthPrime function is super intense and ties up the main thread for 6 seconds.
— 58:25
Let’s improve it so that it is an async function that runs on the main thread, but every 1,000 prime checks we perform we will yield so that other tasks can do their work: @MainActor func asyncNthPrime(_ n: Int) async { let start = Date() var primeCount = 0 var prime = 2 while primeCount < n { defer { prime += 1 } if isPrime(prime) { primeCount += 1 } else if prime.isMultiple(of: 1_000) { await Task.yield() } } print( Thread.current, "\(n)th prime", prime-1, "time", Date().timeIntervalSince(start) ) }
— 58:54
And then we will update the group task to await: group.addTask { @MainActor in await asyncNthPrime(2_000_000) }
— 58:58
Now when we run we immediately get a bunch of logs, some for the timer ticks and others for the download finishing, all interleaved with each other. And then once the nth prime calculation finishes about 6 seconds later we get its answer printed to the console: … 32452843 time 6.753162980079651 …
— 59:18
This is pretty amazing. Conclusion
— 59:31
This now concludes the main topics that we wanted to talk about when it comes to concurrency in Swift. We honestly did not expect to spend 5 episodes on this topic, and certainly didn’t think we would spend 2 of those episodes discussing tools from over a decade ago.
— 59:43
But we felt that it was necessary to understand how we worked with concurrency in the past so that we could understand why these new tools were designed the way they were.
— 59:51
As we have seen, it has been really important for the notion of asynchrony to be baked directly into the compiler rather than bolted on as a library, so that the compiler can catch you when you do something unreasonable. It’s also been important to disentangle the notion of asynchrony from the notion of threads. Micro-managing thread resources at the application level is a losing game that can easily lead to thread explosion and/or thread starvation. And it’s been important to have a first class notion of data isolation so that the compiler lets you know when you are using mutable data in an unprotected way.
— 1:00:25
And even with everything we have covered there is so much more that we could cover. There is the concept of continuations which allow you to bridge non-async/await code to the async/await world. This is important for interfacing with legacy systems, but over time should be less necessary as more frameworks and APIs embrace async/await.
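For example, a completion-handler API can be wrapped with `withCheckedContinuation`. This is a minimal sketch, and the callback-based `fetchNumber` is a hypothetical legacy function standing in for older framework code:

```swift
import Foundation

// A hypothetical legacy, callback-based API.
func fetchNumber(completion: @escaping (Int) -> Void) {
  DispatchQueue.global().async {
    completion(42)
  }
}

// Bridging it into async/await: the continuation is resumed exactly once,
// handing the callback's value back to the suspended async caller.
func fetchNumber() async -> Int {
  await withCheckedContinuation { continuation in
    fetchNumber { number in
      continuation.resume(returning: number)
    }
  }
}
```

The "checked" variant verifies at runtime that the continuation is resumed exactly once, which catches the most common bridging mistakes.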
— 1:00:44
There’s also AsyncSequence, which we briefly touched on when discussing task groups, but that barely scratches the surface. There’s a lot more to say, and in many ways async sequences will probably largely replace the need for Combine, but we will have to talk about that in another episode.
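As a small taste, `AsyncStream` makes it easy to build your own async sequence. This is a minimal sketch under the assumption of a fixed, positive `count`; in practice the continuation would typically be fed from a timer or delegate callback:

```swift
// A minimal AsyncStream that yields `count` "ticks" and then finishes.
func ticks(count: Int) -> AsyncStream<Int> {
  AsyncStream { continuation in
    for tick in 1...count {
      continuation.yield(tick)
    }
    continuation.finish()
  }
}
```

Consuming it looks just like iterating any sequence, but with suspension points: `for await tick in ticks(count: 3) { print(tick) }`.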
— 1:01:00
And then there’s the concept of executors, which are kind of like operation queues and dispatch queues: they determine how work is actually executed, but Swift does a really good job of hiding those details from us. For the most part our concurrent code runs on the global executor and our main actor code runs on the main executor, but we don’t actually have to interact with executors directly.
— 1:01:20
So that concludes this series. If you remember all the way back in the first episode, we mentioned that the reason we wanted to discuss concurrency in depth is because we are finally ready to more deeply integrate the Composable Architecture with Swift’s concurrency tools, and we’ll be tackling that soon.
— 1:01:36
Until next time!

References

- NSOperation — Mattt • Jul 14, 2014
  In life, there’s always work to be done. Every day brings with it a steady stream of tasks and chores to fill the working hours of our existence. Productivity is, as in life as it is in programming, a matter of scheduling and prioritizing and multi-tasking work in order to keep up appearances.
  https://nshipster.com/nsoperation/
- libdispatch efficiency tips — Thomas Clement • Apr 26, 2018
  The libdispatch is one of the most misused API due to the way it was presented to us when it was introduced and for many years after that, and due to the confusing documentation and API. This page is a compilation of important things to know if you’re going to use this library. Many references are available at the end of this document pointing to comments from Apple’s very own libdispatch maintainer (Pierre Habouzit).
  https://gist.github.com/tclementdev/6af616354912b0347cdf6db159c37057
- Modernizing Grand Central Dispatch Usage — Apple • Jun 5, 2017
  macOS 10.13 and iOS 11 have reinvented how Grand Central Dispatch and the Darwin kernel collaborate, enabling your applications to run concurrent workloads more efficiently. Learn how to modernize your code to take advantage of these improvements and make optimal use of hardware resources.
  https://developer.apple.com/videos/play/wwdc2017/706/
- What went wrong with the libdispatch. A tale of caution for the future of concurrency. — Thomas Clement • Nov 23, 2020
  https://tclementdev.com/posts/what_went_wrong_with_the_libdispatch.html
- Introducing Swift Atomics — Karoy Lorentey • Oct 1, 2020
  I’m delighted to announce Swift Atomics, a new open source package that enables direct use of low-level atomic operations in Swift code. The goal of this library is to enable intrepid systems programmers to start building synchronization constructs (such as concurrent data structures) directly in Swift.
  https://www.swift.org/blog/swift-atomics/

Downloads

- Sample code: 0194-concurrency-pt5