Video #191: Concurrency's Present: Queues and Combine
Episode: Video #191 Date: May 30, 2022 Access: Members Only 🔒 URL: https://www.pointfree.co/episodes/ep191-concurrency-s-present-queues-and-combine

Description
Before developing Swift’s modern concurrency tools, Apple improved upon threads with several other abstractions, including operation queues, Grand Central Dispatch, and Combine. Let’s see what these newer tools brought to the table.
Video
Cloudflare Stream video ID: 4fe3a1bbed37628e1498cd43a29e9799 Local file: video_191_concurrency-s-present-queues-and-combine.mp4 *(download with --video 191)*
References
- Discussions
- NSOperation
- libdispatch efficiency tips
- Modernizing Grand Central Dispatch Usage
- What went wrong with the libdispatch. A tale of caution for the future of concurrency.
- Introducing Swift Atomics
- 0191-concurrency-pt2
- Brandon Williams
- Stephen Celis
- Mastodon
- GitHub
- CC BY-NC-SA 4.0
- source code
- MIT License
Transcript
— 0:05
So even using _read and _modify cannot fix this synchronization problem. It simply is not possible to lock property mutations in this style, which is why we need to either create one-off methods for mutating state or leverage the modify method.
— 0:19
This goes to show just how tricky multithreading and data races can be. What seems reasonable can often be subtly wrong and lead to incorrect results. The main problem with locks is that they are fully decoupled from the concurrency tool we are using, which in this case is threads. Ideally the locking mechanism would have intimate knowledge of how we are running multiple units of work at once in order to guarantee synchronization. This is what Swift’s new concurrency tools provide for us, but before we can discuss that there are a few more things to cover.
— 0:51
So, Apple’s Thread class was the primary abstraction people used on Apple’s platforms to unlock asynchrony and concurrency back in the day. It comes with some interesting features, such as priority, cancellation, thread dictionaries and more, but threads also fall short in many ways:
— 1:07
Threads don’t support the notion of child threads, so things like priority, cancellation and thread dictionaries don’t trickle down to threads created from other threads.
— 1:17
It’s easy to accidentally explode the number of threads being used.
— 1:21
It’s hard to coordinate between threads.
— 1:23
Threaded code looks very different from unthreaded code.
— 1:26
And the tools for synchronizing between threads are crude.
— 1:31
Now there’s a good chance that most of our viewers have never used threads directly in their codebase, because ever since macOS Leopard, released 15 years ago, Apple has built abstractions on top of threads to help fix a lot of the problems we just uncovered. This includes operation queues, Grand Central Dispatch and even Combine. Let’s take a look at how those technologies improved upon threads, and see where they fall short.
Operation queues
— 1:59
Let’s start with operation queues, which were introduced in macOS Leopard and the first iOS SDK for iOS 2.0. We are only going to briefly talk about operation queues because they never gained as much popularity as GCD or Combine, but it’s still interesting to see how they tried to solve some of Thread’s problems.
— 2:57
Right off the bat, operations are different from threads in that they separate the concept of how work is performed from the actual work being performed. With threads that was all smashed into one single concept: you simultaneously created a thread and provided a closure of work to perform on that thread.
— 3:17
With operations you first create an operation queue, which acts as the arbiter of execution for many units of work:

    let queue = OperationQueue()
— 3:26
Once the queue is created we can add operations to the queue to be performed:

    queue.addOperation {
      print(Thread.current)
    }

    <NSThread: 0x100904c20>{number = 2, name = (null)}
— 3:36
And we see that already a thread has been created to run this work asynchronously.
— 3:45
You can fire off a bunch of operations at once by adding them to a queue:

    queue.addOperation { print("1", Thread.current) }
    queue.addOperation { print("2", Thread.current) }
    queue.addOperation { print("3", Thread.current) }
    queue.addOperation { print("4", Thread.current) }
    queue.addOperation { print("5", Thread.current) }
— 3:54
And, similar to threads, we’ll see that there is no guarantee of the order in which the queue executes this work:

    3 <NSThread: 0x100904c20>{number = 2, name = (null)}
    2 <NSThread: 0x100904c20>{number = 4, name = (null)}
    4 <NSThread: 0x100904c20>{number = 3, name = (null)}
    5 <NSThread: 0x100904c20>{number = 2, name = (null)}
    1 <NSThread: 0x100904c20>{number = 5, name = (null)}

Here we can see that the operation queue managed to spin up extra threads so that it could perform all of these units of work in parallel.
— 4:36
Operations and operation queues have a lot of the same features that threads do. For example, you can specify the priority of an operation. To do so we need a handle on the actual operation before adding it to the queue, so we have to construct an instance of Operation.
— 4:55
The Operation type is technically a class that is meant to be subclassed, but Apple’s frameworks ship with a few convenience subclasses that we can use. For now we will just use a BlockOperation to specify an operation that is defined by a closure:

    let operation = BlockOperation {
      print(Thread.current)
    }
    queue.addOperation(operation)
— 5:19
Then we can set the “priority” of this operation by mutating its qualityOfService field:

    operation.qualityOfService = .background
— 5:27
This is a less granular notion of priority than what we saw with threads, where priority is specified as a double between 0 and 1.
— 5:57
Operations also support cancellation. For example, we could make our operation sleep the thread in order to take a bit longer to finish:

    let operation = BlockOperation {
      Thread.sleep(forTimeInterval: 1)
      print(Thread.current)
    }
— 6:08
And before that 1 second sleep is finished we could cancel the operation:

    Thread.sleep(forTimeInterval: 0.1)
    operation.cancel()
— 6:20
However, if we run this we will see that the print in the operation still executes. That’s because, like threads, operation cancellation is a cooperative endeavor. That is, it is up to us to be good citizens by regularly checking if the item has been cancelled so that we can short circuit the remaining work left to be done.
— 6:39
Unlike threads, there is no concept of “current operation” that we can use to check for cancellation. We need to have a handle on the actual operation in order to check for cancellation. This is easier to do when you subclass Operation, but for our block operation we need to first initialize it without the block so that we can reference it inside an execution block we add to it:

    let operation = BlockOperation()
    operation.addExecutionBlock { [unowned operation] in
      Thread.sleep(forTimeInterval: 1)
      guard !operation.isCancelled else {
        print("Cancelled!")
        return
      }
      print(Thread.current)
    }
— 7:51
Now when we run this it no longer prints the thread’s name because we get caught on the guard statement and early out.
— 8:03
It’s worth pointing out that although cancellation is cooperative, the cooperation is not deeply ingrained in the system. Here we cancelled the task after 0.1 seconds, but the thread is still going to sleep for the full 1 second before continuing. There is no way to interrupt that sleeping:

    operation.addExecutionBlock { [unowned operation] in
      let start = Date()
      defer { print("Finished in", Date().timeIntervalSince(start)) }

      Thread.sleep(forTimeInterval: 1)
      guard !operation.isCancelled else {
        print("Cancelled!")
        return
      }
      print(Thread.current)
    }

    Cancelled!
    Finished in 1.0988129377365112
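One way to make cancellation more responsive is to sleep in small slices and check isCancelled between them. This is a sketch of our own, not a pattern from the episode, but it shows how being a better cooperative citizen shortens the window between cancelling and stopping:

```swift
import Foundation

let queue = OperationQueue()
let start = Date()

// Sketch (our own pattern): sleep in 0.1-second slices and check
// `isCancelled` between them, so cancellation is noticed quickly
// instead of after a full 1-second sleep.
let operation = BlockOperation()
operation.addExecutionBlock { [unowned operation] in
  for _ in 0..<10 {
    guard !operation.isCancelled else {
      print("Cancelled!")
      return
    }
    Thread.sleep(forTimeInterval: 0.1)  // one slice of the 1-second wait
  }
  print(Thread.current)
}

queue.addOperation(operation)
Thread.sleep(forTimeInterval: 0.25)
operation.cancel()
queue.waitUntilAllOperationsAreFinished()

let elapsed = Date().timeIntervalSince(start)
print("Finished in", elapsed)  // well under the full second
```

Now the operation ends roughly one slice after the cancel call, rather than being forced to ride out the entire sleep.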
— 8:38
Strangely, operation queues do not support the idea of storage like threads have with thread dictionaries. There’s no way to set data on an operation queue or operation that is carried implicitly with the operation so that it can be accessed in deep parts of the application. We’re not entirely sure why, but that’s just how it is.
— 8:57
This means we can’t recreate the web server demo from the previous episode, where we needed to pass a request ID from the moment a thread was created down to deeper parts of the application.
— 9:08
But, operation queues do solve some of the problems that threads have.
— 9:13
For one thing operation queues allow you to coordinate operations in a lot more nuanced ways than threads do. They support the concept of dependencies, which allow you to start one operation after another finishes.
— 9:25
For example, we could have one operation that sleeps for a second, and then after that run another operation:

    let queue = OperationQueue()

    let operationA = BlockOperation {
      print("A")
      Thread.sleep(forTimeInterval: 1)
    }
    let operationB = BlockOperation {
      print("B")
    }
    operationB.addDependency(operationA)
    queue.addOperation(operationA)
    queue.addOperation(operationB)

    A
    B
— 10:18
This will print “A” immediately, but print “B” after a second delay. And if we run it multiple times it always prints the same thing.
— 10:30
This is a seemingly simple tool, but it allows you to express some quite complex dependency graphs. For example, what if we had another operation C that needs to wait for A, and another operation D that needs to wait for both B and C to finish:

    A ➡️ B
    ⬇️    ⬇️
    C ➡️ D
— 10:57
We can express this with operation dependencies like so:

    let operationA = BlockOperation {
      print("A")
      Thread.sleep(forTimeInterval: 1)
    }
    let operationB = BlockOperation {
      print("B")
    }
    let operationC = BlockOperation {
      print("C")
    }
    let operationD = BlockOperation {
      print("D")
    }

    operationB.addDependency(operationA)
    operationC.addDependency(operationA)
    operationD.addDependency(operationB)
    operationD.addDependency(operationC)

    queue.addOperation(operationA)
    queue.addOperation(operationB)
    queue.addOperation(operationC)
    queue.addOperation(operationD)

    A
    C
    B
    D
— 11:43
What’s really cool is that because there is no dependency between B and C, they can be run in parallel, and in fact if we run this a few times we will see that the order of B and C can change:

    A
    C
    B
    D
— 12:23
So it’s pretty cool that operation queues offer an API that allows us to express complex dependencies between our operations, and then the queue does the hard work of running that work sequentially or in parallel. If you remember from last episode, doing the equivalent with threads required us to poll inside a while loop to figure out when other threads finished.
— 12:43
Let’s look at another problem that threads had, which is thread explosion. If we wrote our code in a naive way, creating new threads for each little unit of work we wanted to perform, we could easily be led to a situation where thousands of threads are running at the same time. This causes the threads to compete with each other for time on the CPU, and can lead to thread starvation.
— 13:04
Operation queues can help with this too. Because the operation queue has a more global view of what work is currently in flight and what work is being submitted to it, it can be smarter in how it spins up new threads. If we submit 1,000 operations to the queue we will see that far fewer than 1,000 threads are used:

    for n in 0..<workCount {
      queue.addOperation {
        print(n, Thread.current)
      }
    }

    0 <NSThread: 0x100743f90>{number = 2, name = (null)}
    1 <NSThread: 0x10608ee30>{number = 3, name = (null)}
    7 <NSThread: 0x1066040e0>{number = 7, name = (null)}
    5 <NSThread: 0x1065042d0>{number = 8, name = (null)}
    9 <NSThread: 0x1066041f0>{number = 9, name = (null)}
    8 <NSThread: 0x100743f90>{number = 2, name = (null)}
    13 <NSThread: 0x106504240>{number = 10, name = (null)}
    …
    931 <NSThread: 0x100743f90>{number = 2, name = (null)}
    933 <NSThread: 0x106607b60>{number = 32, name = (null)}
    871 <NSThread: 0x106607700>{number = 25, name = (null)}
    997 <NSThread: 0x106225b30>{number = 34, name = (null)}
    937 <NSThread: 0x106409310>{number = 26, name = (null)}
    999 <NSThread: 0x100740f20>{number = 31, name = (null)}
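Operation queues also let you cap this pooling behavior explicitly via the maxConcurrentOperationCount property. As a small sketch of our own, setting it to 1 turns the queue into a serial queue that executes operations in submission order:

```swift
import Foundation

let queue = OperationQueue()
// Cap the queue at one operation at a time, turning it into a serial queue.
queue.maxConcurrentOperationCount = 1

var order: [Int] = []
for n in 1...5 {
  queue.addOperation {
    // Safe to mutate shared state here: only one operation runs at a time.
    order.append(n)
  }
}
queue.waitUntilAllOperationsAreFinished()
print(order)  // [1, 2, 3, 4, 5] — equal-priority operations run in submission order
```

Intermediate values (2, 3, …) give you a fixed-size pool of concurrent operations, which is another way to keep thread counts under control.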
— 14:18
So this is already a big win. Operation queues are behaving like the thread pool concept we discussed last episode, where instead of spinning up a new thread every time you want to perform some work, you ask a pool for a thread and then perform your work on that thread.
Operation queue problems
— 14:34
So operation queues seem to be quite powerful, and fix a lot of the problems we saw with bare threads, but they are not without their problems. For one thing, they still do not give us tools to perform our asynchronous work in a non-blocking way.
— 14:46
For example, if we wanted an operation that waited for a bit of time before doing its work, we have no choice but to sleep the thread:

    queue.addOperation {
      Thread.sleep(forTimeInterval: 1)
      print(n, Thread.current)
    }
— 14:54
This means we are holding up a thread just so that we can wait for some time to pass.
— 15:11
Second, operation queues do not allow for cooperation between asynchronous tasks. There is no way to give up our operation’s resources to another operation while we are waiting for something, such as a delay, timer or network request. We have no way to cooperate with other operations, other than just checking if we have been cancelled. And this leads to competition between operations, where operations must fight for time on the CPU, and intense operations will starve other operations of execution time.
— 15:41
We can see this in concrete terms if we simulate some intense CPU work inside each of those 1,000 operations:

    for n in 0..<workCount {
      queue.addOperation {
        print(n, Thread.current)
        while true {}
      }
    }
— 15:50
And then after those 1,000 operations are going we will add another operation to do some other intense CPU work, like computing the 50,000th prime as we did last episode:

    queue.addOperation {
      print("Starting prime operation")
      nthPrime(50_000)
    }
— 16:10
If we run this we will sadly see that a handful of threads are created and held up forever, and that our prime computation never gets a chance:

    0 <NSThread: 0x10611a370>{number = 2, name = (null)}
    1 <NSThread: 0x1062040e0>{number = 3, name = (null)}
    2 <NSThread: 0x106004910>{number = 4, name = (null)}
    4 <NSThread: 0x1064040e0>{number = 5, name = (null)}
    8 <NSThread: 0x106404360>{number = 10, name = (null)}
    9 <NSThread: 0x106504530>{number = 11, name = (null)}
    5 <NSThread: 0x106404250>{number = 7, name = (null)}
    7 <NSThread: 0x106504250>{number = 8, name = (null)}
    6 <NSThread: 0x1065043c0>{number = 9, name = (null)}
    3 <NSThread: 0x1065040e0>{number = 6, name = (null)}
    10 <NSThread: 0x1067040e0>{number = 12, name = (null)}
    Program ended with exit code: 0
— 16:59
Right now we are doing a silly thing in these operations to simulate intense CPU work, which is just a fully blocking infinite loop, but in reality the work we would be doing in these operations has a lot of downtime. For example, if we are making a network request then there is the time we are waiting for the server to even respond, which can be 300ms, 500ms, or even longer than a second. That is time we should be able to give up our thread and let others do work with it.
— 17:31
So, although operation queues do seem powerful, they don’t offer us much in the way of cooperation.
— 17:40
Also, strangely, cancelling an operation does not cancel any of the operations that depend on it. We previously saw that the cancellation of a particular thread does not cancel any of the threads spawned from it, even though that would be really handy. The same is true of operations. Going back to the diamond dependency demo from earlier, if we cancel operationA we will see that B, C, and D run just fine:

    operationA.cancel()

    C
    B
    D
— 18:24
So that’s a bummer.
— 18:34
And lastly, and most superficially, even though operation queues offer tools for sequencing and parallelizing work, the API just looks kind of…odd. If we go back to the example where we created dependencies between 4 operations we will see that it does not read linearly from top to bottom:

    let queue = OperationQueue()

    let operationA = BlockOperation {
      print("A")
      Thread.sleep(forTimeInterval: 1)
    }
    let operationB = BlockOperation {
      print("B")
    }
    let operationC = BlockOperation {
      print("C")
    }
    let operationD = BlockOperation {
      print("D")
    }

    operationB.addDependency(operationA)
    operationC.addDependency(operationA)
    operationD.addDependency(operationB)
    operationD.addDependency(operationC)

    queue.addOperation(operationA)
    queue.addOperation(operationB)
    queue.addOperation(operationC)
    queue.addOperation(operationD)
— 18:51
First we forward declare all of the operations that we want to run. Then we create the dependencies between the operations. And then finally we add all the operations to the queue.
— 19:04
If this code was written in a synchronous fashion we would have just executed each unit of work after the other and been done with it. The API for operation queues is heavily inspired by object-oriented programming, where you are supposed to subclass in order to implement operations, and then use side-effecting methods to build up your dependency graph.
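Here is a minimal sketch of that subclassing style, with a class name of our own invention: you override main() to define the work, and it is still on you to check isCancelled along the way:

```swift
import Foundation

// A minimal sketch of the OOP style the Operation API expects.
// `GreetingOperation` is a hypothetical name, not from the episode.
final class GreetingOperation: Operation {
  let name: String

  init(name: String) {
    self.name = name
    super.init()
  }

  // Override `main()` to define the operation's work.
  override func main() {
    guard !isCancelled else { return }  // cooperative cancellation, as always
    print("Hello, \(name)!")
  }
}

let queue = OperationQueue()
let op = GreetingOperation(name: "world")
queue.addOperation(op)
queue.waitUntilAllOperationsAreFinished()
```

Even this tiny example needs a class, an initializer, and an override before any actual work appears, which is part of why the API feels heavyweight.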
— 19:23
Operation queues also do not directly help with multithreaded race conditions. It’s still possible to reach out to shared mutable state from inside an operation, and we could potentially have all the same data race problems.
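For example, if many operations increment the same counter, it is entirely on us to synchronize access, say with an NSLock. This is a sketch of our own, not something from the episode:

```swift
import Foundation

let queue = OperationQueue()
let lock = NSLock()
var count = 0

// Nothing in the Operation API stops these closures from touching shared
// state, so we guard the increment with a lock ourselves.
for _ in 0..<1_000 {
  queue.addOperation {
    lock.lock()
    defer { lock.unlock() }
    count += 1
  }
}
queue.waitUntilAllOperationsAreFinished()
print(count)  // 1000 — without the lock, increments could be lost to races
```

Remove the lock and the final count can come up short, exactly the kind of race we saw with bare threads.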
— 19:43
So, operation queues do have some benefits, but still have many of the same drawbacks.
— 19:52
Cancellation doesn’t trickle down to dependent operations as one might expect, and the API for operation queues is heavily OOP-inspired, often enticing you to subclass in order to gain functionality. It usually feels quite heavyweight for something that should feel simple and natural.
GCD
— 20:09
A few years after operation queues were announced a new abstraction on top of threads was released by Apple, and this new abstraction is actually what powers operation queues under the hood. In 2009 Apple announced “Grand Central Dispatch”, where you no longer think of concurrency in terms of threads, but instead in terms of queues. This sounds similar to operation queues, but it’s actually quite a bit simpler.
— 20:32
Let’s quickly repeat some of the exercises we did for threads and operation queues, but now for dispatch queues.
— 20:54
Like operation queues, dispatch queues also separate the concepts of how work is performed from the actual work that needs to be performed. So, you start by creating a dispatch queue, which is the “how”:

    let queue = DispatchQueue(label: "my.queue")
— 21:11
And then you can send units of work to the queue to be performed:

    queue.async {
      print(Thread.current)
    }
— 21:30
This will print a thread that is definitely not the main thread:

    <NSThread: 0x101012230>{number = 2, name = (null)}
— 21:36
So it seems that by creating a dispatch queue and issuing some work to it we have caused a thread to be spun up.
— 21:41
Let’s also see what happens when we send a bunch of units of work to the queue:

    queue.async { print("1", Thread.current) }
    queue.async { print("2", Thread.current) }
    queue.async { print("3", Thread.current) }
    queue.async { print("4", Thread.current) }
    queue.async { print("5", Thread.current) }

    1 <NSThread: 0x10601fd70>{number = 2, name = (null)}
    2 <NSThread: 0x10601fd70>{number = 2, name = (null)}
    3 <NSThread: 0x10601fd70>{number = 2, name = (null)}
    4 <NSThread: 0x10601fd70>{number = 2, name = (null)}
    5 <NSThread: 0x10601fd70>{number = 2, name = (null)}
— 21:52
Interestingly it looks like the units of work ran sequentially one after the other, and all are run on the same thread. We can even run this a bunch of times and we will always see the same printing order and see that only a single thread is used.
— 22:13
This is happening because dispatch queues are serial by default. This is in stark contrast to operation queues, which are concurrent by default. It seems that Apple made the conscious decision to use serial by default instead of concurrent due to all the complexities one comes across with concurrent code.
— 22:28
If you want a concurrent queue you must do extra work to specify that when creating the queue:

    let queue = DispatchQueue(label: "my.queue", attributes: .concurrent)
— 22:39
Now when we run this we will get our units of work printing in non-deterministic order and on different threads:

    2 <NSThread: 0x100710690>{number = 4, name = (null)}
    5 <NSThread: 0x1063040e0>{number = 7, name = (null)}
    3 <NSThread: 0x1061040e0>{number = 5, name = (null)}
    4 <NSThread: 0x1062040e0>{number = 6, name = (null)}
    1 <NSThread: 0x1060cba50>{number = 3, name = (null)}
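Concurrent queues do come with one extra synchronization tool worth noting: a work item submitted with the .barrier flag runs exclusively, waiting for in-flight work to finish and holding back later work until it completes. A small sketch of guarding a counter this way (our own example, not from the episode):

```swift
import Foundation

let queue = DispatchQueue(label: "my.queue", attributes: .concurrent)
var count = 0

// Even on a concurrent queue, a `.barrier` block runs alone: it waits for
// in-flight work to finish and blocks later work until it completes, so
// these increments cannot race with each other.
for _ in 0..<1_000 {
  queue.async(flags: .barrier) {
    count += 1
  }
}

queue.sync(flags: .barrier) {}  // wait for all pending barrier blocks to drain
print(count)  // 1000
```

This "reader-writer" pattern lets plain reads run concurrently while writes are funneled through barriers, giving synchronized access without a separate lock.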
— 23:00
And just like operation queues, this already fixes the problem of thread explosion. We can issue 1,000 units of work to this single concurrent queue and it will not spin up 1,000 threads:

    for n in 0..<workCount {
      queue.async {
        print(n, Thread.current)
      }
    }

    0 <NSThread: 0x101108070>{number = 2, name = (null)}
    7 <NSThread: 0x101108070>{number = 2, name = (null)}
    9 <NSThread: 0x101108070>{number = 2, name = (null)}
    8 <NSThread: 0x101310700>{number = 9, name = (null)}
    10 <NSThread: 0x101604330>{number = 10, name = (null)}
    5 <NSThread: 0x1014996c0>{number = 7, name = (null)}
    …
    929 <NSThread: 0x101498d30>{number = 29, name = (null)}
    116 <NSThread: 0x10070f940>{number = 57, name = (null)}
    935 <NSThread: 0x10070f2a0>{number = 50, name = (null)}
    998 <NSThread: 0x101604160>{number = 5, name = (null)}
    873 <NSThread: 0x101108070>{number = 2, name = (null)}
    999 <NSThread: 0x101204d20>{number = 30, name = (null)}
— 23:28
There’s another tool that dispatch queues give us that greatly improves over what threads gave us. Threads had no way of performing work after some time passed in a non-blocking manner. We just had to hold up the thread by calling Thread.sleep, and that is wasteful in terms of resources.
— 23:42
Dispatch queues are capable of scheduling work to be performed in the future, all without blocking the current thread. For example, instead of sleeping the thread for a second we can tell the queue to perform some work after 1 second passes:

    print("before scheduling")
    queue.asyncAfter(deadline: .now() + 1) {
      print("1 second passed")
    }
    print("after scheduling")

    before scheduling
    after scheduling
    1 second passed
— 24:26
The scheduling happens at a deeper level with the OS so that we don’t need to waste time on the thread waiting for time to pass. So already dispatch queues are solving quite a few problems that threads had.
— 24:54
Just like threads and operation queues, dispatch queues also have a type of priority associated with them that the system can use to figure out how much execution time should be given to the unit of work.
— 25:13
Like operation queues, priority is described as a “quality of service” that falls into a few buckets:

    let queue = DispatchQueue(label: "my.queue", qos: .background)
— 25:23
And like threads and operation queues you can get a handle on the actual unit of work by building up what is known as a DispatchWorkItem and then executing it on a queue:

    let item = DispatchWorkItem {
      print(Thread.current)
    }
    queue.async(execute: item)

    <NSThread: 0x100719310>{number = 2, name = (null)}
— 25:53
You can also cancel a work item while it is in flight, and this cancellation process is cooperative. That is, it is up to us to be good citizens by regularly checking if the item has been cancelled so that we can short circuit the remaining work left to be done.
— 26:05
There is no concept of a “current dispatch queue” or item like there is a “current thread”, so to check if the item has been cancelled we need access to the item itself. It’s a little roundabout to accomplish: we first forward declare the work item, then assign it so that we can access it from within the work closure:

    var item: DispatchWorkItem!
    item = DispatchWorkItem {
      defer { item = nil }

      let start = Date()
      defer { print("Finished in", Date().timeIntervalSince(start)) }

      Thread.sleep(forTimeInterval: 1)
      guard !item.isCancelled else {
        print("Cancelled!")
        return
      }
      print(Thread.current)
    }

    queue.async(execute: item)
    Thread.sleep(forTimeInterval: 0.5)
    item.cancel()

    Cancelled!
    Finished in 1.0974069833755493
— 27:24
Similar to threads and operation queues, although cancellation is cooperative, the cooperation is not deeply ingrained in the system. Here we cancelled the item after 0.5 seconds, but the thread is still going to be slept for the full 1 second before continuing. There is no way to interrupt that sleeping.
— 27:55
Another similarity with threads is that dispatch queues have something that is similar to thread dictionaries, although it is called “specifics”. To see how this works let’s revisit the example we explored for threads where we modeled a simple server as a function that takes a URLRequest and returns an HTTPURLResponse:

    func response(for request: URLRequest) -> HTTPURLResponse {
      // TODO: do the work to turn request into a response
      // TODO: return real response
      return .init()
    }
— 28:50
Then, when a request comes in we can create a work item to encapsulate the work:

    let item = DispatchWorkItem {
      response(for: .init(url: URL(string: "https://www.pointfree.co")!))
    }
— 29:06
And we can create a new dispatch queue to issue the work to. And because there could be many of these work items in flight we should probably give it a unique label:

    let requestId = UUID()
    let queue = DispatchQueue(label: "request-\(requestId)")
    queue.async(execute: item)
— 29:32
Recall that when we explored this idea for threads we noted that it can be handy to associate data with the thread that can be retrieved from deep within the application without having to pass the data through every layer.
— 29:55
We explored this by wanting to associate a request ID to a thread so that while we’re doing the work to construct the response we can log information and have a unique ID associated with each log. This makes it easy to sift through a bunch of server logs and find just the small set of logs associated with a single request.
— 30:14
We can do this by setting what is known as a “specific” on the queue, which is a piece of data that will be implicitly carried with the execution context so that anyone operating in this same context can retrieve it:

    let requestIdKey = DispatchSpecificKey<UUID>()
    queue.setSpecific(key: requestIdKey, value: requestId)
— 30:56
This allows us to pluck the request ID out of thin air without having to explicitly pass it through all layers:

    func response(for request: URLRequest) -> HTTPURLResponse {
      let requestId = DispatchQueue.getSpecific(key: requestIdKey)!

      print(requestId, "Making database query")
      Thread.sleep(forTimeInterval: 0.5)
      print(requestId, "Finished database query")

      print(requestId, "Making network request")
      Thread.sleep(forTimeInterval: 0.5)
      print(requestId, "Finished network request")

      return .init()
    }
— 31:06
One nice thing about this API over the thread dictionary API is that it is type safe. We are specifying the type of data we are storing in the queue. Recall that thread dictionaries are just nebulous [AnyHashable: Any] dictionaries, and so were prone to string-key typos and mis-casting the Any to the proper type.
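A tiny sketch of that type safety in isolation (the key name is our own): the key carries its value type, so getSpecific hands back a typed optional with no casting required:

```swift
import Foundation

let queue = DispatchQueue(label: "typed")

// The key carries its value type: this specific can only hold an Int.
let countKey = DispatchSpecificKey<Int>()
queue.setSpecific(key: countKey, value: 42)

var fetched: Int?
queue.sync {
  // No `Any` casting needed — this is already an `Int?`.
  fetched = DispatchQueue.getSpecific(key: countKey)
}
print(fetched as Any)  // Optional(42)
```

Trying to store a String under countKey simply would not compile, which is exactly the mistake the thread dictionary API let slip through at runtime.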
— 32:39
Dispatch queue specifics solve another problem that thread dictionaries have. Recall that if we spawned a new thread from an existing thread, the new thread did not inherit the thread dictionary from the current thread. This meant that if we wanted to split off two threads to perform some independent work in parallel, like say a database request and a network request, then those two threads would lose the request ID.
— 33:07
Dispatch queues offer a tool that allows specifics to flow to another queue, and it’s called targeting. Let’s play with this tool a little bit in isolation before applying it to our response function.
— 33:19
Let’s start by creating a dispatch queue and setting two specifics on it, and confirming that we can indeed access those specifics from a unit of work run on the queue:

    let queue1 = DispatchQueue(label: "queue1")

    let idKey = DispatchSpecificKey<Int>()
    let dateKey = DispatchSpecificKey<Date>()
    queue1.setSpecific(key: idKey, value: 42)
    queue1.setSpecific(key: dateKey, value: Date())

    queue1.async {
      print("queue1", "id", DispatchQueue.getSpecific(key: idKey))
      print("queue1", "date", DispatchQueue.getSpecific(key: dateKey))
    }

    queue1 id Optional(42)
    queue1 date Optional(2022-05-18 17:29:11 +0000)
— 34:09
If we then naively create a new queue from within queue1.async and set one of the specifics but not the other, we will see something interesting:

    queue1.async {
      print("queue1", "id", DispatchQueue.getSpecific(key: idKey))
      print("queue1", "date", DispatchQueue.getSpecific(key: dateKey))

      let queue2 = DispatchQueue(label: "queue2")
      queue2.setSpecific(key: idKey, value: 1729)

      queue2.async {
        print("queue2", "id", DispatchQueue.getSpecific(key: idKey))
        print("queue2", "date", DispatchQueue.getSpecific(key: dateKey))
      }
    }

    queue1 id Optional(42)
    queue1 date Optional(2022-05-18 17:32:23 +0000)
    queue2 id Optional(1729)
    queue2 date nil
— 34:36
The second queue is able to see the id specific, but not the date.
— 34:45
This is because starting up a new queue from inside the execution context of another queue does not mean the specifics are automatically inherited. So it seems dispatch queues have a similar problem as threads. If we wanted to run two units of work in parallel we may lose the specifics that were set at the root.
— 35:04
Luckily there’s a fix. What we can do is make the 2nd queue target the first queue, and then it will inherit all of its specifics:

    let queue2 = DispatchQueue(label: "queue2", target: queue1)
    queue2.setSpecific(key: idKey, value: 1729)

    queue2.async {
      print("queue2", "id", DispatchQueue.getSpecific(key: idKey))
      print("queue2", "date", DispatchQueue.getSpecific(key: dateKey))
    }

    queue1 id Optional(42)
    queue1 date Optional(2022-05-18 17:36:26 +0000)
    queue2 id Optional(1729)
    queue2 date Optional(2022-05-18 17:36:26 +0000)
— 35:19
Now we can see from the logs that indeed the second queue is seeing the overridden id value, but that the date is the same as it was set on the first queue.
— 35:37
The queue targeting tool is very powerful. We can even make serial queues target concurrent queues so that their work is guaranteed to be executed sequentially, one job after another, but in doing so the work can technically be invoked on different threads.
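A quick sketch of that serial-targets-concurrent setup, using labels of our own: the serial queue borrows the concurrent queue’s threads, yet its work still runs one job at a time, in submission order:

```swift
import Foundation

let concurrentQueue = DispatchQueue(label: "concurrent", attributes: .concurrent)

// A serial queue that targets the concurrent queue: work executes on the
// target's threads, but still strictly one job at a time, in order.
let serialQueue = DispatchQueue(label: "serial", target: concurrentQueue)

var order: [Int] = []
for n in 1...5 {
  serialQueue.async {
    order.append(n)  // safe: the serial queue never overlaps its jobs
  }
}

serialQueue.sync {}  // drain the serial queue before reading `order`
print(order)  // [1, 2, 3, 4, 5]
```

Even though each job may land on a different underlying thread, the serial queue’s ordering guarantee holds, which is exactly the behavior described above.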
— 36:02
Let’s apply this newfound knowledge to our response function. Suppose that in our response function we wanted to be able to run the database query and network request in parallel since they are both independent and take a decent amount of time to execute.
— 36:20
One thing we could do is split out each of those units of work into their own functions:

    func makeDatabaseQuery() {
      let requestId = DispatchQueue.getSpecific(key: requestIdKey)!
      print(requestId, "Making database query")
      Thread.sleep(forTimeInterval: 0.5)
      print(requestId, "Finished database query")
    }
    func makeNetworkRequest() {
      let requestId = DispatchQueue.getSpecific(key: requestIdKey)!
      print(requestId, "Making network request")
      Thread.sleep(forTimeInterval: 0.5)
      print(requestId, "Finished network request")
    }

Notice that we are accessing the current queue’s specific in each function.
— 36:39
Then we could create two new queues in the response function for executing those units of work:

    func response(for request: URLRequest) -> HTTPURLResponse {
      let databaseQueue = DispatchQueue(label: "database-query")
      databaseQueue.async {
        makeDatabaseQuery()
      }

      let networkQueue = DispatchQueue(label: "network-request")
      networkQueue.async {
        makeNetworkRequest()
      }

      return .init()
    }
— 36:58
The problem here is that invoking async on each of the queues is a non-blocking operation, and so we will breeze right past each of those statements and go right to returning the HTTPURLResponse.
— 37:14
We want to somehow wait for both units of work to complete before continuing. This is something that dispatch queues improve upon with respect to threads, and also makes them similar to operations. Dispatch queues offer some nice tools for coordinating multiple units of work.
GCD 37:29
In particular, we can create what is known as a DispatchGroup, which allows us to treat multiple units of work as a single unit of work:

let group = DispatchGroup()
GCD 37:37
And then when performing the work on each of the queues we created we will do so in the context of this dispatch group:

databaseQueue.async(group: group) {
  makeDatabaseQuery()
}
…
networkQueue.async(group: group) {
  makeNetworkRequest()
}
GCD 37:45
And then finally we can wait for all of the work in the group to finish before continuing:

func response(for request: URLRequest) -> HTTPURLResponse {
  let group = DispatchGroup()
  …
  group.wait()
  return .init()
}
GCD 37:51
This is a nice way of coordinating work when compared to threads, where, if you remember, we had to literally poll in an infinite while loop to see when two threads finished.
GCD 38:02
So, we have now done the work to run two units of work in parallel and wait for them to finish, but this does not work because the newly formed queues lose their specifics. We need these new queues to target the queue that set the specifics.
GCD 38:24
Now in order for the response function to be able to create queues that target this queue we need the response function to take a queue as an argument:

func response(
  for request: URLRequest,
  queue: DispatchQueue
) -> HTTPURLResponse {
  let group = DispatchGroup()

  let databaseQueue = DispatchQueue(
    label: "database-query",
    target: queue
  )
  databaseQueue.async(group: group) {
    makeDatabaseQuery()
  }

  let networkQueue = DispatchQueue(
    label: "network-request",
    target: queue
  )
  networkQueue.async(group: group) {
    makeNetworkRequest()
  }

  group.wait()
  return .init()
}
GCD 38:48
And then pass along the request queue when we invoke the response method:

response(
  for: .init(url: URL(string: "https://www.pointfree.co")!),
  queue: queue
)
GCD 39:00
And to make sure things run in parallel, we will beef our queue up to be concurrent:

let queue = DispatchQueue(
  label: "request-\(requestId)",
  attributes: .concurrent
)
GCD 39:09
Now when we run this we see that the database and network requests can correctly access the request ID, even though we have spun up new queues, and they are running fully in parallel:

48A7C5DD-5F02-4234-923C-F96315A42214 Making database query
48A7C5DD-5F02-4234-923C-F96315A42214 Making network request
48A7C5DD-5F02-4234-923C-F96315A42214 Finished database query
48A7C5DD-5F02-4234-923C-F96315A42214 Finished network request
GCD 39:29
To make the timing even clearer, let’s bring back the code that prints how long it took for the request to finish and run things:

let start = Date()
defer { print(requestId, "Finished in", Date().timeIntervalSince(start)) }

E1638E62-B2E7-4BA3-B7C1-FEE97C12FA89 Making database query
E1638E62-B2E7-4BA3-B7C1-FEE97C12FA89 Making network request
E1638E62-B2E7-4BA3-B7C1-FEE97C12FA89 Finished database query
E1638E62-B2E7-4BA3-B7C1-FEE97C12FA89 Finished network request
E1638E62-B2E7-4BA3-B7C1-FEE97C12FA89 Finished in 0.5075399875640069
GCD 38:24
Let’s refactor things a bit to improve it even further. We are currently creating a new concurrent queue for each request that comes into the server, which is an expensive thing to do. Let’s instead create a single concurrent queue when the server starts up, and then for each request that comes in we create a new queue that targets that concurrent queue. This would give us a fresh queue for setting specifics, but we also won’t accidentally explode the number of threads in use by creating a whole bunch of queues:

let serverQueue = DispatchQueue(
  label: "server",
  attributes: .concurrent
)
GCD 40:37
And then later, when a request comes in we will create a new queue that targets the server queue:

let requestId = UUID()
let requestIdKey = DispatchSpecificKey<UUID>()
let queue = DispatchQueue(
  label: "request-\(requestId)",
  attributes: .concurrent,
  target: serverQueue
)
queue.setSpecific(key: requestIdKey, value: requestId)
queue.async {
  response(for: .init(url: URL(string: "https://www.pointfree.co")!))
}
GCD 41:01
So, this is pretty cool. We are finally seeing the beginnings of what it means for a child execution context to inherit some of the properties from a parent execution context. Here we split off two new queues from the current queue in order to run a database query and network request in parallel, and those new queues inherited the specifics from the server request queue. This is not something that was possible with threads.

GCD problems
GCD 41:28
So far dispatch queues seem like the clear winner in terms of features of concurrency tools. They have all the features of threads, in that they support performing asynchronous work, priority, cooperative cancellation and queue specific storage. But they also have all the features of operation queues, such as a more structured approach to running asynchronous work as well as better coordination between units of work.
GCD 41:51
Unfortunately there are still problems remaining. Let’s take a look at a few of them.
GCD 41:58
We can start by saying that although it is nice that queues that target an existing queue inherit the base queue’s specifics, it’s still not as nice as it could be. Currently we are passing the request queue into the response function so that it can be used:

response(
  for: .init(url: URL(string: "https://www.pointfree.co")!),
  queue: requestQueue
)
GCD 42:13
But remember, the whole point of us turning to queue specific storage is so that we didn’t have to pass data through multiple layers. We wanted to set some data on the base queue, and have it implicitly travel through the entire execution context.
GCD 42:23
But here we see that in order to be able to split off new child execution contexts that inherit the specifics, we need to have access to the parent queue. It would be far better if somehow queues knew what the current execution context was so that when we create a new queue it automatically inherits its specifics.
GCD 42:42
And although dispatch queues can technically inherit specifics from another dispatch queue, the same cannot be said of cancellation. There is no way for the cancellation of one work item to trickle down to the “child” work items it created. As we’ve seen before, this can be incredibly handy, and neither threads nor operation queues support this concept either.
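A minimal sketch of that limitation, with hypothetical names: cancelling a “parent” DispatchWorkItem says nothing about the work items it scheduled, because GCD keeps no parent/child relationship between them.

```swift
import Dispatch
import Foundation

// Illustrative sketch: cancelling `parent` never propagates to `child`.
let queue = DispatchQueue(label: "queue")

let child = DispatchWorkItem {
  print("child ran anyway")
}
let parent = DispatchWorkItem {
  // `child` has no link back to `parent` once it is scheduled.
  queue.async(execute: child)
}

queue.async(execute: parent)
parent.cancel()  // may prevent `parent` from running, but never `child`

Thread.sleep(forTimeInterval: 0.1)
print("child.isCancelled:", child.isCancelled)  // always false
```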
GCD 43:02
Also, dispatch queues have the problem of accidentally exploding the number of threads in use if you are not careful, though they do a better job of limiting this than bare threads.
GCD 43:27
For example, if we create 1,000 queues and run an infinite loop on each, we will see that 1,000 threads are not actually created:

for n in 0..<workCount {
  DispatchQueue(label: "queue-\(n)").async {
    print(Thread.current)
    while true {}
  }
}
Thread.sleep(forTimeInterval: 3)
GCD 43:33
Putting in a breakpoint after a few seconds we will see that only roughly 500 threads were created. This means dispatch is doing some work under the hood to limit the number of threads created, though the number is still quite high.
GCD 43:50
So thread explosion is still possible. But even if we are careful to not explode thread usage, we still run the risk of starving other queues from being able to do their work. For example, instead of creating 1,000 queues to run a single unit of work, let’s create a single concurrent queue that runs 1,000 units of work:

let queue = DispatchQueue(label: "concurrent-queue", attributes: .concurrent)
for n in 0..<1_000 {
  queue.async {
    print(n, Thread.current)
    while true {}
  }
}

0 <NSThread: 0x106204180>{number = 2, name = (null)}
5 <NSThread: 0x1065040e0>{number = 7, name = (null)}
2 <NSThread: 0x106304250>{number = 4, name = (null)}
4 <NSThread: 0x1063043c0>{number = 5, name = (null)}
3 <NSThread: 0x1064040e0>{number = 6, name = (null)}
1 <NSThread: 0x1063040e0>{number = 3, name = (null)}
6 <NSThread: 0x106404250>{number = 8, name = (null)}
7 <NSThread: 0x1061422b0>{number = 9, name = (null)}
9 <NSThread: 0x106404470>{number = 11, name = (null)}
8 <NSThread: 0x106404080>{number = 10, name = (null)}
GCD 44:14
We will see that, like operation queues, a small number of threads are created, but then they are tied up forever so that no other work items are getting an opportunity to execute.
GCD 44:23
So, even if we are being responsible by using a single queue, we still can do CPU intensive work on that queue and completely block others from doing work on the queue.
GCD 44:37
And dispatch queues do not give us the tools to allow work items to cooperate with one another. Currently we are doing a silly thing to simulate a CPU-intensive operation by just looping forever, but in reality we would be doing things that have a lot of downtime, such as making a network request or a database request. While we are just waiting around to hear back from a server, it would be nice to allow other work items to use our thread. This is not really possible, and so dispatched work items will have to fight for time on the CPU, and intense operations will starve other operations of execution time.
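One manual workaround worth sketching (the timings and names here are made up for illustration) is to express the downtime as a callback rather than a blocking sleep. `asyncAfter` schedules the continuation for later without occupying a thread in the meantime, so 1,000 simulated “requests” can overlap on a handful of threads:

```swift
import Dispatch
import Foundation

// Illustrative sketch: hand the thread back during downtime by using
// asyncAfter instead of Thread.sleep, which would block its thread for
// the whole interval.
let queue = DispatchQueue(label: "queue", attributes: .concurrent)
let group = DispatchGroup()

func makeFakeRequest(_ n: Int, completion: @escaping () -> Void) {
  queue.asyncAfter(deadline: .now() + 0.1) {
    completion()
  }
}

let start = Date()
for n in 0..<1_000 {
  group.enter()
  makeFakeRequest(n) { group.leave() }
}
group.wait()
// All 1,000 "requests" finish in roughly 0.1 seconds because no thread
// was tied up during the simulated waiting.
print("Finished in", Date().timeIntervalSince(start))
```

This is exactly the completion-handler contortion that Swift’s async/await tools will later make unnecessary.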
GCD 45:08
So although dispatch queues are incredibly powerful, they still do not offer us any tools for writing cooperative concurrent code.
GCD 45:15
Finally, let’s talk about data races. GCD does provide some new tools for synchronizing access to data, but the tools are quite similar to what we’ve previously seen with locks.
GCD 45:26
Suppose we had a counter class that held some mutable data that we wanted to increment from 1,000 different units of work:

class Counter {
  var count = 0
  func increment() {
    self.count += 1
  }
}

let counter = Counter()
let queue = DispatchQueue(label: "concurrent-queue", attributes: .concurrent)
for _ in 0..<workCount {
  queue.async {
    counter.increment()
  }
}
Thread.sleep(forTimeInterval: 1)
print("counter.count", counter.count)
GCD 45:48
If we run this we’ll see that we don’t quite make it to 1,000:

counter.count 996
GCD 45:54
We get closer than we did with threads, which shouldn’t be surprising since concurrent queues spin up far fewer threads at once, and so there is far less of a chance for data races to happen.
GCD 46:01
Now we could fix this race condition using locks, but GCD comes with a similar tool that can be used. It’s called a “barrier”, and it allows you to wait until a queue has executed all of its work before it executes its next unit of work:

class Counter {
  let queue = DispatchQueue(label: "counter", attributes: .concurrent)
  var count = 0
  func increment() {
    self.queue.sync(flags: .barrier) {
      self.count += 1
    }
  }
}
GCD 46:59
Now if we run this we see that we get a consistent 1,000 printed to the console, but it takes a very long time. This is partly because we are calling this method 1,000 times from a concurrent context and so that is a lot of locking that needs to take place. But it’s also partly because locking with a barrier is slower than NSLock.
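The classic refinement of this pattern, not shown above but worth sketching, is to use a plain sync for reads (which may overlap with each other on the concurrent queue) and reserve the barrier for writes:

```swift
import Dispatch
import Foundation

// Illustrative reader-writer sketch: concurrent reads, exclusive writes.
final class Counter {
  private let queue = DispatchQueue(label: "counter", attributes: .concurrent)
  private var _count = 0

  var count: Int {
    self.queue.sync { self._count }  // reads may run in parallel
  }

  func increment() {
    self.queue.sync(flags: .barrier) {  // writes wait for exclusive access
      self._count += 1
    }
  }
}

let counter = Counter()
let work = DispatchQueue(label: "work", attributes: .concurrent)
let group = DispatchGroup()
for _ in 0..<1_000 {
  work.async(group: group) { counter.increment() }
}
group.wait()
print("counter.count", counter.count)  // 1000
```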
GCD 47:05
It’s interesting that GCD comes with some tools for guarding against data races, but it’s still on us to know how to wield them correctly, and they are not deeply integrated into the concurrency model.

Combine
GCD 47:13
So we have now seen what GCD brought to the table when it was introduced all the way back in 2009. It gives us simpler tools for writing asynchronous and concurrent code without having to think in terms of threads, and as such it solves a lot of the problems that threads have. But it still has quite a few problems of its own.
GCD 47:30
There wasn’t a ton of innovation in the concurrency arena in the years after GCD’s introduction in 2009. Each year at WWDC there would be a few small GCD features announced, but there was no truly substantial update to the core library.
GCD 47:46
Then in 2019, 10 years after the introduction of GCD, the Combine framework was introduced as a library that is capable of representing streams of values over time. Now technically Combine is not a library for performing concurrent work, or even asynchronous work. It is certainly capable of doing so, but it entirely leans on existing concurrency tools to provide asynchrony and concurrency, such as dispatch queues, operation queues, and run loops.
GCD 48:13
So it’s probably not entirely correct for us to talk about Combine in the same breath that we mention threads, dispatch queues and so on, but we will anyway because it does aim to solve a lot of the difficult problems we have been encountering over and over while diving deep into concurrency. And we’re not going to dive deep into Combine from first principles right now because we have devoted entire episodes to Combine in the past, and we have passingly discussed Combine many, many times on Point-Free.
GCD 48:39
Instead we just want to demonstrate how Combine gives structure to the concept of streams of values, and show how it solves some of the problems we have been encountering, and also sets the stage for how Swift’s first class concurrency tools were ultimately developed.
GCD 48:54
So, as we mentioned a moment ago, Combine is not technically a library for representing asynchrony or concurrency. In fact, if you stay away from any of the Combine operators that deal with schedulers, such as receive(on:), subscribe(on:) and any temporal operations, then usually all publisher code will be executed on the exact same thread on which you subscribe to the publisher.
GCD 49:42
For example, we can create a Future publisher, which allows us to emit a single value in a publisher via a callback interface:

let publisher = Future<Int, Never> { callback in
  print(Thread.current)
  callback(.success(42))
}
GCD 50:21
Technically the Future type is eager in Combine, which means the mere act of creating this value kicks off its work. In order to ensure that this work is done only upon subscription, let’s wrap it in a Deferred publisher:

let publisher = Deferred {
  Future<Int, Never> { callback in
    print(Thread.current)
    callback(.success(42))
  }
}
GCD 50:46
Then in order to actually get the value out of the publisher we need to subscribe to it:

publisher
  .sink { print("sink", $0, Thread.current) }

This produces a warning:

Result of call to ‘sink(receiveValue:)’ is unused
GCD 51:07
Which means we have to keep track of the cancellable:

let cancellable = publisher
  .sink { print("sink", $0, Thread.current) }
_ = cancellable
GCD 51:10
Technically we should keep this cancellable around for as long as we want values from the publisher, but in this case the publisher emits immediately, so it doesn’t really matter.
GCD 51:24
If we run this we see that everything runs on the main thread:

<_NSMainThread: 0x10610a920>{number = 1, name = main}
sink 42 <_NSMainThread: 0x10610a920>{number = 1, name = main}
GCD 51:45
So no real asynchrony here.
GCD 51:54
The real power of Combine comes from thinking of publishers as sequences of values over time, and then combining them in interesting ways using operators. For example, we could create another publisher that emits a string, and then zip the two publishers together to form one publisher that emits both values at the same time:

let publisher1 = Deferred {
  Future<Int, Never> { callback in
    print(Thread.current)
    callback(.success(42))
  }
}
let publisher2 = Deferred {
  Future<String, Never> { callback in
    print(Thread.current)
    callback(.success("Hello world"))
  }
}

let cancellable = publisher1
  .zip(publisher2)
  .sink { print("sink", $0, Thread.current) }
_ = cancellable

<_NSMainThread: 0x10100c890>{number = 1, name = main}
<_NSMainThread: 0x10100c890>{number = 1, name = main}
sink (42, "Hello world") <_NSMainThread: 0x10100c890>{number = 1, name = main}
GCD 52:36
Already in just a few lines we have accomplished what took many lines and strange contortions to coordinate two threads, two operations or two dispatch queues.
GCD 53:06
There are more operators that can do even more complex things. For example, say after publisher1 emits we want to run another publisher. We can use flatMap for this:

let cancellable = publisher1
  .flatMap { integer in
    Deferred {
      Future<String, Never> { callback in
        print(Thread.current)
        callback(.success("\(integer)"))
      }
    }
  }
  .zip(publisher2)
  .sink { print("sink", $0, Thread.current) }

<_NSMainThread: 0x10100c890>{number = 1, name = main}
<_NSMainThread: 0x10100c890>{number = 1, name = main}
<_NSMainThread: 0x10100c890>{number = 1, name = main}
sink ("42", "Hello world") <_NSMainThread: 0x10100c890>{number = 1, name = main}
GCD 53:40
Still everything is being performed on the main thread, but we are coordinating multiple streams of values in some pretty complex ways.
GCD 54:12
So from this perspective it is pretty clear that publishers don’t technically have anything to do with asynchrony or concurrency. The only way to inject this kind of behavior into a publisher is to use an existing form of concurrency, such as operation queues or dispatch queues.
GCD 54:27
For example, we could force each of the futures to start their work on their own dispatch queue:

let publisher1 = Deferred {
  Future<Int, Never> { callback in
    print(Thread.current)
    callback(.success(42))
  }
}
.subscribe(on: DispatchQueue(label: "publisher1"))

let publisher2 = Deferred {
  Future<String, Never> { callback in
    print(Thread.current)
    callback(.success("Hello"))
  }
}
.subscribe(on: DispatchQueue(label: "publisher2"))

let cancellable = publisher1
  .flatMap { integer in
    Deferred {
      Future<String, Never> { callback in
        print(Thread.current)
        callback(.success("\(integer)"))
      }
    }
    .subscribe(on: DispatchQueue(label: "publisher3"))
  }
  .zip(publisher2)
  .sink { print("sink", $0, Thread.current) }

Thread.sleep(forTimeInterval: 1)
_ = cancellable

<NSThread: 0x100711ba0>{number = 2, name = (null)}
<NSThread: 0x100711ba0>{number = 3, name = (null)}
<NSThread: 0x106004830>{number = 4, name = (null)}
sink ("42", "Hello") <NSThread: 0x1060052f0>{number = 4, name = (null)}
GCD 54:48
And now we see that work is happening on non-main threads.
GCD 55:06
What’s cool about this is that we have introduced some asynchrony into the system, but we were still able to think of these publishers as simply streams of values over time, and could combine them together in the same ways. We didn’t have to change anything about our usage of zip or flatMap, it all just continued to work.
GCD 55:28
And this publisher is quite complex. It is running the publisher inside the flatMap after publisher1 emits, and it’s running publisher2 concurrently with those two chained publishers.
GCD 55:41
As we saw before, OperationQueue offers some interesting tools for creating dependent operations like this. For example, in order to create a diamond of dependencies like this:

A ➡️ B
⬇️    ⬇️
C ➡️ D
GCD 56:02
Where first A completes, then B and C run in parallel, and D starts only once B and C finish in whichever order.
GCD 56:11
We found that we could express this in the following way:

let queue = OperationQueue()

let operationA = BlockOperation { print("A") }
let operationB = BlockOperation { print("B") }
let operationC = BlockOperation { print("C") }
let operationD = BlockOperation { print("D") }

operationB.addDependency(operationA)
operationC.addDependency(operationA)
operationD.addDependency(operationB)
operationD.addDependency(operationC)

queue.addOperation(operationA)
queue.addOperation(operationB)
queue.addOperation(operationC)
queue.addOperation(operationD)
GCD 56:26
This was interesting, but also the API is pretty verbose, and very object-oriented. We have to forward declare all of our operations, then create the dependencies between them, and then add all the operations to the queue.
GCD 56:37
Dispatch queues can express the same thing, but rather than using methods to build up a dependency graph you can just nest calls to async:

let queue = DispatchQueue(label: "queue", attributes: .concurrent)
queue.async {
  print("A")
  let group = DispatchGroup()
  queue.async(group: group) {
    print("B")
  }
  queue.async(group: group) {
    print("C")
  }
  group.notify(queue: queue) {
    print("D")
  }
}
GCD 57:22
Both of those are quite verbose. The equivalent diamond of dependencies in Combine would look like this in pseudo code:

a
  .flatMap { a in
    zip(b(a), c(a))
  }
  .flatMap { b, c in
    d(b, c)
  }

Assuming you have some publisher a, you can flatMap on it to run another publisher. That publisher can be a zip of two other publishers that depend on the value emitted from a, and that will run the two publishers in parallel. Once they both emit, the second flatMap can run a final publisher based on the tuple of values obtained from the zip.
GCD 58:12
This is a super compact way of expressing a complex dependency relationship between streams of values. But also, this complex chain of operators can be quite cryptic to those not intimately familiar with the dark arts of reactive programming.
GCD 58:37
This gets at the heart of what Swift’s modern concurrency tools want to accomplish. You should be able to express complex asynchronous and concurrent operations in just a few lines of code, but further that code should look more similar to regular synchronous code. Although these operators are packing a huge punch, they are still substantially different from how we write all our other, non-asynchronous code, which is probably the vast majority of code in our code base.
GCD 59:11
Once we are all familiar with Swift’s new concurrency tools we will be able to write this diamond dependency simply as:

let a = await f()
async let b = g(a)
async let c = h(a)
let d = await i(b, c)
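To make that concrete, here is a self-contained sketch with placeholder async functions standing in for f, g, h and i. The function bodies, the semaphore bridging to a synchronous script, and the `result` variable are all invented purely for illustration:

```swift
import Dispatch
import Foundation

// Placeholder async functions; the bodies are made up for illustration.
func f() async -> Int { 1 }
func g(_ a: Int) async -> Int { a + 1 }
func h(_ a: Int) async -> Int { a + 2 }
func i(_ b: Int, _ c: Int) async -> Int { b + c }

var result = 0
let semaphore = DispatchSemaphore(value: 0)
Task {
  let a = await f()     // A completes first
  async let b = g(a)    // B and C run in parallel…
  async let c = h(a)
  let d = await i(b, c) // …and D starts once both have finished
  result = d
  semaphore.signal()
}
semaphore.wait()
print("d:", result)  // d: 5
```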
GCD 1:00:03
And even better, since this code is just plain Swift code, we get to use all of the constructs that Swift has to offer with no fuss. For example, if f returned an optional we could use guard let to unwrap:

guard let a = await f() else { return }
async let b = g(a)
async let c = h(a)
let d = await i(b, c)
GCD 1:00:20
And if we wanted to execute some code after everything executed we could use defer:

defer { print("Finished") }
guard let a = await f() else { return }
async let b = g(a)
async let c = h(a)
let d = await i(b, c)
GCD 1:00:31
The equivalent of this in Combine would need to use compactMap to discard any nil values, and then handleEvents to inject some logic into the end of the chain:

a
  .compactMap { $0 }
  .flatMap { a in
    zip(b(a), c(a))
  }
  .flatMap { (b, c) in
    d(b, c)
  }
  .handleEvents(receiveCompletion: { _ in
    print("Finished")
  })
GCD 1:00:59
And even this isn’t a 100% correct translation because the guard let is capable of executing some logic if the value is nil, whereas here we don’t have that opportunity. We’d have to insert another handleEvents before the compactMap to do that.

Next time: Tasks
GCD 1:01:49
So, now that we are intimately familiar with what concurrency tools Apple has provided to us in the past and present, let’s look at what the future of concurrency looks like in Swift.
GCD 1:01:59
As we all know, Swift 5.5 was released 9 months ago with a variety of tools for concurrency. These tools are in many ways simpler and more robust than the tools we just covered, and they solve a lot of the problems we encountered. Best of all, the tools provide a fully integrated solution to data races, and it’s really amazing to see. Once these features are fully baked into the language you will seldom have to think of asynchrony in terms of threads or reactive streams, and instead you will be able to write code that largely looks the same as if you were working entirely with synchronous processes.
GCD 1:02:34
So, let’s repeat the program we have put forth when exploring threads, operation queues and dispatch queues, but this time with a focus on Swift’s modern concurrency tools. These tools are quite a bit different from the threads and queues we previously explored because they are deeply integrated with the language itself, not just a library built in the language.