EP 152 · Composable Architecture Performance · Jul 5, 2021 · Members

Video #152: Composable Architecture Performance: Case Paths


Episode: Video #152 Date: Jul 5, 2021 Access: Members Only 🔒 URL: https://www.pointfree.co/episodes/ep152-composable-architecture-performance-case-paths


Description

This week we improve the performance of another part of the Composable Architecture ecosystem: case paths! We will benchmark the reflection mechanism that powers case paths and speed things up with the help of a Swift runtime function.

Video

Cloudflare Stream video ID: 13a099e00947e89b393ad5fe1a097c67 Local file: video_152_composable-architecture-performance-case-paths.mp4 *(download with --video 152)*

Transcript

0:05

Last week we explored performance in the Composable Architecture. We looked at the tools it comes with that help you troubleshoot and improve application performance, and we also fixed some longstanding performance problems that existed in the library itself.

0:17

But we’re still not quite done making performance improvements, because just after we recorded and edited that episode we found yet another opportunity to eke out more performance. There’s another part of the library that isn’t as efficient as it could be: a dependency that the library heavily leans on to earn the “composable” in its name, Case Paths.

0:36

Case paths are a topic we introduced more than a year and a half ago when we theorized what key paths would look like for enums. Swift’s key paths are a wonderful feature that allow you to write algorithms over the shape of a struct by isolating a single field from the rest. Our case paths do the same, except they isolate a single case from the rest of an enum.

0:57

We were even able to make case paths as ergonomic as key paths. Just as the compiler generates a key path for each field of a struct automatically, we were able to automatically generate a case path for each case of an enum by making use of Swift’s reflection APIs. We even introduced a prefix operator so that the actual syntax looks similar to key paths.

1:18

In those episodes we stressed that reflection can be difficult to get right since you are operating outside the purview of the compiler, but even worse, it can also be quite slow. Using reflection APIs creates a lot of unnecessary objects, and unfortunately this penalty shows up in case paths.

Case paths: a recap

1:35

Let’s start by familiarizing ourselves with how case path reflection works.

1:43

Recall that a case path is just a simple struct wrapper around two functions: one to embed a piece of associated data into an enum, and another to optionally extract a piece of associated data from an enum:

public struct CasePath<Root, Value> {
  private let _embed: (Value) -> Root
  private let _extract: (Root) -> Value?
}

2:15

The extract function is failable because it may not be possible to extract the associated data. For example, you could have a Result value that is in the .failure state and you may try to extract the .success data from it. That of course can’t happen, and so it must return nil.
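To make this concrete, here’s a small sketch that re-declares a minimal stand-in for the library’s CasePath type (just for illustration; the real type has more API) and shows extraction failing on a Result in the .failure state:

```swift
// Minimal stand-in for the library's CasePath, just for illustration.
struct CasePath<Root, Value> {
  let embed: (Value) -> Root
  let extract: (Root) -> Value?
}

struct SomeError: Error {}

// A case path to the .success case of Result<Int, Error>.
let success = CasePath<Result<Int, Error>, Int>(
  embed: Result.success,
  extract: { result in
    guard case let .success(value) = result else { return nil }
    return value
  }
)

success.extract(.success(42))           // 42
success.extract(.failure(SomeError()))  // nil: extraction must fail
```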

2:29

Constructing one of these things is straightforward, albeit a little verbose. Suppose we had an enum that held onto some logged-in and logged-out state:

struct LoggedInState {}
struct LoggedOutState {}

enum AppState {
  case loggedIn(LoggedInState)
  case loggedOut(LoggedOutState)
}

3:00

Then we can construct a case path to the .loggedIn case like so:

let loggedInCasePath = CasePath<AppState, LoggedInState>(
  embed: AppState.loggedIn,
  extract: { appState in
    guard case let .loggedIn(state) = appState
    else { return nil }
    return state
  }
)

3:58

That is quite a bit of code to maintain, especially when you compare it to key paths, which get automatically generated for you by the compiler:

\String.count // KeyPath<String, Int>

4:17

So, that’s why we spent time investigating how we could automatically generate case paths for each case of an enum. By leveraging Swift’s reflection capabilities and sprinkling in a prefix operator we are able to make case paths look short and succinct:

/AppState.loggedIn // CasePath<AppState, LoggedInState>

4:50

That’s great, but let’s quickly take a peek under the hood to see what the reflection code looks like. All of the reflection code takes place in a top-level function called extract, which, given the embed function for a case of an enum, returns a function that can optionally extract that case from enum values:

public func extract<Root, Value>(
  _ embed: @escaping (Value) -> Root
) -> (Root) -> (Value?) {
  return { root in
    func extractHelp(
      from root: Root
    ) -> (labels: [String?], value: Value)? {
      let mirror = Mirror(reflecting: root)
      assert(
        mirror.displayStyle == .enum
          || mirror.displayStyle == .optional
      )
      guard
        let child = mirror.children.first,
        let childLabel = child.label,
        case let childMirror = Mirror(reflecting: child.value),
        let value = child.value as? Value
          ?? childMirror.children.first?.value as? Value
      else {
        #if compiler(<5.2)
          // https://bugs.swift.org/browse/SR-12044
          if MemoryLayout<Value>.size == 0, !isUninhabitedEnum(Value.self) {
            return (["\(root)"], unsafeBitCast((), to: Value.self))
          }
        #endif
        return nil
      }
      return (
        [childLabel] + childMirror.children.map { $0.label },
        value
      )
    }

    guard
      let (rootLabels, value) = extractHelp(from: root),
      let (embedLabels, _) = extractHelp(from: embed(value)),
      rootLabels == embedLabels
    else { return nil }
    return value
  }
}

5:42

That’s some pretty intense code, but it’s actually pretty close to what we wrote way back in our first episodes that introduced the concept. We just have a little bit of additional logic to handle some edge cases around parameter names and Swift reflection bugs.

5:57

You don’t need to understand every line of this function in order to know the basics of how it works. Loosely speaking, this function uses reflection to gather up the case label and all the associated data labels in the enum value. We can do this by using a mirror, which exposes some very basic information about the enum’s structure.

6:17

For example, suppose we have the following enum:

enum Foo {
  case bar(a: Int, b: Int)
}

7:28

Then reflecting on a value from this enum already gives us the name of the case:

let mirror = Mirror(reflecting: Foo.bar(a: 1, b: 2))
mirror.children.first! // (label "bar", (a 1, b 2))

6:54

Once we know how to figure out the case name of enum values it’s only a matter of employing a small trick in order to extract the value out. What we can do is take the value inside this mirror, which is the (1, 2) tuple, embed it back into the enum, and then reflect on that new root value. If the mirror tells us it’s the same case label as what we have here then we know the associated data we extracted via the mirror matches the case represented by the embed function.
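That round trip can be sketched in a few self-contained lines using only the standard library’s Mirror (the names here are illustrative, not the library’s):

```swift
enum Foo {
  case bar(Int)
  case baz(Int)
}

// The case name is the label of the mirror's first child.
func caseLabel(of value: Any) -> String? {
  Mirror(reflecting: value).children.first?.label
}

let root = Foo.bar(1)

// 1. Extract the payload via a mirror.
let payload = Mirror(reflecting: root).children.first!.value as! Int

// 2. Embed it back and compare case labels.
let matches = caseLabel(of: root) == caseLabel(of: Foo.bar(payload))   // true: root is .bar
let mismatch = caseLabel(of: root) == caseLabel(of: Foo.baz(payload))  // false: root isn't .baz
```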

7:28

However, there are some edge cases to consider. In practice enums can have overloaded case names and even unlabeled associated values:

enum Foo {
  case bar(a: Int, b: Int)
  case bar(a: Int)
  case baz(Int, Int)
}

7:46

So we have to do a little bit of extra work to gather up all of the labels for a case, including the case name and all the argument labels. This is why this little helper function returns a whole array of labels:

func extractHelp(
  from root: Root
) -> (labels: [String?], value: Value)? {

8:07

It simultaneously tries to extract the value from the root via a mirror and returns all the labels encountered with the mirror.

8:18

Then we can perform the extract-then-embed round trip we alluded to before in order to check if the labels match, and if they do then we have successfully extracted the value:

guard
  let (rootLabels, value) = extractHelp(from: root),
  let (embedLabels, _) = extractHelp(from: embed(value)),
  rootLabels == embedLabels
else { return nil }
return value

8:52

So that’s the very basics, but again it’s not necessary to understand all the nitty gritty details of this code to understand what we are about to do next.

9:02

What’s important to understand is that this code is using mirrors to extract labels from an enum value so that it can differentiate between different cases of an enum. And as short as this reflection code is, it is probably quite slow. Creating and traversing a mirror is not a cheap operation, and here we are creating and traversing up to 4 of them in order to extract from a single enum case.

Benchmarking case paths

9:27

But, let’s not just guess what the performance characteristics of this code are. Let’s actually benchmark it! We can quickly write a benchmark that shows how the reflection-powered case paths fare against manually defined case paths.

9:41

To write our benchmarks we will use the same tool we used to benchmark our parsing library: swift-benchmark.

9:56

We can add it as a dependency:

dependencies: [
  .package(
    url: "https://github.com/google/swift-benchmark",
    from: "0.1.0"
  )
],

10:09

And to create a benchmark, we need to introduce a new target that depends on it:

.target(
  name: "swift-case-paths-benchmark",
  dependencies: ["CasePaths", "Benchmark"]
),

10:26

Then, in the Sources directory, we will create a swift-case-paths-benchmark folder and a main.swift file that acts as the entry point into the executable that will do the benchmarking. We can hop over to it and import the library:

import Benchmark

10:51

To get things going we can invoke the module’s benchmark function, which allows you to give the benchmark a name and provide a closure:

benchmark("Manual") {
}

benchmark("Reflection") {
}

11:04

We will start the benchmark by comparing case path reflection to code that manually picks apart an enum case.

11:16

And finally we can call the main function to hand control over to the library to run the benchmarks:

Benchmark.main()

11:24

Now we just need to select the swift-case-paths-benchmark scheme and run the executable.

11:34

This immediately prints an error to the console:

Please build with optimizations enabled (-c release if using SwiftPM, -c opt if using bazel, or -O if using swiftc directly). If you would really like to run the benchmark without optimizations, pass the --allow-debug-build flag.

11:41

We encountered this exact same error back when we benchmarked our parsing library, and we emphasized that it is a good error to have. This is the benchmark library forcing us to build for release before we are even allowed to benchmark. This is important because benchmarking code built for development can be highly misleading. Not only does that code run much slower than code built for release, but also two sets of code could have wildly different performance characteristics when built for release.

12:11

So we must edit our scheme settings to set the build configuration to be release.

12:19

If we re-run the benchmark, it succeeds:

running Manual... done! (78.26 ms)
running Reflection... done! (90.08 ms)

name        time       std         iterations
---------------------------------------------
Manual      30.000 ns  ± 216.10 %  1000000
Reflection  33.000 ns  ± 626.33 %  1000000
Program ended with exit code: 0

12:26

This prints out an entry for each benchmark we added, and tells us on average how much time it took to execute the closure, along with its standard deviation, as well as how many times it executed the closure.

12:41

Nothing too surprising here. Our closures aren’t doing anything yet and so both benchmarks basically take the same amount of time.

12:48

It’s worth noting that even these empty benchmarks take some amount of time, about 30 nanoseconds, and a nanosecond is one billionth of a second. As short as it is, we should keep this cost in mind when evaluating the results of the benchmarks we run.

13:04

Alright, let’s put some real work in these closures.

13:07

We need to define an enum that holds values we can extract. We can start simply:

enum Enum {
  case associatedValue(Int)
}

13:22

For the manual benchmark, we will create a case path from scratch. We just need to import the library:

import CasePaths

13:26

And invoke the initializer on CasePath that takes an embed function and an extract function:

let manual = CasePath(
  embed: <#(_) -> _#>,
  extract: <#(_) -> _?#>
)

13:31

For the embed function we can pass along the enum case initializer, which is a function:

let manual = CasePath(
  embed: Enum.associatedValue,
  extract: <#(_) -> _?#>
)

13:38

And for the extract function we can open up a closure and do a manual guard case let:

let manual = CasePath(
  embed: Enum.associatedValue,
  extract: {
    guard case let .associatedValue(value) = $0
    else { return nil }
    return value
  }
)

13:58

This isn’t so bad, but it’s boilerplate that adds up, and is exactly why we introduced the reflective code that can magically extract a value given an embed function:

let reflection = /Enum.associatedValue

14:15

Which is much better, but to see how they compare let’s perform an extraction in each benchmark:

let enumCase = Enum.associatedValue(42)

benchmark("Manual") {
  manual.extract(from: enumCase)
}

benchmark("Reflection") {
  reflection.extract(from: enumCase)
}

Result of call to ‘extract(from:)’ is unused
Result of call to ‘extract(from:)’ is unused

14:42

We get some warnings because we’re not doing anything with the values we extract. We can silence them with an underscore:

benchmark("Manual") {
  _ = manual.extract(from: enumCase)
}

benchmark("Reflection") {
  _ = reflection.extract(from: enumCase)
}

14:50

But what would perhaps be better is to introduce a precondition that verifies extraction is working as we expect, especially since we’ll be modifying the reflection code soon:

benchmark("Manual") {
  precondition(manual.extract(from: enumCase) == 42)
}

benchmark("Reflection") {
  precondition(reflection.extract(from: enumCase) == 42)
}

15:08

We can now run the benchmark and finally gain some insight into the cost of reflection:

running Manual... done! (104.31 ms)
running Reflection... done! (1176.43 ms)

name        time         std         iterations
-----------------------------------------------
Manual      41.000 ns    ± 243.49 %  1000000
Reflection  8169.000 ns  ± 55.03 %   106802
Program ended with exit code: 0

15:12

Ouch. That’s quite the difference. Extracting a value via reflection looks to be over 200 times slower than manual extraction. But if we remember the 30 nanosecond overhead of a benchmark, it’s actually about 1,000 times slower 😬

15:30

Now, 8,000 nanoseconds isn’t a ton of time on its own. At 60 frames per second, applications have essentially 16 milliseconds to do their work before they cause a hitch in the runloop. This means we could perform over 2,000 reflective case path extractions in that time. So, this isn’t going to be a huge deal for many applications, but it can still add up if your application uses a lot of case paths.
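The frame-budget arithmetic above can be sanity checked in a couple of lines (the figures are the rounded ones from the discussion, not fresh measurements):

```swift
// Rough frame-budget arithmetic: how many reflective extractions
// fit in one 60 fps frame?
let frameBudgetNs = 16_000_000.0    // ~16 ms per frame at 60 fps
let reflectionExtractNs = 8_000.0   // measured cost of one reflective extraction
let extractionsPerFrame = frameBudgetNs / reflectionExtractNs
// extractionsPerFrame == 2000
```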

15:54

For example, in a Composable Architecture application that is heavily modularized, each pullback operation introduces a case path extraction that attempts to pluck a child action out of a parent one. And recently we even introduced an overload of pullback that allows you to pull back along a case path of state, in which case you will be performing two extractions. If your application is composed of dozens of these reducers and if the store can send actions at a high rate, say from a drag gesture or display link, you may start to encounter some real performance problems.

16:22

Our benchmark shows that while the reflection code can be slow, we at the very least have a fix for the problem today: if you ever hit a performance issue with case paths, you can always leave reflection behind and write some manual case paths instead. It’s tedious work, but mostly mechanical, and you could even use a code generation tool like Sourcery to automate the work.

Comparing cases using the Swift runtime

16:44

Well, luckily there’s a better way to do reflective case paths, and we don’t have to turn to code generation. A few weeks ago we had an episode on a SwitchStore concept in the Composable Architecture that allows you to “switch” over a store that holds onto enum state so that you can derive a store for each case of an enum. In order to make this construction performant we introduced some code that leveraged the Swift runtime metadata, which allowed us to compute the “tag” of an enum. This gave us a lightweight way to figure out which case an enum value belongs to, which in turn allowed us to minimize the number of times the body of our views are recomputed.

17:22

So it sounds like this tag function could be very useful to us for case paths, since a lot of the work we are doing has to do with figuring out which case a value belongs to.

17:39

Let’s quickly paste that tag code into a playground so that we can see how it works:

private func enumTag<Case>(_ `case`: Case) -> UInt32? {
  let metadataPtr = unsafeBitCast(
    type(of: `case`),
    to: UnsafeRawPointer.self
  )
  let kind = metadataPtr.load(as: Int.self)
  let isEnumOrOptional = kind == 0x201 || kind == 0x202
  guard isEnumOrOptional else { return nil }
  let vwtPtr = (metadataPtr - MemoryLayout<UnsafeRawPointer>.size)
    .load(as: UnsafeRawPointer.self)
  let vwt = vwtPtr.load(as: EnumValueWitnessTable.self)
  return withUnsafePointer(to: `case`) { vwt.getEnumTag($0, metadataPtr) }
}

private struct EnumValueWitnessTable {
  let f1, f2, f3, f4, f5, f6, f7, f8: UnsafeRawPointer
  let f9, f10: Int
  let f11, f12: UInt32
  let getEnumTag: @convention(c) (UnsafeRawPointer, UnsafeRawPointer) -> UInt32
  let f13, f14: UnsafeRawPointer
}

17:48

This is even more intense and obtuse than the mirror reflection code, but it is the minimal amount of code that allows us to load a portion of Swift’s runtime metadata and invoke the functionality that computes the tag of an enum, which is represented as a simple UInt32 value. We can run this function on some values of our Foo enum to see that it allows us to classify which case a value belongs to:

enumTag(Foo.bar(a: 3, b: 4)) // 0
enumTag(Foo.bar(a: 100))     // 1
enumTag(Foo.baz(100))        // 2

19:08

So this is seeming pretty promising. How can we use it?

19:17

Well, if we look at the bottom of the extract function we will see that we are doing quite a bit of work for the failure path. We extract a value from the root, then embed that value back into the root just to extract again, and then compare the labels to make sure they are equal:

guard
  let (rootLabels, value) = extractHelp(from: root),
  let (embedLabels, _) = extractHelp(from: embed(value)),
  rootLabels == embedLabels
else { return nil }

19:39

This seems like a good use case for using the enumTag function since it specifically works to differentiate between enum cases. We would hope that we can just drop all the label logic we are doing and instead rely only on enumTag .

19:57

So, instead of relying on the array of labels computed to determine the case of a value, let’s just use the enumTag function:

guard
  let (rootLabels, value) = extractHelp(from: root),
  let (embedLabels, _) = extractHelp(from: embed(value)),
  enumTag(root) == enumTag(embed(value))
else { return nil }

Immutable value ‘rootLabels’ was never used
Immutable value ‘embedLabels’ was never used

20:27

But now rootLabels and embedLabels aren’t being used at all, so we can ignore them:

guard
  let (_, value) = extractHelp(from: root),
  let (_, _) = extractHelp(from: embed(value)),
  enumTag(root) == enumTag(embed(value))
else { return nil }

20:30

And if we aren’t using those values, then we should be able to drop them from the return of the extractHelp function:

func extractHelp(from root: Root) -> Value? {
  …
}

20:38

To make that compile we need to stop returning the labels from the function:

// let childLabel = child.label,
…
// return (["\(root)"], unsafeBitCast((), to: Value.self))
return unsafeBitCast((), to: Value.self)
…
// return ([childLabel] + childMirror.children.map { $0.label }, value)
return value

21:01

Now the extractHelp function is compiling, which means we can drop the extra return values from the guard statement down below:

guard
  let value = extractHelp(from: root),
  let _ = extractHelp(from: embed(value)),
  enumTag(root) == enumTag(embed(value))
else { return nil }

21:15

And now we aren’t even using the output of the second extractHelp call, so we can get rid of it:

guard
  let value = extractHelp(from: root),
  enumTag(root) == enumTag(embed(value))
else { return nil }

21:21

So that right there should improve performance a little bit because we no longer have to run the extractHelp method twice.

21:34

Let’s confirm by re-running the benchmarks:

running Manual... done! (78.98 ms)
running Reflection... done! (1779.86 ms)

name        time         std         iterations
-----------------------------------------------
Manual      32.000 ns    ± 526.78 %  1000000
Reflection  3787.000 ns  ± 63.41 %   301176
Program ended with exit code: 0

21:41

OK, a couple of interesting things here. It seems that the time to extract values with the reflection code has been cut by more than half. Previously it was a little over 8,000 nanoseconds, and now we’re at just under 4,000. So that’s really good, and it’s expected because we are no longer running the extractHelp function twice, and so have cut our work roughly in half. We are also doing less work in that helper, since we no longer have to hold onto and compare arrays of labels. And it seems that computing an enum tag does not introduce anywhere near the same overhead.

22:25

Remember that we have some preconditions in our benchmarks that verify our case paths have successfully extracted a value, but we also have a much more rigorous test suite that tests a whole lot of edge cases, so let’s quickly run that to make sure everything still works as expected.

22:52

And it does! Phew.

Failing fast with caching

22:53

So this is already pretty promising. If we dropped these changes into an existing application it would already be able to perform over twice as many extractions in a frame as it previously could.

23:04

Let’s take things even further.

23:07

Because once we’ve successfully extracted a value from a root we can cache the tag that was computed. This will allow us to early out of doing a lot of work if we receive a root that doesn’t match the tag we’ve cached, since there is no way the extraction can succeed in that case.
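The shape of that optimization can be sketched as a closure that memoizes a discriminator and bails out early on a mismatch. This is a self-contained illustration with a hand-rolled stand-in tag function, not the library’s real metadata-based enumTag:

```swift
enum Action { case tap(Int), swipe(Int) }

// Stand-in discriminator for illustration; the real code computes this
// from Swift's runtime metadata.
func tag(_ action: Action) -> UInt32 {
  switch action {
  case .tap: return 0
  case .swipe: return 1
  }
}

// Build an extractor for the .tap case that caches the tag of the
// first successful extraction.
func makeExtractTap() -> (Action) -> Int? {
  var cachedTag: UInt32?  // populated on the first success
  return { action in
    // Early out: if we've succeeded before and this value's tag
    // differs, extraction cannot possibly succeed.
    if let cachedTag = cachedTag, cachedTag != tag(action) { return nil }
    guard case let .tap(value) = action else { return nil }
    cachedTag = tag(action)
    return value
  }
}

let extractTap = makeExtractTap()
extractTap(.tap(1))    // 1 (slow path; caches tag 0)
extractTap(.swipe(2))  // nil (fast path via the cached tag)
```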

23:36

So, let’s start by introducing a cached tag that will be nil to begin with:

public func extract<Root, Value>(
  _ embed: @escaping (Value) -> Root
) -> (Root) -> (Value?) {
  var cachedTag: UInt32?
  …
}

24:05

Then, just before we return the successfully extracted value, we can update the cached tag so that we know which case of the enum this extraction corresponds to:

guard
  let value = extractHelp(from: root),
  enumTag(root) == enumTag(embed(value))
else { return nil }
cachedTag = enumTag(root)
return value

24:17

And then finally we early out of this entire function if we have a cached tag and it doesn’t match the tag of the root we are inspecting:

if let cachedTag = cachedTag, cachedTag != enumTag(root) {
  return nil
}

24:38

We can also clean this code up a little bit, because right now we are computing enumTag(root) three different times. Although this operation is fast, we can still avoid this extra work and compute it a single time up top. We can even bring back the assertion failure that catches when trying to use this reflective helper on a non-enum:

guard let rootTag = enumTag(root) else {
  assertionFailure("Root must be an enum or optional.")
  return nil
}
if let cachedTag = cachedTag, cachedTag != rootTag {
  return nil
}
guard
  let value = extractHelp(from: root),
  rootTag == enumTag(embed(value))
else { return nil }
cachedTag = rootTag
return value

25:36

So, we’ve now made a number of changes to our reflection code and it’s time to see if there are any measurable improvements to our benchmarks. Running the benchmark suite we will see the following:

running Manual... done! (87.33 ms)
running Reflection... done! (2009.11 ms)

name        time         std          iterations
------------------------------------------------
Manual      34.000 ns    ± 2474.37 %  1000000
Reflection  3496.000 ns  ± 213.07 %   353266
Program ended with exit code: 0

25:47

OK, this doesn’t really seem any different than before, which seems to imply that our caching logic doesn’t provide us any net benefit. But that’s only because we are testing the code path that successfully extracts a value. Our caching logic optimizes the path where extraction fails.

26:03

So, let’s add another benchmark to test that code path. We can copy and paste our existing benchmarks but make a few changes that will exercise the failure code paths. We just need to introduce a new case that fails extraction, and attempt to extract this value instead, with our precondition assuming it will fail with nil:

enum Enum {
  …
  case anotherAssociatedValue(String)
}
…
let anotherCase = Enum.anotherAssociatedValue("Blob")
…
benchmark("Manual: Failure") {
  precondition(manual.extract(from: anotherCase) == nil)
}

benchmark("Reflection: Failure") {
  precondition(reflection.extract(from: anotherCase) == nil)
}

running Manual... done! (90.32 ms)
running Reflection... done! (1908.70 ms)
running Manual: Failure... done! (85.50 ms)
running Reflection: Failure... done! (143.71 ms)

name                 time         std         iterations
--------------------------------------------------------
Manual               39.000 ns    ± 266.87 %  1000000
Reflection           3399.000 ns  ± 85.54 %   354827
Manual: Failure      36.000 ns    ± 608.33 %  1000000
Reflection: Failure  80.000 ns    ± 588.42 %  1000000
Program ended with exit code: 0

26:45

Wow! When extraction fails we get performance that is comparable to the manually written case path. That’s pretty incredible.

Failure vs. success in isowords

26:56

But, the question is: how common is the failing path of extraction? Because if in practice it doesn’t happen very often, then we won’t really reap these performance benefits.

27:13

Well, it turns out, quite a bit! In fact, in the Composable Architecture, which is probably the biggest use case of case paths, failed extractions far, far outweigh successful extractions. This is due to the nature of how the .combine and .pullback operators work. They allow you to take many reducers that work on small pieces of domain, and transform them into a single reducer that works on a larger domain. This is accomplished by trying to extract a local action from a global one, and if it succeeds it runs the local reducer on that action.

27:46

So, if you are combining 10 reducers together, when an action comes in you are necessarily going to get 9 failures and at most one success. So clearly there is the potential for the failure path to be called a lot more than the success path, but let’s see it in real life.
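That counting argument can be sketched as a toy simulation, routing one action through several case-path-style extractors (the names here are illustrative, not actual Composable Architecture code):

```swift
enum AppAction { case a(Int), b(Int), c(Int) }

// One extractor per "child reducer", each claiming a different case.
let extractors: [(AppAction) -> Int?] = [
  { if case let .a(v) = $0 { return v } else { return nil } },
  { if case let .b(v) = $0 { return v } else { return nil } },
  { if case let .c(v) = $0 { return v } else { return nil } },
]

// A single incoming action matches exactly one extractor; the rest fail.
let action = AppAction.b(42)
let results = extractors.map { $0(action) }
let successes = results.compactMap { $0 }.count   // 1
let failures = results.filter { $0 == nil }.count // 2
```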

28:02

We’re going to switch over to our isowords project where we have put in a special version of our case paths library that counts how many times the success and failure code paths are executed, and then prints that to the console.

28:42

If we run the game we instantly see that the number of successes computed is only a fraction of the number of times the failure is computed:

…
successCount 14
failCount 178

29:14

If we further open up a game to tap and drag around we will see those numbers diverge even more:

…
successCount 230
failCount 937

29:56

This is showing that our application will get some pretty big benefits from our case path changes. And the reason we’re seeing these kinds of numbers is because of how we’ve chosen to modularize our application so that each screen gets its own domain, reducer and view. This causes the failure path of case path extraction to be called many times, because when an action enters the system there are going to be a lot of features that do not react to that action.

Conclusion

30:29

Now, there is still more room for improvements. Ideally we could optimize the success code path of extraction too by getting rid of the Mirror APIs and instead fully relying on Swift’s runtime metadata. Unfortunately we haven’t quite figured out exactly how to do that yet, but if we ever do we’ll be sure to update everyone.

30:50

It’s also worth mentioning that ideally Swift would support the concept of case paths as a first-class feature some day, just as key paths have language support. That could make these extractions extremely efficient and provide them to us for free from the compiler. But until then, our library will help fill the gap.

31:12

That’s it for today’s episode. Until next time!

References

Collection: Case Paths
Brandon Williams & Stephen Celis • Jan 20, 2020
The series of episodes in which Case Paths were first theorized and introduced. Key paths are an incredibly powerful feature of the Swift language: they are compiler-generated bundles of getter-setter pairs and are automatically made available for every struct property. So what happens when we theorize an equivalent feature for every enum case?
https://www.pointfree.co/collections/enums-and-structs/case-paths

How Mirror Works
Mike Ash • Sep 26, 2018
A post on the official Swift Blog explaining how Swift’s reflection APIs work, including calls to functions that live on the runtime metadata, like the enum tag code we use in this week’s episode.
https://swift.org/blog/how-mirror-works/

The Swift Runtime
Jordan Rose • Aug 31, 2020
A series of posts on the Swift runtime.
https://belkadan.com/blog/tags/swift-runtime/

Echo
Alejandro Alonso
A complete reflection library for Swift.
https://github.com/Azoy/Echo

Downloads

Sample code: 0152-case-path-performance