EP 230 · Composable Navigation · Apr 10, 2023 · Members

Video #230: Composable Navigation: Stack vs Heap


Episode: Video #230 Date: Apr 10, 2023 Access: Members Only 🔒 URL: https://www.pointfree.co/episodes/ep230-composable-navigation-stack-vs-heap


Description

We take a detour to learn about the stack, the heap, copy-on-write, and how we can use this knowledge to further improve our navigation tools by introducing of a property wrapper.

Video

Cloudflare Stream video ID: 51ebd4ebb617e726497d09c3f1d6c94f Local file: video_230_composable-navigation-stack-vs-heap.mp4 *(download with --video 230)*

References

Transcript

0:05

So this is all looking really great. We now have better tools for modeling our domains more concisely.

0:10

Previously when we wanted to be able to navigate to a new child feature from an existing feature we would just throw an optional into our domain. However, with each additional optional you add to a domain you double the number of states your feature can be in, most of them invalid. For example, we had 4 optionals representing 4 different destinations, which means there were 16 different states our feature could be in, only 5 of which were valid.

0:32

Now we can embrace enums when modeling the destinations for our features, which gives us just one single place to describe all the different places a feature can navigate to. That gives us more correctness in our code because the compiler is proving for us that only one destination can be activated at a time.

Stephen
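A minimal sketch of this modeling (the feature and child state names here are hypothetical placeholders, not the episode's actual domain):

```swift
// One enum case per destination: the compiler guarantees that at most
// one destination's state can exist at a time.
enum Destination: Equatable {
  case addItem(String)       // placeholder child states for illustration
  case editItem(String)
  case duplicateItem(String)
  case deleteAlert(String)
}

struct FeatureState: Equatable {
  // A single optional: nil means nothing is presented.
  var destination: Destination?
}

var state = FeatureState(destination: .addItem("milk"))
if case let .addItem(name)? = state.destination {
  print(name)  // milk
}
state.destination = nil  // dismiss whatever was presented
```

With 4 separate optionals there are 2⁴ = 16 combinations, 11 of them invalid; with this single optional enum, exactly the 5 valid states are representable.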

0:49

Next let’s address a problem that gets nearly everyone eventually when using our library, especially when composing lots of features together. Because modeling navigation in state requires us to nest child state in parent state, any sufficiently complex application eventually has a very large root app state that could potentially have hundreds of fields. This also means that the amount of memory you are storing on the stack will increase as you integrate more child features together.

1:15

This may not matter for a while, but eventually your state may get too big or you may have too many frames on the stack, and you could accidentally overflow the stack. That crashes the application, and so that of course isn’t great.

1:27

Now it’s worth noting a couple of caveats here. Not all of your application’s state is necessarily stored on the stack. Arrays, dictionaries, sets, and even most strings are all stored on the heap, and so it doesn’t matter if you have a 100,000 element array in your state; that makes no difference for the stack.
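We can see this concretely with MemoryLayout: a struct’s inline fields grow its size, but a collection field is just a fixed-size header pointing at heap storage. (A small sketch; the sizes shown assume a typical 64-bit platform.)

```swift
struct Inline {
  var a, b, c, d: Int   // four 8-byte fields stored inline
}
struct Boxed {
  var elements: [Int]   // element storage lives on the heap
}

print(MemoryLayout<Inline>.size)  // 32: grows with every field you add
print(MemoryLayout<Boxed>.size)   // 8: just a pointer to the buffer

// Even 100,000 elements don't change the inline size of the value.
let big = Boxed(elements: Array(repeating: 0, count: 100_000))
print(MemoryLayout.size(ofValue: big))  // still 8
```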

1:44

Also we made great strides towards reducing the number of stack frames that are incurred when combining lots of features together, all thanks to the Reducer protocol. In release mode most of those compositions get inlined away and you are left with very few actual stack frames.

1:59

But still, people do run into this limitation, and it’s a real bummer.

2:03

However, by far the most common reason for multiple features to be integrated together is because of navigation. You plug “FeatureB” into “FeatureA” when you need to navigate from A to B. As you do this more and more your state becomes bigger and bigger.

2:17

And now we are going to be giving everyone more tools to build up state like this for navigation, and so it may start happening a lot more. Perhaps we can directly bake into the tools a more efficient way of storing state so that it plays more nicely with deeply nested, composed features.

Stack vs heap

2:38

Let’s start by showing everyone exactly how one can blow the stack in an application, and how to fix it.

2:53

We need the ability to generate a value that is really, really huge. Like “having hundreds or thousands of fields” big. One way to do this is to nest large values together, much like one nests features in the Composable Architecture.

3:06

So, let’s define a generic type that holds 10 fields, and we will go ahead and make it Equatable because we’ll need that later: struct Ten<A: Equatable>: Equatable { var a, b, c, d, e, f, g, h, i, j: A }

3:37

Let’s start a test so we can play around with this type: import XCTest class StackOverflowTests: XCTestCase { func testTen() { } }

3:48

We will construct a value of Ten that holds onto ten strings: let value1 = Ten( a: "qndfjkasdf njksdfnjsakd", b: "qndfjkasdf njksdfnjsakd", c: "qndfjkasdf njksdfnjsakd", d: "qndfjkasdf njksdfnjsakd", e: "qndfjkasdf njksdfnjsakd", f: "qndfjkasdf njksdfnjsakd", g: "qndfjkasdf njksdfnjsakd", h: "qndfjkasdf njksdfnjsakd", i: "qndfjkasdf njksdfnjsakd", j: "qndfjkasdf njksdfnjsakd" )

4:05

Then we will construct another Ten value that holds onto 10 of the first value: let value2 = Ten( a: value1, b: value1, c: value1, d: value1, e: value1, f: value1, g: value1, h: value1, i: value1, j: value1 )

4:25

And we’ll do it again: let value3 = Ten( a: value2, b: value2, c: value2, d: value2, e: value2, f: value2, g: value2, h: value2, i: value2, j: value2 )

4:33

…and again: let value4 = Ten( a: value3, b: value3, c: value3, d: value3, e: value3, f: value3, g: value3, h: value3, i: value3, j: value3 )

4:38

…and again: let value5 = Ten( a: value4, b: value4, c: value4, d: value4, e: value4, f: value4, g: value4, h: value4, i: value4, j: value4 )

4:43

It may not seem like much, but we have actually created a value that holds onto 10 times 10 times 10 times 10 times 10 fields, which is 100,000 fields!

4:54

And then, at the end of all of that, let’s grab a mutable copy of value5 : var copy = value5

5:00

Believe it or not that is all it takes to blow the stack on iOS. We will run this test…

5:09

Well, one thing we will find is that it takes a very long time. So, we will perform some movie magic so that you don’t have to wait.

5:30

And finally after over four and a half minutes the test runs, and it crashes on the line we make the copy.

5:44

If we comment out the copy and value5 and instead copy value4 : var copy = value4

5:53

We will see that things compile a bit faster and the test runs without crashing.

5:58

So, what is going on here?

6:00

Well, the strain we are witnessing in the compiler and the runtime is a direct consequence of a concept known as the “stack”. The stack is a fixed-size region of memory set aside for an application, used for allocations whose size is known at compile time. In fact, that static, compile-time aspect is exactly why the compiler mysteriously got slower when trying to compile such a gigantic data type.

6:24

The compiler sees that we want to make use of a 100,000 field struct in this test, and it sees we want to make a copy, so at compile time it knows exactly how much stack memory to allocate when this code path is ready to execute. Further, if this value was passed to a function it would know how much stack memory needs to be allocated for that as well as whatever data types are used in the function.

6:45

The amount of stack memory an application has is relatively restricted compared to the amount of memory the computer has. It is meant to only serve the memory allocations that are known about at compile time, which comparatively is not very common. Such memory cannot depend on anything dynamic or any information gained at runtime, such as arrays of arbitrary lengths. And because the memory is restricted, it is possible to accidentally allocate more memory than the stack has, thus causing what is known as a “stack overflow” and crashing the application.

7:14

Now, many times the compiler can be quite smart and omit copies that aren’t really necessary, but it doesn’t always happen. In fact, this is a huge topic happening in Swift evolution right now with so-called “non-copyable” types, which allow you to describe a stack-allocated value that should never be copied. Giving this information to the compiler gives you more power in how that value can be used and what optimizations the compiler can do behind the scenes.
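As a sketch of what that looks like under the SE-0390 proposal (still in flight when this episode aired, later shipping in Swift 5.9), using a hypothetical resource type:

```swift
// Marking a struct ~Copyable removes its copyability: values are moved,
// never copied, so the compiler can reason precisely about ownership.
struct FileDescriptor: ~Copyable {
  let fd: Int32

  // A consuming method takes ownership of the value; after calling it,
  // the compiler forbids any further use of the original binding.
  consuming func close() {
    // the actual close(fd) syscall would go here
  }
}

let file = FileDescriptor(fd: 1)
file.close()
// Using `file` again here would be a compile-time error.
```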

7:37

So, there definitely are issues with storing large amounts of data on the stack. Now of course a type with 100,000 fields isn’t super realistic, but it is not uncommon for a large application to have hundreds and possibly even thousands of fields, and then further for each layer that data is passed through it potentially gets copied. So it’s also possible to blow the stack with medium sized data and lots of stack frames.

8:01

Now the stack seems quite restrictive, but its power comes in its speed. Because it is statically known at compile time, the compiler can do fancy optimizations to make sure that the memory is packed in tightly and contiguously, making accessing stack memory very fast.

8:15

But, there’s clearly a missing piece to this story for memory, and that’s memory that needs to be allocated dynamically. After all, the user is doing things in the application that couldn’t possibly be predicted at compile time, and so we need the ability to allocate memory on an as-needed basis.

8:31

This is where heap memory comes into play. The heap is what allows you to dynamically allocate memory whose shape and size does not need to be known at compile time. It’s perfect for things that depend on user actions as well as resizable data types such as collections.

8:45

The simplest way to allocate data on the heap in Swift is to simply use a reference type such as a class. Reference types have a lifetime of their own that is independent of the function stack they are used from. Stack allocated values are discarded the moment their enclosing function is finished, but reference types can live for much longer and travel across function boundaries throughout the application.

9:05

So, that seems good, but it comes with a cost. Heap allocated memory is typically not as fast to read from and write to because it is not known at compile time and is typically not contiguous. You can declare two pieces of heap allocated data right next to each other in your program, yet their actual locations in memory could be quite far apart. So, there are trade-offs to using stack and heap allocated memory.

9:28

There’s another weird way you can move data over to the heap, and that’s to wrap it in an array. Arrays are weird in Swift: they act like value types in that they are copied when passed around to functions, but because they are dynamically sized they must be heap allocated.

9:43

Let’s create a new type that doesn’t hold onto 10 fields, but rather holds onto an array of elements: struct BoxedTen<A: Equatable>: Equatable { var box: [A] }

10:05

Now right off the bat this seems strange because the array can hold any number of elements, not just 10. Well, we are going to work hard to strictly control how many elements it holds. For one thing, we will make the array private so that it cannot be accessed from the outside: private var box: [A]

10:22

And then we will provide an initializer that takes 10 arguments so that the underlying array will always have 10 elements: init( a: A, b: A, c: A, d: A, e: A, f: A, g: A, h: A, i: A, j: A ) { self.box = [a, b, c, d, e, f, g, h, i, j] }

10:28

Now of course it is on us to maintain this invariant of the type. We could of course do something bad in here one day, such as forget to put one of the elements in the array: self.box = [a, b, c, d, e, f, g, h, i, /* j */]

10:40

Now the array only has 9 elements, and so will surely crash eventually. The compiler can’t help us check; it’s entirely on us to make sure the data held in this array is correct.

10:50

But, having said that, we can make writable properties on the type that mimic the a , b , c , etc… fields of Ten : var a: A { get { self.box[0] } set { self.box[0] = newValue } } var b: A { get { self.box[1] } set { self.box[1] = newValue } } var c: A { get { self.box[2] } set { self.box[2] = newValue } } var d: A { get { self.box[3] } set { self.box[3] = newValue } } var e: A { get { self.box[4] } set { self.box[4] = newValue } } var f: A { get { self.box[5] } set { self.box[5] = newValue } } var g: A { get { self.box[6] } set { self.box[6] = newValue } } var h: A { get { self.box[7] } set { self.box[7] = newValue } } var i: A { get { self.box[8] } set { self.box[8] = newValue } } var j: A { get { self.box[9] } set { self.box[9] = newValue } }

11:07

And now from the outside, the BoxedTen type looks identical to the Ten type, but under the hood it stores its data in an array, which is heap allocated, rather than individual fields, which is stack allocated.

11:21

In fact, we can even copy and paste the testTen test, rename it to testBoxedTen , and just substitute all uses of Ten with BoxedTen .

11:41

Now one amazing thing already is that the tests compile instantly. Previously tests would take a few minutes to compile. That’s because the compiler doesn’t need to do the work to statically figure out how it’s going to stack allocate 100,000 fields, and instead it can defer that work until runtime.

11:49

So that is seeming promising.

11:51

Further, we can bring back the value5 and copy, and not only do tests still compile instantly, but they even run without crashing: let value5 = BoxedTen( a: value4, b: value4, c: value4, d: value4, e: value4, f: value4, g: value4, h: value4, i: value4, j: value4 ) var copy = value5

12:06

So this has gotten us around the stack overflow.

Copy-on-write

12:10

But things get even cooler. Arrays in Swift implement a concept known as “copy on write”, which means that arrays can be passed around to functions and be quote-un-quote “copied”, but if you never make a change to any of those “copies” then the data is never actually copied. It’s only at the moment of trying to mutate the array that a copy is actually made and handed to the function or variable.
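We can observe this behavior directly by comparing the base address of an array’s buffer before and after a mutation (a quick experimental sketch; the addresses are only meaningful for this comparison and shouldn’t be stored long-term):

```swift
var original = Array(repeating: 0, count: 1_000)
let copy = original  // a "copy", but no memory is actually duplicated yet

// Both values still point at the same underlying buffer.
let before = original.withUnsafeBufferPointer { UInt(bitPattern: $0.baseAddress) }
let copyAddr = copy.withUnsafeBufferPointer { UInt(bitPattern: $0.baseAddress) }
print(before == copyAddr)  // true: storage is shared

// The first mutation triggers the real copy, because the buffer is no
// longer uniquely referenced.
original[0] = 1
let after = original.withUnsafeBufferPointer { UInt(bitPattern: $0.baseAddress) }
print(after == copyAddr)   // false: original now has its own buffer
```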

12:31

Let’s hop over to a Swift playground so that we can experiment with this concept. The idea of “copy-on-write” straddles the line between a value type and a reference type. We want to use value types in our code because they are simple, inert representations of data that are easy to understand and do not have spooky behavior across vast distances in the codebase.

13:03

But, at the same time, their very nature demands that they be copied when they are passed around to functions so that mutations in one place cannot affect values in another place. It is worth noting, though, that Swift can often be quite smart about eliding unnecessary copies, and so it’s not true that value types are copied every time they are passed to functions.

13:20

Reference types, such as classes, do not have this copying problem. You can pass references to functions and the entire contents of the object is not copied, but rather a reference to the object is passed along. Also, internally, Swift maintains a count of the number of places in the code base holding a reference to the object so that when it drops to 0 it can be deallocated.

13:40

So, that sounds more efficient than value types, but at the same time reference types introduce a lot of uncertainty into a code base, because mutating an object in place can magically cause mutations to be observed in completely far away parts of the code base without you knowing what is happening.

13:55

Copy-on-write types try to blend a little bit of the good from both worlds. You get to use a value type, but it will not be copied until you try mutating the value, at which point it will be copied.

14:06

The way you do this is by starting with a value type to represent the data you are trying to model. We will use a struct , and it will be generic over the type of data it holds on the inside: struct Wrapper<Value> { }

14:18

Now, rather than holding onto the value directly inside like we would normally: var value: Value

14:24

…we are actually going to hold onto a reference type under the hood. We can even hide that implementation detail as a little private class nested inside the struct: struct Wrapper<Value> { private class Storage { var value: Value init(value: Value) { self.value = value } } }

14:41

We will make it so that anyone interacting with the struct never has to think about this reference type storage. We can do this by holding onto private storage: private var storage: Storage

14:59

…and providing a non-private initializer that deals with the value we want to hold rather than any concept of storage: init(value: Value) { self.storage = Storage(value: value) }

15:07

So we can now create values of Wrapper that secretly under the hood will be held in a reference type.

15:15

But, how can we get access to the underlying data without it being exposed that there’s a reference type? Well, if we try adding a computed property in the naive way: var value: Value { get { self.storage.value } set { self.storage.value = newValue } }

15:35

…then we haven’t accomplished much.

15:38

All we’ve done is make a quote-unquote “value” type that actually behaves almost entirely like a reference type. For example, if we create an instance of Wrapper , then copy it, and mutate one of the values, it will somehow magically also mutate the other: var x = Wrapper(value: 1) var y = x x.value = 2 x.value // 2 y.value // 2

16:11

This is not the behavior we expect from value types. This is not how any value type that ships in the Swift standard library behaves. If we use plain integers instead of Wrapper : var z = 1 var w = z z = 2 z // 2 w // 1

16:36

…then it behaves how I would expect.

16:43

So, we have to do more work with our Wrapper type to make it behave like a value type even though it is secretly using a reference type under the hood.

16:51

This is where the concept of “uniquely referenced” comes into play. It is possible to ask Swift whether an object has a reference count of 1, or in other words, is “uniquely referenced”, and then use that information to implement different logic.

17:04

In particular, we can ask the underlying storage of the Wrapper value if it is uniquely referenced. Let’s try that in isolation real quick. Since the storage is private we can’t ask this from the outside, so let’s expose a little method on Wrapper that does it for us: mutating func isKnownUniquelyReferenced() -> Bool { Swift.isKnownUniquelyReferenced(&self.storage) }

17:39

Then we can see that this evaluates to true right after creating a Wrapper value, but as soon as we make a copy it flips to false because now the underlying storage has a reference count of 2: var x = Wrapper(value: 1) x.isKnownUniquelyReferenced() // true var y = x x.isKnownUniquelyReferenced() // false y.isKnownUniquelyReferenced() // false

18:06

So, when mutating the value inside Wrapper we can use this information to determine if we can mutate the storage directly, or if we should make a copy of the storage with the new value: var value: Value { get { self.storage.value } set { if Swift.isKnownUniquelyReferenced(&self.storage) { self.storage.value = newValue } else { self.storage = Storage(value: newValue) } } }

18:45

Now with that change the Wrapper type behaves exactly like a value type. If we make a copy and make a mutation, the two values are not mutated together: var x = Wrapper(value: 1) var y = x x.value = 2 x.value // 2 y.value // 1

18:55

And even better, after the copy, both of the values go back to having uniquely referenced underlying storage: x.isKnownUniquelyReferenced() // true y.isKnownUniquelyReferenced() // true

19:10

So that’s pretty cool, but we can use these ideas to provide a more efficient form of equality for Wrapper too. Since we now have a reference type representation of the value type, and since reference types have an identity unto themselves, we can first try comparing two Wrapper values by their storage and then fall back to actually comparing the data if that fails: extension Wrapper: Equatable where Value: Equatable { static func == (lhs: Self, rhs: Self) -> Bool { if lhs.storage === rhs.storage { return true } return lhs.value == rhs.value } }

20:04

Note that we are comparing the storage identity since we are using triple equals === , and that is typically a very fast operation. Certainly much faster than field-wise comparing two values.

20:13

We can even simulate forcing the field-wise comparison to be slow by adding a sleep: extension Wrapper: Equatable where Value: Equatable { static func == (lhs: Self, rhs: Self) -> Bool { if lhs.storage === rhs.storage { return true } Thread.sleep(forTimeInterval: 3) return lhs.value == rhs.value } }

20:30

If we now compare the equality of a Wrapper value and a copy we will see it evaluates almost immediately: var x = Wrapper(value: 1) var y = x x == y // true

20:45

However, if we make a mutation, even if we don’t change anything, the storage will be copied and so we can no longer rely on storage identity to give us a fast path for equality: y.value = y.value x == y // true after a 3 second delay This equality takes 3 seconds to finish due to our Thread.sleep , showing that the storage identity has changed.

21:16

Swift’s Array type uses this technique too. We can go to the Swift project on GitHub. We’ll go to Array.swift , and we’ll search for public static func == in this file. We will see that, once the counts of the lhs and rhs are known to be equal, it first tests “referential equality” before comparing any elements: // Test referential equality. if lhsCount == 0 || lhs._buffer.identity == rhs._buffer.identity { return true }

21:39

This gives arrays super powers when it comes to equality checking. It does not need to naively traverse the two arrays and check that each pair of elements is equal if it knows that the storage of each array is the same.

21:50

We can even see this in very concrete terms.

21:53

Let’s write a benchmark. First we will switch our test target to run in release mode:

22:03

And let’s bring back the testTen case, but we will go further and comment out value4 , because it takes too long to compile value4 in release mode.

22:16

Then at the end of the test we will put this very rough benchmark into place: let start = Date() for _ in 1...10_000 { precondition(value3 == value3) } print("Ten equality", Date().timeIntervalSince(start))

22:49

Now this is obviously a true statement, but Swift doesn’t know that. As far as it knows, the two values handed to == are completely different and so it has to go through and check all 1,000 fields for equality, and we are doing that 10,000 times.

23:03

And then, in the testBoxedTen case we will put in a similar benchmark, but we will check for equality on value5 since it actually compiles very quickly: let start = Date() for _ in 1...10_000 { precondition(value5 == value5) } print( "BoxedTen equality", Date().timeIntervalSince(start) )

23:17

So this is theoretically checking for the equality of 100,000 fields and doing it 10,000 times. That sounds like a lot of work, especially when compared to what we are doing over in the testTen case.

23:28

And so if we didn’t know any better we might think that this benchmark might be a lot slower since ostensibly it needs to do extra work to check the lengths of the array, and iterate over the array to check each element.

23:39

Well when we run the test we find: BoxedTen equality 1.0728836059570312e-06 Ten equality 0.00476992130279541

23:48

Wow, ok. So checking the Ten equality is quite fast, taking just a small fraction of a second, but it seems that the BoxedTen equality is still orders of magnitude faster than Ten .

Property-wrapped presentation

24:11

So, it seems like magic, but the act of boxing up a value in an array can not only move it to the heap to get around potential stack overflow problems, but it can also speed up equality checking when it detects the underlying storage is the same. The array equality isn’t literally checking 100,000 fields. It’s first just checking the storage referential identity, which is very fast, and only if that fails does it fall back to checking all 100,000 fields.

24:35

So, what can we do with this knowledge? Brandon

24:37

We can actually insert a little copy-on-write wrapper into our navigation tools so that the moment you create a branch in your feature’s state for each of its destinations we move that state to the heap. That should prevent any stack overflow problems, and might also help with performance of equality checks for large types.

24:55

Let’s see what it takes to make this happen.

24:59

First, let’s take another look at the tools we have built for navigation. We start by modeling the presentation and dismissal of the destination with some optional state: struct State: Equatable { var destination: Destination.State? … }

25:11

Then you add a case to your action that holds onto a PresentationAction of the child domain: enum Action: Equatable { case destination(PresentationAction<Destination.Action>) … }

25:21

There is a bit of asymmetry here. We have a wrapper type for the presentation action so that the destination domain can be automatically enhanced with the dismiss action, but there is no such concept for the presentation state. It’s just a simple optional value.

25:40

What if we had a state wrapper type too, and in that type we could implement this copy-on-write optimization and completely hide it from the user of the library?

25:52

It might even look like this: var destination: PresentationState<Destination.State>

25:59

Or even better, what if it was a property wrapper? @PresentationState var destination: Destination.State?

26:07

Now we have great symmetry between the state and action domains. And this wrapper type would give us a place to squirrel away little bits of information and implementation details that make navigation more performant and more powerful.

26:19

Let’s give it a shot!

26:27

Suppose we had a PresentationState that could pair nicely with PresentationAction , and it would wrap a private array of state so that we get heap allocation and copy-on-write semantics, but we would only allow this array to hold zero or one value: struct PresentationState<State> { private var value: [State] }

26:38

Then we would expose a computed property that exposes the underlying array publicly as an optional: var wrappedValue: State? { get { value.first } set { guard let newValue = newValue else { self.value = [] return } self.value = [newValue] } }

27:27

And since the field is private this type will not get a synthesized initializer, so let’s add one: init(wrappedValue: State? = nil) { if let wrappedValue { self.value = [wrappedValue] } else { self.value = [] } }

27:56

And let’s go ahead and make this type equatable when the wrapped value is equatable: extension PresentationState: Equatable where State: Equatable {}

28:06

And thanks to copy-on-write semantics, when checking if two PresentationState values are equal it will completely skip checking the underlying state if it knows the array shares the same underlying memory.

28:21

With that done we would then hold this value in our feature’s state that wants to present a child feature, rather than just a plain optional: struct State: Equatable { var destination = PresentationState<Destination.State>() … }

28:33

Then, when interacting with destination , like in the reducer, we need to make sure to go through the wrappedValue instead of acting on the optional state directly.

28:47

So, for example, when the addButtonTapped action is sent we now need to do this: case .addButtonTapped: state.destination.wrappedValue = .addItem( ItemFormFeature.State( item: Item(name: "", status: .inStock(quantity: 1)) ) ) return .none

29:00

That is going to be a huge pain, and we don’t want to have to pepper wrappedValue throughout our code base.

29:09

This is a textbook use case for property wrappers in Swift. They are perfect for the time that you want to carry a little bit of extra context with a data type, but not directly expose that context to users of the type. In this case we want to package the data up into an array internally, but from the outside it just looks like a regular optional.
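Here is that mechanism in miniature, using a toy wrapper (an illustrative sketch, not the library’s actual PresentationState): the stored representation is a zero-or-one-element array, but callers only ever see a plain optional.

```swift
@propertyWrapper
struct Stored<Value> {
  private var box: [Value]  // heap-allocated, copy-on-write storage

  init(wrappedValue: Value? = nil) {
    self.box = wrappedValue.map { [$0] } ?? []
  }

  // Accessing the property goes through wrappedValue, so the array
  // representation stays completely hidden.
  var wrappedValue: Value? {
    get { self.box.first }
    set { self.box = newValue.map { [$0] } ?? [] }
  }
}

struct Feature {
  @Stored var destination: String? = nil
}

var feature = Feature()
feature.destination = "settings"     // reads/writes look like a plain optional
print(feature.destination ?? "none") // settings
feature.destination = nil
```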

29:27

All we have to do is mark PresentationState with @propertyWrapper : @propertyWrapper struct PresentationState<State> { … }

29:35

…and then use the property wrapper syntax in the InventoryFeature ’s state: struct State: Equatable { @PresentationState var destination: Destination.State? … }

29:50

Now we can drop the explicit usage of wrappedValue because when you access destination directly it refers to the wrapped value, which is the optional, not its array representation: case .addButtonTapped: state.destination = .addItem( ItemFormFeature.State( item: Item(name: "", status: .inStock(quantity: 1)) ) ) return .none

29:53

And that’s all it takes. Since we already provided an initializer that takes a wrapped value and exposed a wrappedValue computed property, there is nothing else we need to do. Everything already compiles.

30:05

Things are still compiling and work exactly as they did before, but we’ve hidden the implementation detail of the array even more. Also even tests still compile and still pass.

30:26

So, with just a little bit of work we have now made deeply nested state in the Composable Architecture more efficient and safer for the stack. Also, there are a lot of people out there that maintain their own little copy-on-write property wrappers in order to work around stack overflows, and we think those will no longer be necessary if you start using the new navigation tools that will eventually ship in the library. Navigation integration points are by far the biggest reason why state gets deeply nested, and so if you start using our navigation tools we will automatically insert a little copy-on-write helper into all the critical linchpins of your application, which should alleviate the need to ever use a copy-on-write property wrapper yourself.

Fixing a bug with hidden state

31:11

So this is great, but also this little property wrapper gives us the perfect opportunity to fix a problem in our navigation tools. We actually have a pretty serious bug right now, and to solve it we can squirrel away a little bit of scratch state in the property wrapper.

31:27

We can easily observe the bug with the application as it is built right now. We have to add just a little bit of functionality. Let’s quickly add a button in the ItemFormFeature so that when it is tapped it dismisses itself. Seems simple enough, so let’s try it.

31:40

We will update the dismiss button, which previously called the dismiss environment value, to instead send an action through the store: Button("Dismiss") { viewStore.send(.dismissButtonTapped) }

32:00

We will add the action to the ItemFormFeature ’s action enum: enum Action: BindableAction, Equatable { case dismissButtonTapped … }

32:06

And we will handle it by invoking the dismiss dependency: case .dismissButtonTapped: return .fireAndForget { await self.dismiss() }

32:15

OK, that didn’t seem bad at all, so what’s the problem with this?

32:24

Well, we can run the app in the simulator, drill down to a row, and tap the “Dismiss” button and we see it works perfectly fine. However, if we start the app deep-linked into a particular item, and then tap the “Dismiss” button, we will see that nothing happens at all.

32:38

But, even better than seeing the bug in the simulator, we can even write a test that demonstrates the bug. This is only possible because we try to keep as much of the library’s logic in the reducer and store layer. This makes it possible to observe all of its behavior in tests, and we don’t need to resort to integration tests or running things in the simulator just to see how features will behave.

33:00

We can get a stub of a test in place: func testDismiss() async { }

33:12

And we can create a TestStore that begins in the state of having a single item in the inventory and we are already drilled down to that item: let store = TestStore( initialState: InventoryFeature.State( destination: .addItem( ItemFormFeature.State(item: .headphones) ) ), reducer: InventoryFeature() ) We don’t even expect any dependencies to be used so we can leave off the trailing closure for overriding dependencies.

33:36

Next, we would hope that if we send the dismissButtonTapped action inside the destination that we would then immediately receive a dismiss action that clears out the destination state:

await store.send(
  .destination(.presented(.addItem(.dismissButtonTapped)))
)
await store.receive(.destination(.dismiss)) {
  $0.destination = nil
}

34:08

Well, sadly this test fails because it is not true that we receive this action.

34:15

This is exactly what we were seeing over in the simulator. Tapping the close button for some reason does not actually send the dismiss action, and so the state isn’t cleared and the view is not popped off the stack.

34:26

What gives?

34:43

Well, recall that the way dismissal works is that we attach a long-living effect when we detect the child state is created:

if let childStateAfter,
  !isEphemeral(childStateAfter),
  childStateAfter.id != childStateBefore?.id
{
  onFirstAppearEffect = .run { send in … }
}

35:15

Well, this code isn’t ever executed because we never detect the creation of the child state. Because we are deep linking into the state right when the app launches, we never actually see the creation of the state. There is no action that is sent into the system so that we can see the state being created.

35:32

However, some action must come through the system for the child in order for it to be able to dismiss itself, so what if we could detect if the onFirstAppearEffect had been created previously, and if not we would create it?

35:46

This is a perfect use case for squirreling away a little bit of data directly inside the property wrapper:

@propertyWrapper
struct PresentationState<State> {
  fileprivate var isPresented = false
  …
}
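As a standalone illustration, here is a hedged sketch of a property wrapper in this shape. It is not the library's actual implementation: the real PresentationState also boxes its value for the copy-on-write behavior discussed earlier, which this sketch omits by storing a plain optional.

```swift
// Simplified sketch of a presentation property wrapper: it exposes an
// optional value while privately tracking whether the "first appear"
// bookkeeping has already happened.
@propertyWrapper
struct PresentationState<State> {
  fileprivate var isPresented = false
  private var value: State?

  init(wrappedValue: State? = nil) {
    self.value = wrappedValue
  }

  // Callers read and write the child state as a plain optional.
  var wrappedValue: State? {
    get { self.value }
    set { self.value = newValue }
  }
}

// A parent feature interacts with `child` exactly as a bare optional.
struct ChildState { var count = 0 }
struct ParentState {
  @PresentationState var child: ChildState?
}

var parent = ParentState()
parent.child = ChildState(count: 1)
```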

36:10

We are even going to make it fileprivate because no one outside the navigation tools should ever need access to this state.

36:14

This state now lives right alongside the feature’s state, but the user of the navigation tools doesn’t need to know anything about it. It can be mostly hidden from the user since they will continue accessing their child state as a simple optional.

36:30

But now how can we use this extra state? In our ifLet operator all we have access to is actual child state. There’s no mention of PresentationState anywhere in the signature of ifLet .

36:45

In fact, we can clearly see a bit of imbalance in the ifLet signature right now, in that the stateKeyPath only needs a key path to some optional state, whereas the actionCasePath wants a case path to a full-blown PresentationAction:

_ stateKeyPath: WritableKeyPath<State, ChildState?>,
action actionCasePath: CasePath<
  Action, PresentationAction<ChildAction>
>,

37:00

So, maybe this operator should restore some symmetry by requiring that the key path focus on PresentationState, which would then give us access to that isPresented state too:

func ifLet<ChildState: Identifiable, ChildAction>(
  _ stateKeyPath: WritableKeyPath<
    State, PresentationState<ChildState>
  >,
  action actionCasePath: CasePath<
    Action, PresentationAction<ChildAction>
  >,
  …
)

37:22

This creates a bunch of compiler errors in the body of the reducer, but we just have to make sure to further go through the wrappedValue when wanting to access the optional child state directly.

37:56

And now the entire Navigation.swift file is compiling, and we have access to the full-blown PresentationState under the hood in ifLet, and so we should hopefully be able to fix the bug.

38:11

However, before doing that, let’s at least get the entire project to compile. Over in the InventoryFeature reducer we invoke ifLet like this:

.ifLet(\.destination, action: /Action.destination) {
  Destination()
}

38:19

This provides a key path to optional Destination.State , but we need a key path to PresentationState of Destination.State .

38:26

Currently the property wrapper does not expose any of the surrounding, wrapping information. It just exposes the optional, wrapped state as a wrappedValue. It is possible to expose some of that surrounding context through what is known as a projectedValue:

struct PresentationState<State> {
  var projectedValue: <#???#>
  …
}

38:58

Whatever type you expose for this property will be available to users of the property wrapper using a special $ syntax.

39:04

For example, if I do something silly like assign a number to this property:

@propertyWrapper
struct PresentationState<State> {
  var projectedValue: Int = 1
  …
}

39:13

Then we immediately get access to that integer over in the InventoryFeature reducer by using $destination syntax:

state.$destination // Int
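The mechanics of projectedValue can be seen in isolation with a toy wrapper (hypothetical, unrelated to the library), whose projected type is deliberately different from its wrapped type:

```swift
// `$property` is sugar for accessing the wrapper's `projectedValue`,
// which can be any type at all, independent of `wrappedValue`.
@propertyWrapper
struct Described<Value> {
  var wrappedValue: Value
  var projectedValue: String { "value is \(self.wrappedValue)" }
}

struct Counter {
  @Described var count = 0
}

var counter = Counter()
counter.count += 1
counter.count   // reads through wrappedValue
counter.$count  // reads through projectedValue
```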

39:36

This gives us the perfect opportunity to expose the full PresentationState type to users of the property wrapper, but only if they go the extra mile to use the $ syntax. Typical usage will not involve the $, and so will deal with the regular optional state.

39:57

So, let’s make projectedValue return PresentationState, and we will go ahead and make it a writable property, since we need a writable key path out of it in order to use ifLet:

var projectedValue: Self {
  get { self }
  set { self = newValue }
}

40:09

With that we can now invoke ifLet by constructing a key path to the projected presentation state:

.ifLet(\.$destination, action: /Action.destination) {
  Destination()
}

40:32

Now the entire project is compiling again. Typically you will not need to use the $destination projected value because typically you do not care that you are dealing with PresentationState . You can really just think of holding a simple optional in your state and use it as such.

40:46

However, at the moment of needing to integrate your child features with the parent, you do need to care about PresentationState, because the ifLet operator needs that information. And so in that one case, and it’s probably the only time you need to do this, you will use $destination.

41:03

OK, with that done, let’s see what it takes to fix the bug we demonstrated a moment ago.

41:08

When constructing the onFirstAppearEffect we check for a bunch of conditions to see if this truly is the “first appear”:

if let childStateAfter,
  !isEphemeral(childStateAfter),
  childStateAfter.id != childStateBefore?.id
{
  onFirstAppearEffect = .run { send in … }
}

41:20

Currently this checks if there is some state after the parent reducer ran, and we check if the ID of the state after the reducer ran does not match the ID of the state before the reducer ran. We also do an “ephemeral” check because ephemeral states do not need any of this logic.

41:34

Well, we now need to beef up this logic. It’s not just when the ID of the states changes, but also if, according to the property wrapper, the state has still not been “presented”:

if let childStateAfter,
  !isEphemeral(childStateAfter),
  childStateAfter.id != childStateBefore?.id
    || !state[keyPath: stateKeyPath].isPresented
{
  …
}

41:56

And once we get into that if branch we can flip the state to true:

state[keyPath: stateKeyPath].isPresented = true
onFirstAppearEffect = .run { send in … }

42:00

That’s basically all it takes, but unfortunately it doesn’t quite fix the problem yet. If we run in the simulator with a deep link in place we will see that the “Dismiss” button still does not work.

42:14

The problem is that this particular branch of the switch executes only when a parent action is being sent. We were operating under the assumption that the only time child state can be created is when a parent action is sent, but this is not true at all. There are lots of situations that can cause child state to be created without a parent action being sent, such as when we deep link into a particular state right when the app launches. In that case no action whatsoever is sent.

42:51

Really what we want is a more holistic way to compare the state before any of the ifLet logic is run with the state after it runs, and then at that time we can decide if it is a “first appear” event for which we should create the effect. That would be a great change, but it’s a bit of a slog, and there aren’t all that many important lessons to learn along the way.

43:12

Instead, we are going to do the simplest thing possible: we are going to copy-and-paste this logic to the other important branch of the switch, which is when we process a child action with some child state:

case (
  .some(var childState),
  .some(.presented(let childAction))
):
  …
  let onFirstAppearEffect: Effect<Action>
  if let childStateAfter,
    !isEphemeral(childStateAfter),
    childStateAfter.id != childStateBefore?.id
      || !state[keyPath: stateKeyPath].isPresented
  {
    state[keyPath: stateKeyPath].isPresented = true
    onFirstAppearEffect = .run { send in
      do {
        try await withTaskCancellation(
          id: DismissID(id: childStateAfter.id)
        ) {
          try await Task.never()
        }
      } catch is CancellationError {
        await send(actionCasePath.embed(.dismiss))
      }
    }
    .cancellable(id: childStateAfter.id)
  } else {
    onFirstAppearEffect = .none
  }

There are a few small compiler errors. We need to grab the childStateAfter from the state, and the before state is just the childState we destructured from the switch:

if let childStateAfter = state[keyPath: stateKeyPath].wrappedValue,
  !isEphemeral(childStateAfter),
  childStateAfter.id != childState.id
    || !state[keyPath: stateKeyPath].isPresented
{
  …
}

44:03

Now finally the feature works correctly. We can start the simulator already deep linked into the edit screen, and tapping the “Close” button will dismiss the feature.

44:09

We also previously wrote a test that exercised this exact behavior. We started up a TestStore with the editItem state already populated, and then we tried tapping the close button to see what would happen:

func testDismiss() async {
  let item = Item.headphones
  let store = TestStore(
    initialState: InventoryFeature.State(
      destination: .editItem(
        ItemFormFeature.State(item: item)
      ),
      items: [item]
    ),
    reducer: InventoryFeature()
  )
  await store.send(
    .destination(
      .presented(.editItem(.dismissButtonTapped))
    )
  )
  await store.receive(.destination(.dismiss)) {
    $0.destination = nil
  }
}

44:25

However, when we run this we get a mysterious failure:

testDismiss(): State was not expected to change, but a change occurred: …

  InventoryFeature.State(
    _destination: PresentationState(
−     isPresented: false,
+     isPresented: true,
      value: […]
    ),
    items: […]
  )

(Expected: −, Actual: +)

44:33

Due to exhaustive testing in the Composable Architecture we are being notified that we haven’t fully asserted on how state changed in the feature. That little bit of private property wrapper state, isPresented , has flipped to true , and we didn’t assert on that.

44:59

However, that is not something we should be asserting on at all. We couldn’t even make that mutation if we wanted to, because isPresented is fileprivate. Really, the isPresented data is a complete implementation detail of the navigation tools, and it should not be surfaced to us at all. So, we can provide a custom Equatable conformance for PresentationState that ignores that state entirely:

extension PresentationState: Equatable where State: Equatable {
  static func == (lhs: Self, rhs: Self) -> Bool {
    lhs.value == rhs.value
  }
}
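The effect of ignoring the hidden flag in equality can be demonstrated with a simplified stand-in for the library's type (hypothetical, for illustration only; it stores the optional directly rather than in the library's internal box):

```swift
// Minimal stand-in: stores the flag and the wrapped optional directly.
@propertyWrapper
struct PresentationState<State> {
  var isPresented = false
  var wrappedValue: State?
}

// Equality deliberately ignores the `isPresented` bookkeeping, so
// internal flag flips never show up as test failures.
extension PresentationState: Equatable where State: Equatable {
  static func == (lhs: Self, rhs: Self) -> Bool {
    lhs.wrappedValue == rhs.wrappedValue
  }
}

let a = PresentationState(wrappedValue: "edit")
var b = PresentationState(wrappedValue: "edit")
b.isPresented = true  // differs only in the hidden flag
let stillEqual = a == b  // true
```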

45:31

With that change the test now passes.

45:39

But there are still improvements we can make. While the isPresented state will no longer factor into the equatability of presentation state, and so won’t directly cause a test to fail, it is still surfaced in the diff of state when there is some other failure.

45:53

For example, suppose we thought that the destination was going to be nil’d out immediately after tapping the close button:

await store.send(
  .destination(.presented(.editItem(.dismissButtonTapped)))
) {
  $0.destination = nil
}

46:01

This is not the case, and so we would expect a failure, but the failure diff now shows a lot of additional information we don’t care about:

testDismiss(): A state change does not match expectation: …

  InventoryFeature.State(
    _destination: PresentationState(
−     isPresented: false,
+     isPresented: true,
      value: [
+       [0]: .editItem(
+         ItemFormFeature.State(
+           _isTimerOn: false,
+           _item: Item(
+             id: UUID(
+               D3F3B658-1F92-45C0-947C-EA015C5470F1
+             ),
+             name: "Headphones",
+             color: Item.Color(
+               name: "Blue",
+               red: 0.0,
+               green: 0.0,
+               blue: 1.0
+             ),
+             status: .inStock(quantity: 20)
+           )
+         )
+       )
      ]
    ),
    items: […]
  )

(Expected: −, Actual: +)

46:11

For one thing, this is showing the private isPresented state that we have zero control over, and so it should not be a part of the diff at all. But further, the internal implementation of PresentationState as a 1-element array is also leaking through. We can see the value field is an array with a single element.

46:30

We would like to clean this up so that PresentationState prints itself more like a regular optional. One way to do this would be to make PresentationState conform to CustomReflectable so that we can control how it reflects in mirrors:

extension PresentationState: CustomReflectable {
  var customMirror: Mirror {
    Mirror(reflecting: self.wrappedValue)
  }
}
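As a standalone demonstration of the technique (using a hypothetical Box type rather than the library's), forwarding customMirror to the contents makes reflection-based tools see only the inner value:

```swift
// Conforming to CustomReflectable and delegating to the inner value
// means Mirror(reflecting:) and dump(_:) skip the wrapper entirely.
struct Box<Value>: CustomReflectable {
  var value: Value
  var customMirror: Mirror {
    Mirror(reflecting: self.value)
  }
}

struct Point { var x = 1, y = 2 }

let mirror = Mirror(reflecting: Box(value: Point()))
// The mirror exposes Point's fields, not Box's "value" field.
let labels = mirror.children.compactMap { $0.label }
```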

47:07

That technically would work, but may be going too far. That means we are completely shutting down the reflection capabilities of PresentationState just in order to control how it is printed in test failure output. Now, we don’t currently have any use case for needing the full mirror, but it certainly seems a bit weird to lock it down so harshly.

47:30

Luckily there’s an alternative.

47:32

If you didn’t know already, all of the great test failure messaging in the Composable Architecture is powered by our CustomDump library, which was extracted out of the Composable Architecture nearly 2 years ago:

47:42

CustomDump is “a collection of tools for debugging, diffing, and testing your application’s data structures.” We use it to turn the state of features into a nicely formatted string, and then when we detect state-mismatches during tests we diff those nicely formatted strings to give you the wonderful display you see now.

48:06

That library gives a few customization points so that you can better format the output. In particular, you can conform any of your types to CustomDumpReflectable to construct a custom mirror used specifically for dumping data structures and diffing. That is the protocol we will conform PresentationState to:

extension PresentationState: CustomDumpReflectable {
  var customDumpMirror: Mirror {
    Mirror(reflecting: self.wrappedValue as Any)
  }
}

48:39

Now when we run tests we still have a failure, but the failure diff is a lot more reasonable:

testDismiss(): A state change does not match expectation: …

  InventoryFeature.State(
−   _destination: nil,
+   _destination: .editItem(
+     ItemFormFeature.State(
+       _isTimerOn: false,
+       _item: Item(
+         id: UUID(
+           948A7E9E-1811-4D07-AD9E-6B08D79C24CB
+         ),
+         name: "Headphones",
+         color: Item.Color(
+           name: "Blue",
+           red: 0.0,
+           green: 0.0,
+           blue: 1.0
+         ),
+         status: .inStock(quantity: 20)
+       )
+     )
+   ),
    items: […]
  )

(Expected: −, Actual: +)

48:52

The private isPresented state is no longer exposed, and we don’t see the internal guts of PresentationState that show state being secretly held in an array of one element.

Next time: Navigation stacks

49:01

OK, we have now introduced a property wrapper into our navigation tools, first to deal with some potential performance and stack overflow problems, but it also turned out to be the perfect spot for us to squirrel away some extra information so that we could make the tools even better. Stephen

49:16

So, this is all looking great, but we still have yet to discuss what is probably the hottest topic when it comes to navigation on Apple’s platforms, and that’s navigation stacks. In particular, the initializer of NavigationStack that takes a binding to a collection of data which drives the pushing and popping of screens to the stack. This was introduced in iOS 16 and kinda turned everything upside down relative to how navigation had been done in SwiftUI for the 3 years prior.

49:41

Stack-based navigation is where you model all the different screens you can drill-down to as a single, flat array of values. When a value is added to the array, a drill-down animation occurs to that screen, and when a value is removed from the array a pop-back animation occurs.
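In vanilla SwiftUI (iOS 16+) that looks roughly like the following sketch; the Screen type and view names here are hypothetical:

```swift
import SwiftUI

// Each element of `path` corresponds to one pushed screen.
struct Screen: Hashable {
  let title: String
}

struct RootView: View {
  // The flat array of data driving the navigation stack.
  @State var path: [Screen] = []

  var body: some View {
    NavigationStack(path: self.$path) {
      Button("Drill down") {
        // Appending a value pushes its screen with a drill-down
        // animation; removing it (or tapping back) pops the screen.
        self.path.append(Screen(title: "Detail"))
      }
      .navigationDestination(for: Screen.self) { screen in
        Text(screen.title)
      }
    }
  }
}
```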

49:55

This stack-based style of navigation is in stark contrast with what we like to call “tree-based” navigation, which is what we have been doing so far in this entire series of episodes. In that model, each feature in the application acts as a branching point for all the different places you can navigate to, and then each of those destinations has branches, and on and on and on.

50:13

Each style has lots of positives and some negatives, so let’s dig a little deeper into a comparison of the two styles, and see what the Composable Architecture has to say about stack-based navigation…next time!

References

High-performance systems in Swift
Johannes Weiss • Feb 22, 2019
For more information on copy-on-write, be sure to check out this detailed video from Johannes Weiss:

Note: Languages that have a rather low barrier to entry often struggle when it comes to performance because too much is abstracted from the programmer to make things simple. Therefore in those languages, the key to unlock performance is often to write some of the code in C, collaterally abandoning the safety of the higher-level language. Swift on the other hand lets you unlock best of both worlds: performance and safety. Naturally not all Swift code is magically fast and just like everything else in programming performance requires constant learning. Johannes discusses one aspect of what was learned during SwiftNIO development. He debunks one particular performance-related myth that has been in the Swift community ever since, namely that classes are faster to pass to functions than structs.
https://www.youtube.com/watch?v=iLDldae64xE

Composable navigation beta GitHub discussion
Brandon Williams & Stephen Celis • Feb 27, 2023
In conjunction with the release of episode #224 we also released a beta preview of the navigation tools coming to the Composable Architecture.
https://github.com/pointfreeco/swift-composable-architecture/discussions/1944

Downloads

Sample code: 0230-composable-navigation-pt9