EP 248 · Tour of the Composable Architecture · Sep 4, 2023 · Members

Video #248: Tour of the Composable Architecture: Dependencies


Episode: Video #248 Date: Sep 4, 2023 Access: Members Only 🔒 URL: https://www.pointfree.co/episodes/ep248-tour-of-the-composable-architecture-1-0-dependencies


Description

We introduce a complex dependency to the record meeting screen: speech recognition. We will begin to integrate this dependency into our app’s logic, and show how to control it for Xcode previews and tests.

Video

Cloudflare Stream video ID: 1e0a35d62eeedfe0b1f8dc056719467a Local file: video_248_tour-of-the-composable-architecture-1-0-dependencies.mp4 (download with --video 248)

References

Transcript

0:05

This unfortunately brings us face to face with another uncontrolled dependency. We came across these in our counter app way at the beginning, and then again in the Standups app when we needed to generate UUIDs.

0:16

The reason this is happening is because previews are incapable of showing the system alert to ask you for speech recognition permission. We don’t know if this is a bug in previews, or if this is how Apple intends for it to work, but regardless the await for fetching the status simply never un-suspends, and so the code after is never executed.

0:34

This has completely destroyed our ability to iterate on this feature in previews. We are now forced to run the app in the simulator if we want to iterate on the timer functionality, or really any of its dynamic behavior beyond its simple, static design. And this is all because we are reaching out to Apple’s APIs without regard, and so by accessing uncontrolled dependencies we are allowing them to control us.

Brandon

0:55

Well, the path forward is to simply not reach out to uncontrolled dependencies. We should do a little bit of upfront work to put an interface in front of the dependency so that we can use different versions of the dependency in previews, tests and more. In this particular situation I would love if I could just tell the dependency that I don’t care about asking the user for permission. Let’s just pretend they granted us permission.

1:19

That would be great, and would completely unblock us to start using the preview again, but it’s going to take work.

The speech client dependency

1:27

Let’s create a new file so that we can design a little dependency interface that will abstract away our access to the Speech framework.

1:38

There is a lot that can be said about designing dependencies, and we’ve said a ton in past Point-Free episodes, so we are not going to get mired in all the small details right now. We are going to design our dependency interface as a lightweight struct that houses some var closure endpoints for accessing the underlying dependency. You may be more used to designing your dependencies using protocols, and if so that is completely fine and you can continue doing so, but for our purposes the struct-style is more convenient.
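To make the two styles concrete, here is a minimal sketch (the names SpeechClientProtocol and SpeechClientStruct are illustrative, not from the episode):

```swift
import Speech

// Protocol style: each implementation is a new conforming type.
protocol SpeechClientProtocol {
  func requestAuthorization() async -> SFSpeechRecognizerAuthorizationStatus
}

// Struct style: an implementation is just a value, built from closures.
struct SpeechClientStruct {
  var requestAuthorization: @Sendable () async -> SFSpeechRecognizerAuthorizationStatus
}

// With the struct style a single endpoint can be overridden in place,
// with no new type or conformance required:
var client = SpeechClientStruct(requestAuthorization: { .authorized })
client.requestAuthorization = { .denied }
```

That per-endpoint override is what makes the struct style so convenient for previews and tests.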

2:10

So, we will start a struct to represent our speech dependency interface: struct SpeechClient { }

2:17

And for right now we will have one endpoint for requesting speech authorization: import Speech struct SpeechClient { var requestAuthorization: () async -> SFSpeechRecognizerAuthorizationStatus }

2:34

With the interface defined, one defines implementations of the interface by simply constructing values. We will certainly want a conformance for the live implementation that actually calls out to Apple’s Speech APIs, but we will also want a version to use in previews that skips all the Speech framework entirely and just immediately returns whatever data we want.

2:56

Further, we will also want to use this dependency in our reducer, and in particular with the @Dependency property wrapper, so we will register it with our dependencies library. This is done in a few quick steps.

3:05

First, we will import Dependencies and conform our client to the DependencyKey protocol: import Dependencies extension SpeechClient: DependencyKey { }

3:14

The bare minimum you need to implement for this protocol is a liveValue , which represents the live implementation that you want to use when the app runs on simulators and devices: extension SpeechClient: DependencyKey { static let liveValue = Self( requestAuthorization: <#() async -> SFSpeechRecognizerAuthorizationStatus#> ) }

3:22

This is the implementation where it is appropriate to interact with Apple’s Speech APIs, so let’s do just that: extension SpeechClient: DependencyKey { static let liveValue = Self( requestAuthorization: { await withUnsafeContinuation { continuation in SFSpeechRecognizer.requestAuthorization { status in continuation.resume(with: .success(status)) } } } ) }

3:42

And we have a sendability warning since we’ve got concurrency warnings cranked up in the project, and the fix is just to make sure the client is sendable by making the closures sendable: struct SpeechClient { var requestAuthorization: @Sendable () async -> SFSpeechRecognizerAuthorizationStatus }

3:58

The final step for registering the dependency with the library so that it can be used with the @Dependency property wrapper is to extend the DependencyValues type and add a computed property for the speech client: extension DependencyValues { var speechClient: SpeechClient { get { self[SpeechClient.self] } set { self[SpeechClient.self] = newValue } } }

4:09

Now, that’s all it takes to register the dependency, but we can take things a bit further. Right now the liveValue will be used in previews, and as we just saw, that isn’t very workable. The authorization request suspension just hangs forever.

4:37

Well, our dependencies library allows you to provide an alternative conformance of your dependency that will be used only in previews. We just have to define a previewValue, and we can construct a speech client that simply returns an authorized status from its endpoint: static let previewValue = SpeechClient( requestAuthorization: { .authorized } )

5:04

Now let’s start making use of the dependency. We will add it to our reducer: struct RecordMeeting: ReducerProtocol { … @Dependency(\.speechClient) var speechClient … }

5:16

And we should start using the dependency rather than reaching out to Apple’s Speech APIs directly: case .onTask: return .run { send in let status = await self.speechClient .requestAuthorization() … }

5:28

And with that small change our preview is back to working. We can now see the timer counting down and we are able to preview real, dynamic behavior in our feature. We aren’t relegated to simply seeing the boring, static representation of our feature.

5:53

So that’s pretty great. By putting in a little bit of upfront work to control our dependencies we can get more out of our previews, and prevent the dependency from controlling us.

6:01

So, we’ve now got a complex effect in place, and we are making use of a dependency. What next?

6:08

Let’s try implementing some of the logic around speakers changing and ending the meeting.

6:14

For example, when the timer ticks, it is not enough to just increment the secondsElapsed. If enough time has elapsed then we should also go to the next speaker, and if the full meeting duration has elapsed we should end the meeting.

6:27

Let’s layer on some of that logic. For example, we could compute how much time each attendee is allotted and if that divides evenly into the number of seconds elapsed, then it’s time to go on to the next speaker: case .timerTick: state.secondsElapsed += 1 let secondsPerAttendee = Int( state.standup.durationPerAttendee.components.seconds ) if state.secondsElapsed .isMultiple(of: secondsPerAttendee) { if state.speakerIndex == state.standup.attendees.count - 1 { // TODO: End meeting } state.speakerIndex += 1 } return .none
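The reducer above leans on a durationPerAttendee helper that isn’t defined in this episode; here is a plausible, self-contained sketch of the computation, using stubbed-down stand-ins for the app’s models rather than the real ones:

```swift
import Foundation

// Hypothetical, minimal stand-ins for the app’s models.
struct Attendee { var id: UUID }
struct Standup {
  var attendees: [Attendee]
  var duration: Duration

  // Divide the total meeting time evenly among the attendees.
  var durationPerAttendee: Duration {
    self.duration / self.attendees.count
  }
}

let standup = Standup(
  attendees: (1...6).map { _ in Attendee(id: UUID()) },
  duration: .seconds(60)
)
// 60 seconds split across 6 attendees is 10 seconds each, which is why
// the preview shows the speaker name change every 10 seconds.
let secondsPerAttendee = Int(standup.durationPerAttendee.components.seconds)
```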

6:57

Now when we run the preview, every 10 seconds we see the speaker name update.

7:20

We can do something similar in the nextButtonTapped action. We can move the speakerIndex forward, as well as the number of seconds elapsed: case .nextButtonTapped: guard state.speakerIndex < state.standup.attendees.count - 1 else { // TODO: Alert to end meeting return .none } state.speakerIndex += 1 state.secondsElapsed = state.speakerIndex * Int( state.standup.durationPerAttendee.components.seconds ) return .none
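The secondsElapsed assignment deserves a worked example: skipping ahead snaps the clock to the start of the new speaker’s slot rather than adding to it. Using the mock standup’s numbers (6 attendees, 60-second meeting, assumed here for illustration):

```swift
// Assumed numbers: a 60-second meeting with 6 attendees → 10 s per attendee.
let secondsPerAttendee = 60 / 6

var speakerIndex = 0
var secondsElapsed = 3   // partway through the first speaker’s slot

// “Next” tapped: advance the speaker and snap the clock forward.
speakerIndex += 1
secondsElapsed = speakerIndex * secondsPerAttendee
// secondsElapsed is now 10, the exact start of the second speaker’s slot,
// so the timer display stays in sync with the speaker index.
```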

7:55

Running this in the preview shows that it works too.

8:09

Of course if we let the countdown go for long enough it starts getting into the negative numbers, and that’s because we aren’t yet handling ending the meeting. Let’s figure out how to do that.

8:28

We need to communicate to the parent domain when the meeting ends so that it can save the details of the meeting to the standup. This sounds like a great use case for delegate actions again.

8:38

Let’s add a delegate action to the record meeting domain that can tell the parent when it needs to save the recorded meeting: enum Action: Equatable { case delegate(Delegate) … enum Delegate { case saveMeeting } }

8:56

There’s nothing the recording domain needs to do with this delegate action, nor should any feature ever implement custom logic in the delegate case: case .delegate: return .none

9:06

And now, when the timer ticks and we detect that the speaker should change and we are already on the last speaker, we can send the delegate action to tell the parent it is time to save the meeting: if state.speakerIndex == state.standup.attendees.count - 1 { return .run { send in await send(.delegate(.saveMeeting)) } }

9:27

Even better, we can also dismiss this feature when this happens. To do that we can use the dismiss dependency that we used for popping the detail feature off the stack when the standup is deleted.

9:32

So, let’s add the dependency: @Dependency(\.dismiss) var dismiss

9:36

And let’s invoke it after we send the delegate action: if state.speakerIndex == state.standup.attendees.count - 1 { return .run { send in await send(.delegate(.saveMeeting)) await self.dismiss() } }

9:43

And with that we can already give this a spin.

9:45

However, I don’t want to have to wait for a full minute to pass just to see this behavior. Let’s flex some of the muscles we have built up building this app in the Composable Architecture, and show that we can start up a preview in the absolute perfect state for testing out this behavior.

10:01

Since the behavior we want to test exists at the integration point of multiple features, we have no choice but to run the main app feature preview. But I can add a new preview so that it doesn’t mess with the default preview we want to use when not testing this very specific behavior.

10:18

So, what we can do is create a standup that is very short. The default, mock standup has 6 attendees, so let’s make the duration 6 seconds: #Preview { … } #Preview("Quick finish meeting") { var standup = Standup.mock let _ = standup.duration = .seconds(6) }

10:45

Then we can create a new preview that starts up in a state that is drilled into the detail and record features with that super short standup: #Preview("Quick finish meeting") { var standup = Standup.mock standup.duration = .seconds(6) return MainActor.assumeIsolated { AppView( store: Store( initialState: AppFeature.State( path: StackState([ .detail( StandupDetailFeature.State(standup: standup) ), .recordMeeting( RecordMeeting.State(standup: standup) ) ]), standupsList: StandupsListFeature.State( standups: [.mock] ) ) ) { AppFeature() } ) } }

11:04

And just like that we now have a super quick way of testing this functionality. We just have to wait 6 seconds, and we will see the feature is popped off the screen. We could even make the duration shorter if we thought 6 seconds was too long to wait.

11:33

So, this is incredibly powerful that we are able to test such a nuanced part of our application directly in a preview, and all without needing to tap a single time on the preview. If we found a problem with how it is running, we could even pin the preview and hop back over to the record meeting to explore a fix. That would allow us to preview the flow while editing the record feature, which would be great.

12:01

Now, things do work for the most part, but there is one small thing missing. When we pop the record meeting screen off the stack, and land back on the detail, we should see a meeting added to the list. But there are no meetings. This is happening because even though we added the delegate action for communicating to the parent, we aren’t actually using it right now.

12:53

We can handle this just as we handled delegate actions for the detail screen. We will pattern match to get the delegate actions from the record feature inside the path: case let .path( .element( id: _, action: .recordMeeting(.delegate(action)) ) ):

13:11

And then switch on that delegate action so that we are forced to handle each one: switch action { case .saveMeeting: }

13:20

So, what do we need to do in here?

13:29

Well, it’s a little tricky. We are basically facilitating communication between sibling features in the navigation stack. Because they are fully isolated from each other, and ideally some day would be put into their own separate modules, there is no way for the record meeting to directly tell the detail screen to add a meeting to its standup. Only the root, app feature has that ability.

13:41

So, we need to find the detail feature in the stack. Based on our understanding of the application, in particular that the detail feature always comes just before the record feature, we can just grab the second to last ID in the path: guard let detailID = state.path.ids.dropLast().last else { return .none } // TODO return .none
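To make the indexing concrete, dropLast().last is simply the second-to-last element of the collection:

```swift
let ids = [0, 1, 2]

// dropLast() removes the final element (the record feature’s ID)…
// …and .last then yields the element just before it (the detail’s ID).
let detailID = ids.dropLast().last   // Optional(1)

// With a single-element stack, dropLast() is empty and .last is nil,
// which is exactly the failure case the guard has to handle.
let lonely = [0].dropLast().last     // nil
```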

14:41

Now, you may be wondering… what if the path only holds one element, causing the second-to-last ID to be nil? Right now we are just returning .none, but that seems to be silently ignoring a pretty bad situation. It should never be the case that a record meeting is the last feature in the stack. It should always come right after a detail screen.

15:01

Well, we can do a small thing in this else to help catch us in the future if we ever mess things up. We can simply add an XCTFail : else { XCTFail( """ Record meeting is only element in stack. \ A detail feature should precede it. """ ) return .none }

15:19

You may think it’s weird to perform an XCTFail in here, and wonder how we can even do this if we aren’t importing XCTest.

15:26

Well, this XCTFail is actually a special, dynamically loaded one from our library. It allows you to use XCTFail in application code, which is typically not possible, and only if XCTest is available at runtime will it actually perform a test failure.
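The mechanism can be sketched roughly like this (a conceptual illustration only, not the library’s actual implementation): detect at runtime whether XCTest is loaded into the process, and only then report a test failure.

```swift
import Foundation

// Conceptual sketch: because XCTest is only present in test processes,
// its symbols can be looked up dynamically instead of linked directly.
func reportIssue(_ message: String) {
  if NSClassFromString("XCTestCase") != nil {
    // Running under XCTest: this is where the dynamically resolved
    // XCTFail would be invoked to fail the current test.
    print("Test failure: \(message)")
  } else {
    // Running as a plain app: surface a loud runtime warning instead.
    print("Runtime warning: \(message)")
  }
}
```

The real library resolves and calls XCTest’s failure function dynamically; the print statements here are placeholders for that behavior.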

15:40

By doing this we will be notified in tests if our application ever gets into a weird state. But even better, this also produces a loud runtime warning when running the app in the simulator.

15:52

For example, let’s quickly change the entry point of the application so that only a single record meeting is on the stack: @main struct StandupsApp: App { var body: some Scene { var standup = Standup.mock let _ = standup.duration = .seconds(6) WindowGroup { AppView( store: Store( initialState: AppFeature.State( path: StackState([ .recordMeeting( RecordMeetingFeature.State( standup: standup ) ), ]), standupsList: StandupsListFeature.State( standups: [standup] ) ) ) { AppFeature() ._printChanges() } ) } } }

16:33

When we run this and wait 6 seconds we will see the feature pop off the stack, but we will also see a purple runtime warning in Xcode letting us know that something is wrong: Record meeting is only element in stack. A detail feature should precede it.

16:59

So, while in a navigation stack we cannot ensure that a detail screen always precedes a record screen, and that is kinda the point of navigation stacks, we can at least add in little bits of instrumentation that make it clear when something goes wrong.

17:23

Now that we have the second-to-last ID in the stack, we can subscript into the path at the ID, and further dive into the detail case of the path enum: state.path[id: detailID, case: /Path.State.detail]

17:44

And that subscript returns an optional StandupDetail.State, so we can further chain along to get the standup : state.path[id: detailID, case: /Path.State.detail]?.standup

17:48

And now that we have a standup we can further dive into its meetings to insert a new meeting: state.path[id: detailID, case: /Path.State.detail]? .standup.meetings.insert( Meeting( id: UUID(), date: Date(), transcript: "N/A" ), at: 0 )

18:29

Now, you may be wondering again… what if the second-to-last feature in the stack isn’t a detail screen? What if we have another bad situation where the screen directly preceding the record screen is some other feature? Wouldn’t we want to be notified of that situation?

19:04

Well, that is already covered for us thanks to this special path subscript. If we go to the source of the subscript, we will find that when we mutate through the subscript with the wrong case of the enum, we emit a runtime warning: runtimeWarn( """ Can't modify unrelated case\(…) """ )

19:25

So, if we alter the entry point of the app again to have two record meetings pushed to the stack: path: StackState([ .recordMeeting(RecordMeeting.State(standup: standup)), .recordMeeting(RecordMeeting.State(standup: standup)), ]),

19:32

And run in the simulator again, after 6 seconds we will see another runtime warning: Can’t modify unrelated case “recordMeeting”. So without us even doing anything, this is already letting us know when we do something that is invalid. And further, if this runtime warning were triggered in a test it would cause a test failure.

20:00

What we are seeing here is that the library tries to notify us early and often when something has gone wrong in our domain. Ideally these kinds of problems shouldn’t even be possible because we should be modeling our domains as concisely as possible. However, this is not always possible, and in the case of navigation stacks, the whole point of that tool is to allow screens to be pushed onto the stack in any order to promote decoupling. And that comes with the cost of the domains needing to be modeled in less-than-ideal ways.

20:36

So, it’s really great to see that even when we cannot guarantee correctness at compile time, the Composable Architecture gives us the tools to at least flag incorrectness at runtime and during tests.

20:47

We have now implemented the logic for inserting new meetings into a standup, but doing so has forced us to make a number of decisions. Meetings require a unique ID, so we reached out to the global UUID initializer to provide that, as well as the date the meeting was created. And we even need a transcript of the meeting, but we don’t have any speech recognizer logic in place yet, so we stubbed that for now.

20:59

Before moving on, let’s be a little more proactive with our uncontrolled dependencies. Sure, we could wait until we write a test and find out that it’s impossible to predict what these UUID and date initializers produce, but it requires so little work to control them from the beginning that we might as well.

21:07

We will add two dependencies to our app feature for generating fresh dates and UUIDs: @Dependency(\.date.now) var now @Dependency(\.uuid) var uuid

21:29

And we will use those dependencies rather than reaching out to the global, uncontrolled dependencies: state.path[id: detailID, case: /Path.State.detail]? .standup.meetings.insert( Meeting( id: self.uuid(), date: self.now, transcript: "N/A" ), at: 0 )

21:37

And there’s one last bit of logic we need to add after inserting this meeting.

21:41

Just as in the other delegate action where the detail screen tells us to save the standup: case let .standupUpdated(standup): state.standupsList.standups[id: standup.id] = standup return .none

21:52

…we need to update the root standups list with the freshest data after this mutation: guard let standup = state.path[id: detailID, case: /Path.State.detail]?.standup else { return .none } state.standupsList.standups[id: standup.id] = standup

22:19

It’s subtle, but a test would have caught this if we had forgotten it. We would have been able to easily write a failing test in which we go through the motions of recording a meeting and see that the data in the root list did not update properly.

22:36

With that done we can give the feature a spin in the preview. When we launch the preview we are already in the record feature. If we wait 6 seconds we will see the feature is popped off the stack and a new meeting has been added to the list with the current time, as of this recording. We can even start a new meeting, wait 6 seconds again, and see that another meeting is added to the list.

Testing the record meeting feature

23:07

So the integration between the record feature and detail feature is now working, thanks to the little bit of glue code we implemented in the app feature. It’s pretty incredible to see how the little bit of upfront work we did at the beginning of this series continues to pay dividends. We now have all the tools we need to communicate between child and parent domains, and in particular, the root-level app feature has everything it needs to coordinate everything. It just has a very high-level view of everything going on in the entire navigation stack.

Stephen

23:37

But even better, the whole thing is 100% testable of course. We don’t have the speech recognition logic in place yet, but that doesn’t mean we can’t test the other behavior.

23:46

Let’s see what it takes to write a test that exercises the entire flow of a user recording a new meeting, the record screen popping off the stack, and a new meeting being inserted into the standup.

23:59

Let’s go to AppTests.swift and add a stub of a new test: func testTimerRunOutEndMeeting() async { }

24:11

For this test I am going to construct a fresh standup so that I know exactly what data it holds inside. It can be handy to use a centralized mock for tests, but at the same time it does inject a little bit of uncertainty into your test. You’re never really sure what data the mock holds, and if someone changes the mock it may break your test.

24:30

And so for this test I specifically want a standup that doesn’t have any past meetings, and I will also simplify it by having a single attendee: let standup = Standup( id: UUID(), attendees: [Attendee(id: UUID())], duration: .seconds(1), meetings: [], theme: .bubblegum, title: "Point-Free" )

24:39

Next we’ll construct a TestStore in the very particular state that it is already drilled down to the detail screen and record screen: let store = TestStore( initialState: AppFeature.State( path: StackState([ .detail( StandupDetailFeature.State(standup: standup) ), .recordMeeting( RecordMeetingFeature.State(standup: standup) ), ]), standupsList: StandupsListFeature.State( standups: [standup] ) ) ) { AppFeature() }

24:55

And this will be a non-exhaustive test so that we don’t have to assert on everything happening in the system. We just want to verify that a new meeting is added to the standup once everything is finished: store.exhaustivity = .off

25:07

We can then kick off the test by emulating the record meeting appearing to the user by sending the onTask action. Note that the onTask action is sent in the record feature, which is the second element on the stack, and so we have to use an appropriate ID: await store.send( .path(.element(id: 1, action: .recordMeeting(.onTask))) )

25:33

That should kick off a number of things, including asking for speech recognizer authorization and starting a timer. But, regardless of what it does, we know that eventually it should send a delegate action and pop itself off the stack.

25:47

So, we will tell the test store just to process all of that work without forcing us to assert on it all: await store.skipReceivedActions()

25:53

And after all of that is done we can assert that the standup inside the detail feature should have changed to have a new meeting added to its collection: store.assert { $0.path[id: 0, case: /AppFeature.Path.State.detail]? .standup.meetings = [ Meeting( id: UUID(0), date: Date(), transcript: "N/A" ) ] } We can’t possibly predict the date, and so for now I’m just constructing a new value. I know that we controlled that dependency, but let’s see what happens if I just ignore that for a moment.

26:53

If we run tests we will see that we are met with a number of test failures, including one that tells us we are using a live dependency in a test: @Dependency(\.speechClient) has no test implementation, but was accessed from a test context: testTimerRunOutEndMeeting(): Unimplemented: ContinuousClock.now …

27:19

This is really incredible. This lets us know precisely which dependencies were used in our test that we did not explicitly override. This keeps us honest when writing tests, making sure that we are not accidentally reaching out to global dependencies, which can wreak havoc on our ability to test, and can also cause us to make changes to the outside world that we may not intend, such as writing to the file system, making API requests, or tracking analytics.

27:42

It’s also really nice that it tells us precisely which dependencies are used so we can override just those. No need to override absolutely everything. In particular, we will override our speech client and the continuous clock: } withDependencies: { $0.continuousClock = ImmediateClock() $0.speechClient.requestAuthorization = { .denied } } Since we aren’t testing the speech recognition functionality right now we will make the speech client pretend that the user has already denied authorization.

28:13

OK, with those dependencies overridden, let’s run the test again: testTimerRunOutEndMeeting(): @Dependency(\.uuid) has no test implementation, but was accessed from a test context: Failed: testTimerRunOutEndMeeting(): Unimplemented: @Dependency(\.date) …

28:21

Well, it seems that by using an immediate clock our feature code is getting a little further along, and now we are accessing more dependencies. So, let’s override the date and UUID generator: } withDependencies: { $0.continuousClock = ImmediateClock() $0.date.now = Date(timeIntervalSince1970: 1234567890) $0.speechClient.requestAuthorization = { .denied } $0.uuid = .incrementing } … store.assert { $0.path[id: 0, case: /AppFeature.Path.State.detail]? .standup.meetings = [ Meeting( … date: Date(timeIntervalSince1970: 1234567890), … ) ] }

29:03

And this test passes!

29:12

This is pretty incredible. We now have verification at a very high level that when the timer runs out, a new meeting will be inserted into the standup.

29:20

And right now we are asserting on the bare minimum in the test, but we could assert more if we wanted. For example, instead of just blindly skipping all of the actions, we could assert that we receive the saveStandup delegate action as well as the popFrom action: await store.receive( .path( .element( id: 1, action: .recordMeeting(.delegate(.saveMeeting)) ) ) ) await store.receive(.path(.popFrom(id: 1))) // await store.skipReceivedActions()

30:40

This passes too, but we are asserting on a bit more of the internals of how things work.

Other ways of ending meetings

30:46

And of course we could also go full exhaustive with this test and assert on absolutely everything. This can be good for a baseline of deep tests, but it certainly doesn’t need to be every test.

Brandon

30:56

So, things are looking good. We are now detecting when the timer runs down to end the meeting, but there are two other ways to end a meeting. It can happen when tapping the “End meeting” button in the top-left of the screen, as well as when skipping the last speaker.

31:10

However, in both situations we want to alert the user first to make sure they want to end the meeting early. And further, we could also give them the option of whether or not they want to discard the meeting when ending early.

31:24

Let’s see what it takes to implement that kind of nuanced logic.

31:31

We’ll start by adding some presentation state to our domain to represent the alert: struct RecordMeetingFeature: ReducerProtocol { struct State: Equatable { @PresentationState var alert: AlertState<Action.Alert>? … } … }

31:49

I don’t think it’s necessary to go all-in on a destination reducer for this, because as far as I know alerts are the only thing that can be presented from the record meeting. If in the future it turns out we have other kinds of navigation, then we can upgrade this to a destination reducer like we did earlier for the detail screen.

32:08

We will also add a presentation action to our domain, which requires specifying an enum of actions that can happen in the alert: enum Action: Equatable { case alert(PresentationAction<Alert>) … enum Alert { case confirmDiscard case confirmSave } } Right now the only actions that can take place are for the user to confirm they want to end the meeting early and discard the meeting, or if they want to save the meeting.

32:40

Then in the reducer we can handle these new alert actions, and we actually have everything at our disposal to handle these actions right now: case .alert(.presented(.confirmDiscard)): return .run { _ in await self.dismiss() } case .alert(.presented(.confirmSave)): return .run { send in await send(.delegate(.saveMeeting)) await self.dismiss() } case .alert(.dismiss): return .none

33:34

Then at the end of the reducer we will tack on the ifLet operator in order to integrate the alert logic with the main feature logic: .ifLet(\.$alert, action: /Action.alert)

33:52

And then at the end of the view we will make use of the alert(store:) view modifier to drive showing an alert from the presentation state: .alert( store: self.store.scope( state: \.$alert, action: { .alert($0) } ) )

34:05

That’s all it takes to integrate an alert into the feature, but of course we aren’t yet populating the alert state anywhere in order to actually show an alert.

34:20

For example, when the “End meeting” button is tapped we can show an alert asking them to confirm if they want to save the meeting or discard it, or just resume the meeting: case .endMeetingButtonTapped: state.alert = AlertState { TextState("End meeting?") } actions: { ButtonState(action: .confirmSave) { TextState("Save and end") } ButtonState( role: .destructive, action: .confirmDiscard ) { TextState("Discard") } ButtonState(role: .cancel) { TextState("Resume") } } message: { TextState( """ You are ending the meeting early. \ What would you like to do? """ ) } return .none

35:02

Now when we run the preview and tap the “End meeting” button we will see the alert immediately show.

35:13

There’s another spot we want to show an alert, and we even have a TODO for it. When the “Next” button is tapped on the last speaker, we will interpret that as the user wanting to end the meeting, and so we will show an alert. However, in this case perhaps it’s not appropriate to ask them if they want to discard the meeting. After all, every speaker has had their turn, and so maybe at this point in the feature lifecycle we should only give the choice of resuming the meeting or saving the meeting: case .nextButtonTapped: guard state.speakerIndex < state.standup.attendees.count - 1 else { state.alert = AlertState { TextState("End meeting?") } actions: { ButtonState(action: .confirmSave) { TextState("Save and end") } ButtonState(role: .cancel) { TextState("Resume") } } message: { TextState( """ You are ending the meeting early. \ What would you like to do? """ ) } return .none } …

35:47

And now when we run in the preview we will see it works as expected.

36:00

Now, you may notice that these two alerts are quite similar, and also take quite a few lines to create. Maybe we can extract them out into a helper in order to share code and slim down the reducer.

36:11

The most appropriate place to put this helper is as a static function on AlertState itself. We make it a static function so that it’s customizable. In particular, we want to be able to customize whether or not the meeting is discardable:

    extension AlertState where Action == RecordMeetingFeature.Action.Alert {
      static func endMeeting(isDiscardable: Bool) -> Self {
        Self {
          TextState("End meeting?")
        } actions: {
          ButtonState(action: .confirmSave) {
            TextState("Save and end")
          }
          if isDiscardable {
            ButtonState(
              role: .destructive,
              action: .confirmDiscard
            ) {
              TextState("Discard")
            }
          }
          ButtonState(role: .cancel) {
            TextState("Resume")
          }
        } message: {
          TextState(
            """
            You are ending the meeting early. \
            What would you like to do?
            """
          )
        }
      }
    }

36:40

And thanks to the fact that the builder syntax in the actions closure supports conditionals, we can put that logic directly inside.

36:51

And now we can greatly simplify everywhere we were creating alerts. For example, when the “Next” button is tapped:

    case .nextButtonTapped:
      guard state.speakerIndex < state.standup.attendees.count - 1
      else {
        state.alert = .endMeeting(isDiscardable: false)
        return .none
      }

37:06

And when the “End meeting” button is tapped:

    case .endMeetingButtonTapped:
      state.alert = .endMeeting(isDiscardable: true)
      return .none

37:11

That’s all it takes, and it’s cleaned things up a lot.

37:14

Now, there is one unfortunate thing, which we can see in the preview. While the alert is open the timer is still running in the background. That’s a little unfortunate. What if we could pause the timer while an alert is open?

37:38

Well, that is incredibly easy. We will just skip all of the logic in the timerTick action if the alert is open:

    case .timerTick:
      guard state.alert == nil
      else { return .none }

37:51

And with that we can see in the preview that when the alert appears, the timer pauses.

38:06

So, that’s really awesome, but now we have new behavior in the app, which means it would be nice to get some test coverage on this behavior. To keep things quick, let’s just write a test for only the flow of the user ending the meeting early and discarding the meeting.

38:25

We can start off much as we did previously, with a standup that has no meetings and only lasts for one second, as well as a test store that is already in a state of being drilled down to the detail and record features:

    func testEndMeetingEarlyDiscard() async {
      let standup = Standup(
        id: Standup.ID(UUID()),
        attendees: [Attendee(id: Attendee.ID(UUID()))],
        duration: .seconds(1),
        meetings: [],
        theme: .bubblegum,
        title: "Point-Free"
      )
      let store = TestStore(
        initialState: AppFeature.State(
          path: StackState([
            .detail(
              StandupDetailFeature.State(standup: standup)
            ),
            .recordMeeting(
              RecordMeetingFeature.State(standup: standup)
            ),
          ]),
          standupsList: StandupsListFeature.State(
            standups: [standup]
          )
        )
      ) {
        AppFeature()
      } withDependencies: {
        $0.continuousClock = ImmediateClock()
        $0.speechClient.requestAuthorization = { .denied }
      }
      store.exhaustivity = .off
      …
    }

38:50

Except this time we don’t expect the date and UUID dependencies to be used at all, since no meeting should be created, and so we don’t need to override those dependencies. If we can get a passing test without the test store complaining about unimplemented dependencies, then we will have definitively proven that those dependencies are not touched in this user flow.

39:10

Next we can play out the full script of what the user does. In particular, first they come to the feature, meaning the onTask action is sent, then they tap the “End meeting” button, and then they confirm to end the meeting and discard it:

    await store.send(
      .path(.element(id: 1, action: .recordMeeting(.onTask)))
    )
    await store.send(
      .path(
        .element(
          id: 1,
          action: .recordMeeting(.endMeetingButtonTapped)
        )
      )
    )
    await store.send(
      .path(
        .element(
          id: 1,
          action: .recordMeeting(
            .alert(.presented(.confirmDiscard))
          )
        )
      )
    )

39:45

And after all of that, and after all effect actions are processed, we expect there to be no meetings in the standup, and the path should be back to just the detail screen:

    await store.skipReceivedActions()
    store.assert {
      XCTAssertEqual($0.path.count, 1)
      $0.path[id: 0, case: /AppFeature.Path.State.detail]?
        .standup.meetings = []
    }

40:09

That should be all it takes, and the test passes!

40:15

In a very short amount of time we have tested the integration of 3 features. Suppose we had accidentally messed something up, like sending the delegate action for saving when the user discards:

    case .alert(.presented(.confirmDiscard)):
      return .run { send in
        await send(.delegate(.saveMeeting))
        await self.dismiss()
      }

40:54

This is clearly a bug, and if we run the test, it catches it:

    A state change does not match expectation: …

41:14

A meeting was added when it shouldn’t have been.

Speech recognition

41:29

OK, we now have a pretty impressive application. We’ve got 4 complete features built, all integrated together in complex ways, involving subtle and nuanced logic, and we are managing complex effects such as the timer. Stephen

41:45

The biggest feature left to implement is the speech recognizer, and it turns out this is not quite as complicated as it sounds at first. Apple’s Speech framework comes with everything necessary to start a speech recognition task, and then it will feed you a live stream of results as it transcribes the audio coming into the device.

42:00

Let’s give it a shot.

42:04

Now, we could start sprinkling speech recognition code right into our reducer after we get the speech authorization status:

    case .onTask:
      return .run { send in
        let status = await self.speechClient
          .requestAuthorization()
        if status == .authorized {
          // TODO: Start speech recognizer
        }
        …
      }

42:19

However, that wouldn’t be a good idea.

42:20

The speech recognizer does not work at all in previews. So whatever code we put in here that interacts with the Speech framework directly will be off limits as far as previews go, and we’ll be forced to run the full application in the simulator.

42:32

Also, if we reach out directly to Apple’s Speech APIs here, then it will be impossible to test any of the code surrounding speech recognition. Of course we can’t unit test speech recognition itself in any meaningful way, since we are not going to yell into the computer while the test is running, but what we can test is how our application reacts to the speech recognizer feeding data back into the system.

42:54

It is absolutely within reason for us to write a test that assumes the speech recognizer is working to the best of its abilities, and is giving us a stream of results back. And then we can assert on how our feature code deals with those results.
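To make that concrete, here is a hedged sketch of what such a test could look like, assuming the `start` endpoint we are about to add to the SpeechClient. The `.mock` standup fixture is hypothetical, and the exact assertions are illustrative rather than the episode’s literal test:

```swift
// Sketch only: we hand the feature a controlled stream of transcripts
// and assert on how its state reacts, never touching live audio.
func testSpeechTranscript() async {
  let store = TestStore(
    initialState: RecordMeetingFeature.State(standup: .mock)  // hypothetical fixture
  ) {
    RecordMeetingFeature()
  } withDependencies: {
    $0.continuousClock = ImmediateClock()
    $0.speechClient.requestAuthorization = { .authorized }
    // A deterministic stand-in for the recognizer's live results.
    $0.speechClient.start = {
      AsyncThrowingStream { continuation in
        continuation.yield("Hello")
        continuation.yield("Hello world")
        continuation.finish()
      }
    }
  }
  store.exhaustivity = .off

  let task = await store.send(.onTask)
  await store.receive(.speechResult("Hello world")) {
    $0.transcript = "Hello world"
  }
  await task.cancel()
}
```

The design point is that once the recognizer sits behind a controllable interface, scripting its output in a test is just a matter of overriding one closure.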

43:06

So, the better place to interact directly with Apple’s Speech framework is in our SpeechClient dependency, which already has some basic interactions with the framework:

    struct SpeechClient {
      var requestAuthorization: @Sendable ()
        async -> SFSpeechRecognizerAuthorizationStatus
    }

43:20

In particular, we can ask the speech client for its current authorization status, as well as request authorization from the user.
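Putting those two capabilities side by side, the client so far looks something like the following sketch. The `authorizationStatus` endpoint appears later in the previewValue, so its exact signature here is an assumption:

```swift
// A sketch of the client's current shape; authorizationStatus's
// signature is assumed, not confirmed by the episode.
struct SpeechClient {
  var authorizationStatus: @Sendable ()
    -> SFSpeechRecognizerAuthorizationStatus
  var requestAuthorization: @Sendable ()
    async -> SFSpeechRecognizerAuthorizationStatus
}
```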

43:23

We now need a new endpoint for telling the client to start up a new speech recognition task:

    struct SpeechClient {
      …
      var start: <#???#>
    }

43:31

But what is the shape of this endpoint?

43:33

Well, to keep things simple it will be a function that doesn’t take any arguments:

    struct SpeechClient {
      …
      var start: @Sendable () -> …
    }

43:41

There are ways of customizing the kind of speech recognition task that is started, but we don’t need any of that power so we will leave it out for now.

43:54

But what does this function return? Well, since the speech recognizer will be live-transcribing audio, it will want to send back many results. This means we should return an entire async sequence of values:

    struct SpeechClient {
      …
      var start: @Sendable () -> AsyncThrowingStream<String, Error>
    }

44:10

We choose an async stream because it is convenient to construct in Swift, and it’s a throwing stream because speech recognizers can fail.

44:18

And also it’s a stream of strings because that is simplest, and because our feature only needs the actual transcript. But there is a lot more information from the speech recognizer that we could expose to the user of the dependency if we wanted.
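As a purely hypothetical sketch of what exposing more would look like, the stream’s element could be a small struct instead of a bare String. The type and field names here are made up, not part of the episode:

```swift
// Hypothetical richer element type for the stream, surfacing more of
// what SFSpeechRecognitionResult provides than just the transcript.
struct SpeechResult: Equatable {
  var transcript: String        // bestTranscription.formattedString
  var isFinal: Bool             // whether recognition has settled
  var averageConfidence: Float  // aggregated segment confidence
}

struct SpeechClient {
  var start: @Sendable () -> AsyncThrowingStream<SpeechResult, Error>
}
```

For this episode a plain String keeps the feature code simplest, and nothing stops us from widening the element type later.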

44:43

This creates a few compilation errors because we now have an additional endpoint to implement for each conformance we have created, such as the liveValue:

    static let liveValue = Self(
      …
      start: <#() -> AsyncThrowingStream<String, Error>#>
    )

…and the previewValue:

    static let previewValue = SpeechClient(
      authorizationStatus: { .authorized },
      requestAuthorization: { .authorized },
      start: <#() -> AsyncThrowingStream<String, Error>#>
    )

44:56

Let’s start with the previewValue.

44:57

The simplest thing we could do is return a throwing stream that yields a single transcript immediately and then finishes:

    start: {
      AsyncThrowingStream { continuation in
        continuation.yield("Hello world!")
        continuation.finish()
      }
    }

45:18

But that’s not very fun.

45:22

If we want to have more fun we could construct a stream that emits the words of a bunch of “Lorem ipsum” text slowly over time in order to approximate someone actually speaking:

    start: {
      AsyncThrowingStream { continuation in
        Task { @MainActor in
          var finalText = """
            Lorem ipsum dolor sit amet, consectetur \
            adipiscing elit, sed do eiusmod tempor \
            incididunt ut labore et dolore magna aliqua. Ut \
            enim ad minim veniam, quis nostrud exercitation \
            ullamco laboris nisi ut aliquip ex ea commodo \
            consequat. Duis aute irure dolor in \
            reprehenderit in voluptate velit esse cillum \
            dolore eu fugiat nulla pariatur. Excepteur sint \
            occaecat cupidatat non proident, sunt in culpa \
            qui officia deserunt mollit anim id est laborum.
            """
          var text = ""
          while true {
            let word = finalText.prefix { $0 != " " }
            try await Task.sleep(
              for: .milliseconds(
                word.count * 50 + .random(in: 0...200)
              )
            )
            finalText.removeFirst(word.count)
            if finalText.first == " " {
              finalText.removeFirst()
            }
            text += word + " "
            continuation.yield(text)
          }
        }
      }
    }

45:41

That will just add a little bit more realism to our application when we run the feature in previews.

45:46

Now we just have the liveValue left. The details of actually setting up a speech recognition task, while interesting, are not the main point of this series of episodes. We aren’t going to spend time showing step-by-step how to do that work, and will instead just paste in the final result:

    start: {
      AsyncThrowingStream { continuation in
        let audioSession = AVAudioSession.sharedInstance()
        do {
          try audioSession.setCategory(
            .record, mode: .measurement, options: .duckOthers
          )
          try audioSession.setActive(
            true, options: .notifyOthersOnDeactivation
          )
        } catch {
          continuation.finish(throwing: error)
          return
        }
        let audioEngine = AVAudioEngine()
        let speechRecognizer = SFSpeechRecognizer(
          locale: Locale(identifier: "en-US")
        )!
        let request = SFSpeechAudioBufferRecognitionRequest()
        let recognitionTask = speechRecognizer
          .recognitionTask(with: request) { result, error in
            switch (result, error) {
            case let (.some(result), _):
              continuation.yield(
                result.bestTranscription.formattedString
              )
            case (_, .some):
              continuation.finish(throwing: error)
            case (.none, .none):
              fatalError(
                "It should not be possible to have both a nil result and nil error."
              )
            }
          }
        continuation.onTermination = { [audioEngine, recognitionTask] _ in
          _ = speechRecognizer
          audioEngine.stop()
          audioEngine.inputNode.removeTap(onBus: 0)
          recognitionTask.finish()
        }
        audioEngine.inputNode.installTap(
          onBus: 0,
          bufferSize: 1024,
          format: audioEngine.inputNode.outputFormat(forBus: 0)
        ) { buffer, when in
          request.append(buffer)
        }
        audioEngine.prepare()
        do {
          try audioEngine.start()
        } catch {
          continuation.finish(throwing: error)
          return
        }
      }
    }

This handles all of the work of setting up an audio session, starting a recognition task, streaming transcripts back to the continuation, and more.

46:10

We do have a few concurrency warnings stemming from types in the Speech framework not being sendable. We could do more work to properly isolate those objects, but for now we will just do a preconcurrency import of Speech:

    @preconcurrency import Speech

46:29

That’s all it takes to beef up our speech client to handle speech recognition. Now we just have to start using it.

46:34

Back in the reducer, in particular in the effect returned from onTask, we can start up the speech recognizer and start listening for events:

    for try await transcript in self.speechClient.start() {
    }

46:57

And we want to send this transcript back into the system, so let’s add a new action:

    enum Action: Equatable {
      case speechResult(String)
      …
    }

And use that action in the for await loop:

    for try await transcript in self.speechClient.start() {
      await send(.speechResult(transcript))
    }

47:18

Now technically we should be handling this error too. We should probably wrap the whole thing in a do / catch and send the error back into the system so that we can react to it. Maybe show an alert or something. But we aren’t going to do that now.
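For reference, here is a hedged sketch of what that error handling could look like, assuming a hypothetical speechFailure action that we are not actually adding in this episode:

```swift
// Sketch only: route recognizer failures back into the system so the
// reducer can react, e.g. by populating alert state.
do {
  for try await transcript in self.speechClient.start() {
    await send(.speechResult(transcript))
  }
} catch {
  // Hypothetical action, not part of the episode's Action enum.
  await send(.speechFailure(error.localizedDescription))
}
```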

47:29

Now that we have a new action we need to start handling it in the reducer:

    case let .speechResult(transcript):
      return .none

47:39

But what should we do here?

47:43

All we really need to do is hold onto this transcript so that we can reference it later, in particular when the meeting ends.

47:49

So, it sounds like we have some new state to deal with in the domain:

    struct State: Equatable {
      var transcript = ""
      …
    }

47:58

And we’ll start capturing the transcript when we get a result:

    case let .speechResult(transcript):
      state.transcript = transcript
      return .none

48:06

So, this is all a good start, but it’s not quite right.

48:08

If we run the feature in a preview we will see that the timer has stopped for some reason. This is happening because we are subscribing to a long-living async sequence before we start the timer, and so the timer code never executes.

48:30

What we really want to do is run the speech recognizer and the timer in parallel. The standard way to do this with Swift’s concurrency tools is via a task group or async let. Either tool is fine, but we’ll use a task group for now:

    return .run { send in
      await withTaskGroup(of: Void.self) { group in
        group.addTask {
          guard await self.speechClient.requestAuthorization() == .authorized
          else { return }
          do {
            for try await transcript in self.speechClient.start() {
              await send(.speechResult(transcript))
            }
          } catch {
            // TODO: Handle error
          }
        }
        group.addTask {
          for await _ in self.clock.timer(interval: .seconds(1)) {
            await send(.timerTick)
          }
        }
      }
    }
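For comparison, here is a hedged sketch of how the same effect could be written with async let instead of a task group; the behavior is equivalent, and which tool to reach for is a matter of taste:

```swift
// Sketch of the async-let alternative: two concurrent child tasks,
// awaited together so the effect lives as long as both.
return .run { send in
  async let speech: Void = {
    guard await self.speechClient.requestAuthorization() == .authorized
    else { return }
    do {
      for try await transcript in self.speechClient.start() {
        await send(.speechResult(transcript))
      }
    } catch {
      // TODO: Handle error
    }
  }()
  async let timer: Void = {
    for await _ in self.clock.timer(interval: .seconds(1)) {
      await send(.timerTick)
    }
  }()
  _ = await (speech, timer)
}
```

The task group version scales more naturally if we ever need a dynamic number of child tasks, which is one reason to prefer it here.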

49:37

And now we can see the timer has started again in the preview.

49:40

Of course, this effect is getting really gnarly, so if you wanted to you could extract it into a little private helper method on the reducer:

    private func onTask(send: Send<Action>) async {
      await withTaskGroup(of: Void.self) { group in
        group.addTask {
          guard await self.speechClient.requestAuthorization() == .authorized
          else { return }
          do {
            for try await transcript in self.speechClient.start() {
              await send(.speechResult(transcript))
            }
          } catch {}
        }
        group.addTask {
          for await _ in self.clock.timer(interval: .seconds(1)) {
            await send(.timerTick)
          }
        }
      }
    }

Note that we do have to pass along the send argument so that the helper can send actions, but the helper has immediate access to all of the dependencies on the reducer. And with that we can simply pass the helper to the effect:

    case .onTask:
      return .run(operation: self.onTask)

50:13

Believe it or not, that is all it takes to get the basics of the speech recognizer into place. We of course aren’t doing anything with the transcript held in state yet, but we can already see that we are indeed getting transcripts fed into the system.

50:25

To see this, remember that in the entry point of the application we have applied the handy little debug helper _printChanges to the app reducer:

    AppFeature()
      ._printChanges()

50:34

With that applied, every single action that comes to the system is logged to the console, along with a full diff of how the state changed when the action was received.

50:42

So, let’s run the app in the simulator, start a meeting, and do some talking. We will see right in the diff that our words are definitely making their way into our feature state:

    - transcript: "Do some"
    + transcript: "Do some talking"

50:59

Absolutely incredible.

Next time: Transcription

51:01

It’s pretty cool to see just how easy it was to use Apple’s speech recognizer API in order to get a live feed of transcription data while running our meeting. And we could put all that logic in an effect so that our reducer can remain a simple function, and our state can remain a simple value type. Brandon

51:17

Now let’s actually do something with these transcripts…next time!

References

Composable Architecture — Brandon Williams & Stephen Celis • May 4, 2020. The Composable Architecture is a library for building applications in a consistent and understandable way, with composition, testing and ergonomics in mind. http://github.com/pointfreeco/swift-composable-architecture

Getting started with Scrumdinger — Apple. Learn the essentials of iOS app development by building a fully functional app using SwiftUI. https://developer.apple.com/tutorials/app-dev-training/getting-started-with-scrumdinger

Downloads

Sample code: 0248-tca-tour-pt6