Video #138: Better Test Dependencies: Exhaustivity
Episode: Video #138 Date: Mar 15, 2021 Access: Members Only 🔒 URL: https://www.pointfree.co/episodes/ep138-better-test-dependencies-exhaustivity

Description
We talk about dependencies a lot on Point-Free, but we’ve never done a deep dive on how to tune them for testing. It’s time to do just that, by first showing how a test can exhaustively describe its dependencies, which comes with some incredible benefits.
Video
Cloudflare Stream video ID: 06c2c121b71068356bfc1431cdfd7945 Local file: video_138_better-test-dependencies-exhaustivity.mp4 *(download with --video 138)*
References
- Discussions
- the Composable Architecture
- a bunch of demo applications
- Composable Architecture
- 0138-better-test-dependencies-pt1
- Brandon Williams
- Stephen Celis
- Mastodon
- GitHub
- CC BY-NC-SA 4.0
- source code
- MIT License
Transcript
— 0:05
We have talked about dependencies on Point-Free a bunch. Some of our first episodes were about the concept of “dependency injection made easy.” In those episodes we claimed that a lot of the complexities of dependencies can be alleviated by embracing structs instead of protocols and by coalescing all dependencies into a single mutable value.
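As a refresher, that struct-based style can be sketched in a few lines of plain Swift. The names here (`DateClient`, `Environment`) are illustrative only, not taken from the episode:

```swift
import Foundation

// A dependency modeled as a struct of closures rather than a protocol:
// swapping implementations is just constructing a different value.
struct DateClient {
  var now: () -> Date
}

// The "live" value talks to the real world…
let liveDateClient = DateClient(now: { Date() })

// …while the test value is completely deterministic.
let testDateClient = DateClient(now: { Date(timeIntervalSince1970: 0) })

// Coalescing every dependency into a single mutable value makes it
// trivial to override just one of them for a particular test.
struct Environment {
  var date: DateClient
}

var environment = Environment(date: liveDateClient)
environment.date = testDateClient
```

Because `Environment` is just a value, a test can start from the live configuration and swap in only the pieces it needs to control.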
— 0:23
We took these ideas further by showing how dependency injection can be made composable and modular with the Composable Architecture. The notion of dependencies was baked directly into the core unit that powers a feature, known as a reducer, and from that we got all types of benefits, such as isolating our features into their own modules, sharing dependencies between isolated features, and even using different versions of a dependency for the same screen depending on the context it was presented in. There was a lot of cool stuff we explored in those episodes.
— 0:54
And then most recently we kicked things up a notch again when we did a deep dive into how to design dependencies. That is, how do we allow 3rd party code to run in our applications without it running amok? The more 3rd party code you drop into your code base, the more chances there are to make your code untestable, strain the build system, and accidentally break things like SwiftUI previews or even the ability to run in the simulator.
— 1:19
We want to continue this theme a bit more because there’s still something we haven’t spent a ton of time on, and that’s how to best wield our dependencies in tests. One of the main selling points of controlling dependencies is that it allows us to easily write tests for our features. But the power of those tests largely depends on how we make use of the test dependencies, and there are some really cool tricks we can employ to increase the value of our tests.
Testing features with dependencies
— 1:44
To demonstrate this we are going to take a look at a demo application that is included with the Composable Architecture repo. It’s worth noting that these ideas apply even to applications not built with the Composable Architecture. UIKit and vanilla SwiftUI applications will greatly benefit from the things we are about to discuss.
— 2:05
In case you didn’t already know, the Composable Architecture is a library we open sourced in May of 2020 after many months of developing it from first principles on Point-Free. Its goal is to help you build SwiftUI and UIKit applications with a focus on composability, modularity, and testability, and I have the project open right there. Further, even if you are aware of the Composable Architecture you may not know that the repo comes with a bunch of demo applications and case studies that show how to solve common problems using the library.
— 2:37
The one we are interested in right now is the Todos demo app. When we first open sourced the library we released a 4-part series of episodes that did a broad tour of the library by building this demo from scratch. In doing so we got to explore lots of cool concepts, such as breaking down domains into small components that piece together, modeling complex effects, and then writing a full test suite for all of the app’s functionality. If you haven’t watched that tour yet then we highly recommend you do so.
— 3:10
Let’s quickly run the app in the simulator so that we can remind ourselves of what it does. It’s a very basic todo app where we can:
— 3:22
Add a bunch of todos
— 3:24
Set the name of those todos
— 3:31
Check off the todos to complete them. Notice that there is a slight delay between the moment you check one off and the moment it sorts itself to the bottom of the list. We do this because you may want to check off a bunch of things at once, and you wouldn’t want todos flying all around while doing so. That could lead you to accidentally check off something you didn’t mean to. To accomplish this functionality we had to employ some fancy effect cancellation techniques, which we will discuss more in a moment, and we animate the list sorting by using the animated schedulers we developed in the last few episodes.
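The delayed-sorting behavior is essentially a debounce: each new toggle cancels the previously scheduled sort and schedules a fresh one. Here is a minimal sketch of that pattern using GCD (hypothetical names; the app itself achieves this with Combine effects and effect cancellation):

```swift
import Foundation
import Dispatch

// Debounced sorting: every time a todo is toggled we cancel the
// previously scheduled sort and schedule a new one. The sort only
// fires once the user has paused toggling for `delay` seconds.
final class DebouncedSorter {
  private var pending: DispatchWorkItem?
  private let delay: TimeInterval
  private let queue: DispatchQueue

  init(delay: TimeInterval, queue: DispatchQueue) {
    self.delay = delay
    self.queue = queue
  }

  func todoToggled(sort: @escaping () -> Void) {
    pending?.cancel()                        // cancel the in-flight sort
    let item = DispatchWorkItem(block: sort)
    pending = item
    queue.asyncAfter(deadline: .now() + delay, execute: item)
  }
}
```

Toggling five todos in quick succession produces only a single sort, which is exactly the behavior described above.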
— 4:16
You can also use the filters to select which types of todos you want to look at.
— 4:22
You can clear all the completed todos
— 4:27
And finally you can put the list into an edit mode so that it’s easy to re-order and delete todos.
— 4:34
So this is a moderately complex application, and helps demonstrate a lot of the things we need to solve in a real world application. Things like complex effects, breaking down a large parent domain into smaller child domains, and how to write tests that exercise all of the functionality of the app.
— 4:50
In fact, we happen to have a full test suite in the demo application. We can hit cmd+U in Xcode to run the suite and we’ll see that 8 tests run and pass in about 0.032 seconds.
— 5:05
Let’s take a look at one of these tests so that we can all remember what it looks like to write tests in the Composable Architecture. The ability to write simple yet deep and exhaustive tests in the Composable Architecture is one of its greatest selling points. The following test asserts on what happens when the user adds a todo: func testAddTodo() { let store = TestStore( initialState: AppState(), reducer: appReducer, environment: AppEnvironment( mainQueue: self.scheduler.eraseToAnyScheduler(), uuid: UUID.incrementing ) ) store.assert( .send(.addTodoButtonTapped) { $0.todos.insert( Todo( description: "", id: UUID(uuidString: "00000000-0000-0000-0000-000000000000")!, isComplete: false ), at: 0 ) } ) }
— 5:28
It’s a pretty short test, but it is also very prototypical of how tests go down in the Composable Architecture. It consists of a few elements:
— 5:42
We set up a TestStore so that we can send it actions and make assertions on how the state evolved over time. This consists of three steps:
— 5:52
We set up the initial state of the feature
— 5:55
We specify the reducer whose logic we want to test.
— 5:58
We provide the environment of dependencies that are needed to run this feature. The full todo feature only needs 2 dependencies right now:
— 6:06
A scheduler, which is used in the reducer to handle the delayed sorting logic via a debounce operator. Right now we are using a test scheduler, which allows us to control how time flows through our application, rather than being at the mercy of DispatchQueue.main , which can add flakiness and uncertainty to our tests. We’ve built a test scheduler from scratch in a past episode of Point-Free and we highly recommend you check it out if you are interested in that topic.
— 6:36
And a function that represents a UUID initializer. We use a controlled, incrementing implementation so that the IDs of newly added todos are deterministic and can be asserted on.
— 7:18
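The test scheduler mentioned above is, at its core, a virtual clock: work is enqueued with a virtual timestamp and only runs when the test explicitly advances time. A stripped-down sketch of the idea (not the actual CombineSchedulers implementation):

```swift
// A virtual clock: work is enqueued with a virtual timestamp and only
// runs when the test explicitly advances time. No real time passes,
// so tests are instant and deterministic.
final class TestClock {
  private(set) var now: Double = 0
  private var scheduled: [(at: Double, work: () -> Void)] = []

  // Enqueue work to run `delay` seconds of virtual time from now.
  func schedule(after delay: Double, _ work: @escaping () -> Void) {
    scheduled.append((at: now + delay, work: work))
  }

  // Advance the clock, running any work that has come due, in order.
  func advance(by duration: Double) {
    now += duration
    let due = scheduled.filter { $0.at <= now }.sorted { $0.at < $1.at }
    scheduled.removeAll { $0.at <= now }
    due.forEach { $0.work() }
  }
}
```

A test can schedule a sort one second out, advance by half a second and assert nothing happened, then advance the rest of the way and assert the sort fired, all without waiting on `DispatchQueue.main`.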
Once the test store is created we can finally start making assertions of how our logic behaves. We do this with the .assert method on TestStore , and we feed it a sequence of steps that emulate what the user is doing along with closures that describe exactly how state mutated after each step.
— 7:34
So here we simulate that the user tapped the “Add todo” button, and we assert that a new todo was added to the todos array. Notice that we had to provide a specific UUID for the new todo, which is only possible because the UUID dependency is controlled and deterministic.
— 8:10
To be a little more exhaustive with our test, let’s simulate the user adding another todo, which causes another todo to be added to the todos array. But also notice that it is prepended to the todos array: .send(.addTodoButtonTapped) { $0.todos.insert( Todo( description: "", id: UUID(uuidString: "00000000-0000-0000-0000-000000000001")!, isComplete: false ), at: 0 ) }
— 8:23
And the test passes!
— 8:29
In order to explore what a failure looks like, let’s instead insert the todo at the end of the array: at: 1
— 8:46
This gives us a chance to show off the wonderful test failure messages of the Composable Architecture. It provides us with a nicely formatted message showing exactly how the actual state differs from the expected state in a line-by-line diff: State change does not match expectation: … AppState( editMode: EditMode.inactive, filter: Filter.all, todos: [ Todo( description: "", − id: 00000000-0000-0000-0000-000000000000, + id: 00000000-0000-0000-0000-000000000001, isComplete: false ), Todo( description: "", − id: 00000000-0000-0000-0000-000000000001, + id: 00000000-0000-0000-0000-000000000000, isComplete: false ), ] ) (Expected: −, Actual: +)
— 8:59
So here we can clearly see that we expected the second todo to be added to the end of the list, but in actuality it was inserted at the beginning of the list. So let’s fix that real quick: .send(.addTodoButtonTapped) { $0.todos.insert( Todo( description: "", id: UUID(uuidString: "00000000-0000-0000-0000-000000000001")!, isComplete: false ), at: 0 ) }
— 9:18
There’s a whole bunch of these kinds of tests in this file, testing every little edge case of the application. We even have tests that verify what happens when you check off todos in rapid succession so that we can assert that they do not sort to the bottom until you wait a brief moment.
— 9:32
Let’s look at one more test just so that we can make sure we are comfortable with this style of testing. If we scroll down a bit we will see a test that verifies what happens when we try to delete a todo: func testDelete() { let todos: IdentifiedArrayOf<Todo> = [ Todo( description: "", id: UUID(uuidString: "00000000-0000-0000-0000-000000000000")!, isComplete: false ), Todo( description: "", id: UUID(uuidString: "00000000-0000-0000-0000-000000000001")!, isComplete: false ), Todo( description: "", id: UUID(uuidString: "00000000-0000-0000-0000-000000000002")!, isComplete: false ), ] let store = TestStore( initialState: .init(todos: todos), reducer: appReducer, environment: AppEnvironment( mainQueue: self.scheduler.eraseToAnyScheduler(), uuid: UUID.incrementing ) ) store.assert( .send(.delete([1])) { $0.todos = [ $0.todos[0], $0.todos[2], ] } ) }
— 9:50
Here we do a bit of extra work to set up a large collection of todos for the initial state, but then everything else looks like the last test. We simulate the idea of the user deleting a todo at a particular IndexSet , and then assert that indeed that todo was removed.
— 10:02
Had we said a different todo was removed, say the last one: .send(.delete([1])) { $0.todos = [ $0.todos[0], $0.todos[1], ] }
— 10:07
Then we would have gotten a nicely formatted test failure message: State change does not match expectation: … AppState( editMode: EditMode.inactive, filter: Filter.all, todos: [ Todo( description: "", id: 00000000-0000-0000-0000-000000000000, isComplete: false ), Todo( description: "", − id: 00000000-0000-0000-0000-000000000001, + id: 00000000-0000-0000-0000-000000000002, isComplete: false ), ] ) (Expected: −, Actual: +)
— 10:24
But we don’t want that, so let’s revert that:
— 10:26
So that’s the basics of testing in the Composable Architecture, and already this is pretty powerful, but there’s also more that it is capable of. Right now these tests don’t deal with any effects, and that’s where this test store assertion helper really starts to shine. It forces us to exhaustively prove exactly how side effects are executing and feeding their data back into the system. If an effect sends an action back into the store and you don’t explicitly describe that in the assertion, that’s a failure, because it means you aren’t further asserting on how that effect is going to change your state. Also, if an effect is still in flight by the time you have ended your assertion on the store, then that will also fail, because you have not fully described how the system completes itself.
Unimplemented dependencies
— 11:16
So that is all really cool, but already there are a few strange things going on in these tests. We are constructing an environment to pass to the store, and we are using completely controlled, deterministic dependencies, which is great, but some of these tests don’t need all of the dependencies. The test for adding todos only uses the UUID dependency, for example, yet it still constructs and passes along a scheduler.
— 11:45
This is telling us that these tests are not quite as strong as they could be. Just as our test store assertion helper forces us to be exhaustive with how effects execute in the system, we’d love it if we could also be exhaustive in describing what dependencies a feature actually needs to do its job. This is something we’ve hinted at a few times on Point-Free, but never really dedicated time to explaining in depth. Let’s take another look at our tests to show what we mean.
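The effect exhaustivity the test store enforces can be modeled in miniature with plain Swift. The sketch below is a toy, not the library’s `TestStore`: the reducer returns the actions its “effects” will feed back, and the store refuses to finish cleanly until every one has been asserted on:

```swift
// A toy model of exhaustive effect testing — not the real TestStore.
// The reducer returns the actions its "effects" will feed back into
// the system; the store queues them up and the test must consume
// every one via `receive` before `finish()` reports success.
enum CounterAction: Equatable { case tap, response }

struct ToyTestStore<State, Action: Equatable> {
  private(set) var state: State
  private let reducer: (inout State, Action) -> [Action]
  private var queued: [Action] = []

  init(state: State, reducer: @escaping (inout State, Action) -> [Action]) {
    self.state = state
    self.reducer = reducer
  }

  mutating func send(_ action: Action) {
    let effects = reducer(&state, action)
    queued.append(contentsOf: effects)
  }

  // Asserting on a received action consumes it from the queue.
  mutating func receive(_ action: Action) -> Bool {
    guard queued.first == action else { return false }
    queued.removeFirst()
    let effects = reducer(&state, action)
    queued.append(contentsOf: effects)
    return true
  }

  // Ending the test with unconsumed effect actions is a failure,
  // mirroring how the real test store fails on in-flight effects.
  func finish() -> Bool { queued.isEmpty }
}
```

A test that sends `.tap` but forgets to `receive(.response)` will see `finish() == false`, which is exactly the failure mode the real test store surfaces automatically.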
— 12:13
In the test for deleting a todo we construct the following environment for the feature: environment: AppEnvironment( mainQueue: self.scheduler.eraseToAnyScheduler(), uuid: UUID.incrementing )
— 12:26
These are both controlled, deterministic dependencies, which is great for writing tests, but none of the dependencies are actually used. We kinda already know this just by virtue of the fact that we don’t expect that deleting a todo should cause a UUID to be generated or any asynchronous work to be scheduled.
— 12:47
We can also verify this by hopping over to the reducer and seeing it for ourselves: case let .delete(indexSet): state.todos.remove(atOffsets: indexSet) return .none
— 12:56
Clearly no dependencies are being used here.
— 13:01
But even better than looking at the actual implementation of the reducer, or having a “feeling” that this action doesn’t use any dependencies, we should have some way of verifying definitively that these dependencies are not used at all.
— 13:14
The simplest way to do this is to just put in a fatalError for all the endpoints of the dependency, and this is even something we’ve done in past episodes. If the test passes then clearly none of those code paths were executed. It’s straightforward to do this for the UUID dependency: uuid: { fatalError() }
— 13:41
Tests still pass so we now know with 100% certainty that the reducer does not use the UUID dependency in this user flow.
— 13:49
We can even do it for the scheduler by constructing an AnyScheduler from scratch and just putting in a fatalError for all of its requirements: environment: AppEnvironment( mainQueue: .init( minimumTolerance: { fatalError() }, now: { fatalError() }, scheduleImmediately: { _, _ in fatalError() }, delayed: { _, _, _, _ in fatalError() }, interval: { _, _, _, _, _ in fatalError() } ), uuid: { fatalError() } )
— 14:20
Tests still pass, and now our test is even stronger. We can be certain that there is truly no asynchronous work being performed, because the reducer did not invoke the scheduler at all.
— 14:28
We can even extract these out to little helpers so that it’s easier to use them in all of our tests: extension UUID { static let unimplemented: () -> UUID = { fatalError() } } import Combine extension Scheduler { static var unimplemented: AnySchedulerOf<Self> { Self( minimumTolerance: { fatalError() }, now: { fatalError() }, scheduleImmediately: { _, _ in fatalError() }, delayed: { _, _, _, _ in fatalError() }, interval: { _, _, _, _, _ in fatalError() } ) } }
— 15:26
And now the construction of the environment for the test looks nice and succinct: environment: AppEnvironment( mainQueue: .unimplemented, uuid: UUID.unimplemented )
— 15:45
Let’s see what other environments we can chisel away at to make them prove that they don’t use everything from the environment.
— 15:50
When testing the add todo flow we can now substitute in an unimplemented main queue scheduler to prove that there is no asynchronous funny business going down: func testAddTodo() { let store = TestStore( initialState: AppState(), reducer: appReducer, environment: AppEnvironment( mainQueue: .unimplemented, uuid: UUID.incrementing ) ) … }
— 16:21
If we tried to further replace the UUID dependency with an unimplemented version: uuid: UUID.unimplemented
— 16:29
Then we will immediately see that our tests get caught on the fatalError in the dependency: extension UUID { static let unimplemented: () -> UUID = { fatalError() } } Thread 1: Fatal error
— 16:40
And directly below, the stack frame shows exactly where we used the dependency, which is nice: state.todos.insert(Todo(id: environment.uuid()), at: 0) Thread 1: Fatal error
— 16:50
So we definitely don’t want to use the unimplemented UUID function in this test, because the feature genuinely needs it.
— 17:07
The next test is for verifying what happens when a todo is edited. Without even looking at what we are asserting on or what is happening in the reducer I think we can be pretty certain that a UUID is never generated and no asynchronous work is performed, so let’s use unimplemented dependencies for both.
— 17:32
And tests still pass!
— 17:41
Next we have the test that verifies what happens when a todo is completed. Remember that this does have some asynchrony involved because after a todo is completed we wait a second and then sort it down to the bottom. So we would expect it to need the scheduler, but let’s first just try putting in unimplemented dependencies across the board: func testCompleteTodo() { … let store = TestStore( initialState: AppState(todos: todos), reducer: appReducer, environment: AppEnvironment( mainQueue: .unimplemented, uuid: UUID.unimplemented ) ) … }
— 18:07
Well, that causes us to get caught on the fatalError in the scheduler, which is what we thought would happen, but nice to have the test suite verifying it for us. So, looks like we really do need a test scheduler for testing this code, and so let’s put it back in: environment: AppEnvironment( mainQueue: self.scheduler.eraseToAnyScheduler(), uuid: UUID.unimplemented )
— 18:36
Now tests pass, and at least we know that the UUID dependency is not used when completing a todo.
— 18:46
We could continue converting more test environments, chiseling them away to their bare essentials, but they all look about the same. Still, we’ve shown that there is a lot of power in being more explicit about exactly which dependencies are used in the slice of the feature you are testing.
— 19:04
First, the more you chisel away at the environment for a test the more proof you have that you understand how the feature works from the outside. You don’t even need to consult the implementation to prove that it makes no API requests, or tracks no analytics, or performs no asynchronous work. And that’s really powerful.
— 19:19
Second, it also acts as a signal for your colleagues and your future self about how complex a test is. For example, if you come to a test you wrote a long time ago, or one that you are not familiar with, and nearly the entire environment is stubbed out with unimplemented dependencies, then you know it has a pretty good shot at being a simple, straightforward test. However, if there are lots of implemented dependencies being used, then there’s a good chance the test is exercising a complex feature, and so more care may be needed. That’s also really powerful.
— 19:53
And finally, using unimplemented dependencies by default makes it really easy to add new dependencies to your feature and see exactly which tests need to be updated. You get instant visibility into which tests are making use of the new dependency, and which can just hum along without any changes.
Adding analytics to features
— 20:12
This point in particular we are going to spend some time on because it’s really powerful. We are going to add some new functionality to our todo application, and something that I think a lot of people will agree has the potential to wreak havoc in even the cleanest, most well-kept code bases. And that’s analytics.
— 20:34
Analytics tend to be one of those things that you just sprinkle around the code base to get the job done. In every viewDidAppear , every button action, every toggle flip you call out to some singleton of an analytics library, and then somehow that data is sent to a server. Even worse, there is usually no test coverage on these analytics events so you will never know when you accidentally break something. You could accidentally be tracking too few events or too many events, and that will muddy up your analytics data, possibly leading you to make important business decisions on faulty data.
— 21:06
However, it doesn’t have to be that way. We can make tracking analytics a natural part of building the application, sitting right alongside the standard way we perform state mutations and execute side effects. And even better, we can get test coverage on our analytics so that we can be confident in our data.
— 21:23
We’ll start by creating a new dependency from scratch. We are going to follow the style that we previously demonstrated in our “ Designing Dependencies ” series, where we showed how to model dependencies using basic structs for maximum flexibility. Our analytics client will be modeled as a struct with a single endpoint for tracking an event name and properties: struct AnalyticsClient { var track: (String, [String: String]) -> Effect<Never, Never> }
— 22:12
It’s just a simple struct wrapping a closure field, and that closure takes a string for the event name and a dictionary of properties that are attached to the analytics event. We also return an Effect because we don’t want to perform that work immediately, but rather have it run later by the store. We do this with all side effects that want to speak with the outside world. Also, this effect is a <Never, Never> effect because it doesn’t need to feed anything back into the system and it never fails. It will just fire-and-forget.
— 22:31
It can also be nice to bundle the event name and properties into a proper type, especially for when we start writing tests and want to assert against what events are tracked. So let’s do that real quick: struct AnalyticsClient { var track: (Event) -> Effect<Never, Never> struct Event: Equatable { var name: String var properties: [String: String] } }
— 22:58
With the interface of our dependency defined we can start to create some instances. The first one will be the live client, which is the one that actually makes network requests and sends data to an external server. We like to house the instances as statics on the type, like this: extension AnalyticsClient { static let live = Self( track: { event in .fireAndForget { // TODO: send the event data to the analytics server } } ) }
— 23:44
We don’t actually have an analytics server to send this data to right now. But in here you would probably construct a URLRequest for posting the data and then use something like URLSession to send that off to your server: .fireAndForget { // TODO: send the event data to the analytics server URLSession.shared .dataTask( with: URL(string: "https://www.my-company.com/analytics")! ) .resume() }
— 24:17
That won’t actually succeed right now of course, but that’s how it could happen. For now, just so that we can actually see that something is being tracked, let’s throw a print statement into the effect too: .fireAndForget { print("Track name: \"\(event.name)\", properties: \(event.properties)") // TODO: send the event data to the analytics server // URLSession.shared // .dataTask( // with: URL(string: "https://www.my-company.com/analytics")! // ) // .resume() }
— 24:45
With the dependency defined we can add it to our feature’s environment: struct AppEnvironment { var analytics: AnalyticsClient var mainQueue: AnySchedulerOf<DispatchQueue> var uuid: () -> UUID }
— 25:04
This now explicitly says that in order to run our application we must provide it an analytics client, a scheduler, and a UUID function.
— 25:13
With that done we now have access to the analytics client in our reducer. So, what do we want to track? Maybe we’re interested in how often people use the feature to clear their completed todos. Maybe it’s a feature that no one really uses, and so in the future we might even remove it.
— 25:28
Well, instrumenting that is as simple as returning an effect from the .clearCompletedButtonTapped action in the reducer: case .clearCompletedButtonTapped: state.todos.removeAll(where: { $0.isComplete }) return environment.analytics .track(.init(name: "Cleared Completed Todos", properties: [:])) .fireAndForget()
— 26:26
We can even default properties to an empty dictionary: struct Event: Equatable { var name: String var properties: [String: String] = [:] }
— 26:33
So that we can omit it when we call .track : .track(.init(name: "Cleared Completed Todos"))
— 26:40
Let’s instrument something a bit more nuanced. Currently there are two ways to delete a todo. You can either swipe the row and then tap the delete button, or you can put the list into edit mode and then delete a todo. Suppose we are interested in how many people are using one method versus the other. This might inform how we add new functionality to this list.
— 27:08
Well, to do this we just have to tap into the .delete action and return an effect that tracks the event: case let .delete(indexSet): state.todos.remove(atOffsets: indexSet) return environment.analytics .track(.init(name: "Todo Deleted")) .fireAndForget()
— 27:26
But we also want to track some properties with this event. The state of the app holds an editMode field, which determines if we are currently in edit mode or not. We can just turn that into a string and send it along to the properties: return environment.analytics .track( .init( name: "Todo Deleted", properties: ["editMode": "\(state.editMode)"] ) ) .fireAndForget()
— 28:20
It’s probably not the best idea to stringify this enum directly like this. Future iOS releases could cause this to change, and that would muddy up our analytics. It would be better to define a little helper to turn it into a string, but that’s not really our focus right now so we’ll leave it as-is.
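For the curious, such a helper might look like the following. `EditMode` is stubbed out here to keep the sketch self-contained; in the app it would be SwiftUI’s EditMode, and `analyticsName` is a hypothetical helper, not part of any API:

```swift
// Pinning the analytics strings down explicitly means a change to the
// enum (or to how it prints) in a future SwiftUI release can't silently
// change the data we send. `EditMode` is stubbed here so the sketch is
// self-contained; in the app it would be SwiftUI's EditMode, and
// `analyticsName` is a hypothetical helper, not part of any API.
enum EditMode { case active, inactive, transient }

extension EditMode {
  var analyticsName: String {
    switch self {
    case .active: return "active"
    case .inactive: return "inactive"
    case .transient: return "transient"
    }
  }
}
```

The reducer would then send `["editMode": state.editMode.analyticsName]`, keeping the tracked strings stable even if the enum’s printed description changes.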
— 28:42
And just like that, anytime we clear completed todos or delete a todo we should be tracking some analytics events. Let’s try running in the simulator to verify this. To do this we need to fix two small compiler errors. First, in the preview we have to pass along an analytics client, and so I guess we could just provide the .live instance for now, though we should not use a live instance here in the long run since it would muddy up our analytics if previews were to hit a production server. struct AppView_Previews: PreviewProvider { static var previews: some View { AppView( store: Store( initialState: AppState(todos: .mock), reducer: appReducer, environment: AppEnvironment( analytics: .live, mainQueue: DispatchQueue.main.eraseToAnyScheduler(), uuid: UUID.init ) ) ) } }
— 29:22
And then over in the SceneDelegate we have to also provide an analytics client when constructing the root view. We’ll also use .live here, which is more appropriate, but even then we may later decide to use a different instance for debug builds. let rootView = AppView( store: Store( initialState: AppState(), reducer: appReducer, environment: AppEnvironment( analytics: .live, mainQueue: DispatchQueue.main.eraseToAnyScheduler(), uuid: UUID.init ) ) )
— 30:00
Now we can run the app in the simulator, and if we clear some todos and delete some we will see the analytics properly printing to the console: Track name: "Cleared Completed Todos", properties: [:] Track name: "Todo Deleted", properties: ["editMode": "transient"] Track name: "Todo Deleted", properties: ["editMode": "active"]
— 30:34
And because these logs are being printed to the console, we can be fairly certain that if we were sending this data to a real analytics server, it would be sent as we expect.
Testing analytics exhaustively
— 30:45
So it’s nice to know that it is working, but manually running the app in the simulator and inspecting the logs is not a great way to verify that analytics are tracking correctly. It would be far better to write some actual tests for this behavior, so that we can be sure it currently works as expected, and so that in the future, as we start adding more functionality and refactoring code, we can have confidence we didn’t accidentally mess up our analytics.
— 31:23
If we try to build tests right now we will see that we have a bunch of compiler errors because none of our test environments have been constructed with the analytics client: environment: AppEnvironment( analytics: <#AnalyticsClient#>, mainQueue: .unimplemented, uuid: UUID.incrementing ) Missing argument for parameter ‘analytics’ in call
— 31:35
What should we put here? The only analytics client we’ve constructed so far is the live one that actually sends data to our analytics server. We definitely don’t want to do that in tests. Not only will that send events to the server that did not actually come from the user, but we still wouldn’t be able to verify that the events we expect to be tracked are actually tracked.
— 31:54
We can cook up a mock analytics client that both does not send data to a live server and also buffers its events into an array so that we can assert on them later, and we will do that in a moment, but perhaps a better default for this dependency would be an unimplemented instance. This would be an instance that simply does a fatalError inside the track endpoint.
— 32:14
Let’s cook up this instance real quick. We will also house it inside the AnalyticsClient type as a static: extension AnalyticsClient { static let unimplemented = Self( track: { _ in fatalError() } ) }
— 32:35
This instance now gives us something simple that we can plug into all the spots where the compiler is complaining, so let’s do that real quick: environment: AppEnvironment( analytics: .unimplemented, mainQueue: .unimplemented, uuid: UUID.incrementing ) …
— 32:53
Now tests are building, but as soon as we run them we get caught on the unimplemented analytics client. And this is a good thing. It means we have a test using analytics but we aren’t making any assertions against what events were tracked. This is what it means to be exhaustive in describing what dependencies are used in tests. Using this technique has allowed us to instantly find every part of the feature that is using analytics, all without reading the implementation. That’s really powerful.
— 33:23
So, let’s fix these tests. We need to introduce a new instance of the AnalyticsClient, one that does not send events to a real server, but instead just buffers the events into a collection so that we can assert on them later.
— 33:34
We will call this instance test to make it very clear that it’s meant for testing: extension AnalyticsClient { static let test = Self( track: { event in ??? } ) }
— 33:48
We need to return an effect here, so let’s open up a fireAndForget that can do the work needed for testing analytics. extension AnalyticsClient { static let test = Self( track: { event in .fireAndForget { } } ) }
— 33:55
And this already compiles, but in order for the user of this test instance to listen to what events are being tracked we need to allow the outside to pass us a closure that we will invoke whenever an event is tracked: extension AnalyticsClient { static func test(onEvent: @escaping (Event) -> Void) -> Self { Self( track: { event in .fireAndForget { onEvent(event) } } ) } }
— 34:51
This is what will make it possible for the outside world to snoop on what is happening on the inside of the analytics client.
— 34:56
Let’s take it for a spin. Currently it’s the testClearCompleted test that got caught on the unimplemented fatalError . Instead let’s put in a test client: environment: AppEnvironment( analytics: .test(onEvent: { event in }), mainQueue: self.scheduler.eraseToAnyScheduler(), uuid: UUID.unimplemented )
— 35:14
This gets tests building, but now we want to actually capture the events being tracked. To do that we can create a little mutable array of events and then append to it in the test client: var events: [AnalyticsClient.Event] = [] let store = TestStore( initialState: state, reducer: appReducer, environment: AppEnvironment( analytics: .test(onEvent: { event in events.append(event) }), mainQueue: self.scheduler.eraseToAnyScheduler(), uuid: UUID.unimplemented ) )
— 35:38
And then after we perform our assertion on how state changes we can perform another assertion to confirm that the events are what we expect:

```swift
store.assert(
  .send(.clearCompletedButtonTapped) {
    $0.todos.remove(at: 1)
  }
)
XCTAssertEqual(events, [.init(name: "Cleared Completed Todos")])
```
— 36:05
And just like that we have a passing test that is asserting against analytics. Impressive!
— 36:15
Next we need to fix the other test. It can be done in a similar fashion to what we just did, except this time we are also asserting on some properties tracked by the analytics client:

```swift
var events: [AnalyticsClient.Event] = []

let store = TestStore(
  initialState: .init(todos: todos),
  reducer: appReducer,
  environment: AppEnvironment(
    analytics: .test { events.append($0) },
    mainQueue: .unimplemented,
    uuid: UUID.unimplemented
  )
)

store.assert(
  .send(.delete([1])) {
    $0.todos = [
      $0.todos[0],
      $0.todos[2],
    ]
  }
)
XCTAssertEqual(
  events,
  [.init(name: "Todo Deleted", properties: ["editMode": "inactive"])]
)
```
— 37:21
Also, since we got this mutable array of events in two spots we could clean this up a bit by declaring it as an instance variable on the test case:

```swift
class TodosTests: XCTestCase {
  var events: [AnalyticsClient.Event] = []

  …
}
```
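Hoisting the buffer onto the test case is safe because XCTest instantiates a fresh test case object for every test method, so the array starts out empty each time without any manual reset. A rough standalone sketch of the shape, with the XCTestCase superclass and event type elided so it compiles on its own:

```swift
// Sketch only: the XCTestCase superclass is elided and the event type
// is simplified to String so this compiles without XCTest.
final class TodosTests /* : XCTestCase */ {
  // XCTest creates a new TodosTests instance per test method, so this
  // buffer starts empty for each test.
  var events: [String] = []

  func testClearCompleted() {
    // Stand-in for the analytics client's onEvent closure.
    let onEvent: (String) -> Void = { self.events.append($0) }
    onEvent("Cleared Completed Todos")
    assert(events == ["Cleared Completed Todos"])
  }
}

TodosTests().testClearCompleted()
```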
— 37:54
And now when we run tests everything passes, which means we are now getting test coverage on the analytics being tracked in the application, and that’s pretty awesome.
— 37:58
We can even get some test coverage on the user flow where they go into editing mode and then delete the todo so that we can confirm that the analytics property is set to what we expect. To do this I will just add a few additional steps to our current assertion:

```swift
store.assert(
  .send(.delete([1])) {
    $0.todos = [
      $0.todos[0],
      $0.todos[2],
    ]
  },
  .send(.editModeChanged(.active)) {
    $0.editMode = .active
  },
  .send(.delete([0])) {
    $0.todos = [
      $0.todos[1],
    ]
  }
)
XCTAssertEqual(
  events,
  [
    .init(name: "Todo Deleted", properties: ["editMode": "inactive"]),
    .init(name: "Todo Deleted", properties: ["editMode": "active"]),
  ]
)
```
— 39:15
This test suite passes, but we can get even more granular with our assertions. In order to assert that an event is properly tied to a specific action, we can assert immediately after the action is sent using a `.do` test store assertion step.

```swift
.send(.delete([1])) {
  $0.todos = [
    $0.todos[0],
    $0.todos[2],
  ]
},
.do {
  XCTAssertEqual(
    self.events,
    [.init(name: "Todo Deleted", properties: ["editMode": "inactive"])]
  )
},
```
— 39:39
The test still passes, which means we’re asserting more granularly on exactly where the event was tracked. It might seem a little noisy to assert against this past event again at the end of the test, but we have an exercise for the viewer to clean this up.
— 40:09
The fact that the test suite passes proves that only two tests in the entire suite require analytics: clearing todos and deleting todos. If we were to go in and start instrumenting more parts of our application we would instantly get feedback on which tests need to be updated. We wouldn’t need to hunt them down or audit our entire test suite to see where we should be asserting on analytics events.
— 40:37
This means that if you come to this code 6 months from now in order to add some more analytics, you wouldn’t even have to think about what tests need to be updated. You could just run the entire suite and see which test cases get stuck on the unimplemented client. That’s the power of being more explicit and exhaustive with what dependencies your test cases are actually using.

Next time: failability
— 40:55
However, there is one suboptimal aspect of what we have done so far: when an unimplemented dependency is used, it crashes the whole test suite. No other test will run, and that’s going to be really annoying in practice. In a long test suite it takes just a single failure to stop the entire run, and we’ll have no idea which other tests failed until we fix the first one.
— 41:20
So having the unimplemented dependencies was a nice way to get our feet wet with the concept of exhaustively describing our dependencies, but can we do better? Yes we can, but it comes with a few new complications that have to be worked out.
— 41:34
What if, instead of doing a fatalError inside each endpoint of our dependency, we put in an XCTFail? That would make the test fail while still letting the rest of the suite run.
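As a standalone sketch of that idea, with a plain closure standing in for XCTFail and simplified stand-in types, recording a failure lets execution continue where fatalError would have trapped:

```swift
// Simplified stand-ins for the episode's types.
struct Event { let name: String }

struct AnalyticsClient {
  var track: (Event) -> Void
}

var failures: [String] = []
// Stand-in for XCTFail, which records a test failure without halting.
let recordFailure: (String) -> Void = { failures.append($0) }

// The "failing" flavor of the client records a failure instead of trapping.
let failing = AnalyticsClient(
  track: { _ in recordFailure("AnalyticsClient.track is unimplemented") }
)

// Touching the endpoint flags a failure, yet execution continues,
// so the rest of the suite still gets a chance to run.
failing.track(Event(name: "ignored"))
```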
— 41:45
Let’s try it out with our simplest dependency, the