EP 139 · Better Test Dependencies · Mar 22, 2021 · Members

Video #139: Better Test Dependencies: Failability


Episode: Video #139 Date: Mar 22, 2021 Access: Members Only 🔒 URL: https://www.pointfree.co/episodes/ep139-better-test-dependencies-failability


Description

Exhaustively describing dependencies in your tests makes them stronger and easier to understand. We improve the ergonomics of this technique by ditching the fatalError in unimplemented dependencies, using XCTFail, and we open source a library along the way.

Video

Cloudflare Stream video ID: 2af7b00bd5d74ea4a18a086e8b03a457 Local file: video_139_better-test-dependencies-failability.mp4 *(download with --video 139)*

Transcript

0:05

The fact that the test suite passes proves that there are only two tests of the entire suite that require analytics: when clearing todos and deleting todos. If we were to go in and start instrumenting more parts of our application we would instantly get feedback on which tests need to be updated. We wouldn’t need to hunt them down or audit our entire test suite to see where we should be further asserting for analytics events.

0:33

This means that if you come to this code 6 months from now in order to add some more analytics, you wouldn’t even have to think about what tests need to be updated. You could just run the entire suite and see which test cases get stuck on the unimplemented client. That’s the power of being more explicit and exhaustive with what dependencies your test cases are actually using.

0:51

However, there is one not very optimal thing about what we have done so far, and that’s the fact that when an unimplemented dependency is used it crashes the whole test suite. No other test will run, and that’s going to be really annoying in practice. If we have a long test suite then it takes just a single failure to stop the entire suite, and we’ll have no idea of what other tests failed until we fix the first one that failed.

1:16

So having the unimplemented dependencies was a nice way to get our feet wet with the concept of exhaustively describing our dependencies, but can we do better? Yes we can, but it comes with a few new complications that have to be worked out.

1:30

What if, instead of doing a fatalError inside each endpoint of our dependency, we put in an XCTFail? That would make our test fail while also letting the rest of the suite run.
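To see the difference in miniature, here is a self-contained sketch. The softFail function is a hypothetical stand-in for XCTFail: it records the failure and lets execution continue, whereas a fatalError-ing stub would halt the whole process the first time it is used.

```swift
// Hypothetical stand-in for XCTFail: record the failure, keep running.
var failures: [String] = []
func softFail(_ message: String) { failures.append(message) }

// A fatalError-ing stub would crash the process on first use:
// let unimplementedUUID: () -> String = { fatalError("unimplemented") }

// A "failing" stub records a failure and returns a placeholder value instead:
let failingUUID: () -> String = {
    softFail("UUID initializer is unimplemented")
    return "00000000-0000-0000-0000-000000000000"
}

_ = failingUUID()
print(failures.count)   // 1: the failure was recorded...
print("still running")  // ...and we are still executing
```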

1:41

Let’s try it out with our simplest dependency, the UUID.

1:50

We can create a new static property on UUID.

2:37

However, we still have to return something from this closure. We didn’t have to do that for the .unimplemented version because it fatalError’d, and that stops execution of the program, so the compiler knows it doesn’t matter what happens after that.

2:49

But what should we return? We could hard code a single distinctive UUID.

3:05

Or we could just call out to the live initializer:

extension UUID {
  static let failing: () -> UUID = {
    XCTFail()
    return UUID()
  }
}

3:09

We’re not sure if one is definitely more correct than the other. You should feel free to choose whichever makes most sense to you. We’ll stick with using the live initializer under the hood for right now.

3:22

We can now replace all occurrences of UUID.unimplemented with UUID.failing and the test suite should still pass.

3:42

But now, if we sneak in a new usage of the uuid dependency in our reducer, we’ll get a simple test failure telling us something went wrong. Let’s try it: we’ll invoke environment.uuid from the action that deletes a todo:

case let .delete(indexSet):
  _ = environment.uuid()
  …

4:04

There is of course no reason we should need to do this, and so we would hope our test suite catches us in our shenanigans. And indeed it does!

4:20

In the testDelete test case we can clearly see that this test accessed the UUID dependency.

4:28

This failure isn’t very descriptive, so let’s spell out what went wrong where we call XCTFail:

extension UUID {
  static let failing: () -> UUID = {
    XCTFail("UUID initializer is unimplemented")
    return UUID()
  }
}

4:52

And now it’s clear that in the lifecycle of store.assert, the UUID dependency was accessed unexpectedly. One strange thing is that the failure is on the store.assert line rather than the line that actually caused the dependency to be accessed; in particular, it happened when we sent the .delete action. That’s going to be pretty annoying, especially for complex tests, because you’ll never know the true origin of the failure.

5:19

One thing you may know about XCTFail is that it takes file and line arguments that can be used to tie a failure to a different line in the application:

XCTFail("UUID initializer is unimplemented", file: <#file#>, line: <#line#>)

5:36

So perhaps we can provide that and it will properly tie the failure to where we invoke the .uuid() closure in the reducer.

5:38

However, closures in Swift do not get access to the file and line from which they were invoked. Functions do have this capability, but sadly closures do not.

static let failing: () -> UUID = {
  XCTFail("UUID initializer is unimplemented")
  return UUID()
}

5:49

We could try upgrading our failing property to be a failing function, because then we could get file and line arguments:

static func failing(
  file: StaticString = #file,
  line: UInt = #line
) -> () -> UUID {
  {
    XCTFail(
      "UUID initializer is unimplemented",
      file: file,
      line: line
    )
    return UUID()
  }
}

6:41

However, this will capture the file and line of where the failing function is called to construct the closure, not where the returned closure is eventually invoked.
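A tiny, self-contained sketch of this capture behavior; reportFailure is a hypothetical stand-in for XCTFail so the code can run outside a test target:

```swift
// Hypothetical stand-in for XCTFail so this runs outside a test target.
func reportFailure(_ message: String, file: StaticString = #file, line: UInt = #line) {
    print("failure: \(message) at \(file):\(line)")
}

// The #file/#line defaults are filled in where makeFailing() is *called*,
// and the returned closure captures those values forever.
func makeFailing(file: StaticString = #file, line: UInt = #line) -> () -> UInt {
    {
        reportFailure("unimplemented", file: file, line: line)
        return line
    }
}

let creationLine = UInt(#line) + 1
let failing = makeFailing()  // the captured line is this one...
let reported = failing()     // ...not this one, where the closure actually runs
print(reported == creationLine)  // true
```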

7:26

What we really want is a way to get the file and line of where a closure was called from within the closure. Swift could provide this functionality, and we could even theorize a syntax. Perhaps the file and line could be passed to a closure using the square bracket syntax that is used for capturing values in closures:

static let failing: () -> UUID = { [#file, #line] in
  XCTFail("UUID initializer is unimplemented", file: file, line: line)
  return UUID()
}

Then this would work exactly as we want.

7:48

Luckily we don’t have to wait for this theoretical Swift feature. We can actually provide a pretty good solution using some new capabilities of Xcode and how it captures test failures. We’ll be looking at that soon, but for now let’s undo this work and keep pressing on.

8:06

The second simplest dependency we have is the analytics client. It has a single endpoint that returns an effect. One thing we could do is immediately perform an XCTFail and then return a .none effect, which is an effect that does nothing and completes immediately:

extension AnalyticsClient {
  static let failing = Self(
    track: { event in
      XCTFail()
      return .none
    }
  )
}

8:42

However, doing it this way means the mere act of calling .track on the analytics client will cause a test failure, even if that effect is never actually executed. This is a little weird. The analytics event wasn’t actually sent to the server, and so it feels like we probably shouldn’t error. Since effects are lazy and are not executed until run by the store, there could be some use cases where in your reducer you want to construct some of the effects upfront so that you can perform some complex logic to figure out which ones to ultimately return from the reducer.
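The laziness at play can be sketched with a toy effect type. ToyEffect here is a hypothetical stand-in for the library’s Effect, not the real API; it just captures the idea that effects do nothing until they are run.

```swift
// Toy stand-in for an Effect: a unit of work that does nothing until run.
struct ToyEffect {
    let work: () -> Void

    static func fireAndForget(_ work: @escaping () -> Void) -> ToyEffect {
        ToyEffect(work: work)
    }

    func run() { work() }
}

var didFail = false

// Constructing the effect does NOT execute the failure...
let failingEffect = ToyEffect.fireAndForget { didFail = true }
print(didFail)  // false

// ...only running it does, which is why failing inside fireAndForget is
// less strict than failing the moment track is called.
failingEffect.run()
print(didFail)  // true
```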

9:18

So, to make this a little less strict, we can open up the .fireAndForget effect like we do in the live client, and then fail inside there:

extension AnalyticsClient {
  static let failing = Self(
    track: { _ in
      .fireAndForget {
        XCTFail()
      }
    }
  )
}

9:35

In fact, we think this pattern of using failing effects will be pretty common, so we could even bake it into a helper on the Effect type:

extension Effect {
  static func failing(_ title: String) -> Self {
    .fireAndForget {
      XCTFail("\(title): Effect is unimplemented")
    }
  }
}

10:05

And then the failing analytics client can be:

extension AnalyticsClient {
  static let failing = Self(
    track: { _ in .failing("AnalyticsClient.track") }
  )
}

10:15

Now we can replace all occurrences of AnalyticsClient.unimplemented with AnalyticsClient.failing and the test suite should continue passing.

11:11

Before moving on, let’s also exercise this new failing dependency. Let’s track a new event in our reducer and make sure that it causes a test to fail. Say we want to track an event when the filter is changed. That’s easy enough to do now:

case let .filterPicked(filter):
  state.filter = filter
  return environment.analyticsClient
    .track(
      .init(
        name: "Filter Changed",
        properties: ["filter": "\(filter)"]
      )
    )
    .fireAndForget()

11:58

We would hope this causes a test to fail because we are now tracking a new event, and we should strive to have our tests be as exhaustive as possible. If we run tests we see that indeed testFilteredEdit fails:

store.assert(
  .send(.filterPicked(.completed)) {
    $0.filter = .completed
  },
  .send(
    .todo(
      id: state.todos[1].id,
      action: .textFieldChanged("Did this already")
    )
  ) {
    $0.todos[1].description = "Did this already"
  }
)

failed - AnalyticsClient.track: Effect is unimplemented

12:29

Again, it’s a little weird that the failure is on the store.assert line instead of the .send(.filterPicked) line, but still, this is pretty amazing. Our test suite is really keeping us in check. We can’t just go around tracking analytics willy-nilly without making our test suite confirm that events were tracked how we expected.

12:47

So let’s fix this test. We can start by using the test analytics client to buffer the events into a mutable array:

var events: [AnalyticsClient.Event] = []
let store = TestStore(
  initialState: state,
  reducer: appReducer,
  environment: AppEnvironment(
    analyticsClient: .test { self.events.append($0) },
    mainQueue: self.scheduler.eraseToAnyScheduler(),
    uuid: UUID.failing
  )
)

12:58

And then, after our store assertion, we can further assert which events were tracked:

XCTAssertEqual(
  events,
  [.init(name: "Filter Changed", properties: ["filter": "completed"])]
)

13:21

And this test passes!
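This record-then-assert pattern is easy to see in isolation. The Event and ToyAnalyticsClient types below are simplified stand-ins for the episode’s real types, just to sketch the shape of the technique:

```swift
// Simplified stand-ins for the episode's AnalyticsClient and its Event type.
struct Event: Equatable {
    let name: String
    let properties: [String: String]
}

struct ToyAnalyticsClient {
    let track: (Event) -> Void
}

// A "test" client that buffers every tracked event into a mutable array.
var recorded: [Event] = []
let client = ToyAnalyticsClient { recorded.append($0) }

// Code under test fires an event:
client.track(Event(name: "Filter Changed", properties: ["filter": "completed"]))

// Afterwards, assert exhaustively on EVERYTHING that was tracked:
print(recorded == [Event(name: "Filter Changed", properties: ["filter": "completed"])])  // true
```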

13:27

So again, we think it’s awesome what being exhaustive about your dependencies brings to tests. It helpfully points out all the places we are using dependencies that we did not expect, which means we either need to update our test to properly assert on that behavior, or maybe our reducer is doing something we didn’t expect it to.

Failing schedulers

13:49

Let’s move on to the last and most complicated dependency, the scheduler. We’d like to cook up a scheduler such that whenever it is used to schedule work it will fail the test.

14:16

To get something in place, we will copy and paste our fatalError-ing unimplemented scheduler, rename it to failing, and replace those fatalErrors with XCTFails:

extension Scheduler {
  public static var failing: AnySchedulerOf<Self> {
    AnyScheduler(
      minimumTolerance: { XCTFail() },
      now: { XCTFail() },
      scheduleImmediately: { _, _ in XCTFail() },
      delayed: { _, _, _, _ in XCTFail() },
      interval: { _, _, _, _, _ in XCTFail() }
    )
  }
}

14:39

This doesn’t compile yet because as we’ve seen with our other “failing” dependencies, we must return values from these endpoints.

14:50

The first one is pretty easy. We can fail and then return .zero, which is a static property that is guaranteed to exist by the Strideable protocol:

minimumTolerance: {
  XCTFail()
  return .zero
},

15:06

The second requirement is a little trickier to implement, so let’s leave it unimplemented for a moment.

15:11

The last three are also pretty easy. The first two of them are already compiling because they don’t return any data:

scheduleImmediately: { options, action in XCTFail() },
delayed: { delay, tolerance, options, action in XCTFail() },

15:16

In the interval endpoint we have to further return a Cancellable, but that’s easy enough to do:

interval: { delay, interval, tolerance, options, action in
  XCTFail()
  return AnyCancellable {}
}

15:25

The final endpoint, now, is a little trickier to handle. We can’t just throw an XCTFail in it because it also needs a return value:

now: { XCTFail("Scheduler is unimplemented") },

🛑 Cannot convert value of type ‘()’ to closure result type ‘Self.SchedulerTimeType’

15:32

The type of value we need to return comes from the associated type of the Scheduler protocol, which is completely generic and we have no idea what it could be. For most schedulers it’s quite easy to create this value, but some schedulers have associated types that aren’t even constructible, such as Combine’s ImmediateScheduler .

15:54

Our only hope of returning something from this endpoint is if it is supplied to us by whoever wants the failing scheduler. It’s a bit of a bummer to push that responsibility onto them, but this value doesn’t really matter: we’re going to fail the test suite anyway, so no one should care what is returned from these endpoints.
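The shape of the problem can be sketched with a hypothetical Clock protocol standing in for Scheduler and its SchedulerTimeType: because Time is fully generic, the failing implementation cannot invent a value, so the caller must hand one over.

```swift
// Hypothetical stand-in for the Scheduler protocol and its associated type.
protocol Clock {
    associatedtype Time
    var now: Time { get }
}

// We can't conjure a `Time` out of thin air, so the caller supplies `fixedNow`.
struct FailingClock<Time>: Clock {
    let fixedNow: Time
    let onFailure: (String) -> Void  // stand-in for XCTFail

    var now: Time {
        onFailure("Clock.now is unimplemented")
        return fixedNow
    }
}

var messages: [String] = []
let clock = FailingClock(fixedNow: 0, onFailure: { messages.append($0) })
_ = clock.now
print(messages)  // ["Clock.now is unimplemented"]
```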

16:12

So, let’s upgrade our static var to be a static func, and we’ll pass the now value to it:

extension Scheduler {
  public static func failing(
    now: SchedulerTimeType
  ) -> AnySchedulerOf<Self> {
    .init(
      minimumTolerance: {
        XCTFail()
        return .zero
      },
      now: {
        XCTFail()
        return now
      },
      scheduleImmediately: { options, action in XCTFail() },
      delayed: { delay, tolerance, options, action in XCTFail() },
      interval: { delay, interval, tolerance, options, action in
        XCTFail()
        return AnyCancellable {}
      }
    )
  }
}

16:29

Finally, let’s improve the ergonomics by supplying descriptive failure messages to each XCTFail:

extension Scheduler {
  public static func failing(
    now: SchedulerTimeType
  ) -> AnySchedulerOf<Self> {
    .init(
      minimumTolerance: {
        XCTFail("Scheduler.minimumTolerance is unimplemented")
        return .zero
      },
      now: {
        XCTFail("Scheduler.now is unimplemented")
        return now
      },
      scheduleImmediately: { options, action in
        XCTFail("Scheduler.scheduleImmediately is unimplemented")
      },
      delayed: { delay, tolerance, options, action in
        XCTFail("Scheduler.delayed is unimplemented")
      },
      interval: { delay, interval, tolerance, options, action in
        XCTFail("Scheduler.interval is unimplemented")
        return AnyCancellable {}
      }
    )
  }
}

16:54

It is now possible to construct a failing scheduler of any type, but it would be annoying to have to provide this now value every time we want one. Luckily we can create a helper defined on schedulers for which we know more about their associated types. For example, if we wanted a failing scheduler that mimics dispatch queues, we could define the following:

extension Scheduler
where
  SchedulerTimeType == DispatchQueue.SchedulerTimeType,
  SchedulerOptions == DispatchQueue.SchedulerOptions
{
  public static var failing: AnySchedulerOf<Self> {
    .failing(
      now: .init(.init(uptimeNanoseconds: 0))
    )
  }
}

18:07

The nested inits are because DispatchQueue.SchedulerTimeType is a wrapper around DispatchTime.

18:18

We could also do something similar for schedulers that mimic run loops and operation queues, but we’ll leave that as an exercise for the viewer.

18:29

With this defined we can now replace all .unimplemented schedulers with .failing .

18:49

And tests still pass, so we can be sure that we really aren’t using the scheduler anywhere we don’t expect it to be used.

18:53

Let’s make sure this works. Let’s purposely make a test fail by replacing its scheduler with the failing one. For example, let’s alter testCompleteTodo’s environment like so:

environment: AppEnvironment(
  analyticsClient: .failing,
  // mainQueue: self.scheduler.eraseToAnyScheduler(),
  mainQueue: .failing,
  uuid: UUID.failing
)

19:23

When we run tests we get 3 failures:

store.assert(
  .send(.todo(id: todos[0].id, action: .checkBoxToggled)) {
    $0.todos[0].isComplete = true
  },
  .do { self.scheduler.advance(by: 1) },
  .receive(.sortCompletedTodos) {
    $0.todos = [
      $0.todos[1],
      $0.todos[0],
    ]
  }
)

failed - Scheduler.delayed is unimplemented
failed - An effect returned for this action is still running. It must complete before the end of the test. …
failed - Expected to receive an action, but received none.

19:27

That may seem like a lot, but it really is describing 3 distinct failures in this test:

19:31

First, we see that the scheduler was accessed and so that immediately triggers a failure. Again it’s a bit of a bummer that the failure is on the store.assert line even though it’s the first .send line that actually triggered the failure, but we’ll fix that soon enough.

19:47

Next we have an error on the .send line that tells us this action caused an effect to be fired, and it did not complete by the time the assertion finished. This is an awesome capability of the Composable Architecture. It allows us to see exactly what started an effect, and it forces us to make sure it completes before we get a passing test. Without this we could accidentally have a bunch of in-flight effects with no idea what they are doing, and with no assertions on their behavior.

20:18

And finally we have a failure that says we expected to receive an action, but none was received. Again, this is a great exhaustive assertion to have. We asserted that an effect was going to feed an effect back into the store, but none came.

20:32

And all of these failures make sense because we substituted the test scheduler, which allows us to control the flow of time, with a failing scheduler, which never performs its work and instead emits test failures. This is why an effect is still in flight and never delivered a value back to the system.

20:51

So our failing scheduler really is working, and it will help us catch tests that use asynchrony when we do not expect them to. And this is a lot better than what we were doing with the unimplemented dependencies: now our tests can fail but continue executing.

Tracing failability

21:07

However, there is still something not ideal about some of these failures. The scheduler failure in particular just points at the store.assert line, but really it’s the .send(.todo) line that caused the scheduler to be used, and so we would hope the failure would point there. Without that we have no idea which action caused the dependency to be used, and we would have to hunt around. What if we were testing a very complex user flow by sending a bunch of actions to see how the state evolved over time? If we suddenly got a scheduler failure we would have to look in each one of those actions to figure out the culprit.

21:54

Why is it that the failure cannot be traced back to the .send line? Well, unfortunately this is due to a design decision we made for our assertion helper.

22:20

When we first designed the assert helper we constructed a little mini-DSL for describing user actions that are sent into the store, and effects that execute and send their data back into the store. There’s a type called TestStore.Step that describes all the different steps you can take during an assertion. You can .send actions to simulate something the user did, you can .receive actions to handle data that effects send back into the store, and you can use the .do step to insert little bits of imperative work in a big chain of steps. These steps are just simple values. Plain data. In fact, here’s the definition:

public struct Step {
  fileprivate let type: StepType
  fileprivate let file: StaticString
  fileprivate let line: UInt
}

23:02

We track the file and line of where the step was created because it allows us to tie some errors to those lines. For example, if you assert that you received an action from an effect but no such action was ever actually received by the store, then we can create a test failure right on that line using these properties.

23:28

However, our failing dependencies are getting used deep in our reducers and sometimes even deep in an asynchronous effect. We have no way of passing down file and line data all the way down to the parts of our code where we are using those dependencies.

23:44

All is not lost though. Recent versions of Xcode have greatly improved how well it captures the full stack trace of where a failure happened and what line caused it, even if it happens asynchronously.

23:57

We can even see this pretty easily by using Combine to fire off a bit of asynchronous work that immediately fails:

_ = Future<Never, Never> { _ in
  XCTFail()
}
.sink { _ in }

24:23

Xcode points out the exact line that failed, the one with the XCTFail , but then also points out that the Future is what caused that code to execute in the first place. Xcode had to do extra work to keep track of that, and it’s awesome that it preserved that information.

24:48

It wasn’t too long ago that Xcode didn’t do this for us. In fact, all we have to do is open up Xcode 11, which we would have been using this time last year, to see that running this test produces only the single failure message:

_ = Future<Never, Never> { _ in
  XCTFail()
}
.sink { _ in }

26:02

So, now that we see that Xcode is capable of preserving more information in the stack trace, how can we use it to our advantage? Well, right now we lose that information because we are holding onto a data description of the steps which then get interpreted later. We chose this design in the early days of the Composable Architecture because the TestStore was actually a re-implementation of the Store ’s functionality but with extra hooks put into place to make assertions.

26:33

However, recently we were able to figure out how to get the assertions to be powered by the store itself. That not only means tests operate in a more real-world environment, but it also opened us up to exploring some alternative API designs for the assertion helper. We came up with a slight alteration to the TestStore so that you can .send and .receive actions directly on the instance of the test store, not via the DSL, which is exactly what we need for Xcode to keep track of the stack trace.

27:07

We haven’t yet released this functionality for all users of the Composable Architecture because we wanted to convert some of our projects over to make sure everything works just as well, but we are now ready. I’m going to cherry-pick a commit that we have ready that brings in these changes:

$ git cherry-pick …

27:37

All the changes are 100% backwards compatible with the old assert DSL, so you don’t have to update anything when you update the library. We considered showing off this refactor to our viewers, but honestly it’s not very exciting.

28:08

But now, if you make a few small mechanical changes, you will get an equivalent test that Xcode can better understand. We can start by dropping the store.assert( and its closing paren ):

// store.assert(
  .send(.todo(id: todos[0].id, action: .checkBoxToggled)) {
    $0.todos[0].isComplete = true
  },
  .do { self.scheduler.advance(by: 1) },
  .receive(.sortCompletedTodos) {
    $0.todos = [
      $0.todos[1],
      $0.todos[0],
    ]
  }
// )

28:13

Then we prepend the .send and .receive with store, because they are now just methods on the TestStore, and we can drop the commas since the steps aren’t in a variadic list anymore:

// store.assert(
store.send(.todo(id: todos[0].id, action: .checkBoxToggled)) {
  $0.todos[0].isComplete = true
} // ,
.do { self.scheduler.advance(by: 1) },
store.receive(.sortCompletedTodos) {
  $0.todos = [
    $0.todos[1],
    $0.todos[0],
  ]
} // )

28:19

And then finally, we no longer need a .do step to insert imperative work between steps. We are already in an imperative context by virtue of the fact that we are just calling methods on an object, so we can comment out the .do block:

// store.assert(
store.send(.todo(id: todos[0].id, action: .checkBoxToggled)) {
  $0.todos[0].isComplete = true
} // ,
// .do {
self.scheduler.advance(by: 1)
// },
store.receive(.sortCompletedTodos) {
  $0.todos = [
    $0.todos[1],
    $0.todos[0],
  ]
} // )

28:27

Cleaning up this code a bit, we arrive at something that is actually a bit shorter:

store.send(.todo(id: todos[0].id, action: .checkBoxToggled)) {
  $0.todos[0].isComplete = true
}
self.scheduler.advance(by: 1)
store.receive(.sortCompletedTodos) {
  $0.todos = [
    $0.todos[1],
    $0.todos[0],
  ]
}

28:38

But the best part of this new style is that Xcode can properly attribute the scheduler failures to the line that caused them:

store.send(
  .todo(id: todos[0].id, action: .checkBoxToggled)
) {
  $0.todos[0].isComplete = true
}
self.scheduler.advance(by: 1)
store.receive(.sortCompletedTodos) {
  $0.todos = [
    $0.todos[1],
    $0.todos[0],
  ]
}

failed - An effect returned for this action is still running…
failed - Scheduler.delayed is unimplemented
failed - Scheduler.minimumTolerance is unimplemented
failed - Scheduler.now is unimplemented
failed - Expected to receive an action, but received none.

29:04

If we put back the test scheduler we’ll get passing tests:

environment: AppEnvironment(
  analyticsClient: .failing,
  mainQueue: self.scheduler.eraseToAnyScheduler(),
  uuid: UUID.failing
)

29:12

So this is looking pretty promising. Let’s try another test. If we go to the testDelete test, we’ll see that the environment is currently using only one non-failing dependency, a test analytics client:

environment: AppEnvironment(
  analyticsClient: .test { events.append($0) },
  mainQueue: .failing,
  uuid: UUID.failing
)

29:26

Let’s swap out the test dependency for the failing analytics client:

environment: AppEnvironment(
  analyticsClient: .failing,
  mainQueue: .failing,
  uuid: UUID.failing
)

29:32

When we run tests we get a TestStore assertion failure, but unfortunately it isn’t super helpful:

store.assert(
  .send(.delete([1])) {
    $0.todos.remove(at: 1)
  },
  .send(.editModeChanged(.active)) {
    $0.editMode = .active
  },
  .send(.delete([0])) {
    $0.todos.remove(at: 0)
  }
)

failed - AnalyticsClient.track: Effect is unimplemented

We would need to look up the implementations of each of these actions to see which one is using the analytics client.

29:41

We can quickly update the store.assert statement to use the new style of TestStore:

// store.assert(
store.send(.delete([1])) {
  $0.todos.remove(at: 1)
} // ,
store.send(.editModeChanged(.active)) {
  $0.editMode = .active
} // ,
store.send(.delete([0])) {
  $0.todos.remove(at: 0)
} // )

30:03

And now we get a much better test failure pointing directly to the line that used the analytics client:

store.send(.delete([1])) {
  $0.todos.remove(at: 1)
}

failed - AnalyticsClient.track: Effect is unimplemented

30:17

This makes it very clear that we either need to provide a non-failing analytics client for this test environment, or we need to investigate why this action is unexpectedly tracking an event.

30:32

Let’s get things passing again by putting back the test analytics client:

environment: AppEnvironment(
  analyticsClient: .test { events.append($0) },
  mainQueue: .failing,
  uuid: UUID.failing
)

30:39

So things are looking good, but there is one slight problem. Ideally we would not define these failing dependencies in the test module, but rather in the same module the dependency was defined. We want to do this because we may need access to this failing dependency from other test modules. Like say if this feature is embedded in a parent feature, and tests for that feature want to exercise how the two features interact with each other.

31:13

Let’s cut and paste all of our test dependencies into the main feature module.

31:30

In order for this code to compile we need to import XCTest:

import XCTest

31:38

But the moment you do that, the module stops compiling with some really strange symbol errors:

ld: warning: Could not find or use auto-linked library ‘XCTestSwiftSupport’
ld: warning: Could not find or use auto-linked framework ‘XCTest’

31:50

It seems that you cannot import XCTest into a non-test target. We could try wrapping all of this code in a canImport directive:

#if canImport(XCTest)
import XCTest
…
#endif

32:01

But even that doesn’t work. It seems you can import the module, it just doesn’t work if you do.

32:11

We could also tinker with build settings by setting ENABLE_TESTING_SEARCH_PATHS = YES , which will magically make builds for the simulator work, but it will still break builds for the device with the same symbol error.

32:36

Another thing we could do is create a dedicated test support module. Something that is only built for tests, depends on the module that holds the dependencies, and holds all of our failing instances. But that’s going to lead to a proliferation of modules. For each feature you’ll need the main feature module, a module for your tests, and a test support module. SPM makes these things a bit easier, but still that might be going too far.

33:03

All of this is very unfortunate, and honestly it feels like Xcode should have a better story for this kind of thing. It should be possible to ship test helpers right alongside regular production code. We’ve got a trick to make this work, though. In the Composable Architecture we dynamically load XCTest, which is what allows us to ship the TestStore assert helper right alongside the rest of the library.

33:28

And because having access to this helper is so important for building failing dependencies, we are open sourcing it as a separate library this week, so be on the lookout for that.

Next time: immediacy

33:37

So this is pretty amazing. We are now getting a ton of insight into our code base by embracing exhaustive dependencies, and in particular invoking XCTFail immediately for any dependencies that we do not expect to be called. This allows us to be instantly notified when one of our features starts accessing a dependency we don’t expect, and on the flip side allows us to introduce new dependencies to our feature and be instantly notified of which tests need to be updated.

34:23

But there’s still more to discover. Failing dependencies greatly improved the developer experience when writing tests, but there is still room for more improvement. When we write tests that deal with time, such as delaying or debouncing, we like to use a test scheduler because it allows us to deterministically control the flow of time. We even have a todo test that specifically asserts that when you complete a todo, wait half a second, then complete another todo, and then wait a full second, that the todos were not sorted until the full one and a half seconds passed. It could actually capture that intermediate moment where the second todo’s completion cancelled the sorting effect. And that’s incredibly powerful.

35:08

However, sometimes we deal with schedulers that do not involve the passage of time. They are just used to execute on specific queues, such as when you use the .subscribe(on:) or .receive(on:) operator. If we use the test scheduler for these situations we have to litter our tests with scheduler.advance() calls in order to push them a tick forward and execute their work. Sometimes you really do want that, like if you want to test some synchronous effects that run before an asynchronous effect. However, most of the time it’s an unnecessary annoyance, and we can definitely improve it.

35:41

Even better, by addressing this test annoyance we’ll actually unlock something really cool for SwiftUI previews. We’ll show how we can exercise more of our feature’s logic using static previews when typically you would have to resort to running the live preview.

35:56

Let’s start by demonstrating the problem that test schedulers can cause. We are going to resurrect the project we built for our “Designing Dependencies” series of episodes. In those episodes we built a moderately complex application that made use of an API client, a location manager, and a network monitor in order to implement a simple weather app. Let’s recap…next time!

References

Better Testing Bonanza · Brandon Williams & Stephen Celis · Mar 22, 2021
We open sourced a library for dynamically loading XCTFail so that you can ship test support code right alongside production code. We also released new versions of the Composable Architecture and Combine Schedulers that take advantage of the dynamic XCTFail to ship failing effects and schedulers, so that you can make your tests more exhaustive. Check out the details in this blog post.
https://www.pointfree.co/blog/posts/56-better-testing-bonanza

Designing Dependencies · Brandon Williams & Stephen Celis · Jul 27, 2020
We develop the idea of dependencies from the ground up in this collection of episodes: “Let’s take a moment to properly define what a dependency is and understand why they add so much complexity to our code. We will begin building a moderately complex application with three dependencies, and see how it complicates development, and what we can do about it.”
https://www.pointfree.co/collections/dependencies

A Tour of the Composable Architecture · Brandon Williams & Stephen Celis · May 4, 2020
When we open sourced the Composable Architecture we released a 4-part series of episodes showing how to build a moderately complex application from scratch with it. We covered state management, complex effects, testing, and more.
https://www.pointfree.co/collections/composable-architecture/a-tour-of-the-composable-architecture

Composable Architecture: Dependency Management · Brandon Williams & Stephen Celis · Feb 17, 2020
We made dependencies a first-class concern of the Composable Architecture by baking the notion of dependencies directly into the definition of its atomic unit: the reducer.
https://www.pointfree.co/collections/composable-architecture/dependency-management

Composable Architecture · Brandon Williams & Stephen Celis · May 4, 2020
The Composable Architecture is a library for building applications in a consistent and understandable way, with composition, testing, and ergonomics in mind.
http://github.com/pointfreeco/swift-composable-architecture

Downloads

Sample code: 0139-better-test-dependencies-pt2