EP 126 · Generalized Parsing · Nov 23, 2020 · Members

Video #126: Generalized Parsing: Part 3


Episode: Video #126 Date: Nov 23, 2020 Access: Members Only 🔒 URL: https://www.pointfree.co/episodes/ep126-generalized-parsing-part-3


Description

Generalizing the parser type has allowed us to parse more types of inputs, but that is only scratching the surface. It also unlocks many new things that were previously impossible to see, including the ability to parse a stream of inputs and stream its output, making our parsers much more performant.

Video

Cloudflare Stream video ID: 2fa1d83b5fb8eae2e9fffa6a9416f6f5 Local file: video_126_generalized-parsing-part-3.mp4 *(download with --video 126)*

Transcript

0:05

This is pretty impressive. We add just 70 additional lines of parsers to our base library and we have unlocked the ability to very succinctly and expressively parse incoming requests so that we can route them to different parts of our application or website. There are entire libraries out there devoted to this functionality, and yet here we have discovered it to be just a small corollary to having a powerful, generalized parsing library available to us.

0:31

The only thing missing from this routing micro-library is a few more combinators for parsing the headers and body of the request, as well as a way to turn a nebulous URLRequest value into one of these RequestData values, but we will leave both of those things as exercises for the viewer.

0:47

So I think this is pretty incredible. We have massively generalized our parsing library, all of the parsers we previously wrote still compile and run just like before, but we can now perform parsing tasks on all new types of input that would have previously been impossible.

1:07

But as cool as all of that is, we still want to ask the all-important question that we ask at the end of every series of episodes on Point-Free: what’s the point? Because although we have generalized parsing we have also made it a little more complex. Not only do we have to think a bit harder when it comes to writing a general parser and have to have a bit of knowledge of generics and the Collection protocol, but we also sometimes have to give the compiler some extra hints in order for it to figure out the types.

1:42

So, is it worth this extra complexity? Instead of generalizing parsers should we have spent a little more time creating more robust parsers that perhaps could have handled the complexities of parsing a raw URLRequest rather than inventing the RequestData type and trying to parse it?

1:58

And we of course think it’s absolutely worth this extra complexity. Generalizing the parser signature has allowed us to parse all new types of input, but that’s only the beginning. The very act of generalizing has opened up all new possibilities that were previously impossible to see. For example:

2:19

With zero changes to our core parser type we can create a new parser operator that allows us to incrementally parse a stream of data coming in from an outside source, such as standard input, and even incrementally stream output to an outside source, such as a file, standard output, or anything. This can dramatically improve the performance of parsers that need to work on large data sets for which it is unreasonable to bring a large chunk of data into memory and parse it into a large array of data for processing.

2:52

So that’s incredible, but it gets better. By generalizing we can now see an all new form of composition that sits right next to our beloved map, zip, and flatMap operators. This operator has the same shape that we have discovered on Point-Free time and time again, and it will be instrumental in allowing us to take a parser that works on a small piece of data and transform it into a parser that works on a larger, more complex piece of data.

3:16

And if that weren’t enough, things get even better. This new form of composition turns out to be the key to unlock a new tier of performance in our parsers. We can increase the performance of some of our parsers by as much as 5-10x with minimal changes to the parsers themselves, which makes their performance competitive with hand-rolled parsers and even beats the performance of Apple’s parser helpers, such as the Scanner type.

3:43

These are some really big claims we are making. We are saying that by simply generalizing the input type of our parsers we can unlock the ability to stream input into our parsers, uncover new forms of composition, and immediately improve the performance of our parsers, basically for free.

Parsing in a memory-efficient manner

4:03

So, let’s demonstrate these amazing feats, starting with streaming. As we mentioned before, it can be very inefficient to parse a large set of data for two reasons: first, we need to bring the entire input string into memory, which we wouldn’t want to do if we are parsing tens or hundreds of megabytes of data, and second, we need to process the whole string at once and produce a huge piece of output data.

4:29

So we not only want to be able to efficiently stream the input data into our parser to do work incrementally, but we may also want to stream its output somewhere, such as standard out or a file. Let’s tackle each of these problems separately.

4:43

First, how can we represent the concept of a “stream of inputs” in Swift? For us a stream will be some kind of value such that we can ask it for its next chunk of data, and it may need to wait for a while before that data is ready, but eventually it will either return some data or it will return nil to signify that the stream has closed. Perhaps the most prototypical example of this is standard input, and it’s even something we briefly discussed when we developed a CLI tool for parsing the test logs from Swift in order to better present test failures and successes. In that episode we showed that we could gather up data from standard input like this:

    while let line = readLine() {
      // process line
    }

The readLine() function is quite special:

    readLine as () -> String?

5:12

It will block the current thread and wait until it receives something from standard input. Once it gets a line it will return it, and once an end-of-file is reached it will return nil . This allows us to incrementally get input from an external source.

5:26

Let’s give this function a spin in a new project just so that we are very clear on how it works. I’m going to switch to a new project and initialize a new Swift package for an executable:

    $ mkdir stdin
    $ cd stdin
    $ swift package init --type executable
    $ open Package.swift

6:02

In main.swift we can loop over standard input and print out whenever we receive something:

    print("Starting...")
    while let line = readLine() {
      print("You typed: \(line)")
    }
    print("Done!")

6:29

And we can even run this directly in Xcode because the debug area actually acts as a mini terminal console:

    Starting...
    Hi
    You typed: Hi
    Hello
    You typed: Hello
    Goodbye
    You typed: Goodbye

6:53

We can even simulate end-of-file for standard input by typing control+D:

    Done!
    Program ended with exit code: 0

7:02

So standard input is definitely very stream-like. It’s some kind of handle that allows us to ask it for data, and once it has data it will send it to us.

7:11

The Swift standard library actually provides an abstraction for this concept, and it is known as IteratorProtocol. The standard library describes this protocol as “A type that supplies the values of a sequence one at a time,” and that seems like what we are looking for. In fact, it has one single requirement, which is for conformers to provide a function of the form:

    mutating func next() -> Element?

7:36

This is basically what readLine looks like. You can ask it for the next value, and it will block and wait until that value is ready for you, and if the iterator has been closed it will return nil .

7:49

We can create iterators by defining new types that conform to the protocol, such as this one that simply emits the natural numbers in order, starting from zero, when you ask for them:

    struct NaturalNumbers: IteratorProtocol {
      var count = 0
      mutating func next() -> Int? {
        defer { self.count += 1 }
        return self.count
      }
    }

8:45

This represents an iterator that never finishes since it never returns nil , and so in some sense represents an infinite stream of integers. This isn’t possible to represent with arrays because arrays must have a finite length.
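A quick way to convince yourself of this behavior is to drive the iterator by hand. Here is a sketch (repeating the NaturalNumbers definition from above so the snippet is self-contained) that pulls a few values out one at a time:

```swift
struct NaturalNumbers: IteratorProtocol {
  var count = 0
  mutating func next() -> Int? {
    defer { self.count += 1 }
    return self.count
  }
}

var naturals = NaturalNumbers()
print(naturals.next()!)  // 0
print(naturals.next()!)  // 1

// Standard library machinery composes with it too: take a finite prefix
// of the infinite stream without ever materializing the whole thing.
let firstFive = (0..<5).compactMap { _ in naturals.next() }
print(firstFive)  // [2, 3, 4, 5, 6]
```

Each call to next() advances the iterator’s internal state, which is exactly the “pull one value at a time” behavior a stream needs.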

8:58

But IteratorProtocol isn’t just handy for dealing with infinite streams. We can also deal with very large data sets that are only brought into memory in small chunks. For example, an iterator that represents the first one billion integers:

    struct OneBillionNumbers: IteratorProtocol {
      var count = 0
      mutating func next() -> Int? {
        defer { self.count += 1 }
        return self.count <= 1_000_000_000 ? self.count : nil
      }
    }

9:37

It would be a very bad idea to bring one billion integers into memory at once like this:

    Array(1...1_000_000_000)

9:50

But the iterator allows us to still get a handle on the concept of a billion integers without actually having it in memory all at once.
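For instance, here is a small sketch (repeating the OneBillionNumbers definition so it is self-contained) that peels off just the first handful of values; at any moment only a single Int of state lives in the iterator, never a billion-element array:

```swift
struct OneBillionNumbers: IteratorProtocol {
  var count = 0
  mutating func next() -> Int? {
    defer { self.count += 1 }
    return self.count <= 1_000_000_000 ? self.count : nil
  }
}

// Sum the first ten values without allocating the full collection.
var numbers = OneBillionNumbers()
var sum = 0
for _ in 0..<10 {
  guard let n = numbers.next() else { break }
  sum += n
}
print(sum)  // 0 + 1 + ... + 9 = 45
```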

9:58

We don’t always have to create a whole new type just to represent an iterator. Swift gives us a few convenience functions and types that can make this much more succinct. For example, to represent all natural numbers we can just use the sequence function:

    var naturals = sequence(first: 0, next: { $0 + 1 })

10:36

And similarly a billion numbers can be represented as:

    var oneBillion = sequence(
      first: 0,
      next: { $0 < 1_000_000_000 ? $0 + 1 : nil }
    )

11:01

More generally, there’s something known as AnyIterator, which is a type-erased wrapper for the functionality of the iterator protocol. It allows you to create an iterator by simply providing a closure to implement the “next” functionality:

    AnyIterator {
      // produce next value
    }

11:34

For example, we could have an iterator that just generates a bunch of random integers:

    let randomNumbers = AnyIterator { Int.random(in: 1 ... .max) }

11:52

More interestingly, AnyIterator makes it very easy for us to create an iterator that wraps the standard input function readLine:

    let stdin = AnyIterator { readLine() }

12:08

That’s all it takes.

12:09

So amazingly the IteratorProtocol really is a great embodiment of the concept of a “stream” of input data. It will provide us data when it is ready, and it ends when it returns nil .
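To make that “ends when it returns nil” behavior concrete, here is a small sketch of a finite stream that simulates standard input. The chunk values are just illustrative stand-ins for real stdin data:

```swift
// A finite stream: it vends three "lines" and then returns nil,
// signaling end-of-file, just like readLine does.
var lines = AnyIterator(["first\n", "second\n", "third\n"].makeIterator())

var received: [String] = []
while let line = lines.next() {
  received.append(line)
}
print(received.count)  // 3

// Once exhausted, the stream stays closed.
print(lines.next() as Any)  // nil
```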

12:20

So now that we have the concept, how can we make parsers act on a stream of input data rather than a whole input at once?

13:14

Well, let’s look at our test log parser again and see what exactly we expect from this behavior. We have a big chunk of logs like the following:

    Test Suite 'All tests' started at 2020-08-19 12:36:12.062
    Test Suite 'VoiceMemosTests.xctest' started at 2020-08-19 12:36:12.062
    Test Suite 'VoiceMemosTests' started at 2020-08-19 12:36:12.062
    Test Case '-[VoiceMemosTests.VoiceMemosTests testDeleteMemo]' started.
    Test Case '-[VoiceMemosTests.VoiceMemosTests testDeleteMemo]' passed (0.004 seconds).
    Test Case '-[VoiceMemosTests.VoiceMemosTests testDeleteMemoWhilePlaying]' started.
    Test Case '-[VoiceMemosTests.VoiceMemosTests testDeleteMemoWhilePlaying]' passed (0.002 seconds).
    Test Case '-[VoiceMemosTests.VoiceMemosTests testPermissionDenied]' started.
    /Users/point-free/projects/swift-composable-architecture/Examples/VoiceMemos/VoiceMemosTests/VoiceMemosTests.swift:107: error: -[VoiceMemosTests.VoiceMemosTests testPermissionDenied] : XCTAssertTrue failed
    Test Case '-[VoiceMemosTests.VoiceMemosTests testPermissionDenied]' failed (0.003 seconds).
    Test Case '-[VoiceMemosTests.VoiceMemosTests testPlayMemoFailure]' started.
    Test Case '-[VoiceMemosTests.VoiceMemosTests testPlayMemoFailure]' passed (0.002 seconds).
    Test Case '-[VoiceMemosTests.VoiceMemosTests testPlayMemoHappyPath]' started.
    Test Case '-[VoiceMemosTests.VoiceMemosTests testPlayMemoHappyPath]' passed (0.002 seconds).
    Test Case '-[VoiceMemosTests.VoiceMemosTests testRecordMemoFailure]' started.
    /Users/point-free/projects/swift-composable-architecture/Examples/VoiceMemos/VoiceMemosTests/VoiceMemosTests.swift:144: error: -[VoiceMemosTests.VoiceMemosTests testRecordMemoFailure] : State change does not match expectation: …
          VoiceMemosState(
        −   alert: nil,
        +   alert: AlertState<VoiceMemosAction>(
        +     title: "Voice memo recording failed.",
        +     message: nil,
        +     primaryButton: nil,
        +     secondaryButton: nil
        +   ),
            audioRecorderPermission: RecorderPermission.allowed,
            currentRecording: nil,
            voiceMemos: [
            ]
          )
        (Expected: −, Actual: +)
    Test Case '-[VoiceMemosTests.VoiceMemosTests testRecordMemoFailure]' failed (0.009 seconds).
    Test Case '-[VoiceMemosTests.VoiceMemosTests testRecordMemoHappyPath]' started.
    /Users/point-free/projects/swift-composable-architecture/Examples/VoiceMemos/VoiceMemosTests/VoiceMemosTests.swift:56: error: -[VoiceMemosTests.VoiceMemosTests testRecordMemoHappyPath] : State change does not match expectation: …
          VoiceMemosState(
            alert: nil,
            audioRecorderPermission: RecorderPermission.allowed,
            currentRecording: CurrentRecording(
              date: 2001-01-01T00:00:00Z,
        −     duration: 3.0,
        +     duration: 2.0,
              mode: Mode.recording,
              url: file:///tmp/DEADBEEF-DEAD-BEEF-DEAD-BEEFDEADBEEF.m4a
            ),
            voiceMemos: [
            ]
          )
        (Expected: −, Actual: +)
    Test Case '-[VoiceMemosTests.VoiceMemosTests testRecordMemoHappyPath]' failed (0.006 seconds).
    Test Case '-[VoiceMemosTests.VoiceMemosTests testStopMemo]' started.
    Test Case '-[VoiceMemosTests.VoiceMemosTests testStopMemo]' passed (0.001 seconds).
    Test Suite 'VoiceMemosTests' failed at 2020-08-19 12:36:12.094.
         Executed 8 tests, with 3 failures (0 unexpected) in 0.029 (0.032) seconds
    Test Suite 'VoiceMemosTests.xctest' failed at 2020-08-19 12:36:12.094.
         Executed 8 tests, with 3 failures (0 unexpected) in 0.029 (0.032) seconds
    Test Suite 'All tests' failed at 2020-08-19 12:36:12.095.
         Executed 8 tests, with 3 failures (0 unexpected) in 0.029 (0.033) seconds
    2020-08-19 12:36:19.538 xcodebuild[45126:3958202] [MT] IDETestOperationsObserverDebug: 14.165 elapsed -- Testing started completed.
    2020-08-19 12:36:19.538 xcodebuild[45126:3958202] [MT] IDETestOperationsObserverDebug: 0.000 sec, +0.000 sec -- start
    2020-08-19 12:36:19.538 xcodebuild[45126:3958202] [MT] IDETestOperationsObserverDebug: 14.165 sec, +14.165 sec -- end
    Test session results, code coverage, and logs:
        /Users/point-free/Library/Developer/Xcode/DerivedData/ComposableArchitecture-fnpkwoynrpjrkrfemkkhfdzooaes/Logs/Test/Test-VoiceMemos-2020.08.19_12-35-57--0400.xcresult
    Failing tests:
        VoiceMemosTests:
            VoiceMemosTests.testPermissionDenied()
            VoiceMemosTests.testRecordMemoFailure()
            VoiceMemosTests.testRecordMemoHappyPath()

13:32

Imagine we are being fed this set of logs one line at a time from standard input. This was in fact what we did at the end of our episode on parsing test logs, where we actually ran the tests for one of the example apps in the Composable Architecture repo, and then piped the output of that command directly into our test logs pretty printer:

    xcodebuild test -scheme VoiceMemos -destination platform="iOS Simulator,name=iPhone 11 Pro Max" 2>&1 | ~/pretty-test-logs/.build/x86_64-apple-macosx/debug/pretty-test-logs

14:05

However, we currently aren’t handling those logs in small chunks. Instead we buffer all of the logs into one massive string:

    var stdinLogs = ""
    while let line = readLine() {
      stdinLogs.append(line)
      stdinLogs.append("\n")
    }

14:25

And then parse that massive string into a massive array, and print out the formatted results:

    testResults.run(stdinLogs[...]).match?.forEach { result in
      print(format(result: result))
    }

15:08

What if instead we could process the lines from standard input as they come in? Once enough lines have come in for us to successfully parse a test result we can consume that part of the input and wait for more lines to come in. That could really help cut down on memory usage since we wouldn’t have to load all of the logs into memory at once.

15:26

We could do this in an ad-hoc way right in this file. Rather than waiting until we buffer all of standard input into a string we could try parsing each time we append a line to the buffer, and that would cause us to incrementally consume bits of the logs as we successfully parse results from it.

15:43

To do this we can create a mutable array of test results, and each time we append to our stdinLogs string we can try running the parser. If it succeeds we will add that result to the array:

    var stdinLogs: Substring = ""
    var results: [TestResult] = []
    while let line = readLine() {
      stdinLogs.append(contentsOf: line)
      stdinLogs.append("\n")
      if let output = testResult.run(&stdinLogs) {
        results.append(output)
      }
    }
    results.forEach { result in
      print(format(result: result))
    }

A parser of streaming input

16:38

So this essentially does what we want. We no longer keep this huge string in memory and parse it all at once. But right now the solution is very ad-hoc, and basically only works for this one single use case.

16:52

What we’d like is to package up this functionality into a new parser combinator so that you can transform any parser into one that instantly works on a stream of inputs. We can even see the shape this parser combinator would have in the code above.

17:16

We are running the testResult parser on the stream of standard input, and it has the type:

    let testResult: Parser<Substring, TestResult>

17:42

And then with a little upfront work of keeping track of an accumulation of standard input logs and an array of results we have essentially come up with a parser of the form:

    Parser<AnyIterator<Substring>, [TestResult]>

This is a parser that can consume a stream of substrings as they come in, ultimately producing a big array of results.

18:12

If we generalize this we see that we want a combinator that has the following form:

    (Parser<Input, Output>) -> Parser<AnyIterator<Input>, [Output]>

18:23

So let’s try to implement this combinator!

18:26

We’ll extend the Parser type and implement this as a computed property called stream:

    extension Parser {
      var stream: Parser<AnyIterator<Input>, [Output]> {
        .init { stream in

        }
      }
    }

18:48

We want to implement this property in much the same way that we did for our ad-hoc standard input parser, but we want to do it in more generality. So let’s start filling in some parts and see just how far we can take this.

19:02

Our readLine parser started with these two lines in order to set up a buffer of input that was accumulated from the stream, as well as an array of outputs that we would ultimately return:

    var stdinLogs: Substring = ""
    var results: [TestResult] = []

19:20

The key here is that stdinLogs was able to be initialized with an empty substring, and so it seems that in our more general parser we need to have the concept of creating an empty version of our type.

19:43

Since the Input generic is completely unconstrained we don’t have a way to actually do that yet, but let’s just put some code in place even though it won’t compile just yet:

    var buffer: Input // TODO: = Input()
    var outputs: [Output] = []

20:19

Next, in our readLine parser we performed a while loop over the input stream so that we could process each line as it came in and then bail out once it returned nil, and that basically looks the same in the general parser:

    while let chunk = stream.next() {

    }

20:33

Then inside this loop we want to accumulate the chunks from the stream into the buffer. However, because Input is completely generic we have no idea if Input is even capable of such a thing. What if Input is an integer, or a User model? What does it mean to accumulate those things together? So again it seems like we need some kind of constraint on Input to give it those capabilities, but to keep things moving let’s just make a note of what we want:

    // TODO: append chunk to buffer

21:17

After accumulating the new chunk into the buffer we can try to run our parser on the buffer to see if it finds anything. If it does we can append it to the output:

    while let output = self.run(&buffer) {
      outputs.append(output)
    }

22:23

And finally we need to return the array of outputs we have amassed:

    return outputs

22:28

So this is nearly a working version of a generic parser for dealing with streams, but we just have these two to-dos to fix. We need to find a suitable constraint on Input that gives us the ability to create an empty version of the input and the ability to append two inputs together. We could create a new type to express this set of functionality, and in fact we will be doing that in the future, but for right now there is actually something we can leverage from the Swift standard library.

23:08

There is a protocol called RangeReplaceableCollection that inherits from Collection but adds the ability to create an empty collection and to append one collection to another. It has a few additional capabilities that we don’t need, and so this protocol isn’t the most general constraint we could use here, but it will get us very far without having to do any additional work.
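Concretely, the two missing capabilities fall straight out of this protocol’s requirements, and Substring conforms, which is what lets it serve as our parser’s input buffer. A quick sketch (the string values are purely illustrative):

```swift
// RangeReplaceableCollection gives us an empty initializer...
var buffer = Substring()

// ...and the ability to append the contents of another collection.
buffer.append(contentsOf: "Hello, ")
buffer.append(contentsOf: "world!"[...])

print(buffer)  // Hello, world!
```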

23:42

So, if we constrain Input to be a RangeReplaceableCollection we will be able to finish off this implementation:

    extension Parser where Input: RangeReplaceableCollection {
      var stream: Parser<AnyIterator<Input>, [Output]> {
        .init { stream in
          var buffer = Input()
          var outputs: [Output] = []
          while let chunk = stream.next() {
            buffer.append(contentsOf: chunk)
            while let output = self.run(&buffer) {
              outputs.append(output)
            }
          }
          return outputs
        }
      }
    }

24:16

This parser combinator now encapsulates all of that ad-hoc work we were doing with the while loop, so let’s try to replace it. We can take the testResult parser, which is capable of parsing a single result from the beginning of an input string, and invoke .stream on it:

    testResult
      .stream // Parser<AnyIterator<Substring>, [TestResult]>
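To see .stream working outside the test-log setting, here is a self-contained sketch. The Parser struct below is our own minimal stand-in for the episode’s type (a function that consumes a prefix of its input and optionally produces output), and the comma-terminated integer parser is purely illustrative:

```swift
// Minimal stand-in for the episode's Parser type.
struct Parser<Input, Output> {
  let run: (inout Input) -> Output?
  init(_ run: @escaping (inout Input) -> Output?) { self.run = run }
}

extension Parser where Input: RangeReplaceableCollection {
  // The combinator from the episode: buffer incoming chunks and parse
  // as many outputs as possible each time a chunk arrives.
  var stream: Parser<AnyIterator<Input>, [Output]> {
    .init { stream in
      var buffer = Input()
      var outputs: [Output] = []
      while let chunk = stream.next() {
        buffer.append(contentsOf: chunk)
        while let output = self.run(&buffer) { outputs.append(output) }
      }
      return outputs
    }
  }
}

// A toy parser that consumes one comma-terminated integer, e.g. "12," -> 12.
let int = Parser<Substring, Int> { input in
  let digits = input.prefix(while: { $0.isNumber })
  guard !digits.isEmpty, let n = Int(digits) else { return nil }
  let rest = input.dropFirst(digits.count)
  guard rest.first == "," else { return nil }
  input = rest.dropFirst()
  return n
}

// Feed the input in awkward chunks that split a token ("23") in half.
var input = AnyIterator(["1,2", "3,4,", "5,"].map { $0[...] }.makeIterator())
print(int.stream.run(&input) ?? [])  // [1, 23, 4, 5]
```

Note how "23" is split across two chunks: the combinator simply leaves the partial token in the buffer until enough input has arrived to parse it.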

24:46

Instantly we have a parser that can consume an input stream of substrings and then produce an array of results out the other side. So we’d like to run this parser, but we need to feed it an AnyIterator<Substring>. We already have this at our disposal because a few moments ago we created an iterator that simply reads from standard input:

    let stdin = AnyIterator { readLine() }

25:10

But we have to make a couple small changes to this iterator. First of all, it is an AnyIterator of String, not Substring. But no worries, we can just subscript into the string returned by readLine():

    var stdin = AnyIterator { readLine()?[...] }

25:38

And by invoking readLine directly we are losing the work in which we were appending newlines to our buffer, but luckily readLine has an optional parameter that lets us tell it not to strip newlines:

    var stdin = AnyIterator { readLine(strippingNewline: false)?[...] }

26:03

And now we can run our parser on that iterator:

    testResult
      .stream
      .run(stdin)

26:12

That will give us an optional array of results, which we can forEach over in order to print out the formatted test results:

    testResult
      .stream
      .run(stdin)
      .match?
      .forEach { print(format(result: $0)) }

Streaming a parser’s output

26:26

This extremely compact code takes care of everything the messy while loop code does, which means we can take any existing parser and instantly turn it into an efficient machine for processing an incoming stream of inputs in order to produce an array of outputs. And this is all without making any changes to the core parser type. This is the power of having a properly generalized core unit to build a library off of. We get to implement future features without muddying the core. Even better, this is a combinator you could have easily implemented yourself outside the library. You don’t have to wait for us to build this functionality in order for you to get access to it.

27:25

But it gets even better. We’ve fixed one inefficiency of the original parser, which is that it had to bring the entire input string into memory before it could start parsing, but there’s another inefficiency. We are parsing all of the results into a big array just so that we can then loop over the array and print out the formatted results. What if instead we could print out the results as soon as we process one from the stream? This would not only mean we wouldn’t have to hold a big array of results in memory before printing, but it would also mean that we could output messages live as we process them rather than in one big batch at the end. So this will improve both the efficiency and the user experience of our tool.

28:08

Amazingly, we can also add this functionality with no changes to the core parser type. However, this will not be implemented as an operator that returns a new parser. This is because it’s a bit different from what we have done previously. This operation works in tandem with streaming input so that each time we receive a chunk of parsed results we stream that to some output, whether it be standard output, a file, or what have you. That process of streaming to an external source is a side effect, and if we bake that effect into a parser via a combinator we will never know when a parser is secretly streaming things to the outside world. That can be very confusing and fraught.

28:47

Luckily we don’t really need it to be a parser combinator. Instead, it’s more akin to run, where we want to run the parser and just stream its output to an external source. This motivates us to define it as a method on Parser that does not return anything, and performs the streaming logic on the inside:

    extension Parser where Input: RangeReplaceableCollection {
      func run(
        input: inout AnyIterator<Input>,
        output streamOut: @escaping (Output) -> Void
      ) {
        fatalError("Not yet implemented")
      }
    }

30:07

If we had this operator implemented then we could run our test result parser like this:

    testResult
      .run(
        input: &stdin,
        output: { print(format(result: $0)) }
      )

30:41

This would simultaneously stream the inputs to the parser and stream the outputs, making this much more efficient than our previous attempts.

30:49

The implementation of .run(input:output:) is going to be similar to .stream since we need to process the iterator, accumulate a buffer, and run the parser, but crucially we no longer need to build up a large array of the outputs. Instead, as soon as we get a result from parsing we should just immediately send it to the output stream:

    extension Parser where Input: RangeReplaceableCollection {
      func run(
        input: inout AnyIterator<Input>,
        output streamOut: @escaping (Output) -> Void
      ) {
        var buffer = Input()
        while let chunk = input.next() {
          buffer.append(contentsOf: chunk)
          while let output = self.run(&buffer) {
            streamOut(output)
          }
        }
      }
    }
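As a sanity check, here is a self-contained sketch with a minimal Parser stand-in of our own (the newline-terminated line parser is purely illustrative). Each value is handed to the output closure the moment enough input has arrived, even when lines are split across chunk boundaries:

```swift
// Minimal stand-in for the episode's Parser type.
struct Parser<Input, Output> {
  let run: (inout Input) -> Output?
  init(_ run: @escaping (inout Input) -> Output?) { self.run = run }
}

extension Parser where Input: RangeReplaceableCollection {
  // Drain the input stream, handing each parsed value to the output
  // closure immediately instead of accumulating an array.
  func run(
    input: inout AnyIterator<Input>,
    output streamOut: (Output) -> Void
  ) {
    var buffer = Input()
    while let chunk = input.next() {
      buffer.append(contentsOf: chunk)
      while let output = self.run(&buffer) { streamOut(output) }
    }
  }
}

// A toy parser that consumes one newline-terminated line from a Substring.
let line = Parser<Substring, Substring> { input in
  guard let newline = input.firstIndex(of: "\n") else { return nil }
  let match = input[..<newline]
  input.removeSubrange(...newline)
  return match
}

// Lines arrive split across chunk boundaries, as they would from a stream.
var chunks = AnyIterator(["hel", "lo\nwor", "ld\n"].map { $0[...] }.makeIterator())
line.run(input: &chunks) { print("got: \($0)") }
// got: hello
// got: world
```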

31:35

And that’s all it takes.

31:40

Now you may not like how much the body of this function looks like the body of .stream , and perhaps you think we can make use of .stream inside .run(input:output:) . However, there is a big enough distinction between these two operators that we don’t think it’s worth trying to share their logic.

31:55

The distinction is that .stream wants to build up a big array of results from the input stream so that it can deliver those results all at once. This was inefficient for our use case since we were ultimately just printing those results to standard output, but there are certainly other use cases that may want that array of results so that they can do more with it. On the other hand, .run has absolutely no need for that array of results. As soon as it parses something it hands it off to the output stream and then forgets about it. That’s what allows it to be a little more efficient than .stream, and so this is reason enough for us to implement the bodies of these operators without sharing any code.

Correction

@GeekAndDad points out that even though we don’t want to define .run in terms of .stream, we can in fact eliminate this duplication if we write .stream in terms of .run:

    extension Parser where Input: RangeReplaceableCollection {
      var stream: Parser<AnyIterator<Input>, [Output]> {
        .init { stream in
          var outputs: [Output] = []
          self.run(
            input: &stream,
            output: { outputs.append($0) }
          )
          return outputs
        }
      }
    }

32:34

These two lines would seem to pack quite the punch, but let’s make sure they work. Here we have the Composable Architecture checked out as a repo, so let’s run our tool against its tests. We can build our tool and then feed the results of a test run to it:

    $ swift build
    [4/4] Linking stdin
    $ cd ../swift-composable-architecture
    $ swift test 2>&1 | ~/projects/stdin/.build/debug/stdin

33:48

And sure enough, formatted test results appear to be streaming in real time! And to also make sure failures are being formatted correctly, we can purposely fail a test.

Conclusion

34:35

So this is pretty amazing. We were able to introduce two simple operators to unlock the ability for our parsers to process streams of inputs and to stream their output to an external source, which can really improve the performance of applications that need to parse many, many megabytes of data without having to load it all into memory at once. Even more amazing, these new streaming operators are still totally testable. We have some exercises for you to explore, but you can essentially feed little chunks of input into the parser and verify that it processes the chunks correctly.

35:35

So that’s just one of the three big things that we said generalizing parsing has given us access to. The other two were new forms of composition and the ability to write more performant parsers. We had every intention of showing off all three of these topics right now so that we could truly demonstrate “the point” of generalized parsing, but each topic turns out to be so interesting on its own that we want to make sure we cover them properly.

35:55

That’s why we are going to end this episode right now, and next week we will start exploring performance and composition of our newly generalized parsing library. Until next time!

Downloads

Sample code: 0126-generalized-parsing-pt3