Now that we have end-to-end tests that run our command against a single file, both for one that exists and for one that doesn't, in this lesson we're going to look at adding a test for multiple files. This, however, is going to be a little trickier than our single-file tests, because our code uses concurrency.
To show why this is harder, if I run the following go run command, passing in our words.txt, lines.txt, and utf8.txt, you can see that the output is different each time I run it. This is because we're using concurrency: the order in which our files are counted isn't deterministic, since some goroutines finish before others.
Therefore, in order to test this, we're going to need to handle the fact that this output is non-deterministic and can come back in a number of different orders. There are actually a couple of ways we can solve this. The first is to make our test robust enough to handle the non-determinism, and the second is to change our code so that the output is deterministic, returned in the same order as the files we pass in.
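To see why the ordering varies, here is a minimal, self-contained sketch of the pattern described above: one goroutine per file, each sending its result line to a shared channel. The file names and counts are made up for illustration and are not the course's actual code.

```go
package main

import (
	"fmt"
	"sync"
)

// collect "counts" each file in its own goroutine and gathers the
// result lines from a channel. Because goroutine scheduling is not
// deterministic, the order of the returned slice can differ per run.
func collect() []string {
	files := map[string]int{"words.txt": 5, "lines.txt": 10, "utf8.txt": 3}
	out := make(chan string, len(files))
	var wg sync.WaitGroup
	for name, n := range files {
		wg.Add(1)
		go func(name string, n int) {
			defer wg.Done()
			out <- fmt.Sprintf("%d %s", n, name)
		}(name, n)
	}
	wg.Wait()
	close(out)
	var got []string
	for line := range out {
		got = append(got, line)
	}
	return got
}

func main() {
	// Run this a few times: the three lines can appear in any order.
	for _, line := range collect() {
		fmt.Println(line)
	}
}
```

The set of lines is always the same; only their order varies, which is exactly the property the test below exploits.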
My personal preference in this case would be to change our code to have deterministic output, meaning the order in which we print our counts is the same order in which we pass our files in, which coincidentally is the same output we had on our last run: words.txt first, lines.txt second, and utf8.txt third. We'll take a look at how to make our output deterministic in the next lesson.
For this lesson, however, we're instead going to take the approach of writing a robust test that can handle non-deterministic behaviour, because it does happen from time to time, and knowing how to handle it is a good skill to have. To begin, let's create a new test file inside our end-to-end tests called multi_file_test.go (remember that go test only picks up files ending in _test.go). Then we'll set the correct package name. As for the actual test function, let's name it TestMultipleFiles, give it the t *testing.T parameter, and make sure to import the testing package.
Then, to make this test a little easier, I'm going to create a new helper function inside main_test.go, underneath our previous getCommand helper. This helper function is going to be called createFile. It will take an input string, which will be the contents of the file, declared as content string, and it will return an *os.File as well as an error in case something goes wrong.
This helper function will let us easily create multiple files, so we don't have to go through the entire file-creation process several times inside our multiple-file test. Let's add the file creation logic, which is the same as we've seen before in this module: call the os.CreateTemp function, passing in emptyDir and our pattern string of counter_test with an asterisk, so that a random name is generated. Next, we do our usual if err != nil check, returning nil and an error letting the caller know we could not create a file, wrapping the underlying error in the message.
Next, we call the WriteString method on the file, passing in the contents we want to write, and do another if err != nil check, returning nil and fmt.Errorf("failed to write contents: %w", err), wrapping the error as well. Then we want to make sure we close the file so it can be opened by another process. We'll capture the error here too, so we can return it to the caller in the event we were unable to close the file. Lastly, we return our file and a nil error in the case that everything was successful. With our new helper function created, let's head back over to the multi-file test and use it to create three individual files.
The first of these can be fileA, an *os.File, created with createFile, passing in the string 1 2 3 4 5 with a newline at the end. We'll do an if err != nil check, and if an error does exist we'll call t.Fatal, ensuring we exit the test early, with a message along the lines of could not create file A.
Then we do the same thing for two other files, so that we have multiple files to test with. We'll create fileB with createFile, passing foo, bar, baz, with some double newlines just to make it a little different. And lastly fileC, again capturing the error from createFile, which we'll leave as an empty file to give us a control case.
Next, let's make sure we remove these files with a defer statement after we create them, just to keep our code nice and tidy: defer os.Remove(fileC.Name()), and the same for the other two. This is actually one area where creating all of our files inside a single directory would be advantageous: rather than removing each individual file with its own deferred statement, we could just remove the directory that contains them all.
In any case, I'm going to leave it as is, but it's something to think about if you want to improve your own code. Now, with our three files created, let's actually call our command, passing them in as arguments. To do so, we can use the getCommand helper, passing in fileA.Name(), fileB.Name(), and fileC.Name().
Then we'll do our if err != nil check, and again we'll use t.Fatal, because there's no point continuing if we could not create the command. We should also log the error here, because it's a little more useful to see what happened in the event something does go wrong; it just makes debugging easier.
Okay, great. Next, we need to capture the standard output of the command, which we can do as we've done already in this module: create a buffer with stdout := new(bytes.Buffer) and assign it to the Stdout field of the command. Great.
Next, we can actually run our code, calling the Run method and doing another if err != nil check. Whilst this could error if we passed in a non-existent file name, we're not checking for that in this test, although that would be a good test to add in your own time. Again, let's do a t.Fatal with the message failed to run command, and log the error as well.
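The run-and-capture pattern described above can be sketched as follows. Since the actual counter binary isn't shown here, echo stands in for it; the helper name runAndCapture is my own, not the course's.

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// runAndCapture runs a command and returns whatever it wrote to
// standard output, mirroring the cmd.Stdout = buffer technique above.
func runAndCapture(name string, args ...string) (string, error) {
	cmd := exec.Command(name, args...)
	stdout := new(bytes.Buffer)
	cmd.Stdout = stdout // capture output instead of printing to the terminal
	if err := cmd.Run(); err != nil {
		return "", fmt.Errorf("failed to run command: %w", err)
	}
	return stdout.String(), nil
}

func main() {
	out, err := runAndCapture("echo", "1 5 10 fileA")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%q\n", out)
}
```

In the real test, name would be the path to the built counter binary and args the three temp file names.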
With that, we should now be calling our command and capturing its standard output stream, which we can inspect by printing fmt.Println(stdout.String()). Next up, we need to find a way to test this output, despite it not being consistent.
If I run the go test command against the test/e2e directory with the -v flag for verbose, which prints out all of the output, you can see that whilst the ordering of the lines isn't consistent, the content of each line is. For instance, we know that the totals will always be the final line, the numbers on each line are always the same, and even though the file names are randomly generated, each file's counts are consistent with its contents.
Therefore, because each line is consistent, rather than checking the output as a whole, let's instead check the individual lines, which fortunately we already know how to do. We can make use of a bufio.Scanner, which lets us iterate over each line inside the bytes.Buffer and check it against an expectation.
To show how this works, let's first create a new scanner: scanner := bufio.NewScanner(stdout). Then we iterate over the tokens in the scanner with for scanner.Scan(), obtaining each line via scanner.Text(). Great. If we want to, we can check that this works by printing each line out, so let's just do that.
There we go: we're printing out each individual line. Now that we're breaking the output up into lines, the next thing we want to do is define expectations keyed by file name. To do so, let's define a new variable called wants, whose type is a map[string]string, where the key represents the file name and the value represents the expected line. Then we can go ahead and enter the actual entries we want.
So fileA.Name() is going to be the first key, which for the moment we'll map to an empty string. Then we do the same for fileB, and the same for fileC. Let's also add an expectation for our totals line, although it would be better to check that the totals appear at the end of the output; we'll make sure to do that in the next lesson's tests when we add in some more determinism. For the moment, we're just not going to worry about it. With each of our file keys defined, let's go ahead and define the actual expectations. We know that fileA is one line and five words, so the expectation is going to reflect that. However, we can't just copy and paste the string from the output, because the file name is non-deterministic: it's randomly generated each time our test runs.
Fortunately, we know the name will be whatever fileA.Name() returns. So we can build the expectation a couple of different ways: either using the fmt.Sprintf function with a format string, or using the fmt.Sprintln function, which puts a space between each of its arguments. In this case, I'm going to use Sprintln, as I think it's just a little easier, passing in the counts followed by fileA.Name().
Let's leave fileA's entry as the only real expectation for the moment. That way we can be sure some of the code is working whilst the rest isn't. Next, with our expectations at least partially defined, we need to be able to pull out the expectation for a given file from each line.
To do that, we need to split our line into individual tokens so we can pull out the file name, which is going to be the last component. Fortunately, there's always at least one space between each element, so we can again use a function we've used in the past to pull out the individual fields: the Fields function of the strings package, passing in the line. If you'll remember, the Fields function returns a slice of string containing the individual tokens separated by the whitespace between them. Therefore, we want to pull out the last field from this slice. Let's capture it in a variable called filename, subscripting with fields[len(fields)-1].
Additionally, just in case fields has zero length, let's add the following safety check: if len(fields) == 0. My first instinct is to t.Fatal("fields: line was empty"), but in fact I don't think t.Fatal is what we want here; we may just want to continue past such a line, so we'll do t.Log("encountered empty line") instead.
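The field extraction plus the empty-line guard can be sketched as a small helper. The function name lastField is my own for illustration; the course code does this inline in the loop.

```go
package main

import (
	"fmt"
	"strings"
)

// lastField returns the final whitespace-separated token of a line,
// guarding against empty or all-whitespace lines as described above.
func lastField(line string) (string, bool) {
	fields := strings.Fields(line)
	if len(fields) == 0 {
		return "", false
	}
	return fields[len(fields)-1], true
}

func main() {
	name, ok := lastField("1 5 10 /tmp/counter_test12345")
	fmt.Println(name, ok)
}
```

Returning a boolean lets the caller decide between logging and failing when a line has no fields.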
We shouldn't have any empty lines, so arguably that would be a failure, but we still want to make sure we test everything else. In any case, now that we have the filename, the last thing we need to do is pull the expectation out of our map: lineWants := wants[filename]. However, because this is a test, there could be a situation where a filename is pulled out of the fields but isn't actually related to any of our expectations.
Therefore, we want to check that this expectation actually exists in our expectations map, which we can do by capturing a second variable, ok, from the map subscript. This ok is a boolean letting us know whether or not the key existed in the map. So we can do if !ok, meaning the key didn't exist, and log a message along the lines of no wants for %s, passing in the filename.
We'll use t.Logf, I think, passing in the actual filename, just so we know that this filename was in the output but there was no expectation for it. We can also call t.Fail as well. With that, we should be pulling out the correct expectation for the given filename. All that remains now is to compare the line against it.
So we can do if line != lineWants (I'm going to move this up a little), and inside we'll call t.Fail and t.Logf with a message such as line does not match. Got: %s, Want: %s, where the got is the line and the want is lineWants.
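The lookup-and-compare step can be sketched as a tiny helper. checkLine and the sample expectations are my names and values for illustration, not the course's code.

```go
package main

import "fmt"

// checkLine looks up the expectation for filename and compares the
// line against it, reporting whether an expectation existed at all
// and, if so, whether the line matched it.
func checkLine(line, filename string, wants map[string]string) (found, matched bool) {
	want, ok := wants[filename]
	if !ok {
		return false, false
	}
	return true, line == want
}

func main() {
	wants := map[string]string{"fileA": "1 5 10 fileA"}
	found, matched := checkLine("1 5 10 fileA", "fileA", wants)
	fmt.Println(found, matched) // prints "true true"
}
```

Separating "no expectation" from "expectation mismatched" mirrors the test's two failure modes: t.Logf plus t.Fail for an unknown filename versus a got/want diff for a wrong line.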
Now, if I test this again, you can see it's failing: line does not match, with the got and want printed. This is failing because I'm building the expectation with fmt.Sprintln, which appends a newline character, whereas the bufio.Scanner strips the newline from each line it returns. So to fix this, let's use the fmt.Sprintf function with the %s verb instead. Now when I run this, you can see that our first file is matching and the other three are not. Well, the other two and the totals. So far, so good.
Therefore, let's add the rest of the expectations the same way, using the fmt.Sprintf function. For fileB the counts are 2 3 13, followed by fileB.Name(). Then we can do the same thing again; I'm just going to copy and paste to be a little quicker. For fileC it's 0 0 0, aligned properly, followed by fileC.Name(). And lastly our totals, which I believe are 3 8 37, followed by the totals label. Okay. Now if I clear this and run it again, it should work, which it does.
Whilst this is working, it may not actually be perfect. One issue we currently have is that our expectations may not always be exercised. To show what I mean, if I remove fileC from the inputs and run the tests again, you'll see that they still pass, even though we specified an expectation for fileC and it never appeared in the output.
Therefore, we're going to need to keep some state to track how many expectations we actually checked, and if the count doesn't match, raise an error. To add this, let's keep track of the number of lines we checked: checkedLines := 0. Then we increment it with checkedLines++, but only when we actually found an expectation for the line, so in the !ok branch we continue instead. Since we only increment when we find an expectation, checkedLines isn't the best name; it could be checkedExpectations or checkedWants. That last one feels a bit odd, but we'll go with checkedWants anyway, to stay consistent with our wants map.
So whenever we find a line within our expectations map, we increment the checkedWants count. Then, after our scanning loop, we can do if checkedWants != len(wants): if they're not equal, we call t.Fail and t.Logf with a message along the lines of only checked %d wants, expected to check %d, using the %d verb for the numbers, passing in checkedWants and len(wants).
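Putting the counting logic together, here is a self-contained sketch of that safeguard. The helper name checkedCount, the output lines, and the column layout are assumptions for illustration.

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// checkedCount scans output and counts how many lines ended in a file
// name we had an expectation for, mirroring the checkedWants counter.
func checkedCount(output string, wants map[string]string) int {
	checkedWants := 0
	scanner := bufio.NewScanner(strings.NewReader(output))
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) == 0 {
			continue // skip unexpected empty lines
		}
		if _, ok := wants[fields[len(fields)-1]]; ok {
			checkedWants++
		}
	}
	return checkedWants
}

func main() {
	wants := map[string]string{"fileA": "", "fileB": "", "fileC": "", "total": ""}
	out := "1 5 10 fileA\n2 3 13 fileB\n3 8 37 total\n" // fileC never appeared
	fmt.Println(checkedCount(out, wants), len(wants))   // prints "3 4"
}
```

A count lower than len(wants) is exactly the silent-pass case from the lesson: an expectation existed but its line never showed up.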
Now, if we test this code, it should fail, which it does: only checked 3, expected to check 4. We can fix this by passing the correct file back in, and if I test again, everything passes, which it does.
With that, we've managed to write an end-to-end test that handles non-deterministic behaviour. As I mentioned, this isn't my favourite approach to this particular problem, but it's a good thing to understand and be able to do, because non-determinism often crops up when you're writing code out in the wild.
In the next lesson, we're going to take a look at how we can change our code to produce deterministic output, which I think is generally a better property for our counter application to have anyway. So we're going to go ahead and make that change.