Concurrency in Go enhances the efficiency of data processing, allowing developers to tackle issues like file reading without multiple passes over the data. The lesson focuses on optimizing a counter function that previously required resetting the file read offset multiple times to count words, lines, and bytes. By employing a single-pass algorithm, the lesson demonstrates how to utilize the io.TReader
function to create a reader that reads from a file while simultaneously writing to another buffer. This design enables concurrent operations to gather counts without the redundancy of loading data into memory repeatedly. The session emphasizes the importance of memory efficiency and introduces the use of specialized types to further improve performance, paving the way for even greater optimizations in future lessons.
Let's make use of it to solve an issue we encountered back when we added the ability to count both lines and bytes inside of our counter function.

If you remember, we started by adding a separate function to count the words, the lines, and the individual bytes from an `io.Reader`, which was our file. However, the only way we could do this when actually accepting a file (which we can go ahead and rename to `file` here) was to use the `Seek` method to reset the offset back to the start of the file between each call to a counting function.
📝 What That Looked Like
To show what that looked like, let's go ahead and quickly rename our existing function to `getCountsSinglePass`.
And we'll add the new one:
```go
func getCounts(file *os.File) counter.Counts {
	return counter.Counts{}
}
```
If you'll remember, we initially would get the words by using the `countWords` function:

```go
countWords(file)
```
Then we would have to seek:

```go
file.Seek(0, io.SeekStart)
```
And repeat this for each metric — read → reset → read again.
🚫 Inefficiency of Multiple Reads
While this worked, it wasn't very efficient: we were making multiple passes over our data, reading through, returning to the start, and reading through again, three times in total.

Instead, we moved to the single-pass algorithm we currently have implemented, which iterates through all of the individual bytes in the data just once and calculates our totals from that.

This single-pass algorithm is the correct algorithm to use.
👀 But What If...
Now that we know how to do concurrency within Go, there is another way that we can actually use these individual functions to obtain the counts without having to seek back to the start of the file every time.
So in this lesson and the next one, we're going to take a look at a few types provided by the `io` package that allow us to have multiple readers of a single stream.
📦 io.TeeReader
To begin, let me go ahead and clear all of this from the `getCounts` function that we were using for demonstration, and replace the `*os.File` parameter with an `io.Reader`, keeping the same interface that we had before.
The first of these functions is the `TeeReader` function of the `io` package, which accepts a `Reader` and a `Writer`:

```go
func TeeReader(r Reader, w Writer) Reader
```
This function returns a brand new `Reader` that writes to `w` whatever it reads from `r`.
So we can pass in our file and a `bytes.Buffer` and get a reader back.
This works very similarly to the `tee` command in Linux:

```shell
echo "hello" | tee hello.txt | wc
```
This writes "hello" to `hello.txt` and also pipes it on to `wc`.

```shell
cat hello.txt
# hello
```
Pretty cool. Let's go ahead and remove that `hello.txt`.
🛠️ Let’s Try It in Code
Define a `TeeReader`:

```go
var buffer bytes.Buffer
bytesReader := io.TeeReader(r, &buffer)
```
Count bytes:

```go
byteCount := countBytes(bytesReader)
```
Now set the words reader:

```go
wordsReader := &buffer
wordsCount := countWords(wordsReader)
```
Return the values:

```go
return counter.Counts{
	Bytes: byteCount,
	Words: wordsCount,
}
```
Run:

```shell
go run main.go words.txt
```
✅ It works! But…
🧠 What’s the Problem?
We're storing all of this data in memory.
Let's try adding another buffer:

```go
var buff2 bytes.Buffer
wordsReader := io.TeeReader(&buffer, &buff2)
linesReader := &buff2
linesCount := countLines(linesReader)
```
Update return:

```go
return counter.Counts{
	Bytes: byteCount,
	Words: wordsCount,
	Lines: linesCount,
}
```
🟢 Same results as before — proving it works!
⚠️ But It’s Still Inefficient
Using `bytes.Buffer` is inefficient — we're storing the full stream in memory twice.
In the next lesson, we’re going to take a look at another type that will allow us to do this without storing anything in memory, but it does require the use of concurrency.
🚀 See you there!