Introduction
Concurrency is one of Go's defining features. The language was designed from the ground up to make concurrent programming straightforward and safe. Go's concurrency model is built on two core primitives: goroutines (lightweight threads managed by the Go runtime) and channels (typed conduits for communication between goroutines). The philosophy, coined by Rob Pike, is: "Do not communicate by sharing memory; instead, share memory by communicating."
This guide covers everything from launching your first goroutine to sophisticated fan-out/fan-in pipelines used in production services.
Goroutines — Lightweight Threads
A goroutine is a function executing concurrently with other goroutines in the same address space. They are extraordinarily cheap — a newly started goroutine requires a few kilobytes of stack space that grows and shrinks on demand.
package main

import (
    "fmt"
    "time"
)

func printNumbers(id int) {
    for i := 0; i < 5; i++ {
        fmt.Printf("goroutine %d: %d\n", id, i)
        time.Sleep(100 * time.Millisecond)
    }
}

func main() {
    go printNumbers(1) // launch goroutine
    go printNumbers(2) // launch another
    printNumbers(3)    // run in main goroutine
}
The go keyword before a function call starts a new goroutine. The main goroutine does not wait for the others by default — if it exits, all goroutines are terminated. This is where synchronization becomes essential.
sync.WaitGroup — Waiting for Goroutines
sync.WaitGroup lets the main goroutine wait for a collection of goroutines to finish. It maintains an internal counter: Add increments it, Done decrements it, and Wait blocks until the counter reaches zero.
package main

import (
    "fmt"
    "sync"
)

func worker(id int, wg *sync.WaitGroup) {
    defer wg.Done() // decrement counter when done
    fmt.Printf("Worker %d starting\n", id)
    // simulate work
    fmt.Printf("Worker %d done\n", id)
}

func main() {
    var wg sync.WaitGroup
    for i := 1; i <= 5; i++ {
        wg.Add(1) // increment before launching goroutine
        go worker(i, &wg)
    }
    wg.Wait() // block until all workers finish
    fmt.Println("All workers completed")
}
Always call wg.Add(1) before the go statement — not inside the goroutine — to avoid a race condition where Wait returns before all goroutines start.
Channels — Communication Between Goroutines
Channels are typed message queues. They can be unbuffered (synchronous — sender blocks until receiver reads) or buffered (asynchronous up to capacity).
// Unbuffered channel
ch := make(chan int)

// Buffered channel (capacity 10)
bufferedCh := make(chan string, 10)

// Send value
ch <- 42

// Receive value
val := <-ch

// Close channel (sender's responsibility)
close(ch)

// Range over channel until closed
for v := range ch {
    fmt.Println(v)
}
Worker Pool Pattern
The worker pool is one of the most common concurrency patterns in Go — a fixed set of goroutines consuming jobs from a shared channel:
package main

import (
    "fmt"
    "sync"
)

func workerPool(jobs <-chan int, results chan<- int, wg *sync.WaitGroup) {
    defer wg.Done()
    for j := range jobs {
        result := j * j // simulate work
        results <- result
    }
}

func main() {
    const numJobs = 20
    const numWorkers = 4
    jobs := make(chan int, numJobs)
    results := make(chan int, numJobs)
    var wg sync.WaitGroup
    for w := 0; w < numWorkers; w++ {
        wg.Add(1)
        go workerPool(jobs, results, &wg)
    }
    // Send jobs
    for j := 1; j <= numJobs; j++ {
        jobs <- j
    }
    close(jobs)
    // Close results when all workers done
    go func() {
        wg.Wait()
        close(results)
    }()
    // Collect results
    for r := range results {
        fmt.Println(r)
    }
}
The select Statement — Multiplexing Channels
select lets a goroutine wait on multiple channel operations simultaneously, proceeding with the first one that is ready. This is one of Go's most powerful constructs.
package main

import (
    "fmt"
    "time"
)

func producer(ch chan<- string, msg string, delay time.Duration) {
    time.Sleep(delay)
    ch <- msg
}

func main() {
    ch1 := make(chan string)
    ch2 := make(chan string)
    go producer(ch1, "one", 2*time.Second)
    go producer(ch2, "two", 1*time.Second)
    // Wait for whichever arrives first
    for i := 0; i < 2; i++ {
        select {
        case msg1 := <-ch1:
            fmt.Println("Received from ch1:", msg1)
        case msg2 := <-ch2:
            fmt.Println("Received from ch2:", msg2)
        }
    }
}
Timeout with select
The timeout pattern prevents goroutines from blocking indefinitely:
select {
case result := <-ch:
    fmt.Println("Got result:", result)
case <-time.After(3 * time.Second):
    fmt.Println("Timed out")
}
Non-blocking Operations with default
select {
case msg := <-ch:
    fmt.Println("Received:", msg)
default:
    fmt.Println("No message available")
}
Context — Cancellation and Deadlines
The context package is the standard way to propagate cancellation signals and deadlines across goroutines. Every long-running operation should accept a context.Context as its first argument.
package main

import (
    "context"
    "fmt"
    "time"
)

func fetchData(ctx context.Context, id int) (string, error) {
    select {
    case <-time.After(500 * time.Millisecond): // simulate work
        return fmt.Sprintf("data-%d", id), nil
    case <-ctx.Done():
        return "", ctx.Err()
    }
}

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    defer cancel() // always defer cancel to release resources
    result, err := fetchData(ctx, 42)
    if err != nil {
        fmt.Println("Error:", err)
        return
    }
    fmt.Println("Result:", result)
}
sync.Mutex — Protecting Shared State
When goroutines must share mutable state, use a mutex to ensure only one goroutine accesses the critical section at a time:
package main

import (
    "fmt"
    "sync"
)

type SafeCounter struct {
    mu    sync.Mutex
    value int
}

func (c *SafeCounter) Increment() {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.value++
}

func (c *SafeCounter) Value() int {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.value
}

func main() {
    counter := &SafeCounter{}
    var wg sync.WaitGroup
    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            counter.Increment()
        }()
    }
    wg.Wait()
    fmt.Println("Final count:", counter.Value()) // always 1000
}
Fan-Out / Fan-In Pipeline
Pipelines chain stages together where each stage reads from an input channel and writes to an output channel. Fan-out distributes work across multiple goroutines; fan-in merges results:
package main

import (
    "fmt"
    "sync"
)

// Stage 1: generate numbers
func generate(nums ...int) <-chan int {
    out := make(chan int)
    go func() {
        for _, n := range nums {
            out <- n
        }
        close(out)
    }()
    return out
}

// Stage 2: square numbers
func square(in <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        for n := range in {
            out <- n * n
        }
        close(out)
    }()
    return out
}

// Merge multiple channels into one
func merge(cs ...<-chan int) <-chan int {
    var wg sync.WaitGroup
    merged := make(chan int, 10)
    output := func(c <-chan int) {
        defer wg.Done()
        for v := range c {
            merged <- v
        }
    }
    wg.Add(len(cs))
    for _, c := range cs {
        go output(c)
    }
    go func() {
        wg.Wait()
        close(merged)
    }()
    return merged
}

func main() {
    in := generate(2, 3, 4, 5)
    // Fan-out: two square workers
    c1 := square(in)
    c2 := square(in)
    // Fan-in: merge results
    for v := range merge(c1, c2) {
        fmt.Println(v)
    }
}
Common Pitfalls
- Goroutine leak — goroutines that never exit because their channel is never closed or their context is never cancelled. Always ensure every goroutine has a way to terminate.
- Loop variable capture — before Go 1.22, all goroutines launched in a loop captured the same loop variable; pass it as a parameter instead: go func(i int) { ... }(i). Since Go 1.22, each iteration gets a fresh variable.
- Closing a channel twice — panics. Only the sender should close a channel, and only once. Use sync.Once if needed.
- Sending on a closed channel — panics. Design your channel ownership clearly.
- Data races — always run go test -race and go run -race during development.
Detecting Race Conditions
# Build with race detector
go build -race ./...
# Test with race detector
go test -race ./...
# Run with race detector
go run -race main.go
Summary
Go's concurrency model is powerful precisely because it is simple: goroutines are cheap, channels are composable, and the select statement elegantly handles multiple concurrent events. The key principles:
- Use goroutines freely — they are cheap
- Communicate via channels, not shared memory
- Use sync.WaitGroup to wait for groups of goroutines
- Use context for cancellation and timeouts
- Use sync.Mutex only when channels are impractical
- Always run with the race detector during development