Go's simplicity masks subtle complexity in concurrent code. A goroutine launched without a way to stop it leaks memory. Shared state accessed without proper synchronization causes data races. Channel operations without proper closure handling create deadlocks. These issues compile successfully and often pass tests but fail unpredictably in production under load.
This checklist helps catch what matters: concurrency bugs, non-idiomatic patterns that fight the language, error handling mistakes, and resource management issues that cause production problems.
Quick Reference Checklist
Use this checklist as a quick reference during code review. Each item links to detailed explanations below.
Concurrency & Synchronization

- Goroutines have clear lifecycle and termination paths
- Shared state protected by `sync.Mutex` or `sync.RWMutex`
- Channels closed by sender, never by receiver
- Select statements have `default` case or timeout to prevent deadlocks
- `sync.WaitGroup` used for goroutine coordination, not sleep statements
- Context passed as first parameter: `func Process(ctx context.Context, ...)`
- Context cancellation checked in loops: `case <-ctx.Done():`
- No data races detected by `go test -race`
Error Handling

- Errors checked immediately after function calls, not deferred
- Error wrapping uses `fmt.Errorf` with `%w` verb: `fmt.Errorf("failed: %w", err)`
- Sentinel errors defined as package-level variables: `var ErrNotFound = errors.New("not found")`
- Custom error types implement `Error() string` method
- Functions return errors as last return value
- `panic` only used for truly unrecoverable errors
- `recover` only in deferred functions at package boundaries
- Error messages lowercase, no punctuation: `errors.New("connection failed")`
Idiomatic Go Patterns

- Receiver names consistent and short (1-2 letters): `func (u *User) Save()`
- Pointer receivers for methods that modify state or large structs
- Interface definitions small and focused (1-3 methods)
- Interfaces defined by consumer, not producer
- Zero values are useful: structs work without explicit initialization
- `defer` used for cleanup: `defer file.Close()`
- Variable names short in small scopes, descriptive in larger scopes
- Early returns reduce nesting: avoid `else` after `if err != nil { return err }`
Memory & Resource Management

- `defer` statements close resources: files, connections, locks
- Contexts with deadlines prevent resource leaks: `ctx, cancel := context.WithTimeout()`
- Defer `cancel()` immediately after creating contexts
- Slices pre-allocated when size known: `make([]int, 0, expectedSize)`
- Maps not accessed concurrently without synchronization
- Pointer usage intentional: use pointers for large structs or when sharing
- Nil pointer checks before dereferencing
- String concatenation in loops uses `strings.Builder`
Testing & Tooling

- Table-driven tests for multiple similar cases
- Test names describe scenario: `TestUserSave_WithInvalidEmail_ReturnsError`
- Subtests use `t.Run()` for grouping
- Tests clean up resources with `t.Cleanup()`
- Benchmarks exist for performance-critical code
- `go vet` passes without warnings
- `golangci-lint` configured and passing
- Test coverage focused on critical paths, not 100% coverage
Package Organization

- Package names lowercase, single word, no underscores: `package user`
- `main` package minimal, delegates to library packages
- Internal packages (`internal/`) for unexported code
- Exported names start with uppercase: `type User struct`
- Package-level documentation on package declaration: `// Package user handles...`
- No circular dependencies between packages
- Test files in same package: `package user` not `package user_test` (except for integration tests)
Concurrency and Synchronization
Go's goroutines make concurrent programming accessible, but this accessibility
creates opportunities for subtle bugs that traditional sequential code doesn't
encounter. When we launch a goroutine, we create an independent execution path
that needs explicit coordination. The most common mistake is launching
goroutines without considering how they'll stop. A function that starts a
background worker with go process() creates a resource that lives until
program termination unless we provide a way to stop it.
Context provides the standard mechanism for controlling goroutine lifetime. When
we see code that launches goroutines, we look for context handling. A properly
designed concurrent function accepts context as its first parameter and checks
for cancellation in loops or before blocking operations. The pattern
select { case <-ctx.Done(): return ctx.Err(); case result := <-ch: } allows
goroutines to respond to cancellation while waiting for work. Without this,
goroutines become impossible to stop cleanly, causing resource leaks when
services restart or connections close.
Shared state requires synchronization, and Go provides several mechanisms with
different tradeoffs. Mutexes protect critical sections where multiple goroutines
might access the same data. When we review code with shared state, we verify
that every access happens within a mutex-protected region. The pattern is
straightforward: mu.Lock(); defer mu.Unlock() at the start of functions that
access shared state. The defer ensures the mutex releases even if the function
panics. We also check that the mutex protects all accesses to the shared data,
not just some of them—partial synchronization is often worse than none because
it creates false confidence.
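A minimal sketch of full synchronization, using a hypothetical counter type: every method that touches the shared map—reads included—goes through the same mutex, and the deferred unlock survives panics:

```go
package main

import (
	"fmt"
	"sync"
)

// Counter guards its map with a mutex. All accesses, reads and writes,
// happen inside the locked region—no partial synchronization.
type Counter struct {
	mu     sync.Mutex
	counts map[string]int
}

func NewCounter() *Counter {
	return &Counter{counts: make(map[string]int)}
}

func (c *Counter) Inc(key string) {
	c.mu.Lock()
	defer c.mu.Unlock() // released even if the code below panics
	c.counts[key]++
}

func (c *Counter) Get(key string) int {
	c.mu.Lock() // reads need the lock too, or the race remains
	defer c.mu.Unlock()
	return c.counts[key]
}

func main() {
	c := NewCounter()
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Inc("hits")
		}()
	}
	wg.Wait()
	fmt.Println(c.Get("hits")) // 100
}
```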
Channels provide communication and synchronization, but they introduce their own
complexity. The fundamental rule is that the sender closes channels, never the
receiver. Closing an already-closed channel causes a panic, so only the
goroutine that sends values should close the channel when finished. When we see
channel operations, we trace the flow to verify closure happens in the right
place. For channels with multiple senders, this becomes more complex and often
suggests using sync.WaitGroup to coordinate sender completion before one
goroutine closes the channel.
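The multiple-senders case can be sketched like this (the `fanIn` name is illustrative): each sender signals completion through a `sync.WaitGroup`, and exactly one goroutine performs the close after all senders finish:

```go
package main

import (
	"fmt"
	"sync"
)

// fanIn launches one sender goroutine per input slice. A separate
// goroutine waits for all senders, then performs the single close—
// still on the sending side, never the receiving side.
func fanIn(inputs ...[]int) <-chan int {
	out := make(chan int)
	var wg sync.WaitGroup
	for _, in := range inputs {
		wg.Add(1)
		go func(vals []int) {
			defer wg.Done()
			for _, v := range vals {
				out <- v
			}
		}(in)
	}
	go func() {
		wg.Wait()  // all senders done
		close(out) // exactly one close
	}()
	return out
}

func main() {
	sum := 0
	for v := range fanIn([]int{1, 2}, []int{3, 4}) {
		sum += v
	}
	fmt.Println(sum) // 10
}
```

The `range` in `main` terminates only because the channel is eventually closed; if any sender closed it directly, the others would panic on their next send.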
Select statements with channels need careful review because they're common sites
for deadlocks. A select without a default case or timeout blocks until one
channel is ready. This might be intentional, but often it's a bug waiting to
happen. When upstream goroutines finish without sending expected values, the
select blocks forever. Adding default: for non-blocking behavior or
case <-time.After(timeout): for bounded waiting prevents these deadlocks.
The race detector finds many concurrency bugs that code review might miss, but
it only detects races in code paths that execute during the race-enabled run.
When we review concurrent code, we consider whether the test coverage exercises
concurrent paths thoroughly. Code that passes go test -race with low
concurrency might still have races that appear under higher load.
Error Handling in Go
Go's explicit error handling creates more robust code than exception-based systems, but only when we follow the patterns correctly. The fundamental pattern is checking errors immediately after operations that can fail. When we see code that calls a function returning an error, we verify the next line checks that error. Deferring error checks or ignoring errors entirely invites bugs that are hard to diagnose because problems surface far from their cause.
Error wrapping, introduced in Go 1.13, provides context while preserving the
original error for comparison. The pattern
fmt.Errorf("operation failed: %w", err) wraps the error with additional
context that helps debugging while allowing code to check the underlying error
type with errors.Is() or errors.As(). When reviewing error handling, we look
for error wrapping at each layer that adds meaningful context. Wrapping errors
with generic messages like fmt.Errorf("error: %w", err) provides no value—the
message should explain what operation failed and relevant context like IDs or
parameters.
Sentinel errors serve as markers for specific conditions that callers need to
handle differently. Package-level variables like
var ErrNotFound = errors.New("not found") create errors that callers can check
with errors.Is(err, pkg.ErrNotFound). These should be rare—most errors don't
need to be sentinel errors. We use them when callers need to make decisions
based on specific error conditions, not for general error handling.
Custom error types provide structured error information when simple strings
aren't sufficient. A type that implements Error() string can carry additional
fields that callers can extract. When we see custom error types, we verify
they're necessary—often error wrapping with %w provides sufficient context
without creating new types. Custom types make sense when errors carry structured
data that callers need programmatic access to, not just for displaying different
messages.
The position of error returns follows Go convention: errors are the last return
value. Functions that return results and errors follow the pattern
func Fetch(id string) (*Result, error). This consistency makes error handling
predictable. When functions return errors in different positions or return
errors alongside valid data, that creates confusion about when to check which
value.
Panic and recover exist for exceptional situations, not regular error handling.
When we see panic() in application code, we question whether the situation
truly warrants program termination. Library code should rarely panic—returning
errors gives callers control. The exception is during initialization when
configuration errors make the program unusable. Recover belongs in deferred
functions at package boundaries, catching panics from code we don't control.
Using recover for control flow creates fragile code that's hard to reason about.
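A minimal sketch of recover at a boundary, with a hypothetical wrapper name: the deferred function converts a panic from code we don't control into an ordinary error the caller can handle:

```go
package main

import "fmt"

// safeCall is a hypothetical package-boundary wrapper: it converts a
// panic from untrusted code (a plugin, a callback) into an error,
// instead of letting it crash the program.
func safeCall(f func()) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered from panic: %v", r)
		}
	}()
	f()
	return nil
}

func main() {
	fmt.Println(safeCall(func() { panic("plugin misbehaved") }))
	fmt.Println(safeCall(func() {})) // <nil>
}
```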
Idiomatic Go Patterns
Go idioms aren't arbitrary style preferences—they reflect how the language is
designed to work. Fighting these patterns makes code harder to maintain and
often less performant. Receiver names demonstrate this principle. Go convention
uses short, consistent receiver names—typically one or two letters that reflect
the type. For a User type, we'd use u consistently across all methods:
func (u *User) Save(), func (u *User) Load(). Switching between u, user,
and this creates unnecessary cognitive load. The receiver name shouldn't be
self or this because Go isn't object-oriented in that sense.
Choosing between pointer and value receivers affects behavior and performance. Methods that modify the receiver require pointer receivers—value receivers create copies that don't affect the original. Large structs benefit from pointer receivers to avoid copying. The rule we follow is consistency within a type: if any method uses a pointer receiver, all methods on that type should use pointer receivers. Mixing them creates confusion about which methods modify state and which don't.
Interface design in Go differs from other languages in a crucial way: interfaces are satisfied implicitly. This means we define interfaces where they're used, not where types are defined. When we see interfaces defined in packages that also define the concrete types implementing them, that suggests unnecessary coupling. The idiomatic pattern is defining small interfaces in the package that uses them, allowing any compatible type to satisfy the interface without explicit declaration.
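A sketch of consumer-side definition (all names here—`UserGetter`, `Greet`, `memStore`—are illustrative): the consumer declares the one method it needs, and the concrete type satisfies it without ever mentioning the interface:

```go
package main

import "fmt"

// UserGetter is defined by the consumer: it names only the behavior
// Greet actually needs, not everything a store can do.
type UserGetter interface {
	GetUser(id string) (string, error)
}

func Greet(g UserGetter, id string) (string, error) {
	name, err := g.GetUser(id)
	if err != nil {
		return "", err
	}
	return "hello, " + name, nil
}

// memStore is a hypothetical concrete type. It never imports or
// mentions UserGetter—satisfaction is implicit.
type memStore map[string]string

func (m memStore) GetUser(id string) (string, error) {
	name, ok := m[id]
	if !ok {
		return "", fmt.Errorf("no user %s", id)
	}
	return name, nil
}

func main() {
	msg, _ := Greet(memStore{"u1": "Ada"}, "u1")
	fmt.Println(msg) // hello, Ada
}
```

This also makes testing trivial: any map- or stub-backed type with a matching `GetUser` works as a fake.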
Small, focused interfaces prove more useful than large ones. The io.Reader
interface requires one method: Read([]byte) (int, error). This focused design
makes it incredibly reusable. When reviewing interfaces, we verify they're as
small as possible while still being useful. Interfaces with many methods suggest
we're thinking in terms of object hierarchies rather than behavior composition.
Zero value utility means types work correctly when declared without explicit
initialization. A var buf bytes.Buffer is immediately usable without calling
New() or setting fields. When we review struct definitions, we consider
whether the zero value is useful. If a struct requires initialization, its API
should make that clear, perhaps through a New() function that returns a
properly initialized instance. But making zero values useful when possible
reduces the API surface and prevents initialization bugs.
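A sketch of a zero-value-useful type alongside the standard-library example (the `Stats` type is hypothetical): the zero mutex is ready to lock and the nil slice works with `append`, so no constructor is required:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// Stats is usable at its zero value: a zero sync.Mutex is an unlocked
// mutex, and append works on a nil slice.
type Stats struct {
	mu      sync.Mutex
	samples []float64
}

func (s *Stats) Add(v float64) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.samples = append(s.samples, v)
}

func (s *Stats) Count() int {
	s.mu.Lock()
	defer s.mu.Unlock()
	return len(s.samples)
}

func main() {
	var s Stats // no New() needed
	s.Add(1.5)
	s.Add(2.5)
	fmt.Println(s.Count()) // 2

	var buf bytes.Buffer // the same principle in the standard library
	buf.WriteString("ready to use")
	fmt.Println(buf.String())
}
```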
Defer statements handle cleanup elegantly, but they execute in LIFO order and carry slight performance overhead. When reviewing defer usage, we verify it's appropriate for the situation. Deferring function calls in tight loops might impact performance. Deferring expensive operations that could be avoided on error paths wastes resources. But for typical resource cleanup—closing files, releasing locks, canceling contexts—defer provides the clearest expression of intent.
Memory and Resource Management
Go's garbage collector handles memory automatically, but we still need to manage
resources carefully. Goroutines, file handles, network connections, and other
resources don't clean themselves up—we need explicit cleanup. The defer
statement provides the standard pattern. When we open a file, the next line
should defer its closure:
f, err := os.Open(file); if err != nil { return err }; defer f.Close(). This
ensures cleanup happens regardless of how the function exits, whether through
normal return, error return, or panic.
Context with deadlines prevents a common class of resource leaks. When we make
network calls or database queries, we should use contexts with timeouts:
ctx, cancel := context.WithTimeout(ctx, 5*time.Second); defer cancel(). This
bounds how long operations can take, preventing goroutines from hanging forever
on unresponsive services. The defer cancel() is essential—even though the
context times out automatically, canceling releases associated resources
immediately.
Slice and map initialization affects both correctness and performance. When we
know the approximate size, pre-allocating capacity avoids repeated allocations
as the collection grows: make([]int, 0, 100) creates a slice with zero length
but capacity for 100 elements. This matters in hot paths where the slice grows
to a predictable size. Maps benefit similarly: make(map[string]int, 100). For
small collections or unpredictable sizes, the optimization doesn't matter.
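The pre-allocation pattern can be sketched as follows (the function is illustrative): because capacity is supplied up front, the appends in the loop never trigger a reallocation or copy of the backing array:

```go
package main

import "fmt"

// collectSquares pre-allocates capacity for n elements, so the appends
// in the loop never reallocate the backing array.
func collectSquares(n int) []int {
	out := make([]int, 0, n) // length 0, capacity n
	for i := 0; i < n; i++ {
		out = append(out, i*i)
	}
	return out
}

func main() {
	s := collectSquares(100)
	fmt.Println(len(s), cap(s)) // 100 100 — capacity never grew past the hint
}
```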
Concurrent map access requires synchronization. The race detector catches these
issues, but we can spot potential problems in code review. When we see maps
accessed from multiple goroutines—common in caches or registries—we verify
synchronization exists. Options include sync.RWMutex for protecting the map,
sync.Map for high-concurrency scenarios, or redesigning to avoid sharing.
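A minimal sketch of the `sync.RWMutex` option, with a hypothetical cache type: reads take the shared lock and don't block each other, while writes take the exclusive lock:

```go
package main

import (
	"fmt"
	"sync"
)

// Cache wraps a map with sync.RWMutex: many concurrent readers,
// exclusive writers. Every map access goes through a lock.
type Cache struct {
	mu sync.RWMutex
	m  map[string]string
}

func NewCache() *Cache { return &Cache{m: make(map[string]string)} }

func (c *Cache) Get(k string) (string, bool) {
	c.mu.RLock() // shared lock: reads don't block each other
	defer c.mu.RUnlock()
	v, ok := c.m[k]
	return v, ok
}

func (c *Cache) Set(k, v string) {
	c.mu.Lock() // exclusive lock for writes
	defer c.mu.Unlock()
	c.m[k] = v
}

func main() {
	c := NewCache()
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			c.Set(fmt.Sprintf("k%d", n), "v")
		}(i)
	}
	wg.Wait()
	v, ok := c.Get("k3")
	fmt.Println(v, ok) // v true
}
```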
Pointer usage should be intentional, not automatic. We use pointers for large structs to avoid copying costs, when methods need to modify the receiver, or when we need to share state. For small structs, value semantics often work better—they're safer (no nil pointers), clearer (no questions about ownership), and sometimes faster (better cache locality). When reviewing pointer usage, we ask whether the pointer is necessary or habitual.
String concatenation in loops deserves attention because strings are immutable.
Each concatenation allocates a new string, making repeated concatenation in
loops expensive. The pattern
var b strings.Builder; for {...; b.WriteString(s)}; result := b.String()
performs better for substantial concatenation. For joining a few strings or
concatenating outside loops, the performance difference doesn't matter.
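The builder pattern can be sketched as a small joiner (the function is illustrative; for this exact job the standard library's `strings.Join` also works): one growing buffer replaces a fresh allocation per iteration:

```go
package main

import (
	"fmt"
	"strings"
)

// joinLines grows a single buffer instead of allocating a new string
// on every iteration the way s += line would.
func joinLines(lines []string) string {
	var b strings.Builder
	for i, line := range lines {
		if i > 0 {
			b.WriteString("\n")
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	fmt.Println(joinLines([]string{"alpha", "beta", "gamma"}))
}
```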
Testing and Quality Assurance
Table-driven tests handle multiple similar test cases elegantly. Instead of duplicating test logic for each input, we define a slice of test cases and loop through them. This pattern appears frequently in Go code because it handles the common situation where we want to verify a function with various inputs. When reviewing tests, we verify table-driven tests include edge cases, not just happy paths. The cases should cover boundary conditions, invalid inputs, and error scenarios.
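The shape of a table-driven test can be sketched as below. To keep the example runnable on its own it uses a plain loop and a hypothetical `isValidEmail` function; in a real `*_test.go` file, each case would run under `t.Run(c.name, ...)` with `t.Errorf` instead of a returned error:

```go
package main

import (
	"fmt"
	"strings"
)

// isValidEmail is a toy function under test (hypothetical).
func isValidEmail(s string) bool {
	at := strings.Index(s, "@")
	return at > 0 && at < len(s)-1
}

// checkCases mirrors the table-driven shape: a slice of named cases
// covering happy paths, edges, and invalid input, exercised in a loop.
func checkCases() error {
	cases := []struct {
		name  string
		input string
		want  bool
	}{
		{"valid address", "a@b.com", true},
		{"missing at sign", "ab.com", false},
		{"empty string", "", false},
		{"at sign last", "a@", false},
	}
	for _, c := range cases {
		if got := isValidEmail(c.input); got != c.want {
			return fmt.Errorf("%s: isValidEmail(%q) = %v, want %v",
				c.name, c.input, got, c.want)
		}
	}
	return nil
}

func main() {
	fmt.Println(checkCases()) // <nil>
}
```

Adding a case is one new struct literal, not a copy of the whole test body, which is why edge cases tend to accumulate where they belong.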
Test names in Go follow the pattern
TestFunctionName_Scenario_ExpectedBehavior. This creates clear test
documentation. Instead of TestSave, we write
TestUserSave_WithInvalidEmail_ReturnsValidationError. When test names are
vague, we can't understand what failed without reading the implementation. Clear
names make test failures self-documenting.
Subtests using t.Run() organize related tests and enable running specific test
cases. The pattern t.Run("invalid email", func(t *testing.T) { ... }) creates
a subtest that we can run independently with
go test -run TestUserSave/invalid. This proves valuable when debugging
specific failures in table-driven tests. Subtests also provide clearer failure
output by showing which specific case failed.
Test cleanup using t.Cleanup() ensures resources are released even when tests
fail. The pattern t.Cleanup(func() { db.Close() }) works like defer but has
advantages in table-driven tests where we want cleanup to run after each
subtest. When we see tests creating resources without cleanup, that's a code
smell—tests should leave the environment as they found it.
Benchmarks provide objective performance measurements, but only for code paths
actually executed during the benchmark. When reviewing benchmarks, we verify
they measure what they claim to measure. The pattern b.ResetTimer() excludes
setup time from measurements. The pattern b.ReportAllocs() shows allocation
counts, helping identify unexpected allocations. Benchmarks should run long
enough to get stable measurements but not so long that development becomes
frustrating.
Static analysis tools catch many issues that code review might miss. Go vet
finds common mistakes like incorrect printf formatting, unreachable code, and
suspicious constructs. When code doesn't pass go vet, we need to understand
why—either fix the issue or document why it's a false positive. Golangci-lint
combines multiple linters and provides configurable checks for style, bugs, and
performance issues. Projects should configure it appropriately for their
context, not just accept defaults.
Package Organization
Package naming affects how code reads and how easily it is maintained. Go package names should be
lowercase, single words without underscores or mixedCaps. The package name
becomes part of every identifier exported from the package, so short, clear
names work best. Instead of package user_authentication, we use package auth
because imports will read as auth.Login() not user_authentication.Login().
The main package should be minimal, handling initialization and delegating to library packages. Substantial logic in main makes code hard to test and reuse. When we see large main packages, we suggest extracting logic into library packages that main coordinates. This separation allows testing business logic without running the entire application and enables reusing packages in different executables.
The internal directory restricts imports, preventing external packages from
depending on implementation details. Code in pkg/internal/ can only be
imported by code within pkg/. This provides stronger encapsulation than
unexported identifiers alone. When reviewing package structure, we verify
internal packages contain implementation details, not APIs intended for external
use.
Exported identifiers start with uppercase letters, unexported with lowercase. This simple rule provides package-level encapsulation. When reviewing new types and functions, we verify the capitalization matches the intended visibility. Over-exporting creates maintenance burden because changing exported APIs requires considering all possible callers. Under-exporting forces callers into workarounds or reflection. The right balance comes from understanding what callers need.
Package documentation appears in comments on the package declaration. The
comment should describe what the package does and how to use it. Well-documented
packages start with // Package name does... followed by overview and usage
examples. When package documentation is missing or vague, users can't understand
the package without reading implementation.
Circular dependencies indicate design problems. Go prevents them at compile time, but avoiding them requires thoughtful package organization. When we encounter code that struggles with circular dependencies, the solution usually involves extracting shared interfaces or types into a separate package that both sides can depend on. This often improves design by clarifying dependencies.
Bringing It Together
Effective Go code review balances catching significant issues with maintaining development velocity. Not every deviation from Go conventions warrants blocking a pull request. A goroutine leak or data race demands attention before merge. A variable name that could be shorter or a test that could use table-driven format might not. The key is developing judgment about which issues matter for production reliability and long-term maintenance versus which represent minor polish.
Go's tooling ecosystem provides automated checking for many common issues. Race detector, vet, and linters catch problems that humans might miss in review. Integrating these tools into continuous integration ensures they run on every change. Code review then focuses on issues that require human judgment: architectural decisions, API design, concurrency patterns, and business logic correctness.
The patterns in this checklist reflect how Go is designed to work. Following them creates code that other Go developers can read and maintain easily. When code fights Go idioms, it becomes harder to work with and often less efficient. Better review practices catch concurrency bugs before production, identify resource leaks before they cause outages, and create learning opportunities that improve team skills over time.

