Memoization is often introduced with Fibonacci because the recursive explosion is easy to visualize. But the real production applications are elsewhere: permission checks that hit a database, configuration lookups over a network, or expensive cryptographic computations. This post covers the full picture from basic implementation through data race prevention, generic wrappers, and time-to-live (TTL) invalidation.
## The Fibonacci Baseline
The classic recursive Fibonacci has exponential time complexity. It recalculates fib(n-2) from scratch every time, even though it already computed that value moments ago.
```go
// fib has O(2^n) time complexity. Calling fib(40) makes ~2 billion calls.
func fib(n int) int {
	if n <= 1 {
		return n
	}
	return fib(n-1) + fib(n-2)
}
```

A simple map cache eliminates the redundant work:
```go
var fibCache = make(map[int]int)

func fib(n int) int {
	if n <= 1 {
		return n
	}
	if v, ok := fibCache[n]; ok {
		return v
	}
	result := fib(n-1) + fib(n-2)
	fibCache[n] = result
	return result
}
```

Time complexity drops to O(n). The problem is that this code has a data race.
## The Data Race Problem
The global fibCache map is read and written by every call to fib. In a sequential program this works. In any program where fib is called from multiple goroutines concurrently, it is a data race: Go's built-in map is not safe for concurrent access, and the runtime may detect the conflict and crash with `fatal error: concurrent map writes`.
Run the race detector and you will see it immediately:
```shell
go test -race ./...
```

The fix is a sync.RWMutex. Reads take the read lock (multiple readers allowed simultaneously). Writes take the exclusive write lock.
```go
package fib

import "sync"

type memoFib struct {
	mu    sync.RWMutex
	cache map[int]int
}

func newMemoFib() *memoFib {
	return &memoFib{cache: make(map[int]int)}
}

func (m *memoFib) compute(n int) int {
	if n <= 1 {
		return n
	}
	m.mu.RLock()
	if v, ok := m.cache[n]; ok {
		m.mu.RUnlock()
		return v
	}
	m.mu.RUnlock()
	// Compute without the lock (this is safe because fib is pure).
	result := m.compute(n-1) + m.compute(n-2)
	m.mu.Lock()
	m.cache[n] = result
	m.mu.Unlock()
	return result
}
```

Never hold a mutex while doing expensive computation. In the example above, the lock is released before the recursive calls and re-acquired only for the write. Holding the write lock during the recursion would cause a deadlock, because nested calls would try to acquire the same lock.
An alternative is sync.Map, which is optimized for read-heavy workloads (keys written once and read many times) but has a less ergonomic API and worse performance when writes are frequent.
## Generic Memoization (Go 1.18+)
Writing a new memoized struct for every function type is repetitive. With Go generics you can write one reusable wrapper:
```go
package memo

import "sync"

// Cache is a thread-safe memoization cache for a function of type func(K) (V, error).
type Cache[K comparable, V any] struct {
	mu    sync.Mutex
	cache map[K]result[V]
	fn    func(K) (V, error)
}

type result[V any] struct {
	value V
	err   error
}

// New creates a Cache wrapping fn. Repeated keys are served from the cache;
// note that concurrent first calls for the same key may each invoke fn.
func New[K comparable, V any](fn func(K) (V, error)) *Cache[K, V] {
	return &Cache[K, V]{
		cache: make(map[K]result[V]),
		fn:    fn,
	}
}

// Get returns the memoized result for key, calling fn(key) if needed.
func (c *Cache[K, V]) Get(key K) (V, error) {
	c.mu.Lock()
	if r, ok := c.cache[key]; ok {
		c.mu.Unlock()
		return r.value, r.err
	}
	// Release the lock during the expensive call.
	// Use singleflight in production for deduplication under concurrent load.
	c.mu.Unlock()
	v, err := c.fn(key)
	c.mu.Lock()
	c.cache[key] = result[V]{value: v, err: err}
	c.mu.Unlock()
	return v, err
}
```

Usage is clean and type-safe:
```go
fetchPrice := memo.New(func(ticker string) (float64, error) {
	return marketAPI.GetPrice(ticker)
})

price, err := fetchPrice.Get("AAPL")
```

The generic Cache above uses a single mutex and unlocks during the function call to avoid holding the lock while doing I/O. Under very high concurrent load for the same key, two goroutines may both call fn(key) before either stores the result. For true deduplication use golang.org/x/sync/singleflight.
## Real-World Use Case: API Call with TTL
Memoizing an API call forever is rarely safe. Configuration, permissions, and feature flags change. You need time-based invalidation.
```go
package memo

import (
	"sync"
	"time"
)

type entry[V any] struct {
	value     V
	err       error
	expiresAt time.Time
}

// TTLCache memoizes function results for a fixed duration.
type TTLCache[K comparable, V any] struct {
	mu    sync.Mutex
	cache map[K]entry[V]
	fn    func(K) (V, error)
	ttl   time.Duration
}

func NewTTL[K comparable, V any](fn func(K) (V, error), ttl time.Duration) *TTLCache[K, V] {
	return &TTLCache[K, V]{
		cache: make(map[K]entry[V]),
		fn:    fn,
		ttl:   ttl,
	}
}

func (c *TTLCache[K, V]) Get(key K) (V, error) {
	c.mu.Lock()
	if e, ok := c.cache[key]; ok && time.Now().Before(e.expiresAt) {
		c.mu.Unlock()
		return e.value, e.err
	}
	c.mu.Unlock()
	v, err := c.fn(key)
	c.mu.Lock()
	c.cache[key] = entry[V]{
		value:     v,
		err:       err,
		expiresAt: time.Now().Add(c.ttl),
	}
	c.mu.Unlock()
	return v, err
}
```

A permission check that would hit the database on every request now hits it at most once per minute:
```go
var permCache = memo.NewTTL(func(userID string) ([]string, error) {
	return db.GetUserPermissions(context.Background(), userID)
}, time.Minute)

func hasPermission(userID, action string) (bool, error) {
	perms, err := permCache.Get(userID)
	if err != nil {
		return false, err
	}
	return slices.Contains(perms, action), nil
}
```

## When NOT to Memoize
- Functions with side effects: memoization skips the call on a cache hit, so the side effect silently stops happening.
- Functions with large or unbounded inputs: the cache grows without limit; use an evicting cache such as golang.org/x/exp/cache or a well-maintained third-party implementation.
- Functions that are fast to begin with: the map lookup and locking can cost more than the computation they save.
- Concurrent mutation of the cached value: returning a shared slice or map lets one caller corrupt the value every other caller sees.
## Benchmark
Here is a benchmark that quantifies the speedup from memoization on Fibonacci:
```go
package fib_test

import (
	"testing"

	"github.com/yourorg/fib"
)

// Naive, NewMemo, and Compute are exported versions of the fib and
// memoFib code shown earlier.
func BenchmarkFibNaive(b *testing.B) {
	for i := 0; i < b.N; i++ {
		fib.Naive(30)
	}
}

func BenchmarkFibMemo(b *testing.B) {
	m := fib.NewMemo()
	// Warm the cache before measuring.
	m.Compute(30)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		m.Compute(30)
	}
}
```

Sample output:
```
BenchmarkFibNaive-10          1000    1832455 ns/op    0 B/op    0 allocs/op
BenchmarkFibMemo-10     1000000000       0.87 ns/op    0 B/op    0 allocs/op
```

The memoized version is roughly two million times faster for fib(30). The difference grows exponentially with n.
Memoization vs caching: these terms are often used interchangeably but have a precise distinction. Memoization is a specific optimization for pure functions: given the same inputs, always produce the same output, so cache it. Caching is the broader concept of storing computed or fetched data to avoid repeating expensive work. Caching includes eviction policies (LRU, LFU), TTL, invalidation signals, and distributed layers. All memoization is caching, but most production caches are not memoization.
If you want to go deeper on any of this, I offer 1:1 coaching sessions for engineers working on AI integration, cloud architecture, and platform engineering. Book a session (50 EUR / 60 min) or reach out at manuel.fedele+website@gmail.com.