
Redis in Go with go-redis/v9: Caching, Pub/Sub, and Production Patterns

Redis is not just a cache. It is a data structure server that speaks TCP, persists to disk, replicates across nodes, and handles pub/sub fan-out in a single binary. This post covers how to use it properly from Go with go-redis/v9: connection pools, TTL management, the cache-aside pattern, sorted sets for rate limiting, pub/sub, and pipelines.

Most teams reach for Redis when they need a fast key-value cache, hit it with SET/GET, and stop there. That misses most of what Redis can do. This post works through the data structures and operational patterns that matter in production services: proper connection pool configuration, TTL discipline, the cache-aside pattern with generics, lists as queues, sorted sets for sliding-window rate limiting, pub/sub for fan-out, and atomic pipelines.

All examples use go-redis/v9, which is the standard Go client. Do not use redigo or radix for new projects; go-redis has a richer API, native context support, and is actively maintained by the Redis team.

Architecture Overview

flowchart LR
    App[Go Service] -->|Pool| Pool[Connection Pool\nPoolSize / MinIdleConns]
    Pool --> Redis[(Redis Server)]
    Redis -->|Pub/Sub| Sub1[Subscriber 1]
    Redis -->|Pub/Sub| Sub2[Subscriber 2]
    Redis -->|Sorted Set| RL[Rate Limiter]
    Redis -->|Hash| Cache[Structured Cache]
    Redis -->|List| Queue[Work Queue]

Setup and Connection Pool Configuration

Install the library:

go get github.com/redis/go-redis/v9

The most important thing you will do is configure the connection pool. The default values are conservative; in any service handling real load you need to tune them.

internal/cache/client.go
package cache

import (
    "context"
    "fmt"
    "time"

    "github.com/redis/go-redis/v9"
)

func NewClient(addr, password string, db int) *redis.Client {
    rdb := redis.NewClient(&redis.Options{
        Addr:     addr,
        Password: password,
        DB:       db,

        // Pool settings. Tune to your load profile.
        PoolSize:        20,              // max open connections per process
        MinIdleConns:    5,               // keep these warm even at low traffic
        ConnMaxIdleTime: 5 * time.Minute, // close connections idle longer than this

        // Dial and command timeouts.
        DialTimeout:  3 * time.Second,
        ReadTimeout:  2 * time.Second,
        WriteTimeout: 2 * time.Second,

        // Retry transient errors up to 3 times with backoff.
        MaxRetries:      3,
        MinRetryBackoff: 8 * time.Millisecond,
        MaxRetryBackoff: 512 * time.Millisecond,
    })
    return rdb
}

func Ping(ctx context.Context, rdb *redis.Client) error {
    status := rdb.Ping(ctx)
    if err := status.Err(); err != nil {
        return fmt.Errorf("redis ping: %w", err)
    }
    return nil
}
Important

PoolSize is per process. If you run 5 replicas of a service with PoolSize: 20, you have up to 100 connections open to Redis. Check your Redis maxclients setting (default 10000, fine for most cases, but worth monitoring).

Strings and TTL

Every key you write should have a TTL unless you have an explicit reason not to. Without a TTL, keys accumulate forever and you eventually hit memory limits.

internal/cache/strings.go
package cache

import (
    "context"
    "time"

    "github.com/redis/go-redis/v9"
)

const defaultTTL = 15 * time.Minute

// Set stores a value with a TTL.
func Set(ctx context.Context, rdb *redis.Client, key, value string, ttl time.Duration) error {
    return rdb.Set(ctx, key, value, ttl).Err()
}

// Get retrieves a value. Returns redis.Nil if the key does not exist.
func Get(ctx context.Context, rdb *redis.Client, key string) (string, error) {
    return rdb.Get(ctx, key).Result()
}

// SetNX sets a value only if the key does not already exist.
// This is the building block for distributed locks.
func SetNX(ctx context.Context, rdb *redis.Client, key, value string, ttl time.Duration) (bool, error) {
    return rdb.SetNX(ctx, key, value, ttl).Result()
}

// GetEX retrieves a value and resets its TTL (sliding expiration).
func GetEX(ctx context.Context, rdb *redis.Client, key string, ttl time.Duration) (string, error) {
    return rdb.GetEx(ctx, key, ttl).Result()
}
Note

SetNX (SET if Not eXists) is the primitive for distributed locks. Acquire the lock by calling SetNX("lock:resource", workerID, 30*time.Second). Release it with a Lua script that checks the value before deleting (to prevent releasing another worker’s lock). For production use, consider the Redlock algorithm or the redsync library.

The Cache-Aside Pattern

The cache-aside pattern is: check cache first, on miss fetch from the source of truth, then populate the cache. Wrapping this in a generic function keeps calling code clean.

internal/cache/getorset.go
package cache

import (
    "context"
    "encoding/json"
    "errors"
    "fmt"
    "time"

    "github.com/redis/go-redis/v9"
)

// GetOrSet attempts to retrieve a cached value of type T.
// On a cache miss it calls fetch(), stores the result, and returns it.
func GetOrSet[T any](
    ctx context.Context,
    rdb *redis.Client,
    key string,
    ttl time.Duration,
    fetch func(ctx context.Context) (T, error),
) (T, error) {
    var zero T

    raw, err := rdb.Get(ctx, key).Result()
    if err == nil {
        // Cache hit.
        var value T
        if jsonErr := json.Unmarshal([]byte(raw), &value); jsonErr != nil {
            return zero, fmt.Errorf("cache unmarshal %q: %w", key, jsonErr)
        }
        return value, nil
    }

    if !errors.Is(err, redis.Nil) {
        // Real Redis error. Fail open: go straight to the source.
        return fetch(ctx)
    }

    // Cache miss: fetch from source of truth.
    value, err := fetch(ctx)
    if err != nil {
        return zero, err
    }

    data, err := json.Marshal(value)
    if err != nil {
        return zero, fmt.Errorf("cache marshal %q: %w", key, err)
    }

    // Best-effort write; do not fail the request if caching fails.
    _ = rdb.Set(ctx, key, data, ttl).Err()
    return value, nil
}

Usage from a service layer:

internal/service/user.go
type User struct {
    ID    string `json:"id"`
    Email string `json:"email"`
    Plan  string `json:"plan"`
}

func (s *UserService) GetUser(ctx context.Context, id string) (User, error) {
    key := fmt.Sprintf("user:%s", id)
    return cache.GetOrSet(ctx, s.redis, key, 10*time.Minute, func(ctx context.Context) (User, error) {
        return s.db.FindUserByID(ctx, id)
    })
}

Lists as Queues

Redis lists are doubly linked. LPUSH + RPOP gives you a FIFO queue. BRPOP, the blocking variant of RPOP, waits until an item is available, which avoids polling loops in worker processes.

internal/queue/queue.go
package queue

import (
    "context"
    "encoding/json"
    "errors"
    "fmt"
    "time"

    "github.com/redis/go-redis/v9"
)

type Queue[T any] struct {
    rdb  *redis.Client
    name string
}

func New[T any](rdb *redis.Client, name string) *Queue[T] {
    return &Queue[T]{rdb: rdb, name: name}
}

func (q *Queue[T]) Push(ctx context.Context, item T) error {
    data, err := json.Marshal(item)
    if err != nil {
        return fmt.Errorf("queue marshal: %w", err)
    }
    return q.rdb.LPush(ctx, q.name, data).Err()
}

// Pop blocks for up to timeout waiting for an item.
// Returns (zero, false, nil) on timeout.
func (q *Queue[T]) Pop(ctx context.Context, timeout time.Duration) (T, bool, error) {
    var zero T
    res, err := q.rdb.BRPop(ctx, timeout, q.name).Result()
    if err != nil {
        if errors.Is(err, redis.Nil) {
            return zero, false, nil // timeout
        }
        return zero, false, fmt.Errorf("queue pop: %w", err)
    }
    var item T
    if err := json.Unmarshal([]byte(res[1]), &item); err != nil {
        return zero, false, fmt.Errorf("queue unmarshal: %w", err)
    }
    return item, true, nil
}

Sorted Sets: Sliding-Window Rate Limiting

A sorted set where each member is a request timestamp (score = Unix nanoseconds) is a clean implementation of a sliding window rate limiter. The pattern: add the current request, remove entries older than the window, count what remains.

internal/ratelimit/slidingwindow.go
package ratelimit

import (
    "context"
    "fmt"
    "time"

    "github.com/redis/go-redis/v9"
)

// Allow returns true if the caller is within their rate limit.
// key is typically something like "rl:user:<id>:endpoint".
func Allow(ctx context.Context, rdb *redis.Client, key string, limit int64, window time.Duration) (bool, error) {
    now := time.Now()
    windowStart := now.Add(-window)

    pipe := rdb.Pipeline()
    // Remove entries outside the window.
    pipe.ZRemRangeByScore(ctx, key, "0", fmt.Sprintf("%d", windowStart.UnixNano()))
    // Add the current request.
    pipe.ZAdd(ctx, key, redis.Z{Score: float64(now.UnixNano()), Member: now.UnixNano()})
    // Count requests in the window.
    countCmd := pipe.ZCard(ctx, key)
    // Keep the key alive for the window duration.
    pipe.Expire(ctx, key, window)

    if _, err := pipe.Exec(ctx); err != nil {
        return false, fmt.Errorf("rate limit pipeline: %w", err)
    }

    return countCmd.Val() <= limit, nil
}
Tip

For high-throughput rate limiting (thousands of checks per second), consider a Lua script that bundles all four commands into a single atomic server-side call. A plain Pipeline is already one round-trip, but it is not atomic: commands from other clients can interleave between ZRemRangeByScore and ZAdd, which is the small race window a script closes.

Pub/Sub: Fan-Out

NATS or Kafka are better choices for durable messaging. But for lightweight fan-out where losing a message during a restart is acceptable (cache invalidation broadcast, live dashboard updates), Redis pub/sub is fast and simple.

internal/pubsub/pubsub.go
package pubsub

import (
    "context"
    "fmt"
    "log/slog"

    "github.com/redis/go-redis/v9"
)

// Publish sends a message to a channel.
func Publish(ctx context.Context, rdb *redis.Client, channel, message string) error {
    if err := rdb.Publish(ctx, channel, message).Err(); err != nil {
        return fmt.Errorf("publish to %q: %w", channel, err)
    }
    return nil
}

// Subscribe runs a blocking subscriber. Call from a goroutine.
// The handler is called for each received message.
func Subscribe(ctx context.Context, rdb *redis.Client, channel string, handler func(msg string)) {
    sub := rdb.Subscribe(ctx, channel)
    defer sub.Close()

    ch := sub.Channel()
    for {
        select {
        case msg, ok := <-ch:
            if !ok {
                return
            }
            handler(msg.Payload)
        case <-ctx.Done():
            slog.Info("pubsub subscriber shutting down", "channel", channel)
            return
        }
    }
}

Fan-out: publish once, and every subscriber on the channel receives its own copy of the message. Redis pub/sub does not distribute work across consumers; if you want each message processed by exactly one worker, use a list-based queue like the one above, or Redis Streams with consumer groups.

Hash Maps: Structured Objects

Hashes store field-value pairs under a single key. They are memory-efficient for objects with many fields and let you update individual fields without re-serialising the whole struct.

internal/cache/hash.go
package cache

import (
    "context"
    "fmt"

    "github.com/redis/go-redis/v9"
)

// HSet stores a struct as a Redis hash. Pass a pointer to the struct.
// go-redis uses struct tags to determine field names.
func HSet(ctx context.Context, rdb *redis.Client, key string, value any) error {
    if err := rdb.HSet(ctx, key, value).Err(); err != nil {
        return fmt.Errorf("hset %q: %w", key, err)
    }
    return nil
}

// HGetAll retrieves all fields and populates a struct via pointer.
func HGetAll[T any](ctx context.Context, rdb *redis.Client, key string) (T, error) {
    var result T
    if err := rdb.HGetAll(ctx, key).Scan(&result); err != nil {
        return result, fmt.Errorf("hgetall %q: %w", key, err)
    }
    return result, nil
}

Example struct:

type Session struct {
    UserID    string `redis:"user_id"`
    Role      string `redis:"role"`
    CreatedAt int64  `redis:"created_at"`
}

// Store
_ = cache.HSet(ctx, rdb, "session:abc123", &Session{UserID: "u1", Role: "admin", CreatedAt: time.Now().Unix()})
rdb.Expire(ctx, "session:abc123", 24*time.Hour)

// Retrieve
sess, _ := cache.HGetAll[Session](ctx, rdb, "session:abc123")

Pipelines and Transactions

A pipeline batches multiple commands into a single round-trip. A transaction (MULTI/EXEC) makes those commands atomic.

internal/cache/pipeline.go
package cache

import (
    "context"
    "fmt"
    "time"

    "github.com/redis/go-redis/v9"
)

// IncrWithExpire increments a counter and sets its TTL atomically.
// Without a transaction, a crash between INCR and EXPIRE leaks a key forever.
func IncrWithExpire(ctx context.Context, rdb *redis.Client, key string, ttl time.Duration) (int64, error) {
    var incr *redis.IntCmd

    _, err := rdb.TxPipelined(ctx, func(pipe redis.Pipeliner) error {
        incr = pipe.Incr(ctx, key)
        pipe.Expire(ctx, key, ttl)
        return nil
    })
    if err != nil {
        return 0, fmt.Errorf("incr with expire %q: %w", key, err)
    }
    return incr.Val(), nil
}
Warning

TxPipelined uses MULTI/EXEC. It does not retry on contention. If you need optimistic locking (check-then-act), use WATCH with a manual retry loop. For most counters and short atomic sequences, TxPipelined is the right tool.

Common Mistakes

No TTL on keys (memory leak)
Every key written to Redis without a TTL lives until the server hits maxmemory. What happens then depends on maxmemory-policy: the default noeviction makes writes fail with OOM errors, while allkeys-lru silently evicts. Tuning the eviction policy is a safety net; the correct fix is to always set a TTL at write time. Audit by comparing the keys and expires counts in the Keyspace section of INFO, or spot-check individual keys with redis-cli --scan piped through TTL.
Default connection pool settings under load
The default PoolSize in go-redis is 10 * runtime.GOMAXPROCS(0), which is often too low for services with high Redis concurrency. Monitor pool statistics via rdb.PoolStats() and watch for Timeouts increasing. Set PoolSize and MinIdleConns explicitly based on your measured concurrency.
Using GET/SET for structured objects
Serialising a struct to JSON and storing it as a string means every update re-writes the whole object. Use a Hash when you have a struct with multiple independent fields you may update separately. This is more memory-efficient and supports HINCRBY for numeric fields.
Not handling redis.Nil
rdb.Get(ctx, key).Result() returns redis.Nil when the key does not exist. This is not an error in the traditional sense. Always check with errors.Is(err, redis.Nil) before treating it as a real failure.
Storing entire token strings in the blocklist
When implementing JWT revocation, storing the full token string as the Redis key is wasteful. Store the jti (JWT ID) claim instead. It is short, unique per token, and contains no sensitive data.

If you want to go deeper on any of this, I offer 1:1 coaching sessions for engineers working on AI integration, cloud architecture, and platform engineering. Book a session (50 EUR / 60 min) or reach out at manuel.fedele+website@gmail.com.
