Redis and Go

Redis is one of the most widely adopted in-memory data stores in modern software infrastructure. Its speed, simplicity, and versatility make it an excellent companion for Go applications, which themselves are built around performance and concurrency. This article explores how to integrate Redis with Go, covering client libraries, common patterns, data modeling, caching strategies, pub/sub messaging, distributed locking, and production-level best practices.

Why Redis and Go Work Well Together

Go and Redis share a common philosophy: do a few things exceptionally well, and stay out of the developer's way. Go offers fast compilation, lightweight goroutines, and a rich standard library. Redis offers sub-millisecond latency, atomic operations, and a surprisingly deep set of data structures beyond simple key-value pairs. When combined, they form a backend stack that can handle millions of operations per second with minimal resource consumption.

Both technologies favor explicit behavior over magic. Redis commands are straightforward and predictable. Go's error handling is verbose but clear. This alignment in design philosophy means that the code you write to connect the two tends to be readable, testable, and easy to reason about.

Choosing a Client Library

The Go ecosystem offers several Redis client libraries. The two most prominent are go-redis (also known as redis/go-redis) and redigo. Each has a different design philosophy.

go-redis is a type-safe client that provides a dedicated method for every Redis command. It supports Redis Cluster, Sentinel, pipelining, Lua scripting, and streams out of the box. It is the most actively maintained client and is the recommended choice for new projects.

redigo is a lower-level client that uses a connection pool and a generic Do method. It is simpler in design but requires more boilerplate for type conversions.

This article uses go-redis (v9) for all examples. Install it with:

go get github.com/redis/go-redis/v9

Connecting to Redis

The most basic connection requires only an address. In production, you will typically configure timeouts, pool sizes, and authentication.

package main

import (
    "context"
    "fmt"
    "log"
    "time"

    "github.com/redis/go-redis/v9"
)

func main() {
    rdb := redis.NewClient(&redis.Options{
        Addr:         "localhost:6379",
        Password:     "",
        DB:           0,
        DialTimeout:  5 * time.Second,
        ReadTimeout:  3 * time.Second,
        WriteTimeout: 3 * time.Second,
        PoolSize:     20,
        MinIdleConns: 5,
    })
    defer rdb.Close()

    ctx := context.Background()

    if err := rdb.Ping(ctx).Err(); err != nil {
        log.Fatalf("failed to connect to redis: %v", err)
    }
    fmt.Println("connected to redis")
}

The context.Context parameter is threaded through every command. This is idiomatic Go and allows you to enforce deadlines, cancel operations, and propagate tracing metadata.

Basic Operations: Strings, Hashes, and Lists

Redis strings are the simplest data type. They can hold text, serialized JSON, or raw bytes. The SET and GET commands are the bread and butter of any Redis integration.

// SET with expiration
err := rdb.Set(ctx, "user:1001:session", "abc123", 30*time.Minute).Err()
if err != nil {
    log.Fatal(err)
}

// GET
val, err := rdb.Get(ctx, "user:1001:session").Result()
if err == redis.Nil {
    fmt.Println("key does not exist")
} else if err != nil {
    log.Fatal(err)
} else {
    fmt.Println("session:", val)
}

Notice the distinction between redis.Nil and an actual error. This is a critical pattern: a missing key is not an error condition in most applications. Always check for redis.Nil explicitly; in modern Go, errors.Is(err, redis.Nil) is the more robust form, since it also matches wrapped errors.

Hashes are ideal for storing structured objects without serialization overhead. Each field in a hash can be read or written independently.

// Store a user profile as a hash
err := rdb.HSet(ctx, "user:1001", map[string]interface{}{
    "name":    "Gabriele",
    "email":   "gabriele@example.com",
    "plan":    "pro",
    "credits": 150,
}).Err()

// Read individual fields
name, _ := rdb.HGet(ctx, "user:1001", "name").Result()
fmt.Println("name:", name)

// Read all fields at once
all, _ := rdb.HGetAll(ctx, "user:1001").Result()
for k, v := range all {
    fmt.Printf("  %s = %s\n", k, v)
}

// Atomic increment on a single field
rdb.HIncrBy(ctx, "user:1001", "credits", -10)

Lists support push and pop operations from both ends, making them useful for queues, activity feeds, and bounded logs.

// Push events to a list
rdb.LPush(ctx, "events:user:1001", "login", "page_view", "purchase")

// Trim to keep only the latest 100 events
rdb.LTrim(ctx, "events:user:1001", 0, 99)

// Read the most recent 10
events, _ := rdb.LRange(ctx, "events:user:1001", 0, 9).Result()
for _, e := range events {
    fmt.Println(e)
}

Serialization with JSON

While hashes work well for flat structures, complex nested objects are better stored as serialized JSON in a Redis string. Go's encoding/json package integrates naturally with this pattern.

type Product struct {
    ID       string   `json:"id"`
    Name     string   `json:"name"`
    Price    float64  `json:"price"`
    Tags     []string `json:"tags"`
}

func CacheProduct(ctx context.Context, rdb *redis.Client, p Product) error {
    data, err := json.Marshal(p)
    if err != nil {
        return fmt.Errorf("marshal product: %w", err)
    }
    return rdb.Set(ctx, "product:"+p.ID, data, 1*time.Hour).Err()
}

func GetCachedProduct(ctx context.Context, rdb *redis.Client, id string) (*Product, error) {
    data, err := rdb.Get(ctx, "product:"+id).Bytes()
    if err == redis.Nil {
        return nil, nil // cache miss
    }
    if err != nil {
        return nil, fmt.Errorf("get product: %w", err)
    }
    var p Product
    if err := json.Unmarshal(data, &p); err != nil {
        return nil, fmt.Errorf("unmarshal product: %w", err)
    }
    return &p, nil
}

Returning nil, nil for a cache miss is a common Go convention when the absence of data is a normal, expected outcome rather than a failure.

Pipelining for Bulk Operations

Every Redis command involves a network round trip. When you need to execute many commands in sequence, pipelining batches them into a single round trip, dramatically reducing latency.

pipe := rdb.Pipeline()

incr := pipe.Incr(ctx, "counter:page_views")
expire := pipe.Expire(ctx, "counter:page_views", 24*time.Hour)
get := pipe.Get(ctx, "config:site_name")

_, err := pipe.Exec(ctx)
if err != nil && err != redis.Nil {
    log.Fatal(err)
}

fmt.Println("page views:", incr.Val())
fmt.Println("expire set:", expire.Val())
fmt.Println("site name:", get.Val())

The commands are queued locally and sent together when Exec is called. Each command returns a future-like object whose value is populated after execution. This pattern is especially effective inside loops or batch-processing functions.

Transactions with MULTI/EXEC

Redis transactions guarantee that a group of commands executes atomically. The TxPipeline method in go-redis wraps commands in MULTI/EXEC.

func TransferCredits(ctx context.Context, rdb *redis.Client, from, to string, amount int64) error {
    txf := func(tx *redis.Tx) error {
        fromCredits, err := tx.HGet(ctx, from, "credits").Int64()
        if err != nil {
            return err
        }
        if fromCredits < amount {
            return fmt.Errorf("insufficient credits: have %d, need %d", fromCredits, amount)
        }

        _, err = tx.TxPipelined(ctx, func(pipe redis.Pipeliner) error {
            pipe.HIncrBy(ctx, from, "credits", -amount)
            pipe.HIncrBy(ctx, to, "credits", amount)
            return nil
        })
        return err
    }

    // Retry on optimistic lock failure (WATCH)
    for i := 0; i < 5; i++ {
        err := rdb.Watch(ctx, txf, from)
        if err == redis.TxFailedErr {
            continue // another client modified the watched key
        }
        return err
    }
    return fmt.Errorf("transaction failed after max retries")
}

The Watch method implements optimistic locking. If another client modifies the watched key between the WATCH and EXEC, the transaction is aborted and redis.TxFailedErr is returned. The retry loop handles this gracefully.

Caching Patterns

Caching is the most common use case for Redis. Two patterns dominate in practice: cache-aside and write-through.

In the cache-aside pattern, the application checks the cache first. On a miss, it fetches from the primary data source, stores the result in Redis, and returns it. This is the simplest and most widely used approach.

func GetUser(ctx context.Context, rdb *redis.Client, db *sql.DB, userID string) (*User, error) {
    // Try cache first
    cached, err := rdb.Get(ctx, "user:"+userID).Bytes()
    if err == nil {
        var u User
        if jerr := json.Unmarshal(cached, &u); jerr == nil {
            return &u, nil
        }
        // Corrupt cache entry: fall through and repopulate from the database.
    } else if err != redis.Nil {
        // Log the Redis error but fall through to the database.
        // Redis being unavailable should not break the application.
        log.Printf("redis error: %v", err)
    }

    // Cache miss: query the database
    u, err := queryUserFromDB(db, userID)
    if err != nil {
        return nil, err
    }

    // Populate cache asynchronously
    go func() {
        data, _ := json.Marshal(u)
        rdb.Set(context.Background(), "user:"+userID, data, 15*time.Minute)
    }()

    return u, nil
}

A crucial detail in this example is that a Redis failure does not block the request. The database is the source of truth; Redis is an optimization layer. If Redis is down, the application degrades gracefully by hitting the database directly.

The asynchronous cache population (using a goroutine) ensures that the response is not delayed by the Redis write. In high-throughput systems, you might replace this with a buffered channel that batches cache writes.

Cache Stampede Prevention

When a popular cache key expires, many goroutines may simultaneously miss the cache and all query the database. This is called a cache stampede. A common mitigation is the singleflight pattern, available in the golang.org/x/sync/singleflight package.

import "golang.org/x/sync/singleflight"

var group singleflight.Group

func GetUserSafe(ctx context.Context, rdb *redis.Client, db *sql.DB, userID string) (*User, error) {
    cacheKey := "user:" + userID

    cached, err := rdb.Get(ctx, cacheKey).Bytes()
    if err == nil {
        var u User
        if jerr := json.Unmarshal(cached, &u); jerr == nil {
            return &u, nil
        }
    }

    // singleflight ensures only one goroutine fetches from DB
    val, err, _ := group.Do(cacheKey, func() (interface{}, error) {
        u, err := queryUserFromDB(db, userID)
        if err != nil {
            return nil, err
        }
        data, _ := json.Marshal(u)
        rdb.Set(ctx, cacheKey, data, 15*time.Minute)
        return u, nil
    })
    if err != nil {
        return nil, err
    }
    return val.(*User), nil
}

Pub/Sub for Real-Time Messaging

Redis Pub/Sub provides a lightweight messaging system. Publishers send messages to channels; subscribers receive them in real time. This is useful for broadcasting events, invalidating caches across instances, or building chat systems.

// Publisher
func PublishEvent(ctx context.Context, rdb *redis.Client, channel string, payload interface{}) error {
    data, err := json.Marshal(payload)
    if err != nil {
        return err
    }
    return rdb.Publish(ctx, channel, data).Err()
}

// Subscriber
func Subscribe(ctx context.Context, rdb *redis.Client, channel string) {
    sub := rdb.Subscribe(ctx, channel)
    defer sub.Close()

    ch := sub.Channel()
    for msg := range ch {
        fmt.Printf("received on %s: %s\n", msg.Channel, msg.Payload)
    }
}

Keep in mind that Redis Pub/Sub is fire-and-forget. If a subscriber is not connected when a message is published, that message is lost. For durable messaging, consider Redis Streams instead.

Redis Streams for Durable Event Processing

Streams were introduced in Redis 5.0 and provide a log-like data structure with consumer groups, message acknowledgment, and replay capabilities. They are the right choice when you need message durability and at-least-once delivery.

// Produce a message
func ProduceEvent(ctx context.Context, rdb *redis.Client, stream string, event map[string]interface{}) error {
    return rdb.XAdd(ctx, &redis.XAddArgs{
        Stream: stream,
        Values: event,
    }).Err()
}

// Consume with a consumer group
func ConsumeEvents(ctx context.Context, rdb *redis.Client, stream, group, consumer string) {
    // Create the consumer group; a BUSYGROUP error just means it already exists
    if err := rdb.XGroupCreateMkStream(ctx, stream, group, "0").Err(); err != nil && !strings.Contains(err.Error(), "BUSYGROUP") {
        log.Printf("create group: %v", err)
    }

    for {
        results, err := rdb.XReadGroup(ctx, &redis.XReadGroupArgs{
            Group:    group,
            Consumer: consumer,
            Streams:  []string{stream, ">"},
            Count:    10,
            Block:    5 * time.Second,
        }).Result()
        if err != nil {
            if err == redis.Nil {
                continue // no new messages
            }
            log.Printf("stream read error: %v", err)
            time.Sleep(1 * time.Second)
            continue
        }

        for _, s := range results {
            for _, msg := range s.Messages {
                fmt.Printf("processing %s: %v\n", msg.ID, msg.Values)

                // Acknowledge after successful processing
                rdb.XAck(ctx, stream, group, msg.ID)
            }
        }
    }
}

The ">" in the Streams argument tells Redis to deliver only messages that have never been delivered to any consumer in the group. If a consumer crashes before acknowledging, the message remains in the group's pending entries list, where it can be inspected with XPENDING and claimed by another consumer using XCLAIM.

Distributed Locking

When multiple instances of your application need to coordinate access to a shared resource, a distributed lock is necessary. Redis provides the primitives for this through SET NX EX, but implementing it correctly requires care.

import "github.com/google/uuid"

type Lock struct {
    rdb   *redis.Client
    key   string
    token string
    ttl   time.Duration
}

func AcquireLock(ctx context.Context, rdb *redis.Client, key string, ttl time.Duration) (*Lock, error) {
    token := uuid.New().String()
    ok, err := rdb.SetNX(ctx, "lock:"+key, token, ttl).Result()
    if err != nil {
        return nil, err
    }
    if !ok {
        return nil, fmt.Errorf("lock %s is already held", key)
    }
    return &Lock{rdb: rdb, key: "lock:" + key, token: token, ttl: ttl}, nil
}

// Release uses a Lua script to ensure only the holder can release the lock
var releaseLockScript = redis.NewScript(`
    if redis.call("GET", KEYS[1]) == ARGV[1] then
        return redis.call("DEL", KEYS[1])
    end
    return 0
`)

func (l *Lock) Release(ctx context.Context) error {
    result, err := releaseLockScript.Run(ctx, l.rdb, []string{l.key}, l.token).Int64()
    if err != nil {
        return err
    }
    if result == 0 {
        return fmt.Errorf("lock was not held or already expired")
    }
    return nil
}

The Lua script in Release is essential. Without it, a race condition exists: the lock could expire between the GET and DEL commands, causing one client to delete another client's lock. The Lua script executes atomically inside Redis, eliminating this window.

For production systems requiring stronger guarantees across multiple Redis nodes, consider the Redlock algorithm or libraries like github.com/bsm/redislock which implement it.

Rate Limiting

Redis is a natural fit for rate limiting because of its atomic increment operations and key expiration. The sliding window pattern using a sorted set is one of the most precise approaches.

func AllowRequest(ctx context.Context, rdb *redis.Client, userID string, limit int64, window time.Duration) (bool, error) {
    key := "ratelimit:" + userID
    now := time.Now().UnixMicro()
    windowStart := now - window.Microseconds()

    pipe := rdb.Pipeline()
    // Remove entries outside the window
    pipe.ZRemRangeByScore(ctx, key, "0", fmt.Sprintf("%d", windowStart))
    // Add the current request. The timestamp doubles as the member, so two
    // requests in the same microsecond collapse into one entry; append a
    // unique suffix if that precision is insufficient for your traffic.
    pipe.ZAdd(ctx, key, redis.Z{Score: float64(now), Member: now})
    // Count requests in the window
    count := pipe.ZCard(ctx, key)
    // Set expiration on the key itself
    pipe.Expire(ctx, key, window)

    _, err := pipe.Exec(ctx)
    if err != nil {
        return false, err
    }

    return count.Val() <= limit, nil
}

Each request is stored as a member in a sorted set, scored by its timestamp. Old entries are pruned on every call. The total count determines whether the request is within the allowed limit. This approach provides a true sliding window with no fixed bucket boundaries. Note that the pipeline is not atomic across concurrent callers, so counts can drift slightly under contention; wrap the same commands in a Lua script if you need strict enforcement.

Health Checks and Connection Management

In production, you should monitor the health of your Redis connection and expose it to your orchestration layer (Kubernetes liveness/readiness probes, load balancer health checks, etc.).

func RedisHealthCheck(ctx context.Context, rdb *redis.Client) error {
    ctx, cancel := context.WithTimeout(ctx, 2*time.Second)
    defer cancel()
    return rdb.Ping(ctx).Err()
}

// Example HTTP handler
func healthHandler(rdb *redis.Client) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        if err := RedisHealthCheck(r.Context(), rdb); err != nil {
            w.WriteHeader(http.StatusServiceUnavailable)
            fmt.Fprintf(w, "redis: %v", err)
            return
        }
        w.WriteHeader(http.StatusOK)
        fmt.Fprint(w, "ok")
    }
}

The go-redis client manages a connection pool internally. The PoolSize option controls the maximum number of connections. Under heavy load, if all connections are in use, new commands will block until a connection is available or the context deadline is exceeded. Monitor pool statistics with rdb.PoolStats() to tune these values.

Testing with Miniredis

Integration tests that depend on a running Redis instance are fragile and slow. The miniredis library provides an in-memory Redis server written in pure Go, perfect for unit and integration tests.

go get github.com/alicebob/miniredis/v2

import (
    "context"
    "testing"
    "time"

    "github.com/alicebob/miniredis/v2"
    "github.com/redis/go-redis/v9"
)

func TestCacheProduct(t *testing.T) {
    mr := miniredis.RunT(t)

    rdb := redis.NewClient(&redis.Options{
        Addr: mr.Addr(),
    })
    defer rdb.Close()

    ctx := context.Background()
    p := Product{ID: "42", Name: "Widget", Price: 9.99, Tags: []string{"sale"}}

    if err := CacheProduct(ctx, rdb, p); err != nil {
        t.Fatalf("unexpected error: %v", err)
    }

    got, err := GetCachedProduct(ctx, rdb, "42")
    if err != nil {
        t.Fatalf("unexpected error: %v", err)
    }
    if got.Name != "Widget" {
        t.Errorf("expected Widget, got %s", got.Name)
    }

    // You can manipulate time in miniredis
    mr.FastForward(2 * time.Hour)

    got, err = GetCachedProduct(ctx, rdb, "42")
    if err != nil {
        t.Fatalf("unexpected error: %v", err)
    }
    if got != nil {
        t.Error("expected cache miss after expiration")
    }
}

Miniredis supports most Redis commands, including Lua scripting, transactions, and streams. The FastForward method lets you simulate time passing, which is invaluable for testing TTL-based logic without waiting in real time.

Redis Cluster and Sentinel

For high availability and horizontal scaling, Redis offers Cluster and Sentinel modes. The go-redis library supports both with dedicated client constructors.

// Redis Cluster
rdb := redis.NewClusterClient(&redis.ClusterOptions{
    Addrs: []string{
        "redis-node-1:6379",
        "redis-node-2:6379",
        "redis-node-3:6379",
    },
    Password:     "secret",
    PoolSize:     20,
    MinIdleConns: 5,
})

// Redis Sentinel (for automatic failover with a master-replica setup)
rdb := redis.NewFailoverClient(&redis.FailoverOptions{
    MasterName:    "mymaster",
    SentinelAddrs: []string{
        "sentinel-1:26379",
        "sentinel-2:26379",
        "sentinel-3:26379",
    },
    Password:  "secret",
    DB:        0,
    PoolSize:  20,
})

With Cluster mode, keys are automatically sharded across nodes using hash slots. Be aware that multi-key operations (like transactions involving keys on different slots) require all keys to map to the same slot. Use hash tags to control this: keys like {user:1001}.profile and {user:1001}.session will always land on the same node.
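The routing rule itself is simple: when a key contains a {...} section, Redis Cluster hashes only the content of the first such section. The small helper below mirrors that rule for illustration; it is not part of go-redis:

```go
package main

import (
	"fmt"
	"strings"
)

// hashTag returns the substring Redis Cluster actually hashes: the content
// of the first non-empty {...} section if present, else the whole key.
func hashTag(key string) string {
	if i := strings.Index(key, "{"); i >= 0 {
		if j := strings.Index(key[i+1:], "}"); j > 0 {
			return key[i+1 : i+1+j]
		}
	}
	return key
}

func main() {
	a := "{user:1001}.profile"
	b := "{user:1001}.session"
	fmt.Println(hashTag(a) == hashTag(b)) // same tag, so same slot and node
}
```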

Performance Considerations

A few practices make a significant difference in production:

Key design matters. Use a consistent naming convention such as entity:id:attribute. Keep keys short but descriptive. Avoid excessively long keys, as they consume memory and slow down lookups.

Set TTLs on everything. Memory is finite. Every key should have a time-to-live unless you have a strong reason for it to be permanent. Forgotten keys are one of the most common causes of Redis memory exhaustion.

Prefer pipelining over sequential commands. If you need to execute five or more commands, pipeline them. The difference between five round trips and one is substantial at scale.

Avoid large values. Redis is optimized for values under 100 KB. Storing multi-megabyte blobs degrades performance for all clients sharing the instance, because Redis is single-threaded for command execution.

Use SCAN instead of KEYS. The KEYS command blocks the server while it iterates over the entire keyspace. In production, always use SCAN with a cursor for iteration.

var cursor uint64
for {
    keys, nextCursor, err := rdb.Scan(ctx, cursor, "user:*", 100).Result()
    if err != nil {
        log.Fatal(err)
    }
    for _, key := range keys {
        fmt.Println(key)
    }
    cursor = nextCursor
    if cursor == 0 {
        break
    }
}

Observability

Monitoring Redis in production requires attention to several metrics. The INFO command provides detailed server statistics. From Go, you can fetch and parse them periodically.

info, err := rdb.Info(ctx, "memory", "stats", "clients").Result()
if err != nil {
    log.Fatal(err)
}
fmt.Println(info)

Key metrics to watch include used_memory (total memory consumption), connected_clients (current connection count), evicted_keys (keys removed due to memory pressure), keyspace_hits and keyspace_misses (cache hit ratio), and instantaneous_ops_per_sec (throughput).
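The INFO payload is plain text, one name:value pair per line with "#" section headers, so pulling out those fields takes only a small parser (a minimal sketch; production setups often delegate this to a metrics exporter instead):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseInfo converts the INFO text format (one "name:value" pair per line,
// with "#" section headers) into a flat map.
func parseInfo(raw string) map[string]string {
	out := make(map[string]string)
	sc := bufio.NewScanner(strings.NewReader(raw))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue // skip blanks and section headers like "# Memory"
		}
		if name, value, ok := strings.Cut(line, ":"); ok {
			out[name] = value
		}
	}
	return out
}

func main() {
	sample := "# Memory\r\nused_memory:1048576\r\nkeyspace_hits:420\r\nkeyspace_misses:42\r\n"
	m := parseInfo(sample)
	fmt.Println("used_memory:", m["used_memory"])
}
```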

On the Go side, monitor pool statistics to detect connection exhaustion:

stats := rdb.PoolStats()
fmt.Printf("hits=%d misses=%d timeouts=%d total=%d idle=%d stale=%d\n",
    stats.Hits, stats.Misses, stats.Timeouts,
    stats.TotalConns, stats.IdleConns, stats.StaleConns,
)

A Complete Example: Session Store

Bringing several concepts together, here is a session store implementation that uses hashes for structured data, TTLs for automatic expiration, and proper error handling throughout.

package session

import (
    "context"
    "fmt"
    "time"

    "github.com/google/uuid"
    "github.com/redis/go-redis/v9"
)

const sessionTTL = 24 * time.Hour

type Store struct {
    rdb *redis.Client
}

func NewStore(rdb *redis.Client) *Store {
    return &Store{rdb: rdb}
}

func (s *Store) Create(ctx context.Context, userID string, metadata map[string]string) (string, error) {
    sessionID := uuid.New().String()
    key := "session:" + sessionID

    fields := make(map[string]interface{}, len(metadata)+2)
    fields["user_id"] = userID
    fields["created_at"] = time.Now().UTC().Format(time.RFC3339)
    for k, v := range metadata {
        fields[k] = v
    }

    pipe := s.rdb.Pipeline()
    pipe.HSet(ctx, key, fields)
    pipe.Expire(ctx, key, sessionTTL)
    if _, err := pipe.Exec(ctx); err != nil {
        return "", fmt.Errorf("create session: %w", err)
    }
    return sessionID, nil
}

func (s *Store) Get(ctx context.Context, sessionID string) (map[string]string, error) {
    key := "session:" + sessionID
    data, err := s.rdb.HGetAll(ctx, key).Result()
    if err != nil {
        return nil, fmt.Errorf("get session: %w", err)
    }
    if len(data) == 0 {
        return nil, nil // session not found or expired
    }
    // Refresh TTL on access
    s.rdb.Expire(ctx, key, sessionTTL)
    return data, nil
}

func (s *Store) Destroy(ctx context.Context, sessionID string) error {
    return s.rdb.Del(ctx, "session:"+sessionID).Err()
}

This implementation refreshes the TTL on every access, implementing a sliding expiration window. The session automatically disappears from Redis after 24 hours of inactivity, with no cleanup jobs or garbage collection required.

Conclusion

Redis and Go form a powerful combination for building high-performance backend systems. The go-redis library provides a complete, type-safe interface to the full breadth of Redis capabilities. By understanding the patterns covered here, from basic caching and serialization to distributed locking, streams, and rate limiting, you can build systems that are fast, resilient, and maintainable. The key principle to carry forward is that Redis should always be treated as an optimization layer, never as a source of truth. Design your systems so that Redis being temporarily unavailable causes degraded performance, not data loss or application failure.