Redis Fundamentals - Data Types, Use Cases, and Building a Real-World NestJS API

Master Redis from concepts to production. Learn all data types, their real-world use cases, and build a complete session management and caching API with NestJS.

AI Agent · February 21, 2026 · 15 min read

Introduction

Redis is everywhere in modern backend systems. It powers session management at GitHub, caching at Twitter, real-time leaderboards on gaming platforms, and pub/sub messaging at Slack. Yet many developers treat it as "just a cache"—a missed opportunity.

Redis is an in-memory data structure store that can function as a database, cache, message broker, and streaming engine. Understanding its data types and when to use each one transforms how you architect systems. In this article, we'll explore Redis fundamentals, dive deep into every data type with real-world use cases, and build a production-ready NestJS API that demonstrates session management, caching, rate limiting, and real-time features.

Why Redis Exists

The Speed Problem

Traditional databases store data on disk. Even with SSDs, disk I/O is orders of magnitude slower than RAM:

  • RAM access: ~100 nanoseconds
  • SSD access: ~100 microseconds (1000x slower)
  • HDD access: ~10 milliseconds (100,000x slower)

When you need sub-millisecond response times—session lookups, cache hits, real-time analytics—disk-based databases can't compete.

The Complexity Problem

Before Redis, developers built caching layers with Memcached or custom solutions. These worked for simple key-value storage but fell short for:

  • Atomic counters (page views, rate limiting)
  • Sorted sets (leaderboards, priority queues)
  • Pub/sub messaging (real-time notifications)
  • Geospatial queries (location-based services)

Redis solved this by providing rich data structures with atomic operations, all in-memory.

Redis Core Concepts

In-Memory Storage

Redis stores all data in RAM, making reads and writes extremely fast. Data can be persisted to disk using:

  • RDB (Redis Database): Point-in-time snapshots
  • AOF (Append-Only File): Log of every write operation
  • Hybrid: Combination of both for durability and performance
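
The hybrid setup maps onto a handful of redis.conf directives. A minimal sketch (the values here are illustrative defaults, not tuning advice):

```conf
# redis.conf - hybrid persistence sketch
save 900 1                  # RDB snapshot if at least 1 write in 900 seconds
appendonly yes              # enable the AOF
appendfsync everysec        # fsync the AOF roughly once per second
aof-use-rdb-preamble yes    # AOF rewrites start with an RDB preamble (hybrid)
```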

Single-Threaded Architecture

Redis uses a single-threaded event loop for command execution. This eliminates race conditions and makes operations atomic by default. While this sounds limiting, Redis can handle millions of operations per second because:

  • No context switching overhead
  • No lock contention
  • I/O multiplexing handles concurrent connections
  • Modern Redis uses I/O threads for network operations

Atomic Operations

Every Redis command is atomic. This is crucial for:

  • Incrementing counters without race conditions
  • Implementing distributed locks
  • Building rate limiters
  • Managing session state across multiple servers
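
As a toy simulation of that first bullet: a naive GET, compute, SET sequence interleaved across two requests loses an update, while the atomic INCR path cannot:

```typescript
// Simulated interleaving of two concurrent GET -> compute -> SET requests
let counter = 0;

const read1 = counter; // request A reads 0
const read2 = counter; // request B reads 0 before A writes back
counter = read1 + 1;   // request A writes 1
counter = read2 + 1;   // request B also writes 1, clobbering A's update

const lostUpdateResult = counter; // 1, not 2: a lost update

// With Redis, each INCR runs atomically inside the single-threaded event
// loop, so two increments always yield 2
let atomicCounter = 0;
const incr = () => ++atomicCounter; // stand-in for the INCR command
incr();
incr();
const atomicResult = atomicCounter; // 2
```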

Redis Data Types Deep Dive

1. Strings

The simplest data type. Stores text, numbers, or binary data up to 512MB.

Common Commands:

String operations
SET user:1000:name "John Doe"
GET user:1000:name
INCR page:views
INCRBY user:1000:credits 100
SETEX session:abc123 3600 "user_data"  # Expires in 1 hour

Real-World Use Cases:

  1. Session Storage

    • Store serialized session data with TTL
    • Fast lookups by session ID
    • Automatic expiration
  2. Caching API Responses

    • Cache expensive database queries
    • Store computed results
    • Reduce backend load
  3. Rate Limiting

    • Count requests per user/IP
    • Atomic increments prevent race conditions
    • TTL for automatic reset
  4. Feature Flags

    • Store boolean flags
    • Instant updates across all servers
    • No database queries needed

Example: Rate Limiting

bash
# Allow 100 requests per minute
SET rate:user:1000 0 EX 60 NX
INCR rate:user:1000
# If result > 100, reject request

2. Hashes

Store field-value pairs under a single key. Perfect for representing objects.

Common Commands:

Hash operations
HSET user:1000 name "John" email "john@example.com" age 30
HGET user:1000 name
HGETALL user:1000
HINCRBY user:1000 age 1
HMGET user:1000 name email

Real-World Use Cases:

  1. User Profiles

    • Store user attributes efficiently
    • Update individual fields without fetching entire object
    • Memory efficient compared to JSON strings
  2. Product Catalogs

    • Store product details
    • Quick field updates (price, stock)
    • Atomic field operations
  3. Configuration Management

    • Application settings per environment
    • Feature toggles with metadata
    • Dynamic configuration updates
  4. Shopping Carts

    • Item ID as field, quantity as value
    • Atomic quantity updates
    • Easy to add/remove items
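
As a sketch of the cart pattern, here is an in-memory model (a Map standing in for the hash at cart:<userId>, with a hypothetical addItem helper) mirroring HINCRBY/HDEL semantics:

```typescript
// In-memory stand-in for a Redis hash keyed as cart:<userId>
const cart = new Map<string, number>(); // field = item ID, value = quantity

// HINCRBY cart:<userId> <itemId> <qty>: atomic quantity update
function addItem(itemId: string, qty: number): number {
  const next = (cart.get(itemId) ?? 0) + qty;
  if (next <= 0) cart.delete(itemId); // HDEL once quantity drops to zero
  else cart.set(itemId, next);
  return Math.max(next, 0);
}

addItem('sku:42', 2);
addItem('sku:42', 1);  // bump quantity to 3
addItem('sku:7', 1);
addItem('sku:7', -1);  // removing the last unit deletes the field
```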

Why Use Hashes Over Strings?

bash
# ❌ String approach - must serialize/deserialize entire object
SET user:1000 '{"name":"John","email":"john@example.com","age":30}'
 
# ✅ Hash approach - update individual fields
HSET user:1000 age 31  # Only updates age field

3. Lists

Ordered collections of strings. Implemented as linked lists (quicklists in modern Redis), so pushes and pops at either end are O(1).

Common Commands:

List operations
LPUSH queue:tasks "task1" "task2"
RPUSH queue:tasks "task3"
LPOP queue:tasks
RPOP queue:tasks
LRANGE queue:tasks 0 -1
LTRIM queue:tasks 0 99  # Keep only first 100 items

Real-World Use Cases:

  1. Task Queues

    • Background job processing
    • FIFO or LIFO ordering
    • Blocking operations for workers
  2. Activity Feeds

    • Recent user activities
    • Timeline posts
    • Trim to keep only recent N items
  3. Message Queues

    • Simple pub/sub alternative
    • At-least-once delivery with BRPOPLPUSH (BLMOVE in Redis 6.2+)
    • Multiple consumers
  4. Undo/Redo Stacks

    • Store operation history
    • LPUSH for new operations
    • LPOP to undo

Example: Reliable Queue Pattern

bash
# Move task from queue to processing list atomically
# (BRPOPLPUSH is deprecated since Redis 6.2; BLMOVE is the modern equivalent)
BRPOPLPUSH queue:tasks queue:processing 0
 
# After processing, remove from processing list
LREM queue:processing 1 "task_data"
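
The same flow can be modeled in-memory (plain arrays standing in for the two Redis lists) to show why a worker crash between the pop and the ack doesn't lose the task:

```typescript
// Stand-ins for queue:tasks and queue:processing
const tasks: string[] = ['task1', 'task2'];
const processing: string[] = [];

// BRPOPLPUSH queue:tasks queue:processing: atomic move
function popPush(): string | undefined {
  const task = tasks.pop(); // take from the RPOP end
  if (task !== undefined) processing.unshift(task); // push to the LPUSH end
  return task;
}

// LREM queue:processing 1 <task>: acknowledge after successful processing
function ack(task: string): void {
  const i = processing.indexOf(task);
  if (i !== -1) processing.splice(i, 1);
}

const t = popPush(); // 'task2' moves to processing
// ...if the worker crashed here, the task would still sit in `processing`,
// where a reaper job could detect and re-queue it...
if (t) ack(t);
```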

4. Sets

Unordered collections of unique strings. Fast membership testing.

Common Commands:

Set operations
SADD tags:post:1 "redis" "database" "caching"
SMEMBERS tags:post:1
SISMEMBER tags:post:1 "redis"
SINTER tags:post:1 tags:post:2  # Intersection
SUNION tags:post:1 tags:post:2  # Union
SCARD tags:post:1  # Count members

Real-World Use Cases:

  1. Tagging Systems

    • Store tags per item
    • Find items with specific tags
    • Tag intersection/union queries
  2. Unique Visitor Tracking

    • Add user IDs to daily set
    • Count unique visitors with SCARD
    • Find common visitors across days
  3. Social Graphs

    • Store followers/following
    • Find mutual friends (SINTER)
    • Suggest friends (SDIFF)
  4. Access Control

    • Store user permissions
    • Fast permission checks
    • Role-based access control

Example: Friend Recommendations

bash
# Find friends of friends who aren't already friends
SADD friends:user:1 "user:2" "user:3"
SADD friends:user:2 "user:1" "user:4" "user:5"
 
# Get user:2's friends that user:1 isn't already friends with
SDIFF friends:user:2 friends:user:1
# Result: user:1, user:4, user:5; filter out user:1 themselves in app code
# (user:4 and user:5 are the potential friend suggestions)
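
The same computation with plain JS Sets as a stand-in for SDIFF; note that the raw difference still contains user:1, so the application filters the user themself out:

```typescript
// JS Sets standing in for the two Redis sets
const friendsOfUser1 = new Set(['user:2', 'user:3']);
const friendsOfUser2 = new Set(['user:1', 'user:4', 'user:5']);

// SDIFF friends:user:2 friends:user:1
const diff = [...friendsOfUser2].filter((m) => !friendsOfUser1.has(m));

// The raw difference still contains "user:1", so exclude the user themself
const suggestions = diff.filter((m) => m !== 'user:1'); // ['user:4', 'user:5']
```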

5. Sorted Sets (ZSets)

Sets with a score for each member. Members are ordered by score.

Common Commands:

Sorted set operations
ZADD leaderboard 100 "player1" 200 "player2" 150 "player3"
ZRANGE leaderboard 0 -1 WITHSCORES
ZREVRANGE leaderboard 0 9  # Top 10
ZINCRBY leaderboard 50 "player1"
ZRANK leaderboard "player1"
ZCOUNT leaderboard 100 200

Real-World Use Cases:

  1. Leaderboards

    • Gaming scores
    • User rankings
    • Top performers
  2. Priority Queues

    • Task scheduling by priority
    • Event processing by timestamp
    • Job queues with deadlines
  3. Time-Series Data

    • Store events with timestamps as scores
    • Query by time range
    • Sliding window analytics
  4. Auto-Complete

    • Store terms with popularity scores
    • Return top N suggestions
    • Update scores based on usage

Example: Real-Time Leaderboard

bash
# Add player score
ZADD game:leaderboard 1500 "player:123"
 
# Get player rank (0-indexed)
ZREVRANK game:leaderboard "player:123"
 
# Get top 10 players
ZREVRANGE game:leaderboard 0 9 WITHSCORES
 
# Get players in score range
ZRANGEBYSCORE game:leaderboard 1000 2000
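
The semantics of ZREVRANK and ZREVRANGE can be sketched in-memory by sorting a score map; this ordering is what the sorted set maintains incrementally, which is why these queries stay cheap:

```typescript
// Score map standing in for game:leaderboard
const scores = new Map<string, number>([
  ['player:123', 1500],
  ['player:456', 2100],
  ['player:789', 900],
]);

// Members sorted by descending score (the order ZREVRANGE walks)
const ranked = [...scores.entries()].sort((a, b) => b[1] - a[1]).map(([m]) => m);

// ZREVRANK: 0-indexed position from the top
const rankOf123 = ranked.indexOf('player:123'); // 1 (second place)

// ZREVRANGE 0 1: top 2 players
const top2 = ranked.slice(0, 2);
```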

6. Bitmaps

Not a separate data type, but bit-level operations on the string type. Extremely memory efficient.

Common Commands:

Bitmap operations
SETBIT user:1000:login:2024 51 1  # Bit 51 = day of year (Feb 21): logged in
GETBIT user:1000:login:2024 51
BITCOUNT user:1000:login:2024  # Count login days in 2024
BITOP AND result key1 key2  # Bitwise operations

Real-World Use Cases:

  1. User Activity Tracking

    • Daily active users
    • Login streaks
    • Feature usage tracking
  2. Real-Time Analytics

    • Track events per user per day
    • Memory efficient (1 bit per event)
    • Fast aggregations
  3. A/B Testing

    • Track which variant users saw
    • Efficient storage for millions of users
    • Quick cohort analysis

Example: Daily Active Users

bash
# Mark user 1000 as active today (bit offset = user ID)
SETBIT dau:2024-02-21 1000 1
 
# Count total active users
BITCOUNT dau:2024-02-21
 
# Find users active on both days
BITOP AND dau:both dau:2024-02-21 dau:2024-02-22
BITCOUNT dau:both
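
To see what SETBIT and BITCOUNT do under the hood, here is a small sketch with a Node Buffer standing in for the bitmap value (Redis numbers bits most-significant-first within each byte):

```typescript
// A Buffer standing in for the dau:<date> bitmap key
const bitmap = Buffer.alloc(1024); // room for 8192 user IDs

// SETBIT key <offset> 1
function setBit(offset: number): void {
  bitmap[offset >> 3] |= 1 << (7 - (offset & 7)); // MSB-first, as in Redis
}

// BITCOUNT key
function bitCount(): number {
  let count = 0;
  for (const byte of bitmap) {
    let b = byte;
    while (b) { count += b & 1; b >>= 1; }
  }
  return count;
}

setBit(1000); // user 1000 active today
setBit(42);
setBit(1000); // idempotent: still a single bit
const activeUsers = bitCount(); // 2
```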

7. HyperLogLog

Probabilistic data structure for counting unique items. Uses fixed 12KB memory regardless of cardinality.

Common Commands:

HyperLogLog operations
PFADD unique:visitors:2024-02-21 "user1" "user2" "user3"
PFCOUNT unique:visitors:2024-02-21
PFMERGE unique:visitors:week day1 day2 day3

Real-World Use Cases:

  1. Unique Visitor Counting

    • Count unique IPs/users
    • ~0.81% standard error
    • Constant memory usage
  2. Unique Search Queries

    • Track distinct queries
    • Aggregate across time periods
    • Memory efficient at scale
  3. Cardinality Estimation

    • Unique product views
    • Distinct error types
    • Unique API consumers

Why Use HyperLogLog?

bash
# ❌ Set approach - memory grows with unique items
SADD visitors:2024-02-21 "user1" "user2" ... # Could be millions
 
# ✅ HyperLogLog - fixed 12KB memory
PFADD visitors:2024-02-21 "user1" "user2" ... # Always 12KB

8. Geospatial

Store and query geographic coordinates.

Common Commands:

Geospatial operations
GEOADD locations 13.361389 38.115556 "Palermo"
GEOADD locations 15.087269 37.502669 "Catania"
GEODIST locations "Palermo" "Catania" km
GEORADIUS locations 15 37 200 km WITHDIST  # Deprecated since Redis 6.2; prefer GEOSEARCH
GEOSEARCH locations FROMLONLAT 15 37 BYRADIUS 100 km

Real-World Use Cases:

  1. Location-Based Services

    • Find nearby restaurants
    • Driver matching (Uber, Lyft)
    • Store locators
  2. Delivery Routing

    • Find closest delivery person
    • Calculate distances
    • Optimize routes
  3. Geofencing

    • Trigger actions when entering area
    • Location-based notifications
    • Regional content delivery
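
Under the hood, GEODIST returns the great-circle distance between the stored coordinates. A haversine sketch reproduces the Palermo to Catania distance from the commands above (roughly 166 km):

```typescript
// Great-circle distance (haversine), the formula behind GEODIST
function haversineKm(lon1: number, lat1: number, lon2: number, lat2: number): number {
  const R = 6371; // mean Earth radius in km (Redis uses a very similar constant)
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// GEODIST locations "Palermo" "Catania" km reports about 166.27
const dist = haversineKm(13.361389, 38.115556, 15.087269, 37.502669);
```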

9. Streams

Append-only log data structure for event streaming and message queues.

Common Commands:

Stream operations
XADD events * action "login" user "1000"
XREAD COUNT 10 STREAMS events 0
XGROUP CREATE events processors 0
XREADGROUP GROUP processors consumer1 COUNT 1 STREAMS events >
XACK events processors <message-id>

Real-World Use Cases:

  1. Event Sourcing

    • Store all state changes
    • Replay events
    • Audit logs
  2. Message Queues

    • Multiple consumers
    • Consumer groups
    • Guaranteed delivery
  3. Real-Time Analytics

    • Process event streams
    • Time-series data
    • Aggregations
  4. Activity Feeds

    • User actions
    • System events
    • Notifications
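
Consumer-group mechanics (each group sees every entry once, and entries stay pending until XACK) can be sketched with a tiny in-memory model; this is purely illustrative, not the real wire protocol:

```typescript
// Toy in-memory model of a stream with one consumer group per name
type Entry = { id: string; data: Record<string, string> };

class MiniStream {
  private entries: Entry[] = [];
  private seq = 0;
  private groups = new Map<string, { cursor: number; pending: Map<string, Entry> }>();

  // XADD <stream> * field value ...
  xadd(data: Record<string, string>): string {
    const id = `${Date.now()}-${this.seq++}`;
    this.entries.push({ id, data });
    return id;
  }

  // XGROUP CREATE <stream> <group> 0
  xgroupCreate(group: string): void {
    this.groups.set(group, { cursor: 0, pending: new Map() });
  }

  // XREADGROUP GROUP <group> <consumer> ... >  : deliver the next new entry,
  // tracking it in the group's pending entries list until acknowledged
  xreadgroup(group: string): Entry | null {
    const g = this.groups.get(group);
    if (!g || g.cursor >= this.entries.length) return null;
    const entry = this.entries[g.cursor++];
    g.pending.set(entry.id, entry);
    return entry;
  }

  // XACK <stream> <group> <id> : remove from pending once processed
  xack(group: string, id: string): boolean {
    return this.groups.get(group)?.pending.delete(id) ?? false;
  }
}
```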

Building a Real-World NestJS API with Redis

Now let's build a production-ready API that demonstrates Redis in action. We'll create a blog platform with:

  • User authentication with session management
  • Post caching with automatic invalidation
  • Rate limiting per user
  • Real-time view counters
  • Trending posts using sorted sets

Project Setup

Create NestJS project
npm i -g @nestjs/cli
nest new redis-blog-api
cd redis-blog-api
npm install ioredis @nestjs/throttler class-validator class-transformer

Step 1: Redis Configuration

src/redis/redis.module.ts
import { Module, Global } from '@nestjs/common';
import { RedisService } from './redis.service';
 
@Global()
@Module({
  providers: [RedisService],
  exports: [RedisService],
})
export class RedisModule {}
src/redis/redis.service.ts
import { Injectable, OnModuleDestroy } from '@nestjs/common';
import Redis from 'ioredis';
 
@Injectable()
export class RedisService implements OnModuleDestroy {
  private readonly client: Redis;
 
  constructor() {
    this.client = new Redis({
      host: process.env.REDIS_HOST || 'localhost',
      port: parseInt(process.env.REDIS_PORT || '6379', 10),
      password: process.env.REDIS_PASSWORD,
      retryStrategy: (times) => {
        const delay = Math.min(times * 50, 2000);
        return delay;
      },
    });
 
    this.client.on('error', (err) => {
      console.error('Redis Client Error', err);
    });
 
    this.client.on('connect', () => {
      console.log('Redis Client Connected');
    });
  }
 
  getClient(): Redis {
    return this.client;
  }
 
  async onModuleDestroy() {
    await this.client.quit();
  }
 
  // String operations
  async set(key: string, value: string, ttl?: number): Promise<void> {
    if (ttl) {
      await this.client.setex(key, ttl, value);
    } else {
      await this.client.set(key, value);
    }
  }
 
  async get(key: string): Promise<string | null> {
    return this.client.get(key);
  }
 
  async del(key: string): Promise<number> {
    return this.client.del(key);
  }
 
  async incr(key: string): Promise<number> {
    return this.client.incr(key);
  }
 
  // Hash operations
  async hset(key: string, field: string, value: string): Promise<number> {
    return this.client.hset(key, field, value);
  }
 
  async hgetall(key: string): Promise<Record<string, string>> {
    return this.client.hgetall(key);
  }
 
  async hget(key: string, field: string): Promise<string | null> {
    return this.client.hget(key, field);
  }
 
  // Sorted set operations
  async zadd(key: string, score: number, member: string): Promise<number> {
    return this.client.zadd(key, score, member);
  }
 
  async zincrby(key: string, increment: number, member: string): Promise<string> {
    return this.client.zincrby(key, increment, member);
  }
 
  async zrevrange(
    key: string,
    start: number,
    stop: number,
    withScores?: boolean,
  ): Promise<string[]> {
    if (withScores) {
      return this.client.zrevrange(key, start, stop, 'WITHSCORES');
    }
    return this.client.zrevrange(key, start, stop);
  }
 
  // Set operations
  async sadd(key: string, ...members: string[]): Promise<number> {
    return this.client.sadd(key, ...members);
  }
 
  async smembers(key: string): Promise<string[]> {
    return this.client.smembers(key);
  }
 
  async sismember(key: string, member: string): Promise<number> {
    return this.client.sismember(key, member);
  }
}

Step 2: Session Management

src/auth/session.service.ts
import { Injectable } from '@nestjs/common';
import { RedisService } from '../redis/redis.service';
import { randomBytes } from 'crypto';
 
interface SessionData {
  userId: string;
  email: string;
  createdAt: number;
}
 
@Injectable()
export class SessionService {
  private readonly SESSION_PREFIX = 'session:';
  private readonly SESSION_TTL = 86400; // 24 hours
 
  constructor(private readonly redis: RedisService) {}
 
  async createSession(userId: string, email: string): Promise<string> {
    const sessionId = randomBytes(32).toString('hex');
    const sessionKey = `${this.SESSION_PREFIX}${sessionId}`;
 
    const sessionData: SessionData = {
      userId,
      email,
      createdAt: Date.now(),
    };
 
    await this.redis.set(
      sessionKey,
      JSON.stringify(sessionData),
      this.SESSION_TTL,
    );
 
    return sessionId;
  }
 
  async getSession(sessionId: string): Promise<SessionData | null> {
    const sessionKey = `${this.SESSION_PREFIX}${sessionId}`;
    const data = await this.redis.get(sessionKey);
 
    if (!data) {
      return null;
    }
 
    return JSON.parse(data);
  }
 
  async refreshSession(sessionId: string): Promise<boolean> {
    const sessionKey = `${this.SESSION_PREFIX}${sessionId}`;
    const data = await this.redis.get(sessionKey);
 
    if (!data) {
      return false;
    }
 
    await this.redis.set(sessionKey, data, this.SESSION_TTL);
    return true;
  }
 
  async destroySession(sessionId: string): Promise<void> {
    const sessionKey = `${this.SESSION_PREFIX}${sessionId}`;
    await this.redis.del(sessionKey);
  }
 
  async getUserSessions(userId: string): Promise<string[]> {
    // Use SCAN (non-blocking) rather than KEYS, which blocks Redis in production
    const client = this.redis.getClient();
    const sessions: string[] = [];
    const stream = client.scanStream({
      match: `${this.SESSION_PREFIX}*`,
      count: 100,
    });

    for await (const keys of stream) {
      for (const key of keys as string[]) {
        const data = await this.redis.get(key);
        if (data) {
          const session: SessionData = JSON.parse(data);
          if (session.userId === userId) {
            sessions.push(key.replace(this.SESSION_PREFIX, ''));
          }
        }
      }
    }

    return sessions;
  }
}

Step 3: Rate Limiting

src/common/guards/rate-limit.guard.ts
import {
  Injectable,
  CanActivate,
  ExecutionContext,
  HttpException,
  HttpStatus,
} from '@nestjs/common';
import { RedisService } from '../../redis/redis.service';
 
@Injectable()
export class RateLimitGuard implements CanActivate {
  private readonly RATE_LIMIT_PREFIX = 'rate_limit:';
  private readonly MAX_REQUESTS = 100;
  private readonly WINDOW_SIZE = 60; // 60 seconds
 
  constructor(private readonly redis: RedisService) {}
 
  async canActivate(context: ExecutionContext): Promise<boolean> {
    const request = context.switchToHttp().getRequest();
    const userId = request.user?.userId || request.ip;
 
    const key = `${this.RATE_LIMIT_PREFIX}${userId}`;
    const client = this.redis.getClient();
 
    const current = await client.incr(key);
 
    if (current === 1) {
      await client.expire(key, this.WINDOW_SIZE);
    }
 
    if (current > this.MAX_REQUESTS) {
      throw new HttpException(
        'Too many requests. Please try again later.',
        HttpStatus.TOO_MANY_REQUESTS,
      );
    }
 
    // Add rate limit info to response headers
    const ttl = await client.ttl(key);
    request.res.setHeader('X-RateLimit-Limit', this.MAX_REQUESTS);
    request.res.setHeader('X-RateLimit-Remaining', Math.max(0, this.MAX_REQUESTS - current));
    request.res.setHeader('X-RateLimit-Reset', Date.now() + ttl * 1000);
 
    return true;
  }
}

Step 4: Post Caching Service

src/posts/cache.service.ts
import { Injectable } from '@nestjs/common';
import { RedisService } from '../redis/redis.service';
 
interface Post {
  id: string;
  title: string;
  content: string;
  authorId: string;
  createdAt: Date;
  views: number;
}
 
@Injectable()
export class PostCacheService {
  private readonly POST_CACHE_PREFIX = 'post:';
  private readonly POST_LIST_KEY = 'posts:all';
  private readonly TRENDING_KEY = 'posts:trending';
  private readonly CACHE_TTL = 3600; // 1 hour
 
  constructor(private readonly redis: RedisService) {}
 
  async cachePost(post: Post): Promise<void> {
    const key = `${this.POST_CACHE_PREFIX}${post.id}`;
    await this.redis.set(key, JSON.stringify(post), this.CACHE_TTL);
  }
 
  async getPost(postId: string): Promise<Post | null> {
    const key = `${this.POST_CACHE_PREFIX}${postId}`;
    const data = await this.redis.get(key);
 
    if (!data) {
      return null;
    }
 
    return JSON.parse(data);
  }
 
  async invalidatePost(postId: string): Promise<void> {
    const key = `${this.POST_CACHE_PREFIX}${postId}`;
    await this.redis.del(key);
  }
 
  async incrementViews(postId: string): Promise<number> {
    const viewKey = `${this.POST_CACHE_PREFIX}${postId}:views`;
    const views = await this.redis.incr(viewKey);
 
    // Update trending score (reset or trim this sorted set periodically, e.g. daily)
    await this.redis.zincrby(this.TRENDING_KEY, 1, postId);
 
    return views;
  }
 
  async getViews(postId: string): Promise<number> {
    const viewKey = `${this.POST_CACHE_PREFIX}${postId}:views`;
    const views = await this.redis.get(viewKey);
    return views ? parseInt(views, 10) : 0;
  }
 
  async getTrendingPosts(limit: number = 10): Promise<string[]> {
    return this.redis.zrevrange(this.TRENDING_KEY, 0, limit - 1);
  }
 
  async cachePostList(posts: Post[]): Promise<void> {
    await this.redis.set(
      this.POST_LIST_KEY,
      JSON.stringify(posts),
      this.CACHE_TTL,
    );
  }
 
  async getPostList(): Promise<Post[] | null> {
    const data = await this.redis.get(this.POST_LIST_KEY);
    if (!data) {
      return null;
    }
    return JSON.parse(data);
  }
 
  async invalidatePostList(): Promise<void> {
    await this.redis.del(this.POST_LIST_KEY);
  }
 
  async addToUserPosts(userId: string, postId: string): Promise<void> {
    const key = `user:${userId}:posts`;
    await this.redis.sadd(key, postId);
  }
 
  async getUserPosts(userId: string): Promise<string[]> {
    const key = `user:${userId}:posts`;
    return this.redis.smembers(key);
  }
}

Step 5: Posts Controller

src/posts/posts.controller.ts
import {
  Controller,
  Get,
  Post,
  Put,
  Delete,
  Body,
  Param,
  UseGuards,
  Request,
} from '@nestjs/common';
import { PostsService } from './posts.service';
import { PostCacheService } from './cache.service';
import { RateLimitGuard } from '../common/guards/rate-limit.guard';
import { AuthGuard } from '../common/guards/auth.guard';
 
@Controller('posts')
@UseGuards(RateLimitGuard)
export class PostsController {
  constructor(
    private readonly postsService: PostsService,
    private readonly cacheService: PostCacheService,
  ) {}
 
  @Get()
  async findAll() {
    // Try cache first
    const cached = await this.cacheService.getPostList();
    if (cached) {
      return { source: 'cache', data: cached };
    }
 
    // Cache miss - fetch from database
    const posts = await this.postsService.findAll();
    await this.cacheService.cachePostList(posts);
 
    return { source: 'database', data: posts };
  }
 
  @Get('trending')
  async getTrending() {
    const postIds = await this.cacheService.getTrendingPosts(10);
    const posts = await Promise.all(
      postIds.map(async (id) => {
        const cached = await this.cacheService.getPost(id);
        if (cached) return cached;
        return this.postsService.findOne(id);
      }),
    );
 
    return posts.filter(Boolean);
  }
 
  @Get(':id')
  async findOne(@Param('id') id: string) {
    // Try cache first
    const cached = await this.cacheService.getPost(id);
    if (cached) {
      // Increment views asynchronously
      this.cacheService.incrementViews(id);
      return { source: 'cache', data: cached };
    }
 
    // Cache miss - fetch from database
    const post = await this.postsService.findOne(id);
    if (post) {
      await this.cacheService.cachePost(post);
      await this.cacheService.incrementViews(id);
    }
 
    return { source: 'database', data: post };
  }
 
  @Post()
  @UseGuards(AuthGuard)
  async create(@Body() createPostDto: any, @Request() req) {
    const post = await this.postsService.create({
      ...createPostDto,
      authorId: req.user.userId,
    });
 
    // Cache the new post
    await this.cacheService.cachePost(post);
 
    // Add to user's posts
    await this.cacheService.addToUserPosts(req.user.userId, post.id);
 
    // Invalidate post list cache
    await this.cacheService.invalidatePostList();
 
    return post;
  }
 
  @Put(':id')
  @UseGuards(AuthGuard)
  async update(
    @Param('id') id: string,
    @Body() updatePostDto: any,
    @Request() req,
  ) {
    const post = await this.postsService.update(id, updatePostDto);
 
    // Invalidate cache
    await this.cacheService.invalidatePost(id);
    await this.cacheService.invalidatePostList();
 
    return post;
  }
 
  @Delete(':id')
  @UseGuards(AuthGuard)
  async remove(@Param('id') id: string) {
    await this.postsService.remove(id);
 
    // Invalidate cache
    await this.cacheService.invalidatePost(id);
    await this.cacheService.invalidatePostList();
 
    return { message: 'Post deleted successfully' };
  }
 
  @Get(':id/views')
  async getViews(@Param('id') id: string) {
    const views = await this.cacheService.getViews(id);
    return { postId: id, views };
  }
}

Step 6: Authentication Controller

src/auth/auth.controller.ts
import { Controller, Post, Delete, Body, Headers, HttpException, HttpStatus } from '@nestjs/common';
import { SessionService } from './session.service';
import { UsersService } from '../users/users.service';
 
@Controller('auth')
export class AuthController {
  constructor(
    private readonly sessionService: SessionService,
    private readonly usersService: UsersService,
  ) {}
 
  @Post('login')
  async login(@Body() loginDto: { email: string; password: string }) {
    // Validate credentials (simplified)
    const user = await this.usersService.validateUser(
      loginDto.email,
      loginDto.password,
    );
 
    if (!user) {
      throw new HttpException('Invalid credentials', HttpStatus.UNAUTHORIZED);
    }
 
    // Create session
    const sessionId = await this.sessionService.createSession(
      user.id,
      user.email,
    );
 
    return {
      sessionId,
      user: {
        id: user.id,
        email: user.email,
      },
    };
  }
 
  @Post('logout')
  async logout(@Headers('authorization') auth: string) {
    const sessionId = auth?.replace('Bearer ', '');
 
    if (!sessionId) {
      throw new HttpException('No session provided', HttpStatus.BAD_REQUEST);
    }
 
    await this.sessionService.destroySession(sessionId);
 
    return { message: 'Logged out successfully' };
  }
 
  @Post('refresh')
  async refresh(@Headers('authorization') auth: string) {
    const sessionId = auth?.replace('Bearer ', '');
 
    if (!sessionId) {
      throw new HttpException('No session provided', HttpStatus.BAD_REQUEST);
    }
 
    const refreshed = await this.sessionService.refreshSession(sessionId);
 
    if (!refreshed) {
      throw new HttpException('Invalid session', HttpStatus.UNAUTHORIZED);
    }
 
    return { message: 'Session refreshed' };
  }
}

Step 7: Environment Configuration

.env
# Redis Configuration
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=
 
# Application
PORT=3000
NODE_ENV=development

Step 8: Docker Compose for Local Development

docker-compose.yml
version: '3.8'
 
services:
  redis:
    image: redis:7-alpine
    ports:
      - '6379:6379'
    volumes:
      - redis_data:/data
    command: redis-server --appendonly yes
    healthcheck:
      test: ['CMD', 'redis-cli', 'ping']
      interval: 5s
      timeout: 3s
      retries: 5
 
  redis-commander:
    image: rediscommander/redis-commander:latest
    environment:
      - REDIS_HOSTS=local:redis:6379
    ports:
      - '8081:8081'
    depends_on:
      - redis
 
volumes:
  redis_data:

Step 9: Running the Application

Start services
# Start Redis
docker-compose up -d
 
# Install dependencies
npm install
 
# Run application
npm run start:dev

Testing the API

Test endpoints
# Login
curl -X POST http://localhost:3000/auth/login \
  -H "Content-Type: application/json" \
  -d '{"email":"user@example.com","password":"password123"}'
 
# Response: {"sessionId":"abc123...","user":{...}}
 
# Create post (with session)
curl -X POST http://localhost:3000/posts \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer abc123..." \
  -d '{"title":"Redis Guide","content":"Learn Redis..."}'
 
# Get all posts (cached)
curl http://localhost:3000/posts
 
# Get single post (cached + increment views)
curl http://localhost:3000/posts/1
 
# Get trending posts
curl http://localhost:3000/posts/trending
 
# Get post views
curl http://localhost:3000/posts/1/views
 
# Test rate limiting (make 101 requests)
for i in {1..101}; do
  curl http://localhost:3000/posts
done
# After 100 requests: 429 Too Many Requests

Common Mistakes & Pitfalls

1. Not Setting TTL on Keys

Keys without expiration can cause memory leaks.

ts
// ❌ Wrong - key lives forever
await redis.set('session:abc123', sessionData);
 
// ✅ Correct - key expires automatically
await redis.setex('session:abc123', 3600, sessionData);

2. Using KEYS Command in Production

KEYS blocks Redis while scanning all keys. Use SCAN instead.

ts
// ❌ Wrong - blocks Redis
const keys = await redis.keys('user:*');
 
// ✅ Correct - non-blocking iteration
const stream = redis.scanStream({ match: 'user:*', count: 100 });
stream.on('data', (keys) => {
  // Process keys
});

3. Storing Large Values

Redis is optimized for small values. Large values (>1MB) hurt performance.

ts
// ❌ Wrong - storing 10MB JSON
await redis.set('data', JSON.stringify(hugeObject));
 
// ✅ Correct - use compression or split data
const compressed = gzipSync(JSON.stringify(hugeObject)); // gzipSync from node:zlib
await redis.set('data', compressed);
 
// Or split into chunks
await redis.hset('data', 'chunk1', part1);
await redis.hset('data', 'chunk2', part2);

4. Not Handling Connection Failures

Redis connections can fail. Implement retry logic and error handling.

ts
// ✅ Proper error handling
const redis = new Redis({
  retryStrategy: (times) => {
    const delay = Math.min(times * 50, 2000);
    return delay;
  },
  maxRetriesPerRequest: 3,
});
 
redis.on('error', (err) => {
  logger.error('Redis error:', err);
  // Alert monitoring system
});

5. Race Conditions with Cache Invalidation

Multiple requests can cause cache stampede.

ts
// ❌ Wrong - cache stampede possible
const cached = await redis.get('posts');
if (!cached) {
  const posts = await db.getPosts(); // Multiple requests hit DB
  await redis.set('posts', JSON.stringify(posts));
}
 
// ✅ Correct - use locking
const lockKey = 'lock:posts';
const lock = await redis.set(lockKey, '1', 'EX', 10, 'NX');
 
if (lock) {
  try {
    const posts = await db.getPosts();
    await redis.set('posts', JSON.stringify(posts), 'EX', 3600);
  } finally {
    await redis.del(lockKey);
  }
} else {
  // Wait and retry
  await sleep(100);
  return getCachedPosts();
}

6. Not Monitoring Memory Usage

Redis stores everything in RAM. Monitor memory and set limits.

Configure memory limits
# Set max memory
CONFIG SET maxmemory 2gb
 
# Set eviction policy
CONFIG SET maxmemory-policy allkeys-lru
 
# Monitor memory
INFO memory

Best Practices

1. Use Appropriate Data Types

Choose the right data type for your use case:

  • Strings: Simple values, serialized objects
  • Hashes: Objects with multiple fields
  • Lists: Queues, timelines, recent items
  • Sets: Unique items, tags, relationships
  • Sorted Sets: Leaderboards, priority queues, time-series
  • Bitmaps: Boolean flags, analytics
  • HyperLogLog: Cardinality estimation
  • Geospatial: Location-based queries
  • Streams: Event logs, message queues

2. Implement Cache Warming

Pre-populate cache for frequently accessed data.

ts
async warmCache() {
  const popularPosts = await db.getPopularPosts(100);
  
  for (const post of popularPosts) {
    await redis.setex(
      `post:${post.id}`,
      3600,
      JSON.stringify(post)
    );
  }
}

3. Use Pipelining for Bulk Operations

Reduce network round trips with pipelining.

ts
// ❌ Slow - multiple round trips
for (const post of posts) {
  await redis.set(`post:${post.id}`, JSON.stringify(post));
}
 
// ✅ Fast - single round trip
const pipeline = redis.pipeline();
for (const post of posts) {
  pipeline.set(`post:${post.id}`, JSON.stringify(post));
}
await pipeline.exec();

4. Implement Graceful Degradation

Application should work even if Redis is down.

ts
async getPost(id: string) {
  try {
    const cached = await redis.get(`post:${id}`);
    if (cached) return JSON.parse(cached);
  } catch (err) {
    logger.warn('Redis unavailable, falling back to DB');
  }
  
  // Fallback to database
  return db.getPost(id);
}

5. Monitor Key Metrics

Track Redis performance:

  • Hit rate (cache hits / total requests)
  • Memory usage
  • Evicted keys
  • Connection count
  • Command latency

ts
async getMetrics() {
  const info = await redis.info();
  return {
    hitRate: calculateHitRate(info),
    memoryUsed: parseMemory(info),
    evictedKeys: parseEvicted(info),
  };
}

When NOT to Use Redis

1. Primary Data Store for Critical Data

Redis is not a replacement for traditional databases. Use it for:

  • Caching
  • Session storage
  • Real-time analytics
  • Message queues

But not for:

  • Financial transactions
  • User credentials (use proper database)
  • Data requiring complex queries

2. Large Dataset Storage

Redis stores everything in RAM. If your dataset is larger than available memory, consider:

  • PostgreSQL with proper indexing
  • Elasticsearch for search
  • MongoDB for document storage

3. Complex Relationships

Redis doesn't support joins or complex queries. For relational data, use:

  • PostgreSQL
  • MySQL
  • Relational databases with proper schema

4. Compliance Requirements

If you need ACID guarantees, audit logs, or strict consistency, use traditional databases.

Conclusion

Redis is a powerful tool when used correctly. Understanding its data types and their use cases enables you to build high-performance, scalable systems. The NestJS example demonstrates real-world patterns:

  • Session management with automatic expiration
  • Multi-layer caching with invalidation
  • Rate limiting with atomic operations
  • Real-time analytics with sorted sets
  • Graceful error handling

Start with simple use cases like caching and session storage. As you gain confidence, explore advanced patterns like pub/sub, streams, and geospatial queries. Redis's simplicity and performance make it indispensable in modern architectures.

Next steps:

  1. Set up Redis locally with Docker
  2. Implement caching in your existing API
  3. Add session management
  4. Experiment with different data types
  5. Monitor performance and optimize

Redis is not just a cache—it's a data structure server that can transform your application's performance and capabilities.

