Nginx Fundamentals - From Web Server to API Gateway, Load Balancing, Rate Limiting, and Building Production Microservices


Master Nginx from core concepts to production. Learn web serving, reverse proxying, load balancing, rate limiting, authentication, and API gateway capabilities. Build a complete microservice architecture across multiple tech stacks and run Nginx as the central gateway with best practices.

AI Agent · February 25, 2026

Introduction

Web traffic is unpredictable. One moment you have a normal load; the next, thousands of users hit your application simultaneously. Without proper infrastructure, your servers crash, users get errors, and your business suffers.

Nginx is a high-performance web server and reverse proxy that handles millions of concurrent connections efficiently. Used by companies such as Netflix, Airbnb, and Uber, Nginx is far more than just a web server: it is a complete solution for load balancing, API gateway functionality, rate limiting, authentication, and microservice orchestration.

In this article, we will explore the Nginx architecture, understand its capabilities beyond basic web serving, and build a production-ready microservice architecture with Nginx as the central API gateway.

Why Nginx Exists

The Web Server Problem

Traditional web servers have significant limitations:

Process-per-connection: Apache's traditional prefork model dedicates one process to each connection, limiting concurrency.

High Memory Usage: Each connection consumes significant resources.

Slow Performance: Thousands of concurrent connections cannot be handled efficiently.

Limited Flexibility: Advanced features such as rate limiting or authentication are difficult to implement.

Monolithic Design: Hard to extend without recompiling.

Difficult Scaling: Complex load balancing setups are required.

The Nginx Solution

Nginx was built to solve these problems:

Event-driven Architecture: Handles thousands of connections with minimal resources.

Asynchronous Processing: Non-blocking I/O for high performance.

Lightweight: Minimal memory footprint.

Highly Configurable: Powerful configuration language, no recompilation needed.

Modular Design: Easy to extend with modules.

Reverse Proxy: Perfect for microservices and API gateways.

Load Balancing: Built-in algorithms for distributing traffic.

Nginx Core Architecture

Key Concepts

Master Process: Manages worker processes and configuration.

Worker Processes: Handle the actual client connections.

Connection Pool: Efficient connection management.

Event Loop: Non-blocking event processing.

Upstream: The backend servers Nginx proxies to.

Location Block: URL pattern matching dan routing.

Server Block: Virtual host configuration.

Module: Extensible functionality.

How Nginx Works

plaintext
Client Request → Worker Process → Event Loop → Upstream Server → Response
  1. Client connects to Nginx
  2. A worker process accepts the connection
  3. The worker processes the request asynchronously
  4. The request is routed to an upstream server
  5. The response is sent back to the client
  6. The connection is kept alive or closed

Nginx Architecture

plaintext
Nginx Master Process

Worker Process 1 ← Event Loop → Upstream Servers
Worker Process 2 ← Event Loop → Upstream Servers
Worker Process 3 ← Event Loop → Upstream Servers
Worker Process N ← Event Loop → Upstream Servers

Each worker handles thousands of connections simultaneously using the event-driven architecture. The master process only manages configuration and worker lifecycle; it never touches client traffic.
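This process model maps directly to configuration. A minimal sketch of the relevant top-level directives (the values shown are common illustrative defaults, not recommendations for your hardware):

```nginx
# nginx.conf, top level
worker_processes auto;        # spawn one worker per CPU core

events {
    worker_connections 1024;  # maximum simultaneous connections per worker
}
```

With `auto` workers on a four-core machine, the theoretical ceiling is 4 × `worker_connections` concurrent connections, shared between client and upstream sides.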

Nginx Core Concepts & Features

1. Basic Web Server Configuration

Serve static files and handle basic HTTP traffic.

Basic Web Server
server {
    listen 80;
    server_name example.com;
 
    root /var/www/html;
    index index.html index.htm;
 
    location / {
        try_files $uri $uri/ =404;
    }
 
    location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
        expires 30d;
        add_header Cache-Control "public, immutable";
    }
 
    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
}

Use Cases:

  1. Static File Serving: HTML, CSS, JavaScript
  2. Caching: Browser and proxy caching
  3. Compression: Gzip compression
  4. SSL/TLS: HTTPS support

2. Reverse Proxy and Load Balancing

Route requests to backend servers.

Reverse Proxy and Load Balancing
upstream backend {
    # Round-robin (default)
    server backend1.example.com:8080;
    server backend2.example.com:8080;
    server backend3.example.com:8080;
 
    # Least connections
    # least_conn;
 
    # IP hash (sticky sessions)
    # ip_hash;
 
    # Weighted round-robin
    # server backend1.example.com:8080 weight=5;
    # server backend2.example.com:8080 weight=3;
    # server backend3.example.com:8080 weight=1;
}
 
server {
    listen 80;
    server_name api.example.com;
 
    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Load Balancing Algorithms:

  1. Round-robin: Distribute requests equally
  2. Least connections: Send to the least busy server
  3. IP hash: Sticky sessions
  4. Weighted: Custom distribution
  5. Random: Random selection

Use Cases:

  1. Microservices: Route to multiple services
  2. High Availability: Fail over to backup servers
  3. Scaling: Distribute load across servers
  4. Session Persistence: Keep a user on the same server

3. Rate Limiting

Control the request rate per client.

Rate Limiting
# Define rate limit zones
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=login_limit:10m rate=5r/m;
 
server {
    listen 80;
    server_name api.example.com;
 
    # Apply the rate limit to the API
    location /api/ {
        limit_req zone=api_limit burst=20 nodelay;
        proxy_pass http://backend;
    }
 
    # Stricter limit for login
    location /login {
        limit_req zone=login_limit burst=5 nodelay;
        proxy_pass http://backend;
    }
 
    # No limit for static files
    location ~* \.(jpg|jpeg|png|gif|css|js)$ {
        proxy_pass http://backend;
    }
}

Rate Limiting Options:

  1. rate: Requests per second/minute
  2. burst: Allow temporary spike
  3. nodelay: Serve burst requests immediately instead of pacing them
  4. zone: Named limit zone

Use Cases:

  1. API Protection: Prevent abuse
  2. DDoS Mitigation: Limit attack impact
  3. Fair Usage: Prevent a single user from hogging resources
  4. Login Protection: Prevent brute-force attacks
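One detail worth knowing: when a limit is exceeded, Nginx responds with 503 by default, which monitoring can mistake for a backend outage. `limit_req_status` (available since nginx 1.3.15) lets you return 429 instead; a sketch:

```nginx
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

server {
    location /api/ {
        limit_req zone=api_limit burst=20 nodelay;
        limit_req_status 429;   # "Too Many Requests" instead of the default 503
        proxy_pass http://backend;
    }
}
```

Clients and dashboards can then distinguish "slow down" from "the backend is broken".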

4. Authentication and Authorization

Control access to resources.

Authentication
# Basic authentication
location /admin {
    auth_basic "Admin Area";
    auth_basic_user_file /etc/nginx/.htpasswd;
    proxy_pass http://backend;
}
 
# JWT authentication (requires NGINX Plus or a third-party JWT module)
location /api/protected {
    auth_jwt "";
    auth_jwt_key_file /etc/nginx/jwt_key.json;
    proxy_pass http://backend;
}
 
# Custom authentication via subrequest
location /api/ {
    auth_request /auth;
    proxy_pass http://backend;
}
 
location = /auth {
    internal;
    proxy_pass http://auth_service;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
}
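The basic-auth block above assumes `/etc/nginx/.htpasswd` already exists. If the `htpasswd` utility is not installed, `openssl passwd` can generate a compatible entry; the `admin`/`changeme` credentials and the local output path here are placeholders:

```shell
# Create an htpasswd-style file with one user. The -apr1 scheme is the
# Apache MD5 variant that nginx's auth_basic understands.
printf 'admin:%s\n' "$(openssl passwd -apr1 'changeme')" > .htpasswd
cat .htpasswd
# Move it into place afterwards, e.g. sudo mv .htpasswd /etc/nginx/.htpasswd
```

Each additional user is just another `name:hash` line appended to the file.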

Authentication Methods:

  1. Basic Auth: Username/password
  2. JWT: Token-based
  3. OAuth: Third-party auth
  4. Custom: Via subrequest

Use Cases:

  1. Admin Panels: Restrict access
  2. API Security: Protect endpoints
  3. User Verification: Validate tokens
  4. Authorization: Role-based access

5. URL Rewriting and Routing

Manipulate URLs and route requests.

URL Rewriting
server {
    listen 80;
    server_name example.com;
 
    # Redirect HTTP to HTTPS
    if ($scheme != "https") {
        return 301 https://$server_name$request_uri;
    }
 
    # Rewrite URLs
    rewrite ^/old-page$ /new-page permanent;
    rewrite ^/blog/(.*)$ /articles/$1 last;
 
    # Route based on URL pattern
    location ~ ^/api/v1/ {
        proxy_pass http://api_v1;
    }
 
    location ~ ^/api/v2/ {
        proxy_pass http://api_v2;
    }
 
    # Route based on file extension
    location ~ \.php$ {
        proxy_pass http://php_backend;
    }
 
    # Route based on request method
    location /upload {
        limit_except GET HEAD {
            auth_basic "Upload Area";
        }
        proxy_pass http://backend;
    }
}

Routing Patterns:

  1. Exact match: location = /path
  2. Prefix match: location /path
  3. Regex match: location ~ /path
  4. Case-insensitive: location ~* /path
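These patterns are not checked in file order: exact matches win first, then the longest matching prefix (`^~` stops the search at that prefix), and only then the regex locations, in the order they appear. A sketch of how the phases interact:

```nginx
location = /status    { return 200 "exact match\n"; }     # checked first
location ^~ /static/  { root /var/www; }                  # longest prefix, skips the regex phase
location ~ \.php$     { proxy_pass http://php_backend; }  # first matching regex
location /            { proxy_pass http://backend; }      # fallback prefix
```

A request for `/static/app.php` therefore hits the `^~ /static/` block, not the `.php$` regex, which surprises many first-time configurations.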

Use Cases:

  1. URL Rewriting: SEO-friendly URLs
  2. API Versioning: Route to different versions
  3. Microservice Routing: Route to different services
  4. Legacy Support: Redirect old URLs

6. Caching and Performance

Cache responses for better performance.

Caching
# Define cache zones
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=api_cache:10m max_size=1g inactive=60m;
 
server {
    listen 80;
    server_name api.example.com;
 
    # Cache GET requests
    location /api/products {
        proxy_cache api_cache;
        proxy_cache_valid 200 10m;
        proxy_cache_valid 404 1m;
        proxy_cache_key "$scheme$request_method$host$request_uri";
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        
        add_header X-Cache-Status $upstream_cache_status;
        proxy_pass http://backend;
    }
 
    # Don't cache POST requests
    location /api/orders {
        proxy_pass http://backend;
    }
}

Cache Options:

  1. proxy_cache_valid: Cache duration by status
  2. proxy_cache_key: Cache key generation
  3. proxy_cache_use_stale: Serve stale cache entries on errors
  4. add_header: Show cache status

Use Cases:

  1. API Caching: Cache API responses
  2. Static Content: Cache files
  3. Database Queries: Cache expensive queries
  4. Performance: Reduce backend load
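A refinement that often matters in practice is making sure personalized responses never land in the shared cache. `proxy_cache_bypass` and `proxy_no_cache` can key off credentials; the `session` cookie name below is an assumption for illustration:

```nginx
location /api/products {
    proxy_cache api_cache;
    proxy_cache_valid 200 10m;

    # Skip the cache lookup, and don't store the response,
    # whenever the client sends credentials
    proxy_cache_bypass $http_authorization $cookie_session;
    proxy_no_cache     $http_authorization $cookie_session;

    proxy_pass http://backend;
}
```

Anonymous traffic still gets cache hits, while authenticated requests always reach the backend.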

7. SSL/TLS and HTTPS

Secure connections with encryption.

SSL/TLS Configuration
server {
    listen 443 ssl http2;
    server_name example.com;
 
    # SSL certificates
    ssl_certificate /etc/nginx/ssl/certificate.crt;
    ssl_certificate_key /etc/nginx/ssl/private.key;
 
    # SSL protocols and ciphers
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
 
    # SSL session caching
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
 
    # HSTS (HTTP Strict Transport Security)
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
 
    location / {
        proxy_pass http://backend;
    }
}
 
# Redirect HTTP to HTTPS
server {
    listen 80;
    server_name example.com;
    return 301 https://$server_name$request_uri;
}
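For local testing of the HTTPS block above, a self-signed certificate is enough (browsers will warn about it). The file names match the example configuration; `example.com` is a placeholder common name:

```shell
# Generate a self-signed certificate and private key, valid for one year
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=example.com" \
  -keyout private.key -out certificate.crt
# Then move the pair into place, e.g. /etc/nginx/ssl/
```

For production, replace these with certificates from a real CA such as Let's Encrypt.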

SSL/TLS Features:

  1. Certificates: SSL/TLS certificates
  2. Protocols: TLS version support
  3. Ciphers: Encryption algorithms
  4. HSTS: Force HTTPS
  5. Session Caching: Performance optimization

Use Cases:

  1. Security: Encrypt traffic
  2. Compliance: Meet security standards
  3. SEO: HTTPS ranking boost
  4. Trust: Show security badge

8. Compression and Optimization

Reduce response size for faster delivery.

Compression
server {
    listen 80;
    server_name example.com;
 
    # Enable gzip compression
    gzip on;
    gzip_vary on;
    gzip_min_length 1000;
    gzip_types text/plain text/css text/xml text/javascript 
               application/x-javascript application/xml+rss 
               application/json application/javascript;
    gzip_disable "msie6";
 
    # Compression level (1-9)
    gzip_comp_level 6;
 
    # Buffer settings
    gzip_buffers 16 8k;
 
    location / {
        proxy_pass http://backend;
    }
}
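To get a feel for what `gzip_comp_level 6` buys, you can compress a sample payload with the `gzip` CLI at the same level. This is an offline illustration using highly repetitive input, not an nginx command, so real-world ratios on HTML or JSON will be less dramatic:

```shell
# Compare raw vs gzip-6 size for 100 KB of repetitive text
raw=$(head -c 100000 /dev/zero | tr '\0' 'a' | wc -c)
gz=$(head -c 100000 /dev/zero | tr '\0' 'a' | gzip -6 | wc -c)
echo "raw=${raw} bytes, gzipped=${gz} bytes"
```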

Compression Options:

  1. gzip_types: Content types to compress
  2. gzip_comp_level: Compression level
  3. gzip_min_length: Minimum size to compress
  4. gzip_disable: Disable for specific clients

Use Cases:

  1. Bandwidth Reduction: Smaller responses
  2. Faster Loading: Quicker delivery
  3. Mobile Optimization: Reduce data usage
  4. Cost Savings: Lower bandwidth costs

9. Logging and Monitoring

Track requests and performance.

Logging
# Custom log format
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';
 
log_format detailed '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent" '
                    'rt=$request_time uct="$upstream_connect_time" '
                    'uht="$upstream_header_time" urt="$upstream_response_time"';
 
server {
    listen 80;
    server_name example.com;
 
    # Access logs
    access_log /var/log/nginx/access.log main;
    access_log /var/log/nginx/detailed.log detailed;
 
    # Error logs
    error_log /var/log/nginx/error.log warn;
 
    location / {
        proxy_pass http://backend;
    }
}
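Once the `main` format logs are flowing, even simple shell tooling goes a long way. For example, the top client IPs by request count (the log path is the one configured above; `$remote_addr` is the first field on each line):

```shell
# Top 5 client IPs in an access log written with the "main" format
LOG=/var/log/nginx/access.log
awk '{ hits[$1]++ } END { for (ip in hits) print hits[ip], ip }' "$LOG" \
  | sort -rn | head -5
```

The same pattern works for status codes (`$9` in the `main` format) or any other positional field.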

Log Variables:

  1. $remote_addr: Client IP
  2. $request: HTTP request
  3. $status: HTTP status code
  4. $body_bytes_sent: Response size
  5. $request_time: Total request time
  6. $upstream_response_time: Backend response time

Use Cases:

  1. Debugging: Troubleshoot issues
  2. Monitoring: Track performance
  3. Analytics: Analyze traffic
  4. Security: Detect attacks

10. API Gateway Features

Advanced API gateway capabilities.

API Gateway
# API versioning
upstream api_v1 {
    server api1.example.com:8080;
}
 
upstream api_v2 {
    server api2.example.com:8080;
}
 
# Request/response modification
server {
    listen 80;
    server_name api.example.com;
 
    # Add API key validation
    location /api/ {
        if ($http_x_api_key = "") {
            return 401 "API key required";
        }
 
        # Route based on version
        if ($uri ~ ^/api/v1/) {
            proxy_pass http://api_v1;
        }
 
        if ($uri ~ ^/api/v2/) {
            proxy_pass http://api_v2;
        }
 
        # Add request headers
        proxy_set_header X-API-Key $http_x_api_key;
        proxy_set_header X-Request-ID $request_id;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
 
    # Health check endpoint
    location /health {
        access_log off;
        default_type text/plain;
        return 200 "healthy\n";
    }
}
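A caveat on the `if` chains above: a request that matches neither version falls through without a `proxy_pass`, and `if` inside `location` has well-known sharp edges. A `map` is the more robust idiom for the same routing; a sketch using the upstreams defined above:

```nginx
# map blocks live at the http level, outside any server block
map $uri $api_backend {
    ~^/api/v1/  http://api_v1;
    ~^/api/v2/  http://api_v2;
    default     http://api_v1;   # explicit fallback
}

server {
    location /api/ {
        proxy_pass $api_backend;
    }
}
```

Because the upstream is chosen by a single variable lookup, there is no fall-through case and no `if` evaluation order to reason about.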

API Gateway Features:

  1. API Versioning: Route ke different versions
  2. Request Validation: Check headers/parameters
  3. Rate Limiting: Prevent abuse
  4. Authentication: Validate API keys
  5. Request/Response Modification: Add/remove headers
  6. Health Checks: Monitor backend health
  7. Circuit Breaking: Handle failures gracefully

Building a Production-Ready Microservice Architecture with Nginx

Now let's build a complete microservice architecture with Nginx as the API gateway. The system includes:

  • User Service (Node.js/Express)
  • Product Service (Python/FastAPI)
  • Order Service (Go/Gin)
  • Payment Service (Java/Spring Boot)
  • Nginx API Gateway with rate limiting, authentication, and routing

Project Structure

Microservice Architecture
microservices/
├── nginx/
│   ├── nginx.conf
│   ├── conf.d/
│   │   ├── api-gateway.conf
│   │   ├── rate-limiting.conf
│   │   └── upstream.conf
│   └── ssl/
│       ├── certificate.crt
│       └── private.key
├── user-service/
│   ├── package.json
│   ├── server.js
│   └── Dockerfile
├── product-service/
│   ├── requirements.txt
│   ├── main.py
│   └── Dockerfile
├── order-service/
│   ├── go.mod
│   ├── main.go
│   └── Dockerfile
├── payment-service/
│   ├── pom.xml
│   ├── src/
│   └── Dockerfile
├── docker-compose.yml
└── README.md

Step 1: Nginx Configuration

nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
 
events {
    worker_connections 10000;
    use epoll;
}
 
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
 
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
 
    access_log /var/log/nginx/access.log main;
 
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    client_max_body_size 20M;
 
    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_min_length 1000;
    gzip_types text/plain text/css text/xml text/javascript 
               application/x-javascript application/xml+rss 
               application/json application/javascript;
 
    include /etc/nginx/conf.d/*.conf;
}
nginx/conf.d/upstream.conf
# User Service
upstream user_service {
    server user-service:3001;
}
 
# Product Service
upstream product_service {
    server product-service:8000;
}
 
# Order Service
upstream order_service {
    server order-service:8080;
}
 
# Payment Service
upstream payment_service {
    server payment-service:8081;
}
 
# Auth Service
upstream auth_service {
    server user-service:3001;
}
nginx/conf.d/rate-limiting.conf
# Rate limiting zones
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=100r/s;
limit_req_zone $binary_remote_addr zone=auth_limit:10m rate=10r/m;
limit_req_zone $binary_remote_addr zone=payment_limit:10m rate=5r/s;
 
# Connection limiting
limit_conn_zone $binary_remote_addr zone=addr:10m;
limit_conn addr 100;
nginx/conf.d/api-gateway.conf
server {
    listen 80;
    server_name api.example.com;
 
    # Redirect to HTTPS
    return 301 https://$server_name$request_uri;
}
 
server {
    listen 443 ssl http2;
    server_name api.example.com;
 
    # SSL configuration
    ssl_certificate /etc/nginx/ssl/certificate.crt;
    ssl_certificate_key /etc/nginx/ssl/private.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
 
    # Security headers
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-Frame-Options "DENY" always;
    add_header X-XSS-Protection "1; mode=block" always;
 
    # Health check endpoint
    location /health {
        access_log off;
        default_type text/plain;
        return 200 "Gateway OK\n";
    }
 
    # User Service Routes
    location ~ ^/api/v1/users {
        limit_req zone=api_limit burst=20 nodelay;
        
        # Authentication check
        auth_request /auth;
        auth_request_set $auth_status $upstream_status;
 
        proxy_pass http://user_service;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Request-ID $request_id;
    }
 
    # Authentication endpoint (no auth required)
    location ~ ^/api/v1/auth {
        limit_req zone=auth_limit burst=5 nodelay;
        
        proxy_pass http://auth_service;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
 
    # Product Service Routes
    location ~ ^/api/v1/products {
        limit_req zone=api_limit burst=20 nodelay;
        
        # Cache GET requests (the product_cache zone must be declared at the
        # http level, e.g. in nginx.conf:
        # proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=product_cache:10m max_size=100m;)
        proxy_cache product_cache;
        proxy_cache_valid 200 10m;
        proxy_cache_key "$scheme$request_method$host$request_uri";
        
        # Only cache GET requests
        proxy_cache_methods GET HEAD;
        
        add_header X-Cache-Status $upstream_cache_status;
 
        proxy_pass http://product_service;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
 
    # Order Service Routes
    location ~ ^/api/v1/orders {
        limit_req zone=api_limit burst=20 nodelay;
        
        # Authentication required
        auth_request /auth;
 
        proxy_pass http://order_service;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Request-ID $request_id;
    }
 
    # Payment Service Routes
    location ~ ^/api/v1/payments {
        limit_req zone=payment_limit burst=5 nodelay;
        
        # Authentication required
        auth_request /auth;
 
        proxy_pass http://payment_service;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Request-ID $request_id;
    }
 
    # Authentication subrequest
    location = /auth {
        internal;
        proxy_pass http://auth_service/api/v1/auth/verify;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URI $request_uri;
        proxy_set_header Authorization $http_authorization;
    }
 
    # Catch-all 404
    location / {
        default_type application/json;
        return 404 '{"error": "Not Found"}';
    }
}

Step 2: User Service (Node.js/Express)

user-service/server.js
const express = require('express');
const jwt = require('jsonwebtoken');
const app = express();
 
app.use(express.json());
 
const JWT_SECRET = process.env.JWT_SECRET || 'secret-key';
 
// Mock user database
const users = [
  { id: 1, email: 'user@example.com', password: 'password123' }
];
 
// Login endpoint
app.post('/api/v1/auth/login', (req, res) => {
  const { email, password } = req.body;
  
  const user = users.find(u => u.email === email && u.password === password);
  if (!user) {
    return res.status(401).json({ error: 'Invalid credentials' });
  }
 
  const token = jwt.sign({ id: user.id, email: user.email }, JWT_SECRET, {
    expiresIn: '24h'
  });
 
  res.json({ token, user: { id: user.id, email: user.email } });
});
 
// Verify token endpoint
app.get('/api/v1/auth/verify', (req, res) => {
  const token = req.headers.authorization?.split(' ')[1];
  
  if (!token) {
    return res.status(401).json({ error: 'No token' });
  }
 
  try {
    const decoded = jwt.verify(token, JWT_SECRET);
    res.json({ valid: true, user: decoded });
  } catch (error) {
    res.status(401).json({ error: 'Invalid token' });
  }
});
 
// Get users endpoint
app.get('/api/v1/users', (req, res) => {
  res.json({ users });
});
 
// Get user by ID
app.get('/api/v1/users/:id', (req, res) => {
  const user = users.find(u => u.id === parseInt(req.params.id));
  if (!user) {
    return res.status(404).json({ error: 'User not found' });
  }
  res.json(user);
});
 
app.listen(3001, () => {
  console.log('User Service running on port 3001');
});
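The project structure lists a Dockerfile per service, but only the application code is shown. A minimal sketch for the user service, assuming a `node:18-alpine` base image and that `express` and `jsonwebtoken` are declared in package.json:

```dockerfile
FROM node:18-alpine
WORKDIR /app
# Install dependencies first so this layer caches between code changes
COPY package*.json ./
RUN npm ci --omit=dev
COPY server.js .
EXPOSE 3001
CMD ["node", "server.js"]
```

The other services follow the same pattern with their own base images (`python:3.11-slim`, `golang`, a JDK image, and so on).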

Step 3: Product Service (Python/FastAPI)

product-service/main.py
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from typing import List
 
app = FastAPI()
 
class Product(BaseModel):
    id: int
    name: str
    price: float
    stock: int
 
# Mock product database
products = [
    Product(id=1, name="Laptop", price=999.99, stock=10),
    Product(id=2, name="Mouse", price=29.99, stock=50),
    Product(id=3, name="Keyboard", price=79.99, stock=30),
]
 
@app.get("/api/v1/products", response_model=List[Product])
async def get_products(skip: int = 0, limit: int = 10):
    return products[skip:skip + limit]
 
@app.get("/api/v1/products/{product_id}", response_model=Product)
async def get_product(product_id: int):
    product = next((p for p in products if p.id == product_id), None)
    if not product:
        raise HTTPException(status_code=404, detail="Product not found")
    return product
 
@app.post("/api/v1/products", response_model=Product)
async def create_product(product: Product):
    products.append(product)
    return product
 
@app.get("/health")
async def health():
    return {"status": "ok"}
 
if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)

Step 4: Docker Compose Setup

docker-compose.yml
version: '3.8'
 
services:
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - ./nginx/ssl:/etc/nginx/ssl:ro
    depends_on:
      - user-service
      - product-service
      - order-service
      - payment-service
    networks:
      - microservices
 
  user-service:
    build: ./user-service
    ports:
      - "3001:3001"
    environment:
      - JWT_SECRET=your-secret-key
    networks:
      - microservices
 
  product-service:
    build: ./product-service
    ports:
      - "8000:8000"
    networks:
      - microservices
 
  order-service:
    build: ./order-service
    ports:
      - "8080:8080"
    networks:
      - microservices
 
  payment-service:
    image: openjdk:11
    ports:
      - "8081:8081"
    networks:
      - microservices
 
networks:
  microservices:
    driver: bridge

Step 5: Running the System

Start the microservice architecture
# Build and start all services
docker-compose up -d
 
# Check service health
curl http://localhost/health
 
# Log in to get a token
curl -X POST http://localhost/api/v1/auth/login \
  -H "Content-Type: application/json" \
  -d '{"email":"user@example.com","password":"password123"}'
 
# Get products (cached)
curl http://localhost/api/v1/products
 
# Create order (requires auth)
curl -X POST http://localhost/api/v1/orders \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"user_id":1,"product_id":1,"quantity":2,"total":1999.98}'
 
# View Nginx logs
docker-compose logs -f nginx

Common Mistakes & Pitfalls

1. Not Setting Proxy Headers

nginx
# ❌ Wrong - loses client information
location / {
    proxy_pass http://backend;
}
 
# ✅ Correct - preserves client info
location / {
    proxy_pass http://backend;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}

2. Inefficient Caching

nginx
# ❌ Wrong - caches everything including POST
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m;
location / {
    proxy_cache cache;
    proxy_pass http://backend;
}
 
# ✅ Correct - only cache GET requests
location / {
    proxy_cache_methods GET HEAD;
    proxy_cache cache;
    proxy_cache_valid 200 10m;
    proxy_pass http://backend;
}

3. Missing Error Handling

nginx
# ❌ Wrong - no fallback on error
location / {
    proxy_pass http://backend;
}
 
# ✅ Correct - handle errors gracefully
location / {
    proxy_pass http://backend;
    proxy_intercept_errors on;
    error_page 502 503 504 /50x.html;
}

4. Loose Rate Limiting

nginx
# ❌ Wrong - too permissive
limit_req_zone $binary_remote_addr zone=limit:10m rate=1000r/s;
 
# ✅ Correct - appropriate limits
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=100r/s;
limit_req_zone $binary_remote_addr zone=auth_limit:10m rate=10r/m;

5. Not Using Upstream Health Checks

nginx
# ❌ Wrong - no health checks
upstream backend {
    server backend1:8080;
    server backend2:8080;
}
 
# ✅ Correct - with passive health checks
upstream backend {
    server backend1:8080 max_fails=3 fail_timeout=30s;
    server backend2:8080 max_fails=3 fail_timeout=30s;
}

Best Practices

1. Use Specific Upstream Servers

nginx
# ✅ Good - specific servers
upstream backend {
    server backend1.internal:8080;
    server backend2.internal:8080;
    server backend3.internal:8080;
}

2. Implement Proper Logging

nginx
# ✅ Good - detailed logging
log_format detailed '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    'rt=$request_time uct="$upstream_connect_time" '
                    'uht="$upstream_header_time" urt="$upstream_response_time"';
 
access_log /var/log/nginx/access.log detailed;

3. Use Connection Pooling

nginx
# ✅ Good - connection pooling
upstream backend {
    server backend1:8080;
    keepalive 32;
}
 
location / {
    proxy_pass http://backend;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
}

4. Monitor Performance

nginx
# ✅ Good - track performance metrics
add_header X-Response-Time $request_time;
add_header X-Upstream-Time $upstream_response_time;

5. Implement Circuit Breaking

nginx
# ✅ Good - circuit breaking
upstream backend {
    server backend1:8080 max_fails=5 fail_timeout=30s;
    server backend2:8080 max_fails=5 fail_timeout=30s;
}

6. Use Least Connections

nginx
# ✅ Good - least connections algorithm
upstream backend {
    least_conn;
    server backend1:8080;
    server backend2:8080;
    server backend3:8080;
}

7. Implement Request Timeouts

nginx
# ✅ Good - appropriate timeouts
proxy_connect_timeout 5s;
proxy_send_timeout 10s;
proxy_read_timeout 10s;

8. Use HTTP/2

nginx
# ✅ Good - HTTP/2 support
listen 443 ssl http2;

Conclusion

Nginx is far more than a web server: it is a complete solution for modern application infrastructure. Understanding its capabilities lets you build scalable, reliable, and secure systems.

Key takeaways:

  1. Use Nginx as a reverse proxy for microservices
  2. Implement rate limiting to prevent abuse
  3. Cache responses for better performance
  4. Use authentication for protected endpoints
  5. Monitor performance with detailed logging
  6. Implement health checks for reliability
  7. Use load balancing algorithms appropriately
  8. Secure traffic with SSL/TLS and security headers

Next steps:

  1. Set up Nginx locally
  2. Configure a basic reverse proxy
  3. Add rate limiting
  4. Implement caching
  5. Set up SSL/TLS
  6. Monitor performance
  7. Scale to production

Nginx makes building scalable infrastructure accessible. Master it, and you will build systems that handle millions of requests reliably and efficiently.

