Master Nginx from core concepts to production. Learn web serving, reverse proxying, load balancing, rate limiting, authentication, and API gateway capabilities. Build a complete microservice architecture with multiple tech stacks and implement Nginx as the central gateway with best practices.

Web traffic is unpredictable. One moment you have a normal load; the next, thousands of users hit your application simultaneously. Without proper infrastructure, your servers crash, users get errors, and your business suffers.
Nginx is a high-performance web server and reverse proxy that handles millions of concurrent connections efficiently. Used by companies such as Netflix, Airbnb, and Uber, Nginx is far more than just a web server: it is a complete solution for load balancing, API gateway functionality, rate limiting, authentication, and microservice orchestration.
In this article, we will explore Nginx's architecture, understand its capabilities beyond basic web serving, and build a production-ready microservice architecture with Nginx as the central API gateway.
Traditional web servers have significant limitations:
Process-per-connection Model: Apache's traditional prefork model dedicates a process to each connection, limiting concurrency.
High Memory Usage: Each connection consumes significant resources.
Poor Performance at Scale: They cannot handle thousands of concurrent connections efficiently.
Limited Flexibility: Advanced features such as rate limiting or authentication are difficult to implement.
Monolithic Design: Hard to extend without recompiling.
Difficult Scaling: Requires complex load-balancing setups.
Nginx was built to solve these problems:
Event-driven Architecture: Handles thousands of connections with minimal resources.
Asynchronous Processing: Non-blocking I/O for high performance.
Lightweight: Minimal memory footprint.
Highly Configurable: Powerful configuration language, no recompilation needed.
Modular Design: Easy to extend with modules.
Reverse Proxy: Perfect for microservices and API gateways.
Load Balancing: Built-in algorithms for distributing traffic.
Master Process: Manages worker processes and configuration.
Worker Processes: Handle the actual client connections.
Connection Pool: Efficient connection management.
Event Loop: Non-blocking event processing.
Upstream: The backend servers Nginx proxies to.
Location Block: URL pattern matching and routing.
Server Block: Virtual host configuration.
Module: Extensible functionality.
Client Request → Master Process → Worker Process → Event Loop → Upstream Server → Response
Nginx Master Process
↓
Worker Process 1 ← Event Loop → Upstream Servers
Worker Process 2 ← Event Loop → Upstream Servers
Worker Process 3 ← Event Loop → Upstream Servers
Worker Process N ← Event Loop → Upstream Servers
Each worker handles thousands of connections simultaneously using this event-driven architecture.
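The event-driven model can be sketched in Python: one thread's event loop services many concurrent "connections", which is the same principle an Nginx worker uses to multiplex thousands of sockets. This is a simplified illustration of the concept, not Nginx's actual C implementation:

```python
import asyncio
import threading

async def handle_connection(conn_id: int) -> str:
    # Simulate a non-blocking I/O wait (e.g. reading a request)
    await asyncio.sleep(0.01)
    return f"conn-{conn_id} handled by {threading.current_thread().name}"

async def worker():
    # One worker, one event loop, many concurrent connections
    return await asyncio.gather(*(handle_connection(i) for i in range(1000)))

results = asyncio.run(worker())
print(len(results))  # 1000 connections, all served by a single thread
```

A process-per-connection server would need 1000 processes for the same workload; here the waits overlap inside a single event loop.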
Serve static files and basic HTTP.
server {
listen 80;
server_name example.com;
root /var/www/html;
index index.html index.htm;
location / {
try_files $uri $uri/ =404;
}
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
expires 30d;
add_header Cache-Control "public, immutable";
}
error_page 404 /404.html;
error_page 500 502 503 504 /50x.html;
}
Use Cases:
Route requests to backend servers.
upstream backend {
# Round-robin (default)
server backend1.example.com:8080;
server backend2.example.com:8080;
server backend3.example.com:8080;
# Least connections
# least_conn;
# IP hash (sticky sessions)
# ip_hash;
# Weighted round-robin
# server backend1.example.com:8080 weight=5;
# server backend2.example.com:8080 weight=3;
# server backend3.example.com:8080 weight=1;
}
server {
listen 80;
server_name api.example.com;
location / {
proxy_pass http://backend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
Load Balancing Algorithms:
Use Cases:
Control the request rate per client.
# Define rate limit zones
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=login_limit:10m rate=5r/m;
server {
listen 80;
server_name api.example.com;
# Apply rate limiting to the API
location /api/ {
limit_req zone=api_limit burst=20 nodelay;
proxy_pass http://backend;
}
# Stricter limit for login
location /login {
limit_req zone=login_limit burst=5 nodelay;
proxy_pass http://backend;
}
# No limit for static files
location ~* \.(jpg|jpeg|png|gif|css|js)$ {
proxy_pass http://backend;
}
}
Rate Limiting Options:
Use Cases:
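Under the hood, limit_req implements a leaky-bucket algorithm: the zone's rate is how fast the bucket drains, and burst is how much it may overflow before requests are rejected. A rough Python model of this behavior (illustrative only, not Nginx's source):

```python
def allowed(requests, rate, burst):
    """Simplified leaky-bucket model of nginx limit_req with nodelay.
    requests: sorted arrival times in seconds; rate: requests/second;
    burst: extra requests tolerated above the steady rate."""
    results = []
    excess = 0.0
    last = None
    for t in requests:
        if last is not None:
            # The bucket drains at `rate` requests per second
            excess = max(excess - (t - last) * rate, 0.0)
        if excess <= burst:
            results.append(True)   # request passes
            excess += 1
        else:
            results.append(False)  # rejected (503 by default)
        last = t
    return results

# 10 requests arriving at once against rate=10r/s, burst=5:
print(allowed([0.0] * 10, rate=10, burst=5))  # 6 allowed, 4 rejected
```

The first request plus the burst allowance pass immediately; the rest are rejected until the bucket drains.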
Control access to resources.
# Basic authentication
location /admin {
auth_basic "Admin Area";
auth_basic_user_file /etc/nginx/.htpasswd;
proxy_pass http://backend;
}
# JWT authentication (requires module)
location /api/protected {
auth_jwt "";
auth_jwt_key_file /etc/nginx/jwt_key.json;
proxy_pass http://backend;
}
# Custom authentication via subrequest
location /api/ {
auth_request /auth;
proxy_pass http://backend;
}
location = /auth {
internal;
proxy_pass http://auth_service;
proxy_pass_request_body off;
proxy_set_header Content-Length "";
}
Authentication Methods:
Use Cases:
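To make the JWT flow concrete, here is a minimal HS256 sign/verify sketch using only the Python standard library. It is illustrative: a real auth service should use a maintained JWT library and include expiry claims.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWT uses base64url without padding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(payload: dict, secret: bytes) -> str:
    # Build an HS256 JWT: header.payload.signature
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify(token: str, secret: bytes) -> bool:
    header, body, sig = token.split(".")
    expected = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return hmac.compare_digest(b64url(expected), sig)

token = sign({"id": 1, "email": "user@example.com"}, b"secret-key")
print(verify(token, b"secret-key"))  # True
print(verify(token, b"wrong-key"))   # False
```

This is exactly what an auth_request subrequest delegates: the gateway never inspects the token itself; it only trusts the auth service's 2xx/401 answer.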
Manipulate URLs and route requests.
server {
listen 80;
server_name example.com;
# Redirect HTTP to HTTPS
if ($scheme != "https") {
return 301 https://$server_name$request_uri;
}
# Rewrite URLs
rewrite ^/old-page$ /new-page permanent;
rewrite ^/blog/(.*)$ /articles/$1 last;
# Route based on URL pattern
location ~ ^/api/v1/ {
proxy_pass http://api_v1;
}
location ~ ^/api/v2/ {
proxy_pass http://api_v2;
}
# Route based on file extension
location ~ \.php$ {
proxy_pass http://php_backend;
}
# Route based on request method
location /upload {
limit_except GET HEAD {
auth_basic "Upload Area";
}
proxy_pass http://backend;
}
}
Routing Patterns:
location = /path - exact match
location /path - prefix match
location ~ /path - case-sensitive regex match
location ~* /path - case-insensitive regex match
Use Cases:
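Nginx selects a location by checking exact matches first, then regex locations in configuration order, and otherwise falls back to the longest matching prefix. A simplified Python model of that selection logic (it ignores ^~ and nested locations; the patterns are illustrative):

```python
import re

# (modifier, pattern) pairs in configuration order
locations = [
    ("=", "/health"),
    ("", "/api/"),
    ("~", r"^/api/v1/"),
    ("~*", r"\.(jpg|png|css|js)$"),
]

def match_location(uri: str):
    # 1. An exact match wins immediately
    for mod, pat in locations:
        if mod == "=" and uri == pat:
            return (mod, pat)
    # 2. Regex locations are checked in configuration order
    for mod, pat in locations:
        if mod == "~" and re.search(pat, uri):
            return (mod, pat)
        if mod == "~*" and re.search(pat, uri, re.IGNORECASE):
            return (mod, pat)
    # 3. Otherwise the longest matching prefix wins
    prefixes = [(mod, pat) for mod, pat in locations
                if mod == "" and uri.startswith(pat)]
    return max(prefixes, key=lambda p: len(p[1]), default=None)

print(match_location("/health"))        # exact match wins
print(match_location("/api/v1/users"))  # regex beats the /api/ prefix
print(match_location("/logo.PNG"))      # case-insensitive regex
```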
Cache responses for better performance.
# Define cache zones
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=api_cache:10m max_size=1g inactive=60m;
server {
listen 80;
server_name api.example.com;
# Cache GET requests
location /api/products {
proxy_cache api_cache;
proxy_cache_valid 200 10m;
proxy_cache_valid 404 1m;
proxy_cache_key "$scheme$request_method$host$request_uri";
proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
add_header X-Cache-Status $upstream_cache_status;
proxy_pass http://backend;
}
# Don't cache POST requests
location /api/orders {
proxy_pass http://backend;
}
}
Cache Options:
Use Cases:
Secure connections with encryption.
server {
listen 443 ssl http2;
server_name example.com;
# SSL certificates
ssl_certificate /etc/nginx/ssl/certificate.crt;
ssl_certificate_key /etc/nginx/ssl/private.key;
# SSL protocols and ciphers
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
# SSL session caching
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
# HSTS (HTTP Strict Transport Security)
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
location / {
proxy_pass http://backend;
}
}
# Redirect HTTP to HTTPS
server {
listen 80;
server_name example.com;
return 301 https://$server_name$request_uri;
}
SSL/TLS Features:
Use Cases:
Reduce response size for faster delivery.
server {
listen 80;
server_name example.com;
# Enable gzip compression
gzip on;
gzip_vary on;
gzip_min_length 1000;
gzip_types text/plain text/css text/xml text/javascript
application/x-javascript application/xml+rss
application/json application/javascript;
gzip_disable "msie6";
# Compression level (1-9)
gzip_comp_level 6;
# Buffer settings
gzip_buffers 16 8k;
location / {
proxy_pass http://backend;
}
}
Compression Options:
Use Cases:
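The trade-offs behind these settings are easy to demonstrate with Python's gzip module: repetitive JSON compresses dramatically at level 6, while tiny responses can actually grow after compression, which is what gzip_min_length guards against. The sample payload is made up:

```python
import gzip
import json

# A repetitive JSON payload, like a typical API list response
payload = json.dumps(
    [{"id": i, "name": f"product-{i}"} for i in range(200)]
).encode()

compressed = gzip.compress(payload, compresslevel=6)  # matches gzip_comp_level 6
print(len(payload), len(compressed))  # compressed is far smaller

# Tiny responses can grow after compression: gzip adds ~20 bytes of
# header/trailer overhead, hence gzip_min_length
tiny = b"ok"
print(len(gzip.compress(tiny, compresslevel=6)) > len(tiny))  # True
```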
Track requests and performance.
# Custom log format
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
log_format detailed '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" '
'rt=$request_time uct="$upstream_connect_time" '
'uht="$upstream_header_time" urt="$upstream_response_time"';
server {
listen 80;
server_name example.com;
# Access logs
access_log /var/log/nginx/access.log main;
access_log /var/log/nginx/detailed.log detailed;
# Error logs
error_log /var/log/nginx/error.log warn;
location / {
proxy_pass http://backend;
}
}
Log Variables:
Use Cases:
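Once logs are in a known format they are easy to parse programmatically. A sketch of a parser for the main format above, using the standard library (the sample log line is made up):

```python
import re

# Regex matching the 'main' log format defined above
LOG_RE = re.compile(
    r'(?P<addr>\S+) - (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\d+) '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)" "(?P<xff>[^"]*)"'
)

line = ('203.0.113.7 - - [10/Oct/2024:13:55:36 +0000] '
        '"GET /api/v1/products HTTP/1.1" 200 612 "-" "curl/8.0.1" "-"')

m = LOG_RE.match(line)
print(m.group("addr"), m.group("status"), m.group("request"))
```

The same approach extends to the detailed format: add named groups for rt=, uct=, uht=, and urt= to track upstream latency per request.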
Advanced API gateway capabilities.
# API versioning
upstream api_v1 {
server api1.example.com:8080;
}
upstream api_v2 {
server api2.example.com:8080;
}
# Request/response modification
server {
listen 80;
server_name api.example.com;
# Add API key validation
location /api/ {
if ($http_x_api_key = "") {
return 401 "API key required";
}
# Route based on version
if ($uri ~ ^/api/v1/) {
proxy_pass http://api_v1;
}
if ($uri ~ ^/api/v2/) {
proxy_pass http://api_v2;
}
# Add request headers
proxy_set_header X-API-Key $http_x_api_key;
proxy_set_header X-Request-ID $request_id;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
# Health check endpoint
location /health {
access_log off;
default_type text/plain;
return 200 "healthy\n";
}
}
API Gateway Features:
Now let's build a complete microservice architecture with Nginx as the API gateway. The system includes:
microservices/
├── nginx/
│ ├── nginx.conf
│ ├── conf.d/
│ │ ├── api-gateway.conf
│ │ ├── rate-limiting.conf
│ │ └── upstream.conf
│ └── ssl/
│ ├── certificate.crt
│ └── private.key
├── user-service/
│ ├── package.json
│ ├── server.js
│ └── Dockerfile
├── product-service/
│ ├── requirements.txt
│ ├── main.py
│ └── Dockerfile
├── order-service/
│ ├── go.mod
│ ├── main.go
│ └── Dockerfile
├── payment-service/
│ ├── pom.xml
│ ├── src/
│ └── Dockerfile
├── docker-compose.yml
└── README.md
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 10000;
use epoll;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
client_max_body_size 20M;
# Gzip compression
gzip on;
gzip_vary on;
gzip_min_length 1000;
gzip_types text/plain text/css text/xml text/javascript
application/x-javascript application/xml+rss
application/json application/javascript;
include /etc/nginx/conf.d/*.conf;
}
# User Service
upstream user_service {
server user-service:3001;
}
# Product Service
upstream product_service {
server product-service:8000;
}
# Order Service
upstream order_service {
server order-service:8080;
}
# Payment Service
upstream payment_service {
server payment-service:8081;
}
# Auth Service
upstream auth_service {
server user-service:3001;
}
# Rate limiting zones
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=100r/s;
limit_req_zone $binary_remote_addr zone=auth_limit:10m rate=10r/m;
limit_req_zone $binary_remote_addr zone=payment_limit:10m rate=5r/s;
# Connection limiting
limit_conn_zone $binary_remote_addr zone=addr:10m;
limit_conn addr 100;
server {
listen 80;
server_name api.example.com;
# Redirect to HTTPS
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl http2;
server_name api.example.com;
# SSL configuration
ssl_certificate /etc/nginx/ssl/certificate.crt;
ssl_certificate_key /etc/nginx/ssl/private.key;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
# Security headers
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "DENY" always;
add_header X-XSS-Protection "1; mode=block" always;
# Health check endpoint
location /health {
access_log off;
default_type text/plain;
return 200 "Gateway OK\n";
}
# User Service Routes
location ~ ^/api/v1/users {
limit_req zone=api_limit burst=20 nodelay;
# Authentication check
auth_request /auth;
auth_request_set $auth_status $upstream_status;
proxy_pass http://user_service;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Request-ID $request_id;
}
# Authentication endpoint (no auth required)
location ~ ^/api/v1/auth {
limit_req zone=auth_limit burst=5 nodelay;
proxy_pass http://auth_service;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
# Product Service Routes
location ~ ^/api/v1/products {
limit_req zone=api_limit burst=20 nodelay;
# Cache GET requests (note: proxy_cache_path is only valid at the http
# level; define the product_cache zone there, e.g. in nginx.conf:
# proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=product_cache:10m max_size=100m;)
proxy_cache product_cache;
proxy_cache_valid 200 10m;
proxy_cache_key "$scheme$request_method$host$request_uri";
# Only cache GET requests
proxy_cache_methods GET HEAD;
add_header X-Cache-Status $upstream_cache_status;
proxy_pass http://product_service;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Order Service Routes
location ~ ^/api/v1/orders {
limit_req zone=api_limit burst=20 nodelay;
# Authentication required
auth_request /auth;
proxy_pass http://order_service;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Request-ID $request_id;
}
# Payment Service Routes
location ~ ^/api/v1/payments {
limit_req zone=payment_limit burst=5 nodelay;
# Authentication required
auth_request /auth;
proxy_pass http://payment_service;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Request-ID $request_id;
}
# Authentication subrequest
location = /auth {
internal;
proxy_pass http://auth_service/api/v1/auth/verify;
proxy_pass_request_body off;
proxy_set_header Content-Length "";
proxy_set_header X-Original-URI $request_uri;
proxy_set_header Authorization $http_authorization;
}
# Catch-all 404
location / {
default_type application/json;
return 404 '{"error": "Not Found"}';
}
}
const express = require('express');
const jwt = require('jsonwebtoken');
const app = express();
app.use(express.json());

const JWT_SECRET = process.env.JWT_SECRET || 'secret-key';

// Mock user database
const users = [
  { id: 1, email: 'user@example.com', password: 'password123' }
];

// Login endpoint
app.post('/api/v1/auth/login', (req, res) => {
  const { email, password } = req.body;
  const user = users.find(u => u.email === email && u.password === password);
  if (!user) {
    return res.status(401).json({ error: 'Invalid credentials' });
  }
  const token = jwt.sign({ id: user.id, email: user.email }, JWT_SECRET, {
    expiresIn: '24h'
  });
  res.json({ token, user: { id: user.id, email: user.email } });
});

// Verify token endpoint
app.get('/api/v1/auth/verify', (req, res) => {
  const token = req.headers.authorization?.split(' ')[1];
  if (!token) {
    return res.status(401).json({ error: 'No token' });
  }
  try {
    const decoded = jwt.verify(token, JWT_SECRET);
    res.json({ valid: true, user: decoded });
  } catch (error) {
    res.status(401).json({ error: 'Invalid token' });
  }
});

// Get users endpoint
app.get('/api/v1/users', (req, res) => {
  res.json({ users });
});

// Get user by ID
app.get('/api/v1/users/:id', (req, res) => {
  const user = users.find(u => u.id === parseInt(req.params.id));
  if (!user) {
    return res.status(404).json({ error: 'User not found' });
  }
  res.json(user);
});

app.listen(3001, () => {
  console.log('User Service running on port 3001');
});
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from typing import List
app = FastAPI()
class Product(BaseModel):
    id: int
    name: str
    price: float
    stock: int

# Mock product database
products = [
    Product(id=1, name="Laptop", price=999.99, stock=10),
    Product(id=2, name="Mouse", price=29.99, stock=50),
    Product(id=3, name="Keyboard", price=79.99, stock=30),
]

@app.get("/api/v1/products", response_model=List[Product])
async def get_products(skip: int = 0, limit: int = 10):
    return products[skip:skip + limit]

@app.get("/api/v1/products/{product_id}", response_model=Product)
async def get_product(product_id: int):
    product = next((p for p in products if p.id == product_id), None)
    if not product:
        raise HTTPException(status_code=404, detail="Product not found")
    return product

@app.post("/api/v1/products", response_model=Product)
async def create_product(product: Product):
    products.append(product)
    return product

@app.get("/health")
async def health():
    return {"status": "ok"}

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
version: '3.8'
services:
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - ./nginx/ssl:/etc/nginx/ssl:ro
    depends_on:
      - user-service
      - product-service
      - order-service
      - payment-service
    networks:
      - microservices
  user-service:
    build: ./user-service
    ports:
      - "3001:3001"
    environment:
      - JWT_SECRET=your-secret-key
    networks:
      - microservices
  product-service:
    build: ./product-service
    ports:
      - "8000:8000"
    networks:
      - microservices
  order-service:
    build: ./order-service
    ports:
      - "8080:8080"
    networks:
      - microservices
  payment-service:
    image: openjdk:11
    ports:
      - "8081:8081"
    networks:
      - microservices
networks:
  microservices:
    driver: bridge
# Build and start all services
docker-compose up -d
# Check service health
curl http://localhost/health
# Log in to get a token
curl -X POST http://localhost/api/v1/auth/login \
-H "Content-Type: application/json" \
-d '{"email":"user@example.com","password":"password123"}'
# Get products (cached)
curl http://localhost/api/v1/products
# Create order (requires auth)
curl -X POST http://localhost/api/v1/orders \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{"user_id":1,"product_id":1,"quantity":2,"total":1999.98}'
# View Nginx logs
docker-compose logs -f nginx
# ❌ Wrong - loses client information
location / {
proxy_pass http://backend;
}
# ✅ Correct - preserves client info
location / {
proxy_pass http://backend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# ❌ Wrong - no cache validity or method control, relies on defaults
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m;
location / {
proxy_cache cache;
proxy_pass http://backend;
}
# ✅ Correct - explicit validity, cache GET/HEAD only
location / {
proxy_cache_methods GET HEAD;
proxy_cache cache;
proxy_cache_valid 200 10m;
proxy_pass http://backend;
}
# ❌ Wrong - no fallback on errors
location / {
proxy_pass http://backend;
}
# ✅ Correct - handle errors gracefully
location / {
proxy_pass http://backend;
proxy_intercept_errors on;
error_page 502 503 504 /50x.html;
}
# ❌ Wrong - too permissive
limit_req_zone $binary_remote_addr zone=limit:10m rate=1000r/s;
# ✅ Correct - appropriate limits
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=100r/s;
limit_req_zone $binary_remote_addr zone=auth_limit:10m rate=10r/m;
# ❌ Wrong - no health checks
upstream backend {
server backend1:8080;
server backend2:8080;
}
# ✅ Correct - with health checks
upstream backend {
server backend1:8080 max_fails=3 fail_timeout=30s;
server backend2:8080 max_fails=3 fail_timeout=30s;
}
# ✅ Good - specific servers
upstream backend {
server backend1.internal:8080;
server backend2.internal:8080;
server backend3.internal:8080;
}
# ✅ Good - detailed logging
log_format detailed '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'rt=$request_time uct="$upstream_connect_time" '
'uht="$upstream_header_time" urt="$upstream_response_time"';
access_log /var/log/nginx/access.log detailed;
# ✅ Good - connection pooling
upstream backend {
server backend1:8080;
keepalive 32;
}
location / {
proxy_pass http://backend;
proxy_http_version 1.1;
proxy_set_header Connection "";
}
# ✅ Good - track performance metrics
add_header X-Response-Time $request_time;
add_header X-Upstream-Time $upstream_response_time;
# ✅ Good - circuit breaking
upstream backend {
server backend1:8080 max_fails=5 fail_timeout=30s;
server backend2:8080 max_fails=5 fail_timeout=30s;
}
# ✅ Good - least connections algorithm
upstream backend {
least_conn;
server backend1:8080;
server backend2:8080;
server backend3:8080;
}
# ✅ Good - appropriate timeouts
proxy_connect_timeout 5s;
proxy_send_timeout 10s;
proxy_read_timeout 10s;
# ✅ Good - HTTP/2 support
listen 443 ssl http2;
Nginx is far more than a web server: it is a complete solution for modern application infrastructure. Understanding its capabilities enables you to build scalable, reliable, and secure systems.
Key takeaways:
Next steps:
Nginx makes building scalable infrastructure accessible. Master it, and you will build systems that handle millions of requests reliably and efficiently.