Master gRPC from basics to production. Understand why gRPC exists, how it compares to REST and GraphQL, and build high-performance microservices with NestJS.

gRPC has revolutionized how services communicate at scale. Built by Google and open-sourced in 2015, gRPC powers infrastructure at companies like Netflix, Uber, Slack, and countless others handling millions of requests per second.
Unlike REST or GraphQL, gRPC isn't designed for client-server communication over the public internet. It's engineered for high-performance, low-latency communication between services in your infrastructure. If you're building microservices, gRPC is worth understanding deeply.
This guide takes you from gRPC fundamentals to production-ready implementations using NestJS. We'll explore the history, compare it with REST and GraphQL, understand the core concepts, and build real-world microservices that matter.
In the early 2010s, Google's infrastructure was massive and complex. Thousands of microservices communicated with each other using various protocols and serialization formats, and this heterogeneity created serious problems.
Google needed a unified, high-performance protocol for internal service-to-service communication.
In 2014, Google started working on a new RPC framework internally. They called it gRPC—the "g" stands for "gRPC Remote Procedure Call" (a recursive acronym, Google's style).
gRPC was built on three key technologies: HTTP/2 for transport, Protocol Buffers for serialization, and generated code stubs that give every language the same typed client and server interface.
In 2015, Google open-sourced gRPC. The response was immediate. Companies recognized that gRPC solved real infrastructure problems.
Today, gRPC handles trillions of requests daily across the industry.
REST APIs use JSON over HTTP/1.1. Each request-response cycle involves opening (or reusing) a TCP connection, sending verbose text headers, and serializing and parsing JSON on both ends.
gRPC uses binary Protocol Buffers over HTTP/2: a single long-lived connection multiplexes many calls, headers are compressed, and payloads are compact binary messages.
For microservices making thousands of inter-service calls, this difference compounds dramatically.
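To see how the overhead compounds, consider a toy calculation. The numbers below are illustrative assumptions, not benchmarks; real overhead depends on payload size, network, and runtime.

```typescript
// Toy arithmetic: if one external request fans out into 20 internal calls,
// per-call protocol overhead multiplies. Overhead figures are assumed.
const internalCalls = 20;
const restOverheadMs = 2.0; // text headers + JSON parse per call (assumed)
const grpcOverheadMs = 0.4; // binary framing + protobuf decode (assumed)

const restTotal = internalCalls * restOverheadMs; // 40 ms of pure overhead
const grpcTotal = internalCalls * grpcOverheadMs; // 8 ms

console.log({ restTotal, grpcTotal });
```

Even with generous assumptions, a deep call graph turns small per-call savings into a visible latency difference.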
A typical REST response:
```json
{
  "id": "user-123",
  "name": "John Doe",
  "email": "john@example.com",
  "createdAt": "2024-03-02T10:30:00Z",
  "status": "active"
}
```

This JSON is ~120 bytes on the wire. The same data in Protocol Buffers is roughly half the size: field names are replaced by one-byte tags, and numeric fields use compact varint encoding. For services exchanging millions of messages, that reduction in bandwidth is significant.
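A back-of-the-envelope sketch (plain Node, no gRPC needed) shows where the savings come from. This is illustrative only; a real encoder is authoritative on exact sizes.

```typescript
// Estimate the protobuf wire size of the User payload by hand. Field names
// are never sent: each field costs a one-byte tag, strings add a length byte.
const user = {
  id: 'user-123',
  name: 'John Doe',
  email: 'john@example.com',
  createdAt: 1709375400000, // epoch millis instead of an ISO string
  status: 'active',
};

const jsonBytes = Buffer.byteLength(JSON.stringify(user));

// tag(1) + length(1) + UTF-8 payload for each string field
const stringBytes = [user.id, user.name, user.email, user.status]
  .reduce((sum, s) => sum + 2 + Buffer.byteLength(s), 0);

// tag(1) + ~6-byte varint for the int64 created_at field
const protoBytes = stringBytes + 1 + 6;

console.log({ jsonBytes, protoBytes }); // { jsonBytes: 106, protoBytes: 53 }
```

Real encoders add a few bytes of framing, but the ratio holds: protobuf omits key names entirely and encodes numbers compactly.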
REST is request-response only. If you need to stream data, you need WebSockets or Server-Sent Events.
gRPC has native support for four communication patterns: unary, server streaming, client streaming, and bidirectional streaming.
REST APIs often lack strong typing. You might receive unexpected data types or missing fields.
gRPC uses Protocol Buffers, which enforce strict schemas. The compiler generates type-safe client and server code in multiple languages.
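For illustration, here is the kind of output a TypeScript generator (e.g. ts-proto) produces for a `User` message like the one used throughout this guide. The exact shape depends on the tool, but every field is strictly typed.

```typescript
// Illustrative sketch of generated code; names follow ts-proto conventions.
export interface User {
  id: string;
  name: string;
  email: string;
  createdAt: number;
  status: string;
}

// A missing or mistyped field here is a compile-time error,
// not a runtime surprise:
const u: User = {
  id: 'user-123',
  name: 'John Doe',
  email: 'john@example.com',
  createdAt: Date.now(),
  status: 'active',
};

console.log(u.id); // "user-123"
```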
gRPC works across languages seamlessly. A Python service can call a Go service, which calls a Node.js service. Protocol Buffers handle serialization, so language differences don't matter.
REST. Strengths: simplicity, ubiquitous tooling, human-readable payloads, and HTTP caching for free.
Weaknesses: over- and under-fetching, no native streaming, weak typing, and HTTP/1.1 overhead.
Best for: Public APIs, simple CRUD, browser clients, when caching is critical.
GraphQL. Strengths: clients fetch exactly the fields they need, a single endpoint serves many client types, and the schema is strongly typed and introspectable.
Weaknesses: HTTP caching is harder, complex queries can strain the server, and the tooling adds operational overhead.
Best for: Client-facing APIs, multiple client types, complex data relationships.
gRPC. Strengths: compact binary payloads, HTTP/2 multiplexing, native streaming, strict typing, and generated clients in many languages.
Weaknesses: not directly consumable from browsers (grpc-web or a proxy is required), binary payloads are harder to inspect, and limited HTTP caching.
Best for: Microservices communication, high-performance systems, real-time streaming, internal service-to-service communication.
| Scenario | Best Choice | Why |
|---|---|---|
| Public API | REST or GraphQL | Discoverability, browser support |
| Microservices communication | gRPC | Performance, streaming, typing |
| Mobile app backend | GraphQL | Precise fetching, multiple clients |
| Real-time data streaming | gRPC | Native streaming, low latency |
| Simple CRUD | REST | Simplicity, caching |
| Complex data relationships | GraphQL | Single request, precise fetching |
| High-frequency trading | gRPC | Ultra-low latency |
| IoT devices | REST or gRPC | Depends on bandwidth/latency needs |
Protocol Buffers (protobuf) are Google's language-neutral, platform-neutral serialization format. They're more efficient than JSON and provide strong typing.
```protobuf
syntax = "proto3";

package user;

message User {
  string id = 1;
  string name = 2;
  string email = 3;
  int64 created_at = 4;
  string status = 5;
}

message CreateUserRequest {
  string name = 1;
  string email = 2;
}

message CreateUserResponse {
  User user = 1;
  string message = 2;
}
```

Key concepts:
- `syntax = "proto3"` - Protocol Buffers version 3
- `message` - a structured data type exchanged between services
- Field types - `string`, `int64`, `bool`, etc.
- Field numbers (`= 1`, `= 2`) - unique tags that identify each field in the binary encoding

Services define RPC methods. They're the interface between client and server.
```protobuf
syntax = "proto3";

package user;

service UserService {
  rpc GetUser(GetUserRequest) returns (User);
  rpc CreateUser(CreateUserRequest) returns (CreateUserResponse);
  rpc ListUsers(ListUsersRequest) returns (stream User);
  rpc UpdateUser(stream UpdateUserRequest) returns (UpdateUserResponse);
}

message GetUserRequest {
  string id = 1;
}

message ListUsersRequest {
  int32 limit = 1;
  int32 offset = 2;
}

message UpdateUserRequest {
  string id = 1;
  string name = 2;
}

message UpdateUserResponse {
  string message = 1;
}
```

Unary RPC - Single request, single response:

```protobuf
rpc GetUser(GetUserRequest) returns (User);
```

Server Streaming - Single request, stream of responses:

```protobuf
rpc ListUsers(ListUsersRequest) returns (stream User);
```

Client Streaming - Stream of requests, single response:

```protobuf
rpc UpdateUsers(stream UpdateUserRequest) returns (UpdateUserResponse);
```

Bidirectional Streaming - Stream of requests and responses:

```protobuf
rpc SyncUsers(stream SyncUserRequest) returns (stream SyncUserResponse);
```

gRPC uses HTTP/2, which enables multiplexing. Multiple requests can be sent over a single connection without waiting for responses.
```text
Connection 1:
├─ Request A (stream 1)
├─ Request B (stream 3)
├─ Request C (stream 5)
└─ Response A (stream 1)
   Response B (stream 3)
   Response C (stream 5)
```

This is dramatically more efficient than HTTP/1.1, where each request needs its own connection.
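You can observe multiplexing directly with Node's built-in `http2` module, no gRPC required. The sketch below fires three requests over one connection; each gets its own stream, and client-initiated streams receive odd IDs (1, 3, 5).

```typescript
// Self-contained demo: three requests multiplexed over a single HTTP/2
// connection, each on its own stream.
import * as http2 from 'http2';

const server = http2.createServer((req, res) => {
  res.end(`echo:${req.url}`);
});

const ids: number[] = [];

server.listen(0, () => {
  const { port } = server.address() as { port: number };
  const session = http2.connect(`http://localhost:${port}`);

  let done = 0;
  for (const path of ['/a', '/b', '/c']) {
    const stream = session.request({ ':path': path });
    stream.on('response', () => ids.push(stream.id as number));
    stream.resume(); // drain the response body
    stream.on('close', () => {
      if (++done === 3) {
        console.log('stream ids:', ids.sort((a, b) => a - b)); // [ 1, 3, 5 ]
        session.close();
        server.close();
      }
    });
    stream.end();
  }
});
```

All three exchanges share one TCP connection; the stream IDs are how HTTP/2 keeps the interleaved frames apart.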
gRPC metadata is like HTTP headers. It carries request/response metadata like authentication tokens, tracing IDs, etc.
```typescript
const metadata = new grpc.Metadata();
metadata.add('authorization', 'Bearer token123');
metadata.add('x-trace-id', 'trace-456');
```

gRPC has standard error codes:
```text
grpc.status.OK = 0
grpc.status.CANCELLED = 1
grpc.status.UNKNOWN = 2
grpc.status.INVALID_ARGUMENT = 3
grpc.status.DEADLINE_EXCEEDED = 4
grpc.status.NOT_FOUND = 5
grpc.status.ALREADY_EXISTS = 6
grpc.status.PERMISSION_DENIED = 7
grpc.status.RESOURCE_EXHAUSTED = 8
grpc.status.FAILED_PRECONDITION = 9
grpc.status.ABORTED = 10
grpc.status.OUT_OF_RANGE = 11
grpc.status.UNIMPLEMENTED = 12
grpc.status.INTERNAL = 13
grpc.status.UNAVAILABLE = 14
grpc.status.DATA_LOSS = 15
grpc.status.UNAUTHENTICATED = 16
```

Install the required packages:

```bash
npm install @nestjs/microservices @grpc/grpc-js @grpc/proto-loader
npm install -D @types/node
```

Define the service contract in `proto/user.proto`:

```protobuf
syntax = "proto3";

package user;

service UserService {
  rpc GetUser(GetUserRequest) returns (User);
  rpc CreateUser(CreateUserRequest) returns (User);
  rpc ListUsers(ListUsersRequest) returns (stream User);
}

message User {
  string id = 1;
  string name = 2;
  string email = 3;
  int64 created_at = 4;
}

message GetUserRequest {
  string id = 1;
}

message CreateUserRequest {
  string name = 1;
  string email = 2;
}

message ListUsersRequest {
  int32 limit = 1;
  int32 offset = 2;
}
```

Generate the client and server stubs:

```bash
npx grpc_tools_node_protoc \
  --js_out=import_style=commonjs,binary:./src/proto \
  --grpc_out=grpc_js:./src/proto \
  --plugin=protoc-gen-grpc=`which grpc_tools_node_protoc_plugin` \
  proto/user.proto
```

Implement the service:

```typescript
import { Injectable } from '@nestjs/common';
import { User } from './user.entity';

@Injectable()
export class UserService {
  // In-memory store; replace with a real repository in production
  private users: User[] = [];

  async getUser(id: string): Promise<User | undefined> {
    return this.users.find(u => u.id === id);
  }

  async createUser(name: string, email: string): Promise<User> {
    const user: User = {
      id: Math.random().toString(36).slice(2), // demo-only ID generation
      name,
      email,
      created_at: Date.now(),
    };
    this.users.push(user);
    return user;
  }

  async listUsers(limit: number, offset: number): Promise<User[]> {
    return this.users.slice(offset, offset + limit);
  }
}
```

Let's build a practical payment system with order service, payment service, and notification service communicating via gRPC.
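To actually expose `UserService` over gRPC, you wire a controller and bootstrap a microservice. This is a hedged sketch: the file names, `AppModule`, and proto path are assumptions matching the layout above.

```typescript
// user.controller.ts - maps proto RPCs onto the injectable service
import { Controller } from '@nestjs/common';
import { GrpcMethod } from '@nestjs/microservices';
import { UserService } from './user.service';

@Controller()
export class UserController {
  constructor(private readonly userService: UserService) {}

  @GrpcMethod('UserService', 'GetUser')
  getUser(request: { id: string }) {
    return this.userService.getUser(request.id);
  }

  @GrpcMethod('UserService', 'CreateUser')
  createUser(request: { name: string; email: string }) {
    return this.userService.createUser(request.name, request.email);
  }
}

// main.ts - start the gRPC microservice
import { NestFactory } from '@nestjs/core';
import { MicroserviceOptions, Transport } from '@nestjs/microservices';
import { join } from 'path';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.createMicroservice<MicroserviceOptions>(
    AppModule,
    {
      transport: Transport.GRPC,
      options: {
        package: 'user', // must match the proto package
        protoPath: join(__dirname, '../proto/user.proto'),
        url: '0.0.0.0:50051',
      },
    },
  );
  await app.listen();
}
bootstrap();
```

The `@GrpcMethod('UserService', 'GetUser')` decorator binds the handler to the RPC of the same name in the proto file; NestJS handles the serialization in both directions.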
Problem: Changing field numbers or removing fields breaks compatibility.
Solution: Never reuse field numbers. Mark deprecated fields:
```protobuf
message User {
  string id = 1;
  string name = 2;
  string email = 3;
  reserved 4; // Don't reuse this number
  string phone = 5;
}
```

Problem: Requests hang indefinitely if services are slow.
Solution: Always set deadlines:
```typescript
// Pass the deadline as a call option; @grpc/grpc-js turns it into the
// grpc-timeout header (e.g. '5S') on the wire. grpc- prefixed metadata
// keys are reserved by the protocol, so don't set the header by hand.
const deadline = new Date();
deadline.setSeconds(deadline.getSeconds() + 5);

client.getUser(request, { deadline }, (err, response) => {
  // err?.code === status.DEADLINE_EXCEEDED when the budget is exceeded
});
```

Problem: Memory leaks from unclosed streams.
Solution: Always unsubscribe:
```typescript
const subscription = this.paymentService.streamPaymentUpdates(data).subscribe({
  next: (update) => console.log(update),
  error: (error) => console.error(error),
  complete: () => console.log('Stream completed'),
});

// Later, when the consumer is done:
subscription.unsubscribe();
```

Problem: Creating new connections for each request is slow.
Solution: Reuse connections:
```typescript
import { Injectable, OnModuleInit } from '@nestjs/common';
import { Client, ClientGrpc, Transport } from '@nestjs/microservices';
import { join } from 'path';

@Injectable()
export class PaymentClient implements OnModuleInit {
  @Client({
    transport: Transport.GRPC,
    options: {
      package: 'payment',
      protoPath: join(__dirname, '../proto/payment.proto'),
      url: 'localhost:50052',
      keepalive: {
        keepaliveTimeMs: 10000,
        keepaliveTimeoutMs: 5000,
      },
    },
  })
  client: ClientGrpc;

  onModuleInit() {
    // The underlying channel is created once here and reused across calls
  }
}
```

Problem: Generic error messages don't help debugging.
Solution: Use proper gRPC error codes:
```typescript
import { RpcException } from '@nestjs/microservices';
import { status } from '@grpc/grpc-js';

// Throwing an RpcException lets NestJS map the error onto the gRPC status
throw new RpcException({
  code: status.INVALID_ARGUMENT,
  message: 'Amount must be positive (field: amount)',
});
```

Version proto packages from day one so breaking changes can ship side by side:

```protobuf
syntax = "proto3";

package payment.v1;

service PaymentService {
  // ...
}
```

Expose the standard health checking protocol (`grpc.health.v1`) so load balancers and orchestrators can probe your services:

```protobuf
syntax = "proto3";

package grpc.health.v1;

service Health {
  rpc Check(HealthCheckRequest) returns (HealthCheckResponse);
  rpc Watch(HealthCheckRequest) returns (stream HealthCheckResponse);
}

message HealthCheckRequest {
  string service = 1;
}

message HealthCheckResponse {
  enum ServingStatus {
    UNKNOWN = 0;
    SERVING = 1;
    NOT_SERVING = 2;
  }
  ServingStatus status = 1;
}
```

Track request counts and latencies with Prometheus:

```typescript
import { Injectable } from '@nestjs/common';
import { Counter, Histogram } from 'prom-client';

@Injectable()
export class GrpcMetrics {
  private requestCounter = new Counter({
    name: 'grpc_requests_total',
    help: 'Total gRPC requests',
    labelNames: ['service', 'method', 'status'],
  });

  private requestDuration = new Histogram({
    name: 'grpc_request_duration_seconds',
    help: 'gRPC request duration',
    labelNames: ['service', 'method'],
  });

  recordRequest(service: string, method: string, status: string) {
    this.requestCounter.inc({ service, method, status });
  }

  recordDuration(service: string, method: string, duration: number) {
    this.requestDuration.observe({ service, method }, duration);
  }
}
```

Retry transient failures with exponential backoff:

```typescript
import { Injectable } from '@nestjs/common';
import { timer } from 'rxjs';
import { retry } from 'rxjs/operators';

@Injectable()
export class PaymentClientWithRetry {
  // Obtained from ClientGrpc (see the connection-pooling example)
  private paymentService: any;

  processPayment(request: any) {
    return this.paymentService.processPayment(request).pipe(
      retry({
        count: 3,
        // 2s, 4s, 8s between attempts (retryCount starts at 1)
        delay: (_error, retryCount) => timer(Math.pow(2, retryCount) * 1000),
      })
    );
  }
}
```

Configure client-side load balancing through channel options:

```typescript
// `target` and `credentials` are defined elsewhere; the target should
// resolve to all backend instances (e.g. a DNS name).
const client = new grpc.Client(target, credentials, {
  'grpc.lb_policy_name': 'round_robin',
  'grpc.service_config': JSON.stringify({
    loadBalancingConfig: [{ round_robin: {} }],
  }),
});
```

gRPC is powerful but not always the right choice. Prefer REST or GraphQL when you're serving browsers or third-party developers, when HTTP caching matters, or when simple CRUD is all you need (see the decision table above).
gRPC represents a fundamental shift in how services communicate at scale. It solves real infrastructure problems that REST developers face: latency, bandwidth efficiency, weak typing, and lack of streaming support.
When you understand gRPC's core concepts—Protocol Buffers, HTTP/2 multiplexing, four communication patterns, and error handling—you can build microservices that scale with your infrastructure's complexity. NestJS makes implementing production-grade gRPC straightforward with its decorators and dependency injection.
The payment processing example demonstrates how gRPC handles real-world scenarios: inter-service communication, streaming updates, error handling, and transaction management. Proper connection pooling prevents performance degradation, health checks ensure reliability, and metrics monitoring keeps systems observable.
Start with a simple gRPC service. Measure the latency improvements over REST. Once you experience the benefits, you'll understand why gRPC has become the standard for microservices communication.
Next steps: define a small proto contract, stand up a NestJS gRPC service next to an existing REST endpoint, add health checks and metrics, and measure the difference yourself.