Service Interactions

Services need to communicate. The protocol you choose shapes your system's performance characteristics, developer experience, and operational complexity. The three dominant patterns each serve different needs.

REST

REST (Representational State Transfer) is an architectural style, most commonly implemented over HTTP with JSON as the data format. It's the default for most web APIs — well understood, broadly supported, and easy to debug with standard tools.

Resources are identified by URLs. Operations map to HTTP methods: GET reads, POST creates, PUT replaces, PATCH modifies, DELETE removes. Status codes communicate outcomes — 200 for success, 404 for not found, 429 for rate limiting, 500 for server errors.
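The method-to-operation mapping can be sketched without any web framework. This is a minimal in-memory illustration (the `users` resource, `handle` dispatcher, and ID scheme are all hypothetical), not a production server:

```python
# Minimal sketch: HTTP methods mapped to operations on an
# in-memory "users" resource, returning (status_code, body).
users = {}
next_id = 1

def handle(method, path, body=None):
    """Dispatch a request the way a REST router would."""
    global next_id
    parts = path.strip("/").split("/")
    if parts[0] != "users":
        return 404, None
    if method == "POST" and len(parts) == 1:    # create
        user_id, next_id = next_id, next_id + 1
        users[user_id] = body
        return 201, {"id": user_id, **body}
    user_id = int(parts[1])
    if user_id not in users:
        return 404, None
    if method == "GET":                         # read
        return 200, users[user_id]
    if method == "PUT":                         # replace whole resource
        users[user_id] = body
        return 200, users[user_id]
    if method == "PATCH":                       # modify selected fields
        users[user_id].update(body)
        return 200, users[user_id]
    if method == "DELETE":                      # remove
        del users[user_id]
        return 204, None
    return 405, None
```

Note that the status codes do the talking: 201 on create, 204 on delete (no body to return), 404 for a missing resource, 405 for an unsupported method.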

REST's strengths are simplicity and ubiquity. Its weaknesses emerge at scale: over-fetching (getting more data than you need), under-fetching (requiring multiple requests to assemble a response), and the lack of a built-in schema or type system. API versioning — via URL paths, headers, or content negotiation — adds complexity as your API evolves.

gRPC

gRPC uses Protocol Buffers (protobuf) to define a strict schema for services and their messages. You write a .proto file describing your service, and code generation produces typed client and server implementations in your language of choice.
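A service definition might look like the following (the UserService, its method, and all field names are illustrative):

```proto
syntax = "proto3";

package users.v1;

// The schema is the contract: code generation produces typed
// clients and servers from this file.
service UserService {
  rpc GetUser (GetUserRequest) returns (User);
}

message GetUserRequest {
  int64 id = 1;
}

message User {
  int64 id = 1;
  string name = 2;
  string email = 3;
}
```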

The result is faster serialization (binary, not text), built-in type safety, and automatic client library generation. gRPC supports four communication patterns:

  • Unary — single request, single response (like REST)
  • Server streaming — one request, a stream of responses
  • Client streaming — a stream of requests, one response
  • Bidirectional streaming — both sides stream simultaneously
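
In a .proto file, the four patterns differ only in where the stream keyword appears (this Telemetry service and its message types are hypothetical):

```proto
service Telemetry {
  // Unary: single request, single response
  rpc GetSnapshot (SnapshotRequest) returns (Snapshot);
  // Server streaming: one request, a stream of responses
  rpc WatchMetrics (WatchRequest) returns (stream Metric);
  // Client streaming: a stream of requests, one response
  rpc UploadSamples (stream Sample) returns (UploadAck);
  // Bidirectional streaming: both sides stream simultaneously
  rpc Chat (stream ChatMessage) returns (stream ChatMessage);
}
```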

gRPC excels for internal service-to-service communication where performance matters and both sides are under your control. It's less suited for public-facing APIs — browsers can't speak gRPC natively (gRPC-Web bridges the gap but adds complexity).

WebSockets

WebSockets provide a persistent, full-duplex connection between client and server. After an initial HTTP handshake, both sides can send messages at any time without the overhead of establishing new connections.

This makes WebSockets the right choice for real-time features: live dashboards, collaborative editing, chat, and notifications. The connection stays open, and data flows in both directions with minimal latency.

The tradeoffs: WebSocket connections are stateful, which complicates horizontal scaling (you need sticky sessions or a shared state layer). They also require explicit handling of reconnection, heartbeats, and connection lifecycle that HTTP handles implicitly.
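The reconnection policy is worth getting right: naive fixed-interval retries cause thundering herds when a server restarts and every client reconnects at once. One common choice, sketched here, is exponential backoff with full jitter (the function and parameter names are illustrative):

```python
import random

def backoff_delays(max_attempts, base=0.5, cap=30.0, rng=random.random):
    """Reconnection schedule: exponential backoff with full jitter.
    Each delay is drawn uniformly from [0, min(cap, base * 2**attempt)],
    which spreads reconnect attempts out instead of synchronizing them."""
    return [rng() * min(cap, base * (2 ** attempt))
            for attempt in range(max_attempts)]
```

A client would sleep for each delay in turn between reconnect attempts, resetting the schedule once a connection survives long enough to be considered healthy.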

Server-Sent Events (SSE) offer a simpler alternative when data flows in only one direction — server to client. SSE uses standard HTTP, supports automatic reconnection, and works through proxies and firewalls more reliably than WebSockets.
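The SSE wire format is simple enough to parse by hand, which is part of its appeal. A minimal sketch (it handles only the data: and id: fields, ignoring event: and retry:):

```python
def parse_sse(stream_lines):
    """Parse Server-Sent Events from an iterable of text lines.
    A blank line dispatches the current event; consecutive 'data:'
    lines accumulate; 'id:' sets the last-event ID, which a client
    sends back in the Last-Event-ID header on reconnect so the
    server can resume where it left off."""
    data, event_id, events = [], None, []
    for line in stream_lines:
        line = line.rstrip("\n")
        if line == "":                       # blank line: dispatch
            if data:
                events.append({"id": event_id, "data": "\n".join(data)})
            data = []
        elif line.startswith("data:"):
            data.append(line[5:].lstrip(" "))
        elif line.startswith("id:"):
            event_id = line[3:].strip()
    return events
```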

Cross-Cutting Concerns

Regardless of protocol, certain patterns apply everywhere:

Idempotency means that sending the same request multiple times produces the same result. This is essential for safe retries — if a network timeout occurs, the client can resend without fear of duplicate side effects. HTTP defines GET, PUT, and DELETE as idempotent; POST is not, so include an idempotency key with any POST whose retries must be safe.
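Server-side, an idempotency key amounts to caching the first response per key so a retried POST replays the response instead of repeating the side effect. A minimal in-memory sketch (a real implementation would persist keys with an expiry; all names here are illustrative):

```python
class IdempotentHandler:
    """Wraps a side-effecting handler with idempotency-key replay.
    The first request with a given key runs the handler; retries
    with the same key get the cached response back."""

    def __init__(self, handler):
        self.handler = handler
        self.responses = {}   # idempotency key -> cached response

    def handle(self, idempotency_key, request):
        if idempotency_key in self.responses:
            return self.responses[idempotency_key]
        response = self.handler(request)
        self.responses[idempotency_key] = response
        return response
```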

Timeouts prevent one slow service from cascading delays and failures through the system. Every outbound request should carry a deadline, and callers should pass the remaining time down the chain so nested calls never outlive the request that spawned them.
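Deadline propagation can be sketched as a small object threaded through the call chain (the class and its interface are illustrative; gRPC, for instance, builds this into its context):

```python
import time

class Deadline:
    """Minimal sketch: a per-request deadline. Each outbound call
    asks for the remaining budget rather than using a fixed timeout,
    so nested calls never outlive the original request."""

    def __init__(self, timeout_s, clock=time.monotonic):
        self.clock = clock
        self.expires_at = clock() + timeout_s

    def remaining(self):
        return max(0.0, self.expires_at - self.clock())

    def expired(self):
        return self.remaining() == 0.0
```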

Circuit breakers stop calling a failing service after a threshold of errors, giving it time to recover rather than overwhelming it with requests it can't handle.
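The core state machine is small: closed (calls flow), open (calls rejected), half-open (one trial call probes for recovery). A minimal sketch, with thresholds and names chosen for illustration:

```python
import time

class CircuitBreaker:
    """Closed -> open after `threshold` consecutive failures;
    open -> half-open (trial calls allowed) after `reset_after`
    seconds; a success closes the circuit again."""

    def __init__(self, threshold=5, reset_after=30.0, clock=time.monotonic):
        self.threshold = threshold
        self.reset_after = reset_after
        self.clock = clock
        self.failures = 0
        self.opened_at = None

    def allow(self):
        if self.opened_at is None:
            return True
        # After the cool-off period, permit a trial request.
        return self.clock() - self.opened_at >= self.reset_after

    def record_success(self):
        self.failures = 0
        self.opened_at = None

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.threshold:
            self.opened_at = self.clock()
```

The caller checks allow() before each request and reports the outcome; the injected clock keeps the class testable.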

Request tracing assigns a unique identifier to each request as it flows through multiple services. When something goes wrong, the trace ID connects logs across the entire call chain. See logging for how we propagate request context through structured log entries, and metrics for the standard HTTP instrumentation that measures these interactions.
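The propagation rule is simple: mint an ID at the edge if one isn't present, otherwise reuse the inbound one; every service copies it onto outbound calls and log lines. A sketch (the X-Request-ID header name is a common convention, not a mandate from this document):

```python
import uuid

TRACE_HEADER = "X-Request-ID"   # header name is illustrative

def with_trace(headers):
    """Ensure a request carries a trace ID: reuse the inbound one
    if present, otherwise mint a fresh ID at the system's edge."""
    headers = dict(headers)
    headers.setdefault(TRACE_HEADER, uuid.uuid4().hex)
    return headers
```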