A reverse proxy sits between your users and your application servers. Users connect to NGINX; NGINX forwards their requests to your backend (Node.js, PHP-FPM, a Python API, a Docker container, whatever). The backend sends its response to NGINX; NGINX forwards it to the user. From the user’s perspective, they’re talking directly to NGINX. Your backend never needs to be exposed to the internet at all.
This is the most common NGINX deployment pattern in 2026. SSL termination at NGINX, backend over plain HTTP on localhost. Caching, rate limiting, and load balancing all handled by NGINX before your application code runs. It’s clean, fast, and secure.
Why Use NGINX as a Reverse Proxy?
- SSL termination: NGINX handles TLS; your backend speaks plain HTTP. No TLS library needed in your app.
- Connection pooling: NGINX keeps persistent connections to your backend, amortizing TCP handshake overhead
- Buffering: NGINX buffers slow client connections so your backend thread is freed immediately after sending the response
- Static file serving: NGINX serves CSS, JS, and images directly without touching your application
- Security: Backend never exposed to the internet; rate limiting, WAF, and auth can be applied at the proxy layer
- HTTP/3 and HTTP/2: NGINX handles modern protocols; your backend can stay on HTTP/1.1
Basic Reverse Proxy Configuration
server {
    listen 443 ssl;
    http2 on;
    server_name api.example.com;

    ssl_certificate     /etc/letsencrypt/live/api.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;
    ssl_protocols       TLSv1.2 TLSv1.3;

    location / {
        proxy_pass http://127.0.0.1:3000;  # Backend on port 3000

        # Pass the real client IP and original request details to the backend
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
That’s the minimum working config. The four proxy_set_header lines are important — without them, your backend sees NGINX’s loopback IP as the client address, not the real user’s IP.
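On the application side, the client address then comes from these headers rather than from the socket, which will always show the proxy's address. A minimal framework-agnostic sketch in Python (the header names match the config above; `client_ip` is a hypothetical helper, and these headers should only be trusted when the app is reachable exclusively through the proxy):

```python
def client_ip(headers: dict) -> str:
    """Return the original client IP as seen by the proxy.

    X-Real-IP is set (overwritten) by NGINX itself, so it is safe to
    trust when all traffic arrives via the proxy. X-Forwarded-For is
    built with $proxy_add_x_forwarded_for, which appends to any
    client-supplied value, so its leftmost entry can be spoofed; the
    rightmost entry is the address NGINX actually saw.
    """
    real_ip = headers.get("X-Real-IP")
    if real_ip:
        return real_ip
    forwarded = headers.get("X-Forwarded-For", "")
    if forwarded:
        return forwarded.split(",")[-1].strip()
    return ""
```

For example, `client_ip({"X-Forwarded-For": "spoofed, 203.0.113.7"})` returns the rightmost, proxy-observed address rather than the client-controlled leftmost one.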
Timeouts: The Most Important Tuning
NGINX's proxy timeouts default to 60 seconds each. For most applications, tighten the connection timeout and set the others deliberately:
location / {
    proxy_pass http://127.0.0.1:3000;

    proxy_connect_timeout 5s;   # Max time to establish a connection to the backend
    proxy_send_timeout    60s;  # Max time between two successive writes to the backend
    proxy_read_timeout    60s;  # Max time between two successive reads from the backend

    # For long-polling / streaming responses, increase the read timeout:
    # proxy_read_timeout 3600s;
}
A backend that takes more than 60 seconds to respond is either broken or overwhelmed. Failing fast (with a 504) is better than keeping the connection open indefinitely.
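By default a timed-out request gets a bare 504 page from NGINX. A sketch of mapping proxy failures to a friendlier static error page instead (the /50x.html name and root path are placeholders for your own error page):

```nginx
# Serve a static page for backend failures and timeouts
error_page 502 504 /50x.html;

location = /50x.html {
    root /usr/share/nginx/html;  # Placeholder; point at your own error page
    internal;                    # Not directly requestable by clients
}
```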
Upstream Blocks: Clean Backend Management
Instead of hardcoding http://127.0.0.1:3000 everywhere, use an upstream block. This makes it easy to add servers later, and enables keepalive connection pooling:
http {
    upstream app_backend {
        server 127.0.0.1:3000;
        keepalive 32;  # Pool of idle connections kept open to the backend
    }

    server {
        location / {
            proxy_pass http://app_backend;

            # Required for keepalive to work with HTTP/1.1
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }
}
The keepalive 32 directive keeps up to 32 idle connections to the backend alive in each NGINX worker. This eliminates the TCP handshake overhead for most requests. On a busy server, this alone reduces backend connection setup latency by 30–50%.
Buffering: For Slow Clients
By default, NGINX reads the response from your backend as fast as the backend can produce it, buffering it in memory (and spilling to a temp file if needed) and then feeding it to the client at the client's pace. This frees your backend worker quickly, even if the client is on a slow connection:
location / {
    proxy_pass http://app_backend;

    proxy_buffering on;           # Buffer responses (default: on)
    proxy_buffer_size 4k;         # Buffer for the response headers
    proxy_buffers 8 16k;          # Buffers for the response body
    proxy_busy_buffers_size 32k;  # Limit on buffers busy sending to the client

    # For large file downloads, disable buffering to stream directly:
    # proxy_buffering off;

    # For Server-Sent Events / long-polling, disable buffering and caching:
    # proxy_buffering off;
    # proxy_cache off;
}
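Buffering can also be disabled per-response: if the backend sends an X-Accel-Buffering: no response header, NGINX streams that response even while proxy_buffering stays on globally. For streaming endpoints, a dedicated location is cleaner (the /events/ path is a placeholder):

```nginx
location /events/ {
    proxy_pass http://app_backend;

    # Stream Server-Sent Events without buffering or caching
    proxy_buffering off;
    proxy_cache off;
    proxy_read_timeout 3600s;  # SSE connections are long-lived
}
```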
Proxy Caching
NGINX can cache backend responses and serve them directly without hitting your backend at all. For content that doesn’t change per-user (public API responses, rendered pages), this is a massive performance win:
http {
    # Define the cache zone: 10MB of key metadata, 100MB of storage,
    # entries evicted after 10 minutes without access
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                     max_size=100m inactive=10m use_temp_path=off;

    upstream app_backend {
        server 127.0.0.1:3000;
        keepalive 32;
    }

    server {
        location /api/public/ {
            proxy_pass http://app_backend;

            proxy_cache app_cache;
            proxy_cache_valid 200 60s;  # Cache 200 responses for 60 seconds
            proxy_cache_valid 404 10s;  # Cache 404s for 10 seconds
            proxy_cache_use_stale error timeout updating;

            # Add cache status to response headers (for debugging)
            add_header X-Cache-Status $upstream_cache_status;
        }
    }
}
Cache status values in the X-Cache-Status header: HIT (served from cache), MISS (fetched from backend and cached), BYPASS (cache deliberately bypassed), EXPIRED (cache entry expired, re-fetched), and STALE (stale entry served because the backend errored or the entry is being refreshed, per proxy_cache_use_stale).
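To let trusted clients force a fresh fetch, the cache lookup can be skipped per-request. A sketch using a hypothetical X-Bypass-Cache request header (any header name works; gate it at the proxy so end users can't set it):

```nginx
location /api/public/ {
    proxy_pass http://app_backend;
    proxy_cache app_cache;

    # Skip the cache lookup, and don't store the response,
    # when the request carries a non-empty X-Bypass-Cache header
    proxy_cache_bypass $http_x_bypass_cache;
    proxy_no_cache     $http_x_bypass_cache;
}
```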
WebSocket Proxying
WebSocket upgrades require specific headers to work through a reverse proxy:
location /ws/ {
    proxy_pass http://app_backend;

    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    proxy_read_timeout 3600s;  # Keep idle WebSocket connections alive
    proxy_send_timeout 3600s;
}
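Hardcoding Connection "upgrade" is fine for a WebSocket-only location. If the same location must serve both plain HTTP and WebSocket traffic, the conventional pattern sets the header conditionally with a map (the map block lives at the http level):

```nginx
# Send "Connection: upgrade" only when the client actually requested an upgrade
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

location /ws/ {
    proxy_pass http://app_backend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
}
```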
Serving Static Files Directly
For maximum performance, serve static files directly from NGINX’s file system, bypassing your backend entirely:
server {
    root /var/www/app/public;

    # Serve static assets directly
    location ~* \.(js|css|png|jpg|webp|svg|woff2|ico)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        try_files $uri =404;
    }

    # Everything else goes to the backend
    location / {
        try_files $uri @backend;
    }

    location @backend {
        proxy_pass http://app_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Security Headers and Hardening
server {
    # Hide backend-identifying headers from responses
    proxy_hide_header X-Powered-By;
    proxy_hide_header Server;

    # Add security headers
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    add_header X-Content-Type-Options nosniff always;
    add_header X-Frame-Options SAMEORIGIN always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;

    location / {
        proxy_pass http://app_backend;

        # Strip client-supplied headers that should only ever be set
        # internally (an empty value removes the header entirely)
        proxy_set_header X-Internal-Auth "";

        # Clear Authorization only if NGINX itself handles auth;
        # otherwise this breaks authentication at the backend
        # proxy_set_header Authorization "";
    }
}
One gotcha: add_header directives are inherited from the server level only if a location defines none of its own. A single location-level add_header replaces the entire inherited set, so security headers silently disappear unless you repeat them.
Proxying to Unix Sockets
If your backend runs on the same server, Unix sockets are faster than TCP loopback — no network stack overhead:
upstream app_backend {
    server unix:/run/app/gunicorn.sock;  # Python/Gunicorn (speaks HTTP)
    keepalive 16;
}

location / {
    proxy_pass http://app_backend;
}
Note that PHP-FPM speaks FastCGI rather than HTTP, so its socket is used with fastcgi_pass, not proxy_pass:
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass unix:/run/php/php8.4-fpm.sock;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
Related Posts
- NGINX Load Balancing Guide — extend this to multiple backends with health checks and failover
- TLS Configuration for NGINX — the SSL termination config that pairs with reverse proxy
- Enable HTTP/3 on NGINX — add QUIC to your reverse proxy for modern browsers
- NGINX Performance Expert Guide — full tuning guide including proxy cache and upstream configuration
- Angie Web Server Complete Guide — Angie handles reverse proxy identically to NGINX with extra monitoring features