NGINX Reverse Proxy Configuration: The Complete Setup Guide

A reverse proxy sits between your users and your application servers. Users connect to NGINX; NGINX forwards their requests to your backend (Node.js, PHP-FPM, a Python API, a Docker container, whatever). The backend sends its response to NGINX; NGINX forwards it to the user. From the user’s perspective, they’re talking directly to NGINX. Your backend never needs to be exposed to the internet at all.

This is the most common NGINX deployment pattern in 2026. SSL termination at NGINX, backend over plain HTTP on localhost. Caching, rate limiting, and load balancing all handled by NGINX before your application code runs. It’s clean, fast, and secure.

Why Use NGINX as a Reverse Proxy?

  • SSL termination: NGINX handles TLS; your backend speaks plain HTTP. No TLS library needed in your app.
  • Connection pooling: NGINX keeps persistent connections to your backend, amortizing TCP handshake overhead.
  • Buffering: NGINX absorbs backend responses and drips them out to slow clients, so your backend worker is freed as soon as it finishes writing.
  • Static file serving: NGINX serves CSS, JS, and images directly without touching your application.
  • Security: Backend never exposed to the internet; rate limiting, WAF, and auth can be applied at the proxy layer (a rate-limiting sketch appears in the hardening section below).
  • HTTP/3 and HTTP/2: NGINX handles modern protocols; your backend can stay on HTTP/1.1 (see the sketch after this list).
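
A minimal sketch of that last point, assuming NGINX 1.25+ built with HTTP/3 (QUIC) support; the server_name and certificate paths match the basic example in the next section:

server {
    listen 443 quic reuseport;   # HTTP/3 over QUIC (UDP 443)
    listen 443 ssl;              # HTTP/1.1 and HTTP/2 over TCP 443
    http2 on;
    server_name api.example.com;

    ssl_certificate     /etc/letsencrypt/live/api.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;   # QUIC requires TLSv1.3

    # Tell browsers HTTP/3 is available on the same port
    add_header Alt-Svc 'h3=":443"; ma=86400';

    location / {
        proxy_pass http://127.0.0.1:3000;   # Backend stays on HTTP/1.1
    }
}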

Basic Reverse Proxy Configuration

server {
    listen 443 ssl;
    http2 on;
    server_name api.example.com;

    ssl_certificate     /etc/letsencrypt/live/api.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;

    location / {
        proxy_pass http://127.0.0.1:3000;   # Backend on port 3000

        # Pass real client IP to backend
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

That’s the minimum working config. The four proxy_set_header lines matter: without X-Real-IP and X-Forwarded-For, your backend sees NGINX’s loopback address as the client IP instead of the real user’s; without Host, it sees the upstream address rather than the domain the user requested.
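
You’ll almost always pair the HTTPS server with a plain-HTTP block that redirects everything to TLS; a minimal sketch:

server {
    listen 80;
    server_name api.example.com;
    return 301 https://$host$request_uri;   # Permanent redirect to HTTPS
}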

Timeouts: The Most Important Tuning

NGINX’s proxy timeouts (connect, send, read) each default to 60 seconds, which is generous: a healthy backend on the same machine should accept a connection in milliseconds. For most applications, tighten them:

location / {
    proxy_pass http://127.0.0.1:3000;

    proxy_connect_timeout  5s;    # Max time to establish connection to backend
    proxy_send_timeout    60s;    # Max time to send request to backend
    proxy_read_timeout    60s;    # Max time to receive response from backend

    # For long-polling / streaming responses, increase read timeout:
    # proxy_read_timeout 3600s;
}

A backend that takes more than 60 seconds to respond is either broken or overwhelmed. Failing fast (with a 504) is better than keeping the connection open indefinitely.
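
If you’d rather show users something friendlier than the stock 504 page, error_page can serve a static file on gateway errors; a sketch assuming a 50x.html exists under /usr/share/nginx/html:

error_page 502 504 /50x.html;
location = /50x.html {
    root /usr/share/nginx/html;
    internal;   # Reachable only via internal redirects, not direct requests
}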

Upstream Blocks: Clean Backend Management

Instead of hardcoding http://127.0.0.1:3000 everywhere, use an upstream block. This makes it easy to add servers later, and enables keepalive connection pooling:

http {
    upstream app_backend {
        server 127.0.0.1:3000;
        keepalive 32;   # Keep 32 persistent connections to the backend
    }

    server {
        location / {
            proxy_pass http://app_backend;

            # Required for keepalive to work with HTTP/1.1
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }
}

The keepalive 32 directive keeps up to 32 idle connections to the backend alive in each NGINX worker process. Most requests then reuse an existing connection instead of opening a new one, removing the TCP handshake round trip from backend connection setup.
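
The same block scales out to multiple backends with load balancing and passive health checks; a sketch with placeholder addresses:

upstream app_backend {
    least_conn;   # Route each request to the least-busy server
    server 10.0.0.11:3000 max_fails=3 fail_timeout=10s;
    server 10.0.0.12:3000 max_fails=3 fail_timeout=10s;
    server 10.0.0.13:3000 backup;   # Used only when the others are down
    keepalive 32;
}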

Buffering: For Slow Clients

By default, NGINX reads the response from your backend as fast as the backend can produce it, buffering it in memory (and spilling to a temp file if it is large), then delivers it to the client at the client’s pace. This frees your backend worker quickly, even if the client is on a slow connection:

location / {
    proxy_pass http://app_backend;

    proxy_buffering        on;           # Buffer responses (default: on)
    proxy_buffer_size      4k;           # Header buffer size
    proxy_buffers          8 16k;        # Response body buffers
    proxy_busy_buffers_size 32k;         # Limit on buffers busy sending to the client

    # For large file downloads, disable buffering to stream directly:
    # proxy_buffering off;

    # For Server-Sent Events / long-polling, disable buffering:
    # proxy_buffering off;
    # proxy_cache off;
}
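
For Server-Sent Events it is cleaner to give the stream its own location; a sketch assuming an /events/ path (the path is a placeholder). A backend can also opt out of buffering per-response by sending an X-Accel-Buffering: no header.

location /events/ {
    proxy_pass http://app_backend;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_buffering    off;         # Deliver each event as it arrives
    proxy_cache        off;
    proxy_read_timeout 3600s;       # Streams stay open far longer than API calls
}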

Proxy Caching

NGINX can cache backend responses and serve them directly without hitting your backend at all. For content that doesn’t change per-user (public API responses, rendered pages), this is a massive performance win:

http {
    # Cache zone: 10MB of shared memory for keys, up to 100MB on disk;
    # entries unused for 10 minutes are evicted
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                     max_size=100m inactive=10m use_temp_path=off;

    upstream app_backend {
        server 127.0.0.1:3000;
        keepalive 32;
    }

    server {
        location /api/public/ {
            proxy_pass http://app_backend;

            proxy_cache            app_cache;
            proxy_cache_valid 200  60s;  # Cache 200 responses for 60 seconds
            proxy_cache_valid 404  10s;  # Cache 404s for 10 seconds
            proxy_cache_use_stale  error timeout updating;

            # Add cache status to response headers (for debugging)
            add_header X-Cache-Status $upstream_cache_status;
        }
    }
}

Cache status values in the X-Cache-Status header: HIT (served from cache), MISS (fetched from backend and cached), BYPASS (cache bypassed), EXPIRED (cache entry expired, re-fetched), plus, because of proxy_cache_use_stale above, STALE (stale entry served while the backend is failing) and UPDATING (stale entry served while a fresh copy is fetched).
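
Authenticated traffic usually should not be cached at all; a sketch assuming the app sets a session cookie named session (adjust to your cookie name):

# Inside the cached location: skip the cache (and don't store) when a
# session cookie or Authorization header is present
proxy_cache_bypass $cookie_session $http_authorization;
proxy_no_cache     $cookie_session $http_authorization;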

WebSocket Proxying

WebSocket upgrades require specific headers to work through a reverse proxy:

location /ws/ {
    proxy_pass http://app_backend;

    proxy_http_version 1.1;
    proxy_set_header Upgrade    $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 3600s;   # Keep WebSocket connections alive
    proxy_send_timeout 3600s;
}
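
The hardcoded Connection "upgrade" works for a WebSocket-only path. If the same location must also serve plain HTTP, the standard pattern from the NGINX documentation derives the Connection header from $http_upgrade with a map in the http context:

map $http_upgrade $connection_upgrade {
    default upgrade;   # Client asked to upgrade: forward the request as an upgrade
    ''      close;     # Ordinary HTTP request: send Connection: close upstream
}

server {
    location /ws/ {
        proxy_pass http://app_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade    $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}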

Serving Static Files Directly

For maximum performance, serve static files directly from NGINX’s file system, bypassing your backend entirely:

server {
    root /var/www/app/public;

    # Serve static assets directly
    location ~* \.(js|css|png|jpg|webp|svg|woff2|ico)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        try_files $uri =404;
    }

    # Everything else goes to the backend
    location / {
        try_files $uri @backend;
    }

    location @backend {
        proxy_pass http://app_backend;
        proxy_set_header Host            $host;
        proxy_set_header X-Real-IP       $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
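
Since NGINX now serves these assets itself, compression belongs at this layer too; a minimal gzip sketch (thresholds are starting points to tune):

gzip on;
gzip_types text/css application/javascript image/svg+xml;   # text/html is compressed by default
gzip_min_length 1024;   # Skip tiny responses where gzip doesn't pay off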

Security Headers and Hardening

server {
    # Hide backend server header
    proxy_hide_header X-Powered-By;
    proxy_hide_header Server;

    # Add security headers
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    add_header X-Content-Type-Options nosniff always;
    add_header X-Frame-Options SAMEORIGIN always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;

    location / {
        proxy_pass http://app_backend;

        # Strip headers that clients shouldn't be able to set
        proxy_set_header X-Internal-Auth "";
        # Only blank Authorization if NGINX handles auth itself;
        # otherwise this breaks your backend's authentication
        proxy_set_header Authorization  "";
    }
}
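
Rate limiting, mentioned earlier as a reason to put NGINX in front, belongs in the same hardening pass; a minimal sketch using the limit_req module (zone name, rate, and burst are placeholders to tune):

http {
    # Track 10 requests/second per client IP in 10MB of shared memory
    limit_req_zone $binary_remote_addr zone=api_rl:10m rate=10r/s;

    server {
        location / {
            limit_req zone=api_rl burst=20 nodelay;   # Absorb short bursts, reject the rest with 503
            proxy_pass http://app_backend;
        }
    }
}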

Proxying to Unix Sockets

If your backend runs on the same server, Unix sockets are faster than TCP loopback: no network stack overhead. This works with any backend that speaks HTTP over a socket, such as Gunicorn:

upstream app_backend {
    server unix:/run/app/gunicorn.sock;  # Python/Gunicorn, speaking HTTP
    keepalive 16;
}

location / {
    proxy_pass http://app_backend;
}

Note that PHP-FPM also listens on a Unix socket but speaks FastCGI rather than HTTP, so it needs fastcgi_pass, not proxy_pass.
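
For completeness, a minimal PHP-FPM sketch using fastcgi_pass (the socket path assumes a Debian-style PHP 8.4 install):

location ~ \.php$ {
    include fastcgi_params;   # Standard FastCGI variable set shipped with NGINX
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php/php8.4-fpm.sock;
}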

Frequently Asked Questions

What is the difference between proxy_pass and fastcgi_pass?
proxy_pass forwards HTTP requests to an HTTP backend (Node.js, Python, Ruby, Go). fastcgi_pass uses the FastCGI protocol to communicate with PHP-FPM. They’re for different backend types: use proxy_pass for any HTTP server, fastcgi_pass specifically for PHP-FPM. Both support Unix sockets and TCP addresses.
How do I pass the real client IP to my backend?
Use proxy_set_header X-Real-IP $remote_addr and proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for. Your backend then reads the X-Real-IP or X-Forwarded-For header instead of the REMOTE_ADDR. Make sure your backend trusts these headers only from NGINX (not from arbitrary clients).
Why does my backend show NGINX’s IP instead of the client’s IP?
You’re missing the proxy_set_header X-Real-IP and X-Forwarded-For headers. Add them as shown above. Also make sure your backend application is configured to read client IP from X-Real-IP or X-Forwarded-For rather than from REMOTE_ADDR.
Should I use keepalive connections to my backend?
Yes, almost always. Keepalive eliminates the TCP handshake overhead on each request. The keepalive value is a per-worker count of idle connections; somewhere in the 16–64 range is a sensible starting point. Add proxy_http_version 1.1 and proxy_set_header Connection "" as well; both are required for keepalive to work correctly.
Does NGINX reverse proxy work with HTTP/2 between NGINX and the backend?
Only for gRPC: grpc_pass speaks HTTP/2 to gRPC backends. The regular proxy module does not; proxy_http_version accepts only 1.0 and 1.1. In practice, most backends use HTTP/1.1 with keepalive or a Unix socket, which is simpler and just as fast on the same machine.
How do I handle 502 errors from my backend?
502 means NGINX could not connect to the backend or received an invalid response from it. Check that your backend process is running (systemctl status app), that it is listening on the expected socket/port (ss -tlnp), and that the proxy_pass address matches; the NGINX error log (/var/log/nginx/error.log by default) records the exact reason. If your backend is slow to start, increase proxy_connect_timeout temporarily.