Complete Guide to Using Nginx with Lua: Enhanced Web Server Functionality

The nginx Lua module lets you run Lua code inside nginx request processing — same worker process, no IPC, no context switch to a separate backend. Combined with LuaJIT’s trace compiler, you get near-native C speed for logic that would otherwise require a backend call. This guide covers installing and using Lua with nginx from the myguard repository, which ships the module built against LuaJIT 2.x and includes the full lua-resty-* library ecosystem.

How nginx Lua execution works

The nginx Lua module (ngx_http_lua_module) runs Lua code within an nginx worker process at specific phases of request handling. Each phase maps to a Lua directive (a combined sketch follows the list):

  • init_by_lua_block — runs once in the master process after config is loaded; use it to preload large shared tables and validate configuration.
  • init_worker_by_lua_block — runs once per worker at startup; use it to seed RNG, open persistent connections, or start background timers.
  • set_by_lua_block — computes a single nginx variable; runs synchronously (no I/O allowed).
  • rewrite_by_lua_block — the rewrite phase; redirect, rewrite URIs, or modify headers before the access and content phases run. A URI rewrite here can trigger a new location match.
  • access_by_lua_block — the access phase; authenticate, rate-limit, or block requests before they reach the upstream. The most common hook for security logic.
  • content_by_lua_block — generate the full response body from Lua; replaces a proxy_pass or static file. Use for API endpoints and micro-services embedded in nginx.
  • header_filter_by_lua_block — modify response headers (e.g. add HSTS, remove Server:) after the upstream responds.
  • body_filter_by_lua_block — transform the response body chunk-by-chunk; useful for HTML injection or on-the-fly compression.
  • log_by_lua_block — runs after the request is complete; write to custom log stores, push metrics, or sample payloads without blocking the response.
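
A minimal sketch combining three of these phases in one location (the endpoint and log message are illustrative, not part of the package):

server {
    listen 80;

    location /phase-demo {
        # access phase: reject anything but GET before doing any work
        access_by_lua_block {
            if ngx.req.get_method() ~= "GET" then
                return ngx.exit(405)
            end
        }

        # content phase: generate the response entirely in Lua
        content_by_lua_block {
            ngx.say("handled in Lua, no upstream involved")
        }

        # log phase: runs after the response has been sent
        log_by_lua_block {
            ngx.log(ngx.INFO, "phase-demo took ",
                    ngx.now() - ngx.req.start_time(), " s")
        }
    }
}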

All blocking I/O in Lua — Redis, MySQL, HTTP, DNS — is performed via the cosocket API. When your Lua code calls a cosocket operation, the nginx event loop yields the coroutine and picks up the next request. From the caller’s perspective the code looks synchronous; from nginx’s perspective it is fully non-blocking. This is what makes Lua the right tool for per-request backend calls without sacrificing concurrency.
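
A sketch of that yield behavior with a raw cosocket (it assumes something is listening on 127.0.0.1:6379 — any TCP service will do):

location /cosocket-demo {
    content_by_lua_block {
        local sock = ngx.socket.tcp()
        sock:settimeout(1000)  -- milliseconds

        -- Looks synchronous, but the coroutine yields here and the
        -- worker keeps serving other requests until the connect
        -- completes or times out.
        local ok, err = sock:connect("127.0.0.1", 6379)
        if not ok then
            ngx.say("connect failed: ", err)
            return
        end
        ngx.say("connected without blocking the worker")
        sock:close()
    }
}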

Install nginx with Lua from the myguard repository

The myguard repository ships nginx with Lua as a loadable dynamic module. No compilation required.

# 1. Add the repository (if not already done)
wget https://deb.myguard.nl/pool/myguard.deb
dpkg -i myguard.deb
apt-get update

# 2. Install nginx and the Lua module
apt-get install nginx libnginx-mod-http-ndk libnginx-mod-http-lua lua-resty

# For Angie instead of nginx:
apt-get install angie angie-module-http-ndk angie-module-http-lua lua-resty

Load both modules at the top of your nginx.conf (before the http block):

load_module modules/ndk_http_module.so;
load_module modules/ngx_http_lua_module.so;

Hello world — verify the module works

server {
    listen 80;
    server_name _;

    location /hello {
        default_type text/plain;
        content_by_lua_block {
            ngx.say("nginx Lua is working. PID: " .. ngx.worker.pid())
        }
    }
}

Test: nginx -t && systemctl reload nginx && curl http://localhost/hello

Shared memory dictionaries

Lua code running in different workers can share data through lua_shared_dict — a shared memory zone backed by nginx’s slab allocator, with internal locking so each operation is atomic. Declare it in the http block:

http {
    lua_shared_dict rate_limit 10m;  # 10 MB shared across all workers
    lua_shared_dict token_cache 5m;
    lua_shared_dict counters 1m;
}

Values stored in shared dicts survive worker restarts (but not master restarts). The dict API is atomic for incr, add, and set operations — safe to call from concurrent workers without a mutex.
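
For example, an atomic hit counter on the counters zone declared above (the URI and key name are illustrative):

location /hits {
    content_by_lua_block {
        local counters = ngx.shared.counters
        -- incr is atomic across workers; the third argument seeds
        -- the key with 0 if it does not exist yet
        local hits, err = counters:incr("total_hits", 1, 0)
        if not hits then
            ngx.say("counter error: ", err)
            return
        end
        ngx.say("total hits: ", hits)
    }
}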

Example: Redis-backed rate limiting

This example uses lua-resty-redis (included in the lua-resty package) to track request counts per client IP in Redis, with a sliding window. Unlike the built-in limit_req module, this approach works across multiple nginx instances behind a load balancer.

http {

    server {
        listen 80;

        location /api/ {
            access_by_lua_block {
                local redis = require "resty.redis"
                local red = redis:new()
                red:set_timeouts(100, 100, 100)  -- connect/send/read ms

                local ok, err = red:connect("127.0.0.1", 6379)
                if not ok then
                    ngx.log(ngx.ERR, "Redis connect failed: ", err)
                    return  -- fail open: let the request through
                end

                local key = "rl:" .. ngx.var.binary_remote_addr
                local limit = 60  -- requests per minute

                local count, err = red:incr(key)
                if count == 1 then
                    red:expire(key, 60)  -- set TTL on first request
                end

                -- return connection to the pool instead of closing it
                red:set_keepalive(10000, 100)

                if count and count > limit then
                    ngx.header["Retry-After"] = "60"
                    ngx.status = 429
                    ngx.say("Rate limit exceeded. Try again in 60 seconds.")
                    return ngx.exit(429)
                end
            }
            proxy_pass http://backend;
        }
    }
}

Key details: set_keepalive returns the connection to a pool (max 100 idle connections, 10 s idle timeout) so the next request reuses it without a TCP handshake. Fail-open (return on connect error) is intentional — a Redis outage should not block your API.
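
The same pooling parameters can also be set globally through module directives instead of per set_keepalive call (the values here mirror the example above):

http {
    lua_socket_pool_size 100;            # idle connections kept per pool, per worker
    lua_socket_keepalive_timeout 10s;    # close pooled connections idle longer than this
}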

Example: JWT authentication with lua-resty-jwt

Install the library first:

apt-get install lua-resty-jwt

Then verify tokens in the access phase:

http {
    lua_shared_dict jwks_cache 1m;

    server {
        listen 443 ssl;

        location /api/protected/ {
            access_by_lua_block {
                local jwt = require "resty.jwt"

                local auth_header = ngx.req.get_headers()["Authorization"]
                if not auth_header or not auth_header:find("^Bearer ") then
                    ngx.status = 401
                    ngx.header["WWW-Authenticate"] = 'Bearer realm="api"'
                    ngx.say("{"error":"missing_token"}")
                    return ngx.exit(401)
                end

                local token = auth_header:sub(8)  -- strip "Bearer "
                -- nginx clears its environment at startup: JWT_SECRET
                -- must be whitelisted with `env JWT_SECRET;` in the
                -- main nginx.conf context for os.getenv to see it
                local secret = os.getenv("JWT_SECRET")

                local jwt_obj = jwt:verify(secret, token)

                if not jwt_obj.verified then
                    ngx.status = 401
                    ngx.say("{"error":"invalid_token","reason":"" .. (jwt_obj.reason or "") .. ""}")
                    return ngx.exit(401)
                end

                -- Pass claims to upstream via headers
                ngx.req.set_header("X-User-Id",  jwt_obj.payload.sub)
                ngx.req.set_header("X-User-Role", jwt_obj.payload.role or "user")
            }
            proxy_pass http://backend;
        }
    }
}

Example: in-memory LRU cache with lua-resty-lrucache

For data that changes rarely (feature flags, config values, lookup tables), an in-process LRU cache avoids the Redis round-trip entirely:

http {
    init_by_lua_block {
        local lrucache = require "resty.lrucache"
        -- 200-entry cache, shared via the global Lua state
        -- (one cache instance per worker; not shared across workers)
        ngx_cache, err = lrucache.new(200)
        if not ngx_cache then
            error("failed to create cache: " .. (err or "unknown"))
        end
    }

    server {
        listen 80;

        location /user-info {
            content_by_lua_block {
                local user_id = ngx.var.arg_id
                if not user_id then
                    ngx.status = 400
                    ngx.say('{"error":"missing_id"}')
                    return
                end

                local cached = ngx_cache:get(user_id)

                if cached then
                    ngx.header["X-Cache"] = "HIT"
                    ngx.say(cached)
                    return
                end

                -- Simulate backend fetch
                local http = require "resty.http"
                local httpc = http.new()
                local res, err = httpc:request_uri(
                    "http://user-service/users/" .. user_id,
                    { method = "GET", headers = { ["Accept"] = "application/json" } }
                )

                if res and res.status == 200 then
                    ngx_cache:set(user_id, res.body, 60)  -- cache 60 s
                    ngx.header["X-Cache"] = "MISS"
                    ngx.say(res.body)
                else
                    ngx.status = 502
                    ngx.say("{"error":"upstream_error"}")
                end
            }
        }
    }
}

Example: dynamic upstream routing

Route requests to different backends based on a header, a cookie, or any arbitrary logic — with fallback:

http {
    upstream backend_v1 { server 10.0.0.10:8080; }
    upstream backend_v2 { server 10.0.0.20:8080; }

    server {
        listen 80;

        location / {
            set_by_lua_block $upstream {
                local version = ngx.req.get_headers()["X-Api-Version"]
                if version == "v2" then
                    return "backend_v2"
                end
                -- Default to v1, but route 10% of traffic to v2 (canary)
                if math.random(100) <= 10 then
                    return "backend_v2"
                end
                return "backend_v1"
            }
            proxy_pass http://$upstream;
        }
    }
}

Example: response body transformation

Inject a banner into every HTML response from the upstream without buffering the full body in memory:

location / {
    proxy_pass http://backend;

    header_filter_by_lua_block {
        -- Remove Content-Length since we will modify the body
        ngx.header.content_length = nil
    }

    body_filter_by_lua_block {
        local chunk = ngx.arg[1]
        local ctype = ngx.header.content_type
        if chunk and ctype and ctype:find("text/html", 1, true) then
            -- Inject the banner just before the closing </body> tag.
            -- Caveat: this assumes the tag is not split across chunks.
            ngx.arg[1] = chunk:gsub("</body>",
                "<div id='banner'>Powered by myguard nginx</div></body>", 1)
        end
    }
}

Example: async background tasks with ngx.timer

Fire-and-forget tasks (webhooks, audit logging, metric pushes) after the response is sent to the client:

log_by_lua_block {
    local function push_metric(premature, data)
        if premature then return end  -- nginx is shutting down
        local http = require "resty.http"
        local httpc = http.new()
        httpc:set_timeout(2000)
        httpc:request_uri("http://metrics-service/ingest", {
            method = "POST",
            body   = require("cjson").encode(data),
            headers = { ["Content-Type"] = "application/json" },
        })
    end

    local data = {
        uri     = ngx.var.request_uri,
        status  = ngx.status,
        latency = ngx.now() - ngx.req.start_time(),
        ip      = ngx.var.remote_addr,
    }
    local ok, err = ngx.timer.at(0, push_metric, data)  -- delay 0 = run as soon as possible
    if not ok then
        ngx.log(ngx.ERR, "failed to create timer: ", err)
    end
}

LuaJIT performance tips

The nginx Lua module uses LuaJIT, not standard Lua 5.x. LuaJIT’s trace compiler produces machine code comparable to GCC -O2 for hot loops, but some patterns force a fall-back to the slower interpreter:

  • Avoid pcall and error in hot paths — they interrupt the trace compiler. Use explicit nil-check return patterns instead.
  • Use local for all variables — local access is a register operation; global access goes through the environment table.
  • Pre-require modules in init_by_lua_block — require is cheap after the first call (it returns a cached value), but the first call compiles the module.
  • Re-use objects — resty.redis:new() and resty.http:new() each allocate a table. Call them once per request, not once per loop iteration. (A sketch of both patterns follows this list.)
  • Use lua_code_cache on (the default in production) — with it off, every request re-reads and re-compiles the Lua file. Only disable for development.
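
A minimal sketch of the pre-require and locals-only patterns (the modules are the same ones used elsewhere in this guide):

init_by_lua_block {
    -- Compile modules once at startup; workers inherit the cached
    -- result, so later require() calls are just table lookups.
    require "resty.redis"
    require "cjson"
}

And in a hot request path:

content_by_lua_block {
    -- Keep everything in locals: local reads compile to register
    -- accesses, while globals go through the environment table.
    local cjson  = require "cjson"   -- returns the cached module
    local encode = cjson.encode
    local out = {}
    for i = 1, 10 do
        out[i] = encode({ n = i })
    end
    ngx.say(table.concat(out, "\n"))
}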

Available lua-resty libraries

The lua-resty package from the myguard repository includes:

  • resty.redis — full Redis client with pipelining and connection pooling
  • resty.mysql — non-blocking MySQL/MariaDB client
  • resty.http — HTTP/1.1 client for upstream API calls
  • resty.memcached — Memcached client
  • resty.lrucache — in-process LRU cache (per worker)
  • resty.lock — inter-worker lock built on lua_shared_dict (per instance, not distributed)
  • resty.upstream.healthcheck — background health checks for upstreams
  • resty.string — fast string utilities (e.g. hex encoding of binary digests)
  • resty.md5, resty.sha1, resty.sha256 — digest functions
  • resty.aes — AES encryption/decryption
  • resty.dns.resolver — non-blocking DNS lookups

Additional libraries (JWT, limit-req, session management) are available as separate packages:

apt-get install lua-resty-jwt lua-resty-limit-traffic lua-resty-session

Lua vs NJS: which to choose

Criteria                     | Lua (LuaJIT)                                   | NJS (nginx JavaScript)
Execution model              | Coroutines + cosocket (non-blocking I/O)       | Event loop, no I/O in most phases
Performance                  | LuaJIT trace-compiled, near-C for compute      | Bytecode interpreted, good for simple logic
Backend I/O in access phase  | ✓ cosocket (Redis, MySQL, HTTP)                | ✗ not available in access/rewrite phases
Language familiarity         | Lua (small, learnable in a day)                | JavaScript / ECMAScript 5.1+
Library ecosystem            | lua-resty-* (Redis, MySQL, JWT, HTTP, AES…)    | Built-in only (no external packages)
Config integration           | Directives in every phase                      | js_import + js_content/js_access/js_set
Best for                     | Complex auth, rate limiting, caching, routing  | Header manipulation, subrequests, stream proxying

Both modules are included in the myguard nginx and angie packages. You can load both and use each where it fits best.
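
If both are installed, they load side by side (the njs module filename below assumes the stock nginx js package):

load_module modules/ndk_http_module.so;
load_module modules/ngx_http_lua_module.so;
load_module modules/ngx_http_js_module.so;  # njs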

Frequently asked questions

Does the Lua module work with Angie?

Yes. Install angie-module-http-ndk and angie-module-http-lua instead of the libnginx-mod-* packages. The module API and all directives are identical — the Lua code itself is unchanged.

Is lua_code_cache safe to disable?

Only in development. With lua_code_cache off, nginx re-reads and re-compiles every Lua file on each request. This makes reload-free editing possible but multiplies CPU usage significantly. Never use it in production.
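
A sketch of a throwaway development server with the cache disabled (the file path is illustrative):

server {
    listen 8080;

    # Changes to the .lua file take effect on the next request with no
    # reload. Inline *_by_lua_block code lives in nginx.conf and still
    # requires a reload either way.
    lua_code_cache off;

    location /dev {
        content_by_lua_file /etc/nginx/lua/dev.lua;
    }
}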

Why does my cosocket call fail in init_by_lua_block?

The cosocket API is only available in request-handling phases (access, content, etc.) and in init_worker_by_lua_block timers. It is not available in init_by_lua_block because at that point the nginx event loop has not started. Use a timer in init_worker_by_lua_block instead.
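
A sketch of that pattern, assuming a local Redis to warm a connection against:

init_worker_by_lua_block {
    local ok, err = ngx.timer.at(0, function(premature)
        if premature then return end  -- worker is shutting down

        -- Cosockets work inside timer callbacks, unlike in
        -- init_by_lua_block itself.
        local redis = require "resty.redis"
        local red = redis:new()
        local ok, err = red:connect("127.0.0.1", 6379)
        if not ok then
            ngx.log(ngx.ERR, "warmup connect failed: ", err)
            return
        end
        red:set_keepalive(10000, 10)
    end)
    if not ok then
        ngx.log(ngx.ERR, "failed to start warmup timer: ", err)
    end
}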

How do I debug Lua errors in nginx?

Set error_log /var/log/nginx/error.log debug; temporarily. All Lua errors (including stack traces) go to the error log. You can also use ngx.log(ngx.ERR, "message") from any Lua phase. For production, use ngx.log(ngx.WARN, ...) to avoid filling the log.
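
For example, to trace a failing auth check (the variable names are whatever your handler already has in scope):

access_by_lua_block {
    local token = ngx.var.cookie_session
    -- appears in error.log with file name and line number
    ngx.log(ngx.ERR, "auth check: uri=", ngx.var.uri,
            " token_present=", token ~= nil)
}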

Can I share data between workers without Redis?

Yes — use lua_shared_dict. Data stored there is visible to all workers of the same nginx instance and survives worker restarts. It does not persist across master restarts. For persistent or cross-server storage you need Redis or a database.

What is the difference between lua-resty-redis and a socket connection to Redis?

resty.redis uses nginx’s cosocket API, so a blocking Redis call (e.g. GET) yields the coroutine and lets other requests run while waiting. A plain Lua socket would block the entire worker. Always use resty.redis, never the plain Lua socket module, inside nginx.

Is there a performance overhead vs pure C modules?

For simple logic (header checks, variable manipulation), LuaJIT is within 5–10% of equivalent C code after warm-up. For compute-heavy logic, LuaJIT’s trace compiler often matches C at -O2. The main overhead is shared-dict locking (microseconds) and cosocket round-trips (network latency). Neither is significant compared to a typical proxy_pass upstream latency.
