NGINX Lua Module: Your Web Server Just Got a Superpower

What if your web server could think? Not just serve files and proxy requests, but actually make smart decisions — check Redis, validate JWTs, rate-limit by IP, route traffic based on custom logic — all without your backend seeing a single request? That’s not science fiction. That’s the NGINX Lua module, and it’s been quietly making sysadmins’ lives significantly better for years.

Lua is a tiny, fast scripting language that NGINX can run inside its own worker processes. The magic ingredient is LuaJIT — a Just-In-Time compiler that makes Lua run at near-C speeds. Combined with NGINX’s non-blocking I/O model, you get something genuinely powerful: scripts that can call Redis, query MySQL, fetch URLs, and make complex routing decisions, all without blocking a single request. If you’re on Angie (the drop-in NGINX fork we also package), everything here works there too.

How NGINX Lua Execution Actually Works

The Lua module (ngx_http_lua_module) hooks into different phases of NGINX’s request handling lifecycle. Think of it like a series of checkpoints that each HTTP request passes through — you can run Lua code at any of them:

  • init_by_lua_block — runs once at startup in the master process. Good for preloading shared data, initializing caches, validating config.
  • init_worker_by_lua_block — runs once per worker at startup. Use it to open persistent connections, seed RNG, start background timers.
  • rewrite_by_lua_block — early in the request lifecycle. Redirect, rewrite URIs, modify headers before routing decisions are made.
  • access_by_lua_block — the authentication/authorization checkpoint. Block or allow requests before they touch your backend. Most commonly used.
  • content_by_lua_block — generate a complete response from Lua. Use this to build API endpoints directly inside NGINX.
  • header_filter_by_lua_block — modify response headers after the upstream responds (add HSTS, remove Server header, etc.).
  • body_filter_by_lua_block — transform the response body chunk by chunk. Great for HTML injection or on-the-fly content modification.
  • log_by_lua_block — runs after the response is sent. Push metrics, audit logs, webhooks — all without delaying the response to the client.

The really clever part is how I/O is handled. When your Lua code calls Redis or makes an HTTP request, NGINX doesn’t freeze up. The Lua module uses the cosocket API — the coroutine yields, NGINX handles other requests, and when the I/O completes, execution resumes where it left off. Your code looks synchronous (easy to write), but NGINX stays fully non-blocking under the hood. Best of both worlds.
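
Here’s a minimal sketch of the pattern using a raw TCP cosocket (the address is just an example, Redis’s default port):

content_by_lua_block {
    local sock = ngx.socket.tcp()
    sock:settimeout(1000)  -- ms

    -- This call looks blocking, but the coroutine yields here and the
    -- worker keeps serving other requests until the connect completes.
    local ok, err = sock:connect("127.0.0.1", 6379)
    if not ok then
        ngx.say("connect failed: ", err)
        return
    end

    ngx.say("connected without blocking the worker")
    sock:setkeepalive(10000)  -- return the socket to the connection pool
}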

Installation from the MyGuard Repository

The MyGuard repository ships NGINX and Angie with Lua as a loadable dynamic module, including LuaJIT 2.x and the full lua-resty-* library ecosystem. No compiling from source needed:

# Install nginx with Lua module
apt-get install nginx libnginx-mod-http-ndk libnginx-mod-http-lua lua-resty

# Or for Angie:
apt-get install angie angie-module-http-ndk angie-module-http-lua lua-resty

Load both modules in nginx.conf (before the http block):

load_module modules/ndk_http_module.so;
load_module modules/ngx_http_lua_module.so;

Hello World: Verify It Works

server {
    listen 80;

    location /hello {
        default_type text/plain;
        content_by_lua_block {
            ngx.say("NGINX Lua is working. Worker PID: " .. ngx.worker.pid())
        }
    }
}

Test the config, reload, and hit the endpoint:

nginx -t && systemctl reload nginx
curl http://localhost/hello
# NGINX Lua is working. Worker PID: 12345

Real Use Case: Redis-Backed Rate Limiting

Unlike NGINX’s built-in limit_req module (whose counters live inside a single NGINX instance), Redis-backed rate limiting works across a whole cluster of NGINX instances. If you have multiple servers behind a load balancer, this is the right approach:

http {
    server {
        listen 80;

        location /api/ {
            access_by_lua_block {
                local redis = require "resty.redis"
                local red = redis:new()
                red:set_timeouts(100, 100, 100)  -- connect/send/read ms

                local ok, err = red:connect("127.0.0.1", 6379)
                if not ok then
                    ngx.log(ngx.ERR, "Redis connect failed: ", err)
                    return  -- fail open: let the request through
                end

                local key = "rl:" .. ngx.var.binary_remote_addr
                local limit = 60

                local count = red:incr(key)
                if count == 1 then
                    red:expire(key, 60)
                end

                red:set_keepalive(10000, 100)  -- return to pool, don't close

                if count and count > limit then
                    ngx.header["Retry-After"] = "60"
                    return ngx.exit(429)
                end
            }
            proxy_pass http://backend;
        }
    }
}
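
A quick way to watch it trip (a throwaway shell test; the path just needs to match the /api/ location):

for i in $(seq 1 61); do curl -s -o /dev/null -w "%{http_code}\n" http://localhost/api/; done
# sixty backend responses, then a 429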

Real Use Case: JWT Authentication at the Edge

http {
    server {
        listen 443 ssl;

        location /api/protected/ {
            access_by_lua_block {
                local jwt = require "resty.jwt"

                local auth_header = ngx.req.get_headers()["Authorization"]
                if not auth_header or not auth_header:find("^Bearer ") then
                    ngx.status = 401
                    ngx.header["WWW-Authenticate"] = 'Bearer realm="api"'
                    ngx.say('{"error":"missing_token"}')
                    return ngx.exit(401)
                end

                local token = auth_header:sub(8)  -- strip "Bearer "
                -- NB: os.getenv only sees variables whitelisted with an
                -- `env JWT_SECRET;` directive at the top level of nginx.conf
                local secret = os.getenv("JWT_SECRET")
                local jwt_obj = jwt:verify(secret, token)

                if not jwt_obj.verified then
                    ngx.status = 401
                    ngx.say('{"error":"invalid_token"}')
                    return ngx.exit(401)
                end

                -- Pass user info to backend via headers
                ngx.req.set_header("X-User-Id",   jwt_obj.payload.sub)
                ngx.req.set_header("X-User-Role",  jwt_obj.payload.role or "user")
            }
            proxy_pass http://backend;
        }
    }
}
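
Calling it looks like this (the token is a placeholder; any HS256 token signed with your JWT_SECRET verifies):

curl -H "Authorization: Bearer <token>" https://localhost/api/protected/resource
# without a valid token: {"error":"missing_token"} or {"error":"invalid_token"}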

Real Use Case: In-Memory LRU Cache

For data that doesn’t change often (config values, feature flags, user lookup tables), skip Redis entirely and cache inside the NGINX worker itself. Each worker keeps its own copy, so expect roughly one miss per worker per key:

http {
    init_by_lua_block {
        local lrucache = require "resty.lrucache"
        local cache, err = lrucache.new(200)  -- 200-entry cache per worker
        if not cache then
            error("failed to create cache: " .. (err or "unknown"))
        end
        -- deliberately global (no `local`) so request handlers can reach it
        ngx_cache = cache
    }

    server {
        location /user-info {
            content_by_lua_block {
                local user_id = ngx.var.arg_id
                if not user_id then
                    ngx.status = 400
                    ngx.say('{"error":"missing_id"}')
                    return
                end

                local cached = ngx_cache:get(user_id)

                if cached then
                    ngx.header["X-Cache"] = "HIT"
                    ngx.say(cached)
                    return
                end

                local http = require "resty.http"
                local httpc = http.new()
                httpc:set_timeout(2000)  -- ms; don't hang on a slow upstream
                local res, err = httpc:request_uri("http://user-service/users/" .. user_id)
                if not res then
                    ngx.log(ngx.ERR, "user-service request failed: ", err)
                end

                if res and res.status == 200 then
                    ngx_cache:set(user_id, res.body, 60)  -- cache for 60s
                    ngx.header["X-Cache"] = "MISS"
                    ngx.say(res.body)
                else
                    ngx.status = 502
                    ngx.say('{"error":"upstream_error"}')
                end
            }
        }
    }
}

Shared Memory Across Workers

Need data shared across all NGINX workers? Use lua_shared_dict — a shared memory zone backed by NGINX’s slab allocator. Declare it in the http block:

http {
    lua_shared_dict rate_limit  10m;
    lua_shared_dict token_cache  5m;
    lua_shared_dict counters     1m;
}

The incr, add, and set operations are atomic across workers. Values survive worker restarts but not master restarts. Perfect for counters, shared rate-limit state, or caching token validation results.
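
As a sketch, here’s the Redis rate limiter from earlier rebuilt on the rate_limit zone declared above (single-instance only, and it assumes a module build recent enough to support incr’s init_ttl argument):

access_by_lua_block {
    local dict = ngx.shared.rate_limit
    local key = "rl:" .. ngx.var.binary_remote_addr

    -- atomic across all workers: the first hit creates the key at 1
    -- with a 60-second expiry (init = 0, init_ttl = 60)
    local count, err = dict:incr(key, 1, 0, 60)
    if not count then
        ngx.log(ngx.ERR, "shared dict incr failed: ", err)
        return  -- fail open
    end

    if count > 60 then
        ngx.header["Retry-After"] = "60"
        return ngx.exit(429)
    end
}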

Background Tasks with ngx.timer

Want to push metrics or fire webhooks after the response is sent, without making the user wait? ngx.timer.at(0, ...) schedules a function in a fresh coroutine, detached from the request. Cosockets aren’t available in the log phase itself, which is why the HTTP call happens inside the timer:

log_by_lua_block {
    local function push_metric(premature, data)
        if premature then return end
        local http = require "resty.http"
        local httpc = http.new()
        httpc:set_timeout(2000)
        local res, err = httpc:request_uri("http://metrics-service/ingest", {
            method  = "POST",
            body    = require("cjson").encode(data),
            headers = { ["Content-Type"] = "application/json" },
        })
        if not res then
            ngx.log(ngx.ERR, "metric push failed: ", err)
        end
    end

    local data = {
        uri     = ngx.var.request_uri,
        status  = ngx.status,
        latency = ngx.now() - ngx.req.start_time(),
        ip      = ngx.var.remote_addr,
    }
    ngx.timer.at(0, push_metric, data)
}
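
Timers are capped per worker by lua_max_pending_timers and lua_max_running_timers. If you schedule one per request under heavy load, you may need to raise them in the http block (the values below are illustrative, not recommendations):

lua_max_pending_timers 2048;
lua_max_running_timers 512;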

Lua vs NJS: Which One Should You Use?

Both Lua and NJS (NGINX’s JavaScript module) are included in our NGINX and Angie packages. Here’s when to use each:

Criteria                      Lua (LuaJIT)                                    NJS (JavaScript)
Backend I/O in access phase   Yes — cosocket (Redis, MySQL, HTTP)             No
Library ecosystem             lua-resty-* (Redis, MySQL, JWT, AES, HTTP…)     Built-in only
Performance                   LuaJIT trace-compiled, near-C                   Bytecode, fast for simple logic
Language                      Lua (small, learnable in a day)                 JavaScript / ES5+
Best for                      Auth, rate limiting, caching, complex routing   Header manipulation, simple routing, stream proxying

Short version: if you need to talk to Redis, MySQL, or an external API during request processing, use Lua. If you’re just manipulating headers or doing simple routing logic and you know JavaScript, NJS is the simpler choice.

LuaJIT Performance Tips

  • Always use local variables — local access is a register op; global access goes through a hash table lookup. Huge difference in hot loops.
  • Pre-require modules in init_by_lua_block — the first require call loads and compiles the module; every call after that is a cheap cache lookup. Pay that cost once at startup instead of during a live request (see the sketch after this list).
  • Avoid pcall/error in hot paths — they interrupt LuaJIT’s trace compiler. Use nil-check return patterns instead.
  • Use set_keepalive on Redis/HTTP connections — return connections to the pool instead of closing them. Eliminates TCP handshakes on every request.
  • Keep lua_code_cache on — the default in production. With it off, NGINX re-compiles every Lua file on each request. Only turn it off during development.
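
A sketch combining the first two tips (the /t endpoint is just an illustration):

http {
    init_by_lua_block {
        -- pay the load-and-compile cost once, at startup; workers
        -- inherit the module cache when NGINX forks
        require "cjson"
        require "resty.redis"
    }

    server {
        listen 80;

        location /t {
            content_by_lua_block {
                -- require is now a cheap cache lookup, and the local
                -- variable avoids a global-table lookup on every use
                local cjson = require "cjson"
                ngx.say(cjson.encode({ ok = true }))
            }
        }
    }
}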

Available lua-resty Libraries

The lua-resty package includes everything you’ll need for most use cases:

  • resty.redis — full Redis client with pipelining and connection pooling
  • resty.mysql — non-blocking MySQL/MariaDB client
  • resty.http — HTTP/1.1 client for upstream API calls
  • resty.lrucache — in-process LRU cache
  • resty.lock — cross-worker lock built on lua_shared_dict (example below)
  • resty.string — string and crypto helpers (SHA-1/SHA-256, AES, hex encoding)
  • resty.dns.resolver — non-blocking DNS lookups
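
For example, resty.lock can serialize expensive cache regeneration across workers. A sketch, assuming a lua_shared_dict locks 1m; zone is declared in the http block:

content_by_lua_block {
    local resty_lock = require "resty.lock"

    local lock, err = resty_lock:new("locks")  -- name of the shared dict
    if not lock then
        ngx.log(ngx.ERR, "failed to create lock: ", err)
        return ngx.exit(500)
    end

    -- waits (yielding, not blocking the worker) until the lock is free
    local elapsed, err = lock:lock("expensive-key")
    if not elapsed then
        ngx.log(ngx.ERR, "failed to acquire lock: ", err)
        return ngx.exit(500)
    end

    -- ... regenerate the cached value here ...

    local ok, err = lock:unlock()
    if not ok then
        ngx.log(ngx.ERR, "failed to unlock: ", err)
    end
}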

Extra libraries available as separate packages:

apt-get install lua-resty-jwt lua-resty-limit-traffic lua-resty-session

Frequently Asked Questions

Does the Lua module work with Angie?

Yes — install angie-module-http-ndk and angie-module-http-lua instead of the libnginx-mod-* packages. The module API and all directives are identical. Your Lua code is unchanged.

Is lua_code_cache off safe to use?

Only in development. With it off, NGINX re-reads and re-compiles every Lua file on each request, which lets you edit scripts without reloading but destroys performance. Never use it in production.

Why does my cosocket call fail in init_by_lua_block?

The cosocket API is only available in request-handling phases and in init_worker_by_lua_block timers. In init_by_lua_block, the event loop hasn’t started yet, so there’s no I/O available. Move your connection setup to init_worker_by_lua_block instead.
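
The working pattern looks like this (the Redis address is an example):

init_worker_by_lua_block {
    ngx.timer.at(0, function(premature)
        if premature then return end
        -- timer callbacks run once the event loop is up, so
        -- cosockets are available here
        local redis = require "resty.redis"
        local red = redis:new()
        local ok, err = red:connect("127.0.0.1", 6379)
        if not ok then
            ngx.log(ngx.ERR, "warm-up connect failed: ", err)
            return
        end
        red:set_keepalive(10000, 100)  -- prime the connection pool
    end)
}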

How do I debug Lua errors in NGINX?

Set error_log /var/log/nginx/error.log debug; temporarily. All Lua errors, including stack traces, go to the error log. Use ngx.log(ngx.ERR, "message") from any Lua phase to add your own debug output. Set error_log back to warn (and use ngx.WARN for routine messages) in production to avoid log spam.

Can I share data between workers without Redis?

Yes — use lua_shared_dict. Data stored there is visible to all workers in the same NGINX process and survives worker restarts (but not master restarts). For permanent cross-worker storage you need Redis or a database.

Is there performance overhead vs pure C modules?

For simple logic, LuaJIT is within 5-10% of equivalent C code after warm-up. For compute-heavy work, LuaJIT’s trace compiler can approach code compiled with GCC -O2. The main overhead is network latency from cosocket calls — which you’d have with any approach that talks to Redis.
