If you run nginx as a reverse proxy, API gateway, or web server, Nginx NJS — the official Nginx JavaScript module — lets you write JavaScript that executes inside the nginx worker process itself. Authentication, request validation, rate limiting, response transformation, and routing decisions happen at the network edge in microseconds, without a round-trip to any backend service and without running a separate process.
This guide explains NJS from first principles through to production-grade patterns used in real API gateways and edge deployments. Every directive and API call is explained — not just shown. All packages are available from the deb.myguard.nl repository for Debian Bookworm/Trixie and Ubuntu Jammy/Noble.
What Is Nginx NJS and How Does It Work Internally?
NJS is a JavaScript interpreter built into the nginx worker process. When a request arrives, nginx can invoke a JavaScript function at any phase of request processing — access checking, content generation, header filtering, body filtering, or pre-read. The function runs in the same OS thread as the connection handler. There is no IPC, no socket, no serialisation overhead. The call is a direct function invocation.
This is the fundamental reason NJS is fast: it eliminates the round-trip that every other approach requires. A Lua, Python, or Node.js solution that nginx calls over a Unix socket still requires OS scheduling, a context switch, and socket I/O. NJS has none of that — the JavaScript runs and returns, and nginx continues processing the request in the same event loop iteration.
The NJS Request Object
Every NJS function receives a single argument — conventionally named r — which is the nginx request object. Through this object the function can read and write anything nginx knows about the request:
- r.uri, r.method, r.args — the URI, HTTP method, and query string
- r.headersIn — incoming request headers (read-only map)
- r.headersOut — outgoing response headers (writable)
- r.remoteAddress — client IP address
- r.variables — all nginx variables, readable and writable
- r.return(status, body) — terminate the request immediately with a response
- r.internalRedirect(uri) — hand off to a different nginx location block
- r.subrequest(uri, options, callback) — make an async sub-request and act on the response
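As a sketch of how these fields fit together, here is a minimal handler that summarises the request as JSON. The stub r object below exists only so the handler can be exercised outside nginx; inside nginx, the real request object is supplied by whichever directive invokes the function.

```javascript
// Minimal NJS-style handler: read request fields, emit a JSON response.
function describeRequest(r) {
    const summary = {
        method: r.method,
        uri: r.uri,
        client: r.remoteAddress,
        ua: r.headersIn['User-Agent'] || 'unknown'
    };
    r.headersOut['Content-Type'] = 'application/json';
    r.return(200, JSON.stringify(summary));
}

// Stub request object (illustration only; nginx supplies the real one)
const r = {
    method: 'GET',
    uri: '/api/status',
    remoteAddress: '203.0.113.7',
    headersIn: { 'User-Agent': 'curl/8.5.0' },
    headersOut: {},
    returned: null,
    return(status, body) { this.returned = { status, body }; }
};

describeRequest(r);
console.log(r.returned.status, r.returned.body);
```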
Modern NJS Capabilities (NJS 0.8.x, nginx 1.25+)
Early guides describe NJS as “ES5.1 only” — that was accurate in 2017. Modern NJS supports a large subset of ES2019:
- Arrow functions: items.map(x => x * 2)
- const / let with block scoping
- Template literals: `Hello ${name}`
- Destructuring: const { uri, method } = r;
- Spread operator: [...arr1, ...arr2]
- Promise and async/await for non-blocking operations
- ngx.fetch() — async HTTP requests from inside NJS (like browser fetch)
- ngx.shared — shared memory dictionaries for state across all worker processes
- Buffer class for binary data handling (base64, hex, binary encoding)
- Crypto module — HMAC-SHA256, SHA-256, MD5 for real JWT signature verification
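These language features can be checked outside nginx with the standalone njs interpreter (covered later) or any modern JavaScript runtime; the snippet below exercises the list above in one pass.

```javascript
// ES2019-level features supported by modern NJS, exercised together
const items = [1, 2, 3];
const doubled = items.map(x => x * 2);      // arrow function
const [first, ...rest] = doubled;           // destructuring with rest
const merged = [...doubled, 8, 10];         // spread operator
const name = 'njs';
const greeting = `Hello ${name}`;           // template literal

// Promise plus async/await
async function delayedValue(v) {
    return await Promise.resolve(v);
}

delayedValue(42).then(v => {
    console.log(greeting, doubled, first, rest, merged, v);
});
```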
Installing the Nginx NJS Module on Debian and Ubuntu
The NJS module ships as a dynamic module — loaded at runtime without recompiling nginx. The deb.myguard.nl repository provides the NJS module for all supported distributions, built against the same extended nginx binary as the full module set.
# Add the deb.myguard.nl repository
curl -fsSL https://deb.myguard.nl/gpg.key | sudo gpg --dearmor -o /usr/share/keyrings/myguard-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/myguard-archive-keyring.gpg] https://deb.myguard.nl bookworm main" | sudo tee /etc/apt/sources.list.d/myguard.list
# Install nginx with the NJS HTTP and Stream modules
sudo apt-get update
sudo apt-get install nginx libnginx-mod-http-js libnginx-mod-stream-js
Two modules are available:
- libnginx-mod-http-js — NJS for HTTP proxying and content generation (most common use case)
- libnginx-mod-stream-js — NJS for the stream module: TCP and UDP proxying with JavaScript logic
# Load at the top of nginx.conf (outside any http or stream blocks)
load_module modules/ngx_http_js_module.so;
load_module modules/ngx_stream_js_module.so; # Only if using stream proxying
# Verify the modules loaded correctly
sudo nginx -t
nginx -T 2>&1 | grep js_module
Core NJS Directives: What Each One Does and When to Use It
NJS integrates into nginx through phase-specific directives. Using the wrong directive for a given task either fails silently or runs JavaScript at the wrong moment in the request lifecycle — the most common source of NJS bugs.
http {
js_import main from /etc/nginx/njs/main.js; # Compile the JS file once at startup
server {
listen 443 ssl;
# js_set — runs JS to compute an nginx variable value
# Evaluated lazily: only runs if the variable is referenced elsewhere in the config
# Result is cached for the lifetime of the request
js_set $computed_value main.computeValue;
location /api {
# js_access — runs during the access phase, before nginx contacts the upstream
# Call r.return(r.OK) to allow, or r.return(403) to block
# This is where authentication and rate limiting belong
js_access main.checkAuth;
proxy_pass http://backend;
}
location /content {
# js_content — replaces the content handler entirely
# The JS function IS the response — use r.return() or r.sendBuffer()
# No proxy_pass needed; the JS generates the response directly
js_content main.generateResponse;
}
location /transform {
# js_filter — intercepts the response body from the upstream
# Receives the body in chunks via r.on('data') and r.on('end')
# Can modify, replace, or drop the body
js_filter main.transformBody;
proxy_pass http://backend;
}
location /headers-only {
# js_header_filter — runs after upstream response headers arrive
# Can read and write r.headersOut; cannot touch the body
# More efficient than js_filter when you only need header changes
js_header_filter main.addSecurityHeaders;
proxy_pass http://backend;
}
}
}
Always use the earliest phase that provides the data you need. A request rejected in js_access never allocates memory for the response body. A request modified in js_filter has already waited for the full upstream response. Phase choice is a performance decision as much as a correctness one.
Structuring NJS Code: Use js_import, Not Inline Scripts
Use js_import with a file. The file is compiled once at nginx startup and referenced by module name. Inline code (the deprecated js_include directive) is re-parsed on every nginx -s reload and cannot export named functions.
// /etc/nginx/njs/auth.js
// Named exports make each function referenceable as auth.functionName in nginx.conf
export default { checkAuth, validateToken };
function checkAuth(r) {
const token = r.headersIn['Authorization']?.replace('Bearer ', '');
if (!token) {
r.return(401, JSON.stringify({ error: 'Missing authorization token' }));
return;
}
r.return(r.OK); // Allow the request to continue
}
function validateToken(r) { /* ... */ }
# nginx.conf
http {
js_import auth from /etc/nginx/njs/auth.js;
server {
location /api {
js_access auth.checkAuth; # module.function syntax
proxy_pass http://backend;
}
}
}
Nginx NJS Examples with Full Explanations
Example 1: JWT Authentication with Real Cryptographic Verification
// /etc/nginx/njs/auth.js
import crypto from 'crypto';
const SECRET = 'your-hmac-secret-key'; // In production, use an environment variable
export default { validateJWT };
async function validateJWT(r) {
const authHeader = r.headersIn['Authorization'] || '';
const token = authHeader.startsWith('Bearer ') ? authHeader.slice(7) : null;
if (!token) {
r.return(401, JSON.stringify({ error: 'No token provided' }));
return;
}
const parts = token.split('.');
if (parts.length !== 3) {
r.return(401, JSON.stringify({ error: 'Malformed token' }));
return;
}
// Cryptographically verify the HMAC-SHA256 signature
// Without this check, anyone who knows the payload format can forge a token
const signingInput = parts[0] + '.' + parts[1];
const expectedSig = crypto
.createHmac('sha256', SECRET) // lowercase digest name, as the NJS crypto module expects
.update(signingInput)
.digest('base64url');
if (expectedSig !== parts[2]) {
r.return(401, JSON.stringify({ error: 'Invalid signature' }));
return;
}
// Decode the payload and check the expiry claim
let payload;
try {
payload = JSON.parse(Buffer.from(parts[1], 'base64url').toString());
} catch (e) {
r.return(401, JSON.stringify({ error: 'Invalid payload' }));
return;
}
if (payload.exp && payload.exp < Math.floor(Date.now() / 1000)) {
r.return(401, JSON.stringify({ error: 'Token expired' }));
return;
}
// Inject user context as headers that the backend will receive via proxy_pass
r.headersOut['X-User-ID'] = payload.sub || '';
r.headersOut['X-User-Role'] = payload.role || 'user';
r.return(r.OK); // Allow the request
}
Why the signature check matters: most JWT validation tutorials show structural checks — splitting on dots, decoding base64, reading the payload. Without the HMAC-SHA256 signature check, any client who knows the expected payload format can create a token that will be accepted. The NJS crypto module recomputes the expected signature from the header and payload and compares it to the token’s signature. A forged or tampered token will not match and is rejected at the nginx layer, before any backend code runs.
The headers injected (X-User-ID, X-User-Role) are forwarded to the upstream by nginx’s proxy_pass. The backend can trust these because they come from nginx after verification — a client cannot inject them directly into a proxied request.
Example 2: Rate Limiting with Shared Memory Across All nginx Workers
// /etc/nginx/njs/ratelimit.js
export default { checkRateLimit };
function checkRateLimit(r) {
const key = r.remoteAddress; // Rate limit per client IP
const windowMs = 60 * 1000; // 1-minute sliding window
const limit = 100; // 100 requests per window
const now = Date.now();
// ngx.shared.ratelimit is a shared memory zone visible to ALL nginx workers
// Without shared memory, each worker has its own counter and the effective
// limit is (limit * worker_count) — not what you want
const store = ngx.shared.ratelimit;
const entry = store.get(key);
let count = 1, windowStart = now;
if (entry) {
const data = JSON.parse(entry);
if (now - data.start < windowMs) {
count = data.count + 1; // Still in the same window
windowStart = data.start;
}
// else: window expired, reset to count=1
}
// TTL argument (3rd param) evicts the key automatically after the window
store.set(key, JSON.stringify({ count, start: windowStart }), windowMs / 1000);
r.headersOut['X-RateLimit-Limit'] = String(limit);
r.headersOut['X-RateLimit-Remaining'] = String(Math.max(0, limit - count));
r.headersOut['X-RateLimit-Reset'] = String(Math.ceil((windowStart + windowMs) / 1000));
if (count > limit) {
const retryAfter = Math.ceil((windowStart + windowMs - now) / 1000);
r.headersOut['Retry-After'] = String(retryAfter);
r.return(429, JSON.stringify({ error: 'Rate limit exceeded', retry_after: retryAfter }));
return;
}
r.return(r.OK);
}
# nginx.conf
http {
js_import rl from /etc/nginx/njs/ratelimit.js;
# Declare shared memory zone: 10 MB shared across all workers
# 10 MB can hold roughly 100,000 active rate-limit entries
js_shared_dict_zone zone=ratelimit:10m;
server {
location /api/ {
js_access rl.checkRateLimit;
proxy_pass http://api_backend/;
}
}
}
Storing the window start timestamp alongside the count avoids the aligned-boundary burst of a naive fixed window. With a clock-aligned window, a client can make 100 requests at 00:59 and another 100 at 01:00 when the counter resets, sending 200 requests in 2 seconds. Here each client's window starts at its own first request, so that burst pattern disappears, though a determined client can still double up around its own window reset; a true sliding-log algorithm prevents that at higher memory cost. The Retry-After header tells well-behaved clients exactly how long to wait, which reduces unnecessary retry storms.
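The window arithmetic is easy to test in isolation. Below, the same logic is factored into a pure hit function, with a plain Map standing in for ngx.shared.ratelimit (an assumption for the test; the real shared dictionary stores strings, as in the example above).

```javascript
// Per-key window logic from checkRateLimit, extracted so it can run anywhere.
// `store` is a Map standing in for the shared dictionary.
function hit(store, key, now, windowMs, limit) {
    const entry = store.get(key);
    let count = 1, windowStart = now;
    if (entry && now - entry.start < windowMs) {
        count = entry.count + 1;     // still inside this client's window
        windowStart = entry.start;
    }
    store.set(key, { count, start: windowStart });
    return { allowed: count <= limit, remaining: Math.max(0, limit - count) };
}

const store = new Map();
const WINDOW = 60000, LIMIT = 3;
console.log(hit(store, '10.0.0.1', 0, WINDOW, LIMIT).allowed);      // 1st request: true
hit(store, '10.0.0.1', 1000, WINDOW, LIMIT);                        // 2nd
hit(store, '10.0.0.1', 2000, WINDOW, LIMIT);                        // 3rd: at the limit
console.log(hit(store, '10.0.0.1', 3000, WINDOW, LIMIT).allowed);   // 4th: false (blocked)
console.log(hit(store, '10.0.0.1', 61000, WINDOW, LIMIT).allowed);  // window expired: true
```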
Example 3: Async Request Enrichment with ngx.fetch()
// /etc/nginx/njs/enrichment.js
export default { enrichRequest };
// ngx.fetch() is available in NJS 0.7.0+ and makes async HTTP requests
// using nginx's own connection infrastructure (keepalive pools, etc.)
async function enrichRequest(r) {
const userId = r.headersIn['X-User-ID'];
if (!userId) {
r.return(r.OK);
return;
}
try {
// Fire both requests simultaneously with Promise.all
// If profile takes 10ms and permissions takes 8ms, total is 10ms not 18ms
const [profileResp, permissionsResp] = await Promise.all([
ngx.fetch(`http://user-service/profile/${userId}`),
ngx.fetch(`http://authz-service/permissions/${userId}`)
]);
if (!profileResp.ok || !permissionsResp.ok) {
r.warn(`Enrichment failed: profile=${profileResp.status} perms=${permissionsResp.status}`);
r.return(r.OK); // Fail open: continue without enrichment headers
return;
}
const [profile, permissions] = await Promise.all([
profileResp.json(),
permissionsResp.json()
]);
// These headers are forwarded to the upstream backend by proxy_pass
r.headersOut['X-User-Name'] = profile.name || '';
r.headersOut['X-User-Tier'] = profile.tier || 'free';
r.headersOut['X-User-Permissions'] = (permissions.grants || []).join(',');
} catch (e) {
r.warn(`Enrichment error: ${e.message}`);
// Best-effort enrichment: a failure here should not block the primary request
}
r.return(r.OK);
}
Without this pattern, the backend service has to look up the user profile and permissions on every request — adding two sequential database queries per request. With NJS enrichment at the gateway, the backend receives the user context as headers and trusts them, eliminating those lookups entirely. The backend becomes simpler and faster; the enrichment happens once at the entry point, in parallel, before the backend is involved.
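The concurrency claim is easy to demonstrate. fakeFetch below is a stand-in for ngx.fetch with an artificial delay; with Promise.all, the elapsed time tracks the slower call rather than the sum of both.

```javascript
// Stand-in for ngx.fetch: resolves with `value` after `ms` milliseconds
function fakeFetch(ms, value) {
    return new Promise(resolve => setTimeout(() => resolve(value), ms));
}

async function enrich() {
    const t0 = Date.now();
    const [profile, permissions] = await Promise.all([
        fakeFetch(50, { name: 'Ada', tier: 'pro' }),
        fakeFetch(40, { grants: ['read', 'write'] })
    ]);
    return {
        elapsed: Date.now() - t0,   // roughly 50ms (the slower call), not 90ms (the sum)
        headers: {
            'X-User-Name': profile.name,
            'X-User-Tier': profile.tier,
            'X-User-Permissions': permissions.grants.join(',')
        }
    };
}

enrich().then(out => console.log(out.elapsed < 90, out.headers));
```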
Example 4: Response Body Transformation — Rewrite JSON from Upstream
// /etc/nginx/njs/transform.js
export default { rewriteApiResponse };
function rewriteApiResponse(r) {
const contentType = r.headersOut['Content-Type'] || '';
// Only process JSON — pass all other content types through unchanged
if (!contentType.includes('application/json')) {
return;
}
const chunks = [];
// NJS delivers the response body in chunks (may be a single chunk or many)
// Collecting all chunks before parsing handles both small and large responses
r.on('data', chunk => chunks.push(chunk));
r.on('end', () => {
try {
const body = Buffer.concat(chunks).toString('utf8');
const data = JSON.parse(body);
// Strip internal fields the API client should never see
delete data.internal_id;
delete data.db_shard;
delete data._debug;
// Add gateway-level metadata
data.api_version = 'v2';
data.served_at = new Date().toISOString();
// Normalise pagination format across different upstream backends
if (data.page_info) {
data.pagination = {
page: data.page_info.current,
per_page: data.page_info.size,
total: data.page_info.total_records,
next_cursor: data.page_info.cursor
};
delete data.page_info;
}
const newBody = JSON.stringify(data);
// Critical: update Content-Length to match the new body size
// Omitting this causes the client to truncate or hang waiting for bytes
r.headersOut['Content-Length'] = String(Buffer.byteLength(newBody, 'utf8'));
r.sendBuffer(newBody, { last: true });
} catch (e) {
r.warn(`Transform error: ${e.message}`);
r.sendBuffer(Buffer.concat(chunks), { last: true }); // Send original on error
}
});
}
This is the strangler-fig migration pattern applied at the API gateway: a v1 backend is exposed as a v2 API with a different response shape, without touching the backend code. The proxy_pass points to the old backend; NJS rewrites the response body to the new format on the way out. Clients see v2 behaviour; the backend is unchanged.
Always recalculate Content-Length after modifying the body. If the body grows (or shrinks) but the header stays at the original value, the browser will either cut the response short or keep waiting for bytes that never arrive. Use Buffer.byteLength(str, 'utf8') rather than str.length — multibyte characters make those values different.
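The difference shows up with any multibyte string: str.length counts UTF-16 code units, while Buffer.byteLength counts the UTF-8 bytes that actually go on the wire.

```javascript
// 'é' encodes to 2 UTF-8 bytes and '€' to 3, but each is one code unit
const body = JSON.stringify({ message: 'café 12€' });

console.log(body.length);                        // JS string length (code units)
console.log(Buffer.byteLength(body, 'utf8'));    // bytes on the wire: 3 more
```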
Example 5: GraphQL Mutation vs Query Routing
// /etc/nginx/njs/graphql-router.js
export default { routeGraphQL };
async function routeGraphQL(r) {
if (r.uri !== '/graphql' && r.uri !== '/query') {
r.return(r.OK);
return;
}
const contentType = r.headersIn['Content-Type'] || '';
if (!contentType.includes('application/json')) {
r.return(r.OK);
return;
}
try {
// Read the full request body to inspect the GraphQL operation
const body = await r.requestText();
const payload = JSON.parse(body);
const operation = (payload.query || '').trim();
// GraphQL mutations are writes — must go to the primary DB node
// GraphQL queries are reads — can go to a replica for load distribution
if (operation.startsWith('mutation')) {
r.variables.graphql_backend = 'graphql_primary';
} else {
r.variables.graphql_backend = 'graphql_replica';
}
} catch (e) {
r.warn(`GraphQL routing error: ${e.message}`);
r.variables.graphql_backend = 'graphql_primary'; // Fail safe to primary
}
r.return(r.OK);
}
# nginx.conf
http {
js_import graphql from /etc/nginx/njs/graphql-router.js;
# Requires the body to be buffered before js_access can read it
client_body_in_single_buffer on;
upstream graphql_primary { server 10.0.2.10:4000; } # Read-write
upstream graphql_replica { server 10.0.2.20:4000; server 10.0.2.21:4000; } # Read-only
server {
location /graphql {
js_access graphql.routeGraphQL;
proxy_pass http://$graphql_backend;
}
}
}
GraphQL has no equivalent of a GET vs POST distinction for reads vs writes — all requests are POST to the same endpoint. The only way to route by operation type is to inspect the request body. With client_body_in_single_buffer on, nginx buffers the full request body before the access phase runs, making it readable via r.requestText(). Without this directive, the body is not yet available in the access phase and requestText() returns empty. Routing mutations to the primary and queries to replicas is a meaningful scalability improvement for GraphQL APIs with heavy read traffic.
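One caveat with startsWith('mutation'): GraphQL permits comments before the operation keyword, and anonymous queries begin with a bare {. A slightly more defensive check (a sketch for the common cases, not a full GraphQL parser) looks like this:

```javascript
// Defensive operation-type check: strips GraphQL comments before testing
function isMutation(query) {
    const stripped = query.replace(/#[^\n]*/g, '').trim();
    return /^mutation\b/.test(stripped);
}

console.log(isMutation('mutation AddUser { addUser(name: "a") { id } }')); // true
console.log(isMutation('# write path\nmutation { ping }'));                // true
console.log(isMutation('query GetUser { user { id } }'));                  // false
console.log(isMutation('{ user { id } }'));          // anonymous query: false
```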
Example 6: Sub-request-based External Authentication
// /etc/nginx/njs/extauth.js
export default { authWithSubrequest };
async function authWithSubrequest(r) {
// r.subrequest() makes an nginx internal sub-request
// It reuses nginx's existing connection pool — no new TCP connections needed
const authResp = await r.subrequest('/internal/auth', {
method: 'GET',
args: '',
body: ''
});
if (authResp.status !== 200) {
// Return whatever the auth service returned (401, 403, etc.)
r.return(authResp.status, authResp.responseText);
return;
}
// Parse user context from the auth service response
const authData = JSON.parse(authResp.responseText);
r.headersOut['X-Auth-User'] = authData.user_id || '';
r.headersOut['X-Auth-Scope'] = (authData.scopes || []).join(' ');
r.return(r.OK);
}
# nginx.conf
http {
js_import extauth from /etc/nginx/njs/extauth.js;
server {
location /api/ {
js_access extauth.authWithSubrequest;
proxy_pass http://api_backend/;
}
# internal; means only reachable via nginx sub-requests, not from the internet
# External clients cannot call /internal/auth directly
location /internal/auth {
internal;
proxy_pass http://auth_service;
proxy_set_header Authorization $http_authorization;
proxy_set_header X-Original-URI $request_uri;
}
}
}
This is the NJS equivalent of nginx’s built-in auth_request module — but with full JavaScript control over the response. The standard auth_request can only forward specific headers from the auth response; this pattern lets you parse the auth response body, apply business logic, and set any combination of request headers. The internal directive is non-negotiable for security: without it, an attacker can call /internal/auth from the internet and see what the auth service returns for any request URI.
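The control flow of this pattern can be exercised outside nginx by stubbing r.subrequest. The stubRequest helper below is purely for illustration; inside nginx the real request object provides subrequest.

```javascript
// Same flow as authWithSubrequest, run against a stubbed request object
async function authViaSubrequest(r) {
    const resp = await r.subrequest('/internal/auth', { method: 'GET' });
    if (resp.status !== 200) {
        r.return(resp.status, resp.responseText);
        return false;
    }
    const auth = JSON.parse(resp.responseText);
    r.headersOut['X-Auth-User'] = auth.user_id || '';
    r.headersOut['X-Auth-Scope'] = (auth.scopes || []).join(' ');
    return true;
}

// Stub: fakes the auth service's sub-request response
function stubRequest(status, body) {
    return {
        headersOut: {},
        returned: null,
        subrequest() { return Promise.resolve({ status, responseText: body }); },
        return(s, b) { this.returned = { status: s, body: b }; }
    };
}

const ok = stubRequest(200, '{"user_id":"u42","scopes":["read","write"]}');
const denied = stubRequest(403, '{"error":"forbidden"}');

Promise.all([authViaSubrequest(ok), authViaSubrequest(denied)]).then(([a, d]) => {
    console.log(a, ok.headersOut['X-Auth-Scope']);   // true 'read write'
    console.log(d, denied.returned.status);          // false 403
});
```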
NJS with the Nginx Stream Module: TCP and UDP Proxying
The ngx_stream_js_module brings NJS to nginx’s TCP and UDP proxy layer. This enables protocol-aware routing for databases, MQTT, custom binary protocols, and anything that is not HTTP — at the TCP level, before any application-layer decoding.
// /etc/nginx/njs/stream-router.js
// Stream NJS functions receive a session object (s) instead of a request (r)
export default { routeByFirstBytes };
function routeByFirstBytes(s) {
const buf = s.buffer;
// Wait until we have the full 8-byte SSLRequest prefix
if (buf.length < 8) {
return; // Not enough data yet; decide once more bytes have arrived
}
// A PostgreSQL SSLRequest message is an int32 length (8) followed by an int32 code
// The code 80877103 (0x04D2162F) tells us the client wants to negotiate TLS
// before sending credentials
const code = buf.readUInt32BE(4);
if (code === 80877103) {
s.variables.pg_backend = 'pg_tls_pool'; // TLS-capable replicas
} else {
s.variables.pg_backend = 'pg_plain_pool'; // Plain TCP replicas
}
s.allow(); // Allow the connection to proceed to the selected upstream
}
stream {
js_import stream_router from /etc/nginx/njs/stream-router.js;
upstream pg_tls_pool { server 10.0.3.10:5432; server 10.0.3.11:5432; }
upstream pg_plain_pool { server 10.0.3.20:5432; server 10.0.3.21:5432; }
server {
listen 5432;
js_preread stream_router.routeByFirstBytes; # Runs before nginx picks an upstream
proxy_pass $pg_backend;
}
}
Stream NJS uses js_preread (which runs while nginx is buffering the initial client bytes, before it decides where to forward the connection) and js_filter (which runs on data flowing in both directions through the proxy). This enables routing decisions that are impossible at the HTTP level — inspecting binary protocol headers, multiplexing multiple protocols on a single port, or enforcing custom auth at the TCP layer for non-HTTP services.
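The byte-level read in the preread handler can be checked with a hand-built packet. Per the PostgreSQL wire protocol, SSLRequest is a 4-byte length (8) followed by the 4-byte request code 80877103 (0x04D2162F).

```javascript
// Build an SSLRequest packet and read it back the way the handler would
const pkt = Buffer.alloc(8);
pkt.writeUInt32BE(8, 0);            // int32 length of the whole message
pkt.writeUInt32BE(80877103, 4);     // int32 SSLRequest code

const code = pkt.readUInt32BE(4);
console.log(code, code === 80877103);   // 80877103 true
```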
NJS vs Lua for Nginx: Which Should You Use?
| Aspect | NJS | Lua (via OpenResty / LuaJIT) |
|---|---|---|
| Language | JavaScript (ES2019 subset) | Lua 5.1 / LuaJIT |
| Learning curve | Low for JS developers | Requires learning Lua syntax |
| Raw performance | Very fast | Slightly faster (LuaJIT) |
| Library ecosystem | Limited; no npm access | Extensive via OpenResty libs |
| Async support | Native async/await + ngx.fetch() | Cosocket API |
| Cryptography | Built-in crypto module | Via resty.openssl |
| TCP/UDP (stream) | Yes (ngx_stream_js_module) | Limited |
| Official support | First-party from nginx / F5 | Community (OpenResty project) |
| Best for | Auth, JSON transform, routing, JWT | Complex caching, heavy business logic |
Choose NJS when your team writes JavaScript, when you need an officially supported module maintained by the nginx team itself, or when you need TCP/UDP stream processing. Choose Lua when you need the breadth of the OpenResty library ecosystem or the highest possible per-request throughput for compute-heavy workloads (LuaJIT is measurably faster than NJS for tight computation loops).
Debugging Nginx NJS: Tools and Techniques
The njs Command-Line Interpreter
# Install the standalone NJS interpreter
sudo apt-get install njs
# Test a function outside of nginx
njs -c 'const b = Buffer.from("eyJzdWIiOiJ1c2VyMTIzIn0", "base64url"); print(b.toString());'
# Output: {"sub":"user123"}
# Run a script file
njs /etc/nginx/njs/auth.js
The CLI does not have the r request object (that only exists inside nginx), but it is the fastest way to test utility functions — JWT parsing, JSON transformation, crypto operations — before deploying. Syntax errors in your JS show up immediately without needing an nginx reload.
Logging Inside NJS Functions
function myFunction(r) {
// r.warn() writes to the nginx error log at WARN level — visible without debug mode
r.warn(`Processing request: method=${r.method} uri=${r.uri} ip=${r.remoteAddress}`);
// r.log() writes at INFO level — only visible with error_log ... info;
r.log('Detailed trace info');
// r.error() writes at ERROR level
r.error('Something went wrong');
}
# Set error log level in nginx.conf for development
error_log /var/log/nginx/error.log warn;
# Or for maximum detail
error_log /var/log/nginx/error.log debug;
Configuration Test and Hot Reload
# Test nginx config including NJS syntax
sudo nginx -t
# If the test passes, reload without dropping connections
sudo nginx -s reload
# Tail the error log to see NJS output immediately
tail -f /var/log/nginx/error.log | grep -E '(njs|WARN|ERROR)'
Frequently Asked Questions: Nginx NJS
What is Nginx NJS and how is it different from Node.js?
NJS is a JavaScript interpreter embedded directly inside the nginx worker process. It is not Node.js and does not share Node’s standard library or the npm ecosystem. NJS provides its own API surface specifically for interacting with nginx internals — request headers, response bodies, nginx variables, sub-requests. The advantage over Node.js is zero IPC overhead: the JavaScript executes in the same OS thread as the connection handler, with a direct function call, no socket or process boundary. The trade-off is a smaller API surface and no access to npm.
Does NJS support async/await and modern JavaScript features?
Yes. NJS 0.7.0+ (shipped with nginx 1.25+) supports arrow functions, const/let, template literals, destructuring, spread, Promise, async/await, and ngx.fetch() for async HTTP requests. Guides that say “NJS only supports ES5.1” are outdated — that was the state in 2017. Run njs -v to check which version is installed, or nginx -V 2>&1 | grep njs.
How does nginx NJS handle state between requests?
JavaScript variables inside an NJS function are per-request and per-worker — they are destroyed when the function returns and are not visible to other workers. For persistent state shared across all workers (rate limit counters, session caches), use ngx.shared with a zone declared by js_shared_dict_zone. This creates a mutex-protected shared memory segment. For truly persistent state across restarts, use Redis via ngx.fetch() to a Redis HTTP adapter or the nginx redis2 module.
Can NJS read and modify the request or response body?
Yes. Use js_filter to intercept and rewrite the response body — NJS receives body data in chunks via r.on('data', chunk => ...) and r.on('end', () => ...). To read the request body in the access phase, use await r.requestText() and set client_body_in_single_buffer on in nginx.conf. Body operations consume memory proportional to the body size — set appropriate client_max_body_size limits.
How do I install the nginx NJS module on Debian or Ubuntu?
Install libnginx-mod-http-js from the deb.myguard.nl repository or the official nginx mainline packages. Add load_module modules/ngx_http_js_module.so; at the top of nginx.conf, outside any http or stream block. For TCP/UDP stream proxying with JavaScript, also install libnginx-mod-stream-js and load ngx_stream_js_module.so.
Is nginx NJS suitable for production use?
Yes. NJS is the official JavaScript module for nginx, maintained by F5 and the nginx team. It ships in the nginx mainline and stable release tracks and is used in production by API gateway and CDN deployments. Start with well-defined use cases (JWT validation, header manipulation, rate limiting), validate under load, and expand from there. The NJS version shipped in the deb.myguard.nl repository is kept current with the nginx mainline release.
Can NJS access Redis directly?
NJS cannot speak the Redis wire protocol directly — it has no built-in TCP client beyond ngx.fetch() (which is HTTP/HTTPS only). The two practical approaches are: (1) use nginx’s native redis2 module for direct Redis protocol access, and use r.subrequest() from NJS to call a redis2 location; (2) put a lightweight HTTP adapter in front of Redis or a Redis-compatible server such as Valkey (for example Webdis, an HTTP interface to Redis, or a small purpose-built proxy) and call it via ngx.fetch().
Production Nginx NJS Stack: Complete Installation
# Install nginx and NJS modules from deb.myguard.nl
sudo apt-get install nginx-minimal \
  libnginx-mod-http-js \
  libnginx-mod-stream-js \
  libnginx-mod-http-headers-more-filter \
  libnginx-mod-http-modsecurity \
  libnginx-mod-http-dynamic-limit-req
# Recommended directory structure for NJS modules
mkdir -p /etc/nginx/njs
# /etc/nginx/njs/auth.js — JWT validation and external auth
# /etc/nginx/njs/ratelimit.js — shared-memory rate limiting
# /etc/nginx/njs/router.js — content-based routing
# /etc/nginx/njs/transform.js — response body transformation
# /etc/nginx/nginx.conf (production skeleton)
load_module modules/ngx_http_js_module.so;
load_module modules/ngx_http_modsecurity_module.so;
load_module modules/ngx_http_headers_more_filter_module.so;
http {
js_import auth from /etc/nginx/njs/auth.js;
js_import rl from /etc/nginx/njs/ratelimit.js;
js_import transform from /etc/nginx/njs/transform.js;
js_shared_dict_zone zone=ratelimit:10m;
server {
listen 443 ssl;
listen 443 quic reuseport; # HTTP/3
http2 on;
ssl_certificate /etc/ssl/certs/example.com.pem;
ssl_certificate_key /etc/ssl/private/example.com.key;
ssl_protocols TLSv1.2 TLSv1.3;
add_header Alt-Svc 'h3=":443"; ma=86400';
location /api/ {
# Access phase: auth first, then rate limit
js_access auth.validateJWT;
js_access rl.checkRateLimit;
# Filter phase: rewrite the response from the backend
js_filter transform.rewriteApiResponse;
modsecurity on;
modsecurity_rules_file /etc/nginx/modsecurity/modsecurity.conf;
proxy_pass http://api_backends/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
upstream api_backends {
server 10.0.1.10:8080;
server 10.0.1.11:8080;
keepalive 32;
}
}
All packages listed are available from the deb.myguard.nl repository for Debian Bookworm/Trixie and Ubuntu Jammy/Noble, rebuilt automatically within hours of each nginx upstream release.