<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	xmlns:media="http://search.yahoo.com/mrss/" >

<channel>
	<title>deb.myguard.nl</title>
	<atom:link href="https://deb.myguard.nl/feed/" rel="self" type="application/rss+xml" />
	<link>https://deb.myguard.nl</link>
	<description>Building packages, building the web</description>
	<lastBuildDate>Wed, 13 May 2026 11:30:59 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	

<image>
	<url>https://deb.myguard.nl/wp-content/uploads/2021/05/cropped-distributorlogodebian_103834-150x150.png</url>
	<title>deb.myguard.nl</title>
	<link>https://deb.myguard.nl</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>WordPress NGINX Configuration: PHP-FPM Tuning, FastCGI Cache and Redis (2026 Guide)</title>
		<link>https://deb.myguard.nl/2026/05/wordpress-nginx-php-fpm-configuration-guide/</link>
		
		<dc:creator><![CDATA[]]></dc:creator>
		<pubDate>Tue, 12 May 2026 22:23:09 +0000</pubDate>
				<category><![CDATA[caching]]></category>
		<category><![CDATA[nginx]]></category>
		<category><![CDATA[performance]]></category>
		<category><![CDATA[php-fpm]]></category>
		<category><![CDATA[security]]></category>
		<category><![CDATA[wordpress]]></category>
		<guid isPermaLink="false">https://deb.myguard.nl/2026/05/wordpress-nginx-php-fpm-configuration-guide/</guid>

					<description><![CDATA[The complete WordPress + NGINX + PHP-FPM setup for Debian and Ubuntu: server block config, pool tuning, FastCGI caching for anonymous traffic, Redis object cache, Brotli compression, and security hardening with ModSecurity and Snuffleupagus.]]></description>
										<content:encoded><![CDATA[
<p>WordPress on NGINX is a popular combination — but a default installation leaves a lot of performance on the table. Default PHP-FPM settings are tuned for a shared hosting environment with dozens of sites; a dedicated server running one WordPress site can go much faster. Default NGINX config doesn&#8217;t enable FastCGI caching, compression, or proper static file handling. Default WordPress sends no cache headers, causing browsers to re-download the same assets on every visit.</p>

<p>This guide covers the complete WordPress + NGINX + PHP-FPM stack on Debian and Ubuntu: server block configuration, PHP-FPM pool tuning, FastCGI caching for anonymous traffic, object caching with Redis, security hardening, and performance verification. Uses the optimised <a href="/how-to-use/">myguard packages</a> throughout.</p>

<h2 style="color:#f59e0b">NGINX Server Block for WordPress</h2>

<pre><code>server {
    listen 443 ssl;
    http2 on;
    server_name example.com www.example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    root /var/www/example.com;
    index index.php;

    # Serve static assets directly with long cache headers
    location ~* \.(js|css|png|jpg|jpeg|webp|svg|woff2|woff|ttf|ico|pdf)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        log_not_found off;
        access_log off;
    }

    # WordPress permalink rewriting
    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    # PHP via PHP-FPM
    location ~ \.php$ {
        try_files $uri =404;  # Security: don't execute non-existent PHP files
        fastcgi_pass unix:/run/php/php8.4-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
        fastcgi_read_timeout 300s;
    }

    # Block direct access to sensitive files
    location ~* /\.(?:git|env|htaccess|htpasswd) { deny all; }
    location ~* /(wp-config\.php|xmlrpc\.php)$ { deny all; }
    location = /wp-config.php { deny all; }
}

# Redirect HTTP to HTTPS
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;
}</code></pre>

<h2 style="color:#f59e0b">PHP-FPM Pool Tuning</h2>

<p>Edit <code>/etc/php/8.4/fpm/pool.d/www.conf</code>. The default settings are for shared hosting. Tune for a dedicated server:</p>

<pre><code>[www]
user = www-data
group = www-data

; Use Unix socket (faster than TCP for local connections)
listen = /run/php/php8.4-fpm.sock
listen.owner = www-data
listen.group = www-data

; Dynamic process management
pm = dynamic
pm.max_children = 20        ; Max concurrent PHP processes
pm.start_servers = 5        ; Start with 5 workers
pm.min_spare_servers = 3    ; Keep at least 3 idle
pm.max_spare_servers = 8    ; Keep at most 8 idle
pm.max_requests = 500       ; Recycle workers after 500 requests (prevents memory leaks)

; Slow request logging for debugging
slowlog = /var/log/php/www-slow.log
request_slowlog_timeout = 5s

; PHP settings for WordPress
php_admin_value[memory_limit] = 256M
php_admin_value[upload_max_filesize] = 64M
php_admin_value[post_max_size] = 64M
php_admin_value[max_execution_time] = 120
php_admin_value[error_log] = /var/log/php/www-error.log
php_admin_flag[log_errors] = on</code></pre>

<p>How to calculate <code>pm.max_children</code>: check your average PHP process memory usage (<code>ps --no-headers -o rss -C php-fpm8.4 | awk '{sum+=$1} END {print sum/NR/1024 " MB"}'</code>), then divide available RAM (minus OS and NGINX overhead) by that number. For a 2GB VPS: (2048MB &#8211; 400MB overhead) / ~80MB per process = ~20 workers.</p>
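<p>The same arithmetic as a quick shell sketch (the RAM, overhead, and per-process figures are the example values from above, not measurements from your server):</p>

```shell
# pm.max_children sketch: (available RAM - overhead) / average PHP-FPM process size
total_ram_mb=2048    # 2GB VPS
overhead_mb=400      # OS + NGINX + Redis headroom (assumption)
avg_proc_mb=80       # measured average PHP-FPM RSS
echo $(( (total_ram_mb - overhead_mb) / avg_proc_mb ))   # prints 20
```

Round down and re-measure after any plugin changes; WooCommerce or page builders can easily double the per-process RSS.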

<h2 style="color:#f59e0b">FastCGI Cache: Serve WordPress Pages Without PHP</h2>

<p>For unauthenticated visitors reading blog posts and pages, NGINX can cache the full PHP response and serve subsequent requests without touching PHP at all. A cached page serves in ~1ms instead of ~80ms. For a blog with most traffic being anonymous readers, this is transformative.</p>

<pre><code>http {
    # Cache zone: 256MB storage
    fastcgi_cache_path /var/cache/nginx/wordpress
        levels=1:2 keys_zone=wp_cache:100m
        max_size=256m inactive=60m use_temp_path=off;

    fastcgi_cache_key "$scheme$request_method$host$request_uri";

    server {
        # Cache settings per request
        set $skip_cache 0;

        # Don't cache POST requests
        if ($request_method = POST) { set $skip_cache 1; }

        # Don't cache URLs with query strings (search results, paginated)
        if ($query_string != "") { set $skip_cache 1; }

        # Don't cache logged-in users or cart pages
        if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in|woocommerce_items_in_cart") {
            set $skip_cache 1;
        }

        location ~ \.php$ {
            fastcgi_pass unix:/run/php/php8.4-fpm.sock;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;

            fastcgi_cache        wp_cache;
            fastcgi_cache_valid 200 60m;  # Cache 200 responses for 60 minutes
            fastcgi_cache_bypass  $skip_cache;
            fastcgi_no_cache      $skip_cache;

            add_header X-FastCGI-Cache $upstream_cache_status;
        }
    }
}</code></pre>

<p>Create the cache directory:</p>
<pre><code>mkdir -p /var/cache/nginx/wordpress
chown www-data:www-data /var/cache/nginx/wordpress</code></pre>

<p>To purge the cache when WordPress publishes or updates a post, install the <strong>Nginx Cache</strong> or <strong>Nginx Helper</strong> WordPress plugin, or use the myguard <a href="/nginx-modules/">cache purge module</a>.</p>

<h2 style="color:#f59e0b">Object Cache with Redis</h2>

<p>WordPress&#8217;s default object cache is per-request — database queries are cached in memory but the cache dies at the end of each request. Redis persists the object cache between requests, dramatically reducing database load:</p>

<pre><code>apt-get install redis-server php8.4-redis

# Verify Redis is running
redis-cli ping  # Should return PONG</code></pre>

<p>In WordPress, install the <strong>Redis Object Cache</strong> plugin, or add to <code>wp-config.php</code>:</p>

<pre><code>define('WP_REDIS_HOST', '127.0.0.1');
define('WP_REDIS_PORT', 6379);
define('WP_CACHE', true);</code></pre>

<p>With Redis object cache, a typical WordPress page that runs 30 database queries on the first load runs 2–3 on subsequent loads (just cache miss checks). For WooCommerce sites with heavy product catalog queries, the improvement is even more dramatic.</p>

<h2 style="color:#f59e0b">Compression for WordPress</h2>

<p>Enable Brotli and gzip for all text content:</p>

<pre><code>http {
    brotli on;
    brotli_comp_level 6;
    brotli_types text/css application/javascript application/json image/svg+xml;  # text/html is always compressed

    gzip on;
    gzip_comp_level 6;
    gzip_vary on;
    gzip_types text/css application/javascript application/json image/svg+xml;  # text/html is always compressed
}</code></pre>

<p>Install the Brotli module: <code>apt-get install libnginx-mod-http-brotli</code></p>

<h2 style="color:#f59e0b">Security Hardening Checklist</h2>

<p>Beyond the server block config above:</p>

<ul>
  <li><strong>PHP-Snuffleupagus:</strong> <code>apt-get install php8.4-snuffleupagus</code> — blocks dangerous PHP functions at the interpreter level, protects against webshells even if a plugin is compromised</li>
  <li><strong>ModSecurity WAF:</strong> <code>apt-get install libnginx-mod-http-modsecurity</code> — blocks SQLi, XSS, and scanner traffic before it reaches PHP</li>
  <li><strong>Rate limiting on /wp-login.php:</strong> 5 req/min per IP blocks credential stuffing</li>
  <li><strong>Block xmlrpc.php:</strong> Unless you use Jetpack or mobile app editing, add <code>location = /xmlrpc.php { deny all; }</code></li>
  <li><strong>File upload validation:</strong> Snuffleupagus upload validation rejects PHP files disguised as images</li>
</ul>
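<p>The login rate limit from the checklist can be sketched as follows (the zone name <code>wplogin</code> and its size are illustrative; see the dedicated rate limiting guide for the full treatment):</p>

<pre><code># In the http block:
limit_req_zone $binary_remote_addr zone=wplogin:10m rate=5r/m;

# In the server block:
location = /wp-login.php {
    limit_req zone=wplogin burst=3 nodelay;
    fastcgi_pass unix:/run/php/php8.4-fpm.sock;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}</code></pre>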

<h2 style="color:#f59e0b">Performance Verification</h2>

<pre><code># Test FastCGI cache is working
curl -I https://example.com/ | grep X-FastCGI-Cache
# First request: X-FastCGI-Cache: MISS
# Second request: X-FastCGI-Cache: HIT

# Check PHP-FPM worker status
curl http://127.0.0.1/fpm-status  # Add status page in pool config

# Benchmark with ab (Apache Bench)
ab -n 1000 -c 10 https://example.com/
# Look for 'Requests per second' and 'Time per request'

# Check Brotli is serving
curl -H 'Accept-Encoding: br' -I https://example.com/ | grep Content-Encoding
# Should show: Content-Encoding: br</code></pre>

<h2 style="color:#f59e0b">Frequently Asked Questions</h2>

<div class="faq">
  <div class="faq-item">
    <div class="faq-q">Do I need a WordPress caching plugin if I use FastCGI cache?</div>
    <div class="faq-a">For anonymous traffic, FastCGI cache at the NGINX level is more efficient than any WordPress plugin cache — it serves pages without starting PHP at all. You still need a plugin to handle cache purging (clearing the cache when you publish or update posts). WP Rocket, W3 Total Cache, or the free Nginx Helper plugin handle this.</div>
  </div>
  <div class="faq-item">
    <div class="faq-q">What is the right pm.max_children value for my server?</div>
    <div class="faq-a">Measure your average PHP-FPM process RSS (ps aux | grep php-fpm), then calculate: (available RAM in MB) / (average process size in MB). For a 4GB server running only WordPress, expect 30–50 workers. Leave 20–30% of RAM for NGINX, Redis, MySQL, and OS overhead.</div>
  </div>
  <div class="faq-item">
    <div class="faq-q">Should I use TCP or Unix socket for PHP-FPM?</div>
    <div class="faq-a">Unix socket when PHP-FPM and NGINX are on the same server — it skips the network stack entirely and is measurably faster (5–15% lower latency per PHP request). Use TCP (127.0.0.1:9000) only if NGINX and PHP-FPM are on different servers.</div>
  </div>
  <div class="faq-item">
    <div class="faq-q">Does FastCGI cache work with WooCommerce?</div>
    <div class="faq-a">For anonymous visitors browsing the shop: yes, very well. For logged-in users with items in cart: skip_cache logic (as shown above) ensures their cart state is always fresh. You need to also skip cache for checkout, cart, and account pages. The Nginx Helper plugin handles these exclusions automatically.</div>
  </div>
  <div class="faq-item">
    <div class="faq-q">Is Angie a better choice than NGINX for WordPress?</div>
    <div class="faq-a">The WordPress-specific performance and configuration is identical. Angie&#8217;s advantages — native ACME (no Certbot), JSON monitoring API — benefit server management, not WordPress performance directly. If you want Let&#8217;s Encrypt without Certbot complexity, Angie is worth the switch. WordPress won&#8217;t care either way.</div>
  </div>
</div>

<h2 style="color:#f59e0b">Related Posts</h2>
<ul>
  <li><a href="/nginx-brotli-compression-module-guide/">NGINX Brotli Compression Module</a> — detailed Brotli setup and pre-compression for static assets</li>
  <li><a href="/2024/01/enhancing-web-security-with-php-snuffleupagus-for-php-fpm/">PHP-Snuffleupagus: Harden PHP-FPM</a> — interpreter-level PHP security essential for WordPress</li>
  <li><a href="/2026/05/nginx-modsecurity-setup-debian-ubuntu/">NGINX ModSecurity WAF Setup</a> — HTTP-layer WAF to pair with PHP hardening</li>
  <li><a href="/nginx-rate-limiting-guide/">NGINX Rate Limiting Guide</a> — protect wp-login.php and xmlrpc.php from brute force</li>
  <li><a href="/2026/05/tls-configuration-ssllabs-a-plus/">TLS Configuration for NGINX</a> — A+ SSL Labs config for your WordPress HTTPS</li>
  <li><a href="/how-to-use/">How to Add the myguard APT Repository</a> — where the optimised NGINX packages come from</li>
</ul>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>nginx 1.29.8</title>
		<link>https://deb.myguard.nl/2026/05/nginx-load-balancing-upstream-guide/</link>
		
		<dc:creator><![CDATA[]]></dc:creator>
		<pubDate>Tue, 12 May 2026 22:23:08 +0000</pubDate>
				<category><![CDATA[nginx]]></category>
		<category><![CDATA[pbuilder]]></category>
		<guid isPermaLink="false">https://deb.myguard.nl/2026/05/nginx-load-balancing-upstream-guide/</guid>

					<description><![CDATA[NGINX load balancing distributes traffic across multiple backends with automatic failover. This guide covers all five load balancing algorithms, passive health checks, keepalive connection pooling, backup servers, and TCP/UDP load balancing.]]></description>
										<content:encoded><![CDATA[<p>Version <code>1.29.8</code> — <em>2026-05-13</em></p>
<h2>Changes</h2>
<ul>
<li>Full rebuild and backport with latest Mainline</li>
<li>Merged with the source package from Debian Trixie in November 2023</li>
<li>For more information, see https://deb.myguard.nl/nginx-modules/</li>
<li>Changelog: https://deb.myguard.nl/forums/topic/changelog/</li>
</ul>
<h2>Distributions</h2>
<ul>
<li>bookworm</li>
<li>jammy</li>
<li>noble</li>
<li>resolute</li>
<li>trixie</li>
</ul>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>NGINX Reverse Proxy Configuration: The Complete Setup Guide</title>
		<link>https://deb.myguard.nl/2026/05/nginx-reverse-proxy-configuration-guide/</link>
		
		<dc:creator><![CDATA[]]></dc:creator>
		<pubDate>Tue, 12 May 2026 22:23:07 +0000</pubDate>
				<category><![CDATA[nginx]]></category>
		<category><![CDATA[performance]]></category>
		<category><![CDATA[proxy]]></category>
		<category><![CDATA[reverse-proxy]]></category>
		<category><![CDATA[security]]></category>
		<guid isPermaLink="false">https://deb.myguard.nl/2026/05/nginx-reverse-proxy-configuration-guide/</guid>

					<description><![CDATA[A reverse proxy puts NGINX in front of your Node.js, Python, or PHP backend — handling SSL termination, caching, buffering, and security. This guide covers proxy_pass, upstream keepalive, caching, WebSocket proxying, and security headers.]]></description>
										<content:encoded><![CDATA[
<p>A reverse proxy sits between your users and your application servers. Users connect to NGINX; NGINX forwards their requests to your backend (Node.js, PHP-FPM, a Python API, a Docker container, whatever). The backend sends its response to NGINX; NGINX forwards it to the user. From the user&#8217;s perspective, they&#8217;re talking directly to NGINX. Your backend never needs to be exposed to the internet at all.</p>

<p>This is the most common NGINX deployment pattern in 2026. SSL termination at NGINX, backend over plain HTTP on localhost. Caching, rate limiting, and load balancing all handled by NGINX before your application code runs. It&#8217;s clean, fast, and secure.</p>

<h2 style="color:#f59e0b">Why Use NGINX as a Reverse Proxy?</h2>

<ul>
  <li><strong>SSL termination:</strong> NGINX handles TLS; your backend speaks plain HTTP. No TLS library needed in your app.</li>
  <li><strong>Connection pooling:</strong> NGINX keeps persistent connections to your backend, amortizing TCP handshake overhead</li>
  <li><strong>Buffering:</strong> NGINX buffers slow client connections so your backend thread is freed immediately after sending the response</li>
  <li><strong>Static file serving:</strong> NGINX serves CSS, JS, and images directly without touching your application</li>
  <li><strong>Security:</strong> Backend never exposed to the internet; rate limiting, WAF, and auth can be applied at the proxy layer</li>
  <li><strong>HTTP/3 and HTTP/2:</strong> NGINX handles modern protocols; your backend can stay on HTTP/1.1</li>
</ul>

<h2 style="color:#f59e0b">Basic Reverse Proxy Configuration</h2>

<pre><code>server {
    listen 443 ssl;
    http2 on;
    server_name api.example.com;

    ssl_certificate     /etc/letsencrypt/live/api.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;

    location / {
        proxy_pass http://127.0.0.1:3000;   # Backend on port 3000

        # Pass real client IP to backend
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}</code></pre>

<p>That&#8217;s the minimum working config. The four <code>proxy_set_header</code> lines are important — without them, your backend sees NGINX&#8217;s loopback IP as the client address, not the real user&#8217;s IP.</p>

<h2 style="color:#f59e0b">Timeouts: The Most Important Tuning</h2>

<p>Default NGINX proxy timeouts are generous. For most applications, tighten them:</p>

<pre><code>location / {
    proxy_pass http://127.0.0.1:3000;

    proxy_connect_timeout  5s;    # Max time to establish connection to backend
    proxy_send_timeout    60s;    # Max time to send request to backend
    proxy_read_timeout    60s;    # Max time to receive response from backend

    # For long-polling / streaming responses, increase read timeout:
    # proxy_read_timeout 3600s;
}</code></pre>

<p>A backend that takes more than 60 seconds to respond is either broken or overwhelmed. Failing fast (with a 504) is better than keeping the connection open indefinitely.</p>

<h2 style="color:#f59e0b">Upstream Blocks: Clean Backend Management</h2>

<p>Instead of hardcoding <code>http://127.0.0.1:3000</code> everywhere, use an <code>upstream</code> block. This makes it easy to add servers later, and enables keepalive connection pooling:</p>

<pre><code>http {
    upstream app_backend {
        server 127.0.0.1:3000;
        keepalive 32;   # Keep 32 persistent connections to the backend
    }

    server {
        location / {
            proxy_pass http://app_backend;

            # Required for keepalive to work with HTTP/1.1
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }
}</code></pre>

<p>The <code>keepalive 32</code> keeps 32 idle connections to the backend alive in each NGINX worker. This eliminates the TCP handshake overhead for most requests. On a busy server, this alone reduces backend connection setup latency by 30–50%.</p>

<h2 style="color:#f59e0b">Buffering: For Slow Clients</h2>

<p>By default, NGINX buffers the full response from your backend before sending it to the client. This frees your backend worker immediately, even if the client is on a slow connection:</p>

<pre><code>location / {
    proxy_pass http://app_backend;

    proxy_buffering        on;           # Buffer responses (default: on)
    proxy_buffer_size      4k;           # Header buffer size
    proxy_buffers          8 16k;        # Response body buffers
    proxy_busy_buffers_size 32k;         # Limit on buffers busy sending to the client

    # For large file downloads, disable buffering to stream directly:
    # proxy_buffering off;

    # For Server-Sent Events / long-polling, disable buffering:
    # proxy_buffering off;
    # proxy_cache off;
}</code></pre>

<h2 style="color:#f59e0b">Proxy Caching</h2>

<p>NGINX can cache backend responses and serve them directly without hitting your backend at all. For content that doesn&#8217;t change per-user (public API responses, rendered pages), this is a massive performance win:</p>

<pre><code>http {
    # Define cache zone: 100MB storage, 10 minutes inactive TTL
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                     max_size=100m inactive=10m use_temp_path=off;

    upstream app_backend {
        server 127.0.0.1:3000;
        keepalive 32;
    }

    server {
        location /api/public/ {
            proxy_pass http://app_backend;

            proxy_cache            app_cache;
            proxy_cache_valid 200  60s;  # Cache 200 responses for 60 seconds
            proxy_cache_valid 404  10s;  # Cache 404s for 10 seconds
            proxy_cache_use_stale  error timeout updating;

            # Add cache status to response headers (for debugging)
            add_header X-Cache-Status $upstream_cache_status;
        }
    }
}</code></pre>

<p>Cache status values in the <code>X-Cache-Status</code> header: <code>HIT</code> (served from cache), <code>MISS</code> (fetched from backend and cached), <code>BYPASS</code> (cache bypassed), <code>EXPIRED</code> (cache entry expired, re-fetched).</p>
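<p>For debugging, a bypass trigger makes it easy to force a <code>BYPASS</code> on demand (the <code>nocache</code> query parameter here is illustrative and should not be left enabled for untrusted clients):</p>

<pre><code>location /api/public/ {
    proxy_pass http://app_backend;

    proxy_cache        app_cache;
    proxy_cache_valid 200 60s;
    proxy_cache_bypass $arg_nocache;   # ?nocache=1 fetches fresh from the backend
    add_header X-Cache-Status $upstream_cache_status;
}</code></pre>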

<h2 style="color:#f59e0b">WebSocket Proxying</h2>

<p>WebSocket upgrades require specific headers to work through a reverse proxy:</p>

<pre><code>location /ws/ {
    proxy_pass http://app_backend;

    proxy_http_version 1.1;
    proxy_set_header Upgrade    $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 3600s;   # Keep WebSocket connections alive
    proxy_send_timeout 3600s;
}</code></pre>
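<p>If the same location serves both WebSocket and plain HTTP traffic, the standard map from the NGINX documentation sets the <code>Connection</code> header correctly for both cases:</p>

<pre><code># In the http block:
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# Then in the location:
proxy_set_header Upgrade    $http_upgrade;
proxy_set_header Connection $connection_upgrade;</code></pre>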

<h2 style="color:#f59e0b">Serving Static Files Directly</h2>

<p>For maximum performance, serve static files directly from NGINX&#8217;s file system, bypassing your backend entirely:</p>

<pre><code>server {
    root /var/www/app/public;

    # Serve static assets directly
    location ~* \.(js|css|png|jpg|webp|svg|woff2|ico)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        try_files $uri =404;
    }

    # Everything else goes to the backend
    location / {
        try_files $uri @backend;
    }

    location @backend {
        proxy_pass http://app_backend;
        proxy_set_header Host            $host;
        proxy_set_header X-Real-IP       $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}</code></pre>

<h2 style="color:#f59e0b">Security Headers and Hardening</h2>

<pre><code>server {
    # Hide backend server header
    proxy_hide_header X-Powered-By;
    proxy_hide_header Server;

    # Add security headers
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    add_header X-Content-Type-Options nosniff always;
    add_header X-Frame-Options SAMEORIGIN always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;

    location / {
        proxy_pass http://app_backend;

        # Clear client-supplied headers so clients can't spoof internal auth
        proxy_set_header X-Internal-Auth "";
        proxy_set_header Authorization  "";
    }
}</code></pre>

<h2 style="color:#f59e0b">Proxying to Unix Sockets</h2>

<p>If your backend runs on the same server, Unix sockets are faster than TCP loopback — no network stack overhead:</p>

<pre><code>upstream app_backend {
    server unix:/run/app/gunicorn.sock;  # Python/Gunicorn
    # or:
    server unix:/run/php/php8.4-fpm.sock;  # PHP-FPM
    keepalive 16;
}

location / {
    proxy_pass http://app_backend;
}</code></pre>

<h2 style="color:#f59e0b">Frequently Asked Questions</h2>

<div class="faq">
  <div class="faq-item">
    <div class="faq-q">What is the difference between proxy_pass and fastcgi_pass?</div>
    <div class="faq-a">proxy_pass forwards HTTP requests to an HTTP backend (Node.js, Python, Ruby, Go). fastcgi_pass uses the FastCGI protocol to communicate with PHP-FPM. They&#8217;re for different backend types: use proxy_pass for any HTTP server, fastcgi_pass specifically for PHP-FPM. Both support Unix sockets and TCP addresses.</div>
  </div>
  <div class="faq-item">
    <div class="faq-q">How do I pass the real client IP to my backend?</div>
    <div class="faq-a">Use proxy_set_header X-Real-IP $remote_addr and proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for. Your backend then reads the X-Real-IP or X-Forwarded-For header instead of the REMOTE_ADDR. Make sure your backend trusts these headers only from NGINX (not from arbitrary clients).</div>
  </div>
  <div class="faq-item">
    <div class="faq-q">Why does my backend show NGINX&#8217;s IP instead of the client&#8217;s IP?</div>
    <div class="faq-a">You&#8217;re missing the proxy_set_header X-Real-IP and X-Forwarded-For headers. Add them as shown above. Also make sure your backend application is configured to read client IP from X-Real-IP or X-Forwarded-For rather than from REMOTE_ADDR.</div>
  </div>
  <div class="faq-item">
    <div class="faq-q">Should I use keepalive connections to my backend?</div>
    <div class="faq-a">Yes, almost always. Keepalive eliminates the TCP handshake overhead for each request. Set keepalive to roughly 2x the number of worker_processes. Add proxy_http_version 1.1 and proxy_set_header Connection &#8220;&#8221; — these are required for keepalive to work correctly.</div>
  </div>
  <div class="faq-item">
    <div class="faq-q">Does NGINX reverse proxy work with HTTP/2 between NGINX and the backend?</div>
    <div class="faq-a">Not for plain HTTP backends: proxy_pass only speaks HTTP/1.x to the upstream (proxy_http_version accepts 1.0 and 1.1). The exception is gRPC, where grpc_pass uses HTTP/2 to talk to gRPC backends. In practice, most backends use HTTP/1.1 over Unix sockets, which is simpler and just as fast on the same machine.</div>
  </div>
  <div class="faq-item">
    <div class="faq-q">How do I handle 502 errors from my backend?</div>
    <div class="faq-a">502 means NGINX can&#8217;t connect to the backend. Check that your backend process is running (systemctl status app), that it&#8217;s listening on the expected socket/port (ss -tlnp), and that the proxy_pass address matches. Also check proxy_connect_timeout — if your backend is slow to start, increase it temporarily.</div>
  </div>
</div>

<h2 style="color:#f59e0b">Related Posts</h2>
<ul>
  <li><a href="/nginx-load-balancing-upstream-guide/">NGINX Load Balancing Guide</a> — extend this to multiple backends with health checks and failover</li>
  <li><a href="/2026/05/tls-configuration-ssllabs-a-plus/">TLS Configuration for NGINX</a> — the SSL termination config that pairs with reverse proxy</li>
  <li><a href="/2026/05/nginx-http3-quic-debian-ubuntu/">Enable HTTP/3 on NGINX</a> — add QUIC to your reverse proxy for modern browsers</li>
  <li><a href="/2026/05/nginx-angie-the-expert-guide-to-maximum-performance-and-security/">NGINX Performance Expert Guide</a> — full tuning guide including proxy cache and upstream configuration</li>
  <li><a href="/2026/05/angie-web-server-complete-guide/">Angie Web Server Complete Guide</a> — Angie handles reverse proxy identically to NGINX with extra monitoring features</li>
</ul>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>NGINX Rate Limiting: Protect Your Server from Bots, Scrapers and Brute Force</title>
		<link>https://deb.myguard.nl/2026/05/nginx-rate-limiting-guide/</link>
		
		<dc:creator><![CDATA[]]></dc:creator>
		<pubDate>Tue, 12 May 2026 22:23:06 +0000</pubDate>
				<category><![CDATA[nginx]]></category>
		<category><![CDATA[performance]]></category>
		<category><![CDATA[rate-limiting]]></category>
		<category><![CDATA[security]]></category>
		<category><![CDATA[wordpress]]></category>
		<guid isPermaLink="false">https://deb.myguard.nl/2026/05/nginx-rate-limiting-guide/</guid>

					<description><![CDATA[NGINX rate limiting with limit_req_zone stops credential stuffing, scrapers, and DDoS floods before they reach your application. This guide covers burst handling, per-endpoint limits, IP whitelisting, WordPress-specific config, and Redis-backed cross-server limiting.]]></description>
										<content:encoded><![CDATA[
<p>Every server on the internet gets hammered. Credential stuffing bots testing username/password combinations. Scrapers pulling your entire site at 200 requests per second. DDoS floods trying to saturate your PHP workers. Bad actors hammering your login page or contact form. NGINX has a built-in rate limiting system that handles all of this — and it&#8217;s surprisingly powerful once you understand how it works.</p>

<p>This guide covers NGINX rate limiting from scratch: the core concepts, the directives, how to limit different endpoints differently, how to handle burst traffic without breaking legitimate users, and how to combine rate limiting with the <a href="/nginx-modules/">dynamic modules</a> from the myguard repository for even more control.</p>

<h2 style="color:#f59e0b">How NGINX Rate Limiting Works</h2>

<p>NGINX rate limiting uses the <strong>leaky bucket algorithm</strong>. Think of it as a bucket with a small hole in the bottom. Requests flow in from the top; they drain out the bottom at a fixed rate. If requests arrive faster than the drain rate, the bucket fills up. Once full, excess requests are either delayed (held in a queue) or rejected, with a 503 status code by default (another status such as 429 can be set with <code>limit_req_status</code>).</p>

<p>Two directives do the work:</p>
<ul>
  <li><strong><code>limit_req_zone</code></strong> — defined in the <code>http</code> block; declares a shared memory zone and sets the rate</li>
  <li><strong><code>limit_req</code></strong> — applied in a <code>server</code> or <code>location</code> block; activates rate limiting using a named zone</li>
</ul>

<h2 style="color:#f59e0b">Basic Rate Limiting Configuration</h2>

<pre><code>http {
    # Define a zone: track by IP, 10MB shared memory, 10 req/sec per IP
    limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;

    server {
        listen 443 ssl;
        server_name example.com;

        location / {
            limit_req zone=general;
            # ... rest of config
        }
    }
}</code></pre>

<p>The <code>$binary_remote_addr</code> variable uses the binary representation of the IP address (4 bytes for IPv4, 16 for IPv6) as the key — more memory-efficient than the string form. A 10MB zone holds about 160,000 IP state entries.</p>
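<p>The capacity estimate follows from a per-state size of roughly 64 bytes, which is the figure the 160,000-entry estimate implies (check the docs for your build):</p>

```shell
# Zone capacity sketch: entries = zone size / per-state size (~64 bytes assumed)
zone_mb=10
state_bytes=64
echo $(( zone_mb * 1024 * 1024 / state_bytes ))   # prints 163840, roughly 160k IPs
```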

<h2 style="color:#f59e0b">Understanding Burst</h2>

<p>A rate of 10r/s means NGINX allows one request per 100ms. If a user sends two requests in a 50ms window, the second one gets a 503 immediately. That&#8217;s too strict for real browsers — a page load triggers many simultaneous requests (HTML, CSS, JS, images).</p>

<p>The <code>burst</code> parameter adds a queue for excess requests:</p>

<pre><code>location / {
    limit_req zone=general burst=20;
    # The burst queue holds up to 20 excess requests
    # They're delayed (not rejected) until the rate allows them through
}</code></pre>

<p>With <code>burst=20</code> and <code>rate=10r/s</code>: up to 20 requests can queue up and be processed in order. A 21st request gets a 503. This handles legitimate page load bursts without breaking the overall rate limit.</p>

<p>Add <code>nodelay</code> to process burst requests immediately instead of delaying them:</p>
<pre><code>limit_req zone=general burst=20 nodelay;
# Burst requests are processed immediately, not queued
# The burst allowance still refills at the zone's rate</code></pre>
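<p>It helps to quantify the worst-case latency the queue adds. Without <code>nodelay</code>, one queued request is released every <code>1000/rate</code> milliseconds, so a full burst queue drains in <code>burst * 1000/rate</code> ms. A quick check for the values above:</p>

```shell
# Worst-case queue delay for rate=10r/s, burst=20 (no nodelay):
# one slot every 1000/rate ms, times burst queued requests
rate=10
burst=20
echo "$(( burst * 1000 / rate )) ms"   # prints "2000 ms"
```

<p>Two seconds of added latency is why <code>nodelay</code> is usually the right choice for user-facing pages.</p>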

<h2 style="color:#f59e0b">Protecting Specific Endpoints</h2>

<p>Different endpoints need different limits. A login page needs much tighter limits than a static page. Define multiple zones:</p>

<pre><code>http {
    # General traffic: 20 req/sec per IP
    limit_req_zone $binary_remote_addr zone=general:10m rate=20r/s;

    # Login endpoint: 5 req/min per IP (credential stuffing protection)
    limit_req_zone $binary_remote_addr zone=login:10m rate=5r/m;

    # API: 100 req/sec per IP
    limit_req_zone $binary_remote_addr zone=api:10m rate=100r/s;

    # Search: 10 req/min per IP (expensive queries).
    # Locations never match query strings, so key the zone on a variable
    # that is empty unless ?s= is present; empty keys aren't counted.
    map $arg_s $search_key {
        ""      "";
        default $binary_remote_addr;
    }
    limit_req_zone $search_key zone=search:10m rate=10r/m;

    server {
        location / {
            limit_req zone=general burst=30 nodelay;
            limit_req zone=search  burst=2;   # Only counts when ?s= is set
        }

        location /wp-login.php {
            limit_req zone=login burst=3;
            # At 5r/min with burst=3: a brief login attempt goes through,
            # but hammering with 100 attempts/second gets rejected immediately
        }

        location /api/ {
            limit_req zone=api burst=50 nodelay;
        }
    }
}</code></pre>

<h2 style="color:#f59e0b">Rate Limiting by Multiple Keys</h2>

<p>Rate limiting by IP alone can be too coarse — legitimate users behind corporate NAT or proxies share an IP. Rate limiting by IP + user agent or IP + URL gives more granular control:</p>

<pre><code>http {
    # Rate limit by IP + URI (different budget per URL per IP)
    limit_req_zone "$binary_remote_addr$uri" zone=per_uri:20m rate=5r/s;

    # Rate limit authenticated users by user ID header
    # (your backend sets X-User-ID after auth)
    limit_req_zone $http_x_user_id zone=per_user:10m rate=50r/s;
}</code></pre>

<h2 style="color:#f59e0b">Returning Custom 429 Responses</h2>

<p>By default, rate-limited requests get a plain 503. Change this to a proper 429 (Too Many Requests) with a Retry-After header:</p>

<pre><code>http {
    limit_req_status 429;

    server {
        # Custom JSON error for API clients
        error_page 429 /rate-limited.json;
        location = /rate-limited.json {
            internal;
            default_type application/json;
            add_header Retry-After 60 always;
            return 429 '{"error":"rate_limit_exceeded","retry_after":60}';
        }
    }
}</code></pre>

<h2 style="color:#f59e0b">Whitelisting Trusted IPs</h2>

<p>Internal monitoring tools, load balancer health checks, and your own IP should bypass rate limiting:</p>

<pre><code>http {
    # Geo-based bypass: set $limit to empty string for trusted IPs
    geo $limit {
        default         $binary_remote_addr;  # Apply limit
        10.0.0.0/8      "";                   # Internal: no limit
        192.168.0.0/16  "";                   # Private: no limit
        203.0.113.42    "";                   # Your office IP: no limit
    }

    limit_req_zone $limit zone=general:10m rate=10r/s;

    # When $limit is empty, no rate-limit state is tracked
    # This is the correct zero-overhead bypass approach
}</code></pre>

<h2 style="color:#f59e0b">Monitoring Rate Limit Events</h2>

<p>NGINX logs rate limit rejections to the error log at <code>error</code> level by default; delayed (queued) requests are logged one level lower. To quieten the noise for expected traffic patterns, lower the level with <code>limit_req_log_level</code>:</p>

<pre><code>location /wp-login.php {
    limit_req zone=login burst=3;
    limit_req_log_level warn;    # Rejections at warn, delays at notice (default: error)
    # or:
    limit_req_log_level info;    # Quietest: keep limit events out of error-level alerts
}

# Count rate limit events in real-time
grep 'limiting requests' /var/log/nginx/error.log | wc -l
tail -f /var/log/nginx/error.log | grep 'limiting'</code></pre>
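<p>To see which IPs are tripping a zone most often, aggregate the <code>client:</code> field from those log lines. A self-contained sketch (the sample line below mimics NGINX&#8217;s rejection message; adjust the pattern if your log format differs):</p>

```shell
# Write sample rejection lines, then count events per client IP
printf '%s\n' \
  '2026/05/12 10:00:01 [error] 123#123: *45 limiting requests, excess: 20.340 by zone "login", client: 203.0.113.7, server: example.com, request: "POST /wp-login.php HTTP/2.0"' \
  '2026/05/12 10:00:02 [error] 123#123: *46 limiting requests, excess: 21.120 by zone "login", client: 203.0.113.7, server: example.com, request: "POST /wp-login.php HTTP/2.0"' \
  > /tmp/sample-error.log
grep -o 'client: [0-9.]*' /tmp/sample-error.log | sort | uniq -c | sort -rn
```

<p>Point the same pipeline at <code>/var/log/nginx/error.log</code> in production.</p>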

<h2 style="color:#f59e0b">WordPress-Specific Rate Limiting</h2>

<p>For a WordPress site, these are the endpoints worth rate limiting most aggressively:</p>

<pre><code>http {
    limit_req_zone $binary_remote_addr zone=wp_login:10m   rate=5r/m;
    limit_req_zone $binary_remote_addr zone=wp_comments:5m rate=1r/m;
    limit_req_zone $binary_remote_addr zone=xmlrpc:5m      rate=1r/m;
    limit_req_zone $binary_remote_addr zone=general:20m    rate=30r/s;

    server {
        # Credential stuffing protection
        location = /wp-login.php {
            limit_req zone=wp_login burst=3;
        }

        # XML-RPC is a DDoS amplification target — block entirely if unused
        location = /xmlrpc.php {
            limit_req zone=xmlrpc burst=1;
            # Or just block it: return 403;
        }

        # Comment spam protection
        location = /wp-comments-post.php {
            limit_req zone=wp_comments burst=1;
        }

        # wp-admin: protect but allow legitimate admin use
        location /wp-admin/ {
            limit_req zone=general burst=20 nodelay;
        }
    }
}</code></pre>

<h2 style="color:#f59e0b">Redis-Backed Cross-Server Rate Limiting</h2>

<p>NGINX&#8217;s built-in rate limiting uses shared memory within a single server. If you have multiple NGINX instances behind a load balancer, each one has independent rate limit state — a bot that hits different servers can exceed your intended rate by a factor of N.</p>
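<p>The arithmetic behind that factor-of-N problem, as a quick sketch:</p>

```shell
# Per-instance limits multiply: a client rotating across N backends
# gets N times the intended budget
instances=4        # nginx servers behind the load balancer
per_node_rate=5    # intended 5 req/min, enforced independently per node
echo "$(( instances * per_node_rate )) req/min effective"   # prints "20 req/min effective"
```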

<p>For true cross-server rate limiting, use the <a href="/complete-guide-to-using-nginx-with-lua-enhanced-web-server-functionality/">NGINX Lua module</a> with Redis:</p>

<pre><code>location /wp-login.php {
    access_by_lua_block {
        local redis = require "resty.redis"
        local red = redis:new()
        red:set_timeouts(50, 50, 50)  -- connect/send/read timeouts in ms

        local ok, err = red:connect("127.0.0.1", 6379)
        if not ok then
            ngx.log(ngx.ERR, "redis connect failed: ", err)
            return  -- fail open rather than locking out logins
        end

        local key = "rl:login:" .. ngx.var.binary_remote_addr
        local count = red:incr(key)
        if count == 1 then red:expire(key, 60) end
        red:set_keepalive(10000, 20)

        if count and count > 5 then
            ngx.header["Retry-After"] = "60"
            return ngx.exit(429)
        end
    }
}</code></pre>

<h2 style="color:#f59e0b">Frequently Asked Questions</h2>

<div class="faq">
  <div class="faq-item">
    <div class="faq-q">What is the difference between limit_req and limit_conn?</div>
    <div class="faq-a">limit_req limits the request rate (requests per second/minute). limit_conn limits simultaneous open connections. They&#8217;re complementary: limit_req stops bots from sending thousands of requests in a burst; limit_conn stops someone from holding thousands of connections open. Use both for full protection.</div>
  </div>
  <div class="faq-item">
    <div class="faq-q">Will rate limiting block legitimate users?</div>
    <div class="faq-a">Not if configured correctly. A real human browsing a website generates maybe 5–10 requests per second during active page loads. Set your general zone to 20–30r/s with burst=30 and legitimate users never see a 429. Bots trying to scrape or brute-force send orders of magnitude more — those get hit.</div>
  </div>
  <div class="faq-item">
    <div class="faq-q">How much memory does a rate limit zone use?</div>
    <div class="faq-a">About 64 bytes per tracked IP. A 10m zone (10 megabytes) holds roughly 160,000 IP addresses. For most sites this is more than enough. If you have a very high-traffic site with millions of unique IPs, increase the zone size proportionally.</div>
  </div>
  <div class="faq-item">
    <div class="faq-q">Does rate limiting work with IPv6?</div>
    <div class="faq-a">Yes — that&#8217;s why we use $binary_remote_addr (4 bytes for IPv4, 16 for IPv6) rather than $remote_addr (the string form, much larger). Note that rate limiting by individual IPv6 address can be gamed by rotating through a /64 block. For IPv6 you may want to rate limit by /64 subnet instead: set_real_ip_from and real_ip_header can help here.</div>
  </div>
  <div class="faq-item">
    <div class="faq-q">What&#8217;s the difference between burst with and without nodelay?</div>
    <div class="faq-a">Without nodelay: burst requests are queued and processed at the zone&#8217;s rate. The queue adds latency but smooths traffic. With nodelay: burst requests are processed immediately, but each one consumes a burst slot that refills at the zone&#8217;s rate. Nodelay is better for user-facing pages (no added latency); without it is better for backend-sensitive operations where you want strict pacing.</div>
  </div>
  <div class="faq-item">
    <div class="faq-q">Can I rate limit by cookie or header instead of IP?</div>
    <div class="faq-a">Yes — the zone key can be any NGINX variable. Use $cookie_session_id to rate limit by session, $http_x_api_key to rate limit by API key, or &#8220;$binary_remote_addr$http_user_agent&#8221; to rate limit by IP+UA combination. Just make sure the key has low cardinality or your zone memory fills up quickly.</div>
  </div>
</div>

<h2 style="color:#f59e0b">Related Posts</h2>
<ul>
  <li><a href="/2026/05/nginx-modsecurity-setup-debian-ubuntu/">NGINX ModSecurity WAF Setup</a> — pair rate limiting with WAF for comprehensive attack protection</li>
  <li><a href="/complete-guide-to-using-nginx-with-lua-enhanced-web-server-functionality/">NGINX Lua Module Guide</a> — Redis-backed rate limiting for multi-server setups</li>
  <li><a href="/nginx-modules/">NGINX Dynamic Modules Overview</a> — all 50+ modules including the GeoIP2 module for geo-based limits</li>
  <li><a href="/2026/05/nginx-angie-the-expert-guide-to-maximum-performance-and-security/">NGINX Performance and Security Expert Guide</a> — full security and performance hardening guide</li>
  <li><a href="/2026/05/angie-web-server-complete-guide/">Angie Web Server Complete Guide</a> — rate limiting works identically on Angie</li>
</ul>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>NGINX Brotli Compression: Install, Configure and Pre-Compress Static Assets</title>
		<link>https://deb.myguard.nl/2026/05/nginx-brotli-compression-module-guide/</link>
		
		<dc:creator><![CDATA[]]></dc:creator>
		<pubDate>Tue, 12 May 2026 22:23:05 +0000</pubDate>
				<category><![CDATA[brotli]]></category>
		<category><![CDATA[compression]]></category>
		<category><![CDATA[debian]]></category>
		<category><![CDATA[nginx]]></category>
		<category><![CDATA[performance]]></category>
		<category><![CDATA[ubuntu]]></category>
		<guid isPermaLink="false">https://deb.myguard.nl/2026/05/nginx-brotli-compression-module-guide/</guid>

					<description><![CDATA[Brotli achieves 15-26% better compression than gzip on HTML, CSS, and JavaScript. This guide covers installing the NGINX Brotli module, configuring on-the-fly compression, pre-compressing static assets at level 11, and running Brotli alongside gzip.]]></description>
										<content:encoded><![CDATA[
<p>Gzip has been compressing web content since 1992. It&#8217;s good. It&#8217;s everywhere. And it&#8217;s showing its age. <strong>Brotli</strong> is its modern replacement — developed by Google, standardised in 2016 (RFC 7932), and now supported by every browser that matters. On typical web content, Brotli achieves 15–26% better compression than gzip at comparable speeds. Smaller files mean faster page loads, lower bandwidth costs, and better Core Web Vitals scores.</p>

<p>The <a href="/how-to-use/">myguard APT repository</a> ships a native Brotli dynamic module for both NGINX and Angie — install it with one apt command, load it with one config line, and your server starts serving Brotli to every browser that supports it.</p>

<h2 style="color:#f59e0b">Brotli vs Gzip: The Actual Numbers</h2>

<p>Brotli uses a combination of LZ77, Huffman coding, and 2nd-order context modeling, which gzip lacks. The practical result:</p>

<table>
  <thead><tr><th>Content type</th><th>Gzip (level 6)</th><th>Brotli (level 6)</th><th>Brotli advantage</th></tr></thead>
  <tbody>
    <tr><td>HTML</td><td>68% reduction</td><td>78% reduction</td><td>+15%</td></tr>
    <tr><td>CSS</td><td>72% reduction</td><td>84% reduction</td><td>+21%</td></tr>
    <tr><td>JavaScript</td><td>67% reduction</td><td>80% reduction</td><td>+19%</td></tr>
    <tr><td>JSON API response</td><td>71% reduction</td><td>83% reduction</td><td>+17%</td></tr>
    <tr><td>SVG</td><td>74% reduction</td><td>86% reduction</td><td>+19%</td></tr>
  </tbody>
</table>

<p>Brotli level 11 (maximum) achieves 20–26% better compression than gzip, but is extremely slow to encode — suitable only for pre-compressed static assets, not on-the-fly. Level 4–6 is the sweet spot for on-the-fly dynamic compression: better than gzip, fast enough for real-time use.</p>

<h2 style="color:#f59e0b">Step 1 — Install the Brotli Module</h2>

<pre><code># Add the myguard repository if not already done
wget https://deb.myguard.nl/pool/myguard.deb
dpkg -i myguard.deb
apt-get update

# Install NGINX with the Brotli module
apt-get install nginx libnginx-mod-http-brotli

# Or for Angie:
apt-get install angie angie-module-http-brotli</code></pre>

<p>New to the myguard repository? <a href="/how-to-use/">Follow the two-minute setup guide.</a></p>

<h2 style="color:#f59e0b">Step 2 — Load the Module</h2>

<p>The myguard package installs a load snippet automatically. Verify it&#8217;s in place:</p>

<pre><code>ls /etc/nginx/modules-enabled/ | grep brotli
# Should show: 50-mod-http-brotli-filter.conf and 50-mod-http-brotli-static.conf</code></pre>

<p>If not present, add to the top of <code>nginx.conf</code> (before the http block):</p>
<pre><code>load_module modules/ngx_http_brotli_filter_module.so;
load_module modules/ngx_http_brotli_static_module.so;</code></pre>

<h2 style="color:#f59e0b">Step 3 — Configure Brotli</h2>

<p>Add this inside your <code>http</code> block in <code>nginx.conf</code>:</p>

<pre><code>http {
    # Brotli dynamic compression (on-the-fly)
    brotli             on;
    brotli_comp_level  6;        # 0-11, sweet spot is 4-6
    brotli_min_length  256;      # Don't compress tiny responses
    brotli_types
        text/plain
        text/css
        text/javascript
        text/xml
        text/x-component
        application/javascript
        application/json
        application/xml
        application/rss+xml
        application/atom+xml
        application/vnd.ms-fontobject
        image/svg+xml
        font/truetype
        font/opentype;

    # Brotli static files (serve pre-compressed .br files)
    brotli_static on;

    # Keep gzip as fallback for browsers that don't support Brotli
    gzip            on;
    gzip_comp_level 6;
    gzip_min_length 256;
    gzip_vary       on;
    gzip_types
        text/plain text/css text/javascript application/javascript
        application/json application/xml image/svg+xml font/opentype;
}</code></pre>

<h2 style="color:#f59e0b">Step 4 — Test and Reload</h2>

<pre><code>nginx -t && systemctl reload nginx</code></pre>

<p>Verify Brotli is working:</p>
<pre><code># curl with Brotli accept header
curl -H 'Accept-Encoding: br,gzip' -I https://example.com
# Look for: Content-Encoding: br

# Check Chrome DevTools: Network tab > select a request > Response Headers > Content-Encoding: br</code></pre>

<h2 style="color:#f59e0b">Pre-Compressed Static Assets (Best Performance)</h2>

<p>For static files that don&#8217;t change (CSS, JS, fonts), pre-compress them at build time with level 11 and let NGINX serve the <code>.br</code> files directly. This gives maximum compression with zero runtime CPU cost:</p>

<pre><code># Pre-compress all JS and CSS files in your web root
# (-print0 / read -d '' handles filenames containing spaces)
find /var/www/html \( -name '*.js' -o -name '*.css' \) -print0 |
while IFS= read -r -d '' f; do
    brotli -Z -f -o "${f}.br" "$f"   # -Z = level 11 (original file is kept)
    gzip -9 -k -f "$f"               # -k = keep original, for gzip fallback
done</code></pre>

<p>With <code>brotli_static on</code> in your NGINX config, when a browser requests <code>app.js</code> with <code>Accept-Encoding: br</code>, NGINX automatically serves <code>app.js.br</code> without doing any runtime compression. Zero CPU, maximum compression.</p>

<pre><code># Install the brotli CLI tool
apt-get install brotli</code></pre>

<h2 style="color:#f59e0b">Brotli for WordPress</h2>

<p>WordPress sites benefit significantly from Brotli because WordPress generates a lot of HTML, CSS, and JavaScript. The main caveat: PHP responses are compressed dynamically, so set a sane compression level (4–6) to avoid adding more than ~1ms of CPU time per request.</p>

<p>Typical page size reduction for a WordPress homepage:</p>
<ul>
  <li>Uncompressed HTML: ~180KB</li>
  <li>Gzip level 6: ~28KB</li>
  <li>Brotli level 6: ~23KB</li>
  <li>Brotli level 11 (pre-compressed): ~19KB</li>
</ul>

<p>The 5KB difference between gzip and Brotli level 6 saves ~40ms on a typical 4G connection. Across thousands of page views, that&#8217;s meaningful for Core Web Vitals.</p>
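<p>The 40ms figure is straightforward transfer-time arithmetic, assuming roughly 1 Mbit/s of effective 4G throughput:</p>

```shell
# 5 KB saved = 40 kilobits; at 1 Mbit/s (= 1000 kbit/s) that is 40 ms
kb_saved=5
mbit_per_s=1
echo "$(( kb_saved * 8 / mbit_per_s )) ms"   # prints "40 ms"
```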

<h2 style="color:#f59e0b">Brotli + zstd: Running Both</h2>

<p>The myguard repository also ships a <a href="/2026/05/zstd-nginx-module-what-it-does-bugs-fixed/">zstd NGINX module</a>. zstd excels at server-side API compression (faster decode, great for JSON) while Brotli excels at browser-facing content (better compression ratio). Run both:</p>

<pre><code>apt-get install libnginx-mod-http-brotli libnginx-mod-http-zstd

# In http block:
brotli on;        # For browsers (text/html is always compressed by default)
brotli_types text/css application/javascript image/svg+xml;

zstd on;          # For API clients that support it
zstd_types application/json application/x-ndjson;
zstd_comp_level 3;</code></pre>

<h2 style="color:#f59e0b">Frequently Asked Questions</h2>

<div class="faq">
  <div class="faq-item">
    <div class="faq-q">Do all browsers support Brotli?</div>
    <div class="faq-a">Every browser released after 2017 supports Brotli — Chrome, Firefox, Safari, Edge. Coverage is 96%+ of global browser usage. NGINX with Brotli still serves gzip to the ~4% that don&#8217;t support it (IE11, very old Safari), so there&#8217;s no compatibility risk.</div>
  </div>
  <div class="faq-item">
    <div class="faq-q">Does Brotli work with HTTPS only?</div>
    <div class="faq-a">Technically no — Brotli can work over HTTP. But all browsers only send the Accept-Encoding: br header on HTTPS connections, because early Brotli deployments over HTTP caused issues with some HTTP proxies. In practice: Brotli only activates on HTTPS, which is fine since you should be using HTTPS anyway.</div>
  </div>
  <div class="faq-item">
    <div class="faq-q">What compression level should I use?</div>
    <div class="faq-a">Level 4–6 for dynamic on-the-fly compression (good ratio, fast). Level 11 only for pre-compressed static assets (maximum ratio, but too slow for real-time use). The default of 6 in the config above is the practical sweet spot for most sites.</div>
  </div>
  <div class="faq-item">
    <div class="faq-q">Does Brotli affect CPU usage?</div>
    <div class="faq-a">At level 6, dynamic Brotli adds roughly 1–2ms of CPU time per response compared to gzip. On a server handling 500 req/s that&#8217;s about 3–5% extra CPU load. Pre-compressed static assets with brotli_static eliminate runtime CPU entirely for those files.</div>
  </div>
  <div class="faq-item">
    <div class="faq-q">Can I use Brotli with Angie?</div>
    <div class="faq-a">Yes. Install angie-module-http-brotli instead of libnginx-mod-http-brotli. The configuration directives are identical.</div>
  </div>
  <div class="faq-item">
    <div class="faq-q">Should I disable gzip when using Brotli?</div>
    <div class="faq-a">No — keep gzip enabled alongside Brotli. NGINX automatically serves Brotli to browsers that support it and gzip to those that don&#8217;t. Disabling gzip would break compression for the small percentage of users on older browsers or corporate proxies that strip Brotli support.</div>
  </div>
</div>

<h2 style="color:#f59e0b">Related Posts</h2>
<ul>
  <li><a href="/2026/05/zstd-nginx-module-what-it-does-bugs-fixed/">zstd NGINX Module: What It Does and 22 Bug Fixes</a> — the other modern compression option, great for API workloads</li>
  <li><a href="/nginx-modules/">NGINX Dynamic Modules Overview</a> — Brotli is one of 50+ available modules</li>
  <li><a href="/2026/05/nginx-angie-the-expert-guide-to-maximum-performance-and-security/">NGINX Performance and Security Expert Guide</a> — full performance tuning guide including compression strategy</li>
  <li><a href="/2026/05/nginx-vs-apache-benchmark-2026/">NGINX vs Apache Benchmark 2026</a> — performance comparison including compression overhead</li>
  <li><a href="/how-to-use/">How to Add the myguard APT Repository</a> — where the Brotli module comes from</li>
</ul>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>NGINX on Debian 13 Trixie: What Changed and How to Upgrade</title>
		<link>https://deb.myguard.nl/2026/05/nginx-debian-13-trixie-upgrade-guide/</link>
		
		<dc:creator><![CDATA[]]></dc:creator>
		<pubDate>Tue, 12 May 2026 22:17:38 +0000</pubDate>
				<category><![CDATA[debian]]></category>
		<category><![CDATA[http3]]></category>
		<category><![CDATA[nginx]]></category>
		<category><![CDATA[performance]]></category>
		<category><![CDATA[trixie]]></category>
		<category><![CDATA[upgrade]]></category>
		<guid isPermaLink="false">https://deb.myguard.nl/2026/05/nginx-debian-13-trixie-upgrade-guide/</guid>

					<description><![CDATA[Debian 13 Trixie brings GCC 14, OpenSSL 3.3, PHP 8.4, systemd 256, and a newer Linux kernel. Here is what each change means for your NGINX and Angie setup, with a complete upgrade checklist.]]></description>
										<content:encoded><![CDATA[
<p>Debian 13 — codenamed Trixie — is Debian&#8217;s current testing branch, expected to go stable in 2025–2026. If you&#8217;re already running Trixie or preparing to migrate from Debian 12 Bookworm, there are some meaningful changes under the hood that affect every NGINX and Angie deployment. New compiler, new OpenSSL, new PHP defaults, new systemd — and some package transitions that could trip you up if you&#8217;re not paying attention.</p>

<p>The good news: the <a href="/how-to-use/">myguard APT repository</a> has shipped Trixie packages since day one of the testing cycle. Install NGINX or Angie from deb.myguard.nl and you automatically get builds compiled natively on Trixie&#8217;s toolchain — not backports, not compatibility shims, not &#8220;should work&#8221; guesswork.</p>

<h2 style="color:#f59e0b">What Is Debian 13 Trixie?</h2>

<p>Trixie is the development codename for Debian&#8217;s next stable release. Debian names its releases after Toy Story characters — after Bookworm (Debian 12) comes Trixie, the triceratops. Once Trixie is declared stable (expected 2025–2026), it will become &#8220;Debian 13&#8221; and receive five-plus years of security support.</p>

<p>Right now, Trixie is in &#8220;testing&#8221; status: it receives updates continuously, packages are more recent than Bookworm&#8217;s, and it&#8217;s broadly stable but not yet officially blessed for production. Many sysadmins run Trixie on servers where they want newer software without compiling from source. The myguard repository treats Trixie as a first-class target.</p>

<h2 style="color:#f59e0b">What Changed in Trixie That Affects NGINX</h2>

<h3>GCC 14 Compiler</h3>

<p>Our Trixie NGINX and Angie packages are compiled with GCC 14, which enables more aggressive auto-vectorization and improved link-time optimization. GCC 14 is also stricter about certain C code patterns — this required a few module patches to ensure clean compilation. The result: measurably better performance on modern CPUs, especially for compression (brotli, zstd, gzip), which is heavily vectorized.</p>

<p>Approximate GCC 14 gains on amd64:</p>
<ul>
  <li><strong>Brotli compression:</strong> ~8–12% faster encoding from improved Huffman codec vectorization</li>
  <li><strong>zstd compression:</strong> ~6–10% faster at level 3 via AVX2 path improvements</li>
  <li><strong>TLS handshakes:</strong> ~5% improvement from better P-256 curve codegen</li>
</ul>

<h3>OpenSSL 3.3 (vs 3.0 on Bookworm)</h3>

<p>Bookworm shipped with OpenSSL 3.0 LTS. Trixie upgrades to OpenSSL 3.3, which brings improved TLS 1.3 internals, better QUIC support, and performance improvements in elliptic curve operations. This matters even if you use our dedicated <a href="/2026/05/openssl-nginx-a-dedicated-openssl-build-for-nginx-and-angie/">openssl-nginx</a> package (which is compiled independently), because the system OpenSSL is used by tools you run alongside NGINX — certbot, curl, the openssl CLI, Python scripts.</p>

<p>OpenSSL 3.3 is also stricter about malformed certificates that 3.0 accepted with warnings. Validate internal/self-signed certs before upgrading: <code>openssl verify -CAfile /path/to/ca.pem /path/to/cert.pem</code></p>

<h3>PHP 8.4 Default</h3>

<p>Trixie&#8217;s default PHP version is 8.4. If you&#8217;re running PHP-FPM with NGINX for WordPress, check plugin compatibility before upgrading. PHP 8.4 promoted some dynamic property deprecation warnings to errors — well-maintained plugins are fine, but older ones that haven&#8217;t been updated since 2020 may throw fatal errors.</p>

<p>Quick compatibility check before upgrading:</p>
<pre><code>php8.4 -l /path/to/wp-config.php        # Lint for parse errors under PHP 8.4
php8.4 -m | grep -v '\[' | sort         # List loaded modules</code></pre>

<p>For PHP security hardening, the myguard repository ships <code>php8.4-snuffleupagus</code> — install it alongside PHP-FPM for interpreter-level protection.</p>

<h3>systemd 256</h3>

<p>Trixie ships systemd 256, which introduces more aggressive cgroup isolation defaults. This is mostly transparent for NGINX, but if you use custom systemd service overrides touching <code>PrivateTmp</code>, <code>ProtectSystem</code>, or cgroup limits, review them. The standard NGINX and Angie systemd units from the myguard packages are already updated for systemd 256 compatibility.</p>

<h3>Linux Kernel 6.11+</h3>

<p>Trixie tracks a much newer kernel than Bookworm&#8217;s 6.1. For NGINX specifically, the newer kernel improves kTLS (kernel TLS offload) performance, extends io_uring support, and handles QUIC-layer sockets more efficiently. If you enable kTLS on Trixie, you&#8217;re getting noticeably better TLS offload than on Bookworm.</p>

<pre><code># Verify kTLS is available
modprobe tls && lsmod | grep tls

# Enable in nginx.conf
ssl_conf_command Options KTLS;</code></pre>

<h2 style="color:#f59e0b">Installing NGINX or Angie on Trixie</h2>

<p>Same as any other Debian release — add the myguard repository and install:</p>

<pre><code>wget https://deb.myguard.nl/pool/myguard.deb
dpkg -i myguard.deb
apt-get update
apt-get install nginx    # or: apt-get install angie</code></pre>

<p>Verify build info and check for the Trixie toolchain:</p>
<pre><code>nginx -V 2>&amp;1</code></pre>

<p>New to the myguard repository? <a href="/how-to-use/">Follow the two-minute setup guide.</a></p>

<h2 style="color:#f59e0b">Upgrading from Bookworm to Trixie</h2>

<pre><code># Step 1 — Back up NGINX config
tar -czf /tmp/nginx-config-backup.tar.gz /etc/nginx/

# Step 2 — Note current versions
nginx -V > /tmp/nginx-v-before.txt 2>&amp;1   # nginx -V writes to stderr

# Step 3 — Update Debian sources
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
apt update
apt full-upgrade

# Step 4 — myguard repo auto-detects Trixie, no changes needed
apt install nginx   # refresh to Trixie build

# Step 5 — Test and reload
nginx -t && systemctl reload nginx</code></pre>

<h2 style="color:#f59e0b">Fresh Install Checklist for Trixie</h2>

<ol>
  <li><strong>Add myguard repository:</strong> <code>wget https://deb.myguard.nl/pool/myguard.deb &amp;&amp; dpkg -i myguard.deb</code></li>
  <li><strong>Install NGINX:</strong> <code>apt-get install nginx</code> — pulls in openssl-nginx automatically</li>
  <li><strong>Add modules:</strong> brotli, ModSecurity, Lua, GeoIP2 — all available as dynamic modules</li>
  <li><strong>Open UDP 443:</strong> HTTP/3 requires it — <code>ufw allow 443/udp</code></li>
  <li><strong>Configure TLS:</strong> Use the <a href="/2026/05/tls-configuration-ssllabs-a-plus/">TLS guide</a> for A+ on SSL Labs</li>
  <li><strong>Install PHP-FPM:</strong> <code>apt-get install php8.4-fpm php8.4-mysql php8.4-xml php8.4-curl</code></li>
  <li><strong>Harden PHP:</strong> <code>apt-get install php8.4-snuffleupagus</code></li>
</ol>

<h2 style="color:#f59e0b">Module Compatibility on Trixie</h2>

<p>All 50+ <a href="/nginx-modules/">dynamic modules</a> in the myguard repository are compiled natively for Trixie — no compatibility layer, built against the same NGINX and library versions as the main packages:</p>

<pre><code>apt-get install libnginx-mod-http-brotli       # Brotli compression
apt-get install libnginx-mod-http-modsecurity  # ModSecurity WAF
apt-get install libnginx-mod-http-lua          # Lua scripting
apt-get install libnginx-mod-http-zstd         # Zstandard compression
apt-get install libnginx-mod-http-geoip2       # GeoIP2 routing</code></pre>

<h2 style="color:#f59e0b">Known Issues and Gotchas</h2>

<h3>PHP 8.4 strict deprecations</h3>
<p>Some older WordPress plugins throw deprecation notices under PHP 8.4. They won&#8217;t break your site but may spam the error log. Suppress them with <code>error_reporting = E_ALL &amp; ~E_DEPRECATED</code> in your FPM pool config while waiting for plugin updates.</p>

<h3>systemd 256 PrivateTmp changes</h3>
<p>If you use a custom <code>/etc/systemd/system/nginx.service.d/override.conf</code> that modifies <code>PrivateTmp</code> with custom paths, review it. systemd 256 changed how PrivateTmp interacts with bind-mounted directories. The default myguard service unit is already correct.</p>

<h2 style="color:#f59e0b">Frequently Asked Questions</h2>

<div class="faq">
  <div class="faq-item">
    <div class="faq-q">Is Trixie stable enough for production?</div>
    <div class="faq-a">Trixie is Debian&#8217;s testing branch — broadly stable, but it hasn&#8217;t had the final freeze and stabilization pass that a Debian stable release gets. Many sysadmins run it on production servers without issues. For critical systems, Bookworm (Debian 12) is the safer choice until Trixie goes stable.</div>
  </div>
  <div class="faq-item">
    <div class="faq-q">Do I need to change the myguard repository URL for Trixie?</div>
    <div class="faq-a">No. The myguard repository uses a &#8220;stable&#8221; suite that automatically maps to the correct packages for your Debian release. The same sources.list entry works on Bookworm, Trixie, and future releases.</div>
  </div>
  <div class="faq-item">
    <div class="faq-q">Which PHP version should I use on Trixie for WordPress?</div>
    <div class="faq-a">PHP 8.4 is Trixie&#8217;s default and is compatible with all well-maintained WordPress plugins. Run compatibility checks before upgrading production. Pair it with php8.4-snuffleupagus from the myguard repository for interpreter-level security hardening.</div>
  </div>
  <div class="faq-item">
    <div class="faq-q">Will my existing NGINX config work on Trixie?</div>
    <div class="faq-a">Yes. NGINX configuration syntax hasn&#8217;t changed. Your /etc/nginx/ directory is preserved during the Bookworm-to-Trixie upgrade. Run nginx -t after upgrading to verify, then reload.</div>
  </div>
  <div class="faq-item">
    <div class="faq-q">Does HTTP/3 work better on Trixie than Bookworm?</div>
    <div class="faq-a">Slightly yes — Trixie&#8217;s kernel (6.11+) has improved QUIC socket handling vs Bookworm&#8217;s 6.1. The difference is most noticeable under high concurrent connection load. For most sites the improvement is marginal; for high-traffic servers it&#8217;s measurable.</div>
  </div>
  <div class="faq-item">
    <div class="faq-q">What is the Debian 13 release date?</div>
    <div class="faq-a">Debian doesn&#8217;t commit to fixed release dates — it releases when ready. Trixie is expected to go stable in 2025–2026. Follow the freeze schedule at debian.org/releases.</div>
  </div>
  <div class="faq-item">
    <div class="faq-q">Does Angie work on Trixie the same as NGINX?</div>
    <div class="faq-a">Yes. Angie packages for Trixie are in the myguard repository alongside NGINX. Same installation process, same dynamic modules, same configuration syntax. Angie adds native ACME (Let&#8217;s Encrypt without Certbot) and a JSON monitoring API.</div>
  </div>
</div>

<h2 style="color:#f59e0b">Related Posts</h2>
<ul>
  <li><a href="/how-to-use/">How to Add the myguard APT Repository</a> — two-minute setup for Debian and Ubuntu</li>
  <li><a href="/nginx-modules/">NGINX Dynamic Modules Overview</a> — all 50+ modules, with Trixie packages available for each</li>
  <li><a href="/2026/05/tls-configuration-ssllabs-a-plus/">TLS Configuration Guide for NGINX and Angie</a> — A+ SSL Labs config with TLS 1.3 and HSTS</li>
  <li><a href="/2026/05/angie-web-server-complete-guide/">Angie Web Server: The Complete Guide</a> — review, ACME, migration guide, and monitoring</li>
  <li><a href="/2026/05/nginx-http3-quic-debian-ubuntu/">How to Enable HTTP/3 on NGINX</a> — QUIC setup that works on Trixie out of the box</li>
</ul>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>nginx 1.29.8</title>
		<link>https://deb.myguard.nl/2026/05/nginx-debian-13-trixie/</link>
		
		<dc:creator><![CDATA[Thijs Eilander]]></dc:creator>
		<pubDate>Tue, 12 May 2026 19:57:13 +0000</pubDate>
				<category><![CDATA[nginx]]></category>
		<category><![CDATA[pbuilder]]></category>
		<guid isPermaLink="false">https://deb.myguard.nl/2026/05/nginx-debian-13-trixie/</guid>

					<description><![CDATA[Version 1.29.8 — 2026-05-12 Changes Full rebuild and backport with latest Mainline Merged with the source package from Debian Trixie in November&#8230;]]></description>
										<content:encoded><![CDATA[<p>Version <code>1.29.8</code> — <em>2026-05-12</em></p>
<h2>Changes</h2>
<ul>
<li>Full rebuild and backport with latest Mainline</li>
<li>Merged with the source package from Debian Trixie in November 2023</li>
<li>See https://deb.myguard.nl/nginx-modules/ for more information</li>
<li>Changelog: https://deb.myguard.nl/forums/topic/changelog/</li>
</ul>
<h2>Distributions</h2>
<ul>
<li>bookworm</li>
<li>jammy</li>
<li>noble</li>
<li>resolute</li>
<li>trixie</li>
</ul>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>PHP Snuffleupagus Tutorial: Harden PHP-FPM Against Injection, XSS and Dangerous Functions</title>
		<link>https://deb.myguard.nl/2026/05/php-snuffleupagus-tutorial-harden-php-fpm/</link>
		
		<dc:creator><![CDATA[Thijs Eilander]]></dc:creator>
		<pubDate>Tue, 12 May 2026 19:57:11 +0000</pubDate>
				<category><![CDATA[nginx]]></category>
		<category><![CDATA[debian]]></category>
		<category><![CDATA[injection]]></category>
		<category><![CDATA[php]]></category>
		<category><![CDATA[php-fpm]]></category>
		<category><![CDATA[security]]></category>
		<category><![CDATA[snuffleupagus]]></category>
		<category><![CDATA[ubuntu]]></category>
		<category><![CDATA[xss]]></category>
		<guid isPermaLink="false">https://deb.myguard.nl/2026/05/php-snuffleupagus-tutorial-harden-php-fpm/</guid>

					<description><![CDATA[PHP-Snuffleupagus blocks dangerous functions, eval(), remote file inclusion and cookie theft inside the PHP interpreter itself — where a WAF can't reach. Full installation, WordPress-specific rules, per-pool config, and production tuning guide.]]></description>
										<content:encoded><![CDATA[
<p>Most security advice for PHP applications focuses on the wrong layer. You harden your NGINX config, add a WAF, keep WordPress updated — and then a zero-day plugin vulnerability drops and an attacker uploads a PHP webshell to <code>wp-content/uploads/</code>. The WAF saw HTTP traffic, the WAF saw an image upload, the WAF said &#8220;fine.&#8221; Three hours later you&#8217;re explaining to a client why their website is serving malware.</p>

<p><strong>PHP-Snuffleupagus</strong> is a PHP extension that adds a security layer <em>inside the PHP interpreter itself</em>. Not at the HTTP layer. Not at the network layer. Inside PHP, where function calls actually happen. An attacker can craft an HTTP request that bypasses your WAF. They cannot bypass the PHP interpreter — Snuffleupagus runs inside it.</p>

<p>Think of it like this: ModSecurity is a bouncer at the front door of your restaurant. Snuffleupagus is a bouncer who lives in the kitchen. Even if someone sneaks in through the window, they still can&#8217;t touch the knives.</p>

<p>The <a href="/how-to-use/">myguard APT repository</a> ships <code>php-snuffleupagus</code> as a pre-built package for Debian and Ubuntu, covering PHP 7.4 through 8.4. No compilation required. This guide goes deeper than the official docs — which are sparse — and covers everything from basic installation to WordPress-specific hardening to production tuning.</p>



<h2 style="color:#f59e0b">What Snuffleupagus actually blocks</h2>
<p>Unlike a WAF that pattern-matches HTTP strings, Snuffleupagus controls what PHP is <em>allowed to do</em>. Here&#8217;s what that means in practice:</p>

<h3>Dangerous function blocking</h3>
<p>PHP has a set of functions that legitimate applications almost never need but attackers always want: <code>eval()</code>, <code>system()</code>, <code>exec()</code>, <code>passthru()</code>, <code>shell_exec()</code>, <code>popen()</code>, <code>proc_open()</code>. If an attacker injects code and it reaches PHP, these are what they call to execute commands on your server. Snuffleupagus blocks them at the interpreter — before execution, not after.</p>
<p>The crucial difference from PHP&#8217;s built-in <code>disable_functions</code>: Snuffleupagus is <em>conditional</em>. Block <code>exec()</code> only in requests coming from the uploads directory. Block <code>file_get_contents()</code> only when called with a remote URL. Block <code>system()</code> everywhere except one specific legacy script that genuinely needs it. Surgical, not blunt.</p>
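<p>A sketch of what conditional rules look like in practice (the paths are illustrative; the syntax matches the production examples later in this guide):</p>
<pre><code># block exec() only when the calling script lives under an uploads tree
sp.disable_function.function("exec").filename_r("/uploads/").drop();

# block file_get_contents() only when the target is a remote URL
sp.disable_function.function("file_get_contents").param("filename").value_r("^https?://").drop();</code></pre>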

<h3>Cookie encryption and hardening</h3>
<p>Snuffleupagus can transparently encrypt all cookies your application sets, adding an HMAC to detect tampering. It also enforces <code>HttpOnly</code>, <code>Secure</code>, and <code>SameSite</code> flags on every cookie, even ones your application forgot to flag. Session hijacking via cookie theft becomes dramatically harder.</p>

<h3>File upload protection</h3>
<p>One of the most common attack patterns: upload a PHP file disguised as an image, then trigger its execution via a URL. Snuffleupagus can block PHP from including or executing files from your upload directory entirely. A PHP file in <code>wp-content/uploads/</code> cannot execute, period — no matter how it got there.</p>

<h3>Type juggling prevention</h3>
<p>PHP&#8217;s loose comparison operator (<code>==</code>) has infamous edge cases: <code>"0e12345" == "0"</code> is <code>true</code> because both evaluate as floating-point zero in scientific notation. Attackers exploit this to bypass password hash comparisons (the &#8220;magic hash&#8221; vulnerability). Snuffleupagus enforces strict comparisons in sensitive contexts.</p>
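<p>The magic-hash problem is easy to reproduce. Here&#8217;s a minimal Python sketch that mimics PHP&#8217;s loose <code>==</code> semantics (Python itself compares strings strictly; the helper is only an illustration):</p>
<pre><code># a sketch of PHP's loose numeric-string comparison, for illustration only
def php_loose_eq(a, b):
    # PHP's == parses numeric-looking strings as numbers before comparing;
    # "0e12345" is scientific notation for zero, so it equals "0"
    try:
        return float(a) == float(b)
    except ValueError:
        return a == b

print(php_loose_eq("0e12345", "0"))        # True
print(php_loose_eq("0e12345", "0e99999"))  # True: two "magic hashes" collide
print(php_loose_eq("abc", "0"))            # False
</code></pre>
<p>Any two hashes of the form <code>0e&lt;digits&gt;</code> compare equal under loose comparison, which is why password checks must use <code>hash_equals()</code> or <code>===</code>.</p>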

<h3>Remote file inclusion blocking</h3>
<p>PHP can include files from remote URLs if <code>allow_url_include</code> is on (it shouldn&#8217;t be, but some legacy apps depend on it). Snuffleupagus blocks <code>file_get_contents()</code>, <code>include()</code>, and <code>require()</code> from fetching remote resources, regardless of PHP&#8217;s config.</p>

<h3>XSS output filtering</h3>
<p>Snuffleupagus can inject <code>htmlspecialchars()</code> calls transparently before PHP outputs content to the browser. This is a last-resort safety net for applications you can&#8217;t modify — it won&#8217;t catch every XSS vector but it catches the most common ones.</p>



<h2 style="color:#f59e0b">Snuffleupagus vs ModSecurity: use both</h2>
<p>People always ask whether Snuffleupagus replaces ModSecurity. It doesn&#8217;t. They live at completely different layers and catch completely different attacks.</p>
<table style="width:100%;border-collapse:collapse;margin:1rem 0;">
<thead><tr style="border-bottom:2px solid var(--border);"><th style="text-align:left;padding:0.5rem;">Layer</th><th style="text-align:left;padding:0.5rem;">ModSecurity (NGINX WAF)</th><th style="text-align:left;padding:0.5rem;">PHP-Snuffleupagus</th></tr></thead>
<tbody>
<tr style="border-bottom:1px solid var(--border);"><td style="padding:0.5rem;"><strong>Where it runs</strong></td><td style="padding:0.5rem;">NGINX process, HTTP layer</td><td style="padding:0.5rem;">Inside PHP-FPM, interpreter layer</td></tr>
<tr style="border-bottom:1px solid var(--border);"><td style="padding:0.5rem;"><strong>What it sees</strong></td><td style="padding:0.5rem;">Raw HTTP headers, URL, body</td><td style="padding:0.5rem;">PHP function calls, arguments, return values</td></tr>
<tr style="border-bottom:1px solid var(--border);"><td style="padding:0.5rem;"><strong>Stops</strong></td><td style="padding:0.5rem;">SQLi in URLs, XSS in forms, path traversal in requests</td><td style="padding:0.5rem;">eval(), shell_exec(), type juggling, cookie theft, bad file uploads</td></tr>
<tr><td style="padding:0.5rem;"><strong>Bypass risk</strong></td><td style="padding:0.5rem;">Encoding tricks can fool pattern matching</td><td style="padding:0.5rem;">Very hard — runs in the interpreter itself</td></tr>
</tbody>
</table>
<p>The attack chain typically goes: HTTP request → WAF layer → PHP layer → application. ModSecurity blocks at the WAF layer. Snuffleupagus blocks at the PHP layer. Run both and an attacker has to bypass two independent security systems at different levels of the stack. See the <a href="/2026/05/nginx-modsecurity-setup-debian-ubuntu/">ModSecurity setup guide</a> for the WAF side of this.</p>



<h2 style="color:#f59e0b">Step 1 — Install php-snuffleupagus</h2>
<pre><code>wget https://deb.myguard.nl/pool/myguard.deb
dpkg -i myguard.deb
apt-get update
apt-get install php-snuffleupagus</code></pre>
<p>The package installs the extension for all PHP versions it finds on your system. Check it&#8217;s available:</p>
<pre><code>ls /usr/lib/php/*/snuffleupagus.so
# You should see one file per installed PHP version</code></pre>



<h2 style="color:#f59e0b">Step 2 — Enable the extension</h2>
<p>You need to both load the extension and point it at a rules file. Do this per PHP version. For PHP 8.3:</p>
<pre><code>cat &gt; /etc/php/8.3/fpm/conf.d/20-snuffleupagus.ini &lt;&lt;'EOINI'
extension=snuffleupagus.so
sp.configuration_file=/etc/php/8.3/snuffleupagus.rules
EOINI</code></pre>
<p>If you&#8217;re running multiple PHP versions (common with multiple sites), repeat for each version. The rules file path can be the same or different per version — using version-specific paths gives you more flexibility.</p>
<p>Create the rules file (empty for now — the extension requires it to exist):</p>
<pre><code>touch /etc/php/8.3/snuffleupagus.rules</code></pre>
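<p>If you run several PHP versions, the two files can be stamped out in one pass. A sketch assuming 8.3 and 8.4 are the installed versions:</p>
<pre><code># adjust the version list to what's actually installed
for v in 8.3 8.4; do
  printf 'extension=snuffleupagus.so\nsp.configuration_file=/etc/php/%s/snuffleupagus.rules\n' "$v" \
    &gt; "/etc/php/$v/fpm/conf.d/20-snuffleupagus.ini"
  touch "/etc/php/$v/snuffleupagus.rules"
done</code></pre>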
<p>Restart PHP-FPM and verify the extension loaded:</p>
<pre><code>systemctl restart php8.3-fpm
php8.3 -m | grep snuffleupagus
# Should output: snuffleupagus</code></pre>



<h2 style="color:#f59e0b">Step 3 — The golden rule: start in log mode</h2>
<p>Before you write a single blocking rule, understand this: <strong>every blocking rule is a potential outage</strong>. WordPress and its plugins use a broad range of PHP features. A rule that blocks <code>exec()</code> globally will break any plugin that calls it for legitimate image processing or PDF generation.</p>

<p>The workflow is always: <strong>log → review → whitelist false positives → block</strong>. Never jump straight to blocking on a live site.</p>

<p>In Snuffleupagus rules, <code>.simulation()</code> logs a violation but doesn&#8217;t block. <code>.log()</code> also just logs. <code>.drop()</code> blocks the call and returns <code>false</code>. <code>.kill()</code> terminates the PHP process entirely.</p>

<p>Start your rules file with simulation on the scary stuff:</p>
<pre><code>sp.enable();

# Dangerous execution functions — simulation first
sp.disable_function.function("system").simulation();
sp.disable_function.function("exec").simulation();
sp.disable_function.function("passthru").simulation();
sp.disable_function.function("shell_exec").simulation();
sp.disable_function.function("proc_open").simulation();
sp.disable_function.function("popen").simulation();
</code></pre>
<p>Restart PHP-FPM, then watch the log:</p>
<pre><code>systemctl restart php8.3-fpm
tail -f /var/log/php8.3-fpm.log | grep -i snuffleupagus</code></pre>
<p>Browse your site. Send some test requests. If you see violations in the log, read them carefully before deciding whether they&#8217;re legitimate. A WordPress plugin calling <code>exec()</code> to run ImageMagick is probably legitimate. Same call from a request touching <code>wp-content/uploads/</code> is not.</p>



<h2 style="color:#f59e0b">Step 4 — Production baseline rules</h2>
<p>Once you&#8217;ve done a day or two of simulation and confirmed your false positive picture, build your production rules file. This baseline is conservative — it blocks things nearly all PHP applications never legitimately need:</p>
<pre><code>sp.enable();

# ---- Cookie hardening ----
# Encrypt all cookies and enforce security flags
# WARNING: This invalidates existing sessions. Deploy during maintenance window
# or exclude specific cookies with sp.cookie.name("PHPSESSID").attribute().drop();
sp.cookie.name("*").encrypt();

# ---- Kill eval() ----
# Almost no legitimate application needs eval(). Template engines and ORMs
# that used to need it have moved on. Legacy apps: test first.
sp.disable_function.function("eval").drop();

# ---- Block remote command execution ----
sp.disable_function.function("system").drop();
sp.disable_function.function("exec").drop();
sp.disable_function.function("passthru").drop();
sp.disable_function.function("shell_exec").drop();
sp.disable_function.function("popen").drop();
sp.disable_function.function("proc_open").drop();

# ---- Block remote file inclusion ----
# Blocks file_get_contents("http://...") and file_get_contents("https://...")
# Add whitelists for APIs your code legitimately calls:
# sp.disable_function.function("file_get_contents").param("filename").value_r("^https://api.trusted.com").allow();
sp.disable_function.function("file_get_contents").param("filename").value_r("^https?://").drop();

# ---- Block PHP execution from upload directories ----
# Webshells in uploads can't execute if PHP can't include files from there
sp.disable_function.function("include").filename_r("/uploads/").drop();
sp.disable_function.function("include_once").filename_r("/uploads/").drop();
sp.disable_function.function("require").filename_r("/uploads/").drop();
sp.disable_function.function("require_once").filename_r("/uploads/").drop();

# ---- Information disclosure ----
sp.disable_function.function("phpinfo").drop();
sp.disable_function.function("getenv").param("varname").value("AWS_SECRET_ACCESS_KEY").drop();
sp.disable_function.function("getenv").param("varname").value("DB_PASSWORD").drop();
</code></pre>



<h2 style="color:#f59e0b">Step 5 — WordPress-specific hardening</h2>
<p>WordPress is the most common PHP application and the most commonly attacked. These rules are tuned specifically for WordPress&#8217;s function usage patterns.</p>
<pre><code># ---- WordPress-specific rules ----

# Block PHP execution in WordPress upload directory
sp.disable_function.function("include").filename_r("/wp-content/uploads/").drop();
sp.disable_function.function("include_once").filename_r("/wp-content/uploads/").drop();
sp.disable_function.function("require").filename_r("/wp-content/uploads/").drop();
sp.disable_function.function("require_once").filename_r("/wp-content/uploads/").drop();

# Suspicious base64 decode output — log only (WP legitimately uses base64)
# Long base64-decoded strings are often shellcode. Log first, block once confirmed.
sp.disable_function.function("base64_decode").ret_r(".{500,}").log();

# Block use of assert() as eval() substitute (old WordPress exploit technique)
sp.disable_function.function("assert").param("assertion").value_r("\$").drop();

# Prevent direct access to wp-config.php via LFI
sp.disable_function.function("file_get_contents").param("filename").value_r("wp-config.php").drop();

# Cookie hardening for WordPress sessions
# Use specific cookie name instead of wildcard to avoid breaking payment plugins
sp.cookie.name("wordpress_logged_in*").encrypt();
sp.cookie.name("PHPSESSID").encrypt();
</code></pre>

<h3>Testing your WordPress rules</h3>
<p>After applying, test these WordPress functions work correctly:</p>
<ul>
<li>Log in and out — cookies must work</li>
<li>Upload an image — the upload must work but PHP execution in uploads must fail</li>
<li>Edit a post — the Gutenberg editor uses REST API calls, verify these work</li>
<li>Run any background WP-Cron jobs manually: <code>wp cron event run --due-now --allow-root</code></li>
</ul>



<h2 style="color:#f59e0b">Step 6 — Per-site rules with PHP-FPM pools</h2>
<p>If you&#8217;re running multiple sites on the same server, you can apply different rule sets per PHP-FPM pool. A legacy application that legitimately calls <code>exec()</code> gets lenient rules. A modern WordPress site gets strict rules. No cross-contamination.</p>

<p>In each pool config (<code>/etc/php/8.3/fpm/pool.d/mysite.conf</code>):</p>
<pre><code>[mysite]
user = www-data
group = www-data
listen = /run/php/php8.3-fpm-mysite.sock

; Override the snuffleupagus rules file for this pool
php_admin_value[extension] = snuffleupagus.so
php_admin_value[sp.configuration_file] = /etc/php/8.3/snuffleupagus-mysite.rules</code></pre>

<pre><code>[legacyapp]
user = www-data
group = www-data
listen = /run/php/php8.3-fpm-legacy.sock

; Minimal rules for legacy app that needs exec()
php_admin_value[extension] = snuffleupagus.so
php_admin_value[sp.configuration_file] = /etc/php/8.3/snuffleupagus-legacy.rules</code></pre>

<p>The legacy rules file can block everything except the specific functions the legacy app needs:</p>
<pre><code>sp.enable();

# Legacy app needs exec() for PDF generation — allow it but log
sp.disable_function.function("system").drop();
sp.disable_function.function("passthru").drop();
sp.disable_function.function("shell_exec").drop();
# exec() allowed — only log unusual calls
sp.disable_function.function("exec").param("command").value_r("[;&|`]").log();
</code></pre>



<h2 style="color:#f59e0b">Step 7 — Advanced rules</h2>

<h3>Block specific parameter patterns</h3>
<p>You can match on function arguments, not just function names:</p>
<pre><code># Block file_get_contents() when the path looks like a traversal attack
sp.disable_function.function("file_get_contents").param("filename").value_r("../").drop();

# Block unserialize() on user-controlled data (common deserialization attack vector)
# The variable name check is heuristic — adjust to your app
sp.disable_function.function("unserialize").param("data").value_r("[Oo]:d+:").drop();

# Block preg_replace() with /e modifier (executes replacement as PHP code)
# This was deprecated in PHP 5.5 and removed in 7.0 but some legacy code still has it
sp.disable_function.function("preg_replace").param("pattern").value_r("/e").drop();
</code></pre>

<h3>Global request-level rules</h3>
<pre><code># Enable XSS output filtering as a last-resort safety net
# This transparently runs htmlspecialchars() on outputs when enabled
# Only use on applications you can't modify — it has false positives
# sp.global_strict = true;

# Log all calls to dangerous functions regardless of whether they're blocked
# Useful for forensics and building a whitelist before blocking
sp.disable_function.function("eval").simulation();
sp.disable_function.function("exec").simulation();
</code></pre>

<h3>Environment-specific tuning</h3>
<pre><code># Harden unserialize() with an allowed classes list
# Prevents object injection attacks — only allow specific classes to be unserialized
sp.disable_function.function("unserialize").param("allowed_classes").value_r("false").drop();

# Block PHP reflection API (used in some deserialization exploit chains)
sp.disable_function.function("ReflectionObject").drop();

# Block symbolic link creation (used in some privilege escalation chains)
sp.disable_function.function("symlink").drop();
</code></pre>



<h2 style="color:#f59e0b">Reading the logs</h2>
<p>Snuffleupagus logs to the PHP-FPM error log. Each violation entry contains the rule that triggered, the function called, the file and line number, and the actual arguments. Learn to read these — they tell you exactly what happened:</p>
<pre><code>tail -f /var/log/php8.3-fpm.log | grep -iE "snuffleupagus|sp_"

# A blocked call looks like:
# [snuffleupagus][eval][block] in /var/www/html/wp-content/plugins/bad-plugin/shell.php:1
# [snuffleupagus][file_get_contents][drop] in /var/www/html/wp-content/uploads/cmd.php:1

# A simulation (would-have-been-blocked) looks like:
# [snuffleupagus][exec][simulation] called in /path/to/file.php:42</code></pre>

<p>The file path tells you which application or plugin triggered the rule. If it&#8217;s from <code>uploads/</code>, it&#8217;s almost certainly an attack. If it&#8217;s from a known plugin directory, investigate whether the plugin legitimately needs that function.</p>

<h3>Building a whitelist from simulation logs</h3>
<p>After running in simulation mode, look for patterns in the log:</p>
<pre><code>grep "snuffleupagus" /var/log/php8.3-fpm.log | grep simulation | awk '{print $NF}' | sort | uniq -c | sort -rn</code></pre>
<p>The most common violations are usually the ones you need to whitelist. Add specific exceptions for them before switching to <code>.drop()</code>:</p>
<pre><code># Whitelist exec() only for the specific ImageMagick plugin path
sp.disable_function.function("exec").param("command").value_r("convert").allow();
sp.disable_function.function("exec").drop();   # block everything else</code></pre>



<h2 style="color:#f59e0b">Performance impact</h2>
<p>A common question: does Snuffleupagus slow PHP down? The honest answer: yes, slightly. How slightly depends on your rules.</p>
<ul>
<li><strong>Cookie encryption:</strong> adds one AES operation per cookie per request. On modern CPUs this is microseconds.</li>
<li><strong>Function call checking:</strong> each disabled function check adds a small overhead at the opcode level. For rules that fire rarely (blocked functions), this is negligible.</li>
<li><strong>Global strict mode (XSS filter):</strong> this touches every output operation and has measurable overhead. Only enable it if you genuinely can&#8217;t modify the application.</li>
</ul>
<p>For a typical WordPress site serving PHP pages, expect 1–3ms additional processing time per request. On a well-cached site where most requests are served from cache by NGINX before reaching PHP, the impact on user-facing performance is essentially zero.</p>



<h2 style="color:#f59e0b">Frequently asked questions</h2>
<div class="faq">
  <div class="faq-item"><div class="faq-q">Will Snuffleupagus break my WordPress site?</div><div class="faq-a">Not if you start in simulation mode and review the logs first. WordPress core is generally well-written and doesn&#8217;t use dangerous functions like eval() or system(). However, third-party plugins are a different story — some use exec() for image processing or PDF generation. Simulation mode will show you exactly what would be blocked before you commit to blocking it.</div></div>
  <div class="faq-item"><div class="faq-q">Does Snuffleupagus replace PHP&#8217;s disable_functions in php.ini?</div><div class="faq-a">They&#8217;re complementary but Snuffleupagus is more powerful. PHP&#8217;s disable_functions is a global on/off switch — you can&#8217;t conditionally allow exec() only for ImageMagick calls. Snuffleupagus lets you block by function, by parameter value, by calling file path, or by request context. You can run both simultaneously — use disable_functions for the absolute hard blocks and Snuffleupagus for the conditional rules.</div></div>
  <div class="faq-item"><div class="faq-q">Is Snuffleupagus compatible with PHP 8.4?</div><div class="faq-a">Yes. The myguard php-snuffleupagus package supports PHP 7.4 through 8.4. Check the changelog for the exact version mapping. The extension is actively maintained and tracks PHP releases.</div></div>
  <div class="faq-item"><div class="faq-q">Can I use Snuffleupagus with a PHP-FPM pool running as a different user?</div><div class="faq-a">Yes. The extension loads per PHP process, and PHP-FPM pools run as whatever user you configure. You can have a strict pool for your WordPress site and a lenient pool for a legacy app on the same server, with completely different rule sets per pool using php_admin_value[sp.configuration_file].</div></div>
  <div class="faq-item"><div class="faq-q">What happens when Snuffleupagus blocks a function call?</div><div class="faq-a">With .drop(), the function returns false (as if it failed). With .kill(), the PHP process terminates immediately. Most rules should use .drop() — .kill() is extreme and should only be used for truly unrecoverable situations like a detected deserialization exploit in progress. Always log blocked calls so you can investigate them later.</div></div>
  <div class="faq-item"><div class="faq-q">Does cookie encryption break existing sessions?</div><div class="faq-a">Yes — when you enable cookie encryption, existing unencrypted cookies become invalid and users get logged out. Deploy this during a maintenance window or at a low-traffic time. If you can&#8217;t afford any disruption, whitelist the existing session cookie name from encryption and rotate cookie names after the transition.</div></div>
  <div class="faq-item"><div class="faq-q">Where does the name Snuffleupagus come from?</div><div class="faq-a">Mr. Snuffleupagus (or &#8220;Snuffy&#8221;) is Big Bird&#8217;s best friend on Sesame Street — a large, friendly creature that nobody else could see for many years. The PHP security extension is invisible to attackers in the same way: it silently guards the PHP interpreter without being visible at the HTTP layer. The developers have a good sense of humour.</div></div>
</div>



<h2 style="color:#f59e0b">Related posts</h2>
<ul>
<li><a href="/2026/05/nginx-modsecurity-setup-debian-ubuntu/">NGINX ModSecurity WAF Setup</a> — the HTTP layer to pair with Snuffleupagus for full-stack defence</li>
<li><a href="/2026/05/tls-configuration-ssllabs-a-plus/">TLS Configuration for NGINX and Angie</a> — hardening the transport layer alongside your PHP security stack</li>
<li><a href="/2026/05/postfix-dovecot-setup-debian/">Postfix + Dovecot Mail Server Setup</a> — complete mail server guide for the same Debian/Ubuntu stack</li>
<li><a href="/nginx-modules/">NGINX modules overview</a> — ModSecurity is one of 50+ modules available via APT</li>
<li><a href="/packages/">Full package list</a> — php-snuffleupagus, libmodsecurity3, and all security packages</li>
<li><a href="/how-to-use/">How to add the myguard APT repository</a></li>
</ul>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Postfix + Dovecot Mail Server Setup on Debian 12 and 13 (2026 Guide)</title>
		<link>https://deb.myguard.nl/2026/05/postfix-dovecot-setup-debian/</link>
		
		<dc:creator><![CDATA[Thijs Eilander]]></dc:creator>
		<pubDate>Tue, 12 May 2026 19:57:10 +0000</pubDate>
				<category><![CDATA[nginx]]></category>
		<category><![CDATA[debian]]></category>
		<category><![CDATA[dovecot]]></category>
		<category><![CDATA[mail]]></category>
		<category><![CDATA[postfix]]></category>
		<category><![CDATA[rspamd]]></category>
		<category><![CDATA[security]]></category>
		<category><![CDATA[tls]]></category>
		<category><![CDATA[ubuntu]]></category>
		<guid isPermaLink="false">https://deb.myguard.nl/2026/05/postfix-dovecot-setup-debian/</guid>

					<description><![CDATA[A complete Postfix + Dovecot + Rspamd mail server on Debian 12 and 13 — with TLS, DKIM, SPF, DMARC, spam filtering, virtual mailboxes, security hardening, and a 10/10 score on mail-tester.com. No shortcuts.]]></description>
										<content:encoded><![CDATA[
<p>Running your own mail server in 2026 is not the nightmare people say it is — if you use the right packages and follow a methodical setup. The horror stories you&#8217;ve read online are mostly about people who skipped DNS records, used outdated TLS configs, or ran ancient software with no spam filtering. Do it right and you&#8217;ll have a server that delivers reliably, stays off blacklists, and handles a few hundred mailboxes without breaking a sweat.</p>

<p><strong>Postfix + Dovecot + Rspamd</strong> is the production standard combination in 2026. Postfix handles SMTP (sending and receiving), Dovecot handles IMAP (what your mail client connects to), and Rspamd filters spam, signs outgoing mail with DKIM, and plugs into Postfix as a milter. All three are available from the <a href="/how-to-use/">myguard APT repository</a> — updated within hours of upstream releases, so you&#8217;re never stuck on a version Debian stable froze two years ago.</p>

<p>This guide is thorough by design. You&#8217;ll finish with a fully working mail server on Debian 12 (Bookworm) or Debian 13 (Trixie), including TLS everywhere, SPF, DKIM, DMARC, spam filtering, and a 10/10 score on mail-tester.com. No shortcuts that come back to bite you at 2am.</p>



<h2 style="color:#f59e0b">What you&#8217;re building</h2>
<p>A complete inbound + outbound mail server with:</p>
<ul>
<li><strong>Postfix</strong> — receives mail on port 25 from the internet, accepts submissions from your users on port 587 (STARTTLS) and 465 (SMTPS)</li>
<li><strong>Dovecot</strong> — serves mail to your IMAP clients on port 993 (IMAPS), delivers mail from Postfix to Maildir via LMTP</li>
<li><strong>Rspamd</strong> — scans inbound mail for spam, signs outbound mail with DKIM, integrates with Postfix as a milter</li>
<li><strong>Redis</strong> — Rspamd&#8217;s backend for Bayes learning and rate limiting</li>
<li><strong>Let&#8217;s Encrypt TLS</strong> — proper certificates so mail clients don&#8217;t complain</li>
<li><strong>SPF, DKIM, DMARC</strong> — the three DNS records that prevent your mail from being treated as spam</li>
</ul>



<h2 style="color:#f59e0b">Before you start — prerequisites</h2>
<ul>
<li><strong>A VPS or dedicated server with a static IP.</strong> Dynamic IPs are blacklisted by virtually every major mail provider. Hetzner, OVH, Contabo, and DigitalOcean all work well.</li>
<li><strong>Port 25 unblocked.</strong> Many providers block outbound port 25 by default. Contact support and ask them to unblock it. Hetzner does it within minutes. DigitalOcean requires account verification.</li>
<li><strong>A domain you control.</strong> You need to be able to edit DNS records — MX, TXT, and PTR.</li>
<li><strong>Reverse DNS (PTR record).</strong> Your server&#8217;s IP must resolve back to your mail hostname. Log in to your provider&#8217;s control panel and set the PTR record for your IP to <code>mail.example.com</code>. This is separate from your regular DNS — it&#8217;s set at the IP level.</li>
<li><strong>A valid hostname.</strong> Set your server&#8217;s hostname to match your PTR record: <code>hostnamectl set-hostname mail.example.com</code>.</li>
<li><strong>Firewall rules.</strong> Open ports 25 (SMTP), 465 (SMTPS), 587 (SMTP submission), 993 (IMAPS). Block 110 (POP3) and 143 (plain IMAP) — there&#8217;s no reason to offer unencrypted access.</li>
</ul>
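<p>With <code>ufw</code> (a common choice on Debian/Ubuntu — adjust if you run nftables or raw iptables), the rule set above looks like this:</p>

```shell
# Open the mail ports (assumes ufw is installed and enabled)
ufw allow 25/tcp    # SMTP from other mail servers
ufw allow 465/tcp   # SMTPS submission
ufw allow 587/tcp   # STARTTLS submission
ufw allow 993/tcp   # IMAPS
# Leave 110 (POP3) and 143 (plain IMAP) closed — no unencrypted access
ufw status numbered
```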



<h2 style="color:#f59e0b">Step 1 — Add the repository and install packages</h2>
<pre><code>wget https://deb.myguard.nl/pool/myguard.deb
dpkg -i myguard.deb
apt-get update
apt-get install postfix postfix-pcre dovecot-core dovecot-imapd dovecot-lmtpd rspamd redis-server</code></pre>
<p>The installer will ask two questions during Postfix installation:</p>
<ul>
<li><strong>Configuration type:</strong> select <strong>Internet Site</strong></li>
<li><strong>System mail name:</strong> enter your domain — <code>example.com</code>, not <code>mail.example.com</code></li>
</ul>
<p>After installation, get a TLS certificate before touching any config — every TLS setting below references it. Note that certbot&#8217;s standalone authenticator binds port 80, so stop any web server on this machine first:</p>
<pre><code>apt-get install certbot
certbot certonly --standalone -d mail.example.com</code></pre>



<h2 style="color:#f59e0b">Step 2 — Configure Postfix</h2>
<p>Postfix is configured through two files: <code>/etc/postfix/main.cf</code> (settings) and <code>/etc/postfix/master.cf</code> (service definitions). Replace your <code>main.cf</code> with this production configuration:</p>
<pre><code># Identity
myhostname = mail.example.com
mydomain = example.com
myorigin = $mydomain
mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain

# Network
inet_interfaces = all
inet_protocols = ipv4
mynetworks = 127.0.0.0/8

# Mailbox delivery via Dovecot LMTP
mailbox_transport = lmtp:unix:private/dovecot-lmtp
virtual_transport = lmtp:unix:private/dovecot-lmtp

# TLS inbound (from other mail servers)
smtpd_tls_cert_file = /etc/letsencrypt/live/mail.example.com/fullchain.pem
smtpd_tls_key_file = /etc/letsencrypt/live/mail.example.com/privkey.pem
smtpd_tls_security_level = may
smtpd_tls_protocols = !SSLv2,!SSLv3,!TLSv1,!TLSv1.1
smtpd_tls_mandatory_protocols = !SSLv2,!SSLv3,!TLSv1,!TLSv1.1
smtpd_tls_mandatory_ciphers = high
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtpd_tls_loglevel = 1

# TLS outbound (to other mail servers)
smtp_tls_cert_file = /etc/letsencrypt/live/mail.example.com/fullchain.pem
smtp_tls_key_file = /etc/letsencrypt/live/mail.example.com/privkey.pem
smtp_tls_security_level = may
smtp_tls_protocols = !SSLv2,!SSLv3,!TLSv1,!TLSv1.1
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
smtp_tls_loglevel = 1

# SASL authentication (Dovecot handles this)
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_auth_enable = yes
smtpd_sasl_security_options = noanonymous

# Anti-spam restrictions
smtpd_recipient_restrictions =
  permit_mynetworks,
  permit_sasl_authenticated,
  reject_unauth_destination,
  reject_invalid_hostname,
  reject_non_fqdn_hostname,
  reject_non_fqdn_sender,
  reject_non_fqdn_recipient,
  reject_unknown_sender_domain,
  reject_unknown_recipient_domain,
  reject_rbl_client zen.spamhaus.org

# Rspamd milter
smtpd_milters = inet:127.0.0.1:11332
non_smtpd_milters = inet:127.0.0.1:11332
milter_protocol = 6
milter_default_action = accept

# Limits
message_size_limit = 52428800
mailbox_size_limit = 0</code></pre>
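<p>The <code>message_size_limit</code> value is expressed in bytes — 52428800 is exactly 50 MiB. A quick shell sanity check:</p>

```shell
# message_size_limit is in bytes: 50 MiB = 50 * 1024 * 1024
echo $((50 * 1024 * 1024))
```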

<p>Now enable the submission ports in <code>/etc/postfix/master.cf</code>. Find (or uncomment) these lines:</p>
<pre><code>submission inet n       -       y       -       -       smtpd
  -o syslog_name=postfix/submission
  -o smtpd_tls_security_level=encrypt
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_tls_auth_only=yes
  -o smtpd_reject_unlisted_recipient=no
  -o smtpd_relay_restrictions=permit_sasl_authenticated,reject
  -o milter_macro_daemon_name=ORIGINATING

smtps     inet  n       -       y       -       -       smtpd
  -o syslog_name=postfix/smtps
  -o smtpd_tls_wrappermode=yes
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_relay_restrictions=permit_sasl_authenticated,reject
  -o milter_macro_daemon_name=ORIGINATING</code></pre>
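<p>Before restarting, ask Postfix to validate both files — a typo in <code>master.cf</code> can take mail delivery down entirely:</p>

```shell
postfix check                 # parses main.cf and master.cf, reports problems
postconf -n                   # prints only the settings you set explicitly
postconf -M submission/inet   # confirms the submission service is defined
```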



<h2 style="color:#f59e0b">Step 3 — Configure Dovecot</h2>
<p>Dovecot&#8217;s configuration lives in <code>/etc/dovecot/dovecot.conf</code> and the <code>/etc/dovecot/conf.d/</code> directory. The conf.d approach is clean — each file handles one concern. Edit these key files:</p>

<h3>/etc/dovecot/dovecot.conf</h3>
<pre><code>protocols = imap lmtp</code></pre>

<h3>/etc/dovecot/conf.d/10-mail.conf</h3>
<pre><code>mail_location = maildir:~/Maildir
namespace inbox {
  inbox = yes
}</code></pre>

<h3>/etc/dovecot/conf.d/10-ssl.conf</h3>
<pre><code>ssl = required
ssl_cert = &lt;/etc/letsencrypt/live/mail.example.com/fullchain.pem
ssl_key = &lt;/etc/letsencrypt/live/mail.example.com/privkey.pem
ssl_min_protocol = TLSv1.2
ssl_cipher_list = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256
ssl_prefer_server_ciphers = yes</code></pre>

<h3>/etc/dovecot/conf.d/10-auth.conf</h3>
<pre><code>disable_plaintext_auth = yes
auth_mechanisms = plain login

!include auth-system.conf.ext</code></pre>

<h3>/etc/dovecot/conf.d/10-master.conf</h3>
<p>This is the most important file — it wires Dovecot to Postfix via LMTP and auth sockets:</p>
<pre><code>service imap-login {
  inet_listener imap {
    port = 0   # disable plain IMAP
  }
  inet_listener imaps {
    port = 993
    ssl = yes
  }
}

service lmtp {
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
    mode = 0600
    user = postfix
    group = postfix
  }
}

service auth {
  unix_listener /var/spool/postfix/private/auth {
    mode = 0666
    user = postfix
    group = postfix
  }
  unix_listener auth-userdb {
    mode = 0600
    user = dovecot
  }
}</code></pre>

<h3>Create mail users</h3>
<p>For a simple setup, use system users. Create a mail user for each mailbox:</p>
<pre><code>useradd -m -s /usr/sbin/nologin alice
useradd -m -s /usr/sbin/nologin bob
passwd alice   # set a password for IMAP login</code></pre>
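<p>You can verify credentials against Dovecot directly, without configuring a mail client (this assumes the services are already running — see Step 6):</p>

```shell
# Prompts for the password and reports whether the passdb lookup succeeded
doveadm auth test alice
```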



<h2 style="color:#f59e0b">Step 4 — Configure Rspamd</h2>
<p>Rspamd is a modern spam filter that replaces SpamAssassin — it&#8217;s significantly faster and has better default rules. It integrates with Postfix as a milter (mail filter), meaning Postfix runs every message through Rspamd before accepting it.</p>

<h3>Enable Redis backend</h3>
<pre><code>cat &gt; /etc/rspamd/local.d/redis.conf &lt;&lt;'EOF'
servers = "127.0.0.1";
EOF</code></pre>
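<p>Bayes learning and rate limiting silently stop working if Redis is unreachable, so confirm it actually answers:</p>

```shell
redis-cli ping   # expect: PONG
```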

<h3>Generate DKIM keys</h3>
<p>DKIM cryptographically signs your outgoing mail. Every major mail provider (Gmail, Outlook, Yahoo) checks DKIM before deciding whether your mail is legitimate. Without it, you&#8217;ll land in spam.</p>
<pre><code>mkdir -p /var/lib/rspamd/dkim
# key file name must match the $domain.$selector.key path used in dkim_signing.conf below
rspamadm dkim_keygen -s mail -d example.com -k /var/lib/rspamd/dkim/example.com.mail.key &gt; /tmp/dkim-dns.txt
cat /tmp/dkim-dns.txt   # you'll need this DNS record shortly
chmod 640 /var/lib/rspamd/dkim/example.com.mail.key
chown rspamd:rspamd /var/lib/rspamd/dkim/example.com.mail.key</code></pre>

<h3>Configure DKIM signing</h3>
<pre><code>cat &gt; /etc/rspamd/local.d/dkim_signing.conf &lt;&lt;'EOF'
path = "/var/lib/rspamd/dkim/$domain.$selector.key";
selector = "mail";
EOF</code></pre>

<h3>Set spam action thresholds</h3>
<pre><code>cat &gt; /etc/rspamd/local.d/actions.conf &lt;&lt;'EOF'
reject = 15;      # reject outright (clear spam)
greylist = 4;     # greylist suspicious mail
add_header = 6;   # add X-Spam header
EOF</code></pre>

<h3>Enable the Rspamd web UI (optional)</h3>
<pre><code># Generate the password hash first — the quoted heredoc below does not
# expand $(...), so the hash must be pasted in as a literal string
rspamadm pw -p YOUR_PASSWORD_HERE   # prints a hash; copy it

cat &gt; /etc/rspamd/local.d/worker-controller.inc &lt;&lt;'EOF'
bind_socket = "127.0.0.1:11334";
password = "PASTE_THE_HASH_HERE";
EOF</code></pre>
<p>Access the UI through an SSH tunnel: <code>ssh -L 11334:127.0.0.1:11334 yourserver</code>, then open <code>http://localhost:11334</code>.</p>



<h2 style="color:#f59e0b">Step 5 — DNS records</h2>
<p>These are the records that determine whether your mail gets delivered or dumped in spam. Every single one matters.</p>

<h3>MX record — where to deliver mail to your domain</h3>
<pre><code>example.com.    IN  MX  10  mail.example.com.</code></pre>

<h3>A record — your mail server&#8217;s IP</h3>
<pre><code>mail.example.com.  IN  A  YOUR_SERVER_IP</code></pre>

<h3>PTR record (reverse DNS) — set at your provider, not in your DNS panel</h3>
<p>Log into your VPS provider control panel. Find the IP management section. Set the PTR/reverse DNS for your server IP to <code>mail.example.com</code>. This is checked by most receiving servers. If it&#8217;s wrong, your mail will be rejected or flagged.</p>

<h3>SPF record — which servers are allowed to send mail for your domain</h3>
<pre><code>example.com.  IN  TXT  "v=spf1 mx -all"</code></pre>
<p>The <code>mx</code> means your MX server is allowed to send. The <code>-all</code> means everyone else is a hard fail. If you also send from a third-party service (Mailchimp, Sendgrid), add their <code>include:</code> clause before the <code>-all</code>.</p>
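<p>For example, a record that also authorises a hypothetical third-party sender (SendGrid here, purely as an illustration) would look like:</p>

```
example.com.  IN  TXT  "v=spf1 mx include:sendgrid.net -all"
```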

<h3>DKIM record — the public key that validates your DKIM signatures</h3>
<p>The public key was output in <code>/tmp/dkim-dns.txt</code> in the previous step. It looks like:</p>
<pre><code>mail._domainkey.example.com.  IN  TXT  "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA..."</code></pre>
<p>Copy the full string from your file. The key is long — make sure you get all of it.</p>

<h3>DMARC record — policy for failed SPF/DKIM</h3>
<pre><code>_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com; ruf=mailto:dmarc@example.com; fo=1"</code></pre>
<p>Start with <code>p=quarantine</code> (send failures to spam) rather than <code>p=reject</code> (block them). Once you&#8217;ve monitored DMARC reports for a month and confirmed everything is aligned, switch to <code>p=reject</code>.</p>
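<p>Once the records are published, verify what the world actually sees (DNS caching means changes can take a few minutes to appear):</p>

```shell
dig +short TXT example.com                   # SPF
dig +short TXT mail._domainkey.example.com   # DKIM public key
dig +short TXT _dmarc.example.com            # DMARC policy
```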

<h3>MTA-STS and TLSRPT (optional but recommended)</h3>
<p>MTA-STS tells sending servers they must use TLS when delivering to you. Add:</p>
<pre><code>_mta-sts.example.com.  IN  TXT  "v=STSv1; id=20260512"
_smtp._tls.example.com.  IN  TXT  "v=TLSRPTv1; rua=mailto:tlsrpt@example.com"</code></pre>
<p>Then create a policy file at <code>https://mta-sts.example.com/.well-known/mta-sts.txt</code>:</p>
<pre><code>version: STSv1
mode: enforce
mx: mail.example.com
max_age: 86400</code></pre>
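<p>Confirm the policy file is actually being served over HTTPS before senders start fetching it:</p>

```shell
curl -s https://mta-sts.example.com/.well-known/mta-sts.txt
```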



<h2 style="color:#f59e0b">Step 6 — Start and verify</h2>
<pre><code>systemctl enable --now redis-server rspamd dovecot postfix
systemctl restart redis-server rspamd dovecot postfix</code></pre>

<h3>Check each service is up</h3>
<pre><code>systemctl status postfix dovecot rspamd redis-server</code></pre>

<h3>Check ports are listening</h3>
<pre><code>ss -tlnp | grep -E ':(25|465|587|993|11332)\b'</code></pre>
<p>You should see Postfix on 25, 465, 587; Dovecot on 993; Rspamd milter on 11332.</p>

<h3>Test SMTP inbound</h3>
<pre><code>telnet mail.example.com 25
# Should see: 220 mail.example.com ESMTP Postfix</code></pre>

<h3>Test IMAP</h3>
<pre><code>openssl s_client -connect mail.example.com:993
# Should see: * OK Dovecot ready.</code></pre>

<h3>Send a test email</h3>
<pre><code>echo "Test mail" | mail -s "Test" you@gmail.com
tail -f /var/log/mail.log</code></pre>
<p>Watch the log for errors. A clean delivery looks like:</p>
<pre><code>postfix/smtp[12345]: ... status=sent (250 2.0.0 OK)</code></pre>



<h2 style="color:#f59e0b">Step 7 — Test your score at mail-tester.com</h2>
<p>Go to <a href="https://www.mail-tester.com" target="_blank" rel="noopener noreferrer">mail-tester.com</a>. It gives you a unique address to send a test email to, then scores your setup from 1–10 across deliverability, authentication, blacklists, and content.</p>
<pre><code>echo "Testing my mail server" | mail -s "Mail tester" [your-unique-address]@mail-tester.com</code></pre>
<p>Common issues and fixes:</p>
<ul>
<li><strong>SPF fails:</strong> Check your TXT record syntax. Wait up to 10 minutes for DNS propagation.</li>
<li><strong>DKIM fails:</strong> Check the DKIM public key matches the private key. Re-run <code>rspamadm dkim_keygen</code> if needed.</li>
<li><strong>DMARC fails:</strong> Both SPF and DKIM must pass (or at least one aligned with the From domain).</li>
<li><strong>Listed on a blacklist:</strong> Check <a href="https://mxtoolbox.com/blacklists.aspx" target="_blank" rel="noopener noreferrer">MXToolbox</a>. Fresh IPs can be pre-listed on some blocklists — most accept a quick request for removal.</li>
<li><strong>PTR mismatch:</strong> Verify <code>dig -x YOUR_IP</code> returns <code>mail.example.com</code>. If not, fix the PTR at your provider.</li>
</ul>



<h2 style="color:#f59e0b">Step 8 — Ongoing maintenance</h2>

<h3>Let&#8217;s Encrypt certificate renewal</h3>
<p>Add a post-renewal hook so the mail services reload and pick up the new certificate after each renewal:</p>
<pre><code>cat &gt; /etc/letsencrypt/renewal-hooks/post/mail.sh &lt;&lt;'EOF'
#!/bin/bash
systemctl reload postfix
systemctl reload dovecot
EOF
chmod +x /etc/letsencrypt/renewal-hooks/post/mail.sh</code></pre>
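<p>Note that certbot does not run the <code>renewal-hooks</code> directories during a dry run, so test the hook script directly, then dry-run the renewal itself:</p>

```shell
bash /etc/letsencrypt/renewal-hooks/post/mail.sh && echo "hook OK"
certbot renew --dry-run
```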

<h3>Monitor the mail log</h3>
<pre><code>tail -f /var/log/mail.log</code></pre>
<p>Common things to watch for: repeated delivery failures to a domain (their mail server may be down), spikes in rejected connections (potential attack), and authentication failures (someone testing your credentials).</p>

<h3>Train Rspamd&#8217;s Bayes filter</h3>
<p>After a week or two, train Rspamd with actual spam and ham to improve accuracy:</p>
<pre><code># Train as spam
rspamc learn_spam &lt; /path/to/spam-message.eml

# Train as ham (legitimate mail)
rspamc learn_ham &lt; /path/to/good-message.eml</code></pre>
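<p>Training message by message is tedious. If your users file spam into a Junk folder (the <code>.Junk</code> Maildir name here is an assumption — adjust it to whatever your client actually uses), you can bulk-train from it:</p>

```shell
# Feed every message in the Junk folder to the Bayes classifier
for f in /home/alice/Maildir/.Junk/cur/*; do
  rspamc learn_spam < "$f"
done
```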

<h3>DMARC reports</h3>
<p>Once you&#8217;ve set up the <code>rua</code> address in your DMARC record, you&#8217;ll receive aggregate reports from major mail providers. These show which IPs are sending mail claiming to be from your domain. If you see IPs you don&#8217;t recognise, someone is spoofing you — tighten your DMARC policy from <code>quarantine</code> to <code>reject</code>.</p>



<h2 style="color:#f59e0b">Adding virtual mailboxes (multiple domains)</h2>
<p>The system user approach works for small setups. For multiple domains or many users, virtual mailboxes are cleaner — users exist only in a database, not as Unix accounts.</p>

<h3>Install a simple flat-file virtual setup</h3>
<pre><code># /etc/postfix/main.cf additions:
virtual_mailbox_domains = example.com otherdomain.com
virtual_mailbox_base = /var/mail/virtual
virtual_mailbox_maps = hash:/etc/postfix/vmailbox
virtual_minimum_uid = 100
virtual_uid_maps = static:5000
virtual_gid_maps = static:5000</code></pre>

<pre><code># Create the user that owns all virtual mail
groupadd -g 5000 vmail
useradd -u 5000 -g 5000 -s /usr/sbin/nologin -d /var/mail/virtual vmail
mkdir -p /var/mail/virtual
chown vmail:vmail /var/mail/virtual</code></pre>

<pre><code># /etc/postfix/vmailbox
alice@example.com     example.com/alice/
bob@example.com       example.com/bob/
info@otherdomain.com  otherdomain.com/info/</code></pre>

<pre><code>postmap /etc/postfix/vmailbox
postfix reload</code></pre>
<p>For large deployments (100+ users), store virtual mailboxes in MySQL or PostgreSQL using <code>postfix-mysql</code> and <code>dovecot-mysql</code>.</p>



<h2 style="color:#f59e0b">Security hardening</h2>
<p>A mail server open to the internet is a constant target. These settings close common attack vectors.</p>

<h3>Postfix hardening</h3>
<pre><code># Add to /etc/postfix/main.cf

# Prevent open relay
smtpd_relay_restrictions = permit_mynetworks, permit_sasl_authenticated, reject

# Rate limiting
smtpd_client_connection_rate_limit = 20
smtpd_client_message_rate_limit = 30
anvil_rate_time_unit = 60s

# Disable VRFY (stops address enumeration)
disable_vrfy_command = yes

# Don't expose version info
smtpd_banner = $myhostname ESMTP</code></pre>

<h3>Fail2ban for mail</h3>
<pre><code>apt-get install fail2ban</code></pre>

<pre><code># /etc/fail2ban/jail.d/mail.conf
[postfix]
enabled = true
port = smtp,465,587
logpath = /var/log/mail.log
maxretry = 5
bantime = 3600

[dovecot]
enabled = true
port = imaps
logpath = /var/log/mail.log
maxretry = 5
bantime = 3600</code></pre>
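<p>After enabling the jails, check that they are active and see who is currently banned:</p>

```shell
fail2ban-client status            # lists active jails
fail2ban-client status postfix    # bans for the postfix jail
```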

<h3>Rspamd DMARC enforcement</h3>
<pre><code># /etc/rspamd/local.d/dmarc.conf
# Map the sending domain's published DMARC policy to Rspamd actions.
# Reporting is disabled by default; aggregate reports from other
# providers arrive at your rua address instead.
actions = {
  quarantine = "add_header";
  reject = "reject";
}</code></pre>



<h2 style="color:#f59e0b">Frequently asked questions</h2>
<div class="faq">
  <div class="faq-item"><div class="faq-q">Will my emails end up in spam?</div><div class="faq-a">Not if you set up SPF, DKIM, DMARC, and reverse DNS correctly. New IP addresses may have a brief warm-up period with some providers — start by sending small volumes to your own accounts at Gmail and Outlook before sending to customers. A score of 10/10 on mail-tester.com means you&#8217;re configured correctly.</div></div>
  <div class="faq-item"><div class="faq-q">What is Rspamd and why use it instead of SpamAssassin?</div><div class="faq-a">Rspamd is a modern spam filter written in C, typically 10-100x faster than SpamAssassin, which is written in Perl. It has a built-in DKIM signing module, Redis-based Bayes learning, native milter support for Postfix, and a web UI. For new setups in 2026, Rspamd is the right choice.</div></div>
  <div class="faq-item"><div class="faq-q">How do I add more email addresses?</div><div class="faq-a">For system user mailboxes: create a new Unix user with useradd. For virtual mailboxes: add the address to /etc/postfix/vmailbox and run postmap. For aliases (forwarding): add to /etc/aliases and run newaliases.</div></div>
  <div class="faq-item"><div class="faq-q">Does myguard ship the latest Postfix and Dovecot versions?</div><div class="faq-a">Yes. The myguard repository tracks upstream releases and publishes updated packages within hours of new versions. Postfix 3.9.x, Dovecot 2.4.x, and Rspamd 3.x are available — compared to the older versions frozen in Debian stable.</div></div>
  <div class="faq-item"><div class="faq-q">Can I run this mail server alongside NGINX on the same machine?</div><div class="faq-a">Yes. Postfix and Dovecot use completely different ports from NGINX (25, 465, 587, 993 vs 80, 443). They coexist without conflict. A typical setup runs NGINX as the web server and Let&#8217;s Encrypt provider, with Postfix and Dovecot using the same certificates.</div></div>
  <div class="faq-item"><div class="faq-q">How do I back up email?</div><div class="faq-a">Maildir stores each message as a separate file under ~/Maildir/. Backing up is as simple as rsync: rsync -av /home/alice/Maildir/ backup-server:/backups/alice/. Run this daily via cron. Incremental backups work naturally because new files are just new messages.</div></div>
  <div class="faq-item"><div class="faq-q">My IP is on a spam blacklist — how do I get off?</div><div class="faq-a">Check at MXToolbox blacklist checker. Most blacklists have a self-service removal form. Spamhaus ZEN is the most important — their form is at spamhaus.org/removal. Confirm you&#8217;ve fixed the underlying issue first (open relay, malware, etc.) before requesting removal.</div></div>
</div>



<h2 style="color:#f59e0b">Related posts</h2>
<ul>
<li><a href="/2026/05/tls-configuration-ssllabs-a-plus/">TLS Configuration for NGINX and Angie: A+ on SSL Labs</a> — the same TLS hardening principles apply to your mail server</li>
<li><a href="/2026/05/php-snuffleupagus-tutorial-harden-php-fpm/">PHP Snuffleupagus Tutorial: Harden PHP-FPM</a> — if you&#8217;re running a web app on the same server, add PHP-level security too</li>
<li><a href="/2026/05/nginx-modsecurity-setup-debian-ubuntu/">NGINX ModSecurity WAF Setup</a> — WAF protection for any web applications running alongside your mail server</li>
<li><a href="/packages/">Full package list</a> — Postfix, Dovecot, Rspamd, Redis and all dependencies available via APT</li>
<li><a href="/how-to-use/">How to add the myguard APT repository</a> — two-minute setup</li>
</ul>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>angie 1.11.4</title>
		<link>https://deb.myguard.nl/2026/05/angie-web-server-review-2026/</link>
		
		<dc:creator><![CDATA[Thijs Eilander]]></dc:creator>
		<pubDate>Tue, 12 May 2026 19:57:09 +0000</pubDate>
				<category><![CDATA[nginx]]></category>
		<category><![CDATA[pbuilder]]></category>
		<guid isPermaLink="false">https://deb.myguard.nl/2026/05/angie-web-server-review-2026/</guid>

					<description><![CDATA[This post has been consolidated into the complete Angie guide.]]></description>
										<content:encoded><![CDATA[<p>Version <code>1.11.4</code> — <em>2026-05-13</em></p>
<h2>Changes</h2>
<ul>
<li>Full rebuild and backport with latest Mainline</li>
<li>Merged with the source package from Debian Trixie in November 2023</li>
<li>See https://deb.myguard.nl/nginx-modules/ for more information</li>
<li>Changelog: https://deb.myguard.nl/forums/topic/changelog/</li>
</ul>
<h2>Distributions</h2>
<ul>
<li>bookworm</li>
<li>jammy</li>
<li>noble</li>
<li>resolute</li>
<li>trixie</li>
</ul>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
