How to run Caddy on OPNsense
Guide on installing Caddy on OPNsense, adding your own Caddyfile snippets, and reverse proxying services — with a simple HTTP backend example and an optional two-Caddy HTTPS setup (including real client IP handling).
Caddy on OPNsense is a nice way to keep public access controlled at the firewall while still hosting your services on internal machines.
This guide starts simple (one Caddy + HTTP backend) and then shows the optional two-Caddy setup if you want HTTPS between OPNsense and an internal server.
Install the Caddy plugin
- System → Firmware → Plugins: install `os-caddy`
- Services → Caddy Web Server → General: enable Caddy, set an ACME email address, and Apply
After this, Caddy is running and you can start adding sites.
Config files on OPNsense (no GUI)
OPNsense’s Caddy plugin can be configured entirely from the GUI, but this guide doesn’t use the GUI for site configuration.
Everything below is done by dropping your own Caddyfile snippets into Caddy’s import directory.
Main file
- Main Caddyfile: `/usr/local/etc/caddy/Caddyfile`
Never edit this file directly; the plugin manages it and regenerates it from its own settings.
Drop-in directory (the one you should use)
- Custom snippets directory: `/usr/local/etc/caddy/caddy.d/`
Caddy automatically imports files from here:
- `*.global` → goes into the global options block (`{ ... }`)
- `*.conf` → goes into the site/server part (your domains and handlers)
Simple layout I use
```
/usr/local/etc/caddy/caddy.d/
    00-global.global
    10-app.conf
    20-vaultwarden.conf
```
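To illustrate the split: a `.global` file is imported inside the main Caddyfile’s global options block, so it holds bare directives with no surrounding braces. A minimal sketch (the directive chosen here is just an example, not something this setup requires):

```caddyfile
# /usr/local/etc/caddy/caddy.d/00-global.global
# Imported inside the global options { } block — bare directives, no braces.
# Example only: give in-flight requests 10s to finish on reload/shutdown.
grace_period 10s
```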
After editing files, reload/restart the Caddy service from the OPNsense GUI or from a shell (e.g. `service caddy restart`). If you break the syntax, Caddy won’t start, so changes are “fail fast”.
Open WAN ports for Caddy
WAN rules (OPNsense is the edge firewall)
| Rule | Interface | Protocol | Source | Destination | Port | Action | Purpose |
|---|---|---|---|---|---|---|---|
| Allow HTTP | WAN | TCP | any | This firewall | 80 | Pass | Optional (redirects / HTTP access) |
| Allow HTTPS | WAN | TCP | any | This firewall | 443 | Pass | Required for public HTTPS |
DNS challenge (DNS-01) does not require ports 80/443 to be open for certificate issuance.
But if you want users to reach the service from the internet, you still need 443 open to OPNsense.
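The plugin normally handles DNS-01 through its GUI fields, but the raw Caddyfile equivalent looks roughly like this. The Cloudflare DNS module and the `CF_API_TOKEN` environment variable are assumptions for illustration; use whichever provider module your setup actually has:

```caddyfile
app.example.com {
    tls {
        # DNS-01: ownership is proven via a DNS TXT record,
        # so ports 80/443 need not be reachable for issuance.
        dns cloudflare {env.CF_API_TOKEN}
    }
    reverse_proxy http://192.168.10.50:8123
}
```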
One Caddy with an HTTP backend
This is the easiest setup: OPNsense terminates HTTPS publicly and forwards to an internal service over HTTP.
Example:
- Public: `app.example.com`
- Internal service: `http://192.168.10.50:8123`
Create a file like `/usr/local/etc/caddy/caddy.d/10-app.conf`:
```caddyfile
app.example.com {
    log
    encode zstd gzip
    reverse_proxy http://192.168.10.50:8123
}
```
That’s it. In many home labs, this is totally fine because the backend traffic stays inside your LAN/VPN.
Block a path with a friendly 403 page
If you want to block a path like /admin, you can have Caddy serve a static 403 page.
Create the file on OPNsense
- Create the directory: `/usr/local/etc/caddy/additional-pages/`
- Create the file: `/usr/local/etc/caddy/additional-pages/forbidden.html`
forbidden.html (paste as-is)
```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  <title>403 Forbidden</title>
  <style>
    body {
      background: #0f0f0f;
      color: #e3e3e3;
      font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Arial, sans-serif;
      margin: 0;
      padding: 20px;
      display: flex;
      flex-direction: column;
      align-items: center;
      justify-content: center;
      min-height: 100vh;
      text-align: center;
    }
    h1 {
      font-size: 64px;
      margin: 0 0 10px;
      color: #ff4444;
    }
    p {
      font-size: 18px;
      opacity: 0.85;
      max-width: 300px;
      line-height: 1.4;
    }
    @media (max-width: 480px) {
      h1 {
        font-size: 48px;
      }
      p {
        font-size: 16px;
      }
    }
  </style>
</head>
<body>
  <h1>403</h1>
  <p>Access to this resource is forbidden.</p>
</body>
</html>
```
Use it in your site block
Create or edit your site file (example: `/usr/local/etc/caddy/caddy.d/10-app.conf`):

```caddyfile
app.example.com {
    log
    encode zstd gzip

    @restrictedPath path /admin
    handle @restrictedPath {
        root * /usr/local/etc/caddy/additional-pages
        try_files /forbidden.html
        file_server {
            hide *
            hide ..
        }
    }

    reverse_proxy http://192.168.10.50:8123
}
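One caveat: `file_server` serves the page with a 200 status, not an actual 403. If the status code matters to you, a sketch using Caddy’s `error` directive plus `handle_errors` (status-code arguments to `handle_errors` need a reasonably recent Caddy, roughly 2.8+):

```caddyfile
app.example.com {
    @restrictedPath path /admin
    # Return a real 403 for the restricted path...
    error @restrictedPath 403

    # ...and render the custom page for that error.
    handle_errors 403 {
        root * /usr/local/etc/caddy/additional-pages
        rewrite * /forbidden.html
        file_server
    }

    reverse_proxy http://192.168.10.50:8123
}
```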
Two-Caddy setup (HTTPS on the inside hop)
If you want HTTPS on the internal hop too, use two hostnames:
- Public name: `vaultwarden.example.com` → OPNsense WAN IP
- Internal name: `vaultwarden.internal.example.com` → internal server IP (split DNS recommended)
Create a file like `/usr/local/etc/caddy/caddy.d/20-vaultwarden.conf`:
Edge Caddy (OPNsense)
```caddyfile
vaultwarden.example.com {
    log
    encode zstd gzip

    @restrictedPath path /admin
    handle @restrictedPath {
        root * /usr/local/etc/caddy/additional-pages
        try_files /forbidden.html
        file_server {
            hide *
            hide ..
        }
    }

    reverse_proxy https://vaultwarden.internal.example.com {
        header_up Host "vaultwarden.internal.example.com"
        header_up X-Forwarded-Proto "https"
    }
}
```
Why header_up Host matters
When you proxy from the public name (vaultwarden.example.com) to an internal name (vaultwarden.internal.example.com), the incoming Host header is still the public one.
Caddy’s reverse proxy forwards the original Host header by default (that’s usually what you want).
But in a two-Caddy setup, the internal Caddy uses the Host to select the correct site block. If the internal Caddy only has:
`vaultwarden.internal.example.com { ... }`
…and it receives a request with:
`Host: vaultwarden.example.com`
then it won’t match that site, and you’ll get the wrong site (or a default/404).
So we force the Host header to what the internal Caddy expects:
`header_up Host "vaultwarden.internal.example.com"`
Internal Caddy config and real client IPs
When requests reach your internal server, the TCP connection comes from OPNsense.
Without extra config, your internal Caddy (and your service) will see the client IP as OPNsense’s IP, not the real user IP.
Caddy can preserve the real client IP using forwarded headers:
- `X-Forwarded-For` (added automatically by Caddy’s `reverse_proxy`)
- `X-Real-IP` (we often pass it explicitly to the app)
But this must be done safely:
- `trusted_proxies`: only trust forwarded IP headers from your own network/proxies (otherwise any random client could spoof `X-Forwarded-For`)
- `client_ip_headers`: tells Caddy which headers to use to determine `{client_ip}`
Once this is set, {client_ip} becomes the real user IP (not the firewall), which means:
- logs show the actual client
- rate-limits / allowlists make sense
- security-sensitive apps get correct audit trails
On the internal server, the Caddyfile looks like this:

```caddyfile
{
    servers {
        trusted_proxies static private_ranges
        client_ip_headers X-Forwarded-For X-Real-IP
    }
}

vaultwarden.internal.example.com {
    reverse_proxy vaultwarden:80 {
        header_up X-Real-IP {client_ip}
    }
}
```
Note: internal HTTPS with “untrusted” certificates (Home Assistant and similar)
Some services (a common example is Home Assistant) can fail if the backend is HTTPS but uses a certificate that the proxy doesn’t trust (self-signed, internal CA not installed, etc.).
If your upstream looks like this:
- OPNsense → `https://internal-service` (self-signed / internal CA)
You may need to tell Caddy to skip TLS verification for that upstream:
```caddyfile
homeassistant.example.com {
    reverse_proxy https://192.168.10.50:8123 {
        transport http {
            tls_insecure_skip_verify
        }
    }
}
```
This is mainly a “make it work” option for private networks. The nicer long-term fix is deploying a certificate your proxy trusts (for example, your own internal CA installed properly).
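If you go the internal-CA route, you can keep verification and just point Caddy’s HTTP transport at your CA instead. A sketch, where the bundle path is an assumption (`tls_trust_pool` needs roughly Caddy 2.8+; older versions use `tls_trusted_ca_certs`):

```caddyfile
homeassistant.example.com {
    reverse_proxy https://192.168.10.50:8123 {
        transport http {
            # Trust only your internal CA instead of disabling verification.
            tls_trust_pool file /usr/local/etc/ssl/internal-ca.pem
        }
    }
}
```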
Quick checklist if something breaks
- WAN rules allow 443 to “This firewall”
- Public DNS points to your WAN IP (for the public hostname)
- OPNsense can reach the backend service IP/port
- Two-Caddy setup: internal hostname resolves to the internal server (split DNS)
- Two-Caddy setup: `header_up Host ...` is set correctly
- Backend uses self-signed/internal TLS: use `tls_insecure_skip_verify` (or fix trust)
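When the checklist doesn’t surface the problem, verbose logs usually do. A minimal sketch, using the drop-in directory from earlier:

```caddyfile
# /usr/local/etc/caddy/caddy.d/00-global.global
# Temporary: verbose logging while troubleshooting. Remove when done.
debug
```

Then reload Caddy and watch the log output while reproducing the failing request.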