# Personal Activity Index – Deployment Guide
This guide walks through two common reverse proxy setups for `pai serve`: nginx and Caddy.
Both sections include native (host binary) instructions and optional Docker paths if you prefer containerized deployments.
## Table of Contents
- Prerequisites
- nginx Deployment
- Caddy Deployment
- Health Checks & Monitoring
- Cloudflare Worker Deployment
## Prerequisites

- Build the binary:

  ```sh
  cargo build --release -p pai
  ```

  The binary will live at `target/release/pai`.

- Prepare a configuration and database location. The default locations follow the XDG spec, but you can override them with `-C` (config dir) and `-d` (database path).

- Run a sync at least once so the database has data:

  ```sh
  ./target/release/pai sync -C /etc/pai -d /var/lib/pai/pai.db -a
  ```

- Start the server (the example binds to localhost so the proxy terminates TLS):

  ```sh
  ./target/release/pai serve -d /var/lib/pai/pai.db -a 127.0.0.1:8080
  ```
## nginx Deployment

### Host Setup

- Install nginx via your package manager (`apt`, `dnf`, `brew`, etc.).

- Create a systemd service for `pai` (optional but recommended):

  ```ini
  [Unit]
  Description=Personal Activity Index
  After=network.target

  [Service]
  ExecStart=/usr/local/bin/pai serve -d /var/lib/pai/pai.db -a 127.0.0.1:8080
  Restart=on-failure
  User=pai
  Group=pai
  WorkingDirectory=/var/lib/pai

  [Install]
  WantedBy=multi-user.target
  ```

- Enable and start it:

  ```sh
  sudo systemctl daemon-reload
  sudo systemctl enable --now pai.service
  ```
### nginx Config

Create `/etc/nginx/conf.d/pai.conf`:

```nginx
server {
    listen 80;
    server_name pai.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Reload nginx: `sudo nginx -s reload`.
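If nginx itself terminates TLS, a sketch of the HTTPS variant of the same server block; the certificate paths below are an assumption (certbot's default layout for `pai.example.com`) and should be adjusted to wherever your certificates actually live:

```nginx
server {
    listen 443 ssl;
    server_name pai.example.com;

    # Assumed certbot-managed certificate paths; adjust as needed.
    ssl_certificate     /etc/letsencrypt/live/pai.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/pai.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```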
### Optional: nginx via Docker

Use an nginx image plus a bind-mounted config:

```yaml
services:
  pai:
    image: ghcr.io/your-namespace/pai:latest
    command: ["serve", "-d", "/data/pai.db", "-a", "0.0.0.0:8080"]
    volumes:
      - ./data:/data
    expose:
      - "8080"
  nginx:
    image: nginx:1.27
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    ports:
      - "80:80"
    depends_on:
      - pai
```

`nginx.conf` should proxy to `http://pai:8080` instead of `localhost`.
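A minimal `nginx.conf` for this Compose setup might look like the following sketch, proxying to the `pai` service name on the Compose network:

```nginx
server {
    listen 80;

    location / {
        proxy_pass http://pai:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```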
## Caddy Deployment

### Host Setup

- Install Caddy (https://caddyserver.com/docs/install).
- Keep the same `pai` systemd service from above (or run it manually).
### Caddyfile Example

Create `/etc/caddy/Caddyfile`:

```caddyfile
pai.example.com {
    reverse_proxy 127.0.0.1:8080
    encode gzip zstd
    header {
        Referrer-Policy "no-referrer-when-downgrade"
        X-Content-Type-Options "nosniff"
    }
}
```

Caddy automatically provisions TLS certificates via Let’s Encrypt. Reload with `sudo systemctl reload caddy`.
### Optional: Caddy + Docker Compose

```yaml
services:
  pai:
    image: ghcr.io/your-namespace/pai:latest
    command: ["serve", "-d", "/data/pai.db", "-a", "0.0.0.0:8080"]
    volumes:
      - ./data:/data
    expose:
      - "8080"
  caddy:
    image: caddy:2
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - pai

volumes:
  caddy_data:
  caddy_config:
```

Use the same Caddyfile contents as above, but point `reverse_proxy` to `pai:8080`.
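For the Compose setup, that Caddyfile might look like this sketch; the only change from the host version is the upstream address:

```caddyfile
pai.example.com {
    reverse_proxy pai:8080
    encode gzip zstd
    header {
        Referrer-Policy "no-referrer-when-downgrade"
        X-Content-Type-Options "nosniff"
    }
}
```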
## Health Checks & Monitoring

- `GET /status` – lightweight JSON (status, version, uptime, total items, counts per `source_kind`). Ideal for load-balancer health probes.
- `GET /api/feed?limit=1` ensures the server can read from SQLite and return real data.
- `GET /api/item/{id}` is handy for debugging a specific record.
- Consider wiring `/status` into nginx/Caddy health checks (`/healthz`) or your platform's monitoring agents.
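A monitoring agent can do more than check the HTTP status code: it can validate the `/status` body. A minimal sketch in Python; the field names (`status`, `version`, `uptime`, `total_items`, `counts`) are assumptions based on the description above, so adjust them to the payload your build actually returns:

```python
import json

# Assumed /status payload keys; adjust to the real response shape.
REQUIRED_FIELDS = {"status", "version", "uptime", "total_items", "counts"}

def is_healthy(raw: str) -> bool:
    """Return True if a /status response body looks like a healthy payload."""
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError:
        return False
    if not REQUIRED_FIELDS.issubset(payload):
        return False
    return payload.get("status") == "ok"

# Example body a probe might receive (illustrative values):
body = '{"status": "ok", "version": "0.1.0", "uptime": 42, "total_items": 1312, "counts": {"bluesky": 900}}'
```

A probe built this way fails fast on truncated or error responses instead of treating any 200 as healthy.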
## Cloudflare Worker Deployment
The Personal Activity Index can also be deployed as a Cloudflare Worker with a D1 database, providing a serverless alternative to self-hosting.
### Prerequisites

- Cloudflare account with Workers enabled
- Wrangler CLI installed
- Rust toolchain with the `wasm32-unknown-unknown` target
### Quick Start

#### 1. Generate Scaffolding

Use the `pai cf-init` command to generate the Cloudflare Worker configuration:

```sh
# Dry run to preview files
pai cf-init --dry-run -o cloudflare-deployment

# Create scaffolding
pai cf-init -o cloudflare-deployment
cd cloudflare-deployment
```

This creates:

- `wrangler.example.toml` – Worker configuration template
- `schema.sql` – D1 database schema
- `README.md` – Deployment instructions
#### 2. Create D1 Database

```sh
wrangler d1 create personal-activity-db
```

Copy the database ID from the output and update `wrangler.example.toml`:

```toml
[[d1_databases]]
binding = "DB"
database_name = "personal-activity-db"
database_id = "your-database-id-here"  # Replace with the actual ID
```

Then copy to the active config:

```sh
cp wrangler.example.toml wrangler.toml
```
#### 3. Initialize Database Schema

```sh
wrangler d1 execute personal-activity-db --file=schema.sql
```
#### 4. Build and Deploy

```sh
# Build the worker
cd ..
cargo install worker-build
worker-build --release -p pai-worker

# Deploy
cd cloudflare-deployment
wrangler deploy
```
### Cron Triggers

The worker includes a scheduled event handler for automatic syncing. Configure the schedule in `wrangler.toml`:

```toml
[triggers]
crons = ["0 * * * *"]  # Every hour at minute 0
```
Common schedules:

- `*/30 * * * *` – every 30 minutes
- `0 */6 * * *` – every 6 hours
- `0 0 * * *` – daily at midnight
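To sanity-check a schedule before deploying, the five-field matching can be sketched in Python. This is a simplified illustration (only `*`, `*/n`, and plain numbers are supported, and day-of-month/day-of-week are simply ANDed), not Cloudflare's actual parser:

```python
from datetime import datetime

def field_matches(field: str, value: int) -> bool:
    """Match one cron field against a value; supports '*', '*/n', and a plain number."""
    if field == "*":
        return True
    if field.startswith("*/"):
        return value % int(field[2:]) == 0
    return int(field) == value

def cron_matches(expr: str, when: datetime) -> bool:
    """Check a five-field cron expression (minute hour dom month dow) against a datetime."""
    minute, hour, dom, month, dow = expr.split()
    return (field_matches(minute, when.minute)
            and field_matches(hour, when.hour)
            and field_matches(dom, when.day)
            and field_matches(month, when.month)
            and field_matches(dow, when.isoweekday() % 7))  # cron convention: 0 = Sunday
```

For example, `cron_matches("*/30 * * * *", now)` is true exactly when the minute is 0 or 30.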
### Environment Variables

Configure sources in `wrangler.toml` under `[vars]`:

```toml
[vars]
# Substack RSS feed URL
SUBSTACK_URL = "https://patternmatched.substack.com"

# Bluesky handle
BLUESKY_HANDLE = "desertthunder.dev"

# Leaflet publications (comma-separated id:url pairs)
LEAFLET_URLS = "desertthunder:https://desertthunder.leaflet.pub,stormlightlabs:https://stormlightlabs.leaflet.pub"

# BearBlog publications (comma-separated id:url pairs)
BEARBLOG_URLS = "desertthunder:https://desertthunder.bearblog.dev"
```
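Because the URL half of each `id:url` pair itself contains `://`, decoding the format means splitting on the first colon only. A sketch of how such a value might be parsed (the Worker's actual parsing may differ):

```python
def parse_pairs(value: str) -> dict[str, str]:
    """Decode a comma-separated list of id:url pairs into {id: url}."""
    pairs = {}
    for entry in value.split(","):
        ident, url = entry.strip().split(":", 1)  # split on the FIRST colon only
        pairs[ident] = url
    return pairs

leaflet = parse_pairs(
    "desertthunder:https://desertthunder.leaflet.pub,"
    "stormlightlabs:https://stormlightlabs.leaflet.pub"
)
```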
### API Endpoints

The Worker exposes the same API as the self-hosted server:

- `GET /api/feed?source_kind=bluesky&limit=20` – list items
- `GET /api/item/{id}` – get a single item
- `GET /status` – health check
### Local Development

Test the worker locally before deploying:

```sh
wrangler dev
```

This starts a local server at http://localhost:8787 with live reload.
### Monitoring

View logs in real time:

```sh
wrangler tail
```

Or check logs in the Cloudflare Dashboard under Workers & Pages.