# Personal Activity Index – Deployment Guide

This guide walks through two common reverse proxy setups for `pai serve`: **nginx** and **Caddy**.
Both sections include native (host binary) instructions and optional Docker paths if you prefer containerized deployments.

## Table of Contents

- [Prerequisites](#prerequisites)
- [nginx Deployment](#nginx-deployment)
  - [Host Setup](#host-setup)
  - [nginx Config](#nginx-config)
  - [Optional: nginx via Docker](#optional-nginx-via-docker)
- [Caddy Deployment](#caddy-deployment)
  - [Host Setup](#host-setup-1)
  - [Caddyfile Example](#caddyfile-example)
  - [Optional: Caddy + Docker Compose](#optional-caddy--docker-compose)
- [Health Checks & Monitoring](#health-checks--monitoring)
- [Cloudflare Worker Deployment](#cloudflare-worker-deployment)
  - [Prerequisites](#prerequisites-1)
  - [Quick Start](#quick-start)
  - [Cron Triggers](#cron-triggers)
  - [Environment Variables](#environment-variables)
  - [API Endpoints](#api-endpoints)
  - [Local Development](#local-development)
  - [Monitoring](#monitoring)

## Prerequisites

1. Build the binary:

   ```sh
   cargo build --release -p pai
   ```

   The binary will live at `target/release/pai`.

2. Prepare a configuration + database location. The default locations follow the XDG spec, but you can override them with `-C` (config dir) and `-d` (database path).
3. Run a sync at least once so the database has data:

   ```sh
   ./target/release/pai sync -C /etc/pai -d /var/lib/pai/pai.db -a
   ```

4. Start the server (the example binds to localhost so the proxy terminates TLS):

   ```sh
   ./target/release/pai serve -d /var/lib/pai/pai.db -a 127.0.0.1:8080
   ```

## nginx Deployment

### Host Setup

1. Install nginx via your package manager (`apt`, `dnf`, `brew`, etc.).
2. Create a systemd service for `pai` (optional but recommended):

   ```ini
   [Unit]
   Description=Personal Activity Index
   After=network.target

   [Service]
   ExecStart=/usr/local/bin/pai serve -d /var/lib/pai/pai.db -a 127.0.0.1:8080
   Restart=on-failure
   User=pai
   Group=pai
   WorkingDirectory=/var/lib/pai

   [Install]
   WantedBy=multi-user.target
   ```

3. Enable and start it:

   ```sh
   sudo systemctl daemon-reload
   sudo systemctl enable --now pai.service
   ```

### nginx Config

Create `/etc/nginx/conf.d/pai.conf`:

```nginx
server {
    listen 80;
    server_name pai.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Test the config and reload nginx: `sudo nginx -t && sudo nginx -s reload`.
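
The server block above listens on plain HTTP. Since the prerequisite example assumes the proxy terminates TLS, you would typically add an HTTPS server block as well; a sketch, with placeholder certificate paths (e.g. as issued by certbot):

```nginx
server {
    listen 443 ssl;
    server_name pai.example.com;

    # Placeholder paths -- point these at your actual certificate files.
    ssl_certificate     /etc/letsencrypt/live/pai.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/pai.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```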

### Optional: nginx via Docker

Use an `nginx` image + bind-mount config:

```yaml
services:
  pai:
    image: ghcr.io/your-namespace/pai:latest
    command: ["serve", "-d", "/data/pai.db", "-a", "0.0.0.0:8080"]
    volumes:
      - ./data:/data
    expose:
      - "8080"

  nginx:
    image: nginx:1.27
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    ports:
      - "80:80"
    depends_on:
      - pai
```

`nginx.conf` should proxy to `http://pai:8080` instead of localhost.
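
For reference, the containerized config might look like this (the same directives as the host setup, with the upstream swapped to the Compose service name):

```nginx
server {
    listen 80;
    server_name pai.example.com;

    location / {
        # "pai" resolves to the app container on the Compose network.
        proxy_pass http://pai:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```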

## Caddy Deployment

### Host Setup

1. Install Caddy (<https://caddyserver.com/docs/install>).
2. Keep the same `pai` systemd service from above (or run manually).

### Caddyfile Example

Create `/etc/caddy/Caddyfile`:

```caddyfile
pai.example.com {
    reverse_proxy 127.0.0.1:8080
    encode gzip zstd
    header {
        Referrer-Policy "no-referrer-when-downgrade"
        X-Content-Type-Options "nosniff"
    }
}
```

Caddy automatically provisions TLS certificates with Let’s Encrypt. Validate the file with `caddy validate --config /etc/caddy/Caddyfile`, then reload with `sudo systemctl reload caddy`.

### Optional: Caddy + Docker Compose

```yaml
services:
  pai:
    image: ghcr.io/your-namespace/pai:latest
    command: ["serve", "-d", "/data/pai.db", "-a", "0.0.0.0:8080"]
    volumes:
      - ./data:/data
    expose:
      - "8080"

  caddy:
    image: caddy:2
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - pai

volumes:
  caddy_data:
  caddy_config:
```

Use the same `Caddyfile` contents as above, but point `reverse_proxy` to `pai:8080`.
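
Concretely, the containerized Caddyfile would differ only in the upstream address:

```caddyfile
pai.example.com {
    # "pai" is the Compose service name for the app container.
    reverse_proxy pai:8080
    encode gzip zstd
    header {
        Referrer-Policy "no-referrer-when-downgrade"
        X-Content-Type-Options "nosniff"
    }
}
```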

## Health Checks & Monitoring

- `GET /status` – lightweight JSON (`status`, version, uptime, total items, counts per `source_kind`). Ideal for load balancer health probes.
- `GET /api/feed?limit=1` ensures the server can read from SQLite and return real data.
- `GET /api/item/{id}` is handy for debugging a specific record.
- Consider wiring `/status` into nginx/Caddy health checks (`/healthz`) or your platform’s monitoring agents.
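
A minimal probe sketch, assuming `/status` returns a JSON object containing `"status": "ok"` (adjust the match to the actual payload); the endpoint URL is the local bind address from the prerequisites:

```shell
#!/bin/sh
# check_status: read a /status JSON payload on stdin; exit 0 if healthy.
# Uses grep so it works without jq; the "ok" value is an assumption.
check_status() {
    grep -q '"status"[[:space:]]*:[[:space:]]*"ok"'
}

# Wire it to the live endpoint, e.g. from cron or a monitoring agent:
#   curl -fsS http://127.0.0.1:8080/status | check_status || echo "pai is down"
```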

## Cloudflare Worker Deployment

The Personal Activity Index can also be deployed as a Cloudflare Worker with a D1 database, providing a serverless alternative to self-hosting.

### Prerequisites

1. Cloudflare account with Workers enabled
2. [Wrangler CLI](https://developers.cloudflare.com/workers/wrangler/install-and-update/) installed
3. Rust toolchain with the `wasm32-unknown-unknown` target

### Quick Start

#### 1. Generate Scaffolding

Use the `pai cf-init` command to generate Cloudflare Worker configuration:

```sh
# Dry run to preview files
pai cf-init --dry-run -o cloudflare-deployment

# Create scaffolding
pai cf-init -o cloudflare-deployment
cd cloudflare-deployment
```

This creates:

- `wrangler.example.toml` - Worker configuration template
- `schema.sql` - D1 database schema
- `README.md` - Deployment instructions

#### 2. Create D1 Database

```sh
wrangler d1 create personal-activity-db
```

Copy the database ID from the output and update `wrangler.example.toml`:

```toml
[[d1_databases]]
binding = "DB"
database_name = "personal-activity-db"
database_id = "your-database-id-here" # Replace with actual ID
```

Then copy to the active config:

```sh
cp wrangler.example.toml wrangler.toml
```

#### 3. Initialize Database Schema

```sh
wrangler d1 execute personal-activity-db --file=schema.sql
```

#### 4. Build and Deploy

```sh
# Build the worker
cd ..
cargo install worker-build
worker-build --release -p pai-worker

# Deploy
cd cloudflare-deployment
wrangler deploy
```

### Cron Triggers

The worker includes a scheduled event handler for automatic syncing. Configure the schedule in `wrangler.toml`:

```toml
[triggers]
crons = ["0 * * * *"] # Every hour at minute 0
```

Common schedules:

- `*/30 * * * *` - Every 30 minutes
- `0 */6 * * *` - Every 6 hours
- `0 0 * * *` - Daily at midnight

### Environment Variables

Configure sources in `wrangler.toml` under `[vars]`:

```toml
[vars]
# Substack RSS feed URL
SUBSTACK_URL = "https://patternmatched.substack.com"

# Bluesky handle
BLUESKY_HANDLE = "desertthunder.dev"

# Leaflet publications (comma-separated id:url pairs)
LEAFLET_URLS = "desertthunder:https://desertthunder.leaflet.pub,stormlightlabs:https://stormlightlabs.leaflet.pub"

# BearBlog publications (comma-separated id:url pairs)
BEARBLOG_URLS = "desertthunder:https://desertthunder.bearblog.dev"
```
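
Note that the `id:url` pair format splits on the first colon only, so the URLs keep their `https://` scheme. A sketch of how such a value can be parsed in shell (the function is illustrative, not part of `pai`):

```shell
#!/bin/sh
# parse_pairs: print a comma-separated list of id:url pairs, one per line.
# IFS=: with `read -r id url` assigns everything after the first colon to
# $url, so "https://..." stays intact.
parse_pairs() {
    printf '%s\n' "$1" | tr ',' '\n' | while IFS=: read -r id url; do
        printf 'id=%s url=%s\n' "$id" "$url"
    done
}

# Example:
#   parse_pairs "desertthunder:https://desertthunder.leaflet.pub"
```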

### API Endpoints

The Worker exposes the same API as the self-hosted server:

- `GET /api/feed?source_kind=bluesky&limit=20` - List items
- `GET /api/item/{id}` - Get single item
- `GET /status` - Health check

### Local Development

Test the worker locally before deploying:

```sh
wrangler dev
```

This starts a local server at `http://localhost:8787` with live reload.

### Monitoring

View logs in real time:

```sh
wrangler tail
```

Or check logs in the [Cloudflare Dashboard](https://dash.cloudflare.com) under Workers & Pages.