# ATProto OAuth Integration Spec

Relay OAuth Provider

v0.1 Draft — March 2026

Companion to: Provisioning API Spec, Mobile Architecture Spec

---

## 1. Overview

The relay must be a compliant ATProto OAuth 2.1 authorization server so that third-party apps (Bluesky, etc.) can authenticate users and create records via XRPC. This document specifies how the relay integrates existing Rust OAuth libraries rather than building OAuth from scratch.

### 1.1 Why OAuth Matters

Without a compliant OAuth provider, no third-party app can authenticate against the relay. A user who creates an identity through the mobile app or desktop PDS cannot log into Bluesky — the entire product is unusable. OAuth is on the critical path for every lifecycle phase.

### 1.2 ATProto OAuth Requirements

The ATProto OAuth spec requires PDS implementations to support:

- **OAuth 2.1** authorization code flow with PKCE (S256 only)
- **DPoP** (Demonstrating Proof-of-Possession) using ES256, with a unique JTI per request and nonce support
- **PAR** (Pushed Authorization Requests) — mandatory for all client types
- **Client metadata documents** — clients are identified by a metadata URL rather than pre-registered credentials (ATProto's replacement for traditional dynamic client registration)
- **Server metadata** at `/.well-known/oauth-authorization-server`
- **JWKS endpoint** for public key discovery
- Grant types: `authorization_code` and `refresh_token`
- Token endpoint auth: `none` and `private_key_jwt`
- Scopes: `atproto` and `transition:generic`
- CORS support for browser-based apps
- Single-use refresh tokens (rotated on each use)
- Tokens bound to the DPoP key and `client_id`
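The S256 requirement above means a client sends `code_challenge = BASE64URL(SHA256(code_verifier))` and the token endpoint recomputes that digest at exchange time. A minimal sketch of the derivation, shown in Python for concision (the relay itself would do this in Rust):

```python
import base64
import hashlib
import secrets

def make_verifier() -> str:
    """Generate a high-entropy code_verifier (RFC 7636 section 4.1)."""
    return base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()

def s256_challenge(verifier: str) -> str:
    """code_challenge = BASE64URL(SHA256(ASCII(code_verifier))), unpadded."""
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

def verify(verifier: str, stored_challenge: str) -> bool:
    """What the token endpoint does with the challenge stored at authorization."""
    return secrets.compare_digest(s256_challenge(verifier), stored_challenge)

# RFC 7636 Appendix B test vector:
assert s256_challenge("dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk") == \
    "E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM"
```

Accepting only `S256` (never `plain`) is what keeps an intercepted challenge useless to an attacker who never saw the verifier.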
---

## 2. Existing Rust Ecosystem

### 2.1 Recommended: `atproto-oauth-axum`

**Crate:** [atproto-oauth-axum](https://crates.io/crates/atproto-oauth-axum) (v0.14.0, Feb 2026)
**Author:** Nick Gerakines
**Status:** Actively maintained, 22 releases since June 2025, ~440 downloads/month

Provides pre-built Axum handlers for:
- Authorization endpoint
- Token endpoint
- PAR endpoint
- JWKS endpoint
- Server metadata endpoint
- Client metadata resolution
- Authorization callback handling

This is the most direct integration path if the relay uses Axum (which aligns with the Rust web server ecosystem).

### 2.2 Alternative: `atproto-oauth-aip`

**Crate:** [atproto-oauth-aip](https://crates.io/crates/atproto-oauth-aip)
**Status:** Same author, lower-level workflow library

Use this if the relay uses a different HTTP framework (e.g., actix-web) or needs more control over the OAuth flow. Provides the OAuth logic without Axum-specific bindings.

### 2.3 Reference Implementation: graze-social/aip

**Repo:** [graze-social/aip](https://github.com/graze-social/aip) (105 stars, v2.2.3, Jan 2026)
**Status:** Production-ready, Docker support, multiple storage backends

A complete standalone OAuth 2.1 authorization server with native ATProto integration. Useful as:
- A reference for how a production ATProto OAuth server works
- A potential deployment as a separate sidecar service (vs. embedding in the relay)
- A source of storage backend patterns (SQLite, PostgreSQL)

---

## 3. Integration Architecture

### 3.1 Deployment Model

Two viable approaches:

**Option A: Embedded (recommended for v1.0)**

The relay process embeds `atproto-oauth-axum` handlers directly into its Axum router. OAuth state lives in the same database as relay state. This is the simplest deployment — one process, one database.
```
[Third-party app] → HTTPS → [Relay: Axum router]
                              ├── /oauth/* → atproto-oauth-axum handlers
                              ├── /xrpc/*  → XRPC proxy/handler
                              └── /v1/*    → Provisioning API
```

**Option B: Sidecar**

Deploy graze-social/aip as a separate service. The relay delegates OAuth to the sidecar and validates tokens on XRPC requests. More complex, but it isolates OAuth concerns.

Not recommended for v1.0 — it adds operational complexity for a solo developer.

### 3.2 Storage

OAuth state (authorization codes, tokens, sessions, client metadata cache) is stored in the relay's SQLite database. Both `atproto-oauth-axum` and graze-social/aip support SQLite backends.

Tables needed:
- `oauth_authorization_codes` — short-lived, per-authorization-flow
- `oauth_access_tokens` — bound to DPoP key, client_id, account
- `oauth_refresh_tokens` — single-use, rotated on each use
- `oauth_client_metadata_cache` — cached client metadata from discovery URLs
- `oauth_dpop_nonces` — replay prevention

### 3.3 Account Binding

The OAuth provider needs to map ATProto DIDs to relay accounts. During authorization:

1. The user is redirected to the relay's authorization endpoint
2. The relay resolves the user's DID → account_id
3. The user authenticates (password, or session token if already logged in)
4. The relay issues tokens bound to the account

The relay's existing session/authentication system (provisioning API §2) handles step 3. The OAuth library handles everything else.

---

## 4. Lifecycle Phase Behavior

### 4.1 Mobile-Only Phase

The relay is a full PDS. OAuth works identically to any hosted PDS:
- Authorization, token, and XRPC endpoints all live on the relay
- The relay stores the repo, signs commits, and serves reads
- Third-party apps see a normal PDS

No special behavior is needed. This is the standard ATProto OAuth flow.
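The standard flow above, code issuance bound to an account (§3.3) followed by a single-use exchange at the token endpoint, can be sketched with an in-memory store. This is illustrative Python, not any crate's API; `AuthCodeStore` and its fields are invented names, and the real state lives in the `oauth_*` tables from §3.2:

```python
import secrets
import time

CODE_TTL = 60  # authorization codes are short-lived (seconds)

class AuthCodeStore:
    """Invented in-memory stand-in for the oauth_authorization_codes table."""

    def __init__(self):
        self._codes = {}  # code -> (account_id, client_id, expires_at)

    def issue(self, account_id, client_id):
        """Step 4 of the flow: bind a fresh code to the authenticated account."""
        code = secrets.token_urlsafe(32)
        self._codes[code] = (account_id, client_id, time.time() + CODE_TTL)
        return code

    def exchange(self, code, client_id):
        """Token endpoint: codes are single-use and bound to one client."""
        entry = self._codes.pop(code, None)  # pop, so a replay finds nothing
        if entry is None:
            raise ValueError("invalid_grant: unknown or already-used code")
        account_id, bound_client, expires_at = entry
        if bound_client != client_id or time.time() > expires_at:
            raise ValueError("invalid_grant: wrong client or expired code")
        return {
            "access_token": secrets.token_urlsafe(32),
            "refresh_token": secrets.token_urlsafe(32),
            "sub": account_id,  # tokens carry the bound account identity
        }

store = AuthCodeStore()
code = store.issue("did:plc:example", "https://app.example/client-metadata.json")
tokens = store.exchange(code, "https://app.example/client-metadata.json")
assert tokens["sub"] == "did:plc:example"
```

The single-use `pop` is the essential property: a leaked or replayed code yields `invalid_grant` instead of a second token.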
### 4.2 Desktop-Enrolled Phase

The relay is still the OAuth provider and XRPC endpoint. The difference is internal:
- Write XRPC calls (createRecord, etc.) are proxied to the desktop for repo construction before the relay signs them
- Read XRPC calls can be served from the relay cache
- OAuth tokens and sessions are managed entirely at the relay — the desktop is invisible to third-party apps

No OAuth changes are needed for desktop enrollment. This is the key advantage of the relay-as-permanent-endpoint architecture.

### 4.3 Desktop Offline (During Desktop-Enrolled Phase)

- Read XRPC calls: served from the relay cache (no change to OAuth)
- Write XRPC calls: the relay returns 503 to the XRPC caller
- OAuth tokens remain valid — the 503 is at the XRPC layer, not the auth layer

Third-party apps see a PDS that accepts reads but rejects writes. This is a known ATProto pattern (PDS maintenance mode).

---

## 5. Endpoints

The relay must serve these endpoints at its base URL (the DID document's service endpoint):

| Endpoint | Source | Purpose |
|----------|--------|---------|
| `/.well-known/oauth-authorization-server` | atproto-oauth-axum | Server metadata (issuer, endpoints, supported flows) |
| `/oauth/authorize` | atproto-oauth-axum | Authorization endpoint (user-facing) |
| `/oauth/token` | atproto-oauth-axum | Token endpoint (app-facing) |
| `/oauth/par` | atproto-oauth-axum | Pushed Authorization Request endpoint |
| `/oauth/jwks` | atproto-oauth-axum | Public keys for token verification |
| `/oauth/callback` | atproto-oauth-axum | Authorization callback |

These are in addition to the relay's existing endpoints:
- `/v1/*` — provisioning API
- `/xrpc/*` — ATProto XRPC

### 5.1 Server Metadata

The `/.well-known/oauth-authorization-server` response must include:

```json
{
  "issuer": "https://relay.example.com",
  "authorization_endpoint": "https://relay.example.com/oauth/authorize",
  "token_endpoint": "https://relay.example.com/oauth/token",
  "pushed_authorization_request_endpoint": "https://relay.example.com/oauth/par",
  "jwks_uri": "https://relay.example.com/oauth/jwks",
  "scopes_supported": ["atproto", "transition:generic"],
  "response_types_supported": ["code"],
  "grant_types_supported": ["authorization_code", "refresh_token"],
  "token_endpoint_auth_methods_supported": ["none", "private_key_jwt"],
  "code_challenge_methods_supported": ["S256"],
  "dpop_signing_alg_values_supported": ["ES256"]
}
```

---

## 6. Authorization UI

The relay needs a minimal web UI for the OAuth authorization screen. When a third-party app redirects a user to `/oauth/authorize`, the relay must:

1. Show the app's name and the permissions requested
2. Allow the user to approve or deny
3. Redirect back to the app with an authorization code

For v1.0, this can be a minimal server-rendered page. No SPA is needed. The provisioning API's session system handles user authentication.

For BYO relay operators, the authorization UI should be customizable (branding, colors) via relay config.

---

## 7. Security Considerations

### 7.1 Token Storage

Access tokens and refresh tokens are stored server-side. The relay validates DPoP proofs on every request, so a stolen token is useless without the DPoP private key.

### 7.2 Client Metadata Caching

ATProto clients are not pre-registered; they provide a metadata URL. The relay must:
- Fetch and cache client metadata on first authorization
- Re-validate periodically (TTL: 24 hours recommended)
- Reject clients with unreachable or invalid metadata

### 7.3 Rate Limiting

OAuth endpoints should be rate-limited separately from the XRPC and provisioning API endpoints.
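A per-key fixed-window counter is sufficient at this scale. An illustrative Python sketch (not from any crate; a production deployment might prefer a token bucket or the web framework's middleware):

```python
import time
from collections import defaultdict

class FixedWindowLimiter:
    """Illustrative per-key limiter; the key might be a client IP or client_id."""

    def __init__(self, limit, window_secs=60.0):
        self.limit = limit
        self.window = window_secs
        self._counts = defaultdict(int)  # (key, window index) -> request count

    def allow(self, key, now=None):
        """Count this request and report whether it fits in the current window."""
        now = time.time() if now is None else now
        bucket = (key, int(now // self.window))
        self._counts[bucket] += 1
        return self._counts[bucket] <= self.limit

# e.g. 10 requests/min per IP for the authorization endpoint:
limiter = FixedWindowLimiter(limit=10)
assert all(limiter.allow("203.0.113.9", now=0.0) for _ in range(10))
assert not limiter.allow("203.0.113.9", now=1.0)  # 11th hit in the same window
assert limiter.allow("203.0.113.9", now=61.0)     # fresh window
```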
Recommended limits:
- Authorization: 10/min per IP
- Token: 30/min per client_id
- PAR: 30/min per client_id

### 7.4 BYO Relay Implications

Self-hosted relay operators run their own OAuth provider. The BYO relay binary (Nix/Docker) must include the OAuth endpoints. The authorization UI defaults should be sensible without configuration.

---

## 8. Implementation Milestones

### v0.1 — Basic OAuth (blocks mobile-only phase)

- Integrate `atproto-oauth-axum` into the relay's Axum router
- SQLite-backed token storage
- Minimal authorization UI (server-rendered)
- Server metadata endpoint
- Test with the Bluesky app as a client

### v1.0 — Production OAuth

- PostgreSQL storage backend option
- Client metadata caching with TTL
- Rate limiting on OAuth endpoints
- Customizable authorization UI for BYO relay operators
- Token revocation endpoint
- Audit logging of authorization grants

### Later

- Scoped tokens (read-only grants for specific collections)
- Token introspection endpoint
- Admin dashboard for managing active OAuth sessions

---

## 9. Integration Checklist

Before the relay can accept third-party app logins:

- [ ] `/.well-known/oauth-authorization-server` returns valid metadata
- [ ] `/oauth/authorize` renders the authorization UI and handles consent
- [ ] `/oauth/token` issues DPoP-bound access + refresh tokens
- [ ] `/oauth/par` accepts pushed authorization requests
- [ ] `/oauth/jwks` returns current signing keys
- [ ] PKCE (S256) enforced on all flows
- [ ] DPoP proof validated on every token request
- [ ] Refresh token rotation (single-use) working
- [ ] Bluesky app can complete the full OAuth flow
- [ ] Bluesky app can create a post via XRPC after OAuth
- [ ] Tokens bound to the correct account/DID
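The refresh-token rotation item in the checklist has precise semantics: each refresh spends the presented token and mints a replacement, and presenting a spent token must fail. An in-memory Python sketch with invented names, standing in for the `oauth_refresh_tokens` table:

```python
import secrets

class RefreshTokenStore:
    """Illustrative single-use refresh tokens: each use mints a replacement."""

    def __init__(self):
        self._live = {}  # refresh_token -> account_id

    def mint(self, account_id):
        token = secrets.token_urlsafe(32)
        self._live[token] = account_id
        return token

    def rotate(self, refresh_token):
        """Spend the old token; return (new_refresh_token, account_id)."""
        account_id = self._live.pop(refresh_token, None)  # single-use
        if account_id is None:
            # Reuse of a rotated token is a theft signal; a real server
            # would also revoke the rest of the token family here.
            raise ValueError("invalid_grant: refresh token already used")
        return self.mint(account_id), account_id

store = RefreshTokenStore()
rt1 = store.mint("did:plc:example")
rt2, acct = store.rotate(rt1)
assert acct == "did:plc:example" and rt2 != rt1
try:
    store.rotate(rt1)  # replaying the spent token must fail
    assert False
except ValueError:
    pass
```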
---

## 10. Design Decisions

| Decision | Rationale | Alternatives Considered |
|----------|-----------|-------------------------|
| Embed `atproto-oauth-axum` in the relay process | Simplest deployment for a solo dev. One process, one DB. | Sidecar (graze-social/aip) — more complex ops. |
| SQLite for OAuth storage in v1.0 | Matches the relay's existing storage. No additional infra. | PostgreSQL from day one — overkill for early users. |
| Minimal server-rendered auth UI | The OAuth authorization screen is visited rarely. No SPA needed. | Full React SPA — unnecessary complexity. |
| Use existing crates, don't build OAuth | ATProto OAuth is complex (DPoP, PAR, PKCE, dynamic client handling). Building from scratch is months of work. | Build custom — slower, more bugs, no community fixes. |
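Finally, the DPoP checks that recur throughout this spec (unique `jti` per request, binding to the HTTP method and URL, replay tracking via `oauth_dpop_nonces`) reduce to per-request claim validation plus signature verification. The sketch below covers only the claim checks, in Python with invented names; verifying the ES256 signature against the proof's embedded JWK is omitted:

```python
import time

SEEN_JTI = set()  # backed by the oauth_dpop_nonces table in practice
MAX_SKEW = 60     # seconds of acceptable clock skew

def check_dpop_claims(claims, method, url):
    """Validate the non-cryptographic claims of one decoded DPoP proof.

    `claims` is the proof JWT's payload; ES256 signature verification
    against the key the tokens are bound to happens elsewhere.
    """
    if claims.get("htm") != method:
        raise ValueError("htm does not match the HTTP method")
    if claims.get("htu") != url:
        raise ValueError("htu does not match the request URL")
    if abs(time.time() - claims.get("iat", 0)) > MAX_SKEW:
        raise ValueError("proof too old or too far in the future")
    jti = claims.get("jti")
    if not jti or jti in SEEN_JTI:
        raise ValueError("jti missing or replayed")
    SEEN_JTI.add(jti)

claims = {"htm": "POST", "htu": "https://relay.example.com/oauth/token",
          "iat": time.time(), "jti": "abc123"}
check_dpop_claims(claims, "POST", "https://relay.example.com/oauth/token")
try:
    check_dpop_claims(claims, "POST", "https://relay.example.com/oauth/token")
    assert False  # the same jti must be rejected on a second request
except ValueError:
    pass
```

This is why §7.1 can treat stolen access tokens as harmless: without the DPoP private key an attacker cannot produce a fresh, valid proof for a new request.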