# P2PDS

An AT Protocol Personal Data Server with P2P replication.

  - Syncs and stores records for a set of accounts
  - Provides records on P2P networks for other nodes to sync and store
  - Fetches and stores records from P2P networks for serviced accounts

## Stack

  - Runtime: Node.js, TypeScript
  - Base: Generalized from Cirrus
  - HTTP: Hono
  - Database: better-sqlite3
  - IPFS: Helia (libp2p + DHT + FsBlockstore)
  - Identity: AT Protocol DIDs via PLC directory

## Architecture

Given a configured list of DIDs to replicate, the node:

  1. Resolves DIDs via the PLC directory to find source PDSes (see the sketch after this list)
  2. Fetches repos as CAR files from each DID's PDS
  3. Stores blocks in IPFS (Helia) and announces via DHT
  4. Serves blocks via content-addressed RASL endpoint
  5. Publishes peer identity and replication manifests as atproto records (org.p2pds.peer, org.p2pds.manifest)
  6. Verifies block availability on remote peers via layered verification
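
Step 1 is a single lookup against the PLC directory. A minimal sketch, assuming the public plc.directory instance and the standard #atproto_pds service entry in the DID document:

```typescript
// Resolve a did:plc DID to its source PDS endpoint via the PLC directory.
// Sketch only: assumes plc.directory and the standard #atproto_pds service id.
async function resolvePds(did: string): Promise<string> {
  const res = await fetch(`https://plc.directory/${did}`);
  if (!res.ok) throw new Error(`PLC lookup failed: ${res.status}`);
  const doc = await res.json() as {
    service?: { id: string; type: string; serviceEndpoint: string }[];
  };
  const pds = doc.service?.find((s) => s.id === '#atproto_pds');
  if (!pds) throw new Error(`no #atproto_pds service for ${did}`);
  return pds.serviceEndpoint; // e.g. https://pds.example.com
}
```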

Design choices:

  - DHT only for discovery/routing — no IPNI or centralized indexers
  - Slow data is fine as a tradeoff for resilience and decentralization
  - Transport-agnostic verification — RASL works over any HTTP transport (see the route sketch below)
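
To illustrate the last point: serving blocks by CID takes nothing more than one plain HTTP route. A minimal Hono sketch, assuming a placeholder /rasl/:cid path (the actual RASL spec defines the real route) and a running Helia instance:

```typescript
import { Hono } from 'hono';
import { CID } from 'multiformats/cid';
import type { Helia } from '@helia/interface';

// Content-addressed block retrieval over plain HTTP. The '/rasl/:cid'
// path is a placeholder, not necessarily the path the RASL spec defines.
export function raslRoutes(helia: Helia): Hono {
  const app = new Hono();
  app.get('/rasl/:cid', async (c) => {
    let cid: CID;
    try {
      cid = CID.parse(c.req.param('cid'));
    } catch {
      return c.text('invalid CID', 400);
    }
    if (!(await helia.blockstore.has(cid))) {
      return c.text('not found', 404);
    }
    const bytes = await helia.blockstore.get(cid);
    // Clients re-hash the bytes against the CID, so no trust in this
    // server is required -- any HTTP transport works.
    return new Response(bytes, {
      status: 200,
      headers: { 'content-type': 'application/octet-stream' },
    });
  });
  return app;
}
```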

## Verification Layers

Content-addressed retrieval is unforgeable: if a peer returns the correct bytes for a CID, they have the data. The verification stack exploits this property at multiple layers:

| Layer | Name | Method | Status |
| --- | --- | --- | --- |
| L0 | Commit root | Fetch repo root CID via RASL from remote PDS | Implemented |
| L1 | RASL sampling | Fetch random block sample via HTTP, compare with local copy | Implemented |
| L2 | libp2p+HTTP | Same RASL verification logic over libp2p transports (P2P HTTP) | Blocked on Helia |
| L3 | MST path proof | Verify Merkle path proofs via com.atproto.sync.getRecord | Future |

L0 and L1 run on a configurable timer (default 30 min), independent of the sync timer. The L1 sample size is tunable via VerificationConfig.raslSampleSize (default 50 blocks).
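
A sketch of the L1 check; fetchBlock is a hypothetical helper wrapping the remote RASL request, and the sketch assumes CIDv1 sha2-256 blocks (the atproto default):

```typescript
import { CID } from 'multiformats/cid';
import { sha256 } from 'multiformats/hashes/sha2';

// L1 RASL sampling: fetch a random sample of blocks from a remote peer
// and verify each one by re-hashing the returned bytes against its CID.
// `fetchBlock` is a hypothetical helper wrapping the RASL HTTP request.
async function verifySample(
  trackedCids: CID[],
  fetchBlock: (cid: CID) => Promise<Uint8Array>,
  sampleSize = 50, // mirrors the VerificationConfig.raslSampleSize default
): Promise<boolean> {
  const sample = [...trackedCids]
    .sort(() => Math.random() - 0.5) // crude shuffle, fine for a sketch
    .slice(0, sampleSize);

  for (const cid of sample) {
    const bytes = await fetchBlock(cid);
    const digest = await sha256.digest(bytes);
    // If the re-derived CID differs, the peer returned wrong bytes.
    if (!CID.createV1(cid.code, digest).equals(cid)) {
      return false;
    }
  }
  return true;
}
```

Because the CID commits to the hash of the bytes, a peer that passes this check provably holds the sampled blocks.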

### L2 blocker

L2 reuses the same HTTP/RASL verification from L1 but over libp2p transports — giving P2P properties (NAT traversal, encryption, no public IP required) with HTTP simplicity. This requires the libp2p+HTTP Gateway spec to be implemented in Helia.

  - Kubo (Go) has this: ipfs/kubo#10049 (shipped)
  - Helia (JS) does not yet: ipfs/helia#348 (trustless gateway over libp2p listed as future/out-of-scope)

## Replication

Nodes declare their IPFS identity and replication commitments via AT Protocol records:

  - org.p2pds.peer/self — Binds a DID to a libp2p PeerID + multiaddrs. Updated on startup if the PeerID changes.
  - org.p2pds.manifest/{did-rkey} — One per replicated DID. Declares "I serve this DID's data" with sync status. Illustrative record shapes follow this list.
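
For illustration, the records might look roughly like the sketch below; field names beyond what is described above (PeerID, multiaddrs, sync status) are assumptions, not the actual org.p2pds lexicon definitions:

```typescript
// Hypothetical record shapes -- the actual org.p2pds lexicons may differ.
const peerRecord = {
  $type: 'org.p2pds.peer',
  peerId: '12D3KooWExamplePeer',             // libp2p PeerID bound to the DID
  multiaddrs: ['/ip4/203.0.113.7/tcp/4001'], // where to reach this node
};

const manifestRecord = {
  $type: 'org.p2pds.manifest',
  subject: 'did:plc:exampletargetdid',       // the DID this node replicates
  rev: '3l5qexample',                        // last synced repo rev
  status: 'synced',                          // sync status declaration
};
```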

Sync loop (per DID, periodic; a condensed sketch follows the list):

  1. Resolve DID → PDS endpoint (via PLC directory)
  2. Discover peer info (org.p2pds.peer/self record)
  3. Fetch repo (com.atproto.sync.getRepo, incremental via since)
  4. Parse CAR, store blocks in IPFS
  5. Track block CIDs for verification
  6. Announce to DHT
  7. Verify local block availability
  8. Update manifest record with sync rev
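
A condensed sketch of steps 3 through 6, assuming the resolvePds helper from the Architecture section, @ipld/car for CAR parsing, and Helia's blockstore and routing interfaces; peer discovery, verification, and the manifest update are elided:

```typescript
import { CarReader } from '@ipld/car';
import type { Helia } from '@helia/interface';

// One sync pass for a single DID (steps 3-6): fetch the repo as a CAR,
// store every block locally, and announce the root on the DHT.
// `since` enables incremental sync when a previous rev is known.
async function syncRepo(helia: Helia, did: string, since?: string): Promise<void> {
  const pds = await resolvePds(did); // PLC sketch from the Architecture section
  const url = new URL('/xrpc/com.atproto.sync.getRepo', pds);
  url.searchParams.set('did', did);
  if (since) url.searchParams.set('since', since);

  const res = await fetch(url);
  if (!res.ok) throw new Error(`getRepo failed: ${res.status}`);
  const car = await CarReader.fromBytes(new Uint8Array(await res.arrayBuffer()));

  // Store every block; CIDs would be tracked here for later verification.
  for await (const { cid, bytes } of car.blocks()) {
    await helia.blockstore.put(cid, bytes);
  }

  // Announce the repo root so other nodes can find this one via the DHT.
  const [root] = await car.getRoots();
  await helia.routing.provide(root);
}
```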

## Development

```bash
npm install
npm test
npm run dev
```

## Configuration

Environment variables (or a .env file; an example follows the table):

| Variable | Required | Description |
| --- | --- | --- |
| DID | Yes | Your DID (e.g., did:plc:...) |
| HANDLE | Yes | Your handle (e.g., user.example.com) |
| PDS_HOSTNAME | Yes | PDS hostname |
| AUTH_TOKEN | Yes | Auth token |
| SIGNING_KEY | Yes | Hex-encoded secp256k1 private key |
| SIGNING_KEY_PUBLIC | Yes | Multibase-encoded public key |
| JWT_SECRET | Yes | JWT signing secret |
| PASSWORD_HASH | Yes | Bcrypt password hash |
| DATA_DIR | No | Data directory (default: ./data) |
| PORT | No | HTTP port (default: 3000) |
| IPFS_ENABLED | No | Enable IPFS (default: true) |
| IPFS_NETWORKING | No | Enable IPFS networking (default: true) |
| REPLICATE_DIDS | No | Comma-separated DIDs to replicate |
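
An illustrative .env with placeholder values only:

```bash
# Example .env -- every value here is a placeholder
DID=did:plc:yourdidhere
HANDLE=user.example.com
PDS_HOSTNAME=pds.example.com
AUTH_TOKEN=replace-with-auth-token
SIGNING_KEY=replace-with-hex-secp256k1-private-key
SIGNING_KEY_PUBLIC=replace-with-multibase-public-key
JWT_SECRET=replace-with-random-secret
PASSWORD_HASH=replace-with-bcrypt-hash
DATA_DIR=./data
PORT=3000
IPFS_ENABLED=true
IPFS_NETWORKING=true
REPLICATE_DIDS=did:plc:friendone,did:plc:friendtwo
```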

## Phases

  1. Single-user PDS working as local node service — done
  2. Record replication with IPFS storage — done
  3. Layered verification — done (L0, L1); blocked (L2); future (L3)
  4. Policy engine — research