An encrypted personal cloud built on the AT Protocol.
Keyring rework #3

Open · opened by sans-self.org

Overview

In the UI, keyrings are called workspaces. A workspace is a named group of people who share access to encrypted documents. Under the hood, it's a keyring: a shared AES-256 group key, wrapped to each member's X25519 public key.

This document covers the workspace model as it extends beyond the existing keyring primitives — roles, distributed membership, collaborative editing, revocation, and document adoption.

What exists today

  • Lexicon: app.opake.keyring — group key wrapped to N members, rotation counter, key history
  • Core (Rust): create_keyring, list_keyrings, add_member, remove_member with group key rotation
  • CLI: Full keyring CRUD (opake keyring create/ls/add-member/remove-member)
  • WASM: Key wrapping/unwrapping exports, metadata encryption — but no high-level keyring operations
  • AppView: Indexes keyring_members, serves /api/keyrings?member={did}
  • Web: Nothing. The sidebar mentions workspaces but there's no implementation.

What this document proposes

  1. Roles on keyring membership (manager, editor, viewer)
  2. Owner-managed keyrings — single writer for v1, replicated keyrings deferred to v2
  3. An AppView workspace index as the source of truth for cross-PDS document discovery
  4. Workspace directories — reuse existing app.opake.directory with keyringEncryption, owner-managed, AppView as fallback
  5. Collaborative editing via app.opake.documentUpdate records
  6. Document adoption when a member leaves
  7. Revocation model: forward secrecy + optional background re-encryption
  8. Sharing a directory = upgrading it to a workspace

Roles

Every member of a workspace has a role. Roles are plaintext on the record because the AppView needs them for authorization decisions.

| Role | Read | Upload | Edit others' files | Add members | Remove members | Rotate keys |
| --- | --- | --- | --- | --- | --- | --- |
| viewer | yes | no | no | no | no | no |
| editor | yes | yes | yes (via documentUpdate) | no | no | no |
| manager | yes | yes | yes | yes | yes | yes (daemon) |

The workspace creator is the initial manager. There must always be at least one manager.
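
The role matrix above can be pinned down as a small permission check. A sketch in Rust; `Role`, `Action`, and `allows` are illustrative names, not opake-core API:

```rust
// Illustrative sketch of the role/permission matrix; not the real opake-core types.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum Role {
    Viewer,
    Editor,
    Manager,
}

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum Action {
    Read,
    Upload,
    EditOthersFiles, // via documentUpdate records
    AddMembers,
    RemoveMembers,
    RotateKeys, // performed by the manager's daemon
}

pub fn allows(role: Role, action: Action) -> bool {
    use Action::*;
    match role {
        Role::Viewer => matches!(action, Read),
        Role::Editor => matches!(action, Read | Upload | EditOthersFiles),
        Role::Manager => true, // managers may do everything
    }
}
```

Keeping the matrix in one pure function makes the AppView check and the client-side UI gating share a single source of truth.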

Enforcement

Roles are enforced at two levels:

  • AppView: Rejects documentUpdate records written by viewers. The AppView knows who is a manager because it tracks the full membership with roles.
  • Client: UI disables actions the user's role doesn't permit. Defense in depth — AppView is the authority.

Clients cannot enforce roles cryptographically. A viewer has the group key and could encrypt a document under the keyring. The AppView simply won't index it, making it invisible to other members.


Membership Model

v1: Owner-managed

In v1, the workspace owner is the single writer for the keyring record. All membership operations (add, remove, role changes) go through the owner. This matches the directory model (also owner-managed) and avoids the complexity of replica sync.

The owner adds a member by:

  1. Fetching the new member's X25519 public key
  2. Wrapping the current group key to their pubkey
  3. Appending the wrapped key (with role) to the members array
  4. putRecord on their keyring record

This is the existing add_member() flow, extended with roles.
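
The bookkeeping in those four steps can be sketched as follows. `wrap_group_key` is a placeholder for the real X25519 + AES-KW wrap in opake-core, and the struct names are assumed, not the actual types:

```rust
// Hypothetical shapes mirroring app.opake.keyring; not the real opake-core types.
#[derive(Clone, Debug, PartialEq)]
pub struct WrappedKey {
    pub did: String,
    pub ciphertext: Vec<u8>,
    pub role: String, // "manager" | "editor" | "viewer"
}

#[derive(Debug)]
pub struct Keyring {
    pub rotation: u32,
    pub members: Vec<WrappedKey>,
}

// Placeholder wrap. A real implementation derives a KEK from an X25519
// exchange with `_recipient_pubkey` and AES-key-wraps the group key.
fn wrap_group_key(group_key: &[u8], _recipient_pubkey: &[u8]) -> Vec<u8> {
    group_key.to_vec() // NOT cryptography; stands in for the ciphertext
}

/// Owner-side add: wrap the *current* group key to the new member and
/// append. Adding a member does not rotate; only removal does.
pub fn add_member(ring: &mut Keyring, group_key: &[u8], did: &str, pubkey: &[u8], role: &str) {
    ring.members.push(WrappedKey {
        did: did.to_string(),
        ciphertext: wrap_group_key(group_key, pubkey),
        role: role.to_string(),
    });
}
```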

Bus factor

If the keyring owner disappears:

  • The keyring record is still on their PDS (the PDS is a server, it keeps running)
  • Existing members can still unwrap the group key and access all documents
  • What's frozen: adding/removing members, rotation, directory changes

Escape hatch: a remaining manager forks the workspace — creates a new keyring on their PDS (with a forkedFrom reference in encrypted metadata), invites remaining members, documents going forward reference the new keyring. This is manual but preserves continuity.

v2: Replicated keyrings (future)

Every manager maintains a copy of the keyring record on their own PDS. The AppView aggregates membership across all copies. Changes propagate via daemon sync. This eliminates the single-writer bottleneck for membership operations and improves bus factor resilience.

Deferred because: the directory tree is owner-managed anyway (replicating directories across managers with constant churn is a hard sync problem). If the workspace is bottlenecked on the owner for file organization, replicating just the keyring solves the rare case (membership changes) while ignoring the common case (uploads, moves). Better to ship simple and add replication for both keyrings and directories when the daemon infrastructure is mature.


Lexicon Changes

Modified: app.opake.defs#wrappedKey

Add an optional role field:

```json
{
  "did": "did:plc:alice",
  "ciphertext": "...",
  "algo": "x25519-hkdf-a256kw",
  "role": "manager"
}
```

role is a string enum: "manager", "editor", "viewer". Optional for backward compatibility — absent means "manager" (preserves existing keyring records where all members were implicitly managers).

Note: Adding role to wrappedKey (a shared def) means grants also technically have the field. This is fine — grants ignore it. The field is optional and annotated with `skip_serializing_if = "Option::is_none"` on the Rust side, so it is omitted when unset.
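
The defaulting rule is small enough to state precisely. A sketch, with an assumed helper name:

```rust
/// Role defaulting for app.opake.defs#wrappedKey: records written before
/// this change carry no `role`, and those members were implicitly managers.
pub fn effective_role(role: Option<&str>) -> &str {
    role.unwrap_or("manager")
}
```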

Modified: app.opake.keyring

Add an owner field — the DID of the keyring creator. Used by the AppView to identify the canonical owner and for v2 replica distinction.

```json
"owner": {
  "type": "string",
  "format": "did",
  "description": "DID of the canonical keyring owner. The owner's copy is authoritative for rotation and keyHistory."
}
```

New: app.opake.documentUpdate

Written by an editor to their own PDS to propose an update to a document owned by someone else.

```json
{
  "lexicon": 1,
  "id": "app.opake.documentUpdate",
  "defs": {
    "main": {
      "type": "record",
      "key": "tid",
      "record": {
        "type": "object",
        "required": ["opakeVersion", "document", "blob", "encryptedMetadata", "createdAt"],
        "properties": {
          "opakeVersion": { "type": "integer", "minimum": 1 },
          "document": {
            "type": "string",
            "format": "at-uri",
            "description": "AT URI of the document being updated."
          },
          "blob": {
            "type": "blob",
            "description": "The updated encrypted content.",
            "accept": ["*/*"],
            "maxSize": 52428800
          },
          "encryptedMetadata": {
            "type": "ref",
            "ref": "app.opake.defs#encryptedMetadata",
            "description": "Updated metadata, encrypted with the document's content key."
          },
          "supersedes": {
            "type": "string",
            "format": "at-uri",
            "description": "For adoption: the original document URI this replaces. Absent for regular edits."
          },
          "createdAt": { "type": "string", "format": "datetime" }
        }
      }
    }
  }
}
```

The document owner's client picks up pending documentUpdate records (via AppView), applies the update (downloads the new blob, re-uploads to their PDS, updates their document record), and the editor's client deletes the update record after confirmation.

New: app.opake.keyringLeave

Written by a member to their own PDS to opt out of a workspace. The AppView stops listing them as a member.


AppView Changes

New table: workspace_documents

Indexes documents encrypted under a keyring, across all members' PDSes. This is the source of truth for what documents exist in a workspace.

| Column | Type | Description |
| --- | --- | --- |
| document_uri | string (PK) | AT URI of the document |
| keyring_uri | string (indexed) | AT URI of the keyring |
| owner_did | string | DID that owns the document |
| rotation | integer | Keyring rotation the content key was wrapped under |
| indexed_at | datetime | |

Populated from Jetstream: when a document with keyringEncryption is created/updated, extract the keyringRef.keyring URI and index it.

New table: document_updates

Indexes pending document updates for discovery.

| Column | Type | Description |
| --- | --- | --- |
| update_uri | string (PK) | AT URI of the documentUpdate record |
| document_uri | string (indexed) | AT URI of the target document |
| author_did | string | DID of the editor who wrote the update |
| supersedes_uri | string (nullable) | For adoptions: the original document URI |
| indexed_at | datetime | |

Modified table: keyring_members

Add role column (string, default "manager").

New endpoints

  • GET /api/workspace?keyring={uri} — all documents encrypted under a keyring, paginated
  • GET /api/workspace/updates?document={uri} — pending updates for a document (or all documents owned by the authed DID)

Firehose subscriptions

The AppView already watches for app.opake.keyring and app.opake.grant. Add:

  • app.opake.document — if encryption is keyringEncryption, index in workspace_documents
  • app.opake.documentUpdate — validate writer is an editor/manager for the referenced keyring, then index
  • app.opake.keyringLeave — remove the writer from membership index for the referenced keyring

Key Operations

Adding a member

```mermaid
sequenceDiagram
    participant Owner
    participant OwnerPDS as Owner's PDS
    participant AppView
    participant NewPDS as New Member's PDS

    Owner->>NewPDS: getRecord(publicKey/self)
    NewPDS-->>Owner: X25519 pubkey

    Owner->>Owner: wrap group key to new member's pubkey
    Owner->>Owner: append wrappedKey (with role) to members array

    Owner->>OwnerPDS: putRecord(keyring)
    OwnerPDS-->>Owner: { uri, cid }

    OwnerPDS->>AppView: firehose event
    AppView->>AppView: index new member with role
```

Removing a member

Removal triggers group key rotation.

```mermaid
sequenceDiagram
    participant Owner
    participant OwnerPDS as Owner's PDS
    participant AppView

    Owner->>AppView: fetch current membership
    AppView-->>Owner: member list

    Owner->>Owner: generate new group key GK'
    Owner->>Owner: wrap GK' to each remaining member
    Owner->>Owner: archive old rotation in keyHistory

    Owner->>OwnerPDS: putRecord(keyring: rotation++, new members)
    OwnerPDS-->>Owner: { uri, cid }

    OwnerPDS->>AppView: firehose event
    AppView->>AppView: update membership, rotation
```

Group key rotation

Rotation happens on member removal. The old group key is preserved in keyHistory so remaining members can still decrypt documents from before the rotation. New documents use the new group key.

Rotation does NOT re-encrypt existing documents. Forward secrecy only — the removed member can't decrypt anything created after the rotation. Historical access persists unless background re-encryption is enabled (see Revocation).
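
The rotation bookkeeping can be sketched as an indexable key sequence. Names and key material are illustrative, assuming keyHistory is addressable by rotation number:

```rust
/// Illustrative model: `keys[r]` is the group key for rotation `r`, so
/// keyHistory plus the current key form one indexable sequence.
pub struct GroupKeys {
    pub rotation: u32,
    pub keys: Vec<Vec<u8>>, // invariant: keys.len() == rotation + 1
}

/// Removal path: the old key stays in `keys` (that is the keyHistory
/// archive) and a fresh group key becomes current. Existing blobs are
/// NOT re-encrypted; the removed member simply gets no wrap of the new key.
pub fn rotate(gk: &mut GroupKeys, new_key: Vec<u8>) {
    gk.keys.push(new_key);
    gk.rotation += 1;
}

/// A member decrypting an old document looks up the key for the rotation
/// its content key was wrapped under.
pub fn key_for_rotation(gk: &GroupKeys, rotation: u32) -> Option<&[u8]> {
    gk.keys.get(rotation as usize).map(|k| k.as_slice())
}
```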


Workspace Directories

Model

Workspace directories reuse the existing app.opake.directory record type. No new directory mechanism.

  • The workspace has its own root directory — a regular app.opake.directory record on the owner's PDS, with keyringEncryption pointing to the workspace keyring. This is separate from the owner's personal directory/self root.
  • Subdirectories are also regular app.opake.directory records with keyringEncryption, on the owner's PDS.
  • entries arrays contain AT-URIs — including cross-PDS URIs pointing to documents on any member's PDS. The entries field is format: "at-uri" with no same-PDS constraint.
  • The existing DirectoryTree logic works unchanged — it builds from directory records and doesn't care where the documents live.

File placement

When a member uploads a file to the workspace, they specify a target directory. Their client writes the document to their own PDS, and the owner's daemon adds the document URI to the target directory's entries array.

```mermaid
sequenceDiagram
    participant Editor
    participant EditorPDS as Editor's PDS
    participant AppView
    participant OwnerDaemon as Owner's Daemon
    participant OwnerPDS as Owner's PDS

    Editor->>EditorPDS: createRecord(document, keyringEncryption)
    EditorPDS-->>Editor: { uri, cid }

    EditorPDS->>AppView: firehose event
    AppView->>AppView: index in workspace_documents

    OwnerDaemon->>AppView: detect new workspace document
    OwnerDaemon->>OwnerPDS: add document URI to directory entries
```

The client can optimistically render the file in the correct folder immediately. The directory record catches up when the daemon syncs (seconds). Users won't notice the lag.
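
The daemon's placement step reduces to a set difference between the AppView's workspace_documents index and the directory's current entries. A sketch with a hypothetical helper:

```rust
use std::collections::HashSet;

/// Documents the AppView knows about that the directory record does not
/// yet list: what the owner's daemon should append on its next sync pass.
pub fn pending_entries(indexed: &[String], entries: &[String]) -> Vec<String> {
    let have: HashSet<&str> = entries.iter().map(String::as_str).collect();
    indexed
        .iter()
        .filter(|uri| !have.contains(uri.as_str()))
        .cloned()
        .collect()
}
```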

Why not replicate directories

Keyring replication works because keyrings change rarely (member add/remove). Directories change on every upload, delete, and move — that's constant churn. Replicating directory entries across N managers with concurrent modifications is a harder sync problem that isn't worth solving for v1.

If the owner's daemon is offline:

  • Members can still upload files (to their own PDS)
  • The AppView still indexes them in workspace_documents
  • Members can still READ the directory tree (owner's PDS is a server, it keeps serving records)
  • New files just don't appear in the right folder until the daemon syncs

If the owner's PDS goes down entirely:

  • The AppView's workspace_documents index serves as a flat file list fallback
  • Files are accessible and decryptable — just unorganized temporarily

The AppView is the source of truth for what documents exist in a workspace. The directory tree is organizational sugar. Useful, but not required for access.

Sharing a directory = creating a workspace

Sharing a personal directory with another user upgrades it to a workspace:

  1. Create a keyring for the directory (user becomes the owner/manager)
  2. Switch the directory's encryption from directEncryption to keyringEncryption
  3. Re-wrap the directory metadata under the new group key
  4. Re-wrap each child document's content key under the group key (or use applyWrites for atomic batch)
  5. Add the recipient to the keyring

After this, the directory IS a workspace. Adding more people, roles, collaborative editing — all the workspace machinery applies. No separate "shared directory" concept needed.
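
At the record level, the upgrade is an encryption-mode switch. A sketch of the state change; the enum and `rewrap_content_key` are placeholders for the real directEncryption/keyringEncryption shapes and the real unwrap-and-rewrap under the new group key:

```rust
// Hypothetical modes mirroring directEncryption / keyringEncryption.
#[derive(Clone, Debug, PartialEq)]
pub enum Encryption {
    Direct { wrapped_content_key: Vec<u8> },
    Keyring { keyring_uri: String, rotation: u32, wrapped_content_key: Vec<u8> },
}

// Placeholder: really unwrap with the owner's key, re-wrap under the group key.
fn rewrap_content_key(old: &[u8], _group_key: &[u8]) -> Vec<u8> {
    old.to_vec() // NOT cryptography
}

/// Steps 2 and 4 of the upgrade: switch a record from direct to keyring
/// encryption, re-wrapping its content key. Already-keyring records pass through.
pub fn upgrade(enc: Encryption, keyring_uri: &str, group_key: &[u8]) -> Encryption {
    match enc {
        Encryption::Direct { wrapped_content_key } => Encryption::Keyring {
            keyring_uri: keyring_uri.to_string(),
            rotation: 0, // assumption: a freshly created keyring starts at rotation 0
            wrapped_content_key: rewrap_content_key(&wrapped_content_key, group_key),
        },
        keyring => keyring,
    }
}
```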

Future: applyWrites

com.atproto.repo.applyWrites enables atomic batch operations on a single repo. This would improve:

  • Directory entry updates (add/remove multiple entries atomically)
  • File moves between directories (remove from source + add to target in one call)
  • Workspace upgrade (re-wrap multiple document records + directory record atomically)

Not a blocker for v1, but a significant reliability and performance improvement.


Document Operations

Upload to workspace

An editor uploads a file encrypted under the workspace's keyring. The document lives on the editor's PDS.

  1. Editor's client fetches the keyring record from the owner's PDS
  2. Unwraps the group key with their private key
  3. Generates a content key, encrypts the file
  4. Wraps the content key under the group key (AES-KW)
  5. Uploads blob + creates document record on their own PDS
  6. AppView indexes the document in workspace_documents

All workspace members can decrypt by unwrapping the group key, then the content key.
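
The decrypt path layers two unwraps: member private key to group key, then group key to content key. A sketch with placeholder `unwrap_*` functions standing in for the real X25519/HKDF and AES-KW operations:

```rust
// Placeholders: real code does X25519+HKDF for the group key and AES-KW
// for the content key. Here they pass data through so the chain is testable.
fn unwrap_with_private_key(wrapped: &[u8], _priv_key: &[u8]) -> Vec<u8> {
    wrapped.to_vec() // NOT cryptography
}
fn unwrap_with_group_key(wrapped: &[u8], _group_key: &[u8]) -> Vec<u8> {
    wrapped.to_vec() // NOT cryptography
}

/// Any member's decrypt path: member key -> group key -> content key.
pub fn content_key_for_member(
    member_priv: &[u8],
    wrapped_group_key: &[u8],   // from the member's entry in the keyring
    wrapped_content_key: &[u8], // from the document record
) -> Vec<u8> {
    let group_key = unwrap_with_private_key(wrapped_group_key, member_priv);
    unwrap_with_group_key(wrapped_content_key, &group_key)
}
```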

Collaborative editing (documentUpdate)

Alice (editor) wants to update a file owned by Bob:

  1. Alice fetches Bob's document, decrypts it (she has the group key)
  2. Alice makes changes, re-encrypts with the same content key (or a new one wrapped under the same keyring)
  3. Alice writes app.opake.documentUpdate to her PDS — references Bob's document URI, contains the new encrypted blob
  4. AppView indexes the update, surfaces it to Bob
  5. Bob's client fetches Alice's update blob, re-uploads to his PDS, updates his document record
  6. Alice's client deletes the documentUpdate record

Conflict resolution (v1): Last-write-wins by createdAt timestamp. If multiple updates exist for the same document, the owner's client applies the most recent one. Future versions may let the owner choose.
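
Because createdAt values are RFC 3339 datetimes, last-write-wins can be a lexicographic max, provided all writers emit UTC ("Z") timestamps at the same precision. A sketch with assumed names:

```rust
/// Pending update: (record AT URI, createdAt as an RFC 3339 UTC string).
/// Lexicographic order matches chronological order for same-precision
/// "Z"-suffixed timestamps, so a string max picks the newest update.
pub fn pick_winner(updates: &[(String, String)]) -> Option<&(String, String)> {
    updates.iter().max_by(|a, b| a.1.cmp(&b.1))
}
```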

Document adoption

When a member is removed, their documents (on their PDS) need to be migrated to a remaining member's PDS. This is NOT a documentUpdate — it's a fresh upload with a supersedes field for lineage.

  1. Remaining member (or their daemon) identifies documents by the removed member that are encrypted under this keyring (via workspace_documents AppView index)
  2. Downloads and decrypts each document
  3. Re-encrypts under the new group key (post-rotation)
  4. Uploads to their own PDS as a new document
  5. Creates a documentUpdate record with supersedes pointing to the old URI
  6. AppView updates workspace_documents — new URI replaces old

The adopter is the manager who initiated the removal (they're online, they triggered the action, their daemon runs the pipeline). If the removed member IS the workspace owner, this triggers a fork — the removing manager creates a new keyring on their PDS and adopts all content.

Time pressure: Adoption must happen while the removed member's PDS is still serving data. For hosted PDSes, the operator controls this window. For self-hosted PDSes, it's best-effort — the daemon should adopt eagerly on removal, not lazily.
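
On the AppView side, applying a supersedes is an index swap. A sketch over a plain vector standing in for the workspace_documents table:

```rust
/// Replace the superseded URI with the adopter's new document URI in the
/// workspace_documents index. No-op if the old URI was already gone.
pub fn apply_supersedes(index: &mut Vec<String>, old_uri: &str, new_uri: &str) {
    for uri in index.iter_mut() {
        if uri == old_uri {
            *uri = new_uri.to_string();
        }
    }
}
```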


Revocation Model

What's guaranteed

Forward secrecy via group key rotation. On member removal, the group key rotates. New documents are encrypted under the new group key. The removed member cannot decrypt anything created after their removal. This is automatic, cheap, and cryptographically sound.

What's not guaranteed

Historical access. The removed member had the group key and the decrypted plaintext. Even if you re-encrypt every blob, a member who cached content locally retains it. This is inherent to every E2EE group system — Signal, git-crypt, all of them. Opake is honest about this.

Background re-encryption (optional)

A workspace manager can enable re-encryption on removal. When a member is removed:

  1. Each remaining member's daemon re-encrypts their own documents under the new group key
  2. Downloads blob → decrypts with old content key → generates new content key → re-encrypts → re-uploads → updates document record
  3. Old blob becomes unreferenced on the PDS (GC behavior is PDS-dependent)

This is a policy choice per workspace, not a default. It's expensive (bandwidth, PDS writes) and provides a practical barrier, not a cryptographic guarantee. A removed member who pre-cached blobs or plaintext is unaffected.

What re-encryption protects against: A removed member who kept their seed phrase but didn't pre-cache the ciphertext. After re-encryption, the old content keys (wrapped under the old group key) no longer correspond to the blobs on the PDS. The old ciphertext may be garbage-collected. This raises the bar from "trivial" to "you needed to have been planning this."

Defense in depth (enterprise context)

| Layer | Protects against | Cost | Guarantee |
| --- | --- | --- | --- |
| Group key rotation | Passive future access | Cheap, instant | Cryptographic |
| Background re-encryption | Casual historical access | Expensive, async | Infrastructure (PDS GC) |
| Document adoption | Orphaned content on removed member's PDS | Moderate, async | Infrastructure |
| Device management (MDM) | Cached plaintext and keys | Operational (not Opake's scope) | Physical |

No single layer is complete. Together they're a credible enterprise story — as long as the docs are honest about what each layer actually provides.


Implementation Order

  1. Lexicon: Add role to wrappedKey, add owner to keyring, define app.opake.documentUpdate and app.opake.keyringLeave
  2. Core (Rust): Update WrappedKey and Keyring structs, update create_keyring() and add_member() to handle roles and owner field
  3. WASM: Export high-level keyring operations (create, list, add member, decrypt names)
  4. AppView: Add role to keyring_members, add workspace_documents table + workspace endpoint
  5. Web: Workspace panel in sidebar, member management UI, upload-to-workspace flow, workspace directory browsing (reuse existing DirectoryTree + workspace_documents fallback)
  6. Daemon: Directory entry sync (file placement from workspace uploads)
  7. "Share directory" flow: Upgrade personal directory to workspace (create keyring, re-wrap, add recipient)
  8. documentUpdate lexicon + AppView indexing — collaborative editing
  9. Daemon: Adoption pipeline, optional re-encryption

Steps 1–5 are the minimum viable workspace. Step 6 is required for multi-user upload workflows. Step 7 bridges personal and shared use. Steps 8–9 are collaborative and lifecycle features.


Resolved Questions

Forking semantics

A forkedFrom field in the new keyring's encrypted metadata (only members need to see it). The AppView doesn't track lineage — members get invited to the new keyring, their clients handle the transition. The old keyring goes stale. No supersededBy on the old record because nobody can write to it (that's why we're forking). One-way link only.

Users remain visible members of stale/forked keyrings and can leave by writing an app.opake.keyringLeave record to their own PDS (referencing the keyring URI). The AppView stops listing them as a member. Their wrapped key still exists on the keyring record — they could still decrypt — but they've opted out. The workspace disappears from their sidebar.

Adoption policy

The manager who initiates the removal adopts orphaned documents. They're online, they triggered the action, their daemon runs the pipeline. If the removed member IS the workspace owner, this triggers a fork — the removing manager creates a new keyring on their PDS and adopts all content.

Conflict resolution

Last-write-wins by createdAt timestamp for the foreseeable future. Real-time collaboration is a separate system for a future release — session-scoped keyrings with a collaboration server (workspace member) brokering CRDT operations over P2P connections. The collab server is either a hosted instance or a leader-selected peer (likely the document owner). This is architecturally independent from the workspace model and doesn't need to be designed now.

Viewer key access

Acceptable. Viewers have the group key (required for decryption). They could encrypt a document under the keyring — the AppView won't index it, making it invisible to other members. This is the same enforcement model as every file sharing system — application-layer enforcement, not cryptographic. Self-hosted AppViews give operators sovereignty over enforcement rules.

Replica bootstrapping (v2)

When a member is promoted to manager (in v2 with replicated keyrings), their client creates a copy automatically. They already have the group key (they were a member). They fetch the owner's record, clone it to their PDS with createRecord, done. One-time client action on promotion, not a daemon concern.

Replica conflict resolution (v2)

The AppView indexes the union of all manager copies. All managers are trusted — the owner gave them the role. If a manager goes rogue, the answer is: remove them (rotation revokes access to the new group key). You don't preemptively gatekeep trusted roles.

AT URI: at://did:plc:wydyrngmxbcsqdvhmd7whmye/sh.tangled.repo.issue/3mhjmacwvvo22