
at-rund#

⚠️ HEAVILY UNDER CONSTRUCTION

This project is in early alpha. APIs will change, features are incomplete, and you will encounter bugs. See ROADMAP.md for current status.

Social cloud hosting for AT Protocol.

at-rund lets you host serverless bundles for the AT Protocol network. Your runner represents you — bundle authors trust your infrastructure because they trust you.

The Idea#

Today, running code on the internet means trusting faceless cloud providers. at-rund flips this: anyone can host a runner, and trust flows through the social graph.

┌─────────────────────────────────────────────────────────────────────────────┐
│                         Social Cloud Hosting                                │
│                                                                             │
│  "I run an at-rund instance. Trust me because you know me."                │
│                                                                             │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│   @alice.bsky.social              @bob.example.com                         │
│   runs at-run.alice.dev           runs compute.bob.example.com             │
│   ├─ deno + ffmpeg                ├─ deno                                  │
│   ├─ python + pytorch             ├─ node                                  │
│   └─ allowlist: friends           └─ open to all                           │
│                                                                             │
│   Bundle authors choose runners based on:                                   │
│   • Social trust (I know Alice)                                            │
│   • Capabilities (Alice has ffmpeg)                                        │
│   • Availability (Bob's is always up)                                      │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘

How It Works#

  1. Bundle authors write serverless functions and store them on their AT Protocol PDS
  2. Runners (like you) host at-rund instances that execute those bundles
  3. Trust is social — authors choose runners they trust, runners choose who can use them

Bundle Author                         Runner Host
     │                                     │
     │  1. Write bundle                    │
     │  2. Deploy to PDS                   │
     │  3. Encrypt secrets for runner      │
     │                                     │
     └──────────── request ───────────────▶│
                                           │  4. Fetch bundle from PDS
                                           │  5. Execute in sandbox
                                           │  6. Return result
     ◀─────────── response ────────────────┘
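The request/response cycle above can be imitated end to end with a toy script, in which a local file stands in for the PDS and a plain subshell stands in for the sandbox (illustrative only; at-rund's real transport, bundle format, and isolation all differ):

```shell
#!/bin/sh
# Toy walk-through of steps 1-6. A local file stands in for the PDS,
# and a plain subshell stands in for the sandbox. Illustrative only.
set -eu

tmp=$(mktemp -d)

# 1-2. Author writes a bundle and "deploys" it (here: writes a file).
cat > "$tmp/bundle.sh" <<'EOF'
echo "hello from the bundle"
EOF

# 4. Runner "fetches" the bundle (here: reads the file).
# 5. Runner executes it.
result=$(sh "$tmp/bundle.sh")

# 6. Runner returns the result to the author.
echo "$result"
rm -rf "$tmp"
```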

Quick Start#

Install#

# Download at-rund
curl -sSL https://at-run.dev/install.sh | sh

# Install Nix (required for runtimes)
curl --proto '=https' --tlsv1.2 -sSf -L https://install.determinate.systems/nix | sh

Initialize#

at-rund init

This creates ~/.at-rund/ with default configuration and runtime definitions.

Configure#

Edit ~/.at-rund/config.toml:

# Your identity — the runner IS you
did = "did:plc:your-did-here"
handle = "you.bsky.social"

port = 3000

[access]
# Who can use your runner?
# "open" - anyone
# "allowlist" - only specific DIDs
# "blocklist" - everyone except specific DIDs
mode = "open"

[runtimes]
# Which runtimes you support
deno = "deno.nix"

Run#

# Development (uses Nix directly, no VM isolation)
at-rund serve --dev

# Production (uses Firecracker VMs, requires Linux + KVM)
at-rund build    # Build VM images
at-rund serve    # Start server

Deploy as a Service#

# Install systemd service
at-rund systemd install

# Start and enable
sudo systemctl enable --now at-rund

# Check status
at-rund systemd status

Architecture#

at-rund supports multiple isolation backends to balance security vs. accessibility:

┌─────────────────────────────────────────────────────────────┐
│                        at-rund                              │
│                    Executor interface                       │
├───────────────┬───────────────────┬─────────────────────────┤
│   NixPool     │   ContainerPool   │    FirecrackerPool      │
│   (none)      │   (container)     │    (firecracker)        │
├───────────────┼───────────────────┼─────────────────────────┤
│ Direct exec   │ Docker/Podman     │ Firecracker microVMs    │
│ No isolation  │ Namespace+seccomp │ Full VM isolation       │
│ Any OS        │ Any Linux VPS     │ Linux + KVM (bare metal)│
│ Dev/testing   │ Production        │ High-security prod      │
└───────────────┴───────────────────┴─────────────────────────┘

Isolation Modes#

Configure via isolation in config.toml:

# Auto-detect best available (default)
isolation = "auto"

# Or explicitly choose:
isolation = "none"        # Direct Nix execution (dev mode)
isolation = "container"   # OCI containers (debian-slim + seccomp)
isolation = "firecracker" # Firecracker microVMs (requires KVM)

Auto-detection logic:

  1. /dev/kvm accessible → Firecracker
  2. Docker/Podman available → Container
  3. Fallback → Nix direct execution

Why Multiple Backends?#

  • Low barrier to entry: Containers work on any $5 VPS
  • Strong isolation available: Firecracker for those with bare-metal
  • Same bundle everywhere: Nix ensures identical runtimes across all backends
  • Operator choice: Match isolation level to your threat model

See DESIGN.md for detailed architecture decisions.

Custom Runtimes#

Runtimes are defined with Nix. You can customize the defaults or create your own:

# ~/.at-rund/runtimes/python-ml.nix
{ pkgs, ... }:
{
  mimeTypes = [
    "application/python+ml"
  ];

  guest = {
    environment.systemPackages = with pkgs; [
      python312
      python312Packages.pytorch
      python312Packages.numpy
      python312Packages.pillow
    ];
  };

  executor = {
    command = "python3";
    permissionFlags = {};
  };
}
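Incoming bundles are matched to runtimes by MIME type. A toy dispatcher mirroring the mimeTypes field above might look like this (the deno MIME types here are assumptions, not values taken from the project):

```shell
# Toy MIME-type -> runtime dispatch, mirroring the mimeTypes field above.
# Illustrative only; the deno entries are assumed, not from the project.
runtime_for() {
  case "$1" in
    "application/python+ml") echo "python3" ;;                  # from python-ml.nix above
    "text/typescript"|"application/javascript") echo "deno" ;;  # assumed types
    *) echo "unsupported MIME type: $1" >&2; return 1 ;;
  esac
}
```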

Then enable it in your config:

[runtimes]
deno = "deno.nix"
python-ml = "python-ml.nix"

Access Control#

Control who can run bundles on your infrastructure:

[access]
# Open to everyone
mode = "open"

# Only allow specific people
mode = "allowlist"
allowlist = [
  "did:plc:friend1",
  "did:plc:friend2",
]

# Block bad actors
mode = "blocklist"
blocklist = [
  "did:plc:spammer",
]
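Conceptually, all three modes reduce to a membership check on the caller's DID. A toy version, assuming the DID has already been authenticated upstream (not at-rund's actual code):

```shell
# Toy access decision for the modes above. Assumes the caller's DID has
# already been authenticated; not at-rund's actual implementation.
# usage: check_access MODE CALLER_DID [LISTED_DID...]
check_access() {
  mode="$1"; caller="$2"; shift 2
  listed=1
  for did in "$@"; do
    [ "$did" = "$caller" ] && listed=0
  done
  case "$mode" in
    open) return 0 ;;                                      # anyone
    allowlist) return $listed ;;                           # only listed DIDs
    blocklist) [ $listed -eq 0 ] && return 1 || return 0 ;; # all but listed
  esac
}
```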

For more complex policies (payments, quotas, rate limiting), put a reverse proxy in front of at-rund.

Observability#

at-rund supports OpenTelemetry for metrics and traces:

[observability]
otlp_endpoint = "http://localhost:4317"
log_format = "json"

This lets you connect to Grafana, Jaeger, or any OTLP-compatible backend.

CLI Reference#

at-rund
├── init                  Initialize ~/.at-rund/
├── serve                 Run the server
│   ├── --dev             Dev mode (Nix direct execution)
│   ├── --port PORT       Override port
│   └── --config PATH     Custom config path
├── build                 Build Firecracker VM images
│   └── --runtime NAME    Build specific runtime only
├── runtime
│   └── list              Show configured runtimes
├── pool
│   ├── status            VM pool statistics
│   ├── warm              Pre-warm VMs
│   └── drain             Graceful shutdown
└── systemd
    ├── install           Install systemd service
    │   └── --user        User service (no sudo)
    ├── uninstall         Remove service
    └── status            Show service status

Project Structure#

~/.at-rund/
├── config.toml           # Main configuration
├── runtimes/             # Nix runtime definitions
│   ├── deno.nix
│   ├── node.nix
│   └── python.nix
├── images/               # Built Firecracker images (prod)
├── bundles/              # Cached bundle code
└── keys/                 # Runner keypair (for secrets)

See Also#

  • at-run — Developer CLI for deploying bundles
  • AT Protocol — Decentralized social protocol
  • Firecracker — Lightweight virtualization
  • Nix — Reproducible builds

License#

MIT