Transport Layer Options for P2P AT Protocol PDS: Deep Research Report#
1. Iroh Deep Dive#
What Is Iroh?#
Iroh is a peer-to-peer networking library written in Rust by number 0 (n0-computer), a founder-backed and venture-backed startup. Iroh's tagline is "IP addresses break, dial keys instead." It was originally conceived as a high-performance IPFS implementation in Rust (that version was renamed to Beetle and put into maintenance mode), but has since diverged significantly from IPFS to become its own protocol stack focused on reliable direct connections between endpoints identified by public keys.
Iroh is approaching 1.0 -- a release candidate is targeted for February 23, 2026, with releases running through the 0.90-0.96 "canary series." It is already running in production on hundreds of thousands of devices.
Transport#
Iroh uses QUIC via the Quinn library to establish authenticated, encrypted connections between endpoints. Key characteristics:
- QUIC-native: Concurrent streams with priorities, datagram transport, no head-of-line blocking
- QUIC Multipath: Iroh has implemented QUIC multipath in a Quinn fork, allowing connections to survive network changes (e.g., WiFi to cellular)
- MagicSocket: Borrowed from Tailscale, all hole-punching and connection management happens transparently in a "magic socket" layer
- 0-RTT connection establishment: Supported as of version 0.33
Content Addressing: BLAKE3 vs. CIDs#
This is the critical incompatibility for your use case.
Iroh uses BLAKE3 hashes, not SHA-256 CIDs. iroh-blobs provides content-addressed data transfer using BLAKE3 verified streaming. BLAKE3's internal structure is already a Merkle tree, so Iroh does not need to chunk files into blocks the way IPFS does -- a single BLAKE3 hash covers an entire blob of any size.
AT Protocol mandates SHA-256 CIDs. The atproto data model spec specifies:
- Codec: dag-cbor (0x71) for data objects, raw (0x55) for blobs
- Hash: sha-256 (0x12), 256 bits
- These are "blessed" types -- only these specific CID configurations are valid
Interoperability gap:
- For files under 256KB, IPFS can be configured to emit BLAKE3 CIDs that match Iroh's, but this is irrelevant since atproto uses SHA-256
- For larger files, the hashes are fundamentally different because IPFS and Iroh chunk differently
- CID translations (signed documents stating content equivalency) are proposed but not yet implemented in Iroh
- Iroh has explicitly stated it will break interoperability with Kubo (IPFS reference implementation) going forward
Bottom line: Iroh's content addressing is incompatible with atproto's CID scheme. You could use Iroh purely as a transport (moving opaque bytes between peers) but not as a content-addressing layer. Your RASL endpoints serve blocks by SHA-256 CID -- Iroh's blob layer cannot natively index or route by these identifiers.
Discovery and Routing#
Iroh takes a different approach from libp2p's Kademlia DHT:
- Pkarr + BitTorrent Mainline DHT: Pkarr (Public-Key Addressable Resource Records) publishes signed DNS records to the BitTorrent Mainline DHT -- the largest and oldest surviving DHT, battle-tested against powerful adversaries. This is used for node discovery (finding a peer's address given their public key). Not enabled by default; requires a feature flag.
- DNS Discovery: Iroh also supports a DNS-based discovery system where an iroh-dns-server publishes records that bridge to DNS queries and optionally the Mainline DHT.
- mDNS Local Discovery: For LAN peers, standard mDNS.
- Content Discovery: For finding which peers have specific content, Iroh has an experimental content discovery system that uses the Mainline DHT to discover trackers (not peers directly), which then point to content providers. This is less mature than node discovery.
Key difference from your current setup: Your Kademlia DHT both discovers peers AND routes content requests. Iroh separates these concerns: the Mainline DHT finds peers, but content routing requires additional infrastructure (trackers or application-level logic).
NAT Traversal#
This is Iroh's strongest selling point. Iroh's approach is modeled on Tailscale's NAT traversal:
- DERP Relays: All connections initially go through relay servers (Designated Encrypted Relay for Packets). Three relay servers are run by n0-computer (US, EU, Asia). You can self-host relay servers -- the `iroh-relay` crate provides a complete implementation.
- Hole Punching: Once connected via relay, peers coordinate to establish a direct UDP connection. If hole-punching succeeds, traffic seamlessly moves to the direct path. If not, the relay remains as fallback.
- Success Rate: Tailscale reports >90% direct connection success rate. Iroh claims "really high" hole-punching rates. No exact number has been published for Iroh specifically, but the architecture mirrors Tailscale's proven approach. For comparison, libp2p's measured success rate is 70% +/- 7.1% based on 4.4 million traversal attempts.
- Future: Iroh is working on integrating hole-punching directly into QUIC via the QUIC-NAT-TRAVERSAL draft.
SPOF concern: The default relay servers are centralized (run by n0-computer). However, relay servers can be self-hosted, and the Mainline DHT is fully decentralized. If n0-computer stops, you would need to run your own relay servers, but the DHT continues independently.
JavaScript/Node.js Availability#
This is a significant limitation for your Node.js-based PDS:
- `@number0/iroh` npm package: Version 0.35.0, using NAPI bindings generated from iroh-ffi. This embeds a native Rust iroh node inside Node.js.
- iroh-js: A separate JavaScript client that talks to an Iroh node via RPC. Work-in-progress.
- Browser/WASM: Alpha support via wasm-bindgen. Not production-ready.
- FFI path forward: The iroh team has stated that deciding on the FFI binding strategy is part of the 1.0 roadmap. The current NAPI bindings work but are not the top priority.
Practical assessment: The NAPI bindings exist and are usable TODAY from Node.js. However, they are at version 0.35.0 (last published ~September 2025), lagging behind the Rust crate. The API surface exposed to Node.js may not cover all features. The iroh team's primary focus is Rust, with JS bindings being a secondary concern.
Bus Factor / Organizational Risk#
- Company: Number 0 is a founder-backed startup with some VC funding. Co-founded by Brendan O'Brien (previously Qri). Specific funding amounts are not publicly disclosed.
- Team: The GitHub contributors page shows a core team of roughly 5-8 active contributors, with 2-3 doing the bulk of commits.
- Open Source: Apache-2.0/MIT dual license. Code is fully open source.
- Risk: If n0-computer runs out of funding, the code continues to exist and can be forked. The Mainline DHT is independent. However, the relay servers would need community operation, and active development would stall. The team is small enough that losing 2-3 key contributors could severely slow development.
- Positive signal: Iroh is approaching 1.0, which would provide a stable API target even if active development slows.
Could Iroh Be a libp2p Transport?#
Yes, experimentally. A libp2p-iroh crate exists (posted October 2025 on the libp2p forum) that implements iroh QUIC connections as a libp2p Transport and StreamMuxer. There is also a go-libp2p version using FFI. However:
- These are Rust crates, not available in JS
- The bridge is experimental and community-maintained
- It would give libp2p nodes Iroh's NAT traversal while keeping the libp2p protocol stack
2. Helia/libp2p Current State Assessment#
Ecosystem Health#
Interplanetary Shipyard is the independent engineering collective (spun out of Protocol Labs in 2024) that maintains Helia, js-libp2p, Kubo, go-libp2p, and related projects.
- Team size: 11-50 employees (LinkedIn range)
- Funding: Raising $3M in community contributions with Protocol Labs as anchor financial partner for 2024-2025
- Recent work: Released js-libp2p developer tools in 2025; active work-plans published for 2025
- Helia npm downloads: ~5,962 weekly (modest but stable)
- libp2p npm package: Actively maintained with releases
- Impact: 75 million monthly active users across their tools
Known Pain Points#
- DHT Performance: Memory growth when DHT discovery is enabled; potential OOM issues. Known issue with EventTarget memory leaks. Heavy performance penalty with DHT enabled.
- Startup Time: Helia with full networking (libp2p + bitswap + DHT) has non-trivial startup time, as seen in your codebase where IPFS starts after the HTTP server is already listening.
- Connection Management: Managing connections, peer limits, and resource usage requires careful tuning.
- JS Implementation Gap: js-libp2p historically lags behind go-libp2p and rust-libp2p in features and performance. The JS DHT implementation has had long-standing spec compliance issues.
NAT Traversal in Practice#
- AutoNAT: Detects whether a node is behind NAT by asking peers to dial back
- DCUtR (Direct Connection Upgrade through Relay): Decentralized hole punching without central STUN/TURN servers
- Measured success rate: 70% +/- 7.1% based on a large-scale measurement campaign of 4.4 million traversal attempts
- Relay fallback: Circuit Relay v2 is lightweight but adds latency. Who runs the relay servers? Anyone can, but discovery of relay servers depends on the DHT.
- Compared to Iroh/Tailscale: The ~70% success rate is notably lower than Tailscale's >90% due to architectural differences (libp2p's DCUtR coordinates via the DHT rather than dedicated relay infrastructure)
Centralized Dependencies#
This is a critical concern:
- Bootstrap nodes: The default IPFS/libp2p bootstrap nodes are operated by Protocol Labs / Interplanetary Shipyard. These are the initial entry points to the DHT. Without them (or replacements), new nodes cannot join the network.
- Relay servers: While anyone can run a relay, discovering relays depends on the DHT, which depends on bootstrap nodes.
- Implementation diversity: Shipyard has deployed both Kubo and rust-libp2p-server bootstrap nodes to increase resilience.
- Mitigation: You can run your own bootstrap nodes and configure your nodes to use them. The DHT itself is decentralized once bootstrapped, but the initial bootstrap step remains a vulnerability.
If Shipyard/Protocol Labs Stops#
- The DHT continues as long as enough nodes remain in the network. The IPFS DHT has millions of nodes; it would take a catastrophic event to kill it.
- Bootstrap nodes would eventually go offline, but the network has inertia and other organizations run public IPFS nodes.
- Active development of js-libp2p and Helia would stall or slow dramatically. The code is open source (Apache-2.0/MIT) and could be forked, but the JS ecosystem specifically would be at risk given the smaller contributor base compared to Go.
- Shipyard's $3M fundraise suggests they are aware of sustainability risks and actively addressing them.
3. Other Transport Options#
Hypercore/Hyperswarm (Dat Ecosystem)#
What it is: Hyperswarm is a distributed networking stack built by Holepunch (led by Mafintosh/Mathias Buus). It uses a custom Kademlia DHT with built-in hole punching, and Hypercore for append-only log replication.
Key characteristics:
- DHT: Custom Kademlia DHT over UDP. Bootstrap nodes at `node1.hyperdht.org`, `node2.hyperdht.org`, and `node3.hyperdht.org` -- run "on behalf of the commons" by Holepunch. You can run a fully private DHT.
- NAT traversal: First-class hole punching built into the DHT itself. Any DHT node can assist with hole punching. Uses uTP over UDP.
- Encryption: Noise protocol for all connections. End-to-end encrypted.
- JavaScript: Native Node.js -- this is a JS-first ecosystem. `hyperswarm` and `hyperdht` are npm packages.
- Content addressing: Hypercore uses BLAKE2b hashes in Merkle trees. Not CID-compatible natively.
- Sparse replication: Hypercore supports fetching only needed blocks from append-only logs.
Bus factor / risk:
- Holepunch is a company that has pivoted to Pear Runtime (a P2P application platform). The Hyperswarm/Hypercore stack is the foundation but the company's focus has shifted.
- Mafintosh is the primary architect; significant bus factor risk.
- Bootstrap nodes are centralized (3 nodes run by Holepunch).
- The ecosystem is smaller than IPFS but has loyal users (Keet messenger, etc.).
Active development — Searchable DHT Records (holepunchto/hyperdht#231):
An open PR adds searchable records with SimHash-based similarity search directly in the HyperDHT. This is significant for content routing and peer discovery:
- `searchableRecordPut(hash, pointer)` — store a record in the DHT with a SimHash token and a 32-byte pointer
- `search(hash, { closest, values })` — find records by SimHash similarity; returns results sorted by Hamming distance
- Records have a 48-hour TTL with automatic garbage collection
- The feature is experimental (opt-in via the `experimentalSearch` flag)
- Uses the `simhash-vocabulary` package for locality-sensitive hashing of token arrays
Why this matters for P2PDS: This could enable DHT-based content routing without centralized infrastructure. Potential uses:
- "Which peers replicate DID X?" — store `simhash(["did", "plc", "abc123"])` → pointer to peer info
- Policy discovery — search for peers with matching replication policies
- Content routing — associate CIDs or DID collections with peer addresses in the DHT
- Unlike IPFS's content routing (which maps individual CIDs to providers), this supports semantic/fuzzy search over structured tokens — a better fit for account-level discovery ("who has data for this DID?") than block-level routing ("who has this specific CID?")
Fit for P2PDS:
- Excellent Node.js support (native)
- Good NAT traversal with built-in hole punching
- DHT-based discovery without centralized infrastructure (once bootstrapped)
- NOT CID-compatible -- would require a translation layer for atproto's SHA-256 CIDs
- Hypercore's append-only log model maps well to atproto repo commits, but the content addressing is different
- Searchable DHT records (in development) could provide decentralized DID→peer discovery
rust-libp2p via FFI#
What it is: The Rust implementation of libp2p, which is more actively maintained and performant than js-libp2p.
Practical considerations:
- No existing NAPI bindings for Node.js -- you would have to build them yourself using napi-rs
- Significant engineering effort to wrap the async Rust API for Node.js consumption
- Would give you better DHT performance and connection management than js-libp2p
- Maintains full IPFS network interoperability
- No one has done this yet, making it a research project rather than a practical option today
Plain WebRTC#
What it is: Browser-native P2P protocol with ICE/STUN/TURN for NAT traversal.
Assessment:
- STUN/TURN dependency: Requires centralized STUN servers for address discovery and TURN servers as relay fallback. ~80% of connections can be established with STUN alone; the rest need TURN.
- Signaling server: WebRTC requires an out-of-band signaling mechanism to exchange session descriptions -- this is always centralized.
- Not a fit: WebRTC is designed for real-time media (audio/video) and data channels. Using it as a general-purpose P2P data transport adds complexity without benefit over libp2p or Iroh. The centralized signaling requirement violates your "no SPOF" constraint.
WireGuard/Tailscale-style Mesh#
What it is: Overlay VPN networks that create flat, addressable mesh networks.
Assessment:
- Tailscale is centralized (requires their coordination server for key exchange, though Headscale is an open-source alternative)
- These solve the "reach any peer" problem but not content discovery or content-addressed routing
- Would be a lower-layer transport that your application code runs on top of
- Practical for a small, known set of PDS nodes but not for open P2P networks
Yggdrasil/cjdns#
What it is: Yggdrasil is an encrypted IPv6 overlay mesh network. Every node gets a unique IPv6 address derived from its public key. cjdns is similar but older.
Assessment:
- Solves NAT traversal elegantly -- every Yggdrasil node is directly addressable via its overlay IPv6 address
- End-to-end encrypted
- Self-arranging mesh topology
- No content discovery -- this is a network layer, not an application protocol
- Go implementation (yggdrasil-go) with no native JS integration
- Very small user base (~thousands of nodes)
- Good for a "reachability" layer but you would still need content routing on top
Nostr Relays#
What it is: Simple WebSocket-based relay architecture from the Nostr protocol.
Assessment:
- Extremely simple: clients publish events to relays and subscribe to events from relays
- No P2P: Clients connect to relays, not to each other. Relays are servers.
- Good for message passing and event distribution but fundamentally not P2P
- Does not solve content-addressed block exchange
- Could be interesting as a signaling/coordination layer alongside a P2P transport
4. Compatibility Analysis Matrix#
| Feature | Helia/libp2p (current) | Iroh | Hyperswarm | rust-libp2p FFI | WebRTC | Yggdrasil |
|---|---|---|---|---|---|---|
| CID (SHA-256) support | Native | No (BLAKE3) | No (BLAKE2b) | Native | N/A (transport only) | N/A |
| DHT without centralized infra | Yes (Kademlia) | Yes (Mainline DHT via pkarr) | Yes (custom Kademlia) | Yes (Kademlia) | No (needs signaling) | No DHT (mesh routing) |
| NAT traversal reliability | ~70% | >90% (Tailscale-style) | Good (built-in) | ~70% (same as js) | ~80% STUN, TURN fallback | Full (overlay) |
| JS/TS availability | Native (npm) | NAPI bindings (npm) | Native (npm) | Must build | Native (browser) | None |
| IPFS network interop | Full | None (diverged) | None | Full | None | None |
| Bus factor | Shipyard (11-50, $3M) | n0 (5-8 core, VC) | Holepunch (~small) | Same as libp2p | Web standard | Volunteers |
| Maturity | Production | Pre-1.0 (RC Feb 2026) | Production | Production | Production | Alpha/Beta |
| Verified streaming | Bitswap (per-block) | BLAKE3 native | Hypercore (per-chunk) | Bitswap | None | None |
| Relay fallback | Circuit Relay v2 | DERP (self-hostable) | DHT-assisted | Circuit Relay v2 | TURN servers | Mesh routing |
5. Hybrid Approaches#
Option A: Iroh as Transport Only#
Use Iroh purely for connection establishment and NAT traversal, while keeping your own content-addressing layer:
[Your RASL HTTP endpoint] <-> [Iroh QUIC tunnel] <-> [Remote peer]
- Iroh establishes the connection using its MagicSocket + DERP relay + hole punching
- Once connected, you send HTTP-like requests over the QUIC stream for CID-addressed blocks
- You keep your SHA-256 CID scheme and blockstore intact
- You lose IPFS network interop (cannot exchange blocks with standard IPFS nodes)
- You gain superior NAT traversal (~90% vs ~70%)
Feasibility: This is architecturally sound. Iroh's core value IS connection establishment. The @number0/iroh npm package should support creating endpoints and establishing connections. You would need to implement your own block exchange protocol on top of Iroh's QUIC streams.
Option B: libp2p for IPFS Interop + Iroh for Connectivity#
Run both stacks simultaneously:
- Helia/libp2p for IPFS DHT participation and bitswap block exchange with the broader IPFS network
- Iroh for direct peer connections between P2PDS nodes (better NAT traversal, faster connections)
Challenges:
- Double the resource usage (two networking stacks)
- Complexity of managing two peer identity systems (libp2p PeerId vs. Iroh NodeId/ed25519 key)
- Both are async network stacks in the same process -- potential event loop contention in Node.js
- The NAPI bindings for Iroh add a native Rust runtime alongside Node.js
Option C: Transport-Agnostic Block Layer#
Your RASL HTTP endpoints (/.well-known/rasl/:cid) already demonstrate transport-agnostic design:
- HTTP RASL: Current, works over any HTTP transport
- libp2p Bitswap: Current, for IPFS network participation
- Direct QUIC (via Iroh or plain Quinn): For peer-to-peer block exchange between known PDS nodes
- Hyperswarm: For discovery and connection, with custom block exchange on top
The block layer interface is simple:
- `getBlock(cid) -> bytes | null`
- `putBlock(cid, bytes)`
- `provideBlock(cid)` (announce availability)
Your `IpfsService` class at `/Users/dietrich/misc/p2pds/src/ipfs.ts` already abstracts this. The key insight is that any transport that can carry (CID, bytes) pairs works. The content-addressed verification (CID = hash of bytes) is transport-independent.
Option D: Hyperswarm for Discovery + Custom Block Exchange#
Given that Hyperswarm is native Node.js with excellent NAT traversal:
- Use HyperDHT for peer discovery (find other P2PDS nodes)
- Establish encrypted connections via Hyperswarm
- Implement CID-based block exchange over the Hyperswarm connection streams
- Optionally maintain Helia for IPFS network interop
This avoids the FFI boundary of Iroh while getting good NAT traversal and a decentralized DHT.
6. Migration Risk Assessment#
If You Stay with Helia/libp2p#
Risks:
- js-libp2p DHT performance issues (memory leaks, slow provider lookups)
- 70% NAT traversal success rate means roughly 30% of peer connections fall back to relays or fail outright, so home users behind restrictive NATs may have connectivity issues
- Dependency on Shipyard's continued maintenance (~$3M/yr needed)
- If Shipyard dissolves, js-libp2p maintenance falls to community volunteers (small pool)
- Bootstrap node dependency (Protocol Labs/Shipyard operated)
Mitigations:
- Run your own bootstrap and relay nodes
- The IPFS DHT is large enough to survive organizational changes
- IPFS ecosystem has network effects and multiple implementations (Kubo, Helia, rust-libp2p)
- Your RASL HTTP endpoints provide a fallback that works without any P2P stack
If You Switch to Iroh#
Migration cost:
- Replace Helia with the `@number0/iroh` NAPI bindings (manageable, your `IpfsService` class is a clean abstraction)
- Lose IPFS network interoperability entirely
- Need to implement your own block exchange protocol over Iroh QUIC streams (since iroh-blobs uses BLAKE3, not SHA-256)
- Need to handle peer discovery differently (Pkarr/Mainline DHT for nodes, custom logic for content)
- Need to run or rely on DERP relay servers
What you lose:
- IPFS network participation (cannot exchange blocks with standard IPFS nodes)
- Mature, battle-tested DHT for content routing
- Large existing peer network
What you gain:
- ~90% NAT traversal success (vs. ~70%)
- Faster connections (QUIC, 0-RTT)
- QUIC Multipath (connection survives network changes)
- Simpler connection management
- Better performance characteristics
If n0-computer stops:
- Code is open source and can be forked
- Relay servers would need community operation (self-hostable)
- Mainline DHT continues independently (BitTorrent ecosystem)
- Development would stall; 5-8 person team means limited bus factor
- If Iroh reaches 1.0 (targeted Feb 2026), a stable API target exists
If Protocol Labs/Shipyard Stops#
- DHT bootstrap nodes eventually go offline, but the network has massive inertia
- Active development of js-libp2p/Helia stalls
- The code is open source; community could maintain it
- Other organizations (Cloudflare, Pinata, etc.) run IPFS infrastructure that would persist
- The Go (Kubo) and Rust implementations have separate maintainer communities
If n0-computer Stops#
- Three DERP relay servers go offline (US, EU, Asia) -- you would need your own
- Active development stops (small team, limited outside contributors)
- Mainline DHT discovery continues (BitTorrent ecosystem owns this)
- NAPI bindings for Node.js would likely be the first to bit-rot
Recommendation#
Given your constraints -- no single points of failure, NAT traversal, content-addressed verification (SHA-256 CIDs per atproto spec), JavaScript/TypeScript, AT Protocol compatibility -- here is my recommended path:
Short-Term (Now): Stay with Helia, Harden It#
- Keep Helia/libp2p as your primary P2P stack. It is the only option that provides native CID (SHA-256) support, IPFS network interoperability, and a mature JS/TS implementation.
- Run your own bootstrap and relay nodes. Do not depend solely on Protocol Labs/Shipyard infrastructure. Add your P2PDS bootstrap nodes to the configuration.
- Lean on RASL HTTP endpoints as the primary verification mechanism. Your current architecture where RASL (`/.well-known/rasl/:cid`) works over plain HTTP is your most resilient layer. It works regardless of which P2P transport is underneath. Any peer with an HTTP endpoint can serve blocks.
Medium-Term (3-6 months): Evaluate Hyperswarm as Supplementary Discovery#
- Add Hyperswarm as a supplementary peer discovery and connection layer alongside Helia. Hyperswarm is native Node.js (no FFI), has a mature DHT with excellent built-in hole punching, and its architecture is compatible with running alongside libp2p.
- Architecture: Use Hyperswarm to discover and connect to other P2PDS nodes specifically. Once connected, exchange blocks using your own simple CID-based protocol over Hyperswarm's encrypted streams. Keep Helia for broader IPFS network participation.
- Searchable DHT records (holepunchto/hyperdht#231): Monitor this PR closely. Once merged, HyperDHT gains SimHash-based searchable records — enabling decentralized "which peers replicate DID X?" queries directly in the DHT without centralized infrastructure. This maps well to P2PDS's account-level discovery needs and could replace or supplement atproto-based peer discovery (`org.p2pds.manifest` record scanning).
- This gives you: two independent DHTs (IPFS Kademlia + Hyperswarm's Kademlia), better NAT traversal for PDS-to-PDS connections, no dependency on a single networking stack, and (with searchable records) decentralized DID-to-peer routing.
Long-Term (6-12 months): Monitor Iroh 1.0, Design Transport Abstraction#
- Watch Iroh 1.0 (targeted early 2026). Once stable, evaluate the Node.js NAPI bindings for maturity. If the bindings are well-maintained and the API is stable, Iroh becomes a strong candidate for replacing the connection layer.
- Design a formal transport abstraction. Your `IpfsService` is already halfway there. Formalize a `BlockTransport` interface:
  - `connect(peerId): Connection`
  - `getBlock(connection, cid): bytes`
  - `announceBlock(cid)`
  - `discoverPeersFor(cid): PeerId[]`

  Implement this for Helia, Hyperswarm, and eventually Iroh. This lets you swap transports or run multiple simultaneously.
- Consider Iroh purely as a connectivity layer (Option A from the hybrid section). Use it for connection establishment and NAT traversal, but run your own CID-based block exchange on top. This avoids the BLAKE3/SHA-256 incompatibility entirely.
Why NOT Switch to Iroh Now#
- BLAKE3/SHA-256 incompatibility: atproto mandates SHA-256 CIDs. Iroh's blob layer is built around BLAKE3. You would have to bypass iroh-blobs entirely and build your own block exchange, negating much of Iroh's value proposition beyond connection establishment.
- Pre-1.0 instability: The API is still changing (canary series 0.90-0.96). The Node.js bindings lag behind the Rust crate.
- Loss of IPFS interop: Switching to Iroh means your blocks are only available to other Iroh nodes. With Helia, any IPFS node can fetch your blocks.
- Bus factor: n0-computer is a small startup. Shipyard is small too, but the IPFS ecosystem has broader organizational support.
The Critical Insight#
Your RASL HTTP layer is your most important architectural decision. It decouples content verification from transport entirely. As long as peers can reach each other over HTTP (directly, via relay, via CDN, via Tor, via carrier pigeon with USB sticks), blocks can be verified by CID. The P2P transport layer is primarily about finding peers and establishing connections -- the actual block verification is handled by content addressing at the application layer.
This means the transport choice is less existential than it might seem. You can start with Helia, add Hyperswarm, and eventually add Iroh -- all without changing your block storage, verification, or RASL endpoint code.
Sources#
- Iroh GitHub
- Iroh 1.0 Roadmap
- Comparing Iroh & Libp2p
- Iroh & IPFS Compatibility
- Iroh Discovery Concepts
- Iroh Relay Concepts
- Iroh Hole Punching
- Iroh Pkarr DHT Integration
- Iroh Global Node Discovery
- Iroh Content Discovery Experiments
- Iroh BLAKE3 Hazmat API
- iroh-blobs Documentation
- Iroh Node.js Announcement (0.23)
- @number0/iroh npm Package
- iroh-ffi GitHub
- iroh-gossip Documentation
- n0-computer Website
- GuardianDB Migration to Iroh
- libp2p-iroh Bridge (libp2p Forum)
- libp2p-iroh Crate
- NAT Traversal Large-Scale Measurement (arXiv)
- libp2p NAT Traversal Docs
- Tailscale NAT Traversal Improvements
- Tailscale How NAT Traversal Works
- Interplanetary Shipyard
- Shipyard Hello World Announcement
- js-libp2p Helia Developer Tools
- Shipyard 2025 Work Plans (IPFS Forum)
- IPFS Bootstrap Nodes Issue
- Helia GitHub
- HyperDHT GitHub
- Hyperswarm GitHub
- Pear Runtime Docs
- AT Protocol Data Model Spec
- AT Protocol Repository Spec
- P2P Networking: WebRTC vs libp2p vs Iroh (Medium)
- Yggdrasil Network
- DIAP Hybrid P2P Stack Paper (arXiv)