
restructure 0.16 io notes from single file to folder

split io.md into io/ with four files, based on reading the official
std library docs and devlog:
- README.md: overview, backends, design philosophy
- concurrency.md: async/concurrent, Future, Group, Select, Queue
- synchronization.md: Mutex, Condition, CancelProtection, cancellation
- patterns.md: InitOptions, backend selection, debug_io, lifecycle

update relay-integration.md with Io boundary table showing which
pipeline stages use io.concurrent vs std.Thread.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

+643 -162
+6 -2
languages/ziglang/0.16/README.md
···
  
  ## notes
  
- - [io](./io.md) - std.Io interface, async/concurrent, no function coloring
- - [migration](./migration.md) - practical API changes, verified patterns
+ - [io/](./io/) — std.Io interface, concurrency primitives, synchronization, patterns
+   - [README](./io/README.md) — overview, backends, design philosophy
+   - [concurrency](./io/concurrency.md) — async vs concurrent, Future, Group, Select, Queue
+   - [synchronization](./io/synchronization.md) — Mutex, Condition, CancelProtection, cancellation model
+   - [patterns](./io/patterns.md) — backend selection, InitOptions, debug_io, long-lived tasks, networking
+ - [migration](./migration.md) — practical API changes from 0.15, verified patterns
-154
languages/ziglang/0.16/io.md
···
- # i/o
- 
- 0.16 overhauls I/O with the `std.Io` interface. everything that can block (filesystem, networking, timers) moves to this interface.
- 
- sources: [kristoff.it](https://kristoff.it/blog/zig-new-async-io/), [andrewkelley.me](https://andrewkelley.me/post/zig-new-async-io-text-version.html), [porting guide](https://sheran.sg/blog/porting-dns-from-zig-0.15-to-0.16/)
- 
- ## std.Io interface
- 
- like `Allocator` for memory, `Io` is passed to functions that do I/O:
- 
- ```zig
- fn fetchData(io: std.Io, allocator: Allocator) ![]u8 {
-     // io provides networking, timers, etc.
- }
- ```
- 
- this decouples code from execution model. same code works with threads, event loops, or blocking I/O.
- 
- ## async vs concurrent
- 
- two primitives with different semantics:
- 
- **async/await** - operations *can* happen independently. infallible, works on limited implementations:
- 
- ```zig
- const future = io.async(someFunction, .{args});
- // ... do other work ...
- const result = future.await();
- ```
- 
- **concurrent** - operations *must* happen simultaneously for correctness. can fail:
- 
- ```zig
- const future = io.concurrent(someFunction, .{args}) catch |err| {
-     // error.ConcurrencyUnavailable on single-threaded systems
- };
- ```
- 
- use `async` when you want asynchrony. use `concurrent` when you need parallelism.
- 
- ## cancellation (major footgun)
- 
- `io.async()` returns a future with its own stack. if you `try` an error before `await`ing, you leak. always defer cancel:
- 
- ```zig
- const future = io.async(someFunction, .{args});
- defer future.cancel(io) catch {}; // idempotent — safe even after await
- 
- // ... work that might fail with try ...
- 
- const result = future.await();
- ```
- 
- `cancel` and `await` share identical semantics — calling either one consumes the future. the deferred cancel is a no-op if await already ran. this pattern prevents resource leaks on early error returns.
- 
- `Cancelable!void` is the return type for cancellable operations (sleep, I/O). propagate it with `try` to let callers cancel subscriptions/loops from the outside.
- 
- ## no function coloring (mostly)
- 
- `async` and `await` are library functions, not keywords. no viral async/await infection through call stacks. any function can be called with `io.async()`.
- 
- ```zig
- // foo is a normal function
- fn foo(io: std.Io) !void { ... }
- 
- // can be called normally
- try foo(io);
- 
- // or asynchronously
- const future = io.async(foo, .{io});
- ```
- 
- **caveat**: coloring shifted from keyword-based to parameter-based. if a pure computation function later needs I/O (file read, random, sleep), it must accept an `Io` parameter, which propagates to all callers. the advantage: code stays agnostic to the execution model (sync vs async vs evented).
- 
- ## std.net moved
- 
- networking moves from `std.net` to `Io.net`:
- 
- ```zig
- // 0.15
- const addrs = try std.net.getAddressList(allocator, hostname, 0);
- 
- // 0.16
- const future = io.concurrent(Io.net.HostName.lookup, .{...});
- // results come through a queue
- ```
- 
- ## what doesn't change
- 
- atomics and `std.Thread.spawn`/`.detach()` remain. `posix.setsockopt`/`SOL`/`SO` and `posix.Sigaction`/`sigaction` are unchanged.
- 
- **note**: `std.Thread.Mutex` is replaced by `std.Io.Mutex` which requires an `io` parameter for lock/unlock. see [migration.md](./migration.md) for details.
- 
- ## std.Options.debug_io is NOT for application code
- 
- `std.Options.debug_io` is backed by `Io.Threaded.global_single_threaded`:
- 
- ```zig
- pub const init_single_threaded: Threaded = .{
-     .allocator = .failing,
-     .async_limit = .nothing,
-     .concurrent_limit = .nothing,
-     // ...
- };
- /// This instance does not support concurrency or cancelation.
- pub const global_single_threaded: *Threaded = &global_single_threaded_instance;
- ```
- 
- it's designed for `std.debug.print` and stack trace capture — not application I/O. using it for mutex locks, sleeps, or network ops in a multi-threaded program will silently serialize everything.
- 
- **symptom**: coral went from ~60 events/s (pre-0.16 with `std.Thread.Mutex`) to ~4/s after migrating to `Io.Mutex` with `std.Options.debug_io`. no errors, just slow.
- 
- ### override it in your root source file
- 
- ```zig
- const std = @import("std");
- const Io = std.Io;
- 
- // global storage, initialized in main()
- var app_threaded_io: Io.Threaded = undefined;
- 
- // tells std to use our instance instead of global_single_threaded
- pub const std_options_debug_threaded_io: ?*Io.Threaded = &app_threaded_io;
- 
- // all std.Options.debug_io references now use the real threaded instance
- const io = std.Options.debug_io;
- 
- pub fn main() !void {
-     const allocator = std.heap.smp_allocator;
-     app_threaded_io = Io.Threaded.init(allocator, .{});
-     // ...
- }
- ```
- 
- this works because `std.Options.debug_io` resolves to `debug_threaded_io.?.io()`, and `debug_threaded_io` checks for the `std_options_debug_threaded_io` decl in the root source file. the `Io` struct just holds a pointer to the `Threaded` — so even though the `Threaded` is `undefined` at comptime, the pointer is stable and the data is accessed at runtime after `main()` initializes it.
- 
- **result**: zero changes to any other file. all existing `std.Options.debug_io` usage across the codebase automatically gets the multi-threaded implementation.
- 
- ### or just pass io explicitly
- 
- the more "correct" approach: create `Io.Threaded` in `main()`, call `.io()`, and thread it through your functions as a parameter. avoids globals entirely. but overriding `debug_threaded_io` is much less invasive for existing codebases.
- 
- ## implementations
- 
- - `std.Io.Threaded` - thread-based, no event loop
- - `std.Io.Evented` - experimental backends have landed:
-   - **linux**: `io_uring` (`std.Io.Uring`)
-   - **macOS/iOS**: Grand Central Dispatch (`std.Io.Dispatch`)
-   - **BSD**: kqueue (`std.Io.Kqueue`)
-   - these use userspace stack switching (fibers/green threads) for massive concurrency without OS thread overhead
-   - currently experimental — `Threaded` is the production default
-   - **WASM**: fiber-based backends can't work (no stack switching). stackless coroutines planned as a future compiler feature — compiler will infer suspension points and rewrite to state machines.
- 
- the interface is concrete, not generic - same benefits as 0.15's explicit buffer approach.
+86
languages/ziglang/0.16/io/README.md
···
+ # std.Io
+ 
+ 0.16 overhauls I/O with the `std.Io` interface. everything that can block (filesystem, networking, timers, concurrency) moves through this interface.
+ 
+ sources:
+ - [std library docs](https://ziglang.org/documentation/master/std/#std.Io) (JS-rendered, use browser to read)
+ - [devlog 2025-2026](https://ziglang.org/devlog/) — design rationale from Andrew Kelley
+ - [codeberg source](https://codeberg.org/ziglang/zig) — `lib/std/Io.zig`, `lib/std/Io/Threaded.zig`
+ - [kristoff.it](https://kristoff.it/blog/zig-new-async-io/), [andrewkelley.me](https://andrewkelley.me/post/zig-new-async-io-text-version.html), [porting guide](https://sheran.sg/blog/porting-dns-from-zig-0.15-to-0.16/)
+ 
+ ## the interface
+ 
+ like `Allocator` for memory, `Io` is passed to functions that do I/O:
+ 
+ ```zig
+ fn fetchData(io: std.Io, allocator: Allocator) ![]u8 {
+     // io provides networking, timers, concurrency, etc.
+ }
+ ```
+ 
+ this decouples code from execution model. from the devlog (Oct 2025):
+ 
+ > Regardless of whether Io is implemented via threads, or via an event loop, this code behaves optimally. The code also works when using single-threaded, blocking Io even though the operations happen sequentially.
+ 
+ same code, three execution models. write once, swap backend at init.
+ 
+ ## what Io provides
+ 
+ - file system, networking, processes
+ - time and sleeping
+ - randomness
+ - `async`, `await`, `concurrent`, and `cancel`
+ - concurrent queues (`Io.Queue`)
+ - wait groups and select (`Io.Group`, `Io.Select`)
+ - mutexes, futexes, events, and conditions (`Io.Mutex`, `Io.Condition`, `Io.Event`)
+ - memory mapped files
+ 
+ ## backends
+ 
+ - `Io.Threaded` — thread-based, always available, production default
+ - `Io.Evented` — fiber-based, experimental:
+   - linux: `Io.Uring` (io_uring)
+   - macOS/iOS: `Io.Dispatch` (GCD)
+   - BSD: `Io.Kqueue`
+   - unsupported platforms: `void`
+   - uses userspace stack switching (fibers/green threads)
+   - currently experimental — known performance issues to diagnose
+ - WASM: fiber-based backends can't work (no stack switching). stackless coroutines planned as future compiler feature.
+ 
+ backend selection:
+ ```zig
+ const Backend = if (Io.Evented != void) Io.Evented else Io.Threaded;
+ ```
+ 
+ init differs by backend:
+ ```zig
+ // Threaded
+ backend = Io.Threaded.init(allocator, .{});
+ 
+ // Evented
+ try Backend.init(&backend, allocator, .{});
+ ```
+ 
+ both expose `.io()` → `std.Io`. all downstream code uses the same `Io` value.
+ 
+ ## no function coloring (mostly)
+ 
+ `async` and `await` are library functions, not keywords. no viral async/await infection.
+ 
+ ```zig
+ fn foo(io: std.Io) !void { ... }
+ 
+ // called normally
+ try foo(io);
+ 
+ // or asynchronously
+ const future = io.async(foo, .{io});
+ ```
+ 
+ **caveat**: coloring shifted from keyword-based to parameter-based. if a pure computation function later needs I/O, it must accept an `Io` parameter, which propagates to callers. the advantage: code stays agnostic to the execution model.
+ 
+ ## files in this folder
+ 
+ - [concurrency.md](./concurrency.md) — async vs concurrent, Future, Group, Select, Queue
+ - [synchronization.md](./synchronization.md) — Mutex, Condition, CancelProtection, cancellation model
+ - [patterns.md](./patterns.md) — backend selection, InitOptions, debug_io, long-lived tasks, timedWait workarounds
+172
languages/ziglang/0.16/io/concurrency.md
···
+ # concurrency primitives
+ 
+ ## async vs concurrent
+ 
+ two task-spawning mechanisms with different semantics:
+ 
+ ### io.async — "run this, I'll need the result"
+ 
+ ```zig
+ const future = io.async(someFunction, .{args});
+ // ... do other work ...
+ const result = future.await(io);
+ ```
+ 
+ - infallible (always succeeds in spawning)
+ - under Threaded: bounded pool (`async_limit`, default = CPU count - 1). overflow runs task **inline** on caller's thread.
+ - under Evented: fiber (cheap)
+ - use when you want asynchrony but can tolerate inline fallback
+ 
+ ### io.concurrent — "run this independently"
+ 
+ ```zig
+ const future = io.concurrent(someFunction, .{args}) catch |err| {
+     // error.ConcurrencyUnavailable when limit reached
+ };
+ ```
+ 
+ - fallible — returns `ConcurrentError!Future(R)`
+ - under Threaded: unbounded by default (`concurrent_limit = .unlimited`). overflow returns `error.ConcurrencyUnavailable`.
+ - under Evented: fiber (cheap)
+ - use when correctness requires actual parallelism (e.g., long-lived I/O loops)
+ 
+ ### summary
+ 
+ | | `io.async()` | `io.concurrent()` |
+ |---|---|---|
+ | Threaded | bounded pool, overflow runs inline | unbounded (default), overflow is error |
+ | Evented | fiber | fiber |
+ | failure mode | never fails, degrades to inline | returns error |
+ | semantic intent | "can happen in parallel" | "must happen in parallel" |
+ 
+ ## Future
+ 
+ returned by both `io.async()` and `io.concurrent()`:
+ 
+ ```zig
+ const Io = std.Io;
+ var future = try io.concurrent(worker, .{io, &state});
+ ```
+ 
+ ### await
+ 
+ ```zig
+ const result = future.await(io);
+ ```
+ 
+ blocks until the task completes. idempotent. **NOT** threadsafe — only call from the parent task.
+ 
+ ### cancel
+ 
+ ```zig
+ const result = future.cancel(io);
+ ```
+ 
+ equivalent to `await` but sends a cancellation request first. the task receives `error.Canceled` from its next cancellation point (any `Io` function that returns `Cancelable!T`).
+ 
+ ### defer pattern (always do this)
+ 
+ ```zig
+ var task = try io.concurrent(worker, .{io, &state});
+ defer _ = task.cancel(io); // safe even if await already ran — idempotent
+ 
+ // ... work that might fail with try ...
+ 
+ _ = task.await(io);
+ ```
+ 
+ if you `try` something before `await`, the future leaks without the defer.
+ 
+ ## Group — unordered task set
+ 
+ an unordered set of tasks awaited or canceled as a whole:
+ 
+ ```zig
+ var group: Io.Group = .init;
+ 
+ // spawn tasks into the group (no individual Future returned)
+ group.concurrent(io, handleSubscriber, .{sub1}) catch {};
+ group.concurrent(io, handleSubscriber, .{sub2}) catch {};
+ group.concurrent(io, handleSubscriber, .{sub3}) catch {};
+ 
+ // wait for all
+ group.await(io) catch {};
+ 
+ // or cancel all
+ group.cancel(io);
+ ```
+ 
+ key properties from the docs:
+ 
+ > The resources associated with each task are **guaranteed** to be released when the individual task returns, as opposed to when the whole group completes.
+ 
+ this means it's safe to have a long-lived group with tasks dynamically added and removed. individual task cleanup happens immediately on return — you don't need to await the group just to free one task's resources.
+ 
+ - `group.async(io, fn, args)` — like `io.async` but owned by group
+ - `group.concurrent(io, fn, args)` — like `io.concurrent` but owned by group
+ - `group.await(io)` — blocks until all tasks finish. cancellation propagates to all members.
+ - `group.cancel(io)` — immediately cancels all members and waits
+ 
+ use cases: managing a dynamic set of subscribers, worker pools where you don't need individual results.
+ 
+ ## Select — typed fan-in
+ 
+ `Select(U)` where U is a tagged union. each field corresponds to a task type:
+ 
+ ```zig
+ const Result = union(enum) {
+     connection: net.Stream,
+     timeout: void,
+ };
+ 
+ var select = Io.Select(Result).init(io, &result_buffer);
+ 
+ select.concurrent(.connection, connectToHost, .{io, host}) catch {};
+ select.async(.timeout, timeoutAfter, .{io, 5_000_000_000});
+ 
+ // blocks until first task completes, returns tagged result
+ const result = try select.await();
+ switch (result) {
+     .connection => |stream| { ... },
+     .timeout => { ... },
+ }
+ 
+ // cleanup remaining tasks
+ select.cancelDiscard();
+ ```
+ 
+ - `select.async(field, fn, args)` — spawn async task tagged with union field
+ - `select.concurrent(field, fn, args)` — spawn concurrent task tagged with union field
+ - `select.await()` — blocks until first task completes, returns tagged union
+ - `select.awaitMany(buffer, min)` — blocks until at least `min` results
+ - `select.cancel()` — cancel all remaining, return last result if any
+ - `select.cancelDiscard()` — cancel all remaining, discard results
+ 
+ use cases: "first of N" patterns, connect-with-timeout, race between alternatives.
+ 
+ ## Queue — bounded MPMC channel
+ 
+ many-producer, many-consumer, thread-safe, bounded:
+ 
+ ```zig
+ var buf: [16]Item = undefined;
+ var queue: Io.Queue(Item) = .init(&buf);
+ 
+ // producer (blocks when full):
+ try queue.putOne(io, item);
+ 
+ // consumer (blocks when empty):
+ const item = try queue.getOne(io);
+ 
+ // shutdown — wakes all waiters:
+ queue.close(io); // subsequent ops return error.Closed
+ ```
+ 
+ - `putOne` / `getOne` — single element, blocking
+ - `put(elements, min)` / `get(buffer, min)` — batch, blocks until at least `min` transferred
+ - `putAll` / `getUncancelable` etc. — variants for different cancellation semantics
+ - all blocking operations are cancellation points
+ 
+ from the devlog DNS resolver example: `io.async(connectMany, .{...})` produces results into a `Queue`, consumer pulls from queue in a loop. the queue provides natural backpressure.
+ 
+ use cases: producer-consumer pipelines, fan-out/fan-in, bounded work queues.
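editor's note: the pieces above compose naturally. the following is an untested sketch that wires `Io.Queue`, `io.async`, and the defer-cancel pattern together, using only APIs as documented in this commit; `Item`, `producer`, and `consumer` are illustrative names, and exact 0.16 signatures may still shift:

```zig
const std = @import("std");
const Io = std.Io;

const Item = u32;

// producer: pushes items, then closes the queue to wake the consumer
fn producer(io: Io, queue: *Io.Queue(Item)) void {
    var i: Item = 0;
    while (i < 8) : (i += 1) {
        // blocks when full (backpressure); Closed/Canceled ends the loop
        queue.putOne(io, i) catch break;
    }
    queue.close(io); // subsequent getOne returns error.Closed
}

// consumer: drains until the queue is closed
fn consumer(io: Io, queue: *Io.Queue(Item)) u32 {
    var sum: u32 = 0;
    while (queue.getOne(io)) |item| {
        sum += item;
    } else |_| {} // error.Closed → done
    return sum;
}

pub fn main() !void {
    var threaded = Io.Threaded.init(std.heap.smp_allocator, .{});
    const io = threaded.io();

    var buf: [4]Item = undefined; // small buffer → natural backpressure
    var queue: Io.Queue(Item) = .init(&buf);

    var prod = io.async(producer, .{ io, &queue });
    defer _ = prod.cancel(io); // no leak on early error return

    const sum = consumer(io, &queue);
    std.debug.print("sum = {d}\n", .{sum});
}
```

the `defer _ = prod.cancel(io)` is a no-op after the producer finishes, per the idempotence notes above.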
+162
languages/ziglang/0.16/io/patterns.md
···
+ # patterns
+ 
+ practical patterns for using `std.Io` in real applications.
+ 
+ ## backend selection
+ 
+ ```zig
+ const std = @import("std");
+ const Io = std.Io;
+ 
+ const Backend = if (Io.Evented != void) Io.Evented else Io.Threaded;
+ var backend: Backend = undefined;
+ 
+ pub fn main() !void {
+     const allocator = std.heap.smp_allocator;
+ 
+     if (Backend == Io.Threaded) {
+         backend = Io.Threaded.init(allocator, .{});
+     } else {
+         try Backend.init(&backend, allocator, .{});
+     }
+     const io = backend.io();
+ 
+     // pass io to your app
+     try app(io, allocator);
+ }
+ ```
+ 
+ production: use `Io.Threaded` until Evented is stable. the code is identical — just swap the init.
+ 
+ ## Threaded InitOptions
+ 
+ `Io.Threaded.init(allocator, opts)` accepts:
+ 
+ ```zig
+ Io.Threaded.init(allocator, .{
+     .stack_size = 8 * 1024 * 1024, // default: 16MB (std.Thread.SpawnConfig.default_stack_size)
+     .async_limit = .{ .value = 8 }, // default: CPU count - 1
+     .concurrent_limit = .{ .value = 64 }, // default: .unlimited
+ });
+ ```
+ 
+ | option | default | what it does |
+ |---|---|---|
+ | `stack_size` | 16MB | per-thread stack. affects all spawned threads. |
+ | `async_limit` | CPU - 1 | bounded pool for `io.async()`. overflow runs task **inline**. |
+ | `concurrent_limit` | `.unlimited` | pool for `io.concurrent()`. overflow returns `error.ConcurrencyUnavailable`. |
+ 
+ ### the thread explosion lesson
+ 
+ with default `concurrent_limit = .unlimited`, every `io.concurrent()` call that outlives its parent creates a permanent OS thread. a relay connecting to 2,000+ hosts with 2 concurrent tasks each (read loop + ping loop) creates ~4,000 threads at 16MB stack = 64GB virtual memory.
+ 
+ mitigations:
+ 1. **set `concurrent_limit`** to a bounded value
+ 2. **set `stack_size`** to what you actually need (8MB is plenty for I/O tasks)
+ 3. **reduce concurrent tasks per unit of work** — merge ping into read loop (1 task per host, not 2)
+ 4. **use `Io.Group`** for lifecycle management — cancel all subscribers on shutdown
+ 
+ under Evented, `io.concurrent()` creates fibers (cheap userspace stacks). the 2-tasks-per-host architecture is fine there. Threaded InitOptions let you bound the damage until Evented is production-ready.
+ 
+ ## debug_io override
+ 
+ `std.Options.debug_io` is backed by a **single-threaded** instance. using it for application I/O silently serializes everything.
+ 
+ **symptom**: coral went from ~60 events/s to ~4/s after migrating to `Io.Mutex` with `std.Options.debug_io`.
+ 
+ override in your root source file:
+ 
+ ```zig
+ var app_threaded_io: Io.Threaded = undefined;
+ pub const std_options_debug_threaded_io: ?*Io.Threaded = &app_threaded_io;
+ 
+ pub fn main() !void {
+     app_threaded_io = Io.Threaded.init(allocator, .{});
+     // now all std.Options.debug_io usage gets the real threaded instance
+ }
+ ```
+ 
+ this works because the `Io` struct holds a pointer to `Threaded` — the pointer is stable even though the data is `undefined` at comptime.
+ 
+ **or just pass io explicitly** — create `Io.Threaded` in main, call `.io()`, thread it through functions. avoids globals entirely but more invasive.
+ 
+ ## long-lived task lifecycle
+ 
+ replacing `std.Thread.spawn` with `io.concurrent` for I/O-bound loops:
+ 
+ ```zig
+ // old pattern
+ self.thread = try std.Thread.spawn(.{ .stack_size = 8 * 1024 * 1024 }, runLoop, .{self});
+ // ... later:
+ if (self.thread) |t| t.join();
+ 
+ // new pattern
+ self.future = try io.concurrent(runLoop, .{self});
+ // ... later:
+ _ = self.future.cancel(io);
+ ```
+ 
+ the task function should exit cleanly on cancellation:
+ 
+ ```zig
+ fn runLoop(self: *Self) void {
+     while (!self.shouldStop()) {
+         self.io.sleep(Io.Duration.fromMilliseconds(100), .awake) catch break;
+         // ... work ...
+     }
+ }
+ ```
+ 
+ `io.sleep()` is a cancellation point. when `future.cancel(io)` is called, sleep returns `error.Canceled`. the `catch break` exits the loop.
+ 
+ ### cancel vs await
+ 
+ - `cancel(io)` — requests cancellation + blocks until done. returns the task's result.
+ - `await(io)` — just blocks until done. no cancellation request.
+ - both are idempotent and consume the future.
+ - both are NOT threadsafe — only call from the parent task.
+ 
+ ## managing dynamic task sets with Group
+ 
+ for a dynamic set of long-lived tasks (e.g., subscriber connections):
+ 
+ ```zig
+ var subscribers: Io.Group = .init;
+ 
+ // spawn subscribers as they're discovered
+ for (hosts) |host| {
+     subscribers.concurrent(io, runSubscriber, .{host, io}) catch {
+         log.warn("concurrent limit reached for {s}", .{host});
+         continue;
+     };
+ }
+ 
+ // on shutdown — cancel all at once
+ subscribers.cancel(io);
+ ```
+ 
+ Group resources per task are freed when that task returns, not when the group is awaited. safe for long-lived groups where tasks come and go.
+ 
+ ## std.net moved to Io.net
+ 
+ ```zig
+ const net = Io.net;
+ 
+ // connecting
+ const host_name = try net.HostName.init(host);
+ const stream = try host_name.connect(io, port, .{});
+ 
+ // listening
+ var addr = try net.IpAddress.parse("::", port);
+ var server = try net.IpAddress.listen(&addr, io, .{ .reuse_address = true });
+ defer server.deinit(io);
+ 
+ // accepting
+ const conn = try server.accept(io);
+ 
+ // reading/writing (need wrapper)
+ var reader = net.Stream.Reader.init(stream, io, &read_buf);
+ var writer = net.Stream.Writer.init(stream, io, &write_buf);
+ ```
+ 
+ `net.Stream` no longer has direct `read`/`writeAll`. use `Stream.Reader`/`Stream.Writer`.
+189
languages/ziglang/0.16/io/synchronization.md
···
+ # synchronization primitives
+ 
+ ## Io.Mutex
+ 
+ extern struct. futex-based. works from any execution context (threads AND fibers).
+ 
+ ```zig
+ var mutex: Io.Mutex = Io.Mutex.init;
+ 
+ // cancelable lock
+ mutex.lock(io) catch |err| switch (err) {
+     error.Canceled => return,
+ };
+ defer mutex.unlock(io);
+ 
+ // uncancelable lock (for use in cleanup paths)
+ mutex.lockUncancelable(io);
+ defer mutex.unlock(io);
+ 
+ // non-blocking try
+ if (mutex.tryLock()) {
+     defer mutex.unlock(io);
+     // ...
+ }
+ ```
+ 
+ - `lock(m, io)` → `Cancelable!void` — cancellation point
+ - `lockUncancelable(m, io)` → `void` — no cancellation point
+ - `tryLock(m)` → `bool` — non-blocking, no `io` needed
+ - `unlock(m, io)` → `void`
+ 
+ replaces `std.Thread.Mutex` from 0.15.
+ 
+ ### cross-context usage
+ 
+ `Io.Mutex` is futex-based and works from both `std.Thread` workers and Io tasks. if you have a data structure accessed from both explicit threads (e.g., CPU worker pool) and Io tasks (e.g., subscriber fibers), `Io.Mutex` is the correct choice — it integrates with the scheduler in both contexts.
+ 
+ ## Io.Condition
+ 
+ pairs with `Io.Mutex`:
+ 
+ ```zig
+ var cond: Io.Condition = Io.Condition.init;
+ var mutex: Io.Mutex = Io.Mutex.init;
+ 
+ // waiter (must hold mutex)
+ mutex.lockUncancelable(io);
+ while (!predicate()) {
+     cond.wait(io, &mutex) catch break; // releases mutex, waits, reacquires
+ }
+ mutex.unlock(io);
+ 
+ // signaler
+ cond.signal(io); // wake one waiter
+ cond.broadcast(io); // wake all waiters
+ ```
+ 
+ - `wait(cond, io, mutex)` → `Cancelable!void` — releases mutex, waits, reacquires
+ - `waitUncancelable(cond, io, mutex)` → `void` — same but no cancellation point
+ - `signal(cond, io)` → `void` — wake one
+ - `broadcast(cond, io)` → `void` — wake all
+ 
+ ### no timedWait
+ 
+ `Io.Condition` has no timed wait variant. patterns that used `timedWait` must be restructured:
+ 
+ **option 1: sleep-based polling** (simplest, adds latency up to sleep interval)
+ ```zig
+ while (condition_not_met and alive.load(.acquire)) {
+     mutex.unlock(io);
+     io.sleep(Io.Duration.fromMilliseconds(100), .awake) catch {};
+     mutex.lockUncancelable(io);
+ }
+ ```
+ 
+ **option 2: ticker task that signals the cond** (preserves immediate wakeup)
+ ```zig
+ var ticker = try io.concurrent(tickerLoop, .{io, &cond});
+ defer _ = ticker.cancel(io);
+ 
+ fn tickerLoop(tick_io: Io, cond: *Io.Condition) void {
+     while (true) {
+         tick_io.sleep(Io.Duration.fromMilliseconds(100), .awake) catch break;
+         cond.signal(tick_io);
+     }
+ }
+ ```
+ 
+ option 1 is fine when latency tolerance matches the poll interval. option 2 is better when immediate wake on signal AND periodic timeout are both needed.
+ 
+ ## cancellation model
+ 
+ every `Io` function that returns `Cancelable!T` is a cancellation point. when `future.cancel(io)` is called on a task:
+ 
+ 1. the task is flagged for cancellation
+ 2. at its next cancellation point, the function returns `error.Canceled`
+ 3. the task should propagate or handle the error to exit cleanly
+ 4. `cancel` blocks until the task actually returns
+ 
+ ### what's a cancellation point?
+ 
+ any Io function with `Cancelable` in its error set:
+ - `io.sleep()`
+ - `mutex.lock()` (but NOT `lockUncancelable`)
+ - `cond.wait()` (but NOT `waitUncancelable`)
+ - `queue.getOne()` / `queue.putOne()`
+ - `io.checkCancel()` — explicit cancellation point (does nothing else)
+ - `io.futexWait()` (but NOT `futexWaitUncancelable`)
+ 
+ ### recancel
+ 
+ if you catch `error.Canceled` and want to propagate it through more cancellation points:
+ 
+ ```zig
+ io.sleep(...) catch |err| switch (err) {
+     error.Canceled => {
+         // do some cleanup...
+         io.recancel(); // re-arm so next cancellation point also returns Canceled
+         return error.Canceled;
+     },
+ };
+ ```
+ 
+ `recancel` asserts that a prior cancellation was received. it re-arms the request so subsequent cancellation points also fire.
+ 
+ ## CancelProtection
+ 
+ in rare cases, a section of code must run to completion without being interrupted by cancellation:
+ 
+ ```zig
+ const old = io.swapCancelProtection(.blocked);
+ defer _ = io.swapCancelProtection(old);
+ 
+ // io operations here will NOT return error.Canceled
+ // even if the task has a pending cancellation request
+ mutex.lock(io) catch unreachable; // lock can't fail with .blocked
+ defer mutex.unlock(io);
+ // ... critical section ...
+ ```
+ 
+ - `.unblocked` — default. cancellation points are active.
+ - `.blocked` — no Io function returns `error.Canceled`.
+ 
+ use for cleanup code that must complete, commit-then-ack patterns, etc.
+ 
+ ## Io.Event
+ 
+ simple binary signal (like a one-shot condition without a mutex):
+ 
+ ```zig
+ var event: Io.Event = Io.Event.init;
+ 
+ // waiter
+ event.wait(io) catch {}; // blocks until set
+ 
+ // signaler
+ event.set(io); // wakes all waiters
+ ```
+ 
+ use when you just need "wait until something happens" without associated data.
+ 
+ ## Io.Semaphore
+ 
+ counting semaphore:
+ 
+ ```zig
+ var sem: Io.Semaphore = Io.Semaphore.init;
+ 
+ sem.acquire(io) catch {}; // blocks if count == 0
+ defer sem.release(io);
+ ```
+ 
+ use for bounding concurrent access to a shared resource.
+ 
+ ## Io.RwLock
+ 
+ reader-writer lock for read-heavy workloads:
+ 
+ ```zig
+ var rwlock: Io.RwLock = Io.RwLock.init;
+ 
+ // readers (concurrent)
+ rwlock.lockShared(io) catch {};
+ defer rwlock.unlockShared(io);
+ 
+ // writer (exclusive)
+ rwlock.lockExclusive(io) catch {};
+ defer rwlock.unlockExclusive(io);
+ ```
+28 -6
protocols/atproto/inductive-proof/relay-integration.md
···
  # relay integration
  
- how zlay uses the sync 1.1 APIs from zat, as of march 2026. zlay is ~4 days old.
+ how zlay uses the sync 1.1 APIs from zat. zlay is migrating from zig 0.15 to 0.16.
  
  ## current state
  
- zlay is on **zat v0.2.10**. the sync 1.1 verification is wired but deployed in **observation mode** — chain breaks are logged and counted, not enforced.
+ zlay is on **zat v0.3.0-alpha.11** (zig 0.16). sync 1.1 verification is wired but deployed in **observation mode** — chain breaks are logged and counted, not enforced. rolled back to the 0.15 build in production while the 0.16 thread explosion is addressed.
  
  ## the pipeline
  
+ two concurrency layers, matching the Io model (see `languages/ziglang/0.16/io/`):
+ 
  ```
- subscriber (reader thread)
-   → header decode, cursor tracking
+ subscriber (io.concurrent task — I/O-bound)
+   → websocket read loop, header decode, cursor tracking
    → submit raw frame to thread pool
  
- frame_worker (pool worker)
+ frame_worker (std.Thread pool worker — CPU-bound)
    → CBOR decode payload
    → rev clock check (reject future timestamps beyond 5min skew)
    → chain continuity check (log-only):
···
    → return (data_cid, commit_rev)
  
  event_log
-   → persist frame to disk
+   → persist frame to disk (rocksdb + postgres)
    → conditional upsert: UPDATE ... WHERE rev < new_rev
    → broadcast to consumers
+ 
+ broadcaster (io.concurrent tasks — I/O-bound)
+   → per-consumer websocket write loop
  ```
+ 
+ ### Io boundary
+ 
+ | layer | execution context | why |
+ |---|---|---|
+ | subscribers | `io.concurrent()` tasks | I/O-bound (websocket read), cancelable via `future.cancel(io)` |
+ | frame pool | `std.Thread.spawn` workers | CPU-bound (CBOR decode, ECDSA verify), key-partitioned ordering |
+ | DID resolvers | `io.concurrent()` tasks | I/O-bound (HTTP), cancelable |
+ | consumers | `io.concurrent()` tasks | I/O-bound (websocket write), cancelable |
+ | background tasks | `io.concurrent()` tasks | GC, metrics, backfill, resync, cleaner |
+ 
+ cross-boundary data structures (ring_buffer, LRU cache) use `Io.Mutex`, which works from both `std.Thread` workers and Io tasks (futex-based).
+ 
+ the frame pool stays on explicit threads because it needs:
+ - deterministic routing: `workers[host_id % N]` for per-key FIFO ordering
+ - bounded backpressure: blocking submit when queue full → TCP backpressure to upstream PDS
+ - CPU-heavy work that shouldn't monopolize Io fibers
  
  ## what's working
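editor's note: the Io boundary described in this diff can be sketched in miniature. this is NOT zlay's actual code — `Shared`, `handleHost`, and `cpuWorker` are invented names, and it only composes APIs as documented in the io/ notes added by this commit (untested against a real 0.16 toolchain):

```zig
const std = @import("std");
const Io = std.Io;

// cross-boundary state guarded by Io.Mutex (usable from threads AND Io tasks)
const Shared = struct {
    mutex: Io.Mutex = Io.Mutex.init,
    cursor: u64 = 0,
};

// I/O-bound layer: one cancelable Io task per host (stand-in for a websocket read loop)
fn handleHost(io: Io, shared: *Shared) void {
    while (true) {
        io.sleep(Io.Duration.fromMilliseconds(100), .awake) catch break; // cancellation point
        shared.mutex.lock(io) catch break;
        defer shared.mutex.unlock(io);
        shared.cursor += 1;
    }
}

// CPU-bound layer: plain OS thread (stand-in for CBOR decode / signature verify)
fn cpuWorker(shared: *Shared) void {
    _ = shared;
}

pub fn main() !void {
    var threaded = Io.Threaded.init(std.heap.smp_allocator, .{
        .concurrent_limit = .{ .value = 64 }, // bound the thread explosion
    });
    const io = threaded.io();

    var shared: Shared = .{};

    // subscribers: Io tasks in a Group, canceled together on shutdown
    var subscribers: Io.Group = .init;
    subscribers.concurrent(io, handleHost, .{ io, &shared }) catch {};

    // frame pool: explicit std.Thread worker, deterministic routing stays possible
    const worker = try std.Thread.spawn(.{}, cpuWorker, .{&shared});

    worker.join();
    subscribers.cancel(io); // cancel all subscriber tasks at once
}
```

the point of the sketch: the same `Io.Mutex` value is touched from both layers, which is exactly why the notes recommend it over `std.Thread.Mutex` for cross-boundary state.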