perf: atomics-based backpressure for stream dispatch (experimental)
Replaces the pause/resume message protocol on the worker-to-parent
stream path with a shared Int32Array of two slots: INFLIGHT and STATE.
Worker increments INFLIGHT after each emit and, on hitting HIGH_WATER,
parks in Atomics.waitAsync(flags, INFLIGHT, current). Parent decrements
INFLIGHT + Atomics.notify on each pull. Cancellation is STATE=CANCEL +
notify; worker checks STATE after each wake.
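The protocol described above can be sketched roughly as follows. Slot layout, constant values, and function names here are illustrative assumptions, not the actual dispatchStream/handleStreamTask code; HIGH_WATER is shown as 16, consistent with the ~18 max-queued figure below.

```javascript
// Illustrative sketch of the two-slot protocol (names/values assumed).
const INFLIGHT = 0; // items emitted but not yet pulled by the parent
const STATE = 1;    // 0 = running, CANCEL = cancelled
const CANCEL = 1;
const HIGH_WATER = 16; // assumed for the example

const flags = new Int32Array(new SharedArrayBuffer(8));

// Worker side: emit, bump INFLIGHT, park once the high-water mark is hit.
async function produce(emit, items) {
  for (const item of items) {
    if (Atomics.load(flags, STATE) === CANCEL) return;
    emit(item);
    Atomics.add(flags, INFLIGHT, 1);
    while (true) {
      // Wait on the exact value we observed, so a decrement+notify that
      // lands between the load and the wait shows up as "not-equal".
      const current = Atomics.load(flags, INFLIGHT);
      if (current < HIGH_WATER || Atomics.load(flags, STATE) === CANCEL) break;
      const { async, value } = Atomics.waitAsync(flags, INFLIGHT, current);
      if (async) await value; // resolves on Atomics.notify
    }
  }
}

// Parent side: each pull decrements INFLIGHT and wakes a parked worker.
function pulled() {
  Atomics.sub(flags, INFLIGHT, 1);
  Atomics.notify(flags, INFLIGHT);
}

// Cancellation: flip STATE and wake the worker so it can observe it.
function cancel() {
  Atomics.store(flags, STATE, CANCEL);
  Atomics.notify(flags, INFLIGHT);
}
```

In the real worker/parent split, produce would run in the worker and pulled/cancel in the parent, over a SharedArrayBuffer handed across at dispatch time.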
Eliminates:
- 'pause'/'resume' postMessage round-trip
- per-emit adaptive setImmediate yield (no longer needed: the worker parks
  directly on the atomic when backpressured and runs at microtask speed
  otherwise)
Only wired through dispatchStream + handleStreamTask. pipeToPort and the
channel Distributor still use the message-based adaptive-yield path, via
the branch in pipeIterable taken when opts.flags is absent.
Results (M-series mac, Node v24.15.0, noopStream, 100K items):
                              before          after
  stream throughput           ~410K/s         ~455K/s          +11%
  backpressure buffer (avg)   184 ms          89 ms            ~2x tighter
  backpressure buffer (max)   991 ms          96 ms            ~10x tighter
  backpressure steady-state   still growing   stable at ~89ms
Max queued items drops from ~200 to ~18 (HIGH_WATER + 1-2 overshoot),
within a factor of 2 of the theoretical minimum.
Follow-up work: apply the same pattern to pipeToPort (parent->worker
iterable args) and channel Distributor (1:N fan-out).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>