Monorepo for Aesthetic.Computer aesthetic.computer
4
fork

Configure Feed

Select the types of activity you want to include in your feed.

seashells: initial implementation and documentation

Add seashells piece with analysis, conceptual model, lab bench setup,
sliders documentation, and README explaining the generative shell system.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

+2925
+359
SEASHELLS_README.md
··· 1 + # Seashells.mjs: Complete Documentation 2 + 3 + A comprehensive guide to understanding, using, and remixing **seashells.mjs** — a bytebeat algorithmic synthesizer with visual feedback. 4 + 5 + --- 6 + 7 + ## What Is Seashells? 8 + 9 + **Seashells** is an interactive generative music instrument that: 10 + 11 + 1. **Synthesizes audio** using bytebeat (mathematical bit operations) 12 + 2. **Visualizes in real-time** by painting bytebeat patterns to the screen 13 + 3. **Feeds back visuals → audio** by sampling pixels and using them to modulate synthesis 14 + 4. **Sustains via hold sequences** — can play autonomously for 90+ minutes 15 + 16 + It's designed as a tape-like generative system: set it running and watch/listen to emergence unfold. 17 + 18 + --- 19 + 20 + ## Quick Start 21 + 22 + ### Launching 23 + ```bash 24 + npm run ac # Start dev server 25 + # Navigate to seashells in your browser 26 + ``` 27 + 28 + ### Interactive Play 29 + - **Touch anywhere** to start a voice at that position 30 + - X-axis = base frequency (left=low, right=high) 31 + - Y-axis = pitch multiplier (top=high, bottom=low) 32 + - **Drag** to change frequency/pitch in real-time 33 + - **Lift** to stop that voice 34 + 35 + ### Hold Sequence (Autoplay) 36 + - Press **H** to start autonomous voice generation 37 + - Voices spawn at orbital positions every 2 seconds 38 + - Each voice holds for 5-13 seconds 39 + - New voices spawn before old ones fade → continuous audio 40 + - Press **H** again to stop 41 + 42 + ### For 90-Minute Tape 43 + - Press **H** at start of session 44 + - Let it run unattended 45 + - The system will sustain audio continuously 46 + - Visual feedback creates ongoing emergence 47 + 48 + --- 49 + 50 + ## Files in This Documentation 51 + 52 + | File | Purpose | 53 + |------|---------| 54 + | `seashells_analysis.md` | Technical breakdown of how each component works | 55 + | `seashells_conceptual_model.md` | High-level architecture, 4-layer model, remix framework | 56 + | `seashells_variation_examples.md` | 5 concrete remix examples with copy-paste code | 57 + | `SEASHELLS_README.md` | This file — quick reference | 58 + 59 + **Read in this order:** 60 + 1. This README (5 min) 61 + 2. Conceptual Model (20 min) — understand the 4 layers 62 + 3. Analysis (30 min) — deep dive into each layer 63 + 4. Variation Examples (60 min) — try specific remixes 64 + 65 + --- 66 + 67 + ## The 4-Layer Architecture 68 + 69 + ``` 70 + ┌────────────────────────────────────────────────────┐ 71 + │ Layer 4: SEQUENCING │ 72 + │ How voices spawn, sustain, and move │ 73 + │ (Hold mechanism, orbital paths, voice lifecycle) │ 74 + ├────────────────────────────────────────────────────┤ 75 + │ Layer 3: FEEDBACK LOOP │ 76 + │ Audio ↔ Visual feedback │ 77 + │ (Sample pixels, convert to audio modulation) │ 78 + ├────────────────────────────────────────────────────┤ 79 + │ Layer 2: SYNTHESIS │ 80 + │ Audio generation │ 81 + │ (5 bytebeat patterns, blending, modulation) │ 82 + ├────────────────────────────────────────────────────┤ 83 + │ Layer 1: SPATIAL MAPPING │ 84 + │ Touch position → Audio parameters │ 85 + │ (X→frequency, Y→pitch factor) │ 86 + └────────────────────────────────────────────────────┘ 87 + ``` 88 + 89 + **Key insight:** Each layer is independent. Change one without breaking others. 90 + 91 + --- 92 + 93 + ## What Makes It Work for 90 Minutes? 94 + 95 + 1. **Algorithmic Complexity** 96 + - 5 blending bytebeat patterns (not 1) 97 + - Each pattern responds to 10+ modulation parameters 98 + - Feedback loop creates continuous variation 99 + 100 + 2. **Visual-Audio Feedback** 101 + - Audio paints pixels 102 + - Pixels influence audio via feedback 103 + - Creates genuine emergence, not repetition 104 + 105 + 3. **Voice Continuity** 106 + - Hold sequences spawn new voices before old ones fade 107 + - Ensures unbroken audio stream 108 + - Voices never synchronize (different speeds) 109 + 110 + 4. **State Decay** 111 + - Interaction memory decays over 10 seconds 112 + - Creates slow drift in parameters 113 + - System explores new regions of parameter space 114 + 115 + --- 116 + 117 + ## Made a Change? Test It Like This 118 + 119 + ### 1-Minute Test 120 + ``` 121 + Press H → listen for 60 seconds 122 + Does audio continue without gaps? 123 + Does timbre vary or is it repetitive? 124 + ``` 125 + 126 + ### 5-Minute Test 127 + ``` 128 + Press H → listen for 5 minutes 129 + Do patterns feel structured or random? 130 + Is visual complexity growing or settling? 131 + Any obvious repetition loops? 132 + ``` 133 + 134 + ### Tape Test (90 minutes) 135 + ``` 136 + Press H → start recording 137 + Leave running (no interaction) 138 + Come back after 90 minutes 139 + Listen back: would you put this on cassette? 140 + ``` 141 + 142 + **Tape quality checklist:** 143 + - [ ] Audio never drops out (voices always sustain) 144 + - [ ] Timbre evolves (5+ distinct characters over 90 min) 145 + - [ ] No obvious repetition (don't hear same sequence twice) 146 + - [ ] Visual patterns remain interesting (not just noise) 147 + - [ ] Rhythm/pacing feels intentional, not random 148 + 149 + --- 150 + 151 + ## Remix Quick Reference 152 + 153 + ### Change Spatial Mapping (10 min) 154 + - File: `seashells.mjs` 155 + - Functions: `mapXToFrequency()`, `mapYToPitchFactor()` 156 + - Examples: quantized scale, polar coords, grid snapping 157 + 158 + ### Change Synthesis Patterns (30 min) 159 + - File: `seashells.mjs` 160 + - Location: `generator.bytebeat()` (line 61) 161 + - Task: Add new pattern, integrate into blending 162 + 163 + ### Change Feedback (30 min) 164 + - File: `seashells.mjs` 165 + - Function: `samplePixelFeedback()` (line 371) 166 + - Task: Change what pixels are sampled, how they map to audio 167 + 168 + ### Change Sequencing (30 min) 169 + - File: `seashells.mjs` 170 + - Functions: `spawnHoldVoice()`, `updateHoldVoices()` 171 + - Task: Alter spawn timing, positions, movement patterns 172 + 173 + ### Change Visuals (1 hour) 174 + - File: `seashells.mjs` 175 + - Function: `paint()` (line 525) 176 + - Task: Different rendering (oscilloscope, spectrogram, particles) 177 + 178 + --- 179 + 180 + ## Common Remix Patterns 181 + 182 + ### "Make It Musical" 183 + - Quantize X-axis to specific scale (pentatonic, chromatic) 184 + - Use grid-based sequencing instead of orbital 185 + - Reduce chaos injection 186 + - Result: Harmonic, bell-like, more consonant 187 + 188 + ### "Make It Chaotic" 189 + - Increase feedback sensitivity to brightness/variance 190 + - Add more chaos injection 191 + - Increase pattern blending speed 192 + - Result: Glitchy, algorithmic, harsh 193 + 194 + ### "Make It Visual" 195 + - Replace pixel column visualization with oscilloscope 196 + - Add particles, trails, or fractal rendering 197 + - Sync visual updates to audio beats 198 + - Result: Visuals are primary, audio is secondary 199 + 200 + ### "Make It Spacious" 201 + - Reduce concurrent voices (max 3-4 instead of 6) 202 + - Increase hold durations (10-30 seconds instead of 5-13) 203 + - Reduce spawn rate (every 5-10 seconds instead of 2) 204 + - Result: Sparse, contemplative, room to breathe 205 + 206 + ### "Make It Dense" 207 + - Increase concurrent voices (15-20 instead of 6) 208 + - Decrease hold durations (2-5 seconds instead of 5-13) 209 + - Increase spawn rate (every 1 second) 210 + - Result: Dense, layered, orchestral 211 + 212 + --- 213 + 214 + ## Performance Notes 215 + 216 + ### If Synthesis Is CPU-Heavy 217 + - Reduce waveform sample count (currently 512) 218 + - Reduce pixel feedback sampling points (currently 12-20) 219 + - Profile in DevTools to find bottleneck 220 + 221 + ### If Visuals Fill with Noise 222 + - Add slow screen wipe: `if (now % 45000 < 1000) wipe(0,0,0)` 223 + - Reduce additive blending intensity 224 + - Use opacity/fade instead of accumulation 225 + 226 + ### If Hold Sequence Is Uneven 227 + - Increase spawn interval (>2000ms) 228 + - Reduce max concurrent voices (to 3-4) 229 + - Make durations more consistent (reduce randomness) 230 + 231 + --- 232 + 233 + ## Conceptual Symmetries (Design Patterns) 234 + 235 + These patterns appear throughout the code — exploit them: 236 + 237 + 1. **Orbital Math** — Scanning, voice movement, visual sweeps all use cos/sin 238 + - Use same orbit equations everywhere for coherence 239 + 240 + 2. **Feedback Parameters** — Audio parameters match visual feedback sources 241 + - High brightness → intensity 242 + - High variance → chaos 243 + - Exploit this for intuitive relationships 244 + 245 + 3. **Time Scales** 246 + - Sample-level: 44.1 kHz (bytebeat) 247 + - Voice-level: 1-20 seconds (hold durations) 248 + - System-level: 10+ seconds (state decay) 249 + - Design remixes that respect these scales 250 + 251 + 4. **Randomness** — Always constrained by feedback 252 + - Voice spawn positions: random + orbital structure 253 + - Hold durations: random ± base duration 254 + - Pattern mixing: time-based + feedback bias 255 + - Never pure noise, always quasi-musical 256 + 257 + --- 258 + 259 + ## Key Files & Functions 260 + 261 + ### Core Synthesis 262 + - Line 61: `generator.bytebeat()` — The heart of audio generation 263 + - Lines 82-97: Pattern definitions 264 + - Lines 100-126: Pattern blending logic 265 + 266 + ### Feedback 267 + - Line 371: `samplePixelFeedback()` — Pixel → audio conversion 268 + - Lines 379-424: Sampling strategy 269 + - Lines 440-486: RGB → audio parameter mapping 270 + 271 + ### Sequencing 272 + - Line 40: `holdSequence` object initialization 273 + - Line 370: `spawnHoldVoice()` — Create new voice 274 + - Line 406: `updateHoldVoices()` — Update positions & durations 275 + - Line 457: `toggleHoldSequence()` — Start/stop autoplay 276 + 277 + ### Spatial Mapping 278 + - Line 190: `mapXToFrequency()` — X pixel → Hz 279 + - Line 198: `mapYToPitchFactor()` — Y pixel → pitch multiplier 280 + - Line 205: `deriveVoiceFrequency()` — Combine into final frequency 281 + 282 + ### Visuals 283 + - Line 525: `paint()` — Main rendering function 284 + - Lines 551-612: Pixel drawing logic 285 + - Line 210: `drawTouchMapping()` — Grid visualization 286 + 287 + ### Interaction 288 + - Line 696: `sim()` — Per-frame updates 289 + - Line 715: `act()` — Event handling (touch, keyboard) 290 + 291 + --- 292 + 293 + ## Next Steps 294 + 295 + ### To Understand Seashells 296 + 1. Read `seashells_conceptual_model.md` (understand 4 layers) 297 + 2. Read `seashells_analysis.md` (deep technical) 298 + 3. Try pressing H, making touches, observe behavior 299 + 300 + ### To Remix Seashells 301 + 1. Pick one variation from `seashells_variation_examples.md` 302 + 2. Copy code into seashells.mjs 303 + 3. Test with `npm run ac` 304 + 4. Iterate one small change at a time 305 + 306 + ### To Create Your Own Variation 307 + 1. Identify which layer(s) you want to change 308 + 2. Read the relevant functions in Analysis doc 309 + 3. Sketch the change on paper first 310 + 4. Implement in small steps 311 + 5. Test after each change 312 + 313 + --- 314 + 315 + ## Philosophy 316 + 317 + Seashells is built on **layered independence**: 318 + - Spatial mapping doesn't know about synthesis 319 + - Synthesis doesn't know about visuals 320 + - Visuals don't know about sequencing 321 + - Sequencing is just a voice generator 322 + 323 + This means: 324 + - You can modify any layer without breaking others 325 + - Testing is incremental (change one thing, test) 326 + - Remixes are combinatorial (stack changes) 327 + - Future extensions are easy (add new layers) 328 + 329 + This is intentional design. Use it. 330 + 331 + --- 332 + 333 + ## Questions? 334 + 335 + - **How does feedback work?** → See `samplePixelFeedback()` in Analysis 336 + - **How can I add a new pattern?** → See "Add Your Own Pattern" in Conceptual Model 337 + - **How do I make it more musical?** → See "Make It Musical" in Remix Patterns 338 + - **What's the audio quality?** → Bytebeat, lo-fi by design (8-bit character) 339 + - **Can I export audio?** → Use your browser's recording, or modify to write to AudioBuffer 340 + - **Can I use this in my own piece?** → Yes, architecture is modular and reusable 341 + 342 + --- 343 + 344 + ## Version History 345 + 346 + - **2025.6.13** — Initial Seashells release 347 + - **2025.6.14** — Added hold mechanism, documentation suite 348 + 349 + **Created for:** 90-minute cassette tape experimentation 350 + 351 + **Best consumed as:** 352 + - Interactive exploration (press H, make touches) 353 + - Tape/long-form listening (press H, walk away) 354 + - Educational dissection (read Analysis, remix patterns) 355 + - Foundation for variations (remix, combine, extend) 356 + 357 + --- 358 + 359 + Made with care for emergence and modular design. Happy creating!
+400
SEASH_LAB_BENCH.md
··· 1 + # Seash — Lab Bench Proof of Concept 2 + 3 + A minimal, readable bytebeat synthesizer. ~350 lines of clear code. Perfect for understanding the mechanism and experimenting. 4 + 5 + --- 6 + 7 + ## What It Does 8 + 9 + **Seash** is the distilled core of Seashells: 10 + - Generate sound using bytebeat (simple mathematical synthesis) 11 + - Auto-spawn voices on a loop (hold sequence) 12 + - Play interactively by touching the screen 13 + - Press H to toggle automatic playback 14 + 15 + No complex feedback loops. No dense visuals. Just the mechanism. 16 + 17 + --- 18 + 19 + ## How It Works (Conceptual) 20 + 21 + ### Layer 1: Sound Generation (Bytebeat) 22 + 23 + **The core idea:** Audio from bit operations on integers. 24 + 25 + ```javascript 26 + // Two patterns that blend over time 27 + pattern1 = (t ^ (t >> 8) ^ (t >> 9)) & 255 // XOR: crisp, digital 28 + pattern2 = ((t * harmonic) & (t >> 5) | (t >> 4)) & 255 // Melodic: pitched 29 + 30 + // Mix them based on time 31 + finalPattern = pattern1 * (1 - blend) + pattern2 * blend 32 + ``` 33 + 34 + **Why it works:** Integer operations are deterministic. Same input → same output. But small changes in parameters create wildly different sounds. 35 + 36 + **Key insight:** The two patterns **never collide**. When one fades out, the other fades in. This creates a continuous, evolving texture. 37 + 38 + --- 39 + 40 + ### Layer 2: Spatial Control (Touch → Frequency) 41 + 42 + **How position maps to sound:** 43 + 44 + ```javascript 45 + // X-axis (left to right) 46 + X = 0% → 55 Hz (very low) 47 + X = 50% → 220 Hz (middle) 48 + X = 100% → 880 Hz (high) 49 + // Logarithmic scale (musically natural) 50 + 51 + // Y-axis (top to bottom) 52 + Y = 0% → 2.0x pitch multiplier (octave up) 53 + Y = 50% → 1.0x pitch multiplier (normal) 54 + Y = 100% → 0.5x pitch multiplier (octave down) 55 + ``` 56 + 57 + **Result:** Any (X, Y) position has a unique frequency. Move around = explore the frequency space. 58 + 59 + **Why logarithmic:** Our ears perceive pitch logarithmically. A 2x frequency jump feels the same from 100Hz→200Hz as from 1000Hz→2000Hz. 60 + 61 + --- 62 + 63 + ### Layer 3: Multi-Voice Management 64 + 65 + **The system tracks multiple simultaneous voices:** 66 + 67 + ```javascript 68 + const touchVoices = new Map() // { pointerIndex → { sound, frequency, x, y } } 69 + ``` 70 + 71 + **When you touch:** 72 + 1. New voice spawns at that position 73 + 2. Frequency derived from X/Y 74 + 3. Voice added to map with unique pointer ID 75 + 76 + **When you move your finger:** 77 + 1. Voice position updates 78 + 2. Frequency updates smoothly (with lerp to avoid jumps) 79 + 80 + **When you lift:** 81 + 1. Voice fades out (0.08 second fade) 82 + 2. Removed from map 83 + 84 + **Volume balancing:** If you have N fingers down, each voice volume = 0.5 / sqrt(N) 85 + - 1 voice: 0.5 volume 86 + - 4 voices: 0.25 each (stays reasonable) 87 + - 16 voices: 0.125 each 88 + 89 + --- 90 + 91 + ### Layer 4: Auto-Voice Generation (Hold Sequence) 92 + 93 + **The magic mechanism:** 94 + 95 + ```javascript 96 + if (holdSequence.enabled) { 97 + // Every 3 seconds, spawn a new voice 98 + spawnHoldVoice() 99 + 100 + // Each voice: 101 + // - Starts at orbital position 102 + // - Drifts along orbit 103 + // - Lasts 6±1.5 seconds 104 + // - Fades out when duration expires 105 + 106 + // Result: continuous audio, voices never sync 107 + } 108 + ``` 109 + 110 + **Why it works for 90 minutes:** 111 + - New voice spawns before old one dies 112 + - Each has different orbit speed (unique drifting path) 113 + - No two voices follow same trajectory 114 + - Even though only 2 patterns, constant variation 115 + 116 + **Why voices don't sync:** 117 + - Orbit speeds differ by small amounts (0.0003–0.0005 rad/frame) 118 + - After 30 seconds, they're all at different phases 119 + - Mathematical: irrational multiples → infinite non-repetition 120 + 121 + --- 122 + 123 + ## The Code (Line by Line) 124 + 125 + ### Sections 126 + 127 + **Lines 1–20:** Constants and initialization 128 + - `touchVoices`: Map of active voices 129 + - `holdSequence`: State for auto-generation 130 + 131 + **Lines 22–64:** Bytebeat generator 132 + - Takes frequency, time, and sample count 133 + - Returns 512 audio samples 134 + - Blends 2 patterns smoothly 135 + 136 + **Lines 66–80:** Utility functions 137 + - `clamp()`: Constrain values to range 138 + - `mapXToFrequency()`: X pixel → Hz (logarithmic) 139 + - `mapYToPitch()`: Y pixel → pitch factor 140 + - `deriveFrequency()`: Combine X+Y 141 + 142 + **Lines 82–130:** Voice lifecycle 143 + - `createVoice()`: Initialize synthesizer 144 + - `startTouchVoice()`: New voice from touch 145 + - `updateTouchVoice()`: Move existing voice 146 + - `stopTouchVoice()`: Fade and remove 147 + - `rebalanceVolumes()`: Keep mix balanced 148 + 149 + **Lines 132–175:** Hold sequence 150 + - `spawnHoldVoice()`: Create auto voice at orbital position 151 + - `updateHoldVoices()`: Move orbits, spawn new, fade old 152 + - `toggleHoldSequence()`: Start/stop auto mode 153 + 154 + **Lines 177–210:** Rendering 155 + - `paint()`: Draw grid, labels, voice positions 156 + - Minimal UI: just frequency readouts and status 157 + 158 + **Lines 212–247:** Input handling 159 + - `act()`: Touch and keyboard events 160 + - Support 8 simultaneous touches 161 + - H key toggles hold sequence 162 + 163 + **Lines 249–253:** Per-frame updates 164 + - `sim()`: Audio polling, hold sequence updates 165 + 166 + --- 167 + 168 + ## How to Use It 169 + 170 + ### Interactive Mode 171 + ``` 172 + Touch screen → voice spawns at that position 173 + Move finger → frequency changes in real-time 174 + Lift finger → voice fades out 175 + 176 + Multi-touch: 8 fingers at once, each with own voice 177 + ``` 178 + 179 + ### Autoplay (Tape Mode) 180 + ``` 181 + Press H → hold sequence starts 182 + Voices spawn automatically every 3 seconds 183 + Each holds for 6±1.5 seconds 184 + Press H again → stops 185 + 186 + Let run for 90 minutes → listen to emergence 187 + ``` 188 + 189 + ### Observe the Mechanism 190 + ``` 191 + Watch voice numbers increase and decrease 192 + Notice frequency labels updating as voices move 193 + See hold sequence spawn new voice before old fades 194 + Listen to how patterns blend smoothly 195 + ``` 196 + 197 + --- 198 + 199 + ## Why It's a Good Lab Bench 200 + 201 + ### 1. **Readable** 202 + - ~350 lines (vs 800+ for seashells.mjs) 203 + - No visual feedback loop 204 + - No complex state decay 205 + - Direct cause-effect 206 + 207 + ### 2. **Modifiable** 208 + Each section is independent: 209 + 210 + **Change sound:** Edit `generator.bytebeat()` (line 22) 211 + - Add pattern 3: just write another formula 212 + - Integrate it into blending 213 + 214 + **Change spatial mapping:** Edit `mapXToFrequency()` (line 69) 215 + - Try linear instead of logarithmic 216 + - Try quantized to specific notes 217 + - Try polar coordinates 218 + 219 + **Change auto-generation:** Edit `spawnHoldVoice()` (line 135) 220 + - Different spawn positions (grid, random, bounded) 221 + - Different orbit speeds 222 + - Different durations 223 + 224 + **Change UI:** Edit `paint()` (line 179) 225 + - Remove grid, add oscilloscope 226 + - Add different text labels 227 + - Show orbit paths visually 228 + 229 + ### 3. **Testable** 230 + - Add `console.log()` anywhere to debug 231 + - Change one variable, test immediately 232 + - No side effects (each voice is independent) 233 + 234 + ### 4. **Minimal Dependencies** 235 + - Only uses AC's `sound.synth()` API 236 + - No external libraries 237 + - No complex state machines 238 + 239 + --- 240 + 241 + ## Experiments to Try 242 + 243 + ### Experiment 1: Change the Patterns 244 + **Goal:** Make it sound more chaotic 245 + 246 + ```javascript 247 + // In generator.bytebeat(), change p1: 248 + const p1 = (t * t) & (t >> 4) & 255; // Multiplicative instead of XOR 249 + 250 + // Test: Press H, listen to how it differs 251 + ``` 252 + 253 + ### Experiment 2: Change Frequency Range 254 + **Goal:** Make it higher or lower pitched 255 + 256 + ```javascript 257 + // In mapXToFrequency(): 258 + const minHz = 110; // Was 55 (raise minimum) 259 + const maxHz = 440; // Was 880 (lower maximum) 260 + 261 + // Test: Press H, notice narrower frequency range 262 + ``` 263 + 264 + ### Experiment 3: Change Auto-Spawn Rate 265 + **Goal:** More or fewer voices 266 + 267 + ```javascript 268 + // In holdSequence: 269 + spawnInterval: 1500, // Was 3000 (spawn every 1.5 sec instead of 3) 270 + 271 + // Test: More voices, denser texture 272 + ``` 273 + 274 + ### Experiment 4: Change Voice Lifespan 275 + **Goal:** Longer or shorter holds 276 + 277 + ```javascript 278 + // In spawnHoldVoice(): 279 + const duration = 15000 + (Math.random() - 0.5) * 5000; // Was 6000 ± 1500 280 + 281 + // Test: Slower, more meditative tape 282 + ``` 283 + 284 + ### Experiment 5: Add a Third Pattern 285 + **Goal:** More timbral variety 286 + 287 + ```javascript 288 + // In generator.bytebeat(), add: 289 + const p3 = ((t >> 2) + (t >> 5)) & 255; // Additive pattern 290 + 291 + // Modify blending to include it: 292 + let mixPhase = (time * 0.1 + freqScale * 0.5) % 3; // Was % 2 293 + if (mixPhase < 1) { /* p1 to p2 */ } 294 + else if (mixPhase < 2) { /* p2 to p3 */ } 295 + else { /* p3 to p1 */ } 296 + 297 + // Test: Three-way blend creates richer texture 298 + ``` 299 + 300 + --- 301 + 302 + ## What's Removed (vs Seashells) 303 + 304 + ### Removed: 305 + - **Feedback loop** (pixels don't influence audio) 306 + - **5 patterns** (reduced to 2 for clarity) 307 + - **Complex UI** (minimal text only) 308 + - **Grid visualization** (just dots) 309 + - **State decay system** (removed unnecessary complexity) 310 + - **Chaos injection** (simplified synthesis) 311 + - **Harmonic Bell quantization** (bare version uses full spectrum) 312 + 313 + ### Kept: 314 + - **Core bytebeat synthesis** 315 + - **Spatial frequency mapping** 316 + - **Multi-voice management** 317 + - **Hold sequence (auto-generation)** 318 + - **Smooth blending between patterns** 319 + 320 + --- 321 + 322 + ## Performance Notes 323 + 324 + **CPU usage:** Very light 325 + - 2 patterns instead of 5 326 + - No pixel sampling 327 + - No feedback loop 328 + - Simple rendering 329 + 330 + **Audio quality:** 8-bit bytebeat character (lo-fi by design) 331 + 332 + **Latency:** Minimal — direct synthesis, no heavy processing 333 + 334 + --- 335 + 336 + ## Next Steps 337 + 338 + ### To Understand Deeper: 339 + 1. Run it and experiment with touches 340 + 2. Press H and listen for 5 minutes 341 + 3. Add `console.log()` to understand timing 342 + 4. Modify one parameter at a time 343 + 344 + ### To Build Variations: 345 + 1. Copy seash.mjs to seashell_variation.mjs 346 + 2. Make ONE change 347 + 3. Test immediately 348 + 4. Document what changed and why 349 + 5. Stack changes incrementally 350 + 351 + ### To Connect to Seashells: 352 + Once you understand seashell: 353 + 1. Read seashells_conceptual_model.md 354 + 2. Understand how feedback loop works 355 + 3. Add feedback sampling to seashell 356 + 4. You've now built seashells from scratch! 357 + 358 + --- 359 + 360 + ## Philosophy 361 + 362 + **Seashell** embodies these principles: 363 + 364 + 1. **Clarity over cleverness** — Every line should make sense 365 + 2. **Mechanism over magic** — You can trace cause-effect 366 + 3. **Minimal abstraction** — One function per concept 367 + 4. **Modular independence** — Change one layer, others stay stable 368 + 5. **Testability** — Add print statements, change values, listen 369 + 370 + Use it as a **thinking tool**, not just a sound generator. 371 + 372 + --- 373 + 374 + ## Quick Reference 375 + 376 + | What | Where | How | 377 + |------|-------|-----| 378 + | Change sound | `generator.bytebeat()` line 22 | Modify pattern formulas | 379 + | Change frequency range | `mapXToFrequency()` line 69 | Edit minHz/maxHz | 380 + | Change pitch modulation | `mapYToPitch()` line 75 | Edit exponent (2) | 381 + | Change auto-spawn timing | `holdSequence.spawnInterval` line 24 | Edit milliseconds | 382 + | Change voice duration | `spawnHoldVoice()` line 149 | Edit duration calculation | 383 + | Change voice count limit | `updateHoldVoices()` line 161 | Edit `.length < 5` | 384 + | Change rendering | `paint()` line 179 | Modify drawing code | 385 + | Change audio response | `sim()` line 245 | Edit voice update logic | 386 + 387 + --- 388 + 389 + ## Files to Compare 390 + 391 + - **seash.mjs** (this one) — bare bones, ~350 lines 392 + - **seashells.mjs** — full version, ~900 lines, with feedback 393 + - **seashells_conceptual_model.md** — 4-layer architecture explanation 394 + 395 + You can learn by: 396 + 1. Understanding seashell completely 397 + 2. Reading conceptual_model.md 398 + 3. Comparing against seashells.mjs to see what each layer adds 399 + 400 + Good lab bench = good foundation for understanding the full piece.
+311
SEASH_SLIDERS.md
··· 1 + # Seash Sliders — Lab Bench Control Panel 2 + 3 + Five interactive sliders at the bottom of seash for real-time parameter adjustment. 4 + 5 + --- 6 + 7 + ## The Sliders 8 + 9 + All sliders appear at the bottom of the screen and are touchable in real-time. 10 + 11 + ### 1. **Spawn** (Spawn Interval) 12 + - **Range:** 500 ms — 8000 ms 13 + - **Default:** 3000 ms (3 seconds) 14 + - **What it does:** How often new voices are spawned in hold sequence 15 + - **Try this:** 16 + - Move left (500ms) = dense, overlapping voices 17 + - Move right (8000ms) = sparse, distinct phrases 18 + - Sweet spot: 2000-4000ms for continuous texture 19 + 20 + ### 2. **Duration** (Voice Duration) 21 + - **Range:** 2000 ms — 20000 ms 22 + - **Default:** 6000 ms (6 seconds) 23 + - **What it does:** How long each voice sustains before fading 24 + - **Try this:** 25 + - Left (2s) = staccato, quick pulses 26 + - Right (20s) = long held tones, pad-like 27 + - Quick changes: 2-4 seconds 28 + - Sustained changes: 10-15 seconds 29 + 30 + ### 3. **OrbitSpd** (Orbit Speed) 31 + - **Range:** 0.0001 — 0.001 rad/frame 32 + - **Default:** 0.0003 rad/frame 33 + - **What it does:** How fast each voice drifts through frequency space 34 + - **Try this:** 35 + - Left = slow, lazy movement 36 + - Right = fast, erratic movement 37 + - Controls the "wandering" behavior of auto voices 38 + 39 + ### 4. **BlendSpd** (Pattern Blend Speed) 40 + - **Range:** 0.01 — 0.5 41 + - **Default:** 0.1 42 + - **What it does:** How fast patterns cross-fade into each other 43 + - **Try this:** 44 + - Left (0.01) = smooth, gradual transitions 45 + - Right (0.5) = quick, snappy transitions 46 + - Affects overall timbre evolution 47 + 48 + ### 5. **MaxVoices** (Max Concurrent Auto Voices) 49 + - **Range:** 1 — 10 50 + - **Default:** 5 voices 51 + - **What it does:** Maximum number of simultaneous auto-generated voices 52 + - **Try this:** 53 + - 1 = single voice, monophonic 54 + - 3-5 = balanced polyphony 55 + - 8-10 = dense cluster 56 + 57 + --- 58 + 59 + ## How to Use 60 + 61 + ### Adjust a Slider 62 + 1. **Touch and drag** the slider handle (■) left/right 63 + 2. Value updates in real-time 64 + 3. Audio responds immediately 65 + 4. Release to stop dragging 66 + 67 + ### Read the Values 68 + Each slider shows its current value to the right: 69 + ``` 70 + Spawn: ________■____ 3000 71 + (value displayed on right) 72 + ``` 73 + 74 + ### Combine Sliders for Effects 75 + 76 + **Slow Pad:** 77 + - Duration: 15000 (long holds) 78 + - Spawn: 8000 (slow spawn) 79 + - OrbitSpd: 0.0001 (minimal drift) 80 + - BlendSpd: 0.01 (smooth transitions) 81 + 82 + **Textural Chaos:** 83 + - Duration: 3000 (quick pulses) 84 + - Spawn: 500 (rapid spawning) 85 + - OrbitSpd: 0.001 (fast movement) 86 + - BlendSpd: 0.3 (snappy blending) 87 + 88 + **Musical Melody:** 89 + - Duration: 8000 (medium holds) 90 + - Spawn: 4000 (structured spacing) 91 + - OrbitSpd: 0.0003 (moderate drift) 92 + - BlendSpd: 0.15 (balanced transitions) 93 + 94 + **Dense Chords:** 95 + - MaxVoices: 10 (many simultaneous) 96 + - Duration: 10000 (held chords) 97 + - Spawn: 2000 (continuous) 98 + - OrbitSpd: 0.0001 (minimal movement) 99 + 100 + --- 101 + 102 + ## What's Happening Under the Hood 103 + 104 + ### Spawn Interval 105 + Controls `params.spawnInterval`, used in `updateHoldVoices()`: 106 + ```javascript 107 + if (now - lastSpawnTime > params.spawnInterval && count < maxVoices) { 108 + spawnHoldVoice(); // Create new voice 109 + } 110 + ``` 111 + 112 + ### Voice Duration 113 + Controls `params.voiceDuration`, used when spawning: 114 + ```javascript 115 + const duration = params.voiceDuration + (Math.random() - 0.5) * (duration * 0.5); 116 + // Voice lasts this long before fading 117 + ``` 118 + 119 + ### Orbit Speed 120 + Controls `params.orbitSpeed`, used for voice position: 121 + ```javascript 122 + hold.orbitPhase += params.orbitSpeed; 123 + const x = Math.cos(hold.orbitPhase) * radius + center; 124 + ``` 125 + 126 + ### Blend Speed 127 + Controls `params.blendSpeed`, in the bytebeat generator: 128 + ```javascript 129 + const mixPhase = (time * params.blendSpeed + freqScale * 0.5) % 2; 130 + // Controls how fast patterns morph 131 + ``` 132 + 133 + ### Max Voices 134 + Controls `params.maxVoices`, checked during spawning: 135 + ```javascript 136 + if (activeHolds.length < params.maxVoices) { 137 + spawnHoldVoice(); // Only spawn if under limit 138 + } 139 + ``` 140 + 141 + --- 142 + 143 + ## Slider Layout 144 + 145 + ``` 146 + Screen height: 147 + 148 + [Main playing area - 80% of screen] 149 + - Frequency grid 150 + - Voice positions 151 + - Touch interaction 152 + 153 + [Slider area - bottom 20% of screen] 154 + Spawn: ___■_____ 3000 155 + Duration: ______■__ 6000 156 + OrbitSpd: _____■___ 0.0003 157 + BlendSpd: ____■____ 0.1 158 + MaxVoices: _■_______ 5 159 + ``` 160 + 161 + Sliders are fully interactive while you play — adjust in real-time. 162 + 163 + --- 164 + 165 + ## Tips for Experimentation 166 + 167 + ### 1. **Change One at a Time** 168 + - Adjust Spawn, listen for 10 seconds 169 + - Adjust Duration next, observe how it interacts 170 + - This trains your intuition for each parameter 171 + 172 + ### 2. **Listen for Patterns** 173 + - Try MaxVoices=1 (monophonic) with different spawn/duration combos 174 + - Try MaxVoices=8 (dense) with same settings 175 + - Notice how polyphony changes the effect 176 + 177 + ### 3. **Map the Space** 178 + Create a mental map: 179 + ``` 180 + Slow, smooth ←→ Fast, chaotic 181 + (all sliders left) (all sliders right) 182 + 183 + Spacious, minimal ←→ Dense, overlapping 184 + (low spawn, max=1) (high spawn, max=10) 185 + ``` 186 + 187 + ### 4. **Use as Tape Control** 188 + - Set parameters how you like 189 + - Press H to start recording 190 + - Let it run 30 minutes 191 + - Sliders capture your intended "composition" in 5 parameters 192 + 193 + ### 5. **Keyboard + Slider Combo** 194 + - H to toggle hold sequence 195 + - Drag sliders while holding plays 196 + - Create interactive performances 197 + 198 + --- 199 + 200 + ## Slider Precision 201 + 202 + **Coarse adjustments:** 203 + - Touch slider, drag all the way left or right 204 + - Jumps to min/max 205 + 206 + **Fine adjustments:** 207 + - Touch slider and drag slowly 208 + - Values update frame-by-frame as you drag 209 + 210 + **Quantized sliders:** 211 + - MaxVoices: steps of 1 (always integer) 212 + - Others: continuous (floating point) 213 + 214 + --- 215 + 216 + ## Getting the Values Right 217 + 218 + ### For 90-Minute Tape 219 + - **Spawn:** 3000-5000 (not too dense) 220 + - **Duration:** 6000-10000 (medium holds) 221 + - **OrbitSpd:** 0.0002-0.0004 (gentle movement) 222 + - **BlendSpd:** 0.1-0.2 (smooth but alive) 223 + - **MaxVoices:** 4-6 (polyphonic without muddy) 224 + 225 + ### For Interactive Play 226 + - **Spawn:** Don't matter (you're creating voices) 227 + - **Duration/OrbitSpd:** Personal taste 228 + - **BlendSpd:** 0.15 for snappy response 229 + - **MaxVoices:** 6-8 (more bandwidth for your touches) 230 + 231 + ### For Study 232 + - **Spawn:** 3000 (baseline) 233 + - **Duration:** 6000 (baseline) 234 + - **OrbitSpd:** 0.0003 (baseline) 235 + - **BlendSpd:** 0.1 (baseline) 236 + - **MaxVoices:** 5 (baseline) 237 + 238 + Then change ONE, listen, understand, reset, repeat. 239 + 240 + --- 241 + 242 + ## Not Working? 243 + 244 + **Slider doesn't respond:** 245 + - Make sure you're touching on the blue slider line, not the label 246 + - Drag horizontally (not vertically) 247 + - Release and try again 248 + 249 + **Values not changing:** 250 + - Confirm you're in seash (not seashells) 251 + - Check that hold sequence is ON (press H) 252 + - Values change immediately; listen for audio change 253 + 254 + **Want to reset:** 255 + - Reload the page (F5) 256 + - Or manually drag each slider back to default 257 + - No "reset" button needed 258 + 259 + --- 260 + 261 + ## Next: Combine with Touch Play 262 + 263 + You can: 264 + 1. Set sliders to your preferred hold sequence settings 265 + 2. **Also** touch the screen to add manual voices 266 + 3. Manual touches + auto holds mix together 267 + 4. Create layered performances 268 + 269 + The sliders control the "background" (auto voices), your touches are the "foreground" (interactive layer). 270 + 271 + --- 272 + 273 + ## Example Sessions 274 + 275 + ### Session A: Deep Listen (90 minutes) 276 + ``` 277 + Spawn: 4000 (slower spawn) 278 + Duration: 8000 (longer holds) 279 + OrbitSpd: 0.0002 (minimal drift) 280 + BlendSpd: 0.08 (smooth) 281 + MaxVoices: 4 (sparse chords) 282 + 283 + Result: Meditative, slowly evolving texture 284 + ``` 285 + 286 + ### Session B: Interactive Play (30 minutes) 287 + ``` 288 + Spawn: 3000 289 + Duration: 6000 290 + OrbitSpd: 0.0003 291 + BlendSpd: 0.15 292 + MaxVoices: 5 293 + 294 + Press H, then also make touches 295 + Auto voices + your gestures = rich conversation 296 + ``` 297 + 298 + ### Session C: Chaos Lab (15 minutes) 299 + ``` 300 + Spawn: 800 (rapid!) 301 + Duration: 3000 (quick bursts) 302 + OrbitSpd: 0.0008 (fast movement) 303 + BlendSpd: 0.3 (snappy blends) 304 + MaxVoices: 8 (dense!) 305 + 306 + Result: Glitchy, algorithmic texture 307 + ``` 308 + 309 + --- 310 + 311 + Made for experimentation. Sliders = direct feedback loop between your intention and the sound.
+334
seashells_analysis.md
··· 1 + # Seashells.mjs Analysis 2 + 3 + ## Overview 4 + **Seashells** is a bytebeat algorithmic synthesizer with visual feedback. It's designed as an interactive piece where touch positions generate audio, but converting it to autoplay requires significant architectural changes. 5 + 6 + --- 7 + 8 + ## Core Audio Mechanism 9 + 10 + ### Bytebeat Synthesis 11 + The piece uses **5 algorithmic patterns** that blend over time: 12 + 13 + 1. **Pattern 1 (XOR Cascade)** - Crisp, digital texture 14 + ``` 15 + (t ^ (t >> (8 + shiftMod1)) ^ (t >> (9 + shiftMod2))) & 255 16 + ``` 17 + Classic bytebeat - creates sharp, glitchy sounds 18 + 19 + 2. **Pattern 2 (Melodic Stepped)** - Harmonic content 20 + ``` 21 + ((t * harmonic) & (t >> (5 + bitMod1)) | (t >> (4 + bitMod2))) & 255 22 + ``` 23 + Responsive to frequency scaling 24 + 25 + 3. **Pattern 3 (Rhythmic)** - Complex polyrhythmic patterns 26 + ``` 27 + (t | (t >> rhythmMod | t >> 7)) * (t & (t >> 11 | t >> complexMod)) & 255 28 + ``` 29 + Highly sensitive to frequency and feedback 30 + 31 + 4. **Pattern 4 (Sierpinski-like)** - Fractal patterns 32 + ``` 33 + (t & (t >> (5 + sierpinskiMod) | t >> 8)) & 255 34 + ``` 35 + Creates algorithmic complexity 36 + 37 + 5. **Pattern 5 (Frequency-responsive Melodic)** 38 + ``` 39 + ((t * melodyScale) ^ (t >> 6)) & (t >> 8) & 255 40 + ``` 41 + Strong pitch sensitivity 42 + 43 + ### Pattern Blending 44 + - Patterns cycle through 4-phase blend states (0→1→2→3→0) 45 + - Blend speed is modifiable by feedback (`mixSpeed`) 46 + - Blending is smooth and continuous (`blendIntensity` 0.3-1.0) 47 + 48 + ### Feedback System (Audio ↔ Visual Loop) 49 + The piece samples pixels from the screen and converts them into audio modulation parameters: 50 + 51 + **Sampling Strategy:** 52 + - 4 corner samples 53 + - 4 edge samples (mid-points of each edge) 54 + - 4 diagonal sweeps 55 + - 4+ orbital scanning samples (elliptical patterns that move with interaction memory) 56 + 57 + **Conversion to Audio Parameters:** 58 + ``` 59 + Red channel → timeModulation, harmonicScale, colorMod.r 60 + Green channel → rhythmScale, mixSpeed, colorMod.g 61 + Blue channel → shiftMod2, patternBias, colorMod.b 62 + Contrast → bitMod values (higher contrast = more bit operations) 63 + Variance → chaosLevel (pixel unpredictability → audio chaos) 64 + ``` 65 + 66 + **Chaos Injection:** 67 + ```javascript 68 + if (feedback.chaosLevel > 0.5) { 69 + finalPattern = finalPattern ^ Math.floor(feedback.chaosLevel * 128); 70 + } 71 + ``` 72 + 73 + --- 74 + 75 + ## Visual Generation 76 + 77 + ### Pixel Rendering 78 + - **No wipe()** - pixels accumulate, creating permanent trails 79 + - Bytebeat values map directly to Y positions 80 + - Colors computed from bit patterns AND feedback color mods 81 + - Additive blending for accumulation effects 82 + 83 + ### Visual Elements 84 + 1. **Main column visualization** - One vertical line per X pixel, height = bytebeat value 85 + 2. **Bit pattern layers** - Each bit of the bytebeat value adds horizontal bands 86 + 3. **Vertical streaks** - Frequency-responsive vertical lines (every Nth column) 87 + 4. **Horizontal sweep** - Time-based horizontal scan line that moves down the screen 88 + 89 + ### Interaction Visualization 90 + - Touch overlays show active voices with colored circles 91 + - Frequency label for each touch (Hz) 92 + - Grid showing frequency/pitch mapping 93 + - Help text when idle 94 + 95 + --- 96 + 97 + ## Interaction State (The "Memory" System) 98 + 99 + The piece maintains persistent modulation state that decays over time: 100 + 101 + ```javascript 102 + interactionState = { 103 + scanOffset: 0, // Orbital scan phase 104 + scanVelocity: 0.003, // How fast it scans 105 + scanSpread: 1.0, // Vertical spread of scan 106 + orbit: 0, // Cumulative rotation bias 107 + memory: 0, // Persistent touch "memory" (0-1) 108 + chaosBias: 0, // How chaotic it gets 109 + density: 1.0, // Sampling density 110 + lastTouchAt: 0 // Timestamp of last interaction 111 + } 112 + ``` 113 + 114 + **How Touch Influences State:** 115 + ```javascript 116 + scanOffset += (nx * 0.11 + ny * 0.07) // Touch moves scan 117 + orbit += (nx - 0.5) * 0.18 // Horizontal bias → rotation 118 + scanSpread *= 0.9; scanSpread += (0.65 + ny) * 0.1 // Vertical → spread 119 + memory *= 0.94; memory += 0.05 + |nx - 0.5| * 0.08 // Accumulates 120 + chaosBias *= 0.9; chaosBias += |nx - 0.5| * 0.25 // Edges → chaos 121 + ``` 122 + 123 + **Decay Over Time:** 124 + - Memory decays at 0.998/frame if touched recently, 0.992 if idle 125 + - Chaos decays at 0.997 (active) / 0.985 (idle) 126 + - Orbit decays at 0.992 127 + - After ~10 seconds idle: memory → 0, piece quiets down 128 + 129 + --- 130 + 131 + ## Current Limitations for 90-Minute Tape 132 + 133 + ### ❌ Problems 134 + 1. **No autoplay** - Requires manual touches to generate audio 135 + 2. **Silent when idle** - Help screen displays when no voices active 136 + 3. **Limited generative richness** - Only 8 simultaneous voices, all driven by touch 137 + 4. **Accumulation without clearing** - Visual system will eventually fill with noise 138 + 5. **No time-based voice generation** - No procedural voice triggering 139 + 6. **Memory decay** - State fades to silence after ~10 seconds of inactivity 140 + 141 + ### ✅ Strengths (Why It Could Work) 142 + 1. **High algorithmic complexity** - 5 blending patterns × feedback × chaos injection = very large parameter space 143 + 2. **Feedback loop creates emergence** - Visual patterns influence audio, creating unpredictable evolution 144 + 3. **Deterministic** - Same pixel patterns always produce same audio (reproducible tape) 145 + 4. **Minimal repetition** - Bytebeat patterns are subtle and shift continuously via blending 146 + 5. **Scaling** - Can handle more simultaneous voices than needed (currently capped at 8) 147 + 148 + --- 149 + 150 + ## Required Changes for Autoplay 151 + 152 + ### Option 1: Procedural Voice Generation (Recommended) 153 + 154 + **Add time-based voice triggering in `sim()`:** 155 + 156 + ```javascript 157 + function sim({ sound, hud, screen }) { 158 + // ... existing code ... 159 + 160 + // Procedural voice generation 161 + const voiceTargetCount = Math.round(1 + sharedPixelFeedback.density * 4); 162 + const currentVoiceCount = totalVoiceCount(); 163 + 164 + if (currentVoiceCount < voiceTargetCount && performance.now() - interactionState.lastAutoVoiceAt > 300) { 165 + // Add voice at pseudo-random "musical" position 166 + const nextX = (interactionState.autoVoicePhase * screen.width) % screen.width; 167 + const nextY = (Math.sin(performance.now() * 0.0003) * 0.5 + 0.5) * screen.height; 168 + 169 + startTouchVoice({ 170 + pointerIndex: 8 + currentVoiceCount, // Use high indices for auto voices 171 + x: nextX, 172 + y: nextY, 173 + screenWidth: screen.width, 174 + screenHeight: screen.height, 175 + sound 176 + }); 177 + 178 + interactionState.lastAutoVoiceAt = performance.now(); 179 + interactionState.autoVoicePhase = (interactionState.autoVoicePhase + 0.31) % 1; // Golden ratio 180 + } 181 + 182 + // Age out auto-voices slowly (don't kill, just quiet) 183 + // This creates natural voice turnover instead of jumping in/out 184 + } 185 + ``` 186 + 187 + **Adjustments needed:** 188 + - Increase `maxTouchPointers` from 8 to ~20-30 for more voices 189 + - Add `lastAutoVoiceAt` and `autoVoicePhase` to `interactionState` 190 + - Modify voice volume calculation to account for mix of auto/touch voices 191 + 192 + ### Option 2: "Hold" Mode (Simpler, More Controlled) 193 + 194 + **Add a single "master" voice that holds until changed:** 195 + 196 + ```javascript 197 + let holdState = { 198 + x: null, 199 + y: null, 200 + holdUntil: 0, 201 + nextChangeAt: 0 202 + }; 203 + 204 + function act({ event: e, sound, screen, pens }) { 205 + // Existing touch handling... 206 + 207 + if (e.is("keyboard:down:h")) { 208 + // Toggle hold mode 209 + if (holdState.x === null) { 210 + // Start holding at a specific position 211 + holdState.x = screen.width * 0.5; 212 + holdState.y = screen.height * 0.5; 213 + holdState.holdUntil = performance.now() + 5000; // Hold for 5 sec 214 + startTouchVoice({ 215 + pointerIndex: 99, // Special hold voice 216 + x: holdState.x, 217 + y: holdState.y, 218 + screenWidth: screen.width, 219 + screenHeight: screen.height, 220 + sound 221 + }); 222 + } else { 223 + stopTouchVoice(99); 224 + holdState.x = null; 225 + } 226 + } 227 + } 228 + 229 + function sim({ sound, hud, screen }) { 230 + // Auto-release hold if time expired 231 + if (holdState.x !== null && performance.now() > holdState.holdUntil) { 232 + stopTouchVoice(99); 233 + holdState.x = null; 234 + } 235 + 236 + // Or: continuously update hold position based on pixel feedback 237 + if (holdState.x !== null) { 238 + const feedback = sharedPixelFeedback; 239 + holdState.x = (holdState.x + feedback.patternBias * 0.5) % screen.width; 240 + holdState.y = (holdState.y + feedback.timeModulation * 0.0001) % screen.height; 241 + updateTouchVoice({ 242 + pointerIndex: 99, 243 + x: holdState.x, 244 + y: holdState.y, 245 + screenWidth: screen.width, 246 + screenHeight: screen.height, 247 + sound 248 + }); 249 + } 250 + } 251 + ``` 252 + 253 + ### Option 3: Hybrid (Best for Tape) 254 + 255 + Combine procedural generation + controlled hold positions: 256 + - Auto-voices spawn at intervals determined by pixel feedback 257 + - Each voice holds for variable duration (3-15 seconds) 258 + - Hold positions follow orbital patterns (music-like phrasing) 259 + - User can still manually intervene 260 + 261 + --- 262 + 263 + ## Viability for 90 Minutes 264 + 265 + ### Without Changes 266 + **⚠️ Not viable** - Needs manual interaction, would result in 90 minutes of silence + random touches 267 + 268 + ### With Procedural Voices 269 + **✅ Viable** - Could sustain audio, but: 270 + - Voices may cluster in same regions without spatial variation 271 + - Without user interaction, state may converge to stable patterns 272 + - Visual accumulation could become monolithic 273 + 274 + ### With Hold Mode + Orbital Sequencing 275 + **✅ Very viable** - Could create: 276 + - Phrased movements (voices move through parameter space) 277 + - Natural emergence from pixel feedback 278 + - Balance between predictability and surprise 279 + - Tape-like "performance" quality 280 + 281 + ### Recommended Hybrid Approach 282 + 283 + 1. **Keep current touch system** for interactivity 284 + 2. **Add procedural voice spawning** that's influenced by feedback 285 + 3. **Add orbital "hold" sequences** that create musical phrasing 286 + 4. **Slowly wipe screen** (every 30-60 seconds) to prevent visual noise accumulation 287 + 5. **Map feedback more musically** - e.g., high variance → more voices, high brightness → faster tempo 288 + 289 + Example voice spawning pattern: 290 + ```javascript 291 + // Spawn voices at orbital positions, Fibonacci intervals 292 + const goldenRatio = 1.618; 293 + const nextSpawn = Math.floor(baseInterval * Math.pow(goldenRatio, currentSpawnIndex)); 294 + const orbitPhase = (performance.now() * 0.0001 + currentSpawnIndex * 0.31) % (Math.PI * 2); 295 + const x = (Math.cos(orbitPhase) * 0.4 + 0.5) * screen.width; 296 + const y = (Math.sin(orbitPhase) * 0.4 + 0.5) * screen.height; 297 + ``` 298 + 299 + --- 300 + 301 + ## Memory & Emergence 302 + 303 + The **key strength** is that visual state influences audio via feedback sampling: 304 + 305 + 1. Pixels accumulate → visual patterns become complex 306 + 2. Complex visuals → chaotic feedback parameters 307 + 3. Chaotic feedback → audio becomes more generative 308 + 4. Audio via painting → new visual patterns 309 + 5. Loop → increasing complexity over 90 minutes 310 + 311 + This is **genuine emergence**, not repetition. A 90-minute tape would document the system's exploration of its parameter space, gradually finding new combinations. 312 + 313 + --- 314 + 315 + ## Suggested Implementation Priority 316 + 317 + If building autoplay version: 318 + 319 + 1. **First** - Add slow screen wipe (every 45 sec) to prevent accumulation 320 + 2. **Second** - Add procedural voice spawning based on pixel variance 321 + 3. **Third** - Implement hold sequences (3-15 second voice holds at orbital positions) 322 + 4. **Fourth** - Map feedback more musically (high-brightness → voice clusters, etc.) 323 + 5. **Optional** - Add keyboard shortcuts for manual phase control (reset wipe, trigger voices, etc.) 324 + 325 + --- 326 + 327 + ## Code Entry Points to Modify 328 + 329 + - `sim()` (line 696) - Add voice generation logic 330 + - `interactionState` (line 25) - Add autoplay-specific state 331 + - `act()` (line 625) - Add keyboard controls for autoplay 332 + - `paint()` (line 490) - Add conditional wipe logic 333 + 334 + Would preserve all existing touch/visual mechanics while enabling tape-like continuous playback.
+529
seashells_conceptual_model.md
··· 1 + # Seashells: Conceptual Model & Variation Framework 2 + 3 + This document breaks down the architecture of **seashells.mjs** into conceptual components, so you can understand, remix, and create variations. 4 + 5 + --- 6 + 7 + ## The Core Stack (4 Layers) 8 + 9 + ``` 10 + ┌─────────────────────────────────┐ 11 + │ SEQUENCING LAYER │ How voices spawn & interact 12 + │ (Hold system, voice lifecycle) │ 13 + ├─────────────────────────────────┤ 14 + │ SYNTHESIS LAYER │ How audio is generated 15 + │ (5 bytebeat patterns, blending)│ 16 + ├─────────────────────────────────┤ 17 + │ FEEDBACK LOOP LAYER │ Audio ↔ Visual feedback 18 + │ (Pixel sampling → parameters) │ 19 + ├─────────────────────────────────┤ 20 + │ SPATIAL MAPPING LAYER │ Touch → Frequency/Pitch 21 + │ (X/Y to Hz, modulation axes) │ 22 + └─────────────────────────────────┘ 23 + ``` 24 + 25 + Each layer is **independently modifiable**. You can swap out any component without breaking the others. 26 + 27 + --- 28 + 29 + ## Layer 1: Spatial Mapping (Touch/Position → Audio Parameters) 30 + 31 + ### Current Implementation 32 + ```javascript 33 + X-axis: screen position → base frequency (80–1600 Hz logarithmic) 34 + Y-axis: screen position → pitch multiplier (0.5x–2x linear) 35 + ``` 36 + 37 + **Functions involved:** 38 + - `mapXToFrequency(x, width)` - Convert X pixel to frequency 39 + - `mapYToPitchFactor(y, height)` - Convert Y pixel to pitch multiplier 40 + - `deriveVoiceFrequency()` - Combine both into final frequency 41 + 42 + ### Variations You Could Try 43 + 44 + **1. Polar Coordinate Mapping** 45 + ```javascript 46 + // Instead of cartesian X/Y 47 + const angle = Math.atan2(y - centerY, x - centerX); 48 + const distance = Math.sqrt((x-centerX)² + (y-centerY)²); 49 + const frequency = minHz * Math.pow(maxHz/minHz, distance/maxRadius); 50 + const timbre = (angle + Math.PI) / (2 * Math.PI); // Map to 0-1 51 + ``` 52 + 53 + **2. Vertical Strip Mapping** (like a piano keyboard) 54 + ```javascript 55 + // Ignore X, only use Y for frequency 56 + const frequency = 55 * Math.pow(2, y / screenHeight * 5); // 5 octaves 57 + ``` 58 + 59 + **3. Grid Quantization** (musical scale constraints) 60 + ```javascript 61 + const notes = [55, 62, 69, 82, 110, 123, 147, 165, 196, 220]; // C minor pentatonic 62 + const gridX = Math.round(x / screenWidth * (notes.length - 1)); 63 + const octaveY = Math.round(y / screenHeight * 4); 64 + const frequency = notes[gridX] * Math.pow(2, octaveY); 65 + ``` 66 + 67 + **4. Feedback-Influenced Mapping** (space changes based on audio) 68 + ```javascript 69 + const baseFreq = mapXToFrequency(x, width); 70 + const pitchMult = mapYToPitchFactor(y, height); 71 + // Modulate by pixel feedback 72 + const feedbackScale = 0.8 + sharedPixelFeedback.intensity * 0.4; 73 + return baseFreq * pitchMult * feedbackScale; 74 + ``` 75 + 76 + --- 77 + 78 + ## Layer 2: Synthesis (Audio Generation) 79 + 80 + ### Current Architecture: 5 Blending Patterns 81 + 82 + The piece uses **5 independent bytebeat generators** that morph through each other: 83 + 84 + ```javascript 85 + pattern1 = (t ^ (t >> (8 + shiftMod1)) ^ (t >> (9 + shiftMod2))) & 255 86 + pattern2 = ((t * harmonic) & (t >> (5 + bitMod1)) | (t >> (4 + bitMod2))) & 255 87 + pattern3 = (t | (t >> rhythmMod | t >> 7)) * (t & (t >> 11 | t >> complexMod)) & 255 88 + pattern4 = (t & (t >> (5 + sierpinskiMod) | t >> 8)) & 255 89 + pattern5 = ((t * melodyScale) ^ (t >> 6)) & (t >> 8) & 255 90 + ``` 91 + 92 + **Blending mechanism:** 93 + - Time-based phase progresses through 5 states (0→1→2→3→4→0) 94 + - Between states, linear interpolation smooths transitions 95 + - Phase speed and intensity controlled by feedback 96 + 97 + ### Understanding Each Pattern 98 + 99 + | Pattern | Type | Character | Key Insight | 100 + |---------|------|-----------|-------------| 101 + | **Pattern 1** | XOR Cascade | Digital, crisp, glitchy | Bit flips create harsh transitions | 102 + | **Pattern 2** | Melodic | Pitched, harmonic | `t * harmonic` creates repeating cycles | 103 + | **Pattern 3** | Rhythmic | Complex polyrhythm | Multiplication creates interference patterns | 104 + | **Pattern 4** | Fractal | Sierpinski-like, algorithmic | Simple XOR creates complexity | 105 + | **Pattern 5** | Frequency-Responsive | Pitch-sensitive melodic | Scale changes with input frequency | 106 + 107 + ### Variation: Add Your Own Pattern 108 + 109 + **Step 1: Design a pattern** 110 + ```javascript 111 + const pattern6 = (t * t) & (t >> (7 + feedback.complexity)) & 255; 112 + ``` 113 + 114 + **Step 2: Integrate into blending loop** 115 + ```javascript 116 + let mixPhase = (time * 0.08 + freqScale * 0.5) % 6; // Changed from 5 to 6 117 + if (mixPhase < 1) { 118 + finalPattern = pattern1 * (1 - blend) + pattern2 * blend; 119 + } else if (mixPhase < 2) { 120 + finalPattern = pattern2 * (1 - blend) + pattern3 * blend; 121 + } // ... add more conditions ... 122 + else if (mixPhase < 5) { 123 + finalPattern = pattern5 * (1 - blend) + pattern6 * blend; 124 + } 125 + ``` 126 + 127 + ### Pattern Design Ideas 128 + 129 + **Additive (Smooth)** 130 + ```javascript 131 + const patternSmooth = ((t >> 1) + (t >> 3) + (t >> 5)) & 255; 132 + ``` 133 + 134 + **Multiplicative (Complex)** 135 + ```javascript 136 + const patternComplex = (t * (t >> 4) * (t >> 8)) & 255; 137 + ``` 138 + 139 + **Modulo-based (Rhythmic)** 140 + ```javascript 141 + const patternModulo = (t % 128 + (t >> 8) % 128) & 255; 142 + ``` 143 + 144 + **Conditional (Structured)** 145 + ```javascript 146 + const patternConditional = (t & 128) ? (t << 1) & 255 : (t >> 1) & 255; 147 + ``` 148 + 149 + --- 150 + 151 + ## Layer 3: Feedback Loop (Visual → Audio Influence) 152 + 153 + ### Current System: Pixel Sampling → Parameter Modulation 154 + 155 + **Sampling strategy:** 12-20 points strategically distributed 156 + - 4 corners (detect extreme brightness) 157 + - 4 edge midpoints (detect edge activity) 158 + - 4 diagonal sweeps (detect diagonal patterns) 159 + - 4+ orbital scans (detect center/rotation) 160 + 161 + **Conversion:** 162 + ``` 163 + RED channel → Harmonic scaling, time modulation 164 + GREEN channel → Rhythm scaling, mix speed 165 + BLUE channel → Pattern bias, shift modulation 166 + Brightness → Intensity, chaos injection 167 + Contrast → Bit operations 168 + Variance → Chaos level 169 + ``` 170 + 171 + ### Feedback Parameters Affected 172 + 173 + ```javascript 174 + timeModulation: How the time variable shifts (larger jumps = more chaotic) 175 + shiftMod1/2: XOR shift amounts (bigger shifts = less repetitive) 176 + harmonicScale: How many cycles the melody completes 177 + rhythmScale: Speed of rhythmic modulation 178 + bitMod1/2: Bit operation amounts (chaos injection) 179 + mixSpeed: How fast patterns cycle through 180 + blendIntensity: How smooth transitions are 181 + chaosLevel: XOR noise injection probability 182 + colorMod (r,g,b): Color channel multipliers (affects visuals) 183 + ``` 184 + 185 + ### Variation: Change What Pixels Affect 186 + 187 + **Current: RGB brightness → Audio parameters** 188 + 189 + **Alternative 1: Directional Gradient** 190 + ```javascript 191 + // Sample top half vs bottom half 192 + const topSamples = sampleRegion(0, 0, width, height/2); 193 + const bottomSamples = sampleRegion(0, height/2, width, height); 194 + const topBrightness = avgBrightness(topSamples); 195 + const bottomBrightness = avgBrightness(bottomSamples); 196 + 197 + feedback.mixSpeed = 0.5 + (topBrightness / 255) * 2; 198 + feedback.chaosLevel = (bottomBrightness / 255); 199 + ``` 200 + 201 + **Alternative 2: Edge Detection** 202 + ```javascript 203 + // High contrast areas → more complexity 204 + const contrast = maxBrightness - minBrightness; 205 + feedback.complexity = contrast / 255; 206 + ``` 207 + 208 + **Alternative 3: Color-Specific Regions** 209 + ```javascript 210 + // Sample only red-dominant pixels 211 + const redRegions = samples.filter(s => s.r > s.g && s.r > s.b); 212 + feedback.intensity = redRegions.length / samples.length; 213 + ``` 214 + 215 + ### Variation: Change Visual Effects from Audio 216 + 217 + The piece also **paints bytebeat patterns** back to the screen: 218 + 219 + **Current:** 220 + ```javascript 221 + // For each pixel column: 222 + const bytebeat = pattern(...); 223 + const y = (bytebeat / 255) * screenHeight; 224 + screen.pixels[y * width + x] = color; 225 + ``` 226 + 227 + **Alternative: Oscilloscope Mode** 228 + ```javascript 229 + // Draw audio waveform like an oscilloscope 230 + const samples = generator.bytebeat({ frequency, sampleRate, time, samplesNeeded: 512 }); 231 + for (let i = 0; i < samples.length; i++) { 232 + const y = (samples[i] * 0.5 + 0.5) * screenHeight; 233 + const x = (i / samples.length) * screenWidth; 234 + screen.pixels[Math.round(y * width + x)] = 255; 235 + } 236 + ``` 237 + 238 + **Alternative: Spectrogram Mode** 239 + ```javascript 240 + // Show frequency content over time 241 + const frequencies = fft(bytebeat_output); 242 + for (let freq = 0; freq < frequencies.length; freq++) { 243 + const brightness = frequencies[freq]; 244 + const y = (freq / frequencies.length) * screenHeight; 245 + screen.pixels[Math.round(y * width + sweepX)] = brightness; 246 + } 247 + ``` 248 + 249 + --- 250 + 251 + ## Layer 4: Sequencing (Voice Lifecycle & Hold Mechanism) 252 + 253 + ### Current Architecture: Hold Sequence 254 + 255 + **States:** 256 + - **Off** - No automatic voices, only touch interaction 257 + - **On** - Periodically spawns voices at orbital positions, 5-13 second durations 258 + 259 + **Parameters:** 260 + ```javascript 261 + spawnInterval: 2000ms (spawn every 2 seconds) 262 + maxConcurrentHolds: 6 (never more than 6 at once) 263 + baseDuration: 5000-13000ms (influenced by chaos feedback) 264 + orbitSpeed: 0.0003-0.0006 rad/frame (varies per voice) 265 + wobble: 0.15-0.35 (influenced by memory) 266 + ``` 267 + 268 + **Spawning logic:** 269 + ``` 270 + Position = orbital path (cosine × radius, sine × radius) 271 + Radius influenced by feedback.density 272 + Phase influenced by time + randomness 273 + Duration = base + (1 - chaos) bonus - (1 - quiet bonus) 274 + Less chaos → longer holds 275 + High memory → longer holds 276 + Movement = orbital drift + wobble 277 + Each voice has independent orbital speed 278 + Memory makes movements more pronounced 279 + ``` 280 + 281 + ### Variation: Different Sequencing Strategies 282 + 283 + **1. Fibonacci Interval Spawning** 284 + ```javascript 285 + const goldenRatio = 1.618; 286 + const intervals = []; 287 + for (let i = 0; i < 10; i++) { 288 + intervals.push(Math.floor(1000 * Math.pow(goldenRatio, i))); 289 + } 290 + // Spawn voices at fibonacci-spaced intervals 291 + ``` 292 + 293 + **2. Grid-Based Spawning** 294 + ```javascript 295 + // Spawn voices at fixed grid positions, one per cell 296 + for (let gx = 0; gx < gridWidth; gx++) { 297 + for (let gy = 0; gy < gridHeight; gy++) { 298 + const x = (gx + 0.5) / gridWidth * screenWidth; 299 + const y = (gy + 0.5) / gridHeight * screenHeight; 300 + spawnVoiceAt(x, y, sound); 301 + } 302 + } 303 + ``` 304 + 305 + **3. Random Walk Sequencing** 306 + ```javascript 307 + // Each voice position is random walk from previous 308 + const walk = { x: screenWidth * 0.5, y: screenHeight * 0.5 }; 309 + for (let i = 0; i < voiceCount; i++) { 310 + walk.x += (Math.random() - 0.5) * 200; 311 + walk.y += (Math.random() - 0.5) * 200; 312 + walk.x = clamp(walk.x, 0, screenWidth); 313 + walk.y = clamp(walk.y, 0, screenHeight); 314 + spawnVoiceAt(walk.x, walk.y, sound); 315 + } 316 + ``` 317 + 318 + **4. Brightness-Following Sequencing** 319 + ```javascript 320 + // Spawn voices at brightest regions of screen 321 + const samples = samplePixels(screen, 20); 322 + const sorted = samples.sort((a, b) => b.brightness - a.brightness); 323 + sorted.slice(0, 5).forEach(sample => { 324 + spawnVoiceAt(sample.x, sample.y, sound); 325 + }); 326 + ``` 327 + 328 + **5. Phase-Locking to Audio** 329 + ```javascript 330 + // Spawn new voices synchronized to audio beat 331 + const audioEnergy = measureAudioEnergy(sound); 332 + if (audioEnergy > threshold && (now - lastSpawn) > spawnDelay) { 333 + spawnHoldVoice(screenWidth, screenHeight, sound); 334 + lastSpawn = now; 335 + } 336 + ``` 337 + 338 + --- 339 + 340 + ## Remix Guide: Creating Variations 341 + 342 + ### Quick Swaps (30 minutes) 343 + 344 + **1. Change the color palette** 345 + - Modify `touchOverlayPalette` (line 14-23) 346 + - Modify color generation in `paint()` (line 558-560) 347 + 348 + **2. Change spatial mapping** 349 + - Replace `mapXToFrequency()` and `mapYToPitchFactor()` 350 + - E.g., use only vertical axis, or add diagonal 351 + 352 + **3. Adjust hold sequence timing** 353 + - Change `spawnInterval` (currently 2000ms) 354 + - Change hold duration calculation (currently 5-13 seconds) 355 + - Change max concurrent holds (currently 6) 356 + 357 + **4. Modify feedback sensitivity** 358 + - Increase/decrease pixel sampling points 359 + - Change RGB→parameter mappings 360 + - Adjust decay rates in `sim()` 361 + 362 + --- 363 + 364 + ### Medium Swaps (1-2 hours) 365 + 366 + **1. Add a 6th bytebeat pattern** 367 + - Design new pattern formula 368 + - Insert into blending loop (change mod 5 to mod 6) 369 + - Adjust blend transitions 370 + 371 + **2. Implement alternative sequencing** 372 + - Comment out `updateHoldVoices()` 373 + - Write new spawning logic 374 + - Re-export or call from `sim()` 375 + 376 + **3. Change visual rendering** 377 + - Modify pixel drawing (lines 509-598) 378 + - Swap from vertical columns to orbits/grids/waveforms 379 + - Add new visual effects (trails, particles, etc.) 380 + 381 + **4. Implement new feedback strategy** 382 + - Rewrite `samplePixelFeedback()` 383 + - Change what gets sampled (edges, variance, specific colors) 384 + - Change RGB→parameter mappings 385 + 386 + --- 387 + 388 + ### Deep Remixes (3-6 hours) 389 + 390 + **1. Multi-Layer Synthesis** 391 + - Have different hold voices use different pattern sets 392 + - E.g., lower voices use pattern 1-2, higher voices use 4-5 393 + 394 + **2. Envelope Shaping** 395 + - Add ADSR envelopes to voices 396 + - Make volume/timbre evolve over hold duration 397 + 398 + **3. Harmonic Relationships** 399 + - Make voices respond to each other 400 + - E.g., new voice spawned at harmonic of existing voices 401 + 402 + **4. Spatial Audio Evolution** 403 + - Make voices' frequency change as they move through space 404 + - Create "force fields" where certain regions repel/attract 405 + 406 + **5. Generative Visual System** 407 + - Decouple visuals from audio synthesis 408 + - Create independent generative visual patterns 409 + - Use audio to modulate visual parameters 410 + 411 + --- 412 + 413 + ## Code Landmarks for Modification 414 + 415 + ### To understand a layer, read these functions: 416 + 417 + **Spatial Mapping:** 418 + - `mapXToFrequency()` (line 190) 419 + - `mapYToPitchFactor()` (line 198) 420 + - `deriveVoiceFrequency()` (line 205) 421 + 422 + **Synthesis:** 423 + - `generator.bytebeat()` (line 61) 424 + - Pattern definitions (lines 82-97) 425 + - Pattern blending (lines 100-126) 426 + 427 + **Feedback:** 428 + - `samplePixelFeedback()` (line 371) 429 + - Sampling strategy (lines 379-424) 430 + - Parameter derivation (lines 440-486) 431 + 432 + **Sequencing:** 433 + - `spawnHoldVoice()` (line 370) 434 + - `updateHoldVoices()` (line 406) 435 + - `toggleHoldSequence()` (line 457) 436 + - Hold state initialization (line 40) 437 + 438 + **Visuals:** 439 + - `paint()` (line 525) 440 + - Pixel rendering (lines 551-612) 441 + - Color computation (lines 558-560) 442 + 443 + --- 444 + 445 + ## Conceptual Symmetries 446 + 447 + Notice these patterns: 448 + 449 + 1. **Feedback flows upward**: Pixels → Audio → Pixels 450 + 2. **Time operates at multiple scales**: 451 + - Sample-level: Bytebeat generation (44.1kHz) 452 + - Voice-level: Hold durations (seconds) 453 + - System-level: State decay (10+ seconds) 454 + 3. **Randomness is constrained**: Random values modulated by feedback 455 + 4. **Movement is orbital**: Scanning, voice drift, visual sweeps all use trig functions 456 + 5. **Colors derive from bits**: RGB computed from bytebeat pattern XORs 457 + 458 + These symmetries are **features** you can exploit in variations: 459 + - Use same orbital math for voices and pixel sampling 460 + - Use same bytebeat generators for audio and visuals 461 + - Use same feedback parameters to shape multiple layers 462 + 463 + --- 464 + 465 + ## Testing Your Variations 466 + 467 + When you remix, test these: 468 + 469 + 1. **With no touches** (hold sequence only) 470 + - Does it sustain audio continuously? 471 + - Are voices distinguishable or do they blend? 472 + - Does visual feedback remain varied? 473 + 474 + 2. **With touches** (interactive) 475 + - Do touch voices feel responsive? 476 + - Does hold sequence coexist peacefully? 477 + - Are there frequency collisions (too many same-pitch voices)? 478 + 479 + 3. **After 5 minutes idle** 480 + - Does it settle to silence or continue? 481 + - Do visuals accumulate wisely or become noise? 482 + 483 + 4. **After 90 minutes** 484 + - Would you listen to this as a tape? 485 + - Is there enough emergence/surprise? 486 + - Does it feel like a composition or just random? 487 + 488 + --- 489 + 490 + ## Example Variations to Try 491 + 492 + ### Variation A: "Comb Filter Seashells" 493 + - Keep synthesis/feedback as-is 494 + - Change `spawnInterval` to 500ms (faster) 495 + - Spawn voices at fixed frequency ratios (1x, 1.5x, 2x, 3x fundamental) 496 + - Result: Harmonic relationships, bell-like tones 497 + 498 + ### Variation B: "Noise Garden" 499 + - Keep synthesis/feedback as-is 500 + - Add 2-3 new chaotic bytebeat patterns 501 + - Increase `chaosLevel` sensitivity 5x 502 + - Result: More glitchy, algorithmic harshness 503 + 504 + ### Variation C: "Visual Instruments" 505 + - Keep synthesis as-is 506 + - Change visual rendering to oscilloscope 507 + - Scale oscilloscope based on voice frequency 508 + - High voices = small tight spirals, low voices = large loose ones 509 + - Result: Visual becomes the primary interface, audio is secondary 510 + 511 + ### Variation D: "Memory Piece" 512 + - Keep synthesis as-is 513 + - Make spawn rate depend on accumulated visual memory 514 + - Bright areas → more voices spawn nearby 515 + - Result: Visuals "grow" audio in response 516 + 517 + --- 518 + 519 + ## Final Note 520 + 521 + The beauty of this piece is that **every layer is independent**. You can: 522 + - Change synthesis without touching sequencing 523 + - Change sequencing without touching visuals 524 + - Change feedback without touching synthesis 525 + - Change mapping without touching anything else 526 + 527 + This independence is intentional. It means you can remix safely, testing one change at a time, without breaking the whole system. 528 + 529 + Happy remixing!
+474
seashells_variation_examples.md
··· 1 + # Seashells: Concrete Variation Examples 2 + 3 + Four complete variation sketches with copy-paste code. Each is a self-contained remix you can test. 4 + 5 + --- 6 + 7 + ## Example 1: "Harmonic Bell" — Constrained Pitch Mapping 8 + 9 + **Concept:** Instead of a continuous frequency space, voices snap to a musical scale. Creates bell-like tones. 10 + 11 + **Key change:** Replace the `mapXToFrequency()` function 12 + 13 + ```javascript 14 + // Replace mapXToFrequency() with this: 15 + function mapXToFrequencyQuantized(x, width) { 16 + const w = Math.max(1, width - 1); 17 + const nx = clamp((x ?? w / 2) / w, 0, 1); 18 + 19 + // C minor pentatonic scale 20 + const notes = [ 21 + 55, // A1 22 + 66, // B1 23 + 82, // E2 24 + 110, // A2 25 + 123, // B2 26 + 165, // E3 27 + 220, // A3 28 + 247, // B3 29 + 330, // E4 30 + 440 // A4 31 + ]; 32 + 33 + const index = Math.floor(nx * (notes.length - 1)); 34 + return notes[index]; 35 + } 36 + 37 + // Then in deriveVoiceFrequency(): 38 + function deriveVoiceFrequency({ x, y, screenWidth, screenHeight }) { 39 + const base = mapXToFrequencyQuantized(x, screenWidth) * mapYToPitchFactor(y, screenHeight); 40 + return clamp(base, 55, 1760); 41 + } 42 + ``` 43 + 44 + **What this does:** 45 + - X-axis snaps to 10 specific pitches (A minor pentatonic) 46 + - Y-axis still modulates pitch up/down 47 + - Result: Naturally harmonious, bells/resonators 48 + 49 + **To test:** Press H, touch the left side, right side, watch the pitch snap between specific notes. 50 + 51 + --- 52 + 53 + ## Example 2: "Chaos Intensifier" — Feedback-Driven Synthesis 54 + 55 + **Concept:** High visual variance → more chaotic audio. Creates feedback loops where visual complexity breeds audio wildness. 56 + 57 + **Changes:** 58 + 1. Increase chaos sensitivity in `samplePixelFeedback()` 59 + 2. Add new "chaos patterns" to synthesis 60 + 61 + ```javascript 62 + // In samplePixelFeedback(), find this line: 63 + // chaosLevel: Math.min(1.0, variance / 20000), 64 + 65 + // Replace with: 66 + chaosLevel: clamp(Math.sqrt(variance / 10000), 0, 1), // More sensitive 67 + 68 + // Then in the generator.bytebeat() function, find the chaos injection: 69 + // if (liveFeedback && liveFeedback.chaosLevel > 0.5) { 70 + // finalPattern = finalPattern ^ Math.floor(liveFeedback.chaosLevel * 128); 71 + // } 72 + 73 + // Replace with: 74 + if (liveFeedback && liveFeedback.chaosLevel > 0.3) { 75 + const chaosAmount = Math.floor(liveFeedback.chaosLevel * 200); 76 + finalPattern = (finalPattern ^ chaosAmount) + (chaosAmount >> 2) & 255; 77 + } 78 + ``` 79 + 80 + **What this does:** 81 + - Chaos level becomes much more sensitive (square root scaling) 82 + - Chaos injection affects more bits 83 + - Bright, contrasty visuals → immediately more chaotic audio 84 + 85 + **To test:** Press H, make the screen bright/contrasty with touches, watch audio become glitchier. 86 + 87 + --- 88 + 89 + ## Example 3: "Grid Voices" — Spatial Voice Quantization 90 + 91 + **Concept:** Hold sequence spawns voices on a grid, creating structured movement patterns. 92 + 93 + **Replace the `spawnHoldVoice()` function:** 94 + 95 + ```javascript 96 + // Grid configuration 97 + const gridConfig = { 98 + cols: 4, 99 + rows: 3, 100 + cellIndex: 0 101 + }; 102 + 103 + function spawnHoldVoice(screenWidth, screenHeight, sound) { 104 + const voiceId = holdSequence.nextVoiceId++; 105 + const feedback = sharedPixelFeedback; 106 + 107 + // Get next grid position (row-major order) 108 + const cellIndex = gridConfig.cellIndex % (gridConfig.cols * gridConfig.rows); 109 + const col = cellIndex % gridConfig.cols; 110 + const row = Math.floor(cellIndex / gridConfig.cols); 111 + gridConfig.cellIndex += 1; 112 + 113 + // Convert grid to screen coordinates (with padding) 114 + const padding = 40; 115 + const cellWidth = (screenWidth - padding * 2) / gridConfig.cols; 116 + const cellHeight = (screenHeight - padding * 2) / gridConfig.rows; 117 + 118 + const x = padding + (col + 0.5) * cellWidth; 119 + const y = padding + (row + 0.5) * cellHeight; 120 + 121 + // Duration varies by grid position 122 + const baseDuration = 4000 + (col + row) * 1000; 123 + const duration = baseDuration + (Math.random() - 0.5) * 1000; 124 + 125 + const hold = { 126 + voiceId, 127 + x, 128 + y, 129 + startTime: performance.now(), 130 + duration, 131 + orbitPhase: 0, 132 + orbitSpeed: 0.0001 + col * 0.00005 // Different speeds per column 133 + }; 134 + 135 + startTouchVoice({ 136 + pointerIndex: voiceId, 137 + x: Math.round(x), 138 + y: Math.round(y), 139 + screenWidth, 140 + screenHeight, 141 + sound 142 + }); 143 + 144 + holdSequence.activeHolds.push(hold); 145 + holdSequence.lastSpawnTime = performance.now(); 146 + } 147 + 148 + // In updateHoldVoices(), replace the orbital movement with: 149 + function updateHoldVoices(screenWidth, screenHeight, sound) { 150 + if (!holdSequence.enabled) return; 151 + 152 + const now = performance.now(); 153 + const feedback = sharedPixelFeedback; 154 + 155 + // Spawn new hold if interval exceeded 156 + if (now - holdSequence.lastSpawnTime > holdSequence.spawnInterval && holdSequence.activeHolds.length < 12) { 157 + spawnHoldVoice(screenWidth, screenHeight, sound); 158 + } 159 + 160 + // Update positions - GRID movement only (subtle vibrato) 161 + for (let i = holdSequence.activeHolds.length - 1; i >= 0; i--) { 162 + const hold = holdSequence.activeHolds[i]; 163 + const elapsed = now - hold.startTime; 164 + 165 + if (elapsed > hold.duration) { 166 + stopTouchVoice(hold.voiceId, 0.15); 167 + holdSequence.activeHolds.splice(i, 1); 168 + continue; 169 + } 170 + 171 + // Grid position stays fixed, but add vibrato 172 + const vibratoAmount = 10 + Math.sin(now * 0.003 + hold.voiceId) * 8; 173 + const vibratoX = Math.sin(now * 0.004 + hold.voiceId * 0.5) * vibratoAmount; 174 + const vibratoY = Math.cos(now * 0.005 + hold.voiceId * 0.7) * vibratoAmount; 175 + 176 + const x = hold.x + vibratoX; 177 + const y = hold.y + vibratoY; 178 + 179 + updateTouchVoice({ 180 + pointerIndex: hold.voiceId, 181 + x: Math.round(x), 182 + y: Math.round(y), 183 + screenWidth, 184 + screenHeight, 185 + sound 186 + }); 187 + } 188 + } 189 + ``` 190 + 191 + **What this does:** 192 + - Voices spawn in a 4×3 grid and fill it sequentially 193 + - Each voice has a fixed position with subtle vibrato 194 + - Creates structured, predictable movement 195 + - Different columns have different modulation speeds 196 + 197 + **To test:** Press H, watch voices fill grid positions systematically. 198 + 199 + --- 200 + 201 + ## Example 4: "Waveform Display" — Visual Audio Feedback 202 + 203 + **Concept:** Instead of bytebeat creating vertical lines, show actual waveform shapes. More "traditional" audio visualization. 204 + 205 + **Replace most of the `paint()` function (lines 551-612):** 206 + 207 + ```javascript 208 + // In paint(), replace the main pixel-manipulation loop with: 209 + 210 + if (totalVoiceCount() === 0) { 211 + wipe(10, 14, 22); 212 + drawTouchMapping({ ink, line, write, screen, emphasized: true }); 213 + ink(210, 232, 255); 214 + write("hold touches to play / press 'h' for hold sequence", { x: 2, y: Math.max(hudSafeTop + 2, screen.height - 16) }, undefined, undefined, false, uiFont); 215 + write("x=base hz y=pitch mult", { x: 2, y: Math.max(hudSafeTop + 10, screen.height - 8) }, undefined, undefined, false, uiFont); 216 + return; 217 + } 218 + 219 + // FEEDBACK LOOP 220 + sharedPixelFeedback = samplePixelFeedback(screen); 221 + const feedback = sharedPixelFeedback; 222 + 223 + // Generate waveform samples 224 + const samplesPerFrame = screen.width; 225 + const samples = generator.bytebeat({ 226 + frequency: currentFrequency, 227 + sampleRate: 44100, 228 + time: performance.now() * 0.001, 229 + samplesNeeded: samplesPerFrame, 230 + feedback 231 + }); 232 + 233 + // Draw waveform as oscilloscope-style 234 + const centerY = screen.height * 0.5; 235 + const amplitude = screen.height * 0.35; 236 + 237 + for (let x = 0; x < samplesPerFrame - 1; x++) { 238 + const sample1 = samples[x]; 239 + const sample2 = samples[x + 1]; 240 + 241 + const y1 = centerY - sample1 * amplitude; 242 + const y2 = centerY - sample2 * amplitude; 243 + 244 + // Draw line between consecutive samples 245 + drawLineBresenham( 246 + Math.round(x), 247 + Math.round(y1), 248 + Math.round(x + 1), 249 + Math.round(y2), 250 + screen, 251 + [200, 150, 255, 255] 252 + ); 253 + } 254 + 255 + // Draw baseline 256 + for (let x = 0; x < screen.width; x++) { 257 + const pixelIndex = (centerY * screen.width + x) * 4; 258 + screen.pixels[pixelIndex] = 80; 259 + screen.pixels[pixelIndex + 1] = 80; 260 + screen.pixels[pixelIndex + 2] = 100; 261 + screen.pixels[pixelIndex + 3] = 255; 262 + } 263 + 264 + // Helper: simple Bresenham line drawing 265 + function drawLineBresenham(x0, y0, x1, y1, screen, color) { 266 + const dx = Math.abs(x1 - x0); 267 + const dy = Math.abs(y1 - y0); 268 + const sx = x0 < x1 ? 1 : -1; 269 + const sy = y0 < y1 ? 1 : -1; 270 + let err = dx - dy; 271 + 272 + let x = x0, y = y0; 273 + while (true) { 274 + if (x >= 0 && x < screen.width && y >= 0 && y < screen.height) { 275 + const pixelIndex = (y * screen.width + x) * 4; 276 + screen.pixels[pixelIndex] = color[0]; 277 + screen.pixels[pixelIndex + 1] = color[1]; 278 + screen.pixels[pixelIndex + 2] = color[2]; 279 + screen.pixels[pixelIndex + 3] = color[3]; 280 + } 281 + 282 + if (x === x1 && y === y1) break; 283 + const e2 = 2 * err; 284 + if (e2 > -dy) err -= dy, x += sx; 285 + if (e2 < dx) err += dx, y += sy; 286 + } 287 + } 288 + 289 + drawTouchMapping({ ink, line, write, screen, emphasized: false }); 290 + drawTouchOverlays({ ink, line, circle, write, screen }); 291 + ``` 292 + 293 + **What this does:** 294 + - Shows actual audio waveform like an oscilloscope 295 + - Waveform updates in real-time based on synthesized samples 296 + - Visual directly represents what you're hearing 297 + - Feedback loop still influences timbre 298 + 299 + **To test:** Press H, watch the waveform shape change as pattern blending happens. 300 + 301 + --- 302 + 303 + ## Example 5: "Memory Painter" — Voices Follow Visual Entropy 304 + 305 + **Concept:** Voices spawn where the screen is most chaotic, creating a feedback where audio "grows" from visual disturbance. 306 + 307 + **Modify `spawnHoldVoice()`:** 308 + 309 + ```javascript 310 + function spawnHoldVoice(screenWidth, screenHeight, sound) { 311 + const voiceId = holdSequence.nextVoiceId++; 312 + const feedback = sharedPixelFeedback; 313 + 314 + // Sample multiple regions and find the most chaotic 315 + const samplePoints = 16; 316 + let maxChaos = 0; 317 + let spawnX = screenWidth * 0.5; 318 + let spawnY = screenHeight * 0.5; 319 + 320 + for (let i = 0; i < samplePoints; i++) { 321 + const x = Math.random() * screenWidth; 322 + const y = Math.random() * screenHeight; 323 + 324 + // Measure local entropy (variance of nearby pixels) 325 + const regionSamples = []; 326 + for (let dx = -10; dx <= 10; dx += 5) { 327 + for (let dy = -10; dy <= 10; dy += 5) { 328 + const px = clamp(Math.round(x + dx), 0, screenWidth - 1); 329 + const py = clamp(Math.round(y + dy), 0, screenHeight - 1); 330 + const pixelIndex = (py * screenWidth + px) * 4; 331 + const brightness = screen.pixels[pixelIndex] + 332 + screen.pixels[pixelIndex + 1] + 333 + screen.pixels[pixelIndex + 2]; 334 + regionSamples.push(brightness); 335 + } 336 + } 337 + 338 + // Compute variance 339 + const avg = regionSamples.reduce((a, b) => a + b, 0) / regionSamples.length; 340 + const variance = regionSamples.reduce((sum, val) => sum + Math.pow(val - avg, 2), 0) / regionSamples.length; 341 + 342 + if (variance > maxChaos) { 343 + maxChaos = variance; 344 + spawnX = x; 345 + spawnY = y; 346 + } 347 + } 348 + 349 + const baseDuration = 5000 + (1 - feedback.chaosLevel) * 8000; 350 + const duration = baseDuration + (Math.random() - 0.5) * 2000; 351 + 352 + const hold = { 353 + voiceId, 354 + x: spawnX, 355 + y: spawnY, 356 + startTime: performance.now(), 357 + duration, 358 + orbitPhase: 0, 359 + orbitSpeed: 0.0002 360 + }; 361 + 362 + startTouchVoice({ 363 + pointerIndex: voiceId, 364 + x: Math.round(spawnX), 365 + y: Math.round(spawnY), 366 + screenWidth, 367 + screenHeight, 368 + sound 369 + }); 370 + 371 + holdSequence.activeHolds.push(hold); 372 + holdSequence.lastSpawnTime = performance.now(); 373 + } 374 + ``` 375 + 376 + **What this does:** 377 + - Analyzes visual entropy (how chaotic pixels are) 378 + - Spawns new voices in the most chaotic regions 379 + - Creates positive feedback: audio → pixels → more audio 380 + - Visual "disturbances" are musically rewarded 381 + 382 + **To test:** Press H, touch screen to create visual chaos, watch new voices spawn there. 383 + 384 + --- 385 + 386 + ## How to Implement These 387 + 388 + 1. **Backup original:** 389 + ```bash 390 + cp system/public/aesthetic.computer/disks/seashells.mjs seashells_original.mjs 391 + ``` 392 + 393 + 2. **Pick one variation** (say, Harmonic Bell) 394 + 395 + 3. **Copy its code** into seashells.mjs, replacing the specified functions 396 + 397 + 4. **Test in dev environment:** 398 + ```bash 399 + npm run ac 400 + # Navigate to seashells in browser 401 + # Press H to activate hold sequence 402 + ``` 403 + 404 + 5. **Iterate:** Once you get one variation working, try others 405 + 406 + --- 407 + 408 + ## Combining Variations 409 + 410 + You can **stack these concepts**: 411 + 412 + - Grid Voices + Harmonic Bell = Structured harmonic grid 413 + - Chaos Intensifier + Waveform Display = Visual feedback of audio chaos 414 + - Memory Painter + Grid Voices = Chaos accumulates in grid cells 415 + - All four = Complex emergent system 416 + 417 + The trick is testing each change in isolation first, then carefully combining them. 418 + 419 + --- 420 + 421 + ## Debugging Tips 422 + 423 + **If synthesis breaks (no sound):** 424 + - Check that `currentFrequency` is in 20–20000 Hz range 425 + - Verify generator.bytebeat returns 512+ samples 426 + - Check sound.synth() is being called with correct parameters 427 + 428 + **If hold sequence doesn't work:** 429 + - Verify `holdSequence.enabled` is toggled by 'H' key 430 + - Check `spawnHoldVoice()` is being called from `updateHoldVoices()` 431 + - Make sure `voiceId` values don't collide with touch pointer IDs 432 + 433 + **If visuals freeze:** 434 + - Pixel manipulation loops might be expensive 435 + - Reduce `samplesPerFrame` or `gridConfig` cell count 436 + - Profile in DevTools Performance tab 437 + 438 + **If feedback loop breaks:** 439 + - Verify `sharedPixelFeedback` is being updated in paint() 440 + - Check pixel sampling doesn't go out of bounds 441 + - Ensure feedback parameters scale to expected ranges 442 + 443 + --- 444 + 445 + ## What To Listen For 446 + 447 + ### Harmonic Bell 448 + - Should sound like struck bells or gongs 449 + - Quantized pitches mean less dissonance 450 + - Movement within grid feels musically constrained 451 + 452 + ### Chaos Intensifier 453 + - Silent visuals = subtle, calm tone 454 + - Complex/bright visuals = harsh, glitchy audio 455 + - Real feedback loop, not just cosmetic 456 + 457 + ### Grid Voices 458 + - Predictable, structured movement 459 + - Different columns have different "personalities" (timbre) 460 + - Feels like an instrument you could learn to play 461 + 462 + ### Waveform Display 463 + - You see exactly what you hear 464 + - Blending between patterns visible as shape changes 465 + - Useful for understanding bytebeat architecture 466 + 467 + ### Memory Painter 468 + - Audio grows out of visual "accidents" 469 + - Touching creates short-term audio response 470 + - Over time, visual noise accumulates 471 + 472 + --- 473 + 474 + Happy remixing! Each variation teaches you something about how the layers interact.
+518
system/public/aesthetic.computer/disks/seash.mjs
··· 1 + // Seashell — bare-bones bytebeat synthesizer 2 + // Lab bench proof of concept: minimal code, maximum clarity 3 + 4 + /* Core mechanism: 5 + - 2 simple bytebeat patterns 6 + - Hold sequence: spawn voices automatically 7 + - X-axis = frequency, Y-axis = pitch multiplier 8 + - No visual feedback loop, no complexity 9 + - ~350 lines, easy to understand and modify 10 + */ 11 + 12 + // Voice management 13 + const touchVoices = new Map(); 14 + const maxTouchPointers = 8; 15 + 16 + // Hold sequence: auto-spawn voices 17 + const holdSequence = { 18 + enabled: false, 19 + activeHolds: [], 20 + lastSpawnTime: 0, 21 + spawnInterval: 3000, // 3 seconds between spawns 22 + nextVoiceId: 100 23 + }; 24 + 25 + // Parameters (controlled by sliders) 26 + const params = { 27 + spawnInterval: 3000, // ms between voice spawns 28 + voiceDuration: 6000, // ms each voice lasts 29 + orbitSpeed: 0.0003, // rad/frame orbital speed 30 + blendSpeed: 0.1, // pattern blend rate 31 + maxVoices: 5 // max concurrent auto voices 32 + }; 33 + 34 + // Slider UI 35 + const sliders = [ 36 + { 37 + label: "Spawn", 38 + key: "spawnInterval", 39 + min: 500, 40 + max: 8000, 41 + step: 100, 42 + x: 0, 43 + y: 0, 44 + width: 0, 45 + height: 16, 46 + dragging: false 47 + }, 48 + { 49 + label: "Duration", 50 + key: "voiceDuration", 51 + min: 2000, 52 + max: 20000, 53 + step: 500, 54 + x: 0, 55 + y: 0, 56 + width: 0, 57 + height: 16, 58 + dragging: false 59 + }, 60 + { 61 + label: "OrbitSpd", 62 + key: "orbitSpeed", 63 + min: 0.0001, 64 + max: 0.001, 65 + step: 0.00005, 66 + x: 0, 67 + y: 0, 68 + width: 0, 69 + height: 16, 70 + dragging: false 71 + }, 72 + { 73 + label: "BlendSpd", 74 + key: "blendSpeed", 75 + min: 0.01, 76 + max: 0.5, 77 + step: 0.02, 78 + x: 0, 79 + y: 0, 80 + width: 0, 81 + height: 16, 82 + dragging: false 83 + }, 84 + { 85 + label: "MaxVoices", 86 + key: "maxVoices", 87 + min: 1, 88 + max: 10, 89 + step: 1, 90 + x: 0, 91 + y: 0, 92 + width: 0, 93 + height: 16, 94 + dragging: false 95 + } 96 + ]; 97 + 98 + // Simple bytebeat generator 99 + const generator = { 100 + bytebeat: ({ frequency, sampleRate, time, samplesNeeded, feedback = null }) => { 101 + const samples = []; 102 + const freqScale = frequency / 440; 103 + const timeOffset = Math.floor(time * sampleRate * freqScale * 0.3); 104 + 105 + for (let i = 0; i < samplesNeeded; i++) { 106 + const t = timeOffset + i; 107 + 108 + // Pattern 1: XOR cascade (crisp, digital) 109 + const p1 = (t ^ (t >> 8) ^ (t >> 9)) & 255; 110 + 111 + // Pattern 2: Melodic (pitched, harmonic) 112 + const harmonic = Math.max(1, Math.floor(freqScale * 2)); 113 + const p2 = ((t * harmonic) & (t >> 5) | (t >> 4)) & 255; 114 + 115 + // Mix patterns based on time 116 + const mixPhase = (time * 0.1 + freqScale * 0.5) % 2; 117 + let finalPattern; 118 + if (mixPhase < 1) { 119 + const blend = mixPhase; 120 + finalPattern = p1 * (1 - blend) + p2 * blend; 121 + } else { 122 + const blend = mixPhase - 1; 123 + finalPattern = p2 * (1 - blend) + p1 * blend; 124 + } 125 + 126 + // Convert to audio range 127 + let sample = (finalPattern / 127.5) - 1; 128 + sample *= 0.6; // Volume scaling 129 + samples.push(sample); 130 + } 131 + return samples; 132 + } 133 + }; 134 + 135 + function clamp(value, low, high) { 136 + return Math.max(low, Math.min(high, value)); 137 + } 138 + 139 + // Spatial mapping 140 + function mapXToFrequency(x, width) { 141 + const nx = clamp(x / width, 0, 1); 142 + const minHz = 55; 143 + const maxHz = 880; 144 + return minHz * Math.pow(maxHz / minHz, nx); 145 + } 146 + 147 + function mapYToPitch(y, height) { 148 + const ny = clamp(y / height, 0, 1); 149 + return Math.pow(2, (0.5 - ny) * 2); // 0.5x to 2x 150 + } 151 + 152 + function deriveFrequency(x, y, screenWidth, screenHeight) { 153 + const base = mapXToFrequency(x, screenWidth) * mapYToPitch(y, screenHeight); 154 + return clamp(base, 40, 2000); 155 + } 156 + 157 + // Voice management 158 + function createVoice({ sound, frequency, volume = 0.5 }) { 159 + return sound.synth({ 160 + type: "custom", 161 + tone: frequency, 162 + duration: "🔁", 163 + volume, 164 + generator: generator.bytebeat 165 + }); 166 + } 167 + 168 + function startTouchVoice({ pointerIndex, x, y, screenWidth, screenHeight, sound }) { 169 + const key = `touch-${pointerIndex}`; 170 + if (touchVoices.has(key)) return; 171 + 172 + const frequency = deriveFrequency(x, y, screenWidth, screenHeight); 173 + const voice = createVoice({ sound, frequency, volume: 0.5 }); 174 + 175 + touchVoices.set(key, { sound: voice, x, y, frequency }); 176 + rebalanceVolumes(); 177 + } 178 + 179 + function updateTouchVoice({ pointerIndex, x, y, screenWidth, screenHeight }) { 180 + const key = `touch-${pointerIndex}`; 181 + const voice = touchVoices.get(key); 182 + if (!voice) return; 183 + 184 + voice.x = x; 185 + voice.y = y; 186 + 187 + const targetFrequency = deriveFrequency(x, y, screenWidth, screenHeight); 188 + if (Math.abs(targetFrequency - voice.frequency) > 0.5) { 189 + voice.frequency += (targetFrequency - voice.frequency) * 0.15; 190 + voice.sound?.update?.({ tone: voice.frequency }); 191 + } 192 + } 193 + 194 + function stopTouchVoice(pointerIndex, fade = 0.1) { 195 + const key = `touch-${pointerIndex}`; 196 + const voice = touchVoices.get(key); 197 + if (!voice) return; 198 + voice.sound?.kill(fade); 199 + touchVoices.delete(key); 200 + rebalanceVolumes(); 201 + } 202 + 203 + function stopAllVoices(fade = 0.1) { 204 + for (const voice of touchVoices.values()) { 205 + voice.sound?.kill(fade); 206 + } 207 + touchVoices.clear(); 208 + } 209 + 210 + function rebalanceVolumes() { 211 + const count = touchVoices.size; 212 + if (count <= 0) return; 213 + const baseVolume = clamp(0.5 / Math.sqrt(count), 0.15, 0.4); 214 + for (const voice of touchVoices.values()) { 215 + voice.sound?.update?.({ volume: baseVolume }); 216 + } 217 + } 218 + 219 + // Hold sequence: auto-voice generation 220 + function spawnHoldVoice(screenWidth, screenHeight, sound) { 221 + const voiceId = holdSequence.nextVoiceId++; 222 + 223 + // Orbital position 224 + const angle = (performance.now() * 0.0001) + Math.random() * Math.PI * 2; 225 + const radius = 0.35; 226 + const x = (Math.cos(angle) * radius + 0.5) * screenWidth; 227 + const y = (Math.sin(angle) * radius + 0.5) * screenHeight; 228 + 229 + // Duration with randomness (from params) 230 + const duration = params.voiceDuration + (Math.random() - 0.5) * (params.voiceDuration * 0.5); 231 + 232 + const hold = { 233 + voiceId, 234 + x, 235 + y, 236 + startTime: performance.now(), 237 + duration, 238 + orbitPhase: angle, 239 + orbitSpeed: params.orbitSpeed + Math.random() * (params.orbitSpeed * 0.5) 240 + }; 241 + 242 + startTouchVoice({ 243 + pointerIndex: voiceId, 244 + x: Math.round(x), 245 + y: Math.round(y), 246 + screenWidth, 247 + screenHeight, 248 + sound 249 + }); 250 + 251 + holdSequence.activeHolds.push(hold); 252 + holdSequence.lastSpawnTime = performance.now(); 253 + } 254 + 255 + function updateHoldVoices(screenWidth, screenHeight, sound) { 256 + if (!holdSequence.enabled) return; 257 + 258 + const now = performance.now(); 259 + 260 + // Spawn new voice if needed (use params) 261 + if (now - holdSequence.lastSpawnTime > params.spawnInterval && holdSequence.activeHolds.length < params.maxVoices) { 262 + spawnHoldVoice(screenWidth, screenHeight, sound); 263 + } 264 + 265 + // Update existing holds 266 + for (let i = holdSequence.activeHolds.length - 1; i >= 0; i--) { 267 + const hold = holdSequence.activeHolds[i]; 268 + const elapsed = now - hold.startTime; 269 + 270 + if (elapsed > hold.duration) { 271 + stopTouchVoice(hold.voiceId, 0.1); 272 + holdSequence.activeHolds.splice(i, 1); 273 + continue; 274 + } 275 + 276 + // Orbital movement 277 + hold.orbitPhase += hold.orbitSpeed; 278 + const x = (Math.cos(hold.orbitPhase) * 0.3 + 0.5) * screenWidth; 279 + const y = (Math.sin(hold.orbitPhase) * 0.3 + 0.5) * screenHeight; 280 + 281 + updateTouchVoice({ 282 + pointerIndex: hold.voiceId, 283 + x: Math.round(x), 284 + y: Math.round(y), 285 + screenWidth, 286 + screenHeight 287 + }); 288 + } 289 + } 290 + 291 + function toggleHoldSequence(screenWidth, screenHeight, sound) { 292 + if (holdSequence.enabled) { 293 + for (const hold of holdSequence.activeHolds) { 294 + stopTouchVoice(hold.voiceId, 0.08); 295 + } 296 + holdSequence.activeHolds = []; 297 + holdSequence.enabled = false; 298 + } else { 299 + holdSequence.enabled = true; 300 + holdSequence.nextVoiceId = 100; 301 + spawnHoldVoice(screenWidth, screenHeight, sound); 302 + } 303 + } 304 + 305 + // Slider helpers 306 + function drawSliders({ ink, write, screen }) { 307 + const sliderAreaHeight = sliders.length * 20 + 10; 308 + const sliderY = screen.height - sliderAreaHeight; 309 + 310 + // Draw slider area background 311 + ink(8, 10, 16); 312 + for (let y = sliderY; y < screen.height; y++) { 313 + for (let x = 0; x < screen.width; x++) { 314 + // Just set background by clearing that area when we draw 315 + } 316 + } 317 + 318 + // Position sliders 319 + const labelWidth = 55; 320 + const sliderWidth = screen.width - labelWidth - 15; 321 + 322 + sliders.forEach((slider, i) => { 323 + slider.y = sliderY + i * 20 + 5; 324 + slider.x = labelWidth; 325 + slider.width = sliderWidth; 326 + slider.height = 14; 327 + 328 + // Draw label 329 + ink(140, 160, 190); 330 + write(slider.label, { x: 5, y: slider.y }, undefined, undefined, false, "MatrixChunky8"); 331 + 332 + // Draw slider background 333 + ink(30, 40, 60); 334 + for (let sx = slider.x; sx < slider.x + slider.width; sx++) { 335 + write("_", { x: sx, y: slider.y }, undefined, undefined, false, "MatrixChunky8"); 336 + } 337 + 338 + // Draw slider handle 339 + const normalizedValue = (params[slider.key] - slider.min) / (slider.max - slider.min); 340 + const handleX = slider.x + Math.floor(normalizedValue * slider.width); 341 + 342 + ink(100, 180, 220); 343 + write("■", { x: handleX, y: slider.y }, undefined, undefined, false, "MatrixChunky8"); 344 + 345 + // Draw value 346 + ink(180, 200, 230); 347 + let displayValue = params[slider.key]; 348 + if (slider.step < 1) { 349 + displayValue = displayValue.toFixed(5); 350 + } else { 351 + displayValue = Math.round(displayValue); 352 + } 353 + write(`${displayValue}`, { x: slider.x + slider.width + 5, y: slider.y }, undefined, undefined, false, "MatrixChunky8"); 354 + }); 355 + } 356 + 357 + function checkSliderClick(x, y) { 358 + for (let i = 0; i < sliders.length; i++) { 359 + const slider = sliders[i]; 360 + if (y >= slider.y && y < slider.y + slider.height && x >= slider.x && x < slider.x + slider.width) { 361 + return i; 362 + } 363 + } 364 + return -1; 365 + } 366 + 367 + function updateSlider(sliderIndex, x) { 368 + if (sliderIndex < 0 || sliderIndex >= sliders.length) return; 369 + const slider = sliders[sliderIndex]; 370 + const relativeX = clamp(x - slider.x, 0, slider.width); 371 + const normalizedValue = relativeX / slider.width; 372 + const newValue = slider.min + normalizedValue * (slider.max - slider.min); 373 + params[slider.key] = newValue; 374 + } 375 + 376 + // Rendering 377 + function paint({ wipe, ink, write, screen, box }) { 378 + const voiceCount = touchVoices.size; 379 + 380 + // Clear screen 381 + wipe(10, 12, 18); 382 + 383 + // Draw simple grid (frequency reference) 384 + ink(40, 50, 70); 385 + for (let x = 0; x < screen.width; x += Math.floor(screen.width / 8)) { 386 + for (let y = 0; y < screen.height - 110; y += Math.floor(screen.height / 8)) { 387 + write("·", { x, y }, undefined, undefined, false, "MatrixChunky8"); 388 + } 389 + } 390 + 391 + // Draw frequency labels 392 + ink(180, 200, 230); 393 + write(`${Math.round(mapXToFrequency(0, screen.width))}Hz`, { x: 2, y: 2 }, undefined, undefined, false, "MatrixChunky8"); 394 + write(`${Math.round(mapXToFrequency(screen.width, screen.width))}Hz`, { x: screen.width - 35, y: 2 }, undefined, undefined, false, "MatrixChunky8"); 395 + 396 + // Draw status 397 + ink(220, 240, 255); 398 + const status = holdSequence.enabled ? "HOLD: ON" : "HOLD: OFF"; 399 + write(status, { x: 2, y: screen.height - 130 }, undefined, undefined, false, "MatrixChunky8"); 400 + write(`Voices: ${voiceCount}`, { x: screen.width - 50, y: screen.height - 130 }, undefined, undefined, false, "MatrixChunky8"); 401 + 402 + // Draw voice positions 403 + ink(100, 180, 220, 100); 404 + for (const [key, voice] of touchVoices.entries()) { 405 + const x = Math.round(voice.x); 406 + const y = Math.round(voice.y); 407 + 408 + // Draw circle 409 + ink(150, 200, 255, 150); 410 + write("●", { x, y }, undefined, undefined, false, "MatrixChunky8"); 411 + 412 + // Draw frequency label 413 + ink(200, 220, 255); 414 + write(`${Math.round(voice.frequency)}`, { x: x + 3, y: y - 2 }, undefined, undefined, false, "MatrixChunky8"); 415 + } 416 + 417 + // Help text 418 + if (voiceCount === 0) { 419 + ink(180, 200, 230); 420 + write("Touch to play | Press H for hold", { x: 2, y: Math.floor(screen.height / 2) - 50 }, undefined, undefined, false, "MatrixChunky8"); 421 + } 422 + 423 + // Draw sliders 424 + drawSliders({ ink, write, screen }); 425 + } 426 + 427 + // Input handling 428 + function act({ event: e, sound, screen, pens }) { 429 + // Keyboard: H toggles hold sequence 430 + if (e.is("keyboard:down:h")) { 431 + toggleHoldSequence(screen.width, screen.height, sound); 432 + } 433 + 434 + // Slider interaction (check for slider touches first) 435 + for (let i = 1; i <= maxTouchPointers; i++) { 436 + if (e.is(`touch:${i}`)) { 437 + const pointer = pens?.(i); 438 + const x = pointer?.x ?? e.x; 439 + const y = pointer?.y ?? e.y; 440 + 441 + const sliderIndex = checkSliderClick(x, y); 442 + if (sliderIndex >= 0) { 443 + sliders[sliderIndex].dragging = i; 444 + updateSlider(sliderIndex, x); 445 + return; // Don't create voice if touching slider 446 + } 447 + } 448 + 449 + if (e.is(`draw:${i}`)) { 450 + const pointer = pens?.(i); 451 + const x = pointer?.x ?? e.x; 452 + const y = pointer?.y ?? e.y; 453 + 454 + // Check if this was a slider drag 455 + if (sliders.some(s => s.dragging === i)) { 456 + const sliderIndex = sliders.findIndex(s => s.dragging === i); 457 + updateSlider(sliderIndex, x); 458 + return; 459 + } 460 + 461 + // Otherwise update touch voice 462 + updateTouchVoice({ pointerIndex: i, x, y, screenWidth: screen.width, screenHeight: screen.height }); 463 + } 464 + 465 + if (e.is(`lift:${i}`)) { 466 + // Check if this was a slider 467 + const sliderIndex = sliders.findIndex(s => s.dragging === i); 468 + if (sliderIndex >= 0) { 469 + sliders[sliderIndex].dragging = false; 470 + return; 471 + } 472 + 473 + // Otherwise stop touch voice 474 + stopTouchVoice(i, 0.08); 475 + } 476 + } 477 + 478 + // Touch/mouse input (only if not on slider) 479 + for (let i = 1; i <= maxTouchPointers; i++) { 480 + if (e.is(`touch:${i}`)) { 481 + const pointer = pens?.(i); 482 + const x = pointer?.x ?? e.x; 483 + const y = pointer?.y ?? e.y; 484 + if (checkSliderClick(x, y) < 0) { 485 + startTouchVoice({ pointerIndex: i, x, y, screenWidth: screen.width, screenHeight: screen.height, sound }); 486 + } 487 + } 488 + } 489 + 490 + // Fallback for single-touch environments 491 + if (e.is("touch")) { 492 + if (checkSliderClick(e.x, e.y) < 0) { 493 + startTouchVoice({ pointerIndex: 1, x: e.x, y: e.y, screenWidth: screen.width, screenHeight: screen.height, sound }); 494 + } 495 + } 496 + if (e.is("draw")) { 497 + updateTouchVoice({ pointerIndex: 1, x: e.x, y: e.y, screenWidth: screen.width, screenHeight: screen.height }); 498 + } 499 + if (e.is("lift")) { 500 + stopTouchVoice(1, 0.08); 501 + } 502 + } 503 + 504 + // Per-frame updates 505 + function sim({ sound, screen }) { 506 + sound.speaker?.poll(); 507 + updateHoldVoices(screen.width, screen.height, sound); 508 + } 509 + 510 + // Initialization 511 + function boot({ hud }) { 512 + // Runs once at startup 513 + } 514 + 515 + // Cleanup 516 + function leave() { 517 + stopAllVoices(0.05); 518 + }