# Seashells: Conceptual Model & Variation Framework

This document breaks down the architecture of **seashells.mjs** into conceptual components, so you can understand, remix, and create variations.

---

## The Core Stack (4 Layers)

```
┌─────────────────────────────────┐
│ SEQUENCING LAYER                │  How voices spawn & interact
│ (Hold system, voice lifecycle)  │
├─────────────────────────────────┤
│ SYNTHESIS LAYER                 │  How audio is generated
│ (5 bytebeat patterns, blending) │
├─────────────────────────────────┤
│ FEEDBACK LOOP LAYER             │  Audio ↔ Visual feedback
│ (Pixel sampling → parameters)   │
├─────────────────────────────────┤
│ SPATIAL MAPPING LAYER           │  Touch → Frequency/Pitch
│ (X/Y to Hz, modulation axes)    │
└─────────────────────────────────┘
```

Each layer is **independently modifiable**. You can swap out any component without breaking the others.

---

## Layer 1: Spatial Mapping (Touch/Position → Audio Parameters)

### Current Implementation
```
X-axis: screen position → base frequency (80–1600 Hz, logarithmic)
Y-axis: screen position → pitch multiplier (0.5x–2x, linear)
```

**Functions involved:**
- `mapXToFrequency(x, width)` - Convert X pixel to frequency
- `mapYToPitchFactor(y, height)` - Convert Y pixel to pitch multiplier
- `deriveVoiceFrequency()` - Combine both into final frequency
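
The mapping above can be sketched in code. This is a hedged reconstruction from the description, not the verbatim seashells.mjs source: the 80–1600 Hz range and 0.5x–2x factor come from the text, while the clamping and the way the two axes combine are assumptions.

```javascript
// Sketch of the spatial mapping described above (illustrative, not the
// actual seashells.mjs implementation).
const MIN_HZ = 80;
const MAX_HZ = 1600;

function mapXToFrequency(x, width) {
  const norm = Math.min(Math.max(x / width, 0), 1); // clamp to 0..1
  return MIN_HZ * Math.pow(MAX_HZ / MIN_HZ, norm);  // logarithmic sweep
}

function mapYToPitchFactor(y, height) {
  const norm = Math.min(Math.max(y / height, 0), 1);
  return 0.5 + norm * 1.5; // linear 0.5x–2x
}

function deriveVoiceFrequency(x, y, width, height) {
  return mapXToFrequency(x, width) * mapYToPitchFactor(y, height);
}
```

With this shape, the top-left corner gives 80 × 0.5 = 40 Hz and the bottom-right corner gives 1600 × 2 = 3200 Hz.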

### Variations You Could Try

**1. Polar Coordinate Mapping**
```javascript
// Instead of cartesian X/Y: distance from center → frequency,
// angle → timbre.
const angle = Math.atan2(y - centerY, x - centerX);
const distance = Math.hypot(x - centerX, y - centerY);
const frequency = minHz * Math.pow(maxHz / minHz, distance / maxRadius);
const timbre = (angle + Math.PI) / (2 * Math.PI); // normalize angle to 0–1
```

**2. Vertical Strip Mapping** (like a piano keyboard)
```javascript
// Ignore X; use only Y for frequency (5 octaves above 55 Hz)
const frequency = 55 * Math.pow(2, (y / screenHeight) * 5);
```

**3. Grid Quantization** (musical scale constraints)
```javascript
const notes = [55, 62, 69, 82, 110, 123, 147, 165, 196, 220]; // pitch set rooted on A1 (55 Hz)
const gridX = Math.round((x / screenWidth) * (notes.length - 1));
const octaveY = Math.round((y / screenHeight) * 4);
const frequency = notes[gridX] * Math.pow(2, octaveY);
```

**4. Feedback-Influenced Mapping** (space changes based on audio)
```javascript
const baseFreq = mapXToFrequency(x, width);
const pitchMult = mapYToPitchFactor(y, height);
// Modulate by pixel feedback
const feedbackScale = 0.8 + sharedPixelFeedback.intensity * 0.4;
return baseFreq * pitchMult * feedbackScale;
```

---

## Layer 2: Synthesis (Audio Generation)

### Current Architecture: 5 Blending Patterns

The piece uses **5 independent bytebeat generators** that morph through each other:

```javascript
pattern1 = (t ^ (t >> (8 + shiftMod1)) ^ (t >> (9 + shiftMod2))) & 255
pattern2 = ((t * harmonic) & (t >> (5 + bitMod1)) | (t >> (4 + bitMod2))) & 255
pattern3 = (t | (t >> rhythmMod | t >> 7)) * (t & (t >> 11 | t >> complexMod)) & 255
pattern4 = (t & (t >> (5 + sierpinskiMod) | t >> 8)) & 255
pattern5 = ((t * melodyScale) ^ (t >> 6)) & (t >> 8) & 255
```

**Blending mechanism:**
- A time-based phase progresses through 5 states (0→1→2→3→4→0)
- Between states, linear interpolation smooths transitions
- Phase speed and intensity are controlled by feedback
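
Those three bullets amount to a wrap-around crossfade. A minimal sketch (the function name and `mixSpeed` parameter are illustrative, not the actual identifiers in seashells.mjs):

```javascript
// The phase walks through the pattern values and linearly interpolates
// between neighbors, wrapping 4 → 0 at the end of the cycle.
function blendPatterns(patternValues, time, mixSpeed) {
  const phase = (time * mixSpeed) % patternValues.length;
  const i = Math.floor(phase);
  const j = (i + 1) % patternValues.length; // wrap back to the first pattern
  const blend = phase - i;                  // fractional part, 0..1
  return patternValues[i] * (1 - blend) + patternValues[j] * blend;
}
```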

### Understanding Each Pattern

| Pattern | Type | Character | Key Insight |
|---------|------|-----------|-------------|
| **Pattern 1** | XOR Cascade | Digital, crisp, glitchy | Bit flips create harsh transitions |
| **Pattern 2** | Melodic | Pitched, harmonic | `t * harmonic` creates repeating cycles |
| **Pattern 3** | Rhythmic | Complex polyrhythm | Multiplication creates interference patterns |
| **Pattern 4** | Fractal | Sierpinski-like, algorithmic | A simple AND of shifted copies creates complexity |
| **Pattern 5** | Frequency-Responsive | Pitch-sensitive melodic | Scale changes with input frequency |
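
To study any one of these in isolation, you can render it into a byte buffer and plot or play it. A small illustrative helper (not part of seashells.mjs), shown here with Pattern 1 and its shift modulations fixed at zero:

```javascript
// Render a bytebeat formula into a Uint8Array; t advances one step per sample.
function renderPattern(pattern, length) {
  const out = new Uint8Array(length);
  for (let t = 0; t < length; t++) out[t] = pattern(t) & 255;
  return out;
}

// Pattern 1 with shiftMod1 = shiftMod2 = 0:
const buf = renderPattern((t) => (t ^ (t >> 8) ^ (t >> 9)) & 255, 1024);
```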

### Variation: Add Your Own Pattern

**Step 1: Design a pattern**
```javascript
const pattern6 = (t * t) & (t >> (7 + feedback.complexity)) & 255;
```

**Step 2: Integrate into the blending loop**
```javascript
let mixPhase = (time * 0.08 + freqScale * 0.5) % 6; // changed from 5 to 6
const blend = mixPhase % 1; // fractional part drives the crossfade
if (mixPhase < 1) {
  finalPattern = pattern1 * (1 - blend) + pattern2 * blend;
} else if (mixPhase < 2) {
  finalPattern = pattern2 * (1 - blend) + pattern3 * blend;
} // ... add more conditions ...
else if (mixPhase < 5) {
  finalPattern = pattern5 * (1 - blend) + pattern6 * blend;
}
```

With six patterns, remember to add a final branch (`mixPhase < 6`) that blends pattern6 back into pattern1, closing the loop.

### Pattern Design Ideas

**Additive (Smooth)**
```javascript
const patternSmooth = ((t >> 1) + (t >> 3) + (t >> 5)) & 255;
```

**Multiplicative (Complex)**
```javascript
const patternComplex = (t * (t >> 4) * (t >> 8)) & 255;
```

**Modulo-based (Rhythmic)**
```javascript
const patternModulo = ((t % 128) + ((t >> 8) % 128)) & 255;
```

**Conditional (Structured)**
```javascript
const patternConditional = (t & 128) ? (t << 1) & 255 : (t >> 1) & 255;
```

---

## Layer 3: Feedback Loop (Visual → Audio Influence)

### Current System: Pixel Sampling → Parameter Modulation

**Sampling strategy:** 12-20 points, strategically distributed
- 4 corners (detect extreme brightness)
- 4 edge midpoints (detect edge activity)
- 4 diagonal sweeps (detect diagonal patterns)
- 4+ orbital scans (detect center/rotation)

**Conversion:**
```
RED channel   → Harmonic scaling, time modulation
GREEN channel → Rhythm scaling, mix speed
BLUE channel  → Pattern bias, shift modulation
Brightness    → Intensity, chaos injection
Contrast      → Bit operations
Variance      → Chaos level
```

### Feedback Parameters Affected

```
timeModulation:   How the time variable shifts (larger jumps = more chaotic)
shiftMod1/2:      XOR shift amounts (bigger shifts = less repetitive)
harmonicScale:    How many cycles the melody completes
rhythmScale:      Speed of rhythmic modulation
bitMod1/2:        Bit operation amounts (chaos injection)
mixSpeed:         How fast patterns cycle through
blendIntensity:   How smooth transitions are
chaosLevel:       XOR noise injection probability
colorMod (r,g,b): Color channel multipliers (affects visuals)
```
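
As a sketch of how the channel table above might turn into parameter derivation (the ranges and formulas here are assumptions for illustration, not the ones used in seashells.mjs):

```javascript
// Average the sampled pixels, then map channels to a few of the
// feedback parameters listed above. Ranges are illustrative.
function deriveFeedback(samples) {
  const n = samples.length || 1;
  let r = 0, g = 0, b = 0;
  for (const s of samples) { r += s.r / n; g += s.g / n; b += s.b / n; }
  return {
    harmonicScale: 1 + (r / 255) * 3,     // RED → harmonic scaling
    mixSpeed: 0.5 + (g / 255) * 2,        // GREEN → mix speed
    shiftMod1: Math.floor((b / 255) * 4), // BLUE → shift modulation
    chaosLevel: (r + g + b) / (3 * 255),  // overall brightness → chaos
  };
}
```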

### Variation: Change What Pixels Affect

**Current: RGB brightness → audio parameters**

**Alternative 1: Directional Gradient**
```javascript
// Sample top half vs. bottom half
const topSamples = sampleRegion(0, 0, width, height / 2);
const bottomSamples = sampleRegion(0, height / 2, width, height);
const topBrightness = avgBrightness(topSamples);
const bottomBrightness = avgBrightness(bottomSamples);

feedback.mixSpeed = 0.5 + (topBrightness / 255) * 2;
feedback.chaosLevel = bottomBrightness / 255;
```

**Alternative 2: Edge Detection**
```javascript
// High-contrast areas → more complexity
const contrast = maxBrightness - minBrightness;
feedback.complexity = contrast / 255;
```

**Alternative 3: Color-Specific Regions**
```javascript
// Sample only red-dominant pixels
const redRegions = samples.filter((s) => s.r > s.g && s.r > s.b);
feedback.intensity = redRegions.length / samples.length;
```

### Variation: Change Visual Effects from Audio

The piece also **paints bytebeat patterns** back to the screen:

**Current:**
```javascript
// For each pixel column:
const bytebeat = pattern(...);
const y = (bytebeat / 255) * screenHeight;
screen.pixels[Math.round(y) * width + x] = color;
```

**Alternative: Oscilloscope Mode**
```javascript
// Draw the audio waveform like an oscilloscope
const samples = generator.bytebeat({ frequency, sampleRate, time, samplesNeeded: 512 });
for (let i = 0; i < samples.length; i++) {
  const y = (samples[i] * 0.5 + 0.5) * screenHeight; // map -1..1 to screen
  const x = (i / samples.length) * screenWidth;
  screen.pixels[Math.round(y) * width + Math.round(x)] = 255;
}
```

**Alternative: Spectrogram Mode**
```javascript
// Show frequency content over time (assumes an fft() helper)
const frequencies = fft(bytebeat_output);
for (let freq = 0; freq < frequencies.length; freq++) {
  const brightness = frequencies[freq];
  const y = (freq / frequencies.length) * screenHeight;
  screen.pixels[Math.round(y) * width + sweepX] = brightness;
}
```

---

## Layer 4: Sequencing (Voice Lifecycle & Hold Mechanism)

### Current Architecture: Hold Sequence

**States:**
- **Off** - No automatic voices, only touch interaction
- **On** - Periodically spawns voices at orbital positions, with 5-13 second durations

**Parameters:**
```
spawnInterval:      2000ms (spawn every 2 seconds)
maxConcurrentHolds: 6 (never more than 6 at once)
baseDuration:       5000-13000ms (influenced by chaos feedback)
orbitSpeed:         0.0003-0.0006 rad/frame (varies per voice)
wobble:             0.15-0.35 (influenced by memory)
```

**Spawning logic:**
```
Position = orbital path (cosine × radius, sine × radius)
  Radius influenced by feedback.density
  Phase influenced by time + randomness
Duration = base + (1 − chaos) bonus + memory bonus
  Less chaos → longer holds
  High memory → longer holds
Movement = orbital drift + wobble
  Each voice has an independent orbital speed
  Memory makes movements more pronounced
```
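
The position rule above can be sketched as follows (the function signature and `feedback.density` field are illustrative; the real code derives its radius and phase differently):

```javascript
// Orbital spawn position: feedback.density widens the orbit, and a
// random phase offset scatters voices around the circle.
function orbitalSpawnPosition(time, feedback, cx, cy, maxRadius) {
  const radius = maxRadius * (0.3 + 0.7 * feedback.density);
  const phase = time * 0.0005 + Math.random() * Math.PI * 2;
  return {
    x: cx + Math.cos(phase) * radius,
    y: cy + Math.sin(phase) * radius,
  };
}
```

Whatever the random phase, the voice always lands exactly `radius` pixels from the center, so density directly controls how far out the orbit sits.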

### Variation: Different Sequencing Strategies

**1. Golden-Ratio Interval Spawning**
```javascript
// Spawn voices at golden-ratio-spaced intervals
// (powers of φ grow like Fibonacci numbers)
const goldenRatio = 1.618;
const intervals = [];
for (let i = 0; i < 10; i++) {
  intervals.push(Math.floor(1000 * Math.pow(goldenRatio, i)));
}
```

**2. Grid-Based Spawning**
```javascript
// Spawn voices at fixed grid positions, one per cell
for (let gx = 0; gx < gridWidth; gx++) {
  for (let gy = 0; gy < gridHeight; gy++) {
    const x = ((gx + 0.5) / gridWidth) * screenWidth;
    const y = ((gy + 0.5) / gridHeight) * screenHeight;
    spawnVoiceAt(x, y, sound);
  }
}
```

**3. Random Walk Sequencing**
```javascript
// Each voice position is a random walk from the previous one
const walk = { x: screenWidth * 0.5, y: screenHeight * 0.5 };
for (let i = 0; i < voiceCount; i++) {
  walk.x += (Math.random() - 0.5) * 200;
  walk.y += (Math.random() - 0.5) * 200;
  walk.x = clamp(walk.x, 0, screenWidth);
  walk.y = clamp(walk.y, 0, screenHeight);
  spawnVoiceAt(walk.x, walk.y, sound);
}
```

**4. Brightness-Following Sequencing**
```javascript
// Spawn voices at the brightest regions of the screen
const samples = samplePixels(screen, 20);
const sorted = samples.sort((a, b) => b.brightness - a.brightness);
sorted.slice(0, 5).forEach((sample) => {
  spawnVoiceAt(sample.x, sample.y, sound);
});
```

**5. Phase-Locking to Audio**
```javascript
// Spawn new voices synchronized to the audio beat
const audioEnergy = measureAudioEnergy(sound);
if (audioEnergy > threshold && now - lastSpawn > spawnDelay) {
  spawnHoldVoice(screenWidth, screenHeight, sound);
  lastSpawn = now;
}
```

---

## Remix Guide: Creating Variations

### Quick Swaps (30 minutes)

**1. Change the color palette**
- Modify `touchOverlayPalette` (lines 14-23)
- Modify color generation in `paint()` (lines 558-560)

**2. Change spatial mapping**
- Replace `mapXToFrequency()` and `mapYToPitchFactor()`
- E.g., use only the vertical axis, or add a diagonal

**3. Adjust hold sequence timing**
- Change `spawnInterval` (currently 2000ms)
- Change the hold duration calculation (currently 5-13 seconds)
- Change max concurrent holds (currently 6)

**4. Modify feedback sensitivity**
- Increase/decrease pixel sampling points
- Change RGB→parameter mappings
- Adjust decay rates in `sim()`

---

### Medium Swaps (1-2 hours)

**1. Add a 6th bytebeat pattern**
- Design a new pattern formula
- Insert it into the blending loop (change mod 5 to mod 6)
- Adjust the blend transitions

**2. Implement alternative sequencing**
- Comment out `updateHoldVoices()`
- Write new spawning logic
- Re-export or call it from `sim()`

**3. Change visual rendering**
- Modify pixel drawing (lines 509-598)
- Swap from vertical columns to orbits/grids/waveforms
- Add new visual effects (trails, particles, etc.)

**4. Implement a new feedback strategy**
- Rewrite `samplePixelFeedback()`
- Change what gets sampled (edges, variance, specific colors)
- Change RGB→parameter mappings

---

### Deep Remixes (3-6 hours)

**1. Multi-Layer Synthesis**
- Have different hold voices use different pattern sets
- E.g., lower voices use patterns 1-2, higher voices use 4-5

**2. Envelope Shaping**
- Add ADSR envelopes to voices
- Make volume/timbre evolve over the hold duration

**3. Harmonic Relationships**
- Make voices respond to each other
- E.g., spawn new voices at harmonics of existing voices

**4. Spatial Audio Evolution**
- Make voices' frequencies change as they move through space
- Create "force fields" where certain regions repel/attract

**5. Generative Visual System**
- Decouple visuals from audio synthesis
- Create independent generative visual patterns
- Use audio to modulate visual parameters

---

## Code Landmarks for Modification

### To understand a layer, read these functions:

**Spatial Mapping:**
- `mapXToFrequency()` (line 190)
- `mapYToPitchFactor()` (line 198)
- `deriveVoiceFrequency()` (line 205)

**Synthesis:**
- `generator.bytebeat()` (line 61)
- Pattern definitions (lines 82-97)
- Pattern blending (lines 100-126)

**Feedback:**
- `samplePixelFeedback()` (line 371)
- Sampling strategy (lines 379-424)
- Parameter derivation (lines 440-486)

**Sequencing:**
- `spawnHoldVoice()` (line 370)
- `updateHoldVoices()` (line 406)
- `toggleHoldSequence()` (line 457)
- Hold state initialization (line 40)

**Visuals:**
- `paint()` (line 525)
- Pixel rendering (lines 551-612)
- Color computation (lines 558-560)

---

## Conceptual Symmetries

Notice these patterns:

1. **Feedback is circular**: Pixels → Audio → Pixels
2. **Time operates at multiple scales**:
   - Sample-level: Bytebeat generation (44.1 kHz)
   - Voice-level: Hold durations (seconds)
   - System-level: State decay (10+ seconds)
3. **Randomness is constrained**: Random values are modulated by feedback
4. **Movement is orbital**: Scanning, voice drift, and visual sweeps all use trig functions
5. **Colors derive from bits**: RGB computed from bytebeat pattern XORs

These symmetries are **features** you can exploit in variations:
- Use the same orbital math for voices and pixel sampling
- Use the same bytebeat generators for audio and visuals
- Use the same feedback parameters to shape multiple layers

---

## Testing Your Variations

When you remix, test these:

1. **With no touches** (hold sequence only)
   - Does it sustain audio continuously?
   - Are voices distinguishable, or do they blend?
   - Does the visual feedback remain varied?

2. **With touches** (interactive)
   - Do touch voices feel responsive?
   - Does the hold sequence coexist peacefully?
   - Are there frequency collisions (too many same-pitch voices)?

3. **After 5 minutes idle**
   - Does it settle to silence or continue?
   - Do visuals accumulate wisely or become noise?

4. **After 90 minutes**
   - Would you listen to this as a tape?
   - Is there enough emergence/surprise?
   - Does it feel like a composition or just random?

---

## Example Variations to Try

### Variation A: "Comb Filter Seashells"
- Keep synthesis/feedback as-is
- Change `spawnInterval` to 500ms (faster)
- Spawn voices at fixed frequency ratios (1x, 1.5x, 2x, 3x of a fundamental)
- Result: Harmonic relationships, bell-like tones
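
A minimal sketch of the fixed-ratio idea (names are illustrative):

```javascript
// Pick each new voice's frequency from fixed harmonic ratios of a fundamental.
const RATIOS = [1, 1.5, 2, 3];

function harmonicFrequency(fundamental, voiceIndex) {
  return fundamental * RATIOS[voiceIndex % RATIOS.length];
}
```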

### Variation B: "Noise Garden"
- Keep synthesis/feedback as-is
- Add 2-3 new chaotic bytebeat patterns
- Increase `chaosLevel` sensitivity 5x
- Result: More glitchy, algorithmic harshness

### Variation C: "Visual Instruments"
- Keep synthesis as-is
- Change the visual rendering to an oscilloscope
- Scale the oscilloscope based on voice frequency
- High voices = small tight spirals, low voices = large loose ones
- Result: Visuals become the primary interface, audio is secondary

### Variation D: "Memory Piece"
- Keep synthesis as-is
- Make the spawn rate depend on accumulated visual memory
- Bright areas → more voices spawn nearby
- Result: Visuals "grow" audio in response

---

## Final Note

The beauty of this piece is that **every layer is independent**. You can:
- Change synthesis without touching sequencing
- Change sequencing without touching visuals
- Change feedback without touching synthesis
- Change mapping without touching anything else

This independence is intentional. It means you can remix safely, testing one change at a time, without breaking the whole system.

Happy remixing!