photon+

A physics-based optical grid sequencer where light beams become generative music through recursive signal propagation.

0. System Intent

This system collapses the boundary between visual and sonic composition. It proves that music can be a consequence of light architecture rather than a parallel medium—that spatial arrangements of optical elements generate temporal sound patterns.

It exists to make space audible.

1. Why This System Exists

What failed before:

  • Music composition tools assume timeline-based sequencing (linear time on X-axis)
  • Visual music systems treat sound as decoration for visuals, not structural consequence
  • Generative music lacks spatial logic—randomness without architectural constraints
  • Light-to-sound mappings are arbitrary (brightness → volume) rather than physics-derived

What tension shaped this design: Architecture coordinates attention through spatial relationships. Music coordinates attention through temporal relationships. But what if space and time aren't separate dimensions? What if you could compose space and have time derive from it?

Fourteen years of live performance taught that stage lighting isn't just aesthetic—it structures how sound is perceived. photon+ reverses this: make light the cause, sound the effect.

What this explicitly does NOT do:

  • Compete with traditional DAWs (Ableton, Logic) for track-based composition
  • Provide pixel-perfect visual rendering (this is a music tool, not a graphics engine)
  • Support collaborative multi-user editing
  • Function offline (requires browser with Web Audio API)
  • Guarantee identical outputs (physics simulation has float imprecision, outputs vary slightly)

2. System Boundary

Inputs:

  • Optical grid layout (emitters, mirrors, sensors placed by user)
  • Light beam parameters (wavelength → pitch, intensity → volume)
  • Mirror orientations (rotation angle determines reflection path)
  • Physics simulation settings (beam speed, decay rate, max bounces)

Transformation:

  • Emitter position + direction → Light beam raycast
  • Beam intersection with mirror → Reflection vector calculation
  • Reflected beam intersection with sensor → Audio trigger event
  • Beam path length → Audio parameter (longer path = lower pitch, more reverb)
  • Beam bounce count → Timbre complexity (more bounces = brighter tone)
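The wavelength → pitch transformation above can be sketched directly. This is a minimal sketch: the 110–880 Hz target range and the exponential curve are assumptions, not fixed by the design.

```ts
// Hypothetical mapping: visible wavelength (400–700 nm) → audible frequency.
// Shorter wavelengths (violet) map to higher pitches.
function mapWavelengthToFrequency(wavelengthNm: number): number {
  // Normalize: 700 nm (red) → 0, 400 nm (violet) → 1
  const t = Math.max(0, Math.min(1, (700 - wavelengthNm) / 300))
  // Exponential curve so equal wavelength steps give equal pitch intervals;
  // spans three octaves, 110 Hz → 880 Hz (assumed range)
  return 110 * Math.pow(2, t * 3)
}
```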

Outputs:

  • Real-time audio synthesis (Web Audio API)
  • Visual beam rendering (Canvas 2D or WebGL)
  • MIDI-compatible note sequences (exportable)
  • Grid state snapshots (save/load presets)
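For the MIDI-compatible export, a note event only needs a time, a pitch, and a velocity. The field names below are assumptions; the frequency → note conversion is the standard MIDI formula (A4 = 440 Hz = note 69).

```ts
// Hypothetical shape of one exported note event (field names are assumptions)
interface NoteEvent {
  timeMs: number   // simulation time of the sensor trigger
  midiNote: number // 0–127
  velocity: number // 0–127, from beam intensity
}

// Standard MIDI pitch mapping: A4 = 440 Hz = note 69
function frequencyToMidi(freq: number): number {
  return Math.round(69 + 12 * Math.log2(freq / 440))
}
```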

External Dependencies:

  • React 19 (UI framework)
  • Web Audio API (audio synthesis)
  • Three.js or Canvas 2D (visual rendering)
  • Vitest (testing framework)

3. Architectural Approach

Core Design Principles:

  1. Physics as Composition: The grid is a score. Light beams are performers. Mirrors are compositional constraints. Sound emerges from spatial logic, not timeline editing.

  2. Recursive Signal Propagation: Beams bounce infinitely (or until decay). Each reflection spawns new audio events. This creates evolving, non-repeating patterns.

  3. Emergent Complexity: Simple rules (emit, reflect, trigger) produce complex musical behaviors. User designs constraints; system generates music.

Chosen Abstractions:

  • Optical Ray Tracing: Borrowed from computer graphics. Light beams are rays; mirrors are reflective surfaces. Standard vector math (reflect across normal).

  • Event-Driven Audio: Beam-sensor collisions trigger note-on events. This is closer to modular synthesis (trigger-based) than piano roll (timeline-based).

  • Grid as State Machine: Each frame, physics ticks, beams propagate, collisions detected, audio events fire. Deterministic (same grid → same output) but emergent (hard to predict exact sound).
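The "reflect across normal" math in the first abstraction is the textbook formula r = d − 2(d·n)n, where n is a unit normal:

```ts
type Vec2 = { x: number; y: number }

// Reflect direction d across unit normal n: r = d − 2(d·n)n
function reflect(d: Vec2, n: Vec2): Vec2 {
  const dot = d.x * n.x + d.y * n.y
  return { x: d.x - 2 * dot * n.x, y: d.y - 2 * dot * n.y }
}
```

For example, a beam traveling down-right `{x: 1, y: -1}` hitting a horizontal mirror (normal `{x: 0, y: 1}`) reflects to up-right `{x: 1, y: 1}`.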

Trade-offs Accepted:

  • CPU Intensity: Ray tracing + audio synthesis is compute-heavy. Can handle ~20 simultaneous beams smoothly; slows beyond that.

  • Determinism vs. Variation: Float imprecision means outputs vary slightly across runs. Embraced as "analog warmth" rather than fought against.

  • No Timeline Scrubbing: There's no playhead. Music exists only in the present moment (real-time simulation). No rewind, no precise editing.

  • Learning Curve: This is not intuitive for musicians trained on DAWs. Spatial thinking required. Steep onboarding.

4. Choreography Layer

This system coordinates four dimensions:

Attention: The visual grid focuses attention on spatial relationships. You see which emitters create which sounds based on beam paths. Attention follows light rays.

Memory: The grid is the memory. Save a grid state, reload it later, get the same music. Spatial arrangement encodes musical structure.

Time: Music exists in "simulation time" (physics ticks at 60fps). Time is not a scrubber you drag—it's a flow you observe. Irreversible, ephemeral.

Interaction: The user is architect, not pianist. You design the performance space (grid layout); the system performs within it. This is indirect composition—you shape constraints, not notes.

5. Technical Stack (Justified)

| Technology | Why This Choice |
| --- | --- |
| React 19 | Latest React with improved rendering for 60fps animation. Component model maps cleanly to grid elements (Emitter, Mirror, Sensor). |
| Web Audio API | Low-level browser audio with precise timing. Needed for sample-accurate triggering when beams hit sensors. No external libraries. |
| Three.js (optional) / Canvas 2D | Three.js for WebGL rendering if performance allows; Canvas 2D fallback for simpler visuals. Flexibility in rendering backend. |
| TypeScript | Type safety for complex vector math (ray–line intersections, reflection calculations). Prevents runtime errors in physics. |
| Vitest | Fast unit testing framework. Critical for testing physics edge cases (parallel mirrors, grazing reflections, etc.). |
| Zustand | Lightweight state management for grid state, beam positions, audio context. No Redux boilerplate. |

6. Artifacts

Architecture Diagram:

```
User places: Emitters, Mirrors, Sensors
    ↓
Physics Engine (60fps tick)
    ├─→ Ray Tracing (beam propagation)
    ├─→ Collision Detection (beam-mirror, beam-sensor)
    └─→ Reflection Math (vector calculations)
         ↓
    ┌────┴────┐
    ↓         ↓
Audio Events  Visual Rendering
(Web Audio)   (Canvas/WebGL)
    ↓         ↓
Speakers      Screen
```

Key Code Excerpts:

```ts
// Ray–line intersection for beam–mirror collision
function checkMirrorCollision(
  beam: Ray,
  mirror: Mirror
): CollisionPoint | null {
  // Ray–line segment intersection math
  const intersection = rayLineIntersect(
    beam.origin,
    beam.direction,
    mirror.start,
    mirror.end
  )

  if (!intersection) return null

  // Calculate reflection vector
  const normal = mirror.getNormal()
  const reflected = reflect(beam.direction, normal)

  return {
    point: intersection,
    reflectedDirection: reflected,
    mirror: mirror,
    // Distance to the hit, accumulated into beam.pathLength downstream
    distance: Math.hypot(
      intersection.x - beam.origin.x,
      intersection.y - beam.origin.y
    ),
  }
}
```
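The helper `rayLineIntersect` is referenced above but not defined; a standard parametric sketch follows. The `EPSILON` tolerance is an assumption, included because near-parallel ("grazing") hits are a known failure mode (see §7).

```ts
type Vec2 = { x: number; y: number }
const EPSILON = 1e-9 // assumed tolerance for near-parallel cases

// Intersect ray (origin + t·dir, t > 0) with segment a→b
function rayLineIntersect(
  origin: Vec2,
  dir: Vec2,
  a: Vec2,
  b: Vec2
): Vec2 | null {
  const sx = b.x - a.x
  const sy = b.y - a.y
  const denom = dir.x * sy - dir.y * sx
  if (Math.abs(denom) < EPSILON) return null // parallel or grazing

  const dx = a.x - origin.x
  const dy = a.y - origin.y
  const t = (dx * sy - dy * sx) / denom // distance along the ray
  const u = (dx * dir.y - dy * dir.x) / denom // 0–1 along the segment
  if (t < EPSILON || u < 0 || u > 1) return null

  return { x: origin.x + t * dir.x, y: origin.y + t * dir.y }
}
```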
```ts
// Trigger audio when beam hits sensor (raw Web Audio API, per §5: no
// external audio libraries). audioCtx is a module-level AudioContext;
// reverbBus is a shared ConvolverNode routed to the destination.
function onBeamSensorCollision(beam: Beam, sensor: Sensor) {
  // Map beam wavelength to pitch (visible spectrum → audible range)
  const frequency = mapWavelengthToFrequency(beam.wavelength)

  // Beam path length → reverb send (longer path = more reverb, capped)
  const reverbMix = Math.min(beam.pathLength / 1000, 0.8)

  const osc = audioCtx.createOscillator()
  const gain = audioCtx.createGain()
  osc.frequency.value = frequency
  gain.gain.value = beam.intensity // beam intensity → note loudness

  osc.connect(gain)
  gain.connect(audioCtx.destination)

  // Send a portion of the dry signal to the shared reverb bus
  const send = audioCtx.createGain()
  send.gain.value = reverbMix
  gain.connect(send)
  send.connect(reverbBus)

  // Trigger note (eighth note ≈ 0.25 s at 120 BPM)
  const now = audioCtx.currentTime
  osc.start(now)
  osc.stop(now + 0.25)

  // Disconnect when finished to avoid leaking synth nodes (see §7)
  osc.onended = () => {
    osc.disconnect()
    gain.disconnect()
    send.disconnect()
  }
}
```
```ts
// Recursive beam propagation
function propagateBeam(beam: Beam, maxBounces: number = 10) {
  if (beam.bounceCount >= maxBounces) return // Terminate recursion

  const collision = findNextCollision(beam)

  if (!collision) return // Beam exits grid

  if (collision.type === 'mirror') {
    // Spawn reflected beam
    const reflected = new Beam({
      origin: collision.point,
      direction: collision.reflectedDirection,
      wavelength: beam.wavelength,
      intensity: beam.intensity * 0.9, // Decay on reflection
      bounceCount: beam.bounceCount + 1,
      pathLength: beam.pathLength + collision.distance,
    })

    propagateBeam(reflected, maxBounces) // Recurse
  } else if (collision.type === 'sensor') {
    // Trigger audio event
    onBeamSensorCollision(beam, collision.sensor)
  }
}
```

Interface Definitions:

```ts
interface Emitter {
  position: { x: number; y: number }
  direction: { x: number; y: number } // Unit vector
  wavelength: number // nm (400-700 visible, maps to pitch)
  intensity: number // 0-1 (maps to volume)
}

interface Mirror {
  start: { x: number; y: number }
  end: { x: number; y: number }
  getNormal(): { x: number; y: number } // Perpendicular vector
}

interface Sensor {
  position: { x: number; y: number }
  radius: number // Hit detection radius
  onTrigger: (beam: Beam) => void // Audio callback
}

interface Beam {
  origin: { x: number; y: number }
  direction: { x: number; y: number }
  wavelength: number
  intensity: number
  bounceCount: number
  pathLength: number // Total distance traveled
}
```

7. Failure Modes & Limits

What breaks:

  • Infinite loops → Beam bouncing between two parallel mirrors never decays. Need max bounce limit or intensity decay.
  • Too many beams (>50 active) → Frame rate drops. Physics simulation + audio synthesis overwhelm CPU.
  • Grazing reflections → Beam nearly parallel to mirror causes float imprecision. Beam "tunnels through" instead of reflecting.
  • Audio glitches → Rapid sensor triggers (>100/sec) cause audio buffer overflow. Need event throttling.
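The event throttling mentioned for audio glitches can be as simple as a per-sensor minimum interval. The 10 ms floor (≈100 events/sec) and string sensor IDs are assumptions:

```ts
// Drop triggers that arrive within MIN_INTERVAL_MS of the previous one
const MIN_INTERVAL_MS = 10 // assumed: caps each sensor at ~100 triggers/sec
const lastTrigger = new Map<string, number>()

function shouldTrigger(sensorId: string, nowMs: number): boolean {
  const last = lastTrigger.get(sensorId)
  if (last !== undefined && nowMs - last < MIN_INTERVAL_MS) return false
  lastTrigger.set(sensorId, nowMs)
  return true
}
```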

What scales poorly:

  • Complex grids (>100 elements) → Collision detection becomes O(n²). Need spatial partitioning (quadtree).
  • Long-running sessions → Memory leaks in audio context. Need periodic cleanup of dead synth nodes.
  • Mobile devices → Physics + audio drains battery. Desktop-first design.
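A quadtree is the canonical fix for the O(n²) collision sweep; an even simpler uniform-grid spatial hash illustrates the same idea of scanning only nearby elements. The cell size and API below are assumptions:

```ts
const CELL = 50 // cell size in grid units (assumed)

// Bucket elements by grid cell so collision queries scan a 3×3
// neighborhood of cells instead of all n elements
class SpatialHash<T extends { position: { x: number; y: number } }> {
  private buckets = new Map<string, T[]>()

  insert(el: T): void {
    const key = `${Math.floor(el.position.x / CELL)},${Math.floor(el.position.y / CELL)}`
    const bucket = this.buckets.get(key) ?? []
    bucket.push(el)
    this.buckets.set(key, bucket)
  }

  near(x: number, y: number): T[] {
    const cx = Math.floor(x / CELL)
    const cy = Math.floor(y / CELL)
    const out: T[] = []
    for (let dx = -1; dx <= 1; dx++) {
      for (let dy = -1; dy <= 1; dy++) {
        const bucket = this.buckets.get(`${cx + dx},${cy + dy}`)
        if (bucket) out.push(...bucket)
      }
    }
    return out
  }
}
```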

What was consciously deferred:

  • 3D grid → Currently 2D only. 3D optics would be richer but much more complex to visualize and interact with.
  • Colored light mixing → Each beam is monochromatic. No RGB mixing or subtractive color.
  • Gravitational fields → Beams could curve in gravity wells. Deferred as too complex for v1.
  • MIDI output → Can export note sequences but no real-time MIDI sync. Would require MIDI clock implementation.

What would require architectural changes:

  • Networked multi-user → Would need server-side physics simulation + state sync. Currently client-only.
  • Deterministic playback → Float imprecision prevents bit-identical replays. Would need fixed-point math.
  • DAW plugin → Would need VST/AU wrapper. Web Audio API is browser-only.

8. Background & Context

This system emerged from:

  • Mashrou' Leila stage lighting (2008-2022): Observing how light shapes sonic perception. Stage spots create "sound zones."
  • Architectural light studies (The Bartlett): How natural light organizes spatial experience. Could sonic experience be organized by synthetic light?
  • Modular synthesis obsession: Patch cables as signal paths. What if light beams were patch cables?
  • Physics simulation curiosity: Ray tracing for rendering; why not ray tracing for music?

It synthesizes:

  • Optical physics: Ray tracing, reflection laws, wave-particle duality
  • Generative music: Emergent patterns from simple rules (Brian Eno, Alva Noto)
  • Spatial audio: How sound location affects perception
  • Modular synthesis: Event-driven triggering, patch-based architecture

Current Status:

  • Active Development (2024–)
  • Functional prototype deployed at [demo URL]
  • Used in live performances and installations
  • Open to collaborators and testers

Future Directions:

  • Integration with 3D-Beat-Synth for gestural control of emitter/mirror placement
  • Connection to DERIVE for AI-generated grid layouts based on memory clusters
  • Physical installation version (laser beams + photodiodes + modular synth)

Etymology: The name photon+ refers to:

  • Photon: Elementary particle of light
  • +: Additive synthesis, cumulative reflections, recursive propagation

The play on "photon" vs "photon+" suggests enhanced/extended light—light as compositional medium, not just illumination.


Meaning Stack Navigator

This repository represents the Stage Layer of the Meaning Stack. Coordinate your navigation through the ecosystem here:

| Layer | System | Intent |
| --- | --- | --- |
| Sensorium | 3D-Beat-Synth | Body as Input |
| Latent Space | STORYLINES | Memory as Space |
| Conductor | DERIVE | Logic & Tuning |
| Stage | photon+ | Output & Performance |
| Veracity Shield | hah-was | Epistemic Defense |

Operating System: ECHO (hmp00) | Methodology: Choreography of Systems


Maintained by: Haig Papazian / Walaw Studio · Repository: github.com/haigpapa/photon · License: MIT (see LICENSE)
