Neural Harmonica treats entrainment as a clocking problem: align audio, luminance, and spatial motion to a shared phase origin, then keep them locked with harmonic frame rates and calibration markers.
1) Choose set → 2) Lock phase origin → 3) Pick harmonic FPS → 4) Render luminance + motion → 5) Add calibration → 6) Mux A/V
Keep the pipeline deterministic: a cosine-phase audio start, a harmonic FPS, a sinusoidal luminance envelope, and a fixed directional spatial-motion cycle. Use calibration markers to measure device latency.
Neural Harmonica is a structured audiovisual entrainment pipeline that produces phase-coherent multimodal rhythmic stimulation (audio + visual + spatial motion) aligned to a selected neural-frequency target.
It is not a musical harmonica; the name refers to harmonic synchronization across sensory channels.
The system works by ensuring that every stimulus layer shares the same temporal reference:
All layers are phase-locked so that their peaks, transitions, and directional changes occur at deterministic positions relative to the same oscillatory cycle.
Choose a target beat frequency (the entrainment rate). This defines the temporal oscillation the system will synchronize to.
Stereo tones are produced using cosine-phase start:
L(t) = cos(2π fL t)
R(t) = cos(2π fR t)
Starting at cosine phase ensures the waveform peak occurs exactly at t = 0, creating a defined phase origin.
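The cosine-phase start can be sketched in NumPy; the function name, carrier values, and sample rate below are illustrative, not from the source:

```python
import numpy as np

def stereo_tones(f_left, f_right, duration_s, sample_rate=48000):
    """Generate cosine-phase stereo carriers so t = 0 is a waveform peak.

    f_left / f_right are the per-ear carrier frequencies; their
    difference is the beat frequency the pipeline entrains to.
    """
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    left = np.cos(2 * np.pi * f_left * t)
    right = np.cos(2 * np.pi * f_right * t)
    return np.stack([left, right], axis=-1)  # shape: (samples, 2)

# Both channels peak at exactly t = 0, fixing the shared phase origin.
tones = stereo_tones(200.0, 210.0, duration_s=1.0)
assert tones[0, 0] == 1.0 and tones[0, 1] == 1.0
```

Starting from `cos` rather than `sin` is what gives every downstream layer an unambiguous phase-zero reference.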
The rendering frame rate is chosen as an integer harmonic multiple of the beat:
FPS = beat × N
This guarantees that each visual cycle is sampled deterministically and does not drift relative to the audio oscillation.
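One way to pick such a rate is to take the smallest integer multiple N of the beat that lands in a playable FPS window; the bounds below are illustrative assumptions:

```python
def harmonic_fps(beat_hz, min_fps=24.0, max_fps=120.0):
    """Return (fps, N) with fps = beat_hz * N, choosing the smallest N
    whose FPS is playable, so every beat spans a whole number of frames."""
    n = 1
    while beat_hz * n < min_fps:
        n += 1
    fps = beat_hz * n
    if fps > max_fps:
        raise ValueError("no harmonic FPS in range for this beat")
    return fps, n

fps, n = harmonic_fps(10.0)  # a 10 Hz beat maps to 30 fps (N = 3)
```

Because the beat divides the frame rate exactly, each visual cycle occupies the same integer number of frames for the whole render.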
A sinusoidal brightness modulation is generated:
Brightness(t) = 0.5 (1 + cos(2π beat t))
Because both audio and visual signals start at the same phase, their oscillations remain aligned across the entire duration.
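Sampling that envelope at frame times is then exact, since the harmonic FPS makes the envelope period an integer number of frames (function name is illustrative):

```python
import numpy as np

def brightness_per_frame(beat_hz, fps, n_frames):
    """Sample Brightness(t) = 0.5 * (1 + cos(2*pi*beat*t)) at each
    frame time t = k / fps. With fps = beat * N the pattern repeats
    exactly every N frames, so there is no drift against the audio."""
    t = np.arange(n_frames) / fps
    return 0.5 * (1.0 + np.cos(2.0 * np.pi * beat_hz * t))
```

For a 10 Hz beat at 30 fps, frames 0 and 3 both hit full brightness, one full envelope cycle apart.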
A deterministic directional cycle (down → right → up → left) is applied, where each motion phase begins on a beat-aligned boundary.
This adds a spatial entrainment dimension synchronized with the temporal rhythm.
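The beat-aligned motion schedule reduces to integer arithmetic on frame indices; the `beats_per_phase` knob is an assumption of this sketch:

```python
DIRECTIONS = ["down", "right", "up", "left"]

def motion_direction(frame_index, frames_per_beat, beats_per_phase=1):
    """Return the motion direction for a frame. Phases change only on
    beat-aligned frame boundaries, so direction flips stay phase-locked."""
    beat = frame_index // frames_per_beat
    return DIRECTIONS[(beat // beats_per_phase) % len(DIRECTIONS)]
```

With 3 frames per beat (10 Hz at 30 fps), frames 0-2 move down, frames 3-5 move right, and the cycle wraps after four beats.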
A frame-0 visual flash and audio impulse tick provide an external synchronization reference, allowing playback latency to be measured and compensated so the intended phase alignment survives real hardware buffering.
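Stamping those markers onto already-rendered buffers might look like the following sketch (array shapes and the helper name are assumptions):

```python
import numpy as np

def add_calibration(frames, audio, flash_level=1.0):
    """Overwrite frame 0 with a full-brightness flash and put a one-sample
    impulse at audio sample 0. Measuring the flash-to-click offset during
    playback reveals the device latency to compensate for.

    frames: (n_frames, height, width) luminance array
    audio:  (n_samples, 2) stereo array
    """
    frames, audio = frames.copy(), audio.copy()
    frames[0][:] = flash_level  # frame-0 calibration flash
    audio[0] = 1.0              # impulse tick at t = 0, both channels
    return frames, audio
```

Keeping both markers at the shared phase origin (frame 0, sample 0) is what lets a single measured offset correct the whole stream.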
The synchronized video and audio streams are combined into a single file whose oscillatory structure is internally phase-consistent.
The “harmonica” metaphor refers to harmonic alignment across multiple modalities:
Instead of independent stimuli, the system produces a multi-channel coherent oscillatory field where every component is mathematically locked to the same temporal waveform.
---
If extended further, the next logical step is implementing an automatic harmonic scheduler, which selects the optimal FPS, envelope resolution, and motion-phase timing directly from the chosen beat frequency so the neural harmonica pipeline becomes frequency-agnostic.
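Such a scheduler could derive every timing parameter from the beat alone; this is a speculative sketch of that idea, with invented names and bounds:

```python
def schedule(beat_hz, min_fps=24.0, max_fps=120.0, sample_rate=48000):
    """Speculative harmonic scheduler: given only the beat frequency,
    derive a harmonic FPS, the frames per beat, and the audio samples
    per beat, so all layers share one timing grid."""
    n = 1
    while beat_hz * n < min_fps:
        n += 1
    fps = beat_hz * n
    if fps > max_fps:
        raise ValueError("beat has no harmonic FPS in range")
    return {
        "fps": fps,
        "frames_per_beat": n,
        "samples_per_beat": sample_rate / beat_hz,
    }
```

A caller would then feed these derived values to the tone, luminance, and motion stages, making the pipeline frequency-agnostic as described above.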