Live Coding with Computational Neuron Models
This live coding performance is driven by computational neuron models, including the Hodgkin-Huxley (1952) squid giant axon, the Mainen & Sejnowski (1996) model, and the Wang-Buzsáki (1996) fast-spiking interneuron, whose spiking outputs directly drive the speaker diaphragm. Neuron parameters (injected current, temperature, and stochastic noise) are live coded and improvised, alongside a MIDI controller whose mappings are themselves reprogrammed in performance. Multiple feedback pathways are woven between neurons in real time, from self-excitatory loops to coupled networks and time-variable delay lines, generating spike trains that bifurcate dynamically. As these delay networks compound, emergent frequencies arise that resemble the affective vocalizations of animals: cries and whimpers revealing a raw, pre-linguistic expressivity latent in the neural substrate itself. All of this is computed in real time, provoked by the performer's gestural improvisations and live coded interventions, coaxing the system onto the unstable manifolds of its excitable dynamics.
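To give a sense of the kind of model at the core of the performance, below is a minimal sketch of the Hodgkin-Huxley (1952) squid giant axon equations, integrated with forward Euler under a constant injected current. The parameter names and values follow the standard textbook formulation; the function names (`simulate`, `spike_count`) and the audio-free setup are illustrative assumptions, not the actual performance system, which additionally live codes temperature and noise and routes the spike trains through delay networks to the speaker.

```python
import numpy as np

# Standard Hodgkin-Huxley (1952) parameters (illustrative sketch only;
# not the performance system itself).
C_m = 1.0                              # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3      # maximal conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387  # reversal potentials, mV

# Gating-variable rate functions (V in mV, rates in 1/ms).
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))

def simulate(I_ext, t_max=200.0, dt=0.01):
    """Integrate the model under constant injected current I_ext
    (uA/cm^2) and return the membrane-potential trace in mV."""
    steps = int(t_max / dt)
    V = -65.0
    n, m, h = 0.317, 0.053, 0.596  # approximate resting-state gate values
    trace = np.empty(steps)
    for i in range(steps):
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K = g_K * n**4 * (V - E_K)
        I_L = g_L * (V - E_L)
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        trace[i] = V
    return trace

def spike_count(trace, threshold=0.0):
    """Count upward threshold crossings, i.e. spikes."""
    above = trace > threshold
    return int(np.sum(~above[:-1] & above[1:]))

# Raising the injected current pushes the model across its firing
# threshold; in performance, sweeping this parameter live is one way
# the spike trains driving the speaker are shaped.
silent = spike_count(simulate(I_ext=0.0))
firing = spike_count(simulate(I_ext=10.0))
```

Sweeping `I_ext` across the model's rheobase produces the kind of qualitative bifurcation, from silence to sustained spiking, that the text describes emerging and compounding through the delay networks.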
