From Patch Cords to Pipelines: What MaxMSP Taught Me About AI Before AI Existed
The tools changed. The thinking didn’t. That’s the part nobody talks about.
Before I built systems for AI, I built them for sound.
In MaxMSP, you don’t write code in the conventional sense. You patch. You drag objects onto a canvas, draw lines between them, and watch signal flow. A microphone input becomes a number. That number passes through a filter object, then a math object, then an output. The program is the diagram. The diagram is the logic. If you understand the flow, you understand what the system is doing — and when it breaks, you can see exactly where.
I spent years in that environment. Live performance, generative music, interactive installations. Hours of watching waveforms move through chains of objects I’d assembled like a circuit board. It was, in retrospect, the most thorough education in systems thinking I ever received — and I didn’t recognise it as such until I started building AI pipelines and realised I already knew how to think about them.
The Dataflow Mindset
MaxMSP is a dataflow programming environment. Data — audio signals, MIDI values, control messages — flows through connected objects in a directed graph. Each object does one thing well. The power of the system emerges from how those objects are arranged, not from any single object’s sophistication.
This is, almost precisely, the architecture of a modern AI pipeline.
An input arrives — a user prompt, an image, a document. It passes through a preprocessing step, then a model inference step, then a postprocessing step, then an output. Between those stages: transformation, routing, conditioning. The pipeline is the diagram. The diagram is the logic.
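The shape of that claim can be sketched in a few lines. This is a toy, not any particular framework: the stage names (`preprocess`, `infer`, `postprocess`) are hypothetical stand-ins, and the "model" is a placeholder function. What matters is the shape — the pipeline is just an ordered chain of single-purpose stages, the same topology as a row of Max objects joined by patch cords.

```python
from typing import Any, Callable

# A pipeline is an ordered chain of single-purpose stages:
# the program is the diagram, the diagram is the logic.
def make_pipeline(*stages: Callable[[Any], Any]) -> Callable[[Any], Any]:
    def run(signal: Any) -> Any:
        for stage in stages:
            signal = stage(signal)  # each stage transforms and passes on
        return signal
    return run

# Hypothetical stages standing in for preprocess -> infer -> postprocess.
def preprocess(text: str) -> str:
    return text.strip().lower()

def infer(text: str) -> str:
    return f"response to: {text}"  # placeholder for a real model call

def postprocess(text: str) -> str:
    return text.capitalize()

pipeline = make_pipeline(preprocess, infer, postprocess)
print(pipeline("  Hello World  "))  # -> "Response to: hello world"
```

Swapping a stage, or splicing a new one into the chain, changes the system without touching any other stage — exactly the property that makes patching (and pipelining) composable.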
The vocabulary is different. Objects became nodes. Patch cords became data connections. Bangs became webhooks. But the fundamental act — tracing a signal through a system to understand what’s happening to it — is identical. When an AI pipeline produces unexpected output, I debug it the way I debugged a Max patch: I follow the signal upstream until I find the point where the data became something different from what I intended.
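That upstream walk can itself be automated. Here is a minimal sketch of the idea — a "tap" wrapper, analogous to clipping a meter onto a patch cord, that records what every stage emits so you can find the first point where the data stopped being what you intended. The stages here are trivial hypothetical examples.

```python
from typing import Any, Callable

# A "tap" records each stage's output into a shared trace,
# like putting a meter on every patch cord in the chain.
def tap(name: str, stage: Callable[[Any], Any], trace: list) -> Callable[[Any], Any]:
    def wrapped(signal: Any) -> Any:
        out = stage(signal)
        trace.append((name, out))
        return out
    return wrapped

trace: list = []
stages = [
    tap("preprocess", str.strip, trace),
    tap("tokenize", str.split, trace),  # hypothetical stages
    tap("count", len, trace),
]

signal: Any = "  the quick brown fox  "
for stage in stages:
    signal = stage(signal)

for name, value in trace:
    print(f"{name:>10}: {value!r}")
```

Reading the trace from the output upward shows the first stage whose output diverged from intent — the same move as following a bad signal back through a patch.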
Most people who arrive at AI from a pure software background think in functions and state. I think in flow. That difference is small until the systems get complex, at which point it becomes everything.
What Analog Taught Digital
There’s something MaxMSP has in common with modular synthesizers and analog signal chains that purely digital environments don’t: the system pushes back.
In a digital environment, abstraction is cheap. You call a function, get a result, move on. The distance between intention and output is small and clean. In MaxMSP — and especially in analog hardware — the system has opinions. Signals degrade. Feedback loops form unexpectedly. Two objects that work perfectly in isolation produce interference when combined. The environment is not compliant.
This turns out to be extraordinary preparation for working with AI.
AI systems push back. A language model doesn’t return exactly what you asked for; it returns its interpretation of what you asked for, shaped by its training, its context window, its temperature setting, its position in the conversation. An image generation model doesn’t execute instructions — it approximates them, probabilistically, and the gap between instruction and output is where the craft lives.
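Temperature is the most concrete of those shaping forces, and it can be seen in a few lines. This sketch uses three toy logits (a real model produces thousands per step) and the standard softmax-with-temperature formulation: low temperature sharpens the distribution toward the top token, high temperature flattens it toward uniform.

```python
import math
import random

# Standard softmax with a temperature parameter.
# Dividing logits by T before normalising reshapes the distribution.
def softmax(logits, temperature=1.0):
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate tokens

sharp = softmax(logits, temperature=0.1)  # nearly deterministic
flat = softmax(logits, temperature=10.0)  # close to uniform

print([round(p, 3) for p in sharp])
print([round(p, 3) for p in flat])

# Sampling the same distribution twice can yield different tokens —
# the gap between instruction and output, made literal:
tokens = ["the", "a", "one"]
print(random.choices(tokens, weights=softmax(logits), k=2))
```

The model never "executes" your instruction; it samples from a distribution your instruction helped shape. The craft lives in shaping that distribution well.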
Practitioners who’ve only ever worked with deterministic systems find this gap maddening. They want the system to do what they said. Practitioners who came up in environments that always had a gap — where the patch cord introduced noise and the filter had a personality and the output was never quite the input — find it familiar. Not easy, but familiar. The craft is the same: learn the system’s tendencies, work with them rather than against them, place your bets on the parts of the output you can predict.
The Translation That Wasn’t a Leap
When I started working seriously with AI tools, colleagues expected a learning curve. There was one, but it wasn’t where they predicted.
The technical concepts — inference, embeddings, attention, context — took time to understand in depth. But the practical skill of building systems with AI didn’t feel new. It felt like patching in a new environment with new primitives. The instinct to break complex tasks into small, composable steps; the habit of testing each node in isolation before connecting the chain; the reflex of asking “where is the signal changing in a way I didn’t intend?” — these came from MaxMSP, from years of watching audio signals transform through chains I’d assembled myself.
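The habit of testing each node in isolation ports over almost verbatim. A minimal sketch, with two hypothetical stages: exercise each object on its own — the patching equivalent of banging a single Max object before drawing the next cord — and only then check the connected chain end to end.

```python
# Two hypothetical pipeline stages, tested as isolated objects first.
def normalize(text: str) -> str:
    return " ".join(text.split()).lower()

def truncate(text: str, limit: int = 20) -> str:
    return text[:limit]

# Each node in isolation:
assert normalize("  Hello   WORLD ") == "hello world"
assert truncate("abcdefghijklmnopqrstuvwxyz", 5) == "abcde"

# Then the connected chain:
assert truncate(normalize("  Hello   WORLD "), 5) == "hello"
print("all stages pass")
```

When the chain misbehaves but every isolated node passes, the bug lives in the connections — which is where it usually lived in Max, too.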
What I had to consciously learn was the probabilistic nature of the output. The same prompt, run twice, produces different results. This requires a different relationship with the system — less control, more collaboration, a higher tolerance for the gap between intention and output.
For someone trained in analog systems, that gap is familiar territory. For someone trained exclusively in software engineering, it can feel like the ground shifted.
What Gets Lost in Translation
I want to be honest about what the transition cost, because not everything transferred cleanly.
MaxMSP gave me a tactile understanding of systems that I haven’t fully replicated with AI tools. When I patched a synthesizer, I understood every stage of the signal chain physically — could point to the exact object where a sound became distorted, could hear the effect of changing a single parameter in real time. The system was inspectable in a way that felt almost embodied.
AI systems are not inspectable in that way. The model is a black box in a way that a Max patch never was. I can trace the inputs and outputs of each stage in a pipeline, but I cannot look inside a language model the way I could look inside an audio processing chain. That opacity is real, and it changes the nature of the craft. You develop intuition about what a model tends to do in certain conditions, but intuition is not the same as understanding.
This is, I think, the central intellectual challenge of AI design: building reliable systems out of components you cannot fully inspect. MaxMSP was good preparation for systems thinking, but it didn’t prepare me for the epistemological discomfort of working with systems that resist full understanding.
That discomfort is, itself, worth sitting with. The practitioners who handle it best are the ones who don’t pretend it isn’t there.
The patch cords are gone. The pipelines are invisible. The signal still flows.
And if you know how to follow it, you’re never entirely lost.
Linda Brown
Systems Architect building intelligent structures for creative teams — at the intersection of design systems, AI infrastructure, and the stubbornly human parts of creative practice.