Organelle is a computer, right? (Capacity for using AI for patch design)

Hi all, my understanding is that the Organelle is an audio/MIDI interface with knobs and keys, connected to a Linux computer running Pure Data, correct?

I know a little bit of C++ and Java, but at work I mostly use JavaScript and Python. I've been working in the field for about two years now, and I love coding.

In the last year at my job, AI has become a central tool for coding. I really love that because it frees me from worrying about remembering syntax and lets me spend all my energy on creativity and logic; of course I still need to verify that the code makes sense. It has sped up my programming by roughly 5x.

In the last few months I've gotten really good with the tool, and it has enabled me to write code even in languages I didn't know at all. I can ask it to "translate" code from one language to another, and it does it well.

I tried to have it do Pure Data as an experiment, but it failed miserably because Pd isn't a text-based language.

If the Organelle could be set up to use a different, text-based programming framework, it would open up its creative potential to almost anyone with an AI friend :wink:

Or maybe someone could find a way to get AI to participate in the fun somehow? Like training an AI model on Pd?

Would love to hear everyone’s thoughts on this.


You could probably try training a model on SuperCollider, or see if one already exists, since it's a text-based language. However, the Organelle is definitely not powerful enough to run any of these models locally, as its core is a measly Raspberry Pi CM3, but maybe you could have it interface over a network with a more powerful machine. In theory you could still train a Pure Data model, since .pd files are just text, but that's far outside my field of expertise.

I think it could be a cool idea to have AI code completion in SuperCollider for live music coding, but development would best be done on a separate machine. In theory, you could make a patch on your main machine with the assistance of AI and add the necessary bits so it interfaces with the Organelle. You can check out this patch as an example of how to make a SuperCollider patch work on the Organelle.
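To make the "interface over a network" idea concrete: the usual way to talk to a remote patch is OSC over UDP. Below is a minimal sketch that builds a raw OSC message using only the Python standard library. Note the address `/knob1`, the hostname, and port 4000 are placeholders I'm inventing for illustration, not confirmed Organelle details:

```python
import socket
import struct

def osc_message(address: str, value: float) -> bytes:
    """Build a minimal OSC message carrying a single float argument.
    OSC strings are null-terminated and padded to 4-byte boundaries;
    float arguments are big-endian 32-bit."""
    def pad(b: bytes) -> bytes:
        b += b"\x00"  # null terminator
        return b + b"\x00" * (-len(b) % 4)  # pad to a multiple of 4
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

# Hypothetical example: send a knob value to a patch listening on port 4000.
msg = osc_message("/knob1", 0.5)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(msg, ("organelle.local", 4000))  # uncomment on a real network
```

The point is that the wire format is tiny; whatever machine runs the heavy AI model only needs to emit these packets at the target patch.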

Similar thoughts here… I wouldn't run any machine learning / AI stuff on the Organelle itself, but it could be interesting to generate Faust / SuperCollider / Csound code for it, although I bet it won't be as efficient as code written by a hooman…

Regarding generating Pd patches, it might happen, but the current GPT-3.5 still doesn't fully get Pd yet (I haven't tried other models…).
Although in a way Pd isn't "text-based", its patches are actually plain text files that a machine could learn to understand and generate without problem.
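For anyone who hasn't opened a .pd file in a text editor, here is roughly what a minimal sine patch (osc~ 440 → *~ 0.1 → dac~) looks like on disk:

```
#N canvas 0 0 450 300 12;
#X obj 50 50 osc~ 440;
#X obj 50 100 *~ 0.1;
#X obj 50 150 dac~;
#X connect 0 0 1 0;
#X connect 1 0 2 0;
#X connect 1 0 2 1;
```

Objects are numbered implicitly by the order they appear, and each connect line reads source-object, outlet, target-object, inlet. That implicit indexing is probably part of why models struggle: the structure a human sees at a glance on the canvas is flattened into bare numbers.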


I didn't mean that AI should be integrated into the Organelle; there's no way that would run given its specs. I meant that since the machine is basically a small computer, it could be made to run text-based language software other than Pure Data, so we could effectively get more assistance from AI when designing patches. The idea is to give people with little programming skill a way to participate in the Organelle fun.

But yeah, another way to go about it would be to have current AIs work out how Pd behaves in its text-based form. I did try that with GPT myself by feeding it the text-based versions of Pd patches, but it doesn't work very well; I think the logic is hard to grasp in that representation.

I think that Pd is, and might remain, the best option for creating a "sound engine".
Even if some languages are more AI-friendly, others better for live coding, and others have better-sounding default oscillators,
Pd's power, flexibility, and efficiency are still unmatched, imho.
But if someone feels adventurous, I think it could be possible to create a combination of Pd patch + super prompt to easily enable people to create Organelle synths.

I think you're right in assessing that Pd is the most useful tool for the situation; it's a very complete language after all.

My thinking is that as people use AI to code more and more, fewer and fewer will be willing to dabble in a language as extensive as Pd, unless of course we find a way to leverage AI with its graphical coding style.

My intention is not to denigrate Pd at all; I think it's a wonderful language, albeit a bit hard to get into when you're used to regular code. I only want to find ways to use the untapped potential of all the non-Pd coders out there.

I think there is a lot of potential in this area, but yes, generating Pd patches is a bit of a challenge for the current tools. While the patches are text files, it is hard for something like an LLM to infer meaning from the textual representation of a patch.

GPT-4 can describe how to make Pd patches, and these descriptions could be translated into scripts that build a patch (again using the AI). This might be something to try out, but it might only work for simple patches.
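As a sketch of what such a build script could look like (a toy I'm improvising here, not an existing tool; the function name and layout are my own invention), it just emits the plain-text .pd format directly:

```python
def build_patch(objects, connections):
    """Render a Pd patch file from a list of object strings and
    (src, outlet, dst, inlet) connection tuples. Object ids are simply
    their positions in the list, matching Pd's file format."""
    lines = ["#N canvas 0 0 450 300 12;"]
    for i, obj in enumerate(objects):
        # lay objects out in a simple vertical column
        lines.append(f"#X obj 50 {50 + 50 * i} {obj};")
    for src, outlet, dst, inlet in connections:
        lines.append(f"#X connect {src} {outlet} {dst} {inlet};")
    return "\n".join(lines) + "\n"

# A 440 Hz sine at reduced gain, routed to both output channels:
patch = build_patch(
    ["osc~ 440", "*~ 0.1", "dac~"],
    [(0, 0, 1, 0), (1, 0, 2, 0), (1, 0, 2, 1)],
)
# write it out and open it in Pd:
# open("generated.pd", "w").write(patch)
```

An LLM tends to be much more reliable at producing the structured arguments for a helper like this than at emitting raw patch syntax line by line.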

Another thing I've been playing with is embedding a scripting language in Pd using the Pd Lua object; then you can program events inside Pd in a text-based language. This only runs at control rate (so no audio processing), but it can be used to generate notes, for example. I've tried pairing this with a coding bot to generate simple arpeggios or sequences, with some success… hoping to post about it once it's working a little better.
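The kind of note-generating logic a coding bot produces for this setup looks roughly like the following. Pd Lua embeds Lua, but the same idea is sketched here in Python for readability; the function name and defaults are my own invention:

```python
def arpeggio(root, intervals=(0, 4, 7), octaves=2, pattern="up"):
    """Return MIDI note numbers arpeggiating a chord over n octaves."""
    notes = [root + 12 * o + i for o in range(octaves) for i in intervals]
    if pattern == "down":
        notes.reverse()
    elif pattern == "updown":
        # up then back down, without repeating the top or bottom note
        notes = notes + notes[-2:0:-1]
    return notes

# C major starting at middle C, two octaves, ascending:
print(arpeggio(60))  # [60, 64, 67, 72, 76, 79]
```

Control-rate logic like this is a sweet spot for AI assistance: small, self-contained, and easy to verify by ear.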

Also, writing Pd externals in C is an option, and a coding bot can help a lot with that in my experience.


That sounds like a neat idea!

I had a chat with GPT a while back about how to write C code for something like a sampling LFO. I never cycled around to actually trying it out, but it did a better job than I would have done with my very limited C experience. The trick with these things is to ask the right questions with a small enough scope.
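For a sense of how small that scope can be: assuming "sampling LFO" means a sample-and-hold LFO, the core logic fits in a few lines. This is a sketch in Python rather than C (names and defaults are mine); a real Pd external would run the same loop per signal block:

```python
import random

def sample_hold_lfo(rate_hz, sr, n, seed=0):
    """Sample-and-hold LFO: draw a new random value every 1/rate_hz
    seconds and hold it constant in between."""
    rng = random.Random(seed)
    period = int(sr / rate_hz)  # samples to hold each value
    out, held = [], rng.uniform(-1.0, 1.0)
    for i in range(n):
        if i > 0 and i % period == 0:
            held = rng.uniform(-1.0, 1.0)  # sample a new random value
        out.append(held)
    return out

# 2 Hz LFO at a 100 Hz control rate, two seconds of output:
sig = sample_hold_lfo(rate_hz=2, sr=100, n=200)
```

A prompt scoped down to "one function, one job" like this is exactly where these tools do their best work.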