EYESY Bot, Programming with an LLM Assistant

This is an experiment that uses a large language model (LLM) to generate modes for the EYESY. It provides a simple chat interface: you can ask it to draw something and it returns code that can be copied and pasted right into the EYESY web editor.

It is aware of how a mode should be structured and of the variables specific to the EYESY, such as knob values and audio input.
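
For reference, a minimal sketch of that structure: the standard setup()/draw() pattern plus a few of the etc variables from the EYESY documentation (the circle itself is just for illustration, not something the bot will necessarily produce).

import pygame

def setup(screen, etc):
    pass                                     # one-time initialization

def draw(screen, etc):
    etc.color_picker_bg(etc.knob5)           # background color from knob 5
    color = etc.color_picker(etc.knob4)      # foreground color from knob 4
    level = abs(etc.audio_in[0]) / 32768.0   # etc.audio_in holds recent signed 16-bit samples
    radius = int((0.05 + level) * etc.knob1 * etc.yres)
    pygame.draw.circle(screen, color, (etc.xres // 2, etc.yres // 2), radius)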

Check it out here:

https://eyesy-bot-bab146623aa5.herokuapp.com/eyesy-bot

The app is built with the Panel library for Python and uses the GPT-4 API.
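
If you’re curious about the plumbing, the core is essentially a chat callback that keeps the running message history (starting with the EYESY-specific system message) and sends it to the Chat Completions API. Roughly along these lines, heavily simplified and using Panel’s current ChatInterface component for the sketch; the actual app differs in the details:

import panel as pn
from openai import OpenAI

pn.extension()

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = "You write EYESY modes: Python files with setup(screen, etc) and draw(screen, etc)..."  # abbreviated placeholder

messages = [{"role": "system", "content": SYSTEM_PROMPT}]

def callback(contents, user, instance):
    # append the user message, ask the model, and keep the reply in the history
    messages.append({"role": "user", "content": contents})
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

pn.chat.ChatInterface(callback=callback).servable()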

Thanks for posting this. I’ve found it works pretty well so far, with some tweaks. My first prompt did not yield anything usable, but after I reported that back to the bot, it was able to gradually make changes and get very close to what I wanted.

So if your prompt doesn’t work, just tell it so and it will try again!

I love collaborative coding, and I’m glad to see it’s finally here for the EYESY.

I agree - it works well for the basics and is a great timesaver for exactly this kind of task. I would really like a VS Code extension for this, as I am already so used to working with AI assistants directly in my development environment.

I love this! The ability to talk to the assistant and fix errors is great. I was able to quickly make a few new modes that I had been struggling to code myself. Thanks for sharing!

Yeah, it works well for the basics, but it definitely gets confused easily. It works well for building up from simple elements, but you only get so much back and forth before you start running up against the maximum context window, i.e. the size of the conversation.

I was also thinking about having it generate elements (for example background patterns or different sprites) instead of an entire mode. These would be stored in a personal catalog of elements that could be listed in the system message of a new conversation, which would then assemble the mode.

FYI, if you download an empty conversation (hit download before submitting anything) you can see the first element is the system message. You can edit this and then upload it if you want to change the system message… I’m sure it could be improved a bunch.
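
For reference, the downloaded conversation is essentially the list of messages, with the system message as the first entry. Assuming it follows the usual OpenAI chat message format (shown here as Python for readability; the file itself and its exact keys may differ), it looks something like this:

conversation = [
    {"role": "system", "content": "You write modes for the EYESY... (edit this text, then upload)"},
    # user / assistant messages are appended here as the chat goes on
]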

Yes! I had always wanted to make a MIDI note visualizer that would display the notes like a player piano, but just never got around to it… it was super easy to do with the bot.
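
For anyone who wants to try the same idea, the general shape is something like this: a rough sketch (not the exact mode the bot gave me), assuming the standard etc.midi_notes list of 128 entries that are nonzero while a note is held.

import pygame

trails = []                                      # one list of held notes per frame

def setup(screen, etc):
    pass

def draw(screen, etc):
    etc.color_picker_bg(etc.knob5)
    trails.insert(0, [n for n, v in enumerate(etc.midi_notes) if v])
    if len(trails) > 64:                         # keep a fixed-length scroll history
        trails.pop()
    color = etc.color_picker(etc.knob4)
    col_w = etc.xres / 64.0                      # one column per stored frame
    row_h = etc.yres / 128.0                     # one row per MIDI note number
    for x, notes in enumerate(trails):
        for n in notes:
            y = etc.yres - int((n + 1) * row_h)
            pygame.draw.rect(screen, color,
                             (int(x * col_w), y, max(1, int(col_w)), max(1, int(row_h))))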

So far I’ve found the most issues with the background color knob programming and the audio-in setup. Very often the method it uses for the background color is wrong, or grayscale only, so I replace it with:

background_color = etc.color_picker_bg(etc.knob5)

Still tweaking corrections for audio input.
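
For reference, the overall pattern I’m aiming for looks like this: a sketch assuming the standard API, where etc.audio_in is a list of signed 16-bit samples.

import pygame

def setup(screen, etc):
    pass

def draw(screen, etc):
    etc.color_picker_bg(etc.knob5)                     # full-palette background, not grayscale
    color = etc.color_picker(etc.knob4)
    last = None
    for i, sample in enumerate(etc.audio_in):          # draw the waveform across the screen
        x = int(i * etc.xres / len(etc.audio_in))
        y = int(etc.yres / 2 + (sample / 32768.0) * etc.knob1 * (etc.yres / 2))
        if last is not None:
            pygame.draw.line(screen, color, last, (x, y), 2)
        last = (x, y)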

Thanks! I noticed that too but wasn’t able to solve it in the time I had. Now it works as expected.

I realize that I didn’t include a description of the etc.color_picker_bg() function in the system message, so it’s unlikely that the model is aware of it. If you run into this kind of thing, you can usually just describe the function and how it is supposed to work.
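
For example, adding something along these lines to the system message (or just pasting it into the chat) is usually enough: “etc.color_picker_bg(val) takes a value from 0 to 1, picks a color from the same palette as etc.color_picker(), sets it as the background, and returns the color; call it once near the top of draw(), typically with etc.knob5.”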