This is an experiment that uses a large language model (LLM) to generate modes for the EYESY. It provides a simple chat interface: you can ask it to draw something and it returns code that can be copied and pasted right into the EYESY web editor.
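For context, an EYESY mode is a Python script that defines setup() and draw(), and the generated code follows that shape. A minimal sketch of the kind of thing it returns (the circle-drawing body is just an illustration, not actual bot output):

```python
import pygame

def setup(screen, etc):
    pass  # runs once when the mode loads

def draw(screen, etc):
    # runs every frame: knob4 picks the color, knob1 scales a circle,
    # and the first audio-in sample nudges the radius
    color = etc.color_picker(etc.knob4)
    radius = int(etc.knob1 * 200) + abs(etc.audio_in[0]) // 500
    pygame.draw.circle(screen, color, (etc.xres // 2, etc.yres // 2), radius)
```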
Thanks for posting this. I’ve found it works pretty well so far, with some tweaks. My first prompt didn’t yield anything usable, but after I reported that back to the bot, it was able to gradually make changes and get very close to what I wanted.
So if your prompt doesn’t work, just tell it so and it will try again!
I love collaborative coding, and I’m glad to see it’s finally here for the EYESY.
I agree - it works well for the basics and is a great timesaver for exactly this kind of task. I would really like a VS Code extension for this, as I’m already so used to working with AI assistants directly in my development environment.
I love this! Being able to talk to the assistant and fix errors is great. I was able to quickly make a few new modes that I had been struggling to code myself. Thanks for sharing!
Yeah, it works well for the basics, but it definitely gets confused easily. It’s good for building up from simple elements, but you only get so much back and forth before you run up against the maximum context window, i.e. the size of the conversation.
I was also thinking about having it generate individual elements (for example, background patterns or different sprites) instead of an entire mode. These would be stored in a personal catalog of elements that could be listed in the system message of a new conversation, which would then assemble the mode.
FYI, if you download an empty conversation (hit download before submitting anything), you can see that the first element is the system message. You can edit this and then upload it if you want to change the system message… I’m sure it could be improved a bunch.
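For anyone curious, the exact fields in the downloaded file depend on the tool, but it is presumably the familiar chat-message list, with the system message as the first entry you can edit before re-uploading. Roughly:

```json
[
  {"role": "system", "content": "You write EYESY modes in Python using pygame. Every mode defines setup(screen, etc) and draw(screen, etc)..."},
  {"role": "user", "content": "Draw a circle that pulses with the audio input."}
]
```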
Yes! I had always wanted to make a MIDI note visualizer that would display the notes like a player piano, but just never got around to it… it was super easy to do with the bot.
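For anyone who wants to try something similar, here is a rough sketch of the player-piano idea (not the bot’s output; it assumes the EYESY exposes held MIDI notes as etc.midi_notes, a 128-entry list of note-on states):

```python
import pygame

history = []  # one snapshot of held notes per frame, newest first

def setup(screen, etc):
    pass

def draw(screen, etc):
    etc.color_picker_bg(etc.knob5)           # knob5: background color
    history.insert(0, list(etc.midi_notes))  # assumes a 128-entry note-on list
    if len(history) > etc.yres:              # one row of history per pixel row
        history.pop()
    note_w = etc.xres / 128.0                # one column per MIDI note
    color = etc.color_picker(etc.knob4)      # knob4: note color
    for y, notes in enumerate(history):
        for n, on in enumerate(notes):
            if on:
                pygame.draw.rect(screen, color,
                                 (int(n * note_w), y, max(1, int(note_w)), 1))
```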
So far I’ve found the most issues with the background color knob programming and the audio-in setup. Very often the method it uses for the background color is wrong, or grayscale only. So I replace it with the stock pattern, where knob 5 drives the built-in picker:
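```python
def draw(screen, etc):
    # stock EYESY pattern: knob5 drives the built-in background color picker,
    # which sets the color the engine clears the screen with each frame
    etc.color_picker_bg(etc.knob5)
```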
I realize that I didn’t include a description of the etc.color_picker_bg() function in the system message, so it’s unlikely that the model is aware of it. If you run into this kind of thing, you can usually just describe the function and how it is supposed to work.
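For example, a sentence along these lines in the system message should do it (wording is mine, not tested):

```
etc.color_picker_bg(val) takes a float from 0 to 1 (usually a knob value such as
etc.knob5) and sets the mode's background color. Call it once at the top of draw().
```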