EYESY Bot, Programming with LLM Assistant

This is an experiment that uses a large language model (LLM) to generate modes for the EYESY. It provides a simple chat interface: you can ask it to draw something and it returns code that can be copied and pasted right into the EYESY web editor.

It is aware of how a mode should be structured and the variables specific to the EYESY, for example knob values and audio input.
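
For reference, every mode follows the same two-function skeleton: a setup() that runs once when the mode loads, and a draw() that runs every frame, both receiving the screen surface and the etc object. A minimal sketch (the knob-controlled circle is just an illustration):

    import pygame

    def setup(screen, etc):
        # Runs once when the mode loads.
        pass

    def draw(screen, etc):
        # Runs every frame; knob values are floats between 0 and 1.
        radius = int(etc.knob1 * 100) + 1
        center = (int(etc.xres / 2), int(etc.yres / 2))
        pygame.draw.circle(screen, (255, 255, 255), center, radius)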

Check it out here:

https://eyesy-bot-bab146623aa5.herokuapp.com/eyesy-bot

The app is built with the Panel library for Python and uses the GPT-4 API.


Thanks for posting this. I’ve found it works pretty well so far, with some tweaks. My first prompt did not yield anything usable, but after I reported that back to the bot, it was able to gradually make changes that got very close to what I wanted.

So if your prompt doesn’t work, just tell it so and it will try again!

I love collaborative coding, and glad to see it’s finally here for Eyesy.


I agree - it works well for the basics and is a great timesaver for exactly these kinds of tasks. I would really like a VS Code extension for this, as I am already so used to working with AI assistants directly in my development environment.

I love this! The ability to talk to the assistant and fix errors is great. I was able to quickly make a few new modules that I had been struggling to code myself. Thanks for sharing!

Yeah, it works well for the basics, but it will definitely get confused easily. It works well for building up from simple elements, but you only get so much back and forth before you start to run up against the maximum context window, i.e. the size of the conversation.

I was also thinking about having it generate elements (for example background patterns or different sprites), instead of an entire mode. Then these would be stored in a personal catalog of elements which could be listed in the system message for a new conversation that would assemble the mode.
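
For what it’s worth, the assembled mode could stay very simple: each catalog element would just be a function that draws onto the screen, and the generated mode would call them in order. A hypothetical sketch (draw_background and draw_sprite are made-up element names):

    import pygame

    # Elements generated in earlier conversations and stored in the catalog:
    def draw_background(screen, etc):
        etc.color_picker_bg(etc.knob5)

    def draw_sprite(screen, etc):
        center = (int(etc.xres / 2), int(etc.yres / 2))
        pygame.draw.circle(screen, (255, 0, 0), center, int(etc.knob1 * 100) + 1)

    def setup(screen, etc):
        pass

    def draw(screen, etc):
        # The assembled mode just calls the elements in order.
        draw_background(screen, etc)
        draw_sprite(screen, etc)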

FYI, if you download an empty conversation (hit download before submitting anything) you can see the first element is the system message. You can edit this and then upload it if you want to change the system message… I’m sure it could be improved a bunch.
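
Assuming the download uses the usual chat-message format (a list of role/content pairs, as in the OpenAI API), the file would begin something like the sketch below, and the system message is the content field of that first entry:

    [
        {"role": "system", "content": "You are a programming assistant helping to make Python graphics programs..."},
        {"role": "user", "content": "draw a red circle at position 10,10..."},
        {"role": "assistant", "content": "import pygame..."}
    ]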


Yes! I had always wanted to make a MIDI note visualizer that would display the notes like a player piano, but just never got around to it… it was super easy to do with the bot.
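
For anyone who wants to try something similar, a bare-bones version of that idea only needs etc.midi_notes: one column per pitch, lit while the note is held. A minimal sketch (a true scrolling piano roll would need to keep per-frame history):

    import pygame

    def setup(screen, etc):
        pass

    def draw(screen, etc):
        etc.color_picker_bg(etc.knob5)
        column_width = etc.xres / 128.0
        for note in range(128):
            # Light up a full-height column for every held MIDI note.
            if etc.midi_notes[note]:
                x = int(note * column_width)
                w = max(int(column_width), 1)
                pygame.draw.rect(screen, (255, 255, 255), (x, 0, w, int(etc.yres)))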


So far I’ve found the most issues with the background color knob programming and the audio input setup. Very often the method it uses for the background color is wrong, or grayscale only. So I replace it with:

background_color = etc.color_picker_bg(etc.knob5)

Still tweaking corrections for audio input.
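
In case it helps others, here is the fix in context, with the audio input drawn the way the API describes it (a minimal sketch):

    import pygame

    def setup(screen, etc):
        pass

    def draw(screen, etc):
        # Knob 5 is the conventional background-color knob on the EYESY.
        background_color = etc.color_picker_bg(etc.knob5)
        # etc.audio_in holds the 100 most recent signed 16-bit samples.
        for i in range(100):
            x = int(i * etc.xres / 100)
            y = int(etc.yres / 2 + (etc.audio_in[i] / 32768.0) * (etc.yres / 2))
            pygame.draw.circle(screen, (255, 255, 255), (x, y), 3)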


Thanks! I noticed that too and wasn’t able to solve it in the time I had. Now it works as expected.

I realize that I didn’t include a description of the etc.color_picker_bg() function in the system message, so it’s unlikely that the model is aware of it. If you run into this kind of thing, you can usually just describe the function and how it is supposed to work.
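
For example, pasting something like this into the chat usually gets it back on track (the description of the return value is my assumption about how the function behaves):

    There is a helper function etc.color_picker_bg(value) that takes a float
    from 0 to 1 (typically etc.knob5), sets the mode’s background color
    accordingly, and returns the color it chose. Please use it instead of
    filling the screen manually.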

love this!

Does not work. Who can help, please?

The error is:

Encountered RateLimitError("Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}") . Set callback_exception='verbose' to see the full traceback.


I noticed this… some changes happened on the other end. We’ll try to get it working again.


I searched and found this: “A RateLimitError occurs when you exceed the allowed number of requests or tokens for the GPT-4 API. To resolve this, you can implement an exponential backoff strategy, which involves waiting longer between retries after each failed attempt, or check your account settings to ensure you are within your usage limits.”

So if your code/page is used by many people, I guess the number of requests is simply too high.
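
For completeness, exponential backoff on the app side would look roughly like this (a generic sketch, not the actual EYESY bot code; it also wouldn’t help with the insufficient_quota billing error above, only with genuine rate limiting):

    import random
    import time

    def call_with_backoff(make_request, max_retries=5):
        delay = 1.0
        for attempt in range(max_retries):
            try:
                return make_request()
            except Exception:  # e.g. a RateLimitError raised by the API client
                if attempt == max_retries - 1:
                    raise
                # Wait longer after each failed attempt, with a little jitter.
                time.sleep(delay + random.random())
                delay *= 2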

Any news on this?


I haven’t had time to take a look at this and get it working again. The EYESY bot is just a chatbot prompted with some information about how to write code for the EYESY, and you can actually just copy the prompt into a chatbot like ChatGPT and get the same result. Here is the prompt:


You are a programming assistant helping to make Python graphics programs using the Pygame library. Specifically, we are making small programs (called “modes”) for the Critter & Guitari EYESY video synthesizer, so the Python programs need to be in a certain format. They have a setup function and a draw function. For example, if the user says:

    draw a red circle at position 10,10 that is 10 pixels in diameter

You say:

    import pygame

    def setup(screen, etc):
        pass

    def draw(screen, etc):
        pygame.draw.circle(screen, (255, 0, 0), (10, 10), 5)

setup() gets called once at the start and can be used to initialize things. draw() gets called every frame. Additionally, there are a few variables in the `etc` object that gets passed into both setup() and draw(). The `etc` object contains the following:

--- begin eyesy api

-   `etc.audio_in` - A *list* of the 100 most recent audio levels registered by EYESY's audio input. The 100 audio values are stored as 16-bit, signed integers, ranging from a minimum of -32,768 to a maximum of +32,767.
-   `etc.audio_trig` - A *boolean* value indicating a trigger event.    
-   `etc.xres` - A *float* of the horizontal component of the current output resolution.
-   `etc.yres` - A *float* of the vertical component of the current output resolution.
-   `etc.knob1` - A *float* representing the current value of *Knob 1*. 
-   `etc.knob2` - A *float* representing the current value of *Knob 2*. 
-   `etc.knob3` - A *float* representing the current value of *Knob 3*. 
-   `etc.knob4` - A *float* representing the current value of *Knob 4*. 
-   `etc.knob5` - A *float* representing the current value of *Knob 5*. 
-   `etc.lastgrab` - A **Pygame** *surface* that contains an image of the last screenshot taken (via the *Screenshot* button). This surface has dimensions of 1280 by 720, matching the full size of the screenshot.
-   `etc.lastgrab_thumb` - A **Pygame** *surface* that contains a thumbnail image of the last screenshot taken (via the *Screenshot* button). This surface has dimensions of 128 by 72.
-   `etc.midi_notes` - A *list* representing the 128 various MIDI note pitches. Each value in this list indicates whether that note is currently on or not. For example, you could create a function that executes when “middle C” (MIDI note 60) is on with something like…

    if etc.midi_notes[60]: yourFunctionHere()

-   `etc.midi_note_new` - A *boolean* value indicating whether or not at least one new MIDI note on message was received since the last frame was drawn (via the `draw()` function).

-   `etc.mode` - A *string* of the current mode’s name.
-   `etc.mode_root` - A *string* of the file path to the current mode’s folder. This will return something like `/sdcard/Modes/Python/CurrentModeFolder`. This can be useful when images, fonts, or other resources need to be loaded from the mode’s folder. (The `setup()` function would be an appropriate place to do this.)

--- end eyesy api

Format output for markdown. Output Python code first, then describe the code briefly in a sentence or two.

perfect, thank you!