ChatGPT-Generated Code for an EYESY Mode

I asked ChatGPT to write code for the EYESY to “Draw a perfect circle.” It actually knew what the EYESY is and wrote code that in theory should work, but I’m getting errors and I can’t figure out why. Any help is greatly appreciated!

import time
import random
import eyesy

# Initialize EYESY object
e = eyesy.EYESY()

# Set up colors
colors = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]

# Set up parameters
x_pos = 0
y_pos = 0
radius = 0
color_index = 0

# Set up loop
while True:
    # Choose random color
    color = colors[color_index]
    # Update parameters
    x_pos = (x_pos + 10) % 640
    y_pos = (y_pos + 10) % 480
    radius = (radius + 1) % 50
    # Draw circle
    e.set_param("shape", "circle")
    e.set_param("x_pos", x_pos)
    e.set_param("y_pos", y_pos)
    e.set_param("radius", radius)
    e.set_param("color", color)
    # Update color index
    color_index = (color_index + 1) % 3
    # Sleep for a short time to control frame rate
    time.sleep(0.1)

I can’t help you with the code, I know next to nothing.

I would caution you not to expect the holy grail of natural language being translated into functional code just yet.

There are plenty of discussions going on across forums about how large-language-model-generated code for niche situations (like Critter & Guitari or monome products) isn’t even good enough to be called bad yet.

It’s mostly nonsense that looks like code, and as such it is not a good starting point for actually writing code. While confidently crafted and presented, it can be more harmful than helpful in learning these systems.


GPT doesn’t think critically. It looks like it took your prompt literally and auto-completed some code that seems right… to a language model.

However, it doesn’t have knowledge of the library you are using and appears to be calling functions that may not exist. It would be hard to know the reason for the error without the error message itself.

All in all, though, you can’t rely on GPT to fully program something like this; you’ve got to do the debugging yourself. It might even be quicker to learn it yourself rather than sift through the nonsense it sometimes produces.


@osmsolutions I recently converted an old Raspberry Pi with a Pisound audio card into a DIY-esy and am just starting to get to grips with it.

I take it from your statement that you are not very familiar with Python - correct me if I’m wrong.

Since I do a fair amount of Python professionally and also use ChatGPT in coding, I would recommend you learn Python first. As the previous posters correctly pointed out, ChatGPT also produces a lot of nonsense, which is of course much easier to recognize as such if you have a basic understanding of the language.

A good introduction to Python, especially regarding the programming of visuals, is Kirk Kaiser’s “Make Art With Python”.

Check it out; it’s great, fun, and very easy to understand. There’s even a little tutorial on how to program modes on the ETC, which you could easily convert into EYESY modes (just use the search function for more hints). Also check out existing modes, and you might spot the issues with this piece of code very easily.


I successfully got ChatGPT to write working code for the EYESY/ETC. After some failed attempts, I tried a session where I started by feeding it some working code as an example and then got it to create something different. It was OK (a couple of audio-reactive squares that moved in response to audio input), but requests in the same session for further attempts got less and less effective pretty rapidly. It kept “forgetting” basic rules like the appropriate imports and the def setup and def draw functions, and it couldn’t successfully refer back to the earlier example even when given extremely clear instructions to do so.

As an LLM, it struggles to interpret the full context and to adhere to strict rules, especially as the conversation develops and gets longer. Telling it what was wrong and asking it to try again resulted in completely bizarre nonsense.

I would suggest that iterating a new working program from an existing one you have written is best done (if you want to bother) in a series of standalone sessions/conversations: give it a working example, ask it to make one specific adaptation, then test the result to make sure it works. Then repeat the whole process in a new session, using the developed code as the example. From experience it was better at writing ETC code than EYESY code, but that’s not a particularly big issue, as adapting one to the other is pretty trivial once you know how.

You can also ask it to write annotated code, or to annotate/analyse code you feed it, which may help you understand what the different parts of the code are doing. It seemed OK at that, without many glaring errors, but you still run the risk that it misleads you, as it isn’t a professional EYESY/Python/pygame tutor.


Given ChatGPT’s limitations, it is essential that you read section 5.1 of the EYESY manual before playing with any code and expecting it to work. That section covers the basic, essential requirements for code to run on the device.

Example code from there is:

import pygame

def setup(screen, etc):
    pass

def draw(screen, etc):
    size = 640
    position = (510, 500)
    color = (255, 0, 0)
    pygame.draw.circle(screen, color, position, size, 0)

You MUST meet these basic requirements for the code to work, and the code ChatGPT produced for you does not. For example, it never loads the pygame module (“import pygame”), and it has no draw function routing output to “screen” (“def draw(screen, etc):” plus a drawing call like “pygame.draw.circle(screen, …)”).

Like ChatGPT, I am also not a pro EYESY/Python/pygame tutor, but I have managed to learn just enough to produce a couple of things that work, and to get ChatGPT to do the same with my assistance. I only got this far by bothering to read the manual, watching a few YouTube vids, looking at some of the existing modes, and some trial and error. Essentially, you can’t get ChatGPT to do everything for you. And you can’t expect miracles (or anything at all, really) if you don’t at least use the limited resources in the manual that tell you the three or four things necessary to produce working code.

Edited to add: copying and pasting here may have mangled the indentation and formatting of the code above, so go and look at that section of the manual for the canonical working version.

I hope all the above is helpful for those trying to get to the stage of producing modes for their device.


I’ve found that when using AI to program, you can’t really depend on it to do everything for you. You work alongside it and make sure you understand what you ask it to do. You have to build the program up piece by piece, so you’d definitely want some knowledge of Python. Unfortunately the device still runs Python 2.7 (I emailed them and they said an update is on the back burner but a potential thing one day; it’d be so much easier with modern Python). You’d also want to learn pygame a bit. As for using the AI, slowly sketch something in pygame with it: start it off, then tell it to add one thing at a time and test along the way.