EYESY disk image for openFrameworks / openGL / Lua (beta)

Sorry to butt in; a couple of other users have asked and I was curious myself. Is there a repo out there with the new ofLua code used in the demos above? I really dig the ones used in the “EYESY - Experiments with OpenGL” videos, but I can’t seem to find those specific effects in the Critter and Guitari GitHub repository, just the demo code for ofLua. Do y’all have an idea of when the effects used in the “Experiments with OpenGL” videos will be made available?


Hi, thanks for the repo.

Turns out most of the issues were user error… I forgot to hold the Shift key while booting. As for the power switch, I think it’s behaving normally. I noticed the power LED turns blue while booting, then turns off, but the EYESY itself keeps running, so it’s all good.


Thanks for this! Are there any instructions for generating the disk image (or is there an easier way to redeploy the OS to the EYESY during development)? I tried out the beta, really liked it, and am interested in contributing. I see deploy.sh in the repo, but it looks like it only redeploys the rootfs, omitting the engines and Pd code.

OK, I’ve figured out what seems like a decent dev workflow for anyone wanting to hack on EYESY_OS on-device. Personally I find this easier than making changes on the SD card directly, because I can’t mount ext4 on macOS.

# Connect to the Eyesy
$ ssh music@eyesy.local
music@eyesy.local's password: music

# Prepare the system for being updated: remount rootfs as writable
$ sudo mount / -o remount,rw
# NTP is disabled so set the date, otherwise TLS certificates won't work
$ sudo date -s '2021-06-30 00:11:00'  # <-- format: YYYY-MM-DD HH:MM:SS

$ cd /home/music/EYESY_OS
$ git pull
$ cd platforms/eyesy_cm3
$ sudo ./deploy.sh   # Be sure to configure this to start Python or Lua depending on what you want
$ sudo reboot

So to load in your own changes: fork the repo, make your changes on your fork, add your fork as a git remote in the clone of EYESY_OS at /home/music/EYESY_OS, and pull from that instead before redeploying.

Happy hacking.


Hello,

I’m currently trying to reproduce some XY oscilloscope video/music with the EYESY, and I’ve come across two main issues.

First, this kind of video requires two audio inputs (left and right), but the EYESY core opens only a single ALSA channel. I was able to monkey-patch the core (from within my mode) to capture both channels, but it doesn’t smell good (code smell). Is there a better way to do that? I saw the OpenGL experiment with 2 audio inputs video; does it only work with the beta Lua version?

The other issue is that XY oscilloscope videos are really smooth due to the continuous nature of the signal. However, the EYESY core captures audio at an 11 kHz sampling rate, so I lose a lot of detail (no high frequencies: the Nyquist limit at 11 kHz is about 5.5 kHz), and I think performance will be poor at a more useful rate (due to pygame rendering performance). It seems I can’t increase the sampling rate (the EYESY crashes); did I miss something? Has anyone tried this? OpenGL could be great for improving performance; is it available without burning a new image?

Anyway, I’m having so much fun with the EYESY!!! You guys did a good job!

It might be hard to change this in the pygame version. To get a higher sample rate you also have to increase the size of the audio buffer that gets passed to the modes; otherwise the buffer fills faster than the frame rate and you will see skipping in the waveform.
Currently the buffer is 100 points, and I think a lot of the modes assume this size, so the 11 kHz sample rate is sorta entrenched…
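
To put rough numbers on it (the 11025 Hz rate and 100-point buffer are the current engine values; the 30 fps figure below is just an assumed frame rate):

# Back-of-envelope: why a higher sample rate needs a bigger buffer
sample_rate = 11025   # current capture rate (Hz)
buffer_size = 100     # points passed to each mode
fps = 30              # assumed frame rate

buffers_per_second = sample_rate / buffer_size   # ~110 buffers/s
buffers_per_frame = buffers_per_second / fps     # ~3.7 buffers fill between frames

# At 48000 Hz with the same 100-point buffer, ~16 buffers fill per frame,
# so each drawn frame would show only the newest ~2 ms of audio.
print(buffers_per_second, buffers_per_frame)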

The oF/Lua audio also uses 11kHz (it is also stereo), but perhaps we should think about increasing this if it doesn’t harm performance.

FWIW - I’ve got my hacked RasPi version of oF-EYESY with settings.sampleRate = 48000; to match my DAC board and it does not seem to have performance issues.

I also need to get back to experimenting with this and try the FFT stuff I researched a while back. :grin:


I noticed that, and my patch was designed to overcome exactly this issue:

import time
import audioop  # stdlib PCM helpers, same as the engine uses
from functools import partial

import alsaaudio  # pyalsaaudio, already a dependency of the EYESY engine
import sound      # the engine's audio module (assumed importable from a mode)


def log_time(label, start):
    # Simple stand-in for the logging helper used below (not in the original post)
    print('%s (+%.4fs)' % (label, time.time() - start))


# Cap on how much stereo history to keep (samples per channel)
max_stereo_buffer_size = 15000

def get_avg_sample(data, i):
    # Average 3 consecutive 16-bit samples, mirroring the engine's decimation
    avg = audioop.getsample(data, 2, i * 3)
    avg += audioop.getsample(data, 2, (i * 3) + 1)
    avg += audioop.getsample(data, 2, (i * 3) + 2)
    return avg // 3


def stereo_recv(etc):
    start = time.time()

    # get audio
    l, data = sound.inp.read()  # 48 kHz fails here!
    peak = 0
    while l:
        try:
            # Mono mix (L+R) plus each channel isolated via one-sided weights
            mono_data = audioop.tomono(data, 2, 1, 1)
            if sound.nb_channels >= 2:
                left_data = audioop.tomono(data, 2, 1, 0)
                right_data = audioop.tomono(data, 2, 0, 1)
            else:
                left_data = []
                right_data = []
        except Exception as exc:
            log_time('READING FAILED %s' % exc, time.time())
            time.sleep(0.010)
            return

        nb_stereo_samples = len(left_data) // 2  # 2 bytes per S16 sample
        for i in range(nb_stereo_samples):
            etc.audio_lin.append(audioop.getsample(left_data, 2, i))
            etc.audio_rin.append(audioop.getsample(right_data, 2, i))

        nb_mono_samples = len(mono_data) // 2 // 3  # 2 bytes per sample; each average consumes 3 samples
        for i in range(nb_mono_samples):
            try:
                # "Original" code
                avg = get_avg_sample(mono_data, i)

                # scale it
                avg = int(avg * etc.audio_scale)
                if avg > 20000:
                    # Simple onset trigger with a 50 ms debounce
                    sound.trig_this_time = time.time()
                    if (sound.trig_this_time - sound.trig_last_time) > .05:
                        if etc.audio_trig_enable:
                            etc.audio_trig = True
                        sound.trig_last_time = sound.trig_this_time
                if avg > peak:
                    etc.audio_peak = avg
                    peak = avg
                # if the trigger button is held
                if etc.trig_button:
                    etc.audio_in[i] = sound.sin[i]
                else:
                    etc.audio_in[i] = avg
            except Exception:
                pass
        l, data = sound.inp.read()

    # Trim the stereo histories to the newest samples only
    if len(etc.audio_lin) > max_stereo_buffer_size:
        etc.audio_lin = etc.audio_lin[-max_stereo_buffer_size:]
    if len(etc.audio_rin) > max_stereo_buffer_size:
        etc.audio_rin = etc.audio_rin[-max_stereo_buffer_size:]
    log_time('stereo_recv', start)


def patch_sound_inp_recv(etc):
    # Add two new buffers: left/right audio in
    etc.audio_lin = []
    etc.audio_rin = []

    # Close the mono PCM
    sound.inp = None

    # Open a stereo PCM
    sound.inp = alsaaudio.PCM(alsaaudio.PCM_CAPTURE, alsaaudio.PCM_NONBLOCK)
    sound.inp.setchannels(2)  # Original: 1
    sound.inp.setrate(44100)  # Original: 11025
    sound.inp.setformat(alsaaudio.PCM_FORMAT_S16_LE)
    sound.inp.setperiodsize(512)  # Original: 300
    sound.nb_channels = 2  # Original: 1

    # Patch sound.recv to handle stereo
    sound.recv = partial(stereo_recv, etc)


def setup(screen, etc):
    # EYESY calls setup() once when the mode is loaded
    patch_sound_inp_recv(etc)

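In case it helps anyone, here is a minimal sketch of a draw() that consumes the patched buffers to plot an XY trace; the pygame drawing and the scaling are my own guesswork, assuming the standard etc.xres / etc.yres / etc.color_picker() mode API:

import pygame


def draw(screen, etc):
    # Plot the newest 2000 (left, right) pairs as an XY trace
    n = min(len(etc.audio_lin), len(etc.audio_rin), 2000)
    cx, cy = etc.xres // 2, etc.yres // 2
    scale = cy / 32768.0  # S16 samples span -32768..32767
    for left, right in zip(etc.audio_lin[-n:], etc.audio_rin[-n:]):
        x = cx + int(left * scale)
        y = cy - int(right * scale)
        pygame.draw.circle(screen, etc.color_picker(), (x, y), 1)
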
Anyway, I did some math and it appears I’d need a really high sampling rate (> 200 kHz) and a lot of pygame shapes (~20k) to get a nice display. Thus I don’t think I can achieve my goal.
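
For reference, here is roughly how numbers in that ballpark fall out (the 10 fps figure is my assumption for a heavily loaded pygame scene):

# Assumed figures: a fast XY trace needs most samples drawn each frame
sample_rate = 200000   # Hz, high enough to keep the trace continuous
fps = 10               # assumed frame rate with thousands of shapes

points_per_frame = sample_rate // fps   # 20000 shapes drawn per frame
print(points_per_frame)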

Thanks for your help

Just curious. Is there a roadmap you could share for this project? You mentioned some upcoming refactoring but it looks like that hasn’t happened yet.

Being new to the EYESY and this community, I’m not sure what is normal for this project regarding rate of development, releases, testing, etc.


We (C&G) have been working on openFrameworks/Lua and have a new disk image of the beta to share. Please let us know if you try it out and find any issues.

Download the disk image.

After downloading, burn it onto a micro SD card (see section 6.1 of the manual). The process will wipe the SD card clean, so you might want to use a second SD card or make sure anything on the card is backed up.

All the web stuff works the same as before (as described in the current EYESY manual). You will want to get connected, and then you can play around with the modes and create new ones in the web editor. The modes are in the oFLua folder. If you create a new mode (currently the easiest way is to copy an existing mode folder and rename it), you will need to stop and re-start the video so that the new mode gets picked up.

What doesn’t work

The following features are not yet implemented:

  • Shift button settings
  • Persist button
  • On screen display (there is one, but it is pretty rudimentary)
  • Scene save / recall
  • MIDI / Link

Wow!! Super pleased to see progress on the OpenGL/Lua EYESY firmware. Have you seen… the Hydra WebGL live coding environment? Seems like it might be of interest to EYESY users. I’d love to be able to run a similar environment on the EYESY.

Really looking forward to oF/Lua having a MIDI CC implementation similar to the standard pygame EYESY firmware.


The wifi nub I have does not seem to work with the new oF disk image.

Checking the image contents from a RasPi, I noticed that the sdcard directory was empty, and thus there were no ap.txt or wpa_supplicant.conf files. I tried to manually create these with defaults, but no dice. (Turns out I was looking at the wrong directories.)

Is there another way I can configure wifi? Or get a console on the device with a keyboard plugged in?

(Note - Balena Etcher gave me an error when it finished writing the disk image, but it seemed to boot ok so I didn’t worry about it).

Did you hold down the Shift button while it booted? That launches the AP mode…

Yeah. Tried everything.

Maybe I’ll reflash again.

FYI - the image download took forever (many hours), so I’m not keen to re-download the image

EDIT/UPDATE:

  • Reflashed a couple times, was still not able to get AP mode working.
  • Loaded up the SD card on a RasPi (and found the correct mount point for the sdcard directory), then manually added my info to wpa_supplicant.conf (example below); after putting the card back in the EYESY and booting, WiFi works.
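
For anyone else editing the file by hand, a minimal wpa_supplicant.conf looks something like this (the SSID and passphrase are placeholders, and the header lines on the EYESY image may differ slightly):

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="YourNetworkName"
    psk="YourPassphrase"
}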

Sounds good, glad you got it working…we’ll have to investigate why it doesn’t work initially.

Also, you can always get console access on the EYESY (either in the oF/Lua image or the original disk image) by plugging in a keyboard and hitting the ‘esc’ key to disable the video. Then press Ctrl-Alt-F3 (or F4) to get another tty. The username / pw are both ‘music’.

I also had problems getting wireless to work. The AP never turned up when I held down Shift, and even with the advice here on how to log in and set up wpa_supplicant, I couldn’t get wlan0 to connect.

However, I was able to plug in a USB-to-Ethernet adapter (Insignia NS-PCA3E), and that works smoothly over a wired network, so hopefully I can get going from here.

Gave the beta image a try tonight. It shows a lot of promise! I’m looking forward to seeing how things progress. Keep up the great work!

In general, this looks like a lot of fun! I like the ways that 3D can add to the story, and I like the Lua code as well.

I’m just getting started with Lua (and EYESY), but I’m wondering if there are ways I could help. Documentation, maybe? Eventually coming up with some samples? (I’m a technical writer by trade, usually explaining new languages and tools to newcomers.)

Thanks for this!


Just checking in to ask how the oF/Lua fork is going and whether it’s coming out of beta any time soon. Cheers.

Just downloaded it to test a bit; it seems great, even if there aren’t many patches to mess around with yet.

Actually, it’s a pity, because without MIDI this thing is not so usable.
Is it hard to implement a response to MIDI messages like the stock firmware has?