EYESY reaction to different frequencies?

Do I understand correctly that the Eyesy can only react to the incoming audio level (volume), but can’t react to different frequencies? I’d like certain elements to react only to high frequencies, other elements only to low frequencies. Is that possible without using MIDI?

Hi, yes - that is correct. The EYESY’s audio input captures the amplitude of the incoming audio signal; it does not do any frequency analysis.
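
A mode only sees eyesy.audio_in - a list of signed 16-bit audio samples - so reactivity normally means scaling something by the level of that buffer. A minimal sketch of the usual approach:

import pygame

def setup(screen, eyesy):
    pass

def draw(screen, eyesy):
    eyesy.color_picker_bg(eyesy.knob5)
    # Peak of the current audio buffer, scaled to 0..1
    level = max(abs(v) for v in eyesy.audio_in) / 32768.0
    # Circle grows with the level; knob1 sets the overall size
    radius = int(10 + level * eyesy.knob1 * eyesy.yres / 2)
    pygame.draw.circle(screen, eyesy.color_picker(eyesy.knob4),
                       (eyesy.xres // 2, eyesy.yres // 2), radius)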


Thanks! I’m using ChatGPT to try and find a way to get frequency analysis. It seems there is a Python module that can do this - numpy - but it isn’t included on the Eyesy.
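
For comparison, if numpy were installed, the analysis would apparently be just a few lines - a hypothetical, untested sketch, since I can’t run it on the Eyesy:

import numpy as np

def band_levels(samples, n_bands=3):
    # Magnitudes of the positive-frequency bins of a real-valued signal
    mags = np.abs(np.fft.rfft(samples))[1:]  # skip bin 0, which holds the DC offset
    # Average the magnitudes over n_bands equal chunks
    chunks = np.array_split(mags, n_bands)
    return [float(c.mean()) for c in chunks]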

ChatGPT then came up with this simplified pure-Python code that implements a basic FFT (Fast Fourier Transform) for use with the Eyesy. It does seem to make different elements more reactive to different frequency ranges, though I don’t know enough about the code to be sure exactly what is going on.

import pygame
import random
import math
import cmath

marchers_low = []
marchers_mid = []
marchers_high = []

def simple_fft(x):
    # Recursive radix-2 Cooley-Tukey FFT; the input length must be a power of two
    N = len(x)
    if N <= 1:
        return x
    even = simple_fft(x[0::2])
    odd = simple_fft(x[1::2])
    T = [cmath.exp(-2j * math.pi * k / N) * odd[k] for k in range(N // 2)]
    return [even[k] + T[k] for k in range(N // 2)] + \
           [even[k] - T[k] for k in range(N // 2)]

def setup(screen, eyesy):
    global marchers_low, marchers_mid, marchers_high
    num_per_group = 4
    for _ in range(num_per_group):
        for group in [marchers_low, marchers_mid, marchers_high]:
            group.append({
                'x': random.uniform(0, eyesy.xres),
                'y': random.uniform(eyesy.yres * 0.3, eyesy.yres * 0.7),
                'angle': random.uniform(-0.2, 0.2),
                'speed': 1 + random.uniform(0.5, 2),
            })

def draw(screen, eyesy):
    global marchers_low, marchers_mid, marchers_high

    eyesy.color_picker_bg(eyesy.knob5)

    # Use only the first 64 audio samples for speed (a power of two, as the recursive FFT requires)
    audio = eyesy.audio_in[:64]
    fft_result = simple_fft([float(v) for v in audio])
    mags = [abs(c) for c in fft_result[:32]]  # only positive freqs

    # Band averages (32 bins don't divide evenly by 3, so the high band gets the leftover bins)
    third = len(mags) // 3
    low = sum(mags[:third]) / third
    mid = sum(mags[third:2*third]) / third
    high = sum(mags[2*third:]) / len(mags[2*third:])

    # Normalize
    low_norm = min(low / 10000, 1.0)
    mid_norm = min(mid / 10000, 1.0)
    high_norm = min(high / 10000, 1.0)

    # Colors based on knob4
    hue_shift = eyesy.knob4
    color_low = eyesy.color_picker((hue_shift + 0.0) % 1.0)
    color_mid = eyesy.color_picker((hue_shift + 0.33) % 1.0)
    color_high = eyesy.color_picker((hue_shift + 0.66) % 1.0)

    def update_and_draw(group, volume_norm, color, shape='triangle'):
        for m in group:
            m['angle'] += random.uniform(-0.05, 0.05)
            dx = math.cos(m['angle']) * m['speed']
            dy = math.sin(m['angle']) * m['speed'] * 0.5
            m['x'] += dx
            m['y'] += dy

            if m['x'] > eyesy.xres + 50:
                m['x'] = -50
                m['y'] = random.uniform(eyesy.yres * 0.3, eyesy.yres * 0.7)
                m['angle'] = random.uniform(-0.2, 0.2)

            size = 20 + 30 * volume_norm

            if shape == 'triangle':
                points = [
                    (m['x'], m['y'] - size),
                    (m['x'] - size * 0.6, m['y'] + size),
                    (m['x'] + size * 0.6, m['y'] + size)
                ]
                pygame.draw.polygon(screen, color, points)
            elif shape == 'circle':
                pygame.draw.circle(screen, color, (int(m['x']), int(m['y'])), int(size // 2))
            elif shape == 'square':
                rect = pygame.Rect(m['x'] - size / 2, m['y'] - size / 2, size, size)
                pygame.draw.rect(screen, color, rect)

    update_and_draw(marchers_low, low_norm, color_low, shape='triangle')
    update_and_draw(marchers_mid, mid_norm, color_mid, shape='square')
    update_and_draw(marchers_high, high_norm, color_high, shape='circle')

Does anyone else have experience using FFT with the Eyesy for frequency analysis? If it works well, maybe something like this could be built into the Eyesy’s firmware. Even just being able to divide visual reactions into two groups - bass and treble - would already make the visuals more dynamic.
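
A two-band version could probably reuse the simple_fft from the mode above - an untested sketch, with the split point just a guess to tune by ear:

def bass_treble(samples):
    # samples: a power-of-two-length slice of eyesy.audio_in
    mags = [abs(c) for c in simple_fft(samples)[:len(samples) // 2]]
    split = len(mags) // 4                           # rough bass/treble dividing line
    bass = sum(mags[1:split]) / (split - 1)          # skip bin 0, which holds the DC offset
    treble = sum(mags[split:]) / (len(mags) - split)
    return bass, treble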

This looks cool, did it work at all?

The problem is that doing the FFT in Python inside the draw loop might slow the graphics down. I thought numpy was installed, but I guess not? Using numpy to do the math would be faster, but ideally the audio analysis would happen in a separate process to take advantage of multicore processing (i.e. graphics and audio running on different CPU cores). There is already a separate process handling the audio input, so that would be the place to add additional analysis if we found a simple algorithm to use. Even just a few frequency bands like you mention would be helpful and lead to interesting graphics!
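
In plain Python the idea would look something like this - just an illustration of handing audio frames to a worker process, not how the EYESY engine is actually structured:

import multiprocessing as mp

def analysis_worker(audio_q, bands):
    # Runs on its own CPU core: pull audio frames off the queue,
    # analyze them, and write band levels into shared memory
    while True:
        frame = audio_q.get()
        level = sum(abs(v) for v in frame) / len(frame)  # placeholder analysis
        bands[0] = bands[1] = bands[2] = level           # real code would compute low/mid/high

if __name__ == '__main__':
    audio_q = mp.Queue(maxsize=2)  # short queue so the graphics side never blocks for long
    bands = mp.Array('d', 3)       # shared low/mid/high levels
    mp.Process(target=analysis_worker, args=(audio_q, bands), daemon=True).start()
    # the draw loop would push frames with audio_q.put_nowait(frame), catching
    # queue.Full when the worker falls behind, and read bands[0..2] to drive visuals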


No, numpy isn’t installed!
After lots of coaxing, I managed to get a simple mode going that does appear to work! Using some filtering, the audio signal is divided into low, mid and high bands. I’ve got three oscilloscopes going that react to the different frequency bands. The result isn’t perfect - there is still some frequency bleed, especially when playing low notes. But with music playing it looks pretty convincing.

Knob 1 - amplitude of the scopes
Knob 2 - line thickness
Knob 3 - filter tightness, though I think it reacts best with the knob mostly down (see the sketch after this list)
Knob 4 - standard color picker for the three scopes, including LFO. The colors of the three scopes are offset, so they will always be different.
Knob 5 - standard background color picker.
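
For the curious: the filters in the code below are one-pole filters of the form y[n] = a*y[n-1] + (1-a)*x[n], and knob 3 moves the coefficient a between 0.9 and 0.99. The cutoff frequency can be roughly estimated from a - a small sketch; I don’t actually know the effective sample rate of eyesy.audio_in, so the knob range was tuned by ear:

import math

def one_pole_cutoff(a, sample_rate):
    # Approximate -3 dB cutoff of y[n] = a*y[n-1] + (1-a)*x[n]
    return -sample_rate * math.log(a) / (2 * math.pi)

# a = 0.9 puts the cutoff near 1.7% of the sample rate; a = 0.99 near 0.16%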

Curious what you all think of this! It would be awesome to have some kind of frequency filtering built into the EYESY’s API so there doesn’t need to be so much extra code in the mode. Unfortunately my Python skills are quite beginner level, so I’m only able to do all this with ChatGPT. Perhaps a more experienced programmer could make even more of this?

import pygame
import math

# Smoothing buffers
prev_low = [0]*64
prev_mid = [0]*64
prev_high = [0]*64

def setup(screen, eyesy):
    global prev_low, prev_mid, prev_high
    prev_low = [0]*64
    prev_mid = [0]*64
    prev_high = [0]*64
    

def draw(screen, eyesy):
    global prev_low, prev_mid, prev_high

    width = int(eyesy.xres)
    height = int(eyesy.yres)
    
    # Knob controls
    amplitude = eyesy.knob1 * (height)
    thickness = int(1 + eyesy.knob2 * 9)
    hue_control = eyesy.knob4
    filter_tightness = 0.9 + eyesy.knob3 * 0.09  # Knob3: filter tightness

    # Normalize audio
    raw_audio = [v / 32768.0 for v in eyesy.audio_in[:64]]

    # Filters
    def low_pass(signal, strength):
        filtered = []
        last = 0
        for v in signal:
            last = strength * last + (1 - strength) * v
            filtered.append(last)
        return filtered

    def high_pass(signal, strength):
        filtered = []
        last_input = 0
        last_output = 0
        for v in signal:
            hp = strength * (last_output + v - last_input)
            filtered.append(hp)
            last_input = v
            last_output = hp
        return filtered

    # Apply double filters
    low_pass1 = low_pass(raw_audio, strength=filter_tightness)
    low_pass2 = low_pass(low_pass1, strength=filter_tightness)

    mid_pass1 = low_pass(raw_audio, strength=filter_tightness * 0.8)
    mid_pass2 = low_pass(mid_pass1, strength=filter_tightness * 0.8)

    high_pass1 = high_pass(raw_audio, strength=filter_tightness)
    high_pass2 = high_pass(high_pass1, strength=filter_tightness)

    # Build bands: mid is the difference of two low-passes with different cutoffs (a crude band-pass)
    signal_low = low_pass2[:]
    signal_mid = [m - l for m, l in zip(mid_pass2, low_pass2)]
    signal_high = high_pass2[:]

    # Normalize with noise gate
    def normalize(signal, noise_floor=0.02):
        max_val = max(abs(v) for v in signal)
        if max_val < noise_floor:
            return [0 for _ in signal]
        else:
            return [v / max_val for v in signal]

    signal_low = normalize(signal_low)
    signal_mid = normalize(signal_mid)
    signal_high = normalize(signal_high)

    # Smooth over time (temporal smoothing)
    def smooth_signal(signal, prev, alpha=0.9):
        return [alpha * p + (1 - alpha) * c for p, c in zip(prev, signal)]

    signal_low = smooth_signal(signal_low, prev_low, alpha=0.9)
    signal_mid = smooth_signal(signal_mid, prev_mid, alpha=0.9)
    signal_high = smooth_signal(signal_high, prev_high, alpha=0.9)

    # Store smoothed for next frame
    prev_low = signal_low
    prev_mid = signal_mid
    prev_high = signal_high

    # Background
    eyesy.color_picker_bg(eyesy.knob5)

    # Color control: lower half of knob4 picks fixed hues, upper half slowly rotates them over time
    def get_rotating_color(offset):
        t = pygame.time.get_ticks() / 1000.0
        hue = (t * 0.1 + offset) % 1.0
        return eyesy.color_picker(hue)

    if hue_control < 0.5:
        base_hue = hue_control * 2.0
        color_low = eyesy.color_picker((base_hue + 0.0) % 1.0)
        color_mid = eyesy.color_picker((base_hue + 0.33) % 1.0)
        color_high = eyesy.color_picker((base_hue + 0.66) % 1.0)
    else:
        color_low = get_rotating_color(0.0)
        color_mid = get_rotating_color(0.33)
        color_high = get_rotating_color(0.66)

    # Draw scopes
    def draw_scope(signal, y_center, color, x_offset=0):
        n = len(signal)
        points = []
        for i in range(n):
            x = int((i * (width / (n - 1))) + x_offset)
            y = int(y_center - signal[i] * amplitude)
            points.append((x, y))
        pygame.draw.lines(screen, color, False, points, thickness)

    draw_scope(signal_high, height * 0.25, color_high)
    draw_scope(signal_mid,  height * 0.50, color_mid, x_offset=3)  # slight x offset on the mid scope
    draw_scope(signal_low,  height * 0.75, color_low)
