the ETC software takes the approach of redrawing the entire frame buffer every frame, and that's 'hardcoded' in main.py. if you changed this to doing updates on dirty areas, then this masking would be 'trivial' — sorry, would be 'trivial': you just mark the area you want to unmask as dirty and leave the rest alone… but that does require a change in the software. (something I'm considering for OTC)
I've also recently been messing about with overlays (which might be another approach), colour keys, alpha layers… and it's nice, but I found the frame rate dropped quite significantly, so it's more useful for 'effects' than constant music 'animation'
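for anyone wanting to try the colour-key approach: here's a minimal pygame sketch (pygame being what the ETC modes use). the sizes and key colour are just placeholders, not anything from the ETC itself — pixels drawn in the key colour are simply skipped when the overlay is blitted:

```python
import pygame

pygame.init()

# an overlay surface: everything left in the key colour becomes transparent
overlay = pygame.Surface((320, 240))
KEY = (255, 0, 255)            # magenta as the transparency key (arbitrary choice)
overlay.fill(KEY)
pygame.draw.circle(overlay, (255, 255, 255), (160, 120), 40)
overlay.set_colorkey(KEY)

base = pygame.Surface((320, 240))
base.fill((0, 0, 255))         # blue 'background' layer
base.blit(overlay, (0, 0))     # key-coloured pixels are skipped during the blit

print(base.get_at((160, 120))[:3])  # centre comes from the overlay: (255, 255, 255)
print(base.get_at((0, 0))[:3])      # corner shows through: (0, 0, 255)
```

note the colorkey test happens per pixel on every blit, which is likely where the frame-rate cost comes from.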
(that said, I've only been messing with this for a few days, so I'm by no means an expert)
maybe we can employ a version of lpmt inside the ETC?? http://hv-a.com/lpmt/?page_id=63
there's some source code and a 32-bit Linux binary at the Little Projection Mapping Tool site; it's written in openFrameworks and Python, if I'm not mistaken
Another alternative would be to simply have a transparent PNG (solid black with an opening in the middle) as the top layer of a patch. This would expose the layers underneath. If I wanted to apply this to a pre-existing patch, would it be fairly easy?
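mechanically this is pretty simple: blit a per-pixel-alpha surface as the last thing each frame. a sketch below, building the mask in code so it's self-contained — on the ETC you'd load your PNG instead (the filename and opening size here are made up):

```python
import pygame

pygame.init()
W, H = 320, 240

# stand-in for whatever the patch has already drawn this frame
screen = pygame.Surface((W, H))
screen.fill((0, 200, 0))

# the mask: solid black everywhere, fully transparent in a central opening.
# on the ETC you would load it from a PNG instead, e.g. (hypothetical path):
#   mask = pygame.image.load('mask.png').convert_alpha()
mask = pygame.Surface((W, H), pygame.SRCALPHA)
mask.fill((0, 0, 0, 255))
pygame.draw.rect(mask, (0, 0, 0, 0), pygame.Rect(100, 70, 120, 100))

screen.blit(mask, (0, 0))  # last blit of the frame, on top of everything

print(screen.get_at((160, 120))[:3])  # inside the opening: (0, 200, 0)
print(screen.get_at((10, 10))[:3])    # masked area: (0, 0, 0)
```

so retrofitting an existing patch is one blit added at the end of its draw function — but as discussed below, the per-pixel alpha blend is exactly the kind of thing that eats the frame rate.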
I think (assuming my code was correct) this is what I tried, and the frame rate halved; given the ETC only runs at 30fps, we're in 15-20fps territory… that said, I'd need to retest to be certain, as this was just one of a whole bunch of things I was trying, so it's possible I had something else turned on as well.
the real 'fix' is definitely region-based drawing; whilst more difficult to code, the possibilities are worth it, e.g. it makes things like sprite animation possible.
that hv-a stuff is not going to work on the ETC/Organelle; there just isn't the spare CPU available.
essentially, you're asking to render everything in python, then do another rendering pass in another app.
we really have to remember that with these small devices you have to do things directly and efficiently; you can't go layering stuff up like you can on a modern desktop/laptop.
as another aside, I've noticed there really doesn't seem to be any hardware acceleration (GPU) going on, despite there being a GPU… in particular, the main issue is no hardware blit. I'm not sure if this is a driver/pygame/SDL issue, or just what we are using in the modes and how it's coded in the ETC.
(I enabled a hardware surface and performance was even worse)
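worth noting that `HWSURFACE` is only a request: SDL silently falls back to a software surface, so it's worth checking what you actually got before drawing any conclusions. a quick diagnostic sketch (assumes pygame 1/SDL1 semantics, which is what I believe the ETC ships; pygame 2/SDL2 ignores this flag entirely; the dummy driver line is only so this runs headless):

```python
import os
import pygame

os.environ.setdefault("SDL_VIDEODRIVER", "dummy")  # headless only; drop on real hardware
pygame.init()

# ask for a hardware surface; SDL may silently give us software instead
screen = pygame.display.set_mode((320, 240), pygame.HWSURFACE)

# check what we actually got back
hw = bool(screen.get_flags() & pygame.HWSURFACE)
print("hardware surface:", hw)
```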
this got me wondering if Processing might be a better alternative, but I really don't have time to explore it now
honestly mkunoff, you should NOT be trying to do mapping with an IG — it's too processor-intensive, as it seems you are both finding. I would take the output from the ETC into VDMX/MadMapper running on a Mac with 8GB or something with a little more GPU [or ANY! ] GPU powah.
I agree, now that I understand the python process better. I actually do very intensive projection mapping already with CoGe on a really powerful Mac laptop. I was using Modul8, and Mad Mapper was a smart upgrade, but I really don't need the power inside Mad Mapper. CoGe is more modular and supports Quartz Composer and VUO [http://vuo.org/] natively, which is super awesome.
What actually attracts me to the ETC is its limitations and being able to "play it" like an instrument. I never had that much fun VJing with computers, but the ETC is so much fun! Like I said, I'm not opposed to continuing to use this old-skool masking method. LOL - it works!!