John Harrison wrote: [snip]
And for me Gem also breaks many coding conventions of Pd.
I'm not trying to trash Gem. I have the utmost respect for its developers. I don't doubt it will be phenomenal with time, and I wish to support its continued development. But I am hesitant to recommend it while it is in its current, perhaps unfinished, state. Example and test patches work fine, but outside of that realm my experiences have not been positive. I have plans to document the problems I had and my thoughts about the coding conventions. It's possible I'm misunderstanding some things, or maybe my concerns will help future development.
I agree - I'd also extend the hesitation to Pd itself, but that's another matter entirely. I started writing a mail on a tangent to this topic a day or two ago (mainly sparked by the frustration that verbose and boring C is nicer to work with than Gem for a project making heavy use of multipass rendering with shaders); I'll just paste it here in its current, perhaps unfinished, state:
Hi all,
Wondering if there are any plans for dataflow on the GPU in Gem?
By this I mean that a patch cord would represent a pixel data transfer path on the GPU, and objects would process pixel data directly on the GPU.
I also do not mean using depth-first message passing as a mechanism to shoehorn the OpenGL state machine into Pd without concern for dataflow semantics (sorry if that sounds harsh - but it's the most counterintuitive aspect of Gem, imo).
Mainly I would want Gem to take care of allocating any temporary textures, binding/unbinding framebuffers + shaders, setting uniforms from inlets, etc., as it's a pain to do it manually (in any language).
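To give a sense of the boilerplate a GPU dataflow layer would hide, here is roughly what a single intermediate pass costs in raw OpenGL (a sketch, untested; the function and uniform names are just illustrative):

#include <GL/glew.h>

/* Allocate a texture as the render target of one pass and wrap it
 * in an FBO - a dataflow Gem could do this behind every cord. */
static GLuint make_pass_target(int w, int h, GLuint *fbo_out)
{
    GLuint tex, fbo;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex, 0);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    *fbo_out = fbo;
    return tex;
}

/* Draw a quad covering the target; assumes identity matrices. */
static void draw_fullscreen_quad(void)
{
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();
}

/* One processing pass: bind target + shader, point the shader at
 * the upstream texture, set a uniform (i.e. an inlet), draw. */
static void run_pass(GLuint fbo, GLuint prog, GLuint src_tex, float gain)
{
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glUseProgram(prog);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, src_tex);
    glUniform1i(glGetUniformLocation(prog, "image"), 0);
    glUniform1f(glGetUniformLocation(prog, "gain"), gain);
    draw_fullscreen_quad();
    glUseProgram(0);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
}

Multiply that by every object in a chain and it's clear why I'd rather the environment managed it.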
Would it make sense to make a set of "proof-of-concept" abstractions + shaders that port some subset of the pix_ objects to the GPU?
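For example, once the plumbing above is hidden, something like [pix_gain] reduces to a few lines of fragment shader, and the abstraction around it would presumably just wrap [glsl_program] plus a framebuffer, with the gain uniform fed from an inlet (again a sketch; the uniform names are made up):

/* GLSL fragment shader for a GPU port of [pix_gain], embedded
 * as a C string; "image" and "gain" are made-up uniform names. */
static const char *gain_frag_src =
    "uniform sampler2D image;\n"
    "uniform float gain; /* fed from an inlet */\n"
    "void main(void)\n"
    "{\n"
    "    vec4 c = texture2D(image, gl_TexCoord[0].st);\n"
    "    gl_FragColor = vec4(c.rgb * gain, c.a);\n"
    "}\n";

A handful of pix_ objects ported that way would probably be enough to find out whether the dataflow semantics can be made to feel right.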
Maybe this is all a bit too vague and I should do some research into other "dataflow on the GPU" environments, if there are any...