I would keep the buffer in one process only and have the other process(es) request frames to be fed to them. The shared memory used by pix_share is protected by locks, so the data can only be read or written by one process at a time; your massive shared buffer idea would not really be any more efficient.
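For reference, here is a rough C sketch of the kind of lock-protected shared segment this implies, assuming POSIX shm and a named semaphore (names and sizes are made up for illustration, this is not pix_share's actual code). The single lock is why readers and writers get serialized no matter how big the segment is:

    /* Illustrative only: one shared segment guarded by one lock. */
    #include <fcntl.h>
    #include <semaphore.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define FRAME_BYTES (640 * 480 * 4)    /* one RGBA frame, size assumed */

    int main(void)
    {
        /* create or attach the shared segment */
        int fd = shm_open("/frame_buf", O_CREAT | O_RDWR, 0666);
        ftruncate(fd, FRAME_BYTES);
        unsigned char *buf = mmap(NULL, FRAME_BYTES, PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, 0);

        /* one named semaphore guards the whole segment, so only one
           process can touch the data at a time */
        sem_t *lock = sem_open("/frame_buf_lock", O_CREAT, 0666, 1);

        sem_wait(lock);                  /* every reader/writer blocks here */
        memset(buf, 0, FRAME_BYTES);     /* ...while one process works      */
        sem_post(lock);

        sem_close(lock);
        munmap(buf, FRAME_BYTES);
        close(fd);
        return 0;
    }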
On Fri, Mar 6, 2009 at 5:01 PM, B. Bogart <ben@ekran.org> wrote:
Thanks Chris + Jack,
This is already how I'm using pix_share.
I'm not using it to transport video but rather a database of random-access
frames.
Ideally I'd be able to share the whole pix_buffer, rather than:
* iterating over each frame and dumping it into a pix_share to be read
into another buffer in the other PD process (not very memory efficient), or
* using a separate pix_share for each slot in the pix_buffer (not very
scalable; my patch currently has 2500 slots).
I think a [pix_buffer_share] would be a useful object for cases when one
wants to share more than a single frame.
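Just to sketch what I'm imagining, the segment such an object would map
could be laid out something like this (only a rough C sketch; the frame
size and field names are assumptions, the 2500 is the slot count from my
patch):

    /* Hypothetical layout for one shared segment holding many slots. */
    #include <stddef.h>

    #define NUM_SLOTS   2500                  /* slots in my pix_buffer  */
    #define FRAME_BYTES (640 * 480 * 4)       /* assumed RGBA frame size */

    typedef struct {
        int           width, height, format;  /* per-slot image header   */
        unsigned char data[FRAME_BYTES];
    } slot_t;

    typedef struct {
        int    num_slots;                     /* written once by the owner */
        slot_t slots[NUM_SLOTS];              /* random-access frame store */
    } shared_buffer_t;

    /* the size both PD processes would map: */
    static const size_t SEGMENT_BYTES = sizeof(shared_buffer_t);

Both processes could then map the same segment and index slots directly,
instead of copying frames one at a time through a single pix_share.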
There I go, dreaming again.
.b.
chris clepper wrote:
> pix_share does do exactly what you ask. The same buffer is used for
> both read and write and I moved 1920x1080@30fps between processes with
> no problem.
>
> On Fri, Mar 6, 2009 at 12:59 PM, B. Bogart <ben@ekran.org> wrote:
>
> Hey all,
>
> Is there a way to share a whole pix_buffer between PD processes?
>
> I'm running my SOM stuff in a second PD instance (to make use of the
> second core in my installation machine). But as I'm developing, both PD
> instances are getting more tightly coupled and I'd like to share a whole
> pix_buffer.
>
> The alternative is using two [pix_share]s, one for input and the other for
> output, controlled by netsend. The problem with this is that I need to
> send a lot of data quickly, 10 ms between new images, and I think that
> could cause lots of problems.
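> (Rough numbers: assuming, say, 640x480 RGBA frames, that's about 1.2 MB
> per frame, and a new image every 10 ms works out to roughly 120 MB/s
> moving between the processes.)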
>
> I don't think I'll be able to get the CPU usage of the second patch down
> low enough to not interfere with rendering in the main PD patch.
>
> Thanks,
> B.
>