Hi Jon,

Thank you for your insight, and welcome to the list!

I personally think that for most real-time applications (like the guitar effects processor I'm using), a latency at or below 8 to 10 ms is definitely acceptable (especially considering the price). I could imagine many applications for other instruments that would work just fine with such a latency.

Pd currently works fine with no GUI at 10 ms with simple patches. One has to increase the latency to 16 ms (maybe more for very heavy patches) to do FFTs or other demanding stuff (I used a phase vocoder in my video).
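For reference, the kind of setup I mean is just running Pd headless with the audio buffer given in milliseconds, something like this (the patch name is only a placeholder, and exact flags can differ between Pd versions and audio backends):

    pd -nogui -audiobuf 10 mypatch.pd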

So even if using the GPU for DSP doesn't reduce latency but does allow for bigger patches, that's already great news.

Cheers,

Pierre.

2013/2/9 Jonathan Sheaffer <jon@jonsh.net>
Hi All,

I've been a silent observer for some time now, but since GPU processing is 'close to my heart', I thought I'd jump in... So here goes my first post on the pd-list...

In general, GPUs are really beneficial for parallelisable algorithms involving heavy computations, such as FFTs, fast convolution, BLAS with huge matrices, finite-difference modelling, etc. To maximise performance, the GPU kernel needs to operate uniformly over a large enough data set, which needs to be copied into the device's memory, as GPUs generally can't access the host memory. This means large buffers --> increased latency. So doing 'real-time' DSP on a GPU would probably make more sense for stuff like physical modelling, complex additive synthesis, etc., rather than for 'generally reducing the system latency'.
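To make the copy overhead concrete, here is a minimal CUDA sketch of that copy-in / kernel / copy-out round trip per audio block. The block size and the trivial gain kernel are just placeholders, not code from any actual Pd external; the point is that the two cudaMemcpy calls are a fixed per-block cost that only pays off when the block (and therefore the latency) is large.

    // Minimal sketch: processing one audio block on the GPU the "classic" way.
    #include <cuda_runtime.h>
    #include <stdio.h>

    __global__ void gain(float *buf, int n, float g)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            buf[i] *= g;   // trivial per-sample work, stands in for real DSP
    }

    int main(void)
    {
        const int N = 4096;                  // hypothetical block size in samples
        float host[N];
        for (int i = 0; i < N; i++) host[i] = 0.5f;

        float *dev;
        cudaMalloc((void **)&dev, N * sizeof(float));

        // host -> device copy: fixed cost per block, regardless of kernel speed
        cudaMemcpy(dev, host, N * sizeof(float), cudaMemcpyHostToDevice);
        gain<<<(N + 255) / 256, 256>>>(dev, N, 0.8f);
        // device -> host copy: second fixed cost before the sound card sees the result
        cudaMemcpy(host, dev, N * sizeof(float), cudaMemcpyDeviceToHost);

        cudaFree(dev);
        printf("first sample: %f\n", host[0]);
        return 0;
    }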

*However*, if the SoC platform physically shares memory between the GPU and the CPU, then this could, in theory, help reduce the inherent latency (as no memory transfers would be required), but without having detailed documentation, this would be difficult to assess. 
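As a sketch of what that shared-memory case could look like with CUDA's mapped (zero-copy) host memory, assuming the device supports it; again the kernel and block size are placeholders. On an SoC where CPU and GPU share physical memory this removes the per-block copies entirely, whereas on a discrete card the accesses still cross the bus.

    // Minimal zero-copy sketch: the kernel reads/writes the host buffer directly.
    #include <cuda_runtime.h>

    __global__ void gain(float *buf, int n, float g)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            buf[i] *= g;
    }

    int main(void)
    {
        const int N = 256;   // a small block becomes viable when no copies are needed
        cudaSetDeviceFlags(cudaDeviceMapHost);

        float *host, *dev;
        cudaHostAlloc((void **)&host, N * sizeof(float), cudaHostAllocMapped);  // pinned, mapped buffer
        cudaHostGetDevicePointer((void **)&dev, (void *)host, 0);               // device-side alias

        for (int i = 0; i < N; i++) host[i] = 0.5f;

        gain<<<(N + 255) / 256, 256>>>(dev, N, 0.8f);   // operates on host memory in place
        cudaDeviceSynchronize();                        // wait before the host reads the result

        cudaFreeHost(host);
        return 0;
    }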

Cheers,
Jon.

www.jonsh.net
