So I did a little speed test between C and Pd for doing array-to-array
copying. Pd didn't fare so well. Can anyone think of a better
algorithm for copying in Pd?
.hc
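(For reference, a minimal sketch of what the C/C++ side of such an array-copy timing test might look like. The buffer size, iteration count and use of std::chrono are assumptions made for illustration; this is not the code from the original test.)

// Hypothetical timing of a plain memcpy-based array copy.
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <cstring>
#include <vector>

int main()
{
    const std::size_t n = 1 << 20;   // 1M floats, arbitrary size
    const int iterations = 100;
    std::vector<float> src(n, 1.0f), dst(n, 0.0f);

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i)
        std::memcpy(dst.data(), src.data(), n * sizeof(float));
    auto t1 = std::chrono::steady_clock::now();

    double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
    std::printf("%d copies of %zu floats: %.3f ms\n", iterations, n, ms);
    return 0;
}

The Pd side of the comparison presumably does the equivalent with [until] driving [tabread]/[tabwrite], which is where the interpretation overhead discussed below comes in.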
hi hans,
On Sun, 2006-07-30 at 23:23 -0400, Hans-Christoph Steiner wrote:
So I did a little speed test between C and Pd for doing array-to-array copying. Pd didn't fare so well. Can anyone think of a better
algorithm for copying in Pd?
well, as pd is a purely interpreted language, i doubt that you can improve it without writing a compiler for pd patches ... and even then you'll have overhead for stack management and message handling ... a pure c or even simd implementation will always be way faster ...
cheers ... tim
-- tim@klingt.org ICQ: 96771783 http://www.mokabar.tk
A paranoid is a man who knows a little of what's going on. William S. Burroughs
On Jul 31, 2006, at 2:01 AM, Tim Blechmann wrote:
hi hans,
On Sun, 2006-07-30 at 23:23 -0400, Hans-Christoph Steiner wrote:
So I did a little speed test between C and Pd for doing array-to-array copying. Pd didn't fare so well. Can anyone think of a better algorithm for copying in Pd?
well, as pd is a purely interpreted language, i doubt that you can improve it without writing a compiler for pd patches ... and even then you'll have overhead for stack management and message handling ... a pure c or even simd implementation will always be way faster ...
Anyone feel like SIMDifying [arraycopy] (hint hint ;). I suppose
vasp is a SIMD version of [arraycopy].
.hc
On Mon, 31 Jul 2006, Hans-Christoph Steiner wrote:
well, as pd is a purely interpreted language, i doubt that you can improve it without writing a compiler for pd patches ... and even then you'll have overhead for stack management and message handling ... a pure c or even simd implementation will always be way faster ...
Anyone feel like SIMDifying [arraycopy] (hint hint ;). I suppose vasp is a SIMD version of [arraycopy].
SIMD doesn't help in copying data. And in any case, introducing platform dependent code is only advisable in cases where it really matters.
Is there a special reason you want an ultrafast arraycopy ?
Günter
On Tue, 2006-08-01 at 11:28 +0200, geiger wrote:
Anyone feel like SIMDifying [arraycopy] (hint hint ;). I suppose vasp is a SIMD version of [arraycopy].
SIMD doesn't help in copying data. And in any case, introducing platform dependent code is only advisable in cases where it really matters.
are you sure about this? not having benchmarks on this, i'm pretty sure that moving 128 bit chunks of aligned memory is more efficient than moving four 32 bit chunks of unaligned memory ...
tim
-- tim@klingt.org ICQ: 96771783 http://www.mokabar.tk
Avoid the world, it's just a lot of dust and drag and means nothing in the end. Jack Kerouac
On 8/1/06, Tim Blechmann TimBlechmann@gmx.net wrote:
On Tue, 2006-08-01 at 11:28 +0200, geiger wrote:
Anyone feel like SIMDifying [arraycopy] (hint hint ;). I suppose vasp is a SIMD version of [arraycopy].
SIMD doesn't help in copying data. And in any case, introducing platform dependent code is only advisable in cases where it really matters.
are you sure about this? not having benchmarks on this, i'm pretty sure that moving 128 bit chunks of aligned memory is more efficient than moving four 32 bit chunks of unaligned memory ...
The cachelines and prefetch don't change for SIMD, so that will be the limitation on blocks outside of L1 and L2. Copying memory is inefficient no matter what the code is since it doesn't really do any work on the data.
The basic calls like memcpy() should have CPU specific code on every platform (Windows seems a little suspect though). In some cases where the data is not in a single linear array memcpy() might not be the most efficient way to copy, but in general it is hard to beat.
cgc
The cachelines and prefetch don't change for SIMD, so that will be the limitation on blocks outside of L1 and L2. Copying memory is inefficient no matter what the code is since it doesn't really do any work on the data.
not sure ... http://itdp.fh-biergarten.de/~itdp/html/mplayer-dev-eng/2003-01/msg00190.htm...
The basic calls like memcpy() should have CPU specific code on every platform (Windows seems a little suspect though). In some cases where the data is not in a single linear array memcpy() might not be the most efficient way to copy, but in general it is hard to beat.
well, memcpy is not optimized for:
- chunks of 128 bit
- aligned memory ...
cheers ... tim
-- tim@klingt.org ICQ: 96771783 http://www.mokabar.tk
After one look at this planet any visitor from outer space would say "I want to see the manager." William S. Burroughs
On 8/1/06, Tim Blechmann TimBlechmann@gmx.net wrote:
well, memcpy is not optimized for:
- chunks of 128bit
- aligned memory ...
That's a problem with your libc then (OSX has no such problems and Windows isn't that bad). Any memcpy() not ready to handle the proper sized cachelines and aligned memory is extremely poor.
cgc
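(The aligned 128-bit copy being argued about would look roughly like the sketch below: SSE intrinsics moving four floats per load/store pair. This is only an illustration, assuming both buffers are 16-byte aligned and the length is a multiple of four; it is nobody's actual code from this thread, and a good memcpy() does something similar internally.)

// Hypothetical aligned SSE copy: 128 bits (four floats) per move.
// Assumes dst and src are 16-byte aligned and n is a multiple of 4.
#include <xmmintrin.h>
#include <cstddef>

void copy_aligned_sse(float *dst, const float *src, std::size_t n)
{
    for (std::size_t i = 0; i < n; i += 4) {
        __m128 v = _mm_load_ps(src + i);   // one aligned 128-bit load
        _mm_store_ps(dst + i, v);          // one aligned 128-bit store
    }
}

Whether this beats the libc memcpy() is exactly the open question here; once the arrays fall out of L1/L2, both are limited by memory bandwidth rather than by instruction choice.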
On Tue, 1 Aug 2006, Tim Blechmann wrote:
SIMD doesn't help in copying data. And in any case, introducing platform dependent code is only advisable in cases where it really matters.
are you sure about this? not having benchmarks on this, i'm pretty sure that moving 128 bit chunks of aligned memory is more efficient than moving four 32 bit chunks of unaligned memory ...
I am not sure, but for me SIMD is a special assembler instruction set, which is definitely not used when copying data. Maybe that's my misinterpretation.
Günter
Anyone feel like SIMDifying [arraycopy] (hint hint ;). I suppose vasp is a SIMD version of [arraycopy].
SIMD doesn't help in copying data. And in any case, introducing platform dependent code
is it? Linux, Windows, and Mac OS X all run on x86 nowadays, and GCC works great on each. i never understood this argument against SIMD. i'm thinking Miller just wanted PD to be slow :)
On Aug 1, 2006, at 5:28 AM, geiger wrote:
On Mon, 31 Jul 2006, Hans-Christoph Steiner wrote:
well, as pd is a purely interpreted language, i doubt that you can improve it without writing a compiler for pd patches ... and even then you'll have overhead for stack management and message handling ... a pure c or even simd implementation will always be way faster ...
Anyone feel like SIMDifying [arraycopy] (hint hint ;). I suppose vasp is a SIMD version of [arraycopy].
SIMD doesn't help in copying data. And in any case, introducing platform dependent code is only advisable in cases where it really matters.
Is there a special reason you want an ultrafast arraycopy ?
I have a patch that records to a large buffer, then copies chunks of
the buffer array to other arrays, where they are then individually
controlled. I am hoping to be able to do this very frequently,
like 10+ times a second. I would also like to be able to run this
patch on PCs that I find on the street here, like Pentium III 600 MHz.
.hc
Hans-Christoph Steiner wrote:
I have a patch that records to a large buffer, then copies chunks of the buffer array to other arrays, where they are then individually controlled. I am hoping to be able to do this very frequently, like 10+ times a second. I would also like to be able to run this patch on PCs that I find on the street here, like Pentium III 600 MHz.
Oh yes then it's true the streets of NYC are paved with gold... Martin
On Aug 1, 2006, at 7:22 PM, Martin Peach wrote:
Hans-Christoph Steiner wrote:
I have a patch that records to a large buffer, then copies chunks
of the buffer array to other arrays, where they are then
individually controlled. I am hoping to be able to do this
very frequently, like 10+ times a second. I would also like to be
able to run this patch on PCs that I find on the street here, like
Pentium III 600 MHz.
Oh yes then it's true the streets of NYC are paved with gold... Martin
This is an amazingly wasteful place sometimes, but it's a great place
to live off the fat of the land. The vast majority of people I know
have furniture they've gotten from the street, for example. I've
seen dumpsters full of working computer equipment. It's heinous
actually, unless you can get that equipment to people who can use it.
I've actually had to stop collecting street-find PCs. I just
couldn't keep more than 20 PCs around... now 10 of them are finding
new life in the Pd compile farm!
.hc
On Tue, 1 Aug 2006, Hans-Christoph Steiner wrote:
I have a patch that records to a large buffer, then copies chunks of the buffer array to other arrays, where they are then individually controlled. I am hoping to be able to do this very frequently, like 10+ times a second. I would also like to be able to run this patch on PCs that I find on the street here, like Pentium III 600 MHz.
good luck with SSE then :)
In any case, your patch will be faster and more stable if you find another solution to the problem, like accessing the arrays without copying them. As Chris explained, copying around large chunks of data in a real time environment is not a very good idea.
Günter
On Aug 2, 2006, at 5:22 AM, geiger wrote:
On Tue, 1 Aug 2006, Hans-Christoph Steiner wrote:
I have a patch that records to a large buffer, then copies chunks of the buffer array to other arrays, where they are then individually controlled. I am hoping to be able to do this very frequently, like 10+ times a second. I would also like to be able to run this patch on PCs that I find on the street here, like Pentium III 600 MHz.
good luck with SSE then :)
In any case, your patch will be faster and more stable if you find another solution to the problem, like accessing the arrays without
copying them. As Chris explained, copying around large chunks of data in a
real time environment is not a very good idea.
It's working pretty well as is on an Athlon 1700 (1350MHz, I think).
What counts as a "large chunk"? I am mostly copying between 100ms
and 900ms of mono, 48k, audio data. Is that large?
I would love to hear suggestions as to how I could do this
differently. The problem is that I want to have each sound snippet
stored for a while and separately controllable. Perhaps I could just
use a massive buffer as a ringbuffer then use start and end points to
reference locations in the array. But at some point, it's going to
have to loop around in the ringbuffer and that could be quite tricky
to handle well.
.hc
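(A hedged sketch of the wraparound case that makes the ringbuffer approach feel tricky: a read of len samples starting at start is either one contiguous span or two. All names here are invented for illustration.)

// Hypothetical ring-buffer segment read with wraparound handling.
// Assumes start < ring_size and len <= ring_size.
#include <cstddef>
#include <cstring>

void ring_read(float *dst, const float *ring, std::size_t ring_size,
               std::size_t start, std::size_t len)
{
    std::size_t first = ring_size - start;   // samples left before the end
    if (len <= first) {
        std::memcpy(dst, ring + start, len * sizeof(float));
    } else {
        std::memcpy(dst, ring + start, first * sizeof(float));          // tail of the buffer
        std::memcpy(dst + first, ring, (len - first) * sizeof(float));  // wrapped head
    }
}

The same split applies if the segment is read in place sample by sample instead of copied: index arithmetic modulo ring_size replaces the second memcpy.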
On Wed, 9 Aug 2006, Hans-Christoph Steiner wrote:
What counts as a "large chunk"? I am mostly copying between 100ms and 900ms of mono, 48k, audio data. Is that large?
depends on your machine.
I would love to hear suggestions as to how I could do this differently. The problem is that I want to have each sound snippet stored for a while and separately controllable. Perhaps I could just use a massive buffer as a ringbuffer then use start and end points to reference locations in the array. But at some point, it's going to have to loop around in the ringbuffer and that could be quite tricky to handle well.
I don't really understand how you do it with copy. You only need to copy data if you modify it later. Otherwise just record it to the right place and use it there directly.
Günter
On Aug 9, 2006, at 5:13 PM, geiger wrote:
On Wed, 9 Aug 2006, Hans-Christoph Steiner wrote:
What counts as a "large chunk"? I am mostly copying between 100ms and 900ms of mono, 48k, audio data. Is that large?
depends on your machine.
Pentium III 700... or G4 800.
I would love to hear suggestions as to how I could do this differently. The problem is that I want to have each sound snippet stored for a while and separately controllable. Perhaps I could just use a massive buffer as a ringbuffer then use start and end points to reference locations in the array. But at some point, it's going to have to loop around in the ringbuffer and that could be quite tricky to handle well.
I don't really understand how you do it with copy. You only need to copy data if you modify it later. Otherwise just record it to the right place and use it there directly.
Yup, it's modified. First, I need to fade it in and out to remove
clicks, then I want to also be able to modify the sound in the future.
.hc
On Wed, 9 Aug 2006, Hans-Christoph Steiner wrote:
depends on your machine.
Pentium III 700... or G4 800.
well, yeah, you have to figure it out. if it works then it's cool.
I would love to hear suggestions as to how I could do this differently. The problem is that I want to have each sound snippet stored for a while and separately controllable. Perhaps I could just use a massive buffer as a ringbuffer then use start and end points to reference locations in the array. But at some point, it's going to have to loop around in the ringbuffer and that could be quite tricky to handle well.
I don't really understand how you do it with copy. You only need to copy data if you modify it later. Otherwise just record it to the right place and use it there directly.
Yup, it's modified. First, I need to fade it in and out to remove clicks, then I want to also be able to modify the sound in the future.
I would fade and modify during playback. They could also go in the recording stage if you are recording to more than one place, but that seems a bit of a waste of memory. YMMV.
Günter
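(A minimal sketch of that fade-during-playback idea, reading straight out of the shared buffer so no copy is needed first. Function and parameter names are made up; assumes fade_len > 0 and fade_len <= len / 2.)

// Hypothetical fade applied at playback time, reading in place.
#include <cstddef>

void play_with_fade(float *out, const float *src,
                    std::size_t len, std::size_t fade_len)
{
    for (std::size_t i = 0; i < len; ++i) {
        float gain = 1.0f;
        if (i < fade_len)
            gain = static_cast<float>(i) / fade_len;        // fade in
        else if (len - i <= fade_len)
            gain = static_cast<float>(len - i) / fade_len;  // fade out
        out[i] = src[i] * gain;   // src can point directly into the recording buffer
    }
}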
On 8/9/06, Hans-Christoph Steiner hans@eds.org wrote:
I would love to hear suggestions as to how I could do this differently. The problem is that I want to have each sound snippet stored for a while and separately controllable. Perhaps I could just use a massive buffer as a ringbuffer then use start and end points to reference locations in the array. But at some point, it's going to have to loop around in the ringbuffer and that could be quite tricky to handle well.
I don't really understand how you do it with copy. You only need to copy data if you modify it later. Otherwise just record it to the right place and use it there directly.
Yup, it's modified. First, I need to fade it in and out to remove clicks, then I want to also be able to modify the sound in the future.
If you already have audio running, wouldn't it be simpler to use [tabread~] and [tabwrite~] than [until]? Or maybe that would be slower; I don't understand the internal operation well enough to say. There must be some way of fooling Pd into using the data without explicitly copying it. A feedback loop or something? Though I guess it has to access the same memory regardless. Your initial arraycopy subpatch has an ambiguous [0( message sent to both the float and the [+ 1( attached to it, btw.
Hi Hans-Christoph,
Anyone feel like SIMDifying [arraycopy] (hint hint ;).
You know that SIMD code won't go into the main branch? It's easy but given the current state of arrays in PD (drawing, relation to DSP) i won't invest time into it.
I suppose vasp is a SIMD version of [arraycopy].
VASP modular is much more... it's a system providing various means for non-realtime manipulation of sample data. It's frozen (but well usable) in the current state and will later resurrect as a Python library based on numpy.
greetings, Thomas
On Aug 1, 2006, at 8:12 AM, Thomas Grill wrote:
Hi Hans-Christoph,
Anyone feel like SIMDifying [arraycopy] (hint hint ;).
You know that SIMD code won't go into the main branch? It's easy but given the current state of arrays in PD (drawing,
relation to DSP) i won't invest time into it.
[arraycopy] is in maxlib, so you can freely modify the sources.
I suppose vasp is a SIMD version of [arraycopy].
VASP modular is much more... it's a system providing various means
for non-realtime manipulation of sample data. It's frozen (but well usable) in the current state and will later
resurrect as a Python library based on numpy.
That I know, but I just need to copy arrays right now...
.hc
So I did a little speed test between C and Pd for doing array-to-array copying. Pd didn't fare so well. Can anyone think of a better algorithm for copying in Pd?
.hc
hi hc
added zexy/tabdump and zexy/tabset to copy the array, but [arraycopy] is much faster.
if i have several methods to do the same thing, i use [realtime] and run a little competition.
on ppc-osx (1.33ghz) the winner is :
:-)
c.u eni
Hello, Enrique Erne wrote:
So I did a little speed test between C and Pd for doing array-to-array copying. Pd didn't fare so well. Can anyone think of a better algorithm for copying in Pd?
.hc
added zexy/tabdump and zexy/tabset to copy the array, but [arraycopy] is much faster.
And I added an alternative version of writing values to an array inside Pd by just sending "index value(s)" lists to the array name.
Interestingly, this is a little bit faster than using [tabwrite]. It could maybe be made even faster by writing more than one value in one go.
Frank Barknecht _ ______footils.org_ __goto10.org__
On Jul 31, 2006, at 4:47 PM, Frank Barknecht wrote:
Hello, Enrique Erne wrote:
So I did a little speed test between C and Pd for doing array-to-array copying. Pd didn't fare so well. Can anyone think of a better algorithm for copying in Pd?
.hc
added zexy/tabdump and zexy/tabset to copy the array, but [arraycopy] is much faster.
And I added an alternative version of writing values to an array inside Pd by just sending "index value(s)" lists to the array name.
Interestingly, this is a little bit faster than using [tabwrite]. It could maybe be made even faster by writing more than one value in one go.
Now that I think about it, how much would SIMD/altivec help to speed
up the Pd internals for something like this? That would be very nice
indeed.
.hc
Now that I think about it, how much would SIMD/altivec help to speed up the Pd internals for something like this? That would be very nice indeed.
Tim showed some benchmarks in winter ... maybe he can post them to the list ...
i've posted the slides of my presentation (german language, sorry) to: http://puredata.org/Members/timblech/slidesiem/
speedups in short are:
- linear operations (copy, set, vector math): 1.2 - 3
- linearizable operations (clip, min/max, sign, denormal bashing): 3 - 4
- accumulating operations (envelope following): 2 - 3
- special operations (rsqrt, rcp): 8
however, lately i've been implementing these algorithms with compile-time loop unrolling using c++ template metaprogramming techniques, which gives another performance boost of about 1.2 to 1.5, depending on the block size ...
cheers ... tim
-- tim@klingt.org ICQ: 96771783 http://www.mokabar.tk
I had nothing to offer anybody except my own confusion Jack Kerouac
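(A rough sketch of compile-time loop unrolling with template recursion, in the spirit described above. This is a generic illustration, not Tim's actual code; BLOCK is an assumed block size.)

// Hypothetical unrolled copy: the recursion is expanded at compile time,
// so the generated code contains no loop at all.
template <int N>
struct unrolled_copy {
    static inline void run(float *dst, const float *src) {
        dst[N - 1] = src[N - 1];
        unrolled_copy<N - 1>::run(dst, src);
    }
};

template <>
struct unrolled_copy<0> {
    static inline void run(float *, const float *) {}
};

const int BLOCK = 64;   // assumed DSP block size, for illustration only

void copy_block(float *dst, const float *src)
{
    unrolled_copy<BLOCK>::run(dst, src);
}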
hey tim,
On Aug 1, 2006, at 6:12 AM, Tim Blechmann wrote:
however, lately i've been implementing these algorithms with compile-time loop unrolling using c++ template metaprogramming techniques, which gives another performance boost of about 1.2 to 1.5, depending on the block size ...
...sounds good: I've just been learning about templates and generic
programming in general...on my to-do list, I've long dreamed of
adding macstl (don't be fooled by the name, it supports altivec/sse/
sse2) support to pd and friends: you might wanna take a look, it's
just some headers :-)
jamie
On Aug 1, 2006, at 6:12 AM, Tim Blechmann wrote:
however, lately i've been implementing these algorithms with compile-time loop unrolling using c++ template metaprogramming techniques, which gives another performance boost of about 1.2 to 1.5, depending on the block size ...
...sounds good: I've just been learning about templates and generic
programming in general...on my to-do list, I've long dreamed of
adding macstl (don't be fooled by the name, it supports altivec/sse/ sse2) support to pd and friends: you might wanna take a look, it's
just some headers :-)
i had a brief look at macstl ... a std::valarray with simd instructions ... that looks really good ... i'm just curious if it contains support for non-trivial algorithms like clipping and such ... i'll definitely have a closer look at the implementation ...
cheers ... tim
-- tim@klingt.org ICQ: 96771783 http://www.mokabar.tk
Desperation is the raw material of drastic change. Only those who can leave behind everything they have ever believed in can hope to escape. William S. Burroughs
On Tue, 1 Aug 2006, james tittle wrote:
...sounds good: I've just been learning about templates and generic programming in general...on my to-do list, I've long dreamed of adding macstl (don't be fooled by the name, it supports altivec/sse/sse2) support to pd and friends: you might wanna take a look, it's just some headers :-)
I've been using manual (that is, non-gcc) loop unrolling since 2001 in GridFlow, using macros, and indeed it's a lot faster than without unrolling. You don't need C++ templates to do generic programming.
At one point I replaced part of the macros by C++ templates to make it easier to swallow for C++ dudes. Turns out it's more complicated and it's not any faster and then there are still macros left for doing the things that C++ templates just can't do. The only alternative for the remaining macros is to copy and paste.
Btw GridFlow doesn't do much SIMD and has some problems with alignment, which may explain some of its speed problems... but GridFlow surely does nice loop-unrolling.
(note: Recent versions of GCC do better loop-unrolling on their own than GCC 2.95 did back when I started)
_ _ __ ___ _____ ________ _____________ _____________________ ... | Mathieu Bouchard - tél:+1.514.383.3801 - http://artengine.ca/matju | Freelance Digital Arts Engineer, Montréal QC Canada
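(For contrast with the template version sketched earlier, a macro-based unrolling in the spirit Mathieu describes: the preprocessor pastes the loop body four times, with a scalar tail loop for the remainder. This is an invented example, not GridFlow code.)

// Hypothetical macro unrolling: body is expanded four times per iteration.
#define UNROLL4(i, n, body) \
    for (i = 0; i + 4 <= (n); i += 4) { \
        { const int k = i;     body } \
        { const int k = i + 1; body } \
        { const int k = i + 2; body } \
        { const int k = i + 3; body } \
    } \
    for (; i < (n); ++i) { const int k = i; body }

void scale(float *dst, const float *src, int n, float gain)
{
    int i;
    UNROLL4(i, n, dst[k] = src[k] * gain;)
}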
On Sun, 30 Jul 2006, Hans-Christoph Steiner wrote:
So I did a little speed test between C and Pd for doing array-to-array copying. Pd didn't fare so well. Can anyone think of a better algorithm for copying in Pd?
Is there a [tabread] that can read several values at once using several indices? I mean something like what [#store] does. But because it would use lists of atoms it wouldn't be so much faster (maybe twice). If you want something really fast, for arrays, look into the VASP library (by Thomas Grill).
_ _ __ ___ _____ ________ _____________ _____________________ ... | Mathieu Bouchard - tél:+1.514.383.3801 - http://artengine.ca/matju | Freelance Digital Arts Engineer, Montréal QC Canada