----- Original Message -----
From: Mathieu Bouchard <matju@artengine.ca>
To: Matteo Sisti Sette <matteosistisette@gmail.com>
Cc: PD-List <pd-list@iem.at>; gridflow-dev@artengine.ca
Sent: Thursday, November 24, 2011 12:33 PM
Subject: Re: [PD] GridFlow slowness
On 2011-11-23 at 01:11:00, Matteo Sisti Sette wrote:
But do any of these factors change when using an interpreted language or
environment as opposed to doing this "natively" (e.g. in C++)?
It depends on how much the interpreted language is actually compiled, and how it interacts with « less compiled » parts.
In Pd, nearly every external or internal class is written in C or C++, while all abstractions are written in an interpreted language, namely Pd itself. Some other externals are written in other languages (Tcl, Lua, Python, etc.; I formerly used Ruby).
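To give an idea of what "written in C" means here: an external is just a small C file compiled against m_pd.h. Here is a sketch of the classic counter example from the externals HOWTO (names are illustrative, and any real external needs the usual build setup around it):

    #include "m_pd.h"

    static t_class *counter_class;

    typedef struct _counter {
        t_object x_obj;      /* mandatory object header */
        t_float  x_count;
    } t_counter;

    /* called when the object receives a bang */
    static void counter_bang(t_counter *x)
    {
        outlet_float(x->x_obj.ob_outlet, x->x_count);
        x->x_count += 1;
    }

    /* constructor: called when a [counter] box is created in a patch */
    static void *counter_new(void)
    {
        t_counter *x = (t_counter *)pd_new(counter_class);
        x->x_count = 0;
        outlet_new(&x->x_obj, &s_float);
        return (void *)x;
    }

    void counter_setup(void)
    {
        counter_class = class_new(gensym("counter"),
            (t_newmethod)counter_new, 0, sizeof(t_counter),
            CLASS_DEFAULT, 0);
        class_addbang(counter_class, (t_method)counter_bang);
    }

Once loaded, a [counter] box in a patch dispatches straight into counter_bang(); no text is parsed at run time.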
Is there a way to take a Pd patch and compile it to C or C++ or something?
This means that some parts are fast and some parts are slow. Now, if you give a C/C++ part a large piece of work at a time, you use much less CPU than if you cut that work into tiny pieces. That's one big difference between using, say, [list-drip] vs [foreach], and it's even more the case if you do many [+] (without [list-map]) vs one big [# +].
([list-map] is actually much slower than what is possible with a plain, dependency-free abstraction, which is why I say "without [list-map]".)
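Here's a rough C sketch of that difference; it is not Pd code, just an illustration of paying dispatch cost per element versus per chunk:

    /* One call per element: every float pays the full dispatch cost.
       In Pd, the "call" is a whole message send (copying atoms, walking
       outlet connections), far more expensive than a plain C call. */
    static void add_one(float *out, float in, float k) { *out = in + k; }

    static void add_per_element(float *out, const float *in, float k, int n)
    {
        for (int i = 0; i < n; i++)
            add_one(&out[i], in[i], k);
    }

    /* One call per chunk: the loop stays inside compiled C, which is
       roughly what one [# +] does to a whole grid at once. */
    static void add_per_chunk(float *out, const float *in, float k, int n)
    {
        for (int i = 0; i < n; i++)
            out[i] = in[i] + k;
    }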
Pd itself is probably among the slowest interpreted languages when you look at the message system. Still, its interpreter preparses everything, and objects are mostly connected to each other as a graph; symbol-table lookups are fairly rare, which helps keep it from being too slow. As a rule of thumb, Pd should be faster than languages that reparse everything all the time, such as Bash, or Tcl before version 8 (released in 1997).
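To make the preparse/reparse distinction concrete, here's a small C sketch (not Pd's actual code): a reparsing interpreter scans the text on every execution, while a preparsing one scans it once at load time and afterwards only walks the parsed form:

    #include <stdlib.h>

    /* Reparsing style (shells, pre-8 Tcl): scan the text every time. */
    double eval_reparsed_each_time(const char *text)
    {
        return strtod(text, NULL) + 1.0;   /* parse, then do the work */
    }

    /* Preparsing style (Pd messages): parse once into "atoms" at load
       time, then every later execution uses the parsed form directly. */
    typedef struct { double value; } atom_t;

    void   parse_once(const char *text, atom_t *a) { a->value = strtod(text, NULL); }
    double eval_preparsed(const atom_t *a)          { return a->value + 1.0; }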
Pd's DSP is faster. It processes data in larger chunks (64 floats by default; see above about too many tiny pieces), and it compiles patches to «wordcode»,
What is wordcode? Is that what's happening in d_ugen.c?
which is similar in speed to bytecode (such as Perl/Python), and usually somewhat faster than object graphs (such as Pd's message system and Ruby).
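And yes, that is essentially what d_ugen.c builds: Pd's signal chain is a flat array of pointer-sized words, where each entry is a perform routine followed by its arguments, and each routine returns a pointer to the next entry. Here is a simplified sketch of that scheme (my own types and names, not Pd's actual code):

    #include <stdint.h>
    #include <stdio.h>

    typedef intptr_t word_t;                /* Pd uses a pointer-sized integer too;
                                               storing a function pointer in it is the
                                               same trick Pd plays with t_int */
    typedef word_t *(*perform_t)(word_t *w);

    /* w[0] = this routine, w[1] = input, w[2] = output, w[3] = block size */
    static word_t *plus_perform(word_t *w)
    {
        const float *in = (const float *)w[1];
        float *out = (float *)w[2];
        int n = (int)w[3];
        while (n--) *out++ = *in++ + 1.0f;
        return w + 4;                       /* skip this entry's 4 words */
    }

    static void run_chain(word_t *w)
    {
        while (*w)                          /* a zero word ends the chain */
            w = ((perform_t)*w)(w);
    }

    int main(void)
    {
        float inbuf[64] = {0}, outbuf[64];
        word_t chain[] = { (word_t)plus_perform,
                           (word_t)inbuf, (word_t)outbuf, 64, 0 };
        run_chain(chain);                   /* one "DSP tick" over 64 floats */
        printf("%f\n", outbuf[0]);
        return 0;
    }

There is no opcode table and no decoding step: each word is either a routine address or an argument, which is part of why it's comparable to bytecode in speed while being trivial to generate from a patch.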
Then Java... Java is somewhat special. The oldest versions used plain bytecode (as in the original versions of Smalltalk), but when doing so, it was often slower than Tcl8/Perl/Python, because it interpreted each character operation separately, whereas Tcl8/Perl/Python bytecodes work on whole strings at once. It's again the problem of too many tiny pieces.
However, Java is nowadays almost always run with a JIT compiler, a model it got from the SELF language; JIT compilation is actually nearly as old as Java bytecode itself. Improvements in the JIT supposedly brought Java close to the speed of C++, though there are still other ways in which Java needs more resources than C++.
I mean, when the bottlenecks of copying RAM are discussed, I sometimes get the impression that I'm being told: this is the part of the code where the overhead of doing things in Java (or whatever) rather than C++ is biggest, which is what I find counterintuitive. Or is that just a misunderstanding on my part?
I don't know how fast Java compilers are supposed to be right now. I have never tried serious number-crunching in Java. All I can tell you is to find a benchmark. Results will vary depending on the task being performed, which compiler/runtime-env is being used, and lots of small details in how each programme is written in each language.
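If you do benchmark, a minimal wall-clock harness is enough to start with. Here's a self-contained C sketch (my own code, nothing to do with Pd or Java, and the usual caveats about optimizing compilers and warm-up apply):

    #define _POSIX_C_SOURCE 199309L
    #include <stdio.h>
    #include <time.h>

    static double now_seconds(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec * 1e-9;
    }

    int main(void)
    {
        enum { N = 1 << 20, TRIALS = 5 };
        static float buf[N];
        double best = 1e9, sum = 0;

        for (int t = 0; t < TRIALS; t++) {
            double t0 = now_seconds();
            for (int i = 0; i < N; i++)          /* workload under test */
                buf[i] = buf[i] * 0.5f + 1.0f;
            double dt = now_seconds() - t0;
            if (dt < best) best = dt;
        }
        for (int i = 0; i < N; i++) sum += buf[i];  /* keep the work alive */
        printf("best of %d trials: %.6f s (checksum %g)\n", TRIALS, best, sum);
        return 0;
    }

Take the best of several trials to filter out scheduler noise, then port the same workload to each language you want to compare.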
| Mathieu BOUCHARD ----- téléphone : +1.514.383.3801 ----- Montréal, QC