It's funny reading old predictions that didn't quite work out (yet). One nuclear power station supplying the whole planet, with three giant supercomputers (presumably running with less than 640k of RAM) managing all human problems while we live a life of leisure and swan about in our flying cars...
Yeah. Right.
We were also taught something in computer science classes back in '88 that, unlike Moore's Law and other popularised concepts, isn't so widely talked about. If it were:
a) these predictions wouldn't seem so ridiculous, and b) people wouldn't be so quick to make them.
It's called the "wheel of life". If you could see it, technological development might look like a toroidal vortex. Things migrate away from the main CPU to become independent subsystems as they mature and specialise. Then they are subsumed back into the main area (be it motherboard versus peripheral card, or specialised instruction sets pulled in from co-processors).
Then the cycle repeats.
You can see it in everything: DRAM and DMA controllers, sound and video chips/cards, maths co-processors, network controllers.
We are already on the second rotation in sound. Once there were special sound chips that used hybrid analogue/digital synthesis, like the SID.
Then it went native.
Then it went to DSPs.
Then DSPs were obsoleted by cost and lack of flexibility, and it went native again.
No doubt we will see another turn, giving us massively parallel specialised SPUs (sound/signal processors) that can run 100 instances of Pd or something, before that too gets folded back into the silicon as an integrated faculty.
A newer influence is green/environmental considerations: it's no good having power-hungry specialised subsystems sitting idle most of the time if the work can be done natively.
I don't want to embarrass the authors, because we all make bold judgements that come back to haunt us (and it's a good thing to venture an opinion and risk being wrong rather than have nothing to say), but I've read plenty of similar comments that trumpet the amazing parallel flexibility of dataflow programming. Of course this misses the point that writing parallel programs requires analysis and algorithm development with that in mind; it isn't just a bonus you get for free when you have more than one processor.
IRCAM's experimental multi-processor synthesisers and things like the Kyma and MARS were stepping stones along the way. Lessons learned can be incorporated into new implementations of Pd-like dataflow languages. The good thing is the establishment of a language and method that has the potential to hide the implementation. Presumably Pd would not look very different to the programmer if it had parallel scheduling, and it would scale seamlessly from one to many processors.
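To make that concrete, here's a minimal sketch in plain C with pthreads. It has nothing to do with Pd's actual scheduler, and the names, block size and sample rate are just made up for illustration, but it shows the basic issue: two signal chains with no edges between them can run on separate threads, while the node that mixes them depends on both and has to wait. The graph's structure, not the processor count, decides how much parallelism you actually get.

/* Hypothetical sketch, not Pd code: two independent chains run in
 * parallel, the dependent mix node forces a join.
 * Build with: cc -pthread sketch.c -o sketch -lm
 */
#include <math.h>
#include <pthread.h>
#include <stdio.h>

#define BLOCK  64           /* one block of samples, like Pd's default */
#define SRATE  44100.0
#define TWO_PI 6.283185307179586

typedef struct {
    double freq;            /* oscillator frequency in Hz */
    double gain;            /* gain applied after the oscillator */
    double out[BLOCK];      /* this chain's output block */
} chain_t;

/* One independent chain: oscillator followed by a gain stage. */
static void *run_chain(void *arg)
{
    chain_t *c = (chain_t *)arg;
    for (int i = 0; i < BLOCK; i++)
        c->out[i] = c->gain * sin(TWO_PI * c->freq * i / SRATE);
    return NULL;
}

int main(void)
{
    chain_t a = { .freq = 440.0, .gain = 0.5 };
    chain_t b = { .freq = 660.0, .gain = 0.5 };
    double mix[BLOCK];
    pthread_t ta, tb;

    /* No edges between the two chains, so they may run concurrently:
     * this is the parallelism the dataflow graph exposes. */
    pthread_create(&ta, NULL, run_chain, &a);
    pthread_create(&tb, NULL, run_chain, &b);

    /* The mix node depends on both chains, so we must wait here;
     * adding more processors would not make this part any faster. */
    pthread_join(ta, NULL);
    pthread_join(tb, NULL);
    for (int i = 0; i < BLOCK; i++)
        mix[i] = a.out[i] + b.out[i];

    printf("first mixed sample: %f\n", mix[0]);
    return 0;
}

The nice thing about a dataflow language is that the dependency information is already in the patch, so a parallel scheduler could make this decision for you without the patch looking any different.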
On Tue, 04 Mar 2008 11:14:25 -0500, marius schebella <marius.schebella@gmail.com> wrote:
Hi, I am reading an old interview with James Moorer (with Curtis Roads in CMJ 6, 1982). One funny thing is that he says, 'Software synthesis is either dead or dying [...] I am hoping its demise will be quick and relatively painless.' In return he predicted all computation being done on special DSP chips. In part he was right, but on the other hand the main CPU got more than fast enough to survive (gfx is slightly different). But - and I am coming to my point - he was also thinking about hundreds or thousands of parallel processing elements. Right now we are going to have several, and in the future many, many parallel CPUs, and the need for parallel processing is back. Miller was talking about that in Montreal. So I wonder how Pd will survive that evolution? AFAIK the current situation is poor in this regard. Can anyone give an outlook for the future? Would it be a jump from Pd (I) 0.43 to Pd II 0.1? Marius.