anyone seen this?
I bet it can open several phase vocoder patches
http://www.wired.com/wiredenterprise/2012/09/lego-super-gallery/?utm_source=...
On 15.9.2012, at 22:40 , Alexandre Torres Porres wrote:
I bet it can open several phase vocoder patches
64 on this one, strictly speaking.
-- keep your ears open: http://blauwurf.at http://soundcloud.com/noiseconformist
probably going well off topic now,
but what sort of new audio processes would be made possible by supercomputing???
you mean like with this sort of thing, or supercomputer google style?
2012/9/15 i go bananas hard.off@gmail.com
probably going well off topic now,
but what sort of new audio processes would be made possible by supercomputing???
yeah, with this sort of thing...
Miller was saying the other day how the original phase vocoder patch required $35000 worth of hardware (or whatever the actual figure was...)
So i was just wondering what sort of audio things are round at the moment that can only be achieved with well beyond ordinary hardware???
Well, there's almost no end of applications that would be improved or made usable by a hundred-fold increase in CPU. But some things that aren't currently possible for commercial or domestic use might become so. In processing: blind source separation, using a dictionary attack to find an optimal sparse decomposition, and similarly deconvolution or upmixing without prior models. In modelling: wavefield modelling, or lumped masses with a large number of nodes. In analysis: articulatory speech models to do speaker-independent recognition.
The practical outcome of the first group of things is basically being able to record an orchestra with a stereo mic, pull out and process individual instruments after the fact, change the hall acoustics and remix the recording. The latter stuff is more obvious: raytracing reverbs and whatnot.
But being unable to brute-force these things leaves the quest to understand them more deeply and find optimisation tricks that change the algorithm/approach, not matters of scale. When the growth order of a method is wrong, throwing a room full of GPUs at it only gives you a temporary lead.
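To put a rough number on that last point: if a method costs on the order of N^2 operations for N sources or nodes, a machine 100 times faster only lets you handle 10 times more of them in the same wall-clock time, since (10N)^2 = 100 N^2. A hundred-fold hardware jump buys one decimal digit of problem size; a better algorithm can buy much more.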
yeah, separating individual instruments / voices from a mix does seem like a 'just over the horizon' application. I'd love to be able to have a stereo microphone in the room i'm in now, and separate the sound of the rain, the wind, the TV in the background, my typing at this keyboard....
now my question is:
would spending 4k to build a Pi supercomputer give you more power and possibilities than a top-of-the-line Mac, for example (which will cost just as much, and be a quad-core 2.7 GHz Intel i7, 1.6 GHz bus, 16 GB RAM)?
I'm guessing that CPU-wise it would be more powerful indeed; even though each Pi is a modest one, that's 64 cores against 4...
what I'm not familiar with is how supercomputing works and optimizes the work by splitting it across all the CPU units. Does it maybe work like putting hard drives into RAID 0, where the file transfer speed doubles?
cheers Alex
On Sun, Sep 16, 2012 at 8:24 AM, Alexandre Torres Porres porres@gmail.com wrote:
now my question is:
would spending 4k to build a Pi supercomputer give you more power and possibilities than a top-of-the-line Mac, for example (which will cost just as much, and be a quad-core 2.7 GHz Intel i7, 1.6 GHz bus, 16 GB RAM)?
I think what you'll want to spend 4k on is a Xeon Phi co-processor for a desktop instead. It has 50+ cores, each with 512-bit-wide vector units.
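Rough back-of-envelope, assuming a clock around 1 GHz (Intel hadn't published full details at the time): 512-bit vectors hold 16 single-precision floats, so 50 cores x 16 lanes x ~1 GHz is on the order of 800 GFLOPS, and roughly double that if you count a fused multiply-add as two operations. That's a teraflop-class card in a single PCIe slot, give or take.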
I'm guessing that CPU-wise it would be more powerful indeed; even though each Pi is a modest one, that's 64 cores against 4...
what I'm not familiar with is how supercomputing works and optimizes the work by splitting it across all the CPU units. Does it maybe work like putting hard drives into RAID 0, where the file transfer speed doubles?
You have to write software with MPI (for clustering) or OpenMP (for multi-threading) to take advantage of those extra cores. You always lose some efficiency when using multiple cores, but you may speed up the program. The highest possible speedup is achieved when all the processes are independent.
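To illustrate, here's a toy OpenMP loop in C (a minimal sketch, not from any real project); every iteration is independent, so it's the best case described above and scales almost linearly with cores. An MPI version of the same idea would instead send chunks of the array to separate machines.

    #include <stdio.h>
    #include <math.h>
    #include <omp.h>                 /* build with: gcc -fopenmp waveshape.c -lm */

    #define N 1048576                /* about 24 seconds of mono audio at 44.1 kHz */

    int main(void) {
        static float in[N], out[N];
        for (int i = 0; i < N; i++)  /* fill the buffer with a test tone */
            in[i] = sinf(0.01f * i);

        double t0 = omp_get_wtime();
        #pragma omp parallel for     /* split the loop across all available cores */
        for (int i = 0; i < N; i++)
            out[i] = tanhf(3.0f * in[i]);   /* per-sample waveshaping, no cross-talk */
        double t1 = omp_get_wtime();

        printf("%d samples in %f s on up to %d threads\n",
               N, t1 - t0, omp_get_max_threads());
        return 0;
    }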
Clearly there are cheaper computers than Apple's; I'm using Apple for comparison to give the Raspberry Pi more of a chance to stand out in power.
But yeah, I made a bad comparison. First, you can actually have an Apple MacBook Pro with a 2.7 GHz i7 for 2.5k; I was picking a top-configuration model to compare to the price of this supercomputer made of Pis, but the processing power would be the same, and it is a notebook and not a tower. So I guess the best way to compare the cost of this Raspberry supercomputer to a similarly priced Apple machine is the Mac Pro, which is a tower: for around 4k you'd get two 6-core 2.4 GHz Intel Xeons, plus 16 GB of RAM and a 1 TB HD, just like the Pi supercomputer. Now, these are actually old machines that haven't been properly updated, by the way.
Anyway, hey, I didn't know anything about this Xeon Phi; it sounds awesome. But I figure it was designed for supercomputing tasks, which I also know nothing about, and now I'm also very curious to know what kind of computer music processes you could run on this kind of thing.
But my doubt remains: would the Raspberry supercomputer be more powerful than this Mac Pro?
And you say you can have a Xeon Phi for 4 grand. Well, it seems it would be more powerful than 64 Pis together, right?
thanks Alex
Being as amazed as I am at how cheap the Pi is, I wanted to also compare its processing power to the chips in an iPhone, for example. Well, apparently Apple won't even tell you the details of its chip's clock speed.
that's gotta suck
so, being that cheap, it'd be great if it also were open hardware, like the Arduino.
I then found stuff like the BeagleBoard, which is open and all, but the $200 seemed pricey; that's 1/3 of a Mac Mini (yep, I like using Apple as a standard for expensive hardware, as you have noticed). So I'm figuring that if Apple wanted to come up with a "Mac Nano", the size of an Apple TV, with a very modest configuration comparable to a BeagleBoard, the price would be about the same, in my speculation.
Now, has anyone felt compelled to try the Raspberry Pi with an Arduino? Glerm was telling me that Arduino is now working on a newer version of the hardware that would take an ARM chip. So I imagine it'd be like having a Pi built into the Arduino, and that you could have an operating system on it running Pd. Since Arduinos are so popular, and open and everything, I hope this would be very cheap and accessible, not to mention that anyone could buy the parts and try to build it themselves for even less.
I don't have any practical application for any of this technology in my head yet, but there's something about it that really fascinates me, and that's of course the accessibility and everything.
Well, I will let you pioneers do the hard work of getting stuff to run on the Pi, and then some time later I'll definitely get one of those to play with.
Well, I'll just kind of ramble off topic from now on. I wanted to say that, unfortunately, import taxes in Brazil are absurdly abusive and huge, so a $35 Pi can cost us around 300 Brazilian reais - that's about 150 dollars (that's gotta suck). This is just so you know how much we're talking about, but you need to consider that we don't just have twice as much cash on hand because our currency is worth half as much... :) I'd say we get paid less in general, not to mention that poverty and misery are still an issue. It bums me out so much, because things like the Raspberry Pi are exactly what we need to make technology more accessible to everyone and to teach kids in public schools how to code, for example. That's why I hope we get such a cheap and open-source machine sometime soon.
Cheers
On Sun, Sep 16, 2012 at 4:28 PM, Alexandre Torres Porres porres@gmail.comwrote:
so, being that cheap, it'd be great if it also were open hardware, like the Arduino.
I then found stuff like the BeagleBoard, which is open and all, but the $200 seemed pricey; that's 1/3 of a Mac Mini (yep, I like using Apple as a standard for expensive hardware, as you have noticed).
The Beagleboard has a pretty powerful TI DSP chip that can crank through HD video - which is actually what the chip was designed to do.
Now, has anyone felt compelled to try the Raspberry Pi with an Arduino? Glerm was telling me that Arduino is now working on a newer version of the hardware that would take an ARM chip. So I imagine it'd be like having a Pi built into the Arduino, and that you could have an operating system on it running Pd. Since Arduinos are so popular, and open and everything, I hope this would be very cheap and accessible, not to mention that anyone could buy the parts and try to build it themselves for even less.
Arduino is a set of libraries and doesn't have to be tied to any platform. I use Arduino libs with a PIC32 at 80 MHz that is 10 times faster than an Arduino Uno. It's much easier to get something running compared to MPLAB. There is an ARM-based 'Arduino' called the Maple that uses an ARM Cortex-M3 (72 MHz), but the project doesn't appear that active.
On Sun, Sep 16, 2012 at 10:24:45AM -0300, Alexandre Torres Porres wrote:
now my question is:
would spending 4k to build a Pi supercomputer give you more power and possibilities than a top-of-the-line Mac, for example (which will cost just as much, and be a quad-core 2.7 GHz Intel i7, 1.6 GHz bus, 16 GB RAM)?
We keep using the word 'supercomputer', and maybe a bit of perspective would help clarify matters of scale.
Back in the mists of time ///\ ...... wavy lines .....///\
A computer that a small business might own could be moved by one person if they really needed the exercise. After the 1980s they were called microcomputers and you could pick one up and carry it.
A minicomputer had a special room of its own, and was between ten and maybe fifty times faster. You could get a good one for a hundred thousand dollars. Minis were generally for mid-level industrial organisations. Notice the power factor here between the everyman's computer and the "top of the range" generally available model, which has remained constant. The biggest price differential is over the smallest value curve, as you would expect in a commercial mass market.
A mainframe was an order of magnitude more powerful than a standard computer, having a whole floor to itself. Mainframes are generally for bulk data processing and were owned by governments or very large corporations. They were characterised by IO, rows of tape machines and teleprinters, more like a giant computerised office.
A supercomputer is, by definition, that which is on the cutting edge of feasible research. Most supercomputers are in a single location and not distributed or opportunistic; they usually have a building dedicated to them and a power supply suitable for a small town of a thousand homes (a few MW). A team of full-time staff is needed to run them. They cost a few hundred million to build and a few tens of millions per year to operate. Current supercomputers are measured in tens of petaFLOPS, ten to a hundred times more powerful than the equivalent mainframe, and are primarily used for scientific modelling.
To put this operational scale versus nomenclature into today's terms (taking into account one order of magnitude shift in power):
A microcomputer would probably be classed as a wearable, embedded or essentially invisible computer operating at a few tens or hundreds of MFLOPS, costing between one and ten dollars and operating from a lithium battery. If you have an active RFID ID card, your credit card probably has more CPU power than an early business computer. The Raspberry Pi, gumsticks, and PIC-based STAMPs occupy this spectrum.
The word minicomputer now tends to denote a small desktop, notebook or smartphone, or anything that is considered 'mini' compared to the previous generation, and probably having the capabilities of a full desktop from two or three years ago.
A powerful standard computer, the kind for a gaming fanatic or at the heart of a digital music/video studio, is about five to ten times as powerful as the smallest micro (a much smaller gap than one might think) despite the large difference in power consumption and cost. These run at a few GFLOPS.
What used to be a 'minicomputer' is now what might be used in a commercial renderfarm, essentially a room of clustered boxes costing tens of thousands of dollars and consuming a heavy domestic-sized electricity bill. Total CPU power in the range of 10 GFLOPS to 1 TFLOPS.
The current guise of the 'mainframe' is what we would now see as a Data Center, a floor of an industrial unit, probably much like your ISP or hosting company, with many rows of racked independent units that can be linked into various cluster configurations for virtual services, network presence and data storage. Aggregate CPU power in the region of 10 TFLOPS to 0.5 PFLOPS.
Supercomputers are still supercomputers, by definition they are beyond wildest imagination and schoolboy fantasies unless you happen to be a scientist who gets to work with them. A bunch of lego bricks networked together does not give you 20PFLOP, so it does not a supercomputer make.
However, there is a different point of view emerging since the mid 1990s based on concentrated versus distributed models. Since the clustering of cheap and power efficient microcomputers is now possible because of operating system and networking advances, we often hear of amazing feats of collective CPU power obtained by hooking together old Xboxes with GPUs, (Beowulf - TFLOP range) or using opportunistic distributed networks to get amazing power out of unused cycles (eg SETI at home/BOINC and other volunteer arrays, or 'botnets' used by crackers) (tens to hundreds of TFLOPS).
Some guides to growth here with interesting figures on the estimated cost per GFLOP over the last 50 years:
https://en.wikipedia.org/wiki/FLOPS
I'm guessing that CPU-wise it would be more powerful indeed; even though each Pi is a modest one, that's 64 cores against 4...
So the issue now is that a parallel model of computing needs the problem cast into a program that works in this way. Some algorithms are trivially rewritten to work well on clusters, but many are not. The aggregate power isn't a full indicator of the expected speedup. A multi-core has fast data connections between cores but little memory for each processor, whereas a cluster may have GB of memory associated with each node but much slower data throughput between nodes.
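The usual back-of-envelope here is Amdahl's law: if a fraction P of the work can be spread across N nodes and the rest is serial, the speedup is 1 / ((1 - P) + P/N). With P = 0.95 and N = 64 that's only about 15x, and no number of extra Pis ever gets you past 1/0.05 = 20x; the serial fraction and the communication overhead set the ceiling, not the aggregate FLOPS.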
what I'm not familiar with is how supercomputing works and optimizes the work by splitting it across all the CPU units.
This is an important area of computer science. In summary, if the overhead of splitting a subproblem, sending it to node/core, collecting the result and re-integrating it back into the end solution is less than it would cost to compute it on a more powerful single node, then you have a speedup. This is where algorithm design gets fun :)
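To put illustrative numbers on that: at 44.1 kHz a 64-sample audio block arrives every ~1.45 ms. If shipping a block to another node and getting the result back costs, say, half a millisecond of network latency, the remote node only pays off when it saves you more than that in compute time per block; for a cheap per-sample effect it never will, for a heavy FFT-based analysis it easily can.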
Message passing protocols serve to split up the data according to schemes that mirror the algorithm, a bit like routers on the internet. Wavefront broadcast, bifurcation, all manner of schemes are used to break up and reassemble the sub-processes. Andrew Tanenbaum wrote one of the early and very accessible books on it all, called "Distributed Operating Systems".
If _all_ the data needs to be present everywhere in the system then distributed models fail because the data throughput problem starts to dominate the advantage gained by parallel computation. So, only certain kinds of program can be run on 'supercomputers' that work this way. Your average desktop application like Protools probably wouldn't benefit much running on the IBM Sequoia, because it isn't written to get advantage from that architecture.
cheers, Andy
Thanks a lot Andy, that was really informative.
So I see there's no point at all comparing this "super" Pi rack to general computers, and that you can't run one Pd patch and have it served by 64 of these.
cheers
On Sun, Sep 16, 2012 at 05:47:22PM -0300, Alexandre Torres Porres wrote:
Thanks a lot Andy, that was really informative.
So I see there's no point at all comparing this "super" Pi rack to general computers, and that you can't run one Pd patch and have it served by 64 of these.
cheers
Actually, there's a lot of value in these arrays for DSP work, at least particular kinds of creative DSP work, because what you have is effectively a giant modular synth. Data flow is a good candidate, because the work is usually a unidirectional flow of data frames through the system.
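As a toy illustration of that unidirectional flow (a sketch of my own, not anything from an actual Pi cluster): each node in such a chain can just read raw float frames on stdin, apply its one processing stage, and write them to stdout, so stages can be chained with shell pipes on one box, or with ssh/netcat across a rack of Pis.

    /* one "node" in a unidirectional DSP chain, e.g.
       mic_source | ./node | ssh pi2 ./node | speaker_sink
       (mic_source, pi2 and speaker_sink are hypothetical names) */
    #include <stdio.h>
    #include <math.h>

    #define FRAME 64                /* samples per frame; arbitrary choice */

    int main(void) {
        float buf[FRAME];
        while (fread(buf, sizeof(float), FRAME, stdin) == FRAME) {
            for (int i = 0; i < FRAME; i++)
                buf[i] = tanhf(2.0f * buf[i]);   /* this node's single job */
            fwrite(buf, sizeof(float), FRAME, stdout);
            fflush(stdout);                      /* keep latency per frame low */
        }
        return 0;
    }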
On another note, I was pondering your comment on the economics of the Pi in Brazil that you replied to Charles.
Maybe I am mistaken, but the real, deep objective of the Pi foundation is to ubiquitize (yuck!!!) (maybe "democratise"?) production through open hardware design, so that you can get a fab plant to start making them locally. I know Brazil can't compete with China on economies of scale right now, but nonetheless the opportunity is there, at least without any trade barriers based on intellectual property nonsense. It's long past time we had a standard international unit of computing that any 10 year old kid can grab and know the other 9 billion people on the planet have access to.
best Andy
"Maybe I am mistaken but the real, deep objectives of the Pi foundation are to ubiquitize (yuck!!!) (maybe "democratise"?) production through open hardware design so that you can get a fab plant to start making them locally."
From what I saw, the circuitry is not open, or is it? Unfortunately, I didn't see it anywhere, so it seems they haven't done that, although they are surely willing to spread the use of the technology.
And I know what you mean, and that is why I hope something like that happens. As I was saying, the Arduino works like that, and some people in Brazil can spend less than $20 on the parts needed to build one.
And so I also mentioned the possibility of a newer version of the Arduino built around an ARM processor. It seems it will be not only open hardware, but capable of being both a computer and an Arduino. I look forward to that.
Cheers
But then I found out about the BeagleBoard, which is open and has the schematics on its website: http://beagleboard.org/hardware/design
It's more powerful than the Pi, but still seems rather expensive. It's $150, which is not that much less than an iPhone. And if you strip away the phone costs, screen and so on, so that you get only a single board, it should be cheaper and more powerful. Oh, as for comparing the processing power of an iPhone, I found a link where someone seems to have figured out what its chip is all about. If anyone else is curious to compare the power, here you go:
http://www.macrumors.com/2012/09/16/iphone-5-benchmarks-appear-in-geekbench-...
cheers
On 17/09/12 07:41, Alexandre Torres Porres wrote:
it's more powerful than the Pi, but seems rather expensive still. It's $150, which is not that much less than an iphone. And if you take all the phone cost/screen and etc so you get only a single board, it should be cheaper and more powerful. Oh, as for comparing the processing power of an iphone, I found a link where someone seems to have figured out what its
except that the cost of phone hardware is also linked to the ongoing price the buyer will be paying for network access ... often rather high, and the network operators can and do cover part of the upfront cost and then get all that back, and more, later.
Those prices are not really comparable.
Simon
On Sun, Sep 16, 2012 at 7:23 PM, Alexandre Torres Porres porres@gmail.comwrote:
From what I saw, the circuitry is not open, or is it? Unfortunately, I didn't see it anywhere, so it seems they haven't done that, although they are surely willing to spread the use of the technology.
No, the Pi is not open. The licensing agreement with Broadcom is one of the major liabilities with the Pi.
The Beagle/Panda/etc use TI parts and they are more friendly to open hardware (although you may have to buy TI's dev tools to get the most out of the hardware).
i guess i'll chime in here and mention that some folks are designing an ARM Cortex-A8-based computer in a PCMCIA card form factor (it's called an EOMA68 card). the card can be put inside an enclosure that would offer breakouts if needed. the biggest difference here is that they are trying to do the whole project top to bottom using completely open source solutions - including the GPU. it's not a shipping product, but they do have a schematic designed and are looking to qualify for a kickstarter campaign at the moment. it may be something to consider in six months to a year.
the discussion activity is mainly on the ARM-netbook list: http://lists.phcomp.co.uk/pipermail/arm-netbook/
scott
Nice, but what kind of enclosure would that be?
To me this card form factor seems only good for fitting inside a laptop computer; do they use it for something else?
thanks
On Sun, Sep 16, 2012 at 3:26 PM, Andy Farnell padawan12@obiwannabe.co.uk wrote:
On Sun, Sep 16, 2012 at 10:24:45AM -0300, Alexandre Torres Porres wrote:
now my question is:
would spending 4k to build a Pi supercomputer give you more power and possibilities than a top-of-the-line Mac, for example (which will cost just as much, and be a quad-core 2.7 GHz Intel i7, 1.6 GHz bus, 16 GB RAM)?
We keep using the word 'supercomputer', and maybe a bit of perspective would help clarify matters of scale.
...
A supercomputer is, by definition, that which is on the cutting edge of feasible research. Most supercomputers are in a single location and not distributed or opportunistic, they usually have a building dedicated to them and a power supply suitable for a small town of a thousand homes (a few MW). A team of full time staff are needed to run them. They cost a few hundred million to build and a few tens of millions per year to operate. Current supercomputers are measured in tens of Peta FLOPS, ten to a hundred times more powerful than the equivalent mainframe, and are primarily used for scientific modelling.
Yeah, but when I tell people what I do, do you think I say "cluster computing" or symmetric multiprocessing or CUDA applications engineer? No, I tell them I work with "supercomputers". It's not a term for practitioners, since there are more specific things to say, ... and it keeps people from thinking I'm going to waste time talking about nerdy shit that I don't want to talk about anyway :)
The current guise of the 'mainframe' is what we would now see as a Data Center, a floor of an industrial unit, probably much like your ISP or hosting company, with many rows of racked independent units that can be linked into various cluster configurations for virtual services, network presence and data storage. Aggregate CPU power in the region of 10 TFLOPS to 0.5 PFLOPS.
At the moment, I'm (the engineer) putting together the proposal for a grant for GPU computing resources (for the researchers and scientists). We're looking to spend about $750,000 on hardware that will perform about 100 TFLOPS. Mostly it will be made up of whatever NVIDIA Tesla is most cost/power effective, in servers that will each hold 4 GPUs. Altogether, we hope this fills up 5-10 racks (in our shiny new energy-efficient data center with 32 racks, that the f'ing fire marshal won't let us into for another month, when we've been postponed since June anyway).
Supercomputers are still supercomputers, by definition they are beyond wildest imagination and schoolboy fantasies unless you happen to be a scientist who gets to work with them. A bunch of lego bricks networked together does not give you 20PFLOP, so it does not a supercomputer make.
However, there is a different point of view emerging since the mid 1990s based on concentrated versus distributed models. Since the clustering of cheap and power efficient microcomputers is now possible because of operating system and networking advances, we often hear of amazing feats of collective CPU power obtained by hooking together old Xboxes with GPUs, (Beowulf - TFLOP range) or using opportunistic distributed networks to get amazing power out of unused cycles (eg SETI at home/BOINC and other volunteer arrays, or 'botnets' used by crackers) (tens to hundreds of TFLOPS).
Clustering is currently the most scalable model for supercomputers. Many expensive options exist for systems with large numbers of cores and shared memory--but year after year, more circuits get put on a single die. Generally when you think of supercomputers these days, it's a network of systems that each have a lot of x86_64 cores and maybe a nice co-processor (like the NVIDIA Teslas).
Some of the IBM machines (and Cray, still?) use pipelined multi-core processors of a different architecture and 1000s of cores on a single system, but I don't see that as a trend that will survive.
Chuck
Hearing it from the front line is really interesting Chuck. I am a little envious at the excitement a project like that must produce.
Do you know of Joe Deken and the "suitcase supercomputer" project? He is a big Pd proponent (and friend of Miller I believe) and they are also looking at R-Pi boards for their next portable cluster (I'm probably telling you stuff you already know)
best Andy
On Mon, Sep 17, 2012 at 4:11 AM, Andy Farnell padawan12@obiwannabe.co.uk wrote:
Hearing it from the front line is really interesting Chuck. I am a little envious at the excitement a project like that must produce.
Do you know of Joe Deken and the "suitcase supercomputer" project? He is a big Pd proponent (and friend of Miller I believe) and they are also looking at R-Pi boards for their next portable cluster (I'm probably telling you stuff you already know)
best Andy
Actually, I just read the post yesterday from Joe--I was sort of aware of the San Diego Supercomputing Center before now. The RPi boards are interesting, and since the best you can do is 100Mb/s, the switch gear should be relatively cheap (and old).
However, if you can consolidate your systems more, you need fewer cables, smaller switches, etc...
So, look forward to the Kontron KTT30 board, which hosts the Tegra 3 SoC. There's no word yet on the price, but it's about 4x as powerful as a Raspberry Pi. So, if it comes in low enough (say $120-140), then it just might beat the RPi for cost.
i go bananas wrote:
yeah, separating individual instruments / voices from a mix does seem like a 'just over the horizon' application. I'd love to be able to have a stereo microphone in the room i'm in now, and separate the sound of the rain, the wind, the TV in the background, my typing at this keyboard....
I saw an article and video (from Slashdot, IIRC) one or two years ago about a VST plugin for Cubase or Pro Tools that did that. The guy recorded an acoustic guitar and was able to separate the notes in a chord and build a new one instead, and even change the tonality of the whole piece.
Unfortunately, I'm completely unable to find any reference to this thing. Maybe it was vaporware, maybe it was an April fool.
maybe melodyne?
http://www.celemony.com/cms/index.php?id=products_studio
cheers der.brant
Zitat von "Charles Goyard" cg@fsck.fr:
i go bananas wrote:
yeah, separating individual instruments / voices from a mix does seem like a 'just over the horizon' application. I'd love to be able to have a stereo microphone in the room i'm in now, and separate the sound of the rain, the wind, the TV in the background, my typing at this keyboard....
I saw a article and video (from slashdot iirc) like one or two years ago of a VST plugin for Cubase or Protools that did that. The guy recorded an acoustic guitar, and was able to separate the note in a chord and build a new one instead, and even change the tonality of the whole piece.
Unfortunatly, I'm completly unable to find any reference of this thing. Maybe it was vaporware, maybe it was a april fool.
Pd-list@iem.at mailing list UNSUBSCRIBE and account-management ->
http://lists.puredata.info/listinfo/pd-list
brandt@subnet.at wrote:
maybe melodyne?
The name does not ring a bell, but it could be.
Thanks Charles