/. points to this article, which says Intel is putting an Atom and an FPGA together in a single package and selling it for somewhere in the $61-100 range in lots of 1000. This is Intel competing with ARM for the embedded space. We all saw tighter integration of x86 and FPGA coming 5 years ago, when hard PowerPC cores started finding their way into Xilinx Virtex-II Pros. Achronix is already using Intel to fab their parts. In a year or two, perhaps we'll see x86 and FPGA on a single die from Intel, and a few years after that we won't need the x86 anymore.
This is just starting to heat up.
I looked at the FPGA they put in the package, an Arria II GX:
25,300 ALMs
63,250 equivalent logic elements
50,600 registers
495 M9K memory blocks
791 Kb MLAB memory
4,455 Kb embedded memory
312 18x18 multipliers
I don't remember Altera's logic modules anymore (some sort of big 8-input logic thing), but this chip is about 1/20 the size of the big FPGAs on the market today, though it has an impressive supply of multipliers. It's clearly going to be a champ for custom video codecs, since it can do tens of billions of multiplies per second when you need it to (see the back-of-envelope below).
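A quick back-of-envelope on that multiplier claim (a minimal Python sketch; the 200 MHz multiplier clock is my assumption for illustration, not a datasheet figure):

# Rough throughput of the Arria II GX multiplier fabric.
# The 200 MHz clock is an assumed, illustrative figure, not from the datasheet.
multipliers = 312        # 18x18 multipliers on the device
clock_hz = 200e6         # assumed achievable multiplier clock

multiplies_per_second = multipliers * clock_hz
print(f"{multiplies_per_second / 1e9:.0f} billion multiplies per second")  # ~62 billion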
Hopefully this strategy works and then Intel will experiment with Xeon + Achronix Speedster on a single die :)
PS: My little girl is walking:
Little Red walks on the iPad too, but so does the Big Bad Wolf! (Buy our letter-tracing iPad game for kids!)
Tuesday, November 23, 2010
Wednesday, November 17, 2010
Ten Things to do with an EC2 GPU
Yesterday I was saying we'd see crypto cracking on EC2 with GPUs. In addition to writing it on this blog, I enthusiastically proposed it to friends who are into this sorta thing, and of course we then discovered that someone had already done it, so we lose all novelty points and should just go back to thinking about how to make money using cloud-based GPUs.
I've been writing about this for five years already, and I think EC2 with GPUs is an invitation for me to take my GPU-accelerated spreadsheet work into the limelight. A good question is: does Amazon have the capacity to turn that $2.10 per teraflop per hour into $2,100 for a petaflop for an hour? Can they provide it with enough granularity to sell a petaflop for a minute for $2100/60, about $35? I also suspect some data-intensive applications will wish they had solid state disks and faster connections between nodes (as I understand it, they have 10 Gbps links now). How will we deal with streaming data sets to and from the servers? I wonder if you end up reserving the GPUs even when you're not using them but are just moving data to and from the server; that would be a wasteful use of resources.
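Here's that pricing arithmetic as a quick sanity check (a minimal sketch; the assumption of roughly one teraflop per GPU instance is mine, and per-minute billing is hypothetical):

# Back-of-envelope cost of renting a petaflop at the quoted EC2 GPU rate.
# Assumes ~1 teraflop per instance at $2.10/hour; per-minute billing is hypothetical.
price_per_teraflop_hour = 2.10     # dollars
instances_for_petaflop = 1000      # 1 petaflop / ~1 teraflop per instance

petaflop_hour = price_per_teraflop_hour * instances_for_petaflop
petaflop_minute = petaflop_hour / 60

print(f"petaflop for an hour:  ${petaflop_hour:,.0f}")    # $2,100
print(f"petaflop for a minute: ${petaflop_minute:,.2f}")  # ~$35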
So here's my list of ten things to do with an EC2 GPU instance.
1) Electronic Design Automation - Giving supercomputer resources to teams of 2 or 3 designers for electronic simulation and synthesis. We definitely want some FPGAs in the cloud for logic simulation and other non-GPU algorithms that FPGAs are good at, like pattern recognition. FPGA tools have a way to go, and the appliances will need a GPU cloud just to do place and route.
2) Render Farms - NVIDIA bought Mental Images (the mental ray folks), probably knowing they would put RealityServer on EC2. Plugins for Photoshop and Maya are probably next, streaming lower-resolution screen captures back as previews.
3) Video Games - A system that renders in the cloud and streams it to your mobile device? Obviously other people are doing this, but access to Amazon's infrastructure takes the risk out for developers! We could have much, much better game graphics, but controller-to-screen round-trip latency puts an even higher burden on getting the computation done fast.
4) Financial Services - Remember when you got to take a 15-minute break after starting a big correlation or Monte Carlo sim? I wonder if we can get low-latency streams of market data and build a sandbox for high-frequency traders.
5) Voice Recognition - You know when you're talking to one of those computers on the phone and it acts like it can recognize you? I'm not sure how to pipe that volume of data in, though.
6) Face Recognition - Soon to be a Facebook feature for sure: we can index your face and then auto-tag you.
7) Search for Extra-Terrestrial Intelligence - Actually don't.
8) Computational Molecular Dynamics - Folding on a cloud!
9) Crypto Cracking - If only you had some crypto worth cracking, you now have a supercomputer! You could also factor some big numbers; isn't RSA offering money for factoring big numbers?
10) Verifying Goldbach's Conjecture - We seem to be having trouble proving that every even number greater than 2 is the sum of two primes, but computers keep verifying the ever larger numbers we throw at them (a toy version of the check is sketched below).
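Here's a toy version of that Goldbach check (plain single-core Python, nothing GPU-specific; a real run would sieve and test ranges of even numbers in parallel):

# Verify Goldbach's conjecture for every even number in a small range.
def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def goldbach_pair(n):
    """Return a prime pair (p, n - p) for even n >= 4, or None if none exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

for n in range(4, 10000, 2):
    assert goldbach_pair(n) is not None, f"counterexample at {n}!"
print("Goldbach verified for all even numbers below 10,000")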
$2.10 per teraflop-hour on a $3000 GPU that consumes about 10 cents of power per hour means there's plenty of room for profit and competition on the infrastructure side. I suspect some of the algorithms above can still be designed as services that take advantage of these economics. Unless cloud-based GPUs get more competitively priced, I suspect anyone selling GPU-accelerated services with constant enough demand would want to use their own servers and only tap into cloud resources when they need extra capacity; the break-even sketch below shows why.
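To make the own-versus-rent point concrete, here's a rough break-even sketch (the $3000 GPU and $0.10/hour power figures are the estimates above; the one-year amortization window and the utilization levels are my assumptions):

# Rough break-even between owning a ~1 teraflop GPU and renting it on EC2.
# Hardware price and power cost are the estimates from the post; the one-year
# amortization and the utilization sweep are assumptions for illustration.
gpu_price = 3000.0        # dollars, up front
power_per_hour = 0.10     # dollars per hour while running
rent_per_hour = 2.10      # dollars per hour for ~1 teraflop on EC2
hours_per_year = 24 * 365

for utilization in (0.05, 0.25, 0.50, 1.00):
    hours = hours_per_year * utilization
    own = gpu_price + power_per_hour * hours
    rent = rent_per_hour * hours
    print(f"{utilization:4.0%} utilization: own ${own:8,.0f}  rent ${rent:8,.0f}"
          f"  -> {'own' if own < rent else 'rent'} wins")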
Monday, November 15, 2010
GPGPU on EC2
Looks like you can get some NVidia Teslas in an EC2 instance now! It seems like Amazon is in the business of selling you a teraflop for $2.10 per hour. This is going to be a big competitive business. I presume I can get a petaflop for a few minutes for under a hundred bucks. Time to start making crypto-crackers!