Monday, August 24, 2009

Intel Buying Everyone

In the past month, Intel has purchased RapidMind and Cilk Arts. I talked about Cilk on this blog a while ago (that post has comments from one of its founders).

This was a good move for Intel. It is probably an attempt to make the eventual release of Larrabee less painful for developers, and it should help put Intel in the driver's seat for parallel programming platforms.

What will this mean for CUDA and OpenCL? (Full disclosure: I own shares in Nvidia).

RapidMind and Cilk are both easier platforms to use than Nvidia's CUDA, but the sheer number of teraflops already available across CUDA-capable nodes makes CUDA attractive. Intel still needs silicon to compete with CUDA. RapidMind and Cilk will give Intel's silicon a much more flexible programming model than CUDA gives Nvidia's GPUs, complementing the fact that Intel's silicon will also be a much more flexible architecture than Nvidia's GPUs.

Cilk and RapidMind will simplify some of the work of parallelizing library functions, but Intel will be hard-pressed to compete with Nvidia on cost/performance in any application with a strong CUDA library. Nvidia GPUs are already cheap: Intel will have to use its insane operating leverage to compete in the accelerator market on a cost/performance basis. Intel can also win this market from Nvidia by getting its latest integrated graphics chips into all the newest machines, and generally by doing things that piss off antitrust prosecutors.

I'm not very hopeful for OpenCL. Unless Nvidia decides to abandon CUDA or make it isomorphic with OpenCL, OpenCL is DOA. Apple's dependency on Intel means they will eventually find Zen in whatever platform Intel offers them. AMD is the first, and will probably be the only one, to support this "Open Standard" for GPGPU and multicore. Unfortunately, they will find themselves the leader in a very small market. AMD needs to focus on crushing Intel in the server market by getting to 32 nm first and releasing octo-core Opterons.

This will be interesting to watch unfold.


RPG said...

"Unless Nvidia decides to abandon CUDA or make it isomorphic with OpenCL, OpenCL is DOA."

I strongly disagree with you on this. I would say that once we have decent OpenCL implementations for the GPU, CUDA is dead. Why would anyone write code for only one vendor when you can have the choice of three? And no, OpenCL is not at a lower level than CUDA. It is at the bare minimum level it needs to be. Real work would probably end up using a simplified wrapper library over it.

Amir said...

Both CUDA and OpenCL will be hidden at the bottom of libraries only to be suffered by the brave. The people will use Excel.

OpenCL is like CUDA, just two years late. People will write things for CUDA because they can do it now without waiting. People are buying Teslas more rapidly than Nvidia can ship them. I do not think people are buying them to do work with OpenCL.

If Cilk and RapidMind were still vendor-neutral, then OpenCL might have gotten somewhere. Now, OpenCL only helps AMD by giving them a way to compete with Nvidia and Intel. I don't think Intel and Nvidia will produce libraries in OpenCL and help AMD. Go look at the language Nvidia uses when discussing OpenCL.

I think Apple is the only real force that can pull OpenCL out of the reality distortion field they lobbed it into.

RPG said...

"People will write things for CUDA because they can do it now without waiting. People are buying Teslas more rapidly than Nvidia can ship them."

You think people will be writing CUDA over OpenCL a year from now? (By then we'll have mature implementations from all three vendors.)

AFAIK, Cg came out before GLSL. How well did that go?

Amir said...

I really do hope someone emerges as a hardware-agnostic vendor of parallel computing libraries. I just don't see why anyone would ever use OpenCL (or CUDA, for that matter) over something like Cilk or RapidMind. Cilk might have targeted OpenCL, but why would Intel spend any time optimizing for anything but Larrabee?

I don't think all the CUDA functions that have already been written will be reimplemented in OpenCL within a year, so we'll probably still be using CUDA a year from now. I bet PhysX will still be in CUDA.

Again, "People" don't use OpenCL or CUDA. These are tools for Robots. Robots like you and me. Robots can't wait for hype when the need to do a Singular Value Decomposition of a Wavelet Transform arises.

Anonymous said...

Tools follow market. There are already 50 million CUDA-compatible GPUs installed in the marketplace. A programmer should not ignore a potential market of that size. Even if a vastly superior multithreading processor appears on the market, the number of people who purchase the improved processor will drive the attractiveness of development. In short, I won't bother to code for a market which is too small and has limited profitability. I will use whatever tools I must to access a larger and more financially rewarding market. It is great that there are products which can develop for a large collection of devices. Quite frankly, my greedy self will ignore some of the other processors with a limited install base.