The Convey machines use FPGAs to augment the CPU's instruction set. From FORTRAN or C source code, their compiler tools create a "personality," which is Convey's way of re-branding the term "configuration."
Steve has a great quote on his site, which sums up everything I've ever written about the FPGA market on this blog: "The architecture which is simpler to program will win." I was once an intern at Xilinx pushing FORTRAN code through F2C (which redefines "barf" in the f2c.h header file) and massaging the resulting C code through an FPGA compiler. As a general rule, most important milestones in compiler technology diminish the need for a summer intern's manual labor. But while I understand the value of demonstrating reconfigurable supercomputing on legacy FORTRAN applications, I am convinced that if FPGA supercomputing has a future, we need to break free of the instruction-stream control-flow model.
I've previously argued that any approach to accelerating sequential imperative languages like C and FORTRAN only extends the von Neumann syndrome, and that we need an explicitly parallel, locality-constrained, dataflow model akin to the spreadsheet if we hope to create scalable applications. Moore's law continues to drive down the cost of a transistor, but the speed of those transistors is limited by a power wall; if we are going to continue the geometric cost reduction of Gigaops, then we need languages and programming models that improve with more transistors, not just with faster transistors.
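To make that contrast concrete, here is a toy sketch in C. It is my own illustration, not Convey's toolflow or any particular dataflow language: the first function threads a serial dependence through an accumulator the way an instruction stream forces you to, while the second expresses the same reduction as spreadsheet-like cells, each a pure function of a fixed, local set of inputs, so every cell at the same depth is free to evaluate in parallel on a fabric.

```c
/* Illustrative sketch only: a serial accumulation vs. a spreadsheet-style
 * dependence tree.  The names and the toy computation are mine. */
#include <stdio.h>

#define N 8

/* von Neumann style: one instruction stream threads a serial dependence
 * through the accumulator, over-specifying the order of the iterations. */
double sum_sequential(const double *x)
{
    double acc = 0.0;
    for (int i = 0; i < N; i++)
        acc += x[i];
    return acc;
}

/* Spreadsheet style: each "cell" depends only on a fixed pair of earlier
 * cells, so cells at the same depth have no mutual dependence. */
double cell_pair(double a, double b) { return a + b; }

double sum_dataflow(const double *x)
{
    /* depth 1: four independent cells */
    double c0 = cell_pair(x[0], x[1]);
    double c1 = cell_pair(x[2], x[3]);
    double c2 = cell_pair(x[4], x[5]);
    double c3 = cell_pair(x[6], x[7]);
    /* depth 2: two independent cells */
    double d0 = cell_pair(c0, c1);
    double d1 = cell_pair(c2, c3);
    /* depth 3: the result cell */
    return cell_pair(d0, d1);
}

int main(void)
{
    double x[N] = {1, 2, 3, 4, 5, 6, 7, 8};
    printf("sequential: %g\n", sum_sequential(x));
    printf("dataflow:   %g\n", sum_dataflow(x));
    return 0;
}
```

The point of the second version is that the dependence structure is explicit and local: more transistors mean more cells evaluating at once, with no heroic compiler analysis of a sequential loop required.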
I agree with Ken Iverson's thesis in "Notation as a Tool of Thought." The Sapir-Whorf Hypothesis holds true in computer languages: we are bound to the von Neumann model by the languages we know and use, and this is what we must change. In many of his talks and papers, Reiner Hartenstein identifies the dichotomy between the von Neumann and reconfigurable computing paradigms, and he argues that the divide must be overcome with an updated computer science curriculum.
Programming models aside, the main thing bugging me about the Convey machine is its price tag: $32,000. Here's what Joe at Scalability has to say about pricing FPGA acceleration reasonably:
One company [at SC08] seemed to have a clue on the FPGA side, basically the pricing for the boards with FPGAs were reasonable. Understand that I found this surprising. The economics of FPGA accelerators have been badly skewed … which IMO served to exclude FPGAs from serious considerations for many users. With the rise in GPU usage for computing, some of the FPGA folks finally started to grasp that you can’t charge 10x node price for 10x single core performance.

I really hope that Convey's increased programmability helps them make huge inroads at this price point so that we can expect super high margins from our products when we launch. I don't know how they will compete with XtremeData, DRC, Nallatech, Celoxica, or whoever decides to do the same thing all these companies do with Achronix chips (hmmm...)
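Joe's 10x-for-10x point is easy to quantify. Here is a back-of-the-envelope sketch; the dollar figures and core count are hypothetical placeholders I've picked for illustration, not vendor numbers, but the ratio is what matters: a node has many cores, so 10x single-core performance at 10x node price is a losing trade.

```c
/* Back-of-the-envelope sketch of the pricing argument quoted above.
 * All figures are hypothetical placeholders, not vendor data. */
#include <stdio.h>

int main(void)
{
    double node_price     = 3000.0;  /* assumed commodity node price       */
    double cores_per_node = 8.0;     /* assumed cores per node             */
    double core_perf      = 1.0;     /* normalize: 1 unit per core         */

    double fpga_price = 10.0 * node_price;  /* "10x node price"            */
    double fpga_perf  = 10.0 * core_perf;   /* "10x single core performance" */

    double node_perf = cores_per_node * core_perf;

    printf("node: %.1f perf / $%.0f = %.5f perf per dollar\n",
           node_perf, node_price, node_perf / node_price);
    printf("fpga: %.1f perf / $%.0f = %.5f perf per dollar\n",
           fpga_perf, fpga_price, fpga_perf / fpga_price);

    /* With these placeholder numbers the FPGA board delivers roughly 8x
     * worse performance per dollar than simply buying more nodes. */
    return 0;
}
```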