FPGA Journal recently published an article by Michael R. D'Amour of DRC Computer titled "Reconfigurable Computing for Acceleration in HPC." It's refreshing to see other FPGA computing enthusiasts populating the internet. The article discusses the marketable benefits of FPGA accelerators and the barriers this field has traditionally faced in its attempts to break into the mainstream. Early in his article, he hits the nail on the head:
"While there were many reasons for this lack of early widespread acceptance, one major issue has been a lack of standards surrounding reconfigurable processors and the absence of a standardized system architecture that can effectively employ them."
Standardization, and open standardization in particular, is the only way a technology can mature from a high-margin, low-volume market to a low-margin, high-volume market. We've already seen FPGAs win embedded markets by integrating standard microcontroller architectures. The reason technological maturation requires standards is that the transition requires competition. Standardization provides parameters for competition; proprietary standards simply erect other barriers to entry (NDAs or reverse engineering). Fortunately, there are enough of us who think that competing over a small pie is less fun than making the pie bigger, which is the answer to the skeptic's question of why AMD would want open bus standards if they allow someone to compete AMD off its own bus. AMD is growing an ecosystem in the process, but since it viably competes in the more standard and much bigger x86 market, it really doesn't have much to fear from FPGAs, which are still mired in proprietary programming tools. Still, FPGA computing is growing its niches, and since many people have been working on these standardization problems, Michael is somewhat justified in posing this outlook:
"Many of the technological barriers to widespread use of reconfigurable computers have been overcome, and with standards-based reconfigurable interfaces and hardware plus a growing body of standard language compilers to support the technology, reconfigurable computing is poised to break through as a viable solution for a wide range of commercial and HPC applications."
This statement is mostly accurate since it starts with the word "many." Certainly DRC deserves a lot of credit for producing a viable co-processor following AMD's open bus specs. An equally accurate statement would go like this: many of the technological barriers to widespread use of reconfigurable computers are a long way from being overcome, with proprietary hardware interfaces and expensive development environments impeding development of viable solutions for commercial and HPC applications.
Indeed, Michael approximates this sentiment in his article while explaining why RC hasn't caught on. The problem is: when do we acknowledge the transition has happened? Is it by popular vote of the bloggers? Because my casual observation is that FPGAs are losing to GPUs and multicores there.
I think it has a lot to do with accessibility of the technology. You actually need people building applications, and right now it's pretty darn tedious and expensive to build any substantial FPGA design (check out the project videos of 300+ MIT students who mostly agree). Even after you do build a large design, it's non-trivial to target a new architecture. FPGA acceleration requires FPGA acceleration, or at least some semblance of iterative dynamism. I'll know the reconfigurable computing wave is coming when an FPGA is running its own open source tool-chain, because that means Xilinx or Altera or somebody will actually be drinking from their own juice-boxes. This is not sarcasm: why should I buy reconfigurable computing if Xilinx and Altera don't even buy it yet?
Languages and tools for FPGAs are perhaps getting better for the HPC niche, but this is only a start, and I will continue to deny that reconfigurable computing has a standard language until someone shows me a language that supports reconfiguration as a primitive notion. I told someone who works at Bluespec about my variant that allows dynamic elaboration and dynamic reassignment of modules, and he wondered how that could be useful since it would take so long to re-target the device after any dynamic change. The "field programmable" thing is still an inside joke among reconfigurable computing enthusiasts, but as I've said before, marketing reconfigurability is a good way for a computing startup to find no market at all. Everyone at Xilinx knows this, because everyone who works there has at some point cynically made the connection between reconfigurable computing and Gallium Arsenide. Verification and low-volume embedded are the FPGA markets, and FPGAs pwned microcontrollers to take the latter, but will an FPGA driving general-purpose computing always be a thing of the future?
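To make "reconfiguration as a primitive notion" concrete, here is a toy sketch in plain Python (not Bluespec, not any real HDL; the Fabric and Module names are made up purely for illustration) of a design where modules are first-class values that can be elaborated and reassigned while the system runs:

```python
# A toy model of reconfiguration as a language primitive. This is an
# illustration only; it does not reflect Bluespec or any real tool-chain.

class Module:
    """A hardware module modeled as a named transfer function."""
    def __init__(self, name, behavior):
        self.name = name
        self.behavior = behavior  # a callable: inputs -> outputs

class Fabric:
    """A stand-in for the reconfigurable fabric: a mutable mapping
    from slot names to elaborated modules."""
    def __init__(self):
        self.slots = {}

    def elaborate(self, slot, module):
        # Dynamic elaboration: the binding happens at run time,
        # so a design can grow new structure on the fly.
        self.slots[slot] = module

    def reassign(self, slot, module):
        # Dynamic reassignment: swap the module occupying a slot.
        # On real silicon this is where the slow re-targeting cost
        # the Bluespec folks worried about would be paid.
        self.slots[slot] = module

    def run(self, slot, *inputs):
        return self.slots[slot].behavior(*inputs)

fabric = Fabric()
fabric.elaborate("accel", Module("adder", lambda a, b: a + b))
print(fabric.run("accel", 2, 3))   # -> 5
fabric.reassign("accel", Module("mult", lambda a, b: a * b))
print(fabric.run("accel", 2, 3))   # -> 6
```

The Bluespec objection lands squarely on the reassign step: in real silicon that swap means re-synthesis and re-placement, which is exactly why the tool-chain itself has to get faster before this style of design is practical.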
Despite the tools and proprietary architectures, we will start to see profitable FPGA computing applications pick up, especially where deterministic latency is a big win-factor or where the speedup potential is several orders of magnitude. But there is a long and bumpy road before FPGAs break through to a wide range of potential applications. What's really holding back FPGA computing is more than just standards: it's the inertia of the EDA industry and its market model.
The economic and perhaps cultural barriers for FPGA co-processors are nearly as complex as the technological barriers. The real, and ironic, enemies of FPGA accelerated computing are Xilinx, Altera, and the entrenched FPGA EDA industry, who are most responsible for the proprietary nature of the current toolset and architectures. These organizations are actually somewhat threatened by a disturbance to their high margins from the embedded system and verification markets. Breaking through into computing applications means higher volumes of lower-cost hardware and similarly cheap and familiar tools that don't require special training. If the cost structure doesn't change, then the ROI will rarely be found. Indeed, Xilinx and Altera have resisted commoditization of their high-end components, and they keep their low-level APIs mostly under wraps. Their software is free as in beer, but it isn't Free as in freedom yet. If low-level FPGA computing research requires an NDA to access the binary formats, then an iconoclastic high-school student can't just look it up on the net and hack together a compiler over a rainy weekend in Seattle. Didn't Microsoft just open their binary formats? Maybe they could start a trend?
The FPGA EDA community is going to be shaken by a different market model that doesn't take kindly to expensive proprietary tools. The potential scale of a viable FPGA computing industry could compete out of existence many current EDA business models that fail to adapt.
Edit on March 3:
Michael's article originally appeared on July 13, 2007 in HPCwire.
FCCM '07 had a paper which mentioned how proprietary formats have stifled FPGA computing research.
There is also a paper and website about reverse-engineering bitstream formats.
And there's a comp.arch.fpga thread from 2000 about "FPGA openness" (Google for it).
2 comments:
Amir,
Good stuff. I agree with you that we're not there yet, and it's going to take standards to get where we're going.
I think it's going to take years to get there, but we just need to do it, one step at a time.
I think with things like CHREC, we should start to see at least the academic projects starting to pull together. I think we'll see the academics working on standards before all the vendors work together. Hats off to SRC, though; they seem really committed to standards, possibly the most of all the vendors.
Hi Robin,
I think if FPGAs can break through the volume barriers, then they'll become economically competitive in more markets. The logic is: if the cost of a chip depends on the volume of chips sold, then the GOPs/$ efficiency is volume dependent. As more applications benefit from FPGA acceleration, the volume, and hence the GOPs/$ value, will increase. As the cost barriers lower, FPGA computing will become more and more commonplace.
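To spell that logic out, here's a back-of-the-envelope sketch in Python; the NRE, marginal-cost, and GOPs numbers are invented purely to show the shape of the curve, not to describe any real part:

```python
# Back-of-the-envelope volume argument. All numbers are invented for
# illustration; only the shape of the curve matters.

def unit_cost(volume, nre=50e6, marginal=40.0):
    """Amortize fixed (NRE) costs over the number of chips sold."""
    return nre / volume + marginal

def gops_per_dollar(volume, gops=100.0):
    """GOPs/$ improves as volume dilutes the fixed costs."""
    return gops / unit_cost(volume)

for v in (10_000, 100_000, 1_000_000, 10_000_000):
    print(f"{v:>10,} units -> {gops_per_dollar(v):.2f} GOPs/$")
```

The point is the asymptote: GOPs/$ climbs steeply with volume until only the marginal silicon cost is left, which is exactly the regime the volume barrier is guarding.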
Dan Poznanovic of SRC has led the OpenFPGA CoreLib working group and deserves our gratitude for pushing forward on that front.