Friday, August 29, 2008

Megahard Corp: Open Source EDA as-a-Service

Prescience is a quality of those who act on it:

Building very-large-scale parallel computing structures requires simulation and design-cost analysis whose associated optimizations are MegaHard problems (yes, I'm re-branding NP-Hard). A Software-as-a-Service (SaaS) provider of vendor-neutral simulation, synthesis, mapping, and place-and-route tools could change the electronic design automation (EDA) industry substantially.

If a supercomputer architecture could perform the MegaHard compiler tasks associated with compiling parallel systems in less time than the current tools (seconds instead of minutes or hours), many designers would gladly pay for access. And if the supercomputer uses distributed algorithms, the work can be farmed out: most developers using such a tool would gladly lend their idle cycles to perform graph permutations and compute cost functions for other people's MegaHard jobs.
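To make "graph permutations and cost functions" concrete, here is a minimal sketch (the toy netlist, the data layout, and every name in it are hypothetical) of the kind of work unit an idle machine could evaluate: scoring candidate placement moves by half-perimeter wirelength, the textbook place-and-route cost metric.

```python
# Hypothetical volunteer work unit: score candidate placement moves by
# half-perimeter wirelength (HPWL), a standard place-and-route cost function.
import random

def hpwl(placement, nets):
    """Sum of the half-perimeter bounding boxes, one per net."""
    total = 0
    for net in nets:
        xs = [placement[cell][0] for cell in net]
        ys = [placement[cell][1] for cell in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def move_delta(placement, nets, cell, new_xy):
    """Cost change for one annealing-style move: relocate `cell` to `new_xy`."""
    trial = dict(placement, **{cell: new_xy})
    return hpwl(trial, nets) - hpwl(placement, nets)

if __name__ == "__main__":
    # Toy netlist: three nets over four cells placed on a 10x10 grid.
    nets = [("a", "b"), ("b", "c", "d"), ("a", "d")]
    placement = {c: (random.randrange(10), random.randrange(10)) for c in "abcd"}
    # A volunteer machine would evaluate a batch of moves and report the deltas.
    for _ in range(5):
        cell = random.choice("abcd")
        xy = (random.randrange(10), random.randrange(10))
        print(cell, xy, move_delta(placement, nets, cell, xy))
```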

So let's motivate and create a business model for the tentatively named Megahard Corp.

Megahard Corp. (pronounced "Mega Hardcore") has the opposite business model of Microsoft Corp. Instead of selling closed source software and tools as licensed applications, Megahard provides open-source hardware IP and development tools as a service.

The writing is on the wall: we are transitioning from a personal-computing, single-core desktop world to a shared-computing, multicore/FPGA embedded world. Yet the complexity of parallel system designs, and the computational strain on the tools that compile them, is growing faster than the performance of a practical desktop computer. The decision by EDA firms not to adopt the Application Service Provider (ASP) model probably made sense at some point in the last millennium, but design assumptions are much different when you are connected to a massive parallel computer. Because current tools take much, much longer to compile a design than it takes to transfer the design files, there is room to improve the compiler market by providing electronic design automation tools online.
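To put rough numbers on that claim (every figure below is an illustrative assumption, not a measurement), a quick back-of-envelope comparison:

```python
# Back-of-envelope: shipping a design to a remote compile farm takes seconds,
# while place-and-route takes hours. All numbers are illustrative assumptions.
design_mb = 20        # zipped HDL sources + constraints
uplink_mbps = 5       # 2008-era broadband uplink
bitstream_mb = 4      # configuration file coming back
downlink_mbps = 20

transfer_s = design_mb * 8 / uplink_mbps + bitstream_mb * 8 / downlink_mbps
compile_s = 2 * 60 * 60   # a long place-and-route run on a big FPGA

print(f"transfer ~{transfer_s:.0f} s vs. compile ~{compile_s} s "
      f"(~{compile_s / transfer_s:.0f}x longer)")
```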

So here's the business plan: build a supercomputer and code it to run a bunch of FPGA/multicore compilers and simulators really well. Because it's a supercomputer, it should make the problem I/O bound: we can tell people we've got a machine that takes in source and spits out configurations in about as long as it takes to upload the source diff and download the configuration diff. And since we have a really big supercomputer with lots of hardware all over the place, you can also target your application to some of our bored metal and provide your own service under our framework. (Sassafras is a good name for a SaaS framework, and it can have a leaf-shaped logo.)
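For illustration, here is a minimal client sketch of that upload-diff/download-diff loop; the host name, endpoints, and wire format are all invented for the example, not a spec:

```python
# Hypothetical Megahard client: send only the source diff, fetch only the
# configuration diff. Every URL and format below is invented for illustration.
import urllib.request

SERVICE = "https://compile.megahard.example/api/v1"

def submit_source_diff(project: str, diff: bytes) -> str:
    """POST a unified diff of the HDL sources; the service returns a job id."""
    req = urllib.request.Request(
        f"{SERVICE}/projects/{project}/jobs",
        data=diff,
        headers={"Content-Type": "text/x-diff"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

def fetch_config_diff(project: str, job_id: str) -> bytes:
    """GET the binary patch to apply to the locally cached configuration."""
    url = f"{SERVICE}/projects/{project}/jobs/{job_id}/config.diff"
    with urllib.request.urlopen(url) as resp:
        return resp.read()

if __name__ == "__main__":
    job = submit_source_diff("counter_demo", open("changes.patch", "rb").read())
    open("top.bit.diff", "wb").write(fetch_config_diff("counter_demo", job))
```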

Since we're thoroughly indoctrinated, we'll build our system as Free software and use the existing open source compiler infrastructure too. Being open source means our tool won't be made obsolete by an eventual open source standard (read that again so you get it). Open source frameworks also let capable users contribute to product development, and wouldn't you know it, the entire EDA market happens to have degrees in computer science, electrical engineering, mathematics, and physics.

There's also that recurring comp.arch.fpga thread complaining about the current tools, the lack of open source tools, and the lack of documentation of the configuration format and interface -- someone should appeal to these people, because they have a real point: technology is inhibited by monopolized knowledge. It's harder for features to improve when they are part of a closed-source package, because you are automatically limiting the number of developers who can improve them (and the situation worsens as you lay off more developers).

Another benefit of a SaaS tool: your users don't have to waste time upgrading. The only way to convince anyone to sit through a software upgrade is to announce new features that claim to make the product better, which is why users wait for the second iteration of a major release. A SaaS provider can roll out new features and test the waters at a much higher iteration rate.

When I use the term open source, I mean "free" as in freedom, but it's important to point out that previous forays into open source FPGA tools have failed and disappeared because they were merely "open source" and not "free." When I was just starting to care about these sorts of things, GOSPL was being hyped up and was eventually canceled. The code is nowhere to be found because access to the source was restricted by invite-only membership for some nonsense reason: membership exclusivity maintains the integrity of the organization. When a giant corporation disingenuously uses "open source" as a marketing term for non-free software, the project is destined (and deserves) to fail as long as the so-called "open" source is never actually provided to freedom-loving eyes.

On the other hand, free software like Debian, or free hardware IP like OpenCores, won't stop being useful just because some organization stops funding it. Free software projects never die; they just go unmaintained.

Besides, Megahard Corp. follows a SaaS model, so the profit comes from developing and maintaining a parallel supercomputer system that runs the free compiler software, not from distributing that software. Good free software projects encourage users to "help make it better" instead of "help make us richer," though improving the product is a good way to increase sales. A sufficiently better compilation service is probably more profitable than selling licenses anyway -- I would certainly pay more than I'm currently paying for proprietary licenses if I could get access to a supercomputer that compiles my FPGA designs faster.

One major problem: if you aren't Altera, Xilinx, Lattice, Actel, Achronix, Tilera, IBM, AMD, Intel, etc., then making low-level multicore and FPGA tools requires a whole bunch of reverse engineering and "MegaHard Work" (TM). It's probably technically easier to design an FPGA from the bottom up than to reverse engineer the architecture of an existing chip and build a new tool-chain for it.
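To give a flavor of what that reverse engineering involves, here is a sketch of the brute-force loop it implies (the file names and the one-change-at-a-time approach are my assumptions, not a documented flow): change exactly one feature in a design, rebuild with the vendor tools, and diff the two bitstreams to see which configuration bits that feature controls.

```python
# Sketch of bitstream reverse engineering by differential analysis: compare a
# baseline bitstream against one rebuilt with a single design change, and list
# every configuration bit that moved. File names here are hypothetical.
def changed_bits(path_a: str, path_b: str):
    """Return (byte_offset, bit_index) for every bit that differs."""
    a = open(path_a, "rb").read()
    b = open(path_b, "rb").read()
    assert len(a) == len(b), "expected equal-length bitstreams"
    diffs = []
    for offset, (x, y) in enumerate(zip(a, b)):
        delta = x ^ y
        for bit in range(8):
            if delta & (1 << bit):
                diffs.append((offset, bit))
    return diffs

if __name__ == "__main__":
    # e.g. the same design rebuilt with a single LUT init value flipped
    for offset, bit in changed_bits("baseline.bit", "one_lut_flipped.bit"):
        print(f"byte {offset:#x}, bit {bit}")
```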

Open-source compilers and reverse-engineered, vendor-neutral FPGA tools have both been successful endeavors in the past. Providing hardware-as-a-service is still a "cloudy" future, but there are plenty of recent examples where a vendor came into a market ripe for SaaS-ification and redefined it. I expect that the MegaHard problems associated with design automation make EDA ripe for a SaaS provider to change the game.

----
(Edit 9/2) If you find positions contrary to the Open Source EDA SaaS argument, please share them with me. Here's an old interview with Grant Martin, Chief Scientist of Tensilica, who argues that we should not hold our breath for Open Source EDA and specifically says:

I think the other issue with EDA is in terms of the general number of users. I don't think there's a large enough number of users in any particular sub-category of tools to really support very much open source development. Open source requires a big community involvement, plus ancillary things being built around that environment to attract companies into the effort.

EDA is still too small to make open source the general model. The idea that all tools will be open source, and that all EDA companies would evolve to a service business model, is unlikely and makes no sense.

....

The business incentives just aren't there to motivate those efforts in an open source environment.

I tend not to trust the beliefs of people who were born before the Internet (aka the over-30 crowd). I think he's missing the perhaps non-obvious point that a service business model can subvert the traditional software business model by offering a faster software service with smoother new-feature roll-outs (perhaps when he says "service business model" he thinks RedHat instead of Google Apps). I also know from my prior immersion at CSAIL that open source software development does NOT require users or a big community to be involved in development; it only requires one indefatigable iconoclast chipping away at the status quo for his own personal reasons, which are often incongruous with the profit motive. When multiple hackers create disjoint useful tools that converge into a single product, user communities start to form, propel that framework further, and potentially crack the existing market open.

I wonder how anyone can reject the service business model as the future of basically ANY computer tool. Seeing that future no longer requires prescience, just patience. FPGA-accelerated computing will increase the number of EDA users, but none of these users will ever identify with the existing EDA market (there's far too much FUD associated with the "hardware" development they traditionally sell: programmable arrays are more than reconfigurable HDL executors).
(end edit)

5 comments:

ss said...

Amir, MegaHard is a profound and powerful concept. The notion of SaaS’y RC tools has been floated before: but you’ve written it down and explained the utility. Thanks for putting this up in the clear! It will be interesting to see what swipes will be taken to dilute or dispute your premise. –Shep Siegel

Amir said...

every vendor will eventually provide their tools as a service. the question is whether they can reasonably compete with an open source project running on a hardware cloud.

The closed source competitor won't get anyone to offer idle cycles.

Anonymous said...

Very interesting article! The reader may be interested in SpectaReg.com, an EDA SaaS for simplifying, collaborating on, and automating addressable register interfaces. It runs in a web browser.

Anonymous said...

Interesting and passionate post. I've been exploring SaaS and Cloud Computing for EDA on my blog recently as well, although you certainly seem to have a more in-depth knowledge of the software development process and open source. I'm coming at it as someone who was in EDA (Synopsys) for many years.

Although I'm part of the "over 30" crowd, I've been somewhat of a heretic (term borrowed from Seth Godin) and completely agree that most "over 30s" don't get the open source ideal, that people will work for free if they get satisfaction out of it.

At the same time, there is one issue that I think needs exploration. Let's say I am a design engineer and I help write open source software in my spare time. What, if any, legal claim might my employer have on that work? Most employees sign an agreement that assigns all rights to their employer. Now some employers may look the other way or even encourage employees to contribute to open source developments. But most will want to enforce these rights if the result has any commercial value. This needs to be addressed before most engineers can freely contribute.

On the matter of massively parallel processing, I am all over that idea. That is something that a cloud computing environment can do that an enterprise cluster cannot. If I can create a tool to run simulations across hundreds of CPUs, or place and route, or STA, and achieve overnight turnaround on tasks that normally require a week, then I have a tremendous value proposition.

James Colgan said...

Hi Amir,
Just caught your post in a search. A great thrust in the right direction. Fortunately, you don't need to go out there and develop a supercomputer to do this - we at Xuropa have technology to leverage existing hardware infrastructure to make the concept you describe a reality.
There's also the Fedora Electronic Lab (FEL) open source EDA tools effort that could be used as a starting point for the flow.

There's a tremendous amount to this topic that I started to touch on in a post of my own back in December. There is a business here, and I believe there is a way for the large EDA vendors to participate and grow their profitability (rather than hurt their franchise).

Rest assured though - while this transition will eventually occur, it will be gradual and not a "step-function".

- James