
Thursday, February 04, 2010

More on SaaS and EDA

EDA SaaS has been a recurring topic on many of the blogs I follow. Harry (the asic guy) wrote about the Blooming of an EDA SaaS Revolution in his first post at Xuropa. He says that the revolution "depends on a confluence of critical technologies." He also writes that the coming revolution will level the playing field, allowing smaller EDA firms to compete. The same economics-of-sharing model that gives us SETI-like distributed computing, fabless semiconductor companies, and open source software has the potential to impact the EDA industry in a big way. I suspect that lowering the barrier to entry for design tools will ripple into the broader electronics industry.

With SaaS, the fixed cost of high-performance computing infrastructure becomes a variable cost. For a lean team of engineers, this can cut the cost of compute resources substantially: who needs a $10,000 server when you only need 5,000 CPU-hours, which might cost half as much from a cloud provider, with the added bonus of being able to use 100 CPUs simultaneously on demand? This kind of computing power can substantially increase the throughput of large, complex simulations and optimizations. I am currently paying for software licenses and infrastructure, but I would much prefer an on-demand high-performance computer to run my simulations.
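
To make the buy-versus-rent arithmetic concrete, here is a minimal sketch in C. The $1 per CPU-hour rate is my own assumption, picked so that 5,000 CPU-hours come out to half the price of the $10,000 server mentioned above:

    #include <stdio.h>

    int main(void) {
        double server_cost  = 10000.0; /* fixed cost of owning the box (from the post) */
        double cloud_rate   = 1.0;     /* assumed $/CPU-hour on demand */
        double hours_needed = 5000.0;  /* CPU-hours the project actually uses */

        printf("own:  $%.0f\n", server_cost);
        printf("rent: $%.0f\n", hours_needed * cloud_rate);

        /* Renting wins whenever your workload falls below this threshold. */
        printf("break-even at %.0f CPU-hours\n", server_cost / cloud_rate);
        return 0;
    }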

SaaS providers do not necessarily need to host their own infrastructure; they can outsource some or all of their infrastructure requirements to third parties. Communication latency, job size, and per-job work are the major factors determining how tightly coupled a distributed computation network must be. If the data transmitted for each job is O(N) and the work to process it is O(N^2), then communication latency becomes negligible as N grows. Large correlation workloads like SETI are practical over widely distributed networks for exactly this reason.
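
Here is a toy calculation of that scaling argument, with made-up bandwidth and throughput numbers; the only point is that the compute-to-transfer ratio grows linearly with N:

    #include <stdio.h>

    int main(void) {
        double bytes_per_sec = 1e6; /* assumed link bandwidth (illustrative) */
        double ops_per_sec   = 1e9; /* assumed compute rate of one node (illustrative) */

        for (double n = 1e3; n <= 1e9; n *= 1000.0) {
            double t_xfer    = n / bytes_per_sec;   /* O(N) to move the data */
            double t_compute = n * n / ops_per_sec; /* O(N^2) to process it  */
            printf("N=%.0e: compute/transfer = %.2e\n", n, t_compute / t_xfer);
        }
        return 0;
    }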

Raw high-latency compute power can be provided by a widely distributed cloud: instead of searching for ET, your PC can help me make timing. Certain companies will pay to use your machine for distributed computing projects. These companies may act as middleman brokers, leasing out computing power in scheduled blocks to a SaaS provider. A SaaS provider that notices its customers work from 2-6 AM, for example, might purchase large blocks of cheap computing power during those hours in each time zone, and form agreements with other cloud infrastructure providers to handle excess demand. This creates an interesting set of insurances and contracts among infrastructure providers, SaaS providers, and end users. An infrastructure firm may lease out its unreserved excess capacity for short intervals at short notice for a low price, while an end user who requires a large supply of CPUs for a long interval on short notice will pay a premium. To simplify matters, a SaaS provider might take jobs from users who have purchased a certain number of CPU-hour "tickets" in advance. These tickets have expiration dates and exchange values depending on whether you want 3600-CPUs-for-a-second or 1-CPU-for-an-hour.
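
A toy pricing function for those tickets might look like the following sketch. The base rate, burst premium, and reservation discount are invented for illustration, not taken from any real provider:

    #include <stdio.h>

    /* Toy pricing for CPU-hour "tickets": the same total CPU-hours cost
       more when demanded all at once on short notice. All constants here
       are made-up assumptions. */
    double ticket_price(double cpus, double hours, double notice_hours) {
        double base    = 0.10 * cpus * hours;                 /* assumed $/CPU-hour */
        double burst   = 1.0 + cpus / 1000.0;                 /* premium for parallelism */
        double reserve = notice_hours >= 24.0 ? 0.8 : 1.0;    /* discount for booking ahead */
        return base * burst * reserve;
    }

    int main(void) {
        /* 3600-CPUs-for-a-second and 1-CPU-for-an-hour are both one
           CPU-hour, but they price differently under this model. */
        printf("burst:  $%.2f\n", ticket_price(3600.0, 1.0 / 3600.0, 0.0));
        printf("steady: $%.2f\n", ticket_price(1.0, 1.0, 48.0));
        return 0;
    }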

For software providers, supporting hosted applications is different from supporting installed programs. Issues are much easier to isolate: since users never install the software, if something is broken with a user's experience, it is definitely the fault of the software provider and not a virus, an out-of-memory error, or a hardware incompatibility. Suppose the EDA industry wanted to support multicore, FPGA, or GPU accelerators for certain tasks. By hosting the accelerated infrastructure and providing the software as a service, vendors would avoid the hardware compatibility issues of shipping that software to third parties.

Product iteration can be much faster, since new features can be tested without requiring users to install anything. Additionally, all bugs can be reported back to the developers without requiring any action from the user. And of course, you shouldn't force new features on people, or you'll get the same negative response that erupts whenever Facebook changes its layout or adds new features. Facebook users can't complain with their checkbooks (join my "Leave Facebook when One Million People Join This Group" group).

Security policies of EDA end users are frequently cited as the major barrier to SaaS, indeed as the sole criterion for rejecting a hosted provider model. If secrecy is an important factor for a design, it is unlikely that a SaaS tool will be used. This is not due to some fundamental security issue with hosting an application remotely instead of locally; it is mostly the irrational insecurity of users who think it is harder to break into their locally hosted computers than into a remote facility. Indeed, a large remotely hosted virtual private network may have better security than anything hosted locally, because a large infrastructure provider can afford to devote more attention to the problem.

Still, because of this paranoia, FPGA design teams and tools will be the first to transition to SaaS: first because I suspect FPGA design teams value secrecy less than they value lower fixed costs and lower simulation latency, and second because FPGA tools are used by many more people than ASIC tools. Of course, tools like a distributed digital logic simulation engine will be useful in both ASIC and FPGA design.

Decreasing the barrier to entry for electronic design tools, along with decreasing simulation latency, will lead to more people making more designs. The availability of more reconfigurable logic designs increases the value of the FPGA superlinearly: the total value of programmable logic cores derives from the many ways they can be combined, so the value of an FPGA grows superlinearly with the number of available cores, much like Metcalfe's Law.
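
The combinatorial version of that claim, as a quick sketch: if every pair of available cores can be usefully combined into a design, the number of combinations grows as n(n-1)/2, which is roughly n squared:

    #include <stdio.h>

    int main(void) {
        /* Pairwise combinations of n cores: the simplest Metcalfe-style
           model of superlinear value. */
        for (long long n = 10; n <= 100000; n *= 10)
            printf("%6lld cores -> %lld pairwise combinations\n",
                   n, n * (n - 1) / 2);
        return 0;
    }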

So how will this happen? A fairly consistent vision is shared by the people blogging about EDA SaaS, and the EDA industry must certainly be paying attention to the cloud computing meme and what SaaS means for its business. Unfortunately, the economic barrier to entry is only a small aspect of the problem facing the programmable logic market: FPGA tools need to be more accessible to non-expert users. My vision is a tool flow combining incremental synthesis with dynamic partial reconfiguration that lets you program your array just like a spreadsheet.

Thursday, July 23, 2009

Demand for EDA SaaS

It's been a while; I promise I'll post more when I'm finished with my job.

The blogs have been buzzing about EDA SaaS ("Electronic Design Automation Software as a Service"). In a previous post on the subject, I argued that the complexity of system design is growing faster than the capability of a reasonable desktop computer, and that this will create demand for hosted EDA tools. I also argued that the ease of rolling out new features and upgrading users to the latest version is a major selling point for EDA SaaS. My experience this past week produced anecdotal evidence for both points.

I'm coming to the final stages of a PDP-11/70 emulator design, where I have it running test software in simulation. I was running Xilinx ISE 10.1, and about an hour and a half into synthesis I got an out-of-memory error from XST:
ERROR:Portability:3 - This Xilinx application has run out of memory or has encountered a memory conflict. Current memory usage is 2090436 kb. You can try increasing your system's physical or virtual memory. For technical support on this issue, please open a WebCase with this project attached at http://www.xilinx.com/support.

Process "Synthesis" failed
I searched for help on this error and discovered from the Xilinx forums that you can add the /3GB option to a 32-bit Windows machine's boot.ini (shown below). A reboot and a couple of hours later, I got the same message, only with a larger number for the current memory usage at the time of failure. Before starting to partition my design (something I'll have to do eventually anyway to increase my iteration rate during timing closure), I decided to give it a try on a 64-bit Vista machine. It compiled, after several hours, using some ungodly amount of memory.
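
For reference, the forum fix amounts to appending /3GB to the operating system entry in boot.ini; the exact device path and OS name below are illustrative and will differ on your machine:

    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /3GB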

I decided to install Xilinx ISE 11.1 on the 32-bit machine and give it a try. After an hour-long installation I had 11.1 running, and after another hour downloading an automatic update to 11.2 I was ready to go. Running 11.2, the 32-bit machine compiles my design within the 3 GB memory limit.

These problems don't exist in a future world where EDA tools are provided as a service. If synthesis tools were hosted on some humongous supercomputer, then I wouldn't run out of memory and I wouldn't have to install any software updates. Since the synthesis optimizations and place-and-route can be parallelized across a thousand cores, I could even get my results in less than a couple of hours.

Anyone want to do this?

---------
Edit 7/24

Another benefit of hosted EDA tools is that errors can be reported directly to the software vendor. This means your hosted software won't have dozens of users experiencing the same error without anyone telling the vendor.

I started partitioning my design today, and got a wonderfully meaningless error:

INTERNAL_ERROR:Xst:cmain.c:3446:1.47.6.1 - To resolve this error, please consult the Answers Database and other online resources at http://support.xilinx.com

Obviously Xilinx doesn't provide cmain.c as open source, so I can't figure out what I'm upsetting in the source code. Googling reveals that the Xilinx forums have nothing useful to say about this bug, but I discovered I am not alone: another Israeli blogger has hit the same error and has similar gripes with ISE.

There are thousands of business opportunities that can be created by appending "...that doesn't suck" to the description of an existing product.

Monday, December 29, 2008

C-to-Verilog.com provides C to HDL as a service

If you follow this blog, then you've read my ramblings on EDA SaaS. An interesting new advance in this area is http://www.c-to-verilog.com. This website lets you compile C code into synthesizable Verilog modules.
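
As a flavor of what such tools accept, here is a toy C function of the sort that C-to-HDL compilers typically handle well (fixed loop bounds, no dynamic memory); this example is my own illustration, not taken from the site:

    /* A toy candidate for C-to-HDL compilation: the fixed trip count and
       static arrays let a tool unroll or pipeline the loop in hardware.
       This function is my illustration, not a c-to-verilog.com sample. */
    int dot_product(const int a[8], const int b[8]) {
        int acc = 0;
        for (int i = 0; i < 8; i++)
            acc += a[i] * b[i];
        return acc;
    }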

Here is a screencast of the service:

[embedded screencast]

I've also discovered a few other websites related to EDA, SaaS, and "cloud computing." Harry the asic guy's blog covers the burgeoning EDA SaaS market, and Xuropa is creating an online community for EDA users and developers. Here's a two-part EETimes piece about EDA SaaS.

I already use a remote desktop connection to run most of my EDA jobs remotely. I've argued that there is a "generation gap" between the SaaS and traditional software license markets. The people who coded in university basements when they were 13 in the '60s and '70s invented the software licensing industry. Now the people who herded botnets when they were 13 are graduating with computer science degrees. The nu-hacker distributes code updates to all his users immediately, without forcing anyone to wait through an install.

Friday, August 29, 2008

Megahard Corp: Open Source EDA as-a-Service

Prescience is a quality of those who act on it:

Building very-large-scale parallel computing structures requires simulation and design-cost analysis whose associated optimizations are MegaHard problems (yes, I'm re-branding NP-Hard). A Software-as-a-Service (SaaS) provider of vendor-neutral simulation, synthesis, mapping, and place-and-route tools could change the electronic design automation industry substantially.

If a supercomputer architecture could perform the MegaHard compiler tasks associated with compiling parallel systems in less time than the current tools (seconds instead of minutes or hours), many designers would gladly pay for access. If the supercomputer uses distributed algorithms, then many people may offer spare cycles to do parts of your MegaHard jobs: most developers using such a tool would gladly lend their idle cycles to perform graph permutations and compute cost functions.
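
As a sketch of the kind of cost function a distributed worker might evaluate, here is the classic half-perimeter wirelength estimate for one net of a candidate placement; the data structures are my own simplification, not any tool's actual format:

    #include <stdio.h>

    typedef struct { int x, y; } Cell;

    /* Half-perimeter wirelength (HPWL) of one net: the bounding box of
       the net's cells approximates its routed length. A worker scores a
       candidate placement by summing this over all nets. */
    int wirelength(const Cell *cells, const int *net, int pins) {
        int xmin = cells[net[0]].x, xmax = xmin;
        int ymin = cells[net[0]].y, ymax = ymin;
        for (int i = 1; i < pins; i++) {
            Cell c = cells[net[i]];
            if (c.x < xmin) xmin = c.x;
            if (c.x > xmax) xmax = c.x;
            if (c.y < ymin) ymin = c.y;
            if (c.y > ymax) ymax = c.y;
        }
        return (xmax - xmin) + (ymax - ymin);
    }

    int main(void) {
        /* A made-up four-pin net on a tiny grid. */
        Cell placement[4] = { {0, 0}, {3, 1}, {1, 4}, {2, 2} };
        int net[4] = { 0, 1, 2, 3 };
        printf("HPWL = %d\n", wirelength(placement, net, 4));
        return 0;
    }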

So let's motivate and create a business model for the tentatively named Megahard Corp.

Megahard Corp. (pronounced "Mega Hardcore") has the opposite business model of Microsoft Corp. Instead of selling closed source software and tools as licensed applications, Megahard provides open-source hardware IP and development tools as a service.

The writing is on the wall: we are transitioning from a personal-computing, single-core desktop world to a shared-computing, multicore/FPGA embedded world. Yet the complexity of parallel system designs, and the computational strain on the tools that compile them, is growing faster than the performance of a practical desktop computer. The decision for EDA firms to adopt the Application Service Provider (ASP) model probably already made sense at some point in the last millennium: design assumptions are much different when you are connected to a massive parallel computer. Because current tools take much, much longer to compile a design than to transfer its files, there is potential to improve the compiler market by providing electronic design automation tools online.

So here's the business plan: build a supercomputer and code it to run a bunch of FPGA/multicore compilers and simulators really well. Since it's a supercomputer, it should make the problem I/O-bound: we can tell people we've got a supercomputer that takes in source and spits out configurations in about as long as it takes to upload the source diff and download the configuration diff. Since we have a really big supercomputer with lots of hardware all over the place, you can also target your application to some of our bored metal and provide your own service under our framework. (Sassafras is a good name for a SaaS framework, and it can have a leaf-shaped logo.)

Since we're thoroughly indoctrinated, we'll build our system as Free Software and use all the existing open source compiler architecture too. Being open source means our tool won't be made obsolete by an eventual open source standard (read that again so you get it). Open source frameworks allow capable users to contribute to product development, and wouldn't you know it, the entire EDA market happens to have degrees in computer science, electrical engineering, mathematics, and physics.

There's also that recurring comp.arch.fpga thread complaining about the current tools, the lack of open source tools, and the lack of documentation of the configuration format and interface. Someone must appeal to these people, because they have a real point: technology is inhibited by monopolized knowledge. It's harder for features to improve when they are part of closed-source packages, because you are automatically limiting the number of developers who can improve them (a situation that worsens as you lay off more developers).

Another benefit of using a SaaS tool: your users don't have to waste time upgrading. The only way to convince anyone to sit through a software upgrade is to announce new features that claim to make the product better; that's why users wait for the second iteration of a major release. SaaS providers can roll out new features and test the waters at a much higher iteration rate.

When I use the term open source, I mean "free" as in freedom, but it's important to point out that previous forays into open source FPGA tools have failed and disappeared because they were merely "open source" and not "free." When I was just starting to care about these sorts of things, GOSPL was being hyped up and eventually canceled. The code is nowhere to be found, because access to the source was restricted by invite-only membership for some nonsense reason: membership exclusivity maintains the integrity of the organization. When a giant corporation disingenuously uses "open source" as a marketing term for non-free software, the project deserves (and is destined) to fail as long as the so-called "open" source is never actually provided to freedom-loving eyes.

On the other hand, free software like Debian, or free hardware IP like OpenCores, won't stop being useful just because some organization stops funding it. Free-software projects never die; they just go unmaintained.

Besides, Megahard Corp. follows a SaaS model, so the profit comes from developing and maintaining the parallel supercomputer that runs the free compiler software, not from distributing that software. Good free software projects encourage users to "help make it better" instead of "help make us richer," though improving the product is a good way to increase sales. A sufficiently better compilation service is probably more profitable than selling licenses anyway: I would certainly pay more than I currently pay for proprietary software licenses to get access to a supercomputer that compiles my FPGA designs faster.

One major problem: if you aren't Altera, Xilinx, Lattice, Actel, Achronix, Tilera, IBM, AMD, Intel, etc., then making low-level multicore and FPGA tools requires a whole bunch of reverse engineering and "MegaHard Work" (TM). It's probably technically easier to design an FPGA from the bottom up than to reverse engineer the architecture of an existing chip and build a new tool-chain for it.

Open-source compilers and reverse-engineered, vendor-neutral FPGA tools have both been successful endeavors in the past. Providing hardware as a service still has a "cloudy" future, but there are plenty of recent examples where a vendor came into a market ripe for SaaS-ification and redefined it. I expect the MegaHard problems associated with design automation make EDA ripe for a SaaS provider to change the game.

----
(Edit 9/2) If you find positions contrary to the Open Source EDA SaaS argument, please share them with me. Here's an old interview with Grant Martin, Chief Scientist of Tensilica, who argues that we should not hold our breath for Open Source EDA and specifically says:

I think the other issue with EDA is in terms of the general number of users. I don't think there's a large enough number of users in any particular sub-category of tools to really support very much open source development. Open source requires a big community involvement, plus ancillary things being built around that environment to attract companies into the effort.

EDA is still too small to make open source the general model. The idea that all tools will be open source, and that all EDA companies would evolve to a service business model, is unlikely and makes no sense.

....

The business incentives just aren't there to motivate those efforts in an open source environment.

I tend not to trust the beliefs of people who were born before the Internet (a.k.a. the over-30 crowd). I think he's missing the perhaps non-obvious point that a service business model can subvert the traditional software business model by offering a faster software service with smoother new-feature roll-outs (perhaps when he says "service business model" he thinks RedHat instead of Google Apps). I also know from my prior immersion at CSAIL that open source software development does NOT require users or a big community to be involved in development; it only requires one indefatigable iconoclast chipping away at the status quo for personal reasons that are often incongruous with the profit motive. When multiple hackers create disjoint useful tools that converge into a single product, user communities start to form, propelling the framework further and potentially cracking the existing market open.

I wonder how anyone can reject the service business model as the future of basically ANY computer tool. Seeing this future no longer requires prescience, just patience. FPGA-accelerated computing will increase the number of EDA users, but none of these users will ever identify with the existing EDA market (there's far too much FUD associated with the "hardware" development the industry traditionally sells: programmable arrays are more than reconfigurable HDL executors).
(end edit)