Thursday, February 04, 2010

More on SaaS and EDA

EDA SaaS has been a recurring topic on many of the blogs I follow. Harry (the ASIC guy) wrote about the Blooming of an EDA SaaS Revolution in his first post at Xuropa. He says that the revolution "depends on a confluence of critical technologies," and that it will level the playing field, allowing smaller EDA firms to compete. The same economics-of-sharing model that enables SETI-like distributed computing, fabless semiconductor companies, and open source software has the potential to impact the EDA industry in a big way. I suspect the effect of decreasing the barrier to entry for design tools will ripple into the broader electronics industry.

With SaaS, the fixed cost of high-performance computing infrastructure becomes a variable cost. For a lean team of engineers, this can cut the cost of compute resources substantially: who needs a $10,000 server when you only need 5,000 CPU-hours, which cost half as much from a cloud provider, with the added bonus of being able to use 100 CPUs simultaneously on demand? This kind of computing power can substantially increase the throughput of large, complex simulations and optimizations. I am currently paying for software licenses and infrastructure, but I would much prefer an on-demand high-performance computer to run my simulations.
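
As a back-of-the-envelope sketch of that comparison (the $10,000 server, 5,000 CPU-hours, and the roughly $1 per CPU-hour cloud rate are illustrative assumptions, not vendor quotes):

# Back-of-the-envelope comparison of buying a server vs. renting CPU-hours.
# All numbers are illustrative assumptions, not vendor quotes.
server_cost = 10000       # upfront cost of a dedicated server, in dollars
cpu_hours_needed = 5000   # compute the project actually consumes
cloud_rate = 1.00         # assumed on-demand price per CPU-hour, in dollars

cloud_cost = cpu_hours_needed * cloud_rate
print(f"Dedicated server: ${server_cost:,}")
print(f"Cloud on demand:  ${cloud_cost:,.0f}")

# The latency bonus: the same 5,000 CPU-hours spread across 100 CPUs
# finishes in ~50 wall-clock hours instead of ~5,000 on one machine.
print(f"Wall-clock, 1 CPU:    {cpu_hours_needed / 1:.0f} hours")
print(f"Wall-clock, 100 CPUs: {cpu_hours_needed / 100:.0f} hours")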

SaaS providers do not necessarily need to host their own infrastructure and can outsource some or all of their infrastructure requirements to third parties. Communication latency, job size, and work per job are the major factors determining how closely coupled a distributed computation network must be. If the data transmitted for each job is of order N and the work to process that data is of order N^2, then the communication-latency factor becomes negligible as the data transmitted increases. Large correlation operations like SETI are practical over widely distributed networks for this reason.
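
A toy calculation makes the scaling argument concrete; the per-unit transfer and compute times below are arbitrary constants chosen only to show the trend:

# Toy model: each job ships O(N) data but performs O(N^2) work on it.
transfer_time_per_unit = 1e-3   # seconds to ship one data unit (assumed)
compute_time_per_op = 1e-6      # seconds per unit of work (assumed)

for n in (10, 100, 1000, 10000):
    comm = n * transfer_time_per_unit       # O(N) communication time
    work = (n ** 2) * compute_time_per_op   # O(N^2) computation time
    print(f"N={n:>6}: communication/compute ratio = {comm / work:.4f}")

# The ratio falls off like 1/N, so for large enough jobs the latency of a
# loosely coupled, widely distributed network stops being the bottleneck.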

Raw high-latency compute power can be provided by a widely distributed cloud: instead of searching for ET, your PC can help me make timing. Certain companies will pay to use your machine for distributed computing projects. These companies may act as middleman brokers, leasing out computing power in scheduled blocks to SaaS providers. A SaaS provider, recognizing that its customers work at 2-6 AM, may purchase large blocks of cheap computing power during those hours in each time zone, for example. It will have to form agreements with other cloud infrastructure providers to handle cases of excess demand. This forms an interesting set of insurances and contracts among infrastructure providers, SaaS providers, and end-users. For example, an infrastructure firm may have unreserved excess capacity which it leases out for short intervals, at short notice, at low cost. An end-user who requires a large supply of CPUs for a long interval on short notice will pay a premium for it. To simplify matters, a SaaS provider might take jobs from users who have purchased a certain number of CPU-hour "tickets" in advance. These tickets would have expiration dates and exchange values depending on whether you want 3600-CPUs-for-a-second or 1-CPU-for-an-hour.
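
To make the ticket idea concrete, here is a minimal sketch of how a provider might represent prepaid CPU-hour tickets; the class, its fields, and the burst-premium pricing rule are all hypothetical:

from dataclasses import dataclass
from datetime import date

@dataclass
class CpuHourTicket:
    """A hypothetical prepaid block of compute, usable until it expires."""
    cpu_hours: float        # total work purchased
    expires: date           # expiration date of the ticket
    max_parallel_cpus: int  # how many CPUs may be used at once

    def price(self, base_rate: float = 1.0) -> float:
        """Illustrative pricing: wide, bursty tickets carry a premium."""
        burst_premium = 1.0 + 0.1 * (self.max_parallel_cpus ** 0.5)
        return self.cpu_hours * base_rate * burst_premium

# Both tickets buy exactly one CPU-hour of work, but they are priced differently.
steady = CpuHourTicket(cpu_hours=1, expires=date(2010, 12, 31), max_parallel_cpus=1)
burst = CpuHourTicket(cpu_hours=1, expires=date(2010, 12, 31), max_parallel_cpus=3600)
print(f"1 CPU for an hour:      ${steady.price():.2f}")
print(f"3600 CPUs for a second: ${burst.price():.2f}")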

For software providers, supporting hosted applications is different from supporting installed programs. Issues are much easier to isolate: since users never install the software, if something is broken in the user's experience, it is definitely the fault of the software provider and not a virus, an out-of-memory error, or some hardware incompatibility. Suppose the EDA industry wanted to support multicore, FPGA, or GPU accelerators for certain tasks. If vendors hosted this accelerated infrastructure and provided their software as a service, they wouldn't have to deal with the hardware compatibility issues of shipping that software to third parties.

Product iteration can be much faster since new features can be tested without requiring users to install anything. Additionally, all bugs can be reported back to the developers without requiring any action from the user. And of course, you shouldn't force new features on people or you'll get the same negative response that happens whenever Facebook changes its layout or adds new features. Facebook users can't complain with their checkbook (join my "Leave Facebook when One Million People Join This Group" group).

Security policies of EDA end-users are frequently cited as the major barrier to SaaS, indeed as the sole criterion for rejecting a hosted provider model. If secrecy is an important factor for a design, it is unlikely that a SaaS tool will be used. This is not due to some fundamental security issue with hosting an application remotely instead of locally; it is mostly an irrational insecurity of users who think their locally hosted machines are harder to break into than a remote facility. Indeed, a large remotely hosted virtual private network may have better security than anything hosted locally, because a large infrastructure provider can afford to devote more attention to the problem.

Still, because of this paranoia, FPGA design teams and tools will be the first to transition to SaaS. I suspect FPGA design teams value secrecy less than they value lower fixed costs and lower simulation latency, and FPGA tools are also used by many more people than ASIC tools. Of course, tools like a distributed digital logic simulation engine will be useful in both ASIC and FPGA design.

The effect of decreasing the barrier to entry for electronic design tools, as well as decreasing the latency of a simulation, will be more people making more designs. The availability of more reconfigurable logic designs increases the value of the FPGA superlinearly: the total value of a library of programmable logic cores derives from the various ways they can be combined, so the value of the FPGA grows much like Metcalfe's Law.
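
A quick sketch of the Metcalfe-style argument, counting only pairwise combinations of cores (the simplest superlinear model; real compositions of cores are richer than pairs):

# If every pair of available logic cores can be usefully combined, the number
# of possible combinations grows quadratically with the size of the library,
# analogous to Metcalfe's n*(n-1)/2 connections among n network nodes.

def pairwise_combinations(n_cores: int) -> int:
    return n_cores * (n_cores - 1) // 2

for n in (10, 100, 1000):
    print(f"{n:>5} cores -> {pairwise_combinations(n):>7} pairwise combinations")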

So how will this happen? A fairly clear vision is shared by enough of the people blogging about EDA SaaS, and the EDA industry must certainly be paying attention to the cloud computing meme and what SaaS means for its business. Unfortunately, the economic barriers to entry are only a small aspect of the problem facing the programmable logic market: FPGA tools also need to be more accessible to non-expert users. My vision is a tool flow combining incremental synthesis with dynamic partial reconfiguration that lets you program your array just like a spreadsheet.