Things don’t always progress as expected in life. Sometimes you join a cool technology field only to find yourself in the middle of a process-heavy revolution, with its strongest roots in a new operational model and cultural change. The cloud computing world is frankly pretty starved of hot new technologies (sure, there are amazing things happening on the software side), but for a guy with geek roots in sexy hardware it’s a somewhat quiet conversation.
Why? Why is it that suddenly the companies doing the most with computing are often doing it with the most common types of hardware? Shouldn’t the enormous spending of Google, Yahoo and Amazon alone create a market for high-end hardware catering exclusively to their needs? Wouldn’t it be much more fun if there were a huge Hadoop ASIC market, and if Amazon’s virtual instances were really, truly virtual and used an exotic virtual memory architecture over InfiniBand?
There is a theory to explain why the computing market is behaving as it is, and cloud computing is following its playbook to such a fascinating extent that I’m tempted to go get a business PhD on the topic. The idea is simple: once a certain set of design elements achieves enormous popularity, the industry will use them in almost everything it attempts from then on. Henceforth the battle for market dominance around that technology shifts from a product-innovation basis to a process-innovation one.
Who would have thought the industry would catch fire talking about the equivalent of a 1.2GHz Opteron server? But that’s exactly what happened, and a process- and fulfillment-driven company was exactly the right player for this phase of the market. I’m here to discuss why a process-driven company like Amazon is the disruptive innovator in computing at the moment.
The Dominant Design Crossover:
The QWERTY typewriter layout is the most famous dominant design in the world. It was originally designed to slow typing enough to keep the physical mechanisms from jamming. That makes its persistence to this day an obvious example of design inertia and the overwhelming power of an ecosystem over would-be improvements.
The reason dominant designs are sticky is that competition fundamentally shifts from designing a technically differentiated product to refining the process by which that (now standardized) product is delivered. Once this massive push for process and efficiency optimization hits, it’s almost impossible for a feature-differentiated product to beat out the whole ecosystem of the dominant product, with its economies of scale and delivery refinements.
Amazon didn’t create a better server. Instead they made the process of procuring and using one several orders of magnitude faster than any alternative approach. In their own way Google and Facebook did the same, putting instantaneous access to incredible computing power within reach of the average user. This isn’t the end of innovation, but rather a shift in focus to delivering it to end users. Cloud computing is a process revolution: plain, simple, and by the book.
What are the dominant designs built into cloud computing?
So here is a topic I’d like to open-source the discussion of: what are the dominant designs at the heart of cloud computing? This is just a starting list; if you see others, please add them in the comments and I will eventually update this post and credit the new additions to their authors.
My suggested starting list:
x86 microarchitecture: It’s important to note that the business-strategy theory here doesn’t preclude other technologies from being launched or used, but they will not overtake the market and ecosystem share of the leader. It’s hard to imagine a scenario in the near future where Intel’s design is not inside the cloud.
HTTP: I hear lots of technologists begrudgingly accept the dominance of REST-based APIs. They won for a simple reason: HTTP is the dominant design for all web interfaces. Think you can out-feature a dominant design? Sorry, unless you are targeting a specialized niche, you cannot.
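To make that concrete, here is a minimal sketch of a REST-style resource served and fetched over plain HTTP, using only the Python standard library. The `/instances/i-1234` resource and its fields are invented for illustration; the point is how little machinery the dominant design requires.

```python
import http.server
import json
import threading
import urllib.request

# A toy REST endpoint: one resource addressed by URL, served over plain HTTP.
class InstanceHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/instances/i-1234":  # hypothetical instance ID
            body = json.dumps({"id": "i-1234", "state": "running"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, fmt, *args):  # silence per-request logging
        pass

# Bind to an ephemeral port and serve from a background thread.
server = http.server.HTTPServer(("127.0.0.1", 0), InstanceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/instances/i-1234"
with urllib.request.urlopen(url) as resp:
    data = json.loads(resp.read())
server.shutdown()
print(data["state"])  # -> running
```

Any HTTP client in any language can consume this; that ubiquity, not any feature, is why REST won.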
Ethernet: With Doug Gourlay telling us that he sees 90% of traffic load coming from server-to-server communications, it might get tempting to think about specialized lower-latency mesh networks in the cloud (I know I like to daydream about them), but sorry, it won’t happen. Next question: Ethernet is how every node in the cloud will communicate.
Servers as targets: If hypervisors proved anything, it’s that people just love writing to ‘servers’ as an abstraction. By creating virtual servers they tapped into the full ecosystem around servers and created virtual resources without forcing the world to rethink its development logic. This has even taken hold in the world of the mainframe, where one of the hot workloads is virtual Linux servers. There are lots and lots of ways of developing code in a less server-centric way, but they don’t make a lot of sense now that the 1-2 CPU server is a standard abstraction. True to the theory, almost every other server form is niche by comparison.
Process Innovations of the Cloud:
So if we’ve crossed the dominant design threshold, we should expect to see a burst of process-focused innovations begin to take shape, and they have. Almost all of the core features of cloud computing, in fact, are rooted in process improvements for delivering standardized resources more efficiently.
Multi-tenancy: On a feature-differentiation level, multi-tenancy is a loser! It does very little to empower advanced feature sets, and instead requires users to exist in a much tighter behavioral pattern with each other. It’s a winner for a simple reason: with the world standardizing on some of the above technologies, a growing portion of IT applications can gain more from the efficiencies of multi-tenancy than from customization of infrastructure. The last great stand of differentiated enterprise IT will make its fight here, and it may take a long time before the efficiencies of multi-tenant environments win out. What no one can argue is that, all things being equal, multi-tenant environments are a process innovation over the old single-tenant ASP and dedicated hosting playbooks.
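The efficiency argument is easy to see in miniature. Below is a toy sketch of one common multi-tenant pattern, a shared schema scoped by a tenant ID; the tenant names and the `invoices` table are invented for illustration. The operator runs one schema, one upgrade path, and one backup process for every customer, which is exactly the process win multi-tenancy buys.

```python
import sqlite3

# One shared table serving many tenants: the multi-tenant pattern in miniature.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE invoices (tenant_id TEXT, amount REAL)")
db.executemany(
    "INSERT INTO invoices VALUES (?, ?)",
    [("acme", 100.0), ("acme", 50.0), ("globex", 75.0)],
)

# Every query is scoped by tenant_id, so tenants share infrastructure
# while each one sees only its own rows.
def tenant_total(tenant):
    row = db.execute(
        "SELECT SUM(amount) FROM invoices WHERE tenant_id = ?", (tenant,)
    ).fetchone()
    return row[0]

print(tenant_total("acme"))    # -> 150.0
print(tenant_total("globex"))  # -> 75.0
```

The flip side, as noted above, is the tighter behavioral pattern: every tenant lives inside the same schema and the same release cadence.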
Most SaaS applications: This week http://www.mint.com was bought by Intuit for $175M. I doubt there were many features the free online money-management SaaS provider had up its sleeve that Intuit couldn’t have copied. Instead what it had was a cheaper and more frictionless way of distributing that software value. Yes, there are exceptions, but in general SaaS is a distribution process innovation on largely standardized procedures from existing market-leading products. Sometimes people struggle to connect the infrastructure cloud revolution with the SaaS one. The process and distribution revolution is their strongest connective tissue.
S3 Storage: There isn’t a ton of new technology behind the covers of Amazon’s storage service, but how many cloud vendors have launched offerings without being able to duplicate it? It delivers the power of standardized commodity disks efficiently. You’ll probably accept some small performance hit to use it, but the overall simplicity of your storage management will increase.
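Part of what makes S3 hard to out-feature is how small its contract is: buckets, keys, and opaque bytes. Here is a toy in-memory stand-in for that shape (this is not the real S3 API, just an illustration of the interface surface; the bucket and key names are made up).

```python
# A toy model of an S3-like interface: buckets, keys, opaque bytes.
# Everything else (replication, durability, billing) hides behind this.
class ToyObjectStore:
    def __init__(self):
        self._buckets = {}

    def create_bucket(self, bucket):
        self._buckets.setdefault(bucket, {})

    def put_object(self, bucket, key, data: bytes):
        # Flat namespace per bucket; last write wins.
        self._buckets[bucket][key] = data

    def get_object(self, bucket, key) -> bytes:
        return self._buckets[bucket][key]

store = ToyObjectStore()
store.create_bucket("backups")
store.put_object("backups", "2009/09/db.dump", b"dump bytes")
print(store.get_object("backups", "2009/09/db.dump"))  # -> b'dump bytes'
```

A contract this narrow is what lets the provider optimize the delivery process behind it without ever breaking users.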
Eucalyptus: My favorite technology in the cloud space is deeply rooted in process simplification. It was built to turn a heterogeneous mess into an orchestrated mechanism controllable through the AWS APIs. Where previous generations of management software (BMC/CA) brought forward an ever-increasing set of features and functions, Eucalyptus will catch fire in the enterprise precisely because of the simplicity of its upstream output, and its use of an emerging standard interface in the form of EC2.
Appliance-driven deployments: Pretty simple. They incorporate the dominant design of servers as targets into a more industrialized deployment model. A great example is WebSphere delivered as an AWS AMI. How long does it take to fire up on EC2 versus downloading and installing it yourself?
None of these improvements is shocking news to anyone following the cloud computing industry. What I hope is helpful is thinking of their mode of innovation as part of a larger, highly patterned, predictable trend.