Effervescent Clouds: Nimble Computing Infrastructure

Daniel Reed

This is a guest blog post by Daniel Reed. Daniel is Vice President for Research and Economic Development and University Computational Science and Bioinformatics Chair at the University of Iowa and a frequent government advisor on science and technology policy. He was previously Microsoft’s Corporate Vice President for eXtreme Computing, where he led a team designing next-generation cloud infrastructure and global partnerships for cloud-enabled scientific discovery.

Contact him at dan-reed@uiowa.edu or read his other musings at http://www.hpcdan.org.



What’s a cloud? The word means many things to many people. In true Airplane! movie parody style, one is tempted to say, “It’s a visible mass of condensed water vapor floating in the atmosphere. But that’s not important right now.”

Some might say that our cloud confusion is indicative of a fog, a low-hanging cloud. Indeed, in public perception, the cloud is a vaguely understood and amorphous capability that lurks behind our smartphones, enabling electronic communications, web searches, social networks, e-commerce, and 24×7 news and streaming media. It is a mysterious something, somewhere, noticed only when it is inaccessible.

For those of us in computing, the cloud is more than a buzzword or meme; it is a set of technologies that are transforming the provisioning and delivery of computing cycles, storage, and services, with profound implications for government services, research discovery and innovation, and businesses large and small. Given the public confusion and sometimes overhyped marketing, it seems worthwhile to dispel the fog and illuminate a few compelling features of clouds.

In a very real sense, public clouds (those hosted by cloud providers for on-demand access) have much in common with the timesharing systems of the 1960s and 1970s, albeit at unprecedented scale. They shift the capital costs of provisioning, along with the expertise required for operation, from users to providers, allowing users to focus on their core expertise (e.g., research, business, government) and pay for only the computing they need, when they need it.
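The economics of this shift are easy to illustrate with a back-of-the-envelope calculation. The sketch below compares renting compute by the hour against buying and operating servers outright; all prices and workload figures are hypothetical, chosen purely to show how bursty workloads favor pay-as-you-go.

```python
# Back-of-the-envelope comparison of pay-as-you-go versus owned hardware.
# All prices and workload figures are hypothetical, for illustration only.

def on_demand_cost(hours_used, rate_per_hour):
    """Cloud: pay only for the server-hours actually consumed."""
    return hours_used * rate_per_hour

def owned_cost(capital, annual_operations, years):
    """On-premises: capital outlay plus operations, regardless of use."""
    return capital + annual_operations * years

# A bursty workload: 64 servers needed one week per year, for three years.
cloud = on_demand_cost(hours_used=64 * 24 * 7 * 3, rate_per_hour=0.10)
owned = owned_cost(capital=64 * 3000, annual_operations=20000, years=3)

print(f"cloud: ${cloud:,.0f}, owned: ${owned:,.0f}")
```

For a workload that is idle most of the year, the rented capacity costs a small fraction of the owned hardware; the comparison of course reverses for machines kept busy around the clock.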

In turn, cloud elasticity allows servers to be provisioned and de-provisioned dynamically in response to changing interest and demand. One of the few things worse than having your brilliant idea ignored is having it fail publicly from too much attention. Many an organization has first been thrilled to see its product or service reported in the press and go viral, only to be horrified when on-premises servers collapse under the load of success; this is known as the Slashdot effect, named after one of the popular “news for nerds” web sites. Without cloud elasticity, the global phenomenon of Pokémon Go would not be possible.
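In its simplest form, elasticity is a feedback loop: measure per-server load, then grow or shrink the pool. The sketch below shows a minimal threshold-based policy; the thresholds, step sizes, and limits are illustrative assumptions, not any provider’s actual autoscaling algorithm.

```python
# Minimal sketch of threshold-based elasticity: grow or shrink the
# server pool based on observed per-server load. Thresholds and the
# doubling/halving policy are illustrative assumptions only.

def autoscale(servers, load_per_server, low=0.3, high=0.8,
              min_servers=1, max_servers=1000):
    """Return the server count to provision for the next interval."""
    if load_per_server > high:           # overloaded: add capacity
        return min(servers * 2, max_servers)
    if load_per_server < low:            # idle: release capacity (and cost)
        return max(servers // 2, min_servers)
    return servers                       # within band: leave pool unchanged

# A viral traffic spike (the Slashdot effect) doubles capacity...
print(autoscale(servers=10, load_per_server=0.95))  # → 20
# ...and a quiet period shrinks it back, so you stop paying for it.
print(autoscale(servers=20, load_per_server=0.10))  # → 10
```

Real autoscalers add smoothing, cool-down periods, and predictive models to avoid thrashing, but the provision-on-demand, release-when-idle loop is the essence of elasticity.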

Although not required, many clouds also virtualize the underlying computing, allowing multiple services and software stacks to be co-resident on the same physical hardware, further reducing costs. With Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS) and containerization, developers can choose the level of abstraction, from low level to high level, needed to support existing and new applications.

By allowing innovators to focus on their ideas rather than computing infrastructure, the pay-as-you-go cloud model has made it easier, faster and cheaper to bring ideas to life, unleashing a second dot.com boom of cloud-based services, unprecedented citizen access to government data, and new approaches to scientific discovery via big data.

The latter has sometimes been called the “fourth paradigm,” where new sensors, data analytics, and deep learning have shifted the research approach from “what experiment should I conduct?” to “what insights can I extract from available data?” It has also stimulated interest in convergence architectures for high-performance computing (HPC) and data analytics that combine the best elements of cloud and HPC capabilities. (See my essay, “Exascale Computing and Big Data: Time to Reunite.”)

Implicit in this client-cloud model are ubiquitous, reliable, and inexpensive wired and wireless broadband services. Without broadband access, clouds are simply isolated sets of computers and data, unconnected to the world of mobile devices and the nascent Internet of Things. To further ensure reliability, most public cloud providers geo-distribute their pools of servers. Even private clouds (i.e., those operated by governments and companies for their own use) often geo-distribute their servers at multiple data centers. Should disaster strike at one site, services can be shifted automatically to another site, perhaps even in another country or continent.
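The automatic shift of services between sites can be sketched as priority-ordered failover: route traffic to the first region that passes a health check. The region names and the health-check predicate below are hypothetical placeholders, not any provider’s actual routing API.

```python
# Sketch of geo-distributed failover: route requests to the first
# healthy region in priority order. Region names and the health
# check are hypothetical placeholders for illustration.

def pick_region(regions, is_healthy):
    """Return the first healthy region, in priority order."""
    for region in regions:
        if is_healthy(region):
            return region
    raise RuntimeError("all regions are down")

regions = ["us-east", "eu-west", "ap-south"]   # hypothetical sites
down = {"us-east"}                             # simulate a site disaster

# Traffic shifts automatically to the next site, perhaps on
# another continent, without user intervention.
print(pick_region(regions, lambda r: r not in down))  # → eu-west
```

Production systems layer on DNS or anycast routing, replicated state, and latency-aware selection, but the core idea is the same: no single data center is a single point of failure.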

Finally, the unprecedented scale of cloud deployments, with commercial data centers each costing over $1B, has stimulated new approaches to energy efficiency and cooling, component failures and reliability, customized and open hardware, software defined networks (SDNs) and software defined storage (SDS), and system provisioning and management. In short, we are in a time of radical change in the computing ecosystem, all against a backdrop of Moore’s law limitations.

What’s a cloud? It’s an elastic, on-demand, often geo-distributed, frequently virtualized, flexible hardware, software and services infrastructure that powers the 21st century knowledge economy. And that is really important right now.
