I haven’t posted in a long time, but today I came back with a bang. The uber-blog GigaOm published a post I wrote as a guest columnist. Thanks to Om Malik, Surj Patel and Carolyn Pritchard for all their help. In the post, I discuss the difference between utility computing and cloud computing, and go a little more in-depth into my view of what the nature of cloud computing platforms should be.
I’ll be writing more about this topic in future posts here. I’ll also be speaking at the Structure 08 conference that GigaOm is putting together on June 25 in San Francisco. With a subtitle that says “Insights into the future of infrastructure” and with speakers like Om Malik, Parker Harris (Salesforce.com) and Werner Vogels (Amazon.com), it’s bound to be good stuff.
Also of interest on this topic, Nati Shalom is going to be on the Programming the Cloud panel, which is part of the Cloud as the New Middleware Platform track at the QCon London conference. It’s very cool that Floyd and the QCon organizers decided to call this track by this name. Last summer I gave a market analysis presentation at the GigaSpaces management meeting and I predicted that the middleware players of the future will be the Salesforce.coms, Amazons and Googles of the world — not BEA or Oracle. Now, a lot more people are starting to come to grips with this idea.
One really interesting indication of this trend that happened lately: SaaS player Workday acquired middleware/ESB player Cape Clear and will now offer it as Integration as a Service — or what you might also call an “ESB in the Sky.” Very cool.
As I was thinking about things I might talk about at Structure 08 (a little early, I know, but it was a fun exercise), I started to develop some notions about where cloud computing might go. In my reading I came across “nephology” — a meteorological term for the study of clouds.
Turns out there is a lot we can learn about cloud computing from nephology. When people use the term “cloud computing,” they are actually referring to two aspects of the issue, and we can find equivalents to both in the physical world.
Meteorologists classify clouds in two basic groups:
- Cumulus – vertically-developed clouds
- Stratus – horizontal, layer-like clouds
You gotta love it. Starting to see the connection?
The cumulus — or vertical — type of cloud computing is actually a form of Software as a Service (SaaS). It is the notion that you as an end-user are running an application on the network — the ‘cloud’ — and you don’t know or care where the application runs physically or where your data resides. This is basically the Web, except that now you can do with it things that were previously only possible on your desktop, such as word processing, email and spreadsheets. It also offers the new business model of SaaS — essentially a subscription model.
This cumulus cloud computing is “vertically-developed” because it provides you with a complete application stack, including the infrastructure (computational power, storage, memory, business logic, GUI, etc.). Nick Carr wrote a great post about The Vertical Cloud. In it he describes how the future of cloud computing may lie in “clouds” that are optimized to handle the needs of particular verticals, or industries, such as the health-care business or retail.
Then you have your stratus — or horizontal — type of cloud computing. This refers to the horizontal layers that enable building cloud computing apps. These layers include all of the pieces of the application stack, such as the computational power (Amazon EC2, IBM Blue Cloud, HP Flexible Computing Service), the storage (Amazon S3) and other hardware-focused solutions. It also includes the infrastructure software — such as middleware — and that’s where things get interesting.
- It is becoming clear that the prevalent middleware products and architectures are inadequate for the cloud. This includes your traditional B.A.D (Big-Ass Database) and the J2EE app server (you’ll be lucky if you can stretch it beyond 10 servers, not to mention the complexity and deployment issues). With regard to the database, see Nati’s post Amazon SimpleDB is not a database! In fact, the whole n-tier architecture, i.e., the physical separation of the tiers, comes into question. This opens a huge opportunity for new approaches.
- We are going to see this new generation of “cloud middleware,” or “cloudware,” exhibit characteristics that traditional middleware doesn’t have (and yes, of course I believe GigaSpaces has these characteristics). I wrote about this in the GigaOm post, so let me quote myself from there:
- Self-healing: In case of failure, there will be a hot backup instance of the application ready to take over without disruption (known as failover). It also means that when I set a policy that says everything should always have a backup, and such a failure occurs and my backup becomes the primary, the system launches a new backup, maintaining my reliability policies.
- SLA-driven: The system is dynamically managed by service-level agreements that define policies such as how quickly responses to requests need to be delivered. If the system is experiencing peaks in load, it will create additional instances of the application on more servers in order to comply with the committed service levels — even at the expense of a lower-priority application.
- Multi-tenancy: The system is built in a way that allows several customers to share infrastructure, without the customers being aware of it and without compromising the privacy and security of each customer’s data.
- Service-oriented: The system allows composing applications out of discrete services that are loosely coupled (independent of each other). Changes to or failure of one service will not disrupt other services. It also means I can re-use services in other applications.
- Virtualized: Applications are decoupled from the underlying hardware. Multiple applications can run on one computer (virtualization a la VMware) or multiple computers can be used to run one application (grid computing).
- Linearly Scalable: Perhaps the biggest challenge. The system will be predictable and efficient in growing the application. If one server can process 1,000 transactions per second, two servers should be able to process 2,000 transactions per second, and so forth.
- Data, Data, Data: The key to many of these aspects is management of the data: its distribution, partitioning, security and synchronization. New technologies, such as Amazon’s SimpleDB, are part of the answer, not large-scale relational databases. And don’t let the name fool you. As my colleague Nati Shalom rightfully proclaims, SimpleDB is not really a database. Another approach that is gaining momentum is in-memory data grids.
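To make the self-healing idea concrete, here’s a toy sketch of the “everything always has a backup” policy in a few lines of Python. All the names here are made up for illustration — this isn’t any real cloudware API, just the control-loop logic: on failure, the hot backup is promoted and a fresh backup is launched so the reliability policy keeps holding.

```python
# Toy sketch of a self-healing policy loop (hypothetical names, not a real API).

class Instance:
    def __init__(self, name):
        self.name = name
        self.alive = True

class SelfHealingService:
    def __init__(self, name):
        self.name = name
        self.counter = 0
        self.primary = self._launch()
        self.backup = self._launch()   # policy: always keep a hot backup

    def _launch(self):
        self.counter += 1
        return Instance(f"{self.name}-{self.counter}")

    def heal(self):
        """One pass of the control loop: enforce the backup-always policy."""
        if not self.primary.alive:
            # Failover: the hot backup takes over...
            self.primary = self.backup
            # ...and a new backup is launched, restoring the policy.
            self.backup = self._launch()

svc = SelfHealingService("orders")
svc.primary.alive = False   # simulate a crash of the primary
svc.heal()
assert svc.primary.alive and svc.backup.alive   # policy holds again
```

The point of the sketch is that the policy, not an operator, drives recovery — the same loop could just as easily enforce SLA-driven scale-out by launching instances when response times degrade.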
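And to illustrate the data point: the basic trick behind in-memory data grids (and key/value stores generally) is partitioning data by key, so that adding servers spreads both the data and the load. Here’s a minimal sketch, again with made-up names — real grids add replication, rebalancing and consistent hashing on top of this:

```python
# Toy sketch of hash-based data partitioning (illustrative only).
import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    # Use a stable hash (not Python's per-process hash()) to pick a partition.
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_partitions

class PartitionedStore:
    def __init__(self, num_partitions: int):
        # Each dict stands in for one server's in-memory partition.
        self.partitions = [{} for _ in range(num_partitions)]

    def put(self, key: str, value) -> None:
        self.partitions[partition_for(key, len(self.partitions))][key] = value

    def get(self, key: str):
        return self.partitions[partition_for(key, len(self.partitions))].get(key)

store = PartitionedStore(num_partitions=4)
for i in range(1000):
    store.put(f"user-{i}", {"id": i})

# Keys spread roughly evenly, so each "server" holds about a quarter of the data.
sizes = [len(p) for p in store.partitions]
```

Because each key is owned by exactly one partition, doubling the number of partitions roughly doubles capacity — which is exactly the linear-scalability property the B.A.D model struggles with.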
I am going to cover many of these areas in dedicated posts in the future. Would love to hear what people think. BTW, in my mind “cloud computing” is merely a refresh of “grid computing”. Grid is simply an older term that, for many people, carried certain connotations, such as being tied to the scientific and academic community. It’s a bit like SaaS being a refresh of the older term Application Service Provider (ASP), which was tainted because so many investors lost money on ASPs during the dot-com bust. See my post Tower of Babel for more on this phenomenon.