Many SaaS providers want to offer their own platform to their ecosystem players (Salesforce's force.com is a good example in that regard), and enterprises keep pressuring their IT departments to enable faster deployment of new applications and better utilization of existing resources. Together, these forces will push organizations to build their own specialized PaaS offerings. This will drive a new category of PaaS platform, known as the Cloud Enabled Application Platform (CEAP), specifically designed to handle multi-tenancy, scalability, and on-demand provisioning – but which, unlike some of the public PaaS offerings, needs to come with a significantly higher degree of flexibility and control.
One of the immediate questions that came up in various follow-up discussions was: “how is that different from the Application Servers of 2000?”
I’ll try to answer that question in this post.
The difference between PaaS and Application Servers
Lori MacVittie wrote an interesting post, “Is PaaS Just Outsourced Application Server Platforms?,” which sparked an interesting set of responses on Twitter.
Lori summarized the aggregated response to this question as follows:
PaaS and application server platforms are not the same thing, you technological luddite. PaaS is the cloud and scalable, while application server platforms are just years of technology pretending to be more than it is.
Now let me try to expand on that:
The founding concept behind application servers was the three-tier architecture, in which you break your application into presentation, business-logic and data tiers. It was primarily an evolution of the client/server architecture, which consisted of a rich client connected to a centralized database.
The main driver was the internet boom, which brought millions of users connecting to the same database. Obviously, having millions of users connected directly to a database through a rich client couldn’t work. The three-tier approach addressed this challenge by breaking the presentation tier into two parts – a web browser and a web server – and by separating the presentation and business-logic tiers into distinct parts.
In this model, clients never connect to the database directly, only through a remote server.
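The separation described above can be sketched as a minimal three-tier flow. This is only an illustrative sketch – the function and data names are hypothetical, not part of any real stack:

```python
# Minimal sketch of the three-tier model: the browser talks to a web
# server, which delegates to a business-logic tier, and only the
# business-logic tier talks to the data tier. All names illustrative.

def data_tier(query):
    # stand-in for the centralized database
    db = {"user:1": "Alice"}
    return db.get(query)

def business_logic_tier(request):
    # only this tier is allowed to touch the data tier
    return data_tier(f"user:{request['user_id']}")

def web_server(http_request):
    # server half of the presentation tier; the browser is the other half
    name = business_logic_tier({"user_id": http_request["id"]})
    return f"<html><body>Hello, {name}</body></html>"

page = web_server({"id": 1})
```

The point of the sketch is the strict layering: the web server never issues a query itself, so millions of browsers can be served without millions of direct database connections.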
Why is PaaS any different?
A good analogy for the fundamental difference between existing application servers and PaaS enablement platforms is the comparison between Google Docs and Microsoft Office. Google came up with a strong collaboration experience that made sharing documents simple. I was willing to sacrifice many of the other features that Office provided just because of the ease of sharing.
The same goes for PaaS – when I think of PaaS, I think of built-in scaling, elasticity, efficiency (mostly through multi-tenancy) and ease of deployment as the first priorities, and I’m willing to trade many other features just for those.
In other words, when we switched from desktop Office to its Google/SaaS version, we also changed our expectations of that same service. It wasn’t that we took the same thing we had on our desktop and put it over the web – we expected something different. This brings me to the main topic behind this post:
PaaS shouldn’t be built in Silos
If I take the founding concept of the application server – i.e. the tier-based approach – and try to stretch it into the PaaS world, the thing that pops up immediately in my mind is the impedance mismatch between the two concepts.
- Tiers are silos – PaaS is about sharing resources.
- Tiers were built for fairly static environments – PaaS is elastic.
- Tiers are complex to set up – in PaaS everything is completely automated. Or should be.
Now this is an important observation – even if the individual pieces in your tier-based platform are scalable and easy to deploy, what matters is how well they can:
- Scale together as one piece;
- Dance in an elastic environment as one piece;
- Run on the same shared environment, through multi-tenancy, without interfering with one another – as one piece.
Silos are everywhere
In the application server world, every tier in our infrastructure is built as a complete silo. By “silo,” I mean that they are not just built separately, but that each component comes with its own deployment setup, scaling solution, high availability solution, performance tuning and the list goes on…
All that individual deployment and tuning is bad enough in the context of a single application; stretch it into a multi-tenant environment and it becomes much worse.
Silos are not just the components that make our products. Silos are also a feature of those who build the components.
Think of Oracle Fusion, or IBM for that matter, and you’ll see silos: messaging, data, analytics – each broken out into a separate development group with its own management.
Each component has a different set of goals, different timelines, and different egos. In most cases the teams are even incentivized to keep those silos in place, simply because they are not measured on how well their products work with the other pieces but solely on how well their own thing does its (siloed) job. This leads to an even greater focus on silos – functionality that works as an end in itself, not as a contributor to general application development.
Oracle Fusion is an interesting attempt to break those silos. In the course of creating Fusion, Oracle even came up with an interesting announcement about bringing hardware and software together.
Is “bringing hardware and software together” good enough? It’s certainly an attempt but if you look closely you’ll see that most of it is a thin glue that tries to put together pieces that were bought separately through acquisitions and are now packaged together and labeled as one thing.
However, most of it is still pretty much the same set of siloed products (an Oracle RAC cluster is still very different from a WebLogic cluster, and so on). That doesn’t apply only to Oracle – think of VMware and its recent acquisitions, now labeled under vFabric.
Now imagine how you would scale an Erlang-based messaging system together with the other pieces of their stack as one piece. The sad thing about VMware is that it looks like they went and built a new platform targeted specifically at the PaaS world, but inherited the same pitfalls from the old tier-based application server world.
Silos are also involved in the process by which we measure new technology.
Time and time again I see even those who need to pick or build a PaaS stack go through the process of scoring the different pieces of the puzzle independently, with different people measuring each piece, rather than looking at how well those pieces work together.
A good analogy is comparing Apple with Microsoft: Microsoft would probably score better on a feature-by-feature comparison (they can put a checkmark in more places), but the thing that brings many people to Apple is how well all of its pieces work together.
Apple even went further and refused to put in features that might have been great on their own – just because they didn’t fit the overall user experience. They often win with users because of that scarcity of features, even though it may not look as nice on a bulleted feature list.
Sharing is everything.
The thing that makes collaboration so easy in the case of Google Docs is the fact that we all share the same application and underlying infrastructure. No more moving parts. No more incompatible parts.
All previous attempts to synchronize two separate documents – through versioning, highlighting, and many other synchronization tools – pale compared to the experience we get with Google Docs. The reason is that sharing and collaboration in Office were an afterthought. If we work in silos, we’ll end up… in silos. Synchronizing two silos to make them look as one is doomed to fail, even if we use things like revision control systems for our documents.
We learned through the Google docs analogy that sharing of the same application and its underlying infrastructure was a critical enabler to make collaboration simple.
There is no reason that this lesson wouldn’t apply to our application platforms as well. Imagine how simple our lives could be if our entire stack could use one common clustering mechanism for managing its scaling and high availability. All of a sudden, the number of moving parts in our application would go down significantly, along with lots of complexity and synchronization overhead.
We can measure the benefits of a shared cluster by the fact that it comes with fewer moving parts than the tier-based model:
- Agility – fewer moving parts means that we can produce more features more quickly, without worrying that one piece will break another.
- Efficiency – fewer moving parts means less synchronization overhead and fewer network calls associated with each user transaction.
- Reliability – fewer moving parts means lower chance for partial failure and more predictability in the system behavior.
- Scaling – fewer moving parts means that scaling is going to be significantly simpler, because scaling would be a factor of a “part” designed for that purpose.
- Cost – fewer moving parts means fewer products, a smaller footprint, less hardware, reduced maintenance cost, even less brainpower dedicated to the application.
First Generation PaaS – Shared platform
If we look at the first generation of PaaS platforms, such as Heroku, Engine Yard, etc., we see that they did something similar to Oracle Fusion: they provide a shared platform and carve out a lot of the complexity associated with deploying multi-tier applications. (This makes them marginally better than Oracle Fusion, but not good enough… certainly not as good as it could or should be.)
I refer to these as “PaaS wrappers”. The approach that most of those early-stage PaaS providers took was to “wrap” the existing tier-based frameworks into a nice and simple package without changing the underlying infrastructure, to provide an extremely simple user experience.
This model works well if you are in the business of creating lots of simple and small applications but it fails to fit the needs of the more “real” applications that need scalability, elasticity and so forth. The main reason it was built this way was that it had to rely on an underlying infrastructure that was designed with the static tier-based approach in mind. An interesting testimonial on some of the current PaaS limitations can be found in Carlos Ble’s post, “Goodbye Google App Engine (GAE).”
Toward 2nd Generation PaaS
In 2010, a new class of middleware infrastructure emerged, aimed at solving the impedance mismatch at the infrastructure layer. They were built primarily for a new, elastic world, with NoSQL as the new elastic data-tier, Hadoop for Big data analytics, and the emergence of a new category (DevOps) aimed at simplifying and automating the deployment and provisioning experience.
With all this development, building a PaaS platform in 2011 will – or should – look quite different than those that were built in 2009.
The second-generation PaaS systems will be those that can address real application load and provide full elasticity within the context of a single application.
They need to be designed for continuous deployment from the get-go and allow shorter cycles between development and production.
They should allow changes to the data model and the introduction of new features without incurring any downtime.
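One common way to change a data model without downtime is an "expand/contract" migration: add the new fields alongside the old ones, make the code tolerate both shapes, backfill, and only then retire the old fields. The following is a minimal sketch of that pattern under assumed record shapes, not the mechanism of any particular PaaS:

```python
# Sketch of an expand/contract schema change: records move from a
# single "name" field to "first_name"/"last_name" with no downtime,
# because readers tolerate both shapes during the migration window.

old_records = [{"name": "Ada Lovelace"}, {"name": "Alan Turing"}]

def read_full_name(record):
    # reader tolerates both the old and the new record shapes
    if "first_name" in record:
        return f"{record['first_name']} {record['last_name']}"
    return record["name"]

def backfill(record):
    # expand: write the new fields without removing the old one yet;
    # the "contract" step (dropping "name") happens only once all
    # readers understand the new shape
    if "first_name" not in record:
        first, _, last = record["name"].partition(" ")
        record["first_name"], record["last_name"] = first, last
    return record

migrated = [backfill(dict(r)) for r in old_records]
names = [read_full_name(r) for r in migrated]
```

Because old and new shapes coexist, the application keeps serving traffic throughout the migration instead of taking a maintenance window.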
The second-generation PaaS needs to include built-in support for multi-tenancy but in a way that will enable us to control the various degrees of tradeoffs between Sharing (utilization and cost) and Isolation (Security).
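The sharing-versus-isolation tradeoff just described can be made concrete with a small routing sketch: tenants with strict security needs get a dedicated resource pool, while the rest share one. All names and policies here are hypothetical, chosen only to illustrate the dial between utilization and isolation:

```python
# Sketch of per-tenant placement policy: "shared" maximizes
# utilization and lowers cost, "isolated" trades that for security.

SHARED, ISOLATED = "shared", "isolated"

class TenantRouter:
    def __init__(self):
        self.policies = {}   # tenant -> SHARED or ISOLATED
        self.dedicated = {}  # tenant -> name of its private pool

    def set_policy(self, tenant, policy):
        self.policies[tenant] = policy
        if policy == ISOLATED:
            # carve out a dedicated pool for this tenant
            self.dedicated[tenant] = f"pool-{tenant}"

    def route(self, tenant):
        # isolated tenants land on their own pool; everyone else,
        # including tenants with no explicit policy, shares one
        if self.policies.get(tenant) == ISOLATED:
            return self.dedicated[tenant]
        return "pool-shared"

router = TenantRouter()
router.set_policy("acme", SHARED)
router.set_policy("bank", ISOLATED)
```

The useful property is that the tradeoff becomes a per-tenant setting rather than a platform-wide architectural decision.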
The second-generation PaaS should come with built-in, open DevOps tools that enable full automation without loss of control: open enough for customization, allowing us to choose our own OSes and hardware, and allowing us to install our own set of tools and services rather than being limited to those that come out of the box with the platform.
It should also come with fine-grained monitoring that enables us to track down and troubleshoot system behavior through a single point of access, as if the platform were a single machine.
The second-generation PaaS systems are not going to rely on virtualization to deliver elasticity and scaling, but will instead provide fine-grained control to enable more efficient scaling.
At Structure 2010, Maritz said that “clouds at the infrastructure layer are the new hardware.” The unit of cloud scaling today is the virtual server. When you go to Amazon’s EC2 you buy capacity by the virtual server instance hour. This will change in the next phase of the evolution of cloud computing. We are already starting to see the early signs of this transformation with Google App Engine, which has automatic scaling built in, and Heroku with its notion of dynos and workers as the units of scalability.
Unlike many of the existing platforms, in this second-generation phase it’s not going to be enough to package and bundle different individual middleware services and products (web containers, messaging, data, monitoring, automation and control, provisioning) and brand them under the same name to make them look as one. (Fusion? Fabric? A rose is a rose by any other name – and in this case, it’s not a rose.)
The second-generation PaaS needs to take a holistic approach that couples all those things together and provides a complete experience. By that I mean that if I add a machine to the cluster, I need to see that as an increase in capacity across my entire application stack; the monitoring system needs to discover that new machine and start monitoring it without any configuration setup; the load balancer needs to add it to its pool; and so forth.
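The holistic behavior described above can be sketched as a single membership event fanning out to every subsystem. The component names below are illustrative stand-ins, not a real platform API:

```python
# Sketch of holistic cluster membership: one machine-join event makes
# monitoring, the load-balancer pool, and reported capacity all update
# together, with no per-subsystem configuration.

class Cluster:
    def __init__(self):
        self.monitored = set()  # hosts the monitoring system watches
        self.lb_pool = []       # hosts the load balancer routes to

    def on_machine_joined(self, host):
        # monitoring discovers the machine without any setup
        self.monitored.add(host)
        # the load balancer adds it to its pool automatically
        self.lb_pool.append(host)

    def capacity(self):
        # a new machine shows up as capacity for the whole stack
        return len(self.lb_pool)

cluster = Cluster()
cluster.on_machine_joined("10.0.0.5")
cluster.on_machine_joined("10.0.0.6")
```

Contrast this with the siloed model, where each subsystem would need to be told about the new machine separately, through its own configuration mechanism.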
Our challenge as technologists will be to move out of our current siloed comfort zone. That applies not just to the way we design our application architecture, but to the way we build our development teams and the way we evaluate new technologies. Those who succeed will be those who design and measure how well all their technology pieces work together before anything else, and who look at a solution without reverence for past designs.