Many enterprises run monolithic and Microservices architectures side by side for their IT needs. Each architecture has its own benefits and challenges, yet neither provides an all-around answer to modern-day IT challenges, which demand high performance without compromising on the decoupling and agility offered by microservices. In this post we’ll discuss the evolution of persistence technologies, the pros and cons of Microservices vs Monolithic architecture, and why a Hybrid Microservices Platform is the way to go.
The Evolution of Persistence Technologies
The growth of storage volumes, together with the rise of high-end transient processing engines built on cache and RAM, has brought us to the era of In-Memory Computing (IMC) platforms. This golden age of IMC includes the evolution of Low Latency Distributed Microservices (LLDM).
These days, the boundaries between technologies aren’t very clear. Sometimes you find yourself using multiple technologies within a single project, such as RDBMS, NoSQL, caching technologies, IMDG, and more.
RDBMS is simple enough to understand, while NoSQL is still finding its place in the data world, with Document (e.g. MongoDB), Key-Value (e.g. Coherence, GemFire, Hazelcast, GridGain), Columnar (e.g. Vertica, ActivePivot), and Graph (e.g. Neo4j) stores used in many different projects, sometimes in conjunction.
Finally, caching technologies, such as Memcached and IMDGs, have been around for the past decade. They’ve helped solve performance challenges that other storage volumes have not been capable of handling.
The downside to these technologies is that they introduce new challenges.
One of these challenges is eventual consistency, a consistency model used in distributed computing to achieve high availability. It guarantees, informally, that if no new updates are made to a given data item, eventually all accesses to that item will return the last updated value. The problem with this model is that many systems need strong consistency, because reads must always return the most recent value. A bank account is a good example: we cannot build such a system on top of a solution that will, at some point, be inconsistent.
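To make the stale-read problem concrete, here is a minimal, purely illustrative Python sketch of asynchronous replication. The class and method names are invented for this example; real stores such as Cassandra replicate through internal protocols rather than an explicit `sync()` call.

```python
class EventuallyConsistentStore:
    """Toy model: two replicas with asynchronous replication."""

    def __init__(self):
        self.replica_a = {}
        self.replica_b = {}
        self._pending = []  # writes not yet replicated to replica B

    def write(self, key, value):
        # Writes land on replica A immediately...
        self.replica_a[key] = value
        self._pending.append((key, value))

    def read_from_b(self, key):
        # ...but a read routed to replica B may return stale data.
        return self.replica_b.get(key)

    def sync(self):
        # Background replication eventually converges the replicas.
        for key, value in self._pending:
            self.replica_b[key] = value
        self._pending.clear()


store = EventuallyConsistentStore()
store.write("balance", 100)
print(store.read_from_b("balance"))  # None: stale read before replication
store.sync()
print(store.read_from_b("balance"))  # 100: replicas have converged
```

The window between `write` and `sync` is exactly the window in which a bank-account balance would read wrong, which is why such systems demand strong consistency instead.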
The drawback of non-transactional platforms (Cassandra, for example) is that although they avoid locking shared resources by scaling the data out linearly across the cluster (assuring availability), they do so at the cost of consistency.
Understanding this makes it clear why we need a new breed of distributed services platforms for a converged microservices architecture.
Microservices vs Monolithic Architecture
The spectrum between Monolithic and Microservices is vast, and the main challenge is taking only the good qualities from each. At first glance, it would seem impossible to use elements from one end of the spectrum without giving up the other, but is that really so?
To understand where we are heading, we need to take a look at common practices in today’s business world. Looking at some of the more conservative verticals such as Financial Services, Telecom, Healthcare, and eCommerce, we see a pattern: they are reluctant to move away from legacy Monolithic architectures because of the advantages those architectures still provide.
Monolith Pros and Cons
Pros:

- Better performance, since components are tightly coupled and communicate in-process
- Less reliance on third-party or other departments’ services
- Full control of your application

Cons:

- No agility for fast deployment or scalability
- No simple way to make it highly available
- Nearly impossible to isolate, compartmentalize, or decouple system functionalities
Microservices architecture is many things; chief among them is the ability to decouple an application into smaller services with generic APIs and technologies, where each service is isolated, contains the whole stack it needs, and can be deployed and operated independently in a decentralized fashion. To expand on the concept, take a look at Martin Fowler’s fantastic article on Microservices.
Microservices Pros and Cons
Pros:

- Compartmentalization and decoupling: scale up and out dynamically, on demand.
- Agile methodology: make changes to the code easily and run tests to ensure that everything is running smoothly. Because each service is separate, you can change more than one per iteration (unlike Monolithic applications, where everything is tied together, including the tests).
- Granularity control

Cons:

- Communication is inter-process and distributed, and therefore inherently unreliable
- Partial failures must be handled without transaction safety (eventual consistency)
- Updating multiple databases owned by different services (the polyglot persistence drawback)
- Multiple APIs to maintain
- Refactoring across service boundaries can be quite difficult
- Service discovery (a “phonebook” of services) is required
- Performance overhead from HTTP, de/serialization, and the network
- Achieving and maintaining high availability is no easy task
What’s Next: A Hybrid Approach is the Way to Go
A fully monolithic approach is no longer sufficient for systems that need scalability, such as IoT. Likewise, a full microservices approach is weak on performance and on managing the services’ life cycle.
We see the next logical step as a hybrid approach: Monolithic performance combined with all the advantages of Microservices, while looking ahead to HTAP for the true harnessing of data and event-driven architecture. And that’s exactly where we’re heading.
We’ve seen that neither a fully monolithic nor a fully microservices approach is enough for complex enterprise systems. Most enterprises are still unwilling to decommission their core Monolithic systems because of the performance they would lose. Yet the shift that XAP provides, to a fully usable enterprise microservices data fabric that is distributed, secure, and highly available, can make all the difference. Together with a cloud-native approach, our hybrid solution delivers a strongly consistent, highly available, distributed, and scalable deployment infrastructure for encapsulated services.