During the event, Ronnie Bodinger, Head of IT at Avanza Bank AB, gave an excellent talk on how they turned their existing online banking application into a new site that was designed for read/write scaling.
Avanza’s System Description:
Avanza Bank is a Swedish bank that makes it easy for investors to make equity transactions and fund switches. It executes the most trades on the Stockholm Stock Exchange.
It prides itself on providing advanced tools for its investors through its online banking system.
The current online system is a typical web site based on Java/JSP and Spring.
Scaling architecture of the existing site
Most interactions with the current site are reads, so the main scaling challenge was handling concurrent read operations. Read scaling was addressed through a side-cache architecture, common in many existing LAMP + Memcached deployments, where the first query hits the database and subsequent queries hit the cache.
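As a sketch of that side-cache (cache-aside) pattern, the minimal Java example below models the first-read-hits-the-database behavior; the class and method names are illustrative, not Avanza's actual code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Minimal cache-aside (side-cache) sketch: the first read misses the
// cache and falls through to the database; later reads for the same key
// are served from the cache. SideCache and loadFromDb are illustrative
// names, not part of any specific framework.
public class SideCache {
    private final Map<String, String> cache = new HashMap<>();
    private final Function<String, String> loadFromDb; // stand-in for a DB query
    int dbHits = 0; // counts how often we fell through to the database

    public SideCache(Function<String, String> loadFromDb) {
        this.loadFromDb = loadFromDb;
    }

    public String get(String key) {
        String value = cache.get(key);
        if (value == null) {               // cache miss
            value = loadFromDb.apply(key); // first query hits the database
            dbHits++;
            cache.put(key, value);         // populate for subsequent readers
        }
        return value;
    }
}
```

This works well while reads dominate; as the next section describes, it breaks down once writes invalidate the cache faster than it pays for itself.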
The New System
The new site was designed to fit the real-time and social era. This means that much of the traffic and activity is now generated by user actions, not just by the site owner. These activities need to be presented to the bank's users in real time.
The move to the new site led to a significant change in traffic and load behavior, which drives a new class of scaling challenges:
Where many updates are involved, the existing side-cache architecture yields diminishing returns: the cache becomes stale quickly, so synchronizing it only adds overhead.
Using Oracle RAC on a high-end hardware platform didn't prove itself either, yielding a fairly expensive solution that still didn't meet the scaling requirements.
Unlike “Green Field” applications, Avanza has an existing online application (a “Brown Field”) that serves its current customers. That brings the following list of additional challenges :
- Existing Data Model
The entire data model of the application was designed for a relational model – changing the data model or moving it to a new NoSQL architecture, as was considered, would involve a huge change that could turn into a years-long effort.
- Legacy system
The online banking application consists of a large set of legacy applications and third-party services. Rewriting the existing infrastructure is either impossible (due to dependencies on third-party tools) or impractical.
- Complex environment
As is often the case, a large portion of the legacy applications weren't designed for scale and weren't built with a clear, holistic architecture, having been built up in layers over the years. This significantly increases the complexity of scaling.
- Existing Skillset
The existing development team already had fairly good knowledge of Java and Java EE. Changing the team and/or developing a completely new skillset is a huge barrier, as the ramp-up time required to bring new developers up to speed with the complexity of the system can take years.
The solution: Read/write scale without complete re-write
It was clear that meeting the new scaling challenges would involve changes to the existing application – the main question was how to scope that change so that it wouldn’t require a complete re-write. The second goal was to build the change in a way that would reduce the TCO of the current system.
To achieve those two goals the following approach was used:
- Minimize the change by clearly identifying the scalability hotspots
The areas of an application that need intensive write access are often small parts of the overall system. The first step is therefore to limit the change to the hotspots of the application and keep the rest untouched. In Avanza's case, the hotspots were identified as certain tables used by the online web application. Most of the backend systems were still accessing the database for reporting, synchronization and batch processing, and could therefore remain unchanged.
- Keep the database as is
One of the key pieces of the current design is the ability to address read/write scalability outside of the database context (see the next bullet). This makes it possible to keep the existing database and its schema unchanged, so all the other systems continue to work with the database as if nothing had changed.
- Put an In Memory Data Grid as a front end to the database
Scaling the application is done by front-ending it with an In-Memory Data Grid (IMDG). The IMDG holds all the hot tables or rows of the original database, and the online web application accesses the IMDG instead of the database. The IMDG is distributed by nature, allowing both read and write load to be scaled out across a cluster of machines.
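To illustrate how a data grid spreads that load, the sketch below shows the common hash-based routing idea, where a routing key (here an account id) picks one of N partitions, so reads and writes for different keys land on different machines. The partition count and key choice are illustrative assumptions, not GigaSpaces' actual routing implementation:

```java
// Hash-based partition routing sketch: each data item carries a routing
// key; hashing it selects one of N partitions in the cluster.
public class PartitionRouter {
    private final int partitions;

    public PartitionRouter(int partitions) {
        this.partitions = partitions;
    }

    // Math.floorMod keeps the index non-negative even for negative hash codes.
    public int partitionFor(String routingKey) {
        return Math.floorMod(routingKey.hashCode(), partitions);
    }
}
```

Because the routing is deterministic, every node agrees on where a given row lives without any central coordination.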
- Use write-behind to reduce the synchronization overhead
Updates from the IMDG to the underlying database are written asynchronously, in batches, through an internal queuing mechanism (a redo log).
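The write-behind idea can be sketched as a simple in-memory redo log that acknowledges updates immediately and hands them to a background flusher in batches; this is an illustrative model, not GigaSpaces' actual redo-log API:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Write-behind sketch: enqueue() is a fast in-memory append, so the
// caller returns immediately; a background flusher would periodically
// call drainBatch() and push each batch to the database in one round trip.
public class WriteBehindLog {
    private final Deque<String> redoLog = new ArrayDeque<>();

    public void enqueue(String update) {
        redoLog.addLast(update);  // acknowledged without touching the DB
    }

    // Drains up to maxBatch updates for a single bulk write to the DB.
    public List<String> drainBatch(int maxBatch) {
        List<String> batch = new ArrayList<>();
        while (batch.size() < maxBatch && !redoLog.isEmpty()) {
            batch.add(redoLog.pollFirst());
        }
        return batch;
    }

    public int pending() { return redoLog.size(); }
}
```

Batching is what reduces the synchronization overhead: many small updates become one bulk database operation.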
- Use O/R mapping to map the data back into its original format
In many cases, achieving the best scalability requires partitioning the data, which often involves changes to the data schema. Changing the schema could break the entire system, including the areas that don't suffer from the scalability bottleneck. To bridge this impedance mismatch, we scope the schema changes to the IMDG only; the data is mapped from the IMDG schema back to the original schema through standard O/R mapping tools such as Hibernate or OpenJPA.
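As a rough illustration of mapping the grid schema back to the original schema (in practice this is done declaratively with an O/R mapper such as Hibernate or OpenJPA), the hypothetical classes below show a grid-side object, which carries an extra routing field for partitioning, being rendered back into a row of the unchanged table:

```java
// Hypothetical mapping sketch: the grid-side object differs from the
// database schema (it adds a routing field), but is mapped back so that
// backend reporting and batch jobs keep seeing the original ORDERS table.
public class OrderMapper {

    // IMDG-side object: accountId is the routing key added for partitioning.
    public static class GridOrder {
        final String accountId;
        final String symbol;
        final long quantity;
        GridOrder(String accountId, String symbol, long quantity) {
            this.accountId = accountId;
            this.symbol = symbol;
            this.quantity = quantity;
        }
    }

    // Renders the grid object as a row in the unchanged original schema.
    public static String toSqlRow(GridOrder o) {
        return String.format(
            "INSERT INTO ORDERS(ACCOUNT_ID, SYMBOL, QTY) VALUES('%s','%s',%d)",
            o.accountId, o.symbol, o.quantity);
    }
}
```

With an O/R mapper, this translation is configuration rather than hand-written code, which is what keeps the schema change invisible to the rest of the system.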
- Use standard Java API and framework to leverage existing skillset
One of the challenges with many of the new NoSQL database alternatives is that they often force a complete re-architecture. This comes with a fairly high cost of rebuilding skillsets across the organization, both for developing against new APIs and for managing capacity and sizing of those new databases.
IMDGs such as GigaSpaces expose standard APIs, such as JPA. In addition, they allow organizations to extend the use of the existing database while removing a large part of the read/write load. Both the standard APIs and the retained database let organizations leverage their existing skillset and still meet their scalability requirements. They also enable a smoother transition (in baby steps) into a completely new scale-out architecture, by allowing a new scalable database to be plugged in at a later stage.
- Use two parallel (old/new) sites to enable gradual transition
Switching all the customers to a new system at once is often a bad idea. A better approach is a gradual transition of selected customers to the new site. A common model for achieving that is to run two parallel sites; the challenge in doing so is the synchronization between them. In Avanza's case, they used the GigaSpaces Mirror service to sync all changes from one site to the other, keeping the two sites up to date.
The diagram below provides a visual summary of this approach:
The TCO angle
The second goal in the project was to reduce the TCO of the current system.
This is achieved in the following way:
- Use RAM for high performance access and disk for long term storage
As I noted in a recent post, a RAM-based solution can be 10x–100x cheaper than a disk-based solution for high-performance applications.
In addition, the price of RAM continues to fall: 1 GB can cost as little as $1.9 a month, so storing 1 TB of data entirely in RAM fits into a single rack at a total cost of roughly $1.9k per month.
The optimal solution would therefore be to use RAM to manage data that needs high speed write/read access and disk-based storage for the long-term data that is accessed less frequently.
- Use commodity database and hardware – A single Oracle RAC deployment can reach $500k. Putting a data grid in front of the database removes the need for many of the high-end features of the Oracle RAC database, as well as the need for high-end hardware such as storage devices, InfiniBand networking, etc.
In this specific case, it was possible to move the data into MySQL and use commodity Dell machines to run the entire relational data system.
Many existing applications are built with layers upon layers of development, with relational databases sitting at the heart of those systems. Scaling these systems is therefore extremely challenging, which leads many organizations to take the easy route: simply paying more for high-end hardware and databases. We have reached the point where this approach doesn't cut it in many cases – it's simply too expensive to maintain in the face of cheaper, more scalable alternatives.
In a world where the impact of software accretion is no longer tolerable, it's clear that change is inevitable if we're to meet the new demands for scalability. The real question is how to make that change. The approach taken by Avanza Bank in their use of GigaSpaces is an excellent reference: not only were they able to meet the new scaling requirements fairly quickly through small, measurable changes, but they also reduced their cost of ownership significantly, as Ronnie Bodinger recently noted:
“Throwing out Oracle’s sluggish databases increases Avanza performance of their business-critical systems, while reducing license costs significantly.”
“Today we have a large Oracle Cluster, which costs a lot. The new system goes in a fraction of it.”