In a previous blog post, we discussed the rising concept Gartner has coined the Digital Integration Hub (DIH). In fact, many people view the DIH as the evolution of the Operational Data Store (ODS). In this post, we’ll explore the origin of the ODS and its evolution into a driver of digital transformation.
What is an Operational Data Store and How is it Used?
The Operational Data Store (ODS) concept is not new. It has been around for a couple of decades, continuously evolving alongside technology innovations, and is perceived differently by different people. So let’s first clarify the common definition. In a nutshell, an ODS aggregates transactional data from multiple sources. Traditionally, the ODS was designed and optimized for operational reporting and was refreshed on a daily or even hourly basis. It usually stores only a short time window’s worth of data.
Why do Organizations Deploy an Operational Data Store?
A main benefit of an ODS is the ability to aggregate data from multiple sources. Many organizations use different systems of record to manage various aspects of their data, and reporting on each data source separately provides only a siloed view. The ODS allows for reporting across multiple systems of record for a more complete view of the data. In addition, some systems of record offer limited reporting capabilities, so an ODS is a way for users to gain more comprehensive reporting. Another aspect has to do with database security: access to systems of record is usually restricted to a select few users, and the ODS opens up reporting capabilities to a broader audience within the organization.
Diagram: A traditional Operational Data Store
The Shortcomings of a Traditional Operational Data Store
Traditional Operational Data Stores pose challenges for enterprises, especially as they embark on digital transformation initiatives:
Designed for operational reporting, not for real-time API serving – while a traditional ODS supports operational reporting use cases, it does not offer real-time API services for accessing the systems of record, making it unsuitable for the requirements of new digital applications.
High latency – the traditional ODS is based on a relational database, or sometimes a disk-based NoSQL database. These database systems cannot provide high performance when handling large amounts of data, and thus can’t support demanding low-latency applications.
Low concurrency – because traditional databases offer limited scalability, they pose a challenge when it comes to user concurrency. Once multiple concurrent users access the data store, performance takes a hit; as a result, the ODS cannot support concurrent users beyond a certain threshold.
Stale data reporting – a traditional ODS is not a real-time replica of the systems of record, because data is refreshed only periodically – hourly or sometimes daily. While this refresh rate is acceptable for end-of-day reporting scenarios, it is not suitable for digital applications that require real-time data, such as trade risk analysis and reporting, e-commerce, fraud detection, dynamic pricing, and more.
The Evolution of Operational Data Stores – A Paradigm Shift
Digital transformation is driving a paradigm shift, as many organizations introduce new real-time digital applications to replace previously offline services. A new breed of technology companies in areas such as fintech and insurtech are introducing new business models and new services that require more than what a traditional ODS can offer. New digital banks are differentiating themselves with continuous innovation and the introduction of new online services. New digital insurance companies are leaving “offline” behind, issuing insurance policies in 90 seconds and paying claims in 3 minutes.
These new disruptors have the advantage of agility on their side, as they are not chained to legacy infrastructure and processes. Traditional financial services organizations, including banks and insurance companies, as well as other industries such as retail, transportation, healthcare, and even higher education, are under pressure. Most are still using mainframes and other legacy data platforms for many of their systems of record and databases, and they must now contend with the new players, more demanding customers, new regulations, and a constantly changing landscape. This means they need to make the necessary changes to modernize their infrastructure. How can this be done without ripping and replacing mission-critical systems of record?
This paradigm shift has spurred the emergence of a new generation of operational data stores and architectural designs. Gartner has coined the term Digital Integration Hub for the modern operational data store; GigaSpaces calls it a Smart Digital Integration Hub (Smart DIH).
There Is No Need to Rip-And-Replace
When planning to implement a next-generation ODS in an organization, there are two ways to go about it. If there is no ODS in place, you can deploy an ‘out-of-the-box’ solution that includes all the relevant components. These will typically include a high-performance operational store and compute engine, database integration or CDC, smart caching, analytics, microservices API, and event-driven architecture.
On the other hand, if you already have a traditional ODS in place, it does not necessarily need to be replaced. Instead, you can choose to augment it with the missing layers, by adding the smart cache, microservices API, analytics, and event-driven architecture.
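The augmentation pattern described above can be sketched in a few lines of Python. This is a minimal illustration, not any specific product’s API: a change-data-capture (CDC) listener keeps an in-memory cache in sync with a system of record, and the API layer serves reads from that cache rather than from the source database. All names here are hypothetical.

```python
# Minimal sketch: a CDC listener keeps an in-memory cache in sync with a
# system of record, and the API layer reads from the cache instead of the
# source database. Names are illustrative, not a specific product API.

cache = {}  # key -> latest record; stands in for the high-performance store

def apply_cdc_event(event):
    """Apply a change-data-capture event to the cache."""
    op, key, row = event["op"], event["key"], event.get("row")
    if op in ("insert", "update"):
        cache[key] = row          # upsert the latest version of the record
    elif op == "delete":
        cache.pop(key, None)      # drop deleted records

def get_customer(key):
    """API-layer read: served from the cache, never from the source system."""
    return cache.get(key)

# Simulated CDC stream coming from the system of record
events = [
    {"op": "insert", "key": "c1", "row": {"name": "Ada", "tier": "gold"}},
    {"op": "update", "key": "c1", "row": {"name": "Ada", "tier": "platinum"}},
    {"op": "insert", "key": "c2", "row": {"name": "Bob", "tier": "silver"}},
    {"op": "delete", "key": "c2"},
]
for e in events:
    apply_cdc_event(e)
```

In a real deployment, the cache would be a distributed in-memory store and the events would arrive from a CDC pipeline, but the flow is the same: writes land in the system of record, changes stream into the cache, and the API layer never queries the source directly.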
The Benefits of a Next Generation Operational Data Store
Implementing a next generation ODS can support digital transformation by eliminating the limitations of the traditional ODS, providing an organization with the speed, scale, and agility required to enable new digital applications. You can read more in the solution paper.
Diagram: A next generation ODS architecture
The Next Generation ODS benefits include:
Super fast performance. With a distributed, high-performance in-memory computing and storage engine, a next generation ODS offers the speed and scale to power the most demanding digital applications. Colocating applications with data in the same memory space takes performance to the next level, as data does not need to travel over the network. The distributed in-memory core also allows for high user concurrency without impacting performance. And for planned and unplanned peaks, autonomous scaling guarantees the expected performance without the need to overprovision.
A next generation ODS also allows you to run analytics on real-time data, while enriching it with historical data so your predictive modeling is as robust and as accurate as you need it to be.
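As a toy illustration of that enrichment pattern, the sketch below attaches historical context to a live event before scoring it. The 90-day averages, the ratio metric, and the flag threshold are all made-up examples, not part of any product.

```python
# Illustrative sketch: enriching a real-time event with historical data
# before scoring it. The averages and threshold are invented examples.

historical_avg = {"c1": 120.0, "c2": 45.0}   # e.g. 90-day average order value

def enrich_and_score(event):
    """Attach historical context to a live event and flag outliers."""
    avg = historical_avg.get(event["customer"], 0.0)
    ratio = event["amount"] / avg if avg else float("inf")
    # Flag transactions more than 3x the customer's historical average
    return {**event, "avg_90d": avg, "ratio": ratio, "flag": ratio > 3.0}

scored = enrich_and_score({"customer": "c1", "amount": 500.0})
```

The point is that the live event and the historical aggregates live in the same store, so the enrichment happens in-memory at serving time rather than in a nightly batch job.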
Diagram: GigaSpaces benchmark results
Always on. When digital applications read directly from a system of record, they are impacted when the system of record goes down. And when you manage multiple systems of record, the chance that one of them becomes unavailable increases. A next generation ODS decouples the API layer from the systems of record, allowing the organization’s applications to keep working even when a system of record is down. Read more about high availability.
Optimized TCO. Artificial Intelligence for IT operations (AIOps) is a technology that automates and enhances IT operations with analytics and machine learning. In the context of a next generation ODS, it autonomously scales your infrastructure up or out to accommodate peak volumes and unexpected loads. This way you can avoid over-provisioning expensive on-premises resources while adhering to SLAs and maintaining the customer experience when volumes peak. Watch a live demo.
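At its simplest, the scaling decision reduces to a feedback rule like the sketch below. The thresholds, bounds, and instance counts are purely illustrative; a real AIOps engine would learn these from workload history rather than hard-code them.

```python
# Toy sketch of autonomous scaling: add or remove instances based on
# measured load instead of overprovisioning up front. Thresholds and
# bounds are illustrative only.

def target_instances(current, cpu_pct, min_n=2, max_n=16):
    """Scale out above 75% CPU, scale in below 25%, within fixed bounds."""
    if cpu_pct > 75:
        current += 1     # scale out to absorb a peak
    elif cpu_pct < 25:
        current -= 1     # scale in when the peak subsides
    return max(min_n, min(max_n, current))
```

Running this rule on each monitoring interval grows the cluster during a Black Friday-style surge and shrinks it back afterwards, which is the cost-saving behavior the TCO argument rests on.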
Diagram: Autonomous scaling is triggered automatically based on system workload with no downtime
You can also read how PriceRunner, a leading eCommerce site in Scandinavia, scales to 20X normal load on Black Friday while retaining millisecond performance levels.
AIOps can also move data between hot, warm and cold storage based on business rules. This way, your most important data is always in RAM, providing the fastest access possible while less important data can reside on less expensive storage types.
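A rule-based tiering policy of this kind might look like the sketch below. The age thresholds and the “priority” field are invented business rules for illustration, not a product API.

```python
# Sketch of rule-based data tiering: recent or high-priority records stay
# in RAM ("hot"), older ones move to cheaper storage. Thresholds and the
# "priority" field are illustrative business rules.

def tier_for(record, today):
    """Pick a storage tier based on access recency and business priority."""
    age_days = today - record["last_access_day"]
    if record.get("priority") == "high" or age_days <= 7:
        return "hot"    # kept in RAM for fastest access
    if age_days <= 90:
        return "warm"   # e.g. SSD
    return "cold"       # e.g. inexpensive object storage
```

Evaluating this rule periodically lets the platform demote aging data automatically while pinning business-critical records in memory, regardless of age.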