Most of us are used to thinking of persistence as a means of achieving reliability: as soon as our critical information is stored in the database, we're safe!
But what about hot-failover? Hot-failover means that we need to keep doing what we were doing even after a failure occurs.
Does the fact that our critical information “sits” on a disk help us in such a scenario?
It will ensure that our data is durable, but it will not help us achieve the required hot-failover, which in my view is the highest degree of reliability.
A different approach is to use memory itself as a reliable data store, one that is always available to the application using it.
The reliability of in-memory data is maintained by keeping copies of the data on more than one instance; this way, if one instance "dies", the data is not lost.
Since the data is kept in memory, the application can use it immediately after a failure.
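To make the idea concrete, here is a minimal sketch of in-memory reliability through replicas. This is not a real IMDG API; the `Replica` and `ReplicatedStore` classes are hypothetical stand-ins showing how a write can be copied to more than one instance, so that reads keep working the instant the primary dies.

```python
class Replica:
    """One in-memory instance holding a copy of the data (hypothetical)."""
    def __init__(self, name):
        self.name = name
        self.data = {}
        self.alive = True

class ReplicatedStore:
    """Writes go to every live replica; reads come from the first live one."""
    def __init__(self, replicas):
        self.replicas = replicas

    def put(self, key, value):
        for r in self.replicas:
            if r.alive:
                r.data[key] = value   # copy the write to each live instance

    def get(self, key):
        for r in self.replicas:
            if r.alive:               # hot-failover: simply skip dead instances
                return r.data.get(key)
        raise RuntimeError("no live replica")

primary, backup = Replica("primary"), Replica("backup")
store = ReplicatedStore([primary, backup])
store.put("order-42", {"status": "paid"})

primary.alive = False                 # simulate the primary instance dying
print(store.get("order-42"))          # the data is still served, from the backup
```

The point is that failover requires no recovery step at all: the backup already holds the data in memory, so the application keeps running.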
In addition, an IMDG (In-Memory Data Grid) provides much greater flexibility to achieve different degrees of reliability based on the application's SLA.
For example, in extremely high-performance applications one can configure the IMDG to maintain embedded (in-process) replicas of the data. Topologies can also be combined, ranging from synchronous replication to asynchronous replication, or even a mix of the two.
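The trade-off between the two replication modes can be sketched in a few lines. This is a hypothetical illustration, not any vendor's API: the synchronous replicator copies the value to the backup before `put()` returns, while the asynchronous one returns immediately and ships the copy from a background thread.

```python
import queue
import threading

class SyncReplicator:
    """Synchronous mode: the backup holds the copy before put() returns."""
    def __init__(self, primary, backup):
        self.primary, self.backup = primary, backup

    def put(self, key, value):
        self.primary[key] = value
        self.backup[key] = value      # write is acknowledged only after the copy

class AsyncReplicator:
    """Asynchronous mode: put() returns at once; a thread replicates later."""
    def __init__(self, primary, backup):
        self.primary, self.backup = primary, backup
        self.outbox = queue.Queue()
        threading.Thread(target=self._drain, daemon=True).start()

    def put(self, key, value):
        self.primary[key] = value
        self.outbox.put((key, value))  # returns without waiting for the backup

    def _drain(self):
        while True:
            key, value = self.outbox.get()
            self.backup[key] = value   # replicated in the background
            self.outbox.task_done()

sync_rep = SyncReplicator({}, {})
sync_rep.put("a", 1)                   # backup is guaranteed up to date here

async_rep = AsyncReplicator({}, {})
async_rep.put("b", 2)                  # low latency; backup catches up shortly
async_rep.outbox.join()                # for the demo only, wait for the copy
```

Synchronous replication buys a stronger guarantee at the cost of write latency; asynchronous replication buys latency at the cost of a small window in which the backup lags behind, which is exactly why the SLA drives the choice.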
One of the more interesting options is synchronization with databases. The fact that we can store data in memory reliably allows us to synchronize that data with the database in an asynchronous fashion, often called write-behind. This combination gives us the best of both worlds: in-memory speed and availability, plus durable storage behind it. In some IMDG solutions this can be achieved without writing any additional code!
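A write-behind pattern like the one described above can be sketched as follows. Again this is a simplified, hypothetical illustration: the "database" is just a plain dict, and a background worker persists changes after the in-memory write has already returned to the caller.

```python
import queue
import threading

class WriteBehindStore:
    """Application writes hit reliable memory; the DB is updated later."""
    def __init__(self, database):
        self.memory = {}                # the reliable in-memory copy
        self.database = database        # stand-in for a real database
        self.pending = queue.Queue()
        threading.Thread(target=self._flusher, daemon=True).start()

    def put(self, key, value):
        self.memory[key] = value        # fast, synchronous in-memory write
        self.pending.put((key, value))  # the DB write happens later

    def _flusher(self):
        while True:
            key, value = self.pending.get()
            self.database[key] = value  # asynchronous persistence
            self.pending.task_done()

db = {}
store = WriteBehindStore(db)
store.put("user:1", "alice")            # returns without touching the DB
store.pending.join()                    # for the demo only, wait for the flush
```

Because the in-memory copy is itself replicated for reliability, the application does not have to wait for the database on every write, yet the data still ends up durably stored.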