Last Monday we released XAP 6.0.2. This is the second service pack release on top of XAP 6.0, which we released in August. The adoption rate of XAP is exceeding all of our expectations, and it keeps us working hard to address the many requirements coming from the growing number of users of our platform.
As always, full release notes are available here. I’d like to highlight a few of the improvements introduced in 6.0.2.
The first one is known internally as “max-instances-per-machine”. This new addition to OpenSpaces’ deployment SLA has been requested for a while. With this new SLA definition, one can guarantee that the primary and the backup of the same partition will never run on the same physical machine.
These days, with the computation power available on every new box, our users deploy several instances of the Grid Service Container on the same physical machine. Until now, unless defined statically, this could result in both the primary and the backup copy of the same data residing on the same physical machine; if that machine failed, data could be lost. Prior to 6.0.2 the solution was to statically define the physical binding between the data partition and the host. With the new SLA definition it is much simpler: the system guarantees that the policy is met.
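To give a feel for it, here is a minimal sketch of how such an SLA might be declared in a processing unit’s pu.xml using the OpenSpaces SLA namespace. The cluster schema, instance counts, and values below are placeholders of my own, not taken from the release notes, so check the attribute names against the 6.0.2 documentation.

```xml
<!-- Rough sketch of an OpenSpaces deployment SLA (pu.xml); the schema and
     counts are illustrative placeholders. -->
<os-sla:sla cluster-schema="partitioned-sync2backup"
            number-of-instances="2"
            number-of-backups="1"
            max-instances-per-machine="1">
    <!-- With max-instances-per-machine="1", a primary and its backup of the
         same partition are never provisioned onto the same physical machine. -->
</os-sla:sla>
```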
Another feature worth mentioning is support for batch notifications of Local Cache updates. In 6.0 we introduced a new event API, the EventSession API, which consolidated all previous API options into a single consistent model. On top of that, we introduced QoS optimizations into the messaging layer that are controlled by the application developer. One of them is the ability to batch up event notifications sent from servers to clients. Batching is controlled either by the size of the batch (the number of messages per batch) or by the time between batch deliveries. This option reduces the load on both the server and the clients. The next logical step, which we started in 6.0.2, is to make sure the product itself uses this new API in every component that relies on events.
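For illustration, here is a minimal sketch of how an application might switch on batching with the EventSession API. The space URL, batch size, and batch time are placeholders, and the package names and the addListener overload are recalled from memory of the 6.x API rather than taken from the release notes, so treat them as assumptions to verify against the Javadoc.

```java
import net.jini.core.event.RemoteEvent;
import net.jini.core.event.RemoteEventListener;
import net.jini.core.lease.Lease;

import com.gigaspaces.events.DataEventSession;
import com.gigaspaces.events.EventSessionConfig;
import com.gigaspaces.events.EventSessionFactory;
import com.gigaspaces.events.NotifyActionType;
import com.j_spaces.core.IJSpace;
import com.j_spaces.core.client.SpaceFinder;

public class BatchedNotifyExample {
    public static void main(String[] args) throws Exception {
        // Look up a running space; "mySpace" is a placeholder name.
        IJSpace space = (IJSpace) SpaceFinder.find("jini://*/*/mySpace");

        // Ask for notifications to be delivered in batches of up to
        // 100 events, or every 2000 ms, whichever comes first.
        EventSessionConfig config = new EventSessionConfig();
        config.setBatch(100, 2000);

        EventSessionFactory factory = EventSessionFactory.getFactory(space);
        DataEventSession session = factory.newDataEventSession(config);

        RemoteEventListener listener = new RemoteEventListener() {
            public void notify(RemoteEvent event) {
                // Batched notifications arrive here.
                System.out.println("Received event: " + event);
            }
        };

        // Register for write notifications; a null template matches all entries.
        // The exact overload (handback/filter arguments) may differ per version.
        session.addListener(null, listener, Lease.FOREVER,
                            null /* handback */, null /* filter */,
                            NotifyActionType.NOTIFY_WRITE);
    }
}
```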
In the master-local pattern the Local Cache relies heavily on events, so it was the natural place to start. The Local Cache now uses the batching optimization by default; this can, of course, be tuned via configuration. In our tests we are seeing major scalability improvements when using the Local Cache.
Many other enhancements and performance improvements have been introduced as well. I’ll let my colleagues from the GigaSpaces R&D team, Guy Korland, Shay Banon, and others, comment on those on their own blogs.
One last comment on process: by now it is clear that the effort we put into implementing Scrum and applying agile methodologies in our team is paying off. This is the third product release in the past four months, with tons of content and improved quality in every release. This topic deserves a post of its own, which I plan to write soon. I am also going to share some of this information in my presentation at the upcoming JavaPolis event, so you’re all invited.