It has been a while since I last blogged about our product development activities; I've been very busy in the past few weeks helping customers move their systems into production. Now that XAP R7.0 M4 is released, it is a good time to shed some light on our R7.0 release train. So, what has the GigaSpaces team been doing lately? Read on for some of the main activities in our XAP product:
Id Based API
Although this was released already in M3, it is worth mentioning again. The motivation was to enable a smoother transition for typical database-centric applications and let them take advantage of GigaSpaces XAP. The Id API, added to the GigaSpaces interface, enables accessing objects in the space by their Id. This feature is extremely useful when converting existing database-centric DAO layers. In addition, it enables building relationship-based access layers on top of the space's in-memory storage.
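As a rough sketch of what id-based access buys you (plain Java standing in for the space; the class and method names below are illustrative, not the actual GigaSpaces API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of id-based access: objects are stored and fetched by their id,
// mirroring the kind of DAO code found in database-centric applications.
public class IdBasedDao {
    // hypothetical in-process stand-in for the space
    private final Map<Object, Object> space = new ConcurrentHashMap<>();

    public void write(Object id, Object entry) {
        space.put(id, entry);
    }

    // direct lookup by id, instead of template matching
    public Object readById(Object id) {
        return space.get(id);
    }
}
```

A DAO converted this way keeps its familiar "load by primary key" shape while reading from memory instead of the database.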
Admin Model and API
This one is one of my favorites. As I'm a real believer in Domain-Driven Design (a must-read book!), we've finally formalized the GigaSpaces domain model through a new Admin API. The Admin API lets you traverse the GigaSpaces run-time model, retrieve information and statistics, and perform operations. It supports both a query model and an event model for building sophisticated user interfaces on top. For example, one can select a specific GSC, learn about the processing units deployed on it, and query various interesting configuration and run-time aspects, such as the amount of memory being used vs. the amount configured. The possibilities are almost endless. The main motivation behind the new Admin Model and API is to enable automatic management of large clusters. We've added a few Groovy scripts which demonstrate the power behind the API; however, these are just tasters, and you are encouraged to play with it and build your own automation and monitoring tools.
We are taking advantage of this API to automate all of our internal system tests (application tests). Once we are done, our automation rate will be above 90%.
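To make the query-plus-event idea concrete, here is a minimal sketch in plain Java; every name in it (ClusterAdmin, GridContainer, and so on) is illustrative and not part of the real Admin API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of the query + event style described above: query the run-time model
// (containers, their memory, their processing units) and react to deployment events.
public class ClusterAdmin {
    public static class GridContainer {
        public final String name;
        public final long memoryUsedMb, memoryConfiguredMb;
        public final List<String> processingUnits = new ArrayList<>();
        public GridContainer(String name, long used, long configured) {
            this.name = name; this.memoryUsedMb = used; this.memoryConfiguredMb = configured;
        }
    }

    private final List<GridContainer> containers = new ArrayList<>();
    private final List<Consumer<String>> deployListeners = new ArrayList<>();

    // event model: callbacks fired when a processing unit is deployed
    public void onDeploy(Consumer<String> listener) { deployListeners.add(listener); }

    public void addContainer(GridContainer c) { containers.add(c); }

    public void deploy(GridContainer c, String processingUnit) {
        c.processingUnits.add(processingUnit);
        deployListeners.forEach(l -> l.accept(processingUnit));
    }

    // query model: e.g. find containers close to their configured memory limit
    public List<GridContainer> containersAboveMemoryRatio(double ratio) {
        List<GridContainer> result = new ArrayList<>();
        for (GridContainer c : containers)
            if ((double) c.memoryUsedMb / c.memoryConfiguredMb > ratio) result.add(c);
        return result;
    }
}
```

A monitoring tool built on such a model would poll the queries for dashboards and hook the events for alerting, which is exactly what the Groovy scripts demonstrate against the real API.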
Grid Service Agent
The agent is a new component, usually running at the machine level, which is responsible for the creation and life-cycle control of the other run-time components: grid containers (GSC), grid managers (GSM) and lookup services. As of R7.0, the easiest way to start the GigaSpaces run-time components is to make sure an agent is running on every available machine; from a single location you will then be able to remotely start/restart/stop the entire cluster. The agent was also added to enable better integration with cloud environments, where servers come and go dynamically and most of the control is done remotely.
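The agent's role can be sketched as a tiny in-process supervisor (in reality the components are OS processes; here they are just state flags, and all names are illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the agent idea: one component per machine owning the life-cycle of the
// local run-time components (GSC, GSM, lookup service). A remote caller only talks
// to the agent; the agent starts, stops, and restarts what it owns.
public class Agent {
    private final Map<String, Boolean> components = new LinkedHashMap<>();

    public void start(String component) { components.put(component, true); }

    public void stop(String component) { components.put(component, false); }

    // remote management: one call restarts everything this agent owns
    public void restartAll() {
        for (String c : components.keySet()) { stop(c); start(c); }
    }

    public boolean isRunning(String component) {
        return components.getOrDefault(component, false);
    }
}
```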
Improved Concurrency
We've been working very hard to take advantage of the new, ever-improving multi-core architectures. The GigaSpaces engine is now lock-free for read operations: when multiple threads read the same data, they will not hold even a read lock. In previous versions we used an optimized read/write lock; in this version we came up with a new strategy that exploits the new multi-core architectures dramatically better and increases overall system scalability. In addition, we've completely re-written the LRU (least recently used) eviction strategy, changing the internal data structures to more optimized and concurrent ones. One side effect of this enhancement is the ability to easily add eviction strategies for various usage patterns (time-based, for example). We are also considering opening this API publicly once it stabilizes. You should expect significant performance and scalability improvements on multi-core hardware architectures.
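The pluggable-eviction idea can be sketched with a small strategy interface; the interface and the access-ordered LRU below are illustrative only, not the API under consideration:

```java
import java.util.LinkedHashMap;

// Sketch of a pluggable eviction strategy: the engine notifies the strategy on
// inserts and accesses, and asks it which entry to evict when space runs out.
public interface EvictionStrategy<K, V> {
    void onInsert(K key, V value);
    void onAccess(K key);
    K selectVictim(); // which entry to evict when the cache is full
}

// LRU strategy backed by an access-ordered LinkedHashMap; a time-based strategy
// would implement the same interface with timestamps instead of access order.
class LruEviction<K, V> implements EvictionStrategy<K, V> {
    private final LinkedHashMap<K, V> order = new LinkedHashMap<>(16, 0.75f, true);

    @Override public void onInsert(K key, V value) { order.put(key, value); }

    @Override public void onAccess(K key) { order.get(key); } // touching moves key to the tail

    @Override public K selectVictim() {                       // head = least recently used
        return order.isEmpty() ? null : order.keySet().iterator().next();
    }
}
```

Note this sketch is single-threaded for clarity; the point of the rewrite described above is precisely to back such strategies with concurrent data structures.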
Improved Local Cache
The local cache is used to improve clients' read performance by caching recent reads within the client's VM. This pattern is also known as Master-Local. In previous releases, the internal mechanism used to sync the local cache with the master space was based on commands. While very effective, this approach required writers to be aware of local caches being used in the system. We no longer use the command pattern, and this limitation has been removed. In addition, the local cache is now more scalable, as it also benefits from the scalability improvements described above.
Note also that web session replication is based on the local cache, and hence benefits from these improvements.
We will continue to improve the local cache, and further enhancements will become available in the coming R7.0 milestones.
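For illustration, the Master-Local read path boils down to something like the following read-through sketch (it deliberately ignores synchronization and invalidation, which is where the real improvements live, and all names are hypothetical):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative read-through local cache: recent reads are kept in the client VM and
// served locally; a miss falls through to the master. This sketches the Master-Local
// pattern only; it does not model the product's actual synchronization mechanism.
public class LocalCache {
    private final Map<Object, Object> master; // stands in for the remote master space
    private final Map<Object, Object> local = new ConcurrentHashMap<>();
    private int masterReads = 0;              // instrumentation for the sketch only

    public LocalCache(Map<Object, Object> master) { this.master = master; }

    public Object read(Object id) {
        // serve from the local cache; on a miss, fetch once from the master and keep it
        return local.computeIfAbsent(id, k -> { masterReads++; return master.get(k); });
    }

    public int masterReads() { return masterReads; }
}
```

The payoff is visible in the counter: repeated reads of the same id hit the master only once.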
Improved Class-Loading Model
When two processing units, let's say a consumer and a producer, used the same data model (in our language, the same entry set), we required those classes to be packaged in a jar under a specific processing unit directory named shared-lib. This is no longer required, as we have changed the way class-loading is performed in the grid containers (GSC). For backwards compatibility, shared-lib still works, but it is no longer required.
The main benefit of this change is that once a processing unit is undeployed from a grid container, it does not leave anything behind in the container's VM. In addition, web applications that write to and read from a space can keep the default web-app structure. Simply put, any standard JEE web application can be deployed into a GigaSpaces XAP cluster and take advantage of in-memory storage and dynamic scalability without structural modifications.
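The gist of the new model can be sketched as one isolated class loader per processing unit, dropped wholesale on undeploy (illustrative names; the real container logic is more involved):

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.util.HashMap;
import java.util.Map;

// Sketch of the class-loading idea: each processing unit gets its own class loader,
// so two units can carry the same model classes without a shared-lib jar, and
// undeploying drops the loader so nothing lingers in the container's VM.
public class ContainerClassLoading {
    private final Map<String, URLClassLoader> loaders = new HashMap<>();

    public void deploy(String processingUnit, URL[] classpath) {
        // isolated, per-deployment class loader (parent = the container's own loader)
        loaders.put(processingUnit, new URLClassLoader(classpath, getClass().getClassLoader()));
    }

    public void undeploy(String processingUnit) {
        loaders.remove(processingUnit); // the loader and its classes become unreachable
    }

    public boolean isDeployed(String processingUnit) {
        return loaders.containsKey(processingUnit);
    }
}
```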
As you can see, we have accomplished quite a bit in the first four milestones of R7.0. The GA release is scheduled for mid-year, with a milestone release every 2-3 weeks. Our team is committed to releasing only high-quality milestones; we hold an internal go/no-go meeting prior to each milestone release. So don't hesitate to download the milestone releases and try the new features and improvements. We will be very happy to get your feedback.