GigaSpaces 6.5 was released at the end of June, and we are now working on the 6.6 release, with the first milestone already publicly available. These are major milestones in a series of upcoming releases all aimed at strengthening our proposition as a Scale-Out App Server. Our main goal is to significantly simplify the process of achieving scalability, including scaling an EXISTING application within days and without enforcing a complete re-architecture.
I refer to this as our “Seamless Scaling” or “Simple Scaling” initiative. You can read some of the rationale behind this initiative in my previous post, Can scalability be made seamless. This is a very ambitious goal, and by no means are we finished. In addition to the tremendous enhancements already in place, we have a long-term roadmap that covers many aspects of the product.
The efforts we have undertaken (as well as those on our roadmap) involve enhancements to our development frameworks in Java, .Net and C++, mostly around the abstraction layer, including supporting standard APIs that enable us to inject many of our Data Grid and event-driven capabilities through annotations and configuration, with zero or minimal changes to application code.
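To make the annotation-and-configuration idea concrete, here is a minimal, self-contained sketch of how a container can inject a data-grid reference into a plain POJO through an annotation, so the business logic never constructs or references the runtime directly. The `@DataGridRef` annotation and the `Map`-backed grid are hypothetical stand-ins of my own, not the actual GigaSpaces API:

```java
import java.lang.annotation.*;
import java.lang.reflect.Field;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical annotation standing in for a framework-provided one;
// not a real GigaSpaces type.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface DataGridRef {}

// Business logic stays plain: the class never touches the grid runtime directly.
class OrderService {
    @DataGridRef
    Map<String, String> grid; // injected by the container, not constructed here

    void save(String id, String payload) { grid.put(id, payload); }
    String load(String id) { return grid.get(id); }
}

public class AnnotationInjectionSketch {
    // A toy "container" that wires annotated fields, the way a real
    // framework would inject its data-grid proxy at deploy time.
    static void inject(Object bean, Map<String, String> gridImpl) throws Exception {
        for (Field f : bean.getClass().getDeclaredFields()) {
            if (f.isAnnotationPresent(DataGridRef.class)) {
                f.setAccessible(true);
                f.set(bean, gridImpl);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        OrderService svc = new OrderService();
        inject(svc, new ConcurrentHashMap<>());
        svc.save("o-1", "10 shares ACME");
        System.out.println(svc.load("o-1")); // prints: 10 shares ACME
    }
}
```

The point of the pattern is that swapping the injected implementation (local map, partitioned grid, replicated grid) is a configuration decision, with zero changes to the `OrderService` code.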
They also involve increased robustness, making large-scale deployments simpler to set up and manage. Other efforts include extensive integration with popular frameworks, such as Spring and Mule, and recently we also added Web framework integration with Jetty. All this is designed to make the end-to-end scaling experience extremely simple and native. Judging by recent feedback we received, some of it publicly referenceable, it looks like we’re making great strides toward achieving our goal. I particularly liked the following quote from one of our existing customers, Monte Paschi Group, which built a new pricing system with GigaSpaces. Their full case study is available here. I chose the following quote from this study as it highlights a benefit that we don’t always emphasize enough – development simplicity:
The development team is happy, too, since the architecture has been greatly simplified compared to the multi-layered application server system. “We’re not a software company, we’re a financial company,” Santini explains. “We didn’t have weeks or months to study the technology. Our main goal was to use it to achieve our goals. GigaSpaces XAP allowed us to do that right out of the box.”
You can read the full details of the 6.5 release here. For convenience, we grouped the long list of features into separate Java, .Net and C++ categories, and provided detailed descriptions that outline the rationale and the value behind each feature.
In this post I’ll try to highlight some of the important features and provide insight into our future plan.
Seamless scaling using the Service Virtualization Framework
At the application layer, the most notable feature is the Service Virtualization Framework. The Service Virtualization Framework (SVF) can be seen as a major enhancement to Session Beans in EJB3 and to Spring Remoting. This framework enables you to write your business logic as a POJO and deploy the services across a cluster of machines, while providing a single client proxy that virtualizes all those instances as if they were a single server. For more details, I recommend reading a new white paper covering the concept behind this framework and how it can help you easily build scalable, high-performance SOA and event-driven applications. The white paper is available here.
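The core idea of a single client proxy fronting many service instances can be sketched in a few lines of plain Java. The example below is a toy illustration of mine using `java.lang.reflect.Proxy` with round-robin dispatch; it is not the SVF API itself, which layers routing, failover and asynchronous invocation on top of the same concept:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// The service contract the client programs against.
interface PriceService {
    double price(String symbol);
}

public class VirtualizedProxySketch {
    // Build a single client-side proxy that hides N service instances,
    // here with simple round-robin dispatch.
    static PriceService virtualize(List<PriceService> instances) {
        AtomicInteger next = new AtomicInteger();
        InvocationHandler h = (proxy, method, args) -> {
            PriceService target =
                instances.get(next.getAndIncrement() % instances.size());
            return method.invoke(target, args);
        };
        return (PriceService) Proxy.newProxyInstance(
            PriceService.class.getClassLoader(),
            new Class<?>[] { PriceService.class }, h);
    }

    public static void main(String[] args) {
        // Two "deployed" POJO instances; the client sees one service.
        PriceService a = symbol -> 100.0;
        PriceService b = symbol -> 101.0;
        PriceService cluster = virtualize(List.of(a, b));
        System.out.println(cluster.price("ACME")); // served by instance a: 100.0
        System.out.println(cluster.price("ACME")); // served by instance b: 101.0
    }
}
```

From the caller's point of view, `cluster` behaves like a single `PriceService`; adding instances changes the deployment, not the client code.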
Seamless scaling of popular development frameworks
We enhanced and expanded our integration with popular development frameworks. The purpose of the integration is to provide end-to-end seamless scaling to such frameworks in a way that doesn’t require changes to the application. A good example is our Mule-ESB support. Mule users can take existing Mule 2.0 applications and significantly improve performance and scalability by plugging the GigaSpaces runtime into the Mule framework. The good news is that the wiring happens outside of your application code, at a couple of levels:
- Connector level – leveraging our messaging layer as the transport for Mule
- Clustering level – taking advantage of our clustering capabilities, enabling the internal Mule data structure to span across multiple machines for scalability and high-availability
This integration is provided as part of our open source framework, OpenSpaces. We hope and anticipate that these integrations will be used as a reference by other frameworks looking for ways to provide similar levels of scalability and reliability. A good example of this already happening is David Greco’s work integrating the Camel open source integration framework with GigaSpaces.
With 6.6 we also added out-of-the-box integration with Jetty. This was done in collaboration with the Webtide team (the company behind Jetty), who gave us excellent support throughout the process. What I like about this integration is that it enables taking an EXISTING web application packaged as a WAR and dynamically scaling it across a pool of machines. With this approach, you also get session replication injected into your existing application without touching your code or WAR package. If you’re willing to make slight configuration changes, you can also get a caching reference injected into your application. There is a new example that shows what it takes to scale an existing web application. The example uses the Spring Pet Clinic application and deploys it on a GigaSpaces cluster. The full example is available here.
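The principle behind injected session replication is that session state lives in a shared store rather than inside any one web node's process, so any node can serve the next request for a session. Here is a minimal sketch of that idea; the static `ConcurrentHashMap` is my stand-in for the data grid, which in the real integration is wired in by configuration rather than by application code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SharedSessionSketch {
    // Stands in for the data grid that backs the sessions; in the
    // actual integration this is injected by configuration.
    static final Map<String, Map<String, Object>> GRID = new ConcurrentHashMap<>();

    // Each "web node" reads and writes sessions through the shared store,
    // so a request for the same session id can land on any node.
    static class WebNode {
        Map<String, Object> session(String id) {
            return GRID.computeIfAbsent(id, k -> new ConcurrentHashMap<>());
        }
    }

    public static void main(String[] args) {
        WebNode node1 = new WebNode();
        WebNode node2 = new WebNode();
        node1.session("s-42").put("cart", "3 items");        // written on node 1
        System.out.println(node2.session("s-42").get("cart")); // read on node 2: 3 items
    }
}
```

The application code keeps using the ordinary session API; only the backing store changes, which is why an existing WAR can gain replication without modification.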
Removing the language barrier
For decades, languages have been treated almost as a religion by developers. As an ex-CORBA guy, I know how painful language interoperability is to deal with, and how often it requires compromises on functionality, performance and complexity. At GigaSpaces, we realized that there is no reason different languages shouldn’t be treated simply as different ways of writing business logic. Each brings different value. For example, .Net provides better integration with Windows applications, while C++ provides better performance in certain areas and low-level APIs for integration with many third-party libraries.
Persistence: 6.5 includes some major enhancements for implementing our Persistence-as-a-Service model in .Net through support for NHibernate. This enables .Net applications to integrate with existing databases and maintain their own database mapping layer with a native .Net API. This comes along with quite extensive performance improvements. You can see some of the results here for .Net, and here for C++.
One of our goals for 6.5 was to bring the same level of scalability and simplicity we provide for Java to .Net and to C++, without compromising performance. Unlike some of the alternatives in the market, we don’t just provide remote access to our Java runtime; we provide complete application server capabilities in these two languages, as well as complete interoperability among all three. Java, C++ and .Net services and clients can run in and share the same process, removing the network call overhead often incurred when a call crosses language boundaries. An immediate benefit is that you’re able to run your C++ and .Net business logic where the data is. Furthermore, you can now leverage our existing SLA-driven deployment to automate the deployment of Java and .Net applications. This means that instead of running each server process manually, you issue a single deployment command that makes sure your servers are running on the appropriate machines, that your backups are running on different hosts from your primaries, and that if a machine goes down a new instance takes over immediately (or as soon as a machine becomes available), all without any human intervention!
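One of the SLA constraints mentioned above, that a partition's backup must never share a host with its primary, is easy to express as a check. The sketch below is an illustrative model of mine (the `Instance` record and method names are hypothetical, not the product's deployment API), showing the anti-affinity rule the SLA-driven deployment enforces automatically:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SlaPlacementSketch {
    record Instance(String partition, boolean primary, String host) {}

    // Check the anti-affinity constraint: a partition's backup must not
    // share a host with its primary.
    static boolean backupsOnDifferentHosts(List<Instance> deployed) {
        Map<String, String> primaryHost = new HashMap<>();
        for (Instance i : deployed)
            if (i.primary()) primaryHost.put(i.partition(), i.host());
        for (Instance i : deployed)
            if (!i.primary() && i.host().equals(primaryHost.get(i.partition())))
                return false;
        return true;
    }

    public static void main(String[] args) {
        List<Instance> ok = List.of(
            new Instance("p1", true, "hostA"), new Instance("p1", false, "hostB"));
        List<Instance> bad = List.of(
            new Instance("p1", true, "hostA"), new Instance("p1", false, "hostA"));
        System.out.println(backupsOnDifferentHosts(ok));  // true
        System.out.println(backupsOnDifferentHosts(bad)); // false
    }
}
```

In the real system this check (and the corrective action, relocating an instance) runs continuously, which is what makes the deployment self-healing rather than a one-time placement.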
Dynamic language support
Dynamic language support leverages the SVF mentioned above. This means that you can choose to run procedures written in dynamic languages synchronously, asynchronously, in parallel, and so on. Now isn’t that cool?
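The three invocation modes can be illustrated with plain `java.util.concurrent` primitives. This is a conceptual sketch of mine, not the SVF API: the `Callable` stands in for a deployed procedure, and the thread pool stands in for the cluster:

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class InvocationModesSketch {
    public static void main(String[] args) throws Exception {
        Callable<Integer> procedure = () -> 6 * 7; // stand-in for a deployed procedure
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // Synchronous: call and block until the result is back.
        int sync = procedure.call();

        // Asynchronous: submit now, collect the result later.
        Future<Integer> async = pool.submit(procedure);

        // Parallel: fan the call out to several instances and gather all results.
        List<Future<Integer>> parallel =
            pool.invokeAll(List.of(procedure, procedure, procedure));

        System.out.println(sync);            // 42
        System.out.println(async.get());     // 42
        System.out.println(parallel.size()); // 3
        pool.shutdown();
    }
}
```

The framework's value is that the same deployed procedure supports all three modes; the caller picks the invocation style without the procedure knowing or caring.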
Data awareness everywhere
Throughout all of our development efforts, we are making sure data-awareness is maintained across the entire stack. Data-awareness means that invoking a method on the new Service Virtualization Framework can be routed to a particular service instance based on the data associated with that service instance. It also means that when you send a message through our JMS implementation, we will route it to the JMS partition that manages the relevant data. Unlike alternative solutions, this is native to our environment, meaning that there is no need for external integration and complexity to achieve this behavior.
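A common way to implement this kind of routing is to hash a routing key onto a fixed set of partitions, so every operation on the same key lands on the partition that owns that key's data. The sketch below shows the general hash-routing idea, not the product's internal implementation; in the real runtime the key is derived from the message or service argument rather than passed explicitly:

```java
public class DataRoutingSketch {
    // Route by a hash of the routing key, so all operations on the same
    // key land on the partition that owns that key's data.
    static int partitionFor(String routingKey, int partitions) {
        return Math.abs(routingKey.hashCode() % partitions);
    }

    public static void main(String[] args) {
        int partitions = 4;
        // Every message about account "ACC-17" goes to the same partition,
        // so processing happens next to that account's data.
        int p1 = partitionFor("ACC-17", partitions);
        int p2 = partitionFor("ACC-17", partitions);
        System.out.println(p1 == p2); // true
    }
}
```

Because the routing function is deterministic, collocating the service instance with its partition guarantees the method call, the message, and the data all meet on the same node.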
Performance, Performance and more Performance…
Improving performance remains a constant goal for all of our releases. As the product matures, finding places where performance can be further optimized is getting harder, and I am therefore always pleasantly surprised when one of our developers comes up with a creative idea around performance.
In this release we improved performance on several fronts — including Java, .Net and C++ — which involved significant optimizations of object serialization and multi-core scalability. For the latter we are working with Azul, making their systems part of our testing environment, along with other multi-core systems such as Sun Niagara. You can see some of the figures and details here, here (comparison with the previous .Net release) and here (C++).
We conducted detailed comparisons of the latency and throughput of a “classic” transactional application based on the standard JEE model (using JBoss, JMS, Spring, Hibernate) against the same application using GigaSpaces as the messaging and data layer — and eventually replaced the entire JEE stack with a GigaSpaces + Spring stack. It is important to note that throughout this process, the business logic code remained untouched. The initial results of these tests can be found here.
You can find the details of the code that was used in this test and the migration steps in a new white paper that is now available on our site here. Uri Cohen wrote up in his blog (The Space as a Messaging Backbone) some of the interesting findings from this analysis, which showed the difference between end-to-end measurement and point optimization, and why in some cases putting a distributed cache in front of a database is not going to be enough.
For obvious reasons I can’t expose our entire roadmap yet. What I can say for sure is that we’re going to continue improving the seamless, simple scalability provided by our platform.
We view the partnership and integration with other frameworks as strategic, and we’re going to continue with that effort. One of the frameworks we are planning on working on is GlassFish.
We already announced our first cloud offering, designed to run on Amazon EC2, which includes partnerships and integration with RightScale and Cohesive FT. If I’m not mistaken, this was the first Java application server available in a pay-per-use model, designed to meet the needs of enterprises and ISVs that want to offer their applications on the cloud, including as Software-as-a-Service. We’re going to put a lot of effort into making cloud deployments simpler, enabling our customers to use it in their local virtualized environments (private clouds) and on public clouds (Amazon EC2, GoGrid, FlexiScale, AppNexus and others), or even a combination of the two, without changing their applications. We’re now working on a new version that enables provisioning a cluster of machines, deploying the application on said cluster and opening an administrative console for the cluster — all with a single click. This is already working in an internal beta. We’re planning to provide a preview release by next month. With the availability of Windows-based clouds, we will be providing our .Net application platform as a cloud offering as well.
On the API and standards front, we recently joined the OSGi Alliance, where we expect to play an active role. We are also looking into ways to strengthen our compliance with some of the latest standards in the JEE stack, such as EJB 3.0 and JPA. The challenge is not just basic API mapping, but how to do it in a way that doesn’t break our scale-out architecture and doesn’t create complexity. Unfortunately, previous versions of the EJB spec weren’t a good fit. EJB 3.0 looks much more promising.
On the .Net front, we’re going to continue with our performance optimization project. We’re also working on making our .Net offering fit natively within a .Net development environment, providing development and installation packages that better match the .Net spirit. We are also looking into ways to simplify the testing and debugging process. For pure .Net users, we will make the .Net version available as a standalone package at a reduced price (details will follow).
On the C++ front, we’re going to provide our customers with an open source version of our C++ binding and a complete package that will enable them to compile and build our C++ binding with their own dependencies, libraries, and compiler versions and flags. This will also allow using the current C++ framework as a broad integration framework for third-party tools and languages.
There’s much more here than I could cover in this post; I tried to put together what I think are the highlights of the release. As it’s impractical to cover such a wide spectrum of topics in a single post, we started a process in which different people from our R&D and field engineering teams will post on specific aspects of the product and on best practices for using GigaSpaces in existing web, financial, online gaming and other applications.
Be part of our next release:
As we are now deciding what to include in our 7.0 release, it would be great to hear your feedback and specific requests for enhancements or new features. You can either send me a direct email or send it to PM at GigaSpaces.com. Alternatively, if you think you have a good idea that other users might be interested in, you can implement it on our community site – OpenSpaces.org.