Mule, from MuleSource, is a rather popular open source ESB implementation. What I particularly liked about it when I first saw it is how simple it makes gluing together services that communicate over different protocols, even when they don't comply with the same standard API.
Another thing that I like about Mule is its lightweight and simple architecture. Unlike many other ESB implementations, Mule does not bind itself to the use of web services. In fact, you can use Mule to orchestrate the flow between SAP and Siebel in the same way that you can orchestrate two POJO services collocated in the same process.

This approach is what made the integration with GigaSpaces possible: it enabled us to introduce many of our innovative ideas and bring the value of Space Based Architecture into the Enterprise SOA world. It is also a good fit with the type of customers we are engaged with, where latency and performance play an important role. The interesting thing is that, thanks to the Mule connector architecture, your application code does not need to know anything about GigaSpaces. All it "sees" is better performance and reliability. For GigaSpaces users, the integration with Mule makes it quite easy to connect GigaSpaces applications to the external world and leverage the many existing Mule connectors to plug in other protocols without writing additional code. We already have a few customers who chose to use GigaSpaces and Mule together and are quite happy with the results. An example of the feedback we have received can be seen in the following post: GigaSpaces and Mule = Pure Sex.
GigaSpaces and Mule ESB integration overview:
Basically there are a few components to the integration:
- GigaSpaces connector – Enables GigaSpaces to plug in as a high-performance transport for Mule.
- Mule server clustering – Leverages GigaSpaces clustering to provide an in-memory, fault-tolerant solution for the Mule server itself.
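To make the first point concrete, the kind of service Mule hosts is a plain POJO that contains only business logic. The sketch below uses only standard Java (no Mule or GigaSpaces APIs; the class and method names are hypothetical) to show why such a service can be bound to any transport, including the GigaSpaces connector, purely through configuration.

```java
// A hypothetical POJO service of the kind Mule would host. It knows nothing
// about transports; Mule invokes it when an event arrives on whatever
// inbound endpoint the configuration binds it to.
public class OrderValidationService {

    // Pure business logic: validate an order id and return a result.
    public String validate(String orderId) {
        if (orderId == null || orderId.isEmpty()) {
            return "REJECTED";
        }
        return "ACCEPTED:" + orderId;
    }
}
```

Because nothing here references a transport, swapping JMS, HTTP, or the GigaSpaces connector underneath it is a configuration change, not a code change.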
The GigaSpaces Mule Connector:
The diagram below shows how services that are running within Mule can use the GigaSpaces Connector as a transport layer.
As can be seen in this diagram, the services are unaware of the GigaSpaces implementation. These details are abstracted by the Mule server, and the interaction is defined declaratively, outside of the service code. When a service receives an event, it processes it and sends the result back to Mule, which in turn calls the GigaSpaces connector to store that state. Another service uses the same connector to wait for that state change; once it happens, the appropriate method on that service implementation is triggered, and so forth.
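The flow above can be sketched in plain Java. This is a simplified model, not the real integration: a `BlockingQueue` stands in for the space, and the class and method names are invented for illustration. In the actual product, the Mule connector performs the space write and the blocking wait on behalf of the services.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Simplified model of the connector flow: "service A" processes an event and
// its result is stored as shared state; "service B" blocks until that state
// appears, then is triggered with it.
public class SpaceFlowSketch {

    // Stand-in for the space: a shared data structure with blocking reads.
    private final BlockingQueue<String> space = new LinkedBlockingQueue<>();

    // Service A processes the event; the "connector" stores the result.
    public void serviceA(String event) {
        String result = "processed:" + event;
        space.offer(result); // connector writes the resulting state
    }

    // Service B: the "connector" waits for the state change, then triggers
    // the service method with the new state.
    public String serviceB() {
        try {
            String state = space.take(); // blocks until state is available
            return state.toUpperCase();  // the service's own processing
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException(e);
        }
    }
}
```

The key point the sketch illustrates is that neither "service" calls the other directly; they are coupled only through the stored state, which is exactly what lets the real connector swap an in-memory space in as the transport.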
Mule server clustering
The diagram below illustrates how Mule integrates with the GigaSpaces in-memory cluster to ensure high availability and reliability.
In general, all of the Mule state (the SEDA queues in particular), as well as the application state, is stored in the GigaSpaces in-memory cluster. That state is replicated synchronously to a backup instance. When the primary node fails, the backup node takes over automatically and continues to process events from the exact point of failure. This enables continuous high availability of the application and seamless failover, without requiring the user to take explicit action during the failure event and without the need to handle complex retries to ensure the reliability and consistency of the application afterwards.
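The primary/backup scheme described above can be modeled in a few lines of plain Java. This is a toy illustration under stated assumptions, not the GigaSpaces clustering API: two in-process queues stand in for the primary and backup nodes, and synchronous replication is modeled by updating both queues before a write is considered done.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of primary/backup replication: every state change reaches the
// backup before the operation completes, so after a primary failure the
// backup resumes from the identical queue of pending events.
public class PrimaryBackupSketch {

    private final Deque<String> primaryQueue = new ArrayDeque<>();
    private final Deque<String> backupQueue = new ArrayDeque<>();
    private boolean primaryAlive = true;

    // Synchronous replication: the event is on both nodes before we return.
    public void enqueue(String event) {
        primaryQueue.addLast(event);
        backupQueue.addLast(event);
    }

    // Simulate a primary node crash.
    public void failPrimary() {
        primaryAlive = false;
    }

    // Processing continues on whichever node is alive; removals are also
    // replicated while the primary is up, keeping the replicas identical.
    public String processNext() {
        if (primaryAlive) {
            backupQueue.pollFirst();          // replicate the removal
            return primaryQueue.pollFirst();
        }
        return backupQueue.pollFirst();       // backup resumes seamlessly
    }
}
```

In the sketch, events enqueued before the crash are processed after it with no loss and no user intervention, which is the property the real clustering provides for the Mule SEDA queues.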
Given the positive experience both we (GigaSpaces) and the MuleSource team have had, we decided to formalize this integration. This Wednesday (Feb 11 @ 9 AM PDT / 12 PM EDT / 6 PM CET) we are hosting a joint webinar in which we will unveil the details behind this new development. In this webinar, Uri Cohen, GigaSpaces Product Manager, and Ken Yagen, Sr. Director of Engineering at MuleSource, will discuss the details behind this integration and use a live demonstration to illustrate how it works. It will also be a good opportunity to ask the experts more questions about this joint offering. To join this webinar, make sure that you register in advance.
See you at the webinar!