Many articles, blogs, and other documents have been written about how to scale your data linearly. To scale your application, you need to partition your data across multiple machines, each handling part of the load. If the data can be partitioned correctly, the grid can scale out simply by adding more machines. But not all data can be partitioned. In some use cases, the majority of the application data can be partitioned, except for a small portion that is required by all partitions. This is usually referred to as static or reference data: a relatively small amount of data that is rarely updated but used frequently when processing the data on each partition. A typical example is a dictionary. Trading systems, for instance, need a dictionary of symbols to validate the data they accept, or a set of simple rules that their services must follow. Both of these data sets change only during off-peak hours and take up just a few megabytes.
The two options usually available are duplicating this data across all partitions, or storing it in one of the partitions and accessing it remotely when needed. Neither option is ideal. Duplication requires custom code and is prone to errors later on, especially as more partitions are added. Remote access is usually not a suitable solution either, because this data must be accessed quickly: each remote call involves serialization and a network round trip, both of which reduce scalability and increase latency.
Fortunately, GigaSpaces comes with a tailor-made solution for this challenge: functionality similar to a near cache, but with predefined criteria, called a Local View. Much like a database view, the GigaSpaces Local View allows you to take part of the cluster data and store it locally in a read-only near cache running in each of your processes. This lets you define the reference-data portion of your data and share it across all partitions, while the master copy of the reference data can live on a specific partition or be spread across all of the partitions.
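Conceptually, a Local View behaves like a read-only, criteria-filtered snapshot of the cluster data held inside your own process. The plain-Java sketch below has no GigaSpaces dependency and all names in it are illustrative; it only demonstrates the idea, not the product API:

```java
import java.util.Map;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Illustrative only: a read-only, criteria-filtered local snapshot of
// cluster data, mimicking what a Local View provides in-process.
class LocalView<K, V> {
    private final Map<K, V> snapshot;

    LocalView(Map<K, V> clusterData, Predicate<V> criteria) {
        // Copy only the entries matching the predefined criteria.
        this.snapshot = clusterData.entrySet().stream()
                .filter(e -> criteria.test(e.getValue()))
                .collect(Collectors.toConcurrentMap(Map.Entry::getKey,
                                                    Map.Entry::getValue));
    }

    // Reads are local and read-only: no network call, no serialization.
    V read(K key) {
        return snapshot.get(key);
    }

    int size() {
        return snapshot.size();
    }
}
```

In the real product the criteria are declared as a view query in pu.xml and GigaSpaces keeps the snapshot in sync for you; the sketch only shows why reads against it are cheap.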
Below is a simple example of how to implement a Local View for reference data on your cluster. It is based on the processing unit example located in the example folder:
Each partition has an embedded proxy that references only the data located in that partition instance, and this is the proxy the services use. Consequently, we need to add another proxy that is clustered, and on top of that proxy we define our Local View. A Local View can cover multiple classes, each with its own criteria.
Example of the embedded proxy configuration in pu.xml:
<os-core:space id="space" url="/./space" />
<os-core:giga-space id="gigaSpace" space="space" />
Next, we add the Local View settings:
<os-core:giga-space id="localGigaSpace" space="localViewSpace" clustered="true" />
<os-core:local-view id="localViewSpace" space="space">
    <!-- the class attribute is required; com.example.Data stands in for your own reference-data class -->
    <os-core:view-query class="com.example.Data" where="type=0" />
</os-core:local-view>
In this case, I have created a Local View that stores all objects whose type field equals 0. The master copy of these objects can be anywhere in the cluster; my Local View holds a copy of each of them, so I can access them from within the same JVM.
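To consume the Local View, a service is simply wired with both proxies: the partition-local one for business data and the Local View one for reference data, so reference reads never leave the process. A hedged wiring sketch for pu.xml, where the bean id, class name, and property names are illustrative:

```xml
<!-- Illustrative wiring: the service reads business data through the
     partition-local proxy and reference data through the Local View proxy -->
<bean id="tradeProcessor" class="com.example.TradeProcessor">
    <property name="gigaSpace" ref="gigaSpace" />
    <property name="referenceData" ref="localGigaSpace" />
</bean>
```

Because both properties are plain GigaSpace proxies, the service code stays identical whether a read hits the partition or the in-process view.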