What is Social E-Commerce anyway?
According to Wikipedia, social shopping is a mechanism for e-commerce in which shoppers’ friends become involved in the shopping experience. Social shopping attempts to use technology to mimic the social interactions found in physical malls and stores.
A recent study showed that over 92 percent of executives from leading retailers are focusing their marketing efforts on Facebook and its related applications. Furthermore, over 71 percent of users have confirmed they are more likely to make a purchase after “liking” a brand they find online. (source)
I recently came across an interesting analysis, Subscribers, Fans & Followers: The Social Break-Up. The report analyzed social behavior, and more specifically why consumers end brand relationships. Even though this post is not about ending brand relationships, some of the statistics in the report are quite useful for quantifying where social e-commerce stands today in terms of market size and potential.
- 73% of U.S. online consumers have created a profile on Facebook.
- 65% of U.S. online consumers are currently active on Facebook.
- 42% of U.S. online consumers (64% of those on Facebook) are “FANS” (have “liked” a company on Facebook).
- 17% of U.S. online consumers have created a Twitter account.
- 9% of U.S. online consumers are currently active on Twitter.
- 5% of U.S. online consumers (56% of those on Twitter) are FOLLOWERS (use Twitter and have “followed” at least one company).
- 71% of FOLLOWERS expect to receive marketing messages from companies through Twitter.
Clearly the adoption of social commerce is quite staggering.
The focus of this post isn’t to convince you to invest in social commerce, but to give context for one of the common scalability challenges associated with building a social e-commerce platform – the “Social Graph.”
The Social Graph challenge
The Social Graph is “the global mapping of everybody and how they’re related.”
Processing social networks is not an easy proposition:
- Massive amounts of branching data
- No data locality
- Very few assumptions can be made about the data
In other words, to meet the capacity scaling demand we can’t store the data in a centralized location; we are forced to distribute it. On the other hand, when we access (or query) the data, we can’t assume that everything a query needs lives on a single node; we are forced to look for the data on multiple nodes.
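As a minimal sketch of this distribution (node names and the hashing scheme here are illustrative, not a specific product’s implementation), user records can be spread across partitions by hashing the user id – which is exactly why a single friends query ends up touching many nodes:

```python
# Minimal sketch of distributing the graph by hashing user ids to
# partitions. Node names and the hashing scheme are illustrative.

import hashlib

NODES = ["node-0", "node-1", "node-2", "node-3"]

def partition_for(user_id: str) -> str:
    """Deterministically map a user id to the node storing its record."""
    digest = int(hashlib.md5(user_id.encode()).hexdigest(), 16)
    return NODES[digest % len(NODES)]

def nodes_for_friends(friend_ids):
    """The set of partitions a friends-of-X query must contact."""
    return {partition_for(f) for f in friend_ids}

friends_of_x = [f"user-{i}" for i in range(100)]

# With 100 friends spread over 4 partitions, a single query fans
# out across essentially every node in the cluster.
print(len(nodes_for_friends(friends_of_x)))
```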
Let’s take a simple scenario to get some sense of the complexity of the problem:
- Imagine every Facebook user (500 million)
- Imagine each person is only connected to 100 others (conservative estimate)
Query: How is user X connected with Y?
- X has 100 friends
- Each of them has 100 friends
- 10,101 nodes visited (X, X’s 100 friends, and their 100 friends each)!
- 101 reads from the underlying storage system!
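The arithmetic above can be reproduced with a naive two-hop traversal over a synthetic graph (the friend counts here are stand-ins matching the example, not real data):

```python
# Naive two-hop traversal over a synthetic graph: one storage read
# per friend list fetched. Each user has exactly 100 friends, as in
# the example above.

from collections import defaultdict

FRIENDS_PER_USER = 100
graph = defaultdict(list)

# Build X's friends and their friends.
graph["X"] = [f"f{i}" for i in range(FRIENDS_PER_USER)]
for f in graph["X"]:
    graph[f] = [f"{f}-{j}" for j in range(FRIENDS_PER_USER)]

reads = 0
visited = {"X"}

def load_friends(user):
    """Simulate one random read against the storage tier."""
    global reads
    reads += 1
    return graph[user]

# Hop 1: X's friend list; hop 2: each friend's friend list.
for friend in load_friends("X"):
    visited.add(friend)
    for fof in load_friends(friend):
        visited.add(fof)

print(reads)          # 1 + 100 = 101 storage reads
print(len(visited))   # 1 + 100 + 100*100 = 10,101 nodes
```

Even this tiny example shows the pattern: the read count grows with the branch factor at every hop, and each read is a random I/O against a different part of the store.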
Clearly, even if we can scale at the capacity level, we can’t scale at the query level.
The crux of the problem:
- High branch factor necessitates many loads to serve even a simple request
- No data locality + high branch factor means very high random I/O
- Traditional storage models (RDBMS, flat files etc.) are a poor fit
To address these constraints, we took the following approach:
- Use memory as the main storage
- Random I/O access works much better on memory devices than on disk.
- Execute the code with the data – using real-time map/reduce
- To reduce the number of iterations required to execute a particular query, we use the executor API, which enables us to push the code to the data. By doing that, we can execute fairly complex data processing on the data node at memory speed rather than network speed.
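As a toy illustration of the idea (this is a Python sketch with made-up names, not the actual GigaSpaces executor API), a small task object is shipped to the partition that owns the data, runs co-located with it, and only the small result crosses the network:

```python
# Toy sketch of "execute the code with the data": instead of pulling
# a user's full friend list over the network and filtering locally,
# we ship a small task to the partition owning the record and send
# back only the result. All names are illustrative.

class Partition:
    """One data node holding a shard of the social graph in memory."""
    def __init__(self, data):
        self.data = data  # user_id -> list of friend ids

    def execute(self, task, user_id):
        # The task runs co-located with the data, at memory speed.
        return task(self.data.get(user_id, []))

def mutual_friends_task(other_friends):
    """Build a task that intersects friend lists on the data node."""
    other = set(other_friends)
    return lambda friends: sorted(other.intersection(friends))

partition = Partition({"X": ["a", "b", "c", "d"]})

# Only the intersection crosses the network, not the full lists.
result = partition.execute(mutual_friends_task(["b", "d", "e"]), "X")
print(result)  # ['b', 'd']
```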
- De-normalize the data
- To reduce the number of graph traversals and network hops per query, we copy elements of the graph into each node. For example, the lists of friends and friends-of-friends (up to a certain degree) can be stored in each node, making them available to any element of the graph without consulting other nodes.
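A minimal sketch of this denormalization (field names are illustrative): each stored record carries a precomputed friends-of-friends set, trading write cost and storage for single-read two-hop queries:

```python
# Sketch of denormalizing the graph: each record stores not only its
# friends but a precomputed friends-of-friends set, so a two-hop
# query becomes one local lookup. Field names are illustrative.

def denormalize(graph):
    """Copy each user's two-hop neighborhood into its own record."""
    records = {}
    for user, friends in graph.items():
        fof = set()
        for f in friends:
            fof.update(graph.get(f, []))
        fof.discard(user)
        records[user] = {"friends": set(friends),
                         "friends_of_friends": fof - set(friends)}
    return records

graph = {"X": ["a", "b"], "a": ["X", "c"], "b": ["X", "c", "d"]}
records = denormalize(graph)

# Two-hop answer without consulting any other node:
print(sorted(records["X"]["friends_of_friends"]))  # ['c', 'd']
```

The trade-off is that every friendship change now fans out writes to many records, which is why the copying is bounded “up to a certain degree.”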
The operational perspective
Scaling social e-commerce involves not just the application architecture but also the operational side. This is of particular importance as e-commerce and social sites tend to evolve quite rapidly, so the time it takes to release a new feature from development to production is critical. This process is often referred to as Continuous Deployment, or in our specific case as “Continuous Scaling.”
There are basically two factors that matter for achieving continuous scaling/deployment:
- Automation – if the process of deployment and scaling involves a lot of human intervention, the time it takes to release a new feature grows significantly. It is therefore important that the entire process can be fully automated. In some cases you may still want manual checkpoints, but even then the expectation is that manual intervention amounts to clicking a <continue> button and nothing else. To achieve this level of automation, we need to interact with our application infrastructure through an API. This process of automation and integration between the development and operational environments is often referred to as DevOps. The GigaSpaces reference for the dev-ops API is provided here.
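As a hedged illustration of the principle (the step functions here are hypothetical placeholders, not the GigaSpaces dev-ops API), the whole pipeline is driven by code, and any manual checkpoint reduces to a single yes/no confirmation:

```python
# Hypothetical sketch of a fully automated deploy pipeline with one
# optional manual checkpoint. The step functions are placeholders
# standing in for real infrastructure API calls.

def run_pipeline(steps, checkpoint_before=None, confirm=lambda: True):
    """Run each step; pause only for a single yes/no confirmation."""
    for name, step in steps:
        if name == checkpoint_before and not confirm():
            return f"aborted before {name}"
        step()
    return "deployed"

log = []
steps = [
    ("build",  lambda: log.append("build")),
    ("test",   lambda: log.append("test")),
    ("deploy", lambda: log.append("deploy")),
    ("scale",  lambda: log.append("scale")),
]

# The only human involvement is clicking <continue> before deploy.
status = run_pipeline(steps, checkpoint_before="deploy",
                      confirm=lambda: True)
print(status, log)
```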
- Schema evolution – there are many cases in which we want to add to or change the data structures of an existing application as part of an upgrade, without bringing the system down. This is considered one of the more complex challenges with many existing databases. A document model is schema-less and therefore better suited to this sort of rapid data change.
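A small sketch of why a document model eases this (record and field names are illustrative): records written before and after an upgrade coexist in the same store, and readers simply default the new field instead of requiring an offline, locking schema migration:

```python
# Sketch of schema-less evolution with a document model: old-shape
# and new-shape records coexist in the same store, so no offline
# ALTER TABLE-style migration is needed. Field names are illustrative.

store = [
    {"user": "alice", "cart": ["book"]},                     # old shape
    {"user": "bob", "cart": ["phone"], "wishlist": ["tv"]},  # new shape
]

def wishlist_size(doc):
    """Readers default the new field instead of failing on old docs."""
    return len(doc.get("wishlist", []))

sizes = [wishlist_size(d) for d in store]
print(sizes)  # [0, 1]
```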
The full story
Tomer did a great job describing in more detail the scalability challenges and the approach taken at Sears to address them, as well as providing specific insight into the performance figures achieved with their current system. The full interview recording is provided below: