An Argument for Stateful Services

One of our most common engagements is helping companies retool infrastructure after they’ve reached the limits of their existing technology stack. These customers are looking to increase the resiliency and scale of the services that underpin financial, retail or Internet of Things (IoT) offerings.

In these applications the bottleneck is typically the overhead and coordination associated with managing shared state. These applications accept requests on behalf of a user and then modify or allocate some resource. The resource could be inventory in a retail system or the status of a sensor, but the essence of the request is the modification or allocation of a shared value.

Stateless Services

The simplest applications use a load balancer in front of a tier of stateless application servers that perform the application logic and interact with a database. In this model the application servers are stateless, meaning that for each transaction they fetch the necessary data from the database, perform some logic and return a result. Once the request is done they discard all allocated memory and state before servicing the next request.
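
As a minimal sketch, a stateless handler might look like the following. The get_connection() helper, the inventory table and the DB-API-style SQL are illustrative assumptions, not any specific product's API:

    # A stateless request handler: every request pays for a full
    # fetch/modify/write cycle, and nothing survives between calls.

    def handle_reserve(item_id, quantity):
        conn = get_connection()  # hypothetical helper; no state carried over
        row = conn.execute(
            "SELECT available FROM inventory WHERE item_id = ?", (item_id,)
        ).fetchone()
        if row is None or row[0] < quantity:
            return {"ok": False}
        conn.execute(
            "UPDATE inventory SET available = available - ? WHERE item_id = ?",
            (quantity, item_id),
        )
        conn.commit()
        return {"ok": True}  # all local state is discarded on return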

Inherent Inefficiency of Stateless Services

Stateless services retain no memory of the request that immediately preceded the current one. The primary benefit of stateless services is their simplicity. If your application servers become overwhelmed you can easily add more, and the load balancer will automatically redistribute work across the new machines. However, because the services are stateless the application is required to access the database for every request.

This pattern of fetching, modifying, writing and forgetting amplifies the work required of the database to support your application. Because more than one user could request access to the same resource at any given moment, the database must use transactions and complicated schemes such as multi-version concurrency control (MVCC) to manage concurrency. This has led to many applications becoming performance-bound by their database. Newer, scalable databases (such as Apache Cassandra) make it easier to distribute this load across many machines, but the fundamental problem lies with the inherent inefficiencies of stateless services.
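
To make that coordination cost concrete, here is a sketch of two users racing for the same row, assuming a database that supports row locking (the SELECT ... FOR UPDATE statement, table and connection object are illustrative):

    # Read-modify-write under concurrency: the row lock forces the database
    # to serialize access, and that serialization is the overhead described
    # above.

    def reserve_with_locking(conn, item_id, quantity):
        with conn:  # open a transaction; commit on success, roll back on error
            # FOR UPDATE blocks any other transaction touching this row
            # until we commit, pushing all coordination into the database.
            row = conn.execute(
                "SELECT available FROM inventory WHERE item_id = ? FOR UPDATE",
                (item_id,),
            ).fetchone()
            if row is None or row[0] < quantity:
                return False
            conn.execute(
                "UPDATE inventory SET available = available - ? WHERE item_id = ?",
                (quantity, item_id),
            )
            return True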

The Enemy is Shared Mutable State

By identifying commonly accessed pieces of data and caching state across transactions, your application can reduce the load on the database and increase its ability to scale. The challenge is that if one application server changes a shared value it needs to notify all other servers holding that cached value of the update. For values that change often, the overhead of notification and propagation negates any benefit gained by caching.
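
A sketch of why this breaks down: every write to a shared cached value must be broadcast to every peer that might hold it. The peers list and notify_invalidate() transport below are hypothetical:

    # Write-through caching with peer invalidation. Every write triggers
    # O(peers) notifications, so a frequently written value can cost more
    # to cache than to simply re-read from the database.

    class CachingServer:
        def __init__(self, db, peers):
            self.db = db
            self.peers = peers  # other app servers caching the same data
            self.cache = {}

        def write(self, key, value):
            self.db.write(key, value)
            self.cache[key] = value
            for peer in self.peers:  # notification overhead on every write
                peer.notify_invalidate(key)

        def on_invalidate(self, key):
            self.cache.pop(key, None)  # next read falls through to the database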

Introduction to Stateful Services

Instead of expecting every application server to be able to service any request made by any user, we can instead dedicate certain servers to requests made by a subset of users. If we know that one and only one server is interacting with a particular record, we know for certain that the value that server holds is the correct one, and we don't have to constantly re-read it from the database. In this model we'll split the application into two halves: a stateless front-end application that accepts requests from users, and a stateful application that manages resource allocation.

Scaling by Communicating

The most common approach is to use a hash function to assign responsibility to specific servers. In this model the front-end application communicates with the resource service via queues or RPC. Both have advantages and disadvantages and will be the subject of later blog posts, but we’ve seen both used at large scale.
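
A minimal sketch of hash-based routing, assuming the front end knows the current list of resource servers (the addresses below are illustrative):

    import hashlib

    # Hash-based ownership: the front end hashes the resource key and always
    # routes to the same resource server, whether the transport underneath
    # is a queue or RPC.

    SERVERS = ["resource-1:9000", "resource-2:9000", "resource-3:9000"]

    def owner_for(resource_key):
        digest = hashlib.md5(resource_key.encode()).hexdigest()
        return SERVERS[int(digest, 16) % len(SERVERS)]

    # Every request for "item:42" lands on the same server, so that server's
    # in-memory copy of the record stays authoritative.
    print(owner_for("item:42"))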

With both approaches the resource service will update the in-memory copy of the value, write to the database, and then respond to the requesting front-end server. If the resource service crashes it will either be restarted or fail over to a standby, which reads the most recent values from the database. Because the resource service always writes to the database before responding to the request, you have the guarantee that no resource is allocated more than once.
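
Sketched out, the ordering is the important part: the in-memory value is the fast path, but the database write must complete before the front end hears a yes. All names here are illustrative:

    # The resource server's allocation path. Because this server is the sole
    # owner of its resources, the in-memory counts are authoritative, and
    # writing to the database before replying guarantees no resource is
    # handed out twice even if the server crashes immediately afterwards.

    class ResourceServer:
        def __init__(self, db):
            self.db = db
            self.available = {}  # in-memory state, loaded once at startup

        def recover(self, resource_ids):
            # After a restart or failover, rebuild state from the database.
            for rid in resource_ids:
                self.available[rid] = self.db.read(rid)

        def allocate(self, rid, quantity):
            if self.available.get(rid, 0) < quantity:
                return False  # answered from memory; no database read needed
            self.available[rid] -= quantity
            self.db.write(rid, self.available[rid])  # durable before we reply
            return True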

Managing Resource Ownership Leases

With either RPC or queues you'll need a system for managing which server is responsible for which resources. Common approaches include using Redis, Zookeeper and Cassandra. We're partial to Cassandra for this task because it's an AP system that includes support for TTLs and compare-and-set (CAS), which makes it easy to manage the bookkeeping requirements of leases. Gossip protocols are another alternative, but unless a reusable framework already exists, the effort required to write and debug a custom implementation is typically beyond the scope of most projects.
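
As a hedged sketch of what that bookkeeping can look like with the Python Cassandra driver (the keyspace, leases table and 30-second TTL are illustrative assumptions): a lease is claimed with a lightweight transaction and expires on its own unless the owner keeps refreshing it.

    from cassandra.cluster import Cluster

    # Lease bookkeeping on Cassandra: CAS (lightweight transactions) decides
    # ownership, and the TTL releases the lease automatically if the owner
    # dies and stops refreshing it.

    session = Cluster(["127.0.0.1"]).connect("coordination")

    def try_acquire(range_id, server_id):
        result = session.execute(
            "INSERT INTO leases (range_id, owner) VALUES (%s, %s) "
            "IF NOT EXISTS USING TTL 30",
            (range_id, server_id),
        )
        return result.one()[0]  # the [applied] flag: True if we won the lease

    def refresh(range_id, server_id):
        # Re-arm the TTL, but only if we still hold the lease.
        result = session.execute(
            "UPDATE leases USING TTL 30 SET owner = %s "
            "WHERE range_id = %s IF owner = %s",
            (server_id, range_id, server_id),
        )
        return result.one()[0]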

Dynamically Scaling with Load

During periods of low activity each resource management server may be responsible for a large portion of the overall resource pool. As load increases, new servers can be added and the range of resources managed by each server shrinks. This allows front-end servers and resource management servers to be scaled independently and respond dynamically to load.
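
One common way to get this behavior is a consistent-hash ring, sketched below with illustrative names. Adding a server takes over only a slice of one existing server's range; everything else stays put:

    import bisect
    import hashlib

    # A consistent-hash ring: each server owns the arc of the keyspace
    # between its predecessor's point and its own. Adding a server splits
    # one arc instead of reshuffling every assignment.

    def _point(name):
        return int(hashlib.md5(name.encode()).hexdigest(), 16)

    class Ring:
        def __init__(self, servers):
            self.points = sorted((_point(s), s) for s in servers)

        def add(self, server):
            bisect.insort(self.points, (_point(server), server))

        def owner_for(self, resource_key):
            keys = [p for p, _ in self.points]
            i = bisect.bisect(keys, _point(resource_key)) % len(self.points)
            return self.points[i][1]

    ring = Ring(["resource-1", "resource-2"])
    ring.add("resource-3")  # shrinks one existing range; others are untouched
    print(ring.owner_for("item:42"))

Production rings typically hash each server to many virtual points so load spreads evenly, but the ownership rule is the same.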

Implications for Database Selection

The transition to stateful services has two impacts on database selection. Because we have a guarantee that only one server is managing allocation of a given resource, we can do away with database transactions, which broadens the set of databases that can be considered. Additionally, because we're maintaining state in the memory of the resource allocation service, we've eliminated the vast majority of reads, and what was a mixed workload becomes write-dominant.

Wrapping Up

Almost no application can do away with mutable state entirely: values must be recorded, profiles must be updated and inventory changes. However, we can limit how much state is shared, and by limiting shared state your application can scale linearly to meet almost any load.