
Scaling web apps explained: Stateful, Hybrid, and Stateless Models.

Technology is a double-edged sword. On one hand, companies strive to deliver an excellent level of performance and efficiency to gain customer trust and loyalty; on the other hand, they must keep costs in check and invest wisely in new technologies when deciding to reinvent themselves.
It doesn’t seem too complicated in theory. In practice, however, given how much infrastructure complexity has grown over the years, providing strong security without sacrificing web performance is far more complex and resource-intensive. That’s where cloud-native technologies come into play, allowing companies to develop applications according to their needs.

The migration to stateless applications keeps growing year over year.

Stateless infrastructure promises an ideal implementation model by focusing on the application rather than the infrastructure. The approach didn’t gain popularity merely because “stateless is cool”; it became an industry standard because it actually solves real problems.

According to a recent Forrester report, around 40% of companies worldwide have already adopted stateless technologies in one way or another. The rest still run stateful applications with traditional deployments, but plan to migrate to stateless infrastructure in the near future.

#Stateless vs stateful applications: differences

Stateful and stateless describe whether an application is designed to store the “state”, that is, information that is later used to process further requests.

Any application that stores information from one request and uses it to process later requests is considered stateful.

An example of a stateful application

A stateful app is an application that saves client data, locally or on a remote host, and uses that data in the next session, when the client makes another request.

For example, a user logs into a stateful app by providing their credentials. Once the credentials are verified and the login succeeds, the application stores that state in memory: logging in sets a variable that changes the state of the server. Because this change lives only in that server’s memory, a later request made with the same credentials that reaches another server fails, since the second server never stored the variable.

Now let’s suppose we have two servers, server1 and server2. In the middle, between the users and the servers, sits another machine called a load balancer. If the load balancer routes the request to server1, the call succeeds because the logged-in variable is set there. However, if it routes the request to server2, the application fails because the variable is not set on server2.

If a server or an application stores data in the backend and uses it to identify the user, it is definitely stateful, and scaling it horizontally is almost impossible.
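
To make this concrete, here is a minimal sketch of such a stateful login in Python, using Flask (the framework, routes, and credential check are illustrative assumptions, not part of the original example). The session lives in the process's own memory, so only the server that handled the login can serve later requests.

```python
# stateful_app.py -- illustrative sketch of a stateful login.
# The "logged in" state lives in this process's memory, so a request
# routed to a different server instance will not find it.
from flask import Flask, request, jsonify

app = Flask(__name__)

# In-memory session store: this dict exists only on THIS server.
SESSIONS = {}

@app.route("/login", methods=["POST"])
def login():
    username = request.json.get("username")
    password = request.json.get("password")
    if password == "secret":  # stand-in for a real credential check
        SESSIONS[username] = {"logged_in": True}
        return jsonify({"status": "ok"})
    return jsonify({"status": "invalid credentials"}), 401

@app.route("/profile")
def profile():
    username = request.args.get("username")
    # Fails if the login request went to another server, because that
    # server's SESSIONS dict is the one holding the state.
    if SESSIONS.get(username, {}).get("logged_in"):
        return jsonify({"profile": f"data for {username}"})
    return jsonify({"error": "not logged in on this server"}), 401
```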

An example of a stateless application

A stateless application is an app that doesn’t keep persistent state between requests. The session, the series of interactions spanning multiple requests, is not stored in the application’s memory.

A stateless application doesn’t save data from one session for use in the next; each session behaves as if it were running for the first time.

If we analyze the previous example from a stateless perspective, it doesn’t matter whether server1 or server2 handles the request, since neither stores any state. After login, the server sends back an ID token, which contains the session information (i.e. the authentication details) that enables the user to communicate with the application and the database.
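
By contrast, a minimal sketch of the stateless variant (again Python with Flask; the HMAC-signed token here is a deliberately simplified stand-in for a real ID token) shows why it no longer matters which server answers: any instance that holds the shared secret can validate the token, and nothing is kept in memory between requests.

```python
# stateless_app.py -- illustrative sketch of stateless, token-based auth.
# No session is kept in memory; any server holding the same SECRET_KEY
# can validate the token, so server1 and server2 are interchangeable.
import hashlib
import hmac
from flask import Flask, request, jsonify

app = Flask(__name__)
SECRET_KEY = b"shared-secret"  # in practice, injected via configuration


def sign(username: str) -> str:
    """Return a token carrying the username plus an HMAC signature."""
    signature = hmac.new(SECRET_KEY, username.encode(), hashlib.sha256).hexdigest()
    return f"{username}:{signature}"


def verify(token: str):
    """Return the username if the token's signature checks out, else None."""
    username, _, signature = token.partition(":")
    expected = hmac.new(SECRET_KEY, username.encode(), hashlib.sha256).hexdigest()
    return username if hmac.compare_digest(signature, expected) else None


@app.route("/login", methods=["POST"])
def login():
    username = request.json.get("username")
    if request.json.get("password") == "secret":  # stand-in credential check
        return jsonify({"token": sign(username)})
    return jsonify({"error": "invalid credentials"}), 401


@app.route("/profile")
def profile():
    username = verify(request.headers.get("Authorization", ""))
    if username:
        return jsonify({"profile": f"data for {username}"})
    return jsonify({"error": "invalid token"}), 401
```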

It’s true that stateless applications depend on external storage, since the data lives somewhere else rather than on the server’s disk or in its memory. But they scale horizontally much more easily than stateful applications, because the infrastructure allows adding as many compute resources as needed.

Now that we know the differences between stateful and stateless applications, let’s see what makes an application stateless.

#What makes an application stateless?

Stateless applications are composed of many individual microservices that can be scaled easily, each with a specific, well-defined task. A good example is the authentication service: its job is to accept credentials, verify them, and return an ID token that is then used to validate requests.

With stateless apps, you can focus on the application rather than the infrastructure it runs on, because the server is not your responsibility; it’s managed by cloud vendors. A stateless application also helps reduce costs, or keep them under control, because you pay only for the resources you use rather than keeping machines ready to scale at all times.

Another feature of stateless applications is that the infrastructure is portable and decoupled, which reduces cost and complexity and significantly increases business productivity, development velocity, and operational efficiency.

Furthermore, horizontal and dynamic scaling are built-in features, which help keep the app running regardless of how much traffic it gets. These two properties address the biggest challenges companies need to overcome in order to achieve performance and security in tandem.

#Vertical scaling vs Horizontal scaling: Differences

Now that we know the differences between a stateful and a stateless application, and also what makes an application truly stateless, it’s time to go deeper and look at the types of scaling these applications use. We’ll analyze the key differences between vertical scaling and horizontal scaling in terms of efficiency and long-term sustainability, to discover which one is better suited to your scaling needs.

Vertical scaling

Let’s assume our server receives 100 requests. We might not need a big server for that, but imagine getting 10,000 or 1 million requests; keeping the application from crashing becomes a real challenge.

The fastest way to fix this is to upgrade the server with a faster CPU or a larger hard drive. This idea of continuously scaling up and getting bigger, beefier servers is called vertical scaling.

The problem with vertical scaling is that it eventually hits a ceiling: no single server is big enough to support millions and millions of requests per second.

Horizontal scaling

Unlike vertical scaling, horizontal scaling means increasing resources by adding more machines to support the application. The idea is to connect multiple servers and make them work as one, or to set up a cluster that supports this.

Horizontal scaling can also mean growing the number of nodes in the cluster, which is exactly our approach when we build cloud-native applications so that they can scale easily. Horizontal scaling enables cost optimization and also improves the performance of your application. It is sustainable in the long term and has practically no scaling limit in terms of resource allocation.

#How to scale an application?

Scaling is the ability of a system to manage an increased load of traffic without sacrificing performance. To understand the real value of scalable infrastructure, we need to dive deeper and analyze three different scaling models: the stateful model, the hybrid model, and the stateless model, to identify the one that’s right for you and your company’s needs.

For a real understanding of these models, we will analyze them from the standpoint of WordPress.

#The Stateful Scaling Model

The stateful scaling model is a coupled model in which the state and the business logic need to live and run together to fulfill their purpose. All the components, the database, the uploads, and the code, run on the same server or in the same pod.

Figure 1 – The Stateful Scaling Model

A good example of the stateful scaling model is the traditional WordPress deployment: a stateful application where the business logic is tightly coupled to the state. Early on, applications were deployed directly on physical servers, which made deployments very stable, with a long maintenance life cycle. Unfortunately, resource-allocation problems eventually show up, and that makes traditional deployments difficult to maintain.

In this case, the term “state” refers to any static or dynamic properties and components that, together with the business logic, give life to the application. Strictly in WordPress terms, the state of a site consists of the database (users, posts), the uploads, and the code.

The business logic is the part of the program that encodes the real-world business rules and manages communication between the end-user interface and the database. The state is always the source of truth for the business logic: if the state is inconsistent, not scalable, or difficult to maintain, so is the application’s business logic.

With these two components grouped together, deployment becomes much easier. The problem is that it makes horizontal scaling impossible: trouble starts when the database, the code, or the uploads grow, or when an unpredicted traffic spike appears. To avoid crashes and downtime, the application must scale vertically, by upgrading the server with a faster CPU or a larger hard drive, which leads to increased costs.

CONCLUSION: The stateful scaling model is not long-term sustainable because sooner or later it reaches its scaling limits.

#The Hybrid Scaling Model

A hybrid scaling model is one where the uploads and the code sit together, while the database is decoupled and scaled separately.

Figure 2 – The Hybrid Scaling Model

From the perspective of WordPress infrastructure, the problem with this model arises when someone modifies a file or adds a line of code in one of the application instances. How does that change get propagated to the rest of the instances? If the code is managed via git, with a plugin similar to gitium, the problem is solved. But for media files, you will need a distributed file system like NFS or GlusterFS to handle the uploads.

In a Kubernetes environment, you can use Rook (rook.io), which offers an abstraction over multiple distributed storage systems that can be integrated into your application.

Still, this is not an elegant solution, and it has performance problems due to the synchronization overhead.

Regarding database scaling, there are several variants and topologies, depending on budget and requirements. One method is primary-replica, where the application’s writes go to the primary and its reads go to the replica. Another is multi-master, where reads and writes can take place on all servers, or can be split between them.

In Kubernetes, database scaling problems can be solved with our MySQL operator, which handles the resources needed to deploy and run a MySQL cluster.

If your database receives too many requests, you can replicate it into one or more slave databases: every time data is inserted into the master, it gets replicated to the slaves as well. Instead of sending all the reads to the master, you can now send them to the slaves; the master handles the writes while the slaves handle all the reads.
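
This read/write split reduces to a small piece of routing logic. The sketch below is only an illustration, with placeholder connection objects standing in for a real MySQL driver and real servers.

```python
# Illustrative read/write splitting between a master and its replicas.
# `primary` and `replicas` are placeholder connection objects; only the
# routing decision is the point here.
import random


class ReadWriteRouter:
    def __init__(self, primary, replicas):
        self.primary = primary      # receives all writes (the master)
        self.replicas = replicas    # receive reads, replicated from the master

    def execute(self, query, params=()):
        if query.lstrip().upper().startswith(("INSERT", "UPDATE", "DELETE")):
            conn = self.primary                  # writes go to the master
        else:
            conn = random.choice(self.replicas)  # reads spread across replicas
        return conn.execute(query, params)
```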

Figure 3 – Database Scaling

Another way of scaling databases horizontally is to use a database sharding system such as Vitess.

Database sharding means splitting the database tables into multiple partitions, called logical shards.

Figure 4 – Database Sharding

These logical shards are then distributed across separate database nodes. The main benefit of sharding is that it enables horizontal scaling and makes the application more resilient. Going with a sharding approach can therefore improve application performance and greatly reduce the impact of an outage, since a failure affects only one shard rather than the whole database.
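
At its core, sharding relies on a routing function that maps a row’s key to a logical shard. The sketch below is a hash-based illustration of that idea; the shard names and the simple modulo scheme are assumptions for the example, not how Vitess works internally.

```python
# Illustrative hash-based sharding: the user ID decides which logical
# shard (and therefore which database node) holds the row.
import hashlib


def shard_for(user_id: str, num_shards: int = 4) -> int:
    """Deterministically map a user ID to a logical shard index."""
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return int(digest, 16) % num_shards


# Placeholder mapping from logical shards to database nodes.
SHARDS = {0: "db-node-0", 1: "db-node-1", 2: "db-node-2", 3: "db-node-3"}

shard = shard_for("user-42")
print(f"user-42 lives on shard {shard} ({SHARDS[shard]})")
```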

CONCLUSION: The hybrid scaling model is a mid-term, trade-off solution, as it doesn’t completely fix scalability issues that may arise.

#The Stateless Scaling Model

But what if all the components (code, uploads, and the database) ran separately and were managed independently of the underlying infrastructure? This is what the stateless scaling model proposes.

Figure 5 – The Stateless Scaling Model

The stateless scaling model is loosely coupled, which makes horizontal scaling easier to achieve: the code is not hard-wired to any of the infrastructure components. The application can scale dynamically and horizontally, on demand, and embrace the concepts of immutable infrastructure. The easiest way to put this into practice is to use services offered by cloud vendors such as Google Cloud Platform, which provision and run the servers while also managing resource allocation.

Storing media files may prove difficult as well, but cloud providers offer file storage and database solutions (see Google Cloud Storage), which means storage synchronization issues are handled by them and we are left with just scaling the code.
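
For instance, offloading WordPress media uploads to object storage comes down to writing files to a bucket instead of the local uploads directory. The sketch below uses the google-cloud-storage Python client; the bucket name and paths are made up for the example.

```python
# Illustrative upload of a media file to Google Cloud Storage instead of
# the local uploads/ directory; bucket name and paths are placeholders.
from google.cloud import storage


def upload_media(local_path: str, destination: str, bucket_name: str = "my-wp-uploads") -> str:
    client = storage.Client()          # uses ambient GCP credentials
    bucket = client.bucket(bucket_name)
    blob = bucket.blob(destination)    # e.g. "2019/06/image.jpg"
    blob.upload_from_filename(local_path)
    return blob.public_url


upload_media("image.jpg", "2019/06/image.jpg")
```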

Modern technologies like Kubernetes provide native scaling support through the Horizontal Pod Autoscaler, which increases and decreases the number of replicas of an application, allowing for horizontal scaling.
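
Roughly speaking, the Horizontal Pod Autoscaler derives the desired replica count from the ratio between the observed metric and its target. The snippet below is a simplified rendering of that arithmetic, not the autoscaler’s full algorithm.

```python
# Simplified form of the Horizontal Pod Autoscaler's scaling rule:
# desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)
import math


def desired_replicas(current_replicas: int, current_metric: float, target_metric: float) -> int:
    return math.ceil(current_replicas * current_metric / target_metric)


# 4 pods averaging 90% CPU against a 60% target scale out to 6 pods.
print(desired_replicas(4, 90, 60))  # -> 6
```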

CONCLUSION: If you’re building cloud-native apps designed to run in the elastic, distributed environments that cloud-native platforms require, then the stateless scaling model is the one to use. When building a stateless app, you may want to consider things like virtualization and containerization, automation and orchestration, and infrastructure and microservices architecture.

#Final Thoughts 

The stateful and stateless scaling models are the foundation on which modern applications are built, so understanding them is essential.

Adopting cloud-native technology can be a big challenge for developers, because switching from a stateful to a stateless model implies many changes. Stateless applications require a totally different architecture compared to traditional applications, but we think it’s worth taking the risk and investing in the technologies that will power tomorrow’s next-gen apps.
