Windows Azure has a load balancer that you can use for free. It is used mostly to load balance port 80 (aka web traffic) across a group of identically configured web servers, although it can be used to load balance any TCP/UDP port. It is not a sticky load balancer.
Most load balancers in the corporate world have a setting called ‘sticky sessions.’ This feature will route a user back to the same web server over and over again. This is done with performance in mind. People think that if the same user goes to the same server, the application will run faster because their data is probably cached on that particular server.
I think this is a fallacy in most cases, and that sticky sessions in a load balancer are a bad smell hinting at some rotting architecture. When I see sticky sessions somewhere, it is usually because one of the following is true:

1. They are abusing session state
To me this is a big crime in and of itself. You should not be using session state, and if you are, it must be kept to a minimum. Session state turns out to be a huge crutch that helps people avoid facing the fact that the web is truly stateless. You will see this most often with people who don't understand the web at a low level, and who probably spent the early part of their careers architecting client/server applications. Or they are just lazy.
The second half of this first problem is that they aren't sharing said evil session state across their servers. For crying out loud, people! If you insist on having session state, please, for the love of binary, share that state amongst your servers.
This sharing is easy to do with the variety of providers available to you, most of them 'out of the box.' Sharing state means that ASP.NET will read/write the state to a central location, instead of to in-process memory on the box. For example, you can keep it in a SQL Server box, or in out-of-process memory on a dedicated state server. You can share it in Azure storage, or even AppFabric. You can share it with a box. You can share it with a Fox. Ok, not a fox. All by changing a line of configuration in your application.
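As a rough sketch of what that "line of configuration" looks like: the `sessionState` element in web.config picks the provider. The connection strings and server names below are placeholders, not real infrastructure.

```xml
<!-- web.config sketch: move session state out of process so any server
     in the farm can read it. Server names here are hypothetical. -->
<configuration>
  <system.web>
    <!-- SQL Server-backed session state -->
    <sessionState mode="SQLServer"
                  sqlConnectionString="Data Source=mySessionDb;Integrated Security=True"
                  timeout="20" />
    <!-- Or a dedicated out-of-process state server:
    <sessionState mode="StateServer"
                  stateConnectionString="tcpip=stateserver:42424" />
    -->
  </system.web>
</configuration>
```

Swap `mode` and the matching connection string, and the rest of your code is none the wiser.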
Avoid bloated session state, and share with your friends.
If you are sharing your state amongst the web server group, then you don’t need a sticky load balancer, since no matter which server the user goes to they can get their state.
2. They are trying to shoehorn a stateful application into the web
The web isn't stateful. Embrace it and move on. Every time a browser makes a connection to a web server, it's starting from scratch. In a natural call (without any crutches) all the server has to go on is the request metadata (browser type, IP address, etc.) and the contents of the requested URL. That's it.
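To make that concrete, here is roughly everything a bare request hands the server (the host and path are made up for illustration); the client's IP address comes along for free from the TCP connection, and nothing else does:

```http
GET /products/42?sort=price HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/5.0 (Windows NT 6.1; Trident/5.0)
Accept: text/html
```

No memory of the last request, no idea who you are. Anything beyond this is a crutch we bolted on.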
Over the years, especially early on, our industry has adopted several crutches to help get us around this statelessness that bothers us so. All of them have led to horror when overused, so a light touch goes a long way. Those crutches are namely session state (see above) and cookies.
Cookies are in some ways the same as session state. They save state for the application between calls. But instead of storing the state on the server, it is stored on the client. Many people wave this off. "Oh, it's not state, just some bits of data."
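It is state, though, just round-tripped through the client. A minimal sketch with Python's standard library (the cookie name and value are made up):

```python
from http.cookies import SimpleCookie

# Server side: state is pushed to the client in a Set-Cookie header.
cookie = SimpleCookie()
cookie["cart_id"] = "A1B2C3"  # hypothetical "not state, just some bits of data"
set_cookie_header = cookie["cart_id"].OutputString()
print(set_cookie_header)  # cart_id=A1B2C3

# Client side: the browser echoes it back on every subsequent request,
# which is exactly how state sneaks into a stateless protocol.
echoed = SimpleCookie()
echoed.load("cart_id=A1B2C3")
print(echoed["cart_id"].value)  # A1B2C3
```

Whether the bits live in server memory or in a header, the application is carrying state between calls either way.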
The Real Problem
The real problem with sticky sessions is that they lead to fragility in your web cluster/farm/group/collective. If user A is always going to server X, and all of their evil state and data is on that server, then you have introduced a single point of failure for that user. All of our infrastructure efforts are aimed at high availability and reliability. The sticky load balancer throws its hat in our faces and says, with an outrageous accent, "I don't think so!"
All of a sudden, all of our hard work in building our web farm is thrown out the window because of some lazy architecture decisions. Your group of super servers is reduced to a simple group of individual servers.
When that one server does go offline (and it will, because failure happens; embrace it) you will lose all of those users. They will lose their state and data. They will hit that button on their screen, and when the browser round trips and refreshes (because if you're doing this, you likely aren't doing SPA) the browser will return an error. That person won't get their Star Trek pizza cutter, will have a bad experience, and their day will spiral out of control. They will go home and likely kick their neighbor's dog. All because you used a sticky load balancer. Note: please do not kick anyone's dog.
A secondary issue is load leveling. A great use of a load balancer that many people aren't aware of &lt;sarcasm&gt;is that it can level the load&lt;/sarcasm&gt; across several servers. In a sticky LB scenario, it is easy for server X to get really busy while server N sits lonely, idle, and unused. This always leads to resentment on the part of the server doing all the work, so try to avoid this.
If you were running your load balancer in the normal way, your users and their requests would flood evenly (or fairly evenly, anyway) across all of those web servers you paid for, giving the user a great experience. A server could go down, and the user's work would be picked up by any of the other servers on the next trip. Boom goes the dynamite.
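The load-leveling point is easy to see with a toy simulation. The user names and request counts below are invented; "sticky" is modeled as hashing each user to a fixed server, round-robin as dealing requests out in turn:

```python
from collections import Counter
from itertools import cycle

servers = ["server-x", "server-y", "server-z"]

# Sticky: every request from a given user lands on the same server, forever.
def sticky(user_id: str) -> str:
    return servers[hash(user_id) % len(servers)]

# Non-sticky round-robin: requests spread evenly, regardless of who sent them.
round_robin = cycle(servers)

# A few chatty users; one of them dominates the traffic.
requests = ["alice"] * 60 + ["bob"] * 30 + ["carol"] * 10

sticky_load = Counter(sticky(u) for u in requests)
rr_load = Counter(next(round_robin) for _ in requests)

print("sticky:     ", dict(sticky_load))  # lopsided: alice's server takes 60+ requests
print("round-robin:", dict(rr_load))      # near-even three-way split
```

With sticky routing, one server inherits alice's entire workload no matter how idle its neighbors are; round-robin splits the same 100 requests roughly 34/33/33.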
Do you really hate state?
I didn’t mean for this post to turn into a hate piece on state (session or cookie). I will soon get to why I wrote this post, but I wanted to get off my chest how I have seen state be abused before. Like I mentioned above, I don’t think you are evil for using it. An experienced and critically thinking developer who knows when to break the rules is allowed to break the rules.
So, what does this have to do with Windows Azure?
Good question, my good man! In Windows Azure, the load balancer we started talking about is a non-sticky load balancer. Yes, if you start up the local simulator and try some tests, you won't see that happening. If you start up a group of web roles in Cloud Services and try hitting F5 really fast in IE, you won't see balancing either.
That’s because there is some intelligence in the balancer, and because you can’t cause enough traffic with your own keyboard. Nor my super awesome-backlit-mechanical-Cherry-MX-red-switch-keyboard-that-the-neighbors-can-hear-when-I-am-typing-keyboard.
But, some people NEED a sticky balancer. They just NEED it. Either they are migrating something that is built that way, and they just can’t make the investment to fix it yet, or… that’s the only reason I can come up with.
So, here is your escape hatch: IIS Application Request Routing. ARR adds a layer of load balancing in software at the server level, instead of lower down the network stack. This lets it use some intelligence about what the software is doing.
IIS ARR can be used on-premises or in Windows Azure. Basically, you front your application with servers running ARR. Officially, ARR is:
“IIS Application Request Routing (ARR) 2.5 enables Web server administrators, hosting providers, and Content Delivery Networks (CDNs) to increase Web application scalability and reliability through rule-based routing, client and host name affinity, load balancing of HTTP server requests, and distributed disk caching. With ARR, administrators can optimize resource utilization for application servers to reduce management costs for Web server farms and shared hosting environments.”
So, if you need sticky load balancing, or maybe some smarter session routing in your application, either on-premises, or in the cloud, check out ARR. You can read all about it at the IIS website.
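For a feel of what the "client affinity" part looks like, here is a rough sketch of a web farm definition in the ARR server's applicationHost.config. The farm and server names are placeholders, and the exact attribute set can vary between ARR versions, so treat this as illustrative rather than copy-paste ready:

```xml
<!-- applicationHost.config sketch: a web farm with sticky, cookie-based
     client affinity. Server addresses are hypothetical. -->
<webFarms>
  <webFarm name="myFarm" enabled="true">
    <server address="web1.internal" enabled="true" />
    <server address="web2.internal" enabled="true" />
    <applicationRequestRouting>
      <loadBalancing algorithm="WeightedRoundRobin" />
      <!-- This is the "sticky" part: ARR hands the client a cookie and
           routes later requests back to the same server. -->
      <affinity useCookie="true" cookieName="ARRAffinity" />
    </applicationRequestRouting>
  </webFarm>
</webFarms>
```

Because the affinity cookie is issued and read at the application layer, ARR can be smarter about routing than a plain TCP-level balancer ever could.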
ARR also includes some great caching features. The ARR cache would be a good alternative to AppFabric Cache if you don't want to run that.