Wuaki.tv Tech Blog

Caching at wuaki.tv

Rhommel Lamas

Wuaki.tv is more than two years old now, and during this time it has changed incredibly fast. One of the things we enjoy about software development is the number of different ways you can solve a given problem: you can make a solution as complex or as simple as you want at any given time.

During the company's first year, wuaki.tv was a monolithic application running on Ruby on Rails and MySQL. We were not using any caching beyond Rails' native caching, and our Apdex score hovered around 0.7 - 0.8 depending on the day and time. During 2012 we focused all of our efforts on performance: we implemented lots of new features and made lots of changes, and caching was one of the main improvements, helping us move from a 0.7 Apdex score to 0.98 - 1. To achieve this we evaluated the most widely used and recommended caching tools for our application:

  • Varnish
  • Memcached
  • Redis

After this period of research and evaluation, we agreed to use Redis as our caching datastore, mainly because we had already used Redis in our queue system, but also because it promised a straightforward implementation. Both Dev and Ops agreed that this was the best decision in terms of performance and implementation effort.
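
To make this concrete, the sketch below shows roughly what read-through caching against Redis looks like in a Rails app. The store configuration assumes the redis-rails gem, and the URL and key names are hypothetical:

    # config/environments/production.rb -- hypothetical Redis URL,
    # assuming the redis-rails gem is bundled.
    config.cache_store = :redis_store, "redis://10.0.0.10:6379/0/cache"

    # Anywhere in the app: Rails.cache.fetch is read-through -- on a
    # cache miss the block runs and its result is stored in Redis.
    def movie_catalogue
      Rails.cache.fetch("movies/catalogue", expires_in: 1.hour) do
        Movie.order(:title).to_a
      end
    end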

After deciding which datastore we were going to use, we had to design the stack that would handle all the traffic generated by our application layers (five at the moment).

The very first problem we faced in Operations was scaling Redis. We had to make sure that our caching stack would be available 24/7, even if a whole Amazon availability zone disappeared overnight, so we needed our stack to be redundant and scalable. We found that Redis could be configured in four different ways (a minimal master-slave sketch follows the list):

  • Standalone
  • Master-Slave
  • Redis Cluster
  • Sentinel
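
Of these, master-slave replication is the piece we could rely on. As a quick illustration (node addresses are hypothetical), wiring two nodes together with the redis gem is a one-liner, equivalent to running SLAVEOF on the second node:

    require "redis"

    master = Redis.new(host: "10.0.0.10", port: 6379)
    slave  = Redis.new(host: "10.0.0.11", port: 6379)

    # Equivalent to running SLAVEOF 10.0.0.10 6379 on the second node;
    # from here on it replicates the master and serves reads.
    slave.slaveof("10.0.0.10", 6379)

    master.set("hello", "world")
    sleep 0.1                # give replication a moment to catch up
    puts slave.get("hello")  # => "world"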

Redis Sentinel was probably the obvious choice for us, but both Sentinel and Redis Cluster are currently works in progress and NOT recommended for production environments. Besides that issue, we noticed that Redis performance degrades as the number of connections to a node increases, so we had to find another way around this.
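
The degradation is easy to keep an eye on, since Redis reports its current connection count through INFO; a minimal check with the redis gem (the node address is hypothetical):

    require "redis"

    redis = Redis.new(host: "10.0.0.10", port: 6379)

    # INFO exposes server stats as a hash; connected_clients is the
    # number of client connections the node is serving right now.
    puts redis.info["connected_clients"]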

In December 2012, @antirez, the creator of Redis, published a post on his blog about Twemproxy. Twemproxy is an open-source project maintained by @manju from Twitter that works as a proxy with failover, not only for Redis but for Memcached as well. After sending a couple of tweets to @antirez and @manju asking for information about failover and high availability, we came to the conclusion that we should give Twemproxy a try. At the moment we are using a simple configuration built on top of Debian Squeeze on Amazon m1.large instances.
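
Because Twemproxy speaks the Redis protocol, no client changes are needed; the application simply connects to the proxy instead of a Redis node, and the proxy shards keys across the pool behind it. A hypothetical sketch (22121 is the listen port used in Twemproxy's own examples):

    require "redis"

    # Point the client at the local Twemproxy rather than at Redis;
    # the proxy distributes keys across the Redis pool behind it.
    cache = Redis.new(host: "127.0.0.1", port: 22121)

    cache.set("session:42", "some-token")
    puts cache.get("session:42")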

Building the Stack

At the moment our caching stack looks like the image below; it is capable of handling more than 10 million requests per day without the need to trigger any autoscaling of either the Redis or Twemproxy instances.

[Figure: the Wuaki.tv caching stack]

The idea behind this setup is to ensure that each of our Redis instances only has one incoming connection from each of our Twemproxy servers, so that performance on each Redis instance is not affected by our Unicorn worker connections, which number around 230 workers at minimum.
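
In practice this means each Unicorn worker opens a single connection to its local Twemproxy after forking, and the proxy multiplexes all of those onto one upstream connection per Redis instance. A sketch of the relevant Unicorn configuration (the global variable and proxy address are hypothetical):

    # config/unicorn.rb
    require "redis"

    worker_processes 230

    after_fork do |server, worker|
      # One connection per forked worker, pointed at the local Twemproxy.
      # The proxy multiplexes these onto a single upstream connection per
      # Redis instance, so worker count no longer drives the Redis
      # connection count.
      $redis = Redis.new(host: "127.0.0.1", port: 22121)
    end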