The Beauty of Redis

If I had to name a single piece of software that has impressed me lately, it would undoubtedly be Redis. Not only is this key/value store on steroids blazing fast, but it is also very simple, and incredibly powerful.


How simple, you ask?

redis> SET mykey "Hello"
redis> GET mykey

That simple.

It’s also a breeze to install and get up and running. The suggested way of installing Redis isn’t to fetch some pre-compiled package for your Linux distribution. It is to download the source code (a tiny 655K tarball) and build it yourself! This can be a real crap shoot for most software, but since Redis only depends on a working GCC compiler and libc, it is not an issue at all. It just works.

After it is installed, you can start it by simply running

redis> redis-server

at the command line. The quickstart guide also has some easy-to-follow instructions on how to start Redis automatically at boot as a daemon.


Redis is a very powerful piece of software. This power, I believe, is a direct result of its simplicity.

Redis is so much more than your run-of-the-mill key/value store. In fact, calling it a key/value store would be like calling the Lamborghini Aventador a car. It would be far more accurate to call Redis a key/data structure store, because Redis natively supports hashes, lists, sets, and sorted sets as well. These data structures are all first class citizens in the Redis world. Redis provides a host of commands for directly manipulating the data in these data structures, covering pretty much any operation you would want to perform on a hash, list, set, or sorted set. Therefore, it is super simple to perform tasks like incrementing the value of a key in a hash by 1, pushing multiple values onto the end of a list, trimming a list to a specified range, performing a union between two sets, or even returning a range of members in a sorted set, by score, with scores ordered from high to low.
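To make that concrete, each of those operations maps to a single command. Here is a quick sketch in the same redis> style as above (the key names are illustrative, not from any real application):

```
redis> HINCRBY myhash counter 1
redis> RPUSH mylist "a" "b" "c"
redis> LTRIM mylist 0 99
redis> SUNION set1 set2
redis> ZREVRANGEBYSCORE myzset +inf -inf
```

One command per task, no client-side read-modify-write cycle required.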

This native support for data structures, combined with Redis’ incredible performance, makes it an excellent complement to a relational database. Every once in a while we’ll run into an issue where, despite our best efforts, our relational database simply isn’t cutting the mustard performance wise for a certain task. Time and time again, we’ve successfully been able to delegate these tasks to Redis.

Here are some examples of what we are currently using Redis for at Signal:

  • Distributed locking. The SETNX command (set value of key if key does not exist) can be used as a locking primitive, and we use it to ensure that certain tasks are executed sequentially in our distributed system.
  • Persistent counters. Redis, unlike memcache, can persist data to disk. This is important when dealing with counters or other values that can’t easily be pulled from another source, like the relational database.
  • Reducing load on the relational database. Creative use of Redis and its data structures can help with operations that may be expensive for a relational database to handle on its own.
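As a rough sketch of the locking primitive mentioned above (the key name is illustrative, and a production lock also needs a timeout so a crashed process can’t hold it forever):

```
redis> SETNX lock:nightly-report 1
(integer) 1
redis> SETNX lock:nightly-report 1
(integer) 0
redis> DEL lock:nightly-report
(integer) 1
```

The first SETNX returns 1, meaning the lock was acquired and the task can run. A second SETNX on the same key returns 0, so any other process knows to skip or retry. DEL releases the lock when the task is finished.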

When Not To Use Redis

Redis stores everything in RAM. That’s one of the reasons why it is so fast. However, it is something you should keep in mind before deciding to store large amounts of data in Redis.

Redis is not a relational database. While it is certainly possible to store the keys of data as the values of other data, there is nothing to ensure the integrity of this data (what a foreign key would do in a relational database). There is also no way to search for data other than by key. Again, while it is possible to build and maintain your own indexes, Redis will not do this for you. So, if you’re looking to store relational data, you should probably stick with a relational database.
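For example, a hand-maintained index over users by city might look like this (the key names are illustrative):

```
redis> HMSET user:1 name "Fred" city "Chicago"
redis> SADD city:Chicago:users 1
redis> SMEMBERS city:Chicago:users
```

The catch is that your application must keep city:Chicago:users up to date itself whenever a user’s city changes. Redis will not do it for you.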


It is very clear that the Redis team has put a ton of effort into making sure that Redis remains simple, and they have done an amazing job.

It’s worth pointing out that Redis has some of the best online documentation that I have ever seen. All commands are easy to find, clearly documented, with examples and common patterns of usage. AND the examples are interactive! Not sure what the result of a certain command will be? No need to install Redis and fire it up locally. You can simply try it right there in the browser.

With client libraries in virtually every programming language, there is no reason not to give it a try. You’ll be glad you did.

Want to Build a Better Web API? Build a Client Library!

A solid web API can be an important thing to have. Not only is it great to give users direct access to their data, but exposing data and operations via a web API enables your users to help themselves when it comes to building functionality that doesn’t really make sense in the application itself (or functionality that you never really thought of). It’s also a great way for users to get more familiar with your service.

However, if your API sucks, you can rest assured that nobody will touch it. We’ve all had to deal with crappy web APIs, the ones that make you jump through hoops in order to perform a task that should be dead simple to do. Web APIs should make the simple tasks easy, and the hard tasks possible. To add to the challenge, APIs are notoriously difficult to change. Even with a solid versioning scheme, it is often a real chore to get your users to stop using the deprecated API in favor of the new version. So, it’s important to do a good job the first time.

When building a web API, the tasks that one might want to perform can sometimes be difficult to see when you’re surrounded by JSON, XML, GETs, POSTs, PUTs, DELETEs, and HTTP status codes. While it can be easy to see what single actions you would want to expose, seeing how those actions may interact with each other can be much more difficult. Sometimes you need to take a step back, away from the land of HTTP, in order to see your API as another programmer would see it.

Building a client library that wraps your web API is a great way to do this. It’s relatively easy to imagine how your requests and responses could be represented as objects. The largest benefit of this exercise is to take it a step further, and give the user of your client library the ability to determine what they should do next. Simply knowing if an API call succeeded or failed is usually not enough. Users of your client library need to be able to determine why the request failed, and understand what they can do about it. This extends well beyond the lifecycle of a single HTTP request and response.

Communicating errors

There are several different ways to communicate errors to the user. The proper use of HTTP status codes is one such way. The 4xx class of status codes are specifically intended to be used to communicate that something was wrong with the client’s request. If your API methods are simple, and specific in their purpose, you may be able to rely on HTTP status codes alone to communicate the various causes of failure to the client.

If your API method is complex, and could result in many different failure scenarios, you should first try to break it down into smaller, more specific API methods :) If that can’t be done, then another option is to return some easily parseable text in the response body (JSON or XML) that includes an error code that identifies the specific failure scenario. The response body could be as simple as:

{ "error_code" : 123 }

You could also provide a description of the error in the response. This helps users who are getting started with the API, saving them from having to constantly refer to your API’s documentation every time they get an error:

{
  "error_code" : 123,
  "error_message" : "A widget with that name already exists"
}

The important thing is that all failure cases be easily identifiable via a specific, documented code (HTTP status code or custom error code). Error messages should be seen as purely supplemental information. At no point should your users have to parse the error message to determine what happened, or what they should do next.
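Here is a minimal sketch of what this looks like from the client library’s side. The exception names, the error code 123, and the ERROR_CLASSES mapping are all hypothetical; the point is that the client dispatches on the documented error_code, never on the message text:

```ruby
require "json"

# Hypothetical exception hierarchy for a client library.
class ApiError < StandardError; end
class DuplicateWidgetError < ApiError; end

# Map documented error codes to specific, rescuable exceptions.
ERROR_CLASSES = {
  123 => DuplicateWidgetError
}.freeze

# Translate a parsed error response body into a typed exception,
# falling back to the generic ApiError for unknown codes.
def raise_api_error(body)
  parsed = JSON.parse(body)
  klass  = ERROR_CLASSES.fetch(parsed["error_code"], ApiError)
  raise klass, parsed["error_message"].to_s
end

begin
  raise_api_error('{ "error_code" : 123, "error_message" : "A widget with that name already exists" }')
rescue DuplicateWidgetError => e
  # The caller can react to this specific failure, e.g. prompt for a new name.
  puts "Widget name taken: #{e.message}"
end
```

Because the dispatch key is the code, the server is free to reword error messages without breaking a single client.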

Isn’t this the same as “dogfooding”?

Not exactly. Dogfooding simply involves using what you have created. You could easily dogfood your web API by firing HTTP requests at it using a simple HTTP client library. It is not until you need to take different actions based on different responses that you really start to see if you are properly communicating the result of the request. Building a client helps with this because it forces you to think about the different results and error scenarios in order to decide how your client should handle them. Which failures should raise exceptions? What sort of exception should be raised? How should non-exceptional failures be communicated to the caller?

The next step in this process would be to build an application that uses your client library. That step could help identify issues with your client library, just like building the client library helps identify issues with your web API.

The client library

Oh, and don’t forget. At the end of the day, you’ll end up with a better designed web API, AND a great client library that your users can use to interact with your system. Not a bad deal!

Professionals Act Professional

I’m sick of it.

Every week (at least it feels that way) some new drama rears its ugly head in the ruby community. Petty arguments via twitter, one ranting blog post after another, people mocking ideas they consider less than ideal, and even some personal attacks thrown in the mix. There’s so much drama in fact that there is now a site out there that lists it all for the rest of the world to see.

Seriously? Are we all still in junior high?

Just think for a minute about all of the time and energy we are wasting here. Instead of igniting these flame wars, from which nothing productive is ever achieved, we could be growing as a community. We could be bringing up the next generation of software developers. We could be positively encouraging others to build better software. We could be sharing our experiences with others. We could be leading by example.

For a community of people complaining that they’re not treated like professionals, we sure don’t act very professional. If this is the way we behave, can we honestly expect people to treat us with the respect that they treat doctors, accountants, teachers, and members of other professions?

If you want to be treated like a professional, it’s best to start acting like one first.

Take the high road for once. The view is much nicer.

ReloadablePath in Rails 3

A core feature of the Signal application is support for custom promotion web forms. Custom promotion web forms allow our customers to create custom web pages where their customers can interact with their promotions via the web. Our customers currently use these web forms for sweepstakes entry, email/SMS subscription list opt-ins, online polls, and more.

One of the best things about custom promotion web forms is that they allow our customers to completely control what the web page looks like. The Signal application allows for the creation of a web form theme that can be used as a template for a given promotion. The specific promotion can then customize the web page further by specifying the copy that appears on the web page, the data attributes that should be collected, and more.

The web form themes are managed by the Signal application, and are saved to disk as a view (an ERB template) when created or updated. Our customers can edit these themes at any time. When a theme is updated, we need to tell Rails to clear the cache for these specific views, so our customer will see their changes the next time they visit a web page that uses the updated theme.

In Rails 2, this was done using ReloadablePath.

class SomeController < ApplicationController
  # The theme directory is illustrative; point this at wherever your
  # themes are saved to disk.
  prepend_view_path ActionView::ReloadableTemplate::ReloadablePath.new("app/views/themes")
end

However, ReloadablePath is no more in Rails 3. So, we needed to find a new solution to this problem.

Rails 3 introduced the concept of a Resolver, which is responsible for finding, loading, and caching views. Rails 3 also comes with a FileSystemResolver that the framework uses to find and load view templates that are stored on the file system.

FileSystemResolver is very close to what we want. However, we need the ability to clear the view cache whenever one of the web form themes has been updated. Thankfully, this was fairly easy to do by creating a new Resolver that extends FileSystemResolver, which is capable of clearing the view cache if it determines that it needs to be cleared.

Looking at the code for the Resolver class, you can see that it checks the view cache in the find_all method. If it does not have the particular view cached, it will proceed to load it using the proper Resolver. So, we simply have to override find_all to clear the cache if necessary before delegating the work to the super class to find, load, and cache the view.

class ReloadablePathResolver < ActionView::FileSystemResolver

  def initialize
    # The theme directory is illustrative; point this at wherever your
    # themes are saved to disk.
    super("app/views/themes")
    @cache_last_updated = nil
  end

  def find_all(*args)
    clear_cache_if_necessary
    super
  end

  def self.cache_key
    "reloadable_path_last_updated"
  end

  private

  def clear_cache_if_necessary
    # Seed the shared timestamp on first use, then compare it against the
    # last time this process cleared its own view cache.
    last_updated = Rails.cache.fetch(ReloadablePathResolver.cache_key) { Time.now.utc.to_s }

    if @cache_last_updated.nil? || @cache_last_updated < last_updated
      Rails.logger.info "Reloading reloadable templates"
      clear_cache
      @cache_last_updated = last_updated
    end
  end
end

Since we're running multiple processes in production, we need a way to signal all processes that their view caches should be cleared. So, we're using memcache to store the time that the web form themes were last updated. Each process then checks that timestamp against the time that particular process last updated its cache. If the timestamp in memcache is more recent, then the ReloadablePathResolver will clear the cache using the clear_cache method it inherited from Resolver.

Next, we need to add some code that will update memcache any time a web form theme has been updated and saved to disk.

class WebFormTheme < ActiveRecord::Base
  after_save :update_cache_timestamp

  private

  def update_cache_timestamp
    Rails.cache.write(ReloadablePathResolver.cache_key, Time.now.utc.to_s)
  end
end
The final step is to simply prepend the view path with the new ReloadablePathResolver.

class SomeController < ApplicationController
  prepend_view_path ReloadablePathResolver.new
end


Beware the Hack!

Hacks are dangerous little creatures. They live in the darkest, dustiest corners of your application, forgotten about, waiting… Waiting for the chance to rear their ugly little heads, open their disease-infested mouths, and sink their jagged teeth into customer confidence and developer productivity.

We’ve all been there. We have a product that works great. It solves a certain problem incredibly well. Then, a well-meaning customer comes along and says, “This is fantastic. It almost solves my problem perfectly. Is there any way you can modify it slightly to do X instead of Y?”

Sometimes this is no problem at all. Sometimes the design of the product is flexible in ways that make it a breeze to add this functionality. But sometimes the request comes out of left field, and takes your product in a direction you never anticipated. While we strive to build software that is extensible and adaptable (it is software after all, isn’t it?), none of us can see the future, nor anticipate every possible customer request.

About this time you start to hear a little voice inside your head. “Well, I suppose I can hard code this, or add an if statement here, or write a one-off script to do X.” After all, you don’t want to tell your customer “No, sorry, we can’t do that”. And they certainly don’t want to hear “Sure, we can make that modification, but it will require a significant amount of refactoring in order to ‘do it right’”.

Acting on these thoughts births a tiny baby Hack. The Hack is little when it’s born, but it certainly doesn’t stay that way. Once the Hack is born, it is much easier to add to the Hack, or feed it. With every modification to the Hack, it gets bigger and bigger. Pretty soon you have a large, ugly Hack with a nasty attitude on your hands. And, despite you being its “mommy”, it doesn’t like you, at all. Not one bit.

Hacks are dangerous for several reasons.

First, they almost always live outside the main execution path of the code. This means they’re not executed nearly as often as the other code. Even if you have a series of tests for the Hack, nothing exercises code like constant execution by your customers. Also, because they’re not really “part of the application”, Hacks are often forgotten about when updating or fixing code.

Second, they’re usually created to quickly get around some issue. And by “quickly”, I mean “didn’t totally think this through, but I’m fairly certain that if I tweak X, alter Y, and drive it with a custom script, it should work just fine”. And usually it does work just fine… at least in the beginning. But this is when the Hack is still young, and under your control. Adult Hacks are not nearly as cooperative.

Third, they’re usually only known about (at least in detail) by the members of the team that created them. A Hack is like a big, pus-filled pimple on your ass. You don’t go around showing those to your friends and co-workers, do you? Hacks, by definition, are quick and dirty solutions to problems. They’re not elegant, or sexy. So developers tend to keep their Hacks to themselves. At most, a developer will mention that they hacked around a problem, but rarely do they go into details. The other team members are largely left in the dark. Not knowing where a Hack lives or how it behaves is a surefire way to get bitten by it down the road.

Always remember, that little baby Hack… it will grow up. It will get nasty. It will bite. It’s just a matter of when and where. Those who have been programming long enough know this to be fact. And, being nasty little creatures, Hacks usually wait until the worst possible time to bite.

So, beware the Hack! They are big, ugly, mean, have teeth, and will most certainly bite.