CouchDB: Databases and Documents

This is part 2 in a series of posts that describe our investigation into CouchDB as a solution to several database-related performance issues facing the TextMe application.

<< Part 1: A Case Study | Part 3: Views – The Advantages >>

CouchDB is a document oriented database. A document oriented database stores information as documents of related data. All of the data within a document is self contained, and does not rely on data in other documents within the database. This can be quite a shift if you’re used to working with a relational database, where data is broken up into multiple rows existing in multiple tables, limiting (or eliminating) the duplication of data. Although radically different, the document oriented approach is a very good fit for many applications.

For some applications, strict data integrity is not the primary concern. Such applications can work just fine without the restrictions imposed by a relational database, restrictions that exist to preserve data integrity. In exchange for giving up those restrictions, document oriented databases can provide functionality that is difficult, if not impossible, to provide with a relational database. For example, it is trivial to set up a cluster of document oriented databases, making it easier to deal with certain scalability and fault tolerance issues. Such clusters can theoretically provide limitless disk space and processing power. This is a primary reason why document oriented (and key/value pair) databases are becoming the standard for data storage in the cloud.

There are plenty of articles on the web describing the benefits of using a document oriented or key/value pair database, so I won’t re-hash any of that information here.

Databases

Creating a new database in CouchDB is a simple process, with no overhead. In fact, it’s as simple as issuing a single HTTP request.

curl -X PUT http://127.0.0.1:5984/my_database
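
If all goes well, CouchDB responds with a simple JSON acknowledgement: {"ok":true}.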

There appears to be no penalty for hosting many databases within the same CouchDB server, as opposed to storing all of the documents within a single database. We took advantage of this when migrating data into CouchDB. Three very large tables were the focus of this migration, each containing between 3 and 50 million rows. We decided to store the data from each table in its own database. The data within these tables is completely unrelated, so we would never need to view data from one database combined with another. If the data were related, we would have combined the tables into a single database, as CouchDB cannot create views across multiple databases.

Storing each set of data in its own database also provides additional flexibility. During the migration process, there were several points where we changed the structure of the documents. Having the ability to easily delete the affected database and re-populate it, without affecting any other document types, came in handy. With multiple databases, we can also give one database a replication schedule different from the others. It also makes it easy to move one or more databases to another server, should we ever choose to do so.
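
In CouchRest terms, spinning databases up and down is a one-liner each (a sketch; the database name here is hypothetical):

require 'couchrest'

# One database per migrated table.
server   = CouchRest.new("http://127.0.0.1:5984")
messages = server.database!('archived_messages')   # creates it if missing

# If a document structure changes mid-migration, the affected database
# can be dropped and re-populated without touching the others.
messages.delete!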

My colleague Dave made a few changes to CouchRest Rails to support connecting to multiple CouchDB databases from a single Rails application. You simply specify the database server location in the configuration files, and each model object can then specify which database it uses.
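
I won’t reproduce the configuration files here, but the model side of the idea looks roughly like this (a sketch; ArchivedMessage and the database name are hypothetical, and the exact API depends on the CouchRest Rails version):

class ArchivedMessage < CouchRest::ExtendedDocument
  # Each model declares the database it lives in.
  use_database CouchRest.database!("http://127.0.0.1:5984/archived_messages")
end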

Documents

CouchDB documents are very flexible. Documents are stored in JSON format, allowing you to take advantage of JSON arrays and dictionaries to represent collections of data. There is no external force dictating how a document should be structured, or what it should contain (as long as the document is valid JSON). Below is an example of what a document may look like for a blog post.

{
   "_id": "CouchDB: Databases and Documents",
   "_rev": "1-704787893",
   "author": "John Wood",
   "email": "john_p_wood",
   "post": "CouchDB is a documented oriented database.  A document...",
   "tags": ["couchdb", "couchdb case study", "json"],
   "comments": [
      {
         "email": "joe@somewhere.com",
         "comment": "Thanks for the information"
      },
      {
         "email": "kevin@xyz.com",
         "comment": "CouchDB sounds pretty interesting"
      }
   ]
}

Schema-less

Probably the best part about the document oriented approach is the ability to make each document different from the next. There is no schema to enforce that a document contains specific information. This makes CouchDB a great fit if your application needs to store data that can be wildly different between objects of the same type. In a relational database, this is usually handled by serializing the data in some format, writing the serialized data to the database, and de-serializing the data when it is read by the application. However, this is really nothing more than a hack. Querying the data in such columns can be a nightmare, and the serialization/de-serialization process is just one more thing that can go wrong. In a document oriented database, there is no need for such a hack. You simply code your CouchDB views to account for the fact that certain fields may not be in the document, and act accordingly (either defaulting to some value, or simply moving on to the next document in the database).
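
For example, a view that indexes documents by an optional "tags" field might simply skip documents that lack it (a sketch via CouchRest; the map function is the JavaScript string CouchDB runs):

class Article < CouchRest::ExtendedDocument
  view_by :tag,
    :map => "function(doc) {
               // Ignore documents without the optional tags field.
               if (doc.tags) {
                 doc.tags.forEach(function(tag) { emit(tag, null); });
               }
             }"
end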

Self contained

The most important thing to remember about documents is that they are self contained. All of the data representing a particular concept is right there in the document. (This is a bit of an oversimplification, as it is completely possible to establish relationships between documents by having one document store the unique id of another document. However, these links are not directly supported by CouchDB, and can be easily broken.) So if you are moving from a relational database to CouchDB, you should de-normalize your data as much as possible while defining the structure of your documents. JSON arrays and dictionaries can help tremendously when de-normalizing relationships. If there is only one piece of information from the relationship worth storing in the document, then an array works great (see the “tags” property above). For relationships with more complex data structures, an array of dictionaries fits the bill quite nicely (see the “comments” property above).

The document id

Another important point to consider when designing your document structure is what you will use as the id of the document. The id must be unique not only within that database, but across all instances of the same database if you happen to be running a database cluster. CouchDB uses document ids to replicate changes between servers. While wildly popular in relational databases, auto-generated sequential keys are a poor fit here, and throw a wrench into the gears of the replication process. If each database in the cluster were responsible for generating its own sequential ids, it is highly likely that different documents on different servers would be assigned the same id, which would make CouchDB think that two distinct documents are the same document. Badness would surely ensue.

Instead, it is recommended that you use the data’s natural key as the id of your document. The natural key is some field, or combination of fields, in your document that uniquely identifies that document. In the example above, the title of the blog post is a good fit for a natural key. It is not very likely that I will be writing posts with the same title. If you happen to enjoy writing about the same stuff over and over, perhaps the title of the post combined with the date and time it was created would be a better fit. Either way, the id should be composed from data within the document.
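
With CouchRest, assigning a natural key is just a matter of setting _id before the first save (a sketch; Post is a hypothetical model):

post = Post.new(:title  => "CouchDB: Databases and Documents",
                :author => "John Wood")
post['_id'] = post['title']   # natural key pulled straight from the data
post.save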

If you do not provide an id, CouchDB will provide one for you. CouchDB uses an algorithm that makes it virtually impossible for multiple CouchDB instances to generate the same id. However, I have read articles on the web indicating that this is a very slow operation, so you may want to avoid letting CouchDB generate an id for you. Regardless, natural keys pulled straight from data within your document always make better ids, as they are easier to read, and more identifiable.

For a few of our documents, we used the sequential key generated by MySQL as the id :) I know how stupid this sounds, given the last few paragraphs. However, I think this was the best choice for an id in our case. The data contained in these documents is basically a collection of ids for rows that exist, and will remain, in MySQL. None of the data within the document would be any more readable than the MySQL id. Also, since all of the keys were originally generated in a single MySQL database, they are guaranteed to be unique. As of right now, we plan to always create the data in MySQL, and then “archive” it to CouchDB at a later date, so this approach should continue to work just fine.

Supporting existing functionality

If you are migrating data from a relational database to CouchDB, there is another important item to think about. If your application needs to interact with CouchDB in the same way that it did with the relational database, then the CouchDB views you build must be able to replace any SQL queries that were run against that data. To make this happen, the CouchDB document will need to contain all of the information necessary to build those views. Remember, there are no JOINs in CouchDB.
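
For a simple case, a query like SELECT * FROM posts WHERE author = ? becomes a view keyed on the author (a sketch using CouchRest’s generated views; Post is hypothetical):

class Post < CouchRest::ExtendedDocument
  property :author
  view_by :author   # generates a by_author view keyed on doc.author
end

posts = Post.by_author :key => "John Wood"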

Summary

I think the way CouchDB handles databases and documents is very straightforward. Once you get used to the idea that there could be multiple instances of a database in a cluster, and that documents should be self contained, the rest is cake. The schema-less approach has the potential to open a lot of doors. I know that we’re already making plans to take advantage of it.

Paginating Records in CouchDB via CouchRest

Update: This change has been incorporated into CouchRest version 0.30

When I began looking into replacing some of TextMe’s large MySQL tables with CouchDB databases, one of the things I noticed right away was that pagination support was not quite there in CouchRest. I say “not quite there” because CouchRest does have the ability to fetch data from the database in paginated chunks, but the current support didn’t really fit too well with the way the rest of the library interacts with CouchDB views. A helper class had to be used to fetch the data, and the data came back as a hash instead of an instance of the appropriate class.

Pagination is a must for us, because these tables in particular are very large. That’s one of the main reasons why we’re moving them to CouchDB in the first place. Loading all of the data into memory at once would be troublesome to say the least.

CouchRest is still a very young library, currently on version 0.29. However, despite its youth, it is already fully featured and off to a great start. So, I saw this as an opportunity to contribute to something that we have already greatly benefited from.

With a little inspiration from Rails, I decided to implement a proxy that would be created when a view was called to fetch data. The proxy would defer getting data from the database until that data was actually needed. I then implemented will_paginate style paginate and paginated_each methods on the proxy object. If either of these methods are called, only a chunk of data will be fetched from the database, and that data will be returned as an array of instances of the appropriate class. If any other method is called on the proxy, the proxy will fetch all of the data from the view, and forward the call on to the “real” array.
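
The heart of the idea looks something like this (a minimal sketch of the approach, not the actual CouchRest code):

# A stand-in for the real proxy: @fetch is a block that queries the view.
class ViewProxy
  def initialize(&fetch)
    @fetch = fetch
  end

  # Fetch only the requested chunk from the database.
  def paginate(options = {})
    page     = options[:page]     || 1
    per_page = options[:per_page] || 20
    @fetch.call(:skip => (page - 1) * per_page, :limit => per_page)
  end

  # Any other call forces a full fetch, then delegates to the real array.
  def method_missing(name, *args, &block)
    @all ||= @fetch.call({})
    @all.send(name, *args, &block)
  end
end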

I decided to go with will_paginate style methods because the will_paginate gem is by far the most popular pagination solution for Rails. We use it extensively in TextMe. So, implementing the same methods would ensure that we could continue to use our existing pagination code, and that the code wouldn’t have to know whether it was dealing with a collection of ActiveRecord objects or a collection of CouchRest ExtendedDocument objects.

The new code also adds some methods to the class itself that let you paginate over instances of the class without having an instance of the proxy, or a view, in your CouchRest ExtendedDocument object.

Here are some examples, pulled from the CouchRest tests:

Paginating using instance methods:

articles = Article.by_date :key => Date.today
articles.paginate(:page => 1, :per_page => 3).size.should == 3

articles = Article.by_date :key => Date.today
articles.paginated_each(:per_page => 3) do |a|
  a.should_not be_nil
end

Paginating via class methods:

articles = Article.paginate(:design_doc => 'Article', 
  :view_name => 'by_date', :per_page => 3, :descending => true, 
  :key => Date.today, :include_docs => true)
articles.size.should == 3

options = { :design_doc => 'Article', :view_name => 'by_date',
  :per_page => 3, :page => 1, :descending => true, 
  :key => Date.today, :include_docs => true }
Article.paginated_each(options) do |a|
  a.should_not be_nil
end 

Currently, the forked version of CouchRest containing this feature can be found on GitHub, at http://github.com/jwood/couchrest/tree/master. I’ve submitted a request to have this pulled into the main CouchRest repository.

Hopefully this will be helpful to others.

CouchDB: A Case Study

This is part 1 in a series of posts that describe our investigation into CouchDB as a solution to several database-related performance issues facing the TextMe application.

Part 2: Databases and Documents >>

The wall was quickly approaching. After only a few short years, several of our database tables had over a million rows, a handful had over 10 million, and a few had over 30 million. Our queries were taking longer and longer to execute, and our migrations were taking longer and longer to run. We even had to disable a few customer facing features because the database queries required to support them were too expensive to run, and were causing other issues in the application.

The nature of our business requires us to keep most, if not all, of this data around and easily accessible in order to provide the level of customer support that we strive for. But it was becoming very clear that a single database holding all of this information was not going to scale. Besides, it is common practice to have a separate reporting database that frees the application database from handling these expensive queries, so we knew that we’d have to segregate the data at some point.

Being a young company with limited resources, scaling up to some super-powered server, or running the leading commercial relational database was not an option. So, we started to look into other solutions. We tried offloading certain expensive queries onto the backup database. That helped a little, but the server hosting the backup database simply didn’t have enough juice to keep up with the load. We also considered rolling up key statistics into summary tables to save us from calculating those stats over and over. However, we realized that this was only solving part of the problem. The tables would still be huge, and summary tables would only replace some of the expensive queries.

It was about this time that my colleague Dave started looking into CouchDB as a possible solution to our issues. Up until this point, I had never heard of CouchDB. CouchDB is a document oriented, schema-free database similar to Amazon’s SimpleDB and Google’s BigTable. It stores data as JSON documents and provides a powerful view engine that lets you write JavaScript code to select documents from the database and perform calculations. A RESTful HTTP/JSON API is used to access the database. The database boasts other features as well, such as robust replication, and bi-directional conflict detection and resolution.

The view engine is what piqued our interest. Views can be rebuilt whenever we determine it is necessary, and can be configured to return stale data. Stale data? Why would I want stale data? you may be asking yourself. Well, one big reason comes to mind: returning stale data is fast. When configured to return stale data, the database doesn’t have to calculate anything on the fly. It simply returns what it calculated the last time the view was built, making the query as fast as the HTTP request required to get the data. The CouchDB view engine is also very powerful. CouchDB views use a map/reduce approach to selecting documents from the database (map), and performing aggregate calculations on that data (reduce). The reduce function is optional. CouchDB supports JavaScript as the default language for the map and reduce functions. However, this is extensible, and there is support out there for writing views in several other languages.
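
As a small, hypothetical example, a view that counts messages per day might look like this when defined through CouchRest (the map and reduce functions are the JavaScript strings CouchDB runs):

class Message < CouchRest::ExtendedDocument
  view_by :day,
    :map    => "function(doc) {
                  if (doc.sent_at) {
                    emit(doc.sent_at.substr(0, 10), 1);  // key on YYYY-MM-DD
                  }
                }",
    :reduce => "function(keys, values, rereduce) {
                  return sum(values);  // total messages per day
                }"
end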

In our case, we are planning to use CouchDB as an archive database that we can move old data to once a night. Once the data is moved to the CouchDB database, it would no longer be updated, and would only be used for calculating statistics in the application. Since we would only be moving data into the database once a day, we only need to rebuild the views once a day. Therefore, all queries could simply ask for (and get) stale data, even when the views were in the process of being rebuilt. Also, moving all of the old data out of the relational database would dramatically reduce the size of the specific tables, improving the performance of the queries that hit those tables.
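
Asking for stale data is just a query parameter on the view request. Something like this (a sketch; the database and view names are hypothetical):

db = CouchRest.database!("http://127.0.0.1:5984/archive")
# stale=ok tells CouchDB to serve the last built index without updating it.
stats = db.view("messages/by_day", :stale => "ok", :group => true)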

I’m really looking forward to this partial migration to CouchDB. The ability to add new views to the database without affecting existing views gives us the flexibility we need to grow the TextMe application to provide better, more specific, and more relevant statistics. In marketing, statistics are king. Since TextMe is a mobile marketing tool, we want it to be able to provide all of the data that our customers are looking for, and more. I feel that by moving to CouchDB, we will not only be able to re-activate those features that we had to disable due to database performance, but also add more features and gather more statistics that would have otherwise been impossible with our previous infrastructure.

The migration to CouchDB was not always straightforward. We faced several challenges, and learned many lessons, over the past month. All of those challenges will be addressed here.

In the coming posts, I plan to talk about:

  • Structuring your CouchDB databases, and the documents within them.
  • More details about CouchDB views.
  • The application code necessary to talk to CouchDB.
  • Migrating parts of an existing application from a relational database backed by ActiveRecord to CouchDB.
  • How the CouchDB security model differs from a traditional relational database.

Stay tuned!

Strive to Limit Integration Points

Last week, I was working on a new feature of TextMe that required a call to one of our external service providers for some data. The call in particular was to look up the carrier for a given mobile number. Sounds simple enough. However, we already had code that integrated with this provider in one component of our architecture, and I needed to make this call from another component.

A couple of options jumped out at me. I could pull the code I needed to use into a library that could be shared between the components, or implement some form of inter-process communication that would enable me to invoke the service from the one component, and have it processed by the component that already integrated with the service provider.

Pulling the code into a library would be the easier of the two to implement, for sure. Like any project of reasonable size, we were already doing this for several other shared pieces of code. Adding one more to the list would be a piece of cake. The second option would require a bit more work. The component that integrates with the service provider runs as a daemon process, so using something straightforward like HTTP to handle the inter-process communication was out of the question. Instead, I’d likely have to utilize the queuing framework that we already had in place. What makes it more difficult is that the queuing library we use only handles asynchronous calls, and this would need to be a synchronous call. Not the end of the world by any means, but without a doubt more complicated than simply sucking the code into a library.

Even though option one was easier to implement, having two components in the architecture integrate with a 3rd party seemed like a bad idea. Sprinkling integration points throughout your application is usually a recipe for failure, largely because it is only a matter of time before an integration point fails.

If we went with option one, we could have the library handle the failures. However, even if handled properly, failures like this usually have other consequences. For example, if the service never responded, it could cause requests to back up in the given component. Even if we implemented a timeout, it is likely that the timeout would be greater than the average response time, which means our system would take longer to process each request. If you had to deal with a lot of incoming requests at the time of the failure, you could be in for a world of hurt, especially if you had multiple components suffering from this issue.

With option two, we have a bit more control over the situation. First off, we would know there was one, and only one, spot in our architecture that integrated with that particular service. This would allow us to better understand the potential impact of a failure, and the steps that needed to be taken to address it. Second, it would allow us to more easily implement a circuit breaker to prevent the failure from rippling across the system. If the circuit breaker was tripped, we could return an error, return some sort of filler data, or queue the request up for processing at a later time. Third, we could add resources to account for the situation. Since the work was being done in a completely different component, if it was simply a matter of increased latency on the part of our service provider, we could always spin up a few more instances of that component to account for the fact that some of the requests may be starting to back up.
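
As a point of reference, a bare-bones circuit breaker is not much code (an illustrative sketch, not our production implementation):

class CircuitBreaker
  class OpenError < StandardError; end

  def initialize(threshold = 5, timeout = 60)
    @threshold = threshold   # consecutive failures before the circuit opens
    @timeout   = timeout     # seconds to wait before trying again
    @failures  = 0
    @opened_at = nil
  end

  def call
    raise OpenError, "circuit is open" if open?
    result = yield           # invoke the protected integration point
    @failures = 0            # success closes the circuit
    result
  rescue OpenError
    raise
  rescue => e
    @failures += 1
    @opened_at = Time.now if @failures >= @threshold
    raise e
  end

  private

  def open?
    return false if @opened_at.nil?
    return true if Time.now - @opened_at < @timeout
    @opened_at = nil         # timeout elapsed; allow another attempt
    @failures  = 0
    false
  end
end

Wrapping the carrier lookup in something like breaker.call { lookup_carrier(number) } would then fail fast while the provider is misbehaving, instead of tying up request-handling resources.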

In his fantastic book, Release It!, Michael Nygard talks about integration points, along with a host of other topics regarding the deployment and support of production software. Any developer who writes code that will eventually be running in a production environment (which I hope is EVERY developer) should read this book. Regarding integration points, Michael says the following:

  • Integration points are the number-one killer of systems.
  • Every integration point will eventually fail in some way, and you need to be prepared for that failure.
  • Integration point failures take several forms, ranging from various network errors to semantic errors.
  • Failure in a remote system quickly becomes your problem, usually as a cascading failure when your code isn’t defensive enough.

However, even though integration points can be tough to work with, systems without any integration points are usually not that useful. So, integration points are a necessary evil. Our best tools for keeping them in line are coding defensively, being smart about where you place integration points in your system, and limiting their number.

With the help of my colleague Doug Barth, we (mostly Doug) whipped up a synchronous client for the Ruby AMQP library. I then used this code to implement the synchronous queuing behavior I needed to keep the integration point where it belonged. Those interested can find the code in GitHub, at http://github.com/dougbarth/amqp/tree/bg_em.
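
For the curious, the shape of the trick is roughly this: publish the request along with a private reply queue, and block until the answer shows up. The publish and subscribe helpers below are hypothetical stand-ins, not the real amqp API:

require 'thread'
require 'timeout'

# Hypothetical sketch of a synchronous call over an asynchronous queue.
def synchronous_call(payload, timeout = 5)
  reply_queue = "reply.#{rand(1_000_000)}"   # private, per-request queue
  response    = Queue.new                    # thread-safe handoff

  subscribe(reply_queue) { |message| response.push(message) }
  publish(payload, :reply_to => reply_queue)

  Timeout.timeout(timeout) { response.pop }  # block until the reply arrives
end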

Increase Design Flexibility by Separating Object Creation From Use

I just finished reading Emergent Design, by Scott Bain. Overall, I thought it was a pretty good book that touched on some important concepts in software design. I’ve read about one particular concept covered in the book a few times before, but the value of it didn’t sink in until I read Emergent Design. This concept states that code that creates an object should be separate from code that uses the object.

Separating code that creates an object from the code that uses the object results in a much more flexible design, which is easier to change. Creating this separation is also very easy to do. By simply avoiding a call to the new operator in the “client” code for the particular object you wish to instantiate, you are able to evolve your code to adjust to a variety of changes, most of which require no changes in the code that uses the object. Let’s walk through an example.

Let’s say we have a logging class, named Logger, that we use to log messages from our application. The class is pretty simple, and looks something like this.

public class Logger {
    private static final String logFileName = "application.log";
    private FileWriter fileWriter;
    private Class from;
    
    public Logger(Class from) {
        this.from = from;

        try {
            fileWriter = new FileWriter(logFileName, true);
        } catch (IOException e) {
            throw new RuntimeException("Log file '" + logFileName + 
                    "' could not be opened for writing.", e);
        }
    }

    public void log(String message) {
        try {
            fileWriter.write(
                from.getCanonicalName() + ": " + message + "\n");
            fileWriter.flush();
        } catch (IOException e) {
            System.err.println("Writing to the log file failed");
            e.printStackTrace();
        }
    }
}

In our application, we would typically use the Logger class like this:

Logger logger = new Logger(MyClass.class);
logger.log("Some message");

I think this is pretty typical, and seems to be the default pattern. Create the object that you need, and then use it. Simple and straightforward. However, the simplicity comes at the price of limited flexibility. For example, what if I wanted to limit the Logger class to only having one instance? Or, what if I wanted to start logging some messages to the database, and some to the file system? By combining the code that creates the object with the code that uses the object, we’ve greatly limited the ways in which we can evolve our design without affecting existing “client” code. Sure, we can work our way out of it, but since the Logger is a very popular class used by almost every other class in the system, it would require a lot of work to change.

So, how can we avoid this? How can we effectively separate the creation of the object from the code that uses it? The very first “tip” in Effective Java, by Joshua Bloch, is to consider static factory methods instead of constructors. Joshua suggests this for the same reasons Scott suggests separating code that creates the object from code that uses the object in Emergent Design. Instead of making your clients use the new operator to create instances of your object, provide them with a static factory method to do so.

    public static Logger getInstance(Class from) {
        return new Logger(from);
    }
    
    protected Logger(Class from) {
        this.from = from;

        try {
            fileWriter = new FileWriter(logFileName, true);
        } catch (IOException e) {
            throw new RuntimeException("Log file '" + logFileName + 
                    "' could not be opened for writing.", e);
        }
    }

Note that I changed the scope of Logger‘s constructor from public to protected. This will discourage other classes outside of the logging package from using it, while leaving the Logger class open for subclassing. With this new method in place, users of this class can now create an instance by doing the following.

Logger logger = Logger.getInstance(MyClass.class);
logger.log("Some message");

It seems silly to provide a method that simply calls new. But doing so adds so much flexibility to the design that Scott considers it a “practice”, or something he does every time without even thinking about it. Abandoning the constructor also opens a few doors. You are no longer required to return an instance of that specific class, giving you the freedom to return any object of that type. You don’t always have to return a new instance, allowing you to implement a cache, or a singleton. You can use this flexibility to your advantage when evolving your design. Let’s see how.

Let’s say we get a request from our accounting department to log messages from code that deals with financial transactions (conveniently located in the net.johnpwood.financial package) to the database. This sounds like the birth of a new type of logger. Because clients are not using the new operator to create new instances of the Logger class, we can easily evolve Logger into an abstract class, keeping the static getInstance() method as the factory method for the Logger class hierarchy. After we have the abstract class, we can create two new subclasses to implement the individual behavior. All of this with no change to how the client uses the logging functionality.

Because the filesystem logger and the database logger don’t have too much in common, the Logger class has been slimmed down quite a bit. What remains is the interface for the Logger subtypes, defined via the abstract log() method, and a factory method to create the proper logger, which is implemented in getInstance().

public abstract class Logger {
    
    public static Logger getInstance(Class from) {
        if (from.getCanonicalName().startsWith(
                "net.johnpwood.financial")) {
            return DatabaseLogger.getInstance(from);
        } else {
            return FilesystemLogger.getInstance(from);
        }
    }
    
    protected Logger() {}
    public abstract void log(String message);
}

We now have two distinct classes that handle logging transactions. FilesystemLogger, which contains most of the old Logger code, and DatabaseLogger. FilesystemLogger should look pretty familiar.

public class FilesystemLogger extends Logger {
    private static final String logFileName = "application.log";
    private FileWriter fileWriter;
    private Class from;

    public static FilesystemLogger getInstance(Class from) {
        return new FilesystemLogger(from);
    }
    
    protected FilesystemLogger(Class from) {
        this.from = from;
        
        try {
            fileWriter = new FileWriter(logFileName, true);
        } catch (IOException e) {
            throw new RuntimeException("Log file '" + logFileName + 
                    "' could not be opened for writing.", e);
        }
    }

    @Override
    public void log(String message) {
        try {
            fileWriter.write(
                from.getCanonicalName() + ": " + message + "\n");
            fileWriter.flush();
        } catch (IOException e) {
            System.err.println("Writing to the log file failed");
            e.printStackTrace();
        }
    }
}

DatabaseLogger is also pretty simple, since I didn’t bother to implement any of the hairy database code (doesn’t help to illustrate the point…and I’m lazy).

public class DatabaseLogger extends Logger {
    private Class from;
    
    public static DatabaseLogger getInstance(Class from) {
        return new DatabaseLogger(from);
    }
    
    protected DatabaseLogger(Class from) {
        this.from = from;
        establishDatabaseConnection();
    }

    @Override
    public void log(String message) {
        LoggerDataObject dataObject = 
            new LoggerDataObject(from, message);
        dataObject.save();
    }
    
    private void establishDatabaseConnection() {
        // Connect to the database
    }
}

We’ve significantly changed how the Logger works, and the client is totally oblivious to the changes. The client code continues to use the Logger as it did before, and everything just works. Pretty sweet, eh?

As you can imagine, there are many other ways you can evolve your design if you have this separation of creation and use. If we need to create a MockLogger for testing purposes, it can be created in Logger.getInstance() along with the other Logger implementations. The client would never know that it is using a mock. If we ended up creating 10 different loggers, it would be trivial to have Logger.getInstance() delegate the creation of the proper Logger instance to a factory, moving the creation logic out of the Logger class. Again, no changes to the client.

Separating creation from use also allows you to easily evolve your class into a singleton (or any other pattern that controls the number of instances created). This doesn’t make much sense for Logger, since each unique Logger instance contains state. However, it does make sense for some classes. Evolving your class into a singleton simply requires a static instance variable on the class containing the instance of the singleton object, and an implementation of getInstance() that returns the singleton instance. If clients have already been using the getInstance() method to get an instance of the class, then no change would be required on their end. Here’s an example:

public class SomeOtherClass {
    private static SomeOtherClass instance = new SomeOtherClass();
    
    public static SomeOtherClass getInstance() {
        return instance;
    }
    
    private SomeOtherClass() {}
}

It is worth pointing out that static factory methods are not the only way to achieve this separation. Dependency injection frameworks like Spring and Guice do all of this for you. They take on the responsibility of creating the objects, and of getting the instances to the code that uses them. If you are a disciplined developer, and never “cheat” by instantiating the objects directly, then all of the same benefits outlined above apply when using a dependency injection framework.

Like everything in life, there are cons that go along with the pros. Separating the code that creates an object from the code that uses the object is not the default pattern. It is not the norm. It will take time for you and your co-workers to get comfortable with this pattern. API documentation tools don’t “call out” static factory methods like they do constructors. This could have an effect on anybody using your library. Dependency injection frameworks take the creation of objects completely out of your code, moving it to some magical, mysterious land where things just happen, somehow. This also can take some time to get used to, especially for those new to the concept.

However, I feel that the benefits of separating creation from use far outweigh the drawbacks.

In our field, change is a constant. As a profession, we’re gradually learning to stop fighting change, and to start accepting it. This means designing for change. Doing so makes everybody’s life easier, from the customer to the developer. Separating creation from use is one, quick way we can increase the flexibility of our design, with very little up front cost.

Thanks to Mahesh Murthy for reviewing this post.