One of the hardest problems in open-source software (OSS), and in product development generally, is getting to 2.0. The first version is rather straightforward to develop, with features and additions arriving at a rapid pace. As the codebase grows, the developers must exercise care and attention to make sure the project stays viable and vibrant. While working on the next version of my OSS project, AutoMapper, I can look back at the successes and failures of two years of effort.

AutoMapper is an OSS project that solves the problem of boring left-hand-side, right-hand-side code. This is the code where a developer creates an object and sets a multitude of properties from some other source object. It is boring, rote code, both annoying to write and tedious to test. AutoMapper intends to solve this problem by applying optimized reflection techniques to perform this sort of operation for you. This project began as an idea that surprised me when I originally developed it, as I was almost sure someone else had already solved this problem.

Over the lifetime of the project, tens of thousands of developers downloaded and used this library, rather surprising me again because of its humble origins. But looking back, many of those early decisions, shaped by assumptions, guesses, and wrong turns, wound up cementing themselves in many aspects of AutoMapper’s design, whether I like it or not.

As I push towards 2.0, I can right many of the wrongs I found in getting to 1.0, preserving all the things that I was lucky enough to get right.

What Went Right

For me, a lot of AutoMapper’s success lies in timing. It has a simple API, but it solved a growing problem in a fairly new space opened by the release of ASP.NET MVC. Although AutoMapper was neither inspired by problems encountered in ASP.NET MVC nor coupled to it, it is that framework’s lack of opinions on model design that necessitated a project like AutoMapper.

Simple API

From the very first line of code I wrote to build AutoMapper, I wanted configuration and execution to be absolutely as dirt simple as possible. I strove towards a low entry point, so that simple scenarios were easily accomplished:

// Configure a mapping
Mapper.CreateMap<Order, OrderDto>();
// Perform a mapping
var dto = Mapper
    .Map<Order, OrderDto>(order);

In the above code snippet, it is one line of code to configure a mapping between a source and destination type. Internally, AutoMapper performs some highly optimized reflection logic to cache the mapping execution from the source to destination type. Next, with one more line of code, I can perform the mapping from an object of type Order to build an object of type OrderDto. As long as the names of the properties match up from source to destination type, no additional configuration is needed.
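The snippet above leaves the two types implicit. As a sketch of what they might look like (these definitions are hypothetical, not from the original article), AutoMapper matches destination members to source members by name, and its flattening convention also matches a member like CustomerName against a nested Customer.Name on the source:

```csharp
// Hypothetical source types for the snippet above.
public class Customer
{
    public string Name { get; set; }
}

public class Order
{
    public Customer Customer { get; set; }
    public decimal Total { get; set; }
}

// Destination members are matched by name; CustomerName is filled
// by flattening the source's Customer.Name property.
public class OrderDto
{
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}
```

With types shaped like these, the single CreateMap call is all the configuration needed; no per-property wiring is written by hand.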

Starting from the very first test written to build out AutoMapper functionality, every single piece of behavior was written test first, assuring a simple API for both configuration and execution. With hundreds of tests exercising the API, I worked out bad configuration models, incorrect method names, and confusing interactions by using the library again and again, in repeatable, automated tests.

Harvested Framework

The best way to prove the use of a framework is to have it grow out of an opinionated, representative, production system. Products can be successful when they do not grow out of an existing system, but usually only when dogfooded and used by the team building the software. I built AutoMapper to support problems faced in an actual production system: a very large ASP.NET MVC application, with something on the order of a thousand different screens.

Since my team had such a strong need to perform many mappings, very early on I recognized the need for building a tool or framework to do this for the team. Before I ever released a public version of AutoMapper, the team used the library in the codebase for many months. The 1.0 release marked almost two years of use since the very first line of code was written. I did not try to solve a problem that did not exist, nor did I try and guess how to solve other people’s problems. I focused on solving the problems faced, and extracted the result with the idea that other people might benefit from the team’s work.

Moving to GitHub

Having worked with OSS frameworks for many years, I long found the story of contributing back to OSS tools quite painful. With centralized version-control systems such as Subversion or Team Foundation Server, contributing back often meant an email and a patch comprising the sum total of the work completed. Oftentimes, the maintainers rejected my patch, leaving me with uncommitted work sitting on my local hard drive.

With GitHub, changing another open source project is highly encouraged, to the point that there is a large button at the top of each project page: “Fork”. The code then becomes your code, which you can modify and extend in a personalized sandbox. Contributing back is equally seamless, as automated tools allow OSS project leads to easily see changes from incoming requests. Since moving to GitHub from SVN, the number of developers contributing code has skyrocketed, from just one to over a dozen (with several open pull requests still pending). GitHub makes developing and contributing to OSS projects an absolute joy.

Decoupled from Existing Frameworks

Although developed on an ASP.NET MVC project, the original inspiration behind AutoMapper came from a project that heavily used WCF. This project exposed some data via web services through WCF, but I found that the external messages looked nearly identical to my team’s internal representations. Because I found many situations where I wanted to expose rich models through serializable formats, I did not want AutoMapper to be tied to specific presentation or application frameworks.

As of this writing, AutoMapper depends on only four external assemblies:

  • System
  • System.Core
  • System.Data
  • Castle.Core

The last library is only used for generating proxy objects for mapping directly to interfaces where there is no implementation, which has no analog in the Base Class Library. Because of this, developers may use AutoMapper in a wide variety of scenarios, many of which I would not have predicted.
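As a hedged illustration of that one use of Castle.Core (the interface and object names here are assumptions, not from the article), mapping to an interface destination looks the same as mapping to a class, with AutoMapper generating a proxy implementation at runtime:

```csharp
// Hypothetical destination interface with no concrete implementation.
public interface IOrderDto
{
    decimal Total { get; set; }
}

// AutoMapper uses Castle.Core's proxy generation to create a class
// implementing IOrderDto on the fly, since the BCL offers no
// equivalent facility.
Mapper.CreateMap<Order, IOrderDto>();
IOrderDto dto = Mapper.Map<Order, IOrderDto>(order);
```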

Highly Opinionated

Users of AutoMapper often ask for a variety of scenarios that I would rather not support. AutoMapper was built to support a targeted scenario that lends itself well to a convention-based approach. For example, people often ask for reverse mapping, where a DTO can map back into a rich entity. Since I never ran into this scenario and try to avoid it, it is very difficult for me to add this kind of feature to the library.

It is not a value judgment, but a question of logistics. AutoMapper succeeds because it solves problems I understand. I cannot solve problems I do not understand, so I generally rely on community support for these situations. When feature requests come in, I poke and prod until I understand the underlying problem the developer is trying to solve. Many times, the solution lies elsewhere besides diluting the core focus of AutoMapper. By keeping the product focus distilled and opinionated, I can fight feature bloat and code dilution.

What Went Wrong

Although my team and others encountered success using AutoMapper, there are quite a few elements of its design that I would like to do over. AutoMapper was not the first OSS project I developed, and I corrected many of the mistakes I made in previous projects. In fact, many of the successes were a direct result of fixing previous gaffes, such as eschewing speculative development in favor of harvesting the library from production code.

I was most surprised at how seemingly innocuous decisions early on propagated and dictated later features. Some items I wanted to include earlier have become much more difficult, because architectural decisions are quite difficult to change as I build towards the next version of AutoMapper.

Over-use of Statics

The simplicity of the AutoMapper API shown earlier is also its crutch. Initially, the single static Mapper class was the gateway to all of its functionality. Eventually, that static class became merely a façade over separate configuration and execution objects, but the static class stayed. Its only responsibility now lies in lifecycle management of the supporting classes.

The problem in this design comes with each additional feature on the underlying classes. Since the static class is merely a façade, all calls are passed through. Each additional method on the underlying supporting objects carries a tax of implementing that additional method on the static class. The design was inspired by other OSS projects that exposed a fluent API, but what I did not know at the time was that all of those OSS authors regretted that design.

Since the static class is already widely used in the community, my choices for addressing this issue are much more limited. The simplicity of a static class somewhat restricts future growth if for no other reason than having to maintain two separate APIs.
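A minimal sketch of the pass-through tax described above (the Configuration and MappingEngine names are assumptions, not necessarily AutoMapper's actual internals):

```csharp
// Simplified sketch of a static facade over separate configuration
// and execution objects.
public static class Mapper
{
    private static readonly Configuration configuration = new Configuration();
    private static readonly MappingEngine engine = new MappingEngine(configuration);

    // Every method added to Configuration must be mirrored here...
    public static void CreateMap<TSource, TDestination>()
    {
        configuration.CreateMap<TSource, TDestination>();
    }

    // ...and likewise for every method added to MappingEngine,
    // doubling the API surface to maintain.
    public static TDestination Map<TSource, TDestination>(TSource source)
    {
        return engine.Map<TSource, TDestination>(source);
    }
}
```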

No Separation Between Configuration and Execution Model

The advent of OSS libraries such as Fluent NHibernate pushed the ideas of extending configuration-time activities through conventions. AutoMapper includes built-in conventions on how to map from one type to another, but the lack of a true, segregated configuration model prevents more advanced, higher-order conventions that can be applied across all mappings. One example would be to scan for types that inherit from a base Entity class, and register a custom type converter that provides a generic solution for mapping an entity identifier to loading the entity from a persistent store.
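The entity-loading convention described above might be sketched like this, assuming the ITypeConverter shape AutoMapper had at the time; Entity, EntityConverter, and IEntityRepository are hypothetical names, not AutoMapper APIs:

```csharp
// A custom type converter that maps an entity identifier on a DTO
// to the entity loaded from a persistent store.
public class EntityConverter<TEntity> : ITypeConverter<int, TEntity>
    where TEntity : Entity
{
    private readonly IEntityRepository repository;

    public EntityConverter(IEntityRepository repository)
    {
        this.repository = repository;
    }

    public TEntity Convert(int source)
    {
        return repository.Load<TEntity>(source);
    }
}

// Without a segregated configuration model, each entity type must be
// registered by hand; a higher-order convention could instead scan
// for every type inheriting from Entity and do this automatically.
Mapper.CreateMap<int, Order>().ConvertUsing<EntityConverter<Order>>();
```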

Unfortunately, the configuration model for AutoMapper doubles as its execution model. The configuration model both describes what to do and how to do it. These are two separate activities with vastly different needs. Because mapping execution is baked into the configuration model, extending AutoMapper to support bi-directional mapping becomes quite difficult without severely bloating the existing model to support two entirely different paradigms.

Many OSS projects with fluent APIs take this one step further, providing separate models for the execution, configuration, and DSL expressions. With this separation, a developer could extend the DSL to provide custom configuration, which ultimately feeds into a final execution model. I plan to right this wrong, but it will take quite a bit of internal refactoring before end users can see tangible benefits.

No Fine-grained Unit Tests

My open source projects tend to provide an excellent snapshot of my automated testing philosophy at the time I initially developed the project. With AutoMapper, I wrote all unit tests in a BDD, context-specification style, driving all functionality through the static Mapper class. While this is valuable, it can be difficult to verify fine-grained behavior when so many supporting classes are involved. These full-system, functional-level tests are highly valuable, but tend to fall down when driving out lower-level API behavior.
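A hypothetical test in that style (the class and method names here are illustrative of the context-specification pattern, not actual AutoMapper tests) shows the limitation: everything flows through the static Mapper class, so no supporting class is exercised in isolation:

```csharp
// Context-specification style: the class name states the context,
// each method states one expected behavior.
public class When_mapping_an_order_to_a_dto
{
    private OrderDto dto;

    public void Establish_context()
    {
        Mapper.CreateMap<Order, OrderDto>();
        dto = Mapper.Map<Order, OrderDto>(new Order { Total = 5m });
    }

    public void Should_map_the_order_total()
    {
        // Assertion syntax varies by framework; this is a sketch.
        // A failure here could originate in any of the many classes
        // behind the static facade.
        Debug.Assert(dto.Total == 5m);
    }
}
```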

Currently, I drive out behavior from a system starting with failing full-system tests that exercise the outermost layer of the system. I then drive down lower-level behavior with additional tests. When I finish, my full-system test is now the last one to pass. Having a mix of full-system tests and lower-level unit tests allows a greater flexibility of refactoring than just having only unit tests or end-to-end tests. Refactoring occurs at many different scopes and layers, and tests at varying scopes and layers cover each scenario for different kinds of refactoring.

Going forward, this is the approach I take for building new features where I introduce entirely new areas of the library. For modifying existing features, it is not worth the cognitive overhead of trying to navigate between multiple testing styles in one area of AutoMapper. Only new areas of the product will incorporate evolved testing strategies.

Feature Bloat

Zeal to make customers happy is a double-edged sword. On one hand, I want to support valid scenarios that I did not consider. Many of these scenarios are interesting and useful to situations I might run into. On the other hand, adding features that I do not personally use tends to increase the number of features I have to support with patches, updates, and support questions. The difficulty lies in understanding exactly how many might use a certain feature, and how.

I developed a new respect for product teams as a result of incorporating pull requests. Since GitHub made it so easy to pull in others’ changes, the decision on whether a feature should be included became an afterthought. That isn’t to say I didn’t carefully consider features, as I always tried to include things that seemed genuinely useful and in the spirit of the product.

The issue I ran into is that piecemeal additions created lots of small but useful features that together overwhelmed the 99 percent use cases. Waiting instead until I had numerous common feature requests to pull in would have forced me to consider whether there were global themes or abstractions to be realized.

Supporting Alternate Versions

Very early after I released the initial Alpha and Beta versions of AutoMapper, I was asked about supporting additional frameworks. Right away, anything prior to .NET 3.5 was out, since the configuration API made heavy use of lambdas. More reasonable were requests to support Silverlight. Having never written a Silverlight application, I was completely unfamiliar with the runtime and the restrictions that Silverlight development entailed.

Trying to support multiple versions of the .NET Framework in a single codebase is not a trivial task. Unit tests are different, and many features and methods are not supported. Although the assembly and class names might be the same, I encountered many areas where seemingly random methods from a Base Class Library class were not present in the Silverlight version of the assembly. Many advanced reflection techniques were also not available.

Eventually, I resorted to adding linked files and conditional compilation, which at best is a huge hack. An official Silverlight 3.0 version is supported, but it took quite a bit of work to keep both versions updated: a feature added to the .NET version needed to be added to the Silverlight version, and so on. In the future, alternate versions will be developed after major versions are released, instead of alongside them.
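The conditional-compilation hack looks something like this in a linked source file (the SILVERLIGHT symbol is the conventional define for such builds; the reflection call shown is an illustrative assumption):

```csharp
// The same file is linked into both projects and compiles
// differently per target framework.
#if SILVERLIGHT
    // Silverlight's reduced reflection surface forces a simpler
    // code path here.
    var properties = type.GetProperties();
#else
    // The full framework allows the more precise overload.
    var properties = type.GetProperties(
        BindingFlags.Public | BindingFlags.Instance);
#endif
```

Every divergence like this one has to be discovered, written, and maintained twice, which is what made keeping the two versions in sync so costly.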


The hardest part of product development, whether for an OSS product or a commercial one, is getting to version 2.0. AutoMapper’s 1.0 release was far more successful than I could have ever imagined, especially since it started with one test and one class to solve a simple problem.

It should not have been a surprise, but those early, innocent decisions on architecture greatly shaped AutoMapper’s ability to cross the chasm into the next version. However, trying to guess at what the fifth version might need would have led to an unusable product. Those decisions I don’t worry about, but some of the small things that I should have known better about turned out to be bigger hurdles than they should have been. I’m happy where it ended up, but I’ll be even happier undoing those warts.