Getting to a Middle Ground Between Monolith and Microservices

By eric

It’s difficult to think about microservices without considering the large management overhead that goes with them. In many cases, a microservice architecture might make technical sense but not business sense; these are completely separate but equally important considerations. Microservices come with more technical management because they require additional components to deploy, manage, and monitor, and there are far fewer experts at building, managing, and deploying them successfully. As with many of life’s more complex problems, I believe the answer lies somewhere in the middle of these two concepts.

When you want to get up and running, monoliths are almost always going to be the way to go. They are faster, take less experience to build and less time to deploy, and most importantly, your iterations require fewer dependent changes to be successful. The really interesting part comes when you want to begin slicing off individual well-defined chunks of the monolith to increase performance, decrease costs, or <fill in your reason here>. The elephant in the room is: how do you decide what to do, and when?

This is where I believe the middle ground usually begins. To make headway on finding this point, the questions usually start out something like:

  1. What is a useful well-defined scope of work in the context of your environment?
  2. What benefits are there from slicing that work out of the monolith?
  3. Does this create or fix other ancillary problems (code, technical, management, etc.)?
  4. What type of impact does this have on our technical debt?

Well-Defined Scope of Work

The easiest way to think of a well-defined scope of work is something that can be done entirely in a vacuum. A few good examples might be writing to an external audit system, handling a click in a clickstream, or post-processing an upload. It’s typically easier to build these things into the initial flow of data without moving anything outside of the monolith. You have all the models and connections handy, and that’s where the data enters anyway, so why not? When you need to get things up and running quickly, you don’t always have the ability to make the best technical decision; sometimes the best decision is simply the best one for the time available. But this chunk of well-defined work could be your chance to refactor that forced decision, if the juice is worth the squeeze.
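One way this slicing can start is behind a narrow interface inside the monolith itself. Here is a minimal sketch (all names hypothetical, not from the original post): audit writes go through an `AuditSink` interface, so the implementation can later be swapped from an in-process write to a message handed off to a queue consumed by a separate audit service, without touching the business logic.

```python
import json
import time
from dataclasses import asdict, dataclass, field
from typing import Protocol


@dataclass
class AuditEvent:
    actor: str
    action: str
    timestamp: float = field(default_factory=time.time)


class AuditSink(Protocol):
    """The seam: business code only ever sees this interface."""
    def write(self, event: AuditEvent) -> None: ...


class InProcessSink:
    """Today: write straight into the monolith's own store."""
    def __init__(self) -> None:
        self.log: list[dict] = []

    def write(self, event: AuditEvent) -> None:
        self.log.append(asdict(event))


class QueueSink:
    """Tomorrow: serialize and hand off to a broker for an audit service."""
    def __init__(self, publish) -> None:
        self._publish = publish  # e.g. a thin wrapper around your broker client

    def write(self, event: AuditEvent) -> None:
        self._publish(json.dumps(asdict(event)))


def handle_request(sink: AuditSink) -> None:
    # ... business logic runs here ...
    sink.write(AuditEvent(actor="user-42", action="upload"))
```

The point of the seam is that `handle_request` never changes when the audit work moves out of the monolith; only the sink wired in at startup does.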

What Are The Benefits

Now you need to decide if this all makes sense. You need at least one success metric: something you can use to figure out whether everything you did was worthwhile once it’s completed. If you are trying to decrease request latency, you should be monitoring request durations. If you want to decrease cost, then you should know the cost beforehand, the anticipated delta, and the final cost, along with the cost to develop the feature. There are many ways to prove value; the trick is actually stating that value early and following through. There will typically also be ancillary benefits (sometimes even accidental ones).
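The cost case above can be made concrete with a little bookkeeping. This toy sketch (all numbers and names hypothetical) records the baseline, the anticipated delta, and the measured result, so "was it worth it" becomes a lookup rather than a debate:

```python
def evaluate_extraction(baseline_cost: float, anticipated_delta: float,
                        final_cost: float, dev_cost: float,
                        periods_per_year: int = 12) -> dict:
    """Compare the anticipated monthly savings to what was actually measured,
    and net the first year's savings against the cost to build the feature."""
    actual_delta = baseline_cost - final_cost
    first_year_net = actual_delta * periods_per_year - dev_cost
    return {
        "anticipated_monthly_delta": anticipated_delta,
        "actual_monthly_delta": actual_delta,
        "first_year_net_savings": first_year_net,
        "worth_it": first_year_net > 0,
    }


# Hypothetical example: $10k/month before, $7k/month after, $20k to build.
result = evaluate_extraction(baseline_cost=10_000, anticipated_delta=2_500,
                             final_cost=7_000, dev_cost=20_000)
```

Stating `anticipated_delta` before the work starts is the discipline the paragraph above is asking for; comparing it to `actual_monthly_delta` afterward is the follow-through.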

More Components, More Problems

Since there is almost never such a thing as a free lunch, you need to make sure you know what you are giving up (if anything) and what you are getting. Maybe you know this change requires you to factor your models out into a reusable library. You can always use that opportunity to test things about your models that were initially neglected.
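As a hypothetical sketch of that opportunity: once a model is extracted into its own package (imagine a `shared_models` module imported by both the monolith and the new service), it can finally be tested in isolation, and invariants that were neglected while it sat deep in the monolith can be enforced.

```python
from dataclasses import dataclass


# shared_models/upload.py — extracted from the monolith so both sides
# import one definition instead of each keeping a private copy.
@dataclass(frozen=True)
class Upload:
    owner_id: int
    size_bytes: int

    def __post_init__(self) -> None:
        # A validation rule that was never checked inside the monolith.
        if self.size_bytes < 0:
            raise ValueError("size_bytes must be non-negative")
```

A standalone test suite for this package is exactly the kind of ancillary benefit the paragraph above describes: it costs a little now and pays off every time either side touches the model.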

There is also the clear management headache of having to pay attention to more components. That’s never easy, especially at the beginning, when you are first trying to assess the cost of monitoring more than just a single monolith. There is always a cost to the extra mindshare, even if you’ve managed large infrastructures before. If you’ve never done it before, then I encourage you to read up on monitoring tools. Your hands are now more full than you realize.

Technical Debt

There are many ways to look at technical debt. For the sake of brevity, let’s use Tami Reiss’s strategy for categorizing technical debt (if you haven’t read it, I recommend you do so now). It’s succinct and marries the ideas of business and technical debt cleanly, without assigning more importance to one consideration than the other. The strategies fall into three categories: regular refactoring, periodic rearchitecting, and platform transitioning. The piece of the pie we’re talking about here is a rough mix of periodic rearchitecting and platform transitioning: you can use the optimization to reduce your technical debt while refactoring a piece of the infrastructure.


Ultimately, the answers are going to need to come from a place that balances the needs of:

  1. The business, both financially and from a product perspective.
  2. The technology team for the morale boost of building things intelligently and not accruing too much additional debt.
  3. The product team, by avoiding situations where the technology will impede the ability to deliver future features in a timely fashion.
  4. Any other team that has a stake in what’s being worked on.

Once you find that balance, you should be able to make much clearer decisions on whether the choice to move away from the monolith makes sense for you. That doesn’t mean the news of the decision will be easier to deliver to those negatively affected. But it does help you figure out what’s important to you and provides the framework to decide whether this is a good move for your organization. Never underestimate the value of a consistent, data-driven decision-making process that others can rely on.
