Microservices: Software decomposition is not for performance, it is for the human brain


You can make a strong argument that most software systems would run faster as a monolith than as a microservice architecture. Just put all the microservices back together into a monolith, mentally. Replace all the REST calls with direct Java calls; this eliminates a lot of the extra work to serialize and deserialize data.

The resulting system will consume fewer resources overall, just from the reduced serialization. Latency should be lower without the network calls. If you also eliminate all the workarounds added just to make distributed transactions work, performance should be significantly higher - with proper load balancing, of course.
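To make the serialization cost concrete, here is a minimal sketch with hypothetical names (`InventoryService`, `RemoteInventoryClient` - none of these come from a real system). In the monolith the caller pays nothing beyond a plain method call; once the same call crosses a service boundary, the request and response must each be serialized and deserialized. The network hop is only simulated here, but the serialization steps are real:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical names throughout; a sketch of why the in-process call is cheaper.
interface InventoryService {
    int stockFor(String sku);
}

// Monolith: a plain Java call. No serialization, no network.
class LocalInventory implements InventoryService {
    public int stockFor(String sku) { return sku.length() * 7; } // stand-in for a real lookup
}

// Microservice: the same logical call now pays (at least) a request
// serialization, a network round trip, and a response deserialization.
// The network hop is simulated by a direct field reference.
class RemoteInventoryClient implements InventoryService {
    static final AtomicLong BYTES_SERIALIZED = new AtomicLong();
    private final LocalInventory server = new LocalInventory(); // stands in for the remote process

    public int stockFor(String sku) {
        String request = "{\"sku\":\"" + sku + "\"}";                     // serialize the request
        BYTES_SERIALIZED.addAndGet(request.length());
        String receivedSku = request.substring(8, request.length() - 2); // "deserialize" server-side
        String response = String.valueOf(server.stockFor(receivedSku));  // serialize the response
        BYTES_SERIALIZED.addAndGet(response.length());
        return Integer.parseInt(response);                               // deserialize client-side
    }
}
```

Both implementations return the same answer; the remote one simply does strictly more work (and a real one adds network latency on top).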

There are some edge cases, such as when the resulting application does not fit into DRAM, but this is rarely the case. Horizontal scalability should still work if you deploy enough copies of the monolith and correctly balance the work among them. Of course, startup time might be longer with the monolith. The risk of an Out Of Memory error in one module taking down the others is higher. Build times are longer with monoliths.

So performance or resource utilization is almost never what you gain by distributing your application. You will likely consume significantly more hardware resources than with the monolith.


Then why do we decompose our software into separate systems, even microservices?

We decompose a monolith mainly because of the human brain's limitations. We need to manage complexity, and decomposition is often the way to get subsystems that fit in the brains of the team handling that code.

If done well, decomposing should also reduce the need to synchronize with the teams handling other subsystems. You may also gain accountability - you know which team is responsible for keeping a certain functionality working.

Sometimes you can address conflicting non-functional requirements, like keeping one region of your software super-duper-stable-and-secure while having other parts where you can safely experiment with a higher risk appetite. You can also reduce the risk of a security breach propagating from one microservice to another, at least in theory.


What do you lose when decomposing a monolith?

Let's observe first that the monolith was a system because its different parts needed to communicate with each other. It would be absurd to have a monolith that does two completely unrelated things, like a chess playing platform and an online store. The monolith is there because it made sense to group all those parts to do a job.

Maybe some parts are less coupled with the rest, and it makes sense to separate them. However, there is no decomposition that eliminates communication with the rest. And after decomposition, all this communication will have higher latency and higher serialization overhead.

When choosing to extract a microservice from a monolith, always look at how many wires will remain between the extracted microservice and the rest. Each of these wires will probably need to be implemented as a REST API, and each adds implementation and maintenance overhead.

If there are too many wires that cannot be cut on decomposition, the overhead of separation might not be worth the advantages. You might end up with a distributed monolith, which is often worse than the original monolith.

Think also about transactions. Distributing a transaction over different systems is at least an order of magnitude more complex than doing it in a single system with a single database. Find a way to work without distributed transactions, or keep the transactions within the same system. There are solutions for distributing transactions, but they are rarely worth the trouble.
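A small in-memory sketch (hypothetical `Accounts` class; a `synchronized` block stands in for a single database transaction) shows why the single-system case is so much simpler: one lock, one commit, and the transfer is atomic - either both balances change or neither does.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a money transfer inside ONE system. The synchronized
// methods stand in for a single database transaction and give atomicity.
class Accounts {
    private final Map<String, Long> balances = new HashMap<>();

    Accounts() {
        balances.put("alice", 100L);
        balances.put("bob", 0L);
    }

    synchronized boolean transfer(String from, String to, long amount) {
        long src = balances.get(from);
        if (src < amount) return false; // "rollback" is trivial: nothing was touched
        balances.put(from, src - amount);
        balances.put(to, balances.get(to) + amount);
        return true;
        // If "alice" and "bob" lived in two different microservices, there would
        // be no shared lock or transaction: you would need a saga with a
        // compensating "refund" step for when the second update fails.
    }

    synchronized long balanceOf(String owner) { return balances.get(owner); }
}
```

The comment at the end is the whole point: the moment the two accounts live in different services, the easy atomicity disappears and you have to design failure compensation by hand.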


Start with a problem to solve

Think about the outcome you want to achieve by decomposing. If you define the most important problem correctly, you may find lower-hanging fruit that can be addressed first. If you don't have good indexes in your database, microservices will not solve that. If your people are not disciplined, microservices will not improve that.

Don't decompose your software just because it is trendy. Splitting a monolith is not trivial; the effort is almost always higher than originally estimated. The ongoing maintenance overhead is even harder to measure. At the very least, you should be able to measure how the system improved after splitting the monolith. Sometimes you pay the cost of decomposing and the system gets even worse than before.


Be a good surgeon

When you really have a pain that is hard to address in the existing monolith, maybe the solution is to extract that part first, into a microservice. Extracting a microservice is like removing an organ from a living being. It is not feasible to cut too many blood vessels; you need to find a way to cut only a few.

When deciding what to extract as a microservice, start "pulling" that problematic part away from the whole and observe which part is highly cohesive (a dense mesh of wires) and where the border with few wires to the rest lies. That border with few wires is where you want to make the cut that separates the microservice.

You might first want to extract a module within the monolith, to see how cohesive things really are. After you have played with different ways of separating the module, you will be in a position to extract that module into a microservice. And if other problems remain, you can repeat the process.
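The module-first approach can be sketched like this (hypothetical names: `PricingService`, `Checkout` - not from any real codebase). The rest of the monolith talks to the module only through one narrow interface, i.e. one "wire"; if the extraction later still seems justified, you swap the in-process implementation for a remote client and no caller has to change:

```java
// Hypothetical sketch of "module first, microservice later": callers depend
// only on this one narrow interface - the single remaining "wire".
interface PricingService {
    long priceInCents(String sku);
}

// Step 1: extract as an in-process module inside the monolith.
class InProcessPricing implements PricingService {
    public long priceInCents(String sku) { return 999L; } // stand-in for real pricing logic
}

// Step 2 (later, only if still justified): replace the implementation with a
// client for the extracted microservice. The interface - and every caller -
// stays exactly the same.
class RemotePricingClient implements PricingService {
    public long priceInCents(String sku) {
        // An HTTP call to the pricing microservice would go here.
        throw new UnsupportedOperationException("network client not implemented in this sketch");
    }
}

// A caller elsewhere in the monolith: it never knows which implementation it got.
class Checkout {
    private final PricingService pricing;
    Checkout(PricingService pricing) { this.pricing = pricing; }

    long totalInCents(String... skus) {
        long total = 0;
        for (String sku : skus) total += pricing.priceInCents(sku);
        return total;
    }
}
```

Extracting the module first is cheap to undo; extracting the microservice first is not. The interface also makes the wire count visible: every method on it is a future REST endpoint.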

But before making the cut... think again:

  • what problem are you trying to solve?
  • is it worth the cost of the extra complexity?
