Interconnecting the past and the future – a monolith with microservices in mind

Stuck with a big fat monolith? The company has just recently invested time, effort and money in a brand new piece of software, and it works correctly, or at least it looks like it does… Then a new marketing campaign, an attractive discount in your webshop (or a pandemic lockdown) results in a number of requests that exceeds every prediction.

There are no new resources available to “reinvent the wheel” from scratch, to rewrite and redesign the system around the latest best practices: microservices, service mesh, auto-scaling…

Sound familiar?

Microservice architecture brings a lot of changes and improvements to software design and the development process. Flexibility in the development phase and later in operations is a game changer. Developers and their teams are focused on specific problems, challenged to identify and isolate specific functionalities and to use all available resources (languages, frameworks, tools) in order to respond in the best possible way. And even if they miss or overlook something, adding new features is simple and fast.

The biggest impact of this approach is in operations. The opportunity to perform fine system tuning, adding just enough resources to a slow-performing part, results in a global system performance improvement without changing a single line of code.

Unfortunately, the infrastructure requires changes too. Running and handling containers in a secure way, especially in production and on premise, can be quite a demanding task.

Cloud users have a much easier task: a few very stable and usable orchestration services (usually offered as managed services) will do the work. AWS offers two managed container orchestration services, Elastic Container Service and Elastic Kubernetes Service, both very reliable and already integrated with monitoring and logging systems.

So, insist on a system redesign and enjoy the long-term benefits of new technologies, or try to use the best of both worlds?

Enter the container

Although containers are designed to be lightweight, fast and easy to move and transfer, nobody forbids applying the same approach to a monolithic structure. Some of the flexibility will be lost (images are usually bigger), but many of the benefits remain. A containerized app usually has very similar performance to a traditional deployment, while being much easier to transport and distribute among multiple machines, which in some cases can be an easy, quick-fix solution. So, this step tends to be an easy one: create a Dockerfile, build a container image and store it in a registry.
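
As a minimal sketch, assuming the monolith is a Java application packaged as a single fat JAR (the base image, artifact name and port below are assumptions, not taken from the article), the Dockerfile can stay very simple:

```dockerfile
# Hypothetical example: a Java monolith packaged as one fat JAR.
# Base image, JAR name and port are assumptions; adjust to your own build.
FROM eclipse-temurin:17-jre

WORKDIR /app

# Copy the already-built artifact into the image
COPY target/monolith.jar /app/monolith.jar

# Port the application listens on
EXPOSE 8080

ENTRYPOINT ["java", "-jar", "/app/monolith.jar"]
```

From there, the usual docker build, docker tag and docker push against your registry (ECR, for AWS users) complete the step.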

The power of the orchestration service

Managing a number of containers is a challenge, especially in a production environment! Where and how to perform a deployment? How to monitor and upgrade? Nowadays, there is quite a number of options for this problem: Kubernetes, Swarm, AWS ECS, Helios…

In our case, the decision was to go with Elastic Container Service (ECS), for simple reasons: we had good knowledge of the technology and an already solved CI/CD procedure, which was quite handy at the moment. So, one ECS task definition, managed by a single ECS service, exposed to the outside world via a load balancer, and that is all. And as a benefit, when needed, the system can easily be scaled simply by running more task instances in the service, and the load will be evenly distributed.
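
As a rough sketch of that setup (the cluster, service and target group names, the ARNs and the region are hypothetical placeholders, not our exact configuration), the AWS CLI calls look roughly like this:

```sh
# Hypothetical names and ARNs; a sketch of the setup, not the exact configuration.

# Register a task definition pointing at the image pushed to the registry
aws ecs register-task-definition \
  --family monolith \
  --container-definitions '[{
    "name": "monolith",
    "image": "123456789012.dkr.ecr.eu-central-1.amazonaws.com/monolith:latest",
    "memory": 2048,
    "portMappings": [{"containerPort": 8080}]
  }]'

# Create a single ECS service behind a load balancer target group.
# Scaling later is just a matter of raising --desired-count
# (or attaching Service Auto Scaling).
aws ecs create-service \
  --cluster webshop \
  --service-name monolith \
  --task-definition monolith \
  --desired-count 2 \
  --load-balancers 'targetGroupArn=arn:aws:elasticloadbalancing:eu-central-1:123456789012:targetgroup/monolith-tg/0123456789abcdef,containerName=monolith,containerPort=8080'
```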

Break the monolith!

Our monolith is performing all the work, but only some specific tasks are slow and throttle overall system performance. Microservice design focuses on a specific system functionality implemented via a dedicated service. Our idea was to apply the microservice pattern and create multiple instances of the same application, one for every problematic functionality of the system. Each of those instances is later used only for its specific task.

The biggest challenge is to identify the slow services (or parts) of the system! Sometimes it is authorization, sometimes cart manipulation, and most often integration with third-party systems.

In our case, a simple JMeter run with a standard set of tests did the magic: the problematic services, especially under heavy load, were easily identified. Deeper application log analysis only confirmed what was detected in performance testing. Once identified, process isolation looks like the easiest task of all! Run a new service instance, using the same Docker container image, for every problematic process, just with automatic scaling this time.
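
For illustration (the test plan and output paths are placeholders), a headless JMeter run that produces an HTML report, which makes the slowest endpoints easy to spot, looks like this:

```sh
# Hypothetical test plan and output paths: run JMeter in non-GUI mode
# and generate the HTML dashboard to compare response times under load.
jmeter -n \
  -t webshop-load-test.jmx \
  -l results.jtl \
  -e -o report/
```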

And to connect the dots, some kind of traffic routing needs to be implemented. Quite a number of services can do the job: HAProxy, Nginx, an L7 load balancer…

AWS users can use an Application Load Balancer, as this service offers everything that is needed out of the box. Simple L7 path-based routing will forward traffic to a specific target group/ECS service without affecting other parts of the system.
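
As a sketch (the listener and target group ARNs, the path pattern and the priority are placeholders), adding such a rule with the AWS CLI could look like this:

```sh
# Hypothetical ARNs and path: route one slow functionality (here, exports)
# to its own target group and ECS service, leaving the rest of the traffic
# on the default rule.
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:eu-central-1:123456789012:listener/app/webshop/0123456789abcdef/0123456789abcdef \
  --priority 10 \
  --conditions Field=path-pattern,Values='/export/*' \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:eu-central-1:123456789012:targetgroup/export-tg/0123456789abcdef
```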

The number of running containers will rise and hardware demands will be higher, but administrators will now be able to fine-tune parts of the system independently.

Put the fun into deployment

CI will offload the burden from your shoulders. Utilize all levels of the testing pyramid, test whatever is possible, and set a goal: every release needs to be production ready. The deployment part can be tricky, especially when you need to redeploy multiple services (containers) without downtime. We use Jenkins, but any other tool will do. Rolling update, blue/green, whatever suits you, but deploy as much as you can and as often as you need.
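
With ECS, a rolling update of each affected service is a single call once the new image is in the registry; below is a sketch with hypothetical cluster and service names (a pipeline step, for example in Jenkins, simply wraps something like this):

```sh
# Hypothetical cluster and service names: trigger a rolling redeployment of
# every service that runs the new image. Assumes the task definition points
# at a mutable tag such as :latest; ECS replaces tasks gradually, so there
# is no downtime.
for service in monolith auth export; do
  aws ecs update-service \
    --cluster webshop \
    --service "$service" \
    --force-new-deployment
done
```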

And one last thing: implement caching! Use caching and use it always, for sharing sessions, for search results. It is much faster, the results are visible instantly, and it will put a smile on the face of your managers…
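
The article does not prescribe a specific cache; one common option when several instances of the same application need to share sessions and cached results is a small managed Redis node, for example via ElastiCache (the name and node size below are placeholders):

```sh
# One possible choice, not prescribed above: a small Redis node shared by all
# application instances for sessions and cached search results.
aws elasticache create-cache-cluster \
  --cache-cluster-id webshop-cache \
  --engine redis \
  --cache-node-type cache.t3.micro \
  --num-cache-nodes 1
```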

Not a perfect solution, and it sounds more like a workaround. But in some cases, especially when things become problematic, such as a sudden increase in traffic, a high number of users and limited hardware resources, breaking the monolith and isolating a problematic process puts us in a position to easily scale only the specific, problematic parts of the system and to enhance system throughput without changing the code itself or making a major architectural change.

A better approach is to adopt modern design patterns and rewrite the code from scratch, but that is a topic for some other discussion.
