Domain Orchestrator - A retrospective
A look back at what went well, and things I would change or do differently
This post is part of the Domain Orchestrator series.
The architecture we had been working with for the last few years was fundamentally quite simple. As a Risk Segment, we had a bunch of microservices, each with a specific purpose (checking credit scores for a company, checking for compliance, and so on). When an order was placed, our primary consumer coordinated several risk checks directly until it was happy to either accept or reject the order attempt.
Already, you could argue there are some issues. Our consumer had accumulated domain-specific integration logic: it had to integrate with n services, and every additional service added complexity.
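To make that concrete, here's a minimal sketch of the shape of that "smart consumer". The client interfaces and type names below are illustrative, not our real contracts:

```typescript
// A minimal sketch of the "smart consumer" setup. All names here
// (OrderAttempt, CreditScoreClient, ComplianceClient, ...) are illustrative,
// not our real contracts.

interface OrderAttempt {
  orderId: string;
  companyId: string;
  amount: number;
}

interface RiskCheckResult {
  passed: boolean;
  reason?: string;
}

// One client interface per risk service the consumer has to know about.
interface CreditScoreClient {
  check(order: OrderAttempt): Promise<RiskCheckResult>;
}

interface ComplianceClient {
  check(order: OrderAttempt): Promise<RiskCheckResult>;
}

// The consumer coordinates every check itself: each new risk service means
// another dependency, another call site, and more Risk-domain knowledge
// leaking into a team that doesn't own it.
class OrderConsumer {
  constructor(
    private creditScore: CreditScoreClient,
    private compliance: ComplianceClient,
    // ...one dependency per risk service
  ) {}

  async placeOrder(order: OrderAttempt): Promise<"accepted" | "rejected"> {
    const results = await Promise.all([
      this.creditScore.check(order),
      this.compliance.check(order),
      // ...n calls for n services
    ]);
    return results.every((r) => r.passed) ? "accepted" : "rejected";
  }
}
```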
As the business grew, we onboarded new partners, attracted new customers, and, over time, our risk assessment requirements grew with them. We had to evolve with the market; however, we found ourselves with a problem: change was tough.
Let’s suppose a new partner sends us a new data point with each attempted order - say, customer_shoe_size. This is an extremely important piece of information for us, and we need it to help evaluate risk in 4 of our risk services. What are the implications of this seemingly simple change?
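In the smart consumer setup, that one field has to be accepted from the partner, mapped by the consumer, and threaded through to each of the 4 service contracts that need it - coordinated changes across several codebases and teams, for a field that means nothing to the consumer itself. A hypothetical sketch of that ripple (the contract names are illustrative):

```typescript
// Hypothetical ripple of adding customer_shoe_size in the smart consumer setup:
// the partner-facing contract, the consumer's mapping code, and every affected
// risk service contract all change in lock-step.

interface OrderAttempt {
  orderId: string;
  companyId: string;
  amount: number;
  customerShoeSize?: number; // 1. accepted from the partner
}

interface CreditScoreRequest {
  companyId: string;
  amount: number;
  customerShoeSize?: number; // 2. added to the credit score contract
}

interface ComplianceCheckRequest {
  orderId: string;
  customerShoeSize?: number; // 3. ...and the compliance contract
}

// 4. ...and the remaining affected service contracts, plus the consumer's
//    mapping code for each call, plus coordinated releases across the teams
//    that own every one of these pieces.
```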
If this kind of change is required once or twice, you can swallow it. However, the more often it happens, the more frustrated everyone gets.
On the other side of the coin, let’s say we want bespoke functionality for a given customer. Perhaps we want to enable an experiment, support richer or more dynamic interactions between checks, or adjust the sequencing of checks. Suddenly, this becomes a major initiative. But the burden of change doesn’t fall on the Risk team; it falls entirely on the consumer. For them, this is a large, complex change that brings them no direct value and doesn’t contribute to their team goals.
All in all, regardless of the size of the change, it was becoming too complex to manage in the current setup: the “smart consumer” setup.
The idea we had been discussing for a number of years was to introduce an “Orchestrator”, which would become the single, unified entry point for our primary consumer. Behind the orchestrator, we (in the Risk Domain) would have full authority over exactly what happens, and we would be largely in control of changes going forward, so long as we received all of the information we needed up front.
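Here is a minimal sketch of the shape we were aiming for - the consumer talks to one boundary, and the Risk domain decides everything behind it. The types and wiring below are illustrative, not the actual design (that comes in the next posts):

```typescript
// Illustrative orchestrator boundary: the consumer makes one call, and the
// Risk domain owns everything behind it. All names are hypothetical.

interface RiskAssessmentRequest {
  orderId: string;
  companyId: string;
  amount: number;
  // Partner-specific data is passed through up front; only the Risk domain
  // decides which checks consume it.
  attributes: Record<string, unknown>;
}

interface RiskCheckOutcome {
  passed: boolean;
  reason: string;
}

interface RiskAssessmentResult {
  decision: "accepted" | "rejected";
  reasons: string[];
}

// The single integration point exposed to the consumer.
interface RiskOrchestrator {
  assess(request: RiskAssessmentRequest): Promise<RiskAssessmentResult>;
}

// Behind the boundary, the Risk team can add, remove, reorder, or experiment
// with checks without the consumer changing anything.
class DefaultRiskOrchestrator implements RiskOrchestrator {
  constructor(
    private checks: Array<(req: RiskAssessmentRequest) => Promise<RiskCheckOutcome>>,
  ) {}

  async assess(request: RiskAssessmentRequest): Promise<RiskAssessmentResult> {
    const outcomes = await Promise.all(this.checks.map((check) => check(request)));
    const failures = outcomes.filter((o) => !o.passed);
    return {
      decision: failures.length === 0 ? "accepted" : "rejected",
      reasons: failures.map((o) => o.reason),
    };
  }
}
```

The important part isn’t the implementation; it’s that the contract the consumer sees stays stable while everything behind it is free to change.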
On paper, this looks great - but then, a lot of things do in isolation. When we consider the full picture, the downsides begin to reveal themselves, and it becomes a question of “is it worth the investment?”.
Let’s start with the cons, downsides, or dangers of this approach:
Honestly, whilst writing that list of cons, I began questioning whether we had made the right decision by taking this route! Let’s take a dive into the pros of this approach:
The consumer replaces n integrations to our services with a single integration to the orchestrator. Their implementation is simpler, and our API contract is a single, clean, stable boundary between the two systems.

The “cons” were significant and real, but the “pros” represented a necessary shift. They weren’t just about code; they were about giving our team the autonomy to build, experiment, and move as fast as we can. Despite the risks, in our minds, the investment was worth it.
In the next post, we’ll dive into the blueprint for the orchestrator itself: how we approached it using Domain-Driven Design and Ports & Adapters, and the key technical decisions we made to reduce some of the risks noted above.