DevOps #7: Isolate Integration Complexity

How Will We Debug the Monolith?

If your teams are following the approach to escaping the monolith that Deep Roots has laid out in this year’s newsletters, you can probably see a big problem looming. Your teams are gaining independence at the cost of increasing integration complexity. Who is going to manage that complexity as every team flees the monolith? And how?

Let’s look at our example organization to understand the complexity and its solution.

  • Nearly 2000 components (125 teams * 10+ components per team)
  • Each component has a low-fixed-point API (general and complex for client)
  • Each component library is independently updated, built, and verified (more complexity coordinating pre-release, whole-system activities, and timing)

Each of these factors makes the integrations more durable, but also makes reasoning about the monolith more complex. And the monolith doesn’t have dedicated staff.

Each team has minimized the impact of its code changes on the system. The next step is for each team to also manage the integration complexity internally, so that the system remains simple.

Consider a team doing a good job of minimizing the impact of its code changes: it replaces a specific tax-calculation function call with one of several handlers for a generic PrepareToCheckOut event. The problem is that when a cart rings up the wrong total, it’s hard to identify which component, or which interaction between components, caused the bug. If the team also manages the integration complexity, however, the API can remain general and isolate changes while the integration code stays easy to debug.
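To make the trade-off concrete, here is a minimal sketch of that event-based design. PrepareToCheckOut comes from the example above; the handler names, the 8% tax rate, and the flat shipping fee are invented for illustration.

```python
# Illustrative sketch: a generic PrepareToCheckOut event with several handlers.
# Tax is no longer a specific function call; it is just one handler among many.

class PrepareToCheckOut:
    """Generic event carrying the cart subtotal; handlers add adjustments."""
    def __init__(self, subtotal):
        self.subtotal = subtotal
        self.adjustments = []

def tax_handler(event):
    # Formerly a direct calculate_tax(cart) call; 8% rate is made up here.
    event.adjustments.append(("tax", round(event.subtotal * 0.08, 2)))

def shipping_handler(event):
    event.adjustments.append(("shipping", 5.00))

HANDLERS = [tax_handler, shipping_handler]

def ring_up(subtotal):
    event = PrepareToCheckOut(subtotal)
    for handler in HANDLERS:   # any handler can affect the total...
        handler(event)
    # ...so a wrong total could come from any handler, or their interaction.
    return event.subtotal + sum(amount for _, amount in event.adjustments)
```

The generality is exactly what makes debugging harder: when `ring_up` is wrong, every registered handler is a suspect.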

As such, the solution for minimizing system complexity is for each team to create a client library for their primary integration.

That solution raises several more questions:

  • How do we create a fit-to-purpose client library that narrows our general API to exactly what the monolith needs?
  • What do we do when our component is consumed by another component?
  • Who maintains the client library and how?
  • How do we test the integration? Who tests it?

Create a Simplifying Port and a Unit-Tested Adapter

We will create a client library that consists of three parts:

  • Port: the client library’s API, consumed by the monolith.
  • Component API: the API for the component itself, as built in last month’s newsletter.
  • Adapter: the code between, which adapts the component API to the Port.

The Port’s job is to be as simple as possible. This is how the monolith wishes your component behaved. A good port encapsulates all of the team’s complexity, leaving the monolith’s code simple. Among other considerations, a Port should be a high-fixed-point design.

The component API’s job is to encapsulate all possible change. This exposes the entire complexity of the component’s possible implementations in a uniform way, so that the team is free to change its component independently. It should be a low-fixed-point design.

The Adapter’s job is to manage the component’s complexity. It can be fully unit tested, yet still manage unstable networks, parallelism, and other challenges. The adapter implements the Port’s simplified world using the component API’s generalized, yet complex, capabilities.
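Here is one way the three parts might fit together, sketched with a hypothetical inventory component. The class and method names (`InventoryComponentApi`, `StockPort`, `in_stock`) and the retry policy are all assumptions for illustration, not prescribed by the recipe.

```python
import time

# Component API (low fixed point): general and complex. A real one would make
# network calls that can fail transiently; this stub just returns a response.
class InventoryComponentApi:
    def query(self, request: dict) -> dict:
        return {"status": "ok", "items": {request["sku"]: 3}}

# Port (high fixed point): the API the monolith consumes -- as simple as the
# monolith wishes the component behaved.
class StockPort:
    def in_stock(self, sku: str) -> bool:
        raise NotImplementedError

# Adapter: implements the simple Port on top of the general component API,
# absorbing retries and response unpacking so the monolith never sees them.
class StockAdapter(StockPort):
    def __init__(self, api: InventoryComponentApi, retries: int = 3):
        self.api = api
        self.retries = retries

    def in_stock(self, sku: str) -> bool:
        for attempt in range(self.retries):
            try:
                response = self.api.query({"kind": "availability", "sku": sku})
                return response["items"].get(sku, 0) > 0
            except ConnectionError:
                time.sleep(0.01 * attempt)  # back off, then retry
        raise RuntimeError(f"inventory unavailable for {sku}")
```

The monolith only ever calls `in_stock`; everything messy lives in the Adapter.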

Not Quite a Client Library

This probably seems similar to the many services that create their own client library. Our approach has one key difference: we don’t create a general client library.

In the standard approach, the service provider creates the client library, and then each consuming application creates integration code to interact with that client library. Instead, we are creating one client’s integration with our service API directly. Other clients would create their own integrations.

That difference allows us several simplifications:

  1. We don’t need to support the full general capability of our component. We can expose, build, and test only what the one consumer needs.
  2. We don’t need to separate the client library from the specific consumer’s integration code. We can build it all together with fewer abstractions and moving parts.
  3. We can optimize the Port to use whatever design concept this one consumer wishes the world was like. That allows us to encapsulate real-world complexity into the Adapter.
  4. We don’t need to support the client library forever. We contribute the integration code back to the one consumer — the monolith in our case — and then move on. That project then owns it just like any other integration code.
  5. The Adapter is just normal classes and methods. It is easy to unit test, even though it contains all the complexity for dealing with our component.
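Point 5 is worth seeing. Because the Adapter is plain code, a hand-rolled fake of the component API is enough to unit test even its hardest behavior, such as retry-on-failure. Everything below (the fake, the minimal adapter, the SKU names) is illustrative, not from the recipe.

```python
# Unit testing the Adapter with a fake component API: no network, no real
# component, yet the retry path is fully covered.

class FlakyFakeApi:
    """Fails the first call, succeeds on the second -- a simulated blip."""
    def __init__(self):
        self.calls = 0

    def query(self, request):
        self.calls += 1
        if self.calls == 1:
            raise ConnectionError("transient network blip")
        return {"items": {request["sku"]: 2}}

class StockAdapter:
    """Minimal adapter: retries the general API, answers a simple question."""
    def __init__(self, api, retries=3):
        self.api = api
        self.retries = retries

    def in_stock(self, sku):
        for _ in range(self.retries):
            try:
                return self.api.query({"sku": sku})["items"].get(sku, 0) > 0
            except ConnectionError:
                continue
        raise RuntimeError("inventory unavailable")

def test_adapter_retries_through_one_failure():
    fake = FlakyFakeApi()
    assert StockAdapter(fake).in_stock("SKU-1") is True
    assert fake.calls == 2  # one failure, one success
```

The test runs in milliseconds and pins down exactly the complexity the monolith no longer has to carry.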

There are two approaches to building this integration. Both are described in this month’s recipes, but you will only need to do one.

Either approach will isolate your component’s complexity from the specific consumer. Your team will move entirely out of the monolith, leaving behind only a simple Port and a well-tested Adapter with little reason to change. When other components wish to use your component directly, they can use the monolith’s Port and Adapter as a starting point for their own integration.

Create a single client integration, not a reusable client library.

Check out our developing Legacy Cookbook to access the recipe to isolate your component’s integration complexity, as well as other recipes coming in the future!

Keep Integration Simple

Each team will still be able to update its component independently, and now everyone will be able to reason about the monolith as a whole. Interactions between components will be easier to debug. Each client uses the API (Port) which maximally simplifies the code for that client. The integration code is unit tested. The component authors can contribute their deep expertise to create the integration code, without having to maintain all integrations forever.

Benefits:

  • Each client is as simple as possible.
  • Fewer integration bugs.
  • Fewer interaction bugs between components.
  • Clear code ownership between teams.

Downsides:

  • Each new consumer must consider its own needs and custom-design its own Port.
  • Can result in code duplication for common functionality. If that happens, the consuming teams can refactor out a shared library.

Demo the value to your team and management…

Show three things at your sprint demo:

  1. Example: Integration code before and after.
  2. Progress: Number of Component API calls from outside the Adapter.
  3. Impact: Example debugging session.

Example: Integration code before and after

Show a single place where the monolith uses your Port. Show the integration code that was there before — the code you created when you made your API more variable. Then show that the complicated integration code still exists, just in the Adapter. Show its unit tests, verifying that all parts of the monolith will get the same, well-tested behavior from the Port.

Your goal is to show that the complex code still exists, but is now in the Adapter. The monolith code is simpler. The separation of concerns allows you to slightly simplify the Adapter and test it much more independently.
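A before/after at a single monolith call site might look like the following sketch. The function names and retry policy are hypothetical; the point is only the shape of the change.

```python
# Before: the monolith call site owns the request format, the retry policy,
# and the response unpacking for the component's general API.
def can_ship_before(api, sku):
    for _ in range(3):  # retry policy lives in the monolith
        try:
            response = api.query({"kind": "availability", "sku": sku})
            return response["items"].get(sku, 0) > 0
        except ConnectionError:
            continue
    raise RuntimeError("inventory unavailable")

# After: the same behavior lives in the Adapter behind the Port; the
# monolith's call site is a single readable line.
def can_ship_after(stock_port, sku):
    return stock_port.in_stock(sku)
```

Both functions behave the same; only the second leaves the monolith simple.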

Impact: Example debugging session

The impact of this change is much harder to measure than that of previous newsletters’ changes. I don’t see any way to create a chart, because the payoff is delayed and diffuse: you simply spend less time debugging and reading code than you would have before.

To help your audience get an idea of this impact, walk them through a mock debugging session. Assume that there was an interaction bug between your component and one from another team (or some non-extracted code). Using the code as it was before your change, walk them through the discovery process it would take to find the bug, including time estimates. Then walk them through the much simpler discovery process you would face now, again with time estimates.

Finally, approximate the number of integration or interaction bugs that your organization debugs each year. Multiply to estimate the annual time savings, were all of the code to be in components with clean Ports.

Lastly, let them know that, unfortunately, you have found no way to measure achieved impact, so you can’t show the results of incremental improvements. You can predict that your team will spend less time debugging cross-team issues, but you can’t show it happening.

About Deep Roots

Deep Roots is on a mission to help you prevent software bugs. They have identified the hazards that make bugs happen. Their Code by Refactoring process shows you what behaviors create those hazards, and what specific shifts will help you change those hazardous conditions while continuing to deliver software at your current speed.

Need help addressing this topic in your organization?

Reach out to Sales@DigDeepRoots.com to discuss your situation and how we can help.