What Happens When You Don't Define Domain Boundaries
Updated: Mar 17
I hosted Neil Syrett on the podcast series - How to get started with API Contract testing. In the first episode we talked about breaking down monoliths, public APIs, defining domain boundaries, modern testing principles, testing at scale, bi-directional contracts, and the difference between contracts and schemas.
Breaking down monoliths into microservices and Testing at Scale
Neil: My first exposure to microservices in earnest was when I joined ASOS back in 2015. It was at a time when they were moving from a traditional on-premise monolith into the cloud, using Microsoft Azure and looking at how they could break down their monolith into discrete microservices, with all the pains and learnings that came with it: getting used to failures in the cloud, understanding how to manage that and how to test for those different scenarios. Talking about massive scale, Black Friday weekends were always fun. When thinking about testing for this type of scale, I think, as with any kind of testing, it's about understanding requirements. Performance and load testing involve a potentially different skill set and different tooling, but the fundamentals of testing are the same.
Neil: So what I've always found is really key is having the right level of telemetry and the right observability strategy in place in your application. You can ask your stakeholders what they think the load will be or what the load profiles will look like, but often the proof is in the pudding: if you've got something running in production and customers are using it, having the right telemetry to understand what that load profile looks like is really key. It lets you build up the right load test model, the right model for your performance tests that are going to mimic what you're seeing in production.
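The idea Neil describes, deriving the load test model from production telemetry rather than guesswork, can be sketched roughly. Everything below is a hypothetical illustration: the timestamps stand in for request logs you would pull from your observability platform, and a real model would use far more data and dimensions (endpoint, payload size, geography).

```python
# Hypothetical sketch: deriving a simple load profile from production telemetry.
from collections import Counter

# Seconds since the start of an observation window, one entry per request.
request_timestamps = [0, 0, 1, 1, 1, 1, 2, 2, 3]

# Bucket requests per second to find the observed peak and average rate.
per_second = Counter(request_timestamps)
peak_rps = max(per_second.values())
avg_rps = len(request_timestamps) / (max(request_timestamps) + 1)

# The load test can then target the observed peak rather than a guess.
print(f"peak: {peak_rps} rps, average: {avg_rps:.2f} rps")
```

The point is only that the test targets come from observed behaviour, not stakeholder estimates.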
Defining Domain Boundaries
Neil: ClearBank built something to get live, and since then they've broken that down into these microservices. At the moment we're having constant discussions about where our domain boundaries lie and how we reach agreement on that.
Lewis: That's one of the big things around microservices: understanding what you own, what other teams own, and where that responsibility lives. In environments working with microservices, you don't necessarily have a team that cares about the overarching journey for the user; marketing care about that, but not necessarily a software team. So I think that's definitely an area of contention when it comes to testing, but it is wholly necessary.
Neil: If you're trying to build something and you're always concerned about the whole user journey, that can wear you down and you can lose focus. I find those clear domain boundaries really helpful because they help you focus and deliver on your objectives rather than getting bogged down or sidetracked. We're probably not anywhere near as mature as we'd like to be in terms of contract testing, and the primary challenge we have at the moment is defining our domain boundaries, because until we're clear on where those boundaries are, it's hard to think about where the contracts are and where the testing is required. Although we do have a distributed system, it is still treated as a single product and those domain boundaries aren't clear. So me going in and trying to insert contracts and contract tests in the middle of what I think are two separate domains might stand in the way, because that might not be a boundary we want to enforce; it might be a more fluid boundary that gets refactored or changed over time. So that's one thing I've had to take a step back on.
Test duplication at different levels
Lewis: I think there can also be quite a lot of overlap, and where that overlap lives you can end up duplicating work: tests maintained by different teams that cover the same thing. So I think you can find efficiencies there as well.
Neil: Yeah. When I joined ClearBank, coming up for two years ago, they had broken up their application into discrete microservices, but they still had a lot of end-to-end regression test packs that would exercise the full user journey and all the different product offerings they had. Part of what I've been advocating for is to clarify those boundaries and break up the tests as well, making sure we're testing close to the domain and not duplicating our testing effort by repeating the same exercises. That's been done with the microservices, and obviously there's much clearer ownership as well, and better maintainability and all the rest of it.
Lewis: I don't know if you were on the ground when I was there, but the biggest thing I remember about my time at ASOS was Black Friday sales: being on the 24/7 support rota, having to get in at midnight and then not leaving until 8am the next day. Did you get involved with that yourself?
Neil: Fortunately I didn't have to come in out of hours on Black Friday, but I did do it a few times for old legacy releases before we broke up into microservices. They're an international brand now, but at the time they were very much focused on the UK market, and if they had to choose an hour for the website to be off, doing that in the middle of the night UK time was preferable. When we had scheduled releases, once every month or six weeks, they would schedule them in the middle of the night and I would have to come in in the early hours to support that process.
Lewis: Wow, that takes me back for sure, to the time when you're moving stuff from one server to another, before the days of blue-green deployments and everything like that.
Differences between Test Approach with Monolith and with Microservices
Neil: I think it offers a really good opportunity for a tester, or a developer who's interested in testing, to get much closer to the implementation: to stop looking at the entire system as a black box, get deep into the dirty side of the program, and understand how to adapt your test strategy to test in the most appropriate way rather than looking outside in. We can start looking at how it's being implemented and at more intelligent, faster ways to test, looking at things like component testing and integration testing. And not just the functional side of things: we were talking about performance, and rather than only testing the performance of the entire system, we can test the performance of a single service, which makes it much faster to find where the bottlenecks are in the system and much easier to diagnose those issues as well. So I think that's the main difference: rather than always looking at the application from the outside in, in a more traditional testing mindset, you can get down into the detail and devise a test approach which suits the application.
Modern Testing Principles
Neil: Our role is more in the model of a coach or mentor to the team and less as an actual individual contributor, if you like. So although I do still write code individually, it is much more about how I coach the team into improving its testing practices, being the test consultant. When they have issues or questions, they can come to me or bring me into their meetings, into their refinement or planning sessions, to consult me on the best way to go about testing something. I think Alan Page was one of the people who came up with the Modern Testing principles, and I've used those extensively to educate other people about what my role is, because that's how the role was pitched to me when I was hired. I find them a useful tool to say: you may have misconceptions from how you've worked with testers in the past, about the skills or the mindset they come with, but what I'm being asked to do in this role is much more advocating for good practices around testing and facilitating and partnering with engineering teams, not necessarily just offering a testing service or being another member of the team working on tickets on the board.
Lewis: So how do developers react to that? How do they find not having that safety net, you know, of having a tester to fall back on?
Neil: It's one of those things: every team, every individual is different. I'd say the lion's share of the engineers I work with have a quite well-ingrained testing mindset; they're all doing TDD, or at least some kind of unit testing. So they're having to think about the testability of their application; they're having to think about testing. To go from writing unit tests to thinking about higher-level tests is not too much of a leap. The biggest challenge I always find is just a difference in mindset. If I'm sat in a refinement meeting and someone says we need this new feature and describes it to us, people from a testing background automatically start questioning those requirements, challenging assumptions, thinking about good questions to ask, understanding how that feature might integrate with other features or other product offerings and how it might be used by customers. It's bread and butter to people who come from a testing mindset. Developers, not always, but a lot of developers, will naturally think about how they're going to implement it and be less inclined to really question the requirements. So I find the biggest challenge is often having to slow the team down.
Neil: In terms of where I see the benefits for ClearBank at the moment with contract testing, it's on our public interfaces. ClearBank's main product is a public-facing web API, and other banks, fintechs and financial service providers integrate directly with our public APIs, where we offer a bunch of different payment services. So those external APIs have contracts: well-defined, documented contracts. Obviously we do test them, but we don't have a specific strategy around contract testing, which means that if we're not on our game we could potentially make a breaking change to the interface, and that would obviously have customer impact. So that's one area where I would like ClearBank to invest a bit more in contract testing: really refine that strategy and make sure we're protecting our public interfaces.
Lewis: That's a really good use case. People often think about contract testing from an internal perspective, what you own and have control over, like your web app interacting with your API service, and defining those contracts. But having tests to say, this is what's going out to the public, and making sure you conform with those, is a really good use case. Pactflow are about to introduce bi-directional contracts, which will allow you as the provider to publish those contracts and use them in that way. The other thing, for the consumer, is that you can use those contracts as your documentation: this is how we provide you with the information, this is what the responses look like. That's living documentation at the same time, and it may come in useful down the line.
Contracts vs OpenAPI
Neil: One thing I often get asked is: why do we need to define this thing twice? Why do we need to define it in Swagger or OpenAPI and in your funny Pact language?
Lewis: They serve different purposes, and as you mentioned, there's the consumer-driven part. The Swagger docs are usually generated once you've built the API, so why would you go back and create the contracts after that? It comes down to static vs dynamic. API docs are your static form of documentation: they're not going to break if you make a breaking change to that contract, they're just going to generate a new document or flag that some attributes have changed. They won't break your release process. That's where contract tests come in. Breaking changes can also be very subtle: you might change small things about how you respond to something, or change a field from a string to an array. That's where OpenAPI documents fall short, because they're not checking for that; they're just presenting whatever information they're given.
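To make the string-to-array example concrete, here is a minimal Python sketch, not Pact or any real contract testing tool, of the kind of type check a contract test performs and a regenerated doc does not. The field names and payload shapes are invented for illustration.

```python
# Hypothetical consumer expectation, pinned independently of the provider code:
# the consumer was built against "tags" being a single string.
consumer_expectation = {"id": str, "tags": str}

def check_contract(expectation, response):
    """Return the fields whose type no longer matches the consumer's expectation."""
    return [
        field
        for field, expected_type in expectation.items()
        if not isinstance(response.get(field), expected_type)
    ]

# The provider's old response honoured the contract.
old_response = {"id": "abc-123", "tags": "payments"}
assert check_contract(consumer_expectation, old_response) == []

# The provider quietly changes "tags" from a string to an array. Regenerated
# OpenAPI docs would simply describe the new shape; the contract check fails.
new_response = {"id": "abc-123", "tags": ["payments", "instant"]}
assert check_contract(consumer_expectation, new_response) == ["tags"]
```

A real contract test does this against the running provider, but the principle is the same: the expectation is fixed by the consumer, so the provider's change is caught rather than silently documented.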
Neil: I think it boils down to the independence of the tests. Like you said, the documentation might be auto-generated, and most likely it is, so if the code changes then the documentation changes with it. If you're relying on that for schema validation, then you're potentially going to miss a breaking change, because, like you said, your baseline is also changing. So that independence is key, and it's something I've been caught out by in the past, where you've got an integration test covering an endpoint, but that integration test is part of the application code. It's in the same solution, in the same repository, which is where it should be, but in the same light it's not very independent, because with tools like IntelliJ, ReSharper and Rider it's very easy to do a rename across the entire repo without noticing. You've changed your test and your application, you've introduced a breaking change, and the test will still be passing. So I think having independence of tests, particularly around public interfaces that can't change, is really important here.
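Neil's rename hazard can be shown in a few lines. This is a hypothetical Python sketch: `FIELD_NAME` stands in for a constant in the application code, and the pinned literal plays the role of a contract stored independently of that code.

```python
# Application code: the payload is built from an application constant.
# Imagine an IDE-wide rename changing this to "accountNumber" everywhere.
FIELD_NAME = "account_number"

def build_response():
    """The provider's endpoint payload, built from application constants."""
    return {FIELD_NAME: "12345678"}

# Non-independent test: it reuses the application's constant, so a repo-wide
# rename updates both sides at once and this assertion keeps passing even
# after the public field name has changed.
assert FIELD_NAME in build_response()

# Independent contract: the expected field name is pinned as a literal, as it
# would be in a separately stored contract file. A rename in the application
# cannot silently rewrite this expectation, so the break gets caught.
PINNED_CONTRACT_FIELD = "account_number"
assert PINNED_CONTRACT_FIELD in build_response()
```

Both assertions pass today; the difference only shows after the rename, when the first keeps passing and the second correctly fails.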
If you liked our conversation, check out the full podcast. We've got some really exciting guests coming in the next few episodes, so stay tuned.