Threat Stack is now F5 Distributed Cloud App Infrastructure Protection (AIP). Start using Distributed Cloud AIP with your team today.
Microservices, even when designed correctly, can be challenging to test. But when the system architecture evolves to the point where tens or hundreds of connected services make up a software platform running on constantly changing infrastructure, testing the product that you and your team are responsible for becomes a monumentally complex task. Writing test automation for these environments is difficult, so you want to ensure that you get the most value from the tests you do write.
At Threat Stack, we write system integration tests, a form of gray box functional testing, and we use Docker to run the containerized test environment.
What’s wrong with other types of tests?
Some organizations still rely on the traditional testing model, in which white box and black box testing are seen as a sufficient, complementary testing strategy. While these tests still play an important part in qualifying software, they no longer give you the feedback you need to confidently release software.
In this type of environment, black box user acceptance testing is too broadly focused, and relying on it can have serious implications for your time-to-market strategy:
- It does not provide adequate coverage, since there are too many possible scenarios to test in the time allotted to running tests.
- It runs for long periods of time, and can often time out due to factors beyond your application’s control.
- It must be run after the service is deployed, instead of at build time, which separates feedback further from product development. With the shift-left philosophy now dominating test engineering, relying on post-deployment testing makes your organization less competitive.
- Because it is a black box test, determining the cause of failures can be incredibly difficult.
- They are not easily written or maintained by developers.
On the other hand, white box unit testing is too narrowly focused, and is often used incorrectly to qualify software system functionality:
- It doesn’t take into account the real-world behavior of services interacting with components or other services.
- It can test meaningless scenarios that are not valuable enough to maintain.
- It can become onerous to developers who have to continually update large collections of tests for small changes to the code.
- It is not easily written or maintained by test engineers.
Why focus on system integration tests?
Threat Stack test engineers rely increasingly on system integration tests to test all of these services in a way that maximizes test coverage while simultaneously providing the speed and specificity we need to ensure that the application will behave properly under many operational conditions.
An integration test is a gray box test that focuses on the behavior of the software or system under test when interfacing with external components. For example:
- A data store, e.g., PostgreSQL, Cassandra, ElasticSearch
- A message broker, e.g., Kafka
- An HTTP server
In other words, software with which you can interact to validate behavior. At Threat Stack, we primarily execute functional tests that replicate real situations, but the behaviors under test will stop at the boundaries of the containerized environment, avoiding unwanted external interactions.
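To make the idea concrete, here is a minimal sketch of an integration-style test at the HTTP boundary. The client function, endpoint path, and response shape are all hypothetical; Python's standard library stands in for the external HTTP component so the test stops at the environment's edge:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical client under test: fetches a user's status from an HTTP service.
def fetch_status(base_url, user_id):
    with urllib.request.urlopen(f"{base_url}/status/{user_id}") as resp:
        return json.load(resp)

# Stand-in for the external HTTP component at the boundary of the test environment.
class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        user_id = self.path.rsplit("/", 1)[-1]
        body = json.dumps({"user": user_id, "active": True}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging during the test run

# Bind to an ephemeral port so the test never collides with other services.
server = HTTPServer(("127.0.0.1", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
try:
    result = fetch_status(f"http://127.0.0.1:{server.server_port}", "42")
    assert result == {"user": "42", "active": True}
finally:
    server.shutdown()
```

In a real containerized environment, the stub would be replaced by the actual component (a PostgreSQL or Kafka container, for example), but the test code drives the interaction in the same way.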
And since these tests are tightly coupled with the microservice, you can examine the code of the service under test and draw on it in your automated tests to consistently identify the areas of code to exercise. Additionally, these tests can be written as user acceptance tests so that non-developers, such as product managers and QA engineers, can more easily understand the behavior under test.
In other words:
Given a set of some conditions
When the service receives this input (OR this condition affects service)
Then the service behaves in an expected manner and produces expected side effects
Not only are these functional tests easy to understand and write, they are also tightly coupled with the microservice’s code, so they can easily be run and updated by developers throughout the development process or afterwards by dedicated test engineers.
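The Given/When/Then structure above maps directly onto test code. Below is a minimal sketch in Python; the `AlertService` class, its threshold behavior, and the event shape are invented for illustration:

```python
# Hypothetical service under test: records a side effect (an outbound
# notification) when an incoming event meets a severity threshold.
class AlertService:
    def __init__(self, threshold):
        self.threshold = threshold
        self.sent = []  # observable side effect, inspected by the test

    def handle(self, event):
        if event["severity"] >= self.threshold:
            self.sent.append(f"ALERT:{event['name']}")
            return "alerted"
        return "ignored"

def test_high_severity_event_triggers_alert():
    # Given a service configured with a severity threshold
    service = AlertService(threshold=5)
    # When the service receives an event at or above that threshold
    outcome = service.handle({"name": "login-anomaly", "severity": 7})
    # Then it behaves in the expected manner and produces the expected side effect
    assert outcome == "alerted"
    assert service.sent == ["ALERT:login-anomaly"]

test_high_severity_event_triggers_alert()
```

Because the Given/When/Then steps appear as comments alongside the assertions, a product manager or QA engineer can read the scenario even without following the implementation details.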
Why use Containers?
Containers are extremely useful for test systems since they allow you to quickly reproduce your test environment with minimal resources for the duration of the tests, and then easily clean up when the tests are finished running. Unlike a lot of black box test automation, you do not need an expensive, always-on test environment in order to carry out your integration testing. When you run the tests, the microservice’s behavior can be reproduced in the test environment.
Once you define a containerized environment that mirrors the microservice’s environment in production, then your test framework becomes the external clients or mock services that interface with the components or microservice under test. This ensures that the test code drives all aspects of the tests, which allows you to control many more aspects of testing.
When starting out, Docker is a good platform for quickly spinning up a containerized environment. Using Docker Compose, you can easily define and run the sections of your application under test, either locally or in CI using the same code. Other container infrastructure tools and services, such as Kubernetes, AWS EKS, or AWS Fargate, can also be used to deploy your test environment if your organization supports using them.
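As a rough sketch, a Compose file for such a test environment might define the service under test alongside the external components it talks to. The service names, images, and environment variables below are illustrative, not a prescribed configuration:

```yaml
# Hypothetical docker-compose.yml for a test environment.
services:
  service-under-test:
    build: .
    depends_on:
      - postgres
      - kafka
    environment:
      DB_URL: postgres://test:test@postgres:5432/app
  postgres:
    image: postgres:15
    environment:
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
      POSTGRES_DB: app
  kafka:
    image: apache/kafka:latest
```

A typical run brings the environment up with `docker compose up -d`, executes the test suite against it, and tears everything down with `docker compose down -v`, so nothing persists between runs, locally or in CI.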
Conclusion
Ultimately, focusing your testing efforts on integration tests rather than other types of automation gives you two big benefits:
- Your microservices tests are shifted left so they execute ahead of deployment, which gives you faster feedback for iterative development.
- You still run tests against real services and components, which means that you are reducing test coverage gaps but still running functional tests.