Unveiling the Power of Testing with Testcontainers

Abdullah Asım KILIÇ
9 min read · Apr 25, 2024


Importance of Writing Tests in Software Development

The practice of writing tests in software development not only verifies the correctness of the code but also brings a range of significant advantages. Tests enhance the reliability, maintainability, and scalability of the software, raising overall quality standards. They catch errors early, make it easier to adapt the software to change, and give developers a safe environment for making modifications. Tests also support a better understanding and easier maintenance of the code, making development processes more effective and efficient.

Role of Unit Tests

Unit tests play a crucial role in software development, with both advantages and drawbacks. Their most notable advantage is the ability to test each component of the software in isolation, enabling early error detection and giving developers the confidence to make changes safely. Unit tests also simplify code maintenance: re-running them after any change allows potential regressions to be detected quickly.

Challenges with Unit Tests

However, unit tests come with some challenges. Writing and maintaining them requires additional effort; particularly in large and complex projects, testing each component separately can be time-consuming, and in some cases writing the tests takes more time than writing the application code itself. Tests must also be updated continuously as the software changes. Nevertheless, the advantages of unit tests generally outweigh these costs, minimizing errors during development and producing a more reliable and maintainable codebase.

Role of Integration Tests

It is commonly assumed that a method passing a unit test is always correct; however, this is not the case. Even a quickly written unit test with full coverage can give a misleading green light. For instance, consider a scenario where our method passes an object to another service:

Mockito.verify(serviceName, times(1)).save(any(Entity.class));

This test will verify that the save method is called, but it won't validate whether the fields of the object passed to the method are correct, or whether default values assigned during database storage behave as intended. These aspects cannot be covered by unit tests alone. This is precisely where integration tests come into play.
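
To make the blind spot concrete, here is a hypothetical sketch (the OrderService, OrderRepository, OrderRequest, and Entity types below are illustrative, not from a real codebase): the verification above passes even though the amount is silently dropped, because the mock never touches a real database.

import java.math.BigDecimal;

// Hypothetical types, for illustration only.
record OrderRequest(String customerName, BigDecimal amount) {}

class Entity {
    private String customerName;
    private BigDecimal amount;

    void setCustomerName(String customerName) { this.customerName = customerName; }
    void setAmount(BigDecimal amount) { this.amount = amount; }
    String getCustomerName() { return customerName; }
}

interface OrderRepository {
    Entity save(Entity entity);
}

class OrderService {
    private final OrderRepository repository;

    OrderService(OrderRepository repository) {
        this.repository = repository;
    }

    void create(OrderRequest request) {
        Entity entity = new Entity();
        entity.setCustomerName(request.customerName());
        // Bug: request.amount() is never copied onto the entity, yet a mock-based
        // verify(repository, times(1)).save(any(Entity.class)) still passes.
        repository.save(entity);
    }
}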

Importance of Integration Tests

Integration tests provide a more comprehensive evaluation of the system by testing the interactions between different components. They can assess whether the method, along with external services and data storage, functions correctly as a whole. While unit tests focus on individual components in isolation, integration tests ensure that these components work seamlessly together, providing a more accurate representation of the system’s behavior. Therefore, a combination of unit tests for specific components and integration tests for broader system functionality is essential to ensure the overall correctness and reliability of the software.

Integration tests go beyond unit tests, focusing on scenarios where different components come together. They specifically simulate complex processes that occur at integration points, such as service calls, database interactions, or the use of external resources. In the scenario mentioned above, integration tests allow us to test such situations in more detail.

For example, we can write an integration test that verifies whether the fields within an entity have the correct values when it is saved to the database. Additionally, we can simulate a scenario involving a call to an external service and check whether this call occurs as expected. In this way, integration tests go beyond the scope of unit tests, ensuring the integrity of the system and validating the code’s conformity to real-world scenarios.
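
A hedged sketch of what such a test might look like, assuming Spring Data JPA, a repository extending JpaRepository, and that the hypothetical Entity above carries a JPA mapping, getters, and a database-populated createdAt column (all of these are assumptions for illustration):

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNotNull;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.jdbc.AutoConfigureTestDatabase;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;

@DataJpaTest
// Keep the real (for example, containerized) datasource instead of an embedded replacement.
@AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE)
class OrderEntityIntegrationTest {

    @Autowired
    private OrderRepository repository; // hypothetical Spring Data JPA repository

    @Test
    void savedEntityKeepsItsFieldsAndDatabaseDefaults() {
        Entity entity = new Entity();
        entity.setCustomerName("Jane Doe");

        Entity saved = repository.saveAndFlush(entity);

        // Field values survive a round trip through a real persistence layer...
        assertEquals("Jane Doe", saved.getCustomerName());
        // ...and database-side defaults (e.g. a creation timestamp) are applied,
        // something a mock-based unit test could never observe.
        assertNotNull(saved.getCreatedAt());
    }
}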

Challenges with Integration Tests

Integration tests do come with challenges of their own: they tend to run more slowly and often require more complex setups, which makes the testing process somewhat more demanding. Still, when the right balance is struck, the reliability provided by unit tests, combined with the integrity ensured by integration tests, builds software on a robust and solid foundation.

What is Testcontainers, and how does it simplify our lives?

Testcontainers is a Java library that provides lightweight, disposable containers for integration testing purposes. It allows developers to define and manage containers directly within their test code, facilitating the creation of isolated environments for testing against external dependencies such as databases, message queues, or third-party services.

What Testcontainers brings to the table:

  1. Isolation and Reproducibility: Testcontainers enables the creation of isolated, reproducible testing environments. With containers encapsulating external dependencies, tests become more predictable and can be reliably reproduced across different environments.
  2. Ease of Use: By integrating seamlessly with popular testing frameworks, Testcontainers simplifies the process of managing test containers within your test code. It provides a user-friendly API that abstracts away the complexities of container lifecycle management.
  3. Avoidance of External Dependencies: Testcontainers allows you to run tests without relying on external systems, reducing the need for shared testing databases or services. Each test can spin up its own container, ensuring independence and avoiding interference with other tests.
  4. Database Testing Made Simple: When it comes to database testing, Testcontainers shines. It supports various database systems, allowing you to easily start a containerized database instance for your tests. This ensures that your integration tests interact with a real database, validating the functionality against realistic scenarios.
  5. Consistency Across Environments: Testcontainers promotes consistency by encapsulating dependencies in containers. This consistency ensures that tests behave similarly across development, testing, and production environments, reducing the chances of issues arising due to environmental differences.

Especially suitable for testing microservices architectures and distributed systems, Testcontainers proves to be an ideal tool for developers and test engineers to understand how an application performs in real-world scenarios. It provides a robust set of tools, allowing each test to create and manage its own Docker container. This enables tests to run in a controlled environment when interacting with external dependencies.

For instance, Testcontainers can be used to test an application's integration with a database. A Docker container is started automatically before each test and cleaned up after the test has run, ensuring that each test executes in an isolated environment and minimizing the impact of one test on another. Testcontainers also supports a wide range of Docker images and features, making it possible to cover many different scenarios. This flexibility eases test creation and maintenance, and makes it far more practical to verify that software operates correctly under real-world conditions.
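
As a minimal sketch of this per-test lifecycle (container classes implement AutoCloseable, so try-with-resources can handle the cleanup; the redis:7-alpine image tag and the test body are illustrative assumptions):

import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.DockerImageName;

class ManualLifecycleSketch {

    void runAgainstDisposableRedis() {
        // Illustrative image; any service your code depends on works the same way.
        try (GenericContainer<?> redis =
                     new GenericContainer<>(DockerImageName.parse("redis:7-alpine"))
                             .withExposedPorts(6379)) {
            redis.start();
            String address = redis.getHost() + ":" + redis.getFirstMappedPort();
            // ... point the code under test at `address` and run assertions ...
        } // try-with-resources stops and removes the container here
    }
}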

How to Use Testcontainers

To start using Testcontainers, you will first need to add the library as a dependency in your project.

<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>testcontainers</artifactId>
    <version>1.19.4</version>
    <scope>test</scope>
</dependency>

Managing versions for multiple Testcontainers dependencies

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.testcontainers</groupId>
            <artifactId>testcontainers-bom</artifactId>
            <version>1.19.4</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

and then use dependencies without specifying a version

<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>mysql</artifactId>
    <scope>test</scope>
</dependency>

Upon adding the necessary dependency, annotate your JUnit test class with @Testcontainers. This annotation tells JUnit to manage the container lifecycle for the annotated fields. The @Container annotation, in turn, marks a field whose container Testcontainers should manage — for example, a GenericContainer configured with a Redis image from Docker Hub, exposing a port, as sketched below. During test execution, Testcontainers activates before the test method, checks the local Docker setup, pulls the required image if necessary, starts a new container, waits for it to be ready, and finally stops and removes the container after the test. This orchestrated process provides a controlled testing environment and consistent test outcomes.
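
A minimal sketch of that Redis setup, following the Testcontainers JUnit 5 pattern (the image tag and the test body are illustrative assumptions):

import org.junit.jupiter.api.Test;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;

import static org.junit.jupiter.api.Assertions.assertTrue;

@Testcontainers
class RedisContainerSketchTest {

    // Instance field: a fresh container is started before and removed after each test method.
    @Container
    private final GenericContainer<?> redis =
            new GenericContainer<>(DockerImageName.parse("redis:7-alpine")) // illustrative tag
                    .withExposedPorts(6379);

    @Test
    void containerIsUpAndExposesAMappedPort() {
        assertTrue(redis.isRunning());
        // Real clients would connect via redis.getHost() and redis.getMappedPort(6379).
    }
}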

The @Testcontainers annotation facilitates Jupiter integration, identifying fields annotated with @Container and invoking their container lifecycle methods. Note that static fields are shared across test methods, starting once before any test and stopping after the last test, while instance fields are initiated and terminated for each test method. It’s important to mention that this extension has been tested primarily with sequential test execution, and deploying it with parallel test execution is unsupported and may yield unintended side effects.

@Testcontainers
class MixedLifecycleTests {

    // will be shared between test methods
    @Container
    private static final MySQLContainer<?> MY_SQL_CONTAINER = new MySQLContainer<>();

    // will be started before and stopped after each test method
    @Container
    private PostgreSQLContainer<?> postgresqlContainer = new PostgreSQLContainer<>()
            .withDatabaseName("foo")
            .withUsername("foo")
            .withPassword("secret");

    @Test
    void test() {
        assertThat(MY_SQL_CONTAINER.isRunning()).isTrue();
        assertThat(postgresqlContainer.isRunning()).isTrue();
    }
}

Let’s consider that we have an “order service” in our scenario. This service listens to the “order-create” topic on Kafka. While consuming messages, it performs an idempotency check by writing the event key to Couchbase. Afterward, it inserts the necessary records into the “order,” “order_item,” and “customer” tables. Once all these processes are completed, it publishes an event to the “order-create-completed” topic, with the event’s value containing the orderNumber.
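
As a hedged, illustrative sketch of such a listener (the article does not show the service's implementation; the OrderCreateListener, BrokerMessageRepository, OrderPersistenceService, and BrokerMessage names below are all hypothetical):

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

// Hypothetical consumer for the scenario above.
@Component
class OrderCreateListener {

    private final BrokerMessageRepository brokerMessageRepository;  // Couchbase-backed idempotency store
    private final OrderPersistenceService orderPersistenceService;  // writes order, order_item, customer
    private final KafkaTemplate<String, String> kafkaTemplate;

    OrderCreateListener(BrokerMessageRepository brokerMessageRepository,
                        OrderPersistenceService orderPersistenceService,
                        KafkaTemplate<String, String> kafkaTemplate) {
        this.brokerMessageRepository = brokerMessageRepository;
        this.orderPersistenceService = orderPersistenceService;
        this.kafkaTemplate = kafkaTemplate;
    }

    @KafkaListener(topics = "order-create")
    void onOrderCreated(ConsumerRecord<String, OrderCreatedEvent> record) {
        // 1. Idempotency check: skip events whose key has already been processed.
        String id = "order-create-" + record.key();
        if (brokerMessageRepository.existsById(id)) {
            return;
        }
        brokerMessageRepository.save(new BrokerMessage(id));

        // 2. Insert the order, order_item, and customer records.
        OrderCreatedEvent event = record.value();
        orderPersistenceService.persist(event);

        // 3. Publish the completion event; its value carries the orderNumber.
        kafkaTemplate.send("order-create-completed", event.getId(), event.getOrderNumber());
    }
}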

Let's assume we are using PostgreSQL with the PostGIS extension as our database, and that we create the tables with Liquibase. We'll also stand up the containers needed to monitor Kafka (Kafka Connect and Confluent Control Center). After creating the Couchbase container, we'll use couchbase-cli to create buckets and scopes, and then use curl to create primary indexes. As in our Docker Compose file, we can specify commands to run inside a container or pass any environment variables we need.

Since we start our containers manually, we did not mark them with @Container. Because some containers can take a long time to start, we can retry certain operations within a given time frame using the Unreliables.retryUntilTrue method. With waitingFor, we can tie a container's readiness to a log message; for the postgresContainer, for instance, we wait until the message "database system is ready to accept connections" appears twice. As in our Docker Compose file, we can also create a network and attach the desired containers to it. We deliberately used a wide variety of features for demonstration purposes; with some refactoring, the setup could be made cleaner and faster to start.

@Testcontainers
public abstract class TestSpringBootTestcontainersApplication {

    public static Cluster cluster;
    private static final Network network = Network.newNetwork();

    @BeforeAll
    static void setup() throws IOException, InterruptedException {
        zookeeper.start();
        broker.start();
        connect.start();
        controlCenterContainer.start();
        postgresContainer.start();
        couchbaseContainer.start();
        couchSetup();
        liquibaseContainer.start();

        // Wait until Liquibase reports that the schema update has completed.
        Unreliables.retryUntilTrue(30, TimeUnit.SECONDS, () -> {
            var log = liquibaseContainer.getLogs();
            return log.contains("Liquibase command 'update' was executed successfully.");
        });
    }

    // @ServiceConnection lets Spring Boot derive the Couchbase connection details
    // from the container, without manual property wiring.
    @ServiceConnection
    public static CouchbaseContainer couchbaseContainer = new CouchbaseContainer("couchbase/server:enterprise-7.1.4")
            .withCredentials("Administrator", "123456")
            .withNetwork(network)
            .withNetworkAliases("couch");


    @DynamicPropertySource
    static void configureProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.datasource.url", () -> "jdbc:postgresql://localhost:" + postgresContainer.getFirstMappedPort() + "/postgres?useSSL=false&useUnicode=true&characterEncoding=utf-8");
        registry.add("ff.couchbase.connection-string", couchbaseContainer::getConnectionString);
        var kafkaAddress = broker.getHost() + ":" + broker.getFirstMappedPort();
        registry.add("kafka.address", () -> kafkaAddress);
    }

    static GenericContainer<?> zookeeper = new GenericContainer<>(DockerImageName.parse("bitnami/zookeeper:latest"))
            .withNetwork(network)
            .withNetworkAliases("zookeeper")
            .withExposedPorts(2181)
            .withEnv("ZOO_MY_ID", "1")
            .withEnv("ZOO_PORT", "2181")
            .withEnv("ZOO_SERVERS", "server.1=zookeeper:2888:3888")
            .withEnv("ALLOW_ANONYMOUS_LOGIN", "yes");


    static KafkaContainer broker = new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:latest"))
            .withExternalZookeeper("zookeeper:2181")
            // withExposedPorts replaces the previous list, so declare all ports in one call
            .withExposedPorts(29092, 9999, 9093)
            .withEnv("KAFKA_BROKER_ID", "1")
            .withEnv("KAFKA_LISTENER_SECURITY_PROTOCOL_MAP", "PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT")
            .withEnv("KAFKA_ADVERTISED_LISTENERS", "PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092")
            .withEnv("KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR", "1")
            .withEnv("KAFKA_TRANSACTION_STATE_LOG_MIN_ISR", "1")
            .withEnv("KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR", "1")
            .withEnv("KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS", "0")
            .withEnv("KAFKA_JMX_PORT", "9999")
            .withEnv("KAFKA_JMX_HOSTNAME", "localhost")
            .withNetwork(network)
            .withNetworkAliases("broker")
            .dependsOn(zookeeper);

    private static GenericContainer<?> connect = new GenericContainer<>(DockerImageName.parse("confluentinc/cp-kafka-connect:latest"))
            .withNetwork(network)
            .withNetworkAliases("connect")
            .withExposedPorts(8083)
            .withEnv("CONNECT_BOOTSTRAP_SERVERS", "broker:9092")
            .withEnv("CONNECT_REST_ADVERTISED_HOST_NAME", "connect")
            .withEnv("CONNECT_REST_PORT", "8083")
            .withEnv("CONNECT_GROUP_ID", "compose-connect-group")
            .withEnv("CONNECT_CONFIG_STORAGE_TOPIC", "docker-connect-configs")
            .withEnv("CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR", "1")
            .withEnv("CONNECT_OFFSET_FLUSH_INTERVAL_MS", "10000")
            .withEnv("CONNECT_OFFSET_STORAGE_TOPIC", "docker-connect-offsets")
            .withEnv("CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR", "1")
            .withEnv("CONNECT_STATUS_STORAGE_TOPIC", "docker-connect-status")
            .withEnv("CONNECT_STATUS_STORAGE_REPLICATION_FACTOR", "1")
            .withEnv("CONNECT_KEY_CONVERTER", "org.apache.kafka.connect.storage.StringConverter")
            .withEnv("CONNECT_VALUE_CONVERTER", "org.apache.kafka.connect.json.JsonConverter")
            .withEnv("CONNECT_PLUGIN_PATH", "/usr/share/java,/usr/share/confluent-hub-components")
            .withEnv("CONNECT_LOG4J_LOGGERS", "org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR");


    private static GenericContainer<?> controlCenterContainer = new GenericContainer<>(DockerImageName.parse("confluentinc/cp-enterprise-control-center:latest"))
            .withNetwork(network)
            .withNetworkAliases("controlCenter")
            .withExposedPorts(9021)
            .withEnv("CONTROL_CENTER_BOOTSTRAP_SERVERS", "broker:9092")
            .withEnv("CONTROL_CENTER_ZOOKEEPER_CONNECT", "zookeeper:2181")
            .withEnv("CONTROL_CENTER_CONNECT_CLUSTER", "connect:8083")
            .withEnv("CONTROL_CENTER_REPLICATION_FACTOR", "1")
            .withEnv("CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS", "1")
            .withEnv("CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS", "1")
            .withEnv("CONFLUENT_METRICS_TOPIC_REPLICATION", "1")
            .withEnv("PORT", "9021");


    private static GenericContainer<?> postgresContainer = new GenericContainer<>("postgis/postgis")
            .withNetwork(network)
            .withEnv("POSTGRES_PASSWORD", "postgres")
            .withNetworkAliases("postgreSQL")
            // Ready only after the message appears twice: PostgreSQL logs it once
            // during initialization and once on the final startup.
            .waitingFor(Wait.forLogMessage(".*database system is ready to accept connections.*", 2))
            .withExposedPorts(5432);

    private static GenericContainer<?> liquibaseContainer = new GenericContainer<>("liquibase/liquibase:latest")
            .withClasspathResourceMapping("db", "/liquibase/changelog", BindMode.READ_WRITE)
            .withNetwork(network)
            .withNetworkAliases("liquibaseContainerAlias")
            .withCommand(
                    "--url=jdbc:postgresql://postgreSQL/postgres?useSSL=false&useUnicode=true&characterEncoding=utf-8",
                    "--changelog-file=master.changelog.xml",
                    "--username=postgres",
                    "--password=postgres",
                    "--driver=org.postgresql.Driver",
                    "--log-level=DEBUG",
                    "--searchPath=/liquibase/changelog",
                    "update"
            )
            .withEnv("appUserName", "public")
            .dependsOn(postgresContainer);


    private static void couchSetup() throws IOException, InterruptedException {
        // Cluster initialization: memory quotas, services, and admin credentials.
        couchbaseContainer.execInContainer("curl", "-v", "-X", "POST", "127.0.0.1:8091/pools/default", "-d", "memoryQuota=1500", "-d", "indexMemoryQuota=512");
        couchbaseContainer.execInContainer("curl", "-v", "127.0.0.1:8091/node/controller/setupServices", "-d", "services=kv%2Cn1ql%2Cindex");
        couchbaseContainer.execInContainer("curl", "-v", "127.0.0.1:8091/settings/web", "-d", "port=8091", "-d", "username=Administrator", "-d", "password=123456");

        // normal bucket creation
        couchbaseContainer.execInContainer("couchbase-cli", "bucket-create", "-c", "127.0.0.1:8091", "--username", "Administrator", "--password", "123456", "--bucket",
                "fulfillment", "--bucket-type", "couchbase", "--bucket-ramsize", "110", "--enable-flush", "1");

        // ephemeral bucket creation
        couchbaseContainer.execInContainer("couchbase-cli", "bucket-create", "--cluster", "http://localhost:8091", "--username", "Administrator", "--password", "123456",
                "--bucket", "fulfillment-ephemeral", "--bucket-type", "ephemeral", "--bucket-ramsize", "128", "--max-ttl", "500000000",
                "--durability-min-level", "none", "--enable-flush", "0");

        // scope creation
        couchbaseContainer.execInContainer("couchbase-cli", "collection-manage", "--cluster", "http://localhost:8091", "--username", "Administrator", "--password", "123456",
                "--bucket", "fulfillment", "--create-scope", "testcontainers");

        // collection creation
        couchbaseContainer.execInContainer("couchbase-cli", "collection-manage", "-c", "localhost", "--username", "Administrator", "--password", "123456", "--bucket",
                "fulfillment", "--create-collection", "testcontainers.broker_message", "--max-ttl", "0");

        cluster = Cluster.connect(couchbaseContainer.getConnectionString(), couchbaseContainer.getUsername(), couchbaseContainer.getPassword());

        // The query service may not be up immediately, so retry index creation until it succeeds.
        Unreliables.retryUntilTrue(30, TimeUnit.SECONDS, () -> {
            String createIndexCommand =
                    "curl -v -X POST -u Administrator:123456 http://localhost:8093/query/service -d 'statement=CREATE PRIMARY INDEX ON `fulfillment`.`testcontainers`.`broker_message`'";
            org.testcontainers.containers.Container.ExecResult indexExecResult = couchbaseContainer.execInContainer("sh", "-c", createIndexCommand);
            return indexExecResult.getStdout().contains("\"status\": \"success\"");
        });
    }

}

And here is our test class utilizing these containers:

@SpringBootTest
class TestcontainersApplicationTests extends TestSpringBootTestcontainersApplication {

    @Autowired
    private KafkaTemplate<String, Object> kafkaTemplate;

    @Autowired
    private OrderRepository orderRepository;

    @Autowired
    private BrokerMessageRepository brokerMessageRepository;

    @Autowired
    private CustomerRepository customerRepository;

    @Autowired
    private OrderItemRepository orderItemRepository;

    // Referenced as TOPIC in the test below, so declared as a constant.
    private static final String TOPIC = "order-create";

    private OrderCreatedEvent orderCreatedEvent = ResourceScenarios.getOrderCreatedEvent();

    @Test
    void shouldCreateOrderAndSendOrderCreateCompletedEvent() {
        kafkaTemplate.send(TOPIC, orderCreatedEvent.getId(), orderCreatedEvent).whenComplete((kv, throwable) -> {
            // Idempotency check
            var id = String.format("%s-%s", TOPIC, orderCreatedEvent.getId());
            Optional<BrokerMessage> byId = brokerMessageRepository.findById(id);
            assertThat(byId).isNotEmpty();

            // order creation check
            AtomicReference<Optional<Order>> atomicOrder = new AtomicReference<>(Optional.empty());

            Unreliables.retryUntilTrue(10, TimeUnit.SECONDS, () -> {
                atomicOrder.set(orderRepository.findByOrderNumber(orderCreatedEvent.getOrderNumber()));
                return atomicOrder.get().isPresent();
            });

            var order = atomicOrder.get().get();
            Assertions.assertEquals(BigDecimal.valueOf(150.75), order.getPrice());
            Assertions.assertEquals("ORD12345", order.getOrderNumber());
            Assertions.assertEquals("This is a sample order", order.getNote());

            // order items check
            var orderItems = orderItemRepository.findByOrder_OrderNumber(orderCreatedEvent.getOrderNumber());
            assertEquals(2, orderItems.size());

            // customer check
            Customer customerByPhoneNumber = customerRepository.getCustomerByPhoneNumber(orderCreatedEvent.getCustomerData().phoneNumber());
            assertEquals(orderCreatedEvent.getCustomerData().fullName(), customerByPhoneNumber.getFullName());
        }).join(); // block until the callback has run, so its assertions count for this test

        // complete event fire check
        Consumer<String, String> kafkaConsumer = getStringOrderCreateCompletedEventConsumer();
        kafkaConsumer.subscribe(Collections.singletonList("order-create-completed"));
        ConsumerRecords<String, String> records = kafkaConsumer.poll(Duration.ofMillis(100));
        records.forEach(record -> {
            System.out.println(record);
            assertEquals(orderCreatedEvent.getId(), record.value());
        });
    }

    private Consumer<String, String> getStringOrderCreateCompletedEventConsumer() {
        Properties properties = new Properties();
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, broker.getBootstrapServers());
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, "test_container_group_id");
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        // Read from the beginning so the event published before subscribing is not missed.
        properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        return new KafkaConsumer<>(properties);
    }

}
