Mocking and Testing AWS APIs with TestContainers and LocalStack

Cloud computing is the default today. I do believe the future is containers and multi-cloud solutions; however, today I work a lot with AWS. Specific services such as S3, AutoScaling, and Route53 are among the ones I use the most in my day-to-day work. The AWS API is easy to use but not so easy to test, and distributed systems in general tend to be hard to test. Quick feedback is very important for engineers. There are tasks that need to interact with AWS APIs, like S3 for backups, for instance. However, if you need to wait for a deploy to AWS in order to test them, because testing locally is basically impossible, then we have a problem. We still need end-to-end tests, but while you are coding or troubleshooting it is important to get feedback faster.

There are two specific projects that can help us with this task: TestContainers and LocalStack. Today I will show how to use LocalStack and TestContainers together with JUnit in order to write unit tests that mock the S3 API. I will show how to do this using Java 8. So let's get started!

Running LocalStack locally

In order to run LocalStack locally, we need to have Python and Docker installed. Once they are installed, we can get and run LocalStack.

Install and Run

sudo pip install localstack
sudo localstack start --docker

Use

aws --endpoint-url=http://localhost:4568 kinesis list-streams

Dashboard

# replace edea65b0021e with your container ID (from docker ps)
docker inspect --format '{{ .NetworkSettings.IPAddress }}' edea65b0021e
# use the IP returned by the previous command
# then go to http://172.17.0.2:8080

More on: https://github.com/localstack/localstack



After you run the LocalStack Docker container, make sure you shut it down, because when we run with TestContainers the container lifecycle will be managed through JUnit.

Setting Up Gradle Project

Now we need to set up a Gradle project. Let's take a look at the build.gradle file.

apply plugin: "java"
sourceCompatibility = 1.8
targetCompatibility = 1.8
repositories {
maven { url 'http://repo.spring.io/libs-release' }
mavenCentral()
maven { url "https://oss.sonatype.org/content/groups/public/" }
}
dependencies {
compile group: 'org.testcontainers', name: 'localstack', version: '1.7.0'
compile group: 'com.amazonaws', name: 'aws-java-sdk-s3', version: '1.11.313'
compile([
'log4j:log4j:1.2.17',
'org.slf4j:slf4j-log4j12:1.7.25',
'org.springframework:spring-core:4.3.8.RELEASE',
'org.springframework:spring-context:4.3.8.RELEASE',
'org.springframework:spring-beans:4.3.8.RELEASE',
])
testCompile([
'junit:junit:4.12',
'org.springframework:spring-test:4.3.8.RELEASE'
])
}


Great. Now we can proceed and work with LocalStack and TestContainers together.

Hacking to Use the Latest LocalStack Image

There is a Java module in TestContainers that provides the integration between LocalStack and TestContainers. We will hack that class because we want to use the latest version of the LocalStack image; right now the code is pinned to an old version. So let's take a look at the code.

import static org.testcontainers.containers.BindMode.READ_WRITE;

import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Arrays;
import java.util.stream.Collectors;

import org.jetbrains.annotations.Nullable;
import org.junit.rules.ExternalResource;
import org.rnorth.ducttape.Preconditions;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.wait.LogMessageWaitStrategy;

import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;

@SuppressWarnings("deprecation")
public class LocalStackContainerLatest extends ExternalResource {

    @SuppressWarnings("rawtypes")
    @Nullable private GenericContainer delegate;
    private ServiceLatest[] services;

    @SuppressWarnings({ "rawtypes", "resource" })
    @Override
    protected void before() throws Throwable {
        Preconditions.check("services list must not be empty", services != null && services.length > 0);

        final String servicesList = Arrays
                .stream(services)
                .map(ServiceLatest::getLocalStackName)
                .collect(Collectors.joining(","));

        final Integer[] portsList = Arrays
                .stream(services)
                .map(ServiceLatest::getPort)
                .collect(Collectors.toSet()).toArray(new Integer[]{});

        // Boot the latest LocalStack image, expose only the requested service ports
        // and wait for the "Ready." log line before the tests start.
        delegate = new GenericContainer("localstack/localstack:latest")
                .withExposedPorts(portsList)
                .withFileSystemBind("//var/run/docker.sock", "/var/run/docker.sock", READ_WRITE)
                .waitingFor(new LogMessageWaitStrategy().withRegEx(".*Ready\\.\n"))
                .withEnv("SERVICES", servicesList);
        delegate.start();
    }

    @Override
    protected void after() {
        Preconditions.check("delegate must have been created by before()", delegate != null);
        delegate.stop();
    }

    public LocalStackContainerLatest withServices(ServiceLatest... services) {
        this.services = services;
        return this;
    }

    public AwsClientBuilder.EndpointConfiguration getEndpointConfiguration(ServiceLatest service) {
        if (delegate == null) {
            throw new IllegalStateException("LocalStack has not been started yet!");
        }

        final String address = delegate.getContainerIpAddress();
        String ipAddress = address;
        try {
            ipAddress = InetAddress.getByName(address).getHostAddress();
        } catch (UnknownHostException ignored) {
        }

        // Use the xip.io wildcard DNS service so the container IP is reachable as a hostname,
        // retrying until the name resolves.
        ipAddress = ipAddress + ".xip.io";
        while (true) {
            try {
                InetAddress.getAllByName(ipAddress);
                break;
            } catch (UnknownHostException ignored) {
            }
        }

        return new AwsClientBuilder.EndpointConfiguration(
                "http://" + ipAddress + ":" + delegate.getMappedPort(service.getPort()),
                "us-east-1");
    }

    public AWSCredentialsProvider getDefaultCredentialsProvider() {
        return new AWSStaticCredentialsProvider(new BasicAWSCredentials("accesskey", "secretkey"));
    }

    public enum ServiceLatest {
        API_GATEWAY("apigateway", 4567),
        KINESIS("kinesis", 4568),
        DYNAMODB("dynamodb", 4569),
        DYNAMODB_STREAMS("dynamodbstreams", 4570),
        // ELASTICSEARCH("es", 4571),
        S3("s3", 4572),
        FIREHOSE("firehose", 4573),
        LAMBDA("lambda", 4574),
        SNS("sns", 4575),
        SQS("sqs", 4576),
        REDSHIFT("redshift", 4577),
        // ELASTICSEARCH_SERVICE("", 4578),
        SES("ses", 4579),
        ROUTE53("route53", 4580),
        CLOUDFORMATION("cloudformation", 4581),
        CLOUDWATCH("cloudwatch", 4582);

        private final String localStackName;
        private final int port;

        ServiceLatest(String localStackName, int port) {
            this.localStackName = localStackName;
            this.port = port;
        }

        public String getLocalStackName() {
            return localStackName;
        }

        public Integer getPort() {
            return port;
        }
    }
}


The changes I made above are quite simple: I just changed the Docker tag from 0.6 to latest and renamed the class and the enum, and that's it. Now we can move on to the TestContainers test code.

Testing the S3 API and Running with TestContainers

Now we can focus on the unit test and mock S3. So let's go for it.

import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.github.diegopacheco.sandbox.java.testcontainers.AppConfig;
import com.github.diegopacheco.sandbox.java.testcontainers.test.LocalStackContainerLatest.ServiceLatest;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = {AppConfig.class})
public class TestS3 {

    @Rule
    public LocalStackContainerLatest localstack = new LocalStackContainerLatest().withServices(ServiceLatest.S3);

    @Test
    public void someTestMethod() {
        // Build an S3 client pointing at the LocalStack container instead of the real AWS endpoint.
        AmazonS3 s3 = AmazonS3ClientBuilder
                .standard()
                .withEndpointConfiguration(localstack.getEndpointConfiguration(ServiceLatest.S3))
                .withCredentials(localstack.getDefaultCredentialsProvider())
                .build();

        s3.createBucket("foo");
        s3.putObject("foo", "bar", "baz");
        System.out.println("Key Counts on S3>: " + s3.listObjectsV2("foo").getKeyCount());
    }
}
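One note on the test above: it runs with the Spring runner and loads an AppConfig class from my project, which is not shown in this post. A minimal sketch of what such a configuration class could look like, assuming it only needs to bootstrap component scanning for the project package (the real one is in the repo on GitHub):

import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;

// Hypothetical minimal Spring configuration; the real AppConfig lives in the project repository.
@Configuration
@ComponentScan(basePackages = "com.github.diegopacheco.sandbox.java.testcontainers")
public class AppConfig {
}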


There are a few more important things in the test we need to cover. Let's start with the @Rule annotation. It is important because JUnit will evaluate the rule when the test boots up, and the rule will start the Docker container we need. As for LocalStack, as you can see, I need to pass the specific AWS service/endpoint that we want to mock.
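Since withServices takes varargs, more than one endpoint can be mocked by the same container. A small sketch, assuming a hypothetical test that also needs SNS (not covered in this post):

@Rule
public LocalStackContainerLatest localstack = new LocalStackContainerLatest()
        .withServices(ServiceLatest.S3, ServiceLatest.SNS);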

After the rule, we initialize the AWS API. It is important to note that we are passing a custom endpoint; otherwise we would reach the real AWS API. Then we can use the S3 API normally, and all commands will go to the Docker container. So we have mocked S3 successfully. LocalStack supports most AWS endpoints, and the combination with TestContainers is a killer: it is now very easy to test AWS-specific code.
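Printing the key count is enough for a demo, but a real unit test should assert on the result. A minimal sketch of how the body of someTestMethod could verify the round trip against the mocked S3 (getObjectAsString and assertEquals are standard AWS SDK / JUnit 4 calls, not part of the code above):

// Read the object back from the mocked S3 and assert on content and key count.
String content = s3.getObjectAsString("foo", "bar");
org.junit.Assert.assertEquals("baz", content);
org.junit.Assert.assertEquals(1, s3.listObjectsV2("foo").getKeyCount());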

The complete code is available on my GitHub here.

Cheers,
Diego Pacheco
