As software engineers, we are used to working with containers to package and ship our software, be it to a Kubernetes cluster, a Docker Compose stack, a serverless stack or just your own developer environment for testing purposes. This way, we don’t have to care whether the underlying system is Unix-based or a Windows server. Of course there is a catch: Docker containers are specific to a CPU architecture.
I’ll wager that most developers have used or are using x86-based machines for creating, deploying and running these containers. However, with Apple moving all its machines to its custom ARM-based silicon and AWS starting to offer more cost-efficient ARM options, we can no longer expect all software to run only on x86 processors. This can introduce issues when developers with differing architectures want to collaborate. Here are some of our experiences with Java, Docker and the M1.
TL;DR
- Use a JDK with ARM support (the official Oracle JDK supports it starting with version 17; for older versions, give Azul’s Zulu builds a try).
- Use a Maven build with ARM64 support (e.g. installed via Homebrew on macOS).
- Cross-compile your Docker images.
- Use ARM64 Docker images.
This is fine
Our microservices were running in a docker-compose stack (old-school). This meant that you could spin up the whole application locally. Starting fresh, a developer could just run docker-compose up and the application would start, pulling service images from the GitHub container registry. At least in theory.
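For illustration, a stripped-down compose file for such a stack might look like the following sketch (the service and image names are placeholders, not our actual setup):

version: "3.8"
services:
  my-service:
    image: ghcr.io/acme/my-service:latest   # pulled from the GitHub container registry
    ports:
      - "8080:8080"
  db:
    image: postgres:14

With something like this in place, docker-compose up pulls all images and starts the services in one go.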
In 2022, some of our developers started with brand-new Macs with an M1 chip based on the ARM architecture, and we had to learn the hard way that this simple command wouldn’t do.
Come on, hurry up already
Pulling images and starting the stack worked as expected, but services took an excruciatingly long time to come up, or didn’t start at all. And even when they did manage to come up, response times were slow.
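Part of the explanation: the pulled images were AMD64 builds, which Docker on an M1 has to run through QEMU emulation. A quick way to check whether that is happening (a sketch; my-service stands in for your actual image and container names) is to compare architectures:

docker image inspect --format '{{.Architecture}}' my-service:latest   # e.g. amd64 -> will be emulated on an M1
docker exec my-service uname -m                                       # x86_64 inside the container...
uname -m                                                              # ...but arm64 on the host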
When you made changes and wanted to compile and re-build a service locally, you could live the xkcd “Compiling” meme.
Except that it wasn’t an excuse: packaging a jar of one of the microservices in our project took over two minutes, which was way too long. Colleagues on AMD64 machines needed only a fraction of that (some 30 seconds). Executing tests was similarly tiresome.
Furthermore, even when all services appeared, on the surface, to be running normally (as indicated by Docker), we noticed that the JVM inside a container sometimes just crashed with some obscure memory exception. The container kept on running, but the JVM was gone: no more responses from the Java service. On top of that, the services were taking up a huge amount of memory.
The Solution(s)
Switch JDK
So we figured we had to switch JDKs to reduce the build time. Unfortunately, due to some legacy dependencies we couldn’t go more recent than Java 11, even though later Oracle Java versions (17 and up) officially support the ARM architecture. We resorted to a third-party JDK, the Zulu build of OpenJDK from Azul, which offers ARM builds even for Java 8. But to no avail: the build time stayed the same. After some more experiments, switching to an ARM64 Maven build finally did the trick and brought the build time down to 20 seconds.
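To verify that the toolchain actually runs natively on ARM64 (and not under Rosetta), you can check the binaries and Maven’s own report. A sketch; install paths and versions depend on your setup:

brew install maven    # Homebrew on Apple Silicon installs a native ARM64 build
file $(which java)    # a native JDK reports: Mach-O 64-bit executable arm64
mvn -version          # the OS line should show arch: "aarch64"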
Cross-compile Docker images
If you want to run and test your services on your M1, cross-compiling the images reduces build time and significantly boosts performance at run time. Most importantly, it also keeps the JVM from crashing. See also the Docker cross compilation guide.
docker build --no-cache --platform linux/arm64 -t my-service:arm64 .
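If colleagues on Intel machines need the same images, Docker’s buildx builder can produce a multi-architecture image in one go. A sketch, assuming you push to a registry (multi-platform builds cannot be loaded into the local image store directly; the image name is a placeholder):

docker buildx create --use
docker buildx build --platform linux/amd64,linux/arm64 -t ghcr.io/acme/my-service:latest --push .

Everyone then pulls the same tag, and Docker selects the variant matching their architecture.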
Use ARM Docker images
If you use images of databases, message queues and other software components to run your stack, you should look out for official ARM64 images and use them instead. This significantly boosts performance and stability. See also Official Docker ARM64 images. If there is no official image yet, you can build your own from the service’s Dockerfile with the cross-compile flag (see above).
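In a compose file this might look like the following sketch (names are placeholders): official multi-arch images need no special treatment, since Docker pulls the variant matching your machine, while the platform key makes the choice explicit for single-arch images.

services:
  db:
    image: postgres:14                      # official multi-arch image: the arm64 variant is pulled on an M1
  my-service:
    image: ghcr.io/acme/my-service:arm64    # our own cross-compiled image
    platform: linux/arm64                   # pin the platform explicitly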
Conclusion
In hindsight, you can of course expect problems when you try to run a piece of software on a different architecture than the one it was initially built for.
You can count yourself lucky if things start at all. Our initial approach of just starting everything with docker-compose was naïve, but we were also curious to see what would happen with our setup on the new chip.
Well, we found out, along with a gentle reminder that despite all the layers of abstraction, code still runs on actual hardware after all.