Before reading this article, I recommend you take a look at this first, where we set up Docker containers as Jenkins slaves.
In this article, we’ll answer the question WTH (Why the Hassle!).
If you followed along with the previous tutorial, you may not yet see the power of using Docker containers as Jenkins slaves, so let’s clarify things a little bit.
One of the questions raised about this configuration is that we have implemented ephemeral Jenkins slaves as containers, so in cases where we have to pull dependencies, we will need to do it on every build, and that is 100% true. In this case you can use volumes to store these dependencies and reuse them later.
It really depends on the application. In my experience, I created a volume to store the dependencies of Java-based projects generated by Maven, and it worked like a charm. It is just as straightforward, and just as important, for Python projects: use volumes to store the needed dependencies with the right versions, to keep integrity across all your pipeline stages.
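As a rough sketch, caching a Maven repository with a volume could look like this (the volume name `m2-cache` and the image name `jenkins-slave` are placeholders for whatever you built in the previous article):

```shell
# Create a named volume once; it survives container removal
docker volume create m2-cache

# Mount it over the Maven repository path inside the slave container,
# so dependencies pulled in one build are available to the next one
docker run -d --name slave1 -v m2-cache:/root/.m2 jenkins-slave
```

The same pattern works for Python: mount a volume over your pip cache or virtualenv directory instead of `/root/.m2`.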
There is also huge flexibility when adding slaves: you just need to spin up another container and add it as a slave, and you can do that as many times as you want, without having to configure the environment or deal with SSH keys each time.
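Scaling out is then just a matter of starting more containers from the same image (again, `jenkins-slave` is a placeholder name) and registering each one as a node in Jenkins:

```shell
# Each container is an identical, ready-to-use build environment;
# no per-machine setup or SSH key exchange required
docker run -d --name slave2 jenkins-slave
docker run -d --name slave3 jenkins-slave
```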
You may want to do a clean-up when your pipeline or build finishes, either by stopping all the containers (if you want to preserve the ephemeral thing :) ) or by keeping them as permanent slave agents. Here’s a snippet to remove running containers; I tend to use this a lot in my pipelines as a make step:
$ docker ps -aq | xargs -I {} docker rm -f {} # Or
$ docker rm -f $(docker ps -aq)
$ docker network prune --force
$ docker image prune --filter dangling=true -f
$ docker container prune -f