I could use the time “between the years” (the quiet days between Christmas and New Year) to educate myself on a topic I had wanted to understand for a long time: Docker.

Last year, I bought a book about it, so I knew it was something about containers and portable, scalable software development tools, but until you have actually put your hands on it, you don’t really know what’s going on. Luckily, there is a very good tutorial, provided directly by docker.com: https://docs.docker.com/get-started/ , which I worked through over the last few days.

(screenshot of the happy Docker whale from the tutorial)
If the Docker whale is that happy, how can the software be bad???

I would like to share some of the findings I made while working through the exercises:

  • Similar to VMs, containers aim to separate the environment from the code: specifying runtime versions, dependencies etc. independently of your application logic helps you avoid incompatibilities between your local development machine and the production runtime of your app.
    While you can also achieve this by installing VMs for development and production that share the same version of every tool, every VM instance carries a redundant copy of a guest OS. That can make VMs heavy in terms of size and start-up time. Docker containers instead all run on a single Docker engine on the host OS, which gives them access to host resources in much the same way as processes running directly on the OS.
    The benefit: containers are much thinner, they don’t need much space and can quickly be built up or torn down.

    (diagram comparing the VM and container architectures; source: docker.com)
  • To build a container, you describe an image first. The image is like a blueprint: it describes what a container should look like, and whenever a new container instance is created, it is set up according to that image.
    • The first source of truth for building an image is the Dockerfile. In it, you specify which “ingredients” (runtimes, versions, ….) your image should contain, which working folders should be used and so on. You then use the “docker build” command to build a Docker image based on the instructions in your Dockerfile (see the first sketch below this list).
    • You can use Docker Cloud as an image registry. But obviously, you can also push and pull your images via other registry services, for example those offered by AWS or Azure (see the second sketch below).
  • As stated before, one motivation for using containers is scalability. To scale an application to several instances, some management on a meta level is necessary. These tasks are described in the docker-compose.yml file: for example, how many container instances your application should start, or what should happen if one of the containers crashes (see the compose sketch below).
  • To scale your application even further by distributing its containers across several host machines (servers), you can build a swarm of Docker hosts. You define one “swarm manager” node and connect the “swarm worker” nodes to it; the manager tells the workers what to do (just as in real life). The basic commands are sketched below.
  • Stacks let a group of related services share dependencies and be scaled together. Since worker containers can be killed and reborn all the time, they cannot, for example, be responsible for persisting data themselves. To save data persistently, you can instead mount part of the host file system on the swarm manager into a container. Because only the containers get wiped out while the host OS file system stays the same, this setup gives you persistence (see the last sketch below).
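
To make the Dockerfile point concrete, here is a minimal sketch along the lines of the official get-started tutorial; the base image, file names and port are example assumptions, not requirements:

    # Use an official Python runtime as the base image (an assumption for this sketch)
    FROM python:3-slim
    # Set the working directory inside the container
    WORKDIR /app
    # Copy the current directory into the image
    COPY . /app
    # Install the dependencies listed in requirements.txt
    RUN pip install -r requirements.txt
    # Make port 80 available to the world outside the container
    EXPOSE 80
    # Run the app when a container is started from the image
    CMD ["python", "app.py"]

With this file in place, “docker build -t friendlyhello .” builds the image and tags it “friendlyhello” (the tag is again just an example name).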
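
Pushing an image to a registry then boils down to tagging it with the account and repository name and pushing it; “myusername” and “get-started” are made-up placeholders here:

    # Tag the local image for the registry account (placeholder names)
    docker tag friendlyhello myusername/get-started:part2
    # Upload it to the registry
    docker push myusername/get-started:part2
    # Fetch it again on any other machine
    docker pull myusername/get-started:part2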
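
A docker-compose.yml in the spirit of the tutorial could look like the following sketch; the service name, image and numbers are example assumptions. Note that the “deploy” section only takes effect when the file is deployed to a swarm with “docker stack deploy”:

    version: "3"
    services:
      web:
        # The image built and pushed above (placeholder name)
        image: myusername/get-started:part2
        deploy:
          # Run five instances of the container
          replicas: 5
          restart_policy:
            # Restart a container if it crashes
            condition: on-failure
        ports:
          # Map port 80 of the host to port 80 of the container
          - "80:80"

Running “docker stack deploy -c docker-compose.yml mylab” then starts the whole application as a stack named “mylab” (a made-up name).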
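
Setting up the swarm itself takes only a few commands; <token> and <manager-ip> are placeholders that “docker swarm init” prints out for you:

    # On the machine that should become the manager: create the swarm
    docker swarm init
    # On each worker machine: join the swarm
    docker swarm join --token <token> <manager-ip>:2377
    # Back on the manager: list all nodes of the swarm
    docker node ls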
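
For the persistence point, the tutorial’s approach is to pin a stateful service to the manager node and mount a directory of the manager’s file system into the container. A sketch of such a service, to be added under “services:” in the compose file above (the host path is an assumption):

      redis:
        image: redis
        volumes:
          # Mount a directory of the manager's file system into the container
          - "/home/docker/data:/data"
        deploy:
          placement:
            # Always schedule this container on the swarm manager
            constraints: [node.role == manager]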

I think Docker is a very interesting technology for flexible software architectures, and I’m keen to dig deeper into the topic – there are a lot more aspects I need to understand better, and the Docker website offers plenty more examples.

(title image by Erwan Hesry on Unsplash)
