Docker workshop



About containers



A brief history of containers

  • During the development of Unix V7 in 1979, the chroot system call was introduced, changing the root directory of a process and its children to a new location in the filesystem.
  • Chroot was added to BSD in 1982.
  • 2000: FreeBSD Jails
  • 2001: Linux VServer
  • 2004: Solaris Containers

  • 2005: OpenVZ (Open Virtuozzo)
  • 2006: Process Containers
  • 2008: LXC
  • 2013: Docker
  • 2016: The Importance of Container Security Is Revealed
  • 2017: Container Tools Become Mature
    • Kubernetes Grows Up

How do they work?

With containers, instead of virtualizing the underlying hardware as a virtual machine (VM) does, only the operating system is virtualized.


About images


A Docker image is a file, composed of multiple layers, used to execute code in a Docker container. An image is essentially built from the instructions for a complete and executable version of an application, relying on the host OS kernel. When a Docker user runs an image, it becomes one or more running containers.


The plan

We’ll explore most of the basic Docker concepts by creating a real microservice project consisting of:


  • A Rust program talking to Kafka
  • A CI pipeline building the Docker image using GitLab CI
  • A Kafka cluster using docker-compose
  • Containers monitored with Telegraf and Grafana
  • A load balancer with Traefik

At the end of the exercise you will have a working environment with those services.


Requirements


Edit your hosts file to add the exercise hostname docker.local. This will be used to access services through the load balancer using the Host header.

127.0.0.1 docker.local

Have docker and docker-compose installed and working on your system.


Creating a Rust application with Docker

Just create a hello-world Rust project, which will be the base of our Rust Kafka client:

cargo new hello_world --bin

Add a Dockerfile to the root folder of the application; this file is used to build the Docker image of our hello_world app.


# Official Rust image with the toolchain preinstalled
FROM rust

WORKDIR /app
# Copy the project sources into the image
COPY . .

# Compile and install the binary into the image's PATH
RUN cargo install --path .

# Run the installed binary when the container starts
CMD ["hello_world"]

Test that the Dockerfile builds our app properly:

docker build .

If your image builds properly, congratulations! You have your first Docker image available.

At this point you can use this image on your local machine or push it to a Docker registry to make it available to other Docker hosts.

But we’re going to apply some CD here to automate this process and be aligned with Continuous Delivery from the first step.


Just add the .gitlab-ci.yml template provided by GitLab itself.


# This file is a template, and might need editing before it works on your project.
build-master:
  # Official docker image.
  image: docker:latest
  stage: build
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
  only:
    - master

build:
  # Official docker image.
  image: docker:latest
  stage: build
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
  except:
    - master

This file works much like a Travis CI or Drone configuration file.
It tells GitLab to build your image and push it to the GitLab Docker registry.


Commit and push this file (or add it using the GitLab CI file template) and you will be able to
see your build's progress in GitLab CI. After a successful build you will be able to pull the created image
from any Docker host machine.


You can test your new application using docker run:

$ docker run registry.gitlab.com/gtrias/rust-example
Hello, world!

This will download our previously built image (if it isn’t already available locally) and run it as a container.


If you want to update the local Docker image to the latest version you can run docker pull:

docker pull registry.gitlab.com/gtrias/rust-example

Creating a Kafka cluster using docker-compose


Docker Compose is a layer over the Docker API which allows you to manage collections of Docker containers using a definition file. Since we don’t want to keep remembering all our docker commands, we are going to define our Kafka cluster using a YAML definition file.


We are going to base the cluster on the work already done here.

So let’s clone this project to follow along.


The base docker-compose.yml looks like this:

version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    build: .
    ports:
      - "9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 172.17.0.1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

This specifies two services: one ZooKeeper node and Kafka itself.


You can pull the images, create the containers and run them by executing:

docker-compose up -d

As with the docker run command, this will download the needed images, create the containers
and run them. Note that the -d param means detached: the containers won’t be stopped if you
close the current shell. Additionally, note the build: . in the kafka service; this tells docker-compose
to build the image instead of looking for an already created one.


After this process finishes you will have your containers running and ready to work :)
You can see their status by running:

[genar@thinktravel kafka-docker]$ docker-compose ps
           Name                        Command               State                       Ports
----------------------------------------------------------------------------------------------------------------------
kafka-docker_kafka_1        start-kafka.sh                   Up      0.0.0.0:32768->9092/tcp
kafka-docker_zookeeper_1    /bin/sh -c /usr/sbin/sshd ...    Up      0.0.0.0:2181->2181/tcp, 22/tcp, 2888/tcp, 3888/tcp

You can also check their logs to ensure they are working as expected:

[genar@thinktravel kafka-docker]$ docker-compose logs -f --tail 20
Attaching to kafka-docker_zookeeper_1, kafka-docker_kafka_1
zookeeper_1 | 2019-06-24 19:38:29,608 [myid:] - INFO [main:Environment@100] - Server environment:os.name=Linux
zookeeper_1 | 2019-06-24 19:38:29,608 [myid:] - INFO [main:Environment@100] - Server environment:os.arch=amd64
zookeeper_1 | 2019-06-24 19:38:29,608 [myid:] - INFO [main:Environment@100] - Server environment:os.version=5.1.5-arch1-2-ARCH
zookeeper_1 | 2019-06-24 19:38:29,608 [myid:] - INFO [main:Environment@100] - Server environment:user.name=root
zookeeper_1 | 2019-06-24 19:38:29,609 [myid:] - INFO [main:Environment@100] - Server environment:user.home=/root
zookeeper_1 | 2019-06-24 19:38:29,609 [myid:] - INFO [main:Environment@100] - Server environment:user.dir=/opt/zookeeper-3.4.13
zookeeper_1 | 2019-06-24 19:38:29,612 [myid:] - INFO [main:ZooKeeperServer@836] - tickTime set to 2000

To stop all containers just type docker-compose stop, or if you want to stop and also remove them, run docker-compose down (if the containers are destroyed, all data not stored in volumes will be destroyed too).


Now you can scale Kafka to 2 brokers by running:

docker-compose scale kafka=2

Making our Rust application talk to Kafka


The proper implementation that listens to Kafka is available in this repository.
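
The actual consumer code lives in the repository above; the sketch below is only to give an idea of its shape. It is a minimal Kafka consumer built on the rdkafka crate, and the crate choice, broker address and topic name are assumptions on my part (the values match the docker-compose command used later on), so check the repository for the real implementation.

use rdkafka::config::ClientConfig;
use rdkafka::consumer::{BaseConsumer, Consumer};
use rdkafka::Message;
use std::time::Duration;

fn main() {
    // Broker and topic are hardcoded here as an illustration; the real binary
    // takes them as --brokers / --topics arguments (see the compose file below).
    let consumer: BaseConsumer = ClientConfig::new()
        .set("bootstrap.servers", "kafka:9092")
        .set("group.id", "hello_world")
        .set("auto.offset.reset", "earliest")
        .create()
        .expect("failed to create Kafka consumer");

    consumer
        .subscribe(&["my-topic"])
        .expect("failed to subscribe to topic");

    // Poll forever and print every payload we receive.
    loop {
        match consumer.poll(Duration::from_secs(1)) {
            Some(Ok(message)) => {
                if let Some(Ok(text)) = message.payload_view::<str>() {
                    println!("received: {}", text);
                }
            }
            Some(Err(e)) => eprintln!("kafka error: {}", e),
            None => {} // nothing arrived within this poll interval
        }
    }
}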


Add our Rust app to the Kafka docker-compose.yml file:

version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    build: .
    ports:
      - "9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 172.17.0.1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  rust-client:
    image: registry.gitlab.com/gtrias/rust-example

Add kafka-manager

version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    build: .
    ports:
      - "9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 172.17.0.1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  kafka-manager:
    image: sheepkiller/kafka-manager
    ports:
      - 9000:9000
    environment:
      APPLICATION_SECRET: letmein
      ZK_HOSTS: zookeeper:2181
  rust-client:
    image: registry.gitlab.com/gtrias/rust-example

Generate some data to Kafka to see if it is received by our Rust app.

You can use kafka-manager to create the topics to be consumed by the Rust app.

See examples here: https://wurstmeister.github.io/kafka-docker/
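
If you prefer to produce the test data from code instead of from kafka-manager, here is a minimal sketch of a producer, again assuming the rdkafka crate; the broker address and topic name are guesses that match the compose setup, so adjust them to whatever your broker actually advertises.

use rdkafka::config::ClientConfig;
use rdkafka::producer::{BaseProducer, BaseRecord, Producer};
use std::time::Duration;

fn main() {
    let producer: BaseProducer = ClientConfig::new()
        .set("bootstrap.servers", "kafka:9092")
        .create()
        .expect("failed to create Kafka producer");

    // Enqueue a handful of test messages on the topic our consumer reads.
    for i in 0..5 {
        let payload = format!("test message {}", i);
        let record = BaseRecord::to("my-topic").key("test").payload(&payload);
        if let Err((e, _)) = producer.send(record) {
            eprintln!("failed to enqueue message: {}", e);
        }
    }

    // Wait until the queued messages are actually delivered (or give up after 5s).
    let _ = producer.flush(Duration::from_secs(5));
}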


Docker API


Docker has a full, rich API for managing containers. Here I’m going to cover the most basic commands, and the ones I use most, to manage the containers on a host (a small programmatic example of the same API follows at the end of this section):


List the images available on the Docker host

[genar@thinktravel orgmode]$ docker images

REPOSITORY                  TAG        IMAGE ID       CREATED         SIZE
factorial-backend_backend   latest     06cf665f0d95   7 weeks ago     1.78GB
factorial-backend_sidekiq   latest     06cf665f0d95   7 weeks ago     1.78GB
memcached                   latest     1941054e2bdf   8 weeks ago     62.2MB
elixir                      latest     c4eee7f74185   2 months ago    1.08GB
ruby                        2.5        e86557c9a8ab   2 months ago    873MB
redis                       latest     0f55cf3661e9   4 months ago    95MB
mysql                       5.7        e47e309f72c8   4 months ago    372MB
node                        8          4f01e5319662   4 months ago    893MB
node                        8-alpine   e8ae960eaa9e   4 months ago    66.3MB
jwilder/nginx-proxy         latest     60f01f8052f5   4 months ago    148MB
phpmyadmin/phpmyadmin       latest     c6ba363e7c9b   4 months ago    166MB
ruby                        2.4.1      e7ca4a0b5b6d   21 months ago   684MB

Get an image

docker pull traefik

Run a new container from an image

docker run -d -p 8080:8080 -p 80:80 -v $PWD/traefik.toml:/etc/traefik/traefik.toml traefik

9e4a65ae2982832902c47e9429a8fbc81aee56de5aea414ab33d133da22d513e

This will pull the image for you if it’s not already on the system.


Show running container logs

docker logs -f --tail 20 e9c118b35981

Stop and remove a running container

docker rm -f e9c118b35981

Run a command inside a running docker container

docker exec -ti e9c118b35981 bin/rails console

You can also enter the container’s shell (if it has one installed, which is usually the case):

docker exec -ti e9c118b35981 bash

Pull images, create containers and run them

docker-compose up -d

Note: -d runs them detached; otherwise the process will be bound to your shell session and lost if you close it.


Stop all containers

docker-compose stop

See all containers’ logs

docker-compose logs -f --tail 20

See one container’s logs

docker-compose logs -f --tail 20 traefik

Stop and remove all containers (information not stored in volumes will be lost)

docker-compose down
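
As a side note before moving on: all of the docker and docker-compose commands above end up talking to the Docker Engine API over the local socket (the same socket several services in our compose file mount). If you ever want to hit that API from Rust instead of the CLI, a client crate such as bollard (an assumption on my side, not something this workshop uses) lets you do roughly the following:

// Hypothetical sketch: list containers through the Docker Engine API using the
// bollard and tokio crates (neither is part of the workshop project).
use bollard::container::ListContainersOptions;
use bollard::Docker;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Connect using the platform defaults (on Linux, /var/run/docker.sock).
    let docker = Docker::connect_with_local_defaults()?;

    // Roughly the programmatic equivalent of `docker ps -a`.
    let containers = docker
        .list_containers(Some(ListContainersOptions::<String> {
            all: true,
            ..Default::default()
        }))
        .await?;

    for container in containers {
        println!("{:?} -> {:?}", container.names, container.image);
    }
    Ok(())
}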

Add Traefik for easy access to our services

Usually when you have a running container you also want to publish it to the internet. A very nice way to achieve this is a load balancer that is able to reconfigure itself when the set of containers changes. Traefik does exactly this job in a very neat way.


A picture is worth a thousand words


Add it to your project:


version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    build: .
    ports:
      - 9092
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 172.17.0.1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  kafka-manager:
    image: sheepkiller/kafka-manager
    ports:
      - 9000:9000
    environment:
      APPLICATION_SECRET: letmein
      ZK_HOSTS: zookeeper:2181
    labels:
      - "traefik.port=9000"
      - "traefik.frontend.rule=Host:kafka-manager.docker.local"
  rust-client:
    image: registry.gitlab.com/gtrias/rust-example
    command: hello_world --topics my-topic --brokers kafka:9092
  traefik:
    image: traefik
    command: --web --docker --docker.domain=docker.local --logLevel=DEBUG
    ports:
      # access this with the correct Host header to access the respective container
      - "80:80"
      # management UI
      - "8080:8080"
    volumes:
      # traefik does its magic by reading information about running containers from the docker socket
      - /var/run/docker.sock:/var/run/docker.sock
      - /dev/null:/traefik.toml
