Create a CI/CD pipeline with GitLab for container deployments

To create a CI/CD pipeline with GitLab's built-in functionality, you first need to create an appropriate .gitlab-ci.yml file. This is the file in which the pipeline steps are described.

This file should be placed at the root of the branch, and every time a commit is pushed to the remote repository the steps will run. Instructions are provided in the GitLab documentation.
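
As a minimal sketch, a .gitlab-ci.yml for a Docker-based build and deployment could look roughly like the following. The stage layout and the my-app image name are placeholders, and it assumes the runner can reach the Docker daemon (for example through the mounted docker.sock described later):

stages:
  - build
  - deploy

build:
  stage: build
  script:
    # my-app is a placeholder image name
    - docker build -t my-app:latest .

deploy:
  stage: deploy
  script:
    # replace any previously running container with the newly built image
    - docker rm -f my-app || true
    - docker run -d --name my-app -p 8080:80 my-app:latest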

For this example I chose GitLab Runner as the build tool, deployed as a Docker container.

To install GitLab Runner as a container, perform the steps below:

Download the image and start the container:

docker run -d --name gitlab-runner --restart always \
    -v /srv/gitlab-runner/config:/etc/gitlab-runner \
    -v /var/run/docker.sock:/var/run/docker.sock \
    gitlab/gitlab-runner:latest

Create a persistent volume

docker volume create gitlab-runner-config

Stop and remove the container if it was already started in the previous step, then run it again with the mapped volume:

docker run -d --name gitlab-runner --restart always \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v gitlab-runner-config:/etc/gitlab-runner \
    gitlab/gitlab-runner:latest

You will see the container running

Register the runner with your GitLab instance. You can get the registration token and the GitLab URL from your repository's CI/CD settings.

Exec into the container and run gitlab-runner register.
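
For example, you can start the interactive registration from the host like this; the values you enter come from your repository's CI/CD settings:

docker exec -it gitlab-runner gitlab-runner register
# you will be prompted for the GitLab URL, the registration token,
# a description, optional tags and the executor (e.g. shell or docker)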

Start the runner

gitlab-runner start

The runner should now appear as registered in your GitLab environment.

Perform a commit and push changes to your repository

The pipeline job should have started.

Check the pipeline and see its status

In my case the job was not successful, and by checking the logs I could verify that DNS resolution could not be established.

To fix that, you should add an entry for your named GitLab container to your GitLab Runner container. Unfortunately there are no editors like vim or nano installed on gitlab-runner; however, you can work around this by echoing a value into the /etc/hosts file.
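
A minimal sketch of that workaround, where 172.17.0.2 and gitlab.example.local are placeholders for your GitLab container's IP address and FQDN:

docker exec gitlab-runner sh -c 'echo "172.17.0.2 gitlab.example.local" >> /etc/hosts'
# replace the IP and hostname with the values of your own GitLab container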

It is also important that your local computer can resolve your GitLab deployment by FQDN. This is necessary because Docker needs to resolve this name in order to interact with the instance.

After those changes you will be able to run your pipeline successfully.

Provision gitlab-ce on docker with Portainer

Portainer is a fantastic tool that provides a GUI for managing your container workloads more easily than with the command line. It is free to use in its Community Edition, and the documentation describes an installation that takes approximately 5 minutes to complete.

In this article I will show you how to use portainer and its GUI to deploy a gitlab container on your setup.

If you use the default setup instructions, your instance will be available on localhost on port 9000.

docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v portainer_data:/data \
    portainer/portainer-ce

You can access it at http://127.0.0.1:9000/, where you will be prompted to log in with the credentials you specified during the initial setup.

Under Containers you can create a new container by clicking the Add container button.

Under Volumes you should create a new persistent volume, which will be consumed by GitLab for data persistence.

persistent volume creation for gitlab container

You can either create a new container and specify the Docker Hub image directly, or pull the image first and then use it to deploy your instance. I preferred the second way, so I pulled the image locally.

docker pull gitlab/gitlab-ce

When the pull completes you should see a confirmation message.

From Containers press + Add container. Set the requested name (gitlab) and specify the image.

According to the documentation, three volume mappings are needed in order to store GitLab data:

$GITLAB_HOME/data:/var/opt/gitlab
$GITLAB_HOME/logs:/var/log/gitlab
$GITLAB_HOME/config:/etc/gitlab

Under environment variables, add the needed value as described in the GitLab documentation. Personally, I used /Users/username/Documents/Gitlab on my computer.

Press deploy container and the creation procedure should start.

When you first launch your container, checking the logs will show that the installation steps are running. This operation can take 5-10 minutes.

Then you will see your container running. Stop the container and also apply the configurations below:

From restart policies, select always:

From ports configuration add the below bindings (80:80, 443:443)

By mapping ports 80 and 443 to your host you will be able to access gitlab from your browser using localhost:80.

Also add your hostname and domain from the Network tab.

When the deployment is finished, if you access the localhost address you will see the setup screen.

You can find instructions on how to install GitLab through the CLI or a Dockerfile in the GitLab documentation.
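
For reference, the CLI equivalent of the Portainer configuration above is roughly the following; the hostname is an example and GITLAB_HOME should point to the directory you chose (e.g. /Users/username/Documents/Gitlab):

docker run --detach \
    --hostname gitlab.example.com \
    --publish 80:80 --publish 443:443 \
    --name gitlab \
    --restart always \
    --volume $GITLAB_HOME/config:/etc/gitlab \
    --volume $GITLAB_HOME/logs:/var/log/gitlab \
    --volume $GITLAB_HOME/data:/var/opt/gitlab \
    gitlab/gitlab-ce:latest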

The default username for logging in is root.

To verify that persistent storage is working as expected, create a new test project, commit a file, stop the container, and then start it again.

Stop container

Login to Gitlab again, and your test-project should be there for you.

Configure HAProxy to load balance CentOS httpd containers

In this article I will explain an HAProxy installation on Docker CentOS images. First things first, three CentOS containers should be deployed. Two of them will be simple web servers with httpd installed, and the third one will have haproxy installed to load balance between the two web servers.

To deploy the three new CentOS containers, you should first download the latest CentOS image.

Just pull the CentOS Docker image from Docker Hub using the command below:

docker pull centos

And then deploy 3 instances of it:

docker container run -it --name centos-lab1 -d centos:latest
docker container run -it --name centos-lab2 -d centos:latest
docker container run -it --name centos-lab3 -d centos:latest

Verify that the containers have been deployed successfully and execute some interactive commands on them.

docker exec -it centos-lab1 uname -r

You will get a result like the below, depending on the image you have installed.

4.19.76-linuxkit

Install the httpd package on the two web servers. I am using Portainer so that I can interact with the containers more easily; you could also execute an interactive command as shown below.

yum install httpd
docker exec -it centos-lab2 yum install httpd

Lastly, install the haproxy package on the third container, which will be used as the load balancer.

yum install haproxy
[root@ad1d23c22355 /]# haproxy -v
HA-Proxy version 1.8.15 2018/12/13
Copyright 2000-2018 Willy Tarreau

Verify connectivity between your containers. Based on the default network that has been deployed on my computer, I get the following three IPs:

172.17.0.4, 172.17.0.5, 172.17.0.6
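
If you need to look these addresses up yourself, one quick way is docker inspect with a Go template (assuming the containers are on the default bridge network):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' centos-lab1
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' centos-lab2
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' centos-lab3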

Create a test HTML page on each web server that will be used to identify the node.

echo "this is centos-lab1" > /var/www/html/index.html
echo "this is centos-lab2" > /var/www/html/index.html

Enable and start the httpd server on the web servers, and test that their pages are up and running by running a curl from the load balancer (server 3). You will get a response like the below:

apache is running and responding on web servers 1,2
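
As a rough sketch of those checks, assuming the container IPs listed earlier and an image with systemd enabled (see the note below):

# inside centos-lab1 and centos-lab2
systemctl enable --now httpd

# from the load balancer container (centos-lab3)
curl http://172.17.0.4
curl http://172.17.0.5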

In order to use systemctl and systemd commands, check my previous article about deploying a CentOS image with systemd enabled.

Edit the HAProxy configuration under /etc/haproxy/haproxy.cfg and add your two web servers as backend servers in the app section.

haproxy configuration
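
A minimal sketch of the relevant part of /etc/haproxy/haproxy.cfg, assuming the web server IPs shown earlier:

frontend main
    bind *:80
    default_backend app

backend app
    balance roundrobin
    server centos-lab1 172.17.0.4:80 check
    server centos-lab2 172.17.0.5:80 check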

Restart haproxy so that configuration changes are loaded:

systemctl restart haproxy

Curl the load balancer and verify from the results that the load is balanced between the centos-lab1 and centos-lab2 web servers:
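
Assuming the load balancer received the IP 172.17.0.6 as shown earlier, consecutive requests should alternate roughly like this:

curl http://172.17.0.6
this is centos-lab1
curl http://172.17.0.6
this is centos-lab2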