
Downgrade docker by installing an older version on linux

Sometimes the latest version of a piece of software includes bugs that have not yet been fixed. This is the case for Docker on Ubuntu 22.04: the bundled BuildKit prints its output on stderr instead of stdout, which causes issues on TeamCity. As a solution I wanted to downgrade Docker to a previous version. First you will need to remove the existing Docker installation. You can do this with the commands below:

sudo apt-get purge -y docker-engine docker docker.io docker-ce docker-ce-cli docker-compose-plugin
sudo apt-get autoremove -y --purge docker-engine docker docker.io docker-ce docker-compose-plugin

sudo rm -rf /var/lib/docker /etc/docker
sudo rm /etc/apparmor.d/docker
sudo groupdel docker
sudo rm -rf /var/run/docker.sock
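Optionally, you can verify that the removal left nothing behind by listing any remaining Docker related packages; the output of the command below should be empty.

dpkg -l | grep -i docker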

Based on your OS version you can go to the Docker downloads page and find a specific version. Some older Docker versions only exist for older OS releases, but you can install them on later versions as well.

Index of linux/ubuntu/dists/ (docker.com): https://download.docker.com/linux/ubuntu/dists/

There could be a case where you are on jammy (22.04) but want to install binaries that were published for bionic (18.04). To install an older version of Docker you will need to download the old binaries by navigating inside the specific release, then selecting pool and finally stable. Inside that last folder you will find all the supported architectures, and you should select the appropriate one. For x64 select amd64, where you can then pick a specific version of each binary.

To install a specific version you will need to download all the packages that you need (some of them are dependencies of docker-ce) with wget and then install them.

Download:

wget https://download.docker.com/linux/ubuntu/dists/bionic/pool/stable/amd64/docker-scan-plugin_0.17.0~ubuntu-bionic_amd64.deb

Install:

sudo dpkg -i docker-scan-plugin_0.17.0~ubuntu-bionic_amd64.deb
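A complete installation will typically also need containerd.io, docker-ce-cli and docker-ce (and optionally docker-compose-plugin). A sketch of the full sequence is shown below; the filenames and version numbers are only examples, so replace them with the ones actually listed in the pool for the release you chose.

wget https://download.docker.com/linux/ubuntu/dists/bionic/pool/stable/amd64/containerd.io_1.6.9-1_amd64.deb
wget https://download.docker.com/linux/ubuntu/dists/bionic/pool/stable/amd64/docker-ce-cli_20.10.21~3-0~ubuntu-bionic_amd64.deb
wget https://download.docker.com/linux/ubuntu/dists/bionic/pool/stable/amd64/docker-ce_20.10.21~3-0~ubuntu-bionic_amd64.deb

sudo dpkg -i containerd.io_1.6.9-1_amd64.deb
sudo dpkg -i docker-ce-cli_20.10.21~3-0~ubuntu-bionic_amd64.deb
sudo dpkg -i docker-ce_20.10.21~3-0~ubuntu-bionic_amd64.deb

docker --version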

Get powershell command result as string

Sometimes you may end up with wrong results in PowerShell because of the type of the returned object. A detailed demonstration follows, where the returned object is not a string and a comparison against it is not evaluated correctly.

For example, let's assume that we need to check the Docker status from PowerShell and catch this result through the string that is returned. When Docker is not running you can expect an error message saying that the client cannot connect to the Docker engine.

By capturing the result of the docker info command into a variable we can see that the returned object is an Object array in PowerShell, not a string.

When you try to use the Contains function on this object to evaluate the Docker status, you will end up with a false result, because on an array Contains looks for a whole matching element instead of performing a substring check.

To resolve this issue you should convert the result to a string with the Out-String cmdlet.

Then, when you evaluate the expression with the Contains function, the check is performed as expected and the correct result is returned.
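Putting it together, a minimal sketch of the check could look like the following; the "error during connect" substring is just an example of the text to look for, and 2>&1 is used so that the error output of docker info is also captured.

# capture the output of docker info, including the error stream
$result = docker info 2>&1

# the variable is an array of output objects, not a single string
$result.GetType()

# Contains on the array looks for a whole matching element, so this returns False
$result.Contains("error during connect")

# convert the output to a single string first
$text = $result | Out-String

# now Contains performs a substring check and evaluates as expected
$text.Contains("error during connect")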


Run jobs with containers on Azure batch service

Azure Batch can be a great tool for instant batch processing, as it creates and manages a pool of compute nodes (virtual machines), installs the applications you want to run, and schedules jobs to run on the nodes. Sometimes, however, a container can be a more appropriate solution than a virtual machine for simplicity and scaling. In this guide I will explain how you can use containers with the Batch service in order to run jobs and tasks.

Use the Azure Compute Gallery to create a custom image pool – Azure Batch | Microsoft Learn

First things first, you will need an Azure container registry, or another public or private registry, to store your container image. I have already created mine and pushed my batchcontainer image to it, which is a .NET microservice that returns a hello world message as output.

using System;

namespace samplebatch
{
    internal class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine($"Hello {args[0]}");
        }
    }
}

https://github.com/geralexgr/samplebatch
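Assuming the service has a Dockerfile, building the image and pushing it to an Azure container registry would look roughly like the commands below, where repo stands for the name of your own registry.

# authenticate against the registry
az acr login --name repo

# build the image and tag it with the registry address
docker build -t repo.azurecr.io/batchservice:latest .

# push the image so that the Batch pool can pull it
docker push repo.azurecr.io/batchservice:latest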

The next step is to create your Batch service account. The place where you set your container as the workload is the pool creation. Pools consist of the compute nodes that will execute your jobs, and there you will add a new pool which will host containers from the image that you pushed earlier.

In the node selection you will have to select Marketplace as the Image type, and specifically microsoft-azure-batch with ubuntu-server-container version 20-04-lts. Then you will need to select Custom in the container configuration and add your container registry by pressing the hyperlink.

selection of custom container image on the batch service

Then you will need to input the username and password for the container registry as well as the registry URL.

When you have your pool ready you can go and create your job. You can leave the default settings on the job creation but you should specify the pool where the job will run.

Then you can create a task or multiple tasks for your job and provide the commands or inputs for them. In my case I created a task named kati whose command line is my name. This will be provided as input to my container, which is the .NET microservice that prints a hello world message based on the input.

The important thing is to fill in the image name from your registry. You can also provide any container run options that you want the node to use, such as directory mounts.

Example: repo.azurecr.io/batchservice:latest

As a result the output would be Hello gerasimos
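Run locally, the equivalent would be something like the command below, assuming the image entrypoint is the samplebatch console application shown earlier.

# the task command line becomes the argument of the console application
docker run repo.azurecr.io/batchservice:latest gerasimos
# prints: Hello gerasimos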

The output of the run can be found in the stdout.txt file, which is located in the task pane. You can also find a stderr.txt file which logs errors or failures that may appear during the execution.

Lastly, you can locate your job execution by navigating to the nodes, where you can find a history of your tasks. As you can see I have two successful task executions and none failed.



Docker Desktop as background task on Windows server

Docker Desktop is not easy to run as a background task on a Windows server. A common issue that you may encounter is that although the service is running, Docker stops working when the user logs out from the machine.

Error during connect: In the default daemon configuration on Windows, the docker client must be run with elevated privileges to connect.: Post
open //./pipe/docker_engine: The system cannot find the file specified
Process exited with code 1

To bypass this behavior you can keep the user session alive on the server by using lock instead of sign out on the Windows Server machine.

If the machine restarts, however, the Docker service will again stop working in the background. To work around this you can use an external utility from Sysinternals to automatically log on the user.

https://learn.microsoft.com/en-us/sysinternals/downloads/autologon

When you unzip the download, you will find the exe application, which you should run in order to enter the user password.

An automatic logon will then be configured using the password that you provided, which is stored encrypted on the machine.
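If you want to double-check what was configured, the automatic logon settings land in the Winlogon registry key; a quick look from PowerShell could be the following (the password itself is not kept there in plain text).

# inspect the automatic logon values written by Autologon
Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon' |
    Select-Object AutoAdminLogon, DefaultUserName, DefaultDomainName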

After the reboot, the Docker Desktop service will run without any manual action.