
Azure DevOps agent cannot checkout GitHub repository

Recently I faced an issue with my Azure DevOps self-hosted container agents. They could not check out the git repositories, and the builds stopped due to the default timeout of 60 minutes per run.

This happened for multiple builds, so I had to investigate the reason behind this error.

By checking the logs inside the container, in the C:\agent\_diag folder, I found an error message like the one below:

A session for this agent already exists.
Agent connect error: The task agent xxx already has an active session for owner xxx.. Retrying until reconnected.

By searching online, I found out that this is a reported bug in previous agent versions. To resolve it, I updated and reconfigured the agent. You can update the agent either from the GUI or by creating a new container and installing the latest version of the Azure DevOps agent.

In order to reconfigure the agent, I first opened an interactive shell into it.

docker exec -it agent-name powershell.exe

Then, inside C:\agent, run the commands below to remove and reconfigure the agent.
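The exact values depend on your setup; a minimal sketch of the removal and reconfiguration, assuming the default agent scripts under C:\agent and a PAT that is allowed to manage agent pools (placeholders in angle brackets), would be:

# remove the existing (stuck) agent registration
.\config.cmd remove --unattended --auth pat --token <YOUR_PAT>

# register the agent again on your organization and pool
.\config.cmd --unattended --url https://dev.azure.com/<your-organization> --auth pat --token <YOUR_PAT> --pool <your-pool-name> --agent <agent-name> --replace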

This is a temporary fix for your agent. If the problem persists, you should open a support ticket with Microsoft to troubleshoot the issue.


Install and configure kubernetes dashboard for Docker Desktop local cluster

Kubernetes dashboard is a helpful UI application that presents all the resources inside your k8s cluster. As most people prefer a GUI over individual commands, this tool can make your k8s administration experience better.

When you install Docker Desktop on your local or development machine, you can choose to also include a k8s installation with it. You can find all your Kubernetes settings in the Docker Desktop UI.

The local cluster is composed of only one node, the computer itself.

In order to install the dashboard, first run the below kubectl apply command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml

Then you will need to run kubectl proxy and open the Kubernetes Dashboard UI in your browser, as shown below.
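For reference, kubectl proxy exposes the cluster API on localhost port 8001, and the dashboard deployed by the manifest above is then reachable under the standard proxy path:

kubectl proxy
# dashboard UI:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/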

The sign-in dialog will then appear.

We will use the Token option.

Create and save the below definition as s.yml. Then apply this configuration with kubectl apply -f s.yml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

Create and save the below definition as r.yml. Then apply this configuration with kubectl apply -f r.yml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Then run the below command:

kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"

The output will be your Token.
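Note that on Kubernetes 1.24 or newer, token Secrets are no longer created automatically for service accounts, so the command above may return nothing. In that case you can request a token for the same service account directly (assuming kubectl 1.24+):

kubectl -n kubernetes-dashboard create token admin-user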

Paste the Token into the sign-in dialog shown earlier, and you will have a working dashboard for your local cluster.

You can also skip the Token procedure. Simply run the below command:

kubectl patch deployment kubernetes-dashboard -n kubernetes-dashboard --type 'json' -p '[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--enable-skip-login"}]'

Then you will see a Skip button next to the sign-in options.

Kubernetes dashboard:

Deploy and Access the Kubernetes Dashboard | Kubernetes

Token procedure:

dashboard/creating-sample-user.md at master · kubernetes/dashboard (github.com)


Maintenance Jobs for build agents explained – Azure DevOps

When you need to scale up your infrastructure, you should enable as many automated maintenance options as possible. One of the options available for DevOps agents can be found under Organization Settings -> Agent pools -> Settings.

There you can define automated procedures for cleanup on your agent pools.

In my setup, I changed the days to keep unused working directories to 20.

The working directories of the agent are numbered folders inside C:\agent\work.

When a new build is initiated, a folder for that pipeline is created. If the same pipeline runs more than once, the same working directory is kept and the files are overwritten. For example, let's say my pipeline A is bound to folder 5. Then every time pipeline A runs, folder 5 will be used for the sources (git repositories), builds, artifacts etc. All previous data hosted there will be deleted and written again.
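As a purely hypothetical illustration (folder numbers and pipelines are made up), the work folder of an agent that serves several pipelines could look like this:

C:\agent\work
  1                    <- pipeline B, last used months ago (cleanup candidate)
  5                    <- pipeline A, reused on every run
  8                    <- pipeline C
  SourceRootMapping    <- agent bookkeeping that maps pipeline definitions to folder numbers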

The maintenance jobs will remove any directories that have not been used for x days. In my example I had set 20 days for that task.

You can configure agent pools to periodically clean up stale working directories and repositories. This should reduce the potential for the agents to run out of disk space. Maintenance jobs are configured at the project collection or organization level in agent pool settings.

You can check your history under Organization Settings -> Agent pools -> Maintenance History.

You can also download the log and see how much data has been deleted.

Maintenance jobs:

Create and manage agent pools – Azure Pipelines | Microsoft Docs

Video tutorial on YouTube:


Update variable group using Azure DevOps rest API – pipeline example

Following my previous article about how to update a variable group using POSTMAN, I will now document how to implement the same behavior through a pipeline.

First things first, you will need a PAT. I have included this PAT in a different variable group than the one that I will update. This is because when you update a variable group through the REST API, all the variables inside it are replaced. If you need to retain them, you have to get them first and then add them back to the variable group, as sketched below.
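A minimal PowerShell sketch of that get-merge-put approach, written as it would appear inside an inline pipeline script (it reuses the organization, project and group id shown later in this article; note that secret values are not returned by the GET call and would have to be re-supplied):

$connectionToken="$(PAT)"
$base64AuthInfo= [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(":$($connectionToken)"))
$headers = @{authorization = "Basic $base64AuthInfo"}
$URL = "https://dev.azure.com/GeralexGR/test-project/_apis/distributedtask/variablegroups/5?api-version=5.1-preview.1"
# get the current group, including its existing variables
$group = Invoke-RestMethod -Uri $URL -Headers $headers -Method Get
# add (or overwrite) one variable on the object we just fetched
$group.variables | Add-Member -NotePropertyName "new-var" -NotePropertyValue @{isSecret=$false;value="new-value"} -Force
# put the merged object back, so the existing variables are preserved
Invoke-RestMethod -Uri $URL -Headers $headers -Method Put -Body ($group | ConvertTo-Json -Depth 100) -ContentType "application/json"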

To keep the PAT outside the group being updated, I created a variable group named token-group which holds it. I also made this variable a secret.

The variable group that I will update has the name of var-group and the id of 5.

The pipeline includes two tasks. The first task loops through the variables in the group and prints them out. The second task updates the variable group based on the JSON body that you provide. You should change the organization and project URLs to your own.

trigger:
- none
pr: none
pool:
  vmImage: ubuntu-latest
variables:
- group: token-group
steps:
- task: PowerShell@2
  displayName: Get variables from variable-group
  inputs:
    targetType: 'inline'
    script: |
      $connectionToken="$(PAT)"
      $base64AuthInfo= [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(":$($connectionToken)"))
      $URL = "https://dev.azure.com/geralexgr/test-project/_apis/distributedtask/variablegroups?groupIds=5&api-version=7.1-preview.1"
      $Result = Invoke-RestMethod -Uri $URL -Headers @{authorization = "Basic $base64AuthInfo"} -Method Get
      $Variable = $Result.value.variables | ConvertTo-Json -Depth 100
      Write-Host $Variable
- task: PowerShell@2
  displayName: add variables on variable-group
  inputs:
    targetType: 'inline'
    script: |
      $connectionToken="$(PAT)"
      $base64AuthInfo= [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(":$($connectionToken)"))
      $URL = "https://dev.azure.com/GeralexGR/test-project/_apis/distributedtask/variablegroups/5?api-version=5.1-preview.1"
      $body = '{"id":5,"type":"Vsts","name":"var-group","variables":{"rest-var1":{"isSecret":false,"value":"rest-var-value-1"},"rest-var2":{"isSecret":false,"value":"rest-var-value-2"},"rest-var3":{"isSecret":false,"value":"rest-var-value-3"}}}'
      $Result = Invoke-RestMethod -Uri $URL -Headers @{authorization = "Basic $base64AuthInfo"} -Method Put -Body $body -ContentType "application/json"
      $Variable = $Result.value.variables | ConvertTo-Json -Depth 100
      Write-Host $Variable

After running the pipeline, you will notice a null output from the update of the variable group. This is the expected result; as long as the task has not failed, your variable group will be updated.

Variables inside json