
Stage, job, and task templates combined for Azure DevOps

In this article we will examine the power of templates for Azure DevOps pipelines. Templates let you define reusable content, logic, and parameters, and they work in two ways: you can insert reusable content with a template, or you can use a template to control what is allowed in a pipeline.

When you build complex automation scenarios for your organization, you will need stages, jobs, and tasks, as these scenarios usually span multiple environments and configuration settings.

You can find some of the reasons why you would follow this approach in my previous article.

In this article we will examine an Azure DevOps pipeline that contains stages, jobs, and tasks. These will be created inside templates and called from the main pipeline. A high-level view of the architecture is shown in the picture below.

My code structure is shown below. There is a folder for the templates and a main pipeline, located in another folder, which references the templates folder.

stage.yml
The stage.yml file contains the template code for the stages. It takes a name parameter, which is used as the stage name.

parameters:
  name: ''

stages:

- stage: ${{ parameters.name }}
  jobs:    
  - template: job.yml
    parameters:
      name: ${{ parameters.name }}_build_job

job.yml
The job.yml file contains the template code for the jobs. It takes a name parameter, which is used as the job name, and a sign parameter that indicates whether the signing task will be executed.

parameters:
  name: ''
  sign: false

jobs:

- job: ${{ parameters.name }}
  displayName: running ${{ parameters.name }} 
  steps:

  - template: step.yml
    parameters:
      name: task1

  - ${{ if eq(parameters.sign, 'true') }}:
    - script: echo sign is requested
      displayName: sign task
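
The sign parameter defaults to false, so the signing script only runs when a caller opts in. As a hypothetical example (not part of the templates shown in this post), stage.yml could enable it by passing the parameter through:

  - template: job.yml
    parameters:
      name: ${{ parameters.name }}_build_job
      sign: true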

step.yml
The step.yml file contains the template code for the steps. It takes a name parameter, which is used in the task.

parameters:
  name: ''

steps:

- script: echo ${{ parameters.name }}
  displayName: running ${{ parameters.name }}

main.yml
The main.yml file is the entry point of the pipeline and the one that will be run. If you need to add more stages, you only have to add another - template entry under stages (an example follows at the end of this section).

trigger: none

variables:
- template: templates/vars.yml  
pool:
  vmImage: $(myagent)

stages:
- template: templates/stage.yml  
  parameters:
    name: "App_Env1"

By executing the pipeline we can see one stage, which is not shown separately in the UI (as it is the only one), and under this stage a job has been created that runs the task1 step we added in our template.
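
If a second environment is needed later, another template entry under stages is enough. For example (App_Env2 is just a placeholder name):

stages:
- template: templates/stage.yml
  parameters:
    name: "App_Env1"
- template: templates/stage.yml
  parameters:
    name: "App_Env2"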

Find out more about Azure DevOps templates in my Udemy course:

Mastering Azure Devops CI/CD Pipelines with YAML | Udemy


Error from server (Forbidden): nodes is forbidden: User “” cannot list resource “nodes” in API group “” at the cluster scope

kubelogin is a client-go credential (exec) plugin implementing Azure authentication. The plugin provides features that are not available in kubectl; it is supported on kubectl v1.11+ and lets you bypass interactive authentication, which means you do not have to enter a device code login when interacting with AKS.

I had to use the tool for managed identity authentication with Azure Kubernetes Service (AKS). In the documentation you can find instructions on how to use it for cases like user login, service principal, and managed identity.

https://azure.github.io/kubelogin/concepts/login-modes/msi.html
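
As a rough sketch of the managed identity flow described in that page (the resource group and cluster names are placeholders), you fetch the kubeconfig and convert it to use MSI before running kubectl:

az aks get-credentials --resource-group rg --name clustername
kubelogin convert-kubeconfig -l msi
kubectl get nodes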

Although I was following the correct instructions, I was struggling with the Forbidden error from the title whenever I executed kubectl commands.

This error was due to the fact that I was not requesting the admin credentials when fetching the kubeconfig.

When I was asking for credentials with the command below, I ended up with the error:

az aks get-credentials --resource-group rg --name clustername

As I had assigned the Azure Kubernetes Service Cluster Admin role to my managed identity, I was able to execute kubectl commands when using:

az aks get-credentials --admin --resource-group rg --name clustername

Get /app/metrics remote error: tls handshake failure – Prometheus

TeamCity can be integrated with Grafana in order to gather statistics about builds and agent activity. To collect those TeamCity metrics you will need a Prometheus installation and then a scrape configuration for the TeamCity target.

When you add TeamCity as a target in Prometheus you have two options for secure HTTPS communication: either trust the unverified certificate and state that in the configuration file, or enable TLS communication with certificates between TeamCity and Prometheus.

If you do not set one of the two options in Prometheus you might end up with the issue:

Get /app/metrics remote error: tls handshake failure

The first option, as discussed, is to skip verification of the certificate while still scraping over HTTPS.

tls_config:
  insecure_skip_verify: true

The second option is to communicate using certificates. First create a new folder inside your Prometheus installation and store the certificates for the TeamCity server. Then you will need to point to the files in the configuration.

tls_config:
  ca_file: /var/prometheus/ssl/signalocean.com.crt
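
For context, this is a minimal sketch of where tls_config sits inside a scrape job. The target host and certificate path are placeholders; the metrics path matches the TeamCity Prometheus endpoint from the error above:

scrape_configs:
  - job_name: teamcity
    scheme: https
    metrics_path: /app/metrics
    static_configs:
      - targets: ['teamcity.example.com:443']
    tls_config:
      ca_file: /var/prometheus/ssl/teamcity.example.com.crt
      # or, for the first option: insecure_skip_verify: true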

The detailed options for tls_config in Prometheus can be found in the documentation below.

Configuration | Prometheus


Scan Azure DevOps repositories for credentials and passwords

DevSecOps practices are important for organizations, especially when it comes to code repositories. Your code should avoid hard-coded passwords and secrets, as a leak may occur. In this guide I will examine how you can scan Azure DevOps repositories in bulk for security leaks such as passwords and secrets with the gitleaks utility.

https://github.com/gitleaks/gitleaks

Simon Wahlin has provided a very useful script that you can use to download all your repositories from Azure DevOps.

Cloning all repositories from Azure DevOps using Azure CLI – Simon Wahlin

When you execute the script, all the repositories will be downloaded inside your project folder.

Then you will need to install gitleaks and run it against each repository.

# Folder that contains all the cloned Azure DevOps repositories
$folder_for_cleanup = "C:\Users\geralexgr\Documents\AzureRepos"

# Run gitleaks against every repository folder and append the findings to a single report
Get-ChildItem $folder_for_cleanup | Sort-Object -Property FullName | ForEach-Object {
    gitleaks detect -s $_.FullName -v >> gitleaks-results.txt
    echo "######################################################################################################" >> gitleaks-results.txt
}

The scan will go through each repository and look for leaks. The output will be stored in the gitleaks-results.txt file.
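
If you prefer one structured report per repository instead of a single text file, gitleaks can also write its findings to a report file. A sketch reusing the same folder variable, assuming gitleaks v8 flags, with report file names chosen here purely for illustration:

# Alternative: one JSON report per repository
Get-ChildItem $folder_for_cleanup | Sort-Object -Property FullName | ForEach-Object {
    gitleaks detect -s $_.FullName --report-format json --report-path "$($_.Name)-gitleaks.json"
}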