
Automatic rollback procedure for Azure DevOps

Azure DevOps pipelines provide a variety of tools for automated procedures. One mechanism that administrators can build using the YAML structure is an automated rollback during a deployment.

This means that after a deployment you can revert to the previous state using your YAML tasks, without having to redeploy. Another case would be a broken deployment that is identified by monitoring tools; a validation step could then approve or reject the final release. This is exactly what the image below depicts. After releasing a version we have a validation step that requires manual approval from an administrator. If the validation is approved the release will proceed, otherwise the rollback will be triggered.

This mechanism is described below in YAML. The release stage includes the release, validation and rollback jobs. The release job performs the actual release. The validation job depends on the release job and will continue only if it is approved. The rollback job will run only if the validation failed, which means that an administrator rejected the approval.

trigger: none
pr: none

stages:

- stage: releaseStage
  jobs:

  - deployment: release
    displayName: Release
    environment:
      name: dev
      resourceType: VirtualMachine
    strategy:
      runOnce:
        deploy:
          steps:
            - task: PowerShell@2
              displayName: hostname
              inputs:
                targetType: 'inline'
                script: |
                    # deployment script here...
                    hostname
  
  - job: validation
    dependsOn: release
    pool: server
    steps:
    - task: ManualValidation@0
      inputs:
        notifyUsers: 'admin@domain.com'
        instructions: 'continue?'
        onTimeout: reject

  - deployment: rollback
    displayName: rollback
    dependsOn: validation
    condition: failed()
    environment:
      name: dev
      resourceType: VirtualMachine
    strategy:
      runOnce:
        deploy:
          steps:
            - task: PowerShell@2
              displayName: rolling back
              inputs:
                targetType: 'inline'
                script: |
                    # rollback script here...
                    Write-Host "rollback"

When the release is verified by the administrator, the rollback will be skipped. This is the case when the validation is approved by the user.

The validation task will ask the user for a review.

On the other hand, if the validation is rejected, the rollback job will run.
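
The failure check can also be scoped to a specific job, which is useful if the rollback ever depends on more than one job. A minimal sketch, assuming the job names from the YAML above:

  - deployment: rollback
    dependsOn:
    - release
    - validation
    # run only when the validation job itself failed (approval rejected)
    condition: failed('validation')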


Run jobs with containers on Azure Batch service

Azure Batch can be a great tool for instant batch processing, as it creates and manages a pool of compute nodes (virtual machines), installs the applications you want to run, and schedules jobs to run on the nodes. Sometimes, however, a container can be a more appropriate solution than a virtual machine for simplicity and scaling. In this guide I will explain how you can use containers with the Batch service in order to run jobs and tasks.


First things first, you will need an Azure container registry, or another public or private registry, to store your container image. I have already created mine and pushed my batchcontainer image inside, which is a .NET microservice that returns a hello world message as its output.

using System;

namespace samplebatch
{
    internal class Program
    {
        // Prints a greeting using the first command-line argument,
        // which the Azure Batch task supplies on its command line.
        static void Main(string[] args)
        {
            Console.WriteLine($"Hello {args[0]}");
        }
    }
}

https://github.com/geralexgr/samplebatch

The next step would be to create your Batch service account. The part where you set your container as the workload is when you create a pool. Pools consist of the compute nodes that will execute your jobs, and there you will add a new pool which will host containers from the image that you pushed earlier.

In the node selection you will have to select Marketplace as the Image type, and specifically the microsoft-azure-batch publisher with the ubuntu-server-container offer in the 20-04-lts version. Then you will need to select Custom in the container configuration and add your container registry by pressing the hyperlink.

selection of custom container image on the batch service

Then you will need to input the username and password for the container registry as well as the registry URL.

When you have your pool ready you can go and create your job. You can leave the default settings during job creation, but you should specify the pool where the job will run.

Then you can create a task, or multiple tasks, for your job and provide the commands or inputs for them. In my case I created a task named kati with my name as the command. This will be provided as input to my container, which is a .NET microservice that prints a hello world message based on the input.

The important thing is to fill in the image name from your repository. You can also provide any container run options you want this node to have, like mounting directories etc.

Example: repo.azurecr.io/batchservice:latest

As a result the output would be Hello gerasimos

The output of the run can be found in the stdout.txt file, which is located in the task pane. You can also find a stderr.txt file, which will log any errors/failures that appear during the execution.

Lastly, you can locate your job execution by navigating to the nodes, where you can find a history of your tasks. As you can see, I have two successful task executions and none failed.



Jobs explained in Azure Pipelines – Azure DevOps

Following the article about stages in Azure DevOps, in this article we will examine jobs, which are units of tasks grouped together.

In more detail, we will check the dependsOn keyword along with the condition keyword in order to create dependencies between various jobs and indicate which should run first and whether it will be executed.

Main scenario
We have a stage that contains multiple jobs. This stage could be a larger unit of actions, like the deployment to a production environment. A procedure like a deployment can be very complex, with many different components working together for the outcome. In stage1 there are 4 jobs, named job1 to job4. Job1 needs to run first, and job2 depends on job1, so it has to wait for job1's execution. Job2 will succeed or fail based on the input that the user provides. Then job3 and job4 will be executed with a condition: job3 will be executed if all the previous jobs have succeeded, and job4 will be executed if one of the previous jobs failed and the pipeline stopped.

Example 1
We execute the pipeline with the parameter equal to 1 in order to have job2 fail. Then we will see that only job4 runs, while job3 will be skipped because of the conditions.

run-1

Code

trigger:
- none

parameters:
  - name: state
    displayName: select 1 for failure and 0 for success
    type: number
    values:
      - 0
      - 1 

pool:
  vmImage: ubuntu-latest

stages:
- stage: stage1
  displayName: stage1
  jobs:
  - job: job1
    timeoutInMinutes: 60
    displayName:  job1
    steps:
    - script: echo running task1
      displayName: task1

  - job: job2
    dependsOn: job1
    displayName:  job2
    # continueOnError: true   # not set in Example 1; uncommented for Example 2 (run-3)
    steps:
    - script: exit ${{ parameters.state }}
      displayName: task will succeed or fail based on user input

  - job: job3
    dependsOn: job2
    condition: succeeded()
    displayName:  job3
    steps:
    - script: echo task to be executed on success
      displayName: execute on success

  - job: job4
    condition: failed()
    dependsOn: job2
    displayName:  job4
    steps:
    - script: echo task to be executed on failure
      displayName: execute on failure

Then we execute the pipeline with the parameter equal to 0 in order to have job2 succeed. As a result job3 will run and job4 will be skipped.

run-2

Example 2
We will now execute the same jobs, but we will also use the continueOnError keyword on job2. This allows subsequent jobs to run and prevents the task failure from failing the pipeline. Looking at the execution, we will now see that job3 is executed, in contrast with the run that did not have continueOnError. This happens because job2 is handled as partially failed, and the next jobs continue. Job4 was skipped because the pipeline did not recognize a failure.

run-3

If we execute the pipeline again with continueOnError and 0 as the parameter, we will get the same result as with run-2.
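
If you want job4 to react even to a partially failed job2, you can inspect the dependency result directly instead of relying on failed(). A minimal sketch using the job names above; SucceededWithIssues is the result a job gets when a task fails under continueOnError:

  - job: job4
    dependsOn: job2
    # runs when job2 failed outright or succeeded with issues
    condition: in(dependencies.job2.result, 'Failed', 'SucceededWithIssues')
    steps:
    - script: echo task to be executed on failure
      displayName: execute on failure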

Microsoft Docs:
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=azure-devops&tabs=yaml



Azure DevOps best practices – jobs and stages

Azure DevOps stages and jobs in build pipelines can save you a lot of minutes when you deal with more complex setups and heavy projects. In this article I will analyze how you can use jobs and stages to maximize performance and parallelism.

Let's take the starter pipeline example on Azure DevOps (code below). Although this is perfect for a single-task scenario, it will execute the tasks in a row without parallelism. The output of this pipeline will be two printed messages in your debug window.

trigger:
- main

pool:
  vmImage: ubuntu-latest

steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script'

- script: |
    echo Add other tasks to build, test, and deploy your project.
    echo See https://aka.ms/yaml
  displayName: 'Run a multi-line script'

The two included tasks (script tasks) are located under steps. Those steps are part of a job that is not explicitly declared, as it is the only one.

The hierarchy of a pipeline is the following: Stages -> Jobs -> Steps -> Tasks
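
A minimal skeleton of that hierarchy in pipeline YAML (all names are placeholders):

stages:
- stage: stage1          # a stage groups jobs
  jobs:
  - job: job1            # a job groups steps and runs on one agent
    steps:
    - script: echo hello # a script step is itself a task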

In the starter pipeline the two tasks included under steps belong to a single job. The big advantage of jobs is that they can run in parallel. Take for example a big pipeline that includes a lot of jobs, 30 or more. It would be a waste of time to wait for all these jobs to execute one by one. Keep in mind that one job can also fail, and you would have to start the process again.

If you have a single job and all tasks are included in it, you have to use continueOnError if you do not want the pipeline to stop on a task failure.
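
A minimal sketch of that single-job setup, with an intentionally failing inline script:

jobs:
- job: singleJob
  steps:
  - task: PowerShell@2
    displayName: task that may fail
    continueOnError: true   # the job proceeds even when this task fails
    inputs:
      targetType: 'inline'
      script: |
        throw "simulated failure"
  - script: echo this task still runs
    displayName: next task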

The example below shows two jobs that can be executed in parallel, depending on your Azure DevOps configuration (this may incur more costs on your subscription). As you can see, the failure of one job will not affect the other job, which will still be executed. If you are eligible for parallel jobs, these can run simultaneously as long as you do not have constraints between them.

pool:
  vmImage: ubuntu-latest

trigger: none
pr: none  

jobs:
  - job:
    displayName: Install npm and yarn tools 
    steps:
      - task: Npm@1
        displayName: Install npm
        inputs:
          command: 'install'
          workingDir: '$(Pipeline.Workspace)/s'

      - task: PowerShell@2
        displayName: Install yarn
        inputs:
          targetType: 'inline'
          script: |
            npm install -g yarn

  - job:
    displayName: build code and publish artifact
    steps:

      - task: PowerShell@2
        displayName: build code
        inputs:
          targetType: 'inline'
          script: |
            Write-Host "this task will be executed"
      - task: PowerShell@2
        displayName: this task will fail
        inputs:
          targetType: 'inline'
          script: |
            fail
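
Conversely, if you need one job to wait for another, add an explicit dependsOn constraint between them; a minimal sketch with placeholder names:

jobs:
  - job: install
    displayName: install tools
    steps:
      - script: echo installing tools

  - job: build
    displayName: build code
    dependsOn: install   # forces this job to run after install completes
    steps:
      - script: echo building after install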

Let's now examine the power of stages. A stage includes multiple jobs, as you can see from the example below. A typical production environment will include stages for DEV -> QA -> Production deployments.

The big advantage of this approach is that you can rerun failed jobs separately, and also rerun a whole stage in isolation from the others. As a typical build pipeline may take many minutes to complete, by using stages you do not have to rerun the whole pipeline on a task failure.

trigger:
- none

pool:
  vmImage: ubuntu-latest

stages:
- stage: BuildApp
  displayName: Build Apps
  jobs:
  - job: BuildFrontendApp
    displayName: Build Frontend App
    steps:
    - script: echo building frontend app
      displayName: build frontend app
    - script: echo running unit tests for frontend app
      displayName: unit tests frontend

  - job: BuildBackendApp
    displayName: Build Backend App
    steps:
    - script: echo building backend app
      displayName: build backend app
    - script: echo running unit tests for backend app
      displayName: unit tests backend

- stage: DeployDev
  displayName: Deploy to DEV environment 
  jobs:
  - job: DeployFrontendDev
    displayName: Deploy frontend to DEV
    steps:
    - checkout: none
    - script: echo deploying frontend app to DEV
      displayName: deploy frontend app to DEV

  - job: DeployBackendDev
    displayName: Deploy backend to DEV
    steps:
    - checkout: none
    - script: echo deploying backend app to DEV
      displayName: deploy backend app to DEV

- stage: DeployProd
  displayName: Deploy to PROD environment
  jobs:
  - job: Failjob
    displayName: Running this job will fail
    steps:
    - checkout: none
    - script: kati # 'kati' is not a valid command, so this job fails intentionally
      displayName: deploy frontend app to PROD

  - job: DeployBackendProd
    displayName: Deploy backend to PROD
    steps:
    - checkout: none
    - script: echo deploying backend app to PROD
      displayName: deploy backend app to PROD

https://docs.microsoft.com/en-us/azure/devops/pipelines/licensing/concurrent-jobs?view=azure-devops&tabs=ms-hosted
