
Install a Windows Azure DevOps agent on a Docker container

In previous articles I have explained how you can install an Azure DevOps agent directly on the operating system in order to create self-hosted agent pools for your projects.

Windows installation example:

Mac OS X installation example:

But what if you need to create multiple agents inside a single virtual machine? The best solution is to use Docker and isolate those agents from each other. We will now examine how we can host our Azure DevOps agents in containers.

The first thing you will need is a virtual machine that runs Docker. Once this requirement is fulfilled you can move on to building the image. In order to build your image you will need a Dockerfile and the installation instructions for the agent.
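
As a reference, below is a minimal Dockerfile sketch for a Linux-based agent image, loosely following the pattern from Microsoft's self-hosted agent documentation. The start.sh entrypoint script and the AZP_* environment variables are assumptions of this sketch, not something defined in this article.

FROM ubuntu:22.04

# Prerequisites commonly needed by the Azure DevOps agent and its tasks
RUN apt-get update && apt-get install -y curl git jq libicu70 && rm -rf /var/lib/apt/lists/*

WORKDIR /azp

# start.sh is expected to download the agent package, configure it with
# ./config.sh --unattended using the AZP_URL / AZP_TOKEN / AZP_POOL
# environment variables and then start it with ./run.sh
COPY start.sh .
RUN chmod +x start.sh

ENTRYPOINT ["./start.sh"]

You could then build the image with docker build -t azp-agent . and start one container per agent, for example docker run -e AZP_URL=https://dev.azure.com/<organization> -e AZP_TOKEN=<pat> -e AZP_POOL=<pool> azp-agent.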

You can read the rest of the article on Medium using the link below:

A detailed deployment video can be found on my Udemy course:

https://www.udemy.com/course/mastering-azure-devops-cicd-pipelines-with-yaml/


Deploy to a Kubernetes cluster with kubectl and Azure DevOps

In this guide we will examine how you can deploy pods on your Azure Kubernetes Service (AKS) cluster with Azure DevOps. To get started you will need to create an AKS cluster under a resource group and connect this cluster with Azure DevOps. After the creation you will need to connect to the cluster and export the kubeconfig file for the Azure DevOps service connection.

You can do that by pressing Connect on the cluster page in the Azure portal.
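
For reference, here is a minimal sketch of exporting the kubeconfig with the Azure CLI; the resource group and cluster names below are placeholders:

# Fetch the AKS credentials and write them to a local kubeconfig file
az aks get-credentials --resource-group my-resource-group --name my-aks-cluster --file ./kubeconfig

# Verify connectivity to the cluster using the exported kubeconfig
kubectl get nodes --kubeconfig ./kubeconfig

The exported kubeconfig is what you then use when creating the Azure DevOps service connection.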

You can read the rest of the article on Medium using the link below:

A detailed deployment video can be found on my Udemy course:
https://www.udemy.com/course/mastering-azure-devops-cicd-pipelines-with-yaml/


Jobs explained in Azure Pipelines – Azure DevOps

Following the article about stages in Azure DevOps, in this article we will examine jobs, which are units of work that group tasks together.

In more detail, we will use the dependsOn keyword along with the condition keyword in order to create dependencies between jobs and indicate which one should run first and whether it will be executed at all.

Main scenario
We have a stage which contains multiple jobs. This stage could be a larger unit of actions, like the deployment to a production environment. A procedure like a deployment can be very complex, with many different components working together for the outcome. In stage1 there are 4 jobs, named job1 to job4. Job1 needs to run first, and job2 depends on job1, so it has to wait for job1 to finish. Job2 will succeed or fail based on the input that the user provides. Then job3 and job4 are executed based on a condition: job3 runs if all the previous jobs have succeeded, and job4 runs if one of the previous jobs failed and the pipeline stopped.

Example 1
We execute the pipeline with the parameter equal to 1 so that job2 fails. We will then see that only job4 runs and job3 is skipped because of the conditions.

[Screenshot: run-1]

Code

trigger:
- none

parameters:
  - name: state
    displayName: select 1 for failure and 0 for success
    type: number
    values:
      - 0
      - 1 

pool:
  vmImage: ubuntu-latest

stages:
- stage: stage1
  displayName: stage1
  jobs:
  - job: job1
    timeoutInMinutes: 60
    displayName:  job1
    steps:
    - script: echo running task1
      displayName: task1

  - job: job2
    dependsOn: job1
    displayName:  job2
    steps:
    - script: exit ${{ parameters.state }}
      displayName: task will succeed or fail based on user input

  - job: job3
    dependsOn: job2
    condition: succeeded()
    displayName:  job3
    steps:
    - script: echo task to be executed on success
      displayName: execute on success

  - job: job4
    condition: failed()
    dependsOn: job2
    displayName:  job4
    steps:
    - script: echo task to be executed on failure
      displayName: execute on failure

Then we execute the pipeline with the parameter equal to 0 so that job2 succeeds. As a result job3 runs and job4 is skipped.

[Screenshot: run-2]

Example 2
We will now execute the same jobs, but we will also use the continueOnError keyword on job2. This allows subsequent jobs to run and prevents the run from being marked as failed. Looking at the execution, we now see that job3 is executed, in contrast with the run that did not have continueOnError. This happens because job2 is treated as partially succeeded and the next jobs continue. Job4 is skipped because the pipeline did not register a failure.

[Screenshot: run-3]
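
The only change compared to the code of Example 1 is the continueOnError flag on job2:

  - job: job2
    dependsOn: job1
    displayName: job2
    # the job is reported as partially succeeded instead of failed,
    # so dependent jobs with a succeeded() condition still run
    continueOnError: true
    steps:
    - script: exit ${{ parameters.state }}
      displayName: task will succeed or fail based on user input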

If we execute the pipeline again with continueOnError and 0 as the parameter, we get the same result as with run-2.

Microsoft Docs:
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=azure-devops&tabs=yaml

YouTube video:


Stages explained in Azure Pipelines – Azure DevOps

Stages in Azure DevOps can be a powerful tool when it comes to complex environments, as you can divide the deployment process into different logical units. For example, you could have different stages for different environments like UAT, Dev and Production, or you could separate functionality for different products or technology stacks like Frontend, Backend, Mobile, etc.

In this article we will examine the dependsOn keyword, which creates dependencies between stages and indicates which should run first and in what sequence.

Main scenario
We have an application that is composed of various components/microservices. Those components need to be compiled into one or more binaries and exported for release on our platform/hosting provider. In order to deploy our application we first need to compile all those components, export them and later on use them in the release tasks.

Example 1
In the example below we have a starting point, which performs some initialization for our environment. Then we continue with the build stages for components A, B and C, and after those we need to produce the artifacts. The artifacts stage needs to wait for all three component stages to complete, so we use dependsOn and provide all the component stages as a list. After the artifacts stage we evaluate the result: if it succeeded we deploy the application in a new stage, otherwise we perform a rollback. The rollback and deploy stages are executed only if the condition of each stage is true, so as to create branching logic.

When you need to depend on more than one stage, you can provide those stages as a list.

Code

trigger:
- none

pool:
  vmImage: ubuntu-latest

stages:
- stage: Stage_Starting_Point
  displayName: Starting point
  jobs:
  - job: Starting_point_Job
    displayName:  Starting_point_Job
    steps:
    - script: echo pre processing
      displayName: pre processing

- stage: Stage_Comp_A
  dependsOn: Stage_Starting_Point
  displayName: Stage Component A
  jobs:
  - job: Job_Comp_A
    displayName:  Job Component A
    steps:
    - script: echo building Component A
      displayName: build component A

- stage: Stage_Comp_B
  dependsOn: Stage_Starting_Point
  displayName: Stage Component B
  jobs:
  - job: Job_Comp_B
    displayName:  Job Component B
    steps:
    - script: echo building Component B
      displayName: build component B

- stage: Stage_Comp_C
  dependsOn: Stage_Starting_Point
  displayName: Stage Component C
  jobs:
  - job: Job_Comp_C
    displayName:  Job Component C
    steps:
    - script: echo building Component C
      displayName: build component C

- stage: Stage_Artifacts
  dependsOn: 
  - Stage_Comp_A
  - Stage_Comp_B
  - Stage_Comp_C
  displayName: Produce artifacts
  jobs:
  - job: Job_Artifacts
    displayName:  Job Artifacts
    steps:
    - script: echo producing artifacts
      displayName: producing artifacts

- stage: Stage_Deploy_Prod
  dependsOn: Stage_Artifacts
  condition: succeeded('Stage_Artifacts')
  displayName: Deploy application Prod
  jobs:
  - job: Job_Deploy_Prod
    displayName:  Job Deployment
    steps:
    - script: echo deploying
      displayName: deploying application Prod

- stage: Stage_Rollback
  dependsOn: Stage_Artifacts
  condition: failed('Stage_Artifacts')
  displayName: Rolling back
  jobs:
  - job: Job_Rollback
    displayName:  Job Rollback
    steps:
    - script: echo rolling back application
      displayName: roll back

Example 2
The second example is the same as the previous one, with one small difference. After the deployment to the production environment we also want to deploy to the disaster recovery (DR) environment. For this scenario the DR stage depends on the production stage and also on the rollback stage, but as we see from the output, the final stage is skipped.

You can specify the conditions under which each stage, job, or step runs. By default, a job or stage runs if it does not depend on any other job or stage, or if all of the jobs or stages that it depends on have completed and succeeded.

As a result, the deploy application DR stage will run only if we remove the dependency on the rollback stage: since the rollback stage is skipped, the final stage is also skipped. A sketch of an alternative, overriding the default condition instead of removing the dependency, is shown after the code below.

Code

trigger:
- none

pool:
  vmImage: ubuntu-latest

stages:
- stage: Stage_Starting_Point
  displayName: Starting point
  jobs:
  - job: Starting_point_Job
    displayName:  Starting_point_Job
    steps:
    - script: echo pre processing
      displayName: pre processing

- stage: Stage_Comp_A
  dependsOn: Stage_Starting_Point
  displayName: Stage Component A
  jobs:
  - job: Job_Comp_A
    displayName:  Job Component A
    steps:
    - script: echo building Component A
      displayName: build component A

- stage: Stage_Comp_B
  dependsOn: Stage_Starting_Point
  displayName: Stage Component B
  jobs:
  - job: Job_Comp_B
    displayName:  Job Component B
    steps:
    - script: echo building Component B
      displayName: build component B

- stage: Stage_Comp_C
  dependsOn: Stage_Starting_Point
  displayName: Stage Component C
  jobs:
  - job: Job_Comp_C
    displayName:  Job Component C
    steps:
    - script: echo building Component C
      displayName: build component C

- stage: Stage_Artifacts
  dependsOn: 
  - Stage_Comp_A
  - Stage_Comp_B
  - Stage_Comp_C
  displayName: Produce artifacts
  jobs:
  - job: Job_Artifacts
    displayName:  Job Artifacts
    steps:
    - script: echo producing artifacts
      displayName: producing artifacts

- stage: Stage_Deploy_Prod
  dependsOn: Stage_Artifacts
  condition: succeeded('Stage_Artifacts')
  displayName: Deploy application Prod
  jobs:
  - job: Job_Deploy_Prod
    displayName:  Job Deployment
    steps:
    - script: echo deploying
      displayName: deploying application Prod

- stage: Stage_Rollback
  dependsOn: Stage_Artifacts
  condition: failed('Stage_Artifacts')
  displayName: Rolling back
  jobs:
  - job: Job_Rollback
    displayName:  Job Rollback
    steps:
    - script: echo rolling back application
      displayName: roll back

- stage: Stage_Deploy_DR
  dependsOn: 
  - Stage_Rollback
  - Stage_Deploy_Prod
  displayName: Deploy application DR
  jobs:
  - job: Job_Deploy_DR
    displayName:  Job Deployment
    steps:
    - script: echo deploying
      displayName: deploying application DR
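
If you prefer to keep both dependencies and still run the DR stage when the rollback stage is skipped, one option is to override the default condition using the dependencies context. This is a sketch based on the conditions documentation linked below, not part of the original example:

- stage: Stage_Deploy_DR
  dependsOn: 
  - Stage_Rollback
  - Stage_Deploy_Prod
  # run when the production deployment succeeded,
  # even if the rollback stage was skipped
  condition: and(succeeded('Stage_Deploy_Prod'), in(dependencies.Stage_Rollback.result, 'Succeeded', 'Skipped'))
  displayName: Deploy application DR
  jobs:
  - job: Job_Deploy_DR
    displayName: Job Deployment
    steps:
    - script: echo deploying
      displayName: deploying application DR

With this condition the DR stage runs as long as the production deployment succeeded, regardless of whether the rollback stage ran or was skipped.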

Microsoft Docs:
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/conditions?view=azure-devops&tabs=yaml

YouTube video: