Following my previous article about how to update a variable group using Postman, I will now document how to implement the same behavior through a pipeline.
First things first, you will need a PAT (personal access token). I have stored this PAT in a different variable group from the one that I will update. This is because when you update a variable group, all the variables inside it get lost. If you need to retain them, you first have to fetch them and then add them back into the variable group.
For this reason I have created a variable group named token-group which holds my PAT. I also marked this variable as a secret.
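To illustrate the retention logic, here is a minimal sketch of that merge step. The function name `merge_variables` and the sample variable names are my own for illustration, not part of the actual pipeline:

```python
def merge_variables(existing, new):
    """Combine the variables fetched from the group (GET response)
    with the new variables, so the PUT does not discard them.
    New values win on key collisions."""
    merged = dict(existing)
    merged.update(new)
    return merged

# Hypothetical example: one variable fetched from the group,
# one variable we want to add.
existing = {"db-conn": {"value": "Server=dev;"}}
new = {"new-var": {"value": "hello", "isSecret": False}}
payload_vars = merge_variables(existing, new)
```

The resulting `payload_vars` dictionary would then go into the `variables` property of the PUT request body, so the pre-existing variables survive the update.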
The variable group that I will update is named var-group and has the ID 5.
The pipeline includes two tasks. The first task loops through the variables in the group and prints them out. The second task updates the variable group based on the JSON that you provide. You should change the organization and project URLs to your own.
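The original snippet was an embedded gist that is not preserved in this copy. As a rough sketch of what the two tasks could look like, assuming the organization `myorg`, project `myproject`, group ID 5, and a secret named `pat` inside token-group (all placeholders to replace with your own values):

```yaml
variables:
- group: token-group   # holds the PAT secret, referenced as $(pat)

steps:
- script: |
    # Task 1: fetch the variable group and print its variables
    curl -s -u :$(pat) \
      "https://dev.azure.com/myorg/myproject/_apis/distributedtask/variablegroups/5?api-version=5.1-preview.1"
  displayName: print variable group
- script: |
    # Task 2: update the variable group (PUT replaces all existing variables)
    curl -s -u :$(pat) \
      -X PUT -H "Content-Type: application/json" \
      -d '{"id":5,"type":"Vsts","name":"var-group","variables":{"new-var":{"value":"hello"}}}' \
      "https://dev.azure.com/myorg/myproject/_apis/distributedtask/variablegroups/5?api-version=5.1-preview.1"
  displayName: update variable group
```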
I was struggling to update a variable group using the Azure DevOps REST API. In this article I will document the procedure using Postman.
First things first, you should create a PAT (personal access token) in order to interact with the API. If you do not know how to create one, you should read my previous article about running a build through the REST API, in which I also documented the creation of a PAT.
Then you will need to add the access token under the Authorization tab of Postman, using the type Basic Auth. The PAT should be added as plain text in the password field, leaving the username empty.
Then you will need to add Content-Type: application/json under the Headers tab.
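For reference, Basic authentication with a PAT means an empty username and the PAT as the password; Postman builds the header for you, but a small sketch of the equivalent headers (the PAT value is a placeholder) looks like this:

```python
import base64

pat = "my-personal-access-token"  # placeholder PAT, replace with your own

# Basic auth: base64 of "username:password" with an empty username
token = base64.b64encode(f":{pat}".encode()).decode()
headers = {
    "Authorization": f"Basic {token}",
    "Content-Type": "application/json",
}
```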
Then you will have to construct your URL. This should be of the format:
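The original snippet is not preserved in this copy; based on the variable groups endpoint of the Azure DevOps REST API, the URL generally looks like the following, where organization, project, and group ID are placeholders:

```
https://dev.azure.com/{organization}/{project}/_apis/distributedtask/variablegroups/{groupId}?api-version=5.1-preview.1
```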
Important: You should use api-version=5.1-preview.1. If you use the latest version, you will get an error on the call. From what I found online, this is a known bug that has not been fixed.
In my example I wanted to update the variable group with the ID 5 and add a variable named new-var. The body of your request should look like the one below. Keep in mind that we use the PUT HTTP verb to update the variable group, which means everything currently inside the variable group will be discarded. If you followed all the steps correctly, you will see the output JSON below, which indicates that the procedure succeeded.
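The embedded request body is not preserved in this copy; a minimal body consistent with the description above (the variable value is a placeholder) could look like:

```json
{
  "id": 5,
  "type": "Vsts",
  "name": "var-group",
  "variables": {
    "new-var": {
      "value": "hello",
      "isSecret": false
    }
  }
}
```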
Lastly you can locate your new variable inside the variable group.
Azure DevOps stages and jobs on build pipelines can save you a lot of time when you deal with more complex setups and heavy projects. In this article I will analyze how you can use jobs and stages to maximize performance and parallelism.
Let's take the starter pipeline example on Azure DevOps (code below). Although this is perfect for a single-task scenario, it executes the tasks one after another without parallelism. The output of this pipeline is two printed messages in your debug window.
```yaml
steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script'
- script: |
    echo Add other tasks to build, test, and deploy your project.
    echo See https://aka.ms/yaml
  displayName: 'Run a multi-line script'
```
The two tasks that are included (script tasks) are located under steps. Those steps are part of a job that is not explicitly declared, as it is the only one.
The hierarchy of a pipeline is as follows: Stages -> Jobs -> Steps -> Tasks
In the starter pipeline, two tasks are included under steps and belong to a single job. The big advantage of jobs is that they can run in parallel. Take for example a big pipeline that includes a lot of jobs, 30 or more. It would be a waste of time to wait for all these jobs to execute one by one. Keep in mind that a single job can also fail, forcing you to start the whole process again.
If you have a single job and all tasks are included in it, you have to use continueOnError if you do not want the pipeline to stop on a task failure.
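As a small sketch, continueOnError on a step lets the subsequent tasks in the same job run even if that step fails (the script contents here are placeholders):

```yaml
steps:
- script: exit 1           # this task fails...
  continueOnError: true    # ...but the job carries on
  displayName: failing task
- script: echo still running
  displayName: next task
```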
The below example shows two jobs that can be executed in parallel, depending on your Azure DevOps configuration (parallel jobs may incur additional costs on your subscription). As you can see, the failure of the first job does not affect the second job, which will still be executed. If you are eligible for parallel jobs, they run simultaneously as long as you have not declared dependencies between them.
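The embedded example is not preserved in this copy; a minimal equivalent with two independent jobs (no dependsOn between them, job names and scripts are placeholders) could look like:

```yaml
jobs:
- job: FirstJob
  steps:
  - script: exit 1          # deliberately failing command
    displayName: failing task
- job: SecondJob            # no dependsOn, so it runs regardless
  steps:
  - script: echo second job still runs
    displayName: second job task
```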
Let's now examine the power of stages. A stage includes multiple jobs, as you can see from the example below. A typical production environment will include stages for DEV -> QA -> Production deployments.
The big advantage of this approach is that you can rerun failed jobs separately, and also rerun a whole stage in isolation from the others. As a typical build pipeline may take many minutes to complete, with stages you do not have to rerun the whole pipeline on a task failure.
```yaml
stages:
- stage: BuildApp
  displayName: Build Apps
  jobs:
  - job: BuildFrontendApp
    displayName: Build Frontend App
    steps:
    - script: echo building frontend app
      displayName: build frontend app
    - script: echo running unit tests for frontend app
      displayName: unit tests frontend
  - job: BuildBackendApp
    displayName: Build Backend App
    steps:
    - script: echo building backend app
      displayName: build backend app
    - script: echo running unit tests for backend app
      displayName: unit tests backend
- stage: DeployDev
  displayName: Deploy to DEV environment
  jobs:
  - job: DeployFrontendDev
    displayName: Deploy frontend to DEV
    steps:
    - checkout: none
    - script: echo deploying frontend app to DEV
      displayName: deploy frontend app to DEV
  - job: DeployBackendDev
    displayName: Deploy backend to DEV
    steps:
    - checkout: none
    - script: echo deploying backend app to DEV
      displayName: deploy backend app to DEV
- stage: DeployProd
  displayName: Deploy to PROD environment
  jobs:
  - job: Failjob
    displayName: Running this job will fail
    steps:
    - checkout: none
    - script: kati
      displayName: deploy frontend app to PROD
  - job: DeployBackendProd
    displayName: Deploy backend to PROD
    steps:
    - checkout: none
    - script: echo deploying backend app to PROD
      displayName: deploy backend app to PROD
```