But what happens if you want to run some tasks, get user input and then continue with other tasks? For this scenario you can use the Manual Validation task. As the name indicates, it pauses the pipeline and waits for a user to review the run and either approve or reject it.
If the user approves, the pipeline continues; if not, the pipeline stops.
You can then use the result of the validation job to drive other actions with the succeeded() or failed() conditions.
The ManualValidation task must be used with pool: server because it is an agentless (server) job and cannot run on the standard agent pools.
Because job2 depends on job1, it will run only if the user does not reject the approval.
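A minimal sketch of such a pipeline (the notified e-mail address and the echoed messages are illustrative):

```yaml
jobs:
- job: job1
  displayName: Wait for manual approval
  pool: server                 # agentless pool required by ManualValidation
  steps:
  - task: ManualValidation@0
    timeoutInMinutes: 60
    inputs:
      notifyUsers: user@example.com        # illustrative address
      instructions: Please approve or reject the run
      onTimeout: reject

- job: job2
  dependsOn: job1
  condition: succeeded()       # runs only if job1 was approved
  pool:
    vmImage: ubuntu-latest
  steps:
  - script: echo approval granted, continuing
```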
Recently I had to implement the scenario that is depicted below.
In more detail, I had to collect user input (usernames), pass this information to an Azure DevOps pipeline, and through this pipeline perform actions on Azure with the az CLI.
For the described solution I used the following services:
Azure DevOps
Power Automate
Azure DevOps REST API
Azure
The first thing that I created was the form. In this form the user has to provide the usernames in a requested format so that they can be passed on to the later components.
Then I created a new Power Automate flow that handles the input of this form and makes a POST request to the Azure DevOps REST API in order to trigger a build pipeline with the form parameters as input.
The flow and the tasks that were used are depicted below.
Select the response ID of the form.
On the POST request you should enter your own details for the pipeline ID, organization and project. The body of the request should be formatted as shown in order for the parameters to be parsed correctly.
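As a sketch, the call to the pipeline Runs REST API could look like the following; the organization, project, pipeline ID and the usernames parameter name are placeholders you replace with your own values:

```
POST https://dev.azure.com/{organization}/{project}/_apis/pipelines/{pipelineId}/runs?api-version=7.1
Content-Type: application/json
Authorization: Basic {base64-encoded PAT}

{
  "templateParameters": {
    "usernames": "user1,user2"
  }
}
```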
The Azure DevOps pipeline will have an input parameter defined as an empty object.
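A sketch of such a parameter definition (the parameter name is an assumption and must match the key sent in the request body):

```yaml
parameters:
- name: usernames
  type: object
  default: {}
```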
Azure DevOps stages and jobs on build pipelines can save you a lot of time when you deal with more complex setups and heavy projects. In this article I will analyze how you can use jobs and stages to maximize performance and parallelism.
Let's take the starter pipeline example on Azure DevOps (code below). Although this is perfect for a single-task scenario, it executes the tasks in a row without parallelism. The output of this pipeline will be two printed messages in your log output.
trigger:
- main

pool:
  vmImage: ubuntu-latest

steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script'
- script: |
    echo Add other tasks to build, test, and deploy your project.
    echo See https://aka.ms/yaml
  displayName: 'Run a multi-line script'
The two script tasks that are included are located under steps. Those steps are part of a job that is not declared explicitly, as it is the only one.
The hierarchy of a pipeline can include the below: Stages -> Jobs -> Steps -> Tasks
On the starter pipeline, two tasks are included under steps and belong to a single job. The big advantage of jobs is that they can run in parallel. Take for example a big pipeline that includes a lot of jobs, 30 or more. It would be a waste of time to wait for all these jobs to execute one by one. Keep in mind that one job can also fail, and you would have to start the process again.
If you have a single job and all tasks are included in it, you have to use continueOnError if you do not want a task failure to stop the pipeline.
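For instance, a failing step can be marked like this (a minimal sketch):

```yaml
steps:
- script: exit 1          # this step fails
  continueOnError: true   # but the job keeps going
- script: echo still running
```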
The below example shows two jobs that can be executed in parallel depending on your Azure DevOps configuration (parallel jobs may incur additional costs on your subscription). As you can see, the failure of the first job will not affect the second job, which will still be executed. If you are eligible for parallel jobs, they can run simultaneously as long as you do not define dependencies between them.
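A minimal sketch of two such independent jobs (job names and commands are illustrative):

```yaml
jobs:
- job: Job1
  steps:
  - script: somecommandthatdoesnotexist   # unknown command, fails on purpose
    displayName: failing task
- job: Job2   # no dependsOn, so it is scheduled independently of Job1
  steps:
  - script: echo job2 still runs
    displayName: independent task
```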
Let's now examine the power of stages. A stage includes multiple jobs, as you can see from the example below. A typical production environment will include stages for QA -> DEV -> Production deployments.
The big advantage of this approach is that you can rerun failed jobs separately and also rerun a whole stage in isolation from the others. As a typical build pipeline may take many minutes to complete, with stages you do not have to rerun the whole pipeline on a task failure.
trigger:
- none

pool:
  vmImage: ubuntu-latest

stages:
- stage: BuildApp
  displayName: Build Apps
  jobs:
  - job: BuildFrontendApp
    displayName: Build Frontend App
    steps:
    - script: echo building frontend app
      displayName: build frontend app
    - script: echo running unit tests for frontend app
      displayName: unit tests frontend
  - job: BuildBackendApp
    displayName: Build Backend App
    steps:
    - script: echo building backend app
      displayName: build backend app
    - script: echo running unit tests for backend app
      displayName: unit tests backend
- stage: DeployDev
  displayName: Deploy to DEV environment
  jobs:
  - job: DeployFrontendDev
    displayName: Deploy frontend to DEV
    steps:
    - checkout: none
    - script: echo deploying frontend app to DEV
      displayName: deploy frontend app to DEV
  - job: DeployBackendDev
    displayName: Deploy backend to DEV
    steps:
    - checkout: none
    - script: echo deploying backend app to DEV
      displayName: deploy backend app to DEV
- stage: DeployProd
  displayName: Deploy to PROD environment
  jobs:
  - job: FailJob
    displayName: Running this job will fail
    steps:
    - checkout: none
    - script: kati   # unknown command, fails on purpose
      displayName: deploy frontend app to PROD
  - job: DeployBackendProd
    displayName: Deploy backend to PROD
    steps:
    - checkout: none
    - script: echo deploying backend app to PROD
      displayName: deploy backend app to PROD
There are multiple ways to define your continuous integration trigger on a pipeline depending on your needs. One common approach is to trigger a build whenever a new merge or push is done on your branch.
For example, with the below notation you trigger a new build every time a new push is merged into the uat branch.
trigger:
- uat
Another approach is the pull request trigger. Every time a new pull request is created for a specific branch, your build can be initiated. In order to accomplish that you should use the pr keyword.
The below example will trigger when a new pull request is created with the main branch as the merge destination. This approach can help you verify that the code of a specific feature branch actually builds and can be merged into your main branch.
pr:
  branches:
    include:
    - main
Another approach is the tags functionality. You could run a build only if a specific tag is pushed along with the commit.
The below example will only build when a tag matching release.* is pushed on the branch on which the pipeline is located.
trigger:
  tags:
    include:
    - release.*
Some tags that could trigger my build are: release.v1, release.master, release.v2
In order to push a tag on your branch from the command line, you should create the tag locally and then push it to the remote.
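A sketch of the git commands, demonstrated in a throwaway local repository so it can run anywhere; in a real project you would only run the last two commands against your actual remote:

```shell
# Set up a disposable repo with a local stand-in for the remote.
demo="$(mktemp -d)"
git init -q --bare "$demo/origin.git"
git init -q "$demo/work" && cd "$demo/work"
git remote add origin "$demo/origin.git"
git -c user.name=me -c user.email=me@example.com commit -q --allow-empty -m "initial commit"

# Create a tag matching the trigger pattern (release.*) and push it.
git tag release.v1               # create the tag locally
git push -q origin release.v1    # pushing the tag is what triggers the pipeline
```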