Build triggers in Azure DevOps pipelines

Continuous integration (CI) triggers cause a pipeline to run whenever you push an update to the specified branches or push specified tags. Built-in triggers can become a powerful tool for your build strategy, and the most common scenarios are explained with the examples below.

Continuous integration triggers:

Branches:
Branch filters trigger your pipeline when a new commit is pushed to a matching branch. In the scenario below the pipeline runs when code is merged to the main branch or pushed to any branch under release/, such as release/1 or release/feature-x. The pipeline will not run when a push is committed to the uat branch, because it is excluded through the exclude keyword.

trigger:
  branches:
    include:
    - main
    - release/*
    exclude:
    - uat

Tags:
Tag filters trigger your pipeline only when a tag is pushed to your repository. In Git, a tag is bound to a specific commit. In the scenario below the pipeline is triggered when a pushed tag matches the v.* wildcard, for example v.1, v.something or v.2. It will not run when the pushed tag starts with uat, so a tag such as uat-1 will not trigger the pipeline.

trigger:
  tags:
    include:
    - v.*
    exclude:
    - uat*

Pull Requests:
The pr keyword triggers your pipeline when a pull request targets one of the included branches. For example, if you create a new pull request from mybranch to the current branch, the pipeline is triggered. The pipeline will not trigger when a pull request is created from any branch to the uat branch, because it is excluded.

pr:
  branches:
    include:
    - current
    exclude:
    - uat

One powerful tool that you can combine with your build strategy is paths. When you specify paths, you must explicitly specify branches to trigger on. You can’t trigger a pipeline with only a path filter; you must also have a branch filter, and the changed files that match the path filter must be from a branch that matches the branch filter.

Paths:
With a path filter you specify a directory or folder that will trigger the build. For example, you may want to trigger a build only when particular files change in your repository. In the example below the pipeline is triggered when files inside the docs folder change on the master branch or on branches under releases/, except for changes to docs/README.md, which is excluded.

trigger:
  branches:
    include:
    - master
    - releases/*
  paths:
    include:
    - docs
    exclude:
    - docs/README.md

Microsoft documentation for build strategies:

https://docs.microsoft.com/en-us/azure/devops/pipelines/repos/azure-repos-git?view=azure-devops&tabs=yaml#ci-triggers

Trigger an Azure DevOps pipeline from another repository

For security reasons you may want to store your pipelines in a different repository than the one where the code is hosted. Let's say, for example, that your application code is located at test-project/repository but your pipelines are stored in the ConsoleApp1 repository. You also want continuous integration and deployment for the repository where the code is hosted, so when code is pushed to that repository, the pipeline hosted in ConsoleApp1 should trigger.

You can do that using the repositories resource of Azure DevOps. You need to define the repository and also configure a trigger for it based on your strategy, as sketched below.
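A minimal sketch of such a pipeline, stored in the ConsoleApp1 repository, follows; the repository name, ref and branch filter are assumptions you would adapt to your own setup.

resources:
  repositories:
  - repository: code                  # alias used by checkout steps
    type: git                         # Azure Repos Git repository
    name: test-project/repository     # repository that hosts the application code
    ref: main                         # assumed default branch
    trigger:                          # CI trigger evaluated against the code repository
      branches:
        include:
        - main

steps:
- checkout: code                      # fetch the application code instead of ConsoleApp1
- script: echo "building the application code"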

The final result is that the pipeline runs when code is committed to the code repository instead of the repository where the pipelines are hosted.

Enable diagnostic settings for Azure App Service using a Terraform loop

Imagine that you want to enable diagnostic settings for multiple app services on Azure using Terraform. The required options can be found under the Monitoring tab of the app service.

An appropriate diagnostic setting (rule) should be created to indicate where the logs should be sent.

The available log categories are listed in the locals below, and I will instruct Terraform to enable them all.

To accomplish that with Terraform I used a loop. The depends_on keyword is used because the app services must be created first and the diagnostic settings afterwards. Create a file such as app_diagnostics.tf and place it inside your Terraform working directory.

resource "azurerm_monitor_diagnostic_setting" "diag_settings_app" {
  depends_on = [ azurerm_windows_web_app.app_service1,azurerm_windows_web_app.app_service2 ]
  count = length(local.app_service_ids)
  name               = "diag-rule"
  target_resource_id = local.app_service_ids[count.index]
  log_analytics_workspace_id = local.log_analytics_workspace_id
  
  dynamic "log" {
    iterator = entry
    for_each = local.log_analytics_log_categories
    content {
        category = entry.value
        enabled  = true

        retention_policy {
      enabled = false
        }
    }
   
  }

  metric {
    category = "AllMetrics"

    retention_policy {
      enabled = false
      days = 30
    }
  }

}

Inside locals.tf I have created local values that hold the app service IDs, the Log Analytics workspace ID to which the logs will be sent, and the categories I want to enable in the diagnostic settings. All the available categories are selected.

locals {
  log_analytics_workspace_id   = "/subscriptions/.../geralexgr-logs"
  log_analytics_log_categories = ["AppServiceHTTPLogs", "AppServiceConsoleLogs", "AppServiceAppLogs", "AppServiceAuditLogs", "AppServiceIPSecAuditLogs", "AppServicePlatformLogs"]
  app_service_ids              = [azurerm_windows_web_app.app_service1.id, azurerm_windows_web_app.app_service2.id]
}

As a result, the loop will enable, for every app service you add to app_service_ids, each diagnostic category listed in the log_analytics_log_categories variable.

Test your backup mechanism – Automated restore for MS SQL using Azure DevOps

Most organizations rely on their backup solutions to recover from application faults or data corruption. However, backups are rarely tested to verify that a restore would actually succeed. In this post I implement a backup testing mechanism.

Let's examine a scenario for an MS SQL database server. The server writes a backup file (.bak) to a storage account based on a retention policy. This backup is then automatically restored on a SQL server through a pipeline and the result is written as output. The result can then be reported to the monitoring solution.

The flow is depicted below. An Azure DevOps agent should be installed on the server on which the database will be restored. The pipeline fetches the backup file from the storage account and stores it on a data disk (in my case R:\files). The sqlcmd command is then used to restore the .bak file and record the result. The backup file name is provided through a pipeline parameter. A service connection should also be created for the subscription in which the storage account is located.

Pipeline code:

trigger: none
pr: none

# assumes the agent that runs this pipeline can reach the SQL server and the R:\files path
pool:
  vmImage: windows-latest

parameters:
- name: backupfile
  type: string

jobs:
- job: download
  displayName: Download DB backup file
  steps:
  - task: AzureCLI@2
    displayName: az cli download backup file from storage account
    inputs:
      azureSubscription: 'Azure-Service-Connection'
      scriptType: 'ps'
      scriptLocation: 'inlineScript'
      inlineScript: |
        $container_name_input = "container_name"
        $saccount_name = "storage_account_name"
        #$json = az storage blob list --container-name $container_name_input --account-name $saccount_name
        az storage blob download --file "R:\files\${{parameters.backupfile}}" --name "${{parameters.backupfile}}" --container-name $container_name_input --account-name $saccount_name --auth-mode login

- job: restore
  displayName: Restore SQL backup
  dependsOn: download
  steps:
  - task: PowerShell@2
    displayName: sqlcmd restore backup
    inputs:
      targetType: 'inline'
      script: |
        sqlcmd -q "RESTORE DATABASE [Database_Name] FROM DISK=N'R:\files\${{parameters.backupfile}}' WITH REPLACE,RECOVERY" -o R:\files\result.txt;
        [string]$result = Get-Content R:\files\result.txt
        if ($result.contains('successfully')) {
          Write-Host "Restore was successful..."
        }
        elseif ($result.contains('terminating')) {
          Write-Host "Terminating..."
        }

Executing pipeline:

Result:

Important:

The Azure DevOps agent service is configured to run with a specific account (in my case NT AUTHORITY\SYSTEM, Local System). This account should have the appropriate permissions on the SQL server for the restore procedure. The easiest way is to make this account a SQL Server sysadmin.

Adding NT AUTHORITY\SYSTEM to the SQL Server sysadmin role
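For reference, a minimal sketch of the equivalent T-SQL, executed once through sqlcmd by an account that already holds sysadmin rights; it assumes the agent service account is NT AUTHORITY\SYSTEM:

# grant the agent service account sysadmin rights (assumes its login already exists on the instance)
sqlcmd -q "ALTER SERVER ROLE [sysadmin] ADD MEMBER [NT AUTHORITY\SYSTEM];"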