
Create and use Terraform modules

Terraform modules are useful in various scenarios when working with Infrastructure as Code (IaC) using Terraform. Modules provide a way to encapsulate and reuse infrastructure configurations, making it easier to manage and scale your infrastructure code.

Every Terraform configuration has at least one module, known as its root module, which consists of the resources defined in the .tf files in the main working directory.

A module can call other modules, which lets you include the child module’s resources into the configuration in a concise way. Modules can also be called multiple times, either within the same configuration or in separate configurations, allowing resource configurations to be packaged and re-used.

Let's take a look at how we can create and use Terraform modules. The first thing you will need to do is structure your code. The most common way is to create folders named after your modules. For example, if you have to deploy a cloud solution with multiple components, you could create a module for each component. In my structure you can find two folders named module1 and module2. module2 will call module1 to reuse its code inside the Terraform configuration.

The module1 folder contains three files: provider.tf, kati.tf and outputs.tf.

The provider.tf file will be used to fetch all the necessary providers. In my example I use the random provider to generate a random string with Terraform.

terraform {
  required_providers {
    random = {
      source  = "hashicorp/random"
      version = "~> 3.5"
    }
  }
}

kati.tf declares the random_string resource that generates the string.

resource "random_string" "module1_random" {
  length           = 16
  special          = true
  override_special = "/@£$"
}

And finally, the outputs.tf file will be used to mark the string as an output and print it in the console.
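The file contents are not shown above, but a minimal sketch of outputs.tf could look as follows, assuming an illustrative output name random_result and referencing the random_string.module1_random resource defined in kati.tf:

output "random_result" {
  # "random_result" is an illustrative name; the value comes from the resource in kati.tf
  value = random_string.module1_random.result
}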

The module2 folder will only include a main.tf which will call module1. To call a module in Terraform, we only need to specify the location of its folder; no complex actions or definitions are needed. In this example module2 will call module1 to generate the random string.

The main.tf file will only call module1:

module "kati" {
    source = "../module1"
}

In order to call the module I will need to initialize module2 and then perform an apply.

cd module2; terraform init; terraform apply

As we have not specified an output in module2, the result will not be printed in the console. In the apply output you can see that module2 calls kati from module1 to generate the random string.
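If you also wanted module2 to print the value, a sketch of forwarding the child module's output could look like this, assuming module1 exposes the random_result output sketched earlier:

# module2/main.tf (addition)
output "module1_string" {
  # forwards the child module's output; names are illustrative
  value = module.kati.random_result
}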

However, we can find the result by inspecting the terraform.tfstate file of module2.

When we have more complex scenarios, we will need to pass variables to the modules we call. To learn how to do that in detail, you can read my previous article.
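As a quick illustration, and with hypothetical names, passing a variable to a called module only requires an argument in the module block that matches a variable declared in the child module:

# module1/variables.tf (hypothetical input variable)
variable "string_length" {
  type    = number
  default = 16
}

# module2/main.tf
module "kati" {
  source        = "../module1"
  string_length = 24 # overrides the default declared inside module1
}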

You can find the example on GitHub

https://github.com/geralexgr/terraform-modules-blog-example

Modules – Configuration Language | Terraform | HashiCorp Developer



Deploy resources on AWS using Terraform

Terraform, created by HashiCorp, is the most popular IaC tool among developers and DevOps engineers. Anyone can use it freely to create multiple deployment environments and make the deployment procedure faster. In this article we will examine how we can use the Terraform AWS provider to deploy resources on the AWS cloud.

The Terraform AWS provider documentation can be found at the link below.

hashicorp/aws | Terraform Registry

The first thing we will need to do is create a user in AWS IAM in order to generate an access key for the deployment. By navigating to the IAM tab you can create a new user for Terraform. I gave this user the name terraform.

Then, by selecting the user, you can go to the Security credentials tab and create a new access key. Its values will be needed in the Terraform script later on.

When creating the user you must specify the permission policies to attach. These allow the necessary actions on the infrastructure. As the Terraform script provided below only creates a new VPC, I should follow the principle of least privilege and grant only the permissions that are required. As a result I do not provide administrator access but only AmazonVPCFullAccess for this user. This built-in policy allows full access to VPCs: creating, updating, deleting, etc.

After those steps I will need to run my Terraform script to create the resources I need. First you will need to initialize Terraform so that it downloads the providers stated in the files:

terraform init

and the second step would be to apply the configuration:

terraform apply

When you apply the configuration, Terraform will show what will be created, changed, or destroyed.
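If you want to preview those changes without applying them, you can also run a plan first:

terraform plan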

Code:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  required_version = ">= 1.2.0"
}

provider "aws" {
  region     = "eu-west-1"
  access_key = "KEY" // access key you generated for the user
  secret_key = "SECRET" // secret of the key
}

resource "aws_vpc" "vpc-test" {
  cidr_block = "10.10.0.0/16"


  tags = {
    Name = "ExampleAppServerInstance"
  }
}
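Note that hardcoding the access key and secret in the provider block is fine for a quick demo but not recommended. The AWS provider can also read credentials from environment variables, so the provider block can keep only the region; for example:

# set the credentials in the shell instead of the .tf files
export AWS_ACCESS_KEY_ID="KEY"
export AWS_SECRET_ACCESS_KEY="SECRET"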

When the deployment finishes, you can find your VPC in your account.

Build infrastructure | Terraform | HashiCorp Developer



Publish coverage report on SonarQube for dotnet test

When you create coverage reports for your .NET projects, you can use the native logger mechanism to export a trx report with your results.

An example of this native functionality can be found below, where the logger parameter is used along with dotnet test.

dotnet test keyvault.sln --logger "trx;logfilename=mytests.trx"

However, if you try to upload this trx file to SonarQube in order to get your coverage report, the import will fail with the below error message.

Error message:

WARN: Could not import coverage report '.\Keyvault_MI_Pod\Tests\TestProject1\TestResults\mytests.trx' because 'Only dotCover HTML reports which start with "<!DOCTYPE html>" are supported.'. Troubleshooting guide: https://community.sonarsource.com/t/37151

The SonarQube documentation page below lists all the coverage report formats that are supported. When you need to upload your coverage results to SonarQube, you will need to use one of the tools from that list and upload the file that the tool generates.

.NET test coverage (sonarsource.com)

In my case I will use dotCover and create the relevant report file.

In the TeamCity begin-analysis task you will need to add an additional parameter (sonar.cs.dotcover.reportsPaths) pointing to where you export your report.

Then in the pipeline you will need to add the dotnet test task as shown in the documentation.

dotnet dotcover test --dcReportType=HTML
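For reference, outside TeamCity the full sequence could look like the sketch below, assuming the SonarScanner for .NET global tool, a hypothetical project key my-project, and dotCover's default report file name dotCover.Output.html:

dotnet sonarscanner begin /k:"my-project" /d:sonar.cs.dotcover.reportsPaths="dotCover.Output.html"
dotnet dotcover test --dcReportType=HTML
dotnet sonarscanner end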

In the finish task you will see that the coverage report type is now compatible and can be parsed by SonarQube.

Finally, you will have your coverage report inside SonarQube.


Access Managed Identity from container inside VM – Azure

Managed identity is the best practice for security when accessing resources on Azure. There are many ways you can use it for service-to-service communication. Sometimes, though, you may need it in more complex scenarios like the one demonstrated below. In this guide we will enable managed identity on a virtual machine and access that managed identity from within a container running on that specific virtual machine. This can be useful in complex deployment scenarios where you have multiple containers inside a virtual machine and you want to deploy using managed identity on Azure.

The first thing you will need is a system-assigned managed identity on the virtual machine.

Then you can run your containers inside the virtual machine. In my case the containers are Windows-based, so I will use the route print command to show the routing table.

By default, the link-local metadata endpoint (169.254.169.254) is not reachable from inside the container network, so we need to add a route towards the host's default gateway. Run the following commands to expose the managed identity endpoint:

$gateway = (Get-NetRoute | Where { $_.DestinationPrefix -eq '0.0.0.0/0' } | Sort-Object RouteMetric | Select NextHop).NextHop
$ifIndex = (Get-NetAdapter -InterfaceDescription "Hyper-V Virtual Ethernet*" | Sort-Object | Select ifIndex).ifIndex
New-NetRoute -DestinationPrefix 169.254.169.254/32 -InterfaceIndex $ifIndex -NextHop $gateway -PolicyStore ActiveStore # metadata API
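You can verify that the route was added with a quick check, for example:

Get-NetRoute -DestinationPrefix 169.254.169.254/32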

After the route has been added successfully, requests to the managed identity endpoint will be redirected to the gateway, and from there you will be able to authenticate.

We can verify the procedure by retrieving a key vault secret with the managed identity token.

$token = Invoke-WebRequest -Uri 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net' -Headers @{Metadata="true"} -UseBasicParsing
$tokenvalue = ($token.Content | ConvertFrom-Json).access_token

Retrieve secret:

Invoke-WebRequest -Uri "https://test.vault.azure.net/secrets/testsecret/d9ce520dfdfdf4bdc9a41f5572069708c?api-version=7.3" -Headers @{Authorization = "Bearer $tokenvalue"} -UseBasicParsing

At last, you can log in using managed identity from the container using the PowerShell module.
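For example, assuming the Az module is available inside the container image, the login is a single command:

# authenticate as the VM's system-assigned managed identity
Connect-AzAccount -Identity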


Co-authored with Giannis Anastasiou @ Vivawallet