When you enable MFA in GitLab, you may face issues when interacting with Git repositories. Commands like git pull and git push can fail, because once MFA is enabled plain password authentication over HTTPS is no longer accepted.
You can resolve these issues by switching to SSH keys. The procedure for creating and uploading your keys is described in the article below.
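As a quick preview, a minimal sketch of that procedure (the key type and email are placeholders; adjust them to your setup):

# generate a new SSH key pair
ssh-keygen -t ed25519 -C "you@example.com"
# print the public key and paste it into GitLab under Preferences > SSH Keys
cat ~/.ssh/id_ed25519.pub
# verify that GitLab accepts the key
ssh -T git@gitlab.com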
Terraform, created by HashiCorp, is one of the most popular IaC (infrastructure as code) tools among developers and DevOps engineers. Anyone can use it freely to create multiple deployment environments and speed up the deployment process. In this article we will examine how to use the Terraform AWS provider to deploy resources on the AWS cloud.
The Terraform AWS provider documentation can be found at the link below.
The first thing we will need to do is create a user in AWS IAM in order to generate an access key for the deployment. Navigate to the IAM console and create a new user for Terraform; I gave this user the name terraform.
Then, by selecting the user, you can go to the Security credentials tab and create a new access key. These credentials will be needed in the Terraform script later on.
When creating the user you must specify the permission policies to attach; these allow the necessary actions on the infrastructure. Since the Terraform script below only creates a new VPC, I follow the principle of least privilege and grant only the permissions that are required. As a result I do not give administrator access but only AmazonVPCFullAccess to this user. This built-in AWS managed policy allows full access to VPCs: creating, updating, deleting and so on.
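If you prefer the AWS CLI over the console, a minimal sketch of the same steps (your CLI session must already have IAM permissions):

# create the user
aws iam create-user --user-name terraform
# attach the AWS managed VPC policy
aws iam attach-user-policy \
  --user-name terraform \
  --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess
# generate the access key pair used in the provider block below
aws iam create-access-key --user-name terraform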
After those steps I will need to run my Terraform script to create the resources I need. First you will need to initialize Terraform so that it downloads the providers declared in the configuration files:
terraform init
The second step is to apply the configuration:
terraform apply
When you apply the configuration, Terraform shows a plan of what will be created, changed or destroyed and asks you to confirm before making any changes.
Code:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  required_version = ">= 1.2.0"
}

provider "aws" {
  region     = "eu-west-1"
  access_key = "KEY"    // access key you generated for the user
  secret_key = "SECRET" // secret of the key
}

resource "aws_vpc" "vpc-test" {
  cidr_block = "10.10.0.0/16"

  tags = {
    Name = "ExampleAppServerInstance"
  }
}
When the deployment finishes you can find your VPC in your account.
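You can also verify from the command line; for example (the filter value matches the Name tag used in the script above):

# list the resources Terraform is tracking
terraform state list
# look the VPC up by its Name tag
aws ec2 describe-vpcs \
  --filters "Name=tag:Name,Values=ExampleAppServerInstance"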
Managed identity is the security best practice when accessing resources on Azure, and there are many ways to use it for service-to-service communication. Sometimes, though, you need it in more complex, nested scenarios like the one demonstrated below. In this guide we will enable managed identity on a virtual machine and access that managed identity from within a container running on that specific virtual machine. This can be useful in complex deployment scenarios where you have multiple containers inside a virtual machine and you want them to authenticate to Azure using the managed identity.
The first thing you will need is a system-assigned managed identity on the virtual machine.
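You can enable it from the portal or with the Azure CLI; a minimal sketch (the resource group and VM names are hypothetical):

az vm identity assign --resource-group my-rg --name my-vm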
Then you can run your containers inside the virtual machine. In my case the containers are Windows-based, so I will use the route print command to show the routing table.
Run the following commands inside the container to expose the managed identity endpoint.
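A sketch of the idea (the gateway address 172.17.0.1 is an assumption; take the container's default gateway from the route print output):

# show the routing table and note the default gateway
route print
# send traffic for the managed identity (IMDS) endpoint through that gateway
route add 169.254.169.254 mask 255.255.255.255 172.17.0.1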
After the route is successfully added, requests to the managed identity endpoint are redirected through the gateway, and from there you will be able to authenticate.
We can verify the procedure by retrieving a Key Vault secret with the managed identity.
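A PowerShell sketch of that check from inside the container (the vault name my-vault and secret name my-secret are hypothetical, and the identity must have been granted access to the vault):

# request an access token for Key Vault from the IMDS endpoint
$resp = Invoke-RestMethod -Headers @{Metadata = "true"} `
  -Uri "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net"
# use the token against the Key Vault REST API
Invoke-RestMethod -Headers @{Authorization = "Bearer $($resp.access_token)"} `
  -Uri "https://my-vault.vault.azure.net/secrets/my-secret?api-version=7.4"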
In this article we will examine the power of templates for Azure DevOps pipelines. Templates let you define reusable content, logic and parameters, and they function in two ways: you can insert reusable content with a template, or you can use a template to control what is allowed in a pipeline.
When you build complex automation scenarios for your organization, you will need stages, jobs and tasks, since such pipelines span multiple environments and configuration settings.
You can find some of the reasons why you would follow this approach in my previous article.
In this article we will examine an Azure DevOps pipeline which contains stages, jobs and tasks. These will be created inside templates and called from the main pipeline. A high-level view of the architecture can be found in the picture below.
My code structure is shown below. There is a folder for the templates and a main pipeline, which is located in another folder and refers to the templates folder.
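An assumed layout along those lines (the folder names are illustrative):

templates/
  stage.yml
  job.yml
pipelines/
  main.yml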
stage.yml The stage.yml file contains the template code for the stages. It takes a name parameter, which is assigned to the stage.
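A minimal sketch of what stage.yml could look like (the job parameters it passes on are assumptions based on the job.yml description below):

# templates/stage.yml
parameters:
  - name: name
    type: string

stages:
  - stage: ${{ parameters.name }}
    jobs:
      # delegate the job definition to the job template
      - template: job.yml
        parameters:
          name: job1
          sign: true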
job.yml The job.yml file contains the template code for the jobs. It takes a name parameter, which is assigned to the job, and a sign parameter that indicates whether a task will be executed.
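A sketch of job.yml under the same assumptions; the sign parameter conditionally inserts the task:

# templates/job.yml
parameters:
  - name: name
    type: string
  - name: sign
    type: boolean
    default: false

jobs:
  - job: ${{ parameters.name }}
    steps:
      # task1 is only inserted when sign is true
      - ${{ if eq(parameters.sign, true) }}:
          - script: echo "running task1"
            displayName: task1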
main.yml The main.yml file is the entry point of the pipeline and the one that will be run. If you need to add more stages, you only have to add another - template section under stages.
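A sketch of main.yml; the template path assumes the layout shown earlier:

# pipelines/main.yml
trigger:
  - main

stages:
  - template: ../templates/stage.yml
    parameters:
      name: stage1
  # adding another stage is just another template block:
  # - template: ../templates/stage.yml
  #   parameters:
  #     name: stage2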
By executing the pipeline we can see that we have one stage, which is not shown separately in the UI (as it is the only one), and under this stage a job has been created containing task1, which we added in our template.
Find out more about Azure DevOps templates in my Udemy course: