
Containerize a .NET app with Docker and VS Code

When you build your application with cloud-native technologies, you build microservices running in containers instead of a monolithic application. We will now examine how easy it is to build a .NET application in a container and run it on your local machine.

First we will need to create the Visual Studio solution. I will go through that with the Visual Studio IDE and then I will use VS Code. For my microservice I am using an ASP.NET Core web API with the default template code.

The target framework for the solution will be the latest .NET version, which is .NET 7. All other settings will be left at their defaults.
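If you prefer the command line, the same project can be created with the .NET CLI. This is only a convenience sketch; it assumes the .NET 7 SDK is installed and uses the project name AspNetWebApi, which matches the csproj referenced later in the Dockerfile.

dotnet new webapi -n AspNetWebApi -f net7.0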

When you run the app locally with IIS Express, you will be able to access the Swagger UI through the port that is defined in launchSettings.json.

https://localhost:7057/swagger/index.html

This file is located under Properties, and there you can configure the port on which the application will run. In the profiles section, under the https profile, you can find the default application URL and port. This will be needed in later steps.
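As a quick way to check the configured port from a terminal, you can search the file for the applicationUrl setting; this is only a convenience sketch and assumes you are in the project folder with the default layout.

Select-String -Path .\Properties\launchSettings.json -Pattern "applicationUrl"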

Microsoft provides the documentation below for creating a containerized application that runs on .NET.

Build and run an ASP.NET Core app in a container – code.visualstudio.com
In this guide you will learn how to: Create a Dockerfile file describing a simple .NET Core service container. Build…

In order to create a microservice based on our Visual Studio solution we will need a Dockerfile. This can be created automatically with VS Code.

In the VS Code command palette, search for "docker add" and select the command that adds Docker Compose files to the workspace.

Then select ASP.NET Core.

After that, select your operating system. The next step is to select the exposed port, in other words the port on which your application will run. Here we should provide the port that we found in launchSettings.json, or the one that we configured manually. In my case I will select the default one for the solution, which was 7057.

When a popup window appears on the screen, select Add Dockerfile and the build files will be generated automatically.

Dockerfile

Based on my setup I altered two things in the generated Dockerfile. The first was to change the configuration to Debug instead of Release; for production environments you should consider using the Release build configuration. The second was to add an environment variable, ASPNETCORE_ENVIRONMENT, inside the container with the value Development.

# Runtime stage: only the ASP.NET Core runtime is needed to run the app
FROM mcr.microsoft.com/dotnet/aspnet:7.0 AS base
WORKDIR /app
EXPOSE 7057

ENV ASPNETCORE_URLS=http://*:7057
ENV ASPNETCORE_ENVIRONMENT=Development

# Build stage: restore and build the project using the full SDK image
FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build
WORKDIR /src
COPY ["AspNetWebApi.csproj", "./"]
RUN dotnet restore "AspNetWebApi.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "AspNetWebApi.csproj" -c Debug -o /app/build

# Publish stage: produce the deployable output (Debug configuration as noted above)
FROM build AS publish
RUN dotnet publish "AspNetWebApi.csproj" -c Debug -o /app/publish /p:UseAppHost=false

# Final stage: copy the published output into the runtime image
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "AspNetWebApi.dll"]

docker build command

After the build is completed and the image is created, you can run a new container locally.

Keep in mind that in order to test your container you should create a port forward from the container to your host. I used the same port on the host, so I added -p 7057:7057.
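For reference, the build and run steps could look like the following; the image name aspnetwebapi is an assumption, and -p maps host port 7057 to container port 7057.

docker build -t aspnetwebapi .
docker run -d -p 7057:7057 --name aspnetwebapi aspnetwebapi
docker logs aspnetwebapi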

The logs of the container indicate a successful run of the application.

Our application now runs as a microservice container inside the host machine (my laptop). 

We can verify access to our application using the Swagger URL.



Connect an Azure Web App container to Key Vault using Managed Identity

Following the article in which I described how you can connect to Azure resources through Managed Identity, I will showcase how one can connect from a container running on an App Service (web app) to a Key Vault in order to retrieve secrets from it.

The two main components required for this demo are an App Service and a Key Vault.

First things first, we will need some secrets to retrieve through the hosted application. The dbpassword secret shown below will be retrieved and used by the web app running in the container.

As examined in the article mentioned above, we should construct the appropriate URL in order to retrieve the access_token.

$kati = Invoke-WebRequest -Uri "$($env:MSI_ENDPOINT)?resource=https://vault.azure.net&api-version=2017-09-01" -Headers @{Secret=$env:MSI_SECRET} -UseBasicParsing | ConvertFrom-Json

Store the access_token in a separate variable (as it is sometimes not parsed correctly by PowerShell):
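For example, reusing the $kati object from the request above and the $metavliti variable that is used in the next call:

$metavliti = $kati.access_token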

Then perform an API call to your Key Vault, using the token that we retrieved earlier as the Authorization header.

Invoke-WebRequest -Uri "https://spfykey.vault.azure.net/secrets/dbpassword/4f371b23cf244717a585e12af9846dec?api-version=7.3" -Headers @{Authorization = "Bearer $metavliti"} -UseBasicParsing

As a result, we successfully retrieved the value of the secret, which is 123456, by performing a REST API call from the web app using the Managed Identity of the App Service.
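If you want to read the secret value programmatically rather than from the raw response, a sketch like the following could be used; it assumes the same secret URL and $metavliti token as above.

$secret = Invoke-WebRequest -Uri "https://spfykey.vault.azure.net/secrets/dbpassword/4f371b23cf244717a585e12af9846dec?api-version=7.3" -Headers @{Authorization = "Bearer $metavliti"} -UseBasicParsing | ConvertFrom-Json
$secret.value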

References:

https://learn.microsoft.com/en-us/rest/api/keyvault/keyvault/vaults


Connect to Azure resources with Managed Identity – Azure Web App container example

Managed identities are the recommended way to go when you need to access resources on Azure, as they eliminate the need for developers to manage credentials: Azure provides an identity for the resource in Azure AD and uses it to obtain Azure Active Directory (Azure AD) tokens.

An administrator can usually locate the managed identity of the resource under the Settings tab.

When you enable the system-assigned identity, an object (principal ID) will be created. This is the entity that Azure uses to reference the resource when you assign permissions through IAM.

We will now examine how we can use the managed identity to get an access_token that can be used to authenticate against Azure resources. In my scenario I have created a simple container that runs PowerShell (mcr.microsoft.com/powershell) in order to interact with the Azure REST APIs. To do so, I opened a console on the container running on the App Service through the Development Tools section, under Advanced Tools.

Using the UI below, you can get a console into the container.

All resources that support Azure AD authentication, and thus work with managed identities, use OAuth access tokens for authorization. This means we first need to get a token before we can access resources.

When managed identity is enabled on an App Service, a local HTTP endpoint that can provide access tokens becomes available on the App Service. This local HTTP endpoint can only be reached from code running on the App Service.

You can locate the HTTP endpoint, along with the secret needed, by displaying the environment variables. Since I used the PowerShell image I had a command line, so I typed:

set

The variables that we need are MSI_ENDPOINT (which is the same as IDENTITY_ENDPOINT) and MSI_SECRET. Using those two variables we can get an access_token and use this token to authenticate to Azure resources.

In order to interact with the API I used curl. The request URL is a concatenation of MSI_ENDPOINT and the specific resource category that you want to use (see the appendix at the bottom of the article). You should also pass the App Service secret as a header.

Example

curl "%MSI_ENDPOINT%?resource=https://management.azure.com&api-version=2017-09-01" -H "Secret: %MSI_SECRET%" -v

Using curl we can identify that the request returned a 200 response code and was performed correctly.

In order to save the output of the curl command, you can use the -o argument.

By saving the file as kati.txt we can verify that the access_token is stored in the file in a JSON structure.
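For example, running the same request as above from the console and writing the response to kati.txt:

curl "%MSI_ENDPOINT%?resource=https://management.azure.com&api-version=2017-09-01" -H "Secret: %MSI_SECRET%" -o kati.txt
type kati.txt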

Let's now examine how we can perform the same request using PowerShell. First of all we should navigate to the folder in which PowerShell is located and execute powershell.exe.

cd windows\system32\windowspowershell\v1.0
powershell.exe

Then we can use Invoke-WebRequest to perform an HTTP call to the same URL that we described above.

$kati = Invoke-WebRequest -Uri "$($env:MSI_ENDPOINT)?resource=https://management.azure.com&api-version=2017-09-01" -Headers @{Secret=$env:MSI_SECRET} -UseBasicParsing | ConvertFrom-Json

You can then use $kati.access_token in order to authenticate your Azure API calls.
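As an illustration only (the exact call depends on what you want to manage), the token could be used against the Azure Resource Manager API, for example to list the subscriptions the identity has access to:

Invoke-WebRequest -Uri "https://management.azure.com/subscriptions?api-version=2020-01-01" -Headers @{Authorization = "Bearer $($kati.access_token)"} -UseBasicParsing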

Azure Resource Manager

https://management.azure.com/
Use this when you want to manage resources, i.e. create, delete, or update Azure resources. This is when you would do things programmatically that you would otherwise do using the Azure CLI or the portal.

Resources supporting managed identity

If you want to interact with one of the APIs for a specific type of service, use the following URIs for the resource parameter.
Keyvault: https://vault.azure.net
Datalake: https://datalake.azure.net/
Azure SQL: https://database.windows.net/
Eventhub: https://eventhubs.azure.net
Service Bus: https://servicebus.azure.net
Storage blobs and queues: https://storage.azure.com/

Links:

Azure Services with managed identities support – Azure AD – Microsoft Entra | Microsoft Docs

References:

Co-authored with Giannis Anastasiou @ Vivawallet


Using slots with App Service for continuous delivery – Azure DevOps

Azure deployment slots allow your web apps to run as different instances called slots. Slots are separate environments accessed through a publicly available endpoint. One app instance is always assigned to the production slot, and you can swap between multiple app instances on demand. This helps keep your application always available and lets you deploy different versions without downtime.

In this scenario we will examine an App Service called gservice that has a staging slot.

This staging slot will be used to deploy the code first, then run some health checks, and finally swap the slot into production. In this article I will explain only the release procedure. If you want to learn how to build an App Service, check the article attached below.

In the initial setup, the staging environment and the production one are both on v1. Let's say that code is pushed to the repository and the version of the code is now v2.

The first thing to do in the deployment is to deploy the code to the staging slot. This is an important step.

The code should always be deployed to the staging slot.

Then, after the code deployment, some health tests will follow. If everything goes as expected, we will swap the slots.

The swap should always be performed from the staging slot to the production slot.
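Outside of the Azure DevOps release tasks, the same swap can also be triggered with the Azure CLI; the resource group name my-rg below is an assumption.

az webapp deployment slot swap --resource-group my-rg --name gservice --slot staging --target-slot production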

After those two steps in your release pipeline, you will have your code published on the production App Service, and the staging slot will retain the previous build for failover and backup purposes.