https://techwatching.dev/
Alexandre Nédélec
Copyright © 2024
2024-03-29T05:12:01Z
Hi, I'm a .NET developer fond of Microsoft technologies, welcome to my blog!
https://techwatching.dev/posts/http-clients-oauth2
Call your Azure AD B2C protected API with authenticated HTTP requests from your JetBrains IDE
2024-03-11T00:00:00Z
<p>I have written several <a href="https://www.techwatching.dev/posts/http-clients">blog posts</a> about HTTP clients in the past. I am a big fan of using HTTP text files versioned in a Git repository alongside the API code and executed by IDE tooling. However, there was one use case where a GUI tool like Postman or a Swagger page was more convenient: retrieving OAuth 2.0 user tokens. Thanks to the latest <a href="https://www.jetbrains.com/help/idea/oauth-2-0-authorization.html">OAuth 2.0 feature</a> of the built-in HTTP client in JetBrains IDEs, this is no longer an issue.</p>
<h2 id="context">Context</h2>
<p>I am developing a web application composed of a Vue.js frontend and an ASP.NET Core backend (just describing my use case, technologies don't matter). The end users of this application are authenticated using <a href="https://learn.microsoft.com/en-us/azure/active-directory-b2c/overview">Azure AD B2C</a>, which is a <a href="https://en.wikipedia.org/wiki/Customer_identity_access_management">customer identity access management</a> solution like Auth0 or other competitors.</p>
<p>I often need to manually call the API's endpoints to verify that the code is working properly and that an endpoint returns the expected result. HTTP files are a convenient way of writing and executing these HTTP requests. Once committed to the Git repository, they can easily be shared with other developers on the team who may not have worked on some endpoints and want proper examples with the query parameters and payloads.</p>
<p>As the API is protected by Azure AD B2C, I need to retrieve a valid access token and pass it to my requests.</p>
<h2 id="previous-solutions">Previous solutions</h2>
<p>Passing a valid access token to my HTTP requests is something I was previously doing by:</p>
<ul>
<li><p>signing in my frontend</p>
</li>
<li><p>grabbing the token in the web browser dev tools</p>
</li>
<li><p>copying the token to my <a href="https://www.jetbrains.com/help/idea/exploring-http-syntax.html#environment-variables">HTTP environment variables</a> (preferably the private environment file to avoid committing a secret in your repository)</p>
</li>
</ul>
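<p>As a sketch of that last step (the environment name and the <code>token</code> variable name are mine), the private environment file <code>http-client.private.env.json</code> could look like this:</p>
<pre><code class="language-json">{
  "dev": {
    "token": "&lt;access token copied from the browser dev tools&gt;"
  }
}
</code></pre>
<p>The token can then be referenced in a request with <code>Authorization: Bearer {{token}}</code>.</p>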
<p>That works but:</p>
<ul>
<li><p>it's cumbersome</p>
</li>
<li><p>you have to do it each time your access token expires</p>
</li>
</ul>
<p>Another solution is to use a tool that generates app-specific local JWTs and to configure your local dev environment to authenticate with these tokens instead of using the Azure AD B2C configuration. In .NET, you can use the <a href="https://learn.microsoft.com/en-us/aspnet/core/security/authentication/jwt-authn"><code>dotnet user-jwts</code></a> tool to do exactly that. It allows you to generate a JWT with the scopes, roles, and claims you want. So it's a good solution for debugging your API locally without having to bypass the authentication and authorization mechanisms.</p>
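<p>For instance, from the API project's directory, generating and inspecting a local token could look like this (the scope and role values here are just illustrative):</p>
<pre><code class="language-bash"># Create a local signed JWT with a custom scope and role
dotnet user-jwts create --scope "user.read" --role "Admin"

# List the tokens created for this project
dotnet user-jwts list
</code></pre>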
<p>However, it has some downsides:</p>
<ul>
<li><p>the tokens are only valid on your local machine, so it only works for your local environment</p>
</li>
<li><p>the Azure AD B2C authentication is replaced by this "local JWT authentication" so you are not testing your API in real conditions</p>
</li>
</ul>
<h2 id="with-the-new-http-client-oauth-2.0-feature">With the new HTTP Client OAuth 2.0 feature</h2>
<p>Starting with version 2024.1, the HTTP Client in JetBrains IDEs (in my case Rider 2024.1) supports automatically authenticating HTTP requests, provided that you have properly configured it.</p>
<blockquote>
<p>🗨 Support for OAuth 2.0 started in <a href="https://blog.jetbrains.com/idea/2023/10/intellij-idea-2023-3-eap-3/#oauth-2.0-support">version 2023.3</a>; however, the Authorization Code flow with PKCE (a PKCE challenge is required by the <a href="https://oauth.net/2.1/">OAuth 2.1 specification</a>) has only been supported since 2024.1.</p>
</blockquote>
<h3 id="oauth-2.0-authorization-code-flow-with-pkce">OAuth 2.0 authorization code flow with PKCE</h3>
<p>The OAuth 2.0 flow involved in retrieving a valid access token to make requests to an Azure AD B2C protected API is the authorization code flow with PKCE. There are 2 steps in the <a href="https://learn.microsoft.com/en-us/azure/active-directory-b2c/authorization-code-flow">OAuth 2.0 authorization code flow</a>:</p>
<ol>
<li><p>Get an authorization code</p>
</li>
<li><p>Exchange the authorization code for an access token</p>
</li>
</ol>
<p>Step 1 involves the user entering their credentials in the login form (the Azure AD B2C login form in this case). At first sight, this might not seem well suited to HTTP files, but the JetBrains HTTP Client handles it by opening the login form in the IDE's embedded browser.</p>
<p>For Azure AD B2C,</p>
<ul>
<li><p>the authorize endpoint is <code>https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/{policy}/oauth2/v2.0/authorize</code></p>
</li>
<li><p>the token endpoint is <code>https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/{policy}/oauth2/v2.0/token</code></p>
</li>
</ul>
<p>where:</p>
<ul>
<li><p><code>tenant</code> is the name of the Azure AD B2C tenant</p>
</li>
<li><p><code>clientId</code> is the application ID of the application registered in the Azure AD B2C tenant</p>
</li>
<li><p><code>policy</code> is the name of the policy created in the Azure AD B2C tenant</p>
</li>
</ul>
<blockquote>
<p>💡When using a custom domain in Azure AD B2C, the endpoints are similar but the <code>{tenant}.b2clogin.com</code> part is replaced by the custom domain.</p>
</blockquote>
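<p>Under the hood, the two steps of the flow roughly translate into the following requests (the parameter values are illustrative, and the HTTP Client performs both requests for you):</p>
<pre><code class="language-http">### Step 1: get an authorization code (the login form opens so the user can sign in)
GET https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/{policy}/oauth2/v2.0/authorize
    ?client_id={clientId}
    &response_type=code
    &redirect_uri={redirectUrl}
    &scope=openid offline_access
    &code_challenge={SHA-256 hash of the code verifier}
    &code_challenge_method=S256

### Step 2: exchange the authorization code for an access token
POST https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/{policy}/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded

grant_type=authorization_code&client_id={clientId}&code={authorization code}&redirect_uri={redirectUrl}&code_verifier={code verifier}
</code></pre>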
<p>If you want to better understand how this flow works, there is a nice diagram in the <a href="https://auth0.com/docs/get-started/authentication-and-authorization-flow/authorization-code-flow-with-pkce">Auth0 documentation</a>.</p>
<h3 id="configuration-in-the-jetbrains-http-client">Configuration in the JetBrains HTTP Client</h3>
<p>To make the authorization code flow work in the HTTP Client, all I have to do is provide the configuration for the Azure AD B2C tenant in the HTTP environment file.</p>
<p>Here is an example of such configuration:</p>
<pre><code class="language-json">{
  "apiUrl": "https://localhost:5001/api",
  "Security": {
    "Auth": {
      "CIAM": {
        "Type": "OAuth2",
        "Grant Type": "Authorization Code",
        "PKCE": true,
        "Client ID": "3a53c90d-20c4-40e9-b440-4825b70374d7",
        "Scope": "openid offline_access profile https://mytenant.onmicrosoft.com/security/user.read",
        "Auth URL": "https://mytenant.b2clogin.com/mytenant.onmicrosoft.com/b2c_1_sign_in/oauth2/v2.0/authorize",
        "Token URL": "https://mytenant.b2clogin.com/mytenant.onmicrosoft.com/b2c_1_sign_in/oauth2/v2.0/token",
        "Redirect URL": "https://localhost:8080/oidc-callback",
        "Acquire Automatically": true
      }
    }
  }
}
</code></pre>
<blockquote>
<p>💡 Instead of setting PKCE to true, you can set it to a JSON object containing the code challenge method and the code verifier to use.</p>
</blockquote>
<blockquote>
<p>💬 In this example, I set a local Redirect URL as my frontend was running locally. But I could also have set the Redirect URL to another environment where my web application is running.</p>
</blockquote>
<p>You can check the <a href="https://www.jetbrains.com/help/idea/oauth-2-0-authorization.html">JetBrains documentation</a> for more information about the HTTP Client's support for OAuth 2.0 authorization.</p>
<h3 id="authenticated-http-requests-in-the-http-file">Authenticated HTTP Requests in the HTTP file</h3>
<p>Once the configuration is set, retrieving an access token can be done with a simple click in the configuration file.</p>
<p>The authentication process is logged so we can check the requests made and identify any mistakes made in the configuration.</p>
<img src="/posts/images/httpclientsoauht2_1.webp" class="img-fluid centered-img" alt="HTTP authentication log.">
<p>Fortunately, we don't have to manually retrieve an access token each time we execute an HTTP request in an HTTP file in our IDE. We can just use the <code>{{$auth.token()}}</code> variable in the Authorization header of our requests, like this:</p>
<pre><code class="language-http">GET {{apiUrl}}/products
Authorization: Bearer {{$auth.token("CIAM")}}
</code></pre>
<p>The IDE will handle the rest for us.</p>
<h2 id="wrapping-up">Wrapping up</h2>
<p>The HTTP Client OAuth 2.0 feature in JetBrains IDEs has greatly simplified making authenticated HTTP requests to secure APIs. While this article focused on Azure AD B2C, the same principles apply to other Authorization Servers, with only the authorize and token endpoints differing.</p>
<p>I hope other IDEs will adopt this feature, using the same convention for the <code>$auth.token()</code> variable and its configuration. The only drawback is for developers not using JetBrains IDEs, who will need to adjust requests containing the <code>$auth.token()</code> variable to run them in their IDEs.</p>
https://techwatching.dev/posts/it-event-calendars
Having Fun With IT Event Calendars
2024-03-04T00:00:00Z
<p>In this post, we will discuss how to write a small .NET program that retrieves events from an IT event calendar and submits them to another one using AngleSharp.</p>
<h2 id="some-context">Some context</h2>
<p>There are plenty of websites that list IT events around the world. One that is particularly popular is the <a href="https://github.com/scraly/developers-conferences-agenda">developers conferences agenda</a> GitHub repository created by Aurélie Vache, a well-known French DevRel. This repository is an excellent resource where numerous tech conferences and CFPs (Calls for Papers) are listed. Adding a new conference/CFP is very easy for any developer: you just have to add it to the readme that lists all the conferences and open a PR. Additionally, there is now a <a href="https://developers.events/">website</a> available to easily view the list of conferences.</p>
<p>Another one I like is the <a href="https://techcommunitycalendar.com/">Tech Community Calendar</a> created by Lee Englestone, a Microsoft MVP. What I find interesting is that it does not just list conferences and calls for papers but also other tech events like hackathons or meetups. Events are displayed on small cards with thumbnails of the event websites, and you can filter them by country or type of event. Yet, it is less popular than the developers conferences agenda I mentioned before, so fewer events are listed. There is a form to suggest new events, and I have been submitting events from time to time. However, most events I submit are developer conferences and CFPs that people have already added to the developers conferences agenda.</p>
<p>So I thought, what if I automate the process of retrieving events from the developer conferences agenda and submitting them to the tech community calendar?</p>
<h2 id="its-just-a-poc">It's just a PoC!</h2>
<p>At first, I spent too much time thinking about how to schedule and host the program I hadn't even started writing 😁. Of course, time-triggered <a href="https://azure.microsoft.com/fr-fr/products/functions">Azure Functions</a> came to my mind; I even considered Durable Functions to break down the process into steps (retrieve events, check for existing events, submit each event...). Then I thought about <a href="https://learn.microsoft.com/en-us/azure/container-apps/jobs">Jobs in Azure Container Apps</a>, or Dapr with Azure Container Apps, and even <a href="https://docs.dapr.io/developing-applications/building-blocks/workflow/workflow-overview/">Dapr Workflows</a>. Eventually, I realized it did not matter much since it was just a proof of concept. I decided to postpone the choice until later (if it ever goes beyond the PoC) and just start coding.</p>
<p>I often like writing .NET tools or small programs using the Worker Service template because it's straightforward and includes useful features like dependency injection and configuration. However, this time I decided to keep things simple: just a .NET console application with all the code in the Program.cs file. With <a href="https://learn.microsoft.com/en-us/dotnet/csharp/fundamentals/program-structure/top-level-statements">top-level statements</a>, it feels similar to writing a Bash or PowerShell script, making it quite convenient for experimenting. Of course, this approach isn't what I would use for a real project.</p>
<h2 id="retrieve-developer-conferences">Retrieve Developer Conferences</h2>
<p>In addition to the readme file, the developers conferences agenda exposes all the data publicly in JSON <a href="https://developers.events/all-events.json">here</a>.</p>
<p>Developer conferences can be easily represented with a record (I only kept the properties I needed):</p>
<pre><code class="language-csharp">public record DeveloperEvent(
    string Name,
    long[] Date,
    string Hyperlink,
    string Location,
    string City,
    string Country
);
</code></pre>
<p>We can use an <code>HttpClient</code> to retrieve the events. The <code>System.Net.Http.Json</code> namespace provides the handy <code>GetFromJsonAsync</code> extension method, which makes the <code>GET</code> HTTP call and deserializes the response using <code>System.Text.Json</code>.</p>
<pre><code class="language-csharp">using var httpClient = new HttpClient()
{
    BaseAddress = new Uri("https://developers.events/")
};
var events = await httpClient.GetFromJsonAsync<DeveloperEvent[]>("all-events.json");
</code></pre>
<h2 id="convert-events-to-the-proper-format">Convert Events To The Proper Format</h2>
<p>The form to submit events to the Tech Community Calendar looks like this:</p>
<img src="/posts/images/iteventcalendar_tcc.webp" class="img-fluid centered-img" alt="Form to submit events to tech community calendar">
<p>The Tech Community Calendar events can be represented with the following record:</p>
<pre><code class="language-csharp">public record TechCommunityCalendarEvent(
    string Name,
    string Url,
    DateTimeOffset StartDate,
    DateTimeOffset EndDate,
    EventType EventType,
    EventFormat EventFormat,
    string Country,
    string City
)
{
    public string? TwitterHandle { get; set; }
};
</code></pre>
<blockquote>
<p>💬 Positional parameters in a record are init-only. As I want to set the Twitter URL after the event has been created, I use a read-write property for it.</p>
</blockquote>
<p>We can write a method to convert a <code>DeveloperEvent</code> to a <code>TechCommunityCalendarEvent</code>:</p>
<pre><code class="language-csharp">TechCommunityCalendarEvent ConvertToTechEvent(DeveloperEvent developerEvent)
{
    var startingDate = DateTimeOffset.FromUnixTimeMilliseconds(developerEvent.Date.First());
    var endingDate = DateTimeOffset.FromUnixTimeMilliseconds(developerEvent.Date.Last());
    var eventNameContainsYear = int.TryParse(developerEvent.Name.Split(" ").LastOrDefault(), out var year)
                                && year == startingDate.Year;
    return new TechCommunityCalendarEvent(
        eventNameContainsYear ? developerEvent.Name : $"{developerEvent.Name} {startingDate.Year}",
        developerEvent.Hyperlink,
        startingDate,
        endingDate,
        EventType.Conference,
        developerEvent.Country is "Online" ? EventFormat.Virtual : EventFormat.In_Person,
        developerEvent.Country,
        developerEvent.City
    );
}
</code></pre>
<p>After filtering on the date to keep only upcoming events, we can convert all the retrieved events:</p>
<pre><code class="language-csharp">var upcomingEvents = events
    .Where(e => e.Date.FirstOrDefault() > DateTimeOffset.UtcNow.ToUnixTimeMilliseconds())
    .Select(ConvertToTechEvent)
    .ToList();
</code></pre>
<h2 id="retrieve-an-event-twitter-profile-link">Retrieve An Event Twitter Profile Link</h2>
<p>In the submission form, there's an optional field for entering the Twitter profile link of an event. The events from the developers conferences agenda don't have that data, but it's interesting information that could be useful to supply. All events have an associated website, and most of those websites contain a link to the event's Twitter profile.</p>
<p>This is where a library like <a href="https://github.com/AngleSharp/AngleSharp">AngleSharp</a>, which can parse HTML according to the W3C specifications, becomes useful. Although I had never used this library before, creating a method to find the Twitter URL on an event's webpage is straightforward.</p>
<pre><code class="language-csharp">async Task<string?> RetrieveEventTwitterProfileLink(string eventUrl)
{
    var context = BrowsingContext.New(Configuration.Default.WithDefaultLoader());
    var queryDocument = await context.OpenAsync(eventUrl);
    var twitterSelector = "a[href*='twitter.com'], a[href*='https://x.com']";
    var twitterSocialLink = queryDocument.QuerySelector(twitterSelector)
        ?.GetAttribute("href");
    return Uri.TryCreate(twitterSocialLink, UriKind.Absolute, out var twitterProfileUri) ?
        // Normalize X/Twitter profile URL by removing query parameters and fragments
        $"{twitterProfileUri.Scheme}://{twitterProfileUri.Host}{twitterProfileUri.AbsolutePath}" : null;
}
</code></pre>
<blockquote>
<p>💬 As the DOM API exposed follows the W3C specifications, it is very convenient. If you can retrieve something with <code>document.querySelector</code> in your browser console, you will be able to retrieve it using the same selector in your AngleSharp code.</p>
</blockquote>
<h2 id="submit-an-event">Submit An Event</h2>
<p>Submitting forms is also possible with AngleSharp. We first have to retrieve the form element in the HTML document using the query selector <code>form[action="/addevent/"]</code>. Then we can directly submit the event.</p>
<pre><code class="language-csharp">async Task SubmitEventToTechCommunityCalendar(TechCommunityCalendarEvent techCommunityCalendarEvent)
{
    var context = BrowsingContext.New(Configuration.Default.WithDefaultLoader());
    var queryDocument = await context.OpenAsync("https://techcommunitycalendar.com/addevent/");
    var form = queryDocument.QuerySelector<IHtmlFormElement>("""form[action="/addevent/"]""");
    if (form is not null)
    {
        var response = await form.SubmitAsync(techCommunityCalendarEvent);
    }
}
</code></pre>
<blockquote>
<p>💬 I intentionally named the properties in the <code>TechCommunityCalendarEvent</code> record with the same names as the fields in the form. This way, I can directly submit the event without any transformation. Otherwise, I would have to convert the event to an anonymous object with the correct names.</p>
</blockquote>
<h2 id="the-full-program">The Full Program</h2>
<p>Here is the content of the complete <code>Program.cs</code> file.</p>
<pre><code class="language-csharp">using System.Net.Http.Json;
using AngleSharp;
using AngleSharp.Dom;
using AngleSharp.Html.Dom;

using var httpClient = new HttpClient()
{
    BaseAddress = new Uri("https://developers.events/")
};
var events = await httpClient.GetFromJsonAsync<DeveloperEvent[]>("all-events.json");
var upcomingEvents = events
    .Where(e => e.Date.FirstOrDefault() > DateTimeOffset.UtcNow.ToUnixTimeMilliseconds())
    .Select(ConvertToTechEvent)
    .ToList();

foreach (var upcomingEvent in upcomingEvents)
{
    upcomingEvent.TwitterHandle = await RetrieveEventTwitterProfileLink(upcomingEvent.Url);
    await SubmitEventToTechCommunityCalendar(upcomingEvent);
}

async Task<string?> RetrieveEventTwitterProfileLink(string eventUrl)
{
    var context = BrowsingContext.New(Configuration.Default.WithDefaultLoader());
    var queryDocument = await context.OpenAsync(eventUrl);
    var twitterSelector = "a[href*='twitter.com'], a[href*='https://x.com']";
    var twitterSocialLink = queryDocument.QuerySelector(twitterSelector)
        ?.GetAttribute("href");
    return Uri.TryCreate(twitterSocialLink, UriKind.Absolute, out var twitterProfileUri) ?
        // Normalize X/Twitter profile URL by removing query parameters and fragments
        $"{twitterProfileUri.Scheme}://{twitterProfileUri.Host}{twitterProfileUri.AbsolutePath}" : null;
}

async Task SubmitEventToTechCommunityCalendar(TechCommunityCalendarEvent techCommunityCalendarEvent)
{
    var context = BrowsingContext.New(Configuration.Default.WithDefaultLoader());
    var queryDocument = await context.OpenAsync("https://techcommunitycalendar.com/addevent/");
    var form = queryDocument.QuerySelector<IHtmlFormElement>("""form[action="/addevent/"]""");
    if (form is not null)
    {
        var response = await form.SubmitAsync(techCommunityCalendarEvent);
    }
}

TechCommunityCalendarEvent ConvertToTechEvent(DeveloperEvent developerEvent)
{
    var startingDate = DateTimeOffset.FromUnixTimeMilliseconds(developerEvent.Date.First());
    var endingDate = DateTimeOffset.FromUnixTimeMilliseconds(developerEvent.Date.Last());
    var eventNameContainsYear = int.TryParse(developerEvent.Name.Split(" ").LastOrDefault(), out var year)
                                && year == startingDate.Year;
    return new TechCommunityCalendarEvent(
        eventNameContainsYear ? developerEvent.Name : $"{developerEvent.Name} {startingDate.Year}",
        developerEvent.Hyperlink,
        startingDate,
        endingDate,
        EventType.Conference,
        developerEvent.Country is "Online" ? EventFormat.Virtual : EventFormat.In_Person,
        developerEvent.Country,
        developerEvent.City
    );
}

public record DeveloperEvent(
    string Name,
    long[] Date,
    string Hyperlink,
    string Location,
    string City,
    string Country
);

public record TechCommunityCalendarEvent(
    string Name,
    string Url,
    DateTimeOffset StartDate,
    DateTimeOffset EndDate,
    EventType EventType,
    EventFormat EventFormat,
    string Country,
    string City
)
{
    public string? TwitterHandle { get; set; }
};

public enum EventFormat
{
    Unknown = 1,
    Virtual = 2,
    In_Person = 3,
    Hybrid = 4
}

public enum EventType
{
    Any = 0,
    Unknown = 1,
    Conference = 2,
    Meetup = 3,
    Hackathon = 4,
    Call_For_Papers = 5,
    Website = 6,
}
</code></pre>
<p>Keep in mind that it's a quick experiment to automate the submission of developer conferences to the Tech Community Calendar, not production-ready code.</p>
<h2 id="final-thoughts">Final Thoughts</h2>
<p>I think this PoC is a good starting point for creating a scheduled process that automatically submits the events from the developers conferences agenda to the tech community calendar.</p>
<blockquote>
<p>💡Of course, it would be great to do the opposite as well (automatically import events from the tech community calendar into the developers conferences agenda), but it seems complicated: events in the tech community calendar are stored in a database I don't have access to, and it would involve parsing and writing to the README file of the developers conferences agenda repository.</p>
</blockquote>
<p>Some ideas for improvement:</p>
<ul>
<li>store the already-submitted events somewhere so that only new events are processed on each run</li>
<li>parallelize the processing of events, as retrieving the Twitter URL or submitting an event can take some time</li>
<li>reorganize the code</li>
</ul>
<p>It was the first time I used AngleSharp, and I was happy with the result. It's a nice library that I would use again for similar needs.</p>
<p>A big thank you to the contributors of these IT event calendars. As someone who tries to attend tech events and speak at developer conferences, I find them incredibly useful. A special shoutout to Aurélie Vache and her developer conferences agenda for making this data openly available (JSON files with CFPs and conferences publicly accessible).</p>
https://techwatching.dev/posts/azure-sdk-di
Using dependency injection with Azure .NET SDK
2024-02-19T00:00:00Z
<p>I love how the Azure SDKs have evolved over the years. In the past, there was no consistency between the various Azure SDKs. However, that's no longer the case (at least for most Azure libraries), as they now adhere to the same principles and follow a set of well-defined <a href="https://azure.github.io/azure-sdk/general_introduction.html">guidelines</a>.</p>
<blockquote>
<p>💡You can learn more about these guidelines and how the Azure .NET SDKs work in this <a href="https://youtu.be/v36NXLU3TLY?si=L8e1ic898kDCisJ7">video</a> from 2021, which I think is still relevant today.</p>
</blockquote>
<p>With this consistency between libraries, it's easier to handle things like authentication and dependency injection uniformly when you are using multiple Azure SDKs in your project.</p>
<p>One aspect often overlooked by people using Azure SDKs is the use of <a href="https://www.nuget.org/packages/Microsoft.Extensions.Azure"><code>Microsoft.Extensions.Azure</code></a>. This package facilitates registering and configuring the service clients for interacting with Azure APIs.</p>
<p>Let's see why using this package could be beneficial for your project.</p>
<h2 id="avoid-making-mistakes-when-registering-service-clients">Avoid making mistakes when registering service clients</h2>
<p>It's mentioned in the <a href="https://learn.microsoft.com/en-us/dotnet/azure/sdk/dependency-injection?view=azure-dotnet&tabs=web-app-builder">documentation</a> to use this package for dependency injection with the Azure SDK for .NET. Still, many people don't read the documentation and manually register the Azure service clients.</p>
<p>It's not a problem in itself if you know what you are doing. Otherwise,</p>
<ul>
<li><p>you might choose the wrong lifetime for the Azure service clients; they must be singletons</p>
</li>
<li><p>you may forget to register a dependency that is needed for your use of the SDK</p>
</li>
</ul>
<pre><code class="language-csharp">using Azure.Identity;
using DIWithAzureSDK;
using Microsoft.Extensions.Azure;
var builder = Host.CreateApplicationBuilder(args);
builder.Services.AddHostedService<Worker>();
builder.Services.AddAzureClients(clientBuilder =>
{
    clientBuilder.AddBlobServiceClient(new Uri("https://stdiwithazuresdk.blob.core.windows.net/"));
    clientBuilder.UseCredential(new DefaultAzureCredential());
});
var host = builder.Build();
host.Run();
</code></pre>
<p>In this sample, the <code>AddBlobServiceClient</code> handles the registration of all dependencies for us so that the <code>BlobServiceClient</code> can then be injected directly where needed.</p>
<pre><code class="language-csharp">public class Worker : BackgroundService
{
    private readonly ILogger<Worker> _logger;
    private readonly BlobServiceClient _blobServiceClient;

    public Worker(ILogger<Worker> logger, BlobServiceClient blobServiceClient)
    {
        _logger = logger;
        _blobServiceClient = blobServiceClient;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        await foreach (var blobContainer in _blobServiceClient.GetBlobContainersAsync(cancellationToken: stoppingToken))
        {
            _logger.LogInformation(blobContainer.Name);
        }
    }
}
</code></pre>
<blockquote>
<p>💬 You may find it convenient to configure the dependency injection for all Azure service clients in a central place with the <code>AddAzureClients</code> method. When applications become larger, with different <code>csproj</code> files, I often prefer to separate service registration by business domain/module, so having everything in a central place does not always suit my needs. That's not a problem: as the internal methods of the library use the <code>TryAdd</code> methods for registering services, I can call <code>AddAzureClients</code> in multiple places with only the services I want to register.</p>
</blockquote>
<h2 id="easily-manage-the-authentication-to-azure-services">Easily manage the authentication to Azure services</h2>
<p>All the SDKs use the <a href="https://www.nuget.org/packages/Azure.Identity">Azure.Identity</a> package to authenticate to Azure. There are different authentication methods available and you can easily specify which one to use with each client. Additionally, you can define a default authentication method for all clients, as demonstrated in the previous example.</p>
<pre><code class="language-csharp">builder.Services.AddAzureClients(clientBuilder =>
{
    clientBuilder.AddServiceBusClientWithNamespace("sb-diwithazuresdk.servicebus.windows.net")
        .WithCredential(new ManagedIdentityCredential());
    clientBuilder.AddTableServiceClient(new Uri("https://stdiwithazuresdk.table.core.windows.net"))
        .WithCredential(new EnvironmentCredential());
    clientBuilder.AddBlobServiceClient(new Uri("https://stdiwithazuresdk.blob.core.windows.net/"));
    clientBuilder.UseCredential(new DefaultAzureCredential());
});
</code></pre>
<p>In the example above, we configured:</p>
<ul>
<li><p>the service bus client to use the managed identity of the application to obtain a valid token for the service bus</p>
</li>
<li><p>the table client to use environment variables to obtain a valid token for the storage table</p>
</li>
<li><p>the blob client without any credentials so that it will use the one that we configured by default (with the <code>UseCredential</code> method)</p>
</li>
</ul>
<h2 id="effortlessly-configure-the-azure-clients-options">Effortlessly configure the Azure clients' options</h2>
<p>All Azure clients have options that can be effortlessly configured when registering them in the <code>AddAzureClients</code> method.</p>
<pre><code class="language-csharp">builder.Services.AddAzureClients(clientBuilder =>
{
    clientBuilder.AddBlobServiceClient(new Uri("https://stdiwithazuresdk.blob.core.windows.net/"))
        .WithCredential(new DefaultAzureCredential())
        .ConfigureOptions(options =>
        {
            options.TrimBlobNameSlashes = true;
            options.Retry.MaxRetries = 10;
            options.Diagnostics.IsLoggingEnabled = false;
        });
});
</code></pre>
<p>Some options are specific to the client (like the <code>TrimBlobNameSlashes</code> here for Blob client). Others can be configured globally and overridden on a client if necessary.</p>
<pre><code class="language-csharp">builder.Services.AddAzureClients(clientBuilder =>
{
    clientBuilder.AddBlobServiceClient(new Uri("https://stdiwithazuresdk.blob.core.windows.net/"))
        .WithCredential(new DefaultAzureCredential())
        .ConfigureOptions(options =>
        {
            options.TrimBlobNameSlashes = true;
            options.Retry.MaxRetries = 10;
            options.Diagnostics.IsLoggingEnabled = false;
        });

    clientBuilder.ConfigureDefaults(options =>
    {
        options.Retry.MaxRetries = 5;
        options.Retry.Mode = RetryMode.Exponential;
        options.Diagnostics.IsDistributedTracingEnabled = true;
    });
});
</code></pre>
<p>That's the purpose of the <code>ConfigureDefaults</code> method.</p>
<blockquote>
<p>💡Please note that all this configuration (as well as the Uris of each client) can be loaded from the configuration like this <code>clientBuilder.AddTableServiceClient(builder.Configuration.GetSection("Inventory:Tables"));</code></p>
</blockquote>
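<p>For reference, the <code>appsettings.json</code> section bound in that snippet could look like this (a sketch; I believe <code>serviceUri</code> is the key the client factory reads the endpoint from, and client option properties can be set alongside it):</p>
<pre><code class="language-json">{
  "Inventory": {
    "Tables": {
      "serviceUri": "https://stdiwithazuresdk.table.core.windows.net"
    }
  }
}
</code></pre>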
<h2 id="use-named-clients-for-different-azure-resources">Use named clients for different Azure resources</h2>
<p>Usually, you only need one client of each SDK in your application. Let's say you have multiple Azure Storage tables that are used in your application, you will only need to have one <code>TableServiceClient</code>. However, if you are interacting with tables in two different storage accounts, you will need multiple table clients.</p>
<p>To do that you can register your clients with a specific name:</p>
<pre><code class="language-csharp">builder.Services.AddAzureClients(clientBuilder =>
{
    clientBuilder.AddTableServiceClient(builder.Configuration.GetSection("Shop:Inventory"))
        .WithName("Shop");
    clientBuilder.AddTableServiceClient(builder.Configuration.GetSection("Warehouse:Inventory"))
        .WithName("Warehouse");
});
</code></pre>
<p>This way, you will be able to retrieve the specific client you need in your code:</p>
<pre><code class="language-csharp">public class WarehouseDeliveryService
{
private readonly TableServiceClient _tableServiceClient;
public WarehouseDeliveryService(IAzureClientFactory<TableServiceClient> azureClientFactory)
{
_tableServiceClient = azureClientFactory.CreateClient("Warehouse");
}
}
</code></pre>
<h2 id="register-a-custom-client-factory">Register a custom client factory</h2>
<p>If you have <a href="https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/extensions/Microsoft.Extensions.Azure/README.md#registering-a-custom-client-factory">specific needs</a>, the <code>AddClient</code> method can help you register your Azure client while letting you control how you instantiate the client.</p>
<p>For instance, the Azure Cosmos DB .NET SDK is not built on the same foundation (<code>Azure.Core</code>) as the other SDKs. So at the time of writing, there is no <code>AddCosmosServiceClient</code> method you can use inside <code>AddAzureClients</code> (there is an <a href="https://github.com/Azure/azure-cosmos-dotnet-v3/issues/4002">issue</a> about that though). However, you can use the <code>AddClient</code> method I've just mentioned.</p>
<pre><code class="language-csharp">builder.Services.AddOptions<CosmosDbConfiguration>().BindConfiguration("Warehouse:CosmosDb");
builder.Services.AddAzureClients(clientBuilder =>
{
clientBuilder.AddClient<CosmosClient, CosmosClientOptions>((_, _, serviceProvider) =>
{
var cosmosConfiguration = serviceProvider.GetRequiredService<IOptions<CosmosDbConfiguration>>().Value;
return new CosmosClientBuilder(cosmosConfiguration.Endpoint, cosmosConfiguration.AuthKey)
.WithSerializerOptions(new () { PropertyNamingPolicy = CosmosPropertyNamingPolicy.CamelCase })
.Build();
}).WithName("Warehouse");
});
</code></pre>
<p>Note that using the <code>AddClient</code> method still lets us take advantage of the named clients feature.</p>
<h2 id="wrapping-up">Wrapping up</h2>
<p>As you have seen, the use of the <a href="https://www.nuget.org/packages/Microsoft.Extensions.Azure"><code>Microsoft.Extensions.Azure</code></a> package simplifies the registration and configuration of Azure clients. While providing you with a consistent way of handling the dependency injection for Azure SDKs, it also allows you to easily customize the authentication and other options available.</p>
<p>I hope you learned something. Don't hesitate to share your tips or what you like about the Azure SDKs in the comments.</p>
<p>I love how the Azure SDKs have evolved over the years. In the past, there was no consistency between the various Azure SDKs. However, that's no longer the case (at least for most Azure libraries), as they now adhere to the same principles and follow a set of well-defined <a href="https://azure.github.io/azure-sdk/general_introduction.html">guidelines</a>.</p>
https://techwatching.dev/posts/w04-2024-tips-learned-this-week
Week 4, 2024 - Tips I learned this week
2024-01-29T00:00:00Z
<h2 id="easily-debug-a-non-http-triggered-azure-function">Easily debug a non-HTTP-triggered Azure Function</h2>
<p>The other day, I wanted to locally debug a Queue-triggered function without manually adding a queue message to my local storage.</p>
<p>My Azure Function looked like that:</p>
<pre><code class="language-csharp">public record Order(string Product, int Count);
public class ProcessOrder
{
private readonly ILogger<ProcessOrder> _logger;
public ProcessOrder(ILogger<ProcessOrder> logger)
{
_logger = logger;
}
[Function(nameof(ProcessOrder))]
public void Run([QueueTrigger("orders")] Order sentOrder)
{
_logger.LogInformation($"Order contains {sentOrder.Count} {sentOrder.Product}");
}
}
</code></pre>
<p>To trigger it, I could simply add a message to the <code>orders</code> queue of my <a href="https://github.com/Azure/Azurite">storage emulator</a> like this:</p>
<img src="/posts/images/w042024tips_storage.webp" class="img-fluid centered-img" alt="Queue message in Azure Storage Explorer.">
<p>You may notice that I don't even have to go to Azure Storage Explorer to add the message: I can do it directly in the IDE. However, call me lazy, but I wanted to execute the function just by making an HTTP call, as we do for HTTP-triggered functions.</p>
<p>This way, I could write the HTTP request in an HTTP file, commit it, and push it to my repository to share it with my colleagues, so they don't have to guess what message they should put in the queue to trigger the function.</p>
<p>Fortunately, the <a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-manually-run-non-http?tabs=azure-portal#define-the-request-location"><strong>documentation</strong> explains</a> how to do this.</p>
<img src="/posts/images/w042024tips_function.webp" class="img-fluid centered-img" alt="Define the request location: host name + folder path + function name.">
<p>Thus, for my use case, the resulting request is as follows:</p>
<pre><code class="language-http">POST http://localhost:7071/admin/functions/ProcessOrder HTTP/1.1
Content-Type: application/json

{
"input": "{\n \"product\": \"laptop\",\n \"count\": 3\n}"
}
</code></pre>
<p>The content of your queue message goes in the value of the key "input" and <strong>must be escaped</strong>.</p>
<blockquote>
<p>🚧 If, like me, you skim through the documentation, you might miss the "escape" requirement and your request will fail, so be sure to properly escape your content.</p>
</blockquote>
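<p>Rather than escaping the message by hand, the payload can be built by serializing twice: once to get the JSON of the queue message itself, and once to wrap that JSON as the string value of <code>input</code>. A minimal sketch (not from the original post) using the <code>Order</code> record above:</p>
<pre><code class="language-csharp">using System.Text.Json;

var order = new Order("laptop", 3);
// First serialization: the queue message itself.
var message = JsonSerializer.Serialize(order);
// Second serialization: wraps the message as an escaped JSON string under "input".
var payload = JsonSerializer.Serialize(new { input = message });
// payload can now be posted to the admin endpoint of the function
</code></pre>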
<h2 id="the-azure-devops-tip-you-did-not-know-about-azure-pipelines-tasks-name-conflicts">The Azure DevOps tip you did not know about: Azure Pipelines tasks name conflicts</h2>
<p>I recently discovered that when you install extensions from the Azure DevOps marketplace, several Azure Pipelines tasks can have the same name. And if you use that name in your pipelines, Azure Pipelines won't know which task you are referring to and will prevent your pipeline from running.</p>
<p>This can easily occur if you install multiple extensions for Terraform in your Azure DevOps organization. For instance, the extensions <a href="https://marketplace.visualstudio.com/items?itemName=JasonBJohnson.azure-pipelines-tasks-terraform">Azure Pipelines Terraform Tasks</a> from Jason Johnson and <a href="https://marketplace.visualstudio.com/items?itemName=ms-devlabs.custom-terraform-tasks">Terraform</a> from Microsoft DevLabs both contain a task with the same name: <code>TerraformInstaller</code>.</p>
<p>To avoid these conflicts, you must use the full name of the tasks in your pipelines. You can find their full names in the GitHub repository of the extensions. Another way is to use these tasks in a test Release and click on the "View YAML" button to see the full name of the task you added.</p>
<img src="/posts/images/w042024tips_ado_release.webp" class="img-fluid centered-img" alt="Screenshot of a release in Azure DevOps.">
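<p>In YAML, a fully qualified task reference follows the pattern <code>publisherId.extensionId.contributionId.taskName@version</code>. For instance (the identifiers below are illustrative; check the actual values in the extension's repository):</p>
<pre><code class="language-yaml">steps:
  # Fully qualified name: no ambiguity with another extension's TerraformInstaller task
  - task: JasonBJohnson.azure-pipelines-tasks-terraform.azure-pipelines-tasks-terraform-installer.TerraformInstaller@1
    inputs:
      terraformVersion: latest
</code></pre>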
<h2 id="using-metrics-to-understand-your-usage-of-azure-resources">Using metrics to understand your usage of Azure resources</h2>
<p>I don't usually use up the monthly free credits of my Azure subscription, but this month my spending limit was quickly reached, and my subscription was disabled!</p>
<p>The cost analysis tab of my subscription showed me that an Azure Maps Account resource was responsible for consuming most of my credits but didn't provide more details.</p>
<p>So, I went to the Metrics tab of my resource and discovered that I could split the Usage metric by API name to determine exactly which Azure Maps API was heavily used by my applications. Combined with the <a href="https://azure.microsoft.com/en-us/pricing/details/azure-maps/">pricing page</a>, this tells me which API requests I'm making too frequently and, therefore, how to optimize costs.</p>
<img src="/posts/images/w042024tips_azuremaps_metrics.webp" class="img-fluid centered-img" alt="Azure Maps usage metrics by API name.">
<p>Depending on the type of resource, you will use different metrics and split on different properties. Regardless, metrics can help you comprehend your resource usage and its associated cost.</p>
<p>And that's it for this week, happy learning!</p>
https://techwatching.dev/posts/2023-retro
Another year of sharing and learning - Dev Retro 2023
2024-01-02T00:00:00Z
<p>Last year, I wrote my <a href="https://www.techwatching.dev/posts/2022-retro">first annual retrospective</a>. It was an interesting exercise that I intend to do every year. So for 2023, here is my year in review.</p>
<h2 id="plans-for-2023-versus-reality">Plans for 2023 versus reality</h2>
<p>My plans for 2023 that I shared in <a href="https://www.techwatching.dev/posts/2022-retro">my 2022 retro</a> were to:</p>
<ul>
<li>keep learning about Vue.js and Nuxt.js</li>
<li>explore Azure Container Apps and Dapr</li>
<li>keep writing articles on my blog about topics I am interested in</li>
<li>keep sharing links and tips on social networks</li>
<li>improve my use of PKM tools like Obsidian</li>
<li>give at least 1 talk at a developer conference</li>
</ul>
<p>I must admit that I didn't fully achieve my goals:</p>
<ul>
<li>I continued learning Vue.js and Nuxt.js but not as extensively as I would have hoped</li>
<li>I didn't dive deeply into Azure Container Apps and Dapr although I did experiment with them a bit</li>
<li>I wrote articles on my blog but fewer than in the previous years</li>
<li>I shared links and tips on social networks but not consistently</li>
<li>I took my notes using Obsidian, but I haven't utilized it as a true PKM tool</li>
<li>I did give several talks at various developer conferences (more on that later)</li>
</ul>
<p>It's not a big deal that I didn't accomplish everything. My primary goal for 2023 was for it to be a year of learning and sharing, just like in 2022. And I succeeded in doing that. 2023 was another year of learning and sharing, and it also provided numerous speaking opportunities. This is one of the reasons why I didn't have the time to do everything I planned.</p>
<h2 id="public-speaking">Public speaking</h2>
<p>In 2022, I gave my first talk at a developer conference (online). In 2023, I had the opportunity to speak at five French developer conferences:</p>
<ul>
<li>Global Azure France in Paris - May 2023</li>
<li>Cloud Est in Lyon - June 2023</li>
<li>Breizh Camp in Rennes - June 2023</li>
<li>BDX I/O in Bordeaux - November 2023</li>
<li>.NET Conf 2023 with MTG (online) - December 2023</li>
</ul>
<p>The first three talks focused on Infrastructure as Code in general, and more specifically on <a href="https://www.pulumi.com/">Pulumi</a>. I particularly enjoyed speaking at Breizh Camp, as many people attended my talk 🥰 and the organization was excellent.</p>
<img src="/posts/images/2022_retro_talk.webp" class="img-fluid centered-img" alt="Screenshot of talk record at Breizh Camp">
<p>The fourth talk showcased <a href="https://sli.dev/">slidev</a>, a tool for developers by <a href="https://antfu.me/">Anthony Fu</a> that allows creating slides in markdown (and using web technologies). It was a 15-minute talk titled "Oops, I Forgot to Make My Slides," during which a friend helped me create my slides about Vue 3 on stage. It was an incredibly fun talk to prepare and deliver. I cannot thank my co-speaker Xavier Noya enough for agreeing to do this talk with me. Additionally, I was delighted to be a speaker at this fantastic conference that takes place in my hometown.</p>
<p>The last talk was online (for a French event related to the .NET Conf 2023). It concentrated on the new features of C# 12 and .NET 8, and demonstrated how to implement Infrastructure as Code (IaC) using .NET.</p>
<blockquote>
<p>🎥 The talks I gave are all in French, but if you are interested some of them have been <a href="https://drp.li/f7I9N">recorded</a></p>
</blockquote>
<p>I am incredibly proud of the numerous speaking opportunities I've had. While it may not seem impressive to experienced speakers, it means a lot to me. I am truly grateful for the chance to speak at these events and to have attended some fantastic talks as well.</p>
<p>Developer conferences' Call for Papers are highly selective, and they always receive many excellent proposals. Thus, I have no idea if I will be able to speak at multiple conferences in 2024, but I will certainly do my best.</p>
<h2 id="blogging">Blogging</h2>
<p>In 2023, I "only" wrote 11 articles on my blog, which is fewer than the 15 articles I wrote in 2022 and significantly less than the 19 articles in 2021.</p>
<p>For some of my articles, I created a GitHub repository with the code samples used in the article. That's something I intend to do more.</p>
<img src="/posts/images/2022_retro_github.webp" class="img-fluid centered-img" alt="Example of GitHub repository sample code for article.">
<p>My blog's traffic decreased a little (not enough articles this year I guess):</p>
<ul>
<li>27K users vs 28K</li>
<li>27K page views vs 37K</li>
</ul>
<p>I kept cross-posting all my articles on <a href="https://techwatching.hashnode.dev/">Hashnode</a> and <a href="https://dzone.com/users/4682620/techwatching.html">dev.to</a>, and published two of them on <a href="https://dzone.com/users/4682620/techwatching.html">DZone</a>.</p>
<p>Together with a friend, we initiated a <a href="https://bordeauxcoders.com/series/pnpm-101">blog post series about pnpm</a> on a new team blog called "<a href="https://bordeauxcoders.com/">Bordeaux Coders</a>". It was enjoyable, but our motivation waned after the summer. We need to regain our motivation, start writing again, and perhaps find others interested in collaborating on this blog.</p>
<img src="/posts/images/2022_retro_blog.webp" class="img-fluid centered-img" alt="Screenshot of the Bordeaux Coders blog">
<p>I have also co-authored an <a href="https://www.avanade.com/fr-fr/blogs/le-blog/life-at-avanade/notre-expertise-au-service-des-nouvelles-generations">article on my company's blog</a> about something I have been doing for 5 years now: overseeing student projects at my former engineering school.</p>
<h2 id="school-relationships-and-teaching">School Relationships and Teaching</h2>
<p>As I mentioned, this year I once again supervised a group of students on a small software development project over a few months. This experience provided the opportunity to:</p>
<ul>
<li>Explore new tools and technologies</li>
<li>Share knowledge with students and learn from them as well</li>
<li>Grow (by wearing different hats and utilizing educational management skills)</li>
<li>Promote my company's expertise and attract future talent</li>
</ul>
<p>For the first time, in 2023, I taught a DevOps course at the same engineering school. It was an optional module on DevOps practices for 2nd-year students.</p>
<p>Building relationships with schools takes time and isn't always easy, but I enjoy doing it (and I'm not alone, as I have colleagues who help me). Unfortunately, I am uncertain whether my company will continue supporting me in this area next year, so I don't know what I will be able to do in 2024.</p>
<h2 id="whats-next"><strong>What's next?</strong></h2>
<p>Together with two friends, we have started a tech community called "<a href="https://www.meetup.com/mtg-bordeaux/">MTG:Bordeaux</a>," which will host meetups in Bordeaux to discuss Microsoft technologies (among others) several times a year. It is affiliated with <a href="https://www.mtg-france.org/">MTG:France</a>, which already encompasses numerous local communities in various French cities. The inaugural meetup is scheduled for February 1, 2024, and I hope it will be the first of many.</p>
<p>Instead of setting vague plans for 2024 that I might not fully achieve, I prefer creating a list of small, tangible goals for the year. I know I won't be able to accomplish all of them, but it will provide me with achievable objectives to work on throughout the year:</p>
<ul>
<li>Organize 3 meetups for MTG:Bordeaux</li>
<li>Obtain the official Vue.js certification</li>
<li>Create a small speaker website in Nuxt, listing my previous talks</li>
<li>Build a small application using Dapr and running in Azure Container Apps</li>
<li>Write a blog post about Obsidian</li>
<li>Write 2 articles for the <a href="https://bordeauxcoders.com/series/vuejs-cicd">Vue CI/CD series</a> on <a href="https://bordeauxcoders.com/">Bordeaux Coders</a></li>
<li>Present at least 2 different talks at developer conferences</li>
<li>Reach 1K followers on LinkedIn</li>
<li>Add missing sections to the <a href="https://github.com/TechWatching/pulumi-azure-workshop">Pulumi Azure Workshop</a></li>
<li>Develop a 1-day Pulumi training course</li>
<li>Create a YouTube video about a developer tool or technology</li>
</ul>
<h2 id="to-conclude">To conclude</h2>
<p>Despite not fully achieving my goals, 2023 was an interesting year, especially regarding public speaking. Looking ahead to 2024, I have outlined a series of concrete goals that emphasize continuous learning and community involvement.</p>
<p>As I close the 2023 chapter, I want to thank 3 people:</p>
<ul>
<li><p>Christian Bonnaud - you played a role in many aspects I mentioned in this article (school relationships, blogging, tech community, ...), and it's always a pleasure to collaborate with you.</p>
</li>
<li><p>Xavier Noya - it was nice to give a talk alongside you this year at BDX I/O</p>
</li>
<li><p>My life partner - I wouldn't be able to write these articles, prepare these talks, or accomplish everything I do without your support and understanding</p>
</li>
</ul>
<p>Enjoy 2024, and keep learning.</p>
https://techwatching.dev/posts/playing-with-dotnet8
Playing with the .NET 8 Web API template
2023-12-19T00:00:00Z
<p>In this article, we will explore the latest C# 12 and .NET 8 features by applying them to the basic dotnet Web API template.</p>
<h2 id="getting-started-with-the-asp.net-core-web-api-template">Getting started with the ASP.NET Core Web API template</h2>
<p>First, let's install the latest <a href="https://dotnet.microsoft.com/en-us/download/dotnet/8.0">.NET 8 SDK</a>:</p>
<pre><code class="language-powershell">winget install --id Microsoft.DotNet.SDK.8
</code></pre>
<p>We can list the available templates:</p>
<img src="/posts/images/dontnet8_templates.webp" class="img-fluid centered-img" alt="List of the available dotnet templates">
<p>Let's go for the basic ASP.NET Core Web API template but with the controllers:</p>
<pre><code class="language-powershell">dotnet new webapi --use-controllers -n WeatherApi
</code></pre>
<blockquote>
<p>💡Minimal APIs are great too, but having controllers is more suited to what I want to show in this article.</p>
</blockquote>
<img src="/posts/images/dontnet8_webapi_template.webp" class="img-fluid centered-img" alt="Screenshot of the generated project in Rider">
<p>We can run the API and test the <code>GET /weatherforecast</code> endpoint using the generated request file:</p>
<pre><code class="language-http">@WeatherApi_HostAddress = http://localhost:5103
GET {{WeatherApi_HostAddress}}/weatherforecast/
Accept: application/json
</code></pre>
<p>This request file is included in the dotnet <code>webapi</code> template and is supported by Visual Studio, Rider, and VS Code (using the <a href="https://marketplace.visualstudio.com/items?itemName=humao.rest-client">REST Client extension</a>).</p>
<blockquote>
<p>💡Read my article about <a href="https://www.techwatching.dev/posts/http-clients">choosing an API Client</a> and why I prefer versioned HTTP files rather than GUI tools like Postman.</p>
</blockquote>
<p>If we put a breakpoint in the controller we can see one small ASP.NET 8 improvement concerning the debugging experience: <a href="https://learn.microsoft.com/en-us/aspnet/core/release-notes/aspnetcore-8.0?view=aspnetcore-8.0#improved-debugging-experience">better debug summaries are displayed for types like <code>HttpContext</code></a>.</p>
<img src="/posts/images/dontnet8_httpcontext.webp" class="img-fluid centered-img" alt="Debugging display of the HTTPContext class">
<h2 id="enhancing-the-weather-forecast-api">Enhancing the Weather Forecast API</h2>
<p>Currently, the template randomly generates weather forecasts in the controller. It would be nice to retrieve real weather data from a weather API.</p>
<p>To do that we can:</p>
<ul>
<li><p>introduce an <code>IWeatherService</code> interface that contains a method to retrieve weather forecasts</p>
</li>
<li><p>extract the current logic that generates the random weather forecasts into a <code>RandomWeatherService.cs</code> that implements this interface</p>
</li>
<li><p>create a new implementation <code>OpenWeatherService</code> of this interface that retrieves the weather data from the Open Weather Map API</p>
</li>
</ul>
<img src="/posts/images/dontnet8_webapi_diagram.webp" class="img-fluid centered-img" alt="A diagram of the ASP.NETCore Weather API">
<p>The <code>WeatherForecastController</code> becomes:</p>
<pre><code class="language-csharp">[ApiController]
[Route("[controller]")]
public class WeatherForecastController : ControllerBase
{
private readonly IWeatherService _weatherService;
public WeatherForecastController(IWeatherService weatherService)
{
_weatherService = weatherService;
}
[HttpGet(Name = "GetWeatherForecast")]
[ProducesResponseType(typeof(WeatherForecast), StatusCodes.Status200OK)]
public Task<WeatherForecast[]> Get()
{
return _weatherService.GetWeatherForecasts();
}
}
</code></pre>
<p>We can get rid of the <code>typeof</code> because there are now <a href="https://learn.microsoft.com/en-us/aspnet/core/release-notes/aspnetcore-8.0?view=aspnetcore-8.0#support-for-generic-attributes">generic attributes for some common ASP.NET Core attributes</a> like <code>ProducesResponseType</code>.</p>
<pre><code class="language-csharp"> [HttpGet(Name = "GetWeatherForecast")]
[ProducesResponseType<WeatherForecast>(StatusCodes.Status200OK)]
public Task<WeatherForecast[]> Get()
{
return _weatherService.GetWeatherForecasts();
}
</code></pre>
<p>There are now 2 implementations of the <code>IWeatherService</code> interface:</p>
<ul>
<li><p><code>RandomWeatherService</code> that contains the code that previously was in the controller</p>
</li>
<li><p><code>OpenWeatherService</code> that makes a call to the Open Weather Map API to retrieve the weather forecasts and then maps the obtained data to a list of <code>WeatherForecast</code></p>
</li>
</ul>
<pre><code class="language-csharp">public class OpenWeatherService : IWeatherService
{
private readonly IOpenWeatherMapApi _openWeatherMapApi;
private static readonly (double Latitude, double Longitude) BordeauxCoordinates = (44.837789, -0.57918);
public OpenWeatherService(IOpenWeatherMapApi openWeatherMapApi)
{
_openWeatherMapApi = openWeatherMapApi;
}
public async Task<WeatherForecast[]> GetWeatherForecasts()
{
var weatherApiResponse = await _openWeatherMapApi.GetWeatherForecast(BordeauxCoordinates.Latitude, BordeauxCoordinates.Longitude);
var computeWeatherSummary = (double temperature) =>
temperature switch
{
< 0 => "Freezing",
>= 0 and < 5 => "Bracing",
>= 5 and < 12 => "Chilly",
>= 12 and < 18 => "Cool",
>= 18 and < 24 => "Mild",
>= 24 and < 30 => "Warm",
>= 30 and < 35 => "Balmy",
>= 35 and < 40 => "Hot",
>= 40 and < 45 => "Sweltering",
>= 45 => "Scorching",
_ => "Warm"
};
return weatherApiResponse.List
.Select(x =>
new WeatherForecast
{
Date = DateOnly.FromDateTime(DateTimeOffset.FromUnixTimeSeconds(x.Dt).Date),
TemperatureC = Convert.ToInt32(x.Main.Temp),
Summary = computeWeatherSummary(x.Main.Temp)
})
.ToArray();
}
}
</code></pre>
<p>The weather forecasts are retrieved for a specific geolocation: coordinates (corresponding to Bordeaux, France) are passed to the Open Weather Map API call. In C# 12, we can alias any type, so we can introduce a <code>Coordinates</code> alias for the coordinates tuple:</p>
<pre><code class="language-csharp">using Coordinates = (double Latitude, double Longitude);
public class OpenWeatherService : IWeatherService
{
private readonly IOpenWeatherMapApi _openWeatherMapApi;
private static readonly Coordinates BordeauxCoordinates = (44.837789, -0.57918);
</code></pre>
<p>Once this call is done, results are mapped to the expected <code>WeatherForecast</code> model. A lambda expression is used to get the "weather summary" from a temperature. If we want a default summary, we can rely on the support for <a href="https://learn.microsoft.com/en-us/dotnet/csharp/whats-new/csharp-12#default-lambda-parameters">default lambda parameters</a> in C# 12.</p>
<pre><code class="language-csharp">var computeWeatherSummary = (double temperature, string defaultSummary = "Warm") =>
temperature switch
{
< 0 => "Freezing",
>= 0 and < 5 => "Bracing",
>= 5 and < 12 => "Chilly",
>= 12 and < 18 => "Cool",
>= 18 and < 24 => "Mild",
>= 24 and < 30 => "Warm",
>= 30 and < 35 => "Balmy",
>= 35 and < 40 => "Hot",
>= 40 and < 45 => "Sweltering",
>= 45 => "Scorching",
_ => defaultSummary
};
</code></pre>
<p><code>RandomWeatherService</code> does not have this logic because summaries are randomly selected from an array of possible values.</p>
<pre><code class="language-csharp">private static readonly string[] Summaries = new [] { "Freezing", "Bracing", "Chilly", "Cool", "Mild", "Warm", "Balmy", "Hot", "Sweltering", "Scorching" };
</code></pre>
<p>With <a href="https://learn.microsoft.com/en-us/dotnet/csharp/whats-new/csharp-12#collection-expressions">collection expressions</a>, this array can be defined directly with square brackets.</p>
<pre><code class="language-csharp">private static readonly string[] Summaries = [ "Freezing", "Bracing", "Chilly", "Cool", "Mild", "Warm", "Balmy", "Hot", "Sweltering", "Scorching"];
</code></pre>
<p>This would work with other types of collections as well. If we needed another list containing only the cold summaries, we could avoid duplication by defining both lists and using the spread operator.</p>
<pre><code class="language-csharp"> private static readonly IList<string> ColdAdjectives = ["Freezing", "Bracing", "Chilly", "Cool"];
private static readonly string[] Summaries = [ ..ColdAdjectives, "Mild", "Warm", "Balmy", "Hot", "Sweltering", "Scorching"];
</code></pre>
<p>The last C# 12 feature we can use in this example is primary constructors for classes (and structs), which were previously limited to records.</p>
<p>The <code>WeatherForecast</code> class could become the following:</p>
<pre><code class="language-csharp">namespace WeatherApi;
public class WeatherForecast(DateOnly date, int temperatureC, string? summary)
{
public int TemperatureC { get; } = temperatureC;
public DateOnly Date { get; } = date;
public string? Summary { get; } = summary;
public int TemperatureF => 32 + (int)(TemperatureC / 0.5556);
}
</code></pre>
<blockquote>
<p>💬 I'm not sure this is completely relevant here, a record would probably be better but you get the idea.</p>
</blockquote>
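<p>For the record (pun intended), the positional record alternative mentioned in the note above would be even shorter; a sketch, keeping the same property names:</p>
<pre><code class="language-csharp">// Date, TemperatureC and Summary become init-only properties generated by the record
public record WeatherForecast(DateOnly Date, int TemperatureC, string? Summary)
{
    public int TemperatureF => 32 + (int)(TemperatureC / 0.5556);
}
</code></pre>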
<p>You can use primary constructors in any class, and they work with dependency injection as well. However, be aware that the services you used to assign to a private read-only field of your class won't be read-only anymore, like <code>weatherService</code> in this example:</p>
<pre><code class="language-csharp">public class WeatherForecastController(IWeatherService weatherService, ILogger<WeatherForecastController> logger) : ControllerBase
{
[HttpGet(Name = "GetWeatherForecast")]
[ProducesResponseType(typeof(WeatherForecast), StatusCodes.Status200OK)]
public Task<WeatherForecast[]> Get()
{
return weatherService.GetWeatherForecasts();
}
}
</code></pre>
<p>Having 2 different implementations of the <code>IWeatherService</code> is great, but what if you need a specific one in some part of your code? The one injected in your class is the last one registered in the DI container, which may not be the one you want. You could get all of them by injecting <code>IEnumerable<IWeatherService></code> and selecting the one you need, or create a sort of factory to retrieve the correct instance. Yet in .NET 8, you don't need to worry about all that, because you have <a href="https://learn.microsoft.com/en-us/dotnet/core/whats-new/dotnet-8#keyed-di-services">keyed DI services</a>.</p>
<p>Specifying a key (that can be anything, not necessarily a string) is done when registering the services in the DI container.</p>
<pre><code class="language-csharp">builder.Services.AddKeyedTransient<IWeatherService, RandomWeatherService>("random");
builder.Services.AddKeyedTransient<IWeatherService, OpenWeatherService>("api");
</code></pre>
<p>With this key, retrieving a specific implementation becomes easy.</p>
<pre><code class="language-csharp"> public WeatherForecastController([FromKeyedServices("random")] IWeatherService weatherService, ILogger<WeatherForecastController> logger)
{
_logger = logger;
_weatherService = weatherService;
}
</code></pre>
<p>I did not discuss the code that requests the Open Weather Map API. It's quite simple thanks to the use of <a href="https://github.com/reactiveui/refit">Refit</a>.</p>
<pre><code class="language-csharp">using Refit;
namespace WeatherApi.Services.OpenWeatherMap;
public interface IOpenWeatherMapApi
{
[Get("/forecast?lat={latitude}&lon={longitude}&units=metric")]
Task<WeatherMapResponse> GetWeatherForecast(double latitude, double longitude);
}
public record WeatherMapResponse(IList<WeatherMapForecast> List);
public record WeatherMapForecast(int Dt, WeatherMapMain Main);
public record WeatherMapMain(double Temp);
</code></pre>
<p>I created an HTTP Message Handler to take care of adding the Open Weather Map API key to the requests. This API key and the URL to the API come from the configuration and are mapped to a configuration object <code>WeatherMapConfiguration</code>.</p>
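<p>The handler itself isn't shown here; a minimal sketch could look like the following (assuming the API key is passed through the <code>appid</code> query parameter, which is what the Open Weather Map API expects):</p>
<pre><code class="language-csharp">public class ApiKeyHandler(IOptions<WeatherMapConfiguration> options) : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Append the "appid" query parameter to every outgoing request.
        var uri = request.RequestUri!;
        var separator = string.IsNullOrEmpty(uri.Query) ? "?" : "&";
        request.RequestUri = new Uri($"{uri}{separator}appid={options.Value.ApiKey}");
        return base.SendAsync(request, cancellationToken);
    }
}
</code></pre>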
<p>In .NET 8, we can use <a href="https://learn.microsoft.com/en-us/dotnet/core/whats-new/dotnet-8#data-validation">data validation attributes</a> for data like configuration options. There is also a <a href="https://learn.microsoft.com/en-us/dotnet/core/whats-new/dotnet-8#options-validation">source code generator</a> that can implement the validation logic:</p>
<pre><code class="language-csharp">namespace WeatherApi.Services.OpenWeatherMap;
public class WeatherMapConfiguration
{
[Required]
public required string ApiKey { get; init; }
[Required]
[Url]
public required string Uri { get; init; }
}
[OptionsValidator]
public partial class WeatherMapConfigurationValidator : IValidateOptions<WeatherMapConfiguration>
{
}
</code></pre>
<p>This way, we can make sure that the configuration contains the API key and that the URI is a well-formed URL. The configuration in <code>Program.cs</code> looks like this:</p>
<pre><code class="language-csharp">builder.Services.Configure<WeatherMapConfiguration>(builder.Configuration.GetSection("WeatherMap"));
builder.Services.AddSingleton<IValidateOptions<WeatherMapConfiguration>, WeatherMapConfigurationValidator>();
builder.Services.AddTransient<ApiKeyHandler>();
builder.Services.AddRefitClient<IOpenWeatherMapApi>()
.ConfigureHttpClient((provider, client) =>
{
var configuration = provider.GetRequiredService<IOptions<WeatherMapConfiguration>>().Value;
client.BaseAddress = new Uri(configuration.Uri);
})
.AddHttpMessageHandler<ApiKeyHandler>();
</code></pre>
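<p>💡Note that the options registration could also be written with the <code>AddOptions</code> builder, which additionally lets the application fail fast at startup when the configuration is invalid (a sketch, assuming the <code>WeatherMapConfigurationValidator</code> is still registered):</p>
<pre><code class="language-csharp">builder.Services.AddOptions<WeatherMapConfiguration>()
    .BindConfiguration("WeatherMap")
    .ValidateOnStart(); // runs the registered IValidateOptions at startup
</code></pre>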
<h2 id="a-few-closing-words">A few closing words</h2>
<p>Here is the recap of what we talked about:</p>
<table>
<thead>
<tr>
<th>Feature</th>
<th>Area</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://learn.microsoft.com/en-us/aspnet/core/release-notes/aspnetcore-8.0?view=aspnetcore-8.0#support-for-generic-attributes">Support for generic attributes</a></td>
<td>.NET 8</td>
</tr>
<tr>
<td><a href="https://learn.microsoft.com/en-us/dotnet/csharp/whats-new/csharp-12#primary-constructors">Primary constructors</a></td>
<td>C# 12</td>
</tr>
<tr>
<td><a href="https://learn.microsoft.com/en-us/dotnet/csharp/whats-new/csharp-12#collection-expressions">Collection expressions</a></td>
<td>C# 12</td>
</tr>
<tr>
<td><a href="https://learn.microsoft.com/en-us/dotnet/csharp/whats-new/csharp-12#default-lambda-parameters">Optional parameters in lambda expressions</a></td>
<td>C# 12</td>
</tr>
<tr>
<td><a href="https://learn.microsoft.com/en-us/dotnet/csharp/whats-new/csharp-12#alias-any-type">Alias any type</a></td>
<td>C# 12</td>
</tr>
<tr>
<td><a href="https://learn.microsoft.com/en-us/visualstudio/debugger/using-the-debuggerdisplay-attribute">Debug customization attributes on ASP.NET Core types </a></td>
<td>ASP.NET Core 8</td>
</tr>
<tr>
<td><a href="https://learn.microsoft.com/en-us/dotnet/core/whats-new/dotnet-8#options-validation">Options validation</a></td>
<td>.NET 8</td>
</tr>
<tr>
<td><a href="https://learn.microsoft.com/en-us/dotnet/core/whats-new/dotnet-8#keyed-di-services">Keyed DI Services</a></td>
<td>.NET 8</td>
</tr>
</tbody>
</table>
<p>There are many more interesting features in <a href="https://learn.microsoft.com/en-us/dotnet/csharp/whats-new/csharp-12">C# 12</a>, <a href="https://learn.microsoft.com/en-us/dotnet/core/whats-new/dotnet-8#keyed-di-services">.NET 8</a>, or <a href="https://learn.microsoft.com/en-us/aspnet/core/release-notes/aspnetcore-8.0?view=aspnetcore-8.0">ASP.NET Core 8</a>. Yet, the ones I introduced in this article are the ones I will probably use the most.</p>
<p>You can find the complete code sample <a href="https://github.com/TechWatching/CodeAppAndInfraInDotnet8">here</a>. The repository also contains an <code>infra</code> folder to set up the Azure infrastructure that hosts this API. Two IaC solutions using .NET are shown: one using the Azure SDK and one using Pulumi.</p>
<p>This article was published as part of the <a href="https://www.csadvent.christmas/">C# Advent 2023</a> which is a nice initiative. Make sure to check the other blog articles on the advent calendar.</p>
https://techwatching.dev/posts/scripting-azure-ready-github-repository
Effortlessly Configure GitHub Repositories for Azure Deployment via OIDC
2023-10-23T00:00:00Z
<p>What if we could script the creation and configuration of a GitHub Repository so that it is ready to provision or deploy Azure resources from a GitHub Actions pipeline? We will do that in this article using the Azure CLI and GitHub CLI.</p>
<h2 id="the-objective">The Objective</h2>
<p>The goal is to go from nothing to running a GitHub Actions workflow that authenticates to Azure using Open ID Connect (so without secret credentials) in a newly created GitHub repository.</p>
<p>The workflow we plan to run is as follows:</p>
<pre><code class="language-yaml">name: Run Azure Login with OIDC

on:
  workflow_dispatch:

permissions:
  id-token: write
  contents: read

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - name: 'Az CLI login'
        uses: azure/login@v1
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

      - name: 'Run az commands'
        run: |
          az account show
          az group list
</code></pre>
<p>This workflow is an example coming from <a href="https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-azure">the GitHub documentation</a> showing how to configure GitHub Actions workflow to access Azure resources protected by Microsoft Entra.</p>
<p>To run this workflow we will need to automate the configuration of these resources:</p>
<img src="/posts/images/scripting_azurereadygithub_azure_1.webp" class="img-fluid centered-img" alt="A diagram showing the interactions between Azure and GitHub.">
<blockquote>
<p>💬 Looks familiar? That's the same diagram from my article about <a href="https://www.techwatching.dev/posts/azure-ready-github-repository">creating an Azure-Ready GitHub Repository using Pulumi</a>. The purpose was the same, but using Pulumi instead of CLI tools. If you prefer a declarative Infrastructure as Code approach using programming languages over CLI tools, you should definitely read it 😉</p>
</blockquote>
<h2 id="the-script">The Script</h2>
<h3 id="a-word-about-the-tools-used">A word about the tools used</h3>
<p>I will be using <a href="https://learn.microsoft.com/en-us/powershell/scripting/install/installing-powershell?view=powershell-7.3">PowerShell which is cross-platform</a>. However, if you prefer using a different shell, you will simply need to adjust some syntax (such as the environment variable declarations) to ensure compatibility.</p>
<p>To create and configure the Microsoft Entra ID resources, we will need the <a href="https://learn.microsoft.com/en-us/cli/azure/install-azure-cli">Azure CLI</a>.</p>
<p>To create and configure the GitHub repository, we will need the <a href="https://cli.github.com/">GitHub CLI</a>.</p>
<h3 id="create-the-repository-on-github">Create the repository on GitHub</h3>
<p>Let's assume we are already in a new directory with the YAML workflow file <code>.github\workflows\main.yml</code> in it.</p>
<p>First, we can initialize the git repository.</p>
<pre><code class="language-powershell">git init
git add .
git commit -m "Initialize repository with the GitHub Actions workflow file"
</code></pre>
<p>Second, we can create the GitHub repository and push the git repository we just initialized in it.</p>
<pre><code class="language-powershell">$repositoryName = "MyAzureReadyRepository"
gh repo create $repositoryName --private --source=. --push
</code></pre>
<blockquote>
<p>💡You can use the <code>--public</code> flag instead of the <code>--private</code> one if you want your GitHub repository to be public.</p>
</blockquote>
<p>The repository's full name (containing the organization name) can be retrieved like this:</p>
<pre><code class="language-powershell">$repositoryFullName=$(gh repo view --json nameWithOwner -q ".nameWithOwner")
</code></pre>
<blockquote>
<p>💡Passing the <code>--json</code> flag converts the output format to JSON, which, combined with the <code>-q</code> flag, is handy for filtering or formatting a command's output. More on that <a href="https://cli.github.com/manual/gh_help_formatting">in the documentation</a>.</p>
</blockquote>
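<p>Under the hood, such a <code>-q</code> filter is just a jq expression applied to the JSON the command prints. The same extraction, sketched in Python against a made-up payload of the shape <code>gh repo view --json nameWithOwner</code> returns:</p>
<pre><code class="language-python">import json

# Hypothetical output of `gh repo view --json nameWithOwner`
raw_output = '{"nameWithOwner": "my-org/MyAzureReadyRepository"}'

repository_full_name = json.loads(raw_output)["nameWithOwner"]
print(repository_full_name)  # my-org/MyAzureReadyRepository
</code></pre>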
<h3 id="create-the-microsoft-entra-id-resources">Create the Microsoft Entra ID resources</h3>
<p>Later, we will need the subscription and the tenant identifiers. Let's retrieve them now and take this opportunity to check that we are logged in on the correct tenant with the correct subscription selected.</p>
<pre><code class="language-powershell">$subscriptionId=$(az account show --query "id" -o tsv)
$tenantId=$(az account show --query "tenantId" -o tsv)
</code></pre>
<blockquote>
<p>💬 Similar to the GitHub CLI, the Azure CLI has a <code>--query</code> flag to filter a command's output. There are also different output formats. The <code>tsv</code> (tab-separated values) one is useful for capturing a value in an environment variable. If you are not very familiar with the Azure CLI, you can check my article on the topic <a href="https://www.techwatching.dev/posts/welcome-azure-cli">here</a>.</p>
</blockquote>
<p>To create the app registration and its associated service principal, we can execute the following commands:</p>
<pre><code class="language-powershell">$appId=$(az ad app create --display-name "GitHub Action OIDC for ${repositoryFullName}" --query "appId" -o tsv)
$servicePrincipalId=$(az ad sp create --id $appId --query "id" -o tsv)
</code></pre>
<p>We can now assign the contributor role to the service principal on the subscription.</p>
<pre><code class="language-powershell">az role assignment create --role contributor --subscription $subscriptionId --assignee-object-id $servicePrincipalId --assignee-principal-type ServicePrincipal --scope /subscriptions/$subscriptionId
</code></pre>
<p>Creating federated credentials is a bit more complex as one of the arguments needs to be an in-line JSON string.</p>
<pre><code class="language-powershell">$parametersJson = @{
    name = "FederatedIdentityForWorkshop"
    issuer = "https://token.actions.githubusercontent.com"
    subject = "repo:${repositoryFullName}:ref:refs/heads/main"
    description = "Deployments for ${repositoryFullName}"
    audiences = @(
        "api://AzureADTokenExchange"
    )
}
</code></pre>
<blockquote>
<p>💡The <code>subject</code> property here specifies that the GitHub Actions workflow from the created repository is only authorized to authenticate to Azure when it runs on the main branch. Of course, there are other possible configurations, such as those involving pull requests or environments. Consult the <a href="https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect#example-subject-claims">documentation</a> to learn more about these options.</p>
</blockquote>
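<p>Since Azure matches this <code>subject</code> string character for character against the token's claim, it is worth being precise about how it is assembled. A small hypothetical helper, sketched in Python:</p>
<pre><code class="language-python">def github_oidc_subject(repo_full_name, branch="main"):
    """Builds the subject claim authorizing workflow runs on a given branch."""
    return f"repo:{repo_full_name}:ref:refs/heads/{branch}"

print(github_oidc_subject("my-org/MyAzureReadyRepository"))
# repo:my-org/MyAzureReadyRepository:ref:refs/heads/main
</code></pre>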
<p>To make this JSON string an inline string with escaped quotes that works for the Azure CLI, we have to transform the string using a command I found in this <a href="https://medium.com/medialesson/use-dynamic-json-strings-with-azure-cli-commands-in-powershell-b191eccc8e9b">blog article</a>.</p>
<pre><code class="language-powershell">$parameters = $($parametersJson | ConvertTo-Json -Depth 100 -Compress).Replace("`"", "\`"")
</code></pre>
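<p>To see what that transformation actually produces, here is a rough Python equivalent of the compress-and-escape step (the values are the ones from the script; PowerShell may order the keys differently):</p>
<pre><code class="language-python">import json

parameters = {
    "name": "FederatedIdentityForWorkshop",
    "issuer": "https://token.actions.githubusercontent.com",
    "subject": "repo:my-org/MyAzureReadyRepository:ref:refs/heads/main",
    "audiences": ["api://AzureADTokenExchange"],
}

# Compress to a single line without whitespace, then escape the double quotes
compact = json.dumps(parameters, separators=(",", ":"))
escaped = compact.replace('"', '\\"')
print(escaped)
</code></pre>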
<p>And finally, we can create the federated credentials.</p>
<pre><code class="language-powershell">az ad app federated-credential create --id $appId --parameters $parameters
</code></pre>
<h3 id="configure-the-github-actions-and-run-the-workflow">Configure the GitHub Actions and run the workflow</h3>
<p>For the OIDC authentication to function properly, we need to set 3 GitHub Actions secrets (they could also be GitHub Actions variables, as they are not really secrets):</p>
<ol>
<li><p>The identifier of the Azure tenant</p>
</li>
<li><p>The identifier of the Azure subscription</p>
</li>
<li><p>The application identifier of the app registration</p>
</li>
</ol>
<pre><code class="language-powershell">gh secret set AZURE_TENANT_ID --body $tenantId
gh secret set AZURE_SUBSCRIPTION_ID --body $subscriptionId
gh secret set AZURE_CLIENT_ID --body $appId
</code></pre>
<p>We can directly run the workflow from the GitHub CLI, and watch the run until it is completed.</p>
<pre><code class="language-powershell">gh workflow run main.yml
$runId=$(gh run list --workflow=main.yml --json databaseId -q ".[0].databaseId")
gh run watch $runId
</code></pre>
<img src="/posts/images/scripting_azurereadygithub_github_1.webp" class="img-fluid centered-img" alt="Screenshot of the GitHub Actions workflow run">
<h2 id="full-script">Full script</h2>
<pre><code class="language-powershell"># Initialize git repository with current code
# You should have added the main.yml workflow file in the `.github\workflows` directory
git init
git add .
git commit -m "Initialize repository with the GitHub Actions workflow file"
# Create a new remote private GitHub repository
$repositoryName = "MyAzureReadyRepository"
gh repo create $repositoryName --private --source=. --push
# Retrieve the repository full name (org/repo)
$repositoryFullName=$(gh repo view --json nameWithOwner -q ".nameWithOwner")
# Retrieve the current subscription and current tenant identifiers
$subscriptionId=$(az account show --query "id" -o tsv)
$tenantId=$(az account show --query "tenantId" -o tsv)
# Create an App Registration and its associated service principal
$appId=$(az ad app create --display-name "GitHub Action OIDC for ${repositoryFullName}" --query "appId" -o tsv)
$servicePrincipalId=$(az ad sp create --id $appId --query "id" -o tsv)
# Assign the contributor role to the service principal on the subscription
az role assignment create --role contributor --subscription $subscriptionId --assignee-object-id $servicePrincipalId --assignee-principal-type ServicePrincipal --scope /subscriptions/$subscriptionId
# Prepare parameters for federated credentials
$parametersJson = @{
    name = "FederatedIdentityForWorkshop"
    issuer = "https://token.actions.githubusercontent.com"
    subject = "repo:${repositoryFullName}:ref:refs/heads/main"
    description = "Deployments for ${repositoryFullName}"
    audiences = @(
        "api://AzureADTokenExchange"
    )
}
# Change parameters to single line string with escaped quotes to make it work with Azure CLI
# https://medium.com/medialesson/use-dynamic-json-strings-with-azure-cli-commands-in-powershell-b191eccc8e9b
$parameters = $($parametersJson | ConvertTo-Json -Depth 100 -Compress).Replace("`"", "\`"")
# Create federated credentials
az ad app federated-credential create --id $appId --parameters $parameters
# Create GitHub secrets needed for the GitHub Actions
gh secret set AZURE_TENANT_ID --body $tenantId
gh secret set AZURE_SUBSCRIPTION_ID --body $subscriptionId
gh secret set AZURE_CLIENT_ID --body $appId
# Run workflow
gh workflow run main.yml
$runId=$(gh run list --workflow=main.yml --json databaseId -q ".[0].databaseId")
gh run watch $runId
# Open the repository in the browser
gh repo view -w
</code></pre>
<h2 id="final-thoughts">Final Thoughts</h2>
<p>I am very glad to have scripted the creation and configuration of a GitHub repository ready to deploy to Azure. Even if I had already done the <a href="https://www.techwatching.dev/posts/azure-ready-github-repository">same using Pulumi</a>, having a small script can sometimes be more convenient than having a full IaC program. In my case, I needed to automate that for a workshop, so it was easier to give participants a script to execute.</p>
<p>However, I must admit that developing this script proved to be much more challenging than provisioning the same resources using Pulumi. I didn't expect it to take so much time: browsing the CLI documentation, finding the correct syntax, and understanding the cause of failures. In contrast, using the GitHub and Azure Pulumi providers in my TypeScript code turned out to be a much more enjoyable experience.</p>
<p>Nevertheless, I was pleased to be introduced to the GitHub CLI, which I hadn't explored extensively until now. While I found it very useful, a few things bothered me. Not all commands can be used with the <code>--json</code> and <code>-q</code> parameters, which is not very convenient for scripting. Commands that create things (repo, workflow runs) don't return the identifier of the thing they create. I wish GitHub CLI would be more similar to Azure CLI in these matters. I have no doubt these will be improved over time.</p>
<p>As for Azure CLI, I am still a big fan, although a bit disappointed to have struggled with the inline JSON string.</p>
<p>Keep learning, keep sharing.</p>
https://techwatching.dev/posts/ado-workload-identity-federation
Deploying to Azure from Azure DevOps without secrets
2023-09-21T00:00:00Z
<p>If you are deploying your application to Azure from Azure Pipelines, you might want to leverage the ability to do so without using secrets thanks to Workload identity federation. In this article, I will demonstrate how to automate the configuration of your Azure DevOps project, with everything pre-configured to securely deploy applications to Azure.</p>
<h2 id="why-should-you-use-workload-identity-federation-for-your-deployment-pipelines">Why should you use Workload Identity Federation for your deployment pipelines?</h2>
<p>I already wrote about the <a href="https://www.techwatching.dev/posts/azure-ready-github-repository#the-problem-with-secret-credentials">problem of secret credentials</a>, but let me remind you 2 reasons why I think you should always avoid using secrets in your deployment pipelines:</p>
<ul>
<li>It's more secure if you don't need a secret to authenticate to Azure</li>
<li>It's more practical if you don't need to handle secret rotation when secrets expire</li>
</ul>
<p>This is true whatever the CI/CD platform you are using.</p>
<p><a href="https://learn.microsoft.com/en-us/azure/active-directory/workload-identities/workload-identity-federation">Workload identity federation</a> leverages OpenID Connect to solve these problems and avoid using secrets in your pipelines to authenticate to Azure. I previously published <a href="https://www.techwatching.dev/posts/azure-ready-github-repository">an article about using Azure OpenID Connect with Pulumi in GitHub Actions</a>, but that also works with Azure Pipelines.</p>
<img src="/posts/images/azuredevopsoidc_schema_1.webp" class="img-fluid centered-img" alt="Workload Identity Federation for Azure DevOps">
<blockquote>
<p>ℹ Microsoft has announced the <a href="https://devblogs.microsoft.com/devops/public-preview-of-workload-identity-federation-for-azure-pipelines/">public preview of Workload identity federation for Azure Pipelines</a> on the 11th September 2023.</p>
</blockquote>
<h2 id="how-can-you-use-workload-identity-federation-to-deploy-to-azure-from-azure-pipelines">How can you use Workload Identity Federation to deploy to Azure from Azure Pipelines?</h2>
<p>Azure Pipelines tasks use service connections to authenticate with external services. Specifically, for Azure, it is necessary to create an Azure Resource Manager service connection.</p>
<p>You can create an Azure Resource Manager service connection that uses workload identity federation by configuring it in your Azure DevOps organization portal (check the documentation <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/library/connect-to-azure?view=azure-devops#create-an-azure-resource-manager-service-connection-using-workload-identity-federation">here</a>).</p>
<p>Or ... you can automate that using Infrastructure as Code 😉.</p>
<p>Yet, who wants to manually configure things from a wizard when everything can be automated in versioned code? So let's go the IaC way.</p>
<blockquote>
<p>💬 All kidding aside, I genuinely believe that there are many advantages to provisioning your Azure DevOps projects and their associated resources (Repos, Service Connections, policies, pipelines, ...) using Infrastructure as Code. It takes time to properly configure Azure DevOps projects, and if they are often organized similarly, it's more efficient to automate their configuration rather than doing it manually.</p>
</blockquote>
<img src="/posts/images/azuredevopsoidc_schema_2.webp" class="img-fluid centered-img" alt="Diagram to deploy from Azure Pipelines to Azure">
<p>I will use Pulumi and its Azure DevOps provider to provision the necessary resources. The infrastructure as code will be written in C# but you could easily convert the C# code to any language that Pulumi supports (like TypeScript, I am a big fan of using TypeScript to write infrastructure code 🔥).</p>
<p>Here is the complete solution to implement:</p>
<img src="/posts/images/azuredevopsoidc_schema_3.webp" class="img-fluid centered-img" alt="Schema of the complete solution">
<h2 id="automate-the-configuration-of-workload-identity-federation-in-azure-devops">Automate the configuration of Workload identity federation in Azure DevOps</h2>
<h3 id="create-the-pulumi.net-project">Create the Pulumi .NET project</h3>
<p>Let's start by scaffolding a new Pulumi project using .NET:</p>
<pre><code class="language-powershell">pulumi new csharp -n AzureDevOpsWorkloadIdentity -s dev -d "A program to set up an Azure-Ready Azure DevOps repository"
</code></pre>
<p>This command creates a new Pulumi project and stack from the <code>csharp</code> template:</p>
<ul>
<li>The name of the project "<em>AzureDevOpsWorkloadIdentity</em>" is specified using the <code>-n</code> option</li>
<li>The description of the project "<em>A program to set up an Azure-Ready Azure DevOps repository</em>" is specified using the <code>-d</code> option</li>
<li>The stack of the project "<em>dev</em>" is specified using the <code>-s</code> option</li>
</ul>
<p>This project will need 3 different providers:</p>
<ul>
<li>the <a href="https://www.pulumi.com/registry/packages/azure-native/">Azure Native provider</a></li>
<li>the <a href="https://www.pulumi.com/registry/packages/azuread/">Azure Active Directory provider</a> (provider for Microsoft Entra ID)</li>
<li>the <a href="https://www.pulumi.com/registry/packages/azuredevops/">Azure DevOps provider</a></li>
</ul>
<p>So we can add the following Nuget packages to our project:</p>
<ul>
<li><a href="https://www.nuget.org/packages/Pulumi.AzureNative"><code>Pulumi.AzureNative</code></a></li>
<li><a href="https://www.nuget.org/packages/Pulumi.AzureAD"><code>Pulumi.AzureAD</code></a></li>
<li><a href="https://www.nuget.org/packages/Pulumi.AzureDevOps"><code>Pulumi.AzureDevOps</code></a></li>
</ul>
<h3 id="create-the-azure-devops-project">Create the Azure DevOps project</h3>
<p>First, we must select the Azure DevOps organization where we wish to create a project and set its URL in our Pulumi configuration.</p>
<pre><code class="language-powershell">pulumi config set azuredevops:orgServiceUrl XXXXXXXXXXXXXX --secret
</code></pre>
<p>Second, we need to supply the necessary Azure DevOps credentials. For that, we can <a href="https://learn.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate?view=azure-devops&tabs=Windows#create-a-pat">create a personal access token</a> and add it to our Pulumi configuration.</p>
<pre><code class="language-powershell">pulumi config set azuredevops:personalAccessToken YYYYYYYYYYYYYY --secret
</code></pre>
<blockquote>
<p>🔐 I followed the documentation but to be honest, I don't think it's necessary to include the <code>--secret</code> option for the organization URL as it's not really a sensitive value that needs to be encrypted. However, <strong>it's mandatory to include it for the access token</strong> so that we can safely commit the configuration files without creating security risks.</p>
</blockquote>
<p>Third, we can create the DevOps project:</p>
<pre><code class="language-csharp">var project = new Project("AzureReadyADOProject", new()
{
    Description = "New project with everything correctly configured to provision Azure resources or deploy applications to Azure",
    Features = new()
    {
        ["boards"] = "disabled",
        ["repositories"] = "enabled",
        ["pipelines"] = "enabled",
        ["testplans"] = "disabled",
        ["artifacts"] = "disabled"
    },
});
</code></pre>
<p>I intentionally disabled Azure Boards, Azure Test Plans, and Azure Artifacts as we only need Azure Repos and Azure Pipelines for this demo but you can enable what you need for your projects.</p>
<p>By default, when we create an Azure DevOps project, a <a href="https://www.pulumi.com/registry/packages/azuredevops/api-docs/git/">Git repository</a> is created for us with the same name as the project. This repository can be retrieved using the following code:</p>
<pre><code class="language-csharp">var repository = GetGitRepository.Invoke(new()
{
    ProjectId = project.Id,
    Name = project.Name
});
</code></pre>
<p>We can also choose to create a new Git repository like this:</p>
<pre><code class="language-csharp">var repository = new Git("AzureReadyADORepository", new()
{
    ProjectId = project.Id,
    Initialization = new GitInitializationArgs()
    {
        InitType = "Clean",
        SourceType = "Git",
        SourceUrl = "https://repo.com",
        ServiceConnectionId = ""
    },
    DefaultBranch = "refs/heads/main"
});
</code></pre>
<blockquote>
<p>ℹ We should not have to set the <code>SourceType</code>, <code>SourceUrl</code> and <code>ServiceConnectionId</code> properties as we are initializing a clean Git repository, not importing one, but it's a workaround because of this <a href="https://github.com/pulumi/pulumi-azuredevops/issues/66">issue</a> on the provider.</p>
</blockquote>
<h3 id="configure-the-arm-service-connection-in-azure-devops">Configure the ARM Service Connection in Azure DevOps</h3>
<p>In the Azure DevOps provider, the Azure Resource Manager service connection is called a <a href="https://www.pulumi.com/registry/packages/azuredevops/api-docs/serviceendpointazurerm/#workload-identity-federation-manual-azurerm-service-endpoint-subscription-scoped">ServiceEndpointAzureRM</a>. We can create such a resource like this:</p>
<pre><code class="language-csharp">var serviceConnection = new ServiceEndpointAzureRM("AzureServiceConnection", new()
{
    ProjectId = project.Id,
    ServiceEndpointName = "azure-with-oidc",
    ServiceEndpointAuthenticationScheme = "WorkloadIdentityFederation",
    AzurermSpnTenantid = tenantId,
    AzurermSubscriptionId = subscriptionId,
    AzurermSubscriptionName = subscriptionName,
    Credentials = new ServiceEndpointAzureRMCredentialsArgs()
    {
        Serviceprincipalid = servicePrincipal.ApplicationId,
    }
});
</code></pre>
<p>Do not worry about the service principal, we will see in the next section how to create it. The tenant and the subscription identifiers can be retrieved from the current configuration of the Azure Native provider (using the <code>GetClientConfig.Invoke</code> function):</p>
<pre><code class="language-csharp">var azureConfig = GetClientConfig.Invoke();
var tenantId = azureConfig.Apply(c => c.TenantId);
var subscriptionId = azureConfig.Apply(c => c.SubscriptionId);
</code></pre>
<p>For the subscription name, it's more complicated: we don't have it, and there is no easy way to retrieve it. To be frank, I think having to provide the subscription name when we already provide the subscription identifier is completely useless, but that's how the Azure DevOps provider works.</p>
<p>The Azure Classic provider offers a <a href="https://www.pulumi.com/registry/packages/azure/api-docs/core/getsubscription/#azure-core-getsubscription">function</a> to get a subscription by its identifier but it's not available in the Azure Native provider. I don't want to add the Azure Classic provider to my project solely for this purpose. However, it's not a big deal as it allows us to experience one of the advantages of using Pulumi: when something is not available you can just implement it or use any library that can help you, such as the <a href="https://www.nuget.org/packages/Azure.ResourceManager">Azure SDK</a> in this case.</p>
<pre><code class="language-csharp">var subscriptionName = subscriptionId.Apply(s =>
{
    var armClient = new ArmClient(new DefaultAzureCredential());
    var subscription = armClient.GetSubscriptionResource(new ResourceIdentifier($"/subscriptions/{s}")).Get();
    return subscription.Value.Data.DisplayName;
});
</code></pre>
<h3 id="set-up-the-necessary-microsoft-entra-id-resources">Set up the necessary Microsoft Entra ID resources</h3>
<p>We need to set up the following resources in Microsoft Entra ID:</p>
<ul>
<li>an Application that represents the Azure DevOps service connection identity</li>
<li>a Service Principal (related to the application above) that has the contributor role on the Azure subscription</li>
<li>credentials for the CI/CD pipeline to authenticate to Azure on behalf of this Microsoft Entra ID application</li>
</ul>
<p>Let's take care of the first 2 points:</p>
<pre><code class="language-csharp">var azureConfig = GetClientConfig.Invoke();

var aadApplication = new Application("ADOAzureReadyApp", new()
{
    DisplayName = "ADO Azure Ready App"
});

var servicePrincipal = new ServicePrincipal("AzureReadyServicePrincipal", new()
{
    ApplicationId = aadApplication.ApplicationId,
});

var subscriptionId = azureConfig.Apply(c => c.SubscriptionId);

new RoleAssignment("contributor", new()
{
    PrincipalId = servicePrincipal.Id,
    PrincipalType = PrincipalType.ServicePrincipal,
    RoleDefinitionId = AzureBuiltInRoles.Contributor,
    Scope = Output.Format($"/subscriptions/{subscriptionId}")
});
</code></pre>
<blockquote>
<p>ℹ️ It's worth mentioning that using an Application and its associated Service Principal is not the only way to proceed, we could have created instead a <a href="https://www.pulumi.com/registry/packages/azure-native/api-docs/managedidentity/userassignedidentity/">User Assigned Identity</a></p>
</blockquote>
<p>Now that everything is created, we can create the Federated identity credentials:</p>
<pre><code class="language-csharp">new ApplicationFederatedIdentityCredential("ADOAzureReadyAppFederatedIdentityCredential", new()
{
    ApplicationObjectId = aadApplication.ObjectId,
    DisplayName = "AzureReadyDeploys",
    Description = "Deployments for azure-ready-repository",
    Audiences = new() { "api://AzureADTokenExchange" },
    Issuer = serviceConnection.WorkloadIdentityFederationIssuer,
    Subject = Output.Format($"sc://{organisationName}/{project.Name}/{serviceConnection.ServiceEndpointName}")
});
</code></pre>
<p>You can observe that the federation subject adheres to a particular format (<code>sc://&lt;org&gt;/&lt;project&gt;/&lt;service connection name&gt;</code>), which identifies the service connection authorized to authenticate with Azure.</p>
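<p>As in the GitHub Actions case, a single wrong character in this subject breaks the federation. A hypothetical Python helper assembling it:</p>
<pre><code class="language-python">def ado_oidc_subject(organization, project, service_connection_name):
    """Builds the federation subject for an Azure DevOps service connection."""
    return f"sc://{organization}/{project}/{service_connection_name}"

print(ado_oidc_subject("my-org", "AzureReadyADOProject", "azure-with-oidc"))
# sc://my-org/AzureReadyADOProject/azure-with-oidc
</code></pre>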
<h3 id="create-the-deployment-pipeline">Create the deployment pipeline</h3>
<p>We have completed the configuration of an ARM Service Connection that employs Workload Identity Federation for authentication with Azure. While we could stop at this point, it would be nice to automate the creation of a pipeline that utilizes this service connection and seize the opportunity to ensure everything works properly.</p>
<p>For this purpose, I have written a very simple YAML pipeline that runs the <code>AzureCLI</code> task to show information about the Azure subscription associated with the previously created service connection.</p>
<pre><code class="language-yaml">trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: AzureCLI@2
    inputs:
      azureSubscription: 'azure-with-oidc'
      scriptType: 'pscore'
      scriptLocation: 'inlineScript'
      inlineScript: 'az account show --query id -o tsv'
</code></pre>
<p>We can add this file in the Git repository:</p>
<pre><code class="language-csharp">var pipelineFile = new GitRepositoryFile("AzurePipeline", new()
{
    File = "azure-pipelines.yaml",
    RepositoryId = repository.Apply(r => r.Id),
    CommitMessage = "Add preconfigured pipeline file",
    Content = File.ReadAllText("azure-pipelines.yml"),
    Branch = "refs/heads/main"
});
</code></pre>
<p>Now, we have to create the pipeline itself:</p>
<pre><code class="language-csharp">var pipeline = new BuildDefinition("deployToAzure", new()
{
    ProjectId = project.Id,
    Repository = new BuildDefinitionRepositoryArgs()
    {
        RepoId = repository.Apply(r => r.Id),
        BranchName = "refs/heads/main",
        YmlPath = pipelineFile.File,
        RepoType = "TfsGit"
    }
});
</code></pre>
<p>To complete the automation process, we can authorize the pipeline to utilize the service connection, eliminating the need for manual intervention through the portal:</p>
<pre><code class="language-csharp">new PipelineAuthorization("azureOidcPipelineAuthorization", new()
{
    ProjectId = project.Id,
    Type = "endpoint",
    PipelineId = pipeline.Id.Apply(int.Parse),
    ResourceId = serviceConnection.Id
});
</code></pre>
<p>The last thing we can do is create a stack output to expose the URL of the created pipeline:</p>
<pre><code class="language-csharp">return new Dictionary<string, object?>
{
    ["pipelineUrl"] = Output.Format($"{organizationUrl}{project.Name}/_build?definitionId={pipeline.Id}")
};
</code></pre>
<p>Now we can execute the <code>pulumi up</code> command to provision all these resources and then open the pipeline page in our browser to test the pipeline.</p>
<blockquote>
<p>💡On Windows, you can use the <code>start $(pulumi stack output pipelineUrl)</code> command to directly open the browser on the pipeline page. If you are using <a href="https://www.nushell.sh/">Nushell</a> the command will be <code>pulumi stack output pipelineUrl | start $in</code></p>
</blockquote>
<img src="/posts/images/azuredevopsoidc_portal.webp" class="img-fluid centered-img" alt="Results of the pipeline run in Azure DevOps">
<p>Everything is working as expected.</p>
<h2 id="to-conclude">To conclude</h2>
<p>In this article, we demonstrated how to automate the configuration of an Azure DevOps project using Workload Identity Federation for secure deployments to Azure. We covered the provisioning of the Microsoft Entra ID and Azure DevOps resources necessary to make this work. It's very similar to <a href="https://www.techwatching.dev/posts/azure-ready-github-repository">what can be done for GitHub</a> but with the specificities of Azure DevOps.</p>
<p>It was an opportunity for me to work with the Azure DevOps provider. Even if it does the job, I must admit I was somewhat disappointed with the developer experience, which I found unintuitive, with poorly named resources and an overreliance on strings as parameters. I assume the Azure DevOps APIs, which the provider calls upon, are primarily responsible for this.</p>
<p>One thing I find interesting with Azure DevOps is that YAML pipelines do not need to be updated to take advantage of workload identity federation as long as the Azure Pipelines tasks you are using support it and your ARM service connection has been converted to workload identity federation.</p>
<p>Anyway, regardless of the CI/CD platform you are using, I believe that employing Workload Identity Federation to deploy code to Azure from pipelines is the right approach.</p>
<p>You can find the complete source code used for this article <a href="https://github.com/TechWatching/AzureDevOpsWorkloadIdentity"><strong>in this GitHub repository</strong></a>.</p>
<p>If you are deploying your application to Azure from Azure Pipelines, you might want to leverage the ability to do so without using secrets thanks to Workload identity federation. In this article, I will demonstrate how to automate the configuration of your Azure DevOps project, with everything pre-configured to securely deploy applications to Azure.</p>
https://techwatching.dev/posts/azure-ready-github-repository
Create an Azure-Ready GitHub Repository using Pulumi
2023-07-20T00:00:00Z
<p>Creating an application and deploying it to Azure is not complicated. You write some code on your machine, click around in the Azure portal or run some Azure CLI commands from your terminal, and that's it: your application is up and running in Azure.</p>
<p>Yet, that's not real life, at least not what you will do when working on a professional project. Your code needs to be versioned and pushed to a location where your colleagues can work on it. The provisioning of Azure resources and deployment to Azure should be carried out using a properly configured CI/CD pipeline with the necessary authorization.</p>
<p>That's a lot of work that would need to be done each time you start a new project. So let's automate that using Pulumi to simplify the process and create an "<em>Azure-Ready GitHub repository</em>".</p>
<h2 id="whats-an-azure-ready-github-repository">What's an Azure-Ready GitHub repository?</h2>
<p>"<em>Azure-Ready GitHub repository</em>" is not an official term or concept, it's just something I've come up with to describe a GitHub repository that has everything correctly configured to provision Azure resources or deploy applications to Azure from a GitHub Actions CI/CD pipeline.</p>
<img src="/posts/images/azurereadygithub_overview_1.webp" class="img-fluid centered-img" alt="Diagram of a GitHub repository interacting with Azure.">
<h3 id="the-github-part">The GitHub part</h3>
<p>On the GitHub side, to have an <em>Azure-Ready GitHub repository</em>, we need:</p>
<ul>
<li><p>the GitHub repository itself (already initialized with a <code>main</code> branch)</p>
</li>
<li><p>the necessary GitHub Actions variables/secrets to authenticate to the correct Azure subscription</p>
</li>
<li><p>a YAML file located in the <code>.github/workflows/</code> folder that contains the CI/CD pipeline that provisions resources in Azure</p>
</li>
</ul>
<img src="/posts/images/azurereadygithub_github_1.webp" class="img-fluid centered-img" alt="A diagram of the GitHub repository to create.">
<h3 id="the-azure-part">The Azure part</h3>
<p>On the Azure side, to have an <em>Azure-Ready GitHub repository,</em> we need:</p>
<ul>
<li><p>the existing Azure subscription to which resources are deployed</p>
</li>
<li><p>an <em>identity</em> in the Azure Active Directory of the desired tenant so that the GitHub CI/CD pipeline can authenticate to Azure and interact with the subscription</p>
<ul>
<li><p>an Azure AD application that represents the GitHub Actions pipeline identity</p>
</li>
<li><p>a Service Principal (related to the Azure AD application) that has the contributor role on the Azure subscription</p>
</li>
<li><p>credentials for the CI/CD pipeline to authenticate to Azure on behalf of this Azure AD application</p>
</li>
</ul>
</li>
</ul>
<img src="/posts/images/azurereadygithub_azure_1.webp" class="img-fluid centered-img" alt="A diagram of the resources to configure in Azure.">
<blockquote>
<p>ℹ️ <em>Azure Active Directory</em> has recently been renamed <em>Microsoft Entra ID</em> (as of the time of writing). However, I will continue to use the term Azure Active Directory throughout the rest of the article. Please note that both terms refer to the same service.</p>
</blockquote>
<h3 id="the-problem-with-secret-credentials">The problem with secret credentials</h3>
<p>People tend to use secret credentials to authenticate their pipeline to Azure, and that's not the best approach.</p>
<p>From a security standpoint, depending on secrets always poses a risk. Even if, in this case, the secret would be stored safely in a GitHub secret and never exposed publicly, it's still better to avoid secrets when we can.</p>
<blockquote>
<p>🔐 That's precisely why when hosting applications in Azure, we use Managed Identities and IAM roles instead of relying on secrets. Yet, here we can't use Managed Identities for GitHub Actions pipelines.</p>
</blockquote>
<p>From a practical standpoint, depending on secrets can quickly become problematic as they expire and thus require rotation. Of course, you can set up alerting or automate secret rotation but that's something you would prefer to avoid managing.</p>
<blockquote>
<p>💬 I recently encountered a situation in Azure DevOps where a deployment failed due to the expiration of an Azure AD Application secret associated with the Service Connection used in the pipeline, and we were not alerted about it. That's the kind of scenario that can easily happen with secrets and that you want to avoid.</p>
</blockquote>
<p>So what can we do about that?</p>
<p>👉 We can stop using secret credentials and use <a href="https://learn.microsoft.com/en-us/azure/active-directory/workload-identities/workload-identity-federation">Workload identity federation</a> instead. I suggest you have a look at this <a href="https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect">GitHub documentation page</a> as well to better understand how it works but basically, you can remember the following:</p>
<ul>
<li><p>this mechanism relies on Open ID Connect and trust between Azure and GitHub</p>
</li>
<li><p>the GitHub pipeline does not need an Azure AD application secret anymore to authenticate to Azure</p>
</li>
<li><p>it's not an Azure-only mechanism; it's an open standard that also works with other cloud providers and platforms besides GitHub</p>
</li>
</ul>
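<p>Conceptually, when the GitHub Actions workflow presents its GitHub-issued OIDC token, Azure AD exchanges it for an Azure access token only if the token's claims match a federated credential configured on the application. The following standalone TypeScript sketch is purely illustrative (it is not Azure's actual implementation) but shows the matching logic at play:</p>
<pre><code class="language-typescript">// Purely illustrative sketch of the federated credential check:
// the token is exchanged only when its issuer, audience, and subject
// all match a credential configured on the Azure AD application.
interface FederatedCredential {
  issuer: string;
  audiences: string[];
  subject: string;
}

interface OidcTokenClaims {
  iss: string; // who issued the token (GitHub's OIDC endpoint)
  aud: string; // the intended audience (Azure AD token exchange)
  sub: string; // which repository/branch the workflow ran on
}

function isTrusted(token: OidcTokenClaims, credentials: FederatedCredential[]): boolean {
  return credentials.some(c =>
    c.issuer === token.iss &&
    c.audiences.includes(token.aud) &&
    c.subject === token.sub);
}
</code></pre>
<p>A workflow running on a branch that no federated credential covers would fail this check and never obtain an Azure token, which is exactly the property that makes the secretless setup safe.</p>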
<p>To establish the trust relationship between the Azure AD application and the GitHub repository, a <em>Federated Identity Credential</em> must be created in the Azure Active Directory. You can find how to do that manually from the portal in the <a href="https://learn.microsoft.com/en-us/azure/active-directory/workload-identities/workload-identity-federation-create-trust?pivots=identity-wif-apps-methods-azp">documentation</a> but we are going to directly automate that 😉.</p>
<h3 id="the-complete-solution-to-implement">The complete solution to implement</h3>
<img src="/posts/images/azurereadygithub_overview_2.webp" class="img-fluid centered-img" alt="A diagram showing the interactions between Azure and GitHub.">
<h2 id="why-use-pulumi-in-that-context">Why use Pulumi in that context?</h2>
<p>You might wonder why I chose to automate this process using Pulumi instead of writing a Bash or PowerShell script that would execute commands from the GitHub CLI and the Azure CLI.</p>
<blockquote>
<p>💡By the way, you should check <a href="https://cli.github.com/">GitHub CLI</a> if you have not done it yet, it's very handy. And if you have read my article about <a href="https://www.techwatching.dev/posts/welcome-azure-cli">Azure CLI</a>, you know it's a very convenient tool as well.</p>
</blockquote>
<p>I think Pulumi is a better choice here because:</p>
<ul>
<li><p>a script is imperative by nature, but declarative infrastructure seems more suitable to avoid dealing with idempotency</p>
</li>
<li><p>Pulumi can interact with both GitHub and Azure using its providers</p>
</li>
<li><p>the code will be easier to write and maintain</p>
</li>
<li><p>the code could be integrated into any application (including a future self-service infrastructure portal) using Pulumi Automation API</p>
</li>
</ul>
<p>In this article, the Pulumi code will be in TypeScript but it would work in any language supported by Pulumi.</p>
<h2 id="automate-the-creation-of-the-azure-ready-github-repository">Automate the creation of the Azure-Ready GitHub Repository</h2>
<h3 id="create-the-pulumi-project">Create the Pulumi project</h3>
<p>Let's start by scaffolding a new Pulumi project using TypeScript:</p>
<pre><code class="language-powershell">pulumi new typescript -n AzureOIDC -s dev -d "A program to set up an Azure-Ready GitHub repository"
</code></pre>
<p>This command creates a new Pulumi project and stack from the TypeScript template:</p>
<ul>
<li>The name of the project "<em>AzureOIDC"</em> is specified using the <code>-n</code> option</li>
<li>The description of the project "<em>A program to set up an Azure-Ready GitHub repository</em>" is specified using the <code>-d</code> option</li>
<li>The stack of the project "<em>dev</em>" is specified using the <code>-s</code> option</li>
</ul>
<blockquote>
<p>ℹ By default, the <code>pulumi new</code> command installs the dependencies when creating the project. You can prevent this by specifying the <code>-g</code> option, which is useful when you want to use another package manager than the default one (<code>pnpm</code> instead of <code>npm</code> for instance).</p>
</blockquote>
<p>This project will need 3 different providers:</p>
<ul>
<li>the <a href="https://www.pulumi.com/registry/packages/azure-native/">Azure Native provider</a></li>
<li>the <a href="https://www.pulumi.com/registry/packages/azuread/">Azure Active Directory provider</a></li>
<li>the <a href="https://www.pulumi.com/registry/packages/github/">GitHub provider</a></li>
</ul>
<p>So we can add the following packages to our <code>package.json</code> file:</p>
<ul>
<li>@pulumi/azure-native</li>
<li>@pulumi/azuread</li>
<li>@pulumi/github</li>
</ul>
<h3 id="create-the-repository-on-github">Create the repository on GitHub</h3>
<p>To use the GitHub provider, we have to provide GitHub credentials. For that, we can create a personal access token (I prefer to create a <a href="https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#fine-grained-personal-access-tokens">fine-grained personal access token</a> although a <a href="https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#creating-a-personal-access-token-classic">classic personal access token</a> would also work). Next, we simply set the GitHub token in our Pulumi configuration, and the GitHub provider will automatically use it:</p>
<pre><code class="language-powershell">pulumi config set github:token XXXXXXXXXXXXXX --secret
</code></pre>
<blockquote>
<p>🔐 Don't forget to include the <code>--secret</code> option when setting sensitive configurations, as this ensures that Pulumi encrypts the information. By doing so, we can safely commit the configuration files without creating security risks.</p>
</blockquote>
<p>Now, it's time to create our GitHub repository!</p>
<pre><code class="language-typescript">import * as github from "@pulumi/github";
const repository = new github.Repository("azure-ready-repository", {
name: "azure-ready-repository",
visibility: "public",
autoInit: true
});
export const repositoryCloneUrl = repository.httpCloneUrl;
</code></pre>
<p>Pulumi has an <a href="https://www.pulumi.com/docs/concepts/resources/names/#autonaming">auto-naming capability</a> that is very convenient to prevent name collisions or to ensure zero-downtime resource updates. Yet, in this context, I prefer to avoid a random suffix in my GitHub repository name, that's why I am specifying the <code>name</code> property to override the auto-naming behavior.</p>
<p>The last line creates a stack <a href="https://www.pulumi.com/docs/concepts/stack/#outputs">output</a> named <code>repositoryCloneUrl</code> so that we can easily get the URL to clone our newly created repository.</p>
<blockquote>
<p>ℹ I wanted the repository to be initialized, that's why I set the <code>autoInit</code> property to <code>true</code> but you should set it to <code>false</code> if you have an existing local git repository that you want to push on this GitHub repository.</p>
</blockquote>
<h3 id="create-the-identity-in-azure-active-directory-for-the-github-actions-workflow">Create the <em>identity</em> in Azure Active Directory for the GitHub Actions workflow</h3>
<p>Creating an Azure AD application and its service principal is not very complicated:</p>
<pre><code class="language-typescript">import * as azuread from "@pulumi/azuread";
const aadApplication = new azuread.Application("AzureReadyApp", { displayName: "Azure Ready App" });
const servicePrincipal = new azuread.ServicePrincipal("AzureReadyServicePrincipal", {
applicationId: aadApplication.applicationId,
});
</code></pre>
<p>The OIDC trust thing is a bit more complex. Fortunately, Microsoft's documentation has a detailed page <a href="https://learn.microsoft.com/en-us/azure/active-directory/workload-identities/workload-identity-federation-create-trust?pivots=identity-wif-apps-methods-azp"><em>Configuring an app to trust an external identity provider</em></a> that explains everything and shows how to add a federated identity for GitHub Actions using the Azure Portal, Azure CLI, or Azure PowerShell.</p>
<p>Let's do the same thing using TypeScript and Pulumi Azure AD provider:</p>
<pre><code class="language-typescript">import * as pulumi from "@pulumi/pulumi";
new azuread.ApplicationFederatedIdentityCredential("AzureReadyAppFederatedIdentityCredential", {
applicationObjectId: aadApplication.objectId,
displayName: "AzureReadyDeploys",
description: "Deployments for azure-ready-repository",
audiences: ["api://AzureADTokenExchange"],
issuer: "https://token.actions.githubusercontent.com",
subject: pulumi.interpolate`repo:${repository.fullName}:ref:refs/heads/main`,
});
</code></pre>
<p>The <code>subject</code> property is what identifies the repository where the GitHub Actions workflow will be authorized to exchange its GitHub token for an Azure access token. It's worth noting that it will only work if the GitHub Actions workflow is run on the git reference (branch or tag) or the environment you specify in <code>subject</code>. You can also specify that only workflows triggered by a pull request should be authorized. Here, I have used the <code>main</code> branch but I could create multiple Federated Identity Credentials with different subjects if needed.</p>
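<p>To make the accepted <code>subject</code> formats concrete, here is a small standalone helper (hypothetical, written for this article, not part of any SDK) that builds the subject claim for the common scenarios described in GitHub's OIDC documentation:</p>
<pre><code class="language-typescript">// Hypothetical helper illustrating the subject claim formats that
// GitHub Actions puts in its OIDC tokens, depending on the trigger.
type SubjectScope =
  | { kind: "branch"; name: string }
  | { kind: "tag"; name: string }
  | { kind: "environment"; name: string }
  | { kind: "pullRequest" };

function githubOidcSubject(repoFullName: string, scope: SubjectScope): string {
  switch (scope.kind) {
    case "branch":
      return `repo:${repoFullName}:ref:refs/heads/${scope.name}`;
    case "tag":
      return `repo:${repoFullName}:ref:refs/tags/${scope.name}`;
    case "environment":
      return `repo:${repoFullName}:environment:${scope.name}`;
    case "pullRequest":
      return `repo:${repoFullName}:pull_request`;
  }
}
</code></pre>
<p>For instance, <code>githubOidcSubject("TechWatching/azure-ready-repository", { kind: "branch", name: "main" })</code> yields the same subject as the <code>pulumi.interpolate</code> expression above; creating one Federated Identity Credential per subject you need is how you authorize additional branches, tags, or environments.</p>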
<p>With this configuration, the GitHub Actions workflow we create next will be able to obtain a valid Azure access token.</p>
<p>If you are interested in gaining a better understanding of how all this works, you can refer to <a href="https://learn.microsoft.com/en-us/azure/active-directory/workload-identities/workload-identity-federation#how-it-works">this diagram</a> from Microsoft's documentation (with GitHub serving as the external identity provider in our case).</p>
<img src="/posts/images/azurereadygithub_identityfederation.webp" class="img-fluid centered-img" alt="Sequence diagram explaining Azure OIDC.">
<h3 id="authorize-the-service-principal-to-provision-resources-on-the-subscription">Authorize the Service Principal to provision resources on the subscription</h3>
<p>We have created everything we need to get a valid Azure access token, but we still have not authorized the application to provision resources on our subscription.</p>
<p>We can do that by giving the Contributor role to our service principal.</p>
<pre><code class="language-typescript">import * as pulumi from "@pulumi/pulumi";
import * as authorization from "@pulumi/azure-native/authorization";
import { azureBuiltInRoles } from "./builtInRoles";
new authorization.RoleAssignment("contributor", {
principalId: servicePrincipal.id,
principalType: authorization.PrincipalType.ServicePrincipal,
roleDefinitionId: azureBuiltInRoles.contributor,
scope: pulumi.interpolate`/subscriptions/${subscriptionId}`,
});
</code></pre>
<p>I intentionally did not declare the variable <code>subscriptionId</code> in the code above, because it's up to you to choose how to provide it. You may want to set it in the configuration and retrieve it from there:</p>
<pre><code class="language-typescript">const config = new pulumi.Config();
const subscriptionId = config.get("subscriptionId");
</code></pre>
<p>Or you might want to retrieve it from the current configuration of the Azure Native provider:</p>
<pre><code class="language-typescript">const azureConfig = pulumi.output(authorization.getClientConfig());
const subscriptionId = azureConfig.subscriptionId;
</code></pre>
<p>Concerning the contributor role definition identifier, I could have retrieved it dynamically using the Azure APIs (like <a href="https://github.com/pulumi/examples/blob/master/azure-ts-call-azure-sdk/index.ts">here</a>). But honestly, as these identifiers don't change, it's much easier to hardcode it in a dedicated <code>builtInRoles.ts</code> file.</p>
<pre><code class="language-typescript">export const azureBuiltInRoles = {
contributor : "/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"
};
</code></pre>
<blockquote>
<p>💡Please note that you don't have to work on the subscription scope. If you prefer to assign the contributor role (or any other role) to an existing resource group rather than the entire subscription, you can certainly do that as well.</p>
</blockquote>
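<p>To illustrate, here is a tiny standalone helper (hypothetical, for this example only) showing how the ARM scope string differs between a subscription and one of its resource groups:</p>
<pre><code class="language-typescript">// Hypothetical helper: builds the scope string used by a role
// assignment, at the subscription or resource group level.
function roleAssignmentScope(subscriptionId: string, resourceGroup?: string): string {
  const subscriptionScope = `/subscriptions/${subscriptionId}`;
  return resourceGroup
    ? `${subscriptionScope}/resourceGroups/${resourceGroup}`
    : subscriptionScope;
}
</code></pre>
<p>Passing a resource group name narrows the Contributor role to that group only, which follows the principle of least privilege when your pipeline deploys to a single, known resource group.</p>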
<h3 id="add-the-configuration-for-the-github-actions-workflow">Add the configuration for the GitHub Actions workflow</h3>
<p>The next step is to correctly set the configuration for the GitHub Actions of our Azure-Ready GitHub repository.</p>
<p>The workflow requires three pieces of information for the OIDC authentication to function properly:</p>
<ol>
<li>The identifier of the Azure tenant</li>
<li>The identifier of the Azure subscription</li>
<li>The application identifier (also known as client ID) of the previously created Azure AD application</li>
</ol>
<p>These identifiers are not secrets, so we could directly set them as GitHub Actions variables like this:</p>
<pre><code class="language-typescript">new github.ActionsVariable("tenantId", {
repository: repository.name,
variableName: "ARM_TENANT_ID",
value: azureConfig.tenantId,
});
</code></pre>
<p>However, I like to keep my tenant ID and subscription ID private, so we will store them in GitHub secrets, but that's not mandatory at all.</p>
<pre><code class="language-typescript">const azureConfig = pulumi.output(authorization.getClientConfig());
new github.ActionsSecret("tenantId", {
repository: repository.name,
secretName: "ARM_TENANT_ID",
plaintextValue: azureConfig.tenantId,
});
new github.ActionsSecret("subscriptionId", {
repository: repository.name,
secretName: "ARM_SUBSCRIPTION_ID",
plaintextValue: azureConfig.subscriptionId,
});
new github.ActionsSecret("clientId", {
repository: repository.name,
secretName: "ARM_CLIENT_ID",
plaintextValue: aadApplication.applicationId,
});
</code></pre>
<blockquote>
<p>ℹ Please note that you could also use <a href="https://docs.github.com/en/actions/deployment/targeting-different-environments/using-environments-for-deployment">environments for deployment</a> and their associated secrets and variables.</p>
</blockquote>
<h3 id="create-the-github-actions-workflow">Create the GitHub Actions workflow</h3>
<p>Everything seems to be properly configured to provision Azure resources from a GitHub Actions workflow in this new repository, except for the workflow itself. The goal here is to have a properly configured pipeline in the repository to get started provisioning Azure infrastructure.</p>
<p>Here is such a pipeline:</p>
<pre><code class="language-yaml">name: infra
on:
workflow_dispatch:
permissions:
id-token: write
contents: read
jobs:
provision-infra:
runs-on: ubuntu-latest
steps:
- name: 'Az CLI login'
uses: azure/login@v1
with:
client-id: ${{ secrets.AZURE_CLIENT_ID }}
tenant-id: ${{ secrets.AZURE_TENANT_ID }}
subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
- name: 'Run az commands'
run: |
az account show
az group list
</code></pre>
<p>This workflow first authenticates to Azure using OIDC with the <code>azure/login</code> action and then runs a few Azure CLI commands to interact with Azure resources. That's fine and probably enough to get you started, but you surely want to provision your infrastructure with a more declarative solution than an Azure CLI script. So let's see a more interesting pipeline, still authenticating via Azure OIDC but using Pulumi to provision the Azure resources.</p>
<pre><code class="language-yaml">name: infra
on:
workflow_dispatch:
permissions:
id-token: write # required for OIDC auth
contents: read # required to perform a checkout
jobs:
provision-infra:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Install pnpm
uses: pnpm/action-setup@v2
with:
version: latest
- name: Set node version to 18
uses: actions/setup-node@v3
with:
node-version: 18
cache: 'pnpm'
- name: Install dependencies
run: pnpm install
- name: Provision infrastructure
uses: pulumi/actions@v4.4.0
id: pulumi
with:
command: up
stack-name: dev
env:
ARM_USE_OIDC: true
PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}
ARM_CLIENT_ID: ${{ secrets.ARM_CLIENT_ID }}
ARM_TENANT_ID: ${{ secrets.ARM_TENANT_ID }}
ARM_SUBSCRIPTION_ID: ${{ secrets.ARM_SUBSCRIPTION_ID }}
</code></pre>
<p>A permission section is required with 2 settings (more details <a href="https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect#adding-permissions-settings">here</a>):</p>
<ul>
<li><code>id-token: write</code> ➡️ needed to request the OIDC token</li>
<li><code>contents: read</code> ➡️ needed to perform checkout action</li>
</ul>
<blockquote>
<p>ℹ When you start specifying permissions explicitly, you have to list all the permissions the job needs, because the default permissions no longer apply.</p>
</blockquote>
<p>The 3 steps following the checkout step specify the Node.js version to use, and install and correctly configure <a href="https://bordeauxcoders.com/series/pnpm-101">pnpm</a>. We assume here that the infrastructure will be provisioned using TypeScript (and Pulumi, of course), but there would be similar steps for other runtimes/languages (a <code>setup-dotnet</code> and a <code>dotnet restore</code> action for .NET, for instance).</p>
<p>The last action is the Pulumi action to provision the infrastructure by running the <code>pulumi up</code> on the <code>dev</code> stack. We can see that this action uses environment variables whose values are based on the GitHub Actions secrets we defined earlier. To tell Pulumi to use OIDC, we just have to set the <code>ARM_USE_OIDC</code> environment variable to <code>true</code>.</p>
<pre><code class="language-yaml"> env:
ARM_USE_OIDC: true
PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}
ARM_CLIENT_ID: ${{ secrets.ARM_CLIENT_ID }}
ARM_TENANT_ID: ${{ secrets.ARM_TENANT_ID }}
ARM_SUBSCRIPTION_ID: ${{ secrets.ARM_SUBSCRIPTION_ID }}
</code></pre>
<p>A GitHub Actions secret we did not talk about is <code>PULUMI_ACCESS_TOKEN</code>, a <a href="https://www.pulumi.com/docs/pulumi-cloud/access-management/access-tokens/">Pulumi access token</a> used to authenticate to Pulumi Cloud, the backend we use to store the infrastructure state and encrypt secrets. This token should be:</p>
<ol>
<li><p>Created from Pulumi Cloud (following the documentation <a href="https://www.pulumi.com/docs/pulumi-cloud/access-management/access-tokens/#personal-access-tokens">here</a>)</p>
</li>
<li><p>Stored in the stack configuration using the following command <code>pulumi config set pulumiTokenForRepository ******* --secret</code></p>
</li>
<li><p>Stored in a GitHub Actions secret using this code</p>
<pre><code class="language-typescript">new github.ActionsSecret("pulumiAccessToken", {
repository: repository.name,
secretName: "PULUMI_ACCESS_TOKEN",
plaintextValue: config.requireSecret("pulumiTokenForRepository"),
});
</code></pre>
</li>
</ol>
<p>The last thing to do is to add this workflow file to the GitHub repository:</p>
<pre><code class="language-typescript">import { readFileSync } from "fs";
const pipelineContent = readFileSync("main.yml", "utf-8");
new github.RepositoryFile("pipelineRepositoryFile", {
repository: repository.name,
branch: "main",
file: ".github/workflows/main.yml",
content: pipelineContent,
commitMessage: "Add preconfigured pipeline file",
commitAuthor: "Alexandre Nédélec",
commitEmail: "15186176+TechWatching@users.noreply.github.com",
overwriteOnCreate: true,
});
</code></pre>
<p>This code:</p>
<ol>
<li>reads the <code>main.yml</code> file that contains the workflow we saw previously</li>
<li>creates a file with this content in the repository in the <code>.github/workflows/</code> folder for the GitHub Actions workflows</li>
<li>makes a commit when creating the file (or modifying it)</li>
</ol>
<blockquote>
<p>💬 To read the YAML file, I use the <code>readFileSync</code> method from the File System API <code>fs</code>. That's one of the things I love about Pulumi: you use the things you already know and that already exist in your ecosystem. No need to look for a module or wait for someone to write one, there is probably something standard or a popular community library you can use.</p>
</blockquote>
<h2 id="test-the-azure-ready-github-repository">Test the Azure-Ready GitHub Repository</h2>
<p>Now that the infrastructure code to provision the Azure-Ready GitHub repository is written, let's run it with the <code>pulumi up</code> command and see if it works!</p>
<img src="/posts/images/azurereadygithub_pulumi_1.webp" class="img-fluid centered-img" alt="Output of the pulumi up command with all the resources created.">
<p>All the resources are correctly created and our new GitHub repository is ready to be used.</p>
<img src="/posts/images/azurereadygithub_github_2.webp" class="img-fluid centered-img" alt="Picture of the Azure Ready GitHub repository">
<p>Let's clone it.</p>
<pre><code class="language-bash">git clone https://github.com/TechWatching/azure-ready-repository; cd azure-ready-repository
</code></pre>
<p>We want to verify that the GitHub project is properly configured and can provision Azure resources from its GitHub Actions workflow.</p>
<p>Let's add some infrastructure code that provisions a few Azure resources to check that:</p>
<pre><code class="language-bash">pulumi new azure-typescript -n "AzureReadyGitHubRepository" -y --force
</code></pre>
<p>The <code>--force</code> option allows us to create the code within a non-empty directory.</p>
<p>I used the <code>azure-typescript</code> template, which creates a storage account and outputs its primary access key.</p>
<blockquote>
<p>🔒 In the SDK, the outputs of the function that lists the storage access keys are not currently marked as secrets. There is an <a href="https://github.com/pulumi/pulumi-azure-native/issues/2408">open issue</a> to change that, but in the meantime, I have simply modified the code to label the stack output as secret, ensuring its encryption.</p>
</blockquote>
<p>Let's run a <code>pnpm install</code> to install the dependencies and generate the <code>pnpm-lock.yaml</code> file. Then, we can push the code to GitHub and run the pipeline to see how it goes.</p>
<img src="/posts/images/azurereadygithub_github_3.webp" class="img-fluid centered-img" alt="Logs of the pipeline run showing that the workflow successfully created a storage account.">
<p>That's it, we succeeded in provisioning a storage account from our new GitHub repository, whose creation and configuration were entirely automated using Pulumi.</p>
<h2 id="to-conclude">To conclude</h2>
<h3 id="additional-information">Additional information</h3>
<p>There are different platforms you can use to host your Git repositories: GitHub, GitLab, and Azure DevOps to name a few. We use GitHub in this article but you can easily apply the same logic with other platforms (Pulumi has providers for GitLab and Azure DevOps as well).</p>
<p>Even though the Azure-Ready GitHub repository is provisioned using Pulumi, there's nothing stopping you from using another Infrastructure as Code solution that supports Azure OIDC (such as Azure CLI, which was mentioned in the article, Azure Bicep, or even Terraform) in the GitHub Actions workflow of the created repository. You don't even have to provision infrastructure; you can use this workflow to simply deploy an application to an existing Azure resource.</p>
<h3 id="potential-enhancements">Potential Enhancements</h3>
<p>There are many aspects that could be improved in the infrastructure code provisioning the Azure-Ready GitHub repository, but I believe the current solution serves as a good starting point. Nevertheless, here are some ideas for potential enhancements:</p>
<ul>
<li>make additional items, such as the commit author, configurable</li>
<li>authorize an environment and not only a branch to retrieve an Azure token</li>
<li>use environment variables/secrets instead of variables/secrets at the repository scope</li>
</ul>
<p>I think it would also be interesting to put that code behind an API or a web application using the Pulumi Automation API to create a self-service solution for provisioning Azure-Ready GitHub repositories on the fly.</p>
<h3 id="related-articles">Related articles</h3>
<p>Here are some articles on the same topic I wanted to mention:</p>
<ul>
<li><p><a href="https://leebriggs.co.uk/blog/2022/01/23/gha-cloud-credentials"><strong>Stop using static cloud credentials in GitHub Actions</strong></a> <strong>by Lee Briggs</strong><br />
<strong>➡️</strong> This post provides examples for configuring OIDC authentication with GitHub Actions for AWS, Azure, and GCP. The code for Azure is quite similar to the code I showed here. Yet, it doesn't go so far as to initialize a pipeline ready to deploy resources with Pulumi. Anyway, it's awesome to have the code for all 3 major providers.</p>
</li>
<li><p><a href="https://xaviergeerinck.com/2023/05/16/configuring-github-actions-to-azure-authentication-with-oidc/"><strong>Configuring GitHub Actions to Azure authentication with OIDC</strong></a> <strong>by Xavier Geerinck</strong><br />
<strong>➡️</strong> This post also shows how to configure OIDC authentication with GitHub Actions and Azure, but using an Azure CLI script. Although the GitHub repository creation and configuration are done manually, automating the Azure part with a few lines of script is nice.</p>
</li>
<li><p><a href="https://samcogan.com/getting-rid-of-passwords-for-deployment-with-pulumi-oidc-support/"><strong>Getting Rid of Passwords for Deployment with Pulumi OIDC Support</strong></a> <strong>by Sam Cogan</strong><br />
➡️ If you don't care about automating everything and simply want to configure OIDC authentication through the Azure portal, that's the post you will want to read. There is also an example of a pipeline to provision Azure infrastructure using a .NET Pulumi program.</p>
</li>
</ul>
<h3 id="complete-code-solution">Complete code solution</h3>
<p>In this article, I aimed to provide a step-by-step explanation of how to automate the creation of a GitHub repository with a properly configured workflow to interact with Azure using OpenID Connect. Consequently, the article turned out to be quite lengthy. I apologize for that, but I didn't want to present the code without adequate explanation.</p>
<p>Anyway, now that we've covered everything, here is the complete code, which is just 75 lines long:</p>
<pre><code class="language-typescript">import * as pulumi from "@pulumi/pulumi";
import * as github from "@pulumi/github";
import * as azuread from "@pulumi/azuread";
import * as authorization from "@pulumi/azure-native/authorization";
import { azureBuiltInRoles } from "./builtInRoles";
import { readFileSync } from "fs";

const config = new pulumi.Config();

const repository = new github.Repository("azure-ready-repository", {
  name: "azure-ready-repository",
  visibility: "public",
  autoInit: true,
});
export const repositoryCloneUrl = repository.httpCloneUrl;

const aadApplication = new azuread.Application("AzureReadyApp", { displayName: "Azure Ready App" });
const servicePrincipal = new azuread.ServicePrincipal("AzureReadyServicePrincipal", {
  applicationId: aadApplication.applicationId,
});

new azuread.ApplicationFederatedIdentityCredential("AzureReadyAppFederatedIdentityCredential", {
  applicationObjectId: aadApplication.objectId,
  displayName: "AzureReadyDeploys",
  description: "Deployments for azure-ready-repository",
  audiences: ["api://AzureADTokenExchange"],
  issuer: "https://token.actions.githubusercontent.com",
  subject: pulumi.interpolate`repo:${repository.fullName}:ref:refs/heads/main`,
});

const azureConfig = pulumi.output(authorization.getClientConfig());
const subscriptionId = azureConfig.subscriptionId;

new authorization.RoleAssignment("contributor", {
  principalId: servicePrincipal.id,
  principalType: authorization.PrincipalType.ServicePrincipal,
  roleDefinitionId: azureBuiltInRoles.contributor,
  scope: pulumi.interpolate`/subscriptions/${subscriptionId}`,
});

new github.ActionsSecret("tenantId", {
  repository: repository.name,
  secretName: "ARM_TENANT_ID",
  plaintextValue: azureConfig.tenantId,
});
new github.ActionsSecret("subscriptionId", {
  repository: repository.name,
  secretName: "ARM_SUBSCRIPTION_ID",
  plaintextValue: azureConfig.subscriptionId,
});
new github.ActionsSecret("clientId", {
  repository: repository.name,
  secretName: "ARM_CLIENT_ID",
  plaintextValue: aadApplication.applicationId,
});
new github.ActionsSecret("pulumiAccessToken", {
  repository: repository.name,
  secretName: "PULUMI_ACCESS_TOKEN",
  plaintextValue: config.requireSecret("pulumiTokenForRepository"),
});

const pipelineContent = readFileSync("main.yml", "utf-8");
new github.RepositoryFile("pipelineRepositoryFile", {
  repository: repository.name,
  branch: "main",
  file: ".github/workflows/main.yml",
  content: pipelineContent,
  commitMessage: "Add preconfigured pipeline file",
  commitAuthor: "Alexandre Nédélec",
  commitEmail: "15186176+TechWatching@users.noreply.github.com",
  overwriteOnCreate: true,
});
</code></pre>
<p>You can find the complete source code used for this article <a href="https://github.com/TechWatching/AzureOIDC">in this GitHub repository</a>.</p>
<p>I hope you enjoyed this article. Please feel free to share your thoughts in the comments, ask questions, or make suggestions. Keep learning.</p>
<p>Creating an application and deploying it to Azure is not complicated. You write some code on your machine, do some clicks in the Azure portal, or run some Azure CLI commands from your terminal and that's it: your application is up and running in Azure.</p>
https://techwatching.dev/posts/pnpm-who-is-using
Who is using pnpm?
2023-07-06T00:00:00Z
<p>You may have come across pnpm through discussions with fellow developers, reading blog posts, watching videos, or attending developer conferences. You have probably heard its praises: it's fast, disk-space efficient, and great for monorepos.</p>
<p>However, you might wonder: who is actually using pnpm?</p>
<h2 id="a-growing-popularity">A growing popularity</h2>
<p>At the time of writing, pnpm has over 24k stars on GitHub, and this number is rapidly increasing. The pnpm Twitter account maintains a thread that tracks the star count: each time the GitHub repository gains 1k stars, a new tweet is posted. For quite some time now, it has been growing by 1k every two months.</p>
<p>Another indicator of its growing popularity is its number of downloads. If you go to npm stats, you can see how this number has evolved compared to npm and yarn.</p>
<img src="/posts/images/pnpm101_whouses_stats.webp" class="img-fluid centered-img" alt="npm vs yarn vs pnpm downloads per day">
<p>I believe this diagram speaks for itself 🚀.</p>
<h2 id="which-companies-are-using-pnpm">Which companies are using pnpm?</h2>
<p>There is a page in pnpm's documentation listing well-known companies that use pnpm.</p>
<img src="/posts/images/pnpm101_whouses_companies.webp" class="img-fluid centered-img" alt="Screenshot of the documentation showing companies using pnpm">
<p>You can also find other companies on the StackShare website (though it seems few companies have taken the time to declare that they use pnpm in their stack).</p>
<img src="/posts/images/pnpm101_whouses_companies_2.webp" class="img-fluid centered-img" alt="Screenshot of the StackShare page showing companies using pnpm">
<h2 id="which-popular-open-source-projects-are-using-pnpm">Which popular open-source projects are using pnpm?</h2>
<p>If you see a pnpm-lock.yaml or a pnpm-workspace.yaml file in a GitHub repository, then that project is definitely using pnpm to manage its dependencies. You can use this technique to find GitHub projects using pnpm by querying for these files with GitHub code search.</p>
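<p>As a toy illustration, this lock-file heuristic can be expressed as a small function (entirely illustrative, not an official pnpm tool):</p>

```typescript
// Guess a repository's package manager from the files at its root.
// A pnpm-lock.yaml or pnpm-workspace.yaml is a strong signal for pnpm;
// yarn.lock and package-lock.json play the same role for yarn and npm.
function guessPackageManager(rootFiles: string[]): "pnpm" | "yarn" | "npm" | "unknown" {
  if (rootFiles.includes("pnpm-lock.yaml") || rootFiles.includes("pnpm-workspace.yaml")) {
    return "pnpm";
  }
  if (rootFiles.includes("yarn.lock")) {
    return "yarn";
  }
  if (rootFiles.includes("package-lock.json")) {
    return "npm";
  }
  return "unknown";
}

// A repository whose root contains pnpm workspace files is using pnpm:
console.log(guessPackageManager(["package.json", "pnpm-lock.yaml", "pnpm-workspace.yaml"]));
// → pnpm
```

Of course a lock file is only a heuristic, which is why checking the CI pipelines (as done below) gives a more reliable answer.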
<p>I thought it would be interesting to explore which package managers are utilized in the development of popular JavaScript framework projects. And guess what? Many JavaScript frameworks are developed using pnpm 💖.</p>
<blockquote>
<p>💡 To check that these projects were using pnpm, I not only verified the presence of pnpm-specific files but also checked their continuous integration pipelines (contained in the .github folder) to see what they were using to manage their dependencies.</p>
</blockquote>
<p>Here is a non-exhaustive list of popular JavaScript web frameworks that use pnpm as their package manager:</p>
<ul>
<li>Vue</li>
<li>Nuxt</li>
<li>Next.js</li>
<li>SvelteKit</li>
<li>SolidStart</li>
<li>Astro</li>
<li>Qwik</li>
</ul>
<p>That's quite an impressive list: most modern JavaScript web frameworks seem to have chosen pnpm. That's also the case for popular frontend tooling projects like Vite or Turbo.</p>
<blockquote>
<p>💡The fact that pnpm is utilized by maintainers for internal development of these frameworks does not imply that these frameworks can only be used with pnpm. Typically, JavaScript frameworks are "package manager" agnostic, allowing you to use your preferred package manager when developing a project with one of these frameworks.</p>
</blockquote>
<h2 id="should-you-use-pnpm-because-others-do">Should you use pnpm because others do?</h2>
<p>Short answer: no.</p>
<p>Choosing a technology solely based on its popularity is not advisable. While popularity is a factor to consider, it should not be the only determining one. Thus, you should not use pnpm just because well-known companies or popular open-source projects use it.</p>
<p>However (and here's the long answer 😉), you should consider exploring pnpm, as there must be a reason why all these smart people have chosen it over npm or yarn. Investigate the issues pnpm resolves for them; perhaps you face similar challenges in your projects. You might not even be aware of certain problems (such as lengthy CI runs due to time-consuming package installations, excessive disk space occupied by node modules, or issues with hoisted node modules) that pnpm could make easier to deal with. Nevertheless, if you are satisfied with your current package manager, there is no need to switch just to imitate the popular framework projects.</p>
<p>I believe people are familiar with npm because it is the default package manager for Node.js projects. They might also know about yarn because it was initially developed by Facebook (the company behind React) and addressed some of npm's shortcomings. People adopt pnpm, however, for its performance and for the way it solves the problems they encounter with npm package management. That's also why I use pnpm: it does the job, and it does it quickly.</p>
<p>Now you know that you're not alone in using pnpm; from renowned companies to popular open-source projects, many people are utilizing it.</p>