Xtreme .Net Talk

  • Posts

    • The Push Notification Hub (PNH) service recently went through significant modernization. We migrated from legacy components, namely .NET Framework 4.7.2 and a custom HTTP server called "RestServer", to .NET 8 and ASP.NET Core 8. For handling outgoing requests, we moved from a custom HTTP client/handler called "HttpPooler" to Polly v8 and SocketsHttpHandler. This article describes the journey so far and its impact on PNH performance.

What is PNH and how does it affect Teams users?

First, a description of what PNH is and the role it plays in Microsoft's real-time communication infrastructure. PNH is a central and critical component in the distribution of event notifications to end users; it currently handles traffic for Teams, Skype, and a couple of other applications in Microsoft's portfolio. These notifications can be delivered via actual push channels like FCM (Google's Firebase Cloud Messaging, for relaying messages to Android users) or APNS (Apple Push Notification Service, for iOS), but first and foremost as real-time notifications through Microsoft's internal WebSocket channel. The real-time channel is used while the app is in the foreground; the background push channel is used to reach the app when it is in the background, for example on mobile devices.

All communications, including messages, calls, and meetings, are ultimately directed to PNH in the form of events. PNH then loads the list of registered devices for the target user, the notification configuration, and other metadata. Using this information (transformed input message payload, channel configuration, and so on), requests are constructed and dispatched to the designated notification channels. The high-level send-message flow is illustrated by a diagram in the original post. In reality, the PNH portion breaks down into several subservices and flows; for example, just the mechanism of discovering the list of devices to contact is a whole separate service.

So PNH serves as a conduit for any kind of push event delivered to chat room members, be it text messages in chats, channels, and meetings, or typing/calling notifications. Even a calling notification can be thought of as a special message (it's essentially a signal for the Teams mobile app to display the incoming call screen and start ringing your phone). The nature of these messages places some unique requirements on PNH. There are two main scenarios: calling messages demand the lowest possible latency (the target device should start ringing as soon as possible), while text messaging tends to prioritize throughput over latency.

To give you a general idea of the traffic volume: on a typical day, PNH makes HTTP requests counting in the hundreds of billions! The health and performance of PNH are therefore important factors in the overall application experience; they directly affect how quickly and reliably users receive notifications. High resource consumption for PNH means more requests end up queued, resulting in more timeouts and missed delivery deadlines. The service also becomes costlier to run, which hurts scalability.

Our expectations

PNH was slated for migration (along with other services) to drive down operational costs and to bring in the latest tech and security improvements introduced in .NET Core. It was running on the legacy .NET Framework 4.7.2 ecosystem, and because of that, many other libraries were out of date, missing performance and, perhaps more importantly, security improvements. Based on observations of other services, we expected at least a 25% improvement in "Q-factor" after migrating to the .NET Core stack.

What is Q-factor?

Q-factor is the metric by which we measure performance evolution. The general formula:

Q-factor = (requests served) / (CPU consumption)

For PNH, the formula computes "work done" per "resources spent", so it increases if the service handles more requests with the same CPU consumption, or if the same volume of requests is handled more efficiently. This value can then be used to judge relative improvements (or degradations) in performance after a feature is rolled out.
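As a quick worked example (a minimal sketch using the mean Q-factor measurements reported later in this article), the relative improvement between two readings is just their ratio:

```csharp
// Worked example: relative Q-factor improvement between two readings.
// The values are the mean Q-factors reported later in this article.
double qNetFramework = 1573; // .NET Framework 4.7.2
double qNet8 = 2331;         // .NET 8

double improvementPercent = (qNet8 / qNetFramework - 1) * 100;
Console.WriteLine($"Q-factor improvement: {improvementPercent:F0}%"); // ~48%
```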
Migration phases

Our journey to .NET Core breaks down into several distinct phases. Let's go over each phase in more detail.

Start of the migration

We used RestServer as the HTTP server/listener component for incoming requests. For outgoing HTTP requests we used HttpPooler, an old component built with some basic resiliency capabilities, most of which carried over to its spiritual successor, internally named R9, which was built on top of Polly v7. Parts of R9 were later integrated into Polly v8. HttpPooler used the stock HttpClientHandler to transfer requests over the wire.

Neither RestServer nor HttpPooler had an easy replacement on .NET Core. That would normally have forced us to branch our code heavily and use different components on each platform. Given how deeply these components were integrated into the service, we ultimately decided against that. Instead, we opted to first migrate off RestServer and HttpPooler to ASP.NET Core 2.2 and to R9 coupled with WinHttpHandler, respectively. The reason is that these newer components can be used as-is on both .NET Framework and .NET Core (although the .NET Standard 2.0 packages that were released are quite dated nowadays). Completing this preparation step gave us a much more comfortable foundation on which to continue the .NET Core migration, and it gave us confidence during rollouts that both versions of our service would behave the same with regard to business logic (i.e., fewer regressions and rollbacks).

Initial phase

The first step toward .NET Core was to get rid of RestServer (incoming requests) and HttpPooler (outgoing requests and resiliency). We went through the code (~190k lines according to VS Code Metrics) and made tons of necessary changes, as these components were embedded quite deep.

RestServer replacement

RestServer was replaced with ASP.NET Core 2.2, the latest version to support both .NET Framework and .NET Core (later versions dropped .NET Framework support). As the server implementation we used the widely adopted HttpSys, despite wishing for Kestrel instead. HttpSys resembled RestServer's behavior a bit more closely, and at the time Kestrel had a serious performance degradation issue on our hosting platform due to a negative interaction with Windows Defender.

HttpPooler replacement

HttpPooler was replaced with a combination of R9 (handles HTTP resiliency: retries, rate limits, etc.) and WinHttpHandler (handles over-the-wire HTTP communication).

Note: R9 (Rejuvenate) is a .NET SDK designed to provide a strong foundation upon which high-performance, high-availability services can be built. R9 strives to insulate services from the nitty-gritty details of the platform they are executing on, and includes a growing set of utility features that have proven valuable to service developers. Its HTTP resiliency components are based on Polly v7.
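For illustration, wiring WinHttpHandler into an HttpClient looks roughly like this (a minimal sketch with illustrative settings, not PNH's actual configuration):

```csharp
// Minimal sketch: WinHttpHandler as the outgoing transport. It ships in the
// System.Net.Http.WinHttpHandler package and runs on both .NET Framework and
// .NET Core (Windows only). The values below are illustrative.
using System;
using System.Net.Http;

var handler = new WinHttpHandler
{
    MaxConnectionsPerServer = 256,                 // illustrative pool size
    ReceiveDataTimeout = TimeSpan.FromSeconds(30), // illustrative timeout
};

var client = new HttpClient(handler, disposeHandler: true);
var response = await client.GetAsync("https://example.com/health");
```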
It's worth mentioning that our performance expectations were initially conservative; after all, we were replacing already fine-tuned components. Indeed, moving to ASP.NET Core 2.2 and WinHttpHandler did not come with any unexpected performance fluctuations. We were, however, pleasantly surprised later after replacing WinHttpHandler with .NET Core's own SocketsHttpHandler (more on that later in this article).

A lot of these changes took the whole rollout round trip:

1. Merge new code.
2. Deploy it to live servers.
3. Gradually enable the feature following safe rollout practices (this takes time).
4. Verify it works and is stable under normal conditions; iron out any bugs.
5. Clean up old code.

Transition to .NET Core runtime

During this phase we slowly transitioned from .NET Framework (aka NetFx) to the .NET Core runtime. The first step was to introduce multitargeting to our projects so that the entire business logic could be compiled under both .NET Framework 4.7.2 and .NET 8. Next, we added the actual .NET 8 implementation of our service and a switching mechanism (based on deployment variables) that allowed us to transition any of our servers back and forth. Then, after the world-wide rollout, we did the legacy code cleanup.

Runtime switch Q-factor impact

Significant performance gains were expected, as the runtime and BCL improved in many ways between .NET Framework 4.7.2 and .NET 8. This was later confirmed: the switch had a profound, positive effect on Q-factor, with a big, clear jump up (the good direction). The original post includes a graph covering three peak traffic periods, with an obvious Framework-to-Core transition point in the middle, measured on one of our production deployments. (Feel free to ignore the unit of the Y axis; what matters is the relative change between .NET Framework and .NET Core.)

Q-factor numbers over a typical business weekend:

Platform             | Mean Q-factor | Improvement
.NET Framework 4.7.2 | 1573          | (baseline)
.NET 8               | 2331          | 48%

Just by switching runtimes, our Q-factor went up 48%! Not bad, considering it is basically the same business-logic code (save for a couple of minor compatibility fixes and if-defs).

On latest tech

Being done with the .NET 8 rollout, and having finally removed the legacy components from the code, enabled us to switch to some of the new technologies at our disposal.

ASP.NET Core 8 and Minimal APIs

ASP.NET Core 8 brings tons of features, fixes, and improvements, and opens the door to further optimizations in the future, like Native AOT. Minimal APIs are a new way to define endpoints and routing in an ASP.NET Core application, designed to be high-performance and simpler to use than the classic MVC (model-view-controller) approach. Minimal APIs allowed us to completely remove controllers and controller-based routing in PNH; our endpoint definition code was slashed from several code files to around one page of text.

Another benefit we immediately took advantage of is the ability to define multiple incoming request pipelines based on the URL of the request, as sketched below. This allowed us to restrict the full pipeline to actual business endpoints, while technical endpoints (like health checks) get a simplified pipeline (faster, fewer memory allocations). Better yet, requests with invalid paths or verbs (think possible attacks or automated vulnerability scanners) hit a virtually empty pipeline and are cheaply discarded.
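A minimal sketch of that idea (the routes and the health-check split are illustrative, not PNH's actual endpoints):

```csharp
// Minimal API sketch: a cheap branch for technical endpoints, the full
// pipeline for business endpoints. All names are illustrative.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Health probes bypass the full middleware stack entirely.
app.MapWhen(
    ctx => ctx.Request.Path.StartsWithSegments("/health"),
    branch => branch.Run(ctx => ctx.Response.WriteAsync("OK")));

// Business endpoints get the full pipeline (auth, telemetry, ...).
app.MapPost("/users/{userId}/events", (string userId, EventDto dto) =>
    Results.Accepted($"/users/{userId}/events"));

app.Run();

record EventDto(string Type, string Payload);
```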
SocketsHttpHandler

Now for improvements to our outgoing HTTP pipeline. The HTTP handler is the component at the end of the pipeline (after telemetry, resilience, and other components) that is responsible for actually sending requests over the wire to the intended target. After HttpPooler, WinHttpHandler was our go-to handler for outgoing traffic, and it served its purpose as multi-platform, mature tech. Once on .NET 8, however, we could finally switch to SocketsHttpHandler, the recommended handler for the .NET Core platform, built from the ground up with a strong focus on performance and reliability. This brought improved performance, reliability, and security, as well as compatibility (supporting the newest HTTP standards, like HTTP/3). We actually discovered and reported a bug in WinHttpHandler involving client certificate corruption under very heavy load; SocketsHttpHandler does not have this issue, as it handles certificates carefully behind the scenes.

The effects of introducing SocketsHttpHandler were quite significant. Both Q-factor and latency (more precisely, the 99th-percentile latency of all successful calls) improved greatly across the board. Some highlights follow; a configuration sketch appears after this section.

SHH Q-factor impact

Important: these improvements are measured on top of the .NET 8 runtime switch (which brought its own set of performance benefits). Q-factor impact over a typical week, during gradual rollout:

HTTP handler       | Mean Q-factor | Improvement
WinHttpHandler     | 2264          | (baseline)
SocketsHttpHandler | 2744          | 21%

SocketsHttpHandler is the clear winner in terms of raw performance. Together with the runtime switch, it increased PNH's Q-factor by around 70%!

APNS latency

To demonstrate the impact of SocketsHttpHandler on one of our more network-heavy integrations, consider the Apple Push Notification Service (in effect whenever you message or call an iOS device). APNS uses HTTP/2-based binary communication, which in itself poses unique challenges as well as optimization opportunities for the HTTP handler of your choice. After switching to SocketsHttpHandler, latency for successful requests dropped almost by half, showing that the SocketsHttpHandler stack is much better optimized:

HTTP handler       | Mean P99 latency | Improvement
WinHttpHandler     | 99.8 ms          | (baseline)
SocketsHttpHandler | 61.1 ms          | 39%

Realtime notifications latency

Realtime notifications are especially important: they are latency-sensitive (calling, for example), and they constitute most of our world-wide traffic (even more than APNS). We are happy to report that this traffic saw a quite significant latency reduction, in the range of hundreds of milliseconds:

HTTP handler       | Mean P99 latency | Improvement
WinHttpHandler     | 506.4 ms         | (baseline)
SocketsHttpHandler | 329.2 ms         | 35%

This is a big improvement in push notification latency that could be felt world-wide!
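For reference, a SocketsHttpHandler setup for high-volume HTTP/2 traffic might look like this (a minimal sketch with illustrative settings; PNH's actual tuning is not published):

```csharp
// Minimal sketch: SocketsHttpHandler tuned for sustained HTTP/2 traffic.
// All values are illustrative.
using System;
using System.Net;
using System.Net.Http;

var handler = new SocketsHttpHandler
{
    PooledConnectionLifetime = TimeSpan.FromMinutes(5), // pick up DNS changes
    EnableMultipleHttp2Connections = true,              // more concurrent streams
    AutomaticDecompression = DecompressionMethods.All,
};

var client = new HttpClient(handler)
{
    DefaultRequestVersion = HttpVersion.Version20,
    DefaultVersionPolicy = HttpVersionPolicy.RequestVersionOrLower,
};
```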
Polly v8

Note: Polly is a .NET resilience and transient-fault-handling library that allows developers to express policies such as Retry, Circuit Breaker, Timeout, Bulkhead Isolation, and Fallback in a fluent and thread-safe manner.

.NET 8 enabled us to use the latest version of R9 at first, and later its de facto successor, Polly v8. This resulted in retiring even more legacy code while improving the overall reliability and security of PNH. Polly had a profound, positive impact on memory allocations: Q-factor stayed more or less the same, but the .NET 8 event counters showed a big improvement in average heap size.
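A minimal sketch of a Polly v8 resilience pipeline (the strategies and settings here are illustrative, not PNH's production policy):

```csharp
// Minimal sketch: a Polly v8 resilience pipeline with retry and timeout.
using System;
using System.Net.Http;
using Polly;
using Polly.Retry;

var client = new HttpClient();

ResiliencePipeline pipeline = new ResiliencePipelineBuilder()
    .AddRetry(new RetryStrategyOptions
    {
        MaxRetryAttempts = 3,
        Delay = TimeSpan.FromMilliseconds(200),
        BackoffType = DelayBackoffType.Exponential,
    })
    .AddTimeout(TimeSpan.FromSeconds(5))
    .Build();

// Execute an outgoing call through the pipeline.
var response = await pipeline.ExecuteAsync(
    async ct => await client.GetAsync("https://example.com/ping", ct));
```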
Closing thoughts

PNH is deriving great benefits from .NET 8. Overall performance, as evidenced by the Q-factor metric, improved by about 70%. Performance is a major factor for a service like this, and it reflects positively in basically every messaging-related flow on the Teams platform. The results actually exceeded our expectations by a significant margin. Essentially, PNH is now faster and cheaper: improved latency means everyone can enjoy snappier calling and messaging notifications, and reduced resource consumption means we can afford more servers for improved redundancy (think fewer outages). It can also translate to denser global coverage, further reducing latency wherever users are located. And that is just the first of a series of milestones.

Next steps

Now that PNH is on .NET 8 and the .NET Framework code is all cleaned up (we no longer need to support it), our hands are untied to adopt even more new technologies. A sneak peek of our future plans:

Migrate to System.Text.Json. PNH currently uses Json.NET (a.k.a. Newtonsoft.Json) for all its JSON (de)serialization, and there is a lot of that going on behind the scenes, as we rely on JSON for all requests and responses. System.Text.Json proves superior in terms of performance. It also has great support for async code flows, with methods optimized for being awaited. This matters for a high-load API like PNH that uses async code heavily, as it helps smooth out thread utilization and avoid thread starvation.

Utilize Span<T> and Memory<T> tooling. PNH does a lot of string processing and reprocessing: parsing, injecting, removing, and transforming data from one form to another. These operations could benefit greatly from tools like spans. Concepts like memory slicing, stack-allocated memory, and parsing directly from spans can bring substantial reductions in CPU/memory consumption, further driving down operational cost.

Native AOT. Native ahead-of-time compilation is something we'd like to explore. It has the potential to improve startup and runtime performance. This is possible on .NET 8+, which adds native AOT support for ASP.NET Core.

Possibility to host on Linux. Bringing PNH to .NET 8 means we are now on an actually cross-platform framework. Concretely, it opens the way to hosting our service on Linux, which has the potential to further improve performance and overall stability of the service.

Kestrel. We are using HttpSys to listen for and process incoming connections; Kestrel is a lightweight, performance-focused alternative. Advantages of Kestrel include:

- High performance and low overhead; it is optimized for handling a large number of concurrent connections, making it suitable for high-traffic applications.
- Designed to run on multiple platforms, including Windows, Linux, and macOS, making it a versatile choice for applications deployed across different environments.
- Better integration with ASP.NET Core.
- Expanded configuration options.
- Connection middleware!
- Does not need admin rights to listen on port numbers under 1024.
- Actively developed and maintained with the latest security patches and standards.

.NET 9. The newest version of .NET was released while this article was being written. It offers an optimized runtime and provides opportunities for cost savings through fine-tuning PNH code.

Conclusion

The modernization of PNH has been a significant step forward for our team. By leveraging .NET 8, we've achieved notable improvements in performance, scalability, and efficiency. .NET 8 also brought much-needed security enhancements to the critical components our code uses, plus many exciting new language features that pave the way toward further performance optimizations and modern C# practices. These changes directly enhance the experience for Teams users, ensuring faster and more reliable notifications.

As we look ahead, we're excited to explore the possibilities that .NET 9 and other emerging technologies offer. The journey of modernization is ongoing, and we're committed to continuously improving our services. We'd love to hear about your experiences with modernizing your applications or adopting the latest .NET technologies in the comments below.

The post Modernizing push notification API for Teams appeared first on .NET Blog.
    • This is a continuation of the sample .NET MAUI – UI testing with Appium and NUnit created as part of Gerald's blog Getting started with UI testing .NET MAUI apps using Appium. In this post, we will see how to use BrowserStack App Automate to run the previously written tests in the cloud on real devices! This post provides a guide to setting up BrowserStack with your existing Appium user-interface (UI) tests for your .NET MAUI apps, and we will also see how to set up a continuous integration / continuous delivery (CI/CD) pipeline to run these tests in an automated fashion.

What is BrowserStack App Automate?

BrowserStack's App Automate lets you test your native and hybrid apps on a variety of mobile and tablet devices. The devices you access are all real, physical devices housed in BrowserStack's data centers.

Run tests on 2000+ real iOS and Android devices. App Automate gives you instant access to 2000+ real iOS and Android devices, with varying OS versions and form factors, in the cloud, reducing the cost of building and managing an in-house lab. You can test your app under real-world conditions and identify issues that might not appear in emulators or simulators.

Run test suites on BrowserStack in minutes. The BrowserStack SDK allows you to integrate your test suites with App Automate in minutes. Simply install the SDK, set up a YAML file, and trigger your tests to get started. You can also leverage features like parallel orchestration and local testing with ease using the SDK. All this, with no code changes!

Test apps in real-world user conditions. BrowserStack lets you test the behavior of your app on different devices under different network conditions. App Automate offers several presets and also lets you define your own custom network conditions to check your app's behavior. You can even change the network conditions midway through a test run, just like in the real world, where the end user's network varies.
Steps to Add BrowserStack to Existing Appium UI Tests

If you already have Appium UI tests written for your .NET MAUI app, the following steps show you how to run those tests on the cloud devices provided by BrowserStack App Automate. To follow along, you can refer to this repository with sample code. To run your .NET MAUI iOS and Android UI tests with BrowserStack, follow these steps:

1. Sign up for BrowserStack: Create an account on BrowserStack; a free trial is available to get started, and pricing details for BrowserStack subscriptions are available on their site.

2. Create BrowserStack credentials: Obtain your BROWSERSTACK_USERNAME and BROWSERSTACK_ACCESS_KEY from the BrowserStack account settings.

3. Android and iOS BrowserStack configuration files: The repository includes BrowserStack configuration files for both the Android and iOS projects. These files define the specific settings and capabilities required to run the tests on BrowserStack. The browserstack.yml file for the Android project is at BasicAppiumNunitSample/UITests.Android/browserstack.yml, and the one for the iOS project is at BasicAppiumNunitSample/UITests.iOS/browserstack.yml. The configuration should mostly be self-explanatory, but here are a few of the key parts:

   - userName & accessKey: Your BrowserStack credentials. They can be hardcoded or set as environment variables / GitHub Actions secrets.
   - automationName: The automation engine to use. For iOS, use XCUITest; for Android, use UIAutomator2.
   - appiumVersion: The version of Appium to use; this should match the version of Appium used by the UI tests.
   - app: Path to the iOS app (IPA) or Android app (APK) to be tested. For .NET MAUI apps built as part of an automated pipeline, you can point to the publish folder here, for example: ./MauiApp/bin/Release/net8.0-android/publish/com.companyname.basicappiumsample-Signed.apk
   - browserstackLocal: Set to false to run on BrowserStack App Automate, which runs the tests on the selected devices in the Device Cloud. This setting is mostly for web-based solutions and should therefore always be false for .NET MAUI apps.

   The configuration of the test devices is determined by so-called capabilities, ranging from the operating system version to use, to which logs should be captured, to device-specific features like enabling Apple Pay or simulating a geolocation coordinate. To help you configure these capabilities, BrowserStack has a really great Capability Generator tool that walks you through generating the browserstack.yml file; you can simply copy the generated file into your repo. A minimal illustrative example appears after these steps.

4. Update the UITest projects to use BrowserStack: Add the browserstack.yml files you just created/generated to the respective UITests.Android and UITests.iOS folders. You need a separate browserstack.yml file per platform, placed in the root folder of each test project. Then add the BrowserStack.TestAdapter NuGet package to both the UITests.Android and UITests.iOS projects:

   <PackageReference Include="BrowserStack.TestAdapter" Version="0.13.3" />

5. Run BrowserStack App Automate tests from your local machine: Follow the informative documentation provided by BrowserStack to run the tests from your local machine. The tests can be run from Visual Studio on Windows or via the BrowserStack CLI on Mac.

6. Run BrowserStack App Automate tests in a CI/CD workflow: For Azure DevOps Pipelines, the steps can be found in the BrowserStack documentation. To set up a GitHub Actions workflow to run the BrowserStack App Automate tests as part of your CI/CD automation, follow the next section of this post.
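To make the keys above concrete, here is a minimal, illustrative browserstack.yml for the Android project. The device entries and version numbers are assumptions for this sketch; generate a real file with BrowserStack's Capability Generator.

```yaml
# Illustrative browserstack.yml sketch (Android). Values are placeholders.
userName: YOUR_USERNAME        # or supply via the BROWSERSTACK_USERNAME secret
accessKey: YOUR_ACCESS_KEY     # or supply via the BROWSERSTACK_ACCESS_KEY secret
automationName: UIAutomator2   # XCUITest for the iOS project
appiumVersion: 2.0.0           # match the Appium version your tests use
app: ./MauiApp/bin/Release/net8.0-android/publish/com.companyname.basicappiumsample-Signed.apk
platforms:                     # assumed device list for this sketch
  - deviceName: Google Pixel 7
    platformName: android
    platformVersion: 13.0
browserstackLocal: false
```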
GitHub Actions Workflow

The GitHub Actions workflow file .github/workflows/browserStackTests.yml is set up to run the UI tests on BrowserStack for both the iOS and Android platforms. For details on the steps that build the .NET MAUI app, see the blog Getting Started with DevOps and .NET MAUI; this section focuses specifically on the steps that run the tests on BrowserStack App Automate.

Prerequisites for GitHub Actions

You should set your BrowserStack username and access key as GitHub Actions secrets, i.e. BROWSERSTACK_USERNAME and BROWSERSTACK_ACCESS_KEY respectively:

1. Go to your GitHub repository.
2. Navigate to Settings > Secrets and variables > Actions.
3. Add two new secrets: BROWSERSTACK_USERNAME and BROWSERSTACK_ACCESS_KEY.

Let's look at some of the important steps in the GitHub Actions pipeline that set up the workflow to run the tests in BrowserStack App Automate.

Install Appium: Installs Appium and the XCUITest or UIAutomator2 driver, respectively, per platform. These are needed to perform the interactions with the running application; the driver knows how to perform a click, scroll, or switch the device to dark theme, for instance.

```
npm install -g appium
# For Android
appium driver install uiautomator2
# For iOS
appium driver install xcuitest
```

For Mac runners with Apple Silicon chips only: When using a Mac runner with an Apple Silicon processor, we need an extra step that installs the BrowserStack SDK .NET tool and sets it up (see the BrowserStack CLI docs, step 3: "[Only for Macs with Apple Silicon] Install dotnet x64 on macOS"):

```
dotnet tool install browserstack-sdk --version 1.16.3 --create-manifest-if-needed
dotnet browserstack-sdk setup-dotnet --dotnet-path "." --dotnet-version "8.0.403" --yes
```

Build Appium BrowserStack tests: Builds the Appium tests for the Android or iOS project.

```
# For Android
dotnet build BasicAppiumNunitSample/UITests.Android/UITests.Android.csproj
# For iOS
dotnet build BasicAppiumNunitSample/UITests.iOS/UITests.iOS.csproj
```

Run Appium BrowserStack tests: Runs the Appium tests on BrowserStack.

```
# For Android
dotnet test BasicAppiumNunitSample/UITests.Android/UITests.Android.csproj
# For iOS (./dotnet is the x64 dotnet installed by the setup step above)
./dotnet test BasicAppiumNunitSample/UITests.iOS/UITests.iOS.csproj
```

BrowserStack Test Reports and Dashboard

When you run your tests on BrowserStack, detailed test reports are generated. These include test execution logs, screenshots, and even videos of the test runs. You can access these reports through the BrowserStack App Automate Dashboard, which provides a detailed and comprehensive overview of the test execution. Alternatively, if you need to integrate with your own custom dashboard, you can use the REST API provided by BrowserStack. Highlights of the App Automate test report include:

- Session video: Captures a recording of the test as it happens in the session. Use the recording to jump to the precise point in time when an error occurred and debug it. (Example: https://devblogs.microsoft.com/dotnet/wp-content/uploads/sites/10/2025/03/TestRunVideo.mp4)
- Logs tab: Select the Text Logs, Console Logs, or Screenshots tab to view detailed logs. The logs also include Appium logs and network logs!

Summary

This post demonstrates how to integrate BrowserStack App Automate with Appium NUnit tests for .NET MAUI applications. It provides sample code showing how to run BrowserStack App Automate with your existing Appium UI tests and explains the GitHub Actions workflow used in the repository. UI testing is crucial for ensuring that your application behaves as expected from the user's perspective, and running tests on real devices, as opposed to emulators or simulators, helps identify issues that only appear under real-world conditions, such as different network environments, device-specific quirks, and actual user interactions. BrowserStack App Automate is a service that makes it easy to run your existing Appium NUnit tests on its App Automate Device Cloud.

Check out more samples at dotnet/maui-samples, and please let us know if anything is unclear or what you would like to see in follow-up posts.

The post Use BrowserStack App Automate with Appium UI Tests for .NET MAUI Apps appeared first on .NET Blog.
    • Announcing the general availability of custom instructions for VS Code. Read the full article.
    • The .NET team has just released Preview 2 of .NET 10, and it's got a bundle of new features and improvements. You might want to give these a try without messing with your local development environment. A great way to try out a .NET preview is by using dev containers. In this post, we'll walk through the steps to set up and use dev containers for experimenting with a new .NET release.

What are Dev Containers?

Dev containers are pre-configured, isolated environments that allow developers to work on projects without worrying about dependencies and configurations. They are particularly useful for trying out new technologies, as they provide a consistent and reproducible setup. Many development environments, including Visual Studio Code, support dev containers, letting you easily create and manage these environments. You can also use dev containers in GitHub Codespaces, which provides a cloud-based development environment.

Types of .NET Container Images

There are many types of .NET container images available, each designed for different scenarios. .NET container images are published to the Microsoft Artifact Registry and are regularly updated to include the latest patches and features, ensuring that you have access to the most secure and up-to-date versions. If you want more information about a specific .NET container image, such as mcr.microsoft.com/dotnet/nightly/sdk:9.0, you can use one of the following methods (see the example after this list):

- From the Microsoft Artifact Registry: The registry documentation provides comprehensive information about the .NET container images, including how they are tagged and updated.
- Use the docker inspect command: If you have already pulled the image, docker inspect gives you detailed information about it. For example:

```
docker inspect mcr.microsoft.com/dotnet/nightly/sdk:9.0
```
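If you only want a specific detail, docker inspect also accepts a Go-template filter. For example, this prints the image's environment variables, where the .NET images record the installed SDK/runtime versions (a usage sketch; the exact variable names vary by image, so treat them as an assumption to verify):

```
docker inspect --format '{{range .Config.Env}}{{println .}}{{end}}' mcr.microsoft.com/dotnet/nightly/sdk:9.0
```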
Some of these images are designed specifically for running .NET applications in production. For exploring a new .NET release, you'll want a dev container image that includes the .NET SDK and runtime. For exploring a preview release, you'll probably want a dev container for a current GA release augmented with the preview version of .NET 10 you want to try out.

Container Type | Best For          | Example Tag                                  | Notes
SDK            | Development       | mcr.microsoft.com/dotnet/sdk:9.0             | Includes full SDK, runtime, and development tools
Runtime        | Production        | mcr.microsoft.com/dotnet/runtime:9.0         | Smaller image with just the runtime
ASP.NET        | Web apps          | mcr.microsoft.com/dotnet/aspnet:9.0          | Includes the ASP.NET Core runtime
Nightly        | Testing previews  | mcr.microsoft.com/dotnet/nightly/sdk:10.0    | Latest preview builds
Dev Container  | Local development | mcr.microsoft.com/devcontainers/dotnet:1-8.0 | Pre-configured development environment with additional tools

Setting Up Your Dev Container

To get started with dev containers, you'll need Docker and Visual Studio Code with the Dev Containers extension installed. Follow these steps to set up your dev container for exploring a new .NET release.

Create a Dev Container Configuration

In your project directory, create a .devcontainer folder and add a devcontainer.json file. The easiest way to do this is with the Dev Containers extension in Visual Studio Code: open the Command Palette (Ctrl+Shift+P) and select "Dev Containers: Add Development Container Configuration Files…". You can store the configuration files in the workspace or in the user data folder outside the workspace; I generally choose the workspace option. Choose the "C# (.NET)" template, and it will create a .devcontainer folder with a devcontainer.json file that you can then customize as needed. For GA versions of .NET, there are prebuilt dev containers available; for preview versions, you can create a custom dev container configuration.

Add a Dockerfile

In my devcontainer configuration, I use a Dockerfile to pull in all the versions of .NET I need. I typically want the most recent LTS version and the most recent STS version as a base image; currently that means .NET 8 and .NET 9. These are useful for running dotnet-based tools that depend on one of these GA versions. On top of this I install the preview version of .NET 10 I want. The Dockerfile lives in the same directory as the devcontainer.json file. You point to it from the devcontainer.json file using the "dockerfile" property of the "build" property, as follows:

```json
"build": {
    "dockerfile": "./Dockerfile",
    "context": "."
},
```

This replaces the "image" property in the stock devcontainer.json file. I use the dev container of the most recent .NET LTS as the base image for my dev container:

```dockerfile
FROM mcr.microsoft.com/devcontainers/dotnet:1-8.0
```

Then I "install" the most recent STS version of .NET, currently .NET 9, and the preview version of .NET 10 I want to try out. To install these SDK versions, I copy the SDK from the corresponding SDK image using the Docker COPY command, as follows:

```dockerfile
# Install the current .NET STS release on top of that
COPY --from=mcr.microsoft.com/dotnet/sdk:9.0 /usr/share/dotnet /usr/share/dotnet

# Finally install the most recent .NET 10 preview from the nightly SDK image
COPY --from=mcr.microsoft.com/dotnet/nightly/sdk:10.0.100-preview.2 /usr/share/dotnet /usr/share/dotnet
```

You can see my complete devcontainer.json and Dockerfile for .NET 10 Preview 2 in my aspnet-whats-new repo.

Other Dev Container Configuration Options

You can customize your dev container further by adding options to the devcontainer.json file. What's really great about dev containers is that you can tailor them to your specific needs, installing only the tools and dependencies your project requires. Here are a few common options to consider; a combined sketch follows after this list:

- Extensions: Specify any Visual Studio Code extensions you want installed in your dev container. For example, to install the C# Dev Kit extension:

```json
"extensions": [
    "ms-dotnettools.csdevkit"
]
```

- Features: Specify additional features to include in your dev container. For example, to include the Azure CLI:

```json
"features": {
    "azure-cli": "latest"
}
```

You can see the full list of available features in the dev container features documentation.

- Post-create command: Specify a command to run after the dev container is created; this is useful for installing additional dependencies or running setup scripts. For example, to install the dotnet-ef tool:

```json
"postCreateCommand": "dotnet tool install -g dotnet-ef"
```

Be aware that the postCreateCommand runs every time you start the dev container, in contrast to commands in the Dockerfile, which run only when the dev container is built.
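Putting those pieces together, a complete devcontainer.json might look roughly like this (a minimal sketch using the same property shapes as the snippets above; the "name" value is an illustrative invention, and you should adjust extensions, features, and commands to your setup):

```json
{
    "name": ".NET 10 preview playground",
    "build": {
        "dockerfile": "./Dockerfile",
        "context": "."
    },
    "extensions": [
        "ms-dotnettools.csdevkit"
    ],
    "features": {
        "azure-cli": "latest"
    },
    "postCreateCommand": "dotnet tool install -g dotnet-ef"
}
```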
Build and Start Your Dev Container

Once your dev container configuration is set up, you can build and start the container. The easiest way to do this is with the "Dev Containers: Open Folder in Container" command, which builds and starts the dev container based on the configuration files you created. You can check that the .NET versions are installed correctly with:

```
dotnet --list-sdks
```

Note that once the container image is built, it is cached locally, so you don't have to rebuild it every time you start the dev container. This means, however, that if there are new service releases of .NET 8 or .NET 9, you will need to rebuild the dev container to pick up those changes. You can do this with the "Dev Containers: Rebuild Container Without Cache" command, which forces a rebuild of the dev container image and pulls the latest versions of the base images.

Conclusion

Using dev containers is a fantastic way to try out new .NET releases without affecting your local development environment. With a consistent and isolated setup, you can explore new features and enhancements with ease. Give it a try and let us know what you think!

The post Exploring new .NET releases with Dev Containers appeared first on .NET Blog.
    • Xbox services power many of the core experiences in Xbox gaming. Our services are primarily HTTP-based microservices, enabling experiences that range from telling our users who is online and what games are being played, to the ability to log in, to chat services. These services run on Azure compute and are primarily .NET based. Given that our service history spans from the Xbox 360 to the current generation of consoles, and that we must maintain compatibility across multiple device types as well as the individual games we support, any migrations or updates must be performed carefully.

Streamlining Innovation with .NET Aspire

For the past couple of years, we have been modernizing our codebase to adopt the latest patterns and versions of .NET, with a focus on the latest security best practices. This includes upgrading from .NET Framework to the latest versions of .NET and moving to modern orchestration platforms like Azure Kubernetes Service (AKS). As we pushed further on our move to AKS and started to iterate, we realized that doing a full validation in a 'true' environment is quite slow! After making our changes and deploying the code, we'd often find we had missed something subtle in how we send our telemetry, or hit a naming issue that required yet another iteration.

As we heard more about .NET Aspire, we investigated and found that it was a perfect fit. It lets us find all of those minor issues locally and removes much of the need for a full deployment to do basic hookup validation. With the ability to develop locally, hit breakpoints, and make quick changes, .NET Aspire quickly became a key tool for Xbox services. The team now leverages .NET Aspire's capabilities to streamline the transition: we can run a newly transitioned service locally, including seeing the critical metrics and logs, and debug before deployment. This reduces the need to deploy to a 'real' environment to detect many issues. .NET Aspire also automates emulator usage for Azure dependencies out of the box, saving developers time to focus on writing their own code; no more keeping emulators up to date and writing scripts to wire it all together. .NET Aspire enables our goal of 'clone + F5', which ultimately eases developer onboarding and debugging.

Setting up .NET Aspire

The .NET Aspire app host is a new type of project in Visual Studio. I think of the app host project as a superpowered startup script: it comes with deep integration with Azure and .NET out of the box, and you get to use C# to wire things up! Using the app host, you can fire up Azure emulators, connect them to your service, provide local environment overrides, and generate test data to inject at startup directly into the emulated resource. It's a familiar programming model, and it formalizes the fact that we are all cloud devs first and would need to integrate many processes plus Azure on our own anyway. Relying on real Azure resources during local development carries a lot of risk, offers little repeatability, and makes such resources hard for developers to share, so an easily hooked-up local Azure emulator is great. If you are on the latest .NET, using ASP.NET Core, and integrating with Azure, it's a no-brainer to kick off your emulators.
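The app host itself is just a small C# program. Here is a minimal sketch of its shape (the resource wiring from the example below slots into the middle):

```csharp
// Minimal .NET Aspire app host sketch. In a real app host project, the
// Aspire.Hosting types are available through the AppHost project SDK.
var builder = DistributedApplication.CreateBuilder(args);

// Emulators, projects, and references get registered here, e.g.:
// var cosmosdb = builder.AddAzureCosmosDB("cosmos").RunAsEmulator();

builder.Build().Run();
```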
For our example, we have a worker role consuming from an Azure Event Hub and publishing to a Cosmos DB, while a microservice front door reads the processed events from the same Cosmos DB. So we call AddAzureCosmosDB and AddAzureEventHubs and run them as emulators:

```csharp
IResourceBuilder<AzureCosmosDBResource> cosmosdb = builder.AddAzureCosmosDB(cosmosDbResourceName)
    .RunAsEmulator(c => c.WithLifetime(containerLifetime));

var eventHub = builder.AddAzureEventHubs(eventHubsConnectionName)
    .RunAsEmulator()
    .AddEventHub(eventHubResourceName);
```

Next, we set up our front-door microservice, making sure it waits for the emulators to finish initializing; then we go about injecting test data and off we go. Be sure to also set up your OTEL exporters to get deep telemetry:

```csharp
var fd = builder.AddProject<FrontDoor>("frontdoor")
    .WithEnvironment("ASPNETCORE_ENVIRONMENT", "Development")
    .WithReference(cosmosdb)
    .WithOtlpExporter()
    .WaitFor(cosmosdb);
```

Below, we output the Cosmos DB browser explorer endpoint so we can manually browse the emulator, and then we create the relevant databases and inject our test data:

```csharp
private static async Task<CosmosClient> CreateCosmosCollections(IResourceBuilder<AzureCosmosDBResource> cosmosDbResource)
{
    CosmosClient client = await TestDataGenerator.GetCosmosClient(cosmosDbResource.Resource);
    Console.WriteLine($"https://localhost:{client.Endpoint.Port}/_explorer/index.html");

    using CancellationTokenSource cts = new CancellationTokenSource(CosmosDbInitLoopTimeout);

    // Set up the database/containers
    DatabaseResponse dbRef = await client.CreateDatabaseIfNotExistsAsync(DatabaseId, cancellationToken: cts.Token);

    // Set up container for documents.
    _ = await dbRef.Database.CreateContainerIfNotExistsAsync(
        new ContainerProperties
        {
            // Exercise for the reader
        },
        ThroughputProperties.CreateAutoscaleThroughput(10000),
        cancellationToken: cts.Token);

    return client;
}

private static async Task LoadCosmosCollections(CosmosClient client)
{
    var documents = TestDataGenerator.GenerateDocs([ ]);
    foreach (var d in documents)
    {
        await client.GetContainer(DatabaseId, ContainerLocator.Get<Document>())
            .CreateItemAsync<Document>(d, new PartitionKey(d.PartitionKey));
    }
}
```

From there, we have a full end-to-end for anyone who clones and runs the repo! Once you kick off the solution, you get a nice view of all the emulators and services running: the Cosmos DB emulator, the Event Hub emulator, and our processor/front-door roles. This drastically speeds up developer onboarding and eases our local debugging experience. Want to test a specific kind of test data? You can manually browse to Cosmos DB to inject or change the raw data, then perform your experiment. Later, you could formalize that data and turn it into a test.

Aspire Dashboard and Observability

Key to Xbox services' high availability are our logs, metrics, and traces. Since we are aligning on OTEL, which .NET Aspire supports out of the box, we can see all our data right away, in real time. While migrating our services to adopt new frameworks, we use .NET Aspire to ensure that we are emitting the right metrics locally (and named properly! This is key!!!) and to fully exercise all our call paths and see the output data. From the initial .NET Aspire dashboard, we can view all the relevant console output, structured logs, tracing, and metrics that our app produces. We get to see our custom XAC (Xbox Availability Counter) metrics as well!
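For a flavor of what a custom counter like that involves, here is a minimal sketch using System.Diagnostics.Metrics (the meter and counter names are illustrative inventions, not Xbox's actual instrumentation); anything recorded this way flows through the OTLP exporter into the Aspire dashboard's Metrics view:

```csharp
// Minimal sketch: a custom availability-style counter. Names are illustrative.
using System.Collections.Generic;
using System.Diagnostics.Metrics;

var meter = new Meter("Sample.FrontDoor");
var requests = meter.CreateCounter<long>("frontdoor.requests");

// Record one successful request; tags let the dashboard slice the metric.
requests.Add(1, new KeyValuePair<string, object?>("outcome", "success"));
```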
By providing a unified view of all telemetry data, .NET Aspire reduces the need for custom local monitoring tools and makes for a more efficient local development environment all up. Ultimately, this means we can fine-tune our service and telemetry locally!

Streamlining Xbox Services: The Role of .NET Aspire in Development and Testing

Once set up, .NET Aspire has helped us with multiple development scenarios, including seemingly simple things like making sure the service starts properly before it even takes traffic. For example, we are moving our legacy services to a dependency injection (DI) pattern, and a lot of subtle issues can crop up, especially for devs new to the pattern. Just having .NET Aspire get our project up and running, connect to emulators, and have the service respond to a simple health check is enough to force a lot of code to initialize and start up. Thanks to .NET Aspire, we can find and iterate on a lot of issues with this simple kind of test!

Metrics, tracing, and logs are vital for running our high-scale services. We also have downstream consumers who need the names and formats to align; small errors can cause alerts not to fire and dashboards to break. Using .NET Aspire, we get a chance to double-check all of this locally, before any costly (in time) deployments to a 'real' environment.

The ability to spin up our own .NET Aspire hosting integrations has been valuable in sharing some of our core services. Other services can depend on the integrations we create for our common services during their local development, enabling early integration and testing. Since the .NET Aspire integrations also output their own telemetry, we can start to see how the services interact and jointly output metrics, traces, and logs. This also gives us a fantastic baseline prior to migration to more modern platforms!

The third-party integration capabilities of .NET Aspire are also impressive, particularly with tools like WireMock to simulate REST services that we don't otherwise own. This allows us to test our services in a controlled environment, ensuring they can handle various scenarios and integrations seamlessly. It has made our testing process more comprehensive and reliable!

Conclusion

By providing a comprehensive and unified view of all telemetry data, and by letting us easily hook all our services together, .NET Aspire allows for the seamless integration and observation of logs, metrics, and traces. This has streamlined our local development environment: clone the repo, hit F5, and call your service! Another major advantage is the elimination of disparate tools. With .NET Aspire, we can programmatically wire up our services and emulators and put all the relevant data in one place, reducing complexity and saving time. This has significantly improved our local dev-loop debugging experience!

The post Xbox + .NET Aspire: Transforming Local Development Practices appeared first on .NET Blog.