.NET Multi-platform App UI (.NET MAUI) continues to evolve with each release, and .NET 9 brings a focus on trimming and a newly supported runtime: NativeAOT. These features can help you reduce application size, improve startup times, and ensure your applications run smoothly on various platforms. Both developers looking to optimize their .NET MAUI applications and NuGet package authors can take advantage of these features in .NET 9.

We'll also walk through the options available to you as a developer for measuring the performance of your .NET MAUI applications. CPU sampling and memory snapshots are available via dotnet-trace and dotnet-gcdump, respectively. These can give insights into performance problems in your application, in NuGet packages, or even into something we should look at in .NET MAUI itself.

## Background

By default, .NET MAUI applications on iOS and Android use the following settings:

- **Self-contained**, meaning a copy of the BCL and runtime is included with the application.

  > **Note:** This makes .NET MAUI applications suitable for running on app stores, as no prerequisites such as installing a .NET runtime are required.

- **Partially trimmed** (`TrimMode=partial`), meaning that code within your application or NuGet packages is not trimmed by default.

  > **Note:** This is a good default, as it is the most compatible with existing code and NuGet packages in the ecosystem.

## Full Trimming

This is where full trimming (`TrimMode=full`) can make an impact on your application's size. If you have a substantial amount of C# code or NuGet packages, you may be missing out on a significant application size reduction. To opt into full trimming, add the following to your .csproj file:

```xml
<PropertyGroup>
  <TrimMode>full</TrimMode>
</PropertyGroup>
```

For an idea of the impact of full trimming:

> **Note:** MyPal is a sample .NET MAUI application that is a useful comparison because of its usage of several common NuGet packages.
See our trimming .NET MAUI documentation for more information on full trimming.

## NativeAOT

Building upon full trimming, NativeAOT relies on libraries being both trim-compatible and AOT-compatible. NativeAOT is a new runtime that can improve startup time and reduce application size compared to existing runtimes.

> **Note:** NativeAOT is not yet supported on Android, but is available on iOS, MacCatalyst, and Windows.

To opt into NativeAOT:

```xml
<PropertyGroup>
  <IsAotCompatible>true</IsAotCompatible>
  <PublishAot>true</PublishAot>
</PropertyGroup>
```

For an idea of the impact of NativeAOT on application size and startup performance:

> **Note:** macOS in the above measurements refers to MacCatalyst, the default for .NET MAUI applications running on Mac operating systems.

See our NativeAOT deployment documentation for more information about this newly supported runtime.

## NuGet Package Authors

As a NuGet package author, you may want your package to run in either fully trimmed or NativeAOT scenarios. This can be useful for developers targeting .NET MAUI, mobile, or even self-contained ASP.NET microservices. To support NativeAOT, you will need to:

1. Mark your assemblies as trim-compatible and AOT-compatible.
2. Enable Roslyn analyzers for trimming and NativeAOT.
3. Solve all the warnings.

Begin by modifying your .csproj file:

```xml
<PropertyGroup>
  <IsTrimmable>true</IsTrimmable>
  <IsAotCompatible>true</IsAotCompatible>
</PropertyGroup>
```

These properties enable Roslyn analyzers and include [assembly: AssemblyMetadata] information in the resulting .NET assembly. Depending on your library's usage of features like System.Reflection, you could have either just a few warnings or potentially many. See the documentation on preparing libraries for trimming for more information.

## XAML and Trimming

Sometimes, taking advantage of NativeAOT in your app can be as easy as adding a property to your project file. However, for many .NET MAUI applications, there can be a lot of warnings to solve.
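Many of these warnings stem from reflection the trim analyzer cannot reason about. As a hedged illustration of one common way such a warning is resolved (the helper below is a made-up example, not a .NET MAUI or BCL API), annotating a reflected type parameter with `[DynamicallyAccessedMembers]` tells the trimmer which members it must keep:

```csharp
using System;
using System.Diagnostics.CodeAnalysis;

public static class TrimSafeHelpers
{
    // Without the attribute, the trimmer warns (e.g. IL2070) that it cannot
    // prove the public properties of 'type' survive trimming. The annotation
    // tells it to preserve public properties of any type passed in here.
    public static string[] GetPropertyNames(
        [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicProperties)]
        Type type)
        => Array.ConvertAll(type.GetProperties(), p => p.Name);

    public static void Main()
    {
        // System.Version's public properties include Major and Minor.
        Console.WriteLine(string.Join(", ", GetPropertyNames(typeof(Version))));
    }
}
```

Annotations like this flow through the call graph, so the analyzer can verify callers pass types whose properties are actually preserved.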
The NativeAOT compiler removes unnecessary code and metadata to make the app smaller and faster. However, this requires understanding which types can be created and which methods can and cannot be called at runtime. This is often impossible in code that heavily uses System.Reflection. There are two areas in .NET MAUI which fall into this category: XAML and data binding.

## Compiled XAML

Loading XAML at runtime provides flexibility and enables features like XAML hot reload. XAML can instantiate any class in the whole app, the .NET MAUI SDK, and referenced NuGet packages, and it can set values to any property. Conceptually, loading a XAML layout at runtime requires:

1. Parsing the XML document.
2. Looking up the control types based on the XML element names using Type.GetType(xmlElementName).
3. Creating new instances of the controls using Activator.CreateInstance(controlType).
4. Converting the raw string XML attribute values into the target type of the property.
5. Setting properties based on the names of the XML attributes.

This process is not only slow, it also presents a great challenge for NativeAOT. For example, the trimmer does not know which types will be looked up using the Type.GetType method. This means that either the compiler would need to keep all the classes from the whole .NET MAUI SDK and all the NuGet packages in the final app, or the method might not be able to find the types declared in the XML input and fail at runtime.

Fortunately, .NET MAUI has a solution: XAML compilation. This turns XAML into the actual code of the InitializeComponent() method at build time. Once the code is generated, the NativeAOT compiler has all the information it needs to trim your app. In .NET 9, we implemented the last remaining XAML features that the compiler could not handle in previous releases, most notably compiling bindings.

Lastly, if your app relies on loading XAML at runtime, NativeAOT might not be suitable for your application.
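To see why the trimmer struggles with runtime loading, the reflection steps described above boil down to something like the following minimal sketch (illustrative only; the `Label` class here is a local stand-in, not the .NET MAUI control, and real XAML loading does much more):

```csharp
using System;
using System.Reflection;

// A stand-in for a .NET MAUI control, for illustration.
public class Label
{
    public string Text { get; set; } = "";
}

public static class XamlLoaderSketch
{
    // Steps 2-5 of runtime XAML loading, boiled down: look up the type by
    // name, instantiate it, convert the attribute string, set the property.
    // None of these calls give the trimmer a static view of what is used.
    public static object CreateControl(string typeName, string propertyName, string rawValue)
    {
        Type controlType = Type.GetType(typeName)
            ?? throw new InvalidOperationException($"Type not found: {typeName}");
        object control = Activator.CreateInstance(controlType)!;
        PropertyInfo property = controlType.GetProperty(propertyName)
            ?? throw new InvalidOperationException($"Property not found: {propertyName}");
        property.SetValue(control, Convert.ChangeType(rawValue, property.PropertyType));
        return control;
    }

    public static void Main()
    {
        var label = (Label)CreateControl("Label", "Text", "Hello");
        Console.WriteLine(label.Text); // Hello
    }
}
```

Because `typeName` and `propertyName` are runtime strings, the trimmer cannot prove which types and members are reachable; compiled XAML replaces all of this with direct, statically analyzable code.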
## Compiled Bindings

A binding ties together a source property with a target property. When the source changes, the value is propagated to the target. Bindings in .NET MAUI are defined using a string "path" that resembles a C# expression for accessing properties and indexers. When the binding is applied to a source object, .NET MAUI uses System.Reflection to follow the path and access the desired source property. This suffers from the same problems as loading XAML at runtime: the trimmer does not know which properties could be accessed by reflection, so it does not know which properties it can safely trim from the final application.

When we know the type of the source object at build time from x:DataType attributes, we can compile the binding path into a simple getter method (and a setter method for two-way bindings). The compiler will also ensure that the binding listens to property changes along the binding path on properties that implement INotifyPropertyChanged.

The XAML compiler could already compile most bindings in .NET 8 and earlier. In .NET 9, we made sure any binding in your XAML code can be compiled. Learn more about this feature in the compiled bindings documentation.

### Compiled bindings in C#

Up until .NET 8, the only supported way of defining bindings in C# code was a string-based path. In .NET 9, we are adding a new API which allows the binding to be compiled using a source generator:

```csharp
// .NET 8 and earlier
myLabel.SetBinding(Label.TextProperty, "Text");

// .NET 9
myLabel.SetBinding(Label.TextProperty, static (Entry nameEntry) => nameEntry.Text);
```

The Binding.Create() method is also an option for when you need to save the Binding instance for later use:

```csharp
var nameBinding = Binding.Create(static (Entry nameEntry) => nameEntry.Text);
```

.NET MAUI's source generator will compile the binding the same way the XAML compiler does. This way the binding can be fully analyzed by the NativeAOT compiler.
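To make the difference concrete, here is a simplified standalone sketch (not .NET MAUI's actual generated code) contrasting a reflection-based string path with a compiled getter delegate; the `ContactInformation` and `FullName` types mirror the ones used in the benchmarks:

```csharp
using System;

public class FullName
{
    public string FirstName { get; set; }
    public FullName(string firstName) => FirstName = firstName;
}

public class ContactInformation
{
    public FullName FullName { get; set; }
    public ContactInformation(FullName fullName) => FullName = fullName;
}

public static class BindingSketch
{
    // Classic binding: walk a string path one segment at a time via reflection.
    // The trimmer cannot tell which properties this will touch.
    public static object? GetByPath(object? source, string path)
    {
        foreach (var segment in path.Split('.'))
        {
            if (source is null) return null;
            source = source.GetType().GetProperty(segment)?.GetValue(source);
        }
        return source;
    }

    // Compiled binding: an ordinary delegate the compiler, JIT, and trimmer
    // can all analyze statically.
    public static readonly Func<ContactInformation, string?> CompiledGetter =
        static contact => contact.FullName?.FirstName;

    public static void Main()
    {
        var contact = new ContactInformation(new FullName("John"));
        Console.WriteLine(GetByPath(contact, "FullName.FirstName")); // John
        Console.WriteLine(CompiledGetter(contact));                  // John
    }
}
```

Both calls produce the same value, but only the delegate version keeps the accessed properties visible to the NativeAOT compiler, and it skips path parsing and reflection entirely.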
Even if you aren't planning to migrate your application to NativeAOT, compiled bindings can improve the general performance of your bindings. To illustrate the difference, let's use BenchmarkDotNet to measure the difference between calls to SetBinding() on Android using the Mono runtime:

```csharp
// dotnet build -c Release -t:Run -f net9.0-android
public class SetBindingBenchmark
{
    private readonly ContactInformation _contact = new ContactInformation(new FullName("John"));
    private readonly Label _label = new();

    [GlobalSetup]
    public void Setup()
    {
        DispatcherProvider.SetCurrent(new MockDispatcherProvider());
        _label.BindingContext = _contact;
    }

    [Benchmark(Baseline = true)]
    public void Classic_SetBinding()
    {
        _label.SetBinding(Label.TextProperty, "FullName.FirstName");
    }

    [Benchmark]
    public void Compiled_SetBinding()
    {
        _label.SetBinding(Label.TextProperty, static (ContactInformation contact) => contact.FullName?.FirstName);
    }

    [IterationCleanup]
    public void Cleanup()
    {
        _label.RemoveBinding(Label.TextProperty);
    }
}
```

When I ran the benchmark on a Samsung Galaxy S23, I got the following results:

| Method              | Mean     | Error    | StdDev   | Ratio | RatioSD |
|---------------------|----------|----------|----------|-------|---------|
| Classic_SetBinding  | 67.81 us | 1.338 us | 1.787 us | 1.00  | 0.04    |
| Compiled_SetBinding | 30.61 us | 0.629 us | 1.182 us | 0.45  | 0.02    |

The classic binding needs to first parse the string-based path and then use System.Reflection to get the current value of the source.
Each subsequent update of the source property will also be faster with the compiled binding:

```csharp
// dotnet build -c Release -t:Run -f net9.0-android
public class UpdateValueTwoLevels
{
    ContactInformation _contact = new ContactInformation(new FullName("John"));
    Label _label = new();

    [GlobalSetup]
    public void Setup()
    {
        DispatcherProvider.SetCurrent(new MockDispatcherProvider());
        _label.BindingContext = _contact;
    }

    [IterationSetup(Target = nameof(Classic_UpdateWhenSourceChanges))]
    public void SetupClassicBinding()
    {
        _label.SetBinding(Label.TextProperty, "FullName.FirstName");
    }

    [IterationSetup(Target = nameof(Compiled_UpdateWhenSourceChanges))]
    public void SetupCompiledBinding()
    {
        _label.SetBinding(Label.TextProperty, static (ContactInformation contact) => contact.FullName?.FirstName);
    }

    [Benchmark(Baseline = true)]
    public void Classic_UpdateWhenSourceChanges()
    {
        _contact.FullName.FirstName = "Jane";
    }

    [Benchmark]
    public void Compiled_UpdateWhenSourceChanges()
    {
        _contact.FullName.FirstName = "Jane";
    }

    [IterationCleanup]
    public void Reset()
    {
        _label.Text = "John";
        _contact.FullName.FirstName = "John";
        _label.RemoveBinding(Label.TextProperty);
    }
}
```

| Method                           | Mean     | Error    | StdDev   | Ratio | RatioSD |
|----------------------------------|----------|----------|----------|-------|---------|
| Classic_UpdateWhenSourceChanges  | 46.06 us | 0.934 us | 1.369 us | 1.00  | 0.04    |
| Compiled_UpdateWhenSourceChanges | 30.85 us | 0.634 us | 1.295 us | 0.67  | 0.03    |

The differences for a single binding aren't that dramatic, but they add up. This can be noticeable on complex pages with many bindings or when scrolling lists like CollectionView or ListView. The full source code of the above benchmarks is available on GitHub.

## Profiling .NET MAUI Applications

Attaching dotnet-trace to a .NET MAUI application allows you to collect profiling information in formats like .nettrace and .speedscope. These give you CPU sampling information about the time spent in each method in your application. This is quite useful for finding where time is spent during startup or in the general performance of your .NET applications.
Likewise, dotnet-gcdump can take memory snapshots of your application that display every managed C# object in memory. dotnet-dsrouter is required for connecting dotnet-trace to a remote device, so it is not needed for desktop applications. You can install these tools with:

```shell
$ dotnet tool install -g dotnet-trace
You can invoke the tool using the following command: dotnet-trace
Tool 'dotnet-trace' was successfully installed.

$ dotnet tool install -g dotnet-dsrouter
You can invoke the tool using the following command: dotnet-dsrouter
Tool 'dotnet-dsrouter' was successfully installed.

$ dotnet tool install -g dotnet-gcdump
You can invoke the tool using the following command: dotnet-gcdump
Tool 'dotnet-gcdump' was successfully installed.
```

From here, instructions differ slightly for each platform, but generally the steps are:

1. Build your application in Release mode. For Android, set `<AndroidEnableProfiler>true</AndroidEnableProfiler>` in your .csproj file, so the required Mono diagnostic components are included in the application.

2. If profiling mobile, run dotnet-dsrouter android (or dotnet-dsrouter ios, etc.) on your development machine.

3. Configure environment variables so the application can connect to the profiler. For example, on Android:

   ```shell
   $ adb reverse tcp:9000 tcp:9001
   # no output
   $ adb shell setprop debug.mono.profile '127.0.0.1:9000,nosuspend,connect'
   # no output
   ```

4. Run your application.
5. Attach dotnet-trace (or dotnet-gcdump) to the application, using the PID of dotnet-dsrouter:

   ```shell
   $ dotnet-trace ps
   38604 dotnet-dsrouter ~/.dotnet/tools/dotnet-dsrouter.exe ~/.dotnet/tools/dotnet-dsrouter.exe android

   $ dotnet-trace collect -p 38604 --format speedscope
   No profile or providers specified, defaulting to trace profile 'cpu-sampling'
   Provider Name                        Keywords            Level            Enabled By
   Microsoft-DotNETCore-SampleProfiler  0x0000F00000000000  Informational(4) --profile
   Microsoft-Windows-DotNETRuntime      0x00000014C14FCCBD  Informational(4) --profile
   Waiting for connection on /tmp/maui-app
   Start an application with the following environment variable: DOTNET_DiagnosticPorts=/tmp/maui-app
   ```

For iOS, macOS, and MacCatalyst, see the iOS profiling wiki page for more information.

> **Note:** For Windows applications, you might simply use Visual Studio's built-in profiling tools, but `dotnet-trace collect -- C:\path\to\an\executable.exe` is also an option.

Now that you've collected a file containing performance information, the next step is opening it to view the data:

- dotnet-trace by default outputs .nettrace files, which can be opened in PerfView or Visual Studio.
- dotnet-trace collect --format speedscope outputs .speedscope files, which can be opened in the Speedscope web app.
- dotnet-gcdump outputs .gcdump files, which can be opened in PerfView or Visual Studio. Note that there is currently no good option for opening these files on macOS.

In the future, we hope to make profiling .NET MAUI applications easier in both future releases of the above .NET diagnostic tooling and Visual Studio.

> **Note:** The NativeAOT runtime does not support dotnet-trace or performance profiling. You can use the other supported runtimes for this, or use native profiling tools instead, such as Xcode's Instruments.

See the profiling .NET MAUI wiki page for links to documentation on each platform, or the profiling demo on YouTube for a full walkthrough.
## Conclusion

.NET 9 introduces performance enhancements for .NET MAUI applications through full trimming and NativeAOT. These features enable developers to create more efficient and responsive applications by reducing application size and improving startup times. By leveraging tools like dotnet-trace and dotnet-gcdump, developers can gain insights into their application's performance. For a full rundown of .NET MAUI trimming and NativeAOT, see the .NET Conf 2024 session on the topic.

The post .NET MAUI Performance Features in .NET 9 appeared first on .NET Blog.
We're excited to announce the Chroma C# SDK. Whether you're building AI solutions or enhancing existing projects with advanced search capabilities, you now have the option of using Chroma as a database provider in your .NET applications.

## What is Chroma?

Chroma is an open-source database for your AI applications. With support for storing embeddings, metadata filtering, vector search, full-text search, document storage, and multi-modal retrieval, you can use Chroma to power semantic search and Retrieval Augmented Generation (RAG) features in your app. For more details, check out the Chroma website.

## Get started with Chroma in your C# application

In this scenario, we'll use the ChromaDB.Client package to connect to a Chroma database and search for movies using vector search. The easiest way to start is locally using the Chroma Docker image. You can also deploy an instance in Azure.

### Connect to the database

1. Create a C# console application.
2. Install the ChromaDB.Client NuGet package.
3. Create a ChromaClient with configuration options:

```csharp
using ChromaDB.Client;

var configOptions = new ChromaConfigurationOptions(uri: "http://localhost:8000/api/v1/");
using var httpClient = new HttpClient();
var client = new ChromaClient(configOptions, httpClient);
```

When using a hosted version of Chroma, replace the uri with your hosted endpoint.

### Create a collection

Now that you have a client, create a collection to store movie data:

```csharp
var collection = await client.GetOrCreateCollection("movies");
```

To perform operations on that collection, you'll then need to create a collection client:

```csharp
var collectionClient = new ChromaCollectionClient(collection, configOptions, httpClient);
```

### Add data to your collection

Once your collection is created, it's time to add data to it. The data we're storing consists of:

- Movie IDs
- Embeddings to represent each movie description
- Metadata containing the movie title

| ID | Title | Embedding | Movie Description |
|----|-------|-----------|-------------------|
| 1 | The Lion King | [0.10022575, -0.23998135] | The Lion King is a classic Disney animated film that tells the story of a young lion named Simba who embarks on a journey to reclaim his throne as the king of the Pride Lands after the tragic death of his father. |
| 2 | Inception | [0.10327095, 0.2563685] | Inception is a mind-bending science fiction film directed by Christopher Nolan. It follows the story of Dom Cobb, a skilled thief who specializes in entering people's dreams to steal their secrets. However, he is offered a final job that involves planting an idea into someone's mind. |
| 3 | Toy Story | [0.095857024, -0.201278] | Toy Story is a groundbreaking animated film from Pixar. It follows the secret lives of toys when their owner, Andy, is not around. Woody and Buzz Lightyear are the main characters in this heartwarming tale. |
| 4 | Pulp Fiction | [0.106827796, 0.21676421] | Pulp Fiction is a crime film directed by Quentin Tarantino. It weaves together interconnected stories of mobsters, hitmen, and other colorful characters in a non-linear narrative filled with dark humor and violence. |
| 5 | Shrek | [0.09568083, -0.21177962] | Shrek is an animated comedy film that follows the adventures of Shrek, an ogre who embarks on a quest to rescue Princess Fiona from a dragon-guarded tower in order to get his swamp back. |
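As an aside, the nearest-neighbor distances a vector query returns over these embeddings can be reproduced by hand, assuming Chroma's default squared-Euclidean (L2) distance metric (an assumption; check your collection's configured metric). A minimal standalone sketch using the embeddings from the table above and the query vector used later in this post:

```csharp
using System;
using System.Linq;

public static class DistanceDemo
{
    // Squared Euclidean (L2) distance between two embedding vectors.
    public static float SquaredL2(float[] a, float[] b) =>
        a.Zip(b, (x, y) => (x - y) * (x - y)).Sum();

    public static void Main()
    {
        float[] query = { 0.12217915f, -0.034832448f }; // "A family friendly movie"
        float[] toyStory = { 0.095857024f, -0.201278f };
        float[] shrek = { 0.09568083f, -0.21177962f };
        float[] pulpFiction = { 0.106827796f, 0.21676421f };

        Console.WriteLine(SquaredL2(query, toyStory));    // ~0.0284 (closest)
        Console.WriteLine(SquaredL2(query, shrek));       // ~0.0320
        Console.WriteLine(SquaredL2(query, pulpFiction)); // ~0.0635 (farther away)
    }
}
```

Smaller distances mean more similar embeddings, which is why the family-friendly query ranks Toy Story and Shrek ahead of Pulp Fiction.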
```csharp
List<string> movieIds = ["1", "2", "3", "4", "5"];

List<ReadOnlyMemory<float>> descriptionEmbeddings =
[
    new[] { 0.10022575f, -0.23998135f },
    new[] { 0.10327095f, 0.2563685f },
    new[] { 0.095857024f, -0.201278f },
    new[] { 0.106827796f, 0.21676421f },
    new[] { 0.09568083f, -0.21177962f },
];

List<Dictionary<string, object>> metadata =
[
    new Dictionary<string, object> { ["Title"] = "The Lion King" },
    new Dictionary<string, object> { ["Title"] = "Inception" },
    new Dictionary<string, object> { ["Title"] = "Toy Story" },
    new Dictionary<string, object> { ["Title"] = "Pulp Fiction" },
    new Dictionary<string, object> { ["Title"] = "Shrek" },
];

await collectionClient.Add(movieIds, descriptionEmbeddings, metadata);
```

### Search for movies (using vector search)

Now that your data is in the database, you can query it. In this case, we're using vector search with the following query embedding:

| Text | Embedding |
|------|-----------|
| A family friendly movie | [0.12217915, -0.034832448] |

```csharp
List<ReadOnlyMemory<float>> queryEmbedding = [new([0.12217915f, -0.034832448f])];

var queryResult = await collectionClient.Query(
    queryEmbeddings: queryEmbedding,
    nResults: 2,
    include: ChromaQueryInclude.Metadatas | ChromaQueryInclude.Distances);

foreach (var result in queryResult)
{
    foreach (var item in result)
    {
        Console.WriteLine($"Title: {(string)item.Metadata["Title"] ?? string.Empty} {(item.Distance)}");
    }
}
```

The result should look similar to the following output:

```
Title: Toy Story 0.028396977
Title: Shrek 0.032012463
```

### Watch it live

Join Jiří Činčura on the .NET Data Community Standup on February 26 to learn more about how to use Chroma and the new C# SDK.

## Conclusion

This latest addition enhances the growing AI ecosystem in .NET. It paves the way for a simpler implementation of the existing Semantic Kernel connector and seamless integration into your .NET apps using foundational components like Microsoft.Extensions.VectorData and Microsoft.Extensions.AI. We'd like to thank @ssone95 for his work and contributions to the project.
We're excited to continue building partnerships and working with the community to enable .NET developers to build AI applications. To learn how you can start building AI apps using databases like Chroma, check out the .NET AI documentation. Try out the Chroma C# SDK today and provide feedback.

The post Announcing Chroma DB C# SDK appeared first on .NET Blog.
If you are building web apps with Razor, we have some great new features that you are going to love in both Visual Studio and Visual Studio Code! Extract to Component refactoring and the new Roslyn-based C# tokenizer are now available and are designed to improve your productivity in Razor files. Let's take a look.

## Extract to Component

Extract to Component, available in Visual Studio 17.12, is a new refactoring that automates the process of creating a new Razor/Blazor component. Instead of manually creating a new file and copy/pasting the code you want to extract, highlight the code (or tag) you want to extract and select the lightbulb refactoring (Ctrl+.) to do that work for you. This feature makes it easier to create reusable components, allowing for a cleaner and more manageable codebase.

In this first iteration of the feature, Extract to Component focuses on support for basic, mostly HTML-based extraction scenarios. However, we have plans to add further improvements and more advanced scenarios (e.g., more consistent extractions involving variable dependencies, C#, and parameters).

## Roslyn C# Tokenizer

The C# tokenizer/lexer update brings significant improvements to how Razor handles C# code. Many users have expressed frustration with not being able to use raw string literals and verbatim interpolated strings in Razor files, and the new Roslyn C# lexer fixes that! In addition to these string formats, the lexer also adds support for binary literals and improves the handling of C# preprocessor directives, ensuring they follow C# rules. Ultimately, the new lexer will also make it easier to support new C# language features going forward.

This new tokenizer is not on by default until .NET 10, but it is available in both Visual Studio (17.13) and Visual Studio Code for .NET 9.
To enable the C# tokenizer today, check the "Use the C# tokenizer for Razor files in the IDE" option under Tools > Options > Preview Features, and add the following to a property group in your .csproj or Directory.Build.props file:

```xml
<Features>use-roslyn-tokenizer;$(Features)</Features>
```

This new lexer does currently come with some breaking changes, particularly around preprocessor directives, so we encourage you to share any related issues you experience in the Razor GitHub repository.

## Summary

These two updates, Extract to Component and the C# tokenizer, help enhance your Razor productivity. By adopting these features, you can ensure cleaner code, better language support, and an overall more efficient development process. However, there's always room for improvement! To share your Razor feedback, submit issues in our Razor GitHub repo or the Developer Community, or check out this survey to share your Extract to Component feedback. Finally, if you'd like to chat directly with the Razor team about our upcoming roadmap and how we're addressing your issues, you can join our upcoming .NET Community Standup on February 18th!

The post New Features for Enhanced Razor Productivity! appeared first on .NET Blog.
Today we're excited to introduce a new hands-on course designed for .NET developers who want to explore the world of Generative AI: Generative AI for Beginners - .NET. Our focus in this course is code-first, teaching you what you need to know to be confident building .NET GenAI applications today.

## What is this course about?

As generative AI becomes more accessible, it's essential for developers to understand how to use it responsibly and effectively. To fill this need, we created a course that covers the basics of Generative AI for the .NET ecosystem, including how to set up your .NET environment, core techniques, practical samples, and responsible use of AI. You'll learn how to create real-world .NET AI-based apps using a variety of libraries and tools, including Microsoft Extensions for AI, GitHub Models and Codespaces, Semantic Kernel, Ollama, and more.

Every lesson includes:

- Short 5-10 minute videos explaining each concept.
- Fully functional .NET code samples ready to run and explore.
- Integration with GitHub Codespaces and GitHub Models for quick, convenient setup.
- Guidance on using GitHub Models and local models with Ollama for flexibility and privacy.

## Lessons Overview

These lessons provide a guided roadmap, starting with core generative AI concepts for .NET developers and how to configure your environment to access AI models in the cloud or locally via Ollama. You'll then explore techniques that go beyond text processing, such as assembling practical solutions with chatbots, including adding video and real-time audio to chat. You'll also learn about the world of AI agents: autonomous, intelligent agents that act on the user's behalf. Finally, you'll learn about the importance of responsible AI use, ensuring your applications remain ethical and secure.
Examples of what you'll build include a semantic search feature and real-time voice chat.

## Getting Started

All that's required is some .NET experience and a desire to learn! You can clone the repo and work entirely locally. Even better, we've done our best to remove all the friction from getting started: you can run everything in GitHub Codespaces and use GitHub Models to access the various LLMs we'll use in the course, all for free. Check out the course repository and explore the lessons at your own pace.

## Watch an overview on the .NET AI Community Standup

Check out the .NET AI Community Standup where we gave a sneak peek into the Generative AI for Beginners .NET course, showcasing how .NET developers can harness the power of Generative AI in real-world scenarios.

## Contribute and Connect

Join us on GitHub; contributions are welcome! Submit issues, add new code samples, or create pull requests. You can also join the Azure AI Community Discord to connect with other AI enthusiasts. We look forward to seeing what you build with us! Get started right away and discover how simple it can be to bring AI into your .NET projects.

The post Announcing Generative AI for Beginners – .NET appeared first on .NET Blog.
Announcing the Next Edit Suggestions and Agent Mode for GitHub Copilot in Visual Studio Code.
Here is a list of this month's .NET releases, including .NET 9.0.2 and .NET 8.0.13. It should be noted that this month's release does not include any new security updates.

| | .NET 8.0 | .NET 9.0 |
|---|----------|----------|
| Release Notes | 8.0.13 | 9.0.2 |
| Installers and binaries | 8.0.13 | 9.0.2 |
| Container Images | images | images |
| Linux packages | 8.0.13 | 9.0.2 |
| Known Issues | 8.0 | 9.0 |

Release changelogs:

- ASP.NET Core: 8.0.13 | 9.0.2
- EF Core: 9.0.2
- Runtime: 8.0.13 | 9.0.2
- SDK: 8.0.13 | 9.0.2
- Windows Forms: 8.0.13 | 9.0.2

Share feedback about this release in the Release feedback issue.

## .NET Framework February 2025 Updates

This month, there are no new security or non-security updates. For recent .NET Framework servicing updates, be sure to browse our release notes for .NET Framework for more details.

## See you next month

That's it for this month; make sure you update to the latest service release today.

The post .NET and .NET Framework February 2025 servicing releases updates appeared first on .NET Blog.
Responding to your feedback, the team has been rolling out a series of updates aimed at enhancing the user experience and improving performance and reliability. These updates are designed to make coding in C# more efficient, enjoyable, and productive for developers using VS Code.

## Solution Explorer Updates

You told us you don't always need a solution file in your workspace. Solution-less workspace mode is now in preview. This feature allows developers to work on C# projects without the need for a solution file (.sln), streamlining the workflow and reducing overhead. Try it out now by setting dotnet.previewSolution-freeWorkspaceMode to true.

## .NET Aspire Orchestration

Also in preview, you can now make any solution a .NET Aspire solution by adding the .NET Aspire App Host and Service Defaults projects to it, letting .NET Aspire simplify your run, debug, and deployment process for your existing application. Open the command palette, select .NET: Add .NET Aspire Orchestration, tell it which projects to orchestrate, name the AppHost and ServiceDefaults projects, and you are on your way.

## Razor/Blazor Experience

Improvements to the Razor/Blazor experience include improvements to Hot Reload (currently in experimental mode) and enhancements to Razor error management and IntelliSense. To enable Hot Reload, set csharp.experimental.debug.hotReload to true. We continue to improve this experience and have made it more reliable, working toward this feature's general availability. For IntelliSense, we've addressed several issues around go-to-definition reliability and erroneous errors appearing in the Problems pane. When you fix a problem, the error now goes away without a build, making your Razor editing experience much more productive.
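For reference, the preview settings mentioned above are ordinary VS Code settings. A sketch of what they look like in settings.json (setting names as given in this post; values illustrative):

```
{
  // Preview: work on C# projects without a .sln file
  "dotnet.previewSolution-freeWorkspaceMode": true,

  // Experimental: Razor/Blazor Hot Reload
  "csharp.experimental.debug.hotReload": true
}
```

You can also set these through the Settings UI by searching for the setting names.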
## Debugging Enhancements

The debugging capabilities of the C# Dev Kit have been improved, including enhancements to Blazor web page debugging and the ability to locally debug Azure Functions apps (including Azure Functions within .NET Aspire apps). These updates make it easier for developers to identify and resolve issues in their cloud-native code, leading to faster and more effective debugging sessions. As always, you can debug your solutions without creating a launch.json file: just press F5 (or select Run > Start Debugging), select C# from the menu, and select which project is your start-up project, and a debug session will begin.

## Testing

Testing has seen several improvements as well, fixing issues with test diffing and adding support for call stacks in test failures. And if you experience issues with your testing experience, we've added a diagnostic level for the testing experience to help us troubleshoot and get to a resolution quicker. To enable it, set csharp.debug.testExplorerVerbosity to diagnostic.

## Try the new features and give us your feedback

We work from your feedback and will continue working through the issues submitted to help bring a more reliable and more productive C# editing experience in VS Code. If you haven't installed the C# Dev Kit yet, install it now from the Visual Studio Marketplace. For those already using the C# Dev Kit, make sure to update to the newest release to try out the new features and enhancements.

The post C# Dev Kit Updates: .NET Aspire, Hot Reload, and More! appeared first on .NET Blog.
A year ago, we launched Microsoft.Testing.Platform as part of the MSTest Runner announcement. Our goal was to create a reliable testing platform for .NET projects, focused on extensibility and modularity. We are excited to announce that Microsoft.Testing.Platform has now reached more than 20 million downloads.

We are thrilled to see the adoption of the platform by all major .NET test frameworks. Whether you are using Expecto, MSTest, NUnit, TUnit, or xUnit.net, you can now leverage the new testing platform to run your tests. In this post, we'll highlight the test frameworks that have embraced Microsoft.Testing.Platform, share their unique characteristics, and provide resources for getting started.

## What is Microsoft.Testing.Platform?

Microsoft.Testing.Platform is a lightweight and portable alternative to VSTest for running tests in all contexts, including continuous integration (CI) pipelines, the CLI, Visual Studio Test Explorer, and VS Code Test Explorer. Microsoft.Testing.Platform is embedded directly in your test projects, with no other app dependencies, such as vstest.console or dotnet test, needed to run your tests.

Microsoft.Testing.Platform is open source. To submit an issue or contribute to the project, you can find the Microsoft.Testing.Platform code in the microsoft/testfx GitHub repository.

### Key features

Microsoft.Testing.Platform is designed as a modular and extensible testing platform, allowing you to include only the components you need and to extend any part of the test execution. The core platform is designed to be portable and dependency-free, allowing you to produce test applications that can run anywhere .NET is supported. Microsoft.Testing.Platform is also integrated with Visual Studio Test Explorer, the VS Code Test Explorer in C# Dev Kit, Azure DevOps, and the .NET SDK, providing a seamless experience for developers.
Additional resources Overview Comparison with VSTest dotnet test support Available extensions GitHub repository Microsoft.Testing.Platform for extension authors Enabling Microsoft.Testing.Platform in your favorite test framework The test frameworks are ordered alphabetically. All examples below will assume the following production source code: Contoso.csproj: <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFramework>net9.0</TargetFramework> <ImplicitUsings>enable</ImplicitUsings> <Nullable>enable</Nullable> </PropertyGroup> </Project> Calculator.cs: public class Calculator { public int Add(int a, int b) { return a + b; } } Expecto Expecto aims to make it easy to test CLR-based software, be it with unit tests, stress tests, regression tests, or property-based tests. Expecto tests are parallel and async by default, so that you can use all your cores for testing your software. This also opens up a new way of catching threading and memory issues for free using stress testing. With the release of v0.15.0, YoloDev.Expecto.TestSdk now supports running tests through the new testing platform. To opt in, simply edit your project’s project file to set <EnableExpectoTestingPlatformIntegration>true</EnableExpectoTestingPlatformIntegration> and <OutputType>Exe</OutputType>.
Expecto Sample Application Contoso.Tests.fsproj: <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFramework>net9.0</TargetFramework> <EnableExpectoTestingPlatformIntegration>true</EnableExpectoTestingPlatformIntegration> <OutputType>Exe</OutputType> <TestingPlatformDotnetTestSupport>true</TestingPlatformDotnetTestSupport> </PropertyGroup> <ItemGroup> <PackageReference Include="YoloDev.Expecto.TestSdk" Version="0.15.0" /> </ItemGroup> <ItemGroup> <Compile Include="Test.fs" /> </ItemGroup> </Project> Test.fs: open Expecto let tests = testList "Calculator Tests" [ test "Add function returns sum" { let calculator = Calculator() let result = calculator.Add(1, 2) Expect.equal result 3 "Expected sum to be 3" } ] [<EntryPoint>] let main argv = runTestsWithArgs defaultConfig argv tests MSTest MSTest, Microsoft Testing Framework, is a fully supported, open source, and cross-platform test framework with which to write tests targeting .NET Framework, .NET Core, .NET, UWP, and WinUI on Windows, Linux, and Mac. With v3.2.0 or later, MSTest.TestAdapter supports running tests through the new testing platform. To opt-in, simply edit your project’s project file to set <EnableMSTestRunner>true</EnableMSTestRunner> and <OutputType>Exe</OutputType>. MSTest Sample Application Contoso.Tests.csproj: <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFramework>net9.0</TargetFramework> <ImplicitUsings>enable</ImplicitUsings> <Nullable>enable</Nullable> <EnableMSTestRunner>true</EnableMSTestRunner> <OutputType>Exe</OutputType> <TestingPlatformDotnetTestSupport>true</TestingPlatformDotnetTestSupport> </PropertyGroup> <ItemGroup> <PackageReference Include="MSTest" Version="3.7.3" /> </ItemGroup> </Project> Test.cs: [TestClass] public class CalculatorTests { [TestMethod] public void Add_WhenCalled_ReturnsSum() { var calculator = new Calculator(); var result = calculator.Add(1, 2); Assert.AreEqual(3, result); } } NUnit NUnit is a unit-testing framework for all .NET languages. 
Initially ported from JUnit, the current production release has been completely rewritten with many new features and support for a wide range of .NET platforms. With the release of v5, NUnit3TestAdapter now supports running tests through the new testing platform. To opt in, simply edit your project’s project file to set <EnableNUnitRunner>true</EnableNUnitRunner> and <OutputType>Exe</OutputType>. NUnit Sample Application Contoso.Tests.csproj: <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFramework>net9.0</TargetFramework> <ImplicitUsings>enable</ImplicitUsings> <Nullable>enable</Nullable> <EnableNUnitRunner>true</EnableNUnitRunner> <OutputType>Exe</OutputType> <TestingPlatformDotnetTestSupport>true</TestingPlatformDotnetTestSupport> </PropertyGroup> <ItemGroup> <PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.12.0" /> <PackageReference Include="NUnit" Version="4.3.2" /> <PackageReference Include="NUnit.Analyzers" Version="4.6.0"/> <PackageReference Include="NUnit3TestAdapter" Version="5.0.0" /> </ItemGroup> </Project> Test.cs: public class CalculatorTests { [Test] public void Add_WhenCalled_ReturnsSum() { var calculator = new Calculator(); var result = calculator.Add(1, 2); Assert.That(result, Is.EqualTo(3)); } } TUnit TUnit is a modern, flexible, and fast testing framework for C#, featuring Native AOT and Trimmed Single File application support! This new test framework is built solely on top of Microsoft.Testing.Platform.
TUnit Sample Application Contoso.Tests.csproj: <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFramework>net9.0</TargetFramework> <ImplicitUsings>enable</ImplicitUsings> <Nullable>enable</Nullable> <OutputType>Exe</OutputType> </PropertyGroup> <ItemGroup> <PackageReference Include="TUnit" Version="0.8.4" /> </ItemGroup> </Project> Test1.cs: public class CalculatorTests { [Test] public async Task Add_WhenCalled_ReturnsSum() { var calculator = new Calculator(); var result = calculator.Add(1, 2); await Assert.That(result).IsEqualTo(3); } } xUnit.net xUnit.net is a free, open source, community-focused unit testing tool for the .NET Framework. Written by the original inventor of NUnit v2, xUnit.net is the latest technology for unit testing C#, F#, VB.NET and other .NET languages. With the release of xunit.v3, xUnit.net now supports running tests through the new testing platform. To opt in, simply edit your project’s project file to set <UseMicrosoftTestingPlatformRunner>true</UseMicrosoftTestingPlatformRunner>.
xUnit.net Sample Application Contoso.Tests.csproj: <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFramework>net9.0</TargetFramework> <ImplicitUsings>enable</ImplicitUsings> <Nullable>enable</Nullable> <UseMicrosoftTestingPlatformRunner>true</UseMicrosoftTestingPlatformRunner> <OutputType>Exe</OutputType> <TestingPlatformDotnetTestSupport>true</TestingPlatformDotnetTestSupport> </PropertyGroup> <ItemGroup> <PackageReference Include="xunit.v3" Version="1.0.1" /> <PackageReference Include="xunit.runner.visualstudio" Version="3.0.1" /> <PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.12.0" /> </ItemGroup> </Project> Test.cs: public class CalculatorTests { [Fact] public void Add_WhenCalled_ReturnsSum() { var calculator = new Calculator(); var result = calculator.Add(1, 2); Assert.Equal(3, result); } } Looking Ahead We would like to extend our heartfelt appreciation to the framework authors we have collaborated with and continue to work closely with. We are thrilled to witness the ongoing evolution of this platform and its ability to empower developers. We eagerly anticipate numerous contributions from the community and look forward to the innovative extensions that will be created. If you haven’t already, we encourage you to explore the platform, experiment with your preferred framework, and share your feedback. Together, let’s continue to build an outstanding .NET testing ecosystem! The post Microsoft.Testing.Platform: Now Supported by All Major .NET Test Frameworks appeared first on .NET Blog.
-
Continuing our tradition, we are excited to share a blog post highlighting the latest and most interesting changes in the networking space with the new .NET release. This year, we are introducing updates in the HTTP space, new HttpClientFactory APIs, .NET Framework compatibility improvements, and more. HTTP In the following section, we’re introducing the most impactful changes in the HTTP space. These include performance improvements in connection pooling, support for multiple HTTP/3 connections, an auto-updating Windows proxy, and, last but not least, community contributions. Connection Pooling In this release, we made two impactful performance improvements in HTTP connection pooling. We added opt-in support for multiple HTTP/3 connections. Using more than one HTTP/3 connection to the peer is discouraged by RFC 9114 since the connection can multiplex parallel requests. However, in certain scenarios, like server-to-server, one connection might become a bottleneck even with request multiplexing. We saw such limitations with HTTP/2 (dotnet/runtime#35088), which has the same concept of multiplexing over one connection. For the same reasons (dotnet/runtime#51775), we decided to implement multiple connection support for HTTP/3 (dotnet/runtime#101535). The implementation itself tries to closely match the behavior of HTTP/2 multiple connections, which, at the moment, always prefers to saturate existing connections with as many requests as allowed by the peer before opening a new one. Note that this is an implementation detail and the behavior might change in the future.
As a result, our benchmarks showed a nontrivial increase in requests per second (RPS). The following comparison is for 10,000 parallel requests:

client                       | single HTTP/3 connection | multiple HTTP/3 connections
Max CPU Usage (%)            | 35                       | 92
Max Cores Usage (%)          | 971                      | 2,572
Max Working Set (MB)         | 3,810                    | 6,491
Max Private Memory (MB)      | 4,415                    | 7,228
Processor Count              | 28                       | 28
First request duration (ms)  | 519                      | 594
Requests                     | 345,446                  | 4,325,325
Mean RPS                     | 23,069                   | 288,664

Note that the increase in Max CPU Usage implies better CPU utilization, which means that the CPU is busy processing requests instead of being idle. This feature can be turned on via the EnableMultipleHttp3Connections property on SocketsHttpHandler: var client = new HttpClient(new SocketsHttpHandler { EnableMultipleHttp3Connections = true }); We also addressed lock contention in HTTP 1.1 connection pooling (dotnet/runtime#70098). The HTTP 1.1 connection pool previously used a single lock to manage the list of connections and the queue of pending requests. This lock was observed to be a bottleneck in high-throughput scenarios on machines with a high number of CPU cores. We resolved this problem (dotnet/runtime#99364) by replacing the lock-guarded list with a concurrent collection. We chose ConcurrentStack as it preserves the observable behavior in which requests are handled by the newest available connection, which allows collecting older connections when their configured lifetime expires. The throughput of HTTP 1.1 requests in our benchmarks increased by more than 30%:

Client   | .NET 8.0   | .NET 9.0    | Increase
Requests | 80,028,791 | 107,128,778 | +33.86%
Mean RPS | 666,886    | 892,749     | +33.87%

Proxy Auto Update on Windows One of the main pain points when debugging HTTP traffic of applications using earlier versions of .NET is that the application doesn’t react to changes in Windows proxy settings (dotnet/runtime#70098). The proxy settings were previously initialized once per process with no reasonable ability to refresh the settings.
For example (with .NET 8), HttpClient.DefaultProxy returns the same instance upon repeated access and never refetches the settings. As a result, tools like Fiddler, which set themselves as the system proxy to listen to the traffic, weren’t able to capture traffic from already running processes. This issue was mitigated in dotnet/runtime#103364, where HttpClient.DefaultProxy is set to an instance of the Windows proxy that listens for registry changes and reloads the proxy settings when notified. The following code: while (true) { using var resp = await client.GetAsync("https://httpbin.org/"); Console.WriteLine(HttpClient.DefaultProxy.GetProxy(new Uri("https://httpbin.org/"))?.ToString() ?? "null"); await Task.Delay(1_000); } produces output like this: null // After Fiddler's "System Proxy" is turned on. http://127.0.0.1:8866/ Note that this change applies only on Windows, as it has a unique concept of machine-wide proxy settings. Linux and other UNIX-based systems only allow setting up a proxy via environment variables, which can’t be changed during the process lifetime. Community contributions We’d like to call out community contributions. CancellationToken overloads were missing from HttpContent.LoadIntoBufferAsync. This gap was resolved by an API proposal (dotnet/runtime#102659) from @andrewhickman-aveva and an implementation (dotnet/runtime#103991) from @manandre. Another change addresses a units discrepancy for the MaxResponseHeadersLength property on SocketsHttpHandler and HttpClientHandler (dotnet/runtime#75137). All the other size and length properties are interpreted as being in bytes; however, this one is interpreted as being in kilobytes. And since the actual behavior can’t be changed due to backward compatibility, the problem was solved by implementing an analyzer (dotnet/roslyn-analyzers#6796). The analyzer tries to make sure the user is aware that the value provided is interpreted as kilobytes, and warns if the usage suggests otherwise.
If the value is higher than a certain threshold, the analyzer emits a warning. The analyzer was implemented by @amiru3f. QUIC The prominent changes in the QUIC space in .NET 9 include making the library public, more configuration options for connections, and several performance improvements. Public APIs From this release on, System.Net.Quic isn’t hidden behind PreviewFeature anymore and all the APIs are generally available without any opt-in switches (dotnet/runtime#104227). QUIC Connection Options We expanded the configuration options for QuicConnection (dotnet/runtime#72984). The implementation (dotnet/runtime#94211) added three new properties to QuicConnectionOptions: HandshakeTimeout – we were already imposing a limit on how long a connection establishment can take; this property just enables the user to adjust it. KeepAliveInterval – if this property is set to a positive value, PING frames are sent out regularly in this interval (in case no other activity is happening on the connection), which prevents the connection from being closed on idle timeout. InitialReceiveWindowSizes – a set of parameters to adjust the initial receive limits for data flow control sent in transport parameters. These data limits apply only until the dynamic flow control algorithm starts adjusting the limits based on the data reading speed. And due to MsQuic limitations, these parameters can only be set to values that are powers of 2. All of these parameters are optional. Their default values are derived from MsQuic defaults.
The following code reports the defaults programmatically: var options = new QuicClientConnectionOptions(); Console.WriteLine($"KeepAliveInterval = {PrettyPrintTimeStamp(options.KeepAliveInterval)}"); Console.WriteLine($"HandshakeTimeout = {PrettyPrintTimeStamp(options.HandshakeTimeout)}"); Console.WriteLine(@$"InitialReceiveWindowSizes = {{ Connection = {PrettyPrintInt(options.InitialReceiveWindowSizes.Connection)}, LocallyInitiatedBidirectionalStream = {PrettyPrintInt(options.InitialReceiveWindowSizes.LocallyInitiatedBidirectionalStream)}, RemotelyInitiatedBidirectionalStream = {PrettyPrintInt(options.InitialReceiveWindowSizes.RemotelyInitiatedBidirectionalStream)}, UnidirectionalStream = {PrettyPrintInt(options.InitialReceiveWindowSizes.UnidirectionalStream)} }}"); static string PrettyPrintTimeStamp(TimeSpan timeSpan) => timeSpan == Timeout.InfiniteTimeSpan ? "infinite" : timeSpan.ToString(); static string PrettyPrintInt(int sizeB) => sizeB % 1024 == 0 ? $"{sizeB / 1024} * 1024" : sizeB.ToString(); // Prints: KeepAliveInterval = infinite HandshakeTimeout = 00:00:10 InitialReceiveWindowSizes = { Connection = 16384 * 1024, LocallyInitiatedBidirectionalStream = 64 * 1024, RemotelyInitiatedBidirectionalStream = 64 * 1024, UnidirectionalStream = 64 * 1024 } Stream Capacity API .NET 9 also introduced new APIs to support multiple HTTP/3 connections in SocketsHttpHandler (dotnet/runtime#101534). The APIs were designed with this specific usage in mind, and we don’t expect them to be used apart from very niche scenarios. QUIC has built-in logic for managing stream limits within the protocol. As a result, calling OpenOutboundStreamAsync on a connection gets suspended if there isn’t any available stream capacity. Moreover, there isn’t an efficient way to learn whether the stream limit was reached or not. All these limitations together didn’t allow the HTTP/3 layer to know when to open a new connection. 
So we introduced a new StreamCapacityCallback that gets called whenever stream capacity is increased. The callback itself is registered via QuicConnectionOptions. More details about the callback can be found in the documentation. Performance Improvements Both performance improvements in System.Net.Quic are TLS related, and both only affect connection establishment times. The first performance-related change was to run the peer certificate validation asynchronously in the .NET thread pool (dotnet/runtime#98361). The certificate validation can be time consuming on its own, and it might even include the execution of a user callback. Moving this logic to the .NET thread pool stops us from blocking the MsQuic thread, of which MsQuic has a limited number, and thus enables MsQuic to process a higher number of new connections at the same time. On top of that, we have introduced caching of MsQuic configuration (dotnet/runtime#99371). MsQuic configuration is a set of native structures containing connection settings from QuicConnectionOptions, potentially including the certificate and its intermediates. Constructing and initializing the native structure might be very expensive since it might require serializing and deserializing all the certificate data to and from PKCS #12 format. Moreover, the cache allows reusing the same MsQuic configuration for different connections if their settings are identical. Server scenarios with static configuration in particular can profit notably from the caching, as in the following code: var alpn = "test"; var serverCertificate = X509CertificateLoader.LoadCertificateFromFile("../path/to/cert"); // Prepare the connection options upfront and reuse them. var serverConnectionOptions = new QuicServerConnectionOptions() { DefaultStreamErrorCode = 123, DefaultCloseErrorCode = 456, ServerAuthenticationOptions = new SslServerAuthenticationOptions { ApplicationProtocols = new List<SslApplicationProtocol>() { alpn }, // Re-using the same certificate.
ServerCertificate = serverCertificate } }; // Configure the listener to return the pre-prepared options. await using var listener = await QuicListener.ListenAsync(new QuicListenerOptions() { ListenEndPoint = new IPEndPoint(IPAddress.Loopback, 0), ApplicationProtocols = [ alpn ], // Callback returns the same object. // Internal cache will re-use the same native structure for every incoming connection. ConnectionOptionsCallback = (_, _, _) => ValueTask.FromResult(serverConnectionOptions) }); We also built in an escape hatch for this feature; it can be turned off either with an environment variable: export DOTNET_SYSTEM_NET_QUIC_DISABLE_CONFIGURATION_CACHE=1 # run the app or with an AppContext switch: AppContext.SetSwitch("System.Net.Quic.DisableConfigurationCache", true); WebSockets .NET 9 introduces the long-desired PING/PONG Keep-Alive strategy to WebSockets (dotnet/runtime#48729). Prior to .NET 9, the only available Keep-Alive strategy was Unsolicited PONG. It was enough to keep the underlying TCP connection from idling out, but in cases where the remote host becomes unresponsive (for example, the remote server crashes), the only way to detect such situations was to depend on the TCP timeout. In this release, we complement the existing KeepAliveInterval setting with the new KeepAliveTimeout setting, so that the Keep-Alive strategy is selected as follows: Keep-Alive is OFF, if KeepAliveInterval is TimeSpan.Zero or Timeout.InfiniteTimeSpan Unsolicited PONG, if KeepAliveInterval is a positive finite TimeSpan, -AND- KeepAliveTimeout is TimeSpan.Zero or Timeout.InfiniteTimeSpan PING/PONG, if KeepAliveInterval is a positive finite TimeSpan, -AND- KeepAliveTimeout is a positive finite TimeSpan By default, the preexisting Keep-Alive behavior is maintained: the KeepAliveTimeout default value is Timeout.InfiniteTimeSpan, so Unsolicited PONG remains the default strategy.
The following example illustrates how to enable the PING/PONG strategy for a ClientWebSocket: var cws = new ClientWebSocket(); cws.Options.KeepAliveInterval = TimeSpan.FromSeconds(10); cws.Options.KeepAliveTimeout = TimeSpan.FromSeconds(10); await cws.ConnectAsync(uri, cts.Token); // NOTE: There should be an outstanding read at all times to // ensure incoming PONGs are promptly processed var result = await cws.ReceiveAsync(buffer, cts.Token); If no PONG response is received before KeepAliveTimeout elapses, the remote endpoint is deemed unresponsive, and the WebSocket connection is automatically aborted. It also unblocks the outstanding ReceiveAsync with an OperationCanceledException. To learn more about the feature, you can check out the dedicated conceptual docs. .NET Framework Compatibility One of the biggest hurdles in the networking space when migrating projects from .NET Framework to .NET Core is the difference between the HTTP stacks. In .NET Framework, the main class to handle HTTP requests is HttpWebRequest, which uses the global ServicePointManager and individual ServicePoints to handle connection pooling. In .NET Core, by contrast, HttpClient is the recommended way to access HTTP resources. On top of that, all the classes from .NET Framework are present in .NET, but they’re either obsolete, missing implementation, or simply not maintained at all. As a result, we often see mistakes like using ServicePointManager to configure the connections while using HttpClient to access the resources. The recommendation has always been to fully migrate to HttpClient, but sometimes that’s not possible. Migrating projects from .NET Framework to .NET Core can be difficult on its own, let alone rewriting all the networking code. Expecting customers to do all this work in one step proved to be unrealistic and is one of the reasons why customers might be reluctant to migrate.
To mitigate these pain points, we filled in some missing implementations of the legacy classes and created a comprehensive guide to help with the migration. The first part is the expansion of supported ServicePointManager and ServicePoint properties that were missing implementation in .NET Core up until this release (dotnet/runtime#94664 and dotnet/runtime#97537). With these changes, they’re now taken into account when using HttpWebRequest. For HttpWebRequest, we implemented full support for AllowWriteStreamBuffering in dotnet/runtime#95001, and also added missing support for ImpersonationLevel in dotnet/runtime#102038. On top of these changes, we also obsoleted a few legacy classes to prevent further confusion: ServicePointManager in dotnet/runtime#103456. Its settings have no effect on HttpClient and SslStream, yet it might be misused in good faith for exactly that purpose. AuthenticationManager in dotnet/runtime#93171, done by community contributor @deeprobin. It’s either missing implementation, or its methods throw PlatformNotSupportedException. Lastly, we put together a guide for migration from HttpWebRequest to HttpClient in the HttpWebRequest to HttpClient migration guide. It includes comprehensive lists of mappings between individual properties and methods, e.g., Migrate ServicePoint(Manager) usage, and many examples for trivial and not-so-trivial scenarios, e.g., Example: Enable DNS round robin. Diagnostics In this release, diagnostics improvements focus on enhancing privacy protection and advancing distributed tracing capabilities. Uri Query Redaction in HttpClientFactory Logs Starting with version 9.0.0 of Microsoft.Extensions.Http, the default logging logic of HttpClientFactory prioritizes protecting privacy. In older versions, it emitted the full request URI in the RequestStart and RequestPipelineStart events. In cases where some components of the URI contain sensitive information, this can lead to privacy incidents by leaking such data into logs.
Version 8.0.0 introduced the ability to secure HttpClientFactory usage by customizing logging. However, this doesn’t change the fact that the default behavior might be risky for unaware users. In the majority of the problematic cases, sensitive information resides in the query component. Therefore, a breaking change was introduced in 9.0.0, removing the entire query string from HttpClientFactory logs by default. A global opt-out switch is available for services/apps where it’s safe to log the full URI. For consistency and maximum safety, a similar change was implemented for EventSource events in System.Net.Http. We recognize that this solution might not suit everyone. Ideally, there should be a fine-grained URI filtering mechanism, allowing users to retain non-sensitive query entries or filter other URI components (e.g., parts of the path). We plan to explore such a feature for future versions (dotnet/runtime#110018). Distributed Tracing Improvements Distributed tracing is a diagnostic technique for tracking the path of a specific transaction across multiple processes and machines, helping identify bottlenecks and failures. This technique models the transaction as a hierarchical tree of Activities, also referred to as spans in OpenTelemetry terminology. HttpClientHandler and SocketsHttpHandler are instrumented to start an Activity for each request and propagate the trace context via standard W3C headers when tracing is enabled. Before .NET 9, users needed the OpenTelemetry .NET SDK to produce useful OpenTelemetry-compliant traces. This SDK was required not just for collection and export but also to extend the instrumentation, as the built-in logic didn’t populate the Activity with request data. Starting with .NET 9, the instrumentation dependency (OpenTelemetry.Instrumentation.Http) can be omitted unless advanced features like enrichment are required. 
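As a quick way to see the built-in spans without any OpenTelemetry packages, you can attach a plain ActivityListener from System.Diagnostics. A minimal console sketch (the "System.Net.Http" source name is the ActivitySource used by the HTTP client stack; example.com is just a placeholder endpoint):

```csharp
using System.Diagnostics;

// Subscribe to the HTTP client's built-in ActivitySource and print each span.
using var listener = new ActivityListener
{
    ShouldListenTo = source => source.Name == "System.Net.Http",
    Sample = (ref ActivityCreationOptions<ActivityContext> _) =>
        ActivitySamplingResult.AllDataAndRecorded,
    ActivityStopped = activity =>
        Console.WriteLine($"{activity.DisplayName}: {activity.Duration.TotalMilliseconds:F1} ms")
};
ActivitySource.AddActivityListener(listener);

using var client = new HttpClient();
await client.GetAsync("https://example.com/");
```

With tracing enabled this way, each request prints its OTel-compliant display name and duration when the Activity stops.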
In dotnet/runtime#104251, we extended the built-in tracing to ensure that the shape of the Activity is OTel-compliant, with the name, status, and most required tags populated according to the standard. Experimental Connection Tracing When investigating bottlenecks, you might want to zoom into specific HTTP requests to identify where most of the time is spent. Is it during connection establishment or the content download? If there are connection issues, it’s helpful to determine whether the problem lies with DNS lookups, TCP connection establishment, or the TLS handshake. .NET 9 has introduced several new spans to represent activities around connection establishment in SocketsHttpHandler. The most significant one is the HTTP connection setup span, which breaks down into three child spans for DNS, TCP, and TLS activities. Because connection setup isn’t tied to a particular request in the SocketsHttpHandler connection pool, the connection setup span can’t be modeled as a child span of the HTTP client request span. Instead, the relationship between requests and connections is represented using Span Links, also known as Activity Links. Note The new spans are produced by various ActivitySources matching the wildcard Experimental.System.Net.*. These spans are experimental because monitoring tools like Azure Monitor Application Insights have difficulty visualizing the resulting traces effectively due to the numerous connection_setup → request backlinks. To improve the user experience in monitoring tools, further work is needed. It involves collaboration between the .NET team, OTel, and tool authors, and may result in breaking changes in the design of the new spans. The simplest way to set up and try connection trace collection is by using .NET Aspire. Using the Aspire Dashboard, it’s possible to expand the connection_setup activity and see a breakdown of the connection initialization.
If you think the .NET 9 tracing additions might bring you valuable diagnostic insights, and you want to get some hands-on experience, don’t hesitate to read our full article about Distributed tracing in System.Net libraries. HttpClientFactory For HttpClientFactory, we’re introducing Keyed DI support, offering a new convenient consumption pattern, and changing the default Primary Handler to mitigate a common erroneous use case. Keyed DI Support In the previous release, Keyed Services were introduced to the Microsoft.Extensions.DependencyInjection packages. Keyed DI allows you to specify the keys while registering multiple implementations of a single service type—and to later retrieve a specific implementation using the respective key. HttpClientFactory and named HttpClient instances, unsurprisingly, align well with the Keyed Services idea. Among other things, HttpClientFactory was a way to overcome this long-missing DI feature. But it required you to obtain, store, and query the IHttpClientFactory instance—instead of simply injecting a configured HttpClient—which might be inconvenient. While Typed clients attempted to simplify that part, they came with a catch: Typed clients are easy to misconfigure and misuse (and the supporting infrastructure can also be a tangible overhead in certain scenarios). As a result, the user experience in both cases was far from ideal. This changes as the Microsoft.Extensions.DependencyInjection 9.0.0 and Microsoft.Extensions.Http 9.0.0 packages bring Keyed DI support into HttpClientFactory (dotnet/runtime#89755). Now you can have the best of both worlds: you can pair the convenient, highly configurable HttpClient registrations with the straightforward injection of the specific configured HttpClient instances. As of 9.0.0, you need to opt in to the feature by calling the AddAsKeyed() extension method.
It registers a Named HttpClient as a Keyed service for the key equal to the client’s name—and enables you to use the Keyed Services APIs (e.g., [FromKeyedServices(...)]) to obtain the required HttpClients. The following code demonstrates the integration between HttpClientFactory, Keyed DI, and ASP.NET Core 9.0 Minimal APIs: var builder = WebApplication.CreateBuilder(args); builder.Services.AddHttpClient("github", c => { c.BaseAddress = new Uri("https://api.github.com/"); c.DefaultRequestHeaders.Add("Accept", "application/vnd.github.v3+json"); c.DefaultRequestHeaders.Add("User-Agent", "dotnet"); }) .AddAsKeyed(); // Add HttpClient as a Keyed Scoped service for key="github" var app = builder.Build(); // Directly inject the Keyed HttpClient by its name app.MapGet("/", ([FromKeyedServices("github")] HttpClient httpClient) => httpClient.GetFromJsonAsync<Repo>("/repos/dotnet/runtime")); app.Run(); record Repo(string Name, string Url); Endpoint response: > ~ curl http://localhost:5000/ {"name":"runtime","url":"https://api.github.com/repos/dotnet/runtime"} By default, AddAsKeyed() registers HttpClient as a Keyed Scoped service. The Scoped lifetime can help catch cases of captive dependencies: services.AddHttpClient("scoped").AddAsKeyed(); services.AddSingleton<CapturingSingleton>(); // Throws: Cannot resolve scoped service 'System.Net.Http.HttpClient' from root provider. rootProvider.GetRequiredKeyedService<HttpClient>("scoped"); using var scope = provider.CreateScope(); scope.ServiceProvider.GetRequiredKeyedService<HttpClient>("scoped"); // OK // Throws: Cannot consume scoped service 'System.Net.Http.HttpClient' from singleton 'CapturingSingleton'. public class CapturingSingleton([FromKeyedServices("scoped")] HttpClient httpClient) //{ ...
You can also explicitly specify the lifetime by passing the ServiceLifetime parameter to the AddAsKeyed() method: services.AddHttpClient("explicit-scoped") .AddAsKeyed(ServiceLifetime.Scoped); services.AddHttpClient("singleton") .AddAsKeyed(ServiceLifetime.Singleton); You don’t have to call AddAsKeyed for every single client—you can easily opt in “globally” (for any client name) via ConfigureHttpClientDefaults. From the Keyed Services perspective, it results in the KeyedService.AnyKey registration. services.ConfigureHttpClientDefaults(b => b.AddAsKeyed()); services.AddHttpClient("foo", /* ... */); services.AddHttpClient("bar", /* ... */); public class MyController( [FromKeyedServices("foo")] HttpClient foo, [FromKeyedServices("bar")] HttpClient bar) //{ ... Even though the “global” opt-in is a one-liner, it’s unfortunate that the feature still requires it, instead of just working “out of the box”. For full context and reasoning on that decision, see dotnet/runtime#89755 and dotnet/runtime#104943. You can explicitly opt out from Keyed DI for HttpClients by calling RemoveAsKeyed() (for example, per specific client, in case of the “global” opt-in): services.ConfigureHttpClientDefaults(b => b.AddAsKeyed()); // opt IN by default services.AddHttpClient("keyed", /* ... */); services.AddHttpClient("not-keyed", /* ... */).RemoveAsKeyed(); // opt OUT per name provider.GetRequiredKeyedService<HttpClient>("keyed"); // OK provider.GetRequiredKeyedService<HttpClient>("not-keyed"); // Throws: No service for type 'System.Net.Http.HttpClient' has been registered.
provider.GetRequiredKeyedService<HttpClient>("unknown"); // OK (unconfigured instance) If called together, or if either is called more than once, AddAsKeyed() and RemoveAsKeyed() generally follow the rules of HttpClientFactory configs and DI registrations: If used with the same client name, the last setting wins: the lifetime from the last AddAsKeyed() is used to create the Keyed registration (unless RemoveAsKeyed() was called last, in which case the name is excluded). If used only within ConfigureHttpClientDefaults, the last setting wins. If both ConfigureHttpClientDefaults and a specific client name were used, all defaults are considered to “happen” before all per-name settings for this client. Thus, the defaults can be disregarded, and the last of the per-name ones wins. You can learn more about the feature in the dedicated conceptual docs. Default Primary Handler Change One of the most common problems HttpClientFactory users run into is when a Named or a Typed client erroneously gets captured in a Singleton service, or, in general, stored somewhere for a period of time that’s longer than the specified HandlerLifetime. Because HttpClientFactory can’t rotate such handlers, they might end up not respecting DNS changes. It is, unfortunately, easy and seemingly “intuitive” to inject a Typed client into a singleton, but hard to have any kind of check/analyzer to make sure HttpClient isn’t captured when it wasn’t supposed to be. It might be even harder to troubleshoot the resulting issues. On the other hand, the problem can be mitigated by using SocketsHttpHandler, which can control PooledConnectionLifetime. Similarly to HandlerLifetime, it allows regularly recreating connections to pick up the DNS changes, but on a lower level. A client with PooledConnectionLifetime set up can be safely used as a Singleton. 
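This PooledConnectionLifetime mitigation can be sketched as follows (a minimal illustration; the 15-minute lifetime is an arbitrary example value, not a recommendation from the post):

```csharp
using System;
using System.Net.Http;

// Sketch of the mitigation described above: a long-lived (even Singleton)
// HttpClient that still picks up DNS changes, because SocketsHttpHandler
// recycles pooled connections after PooledConnectionLifetime elapses.
var handler = new SocketsHttpHandler
{
    PooledConnectionLifetime = TimeSpan.FromMinutes(15) // illustrative value
};

var client = new HttpClient(handler);
// e.g. services.AddSingleton(client); // safe to capture: connections rotate
```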
Therefore, to minimize the potential impact of the erroneous usage patterns, .NET 9 makes the default Primary handler a SocketsHttpHandler (on platforms that support it; other platforms, e.g. .NET Framework, continue to use HttpClientHandler). And most importantly, SocketsHttpHandler also has the PooledConnectionLifetime property preset to match the HandlerLifetime value (it reflects the latest value, if you configured HandlerLifetime one or more times). The change only affects cases where the client was not configured to have a custom Primary handler (via e.g. ConfigurePrimaryHttpMessageHandler<T>()). While the default Primary handler is an implementation detail, as it was never specified in the docs, it’s still considered a breaking change. There could be cases in which you wanted to use the specific type, for example, casting the Primary handler to HttpClientHandler to set properties like ClientCertificates, UseCookies, UseProxy, etc. If you need to use such properties, it’s suggested to check for both HttpClientHandler and SocketsHttpHandler in the configuration action: services.AddHttpClient("test") .ConfigurePrimaryHttpMessageHandler((h, _) => { if (h is HttpClientHandler hch) { hch.UseCookies = false; } if (h is SocketsHttpHandler shh) { shh.UseCookies = false; } }); Alternatively, you can explicitly specify a Primary handler for each of your clients: services.AddHttpClient("test") .ConfigurePrimaryHttpMessageHandler(() => new HttpClientHandler() { UseCookies = false }); Or, configure the default Primary handler for all clients using ConfigureHttpClientDefaults: services.ConfigureHttpClientDefaults(b => b.ConfigurePrimaryHttpMessageHandler(() => new HttpClientHandler() { UseCookies = false })); Security In System.Net.Security, we’re introducing the highly sought-after support for SSLKEYLOGFILE, more scenarios supporting TLS resume, and new additions in negotiate APIs. 
SSLKEYLOGFILE Support The most upvoted issue in the security space was to support logging of the TLS pre-master secret (dotnet/runtime#37915). The logged secret can be used by the packet-capturing tool Wireshark to decrypt the traffic. It’s a useful diagnostics tool when investigating networking issues. Moreover, the same functionality is provided by browsers like Firefox (via NSS) and Chrome, and by command-line HTTP tools like cURL. We have implemented this feature for both SslStream and QuicConnection. For the former, the functionality is limited to the platforms on which we use OpenSSL as a cryptographic library. In terms of the officially released .NET runtime, that means only on Linux operating systems. For the latter, it’s supported everywhere, regardless of the cryptographic library. That’s because TLS is part of the QUIC protocol (RFC 9001), so the user-space MsQuic has access to all the secrets, and so does .NET. The limitation of SslStream on Windows comes from SChannel using a separate, privileged process for TLS which won’t allow exporting secrets due to security concerns (dotnet/runtime#94843). Because this feature exposes security secrets, relying solely on an environment variable could unintentionally leak them. For that reason, we’ve decided to introduce an additional AppContext switch necessary to enable the feature (dotnet/runtime#100665). It requires the user to prove the ownership of the application by either setting it programmatically in the code: AppContext.SetSwitch("System.Net.EnableSslKeyLogging", true); or by changing the {appname}.runtimeconfig.json next to the application: { "runtimeOptions": { "configProperties": { "System.Net.EnableSslKeyLogging": true } } } The last thing is to set the SSLKEYLOGFILE environment variable and run the application: export SSLKEYLOGFILE=~/keylogfile ./<appname> At this point, ~/keylogfile will contain pre-master secrets that can be used by Wireshark to decrypt the traffic. 
For more information, see TLS Using the (Pre)-Master-Secret documentation. TLS Resume with Client Certificate TLS resume enables reusing previously stored TLS data to re-establish a connection to a previously connected server. It can save round trips during the handshake as well as CPU processing. This feature is a native part of Windows SChannel; therefore, it’s implicitly used by .NET on Windows platforms. However, on Linux platforms where we use OpenSSL as a cryptographic library, enabling caching and reusing TLS data is more involved. We first introduced the support in .NET 7 (see TLS Resume). It has its own limitations that are, in general, not present on Windows. One such limitation was that it was not supported for sessions using mutual authentication by providing a client certificate (dotnet/runtime#94561). It has been fixed in .NET 9 (dotnet/runtime#102656) and works if one of these properties is set as described: ClientCertificateContext LocalCertificateSelectionCallback returns a non-null certificate on the first call ClientCertificates collection has at least one certificate with a private key Negotiate API Integrity Checks In .NET 7, we added NegotiateAuthentication APIs, see Negotiate API. The original implementation’s goal was to remove access via reflection to the internals of NTAuthentication. However, that proposal was missing functions to generate and verify message integrity codes from RFC 2743. They’re usually implemented as a cryptographic signing operation with a negotiated key. The API was proposed in dotnet/runtime#86950 and implemented in dotnet/runtime#96712 and, as with the original change, all the work from the API proposal to the implementation was done by community contributor @filipnavara. Networking Primitives This section encompasses changes in the System.Net namespace. We’re introducing new support for server-sent events and some small additions in APIs, for example new MIME types. 
Server-Sent Events Parser Server-sent events is a technology that allows servers to push data updates to clients via an HTTP connection. It is defined in the living HTML standard. It uses the text/event-stream MIME type, and it’s always decoded as UTF-8. The advantage of the server-push approach over client-pull is that it can make better use of network resources and also save battery life on mobile devices. In this release, we’re introducing an OOB package System.Net.ServerSentEvents. It’s available as a .NET Standard 2.0 NuGet package. The package offers a parser for server-sent event streams, following the specification. The protocol is stream-based, with individual items separated by an empty line. Each item has two fields: type – default type is message data – data itself On top of that, there are two other optional fields that progressively update properties of the stream: id – determines the last event id that is sent in the Last-Event-Id header in case the connection needs to be reconnected retry – number of milliseconds to wait between reconnection attempts The library APIs were proposed in dotnet/runtime#98105 and contain type definitions for the parser and the items: SseParser – static class to create the actual parser from the stream, allowing the user to optionally provide a parsing delegate for the item data SseParser<T> – parser itself, offers methods to enumerate (synchronously or asynchronously) the stream and return the parsed items SseItem<T> – struct holding parsed item data Then the parser can be used like this, for example: using HttpClient client = new HttpClient(); using Stream stream = await client.GetStreamAsync("https://server/sse"); var parser = SseParser.Create(stream, (type, data) => { var str = Encoding.UTF8.GetString(data); return Int32.Parse(str); }); await foreach (var item in parser.EnumerateAsync()) { Console.WriteLine($"{item.EventType}: {item.Data} [{parser.LastEventId};{parser.ReconnectionInterval}]"); } And for the following input:
: stream of integers
data: 123
id: 1
retry: 1000

data: 456
id: 2

data: 789
id: 3

It outputs:

message: 123 [1;00:00:01]
message: 456 [2;00:00:01]
message: 789 [3;00:00:01]

Primitives Additions Apart from server-sent events, the System.Net namespace got a few other small additions: IEquatable<Uri> interface implementation for Uri in dotnet/runtime#97940, which allows using Uri in functions that require IEquatable, like the span-based Contains or SequenceEqual methods; span-based (Try)EscapeDataString and (Try)UnescapeDataString methods for Uri in dotnet/runtime#40603. The goal is to support low-allocation scenarios, and we now take advantage of these methods in FormUrlEncodedContent. New MIME types for MediaTypeNames in dotnet/runtime#95446. These types were collected over the course of the release and implemented in dotnet/runtime#103575 by community contributor @CollinAlpert. Final Notes As we do each year, we try to write about the interesting and impactful changes in the networking space. This article can’t possibly cover all the changes that were made. If you are interested, you can find the complete list in our dotnet/runtime repository where you can also reach out to us with questions and bugs. On top of that, many of the performance changes that are not mentioned here are covered in Stephen’s great article Performance Improvements in .NET 9. We’d also like to hear from you, so if you encounter an issue or have any feedback, you can file it in our GitHub repo. Lastly, I’d like to thank my co-authors: @antonfirsov who wrote Diagnostics. @CarnaViire who wrote HttpClientFactory and WebSockets. The post .NET 9 Networking Improvements appeared first on .NET Blog. View the full article
-
Learn what is new in the Visual Studio Code January 2025 Release (1.97) Read the full article View the full article
-
We recently reshipped ASP.NET Core 2.1 as ASP.NET Core 2.3 for ASP.NET Core users that are still on .NET Framework. To stay in support, all ASP.NET Core users on .NET Framework should update to this new version. Note This post only applies if you’re using ASP.NET Core on .NET Framework. If you’re using ASP.NET Core 2.x on .NET Core 2.x, it is already out of support, and you should upgrade to a supported version such as .NET 8. How to upgrade To upgrade ASP.NET Core apps running on .NET Framework to ASP.NET Core 2.3: Upgrade your NuGet packages: Update your project to use ASP.NET Core 2.3 packages. These packages are the same as ASP.NET Core 2.1 but re-versioned. Remove any dependency on changes introduced in ASP.NET Core 2.2: apps that depend on behavior added in ASP.NET Core 2.2 will need to remove those dependencies. Test your application: Thoroughly test your application to verify that everything works as expected after the upgrade. Background Early versions of ASP.NET Core were provided for .NET Framework and .NET Core. ASP.NET Core 2.1 has been supported on .NET Framework to facilitate migrations to later .NET versions. However, ASP.NET Core 2.2 went out of support with the rest of .NET Core 2.2 on all platforms in 2019. ASP.NET Core 2.2 shipped before we had a predictable schedule and alternating releases of Standard Term Support (STS) and Long Term Support (LTS). Many users upgraded to ASP.NET Core 2.2, not realizing that this reduced their support duration. As a result, some users are inadvertently running on the unsupported ASP.NET Core 2.2 on .NET Framework. Since ASP.NET Core 2.x for .NET Framework is shipped as a set of packages, downgrading isn’t easy; there are well over one hundred packages to downgrade with inconsistent version numbers. Some NuGet packages also now require ASP.NET Core 2.2, so downgrading to ASP.NET Core 2.1 could result in NuGet dependency errors. 
In order to make staying in support easier, we’ve reshipped ASP.NET Core 2.1 as ASP.NET Core 2.3, so you can simply upgrade to a supported version. By reshipping ASP.NET Core 2.1 as ASP.NET Core 2.3, we provide users on ASP.NET Core 2.2 an off-ramp to the supported version via a regular NuGet upgrade. Users updating from ASP.NET Core 2.2 to 2.3 will need to remove any dependencies on changes introduced in ASP.NET Core 2.2. Users on ASP.NET Core 2.1 should also update to 2.3 with the assurance that it’s the same code as 2.1. Moving forward, any servicing updates to ASP.NET Core for .NET Framework will be published based on 2.3. The following table summarizes the support status of the various ASP.NET Core 2.x versions on .NET Framework:

ASP.NET Core 2.1: Unsupported, replaced by ASP.NET Core 2.3
ASP.NET Core 2.2: Support ended December 23, 2019
ASP.NET Core 2.3: Supported, same code as 2.1

Caution ASP.NET Core 2.2 is not supported and went out of support over five years ago. If you’re using ASP.NET Core 2.2 on .NET Framework, we strongly recommend updating to ASP.NET Core 2.3 as soon as possible in order to stay supported and to receive relevant security fixes. Why we’re reshipping ASP.NET Core 2.1 as ASP.NET Core 2.3 You might wonder why we don’t reship ASP.NET Core 2.2 as 2.3 instead. The reason is that ASP.NET Core 2.2 includes breaking changes. ASP.NET Core 2.2 went out of support five years ago, while ASP.NET Core 2.1 remained supported. We don’t want existing supported ASP.NET Core 2.1 apps to break when updating to ASP.NET Core 2.3. Summary ASP.NET Core users on .NET Framework should update to the latest ASP.NET Core 2.3 release to stay in support. This update enables ASP.NET Core 2.2 users to update to a supported version by doing a NuGet package upgrade instead of a downgrade. ASP.NET Core 2.1 users updating to ASP.NET Core 2.3 should experience no change in behavior as the packages contain the exact same code. 
ASP.NET Core 2.2 users may need to remove any dependencies on ASP.NET Core 2.2 specific changes. Any future servicing fixes for ASP.NET Core on .NET Framework will be based on ASP.NET Core 2.3. Questions? Please ask in this issue: ASP.NET Core 2.1 becomes ASP.NET Core 2.3. The post ASP.NET Core on .NET Framework servicing release advisory: ASP.NET Core 2.3 appeared first on .NET Blog. View the full article
-
The DeepSeek R1 model has been gaining a ton of attention lately. And one of the questions we’ve been getting asked is: “Can I use DeepSeek in my .NET applications”? The answer is absolutely! I’m going to walk you through how to use the Microsoft.Extensions.AI (MEAI) library with DeepSeek R1 on GitHub Models so you can start experimenting with the R1 model today. MEAI makes using AI services easy The MEAI library provides a set of unified abstractions and middleware to simplify the integration of AI services into .NET applications. In other words, if you develop your application with MEAI, your code will use the same APIs no matter which model you decide to use “under the covers”. This lowers the friction it takes to build a .NET AI application as you’ll only have to remember a single library’s (MEAI’s) way of doing things regardless of which AI service you use. And for MEAI, the main interface you’ll use is IChatClient. Let’s chat with DeepSeek R1 GitHub Models allows you to experiment with a ton of different AI models without having to worry about hosting. It’s a great way to get started in your AI development journey for free. And GitHub Models gets updated with new models all the time, like DeepSeek’s R1. The demo app we’re going to build is a simple console application and it’s available on GitHub at codemillmatt/deepseek-dotnet. You can clone or fork it to follow along, but we’ll talk through the important pieces below too. First let’s take care of some prerequisites: Head on over to GitHub and generate a personal access token (PAT). This will be your key for GitHub Models access. Follow these instructions to create the PAT. You will want a classic token. Open the DeepSeek.Console.GHModels project. You can either open the full solution in Visual Studio or just the project folder if using VS Code. Create a new user secrets entry for the GitHub PAT. Name it GH_TOKEN and paste in the PAT you generated as the value. 
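For reference, the Configuration object used in the next section can be built from user secrets roughly like this (a sketch assuming the Microsoft.Extensions.Configuration.UserSecrets package and a <UserSecretsId> in the project file; the sample on GitHub may wire this up differently):

```csharp
using Microsoft.Extensions.Configuration;

// Read the GH_TOKEN user-secret created above from a console app.
var configuration = new ConfigurationBuilder()
    .AddUserSecrets<Program>() // scans the user-secrets store for this project
    .Build();

string? token = configuration["GH_TOKEN"]; // the GitHub PAT
```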
Now let’s explore the code a bit: Open the Program.cs file in the DeepSeek.Console.GHModels project. The first 2 things to notice are where we initialize the modelEndpoint and modelName variables. These are standard for the GitHub Models service; they will always be the same. Now for the fun part! We’re going to initialize our chat client. This is where we’ll connect to the DeepSeek R1 model. IChatClient client = new ChatCompletionsClient(modelEndpoint, new AzureKeyCredential(Configuration["GH_TOKEN"])).AsChatClient(modelName); This uses the Microsoft.Extensions.AI.AzureAIInference package to connect to the GitHub Models service. But the AsChatClient function returns an IChatClient implementation. And that’s super cool. Because regardless of which model we chose from GitHub Models, we’d still write our application against the IChatClient interface! Next up we pass in our question, or prompt, to the model. And we’ll make sure we get a streaming response back; this way we can display it as it comes in. var response = client.CompleteStreamingAsync(question); await foreach (var item in response) { Console.Write(item); } That’s it! Go ahead and run the project. It might take a few seconds to get the response back (lots of people are trying the model out!). You’ll notice the response isn’t like you’d see in a “normal” chat bot. DeepSeek R1 is a reasoning model, so it wants to figure out and reason through problems. The first part of the response will be its reasoning, delimited by <think>, and is quite interesting. The second part of the response will be the answer to the question you asked. Here’s a partial example of a response: <think> Okay, let's try to figure this out. The problem says: If I have 3 apples and eat 2, how many bananas do I have? Hmm, at first glance, that seems a bit confusing. Let me break it down step by step. So, the person starts with 3 apples. Then they eat 2 of them. That part is straightforward. 
If you eat 2 apples out of 3, you'd have 1 apple left, right? But then the question shifts to bananas. Wait, where did bananas come from? The original problem only mentions apples. There's no mention of bananas at all. ... Do I have to use GitHub Models? You’re not limited to running DeepSeek R1 on GitHub Models. You can run it on Azure or even locally (or on GitHub Codespaces) through Ollama. I provided 2 additional console applications in the GitHub repository that show you how to do that. The biggest differences from the GitHub Models version are where the DeepSeek R1 model is deployed, the credentials you use to connect to it, and the specific model name. If you deploy on Azure AI Foundry, the code is exactly the same. Here are some instructions on how to deploy the DeepSeek R1 model into Azure AI Foundry. If you want to run locally on Ollama, we’ve provided a devcontainer definition that you can use to run Ollama in Docker. It will automatically pull down a small-parameter version of DeepSeek R1 and start it up for you. The only difference is you’ll use the Microsoft.Extensions.AI.Ollama NuGet package and initialize the IChatClient with OllamaChatClient. Interacting with DeepSeek R1 is the same. Note: If you run this in a GitHub Codespace, it will take a couple of minutes to start up and you’ll use roughly 8GB of space – so be aware depending on your Codespace plan. Of course, these are simple console applications. If you’re using .NET Aspire, it’s easy to use Ollama and DeepSeek R1. Thanks to the .NET Aspire Community Toolkit’s Ollama integration, all you need to do is add one line and you’re all set! var chat = ollama.AddModel("chat", "deepseek-r1"); Check out this blog post with all the details on how to get going. Summary DeepSeek R1 is an exciting new reasoning model that’s drawing a lot of attention and you can build .NET applications that make use of it today using the Microsoft.Extensions.AI library. 
GitHub Models lowers the friction of getting started and experimenting with it. Go ahead and try out the samples and check out our other MEAI samples! The post Build Intelligent Apps with .NET and DeepSeek R1 Today! appeared first on .NET Blog. View the full article
-
If you’ve never seen the movie Analyze This, here’s the quick pitch: A member of, let’s say, a New York family clan with questionable habits decides to seriously consider therapy to improve his mental state. With Billy Crystal and Robert De Niro driving the plot, hilarity is guaranteed. And while Analyze This! satirically tackles issues of a caricatured MOB world, getting to the root of problems with the right analytical tools is crucial everywhere. All the more in a mission-critical LOB-App world. Enter the new WinForms Roslyn Analyzers, your domain-specific “counselor” for WinForms applications. With .NET 9, we’re rolling out these analyzers to help your code tackle its potential issues—whether it’s buggy behavior, questionable patterns, or opportunities for improvement. What Exactly is a Roslyn Analyzer? Roslyn analyzers are a core part of the Roslyn compiler platform, seamlessly working in the background to analyze your code as you write it. Chances are, you’ve been using them for years without even realizing it. Many features in Visual Studio, like code fixes, refactoring suggestions, and error diagnostics, rely on (or simply are) analyzers and code fixes that enhance your development process. They’ve become such an integral part of modern development that we often take them for granted as just “how things work”. The coolest thing: this Roslyn-based compiler platform is not a black box. It provides an extremely rich API, and it’s not only Microsoft’s Visual Studio IDE or compiler teams that can create analyzers. Everyone can. And that’s why WinForms picked up on this technology to improve the WinForms coding experience. It’s Just the Beginning — More to Come With .NET 9 we’ve laid the foundational infrastructure for WinForms-specific analyzers and introduced the first set of rules. These analyzers are designed to address key areas like security, stability, and productivity. 
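Since anyone can build on the Roslyn APIs, here is a minimal, hypothetical analyzer skeleton to show the shape such a rule takes (the DEMO0001 id and message are illustrative; this is not one of the shipped WinForms analyzers):

```csharp
using System.Collections.Immutable;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.Diagnostics;

// Hypothetical skeleton of a Roslyn analyzer: it declares one diagnostic
// and reports it on every class declaration it visits.
[DiagnosticAnalyzer(LanguageNames.CSharp)]
public sealed class DemoAnalyzer : DiagnosticAnalyzer
{
    private static readonly DiagnosticDescriptor s_rule = new(
        id: "DEMO0001",
        title: "Example rule",
        messageFormat: "Example diagnostic on a class declaration",
        category: "Demo",
        defaultSeverity: DiagnosticSeverity.Warning,
        isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics
        => ImmutableArray.Create(s_rule);

    public override void Initialize(AnalysisContext context)
    {
        context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.None);
        context.EnableConcurrentExecution();

        // Report the diagnostic for each class declaration in the compilation.
        context.RegisterSyntaxNodeAction(
            ctx => ctx.ReportDiagnostic(Diagnostic.Create(s_rule, ctx.Node.GetLocation())),
            SyntaxKind.ClassDeclaration);
    }
}
```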
And while this is just the start, we’re committed to expanding their scope in future releases, with more rules and features on the horizon. So, let’s take a real look of what we got with the first sets of Analyzers we’re introducing for .NET 9: Guidance for picking correct InvokeAsync Overloads With .NET 9 we have introduced a series of new Async APIs for WinForms. This blog post describes the new WinForms Async feature in detail. This is one of the first areas where we felt that WinForms Analyzers can help a lot in preventing issues with your Async code. One challenge when working with the new Control.InvokeAsync API is selecting the correct overload from the following options: public async Task InvokeAsync(Action callback, CancellationToken cancellationToken = default) public async Task<T> InvokeAsync<T>(Func<T> callback, CancellationToken cancellationToken = default) public async Task InvokeAsync(Func<CancellationToken, ValueTask> callback, CancellationToken cancellationToken = default) public async Task<T> InvokeAsync<T>(Func<CancellationToken, ValueTask<T>> callback, CancellationToken cancellationToken = default) Each overload supports different combinations of synchronous and asynchronous methods, with or without return values. The linked blog post provides comprehensive background information on these APIs. Selecting the wrong overload, however, can lead to unstable code paths in your application. To mitigate this, we’ve implemented an analyzer to help developers choose the most appropriate InvokeAsync overload for their specific use cases. Here’s the potential issue: InvokeAsync can asynchronously invoke both synchronous and asynchronous methods. For asynchronous methods, you might pass a Func<Task>, and expect it to be awaited, but it will not be. Func<T> is exclusively for asynchronously invoking a synchronous method – of which Func<Task> is just an unfortunate special case. 
So, in other words, the problem arises because InvokeAsync can accept any Func<T>. But only Func<CancellationToken, ValueTask> is properly awaited by the API. If you pass a Func<Task> without the correct signature—one that doesn’t take a CancellationToken and return a ValueTask—it won’t be awaited. This leads to a “fire-and-forget” scenario, where exceptions within the function are not handled correctly. If such a function later throws an exception, it may corrupt data or even crash your entire application. Take a look at the following scenario: private async void StartButtonClick(object sender, EventArgs e) { _btnStartStopWatch.Text = _btnStartStopWatch.Text != "Stop" ? "Stop" : "Start"; await Task.Run(async () => { while (true) { await this.InvokeAsync(UpdateUiAsync); } }); // **** // The actual UI update method // **** async Task UpdateUiAsync() { _lblStopWatch.Text = $"{DateTime.Now:HH:mm:ss - fff}"; await Task.Delay(20); } } This is a typical scenario in which an InvokeAsync overload intended for synchronous callbacks (one that returns something other than a task) is accidentally used. But the Analyzer is pointing that out: So, being notified by this, it also becomes clear that we actually need to introduce a cancellation token so we can gracefully end the running task, either when the user clicks the button again or – more importantly – when the Form actually gets closed. Otherwise, the Task would continue to run while the Form gets disposed. 
And that would lead to a crash: private async void ButtonClick(object sender, EventArgs e) { if (_stopWatchToken.CanBeCanceled) { _btnStartStopWatch.Text = "Start"; _stopWatchTokenSource.Cancel(); _stopWatchTokenSource.Dispose(); _stopWatchTokenSource = new CancellationTokenSource(); _stopWatchToken = CancellationToken.None; return; } _stopWatchToken = _stopWatchTokenSource.Token; _btnStartStopWatch.Text = "Stop"; await Task.Run(async () => { while (true) { try { await this.InvokeAsync(UpdateUiAsync, _stopWatchToken); } catch (TaskCanceledException) { break; } } }); // **** // The actual UI update method // **** async ValueTask UpdateUiAsync(CancellationToken cancellation) { _lblStopWatch.Text = $"{DateTime.Now:HH:mm:ss - fff}"; await Task.Delay(20, cancellation); } } protected override void OnFormClosing(FormClosingEventArgs e) { base.OnFormClosing(e); _stopWatchTokenSource.Cancel(); } The analyzer addresses this by detecting incompatible usages of InvokeAsync and guiding you to select the correct overload. This ensures stable, predictable behavior and proper exception handling in your asynchronous code. Preventing Leaks of Design-Time Business Data When developing custom controls or business control logic classes derived from UserControl, it’s common to manage its behavior and appearance using properties. However, a common issue arises when these properties are inadvertently set at design time. This typically happens because there is no mechanism in place to guard against such conditions during the design phase. If these properties are not properly configured to control their code serialization behavior, sensitive data set during design time may unintentionally leak into the generated code. Such leaks can result in: Sensitive data being exposed in source code, potentially published on platforms like GitHub. 
Design-time data being embedded into resource files, either because necessary TypeConverters for the property type in question are missing, or when the form is localized. Both scenarios pose significant risks to the integrity and security of your application. Additionally, we aim to avoid resource serialization whenever possible. With .NET 9, the Binary Formatter and related APIs have been phased out due to security and maintainability concerns. This makes it even more critical to carefully control what data gets serialized and how. The Binary Formatter was historically used to serialize objects, but it had numerous security vulnerabilities that made it unsuitable for modern applications. In .NET 9, we eliminated this serializer entirely to reduce attack surfaces and improve the reliability of applications. Any reliance on resource serialization has the potential to reintroduce these risks, so it is essential to adopt safer practices. To help you, the developer, address this issue, we’ve introduced a WinForms-specific analyzer. This analyzer activates when all the following mechanisms to control the CodeDOM serialization process for properties are missing: SerializationVisibilityAttribute: This attribute controls how (or if) the CodeDOM serializers should serialize the content of a property. The property is not read-only, as the CodeDOM serializer ignores read-only properties by default. DefaultValueAttribute: This attribute defines the default value of a property. If applied, the CodeDOM serializer only serializes the property when the current value at design time differs from the default value. A corresponding private bool ShouldSerialize<PropertyName>() method is not implemented. This method is called at design (serialization) time to determine whether the property’s content should be serialized. 
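The three mechanisms above can be sketched on a hypothetical UserControl like this (the control and property names are illustrative, not taken from the post's sample):

```csharp
using System.ComponentModel;
using System.Windows.Forms;

// Hypothetical control demonstrating the three serialization-control mechanisms.
public class ContactCard : UserControl
{
    // 1. DesignerSerializationVisibility: never serialize this value
    //    into the generated designer code.
    [DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
    public string NameText { get; set; } = string.Empty;

    // 2. DefaultValue: serialize only when the design-time value
    //    differs from the declared default.
    [DefaultValue("")]
    public string EmailText { get; set; } = "";

    // 3. ShouldSerialize<PropertyName>: called at design (serialization)
    //    time to decide whether the property's content gets serialized.
    public string PhoneText { get; set; } = "";
    private bool ShouldSerializePhoneText() => PhoneText.Length > 0;
}
```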
By ensuring at least one of these mechanisms is in place, you can avoid unexpected serialization behavior and ensure that your properties are handled correctly during the design-time CodeDOM serialization process. But…this Analyzer broke my whole Solution! So let’s say you’ve developed a domain-specific UserControl, like in the screenshot above, in .NET 8. And now, you’re retargeting your project to .NET 9. Well, obviously, at that moment, the analyzer kicks in, and you might see something like this: In contrast to the previously discussed Async Analyzer, this one has a Roslyn CodeFix attached to it. If you want to address the issue by instructing the CodeDOM serializer to unconditionally never serialize the property content, you can use the CodeFix to make the necessary changes: As you can see, you can even have them fixed in one go throughout the whole document. In most cases, this is already the right thing to do: the analyzer adds the SerializationVisibilityAttribute on top of each flagged property, ensuring it won’t be serialized unintentionally, which is exactly what we want: . . . [DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)] public string NameText { get => textBoxName.Text; set => textBoxName.Text = value; } [DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)] public string EmailText { get => textBoxEmail.Text; set => textBoxEmail.Text = value; } [DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)] public string PhoneText { get => textBoxPhone.Text; set => textBoxPhone.Text = value; } . . . Copilot to the rescue! There is an even more efficient way to handle necessary edit-amendments for property attributes. The question you might want to ask yourself is: if there are no attributes applied at all to control certain aspects of the property, does it make sense to not only ensure proper serialization guidance but also to apply other design-time attributes? 
But then again, would the effort required be even greater—or would it? Well, what if we utilize Copilot to amend all relevant property attributes that are really useful at design time, like the DescriptionAttribute or the CategoryAttribute? Let’s give it a try, like this: Depending on the language model you picked for Copilot, you should see a result where we not only resolve the issues the analyzer pointed out, but Copilot also takes care of adding the remaining attributes that make sense in the context. Copilot shows you the code it wants to add, and you can merge the suggested changes with just one mouse click. And those kinds of issues are surely not the only area where Copilot can assist you big time in the effort to modernize your existing WinForms applications. But if the analyzer flagged hundreds of issues throughout your entire solution, don’t panic! There are more options to configure the severity of an analyzer at the code file, project, or even solution level:
Suppressing Analyzers Based on Scope
Firstly, you have the option to suppress the analyzer(s) in different scopes:
In Source: This option inserts a #pragma warning disable directive directly in the source file around the flagged code. This approach is useful for localized, one-off suppressions where the analyzer warning is unnecessary or irrelevant. For example:

#pragma warning disable WFO1000
public string SomeProperty { get; set; }
#pragma warning restore WFO1000

In Suppression File: This adds the suppression to a file named GlobalSuppressions.cs in your project. Suppressions in this file are scoped globally to the assembly or namespace, making it a good choice for larger-scale suppressions. For example:

[assembly: System.Diagnostics.CodeAnalysis.SuppressMessage(
    "WinForms.Analyzers",
    "WFO1000",
    Justification = "This property is intentionally serialized.")]

In Source via Attribute: This applies a suppression attribute directly to a specific code element, such as a class or property.
It’s a good option when you want the suppression to remain part of the source code documentation. For example:

[System.Diagnostics.CodeAnalysis.SuppressMessage(
    "WinForms.Analyzers",
    "WFO1000",
    Justification = "This property is handled manually.")]
public string SomeProperty { get; set; }

Configuring Analyzer Severity in .editorconfig
To configure analyzer severity centrally for your project or solution, you can use an .editorconfig file. This file allows you to define rules for specific analyzers, including their severity levels, such as none, suggestion, warning, or error. For example, to change the severity of the WFO1000 analyzer:

# Configure the severity for the WFO1000 analyzer
dotnet_diagnostic.WFO1000.severity = warning

Using .editorconfig Files for Directory-Specific Settings
One of the powerful features of .editorconfig files is their ability to control settings for different parts of a solution. By placing .editorconfig files in different directories within the solution, you can apply settings only to specific projects, folders, or files. The configuration applies hierarchically, meaning that settings in a child directory’s .editorconfig file can override those in parent directories. For example:
Root-level .editorconfig: Place a general .editorconfig file at the solution root to define default settings that apply to the entire solution.
Project-specific .editorconfig: Place another .editorconfig file in the directory of a specific project to apply different rules for that project while inheriting settings from the root.
Folder-specific .editorconfig: If certain folders (e.g., test projects, legacy code) require unique settings, you can add an .editorconfig file to those folders to override the inherited configuration.
/solution-root
├── .editorconfig (applies to all projects)
├── ProjectA/
│   ├── .editorconfig (overrides root settings for ProjectA)
│   └── CodeFile.cs
├── ProjectB/
│   ├── .editorconfig (specific to ProjectB)
│   └── CodeFile.cs
├── Shared/
│   ├── .editorconfig (applies to shared utilities)
│   └── Utility.cs

In this layout, the .editorconfig at the root applies general settings to all files in the solution. The .editorconfig inside ProjectA applies additional or overriding rules specific to ProjectA. Similarly, the ProjectB and Shared directories can define their own unique settings.
Use Cases for Directory-Specific .editorconfig Files
Test Projects: Disable or lower the severity of certain analyzers for test projects, where some rules may not be applicable.

# In TestProject/.editorconfig
dotnet_diagnostic.WFO1000.severity = none

Legacy Code: Suppress analyzers entirely or reduce their impact for legacy codebases to avoid unnecessary noise.

# In LegacyCode/.editorconfig
dotnet_diagnostic.WFO1000.severity = suggestion

Experimental Features: Use more lenient settings for projects under active development while enforcing stricter rules for production-ready code.
By strategically placing .editorconfig files, you gain fine-grained control over the behavior of analyzers and coding conventions, making it easier to manage large solutions with diverse requirements. Remember, the goal of this analyzer is to guide you toward more secure and maintainable code, but it’s up to you to decide the best pace and priority for addressing these issues in your project. As you can see, an .editorconfig file, or a thoughtfully composed set of such files, provides a centralized and consistent way to manage analyzer behavior across your project or team. For more details, refer to the .editorconfig documentation.
So, I have good ideas for WinForms Analyzers – can I contribute?
Absolutely! The WinForms team and the community are always looking for ideas to improve the developer experience.
If you have suggestions for new analyzers or enhancements to existing ones, here’s how you can contribute:
Open an issue: Head over to the WinForms GitHub repository and open an issue describing your idea. Be as detailed as possible, explaining the problem your analyzer would solve and how it could work.
Join discussions: Engage with the WinForms community on GitHub or other forums. Feedback from other developers can help refine your idea.
Contribute code: If you’re familiar with the .NET Roslyn analyzer framework, consider implementing your idea and submitting a pull request to the repository. The team actively reviews and merges community contributions.
Test and iterate: Before submitting your pull request, thoroughly test your analyzer with real-world scenarios to ensure it works as intended and doesn’t introduce false positives.
Contributing to the ecosystem not only helps others but also deepens your understanding of WinForms development and the .NET platform.
Final Words
Analyzers are powerful tools that help developers write better, more reliable, and secure code. While they can initially seem intrusive—especially when they flag many issues—they serve as a safety net, guiding you to avoid common pitfalls and adopt best practices. The new WinForms-specific analyzers are part of our ongoing effort to modernize and secure the platform while maintaining its simplicity and ease of use. Whether you’re working on legacy applications or building new ones, these tools aim to make your development experience smoother. If you encounter issues or have ideas for improvement, we’d love to hear from you! WinForms has thrived for decades because of its passionate and dedicated community, and your contributions ensure it continues to evolve and remain relevant in today’s development landscape. Happy coding! The post WinForms: Analyze This (Me in Visual Basic) appeared first on .NET Blog. View the full article
-
If you’re attending NDC London 2025, we can’t wait to meet you! From January 29-31, Microsoft will be on-site to showcase the latest in .NET, Azure integration, and AI-powered development. This is your chance to engage with our experts, attend technical sessions, and explore how .NET can help you take your applications to the next level.
What to Expect from Microsoft at NDC London 2025
Keynote from Scott Hanselman: Start the conference with inspiration as Scott Hanselman delivers a keynote exploring the latest trends and innovations in the developer world, highlighting how .NET empowers developers to build the future.
27+ Technical Sessions by Microsoft Leaders and MVPs: Dive into expert-led sessions covering everything from cloud-native development with .NET Aspire to building modern applications with AI and .NET 9. These talks are designed to equip you with the tools and knowledge to level up your development projects.
Visit the Microsoft Booth: Our booth is your gateway to the latest innovations:
Live Demos: See .NET 9 and Azure migration tooling in action.
Interactive Activities: Network with the community and engage with our experts.
Swag Giveaways: Walk away with exclusive Microsoft goodies.
Customer Meetups: Schedule a 1:1 session with Microsoft speakers like Scott Hunter, Scott Hanselman, and others. Whether you’re looking for advice on technical challenges or insights into modernizing your applications with Azure, these meetups are the perfect opportunity to engage directly with our thought leaders.
Join Us at NDC London 2025
Don’t miss your chance to learn, connect, and grow with the .NET team at NDC London. Whether you’re attending to sharpen your skills, discover new tools, or meet fellow developers, the event promises to deliver value for everyone in the community. We’re excited to meet you! Visit our booth, attend our sessions, and book a 1:1 meeting with our experts to make the most of your NDC London experience.
Stay Connected
Follow @dotnet for updates throughout the event, and keep an eye on our blog for post-event highlights. Let’s build the future together at NDC London 2025! The post Meet the .NET Team at NDC London 2025 appeared first on .NET Blog. View the full article
-
Welcome to our combined .NET servicing updates for January 2025. Let’s get into the latest releases of .NET & .NET Framework; here is a quick overview of what’s new in these releases: security improvements, .NET updates, and .NET Framework updates.
Security improvements
The following CVEs have been fixed this month:
CVE-2025-21171: .NET Remote Code Execution Vulnerability (applies to .NET 9.0)
CVE-2025-21172: .NET Remote Code Execution Vulnerability (applies to .NET 8.0, .NET 9.0)
CVE-2025-21176: .NET and .NET Framework Denial of Service Vulnerability (applies to .NET 8.0, .NET 9.0, .NET Framework 4.6.2, 4.7, 4.7.1, 4.7.2, 4.8, 4.8.1)
CVE-2025-21173: .NET Elevation of Privilege Vulnerability (applies to .NET 8.0, .NET 9.0)
.NET January 2025 Updates
Below you will find a detailed list of everything from the .NET release for January 2025, including .NET 9.0.1 and .NET 8.0.12:
Release Notes: 8.0.12 | 9.0.1
Installers and binaries: 8.0.12 | 9.0.1
Container Images: images | images
Linux packages: 8.0.12 | 9.0.1
Known Issues: 8.0 | 9.0
.NET Improvements: ASP.NET Core: 9.0.1; EF Core: 8.0.12; Runtime: 8.0.12 | 9.0.1; SDK: 8.0.12 | 9.0.1
Share feedback about this release in the Release feedback issue.
.NET Framework January 2025 Updates
This month there are security and non-security updates; be sure to browse our release notes for .NET Framework for more details.
See you next month
Let us know what you think of these new combined service release blogs as we continue to iterate to bring you the latest news and updates for .NET. The post .NET and .NET Framework January 2025 servicing releases updates appeared first on .NET Blog. View the full article
-
.NET Aspire enhances the local development process with its powerful orchestration feature for app composition. In the .NET Aspire App Host, you specify all the projects, executables, cloud resources, and containers for your application in one centralized location. When you run the App Host project, .NET Aspire will automatically run your projects and executables, provision cloud resources if necessary, and download and run containers that are dependencies for your app. .NET Aspire 9 added new features to give you more control over how container lifetimes are managed on your local machine to speed up development when working with containers.
Containers with .NET Aspire
Let’s look at a simple example of a .NET Aspire App Host that creates a local Redis container resource, waits for it to become available, and then configures the connection string for the web projects:

// Create a distributed application builder given the command line arguments.
var builder = DistributedApplication.CreateBuilder(args);

// Add a Redis server to the application.
var cache = builder.AddRedis("cache");

// Add the frontend project to the application and configure it to use the
// Redis server, defined as a referenced dependency.
builder.AddProject&lt;Projects.MyFrontend&gt;("frontend")
    .WithReference(cache)
    .WaitFor(cache);

When the App Host is started, the call to AddRedis will download the appropriate Redis image. It will also create a new Redis container and run it automatically. When we stop debugging our App Host, .NET Aspire will automatically stop all of our projects and will also stop our Redis container and delete the associated volume that is typically storing data.
Container lifetimes
While this fits many scenarios, if there aren’t going to be any changes in the container you may just want the container to stay running regardless of the state of the App Host. This is where the new WithLifetime API comes in, allowing you to customize the lifetime of containers.
This means that you can configure a container to start and stay running, making projects start faster because the container will be ready right away and will re-use the volume.

var builder = DistributedApplication.CreateBuilder(args);

// Add a Redis server to the application and set its lifetime to persistent.
var cache = builder.AddRedis("cache")
    .WithLifetime(ContainerLifetime.Persistent);

builder.AddProject&lt;Projects.MyFrontend&gt;("frontend")
    .WithReference(cache)
    .WaitFor(cache);

Now, when we run our App Host, if the container isn’t found it will still create a new container resource and start it; however, if a container with the specified name is found, .NET Aspire will use that resource instead of creating a new one. When the App Host shuts down, the container resource will not be terminated, allowing you to re-use it across multiple runs! You will be able to see that the container is set to Persistent with a little pin icon on the .NET Aspire dashboard.
How does it work?
By default, several factors are taken into consideration when .NET Aspire determines whether to use an existing container or to create a new one when ContainerLifetime.Persistent is set. .NET Aspire will first generate a unique name for the container based on a hash of the App Host project path. This means that a container will be persistent for a specific App Host, but not globally if you have multiple App Host projects. You can specify a container name with the WithContainerName method, which would allow for a globally unique persistent container. In addition to the container name, .NET Aspire will consider the following:
Container image
Commands that start the container and their parameters
Volume mounts
Exposed container ports
Environment variables
Container restart policy
.NET Aspire takes all of this information and creates a unique hash from it to compare with any existing container data. If any of these settings are different, the container will NOT be reused and a new one will be created.
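For example, a persistent container shared across multiple App Host projects might be composed like this (a sketch based on the APIs described above; the "shared-redis" name is an example value, not a requirement):

```csharp
var builder = DistributedApplication.CreateBuilder(args);

// Pin an explicit, globally unique container name so every App Host that
// uses this name re-uses the same persistent Redis container instead of
// one derived from the App Host project path hash.
var cache = builder.AddRedis("cache")
    .WithLifetime(ContainerLifetime.Persistent)
    .WithContainerName("shared-redis");

builder.AddProject<Projects.MyFrontend>("frontend")
    .WithReference(cache)
    .WaitFor(cache);

builder.Build().Run();
```

Keep in mind that the other hashed settings (image, start commands, volume mounts, ports, environment variables, restart policy) still have to match for the existing container to be re-used.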
So, if you are curious why a new container may have been created, it’s probably because something has changed. This is a pretty strict policy that .NET Aspire started with for this new option, and the team is looking for feedback on future iterations.
What about persisting data?
Now that we are persisting our containers between launches of the App Host, it also means that we are re-using the volume that was associated with it. Volumes are the recommended way to persist data generated by containers and have the benefit that they can store data from multiple containers at a time, offer high performance, and are easy to back up or migrate. So, while yes we are re-using the volume, a new container may be created if settings are changed. Having more control of the exact volume that is used and re-used allows us to do things such as:
Maintain cached data or messages in a Redis instance across app launches.
Work with a continuous set of data in a database during an extended development session.
Test or debug a changing set of files in an Azure Blob Storage emulator.
So, let’s tell our container resource what volume to use with the WithDataVolume method. By default it will assign a name based on our project and resource names: {appHostProjectName}-{resourceName}-data, but we can also define the name that will be created and re-used, which is helpful if we have multiple App Hosts.

var cache = builder.AddRedis("cache")
    .WithLifetime(ContainerLifetime.Persistent)
    .WithDataVolume("myredisdata");

Now, a new volume will be created and re-used for this container resource, and even if for some reason a new container is created it will still use the myredisdata volume. Volumes are nice because they offer ideal performance, portability, and security. However, you may want direct access to, and modification of, files on your machine. This is where data bind mounts come in when you need real-time changes.
var cache = builder.AddRedis("cache")
    .WithLifetime(ContainerLifetime.Persistent)
    .WithDataBindMount(@"C:\Redis\Data");

Data bind mounts rely on the filesystem to persist the Redis data across container restarts. Here, the C:\Redis\Data folder on Windows is mounted into the Redis container. Now, in the case of Redis we can also control persistence, configuring when the Redis resource takes snapshots of the data via a specific interval and threshold.

var cache = builder.AddRedis("cache")
    .WithLifetime(ContainerLifetime.Persistent)
    .WithDataVolume("myredisdata")
    .WithPersistence(interval: TimeSpan.FromMinutes(5), keysChangedThreshold: 100);

Here, the interval is the time between snapshot exports and the keysChangedThreshold is the number of key change operations required to trigger a snapshot. Integrations have their own specifications for WithDataVolume and WithBindMount, so be sure to check the integration documentation for the ones you use.
More control over resources
We now have everything set up, persisted, and ready to go in our App Host. As a bonus, .NET Aspire 9 also added the ability to start, stop, and restart resources, including containers, directly from the dashboard! This is really nice for testing the resiliency of your applications without having to leave the dashboard.
Upgrade to .NET Aspire 9
There is so much more in .NET Aspire 9, so be sure to read through the What’s new in .NET Aspire 9.0 documentation and easily upgrade in just a few minutes with the full upgrade guide. There is also newly updated documentation on container resource lifetimes, persisting data with volumes, and the new dashboard features. Let us know what you think of this new feature in .NET Aspire 9 and all of the other great features in the comments below. The post .NET Aspire Quick Tip – Managing Container & Data Lifetime appeared first on .NET Blog. View the full article
-
It has been an absolutely outstanding year of content for .NET from creators around the globe sharing their passion for .NET and the .NET team giving insight into the latest and greatest in the world of .NET. From events, live streams, and plenty of on-demand content dropping on the .NET YouTube nearly every single day, it is a great way to stay up to date, but also get involved and give feedback to the team in real-time. This year, developers tuned into the .NET YouTube more than ever before with over 8 million views of content, left over 6,000 comments, smashed the like button over 120,000 times, and over 50,000 new subscribers joined the channel. There is now more variety of content than ever and that has led to over 700K hours of watch time this year alone! This is over 29,000 days watched or, to go even a step further… nearly 80 years!
Top .NET videos of 2024
It was fun looking back at this year’s top videos as it really was a wide range of content. The most watched video on the channel was Scott Hanselman and David Fowler’s What is C#? video in the C# for Beginners series. However, if we take a look at just new videos released in 2024 then Scott shows up yet again, but this time with Stephen Toub in the first entry in Deep .NET on async/await. That was closely followed up with Dan Roth and Safia Abdalla’s What is ASP.NET Core? that went directly into the new front-end and back-end beginner series that launched this year. There is so much more to recap though as there were over 260 new videos released on the .NET YouTube this year! Let’s take a look at what else the community has been tuning into.
Deep .NET
If you are looking for deep technical content, then look no further than Scott Hanselman and Stephen Toub’s series, Deep .NET. Each episode, Scott and Stephen go in-depth on a topic which has ranged from async/await, Span, RegEx, LINQ, ArrayPool, Parallel Programming, and more.
Recently they have been hosting more .NET team members including Eric Erhardt, who went deep on Native AOT. Scott and Stephen will be back in 2025 with even more Deep .NET episodes and you will hear from even more voices from the .NET team. So, if you love this type of content, be sure to reach out to Scott & Stephen or leave a comment on YouTube and tell them who and what you want to hear about on Deep .NET.
.NET Conf 2024
At this year’s .NET Conf, the 14th entry in the series, we celebrated the launch of .NET 9 and the amazing .NET community. Completely free & virtual, this year’s 3-day event featured a BONUS 4th day of exclusive YouTube premiere sessions and the 3rd iteration of the .NET Conf – Student Zone! With over 90 sessions to explore, there is something for everyone. Not to mention that there is still time to link up with your local community with .NET Conf local events happening through January 2025! .NET Conf wasn’t the only major .NET streaming event this year. In August, .NET Conf: Focus on AI highlighted the latest in AI development with .NET. We also celebrated the launch of .NET Aspire 8.1 with a full day of content at .NET Aspire Developers Day. If you are looking for more cloud content for .NET applications, the Azure Developers YouTube ran events on all things .NET on Azure and another event all about using Azure with .NET Aspire!
ASP.NET Core Beginner Series
Dan Roth and Safia Abdalla re-introduced the world to ASP.NET Core and then went deeper with full beginner series on both front-end web development and back-end API development with .NET. For front-end web development, Dan dives deep into Blazor, Razor, components, render modes, and so much more to build a complete application from scratch. If you are more into API development, then Safia has you covered with all things ASP.NET Core for APIs including building, testing, adding middleware, dependency injection, and so much more.
These are just a few of the new beginner series that launched this year to help developers jumpstart their development journey with .NET.
Introduction to .NET Aspire
Can you believe that it was just 7 months ago that .NET Aspire officially launched, helping developers streamline their development process and build better distributed applications? So much has happened in the world of .NET Aspire including several new releases, the launch of the .NET Aspire Community Toolkit, and plenty of .NET Aspire content. One of the most watched series on the .NET YouTube was Welcome to .NET Aspire, where the team took developers through all of the different parts of .NET Aspire. Looking to get started and want to see how to integrate .NET Aspire into your existing apps? Then Jeff Fritz has you covered with the brand new .NET Aspire beginner series, a 90-minute deep dive into all things .NET Aspire.
Top .NET Live Streams of 2024
Events and on-demand content weren’t the only thing happening on the .NET YouTube. There was a live stream taking place nearly every other day with over 150 taking place in 2024 alone! Let’s take a look at the top stream.
Let’s Learn .NET – Blazor
The Let’s Learn .NET series is a worldwide live stream interactive event where you can follow along at home to learn a new .NET technology and ask questions live. Besides events, this year’s #1 most watched live stream was the Let’s Learn Blazor event walking through the latest and greatest in building full-stack web apps with .NET. That was only the start for Let’s Learn .NET as they continued throughout the year and included .NET Aspire, Containers, and AI with Semantic Kernel. It has been really exciting to see the series grow and now be live streamed in 7 different languages for developers everywhere!
On .NET Live: Modular Monoliths with ASP.NET Core
Steve Smith is iconic when it comes to ASP.NET Core architecture videos and guidance.
His session at .NET Conf every year is consistently one of the most watched and commented on. This year, the On .NET Live team had him on to talk all about making decisions between monolith and microservice-based architecture. Every week the On .NET Live team brings on amazing community members to talk about a wide range of topics, so be sure to browse the entire catalog of live streams.
.NET Community Standups
Hear and interact directly with team members building .NET here at Microsoft. That is what the .NET Community Standups are all about: they are a behind-the-scenes look at .NET’s development and a great way to have your voice heard and get your questions answered. In 2024, over 100K developers tuned in live and another 300K developers caught up on past community standup streams. Here are the top community standups of 2024 for each team:
ASP.NET Core – .NET 9 Roadmap & Blazor Hybrid in .NET 9
Languages & Runtime – C# 13 and beyond
.NET Data – Database Concurrency
.NET AI – Get Started with AI in .NET
.NET MAUI – .NET MAUI and .NET Aspire
That’s a wrap!
Thanks to everyone that created, enjoyed, commented, and smashed that like button in 2024! We have tons of great new content coming your way in 2025, so make sure you go and subscribe to the .NET YouTube if you haven’t yet to stay up to date. If you don’t have access to YouTube, don’t worry, as all .NET videos are also available on Microsoft Learn! What were your favorite videos and live streams of 2024? What are you looking forward to in 2025? Let us know in the comments below. The post Top .NET Videos & Live Streams of 2024 appeared first on .NET Blog. View the full article
-
GeorgeDop joined the communityTOZKatja1 joined the communityWe are currently making an unexpected change to the way that .NET installers and archives are distributed. This change may affect you and may require changes in your development, CI, and/or production infrastructure. We expect that most users will not be directly affected, however, it is critical that you validate if you are affected and to watch for downtime or other kinds of breakage. The most up-to-date status is being maintained at dotnet/core #9671. Please look to that issue to stay current. If you are having an outage that you believe is caused by these changes, please comment on the reference GitHub issue and/or email us at dotnet@microsoft.com. [HEADING=1]Affected domains[/HEADING] We maintain multiple Content Delivery Network (CDN) instances for delivering .NET builds. Some end in[iCODE]azureedge.net[/iCODE]. These domains are hosted by edg.io, which will soon cease operations due to bankruptcy. We are required to migrate to a new CDN and will be using new domains going forward. It is possible that [iCODE]azureedge.net[/iCODE] domains will have downtime in the near-term. We expect that these domains will be permanently retired in the first few months of 2025. Note No other party will ever have access to use these domains. Affected domains: [iCODE]dotnetcli.azureedge.net[/iCODE] [iCODE]dotnetbuilds.azureedge.net[/iCODE] Unaffected domains: [iCODE]dotnet.microsoft.com[/iCODE] [iCODE]download.visualstudio.microsoft.com[/iCODE] [HEADING=1]Our response[/HEADING] We made several changes in response. We have tried to reduce what you need to do to react. In many cases, you won’t need to do anything special. 
New CDNs: Official builds: [iCODE]builds.dotnet.microsoft.com[/iCODE] CI builds: [iCODE]ci.dot.net[/iCODE] Updated .NET install script: The install script now uses the new domains, per dotnet/install-scripts #555 This script has been deployed to the official locations, as described in dotnet-install scripts reference Addressing CI installers: GitHub Actions has been updated to use the new domains, per actions/setup-dotnet #570 We expect that GitHub Enterprise Server will be addressed in January. Azure DevOps [iCODE]UseDotnetTask[/iCODE] will be updated in January We do not yet have a date for updating Azure DevOps Server. [HEADING=1]Domain configuration[/HEADING] We are in the process of changing the configuration of our domains. At present, they may be using a combination of Akamai, Azure Front Door, and edgio. Our highest priority has been maintaining domain operation while we initiate new service with other CDN providers and validate their capability in our environment. We are using Azure Traffic Manager to split traffic between them, primarily for reliability. [HEADING=1]Call to action[/HEADING] There are several actions you can take to determine if you have any exposure to [iCODE]azureedge.net[/iCODE] retirement. Search your source code, install scripts, Dockerfiles and other files for instances of [iCODE]azureedge.net[/iCODE]. We also noticed that there is a lot of use of our storage account: [iCODE]dotnetcli.blob.core.windows.net[/iCODE]. Please also search for it. The storage account is unaffected, however, it would be much better for everyone if you used our new CDN. It will deliver better peformance. Update [iCODE]dotnetcli.azureedge.net[/iCODE] to [iCODE]builds.dotnet.microsoft.com[/iCODE] Update [iCODE]dotnetcli.blob.core.windows.net[/iCODE] to [iCODE]builds.dotnet.microsoft.com[/iCODE] Note The new CDN is path-compatible with those servers. It’s only the domain that needs to change. 
Please check for copies of the install script that you may have within your infrastructure. You will need to update it. You will need to move to the latest version of the GitHub Action and Azure DevOps Task installers to ensure that you are protected from downtime. Please check firewall rules that might prevent you from accessing our new CDNs, similar to this conversation. [HEADING=1]Closing[/HEADING] We are sorry that we are making changes that affect running infrastructure and asking you to react to them during a holiday period. As you can see, the need for these changes was unexpected and we are trying to make the best choices under a very compressed schedule. We are hoping that the mitigations that we put into place will result in most users being unaffected by this situation. With every crisis, there are opportunities for learning. We realized that we are missing public documentation on how to best use all of the installation-related resources we provide, to balance reliability, security, performance, and productivity. We will be working on producing this documentation in the new year. The post Critical: .NET Install links are changing appeared first on .NET Blog. Continue reading...We are currently making an unexpected change to the way that .NET installers and archives are distributed. This change may affect you and may require changes in your development, CI, and/or production infrastructure. We expect that most users will not be directly affected, however, it is critical that you validate if you are affected and to watch for downtime or other kinds of breakage. The most up-to-date status is being maintained at dotnet/core #9671. Please look to that issue to stay current. If you are having an outage that you believe is caused by these changes, please comment on the reference GitHub issue and/or email us at dotnet@microsoft.com. Affected domains We maintain multiple Content Delivery Network (CDN) instances for delivering .NET builds. 
In 2024, the .NET blog continued to be a central hub of knowledge, delivering valuable insights and updates straight from the source. With over 130 posts and more than 260,000 words published, these blogs remain a critical resource for developers looking to stay up-to-date with the latest advancements in .NET.
Alright, let’s explore the top blogs from the .NET team that made the biggest impact this year. [HEADING=1]Announcing .NET 9[/HEADING] .NET 9 is here! It is the most productive, modern, secure, intelligent, and performant release of .NET yet! We started the year by sharing our vision for .NET 9 and our strategy for engaging deeper with the developer community around the release. This meant that we pivoted our content on the blog to focus on .NET 8, the current shipping version of .NET at the time. This led to a new form of extremely detailed release notes on GitHub for every preview release. In addition, we focused on ensuring that as .NET 9 progressed every feature was documented and maintained on Microsoft Learn. This meant that on launch day developers could not only read the announcement on .NET 9, but they could also dive deep into documentation around all parts of what’s new in .NET 9 including the Runtime, Libraries, SDK, C# 13, F# 9, ASP.NET Core, .NET Aspire, .NET MAUI, EF Core, WPF, and Windows Forms. Want to go deeper on all things .NET 9? Be sure to browse all of the blog entries this year covering .NET 9 updates, videos from .NET Conf, and of course the .NET Conf 2024 Keynote where you can watch me walk across the beautiful new bridge on the Microsoft campus for 5 minutes straight! [HEADING=1]Performance Improvements in .NET 9[/HEADING] It wouldn’t be a new release without Stephen Toub’s complete deep dive into the vast performance improvements in .NET. When printed to PDF the blog spans over 320 pages covering the over 1,000 performance-related pull requests in .NET 9. From enhancements to garbage collection, Native AOT, threading, reflection, LINQ, loops, JIT, and so much more it is an absolute must-read. If you want to spend your holiday break enjoying the entire history of performance improvements in Toub’s ongoing series, then check out previous posts on .NET 8, .NET 7, .NET 6, .NET 5, .NET Core 3.0, .NET Core 2.1, and .NET Core 2.0.
If you are like me and would rather watch a video on all these improvements, then Toub has you covered again with his session from .NET Conf 2024! [HEADING=1]Introducing ASP.NET Core metrics and Grafana dashboards in .NET[/HEADING] .NET Aspire includes a fantastic developer dashboard for OpenTelemetry, but did you know you can easily set up your own custom Grafana dashboards? This blog post introduces new metrics in .NET for ASP.NET Core, including HTTP request counts, duration, and error handling diagnostics. It highlights the pre-built Grafana dashboards for monitoring apps in production, and how you can create custom metrics and use tools like [iCODE]dotnet-counters[/iCODE] for live metrics viewing. [ATTACH type="full" alt="A Grafana dashboard showing metrics"]6085[/ATTACH] [HEADING=1]General Availability of .NET Aspire: Simplifying .NET Cloud-Native Development[/HEADING] .NET Aspire is officially here! A new stack designed to simplify the development of .NET projects with tools, templates, and integrations to streamline building distributed applications. Key features include the .NET Aspire Dashboard for viewing OpenTelemetry data, support for various databases and cloud services, and the ability to orchestrate local development with the App Host project. Take an in-depth look at how to get started with .NET Aspire using Visual Studio, the .NET CLI, or Visual Studio Code. Also, be sure to browse through all .NET Aspire blog posts, the What’s new in .NET Aspire 9 session from .NET Conf, the brand new .NET Aspire beginner series, and free Microsoft Learn training and .NET Aspire credential. [HEADING=1]Introducing .NET Smart Components – AI-powered UI Controls[/HEADING] .NET Smart Components are a set of AI-powered UI controls for .NET apps, initially available for Blazor, MVC, and Razor Pages. These components include Smart Paste, Smart TextArea, and Smart ComboBox, which enhance user productivity by automating form filling, autocompleting text, and providing intelligent suggestions.
You can try these components today and check out full sample apps and provide feedback to help improve them on GitHub. [ATTACH type="full" alt="Animation of copy and pasting an address with Smart Paste"]6086[/ATTACH] Since the first announcements of Smart Components, an entire ecosystem has grown around the initiative. Read about the thriving smart components ecosystem from popular component vendors to easily add AI to your .NET apps. [HEADING=1]C# 12 Blog Series[/HEADING] The team also experimented with some new series on the blog including “Refactor your C# code” from David Pine who explored various C# 12 features and how to integrate them into your everyday coding including: Primary constructors Collection expressions Aliasing any type Default lambda parameters [HEADING=1]AI + .NET Blogs[/HEADING] It is now easier than ever to find blogs on the latest in AI development with .NET with the AI category on the .NET blog. You can dive into great posts on big announcements, getting started, and in-depth tutorials on using the latest models. Here are some of my favorites: Introducing Microsoft.Extensions.AI Announcing the stable release of the official OpenAI library for .NET How we build GitHub Copilot into Visual Studio eShop infused with AI Using local AI models with .NET Aspire [HEADING=1]Go Deep on Developer Workloads[/HEADING] There is so much more on the .NET blog to revisit with great content across our workloads for building mobile, desktop, and web applications with .NET. Here are some of my top picks across .NET MAUI, ASP.NET Core, Blazor, Entity Framework and more. .NET MAUI welcomes Syncfusion open-source contributions Learn to build your first Blazor Hybrid app! Creating bindings for .NET MAUI with Native Library Interop MongoDB EF Core Provider: What’s New?
How to use a Blazor QuickGrid with GraphQL The FAST and the Fluent: A Blazor Story OpenAPI document generation in .NET 9 Adding .NET Aspire to your existing .NET Apps Build & test resilient apps in .NET with Dev Proxy Note You can easily view all recent posts for our top focus areas like .NET Aspire, AI, etc. by using the dropdown menu in the blog navigation. [HEADING=1]A fresh new look![/HEADING] You may have noticed a fresh new look for all of the developer blogs here at Microsoft. This brand new look and feel comes with some great new features including a full table of contents, a Read Next section, easier sharing, and improved navigation. [ATTACH type="full" alt="Screenshot of blog page with items circled including TOC"]6087[/ATTACH] There you have it, the top .NET blog posts of 2024! What were your favorites? What do you want to see more of in 2025? Let us know and share your favorite .NET blogs in the comments below. Don’t forget to subscribe to the blog in your favorite RSS reader or through e-mail notifications so you never miss a .NET blog again. Don’t forget to go download .NET 9 today! The post Top .NET Blogs Posts of 2024 appeared first on .NET Blog.
We're excited to announce an all new free plan for GitHub Copilot, available for everyone today in VS Code. All you need is a GitHub account. No trial. No subscription. No credit card required. Enable GitHub Copilot Free You can click on the link above or just enable GitHub Copilot right from within VS Code like so... With GitHub Copilot Free you get 2000 code completions/month. That's about 80 per working day - which is a lot. You also get 50 chat requests/month, as well as access to both GPT-4o and Claude 3.5 Sonnet models. If you hit these limits, ideally it's because Copilot is doing its job well, which is to help you do yours! If you find you need more Copilot, the paid Pro plan is unlimited and provides access to additional models like o1 and Gemini (coming in the new year). With this announcement, GitHub Copilot becomes a core part of the VS Code experience. The team has been hard at work, as always, improving that experience with brand new AI features and capabilities. Let's take a look at some of the newer additions to GitHub Copilot that dropped in just the past few months. This is your editor, redefined with AI. Work with multiple files using Copilot Edits Copilot Edits is a multi-file editing experience that you can open from the top of the chat side bar. Given a prompt, Edits will propose changes across files including creating new files when needed. This gives you the conversational flow of chat combined with the power of Copilot's code generation capabilities.
The result is something you have to try to believe. Try this: Build a native mobile app using Flutter. I built a game last weekend and I've never used Flutter in my life. Multiple models, your choice Whether you're using Chat, Inline Chat, or Copilot Edits, you get to decide who your pair programmer is. Try this: Use 4o to generate an implementation plan for a new feature and then feed that prompt to Claude in GitHub Copilot Edits to build it. Custom instructions Tell GitHub Copilot exactly how you want things done with custom instructions. These instructions are passed to the model with every request, allowing you to specify your preferences and the details that the model needs to know to write code the way you want it. You can specify these at the editor or project level. We'll even pick them up automatically if you include a .github/copilot-instructions.md file in your project. These instructions can easily be shared with your team, so everyone can be on the same page - including GitHub Copilot. For example... ## React 18 * Use functional components * Use hooks for state management * Use TypeScript for type safety ## SvelteKit 4 * Use SSR for dynamic content rendering * Use static site generation (SSG) for pre-rendered static pages. ## TypeScript * Use consistent object property shorthand: const obj = { name, age } * Avoid implicit any Try this: Ask Copilot to generate the command to dump your database schema to a file and then set that file as one of your custom instructions. Full project awareness GitHub Copilot has AI powered domain experts that you can mention with the @ syntax. We call these "participants". The @workspace participant is a domain expert in the area of your entire codebase. GitHub Copilot will also do intent detection (as seen in the video) and include the @workspace automatically if it sees you are asking a question that requires full project context.
Try this: Type /help into the chat prompt to see a list of all the participants in GitHub Copilot and their various areas of expertise, as well as slash commands that can greatly reduce prompting. Naming things and other hard problems They say naming things is one of the hardest problems in computer science. Press F2 to rename something, and GitHub Copilot will give you some suggestions based on how that symbol is implemented and used in your code. Try this: If you don't know what to call something, don't overthink it. Just call it foo and implement it. Then hit F2 and let GitHub Copilot suggest a name for you. Speak your mind Select the microphone icon to start a voice chat. This is powered by the free, cross-platform VS Code Speech extension that runs on local models. No 3rd party app required. Try this: Use Speech with GitHub Copilot Edits to prototype your next app. You can literally talk your way to a working demo. Be a terminal expert With terminal chat, you can do just about anything in your terminal. Press Cmd/Ctrl + i while in the VS Code terminal and tell GitHub Copilot what you want to do. Copilot can also explain how to fix failed shell commands by analyzing the error output. For instance, I know that I can use the ffmpeg library to extract frames from videos, but I don't know the syntax and flags. No problem! Try this: The next time you get an error in your terminal, look for the sparkle icon next to your prompt. Select it to have GitHub Copilot fix, explain, or even auto-correct the shell command for you. No fear of commitment No more commits that say "changes". GitHub Copilot will suggest a commit message for you based on the changes you've made and your last several commit messages. You can use custom instructions for commit generation to format the messages exactly the way you want. Try this: Go beyond commits.
Install the GitHub Pull Requests and Issues extension and you can generate pull request descriptions, get summaries of pull requests and even get suggested fixes for issues. All without leaving VS Code. Extensions are all you need Every VS Code extension can tie directly into the GitHub Copilot APIs and offer a customized AI experience. Check out MongoDB with their extension that can write impressively complex queries, use fuzzy search and a lot more... Try this: Build your own extension for GitHub Copilot using GitHub Copilot! We've created some new tutorials that show you how to build a code tutor chat participant or generate AI-powered code annotations. A vision for the future This last one is a preview of something we're adding to GitHub Copilot soon, but it's way too cool not to show you right now. Install the Vision Copilot Preview extension and ask GitHub Copilot to generate an interface based on a screenshot or markup. Or use it to generate alt text for an image. Try this: Mock up a UI using Figma or Sketch (or PowerPoint - it's ok if you do that. I do it too). Then use @vision to generate the UI. You can even tell it which CSS framework to use. Note: Vision is in preview today and requires you to have your own OpenAI, Anthropic, or Gemini API key. The key will not be required when we release it as part of GitHub Copilot. Coming Soon! Keeping up with GitHub Copilot There's so much more of GitHub Copilot we want to show you, but nothing can replace the experience of trying it for yourself. If you're just getting started, we recommend you check out these 3 short videos to bring you up to speed quickly on the Copilot UI, as well as learning some prompt engineering best practices. We ship updates and new features for GitHub Copilot every month. The best way to keep up with the latest and greatest in AI coding is to follow us on X, Bluesky, LinkedIn, and even TikTok. We'll give you the updates as they drop - short and sweet - right in your feed.
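As an aside to the terminal-chat example above, which mentions extracting frames from a video with ffmpeg without knowing the syntax and flags: the answer Copilot typically produces is a one-liner. A hedged sketch (input.mp4 is a placeholder filename; the command string is built here rather than executed so no video file is required):

```shell
# Typical ffmpeg frame-extraction command: -vf fps=1 emits one frame
# per second of video, and %04d numbers the outputs frame_0001.png,
# frame_0002.png, and so on. input.mp4 is a placeholder.
frame_cmd='ffmpeg -i input.mp4 -vf fps=1 frame_%04d.png'
echo "$frame_cmd"
```

Raising the fps value (e.g. fps=10) extracts more frames per second; dropping the -vf filter entirely dumps every frame.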
And if you've got feedback, we'd love to hear it. Feel free to @ us on social or drop an issue or feature request on the GitHub Copilot extension issues repo. GitHub Copilot in other places As part of the free tier, you will also be able to use GitHub Copilot on GitHub.com. While we work with GitHub to build the Visual Studio Code experience, Copilot itself is not exclusive to VS Code. You may be wondering about editors like Visual Studio. Will those users get a free Copilot offering as well? Yes. Absolutely. Check out this blog post from the VS team on what works today and what's coming shortly. The AI code editor for everyone 2025 is going to be a huge year for GitHub Copilot, now a core part of the overall VS Code experience. We hope that you'll join us on the journey to redefine the code editor. Again. Enable GitHub Copilot Free At .NET Conf 2024 we celebrated the official launch of .NET 9 alongside groundbreaking announcements across the entire .NET ecosystem and a deeper dive into the world of .NET for developers worldwide. Organized by Microsoft and the .NET community, the event was a huge success, providing .NET developers with 3 days of incredible, free .NET content. For the first time ever, this year also included a bonus “day 4” of YouTube premieres following the initial 3 days, which brought even more great content to .NET developers. If you have been wondering if James really did walk all the way across the new bridge on the Microsoft campus for the keynote, he did! With help from Cameron and Maddy they recorded one continuous cut for the keynote, and while we did walk slow and steady for the recording it actually takes around 5 minutes to walk across the bridge at a standard pace.
[ATTACH type="full" alt="Keynote filming"]6080[/ATTACH] [HEADING=1]On-Demand Recordings[/HEADING] If you missed the event, feel free to catch up on the sessions via our on-demand playlists on YouTube or Microsoft Learn. This year, we streamed 92 sessions over 4 days with most of those sessions delivered live. Day 1 featured the official release of .NET 9, including a 1-hour keynote and sessions led by the .NET team to introduce new features and enhancements related to .NET 9 including topics like .NET Aspire, AI, .NET MAUI, web development, Visual Studio, and more. Day 2 provided a deeper dive into .NET capabilities, continuously broadcast for 24 hours to reach all time zones. Day 3 was a continuation of the 24-hour broadcast, offering a wide range of sessions from speakers around the world. Day 4 was a new “bonus” addition this year, featuring pre-recorded YouTube premieres that covered a range of topics from the .NET community. [HEADING=1].NET 9 Announcements[/HEADING] The kickoff of .NET Conf 2024 included the launch of .NET 9, the most productive, modern, secure, intelligent, and performant release of .NET yet. Full details on the .NET 9 release can be found in the Announcing .NET 9 blog post. Other major announcements that were made during the event included: Visual Studio 2022 v17.12 GA .NET Aspire Community Toolkit Azure Functions Support for .NET Aspire (Preview) Microsoft.Extensions.AI – .NET AI Library (Preview) Syncfusion Toolkit for .NET MAUI [HEADING=1]Explore Slides & Demo Code[/HEADING] Access the PowerPoint slide decks, source code, and more from our amazing speakers on the official .NET Conf 2024 GitHub page. Plus, grab your 2024/DigitalSwag at main · dotnetConf/2024! [HEADING=1]Upskill on .NET Aspire[/HEADING] .NET Aspire training and credential on Microsoft Learn: To earn the Build distributed apps with .NET Aspire credential learners demonstrate the ability to build distributed apps with .NET Aspire.
Through the training and credential, learners will learn the following: Add .NET Aspire to a solution Configure service discovery Configure components Monitor resources with the .NET Aspire dashboard Create tests with .NET Aspire Prepare for deployment .NET Aspire for Beginners video series: Are you completely new to .NET Aspire? This beginner video series teaches you how to get started with .NET Aspire and implement it into your applications. [HEADING=1]Customer Stories[/HEADING] There was an astonishing amount of customer evidence presented this year at .NET Conf including some exciting videos and mentions during the keynote presentation. Please see below for some of the customer evidence highlights. Microsoft Copilot Discover how a small team of five developers at Microsoft transformed the Copilot backend in just four months using .NET & .NET Aspire. Join Pedram Rezaei, a developer on the Copilot backend team, as he shares their journey to improve performance, scalability, and reliability for millions of users worldwide. Whether you’re a .NET developer or interested in building scalable, reliable services efficiently, this inspiring story demonstrates what’s possible with the right tools and a dedicated team. Fidelity Investments Discover Fidelity’s latest innovation in trading technology with Active Trader Pro, built on Microsoft’s .NET MAUI platform. This powerful, cross-platform trading solution brings seamless performance to both Windows and Mac users, backed by real-time data streaming, advanced tools, and Microsoft’s support. Join Fidelity’s SVP Mark Burns as he shares how .NET MAUI enables Fidelity to deliver a fast, reliable, and scalable experience for active traders everywhere. Chevron Phillips Chemical We partnered with Chevron Phillips Chemical Company to showcase their migration story with .NET and Azure. We presented the slide below during the keynote and also had their Cloud Architect Manager present live at our .NET session at Ignite.
KPMG KPMG is another company we have been partnering with to promote the positive outcomes of utilizing .NET and Azure, specifically for KPMG Clara. We showcased the slide below. Xbox The Xbox team recently started using .NET Aspire in Xbox services as they are going through a large migration to the latest .NET. They shared how .NET Aspire has helped them speed up and tighten their inner development loop. [HEADING=1]Local .NET Conf Events[/HEADING] The learning journey continues with community-run events. Join us in celebrating .NET around the globe! Find an event near you. [ATTACH type="full" alt=".NET Conf Local Events"]6081[/ATTACH] [HEADING=1]Join the Conversation[/HEADING] Share your thoughts and favorite moments from .NET Conf 2024 in the comments below or on social media using #dotNETConf2024. Let’s keep the conversation going! [ATTACH type="full" alt="🎥"]6082[/ATTACH] Catch Up on Sessions: Watch all the sessions you missed or rewatch your favorites on on-demand playlists or Microsoft Learn. [ATTACH type="full" alt="🚀"]6083[/ATTACH] Get Started with .NET 9: Download the latest release of .NET 9 and explore the groundbreaking features it has to offer. [ATTACH type="full" alt="📚"]6084[/ATTACH] Upskill on .NET Aspire: Begin your journey with .NET Aspire by watching the beginner video series and earning the Microsoft Learn credential. Let’s continue building, innovating, and empowering developers with .NET! The post .NET Conf 2024 Recap – Celebrating .NET 9, AI, Community, & More appeared first on .NET Blog.