Microsoft Build 2025 - Sessions

Azure AI Foundry: The AI app and Agent Factory - Yina Arenas & Scott Hanselman
Introduction
Azure AI Foundry is the AI factory, and Scott Hanselman took the opportunity to record an episode of his podcast with Yina during the session. Azure AI Foundry is your AI app and agent factory for bringing AI into apps, helping developers take an idea to code and then to production. The Hanselminutes podcast factory is an example: Hanselminutes is a podcast Scott has been doing for many years, with almost one thousand episodes and almost five hundred hours of content. Scott's aim is to use AI in the parts that suck, such as show notes. The Hanselminutes workflow is guest intake, which is a bunch of toil including collecting bios, socials and release forms, along with preparation, recording, editing, packaging (exporting audio, writing show notes and generating transcripts), publishing, promotion (generating social media copy or localisation) and an archival step. The first step was to transcribe the audio files and start erasing toil at scale, while still retaining control and not losing quality.
Agentic AI
Agentic AI can erase toil at scale. Unstructured content glut such as audio, video and documents can be hard to manually search and transcribe, but an agentic pattern can use multimodal and speech agents to auto-ingest and label content. Repetitive knowledge packaging such as show notes, documentation and FAQs is copy-paste drudgery with potential for style drift, but an agentic pattern can use LLM-powered compose agents with brand guardrails. Reference and link curation suffers from scattered sources and stale links, but retrieval-augmented agents can pull fresh, de-duplicated references. Guest and topic metadata upkeep is error-prone when bios are hand-entered, but a knowledge-graph agent can enrich and reuse data. Quality and compliance checks need spot-checking by humans, but could use policy-enforcing guard-agents to catch issues early. Analytics and feedback loops lead to siloed dashboards without AI, but observability agents can emit unified metrics and auto-tune prompts.
Foundry Models
Azure AI Foundry offers many models; the explosion of foundational models creates new choices and opportunities for developers, with over 11,000 models available, and Microsoft is bringing unified access so you can easily switch models in code. You can see what models are available in the model catalogue, including announcements and leaderboards, and then filter the models by features such as industry or capabilities. You can also use the Foundry Agent to help pick the correct model for your use case, or check the leaderboards to see which models perform best on benchmarks including quality vs cost, quality vs safety and quality vs throughput.
You can also use your own data, or an AI-generated data set, to evaluate what the models can offer. There is also the model router, which takes selection toil off your head; it currently supports OpenAI models, with more to come, and you can try out models in the Chat playground. Ideally the cost needs to be as low as possible, so you can fine-tune a model on some episodes and then process later episodes at a lower cost. Foundry Local can be used to run AI models locally on your own hardware and GPU, for example taking information obtained online and processing it locally to produce a bio for a podcast episode.
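To make the "switch models in code" idea concrete, here is a minimal sketch using the Microsoft.Extensions.AI abstractions over an Azure AI Foundry (Azure OpenAI) endpoint. The endpoint, key and deployment name are placeholders, and method names follow recent Microsoft.Extensions.AI releases, so they may differ slightly by package version.

```csharp
// A minimal sketch, assuming an Azure OpenAI endpoint exposed by your Foundry resource.
using Azure;
using Azure.AI.OpenAI;
using Microsoft.Extensions.AI;

var azureClient = new AzureOpenAIClient(
    new Uri("https://<your-foundry-resource>.openai.azure.com/"),
    new AzureKeyCredential("<api-key>"));

// Switching models is just a different deployment name behind the same IChatClient.
IChatClient chatClient = azureClient.GetChatClient("gpt-4o-mini").AsIChatClient();

var response = await chatClient.GetResponseAsync(
    "Write three short show-note bullet points for a podcast episode about WebAssembly.");
Console.WriteLine(response.Text);
```

Because the rest of the app only sees IChatClient, swapping the deployment name (or the whole provider) is a one-line change rather than a rewrite.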
Agents
What is an agent? An agent takes input such as system events, user messages and agent messages from other agents; it can then use an LLM, instructions and tools, which can make tool calls for retrieval, actions or memory, and its output can be agent messages or tool results. You can do process automation and have an agent that acts and makes API calls, supports multimodal inputs such as text, speech or images, and can be invoked by other agents.
Multi-agent orchestration means you don't need to cram a lot of functionality into one agent; instead you create agents with specific functionality and orchestrate them together to complete a process or task. You can give one agent the abilities of another by connecting them, and you can have multi-agent workflows with a human in the loop. The Hanselminutes podcast factory can have a voice-enabled orchestrator for guest intake, with agents for guest sourcing, bio generation and scheduling; for packaging, agents for transcript and show-notes generation along with a link-resolver agent; and for promotion, agents for social copy generation, content localisation and scheduling of an episode of the podcast.
Build agents your way with platform integrations for an agentic flow: infrastructure as a service, with code handling every detail on Azure AI Infrastructure, where you can bring your own frameworks; platform as a service, using code with managed services via Azure AI Foundry and Foundry Agent Service; and software as a service, with a drag-and-drop UI using Copilot Studio, the instant agent runtime. Azure AI Foundry Agent Service enables you to create agents declaratively and use different sets of models and tools, delivering enterprise readiness with trust for data, networking and security, plus model choice and tools for enterprise connectivity.
Activities you can do with an agent include agentic retrieval with Azure AI Search to ask questions of the large amount of content available. Agents can be built as needed and can include the content required, along with link verification to make sure links are correct. You can use Semantic Kernel to perform actions in an agent, including calling other agents in a workflow; workflows are defined in YAML, and to understand one you can visualise it as a Mermaid diagram in Visual Studio Code. You can add knowledge to models, including files and Bing Custom Search, for grounding AI models so that a chat bot is restricted and focused on what you need it to do; this lets you keep a model small and ground it only in the knowledge you need.
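As an illustration of an agent performing actions with Semantic Kernel, here is a minimal sketch of an agent with one tool, loosely modelled on the link-resolver idea above. The deployment, endpoint and function are illustrative, not from the session, and the Semantic Kernel agent API shapes have shifted between releases, so treat this as a sketch.

```csharp
// A minimal Semantic Kernel agent sketch, assuming an Azure OpenAI chat deployment.
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Agents;

var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion("gpt-4o", "https://<resource>.openai.azure.com/", "<api-key>")
    .Build();

// A function the agent can call to verify links before they go into show notes.
kernel.Plugins.AddFromFunctions("links",
[
    KernelFunctionFactory.CreateFromMethod(
        async (string url) =>
        {
            using var http = new HttpClient();
            var response = await http.GetAsync(url);
            return response.IsSuccessStatusCode ? "ok" : $"broken ({(int)response.StatusCode})";
        },
        "CheckLink", description: "Checks whether a URL is reachable.")
]);

var agent = new ChatCompletionAgent
{
    Name = "LinkResolver",
    Instructions = "Verify every link you are given and flag any that are broken.",
    Kernel = kernel,
    // Let the model invoke the CheckLink tool automatically.
    Arguments = new KernelArguments(new PromptExecutionSettings
    {
        FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
    }),
};

await foreach (var item in agent.InvokeAsync("Check https://hanselminutes.com"))
{
    Console.WriteLine(item.Message);
}
```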
Observability
Foundry Observability is a set of tools supporting the entire development lifecycle, aligned with your end-to-end workflow to power visibility, monitoring and optimisation across the entire AI development lifecycle. When going into production you want to be able to continuously monitor your solution, generate traces and have debugging capabilities to hit the right reliability level for your application, with support for OpenTelemetry rather than log files parsed with a regular expression, and with the ability to use existing observability systems such as .NET Aspire. You can evaluate relevance, intent resolution and even task adherence to make sure agents are only doing things they should be, and that if agents go out to the web they don't perform any actions they shouldn't. There is also integration with CI/CD pipelines to run evaluations, and integration with application analytics for monitoring total tokens, inference calls and errors, so you can see quality improve over time as you make changes.
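For the OpenTelemetry point, here is a minimal sketch of wiring traces and metrics into an ASP.NET Core app so they can flow to whatever backend you use (the Aspire dashboard, Foundry Observability's tooling, or anything OTLP-compatible). The instrumentation packages named in the comments are standard OpenTelemetry .NET packages; the export destination is configured via the usual OTEL_* environment variables.

```csharp
// A minimal sketch using OpenTelemetry.Extensions.Hosting and the ASP.NET Core /
// HttpClient instrumentation packages plus the OTLP exporter.
using OpenTelemetry.Metrics;
using OpenTelemetry.Trace;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddOpenTelemetry()
    .WithTracing(tracing => tracing
        .AddAspNetCoreInstrumentation()   // inbound request spans
        .AddHttpClientInstrumentation()   // outbound call spans, e.g. to model endpoints
        .AddOtlpExporter())
    .WithMetrics(metrics => metrics
        .AddAspNetCoreInstrumentation()
        .AddOtlpExporter());

var app = builder.Build();
app.MapGet("/", () => "Hello");
app.Run();
```

Note that .NET Aspire's service defaults project wires up essentially this configuration for you.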
End-to-end security for your AI investments is important, with Microsoft Purview for data, Microsoft Entra for authentication and authorisation, Microsoft Intune and Microsoft Defender for security, plus Microsoft Sentinel. Agents you create get specific identities in Entra, where you can assign entitlements and do governance, and you can send any data to Purview.
Conclusion
The Hanselminutes podcast factory of agents can reduce the workload to minutes using small customised fine-tuned models, where you just need to make small corrections or changes. Azure AI Foundry can infuse applications like the podcast factory with AI behaviour, deliver a return on your effort, and use AI to reduce toil and take away the tasks you don't want to do so you can focus on the fun stuff. Companies are creating agents to make significant gains in their processes and create the future of AI with Azure AI Foundry.
An Overview of Windows AI Foundry - Tucker Burns & Dian Hartono
Introduction
Why do customers want to use local AI? They don't always want to run in the cloud but on client devices, and we are at a turning point for local models; the power comes with flexibility, supporting a hybrid approach for the best of both worlds of client and cloud. There is a great need for local AI: privacy and security, including governance, allow full control of user data; some latency and performance requirements are only achievable without network latency, and running models locally makes this viable; models can run close to sensors and in places without reliable internet where high availability is still needed; not all applications need cloud scale; and models can run in the background on newer hardware.
Windows AI Foundry has many capabilities and features and supports built-in and third-party AI models, making it a versatile platform. Built-in AI APIs include natural language and vision, with powerful tools for integrating these with ease, along with customisation via LoRA for Phi Silica and Knowledge Retrieval to tailor capabilities to specific developer needs. The open-source ecosystem is embraced with Foundry Local and the model catalogue, which make it easy to use local models on Windows, with SDKs and APIs to integrate them and switch between local and cloud inference. Windows ML enables execution of AI models across CPU, GPU and NPU while minimising dependency management. Windows AI Foundry provides versatility and capabilities for first-party and third-party models on Windows.
Windows ML
You can bring your own model, including PyTorch and ONNX; the AI Toolkit helps you convert, quantise and evaluate models; dependencies are simplified, including execution providers, drivers and the ONNX runtime out of the box; and it is built for speed to easily scale AI workloads, providing powerful native Windows APIs along with reference documentation and sample code. Windows ML is the foundation Foundry Local is built on and provides developers a high degree of flexibility about which models to run and where, letting you leverage, prepare and use models within your applications.
The developer experience is bringing in PyTorch models with the AI Toolkit for Visual Studio Code, performing conversion and quantisation to create a hardware-optimised model that can then use the Windows ML APIs with NPU, CPU or GPU execution providers. There is the AI Dev Gallery application with demos and examples that show what you can do with Windows AI Foundry, such as samples to classify an image; you can select a downloadable model you want to use or add an ONNX model from the file system. You can choose which device to run on from NPU, GPU or CPU, or use smarter settings for efficiency, performance or power, which will pick the right hardware for the requirements; in the future, information can be pulled from the model file to determine where it should run from the available execution providers.
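For a concrete feel of running a converted ONNX model from .NET, here is a minimal sketch using the ONNX Runtime C# API, which this layer builds on; the model path, input name and tensor shape are placeholders for whatever your converted model expects.

```csharp
// A minimal sketch with Microsoft.ML.OnnxRuntime; the DirectML line shows how an
// alternative execution provider would be registered (requires the DirectML package).
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

using var options = new SessionOptions();
// Execution providers are tried in registration order; CPU is always available.
// options.AppendExecutionProvider_DML(); // e.g. GPU via DirectML

using var session = new InferenceSession("model.onnx", options);

var input = new DenseTensor<float>(new[] { 1, 3, 224, 224 }); // NCHW image tensor
var inputs = new List<NamedOnnxValue> { NamedOnnxValue.CreateFromTensor("input", input) };

using var results = session.Run(inputs);
var output = results.First().AsEnumerable<float>().ToArray();
Console.WriteLine($"Top score: {output.Max()}");
```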
Windows AI APIs
AI Dev Gallery allows you to try out different capabilities. You can explore the Windows AI APIs, such as generating text, recognising text with OCR, or imaging APIs such as Image Super Resolution to scale images to a specific size, Image Segmentation, which removes the background, and Object Erase, which removes an object from an image. Image Description generates a description of an image; you can see an example in AI Dev Gallery and explore the API that powers this behaviour. It follows similar patterns to the other APIs: it will check that the model feature is available, and while each API differs slightly when calling the model, you can copy the code and integrate it into your application, export the sample as a Visual Studio project, or try it out with your own images; a sketch of the availability-check pattern follows.
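The gallery samples follow a "check, ensure, use" pattern. The sketch below is based on the Windows App SDK image description API, but the exact namespaces, type and method names have changed between Windows App SDK releases, so treat all identifiers here as illustrative and check the sample code in AI Dev Gallery for your installed version.

```csharp
// Illustrative sketch of the availability-check pattern; names are approximate.
using Microsoft.Windows.AI;          // AIFeatureReadyState (assumed)
using Microsoft.Windows.AI.Imaging;  // ImageDescriptionGenerator (assumed)

// 1. Check that the on-device model is present; trigger download via Windows Update if not.
if (ImageDescriptionGenerator.GetReadyState() == AIFeatureReadyState.NotReady)
{
    await ImageDescriptionGenerator.EnsureReadyAsync();
}

// 2. Create the generator and describe an image (imageBuffer: your loaded image).
using var generator = await ImageDescriptionGenerator.CreateAsync();
var result = await generator.DescribeAsync(imageBuffer); // method/result names illustrative
Console.WriteLine(result.Description);
```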
Windows AI APIs are powered by in-box models available on Copilot+ PCs and distributed via Windows Update; they are a level of abstraction from the models, so developers don't have to care about the exact model, only the capability, delivered as part of the Windows App SDK. If you want to do more, you can customise on-device models to make inbox models work better for your domain: for example, with a brand voice that sounds like your own company, using the LoRA customisation path, which can also be used for task adaptation to optimise output or formatting for your own workflows, or for domain-specific language such as legal, medical or technical terms; or with knowledge retrieval to provide private knowledge grounding and answer based on your documents and content. LoRA fine-tuning can nudge a model towards a particular tone or task for your domain, while knowledge retrieval, powered by semantic search, grounds answers in local knowledge.
The LoRA adapter developer flow starts with deciding the evaluation criteria, trying the prompt API and creating a dataset for your specific scenario; the rest is handled for you, including training an adapter, using the adapter with Phi Silica and evaluating the adapter. LoRA fine-tuning can be done via the Visual Studio Code AI Toolkit to fine-tune the Phi Silica model from your device: you create a project, choose which model to fine-tune, then select the training and test data sets to create the adapter for your particular use cases, with other configuration available if needed. The jobs to fine-tune a model run online using your Azure account, in your own subscription and resources, so any data and the LoRA adapter are just yours. You can evaluate an adapter from within AI Dev Gallery with a prompt, which will generate output with and without the adapter so you can see whether it is behaving as needed, and you don't need to know how to configure anything in detail to do this.
Knowledge retrieval is another way you can customise your experience, by customising what the model knows. When a user queries, the system searches the relevant knowledge base and calls an LLM with the user query plus the top matching contexts from that knowledge base. You don't need to embed the knowledge in the model; instead you feed the right data into the prompt, such as application data and user content, which can be big, dynamic and constantly changing. For example, a note-taking application could provide its notes as the knowledge that accompanies the prompt.
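A minimal sketch of that retrieval pattern, assuming any Microsoft.Extensions.AI embedding generator (local or cloud) and an in-memory list of notes; the notes, query and top-k value are illustrative. It embeds the knowledge base and the query into the same vector space, ranks by cosine similarity, and builds a grounded prompt.

```csharp
using System.Numerics.Tensors;
using Microsoft.Extensions.AI;

async Task<string> BuildGroundedPromptAsync(
    IEmbeddingGenerator<string, Embedding<float>> embedder,
    IReadOnlyList<string> notes, string userQuery, int topK = 3)
{
    // Embed the knowledge base and the query.
    var noteEmbeddings = await embedder.GenerateAsync(notes);
    var queryEmbedding = (await embedder.GenerateAsync([userQuery]))[0];

    // Rank notes by cosine similarity and keep the top matches as context.
    var topNotes = notes
        .Select((text, i) => (text, score: TensorPrimitives.CosineSimilarity(
            noteEmbeddings[i].Vector.Span, queryEmbedding.Vector.Span)))
        .OrderByDescending(n => n.score)
        .Take(topK)
        .Select(n => n.text);

    return $"Answer using only this context:\n{string.Join("\n---\n", topNotes)}" +
           $"\n\nQuestion: {userQuery}";
}
```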
Windows AI APIs are now available in the Windows App SDK, with generally available (stable) APIs including rewrite, image description, object erase, image segmentation, image super resolution, optical character recognition, text summary and text to table. Public preview (experimental) APIs include LoRA for Phi Silica, the Phi Silica prompt API and conversation summary, while private preview APIs include knowledge retrieval and semantic search. Windows AI experiences powered by Windows AI Foundry include Recall, Live Captions, Windows Studio Effects, Cocreator, Restyle Image, Super Resolution, Image Creator, Generative Fill, improved Windows Search and Click to Do. Web developers can take advantage of the Web AI APIs available in Microsoft Edge, including the Prompt API, Text Rewrite, Text Summarisation and Text Write, with Text Translate coming soon, and many are being proposed as web standards to be supported in more browsers.
Foundry Local
Foundry Local combines Windows AI Foundry with Azure AI Foundry to help evolve the platform for AI deployment and is now available in public preview. Azure AI Foundry provides models for cloud execution or download, and Windows AI Foundry brings local execution on GPU, CPU and NPU, providing easy-to-use, flexible models to developers. Foundry Local provides ready-to-use open-source and other models pre-optimised for GPUs, CPUs and NPUs, a built-in command line interface to download and test models locally, and a model management service that distributes models from model catalogues; as an open platform, Windows also supports other model catalogues. Foundry Local lets developers browse models that are available and relevant for a device, then trigger a download of a model or make it ready for inference if it has already been downloaded. You can also see which models have already been downloaded onto your device, which can then be run, loaded into memory, and tried out by submitting a prompt.
It is easy to work with Foundry Local in a simple and intuitive way: there is a REST API, and it can be used with any SDK compatible with the OpenAI API specification, which makes it easy to embrace the hybrid paradigm of local and cloud-based models. Managing and distributing models directly onto a local user's device means you don't have to embed the models into your applications. When using the Foundry Local SDK, you can easily modify your code to hit a local endpoint instead of a cloud endpoint. When multiple applications use the same model, only one copy of the model needs to be downloaded; Foundry Local is embedded into Windows 11 and the Windows App SDK. Windows AI Foundry provides task-based APIs and access to open-source, third-party or your own models to build AI into your own applications.
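Because the endpoint is OpenAI-compatible, pointing an existing SDK at it is mostly a base-address change. A minimal sketch with the official OpenAI .NET SDK follows; the port and model alias are assumptions, as Foundry Local assigns the endpoint dynamically, so read it from the Foundry Local CLI or SDK on your machine rather than hard-coding it.

```csharp
using System.ClientModel;
using OpenAI;
using OpenAI.Chat;

var client = new OpenAIClient(
    new ApiKeyCredential("not-needed-locally"), // local endpoint; key is a placeholder
    new OpenAIClientOptions { Endpoint = new Uri("http://localhost:5273/v1") });

ChatClient chat = client.GetChatClient("phi-3.5-mini"); // a locally downloaded model alias
ChatCompletion completion = await chat.CompleteChatAsync("Summarise ONNX in one sentence.");
Console.WriteLine(completion.Content[0].Text);
```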
SQL Server 2025: The Database Developer Reimagined - Bob Ward, Muazma Zahid
Introduction
SQL Server 2025 infuses AI and has a new logo! Data is the fuel that powers AI; without data, AI is pretty hard to achieve, and now SQL Server 2025 is part of this. SQL Server 2025 is the AI-ready enterprise database from ground to cloud, with best-in-class security and performance, AI built in, made for developers, and cloud agility through Azure. You will be able to use T-SQL and choose the AI model you want, and it is more than AI: this is the most significant release of SQL Server in a decade.
SQL Server 2025 builds on a foundation of innovation. SQL Server 2017 enabled SQL Server on Linux with support for containers, adaptive query processing, automatic tuning, graph database and machine learning services. SQL Server 2019 added data virtualisation, intelligent query processing, accelerated database recovery and data classification. SQL Server 2022 was cloud connected and supported IQP NextGen, Ledger, Data Lakes and T-SQL enhancements. SQL Server 2025 has innovations and enhancements from past releases, and you can develop once and deploy anywhere for SQL Server 2025, Azure SQL and SQL database in Fabric supporting T-SQL for developers, SQL engine, tools along with Fabric, AI and Copilots.
SQL Server 2025 has AI built in, lets you develop modern data applications and integrate your data with Fabric, is secure by default with a mission-critical engine that can be connected with Arc and assisted by Copilots, and is a benchmark leader, optimised for the latest hardware and built for all platforms including Azure, Windows, Linux and Kubernetes. SQL Server Management Studio 21 is based on Visual Studio 2022, is 64-bit and supports Copilot in public preview, Git and TFS, a migration assistant and the most requested feature of dark mode; it also features a new connection experience, query editor improvements and an Always Encrypted assessment.
AI Built-in
What problems are you trying to solve with AI? SQL Server 2025 delivers smarter searching on your existing text data, brings in other documents or text for centralised vector searching, provides building blocks for intelligent assistants to connect with Retrieval Augmented Generation and AI agents, lets you take advantage of AI in a secure and scalable fashion including co-locating data, and overcomes complexity by using the familiar T-SQL language with extensions that enable AI capabilities.
Building scalable AI applications for agentic RAG is enabled by vector search built into SQL Server 2025, storing vectors and data together for consistency and searching for the most relevant data; operational RAG retrieves the most semantically relevant data from your database and uses it to ground Large Language Models for specific scenarios such as Model Context Protocol situations; and normal structured queries allow Large Language Models to query structured data and take advantage of rich metadata and query optimisation.
Build enterprise AI-ready applications with agentic RAG patterns inside the engine: a vector store with a native vector data type for a column you store in a table and a DiskANN index on top of it; model management with the ability to declare model definitions pointing to the model of your choice on ground or cloud using T-SQL; built-in embeddings with text chunking and built-in multimodal embedding generation; and simple semantic searching with vector distance (KNN) and vector search (ANN) based on the index. Framework integrations including LangChain, Semantic Kernel and Entity Framework Core are supported, along with OpenAI and Ollama models.
SQL Server 2025 vector search has AI inferencing endpoints for a wide range of models; these don't reside in the engine but are called over REST APIs so you can talk to AI models of your choice in a secure manner. You declare a model definition in T-SQL, choosing from three protocols for the API format (Azure OpenAI, OpenAI compatible and Ollama), and you declare once and use it many times. You can create embeddings that use these models and are stored in the vector type, a sequence of numbers that models use to understand your data. In your application you can send a natural language prompt, create another embedding and use vector search to find the most similar results based on the prompt and your data; this can also use the new JSON data type for JSON data. If a model isn't supported, you can use sp_invoke_external_rest_endpoint to talk to any model endpoint for greater extensibility and multimodal support, and to talk to any REST endpoint.
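Here is a minimal sketch of that declare-once, embed, search flow, run from .NET via Microsoft.Data.SqlClient. The model endpoint, table and column names are placeholders, and the T-SQL syntax reflects SQL Server 2025 preview builds, so it may change before release.

```csharp
using Microsoft.Data.SqlClient;

// Run once during setup: declare an external embeddings model the engine can call over REST.
const string setupSql = """
    CREATE EXTERNAL MODEL episode_embedder
    WITH (
        LOCATION = 'https://<resource>.openai.azure.com/openai/deployments/text-embedding-3-small',
        API_FORMAT = 'Azure OpenAI',
        MODEL_TYPE = EMBEDDINGS,
        MODEL = 'text-embedding-3-small'
    );
    """;

// Per query: embed the question, then rank rows by cosine distance to stored vectors.
const string searchSql = """
    DECLARE @q VECTOR(1536) = AI_GENERATE_EMBEDDINGS(@question USE MODEL episode_embedder);
    SELECT TOP (5) title, VECTOR_DISTANCE('cosine', embedding, @q) AS distance
    FROM dbo.Episodes
    ORDER BY distance;
    """;

await using var conn = new SqlConnection("<connection-string>");
await conn.OpenAsync();
await using var cmd = new SqlCommand(searchSql, conn);
cmd.Parameters.AddWithValue("@question", "Which episodes cover WebAssembly?");
await using var reader = await cmd.ExecuteReaderAsync();
while (await reader.ReadAsync())
    Console.WriteLine($"{reader.GetString(0)} ({reader.GetDouble(1):F4})");
```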
Security, SQL and AI in SQL Server 2025: you control all access with SQL security via roles, permissions and authentication. You also control which AI models to use; these models can exist on ground, in the cloud or a combination, and are isolated from SQL. You can use Row-Level Security, TDE and Dynamic Data Masking, track everything with SQL Server auditing, and there is a ledger for chat history and feedback if you are building a chat application. This makes using AI in your application a very secure process with SQL Server 2025, and with SQL Server Management Studio and Copilot you can get assistance about your data, including determining what embeddings are in your database (such as those supporting multiple languages); embeddings can be generated with T-SQL using whatever models you want, either locally or in the cloud.
Developers, Developers, Developers
SQL Server 2025 has the most developer features in a decade and is built for developers creating modern data applications. There is Data API builder, which can convert any table or stored procedure into a GraphQL endpoint you can connect to in your app for efficient data applications. SQL Server 2025 supports a JSON type, JSON index and updated T-SQL functions, along with support for regular expressions and other T-SQL functions. There is a new capability called Change Event Streaming, where you can consume log change events, and there is a REST capability to connect your data to any REST interface, including GraphQL, from T-SQL, which is useful for AI or any downstream application you can expose through REST. GitHub Copilot for SQL is now available in Visual Studio Code, and a Python driver for SQL Server is coming.
GitHub Copilot for SQL has the context of your database, including the schema, so you can get the T-SQL queries you need based on your tables; you can also write code with the SQL context to help build an application that connects to your database, for example using Semantic Kernel. SQL Server 2025 introduces Standard Developer Edition, which is free for development and test purposes with all the features and limits of SQL Server 2025 Standard Edition, on all platforms with no licence costs.
JSON before SQL Server 2025 was varchar/nvarchar based, with whatever indexes support that, but SQL Server 2025 has a native json data type using binary storage up to 2GB, a json index, and functions to use when building applications, bringing NoSQL inside SQL as part of your queries and your code. You can use JSON_VALUE to get any value from a JSON document, there are new capabilities for aggregation with JSON_ARRAYAGG, and key/value operations with JSON_OBJECTAGG. You can use modify to update individual JSON values in complex JSON payloads, doing this inside the SQL Server 2025 engine and operating directly on values. To query JSON performantly you can create a json index and then search a document using the index, and you can see this in the execution plan in SQL Server Management Studio. Additional T-SQL support includes regular expression functions, Base64 functions, fuzzy string match functions, substring with optional length, a string concatenation operator, CURRENT_DATE to get the date, and DATEADD, which now supports bigint.
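Continuing the C# pattern from the earlier sketch, the T-SQL below (held in a string you could execute the same way) shows the native json type, a json index, a point read with JSON_VALUE and an in-place update with modify. Table and path names are placeholders, and the json index syntax is from SQL Server 2025 previews and may still change.

```csharp
const string jsonSql = """
    CREATE TABLE dbo.Orders (id INT PRIMARY KEY, doc JSON NOT NULL);

    -- Index the document so path lookups can seek instead of scanning.
    CREATE JSON INDEX ix_orders_doc ON dbo.Orders (doc) FOR ('$');

    -- Point reads against the binary JSON storage.
    SELECT JSON_VALUE(doc, '$.customer.name')
    FROM dbo.Orders
    WHERE JSON_VALUE(doc, '$.status') = 'shipped';

    -- Update a single value in place, inside the engine.
    UPDATE dbo.Orders
    SET doc.modify('$.status', 'delivered')
    WHERE id = 42;
    """;
```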
Capturing changes with SQL Server includes Change Tracking for cache invalidation and sync, and Change Data Capture for data warehousing and auditing. SQL Server 2025 adds Change Event Streaming for event-driven architectures, microservice integrations, real-time analytics, cache sync and AI agents. With Change Event Streaming you can take an action based on a change in data in SQL Server; it scales as your data scales, is near real time, and is push based rather than pull based. It can be enabled for your database, and when setting up an event stream you specify the Azure Event Hub and the table whose changes will be pushed; the events are JSON based, in a format called CloudEvents, which contains schema information and column names. You can use Azure Functions to take events from an Azure Event Hub to drive an AI agent, and you can even use GitHub Copilot to write the code that accepts the event from the Event Hub, parses out the information including the data itself, and passes it to an AI agent.
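A minimal sketch of the consuming side, using an isolated-worker Azure Function with an Event Hubs trigger; the hub name, connection setting and payload handling are placeholders to adapt to your stream's CloudEvents schema.

```csharp
using System.Text.Json;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class ChangeEventHandler
{
    [Function("OnSqlChange")]
    public void Run(
        [EventHubTrigger("sql-changes", Connection = "EventHubConnection")] string[] events,
        FunctionContext context)
    {
        var logger = context.GetLogger<ChangeEventHandler>();
        foreach (var evt in events)
        {
            // Each event is a CloudEvents-format JSON payload describing the row change.
            using var doc = JsonDocument.Parse(evt);
            logger.LogInformation("Change event: {Json}", doc.RootElement.ToString());
            // ...parse out the changed columns here and hand them to an AI agent.
        }
    }
}
```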
SQL Server 2025 is integrated with Microsoft Fabric, which unifies your operational data in a single data platform; Microsoft Fabric includes Data Factory, Real-Time Intelligence, Databases, Analytics, Industry Solutions and Power BI, along with partner solutions. Mirroring from SQL Server 2019 and later replicates databases to Fabric with zero ETL; data is replicated to OneLake and kept up to date in near real time. Mirroring protects operational databases from analytical queries, compute for replication is included with your Fabric capacity at no cost, and there is free mirroring storage for replicas, tiered to Fabric capacity.
Enhancing the industry-leading engine in SQL Server 2025 includes many security changes, including security cache improvements, authentication using system-assigned managed identity, and backup to URL with managed identity. Performance improvements include optimised locking, change tracking cleanup and batch mode optimisations. Availability changes include improved health diagnostics, communication control flow tuning and DAG sync improvements.
The mission-critical engine covers performance and availability for your modern database, including improved concurrency with optimised locking, an abort query hint and tempdb resource governance; accelerated performance with IQP enhancements, columnstore indexes and Query Store on read replicas in production environments; and increased HADR with reliable failover for availability groups, availability group tuning and diagnostics, plus backup enhancements. Transaction locks help avoid blocking through lock escalation, and optimised locking can be turned on in SQL Server 2025 to prevent an entire table being locked.
Conclusion
The SQL Server 2025 platform architecture is something new that supports new applications and new platforms: build optimised applications and AI agents, connect to Microsoft Fabric, or use embeddings from Azure AI Foundry, where AI agents can use Azure Event Hubs to respond to events from the database or call external REST endpoints to include AI models locally or in the cloud, as well as other online capabilities using REST APIs.
Elevating Development with .NET Aspire: AI, Cloud, and Beyond - Damian Edwards, David Fowler, Maddy Montaquila
What is Aspire?
Aspire allows you to build intelligent applications intelligently, bringing together infusing AI into your .NET apps, agentic apps, and agentic DevOps, with AI-powered agents operating as members of your development team. The dev loop kills velocity: local dev is fragile, onboarding is high friction with slow ramp-up time, inconsistent dev experiences make things hard to troubleshoot, and critical workflows rely on scripts, hacks and tribal knowledge.
Aspire lets developers just be developers again: you can onboard an intern in an afternoon instead of a couple of weeks, run the full integration suite locally without spinning up CI/CD or build machines, use the dashboard to test telemetry flow without waiting for a full deployment, and deploy applications without having to be an expert in Bicep. Aspire lets you develop again: you can turn on and instrument things such as OpenTelemetry, logging and resiliency with Polly (including retries), have your own standards by shipping your own extension methods, and get an improved deployment story.
Devis Lucato
Devis works on many AI projects in the office of the CTO, including Semantic Kernel, to make AI development more productive, developing SDKs and encapsulating them into a single API. He was asked how to secure memory for a knowledge base with Kernel Memory, and created a solution to ingest and process data with a pipeline behind a web service that any language can point to; this allowed internal components to be changed and enabled many extensions for a flexible pipeline. It was hard to set up, but with Aspire it is very easy to spin things up locally and also deploy to the cloud, and Aspire is integrated into many IDEs including JetBrains Rider. All the resources are spun up as needed rather than having to start each one manually when working locally, and you can also combine other resources such as those in Python and Node.js.
Aspire
Aspire enables code-first control, is modular and extensible, and is observable from the start with flexible deployments; it has been around for a year, with over 130 integrations, over 700 community PRs, and 70% of top .NET customers using it. Aspire with Copilot is an AI-assisted experience directly in the Aspire developer dashboard, where it can identify resource issues, find bottlenecks and help spot problems at development time; you can ask Copilot about specific traces and find root causes of errors, and it works seamlessly with the rest of your Copilot agents in Visual Studio.
Aspire is released every six weeks, with fixes and new value in each release, and it is easy and low friction to update .NET Aspire. If the .NET Aspire templates are installed, you can update them to the latest version with “dotnet new update” from the command line or terminal, and there is work in progress to make the update experience work in Visual Studio. For a .NET Aspire project you can also use “dotnet outdated -inc Aspire -u” from the command line to update the .NET Aspire packages; you will also need to update the SDK version in the csproj in Visual Studio or Visual Studio Code, and once updated you can take advantage of the latest features in .NET Aspire.
Your Aspirations Feedback
.NET developers understand features such as project files, but other developers won't, so Microsoft has an aspiration to broaden the horizons of .NET Aspire; for that, other developers have to get over the dotnetisms, so there will be a command line experience for .NET Aspire with “aspire run”, a .NET command line tool which takes the normal things you do in Visual Studio and makes them available in a command line interface, where you can create the templates and also trust the developer certificate automatically to make things easier; the point is to hide the default things that .NET Aspire has to do. You can also see a small dashboard-like experience in the command line interface showing anything that has been started, without needing to start Visual Studio or Visual Studio Code. You can add additional .NET Aspire packages with “aspire add”, and you can also add npm applications such as Vite + Vue.js.
Software Developer Lifecycle
Of the Onboard, Develop, Test and Deploy stages, the focus of .NET Aspire has been Develop and Test, and the command line interface targets Onboard. Deploy has been revisited with a view to boosting the productivity of teams inside Microsoft using .NET Aspire, helping them move faster; teams including Xbox and Copilot were using it for local development, but many asked for help to deploy, so Microsoft looked at building an experience for those teams to deploy using internal processes, which then helps everyone else deploy to Azure and Azure Container Apps, with support being added to deploy everywhere.
The app host in .NET Aspire is a great place to see the structure and configuration of the application, as you need to tell .NET Aspire what your application is, including the services and how they talk to each other, which can be visualised in the resource table and graph in the .NET Aspire dashboard. From the dashboard you can see the traces for any requests between different parts of the application. From the app host you can add compute resources to model environments such as an Azure Container App, and you can turn on various things you may need when the application is deployed into these environments; the environment informs the target of where things should be by creating the relevant setup in Bicep after running infra synth for Azure.
There is also preview support for Azure App Service in .NET Aspire, which creates the relevant Bicep setup for that environment after running infra synth. If you want to split compute environments, for example for front end and back end, you can specify different compute environments to disambiguate them; when infra synth is executed there will be different outputs for the different parts of the application, and with some hints in the app host it can infer the networking required for the components of the application to communicate with each other. Targeting different things should be as simple as possible, such as cross-cloud deployments, gluing these together and representing them from your app host, reflecting the deployments used by teams in Microsoft.
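To ground the app host idea, here is a minimal sketch modelling two services, a cache and a deployment target. Projects.Api is the generated project reference placeholder, and AddAzureContainerAppEnvironment reflects the compute-environment APIs discussed here; its exact shape may differ between Aspire releases.

```csharp
var builder = DistributedApplication.CreateBuilder(args);

// Model the application: what runs, and how the pieces talk to each other.
var cache = builder.AddRedis("cache");

var api = builder.AddProject<Projects.Api>("api")
    .WithReference(cache);

builder.AddNpmApp("frontend", "../frontend")
    .WithReference(api)
    .WithHttpEndpoint(env: "PORT");

// Tell Aspire where this should land when deployed; infra synth then emits the Bicep.
builder.AddAzureContainerAppEnvironment("env");

builder.Build().Run();
```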
The future of web development with ASP.NET Core & Blazor - Mike Kistler & Daniel Roth
Introduction
ASP.NET Core has come a long way since it first shipped in 2016 as part of .NET Core; it is trusted by millions of developers, powers some of the world's largest services with industry-leading performance, and is the foundation of modern cloud and AI-powered applications, leveraged as part of Microsoft 365, Microsoft Bing, Microsoft Teams, Microsoft Copilot and Xbox, along with most of Microsoft's Azure services. Microsoft uses ASP.NET Core because it is faster than other frameworks and is a mature and robust framework.
Build AI-powered web applications with ASP.NET Core using Microsoft.Extensions.AI, which provides AI primitives and building blocks for .NET; Evaluations to assess the quality and safety of your AI applications; integration with VectorData for vector databases, semantic search and embeddings; AI templates for Blazor and ASP.NET Core featuring out-of-the-box guidance with code ready to go; the C# MCP SDK to extend AI apps with new tools and data; plus Semantic Kernel for multi-agent workflows and orchestration.
ASP.NET Core developers love .NET Aspire, which helps you build, test and deploy applications seamlessly from code to cloud; it streamlines the inner loop, includes a developer dashboard, features integrations and assists with deployment. You can add it to any ASP.NET Core application, it is ready for any cloud, and it lights up features that follow cloud best practices.
ASP.NET Core in .NET 10 is focusing on areas including making it easier to create more secure applications, app observability with more traces and metrics to help troubleshoot applications, targeted performance improvements for higher throughput and lower memory utilisation along with addressing top pain points and gaps in ASP.NET Core.
Easier to create secure applications
ASP.NET Core in .NET 10 will support the latest security standards and best practices, including WebAuthn and passkey authentication, along with OAuth 2 refresh token support to automatically refresh an access token when it expires without impacting the user experience, which enables a shorter time-to-live for tokens. Authentication should be easier to set up and use, because if it isn't, people won't use it and you get security problems; this will come through authentication scaffolding along with identity and authentication documentation improvements, driven by feedback from GitHub issues and social media, including scenario-based tutorials and video content.
Passkeys are cryptographic credentials that replace traditional passwords; they consist of a public-private key pair scoped to a specific account and origin, where the private key is stored in an authenticator, and user verification is required before allowing authentication. Passkeys cannot be shared across applications and are phishing resistant and secure by design. Support for passkeys is being built into ASP.NET Core Identity, inspired by fido2-net-lib, but there is no support yet for attestation, as there isn't yet a full standard for it and it could change if integrated now. Project templates will be extended to optionally add passkey support, and existing projects using ASP.NET Core Identity can add passkey support, though this will require database schema changes. Authentication scaffolding will be based on “dotnet scaffold” to generate code into your projects; existing ASP.NET Core and Blazor Identity scaffolders will be updated for passkeys and other .NET 10 support, and you can add ASP.NET Core Identity endpoints and Entra ID authentication, including to Blazor Hybrid and .NET MAUI applications.
App observability and diagnostics
ASP.NET Core in .NET 10 aims to make it easier to monitor and troubleshoot applications by adding more metrics, such as Kestrel memory pool, authentication and authorisation metrics, along with Blazor-specific metrics including how many circuits are connected and how many are not. Activities are being added to Blazor Server for distributed tracing, and diagnostics to Blazor WebAssembly for performance profiling and memory usage, to diagnose issues more easily. OpenTelemetry trace instrumentation will be built in without needing an external package, and Microsoft.IdentityModel logs, such as those from JWT handling, are being integrated so you don't have to look around for them.
Targeted performance improvements
ASP.NET Core in .NET 10 will be part of the fastest version of .NET yet, including releasing memory from the Kestrel memory pool and API JSON deserialization performance improvements using the new pipe reader support in System.Text.Json. Microsoft will investigate the performance of anti-forgery tokens to make them faster, plus Blazor WebAssembly startup improvements so applications load faster.
Kestrel gets smarter about memory: Kestrel grows the memory pool as needed but previously never released that memory, holding onto it even after traffic dropped and keeping applications stuck at high memory watermarks, such as after load spikes. In .NET 10 Kestrel trims memory over time, reducing an application's footprint, which results in better scaling, lower idle costs and smarter resource usage; Microsoft has been experimenting with this internally, where it is reducing memory usage, and it will be coming soon in a .NET 10 preview release.
Blazor WebAssembly startup improvements speed up Blazor load time by making sure Blazor framework scripts are fingerprinted, which adds a unique filename so browsers can cache them for as long as possible; scripts are then compressed and cached as static web assets, and Blazor WebAssembly resources are preloaded, so Blazor applications load faster in .NET 10.
Address top pain points & gaps
ASP.NET Core in .NET 10 for the backend adds key feature asks for minimal APIs using System.Text.Json, which will be the recommended place to implement APIs and where Microsoft's strategic investments will happen, including minimal API validation and support for server-sent events returned by minimal APIs. Ongoing OpenAPI investments include support for the newest version of the standard, OpenAPI v3.1, which will be the default version, and XML doc comment-based OpenAPI descriptions, along with improved build-time OpenAPI document generation and support for emitting OpenAPI documents in YAML format rather than JSON. Other backend pain points being addressed include JSON Patch, which supports updating JSON data with just the changes needed, or testing whether changes would be valid; this now uses System.Text.Json, where before it required the old Newtonsoft.Json package. ASP.NET Core in .NET 10 will also generate correct responses for unauthorized requests for APIs in web apps.
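A short sketch of two of those minimal API additions, server-sent events and built-in validation, follows. These APIs come from .NET 10 previews (TypedResults.ServerSentEvents and AddValidation), so their shapes may change before release, and the endpoints and DTO are illustrative.

```csharp
using System.ComponentModel.DataAnnotations;
using System.Runtime.CompilerServices;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddValidation(); // enables [Required] etc. on minimal API parameters
var app = builder.Build();

// Stream results to the client as server-sent events.
app.MapGet("/ticks", (CancellationToken ct) =>
    TypedResults.ServerSentEvents(Ticks(ct), eventType: "tick"));

static async IAsyncEnumerable<string> Ticks([EnumeratorCancellation] CancellationToken ct)
{
    while (!ct.IsCancellationRequested)
    {
        yield return DateTime.UtcNow.ToString("O");
        await Task.Delay(1000, ct);
    }
}

// Invalid payloads are rejected automatically with a 400 problem response.
app.MapPost("/guests", (GuestDto guest) => TypedResults.Ok(guest));
app.Run();

record GuestDto([property: Required] string Name, [property: Range(0, 120)] int Age);
```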
ASP.NET Core in .NET 10 for the frontend focuses on Blazor, including Blazor state persistence support, reconciling NavigationManager behaviour across the render modes Blazor supports, creating a consistent way of doing page navigations and handling Not Found responses, improving the interaction between enhanced navigation and scroll position, and improving the experience with QuickGrid and Entity Framework Core. JavaScript interop adds support for invoking constructors and properties and supplying callbacks, smoothing out support for invoking .NET code from JavaScript via WebAssembly, along with improved automated browser testing with WebApplicationFactory and Kestrel. Blazor state persistence simplifies persisting prerendered state with a new declarative model; improves resiliency by persisting circuit state, with an option to persist it on the server when the client is disconnected (such as when the user has switched browser tabs or lost the network for a prolonged period); and improves scalability by persisting idle circuits, with APIs for proactively persisting or evicting circuits, though this is not automatic, so you need to implement an app-specific policy.
Build the next gen of AI apps with .NET: Models, Data, Agents, & More - Jon Galloway, Brady Gaster & Jeremy Likness
Introduction
Things are moving at a rapid pace: ChatGPT was announced in 2022, many people are using it today, and it took only five days to reach its first million users; the length of tasks that can be completed with 50% accuracy is doubling roughly every seven months, an exponential pace. You can do so many things with generative AI: you used to need very specialised libraries for classification, summarisation and sentiment analysis, which can now all be done with generative AI, and in .NET. Generative AI applications are being built in .NET and are in production, including all the Copilot experiences from Microsoft such as Microsoft Copilot, GitHub Copilot and Xbox Copilot for gaming, along with third parties creating better experiences for customers. The .NET ecosystem includes companies providing SDKs for the .NET platform.
.NET "all-in" for AI
.NET provides foundational building blocks for AI with Microsoft.Extensions.AI, which has reached general availability, along with the VectorData extensions for data, semantic search and embeddings. There are also other building blocks for interfacing with AI, including a .NET and C# based MCP SDK for intelligent tools and discovery; AI templates with out-of-the-box guidance and code ready to go, which go beyond hello world with complex scenarios built in, such as setting up embeddings; Semantic Kernel, of which .NET is a first-class part, for multi-agent workflows and orchestration; and model evaluations with scoring across multiple dimensions, such as the safety of output and how grounded it is.
.NET AI Libraries
.NET AI libraries with Microsoft.Extensions.AI streamline AI integration with unified APIs: common AI abstractions, standard middleware, vector store operations and abstractions, along with interoperability and extensibility. As an example of the AI extensions in action, Microsoft.Extensions.AI provides the IChatClient, which is supported by the Telerik AI Prompt component out of the box; you can use it in your Blazor project by registering the IChatClient and then adding the Telerik AI Prompt component, which will use the underlying IChatClient and can also be set up with predetermined prompt suggestions, so with just a few lines of code you have AI generation in your Blazor applications. The AI and vector data extensions are basic building blocks and primitives that can be integrated into cloud, web, desktop and mobile applications, which you can build with AI model provider SDKs, UI components, AI libraries, vector store provider SDKs, and agent frameworks on top of the core AI and vector extensions.
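Here is a minimal sketch of registering an IChatClient in a Blazor app's DI container so components such as the Telerik AI Prompt can pick it up, including two pieces of the standard middleware mentioned above. The endpoint, key and deployment name are placeholders.

```csharp
using Azure;
using Azure.AI.OpenAI;
using Microsoft.Extensions.AI;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddChatClient(
        new AzureOpenAIClient(
            new Uri(builder.Configuration["AI:Endpoint"]!),
            new AzureKeyCredential(builder.Configuration["AI:Key"]!))
        .GetChatClient("gpt-4o-mini")
        .AsIChatClient())
    .UseFunctionInvocation() // standard middleware: automatic tool calling
    .UseLogging();           // standard middleware: request/response logging
```

Components then just take an IChatClient dependency, so the provider behind it can change without touching the UI.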
Agents
Agents are Large Language Models enhanced by different features and services: tools, including functions and Model Context Protocol; memory for long conversations, including state management and embeddings; and data, using Retrieval Augmented Generation and structured replies. Agents can be orchestrated, determining which agents are available with routing, scaling and discovery, and can be part of workflows for autonomous agents, agentic workflows and agentic DevOps.
Agents are AI designed to perform a task, and tasks vary in complexity and capability depending on your need: from simple generation of summaries, images, audio and more with an AI model and input; to retrieval of information from grounding, along with reasoning, summarising and answering user questions; to more advanced capabilities that take actions to automate workflows and replace repetitive tasks for users. Agents are a way of breaking down AI so that each agent performs a specific task, and the AI extensions for .NET are agent ready, letting you use what is built into .NET with the right tool for the right job.
.NET agentic applications can have their front end on cloud, web, desktop and mobile, with a build layer of agent SDKs including Semantic Kernel, Azure AI Agent Service, AutoGen and the Microsoft 365 Agents SDK, built on a core layer with an agent runtime supporting durable workflows, state management, orchestration and observability. .NET agentic applications can use data with data stores and memory, perform ingestion and retrieval, and support protocols such as MCP and A2A, along with monitoring, evaluations and deployment with .NET Aspire and Azure AI Foundry.
AI infused mobile & desktop app development with .NET MAUI - Beth Massi, David Ortinau, Gerald Versluis & Uma Maheswari Chandrabose
Introduction
David spoke about building intelligent applications intelligently, including agentic apps that infuse AI into your .NET applications for cloud, web, desktop, mobile, games and IoT, along with tools for agentic DevOps with AI-powered agents operating as members of your development team in Visual Studio, Visual Studio Code, GitHub and Azure, combined with Aspire to build, test and deploy seamlessly from code to cloud. An intelligent application brings AI into your application, which is non-deterministic and depends on a bunch of other factors.
Jakob Nielsen, who has talked about how to build great user experiences since the dawn of the web, stated that "AI is introducing the third user-interface paradigm in computing history, shifting to a new interaction mechanism where users tell the computer what they want, not how to do it - thus reversing the locus of control". What could AI potentially do to help your users be more effective in the moment? That is a lot of the thinking Microsoft is doing to help users focus on what they want to do, not how to do it.
AI is changing applications and how we make them, with changes in user experience including personalisation, context-awareness (such as knowing location and calendar) and multi-modal interaction. AI can know a whole lot more about the user if they opt in, users can use the app in different ways such as voice control, and the same app can adapt in real time. There was a Windows Phone feature where you could create a to-do for a location, and when you got near that location you would get a notification about doing that task there.
Design Principles for AI Applications
The 6 Design Principles for Generative AI Applications by Justin D. Weisz et al start with Design Responsibly: solve real user problems, not just flash-in-the-pan stuff; minimise user harms, because if you let people use a chat window in an unhealthy way you enable an unhealthy outcome; use human-centred design and don't let people harm themselves; expose or limit emergent behaviours; and test and monitor for user harms. Design for Mental Models means orienting the user to generative variability, teaching effective use, understanding the user's mental model and teaching the AI about the user.
Design for Appropriate Trust and Reliance means helping a user know when they should or should not rely on AI output, being clear about how well the AI performs for a given task, providing rationales for outputs, using friction to prevent overreliance and encourage critical thinking, and signifying the role of the AI. Design for Generative Variability means leveraging multiple outputs, visualising the user journey, enabling curation and annotation, and drawing attention to differences or variations across outputs.
Design for Co-Creation means working with AI collaboratively: help the user craft effective outcome specifications, provide generic input parameters, provide controls relevant to the use case and technology, and support co-editing of generated outputs. Design for Imperfection means making uncertainty visible, evaluating outputs for quality, offering ways to improve outputs, and providing feedback mechanisms.
Microsoft's AI principles are fairness, reliability and safety, privacy and security, inclusiveness, transparency and accountability. Mitigation layers for user experience include designing for responsible human-AI interaction; a system message and grounding layer to ground your model and steer its behaviour; a safety system to monitor and protect model inputs and outputs; and the model layer, choosing the right model for your use case.
The mitigation layers are assisted by the HAX Toolkit, a set of tools to help create responsible and effective human-AI interactions; a recommended system message framework that provides instructions, clear directives and guidance on the tone and style of responses; Azure AI Content Safety, which detects and moderates harmful user-generated and AI-generated content; and Azure AI Foundry, which provides a broad selection of models and helpful tools to guide your choices.
.NET MAUI
.NET MAUI, which is AI-infusion ready, is for building native multi-platform applications all in one framework. .NET MAUI is the modern native client stack for .NET, featuring a single project and codebase with native controls and styling, supporting over sixty common device APIs with access to platform-specific APIs if needed. .NET MAUI enables app store distribution and reach, with productivity in Visual Studio and Visual Studio Code, and the platform is open source on GitHub with a healthy ecosystem of controls and components.
.NET MAUI support is increasing year on year, including over a thousand issues resolved and PRs merged on GitHub. .NET MAUI offers one solution for everything you need, with context-awareness, adaptive UI, AI, multi-modal and model support, along with UI controls for flexible UI that AI can manipulate. ONNX brings the ability to run small purpose-built models on device rather than in the cloud. It doesn't take much code to integrate AI into .NET MAUI using Microsoft.Extensions.AI, and only a few more lines to integrate MCP and tool calling, and you can bring it all together to analyse and prioritise tasks, as sketched below. Plugin.Maui.Audio enables audio recording cross-platform, and MediaPicker is baked into .NET MAUI to capture photos or input images.
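A minimal sketch of the tool-calling pattern in a .NET MAUI page, assuming an IChatClient (with function invocation middleware enabled) has been registered in MauiProgram; the page, task-list tool and prompt are illustrative.

```csharp
using Microsoft.Extensions.AI;

public partial class TasksPage : ContentPage
{
    private readonly IChatClient _chat;

    public TasksPage(IChatClient chat) // injected via MAUI's DI container
    {
        InitializeComponent();
        _chat = chat;
    }

    private async Task<string> PrioritiseAsync(List<string> tasks)
    {
        // Expose app logic as a tool the model can call while reasoning; invoking it
        // automatically requires UseFunctionInvocation() on the registered client.
        var options = new ChatOptions
        {
            Tools = [AIFunctionFactory.Create(() => tasks, "get_tasks",
                "Returns the user's current task list.")]
        };

        var response = await _chat.GetResponseAsync(
            "Prioritise my tasks for today and explain the order.", options);
        return response.Text;
    }
}
```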
Syncfusion & Microsoft
Syncfusion and Microsoft empower developers together. Uma Maheswari Chandrabose has worked on UI development for various components and oversees the Essential Studio product. Syncfusion is a comprehensive suite of over 1,900 UI controls across different frameworks, including Blazor, .NET MAUI and others, for quickly building mobile, web and desktop applications.
Syncfusion is collaborating for the developer community and growing alongside the .NET ecosystem to advance the .NET MAUI framework, contributing in two ways: the Syncfusion Toolkit for .NET MAUI, with twenty-nine UI controls, and contributions to the .NET MAUI framework repository, fixing framework-related bugs and reviewing community PRs at first level before they are referred to Microsoft.
Syncfusion has made over 320 merged pull requests, has over 100 PRs under review, and handles 65% of community PRs. The impact of the collaboration with Microsoft has been faster development that is high quality and compatible, better knowledge of community requirements, the ability to test their controls in beta versions of .NET MAUI before official release, and a huge spike in adoption of the .NET MAUI Toolkit.
New updates to the Syncfusion Toolkit for .NET MAUI include six new controls such as Picker, Date Picker, Time Picker and Date Time Picker, along with linear and circular progress bars, all optimised for Ahead-of-Time compilation and trimming to stay fast and lightweight. Syncfusion's mission is to simplify development and deliver innovation with ease, including AI-powered components.
.NET MAUI Hybrid Applications
Beth talked about .NET MAUI hybrid applications, which are a blend of native and web technology where the UI is written in web technologies and wrapped in a native application container. Hybrid apps have access to native platform features and device hardware, letting them leverage features that a web application cannot access easily or at all. Hybrid apps can be mobile or desktop applications without the need for separate development, can be distributed through app stores, and support code reuse across device platforms and web browsers for greater developer productivity.
Comparing web UI and native UI look and feel: web UI typically looks the same across devices, controlled by CSS, while native UI is device-specific by default. Native UI can provide a more seamless user experience tailored to specific devices, while web UI can easily be reused in a web application. Skills needed for web are HTML, JavaScript and CSS, whereas native UI, on the Microsoft stack, requires knowledge of XAML.
.NET MAUI hybrid apps with Blazor are enabled by the BlazorWebView, including a solution template for Blazor Hybrid + Web, with a tight inner loop for development with Copilot and Hot Reload, built-in input controls including forms with validation, and UI you can reuse across native and web with Blazor. .NET MAUI hybrid apps with HybridWebView allow other JavaScript front ends, letting developers reuse Angular, React and other JavaScript frameworks in .NET MAUI native desktop and mobile apps, providing C# to JavaScript interop, and .NET 10 brings new hooks for intercepting web requests.
Agentic DevOps
Gerald talked about Agentic DevOps. DevOps started with the union of people, process and technology to enable continuous delivery of value to end users, and evolved into DevSecOps, the union of people, process and technology with security as a shared responsibility to enable continuous delivery of value to end users. The evolution has continued to Agentic DevOps: AI-powered agents operating as members of your dev team to automate, optimise and accelerate every stage of the software development lifecycle.
GitHub Copilot is built for the developer experience, automating the mundane so you can focus on what matters, such as creating tests and writing documentation. The next evolution of GitHub Copilot is new agents that help solve real-world problems throughout the software lifecycle, including ideation, planning and model evaluation, as well as AI-assisted coding, testing, code review, security, deployment and modernisation.
GitHub Copilot Agent Mode supercharges the AI-assisted coding experience directly in Visual Studio, expanding your AI capabilities beyond autocomplete: you can assign multi-step coding tasks and leverage an AI-powered pair programmer that can build new features, refactor legacy code, and heal itself when things break. Agent Mode can analyse a codebase, edit files, run tests, fix errors and make suggestions, all from a single prompt, and you can use it in the IDE of your choice, with support for Visual Studio, Visual Studio Code and many more.
GitHub Copilot Vision allows you to illustrate your context with images instead of text in Visual Studio, where it interprets UI designs and specifications, redlines and annotations, project architecture diagrams, data schemas and even napkin sketches. Together with Hot Reload and Live Preview you can watch your UI designs come to life, and Copilot Vision brings the latest multi-modal large language model improvements to Visual Studio.
GitHub Copilot Coding Agent provides agentic capability in GitHub to enhance team productivity like an AI teammate: you can assign issues to GitHub Copilot and get back pull requests, automatically validated with tests and linters. You stay in control by working with Copilot just like any other developer, iterating through pull request reviews, and you can tailor GitHub Copilot to your needs by customising the development environment, adding custom instructions and providing external context through Model Context Protocol servers.
.NET Aspire offers code-first control that is modular and extensible, with observability from the start and flexible deployments, and has many use cases that could work with .NET MAUI. Aspire with Copilot is an AI-assisted experience directly in the Aspire developer dashboard, where Copilot can identify resource issues, find bottlenecks and help spot problems at development time. You can also ask Copilot about specific traces to find root causes of errors, and it works seamlessly with the rest of your Copilot agents in Visual Studio.
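As a hedged sketch of what code-first means in Aspire, an app host declares its resources in C#; the Projects.Api and Projects.Web names below are hypothetical project references:

```csharp
// Aspire app host: resources and their wiring are declared in C#,
// and the developer dashboard gets observability for free.
var builder = DistributedApplication.CreateBuilder(args);

var cache = builder.AddRedis("cache"); // requires Aspire.Hosting.Redis

var api = builder.AddProject<Projects.Api>("api")
                 .WithReference(cache); // injects the connection string

builder.AddProject<Projects.Web>("web")
       .WithReference(api);             // enables service discovery

builder.Build().Run();
```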
Crafting great Windows native app experiences - Niels Laute & Sergio Pedri
Introduction
Niels works on PowerToys and AI Dev Kit and Sergio is an engineer on the Microsoft Store client team, and both talked about building great experiences for Windows. Windows App SDK and WinUI support the new MVVM Toolkit generators, which remove boilerplate code by implementing observable properties and commands for MVVM scenarios with minimal code.
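A minimal sketch of those generators from the CommunityToolkit.Mvvm package; the attributes below generate the observable property and ICommand plumbing at compile time:

```csharp
using CommunityToolkit.Mvvm.ComponentModel;
using CommunityToolkit.Mvvm.Input;

public partial class CounterViewModel : ObservableObject
{
    // Generates a public Count property with change notification.
    [ObservableProperty]
    private int count;

    // Generates an IncrementCommand (ICommand) wrapping this method.
    [RelayCommand]
    private void Increment() => Count++;
}
```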
New DependencyProperty generators in Toolkit Labs simplify the registration of DependencyProperty fields in your XAML projects, and are available for the Universal Windows Platform and WinUI. Native AOT allows you to publish your app using the Native AOT runtime for .NET for faster startup, lower memory usage, smaller binary size and self-contained deployment.
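The Labs generator is still experimental, so treat the following as a sketch based on the experiment's current shape; the GeneratedDependencyProperty attribute name, its namespace and the partial-property usage are assumptions that may change before it ships:

```csharp
using CommunityToolkit.WinUI; // Toolkit Labs experiment (assumed namespace)
using Microsoft.UI.Xaml.Controls;

public partial class BadgeControl : Control
{
    // The generator emits the DependencyProperty registration and the
    // property implementation, replacing the usual Register() boilerplate.
    [GeneratedDependencyProperty]
    public partial string? Text { get; set; }
}
```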
Windows App SDK
Windows App SDK is introducing meta packaging to allow including just a subset of packages and minimise app size for self-contained applications, with packages such as AI, Base, DirectWrite, Foundation, Interactive Experiences, Packages, Widgets and WinUI, making it easier to use them in Windows Presentation Foundation. Windows App SDK will also move its experimental release cadence from quarterly to monthly, based on feedback from developers wanting early bits so it is easier to give feedback.
Design
Developers aren't designers, so they might not know how to create a beautiful user interface, but you can elevate your application with core design principles: spacing and layout, where consistent spacing creates visual order; typography, where clear text builds hierarchy; colour and material, which help users by highlighting what matters; and motion, which helps users understand what is happening.
Fluent Design uses consistent spacing in units of four pixels, so if you follow that your layouts will be more consistent, such as sixteen pixels for the inner borders of dialogs or eight pixels for the spacing between buttons. Fluent Design typography has a really nice type ramp which, if you stick to it, will be consistent with the rest of Windows. Fluent Design also has a great set of colours and combinations, with the brushes that are part of WinUI looking right and remaining readable for accessibility.
Many of the core components in WinUI support children transitions when they are repositioned, so they look better, and these touches add up to improve the overall application experience. There are also animations that occur on the compositor thread, such as implicit show and hide animations, which can be done easily, and the recommended durations and timings are consistent across Windows so they don't come across as too slow or fast.
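A minimal sketch of opting into one of those polish touches, using the WinUI reposition transition so a panel's children glide rather than jump when the layout changes:

```csharp
using Microsoft.UI.Xaml.Controls;
using Microsoft.UI.Xaml.Media.Animation;

// Children of this panel animate smoothly to their new positions
// whenever items around them are added, removed or resized.
var panel = new StackPanel
{
    ChildrenTransitions = new TransitionCollection
    {
        new RepositionThemeTransition()
    }
};
```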
Microsoft Store
When listing applications on the Microsoft Store there is a Figma design template to see how your application icon will look in the Store, and in the future you will also be able to see how it looks across Windows itself, such as in the Windows Taskbar. The same Figma template also helps with application screenshots, letting you add text and highlight certain parts of your screenshots, then export these and use them when publishing your application on the Microsoft Store.
Conclusion
Fluent Design in both WinUI and Windows Presentation Foundation helps make your application look great, and applying core design principles can get you a long way. Design helpers in the WinUI Gallery and Windows Community Toolkit can be leveraged, along with easy-to-use helpers to add polish to your application. App icon and screenshot templates for Figma allow you to create beautiful assets for your application to help it stand out.
Microsoft Store reaches over 1.2 billion users with an open platform and flexible commerce, supporting Win32, Universal Windows Platform and web applications, which can be packaged with any technology. Onboarding is now zero-cost for individual developers, so submitting your application to the Microsoft Store is completely free for them. If you are publishing Windows applications with MSIX then distributing on the Microsoft Store is recommended, and Microsoft are making it easier for people to discover applications.