Microsoft Build 2025 - Keynotes

Microsoft Build Opening
Satya Nadella
Satya opened by saying it is always good to be at Build, and that we are at the stage where things are happening at scale: building out the open agentic web, moving from a few apps on vertically integrated platforms to a more open platform, and delivering developer tools to empower every developer on the planet. It starts with the tools we use to build software. Having the right tools brings ideas to life, and Microsoft is continually improving them, with the Visual Studio family now having over 50 million users. There are a number of new updates, including for Visual Studio, which will get better with support for .NET and live preview at design time, and releases are moving to a monthly cadence.
Open source is at the core of GitHub, the world's open developer platform, and Microsoft is open-sourcing Copilot in Visual Studio Code, integrating these capabilities into the same open-source Visual Studio Code repository. GitHub Copilot will continue to be built out, with more capabilities added all the time: you can ask an AI for answers, humans can assign tasks to AI agents that execute them, and AI and humans can assign tasks to each other. You can also migrate any on-premises code to the cloud.
Microsoft also has a site reliability agent to run apps resiliently in production, which can even raise GitHub issues for problems to be resolved. GitHub Copilot will also have a coding agent: you can assign issues to it to complete, including building new features and authoring tests. Assign an issue to the coding agent and it will work on it itself, creating a branch and provisioning the resources it needs with GitHub Actions, and you can see the draft PRs and the logs. It will respect any security measures in place and will only use MCP servers approved by developers, and there will be an open and secure ecosystem with the Copilot control system, where individual developers and IT have the controls they need.
Satya Nadella & Sam Altman
Sam Altman from OpenAI joined Satya virtually to discuss the various form factors developers can use, such as CLIs and agents, and the vision of how developers will use these form factors together. Codex was first developed by OpenAI in 2021, and we now have an agentic coding experience: a virtual teammate you can assign work to, issuing many requests in parallel to create tests or fix bugs, and it will only get better from here and get some pretty amazing stuff done. This allows the developer lifecycle to move faster, and Microsoft is shipping models as OpenAI releases them. The models will continue to evolve, getting simpler to use and more reliable to the point where they just work, and people will be surprised how fast progress in this direction will be. Developers can build agentic apps and use multi-agent orchestration to build high-scale agentic applications. The rate of change is one of the most difficult things, as is planning for how people will build things in the near future, but shifts like this don't happen often, so leaning in early and hard is the best approach; the challenge is to keep moving at pace and enable developers to do more.
Satya Nadella
Developer tools are getting richer, and it is not just about one form factor but about these capabilities coming together; there is a platform opportunity for those working with Microsoft 365 Copilot, and the latest update brings together chat, search, notebooks, create and agents as the UI for AI. Chat is grounded on both web and work data, which is a game changer, and search works across all your applications, including those outside Microsoft 365. With notebooks you can have heterogeneous collections of data and even get audio reviews and podcasts out of them; with create you can create PowerPoint presentations; and with agents you can get reasoning over any application or project, doing things like uploading Excel spreadsheets and getting an overview and analysis. Microsoft Teams can take all of this and multiply it: you can @mention an agent in a team, the Teams AI Library now supports MCP for expanding agent skills and A2A for connecting agents to other agents, and you can publish your agent to the Agent Store across Microsoft Copilot and Microsoft Teams.
Copilot Studio allows you to develop your own agents with computer use for UI automation and Model Context Protocol support, along with multi-agent orchestration, where a scenario like onboarding can bring together agents that each have their own expertise and experience. Over a million agents have been developed that integrate into Microsoft Teams and Microsoft Copilot, and there is now a new class of agent with Copilot Tuning, which lets you fine-tune agents with your company's knowledge. You can do this with a set of references so an agent can talk, think and work like you, such as generating documents specific to your company, and you can even tune each model to the specific know-how of the different industries you work with.
Miti Joshi
Miti Joshi talked about easily scaling productivity solutions with Microsoft 365 Copilot, where you can call on agents such as Researcher to tap into GitHub and analyse performance issues; this can use the Microsoft Graph to ingest issues from the GitHub API and surface the backlog items you care about. Copilot Studio helps build agents with little code required that can post into Teams with their own Entra ID: you describe what the agent needs to do, choose the model to use, select the knowledge to ground the agent so it pulls from the right sources, and then configure tools and triggers, such as running when an email is received, and you can also use MCP servers where needed. You can use multi-agent orchestration to take on more complex work, connecting one agent to another, such as an agent that carries out compliance checks, to perform additional behaviour in a workflow, and then use Copilot Tuning, a low-code way to fine-tune a model. Copilot Studio can access reasoning models and supports deterministic workflows; think of these agents orchestrating workflows for every role and business process, with every business application showing up as an MCP server, allowing developers to think about the next level of integration using AI.
Satya Nadella
Microsoft is taking everything underneath Copilot and making it a first-class platform for building your own applications and application extensions: multi-modal, stateful AI applications that are production ready. Building a first-class app server is needed, and it takes more than a model to build a system; Azure AI Foundry is that complete platform for the AI age, used by many companies for enterprise-level deployments. Microsoft is going further with Azure AI Foundry by supporting many different models from OpenAI, Cohere, Databricks, DeepSeek, Hugging Face and more, with same-day access to OpenAI models. Developers care about many things, and Azure OpenAI is best in class, with high reliability, great cost controls, and leading security, compliance and safety. Picking a model can be a bit of a chore, so there will now be a model router for Azure OpenAI models, offering dynamic model routing that allows applications to become multi-model.
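To make the routing idea concrete, here is a rough, self-contained sketch of what dynamic model routing means in principle: cheap requests go to a small model, harder ones to a stronger model. This is not the Azure model router's actual logic, and the deployment names are example assumptions only.

```python
# Illustrative sketch of dynamic model routing (NOT the Azure model router's
# real implementation): pick a model deployment per request via a heuristic.
REASONING_HINTS = ("prove", "step by step", "analyse", "plan")

def route(prompt: str) -> str:
    """Return a hypothetical deployment name for this prompt."""
    needs_reasoning = any(h in prompt.lower() for h in REASONING_HINTS)
    if needs_reasoning or len(prompt) > 500:
        return "o4-mini"        # stronger reasoning model (example name)
    return "gpt-4.1-nano"       # cheap, fast model (example name)

print(route("What is the capital of France?"))
print(route("Plan a migration step by step."))
```

The real router makes this decision with a trained model rather than keywords, but the application-facing benefit is the same: one endpoint, multiple models behind it.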
Satya Nadella & Elon Musk
Grok from xAI will be coming to Azure AI Foundry. Elon Musk has a deep vision of what AI can be; the roadmap and vision for the platform is to reason from first principles, apply the tools of physics to thinking, and test conclusions against those fundamental principles.
Satya Nadella
Switching models will become easier, but models are just part of the equation: you also need access to the real-time web and enterprise knowledge, and a more sophisticated query engine custom built for agents, which can break down a complex query, run the parts in parallel and return synthesised results. Foundry Agent Service lets you create agents and use the service as a managed execution environment; it enables multi-agent workflows and is flexible and reliable. Microsoft provides a full spectrum of compute for agents to work with, so you can deploy applications as Azure Container Apps or Azure Functions, and you can bring your own models to Copilot Studio to automate a workflow or build an agent. Another consideration for app servers is observability, so you can operate and monitor AI in production. Organisations will have people and agents working together, and agents will need identity, governance and security: agents get their own access controls, Purview is integrated for data governance, and Defender is integrated for security.
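A managed agent service like this essentially runs a loop of model calls and tool calls on your behalf. As a minimal, self-contained sketch of that pattern (not the Foundry Agent Service API; the tool and the stand-in "model" are invented for illustration):

```python
# Minimal sketch of the loop an agent runtime manages: the model proposes a
# tool call, the runtime executes it and feeds the result back, until the
# model produces a final answer. All names here are illustrative only.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"          # stand-in for a real API call

TOOLS = {"get_weather": get_weather}

def fake_model(message: str):
    """Stand-in for an LLM: request a tool call once, then answer."""
    if "weather" in message and "Sunny" not in message:
        return {"tool": "get_weather", "args": {"city": "Seattle"}}
    return {"answer": message}

def run_agent(user_message: str) -> str:
    message = user_message
    for _ in range(5):                 # cap iterations, as a runtime would
        step = fake_model(message)
        if "tool" in step:
            result = TOOLS[step["tool"]](**step["args"])
            message = f"{message} -> {result}"
        else:
            return step["answer"]
    return message

print(run_agent("What is the weather?"))
```

The value of a managed service is that this loop, plus retries, state, and observability, is run for you at scale instead of being hand-rolled in every application.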
Kedasha Kerr
Kedasha Kerr showed how to use Azure AI Foundry and GitHub Copilot to make development easier while building an AI travel agent. You can make your agents more grounded and smarter by adding knowledge from files with reference data or other services, but you can get a long way by adding grounding with Bing Search, and you can also give an agent access to OpenAPI-described APIs that just work. These simple steps make agents more reliable and less likely to hallucinate, improve the output, and give the agent more capabilities so it can do more on your behalf, such as booking hotels or even requesting leave from an internal system. You can add capabilities by using GitHub Copilot in agent mode with the most powerful models to actually do the work: by using MCP to fetch issues from GitHub, GitHub Copilot has the context to be able to help you. For example, to add new functionality you can ask GitHub Copilot to fetch the details for a GitHub issue and even add a rough design sketch as context for the prompt; it will then ask for specific permissions before making changes, and using its vision capabilities it can understand the sketch and proceed to make changes that stick to any existing coding styles.
Satya Nadella
Satya mentioned it is really exciting to see the progress; each of these platform shifts has needed an app server, and building out these capabilities has helped, as it takes scale. Microsoft wants to bring the power of the app server to the local machine with Foundry Local for Windows and macOS, including a CLI for development. Windows is the most open platform, with massive scale of over a billion users and devices you can reach, and they are making it more secure and reliable. Developers are using on-device capabilities on Windows to light up more features, and Microsoft is taking another step with Windows AI Foundry, which is what was used for Recall: it provides tools to optimise inbox models, you can use open-source models or bring your own Windows ML model, and you can tap into models you run locally with Foundry Local. With Windows AI Foundry you can customise the local Phi Silica SLM (Small Language Model) using a LoRA adapter, which will revolutionise what AI looks like on the PC. For the model you have rich semantic APIs and can build hybrid RAG experiences that are contextually relevant, and Windows ML lets you deploy models without device-specific tuning. There will be native support for MCP in Windows, with several built-in MCP servers and a registry to discover them, vetted for security and performance.
Divya Venkataramu
Divya Venkataramu showed how easy it is to set up a new Linux distro and deploy a web project into it using GitHub Copilot and MCP on Windows. AI components can access MCP servers that users enable, such as Windows Subsystem for Linux; you can connect to this with GitHub Copilot and get it to set up Fedora Linux and then a project in the installation, with user consent prompts displayed to ensure access is granted correctly. Once the environment is set up, you can see the steps that were carried out, which included setting up the environment and the required packages for the project and running specific commands directly to carry out any actions. You can refine the project with a prompt to make it look like a Figma design, which used an MCP server to pull the design details directly from Figma and implement the design accordingly in the web project. All of this was achieved with just a few sentences in a simple chat interface, showing the potential to enhance productivity with the power of secure, agent-driven MCP functionality on Windows.
Satya Nadella
Satya said that Windows is becoming the best dev box for the agentic web. It was ten years ago that Windows Subsystem for Linux was developed, and there was a long-standing request to open-source it; after a lot of changes making it more separable, WSL is now fully open-source.
Kevin Scott
Kevin Scott, in his ninth Build as CTO of Microsoft, talked about how many things have changed in the universe of technology in the past year. The thing that excites him right now is that the components everyone has been working on are forming the agentic web, and it is happening in an open way. There has been an explosion of agents, with a richer ecosystem being built, and they are being used more frequently than ever before, with usage more than doubling since last year. The bigger thing happening, thanks to the new reasoning models, is taking on extremely complicated tasks; sitting on top of the agent layer is the runtime layer, an emerging set of components. The reasoning capabilities are currently more powerful than how they are being used, so things that are barely possible now can be considered as the models improve. One thing that needs to be built for agents is really robust agentic memory: you want agents to have rich memory with high-precision recollection, so you can trust and rely upon them. TypeAgent from Microsoft is one way to tackle this, remembering interactions so a problem is solved once and the solution retained. Microsoft will offer these things in Azure AI Foundry, which will become richer and richer over the next year. The thing that is super important for an agentic web is agents being able to take actions on your behalf, which needs protocols such as MCP and A2A and emerging platforms that connect agents in a reliable and interoperable way to access content and services and take action to fulfil tasks delegated to them.
Kevin Scott talked about the commitment to MCP, which has taken off to fill a niche in the open web: it is a simple protocol that allows you to do sophisticated things, and a great foundation to layer things on top of, a pattern that has been important for the web before as problems were solved over time. Microsoft is doing a lot of work to MCP-uplift its first-party services, making MCP their protocol for agentic services, and will be working over the coming months with Anthropic to ensure enterprise issues are solved, such as permissions for agents that need access to systems, and to support the open community; you may have sharp opinions, but the most important thing is ubiquity, getting to a standard. NLWeb is MCP-compatible: with the web we had HTTP, then HTML, which is opinionated about the payload, and with NLWeb anyone with a website or API can easily make it an agentic application that leverages the full power of large language models, where every endpoint can be an MCP server. You can add NLWeb experiences to websites, it will be open source, and Microsoft is looking for feedback on how to make it better so you can turn a website into an AI app. Openness is so important here: simple, composable components expose creativity to anyone who has an idea, and the agentic web is where developers can use their imagination to make it possible.
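Part of why MCP is easy to layer on top of is that its messages are plain JSON-RPC 2.0. As a minimal illustration of what a client sends to invoke a server-side tool (the tool name and arguments below are hypothetical examples, not from any real server):

```python
import json

# Minimal sketch of an MCP tool-call request. MCP messages are JSON-RPC 2.0;
# the tool name and its arguments here are hypothetical examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_issue",                 # hypothetical tool
        "arguments": {"repo": "octo/demo", "issue_number": 42},
    },
}

payload = json.dumps(request)
print(payload)
```

Because the wire format is this simple, any endpoint that can speak JSON-RPC can participate, which is what makes "every endpoint can be an MCP server" plausible.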
Satya Nadella
Satya talked about the agentic web vision, which gets us close to the original vision and ethos of the web, with intelligence more distributed across it; NLWeb democratises the creation and distribution of intelligence, and it is a platform they want to create together, not a repeat of the past centred on aggregator power. For any app the data tier is important, and Microsoft is building this out to enable data processing that is more efficient and capable. SQL Server 2025 will be launching soon, Azure Cosmos DB will be integrated into Azure AI Foundry, and this will be taken further with Azure Databricks; within a PostgreSQL query you can now mix LLM queries and SQL together. Microsoft Fabric is at the heart of the analytics and data stack: SQL has already been brought to it, and Cosmos DB data will be brought in to unify an entire data estate across NoSQL and SQL. Power BI will enable you to ask questions of your data and analyse across multiple Power BI reports, and will be available in Microsoft 365 Copilot.
Satya talked about infrastructure, where developers face an optimisation problem: delivering AI experiences at the best cost. Microsoft is optimising the stack to bring it together and offer the lowest-cost, highest-scale infrastructure for cloud and next-generation AI workloads, offering the most tokens per watt per dollar through silicon, systems and model optimisation. Azure is delivering the highest throughput of any cloud platform, including enabling the latest hardware from NVIDIA, to deliver the best intelligence to the world.
Satya Nadella & Jensen Huang
Jensen Huang from NVIDIA talked about launching the largest AI supercomputer on Azure last year; new model technologies have sped things up dramatically in just a couple of years, the reduction in costs and increase in performance is ongoing, and the best of times is yet to come as these capabilities are brought to developers.
Satya Nadella
Microsoft has over 70 data centre regions and has opened ten data centres in just the past few months, building a complete AI system with closed-loop cooling that doesn't consume any water. Data centre design is integrating improvements, and data centres are being connected with a 400 terabit-per-second backbone. AI applications themselves need storage and compute, where efficiency is being driven, including Azure Cobalt for ARM-based VMs, and there is a fungible fleet of resources available for AI applications. Satya said it is also about digital resilience, with responsible operations and confidential computing, maintaining complete control over how systems are governed and who has access, and Microsoft also offers Azure Local as an alternative on your own infrastructure.
Satya talked about how there will also be real breakthroughs in science, creating new compounds and molecules, and Microsoft is bringing the entire stack to science and the scientific process with Microsoft Discovery. It is built on a very powerful graph-based knowledge engine that doesn't just bring back facts but also understands the nuanced knowledge of science, bringing highly specialised elements to a science agent.
John Link
John Link showed how to lead a team of agents to make a scientific discovery by reasoning over knowledge, generating hypotheses and conducting research. Microsoft Discovery can use public research and internal knowledge, giving you a summary and a comprehensive report with citations for trusted research; the goal is not just to reason over knowledge but to generate hypotheses. You can generate a plan for specific research to build the right workflow, where agents use tools and models, integrate open-source and third-party solutions, and use models and simulations to validate any findings. Once a good approach is found, you proceed to the next phase of experimentation using the best resources, with advances in quantum computing to be integrated in the future. Microsoft Discovery can compress days and weeks into hours: you can see how to proceed or iterate further, and then actually use the results to create a product, such as a new coolant, but it could also be used to find other materials, where the next breakthrough is yours to discover.
Satya Nadella
Satya talked about how Microsoft Discovery is being used to speed up developments. Microsoft is creating a whole new opportunity across the agentic web and every layer of the stack: GitHub Copilot and Microsoft 365 Copilot enable agents for every role, and Azure AI Foundry lets you build AI agents on a robust set of rails for management, identity and security, creating opportunities to fuel ambitions across the world and help solve problems wherever people are. The big winners are going to be the developers who build applications, not those who build platforms, and it is not about the technology but what people do with it.
Unpacking the Tech
Jay Parikh
Jay leads the new Microsoft CoreAI team and has spent a lot of his career building infrastructure and tools for developers. The CoreAI team focuses on empowering every developer to shape the future with AI; the principles that guide them are to build with AI, not just build AI tools and platforms, and to transform the entire developer experience, not just part of it. He would show off AI-powered tools that extend from the cloud to the edge, help bring in a new age of AI applications, make it easy to test, deploy and monitor applications, help modernise applications and reduce tech debt, and give developers their time back.
Jay Parikh & Jessica Deen
Jessica Deen joined Jay with a demo application: it is possible to ask it to create a PR to improve a README to help a new person joining a team, but also to use these tools to develop faster. Jessica is a developer who spends most of her time asking who broke this, then realising it was her. You can look at the backlog, see what tasks you can avoid, and get Copilot to do them instead, such as input validation or type hints for API functions, which Copilot lives for. Another is to remove all TODO comments from the codebase: let GitHub Copilot take care of these while you do more interesting tasks, such as a new design for an icon. Copilot can also help with edits when editing your code, or you can bring an agent into Visual Studio Code and have it take care of an issue, such as implementing a Figma design, and it will pick up a Copilot instructions file to follow any coding standards and principles. This is all powered by the Model Context Protocol (MCP), which gives Copilot access to more tools, such as getting designs from Figma, and when writing commit messages you can have Copilot fill those out too.
Jessica talked about the README, where you can see what Copilot has done in a PR and add a comment asking for something like an architecture diagram using mermaid.js to be included. Copilot allows you to focus on higher-impact work, and the models you use in GitHub can be used in your own agentic applications; you can get starter code for integrating GitHub Models and Azure AI Foundry into your applications. You could also look at an incident where an unhandled exception occurred and see details of the issue from the Azure SRE Agent, which can handle an alert, acknowledge it and begin an investigation, including forming a hypothesis; it detected that the errors were related to a deployment and was able to initiate a rollback and handle monitoring itself, which saves time if you are on call, and you could then use GitHub Copilot to resolve the code change. It could also be a migration, where you can get a plan for the change and what is needed to upgrade or migrate, such as from RabbitMQ to Azure Service Bus, and see what changes would be required. Jay talked about using AI to migrate to the latest version of .NET with an AI-powered upgrade agent, along with using one to migrate mainframe code, which should be available later this year. Jessica talked about an issue for integrating a virtual guestbook service with agentic Spotify integration, which could be implemented with GitHub Copilot with agents.
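As a small illustration of the kind of mermaid.js architecture diagram Copilot might add to a README (the services named here are hypothetical, not from the demo):

```mermaid
flowchart LR
    Client[Web client] --> API[API service]
    API --> Queue[(Message queue)]
    Queue --> Worker[Background worker]
    Worker --> DB[(Database)]
```

GitHub renders mermaid blocks like this directly in README files, which is what makes it a convenient target for an agent-generated diagram.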
Jay Parikh
Jay mentioned this is just the beginning of the plan, create and operate workflow, where you will have a team of AI agents you can work with. Microsoft is also integrating OpenAI and Anthropic agent functionality into GitHub, where GitHub will be the place you organise intelligence so agents can operate effectively and securely, with openness of choice. Many customers are seeing the benefits of GitHub Copilot, including saving hundreds of hours of manual work.
Jay talked about the platform that powers AI agents, which requires new systems, infrastructure and tools: the agent factory. He spoke to Bill Gates, who said that Microsoft's vision was to create a software factory, and they are taking the same approach to build a full-stack platform for AI agents and applications. Microsoft has been working to ship features quickly, developers can try out the new features, and Microsoft uses them internally with Microsoft 365 and Copilot. There is a new leaderboard to see which models are most popular, Hugging Face integration, and same-day access to the latest OpenAI models in Azure AI Foundry; with Azure API Management you can access these from any AI application with MCP, and there is GitHub Models to try out models, along with model customisation for your specific use case, and you can also do distillation to reduce a model for requirements, cost or performance.
Jay also talked about the other core area of Azure AI Foundry, a complete stack of AI capabilities, with key services to expand on: the Agent Service to run agents at enterprise scale, an agent catalogue to pick from existing agents, agent knowledge that allows agents to perform more sophisticated tasks, support for tools and protocols such as MCP and A2A, and frameworks to help agents make decisions in a more cohesive way.
Amanda Foster & Elijah Straight
Amanda Foster and Elijah Straight talked about building an agent in Azure AI Foundry, which can use its configuration to make the correct tool calls, to create a calendar appointment for example. Visual Studio Code has an extension where you can see what agents are doing, including requests and responses and in-depth details such as tokens used and tools called, with thread data that can be stored. You can use multiple agents together in one experience, for example an event-planning agent that performs actions such as creating a LinkedIn post, producing an image for the event, creating RSVP content and posting a draft to LinkedIn. To ensure the agents are delivering high-quality results, you can integrate into the CI/CD pipeline, including AI quality metrics, using the SDK for this.
Jay Parikh & Patrick LeBlanc
Jay talked with Patrick LeBlanc about digging into the data for the example event, looking for speakers who are good at AI and demos. They were only using keyword and vector search for this, but you can use semantic operators in PostgreSQL instead, for example the azure_ai.generate semantic operator with a prompt to get the information along with the query.
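As a rough sketch of what such a query might look like (the table, columns and the exact operator signature are assumptions based on the description above, not the demo's actual code):

```sql
-- Hypothetical example: mix an LLM prompt with ordinary relational filters
-- using a semantic operator inside a PostgreSQL query.
SELECT s.name, s.bio
FROM speakers AS s
WHERE azure_ai.generate(
        'Answer yes or no: is this speaker a good fit for an AI demo session? '
        || s.bio
      ) ILIKE 'yes%';
```

The point of the pattern is that the LLM call sits inside the WHERE clause like any other expression, so the database plans and runs it alongside normal SQL.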
Jay Parikh & Mehrnoosh Sameki
Jay talked about adopting new AI capabilities being a problem if you can't trust or secure them; this is a core principle for AI, addressed by integrating Purview and Defender into agents, and Mehrnoosh Sameki talked about measure, protect and monitor. She showed how to use an AI red-team agent to run adversarial attacks against an agent to test the system and see if any attempts get through; if they can, you create a content filter with guardrails to protect against direct or indirect jailbreak attacks. You can monitor attempts against the system, though Prompt Shield can already block many of them, and you can use Defender to fully investigate an incident and see where it is coming from; if it is coming from a known attack group you could even remove the relevant access from a compromised source, putting the right controls in place and monitoring in production.
Jay Parikh & Seth Juarez
Jay also talked about cloud to edge: you shouldn't need two platforms for these, so Azure AI Foundry is being built to extend from the cloud to the edge, and Seth Juarez showed that there are a bunch of models you can easily download and use locally. Jay talked about securing the system and taking AI from the edge to the cloud, about continuing to explore the frontier of what you can do with the tools and platform, and about teaming up with Microsoft Research to see what can be included in the platform; there is a lot to figure out about which capabilities to work on.
Seth mentioned there are a lot of things a data scientist needs to do, such as getting some data and analysing it by row and by column, but instead you can use an AI agent like a mini data scientist in a box, turning a day's work into an hour, and even if there are formatting issues the agent can compensate and produce the code you need. GraphRAG allows relationships and entities to be extracted and analysed to understand a data structure or codebase, and then be used to help create new capabilities in code by understanding the code and implementing the changes.
Jay Parikh
Jay mentioned we know how important these tools are to shaping our craft, team culture and ambitions, but the opportunity is the idea of breaking free of constraints: you can build in multiple dimensions and in parallel with multiple agents, and doing this together will make superpowers ubiquitous, where the only limitations are imagination and drive.
Jatinder Mann
Jatinder Mann talked about Windows AI Foundry, which is a way to build with AI models, with access to a growing catalogue of models and the support you need to get started; you can explore a wide range of models tuned for Windows or pull from repositories spanning CPU, NPU and GPU. It is not just about the number of models but the quality of what can run directly on your device: models that can reason over multi-stage tasks, which needed cloud-scale compute before, can now run directly on your PC. Phi-4-reasoning is a 14-billion-parameter model that can run locally. With Foundry Local you can see the list of models optimised for your PC, see which models are already downloaded, run them directly from the CLI and interact with the model, and then integrate it easily by changing the endpoint from the cloud to localhost.
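Because the local runtime exposes an OpenAI-compatible endpoint, "changing the endpoint to localhost" really is the whole integration change: the request body stays the same. A minimal sketch (the port and model name are assumptions; check what the CLI reports on your machine):

```python
import json

# Sketch of an OpenAI-compatible chat-completions request aimed at a local
# endpoint instead of the cloud. The port and model name are assumptions,
# not guaranteed defaults; check your local setup.
LOCAL_ENDPOINT = "http://localhost:5273/v1/chat/completions"  # assumed port

request_body = {
    "model": "phi-4-mini",  # example local model name
    "messages": [{"role": "user", "content": "Summarise this PR in one line."}],
}

payload = json.dumps(request_body)
print(LOCAL_ENDPOINT)
print(payload)
# To send it, POST `payload` to LOCAL_ENDPOINT with any HTTP client; only
# the base URL differs from calling the cloud-hosted equivalent.
```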
Jatinder talked about the next step, which is customising these models, including tailoring them with parameter-efficient fine-tuning using LoRA; this can be done on Azure, and then a lightweight adapter file can be used locally. You select a model, select the training data and test data, and trigger a fine-tuning job on Azure to fine-tune a pre-trained model, and you can even see examples of what the output looks like before and after a LoRA adapter is applied. You can also make sure a model uses the right knowledge at the right time, including building natural-language search. When you bring your own models, you can take advantage of Windows ML to make sure performance and capabilities are accessible.
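The reason a LoRA adapter is such a lightweight file is that it stores only two small low-rank matrices per adapted weight, rather than a full copy of the weight. A tiny pure-Python sketch of the idea (sizes and values chosen purely for illustration):

```python
# LoRA in miniature: instead of shipping a new d x d weight matrix, ship two
# small matrices B (d x r) and A (r x d) with rank r << d. The effective
# weight is W + scale * (B @ A). All values here are illustrative only.
def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

d, r = 4, 1                      # model dim 4, adapter rank 1
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # base
B = [[1.0], [0.0], [0.0], [0.0]]                   # d x r, trained
A = [[0.0, 0.5, 0.0, 0.0]]                         # r x d, trained
scale = 2.0                                        # alpha / r

delta = matmul(B, A)                               # low-rank update
W_eff = [[W[i][j] + scale * delta[i][j] for j in range(d)] for i in range(d)]

# The adapter stores d*r + r*d = 8 numbers instead of d*d = 16; the saving
# grows quadratically with d for a fixed rank r.
print(W_eff[0])
```

At real model sizes (d in the thousands, r of 8 or 16) this is why the adapter is megabytes while the base model is gigabytes, and why fine-tuning in the cloud and applying locally is practical.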
Charles Lamanna
Charles Lamanna talked about the fact there will be a lot of agents developed over the coming years. Microsoft Copilot follows you from app to app and brings in capabilities across applications; you can also use it with Microsoft Teams or in the background, and as usage grows these agents will need to be secured, with Entra for access, Purview for data and Defender for security. In Microsoft Copilot you can use agents from Microsoft or third parties, such as Researcher to explain and reference a document, which can even compare against internal strategies; it knows how you work and will have follow-up questions on anything particular to pay attention to. The Researcher agent executes a sequence of steps to get the information you need; responses are thorough and verbose but can also include straightforward overviews, and you can use the information to take the right approach to research and preparation, where agents make us more productive. There are many agents available, and agents developers build can show up in an open ecosystem. A key part of this is MCP, the Model Context Protocol, the new way these agents communicate with backend systems; MCP services are available for anyone to use, and Microsoft will support all the open protocols that matter, as it will be an open agentic web, to create first-class experiences in platforms such as Microsoft Copilot.
Charles talked about the ongoing theme of lots of agents and the need to create some structure around them: just as you organise teams of people, you can do the same with agents so that they appear alongside our organisations, with agents and people working alongside each other. This is a new way of getting things done; everything we use will have to change, the way we collaborate will have to evolve, and we will need new tools to get this done. These include Copilot Studio, which is an incredibly easy way to build agents and works with Power Apps, along with enabling access to agent capabilities in Microsoft Teams.
Ryan Cunningham
Ryan Cunningham talked about agents in Copilot Studio, which start with natural-language instructions and grounding with knowledge from your organisation. These agents aren't just waiting to be chatted to; they have tools and skills to do work in an organisation. If something has an API you can work with it, or use MCP to pull through capabilities. You can create very specific prompts about how an agent will respond, select the AI model and see how it performs, and AI developers can do foundational work on a model and leverage this in Copilot Studio.
Ryan mentioned that Power Apps allows agents to be layered on the same secure platform and used to create a whole new type of application. You can even manage agents and supervise the work they are doing autonomously, as agents won't do 100% of things 100% of the time: you can see which agents need assistance, whether they got stuck, and get them unblocked quickly. Agents can also notice trends and things humans may not notice, and you can partner with an agent to investigate something. You can have a team of agents built into Power Apps that perform different actions based on a plan, where you can add steps and partner with agents at every step; this can be the foundation for building software and change the way the world works with agents and Power Apps.
Charles Lamanna
Charles mentioned that the other place we will work together with agents is Microsoft Teams, with 320,000,000 people using Teams every day. We will see many agents working in Teams in a chat, channel, meeting or call, where you can interact with these AI agents, and Microsoft will be making this easier with the Teams AI Library to have agents show up.
Farah Shariff
Farah Shariff talked about Teams being the collaboration space for agents and colleagues to work together. In a chat you can talk to agents and access critical information within the flow of work. You could have an agent for a stand-up that bubbles up topics, which can include things from other agents such as an incident response agent, and it uses the A2A protocol to communicate with them no matter how they were built. You can build agents with the Teams AI Library that take advantage of MCP or A2A and integrate with AI models, which results in faster development and agents that are more capable of interacting with humans within Microsoft Teams.
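The point of A2A is that one agent can hand a task to another in a common format regardless of how each was built. The sketch below illustrates the general shape of such a hand-off: A2A is a JSON-RPC-based protocol, but the specific method and field names used here are assumptions for illustration and should be checked against the A2A specification; the task text is a made-up example.

```python
import json

# Illustrative agent-to-agent task hand-off in the spirit of A2A.
# A2A is JSON-RPC based, but the method name and message fields
# below are assumptions, not taken from the normative spec.
task_request = {
    "jsonrpc": "2.0",
    "id": "task-001",
    "method": "tasks/send",                  # assumed method name
    "params": {
        "message": {
            "role": "user",
            "parts": [{"type": "text",
                       "text": "Summarise today's incident reports"}],
        },
    },
}

decoded = json.loads(json.dumps(task_request))
```

A stand-up agent and an incident-response agent exchanging messages of roughly this shape is what lets them interoperate even when they were built on different stacks.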
Charles Lamanna
Charles talked about agents showing up in more places such as Microsoft Teams; they need to be secured with Entra the same way people and applications are protected today, with Purview to protect data as it flows through.
Shilpa Ranganathan
Shilpa Ranganathan talked about Microsoft Purview having built-in safeguards to prevent sensitive information, such as customer accounts, from being leaked accidentally. This information can be labelled automatically by Purview, and information that is labelled correctly is safe for agents to use; AI-generated content is also automatically labelled and protected with Purview. In a data-leak scenario, asking for something that is sensitive will return an error saying that the information is restricted instead of responding with the information.
Charles Lamanna
Charles mentioned that the approach with Copilot, Copilot Studio and Azure AI Foundry is to be secure by design with Purview, Entra and Defender. You can use these AI agents today, but they need world-class cloud and infrastructure.
Scott Guthrie
Scott Guthrie talked about offering the highest-scale, lowest-cost AI infrastructure to power AI agents, with innovations and optimisations in Microsoft Azure, which has more regions than any other provider, is expanding capacity, and lets you bring applications closer to customers. Much of the capacity in Azure is optimised for AI, and Microsoft are building out many Azure AI Datacentres with the most advanced AI supercomputers to allow developers to leverage AI models at even lower cost. With this power and scale comes responsibility: energy is sourced from renewable and zero-carbon sources, with 100% renewable energy by the end of 2025. Microsoft are also using innovative approaches in construction, with hybrid materials including cross-laminated timber and alternatives to concrete foundations.
Scott talked about the Azure AI Datacentre running clusters of NVIDIA GB200 AI systems, with the first server, rack and cluster deployed in Azure and already running production workloads. These GPUs are packed together in a single rack to train larger AI models, but the challenge when putting them together is cooling: they are so power dense that you can't use air cooling, so liquid cooling is needed to provide cool liquid and extract hot liquid, which is cooled in a continuous closed-loop system with giant fans. The closed-loop system means water is not wasted, and all new datacentre designs in future will use zero-water-waste cooling methods. GPUs need more interconnections to operate as a single interconnected cluster, and this needs to be hyper-optimised, with enough optical fibre in one datacentre to wrap around the world many times. There is a 400 terabit AI WAN for an Azure AI Datacentre which enables training across multiple datacentres, and every region has exabytes of high-performance storage to drive over two million transactions per second on a single Blob storage account for unprecedented data transfer for AI workloads.
Scott mentioned Azure Boost, available for every new server in Azure whether for AI or not, and Azure Cobalt 100 ARM-based VMs with the best price-performance, including twice the performance for .NET applications. The Azure AI Datacentre will have ten times the performance of the world's fastest supercomputer and is just one of the new datacentres being delivered. Azure enables cost reductions and powers a new era of AI apps and agents that integrate AI and provide AI solutions that weren't possible before. ChatGPT runs on Azure and is the fastest-growing app in history with over 500 million weekly active users. ChatGPT needed petabytes of data storage and uses Azure Cosmos DB, which delivers turnkey scale-out with guaranteed uptime and can replicate data to any region in the world. Prompts, metadata and more for ChatGPT are stored there to deliver a natural-language experience with low latency and high scale, and data can be put closer to users, resulting in faster responses.
Scott mentioned you can start with gigabytes of data and scale to petabytes, and with Cosmos DB you only pay for the data and scale that you use; it delivers five nines of availability and has been essential to ChatGPT's growth and success. ChatGPT needs to scale its application across ten million cores, assisted by Azure Kubernetes Service for cloud-native applications, a fully managed service available in every region with automated deployments, healing and patching along with security safeguards, allowing cloud-native scale across a virtually unlimited number of nodes without needing to scale linearly. All of this technology is powering a new era of AI apps and agents, and AI innovators are building on Azure.
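The mechanism behind that kind of turnkey scale-out is partition-key routing: each item's partition key is hashed to decide which physical partition holds it, so lookups by key touch a single partition no matter how many partitions exist. The sketch below illustrates the concept only; Cosmos DB's real hashing scheme and partition management differ.

```python
import hashlib

# Conceptual sketch of partition-key routing, the idea that lets a
# store like Azure Cosmos DB scale out horizontally. This is an
# illustration, not Cosmos DB's actual hash algorithm.

def route(partition_key: str, partition_count: int) -> int:
    """Deterministically map a partition key to a partition index."""
    digest = hashlib.sha256(partition_key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % partition_count

# The same key always lands on the same partition, so reads and
# writes for one user stay on one partition as the cluster grows.
p1 = route("user-12345", 16)
p2 = route("user-12345", 16)
```

Because routing is a pure function of the key, adding capacity is a matter of redistributing partitions across machines rather than rewriting the application, which is how workloads can grow from gigabytes to petabytes while paying only for what they use.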