Microsoft Build 2023 - Day One

Microsoft Build Opening - Satya Nadella

Welcome to Build! Satya Nadella said at the opening of Microsoft Build that it is fantastic to be back together, with everyone joining online too, especially as platform shifts are in the air. It is exciting to come to Microsoft Build 2023 and see that something big is shifting for developers. Satya's use of early versions of GPT is part of the pursuit of the dream machine, which started with the 1945 paper “As We May Think” and continues through PCs and the World Wide Web to ChatGPT today. “Computers are bicycles for the mind” - Steve Jobs. Recently it seems we went from the bicycle to the steam engine.

How we build

How we build is fundamentally changing: Codespaces allows developers to create an environment in seconds, Dev Box lets you create an environment in less than an hour, GitHub Copilot assists developers, and GitHub Actions lets you stay in the command line or stay in the development environment. Stay on task and stay in the flow of programming. How we build software is radically different.

What we build

This will be the story of Microsoft Build. Every week there is something new: Microsoft is infusing AI across all layers, starting with Copilot on GitHub, then adding it to other products such as Microsoft 365 and Copilot for Bing and Edge, and to the AI infrastructure itself with OpenAI. Every layer of the stack is profoundly changing.

ChatGPT - Bringing Bing to ChatGPT - ChatGPT is the fastest-growing consumer application; Bing in ChatGPT is available now to ChatGPT Plus subscribers and will be available to the free tier.

Windows Copilot - Bringing Copilot to the biggest canvas of all, Windows, making every user of Windows a power user. It will be fully integrated into Windows 11 and can do things such as recommend a playlist from Spotify, simplify workflows within Windows, help you get more done, and access all of Microsoft from Windows Copilot.

Copilot Stack - Apps with plugin extensibility will allow people to build their own applications with copilots, with common extensibility across services such as ChatGPT, Bing Chat, Microsoft 365 Copilot and Windows Copilot, bringing this to billions of users.

Copilots & Plugins - Microsoft is delivering fast on its vision of bringing copilot features to the web. This includes integrating Bing into ChatGPT, which allows for up-to-date content with citations sourced by Bing. Microsoft will bring interoperability between plugins for ChatGPT and Bing Chat, so the same plugins can be used in both. Plugins gain further value by working across the entire web in Edge, where they can be surfaced to deliver incredible productivity features. Within Microsoft Word you can use plugins with Microsoft 365 Copilot, including document intelligence to see changes in an easy-to-use way and create documents in a more powerful way. Within Microsoft Teams you could use the Jira plugin with Microsoft 365 Copilot to create tasks and keep you in your flow. Windows Copilot will change how you use your PC forever: you can invoke it from the taskbar to show a side pane where you can ask questions and get great suggestions that you can put into action with one click, such as how to switch to dark mode, or use the Spotify plugin to get a great playlist; you can even organise your windows with a natural-language request. There are already over fifty plugins, and Microsoft is looking forward to what developers can build to integrate with these AI copilots, giving developers opportunities to reach more users.

Azure AI Studio

This is the full toolchain to build AI apps and copilots. You can build your own models or ground them with your own data; it supports retrieval augmented generation, lets you create prompt workflows, and has AI safety built in, giving developers the trusted tools they need to build their own AI-powered experiences.

Azure AI Safety

Azure AI safety spans the Responsible AI dashboard, grounding with Prompt Flow, media provenance in Bing and Designer, and watermarking, including audio watermarking. Deployment includes model monitoring and content safety.

Microsoft Fabric

Microsoft Fabric is a data analytics platform for the era of AI, with unified compute and storage, a unified experience, governance and business models; this unification will fuel the next generation of AI.


These are just five of the fifty announcements coming at Microsoft Build: how to build this next generation of applications, and how to build them safety-first. The pursuit of GDP growth has been driven by technology revolutions, from the printing press and mass production to information technology and now the development of AI. We want the standard of living to go up; that is why we do what we do. The things we build can make a difference to the whole world, not just a small group of people, with diffusion that takes days and weeks, not years and centuries. We must do this in a way that manages the energy transition and empowers every person on the planet to achieve more. What we do in the next weeks and months will have a profound impact on eight billion people.

The era of the AI Copilot - Kevin Scott

A lot has changed over the past four years, but even more has changed over the past year. It is about wanting to do things you didn't know were possible: how to make something that was impossible, possible. The power of AI is helping to do exactly that, making something possible that was impossible, so you can do something great with it.

What's happening is the rapid progress of AI models and the innovation of OpenAI in partnership with Microsoft, which together are setting the pace of change. Microsoft has the most powerful supercomputers, the most capable foundation models, including open-source ones, and the world's best AI infrastructure.

Azure is the cloud for AI, the beginning of an end-to-end infrastructure for AI that allows Microsoft to deliver its AI solutions along with those of everyone building on Azure. Windows is the best client for AI developers.

Copilot - an application using modern AI to assist you with complex cognitive tasks. Microsoft is opening up its platform to allow developers to create their own copilots.


Building GPT-4 and ChatGPT, which is probably the most interesting copilot in the world right now: ChatGPT adoption has been huge, and the challenge of building it was immense. OpenAI had the idea of a chat system for years, but the moment it clicked was GPT-4; the training they did meant it could follow instructions and hold a conversation. The infrastructure existed for the earlier models and wasn't designed for chat, but the model wanted to chat. GPT-4 was a labour of love: they went back to the drawing board and rebuilt the infrastructure because they wanted all the features and details to work correctly. It is often the boring engineering work that leads to success.

The idea is to empower developers to write software that extends the capabilities of ChatGPT and the copilots Microsoft is building, even though not all the technical issues are sorted out yet. Plugins are an amazing opportunity to leverage this technology and make it better for everyone, with an open standard whose core principle is to let developers bring the power of any domain. If you understand the core concepts, you can bring something together.

What is over the horizon? We're almost on a tick-tock cycle: you come up with an innovation and really push it, with features still being productionised, and new models reduce costs as things change in the future; what's expensive today won't be tomorrow.

In this field the technology is clearly getting better and better, but what developers can do is figure out how to make it work in their domain and understand the pain points of adopting this kind of technology; developers are the people who will make AI great.


When Microsoft was building copilots, they realised the idea is pretty universal and applies to more things than just software development like GitHub Copilot. Microsoft needed to find what was common across all these things and what stack would be needed to deliver them easily and safely; they have been able to develop these by building a stack that lets them move quickly with safety.

“A platform is when the economic value of everybody that uses it exceeds the value of the company that creates it. Then it's a platform.” - Bill Gates

It gives developers the chance to create things that otherwise wouldn't exist. It prevents people from bearing the burden of building from the ground up, so they build only the things they need to build. Foundation models are reusable and generalisable, and Microsoft made sure this was a durable property of these systems.

Copilot + Plugins - Foundation models can't do everything. You should have ways to accommodate your application and build it on top of this technology even if it is not complete or perfect: augment an AI application or copilot to do more, to access APIs, retrieve useful information, perform new computations and safely act on the user's behalf. Plugins are actuators for the digital world, attaching those things to copilots.

Reimagine software development with AI: user experience and application architecture, but with safety and security that need to be thought about. What doesn't change is that you must build a great product: you have to understand the unmet user need and your unique insight into it, then apply the technology, while still thinking about what good product-making is. The model is not your product; it is just infrastructure enabling your product. Don't fixate on infrastructure, focus on the product, and don't build infrastructure you don't have to build. It's up to you to create great experiences.

The Copilot Stack contains the front end, which is the plugin extensibility and user experience, and orchestration, with prompt and response filtering, the metaprompt, grounding and plugin execution.

User experience is about understanding what the machine is capable of and how to express the connection between human and machine, to fully anticipate the needs of the user and architect this in a familiar way. You will spend less time on this because there is a natural mechanism for it: natural language. You just need to think about what the model cannot do on its own, with less of the usual mapping of user-interface elements to code, and you need to think about what you want the copilot not to do; you need to constrain it to your domain and keep it on task.

Orchestration is like the business logic of your copilot. You can use Semantic Kernel, which is a common way of doing this given how much commonality there is, but there are other frameworks that also work well with Azure infrastructure. The fundamental thing you will be manipulating is a bunch of tokens, such as a question or something an application constructs. A big part of this is prompt and response filtering, for when they don't meet the needs of the user, though you may have other reasons to filter them. The metaprompt is what is sent with every conversation; this is where safety tuning occurs and where you set the personality to use, such as Bing Chat's more balanced or more precise modes, so it is a form of tuning. Past that you have grounding, where you add additional context, such as retrieval augmented generation: you look at a prompt and add extra context to produce a better response, for example using a vector database to return documents or information. A plugin may give extra context in grounding, or may perform plugin execution on the way back up from the model, and you may make multiple passes to get what you need from the system.
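The orchestration flow described here (filter the prompt, prepend the metaprompt, ground with retrieved context, call the model, filter the response) can be sketched in a few lines. This is an illustrative sketch with hypothetical names, not Semantic Kernel's actual API:

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch of a copilot orchestration layer. A real orchestrator
// (e.g. Semantic Kernel) adds planning, plugin execution and richer safety filters.
class Orchestrator
{
    // Stand-in for a call to a hosted model; in practice this is an API call.
    private readonly Func<string, string> _model;
    // Stand-in for a retrieval step (e.g. a vector database query) used for grounding.
    private readonly Func<string, IEnumerable<string>> _retrieve;

    public Orchestrator(Func<string, string> model, Func<string, IEnumerable<string>> retrieve)
    {
        _model = model;
        _retrieve = retrieve;
    }

    public string Ask(string userPrompt)
    {
        // 1. Prompt filtering: reject input that fails basic checks.
        if (string.IsNullOrWhiteSpace(userPrompt))
            return "Please ask a question.";

        // 2. Metaprompt: fixed instructions sent with every conversation (safety, tone).
        const string metaPrompt = "You are a helpful assistant. Stay on topic and cite sources.";

        // 3. Grounding: retrieve extra context and prepend it to the prompt (RAG).
        var context = string.Join("\n", _retrieve(userPrompt));

        // 4. Call the model with the assembled prompt; response filtering would go here.
        var response = _model($"{metaPrompt}\n\nContext:\n{context}\n\nUser: {userPrompt}");
        return response.Trim();
    }
}
```

In this sketch the model and retriever are injected as delegates, so the same loop works whether grounding comes from a vector database, a search index, or a plugin.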
Foundation models can be used with fine-tuning, or you can bring your own models. There will be a marketplace of models you can use in your copilots, and you could even train a model from scratch.

Kevin Scott has a podcast, Behind the Tech, but doesn't like writing social media posts, so he wrote a copilot that creates the posts for him. It runs on a Windows PC using a mixture of open-source and hosted models and calls a plugin to do its work. The first thing it does is use the Whisper model to get a transcript of the episode; the next stage uses Dolly 2.0 to extract information from the transcript, such as the name of the guest, then uses the Bing Search API to get the guest's bio and combines this into a packet of information. It then uses the Azure OpenAI Service to get an image from the DALL-E model and invokes a plugin for LinkedIn, which takes the information and posts it, but before taking an action on the user's behalf it gives them the option to review the post.

Coming soon are media provenance tools that will help users understand the content they see, using cryptographic methods to mark media so it is known to be generated by AI.

Copilots - new development pattern, unique architecture and will be everywhere.

Next generation AI for developers with the Microsoft Cloud - Scott Guthrie

Every app will be reinvented with AI, and we will see new apps built with AI that weren't possible before. Microsoft will make it easy to build these solutions using Azure, GitHub and Visual Studio.

Microsoft has the world's most loved developer tools for many languages and platforms; Visual Studio and GitHub together allow for one integrated developer platform.

GitHub Copilot lets developers create applications more easily, and GitHub Copilot X adds chat, support in pull requests and more. The autocomplete pair programmer is just the starting point. GitHub Copilot Chat lets you go beyond code suggestions and provide additional context; you can also ask it to make code more readable and easier to understand, and to perform refactoring. You can even ask GitHub Copilot Chat to fix the bugs in your code, and it will do so. Everyone attending Build in person gets early access to GitHub Copilot X. For developers using GitHub Copilot, almost half of their code is AI generated, most feel they are doing more satisfying work, and the majority are doing fewer repetitive tasks.

Microsoft is embracing an open plugin standard and has a developer experience to enable the creation of these plugins, which includes JSON files to define the details of the plugin; these can then be exposed from a GitHub Codespace or Visual Studio Code, for example, and installed where they need to be used, such as in ChatGPT.
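As a rough illustration, the open plugin standard describes a plugin with a small JSON descriptor that points at an OpenAPI spec for the plugin's API; all names, URLs and values below are hypothetical placeholders:

```json
{
  "schema_version": "v1",
  "name_for_human": "Contoso To-Do",
  "name_for_model": "contoso_todo",
  "description_for_human": "Manage your Contoso to-do list.",
  "description_for_model": "Plugin for creating, listing and completing to-do items for the user.",
  "auth": { "type": "none" },
  "api": {
    "type": "openapi",
    "url": "https://example.com/.well-known/openapi.yaml"
  },
  "logo_url": "https://example.com/logo.png",
  "contact_email": "support@example.com",
  "legal_info_url": "https://example.com/legal"
}
```

The `description_for_model` field is what the copilot reads to decide when to invoke the plugin, which is why the demos emphasise writing it carefully.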

One plugin extensibility model for ChatGPT, Bing Chat, Microsoft 365 Copilot, Power Platform Copilot, Dynamics 365 Copilot, Windows Copilot and your Azure AI Copilots.

Cloud-native development with GitHub + Azure gives access to Azure Container Apps, Azure Functions, Azure Kubernetes Service and Azure App Service alongside Azure Data Services. You can automate the development and updating of plugins and scale AI features; this is the infrastructure used by ChatGPT, the fastest-growing application in the history of the web. You could even use GitHub Copilot Chat to explain the YAML files used for CI/CD deployment with GitHub Actions.

Microsoft is able to have so many copilots because they are built on top of Azure AI, which includes several categories of AI capabilities, such as the Azure OpenAI Service with ChatGPT and GPT-4, the same models Microsoft itself uses, available to any developer. You can use OpenAI on your own data; if you are grounding with your data, you need to trust your cloud provider. Your Azure OpenAI instance is isolated from every other customer, it is not used to train the foundation model, and your data is protected by the most robust compliance and security controls in the industry.

Azure AI Studio lets you ground AI models in your own data with built-in vector indexing, and makes it simple to build your own copilots. You can use the Retrieval Augmented Generation pattern to augment the model with your data, and when retrieving information you can surface it as part of the answer. Everything is exposed as an API, so you can surface it natively in your applications any way you want.

Prompt Flow in Azure AI orchestrates AI models, prompts and APIs, with support for prompt tuning and experimentation with blue/green deployments, and supports Semantic Kernel. You can fetch data from multiple sources, structured or unstructured, and put it into a prompt. It also works with thousands of AI models, available in a model catalogue.

Azure AI Content Safety - considering safety is a requirement, and you need to build a system with this in mind from the very beginning. It detects and assigns severity scores to unsafe content, works on both human and AI-generated content, and is integrated across Azure AI. The mitigation layers are the UX, the metaprompt, the safety system and the model. Microsoft worked with OpenAI to build safety into the model, then builds on it with the safety system and the metaprompt. The safety system helps monitor for harmful content and allows prompts to be tested to make sure they are filtered based on the needs of the application. The metaprompt allows control of grounding, tone, safety and jailbreaks, and in Prompt Flow you can get information and metrics to see which flows perform better at producing the responses you need.

Azure - the world's AI supercomputer. This is the infrastructure powering ChatGPT and was used to train the models. Microsoft has built the largest AI training centres in the world and deploys Nvidia Hopper GPUs with InfiniBand connections unique to Azure. Microsoft will be adding 120 new datacentres this year, with over 60 Azure regions worldwide, more than any other cloud provider. Microsoft has committed to 100% renewable energy for its datacentres by 2025, with its AI infrastructure part of this, will be carbon negative by 2030, and by 2050 will have removed all the carbon it has emitted since its founding in 1975.

NVIDIA AI Enterprise and NVIDIA Omniverse Cloud will be available on Azure to create Metaverse applications such as digital twins and virtual factories.

Microsoft Fabric - Data is the fuel that powers AI, and AI is only as good as your data, so you need good data management in place. Microsoft Fabric unifies analytics tools, with unification at every layer: a single source of truth for everyone in the business, resources used in the most effective way, and a software-as-a-service model. It is lake-centric and open, integrated with Microsoft 365, and Copilot-powered. It supports open data formats and open APIs, and is multi-cloud, with a data lake in OneLake that is compatible with Databricks and other open-source tools and can work across other clouds such as AWS, with Google Cloud coming soon.

We have an exciting future ahead by using Azure to build solutions and innovate using AI!

Getting started with generative AI using Azure OpenAI Service - Dom Divakaruni & Pablo Castro

The journey continues with generative AI, which started in 1956 with the idea of Artificial Intelligence, through Machine Learning and Deep Learning to Generative AI. ChatGPT has had unprecedented growth, taking two months to reach 100 million users and showing how useful it is in people's daily lives.

Azure AI builds on these innovations so they can be integrated into applications, application platforms and scenario services, allowing developers to build applications on AI. The Azure OpenAI Service has large pretrained foundation AI models such as GPT-3, DALL-E 2, ChatGPT and GPT-4 from OpenAI, custom-tunable with your parameters and your data. It lights up use cases that were not possible before, including summarisation, reasoning over data, and writing tools for code generation, and supports ChatGPT and the era of copilots on a foundation of enterprise security, privacy and compliance.

GPT-4 is achieving human-level performance in text generation, with improved alignment. It can generate complex documents, be tuned and steered with nuanced instructions, and can instruct and annotate in any language, slang or dialect. What you are able to do with it is really remarkable; prompt engineering used to be an art, but now what you can create with it is phenomenal. Enterprise development with the Azure OpenAI Service includes interactive learning experiences that are personalised. It is also being embedded into products such as search, making it fundamentally better, along with helping write emails and other tasks. It can help novices learn things and act as a copilot to guide people through their tasks, help experts be more efficient in what they do, and help knowledge workers be more effective. It can also help workers use their time more effectively and create things faster and more accurately than ever, which in turn helps create more models and put AI into more places.


Announcements for the Azure OpenAI Service: you can apply your own data, create plugins for the service, configure content filters, and use provisioned throughput. These innovations provide more control over how you integrate it into your solutions.

The Chat Completions API is a versatile interface used for all scenarios, not just chat. The model adheres to instructions in the “system” message, which sets the behaviour guidelines for the model, including responsible AI steering. You can steer the models for your particular application or use case, and you can provide examples to help steer them; this gets you a long way compared to low-level fine-tuning and is a tough approach to break. Increasingly, with each generation of model, you have to fine-tune less.
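As a sketch of what a Chat Completions request with a “system” message looks like, the snippet below builds the request body; the field names follow the public Chat Completions API, while the message contents and parameter values are placeholders:

```csharp
using System;
using System.Text.Json;

// Build the body of a Chat Completions request. The "system" message sets the
// behaviour guidelines; few-shot examples could be added as prior user/assistant turns.
var request = new
{
    messages = new[]
    {
        new { role = "system", content = "You are a support assistant for Contoso. Answer only questions about Contoso products." },
        new { role = "user", content = "How do I reset my password?" }
    },
    temperature = 0.2,
    max_tokens = 400
};

string json = JsonSerializer.Serialize(request, new JsonSerializerOptions { WriteIndented = true });
// This body would be POSTed to your deployment's chat completions endpoint.
Console.WriteLine(json);
```

Note how the application, not the end user, controls the system message; that is what makes it a steering mechanism rather than just another turn in the conversation.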

Using your own data - suppose you want to reason over very long documents but token limits aren't sufficient: you could build a vector database to retrieve relevant data and construct the prompt at runtime. This pattern has worked for quite a while, so Microsoft has made it easier with Azure OpenAI Service on your data, which lets you combine a model with your own data and build an assistant that reasons over it securely; the data is not shared and not used to train the models, and new data sources will be added in the weeks and months to come. You can craft system messages in Azure AI Studio to help shape responses and make them do what you want.

RAG: LLMs + your data

Integrating large language models with your own data is one of the key challenges in AI. The aim with Retrieval Augmented Generation, or RAG, is to separate the large language model from an externalised knowledge base, all coordinated by an orchestrator that mediates between them. You may be building your own experience with a UX, an orchestrator, and calls to retrieval and the LLM, such as copilots and in-app chat, or extending other app experiences, or building plugins for OpenAI, ChatGPT and so on. We use the model for its capability to reason, but we don't want its knowledge; we want the knowledge to come from our knowledge base.

Externalising knowledge means finding the most relevant snippets in a large data collection using unstructured input as a query; these are search engines. Azure Cognitive Search is the complete retrieval solution, with data ingestion, scaling and support for many written languages. Traditional search methods are often very effective for this, but you can also use representation-based methods and vector databases, which are complementary and support semantic retrieval. Retrieval using semantic similarity uses vector representations such that “close” vectors represent items with a similar meaning; they may encode words, sentences, images, audio and so on, and some map multiple media types into the same space. If vectors are close together, you have found something similar.
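A toy example of the “close vectors mean similar meaning” idea, using cosine similarity; the three-dimensional embeddings here are made up for illustration, whereas real embedding models produce vectors with hundreds or thousands of dimensions:

```csharp
using System;
using System.Linq;

// Cosine similarity: 1.0 means identical direction, 0 means unrelated.
static double Cosine(double[] a, double[] b)
{
    double dot = a.Zip(b, (x, y) => x * y).Sum();
    double magA = Math.Sqrt(a.Sum(x => x * x));
    double magB = Math.Sqrt(b.Sum(x => x * x));
    return dot / (magA * magB);
}

// Pretend embeddings: "cat" and "kitten" point in a similar direction, "invoice" does not.
double[] cat     = { 0.90, 0.10, 0.00 };
double[] kitten  = { 0.85, 0.15, 0.05 };
double[] invoice = { 0.00, 0.20, 0.95 };

Console.WriteLine($"cat vs kitten:  {Cosine(cat, kitten):F3}");  // close to 1 => similar
Console.WriteLine($"cat vs invoice: {Cosine(cat, invoice):F3}"); // close to 0 => unrelated
```

Vector retrieval ranks candidates by exactly this kind of score; hybrid search then combines the ranking with traditional keyword matching.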

Vector-based retrieval - you have to think about encoding, including pre-processing and encoding during ingestion, then vector indexing, where you store and index lots of n-dimensional vectors and quickly retrieve the closest neighbours. Vector search is natively supported in Azure Cognitive Search, with a new vector type for index fields and support for pure vector search or hybrid search; it combines well with the L2 re-ranker, which is powered by models used by Bing that combine keyword and vector-based search, and is enterprise grade.

Azure Cognitive Search

Vector search - Revolutionising indexing and retrieval for large language model powered apps, power your retrieval-augmented generation application with support for images, audio, video, graphs and documents. You can use vector or hybrid search, use Azure OpenAI embeddings, or bring your own, can deeply integrate with Azure, scale with replication and partitioning and use this to build generative AI apps and retrieval plugins.


Expanding potential: challenges to be addressed include accurate translation for a wider range of languages, including Asian and African languages, integrating vector databases and cloud data stores, and using up-to-date information. Azure OpenAI Service plugins will allow you to build powerful AI copilots with secure access to Microsoft services: you can securely access your data in various data stores, vector databases and the web, with data-path access controlled via Azure AD and Managed Identities, and admin roles to choose which plugins to enable. You could retrieve data with Azure Cognitive Search, translate with Azure Translator, ground with recent information via Bing search, or extract structured data from Azure SQL.

Provisioned Throughput

Azure OpenAI Service Provisioned Throughput provides model processing capacity for high-volume production workloads, with predictable performance, including stable latency and throughput for uniform workloads, reserved processing capacity to ensure capacity is available to meet demand, and cost savings for high-throughput workloads versus token-based consumption.

What's new in .NET 8 for Web, frontends, backends, and futures? - Jeremy Likness & Daniel Roth

.NET is a complete solution for building modern web applications, front end and back end, with everything you need right out of the box. Some of the largest services in the world are powered by .NET: Microsoft Teams uses .NET 6 and gained better latency and doubled efficiency, and Bing upgraded its high-performance workflow engine (XAP) to .NET 7 and saw dramatic improvements in performance.

The best of server & client with Blazor in .NET 8

You can have applications that serve content for requests, such as MVC and Razor Pages in ASP.NET with server-side rendering (SSR), and you can use Blazor for client-side rendering (CSR). With Blazor's component model you can create elements that can be used for the front end or the back end. In .NET 8 you can do full server-side rendering with Blazor: a routable component can be rendered in response to a request to it.

With Blazor you can do validation and rendering of elements server side and have elements be navigated to, and with Blazor United you can have server-side rendering where Blazor intercepts the request and updates the DOM while still doing full server-side rendering. If you need to make an API call, this may delay the rendering of the page; you can improve this with streaming rendering, where you show placeholder content and receive updates on the same connection. When streaming rendering is enabled and the data has been obtained, the page is seamlessly updated; this is still rendered on the server, getting pixels on the screen as quickly as possible.

If you want to add more interactivity to a page, such as showing a preview of a picture being uploaded, you can apply component render modes, which can be server based and enable interactive logic. You get islands of server interactivity in an application, which can be freed up when no longer used. You can also change the component render mode to WebAssembly, and for a particular page the dotnet.wasm can be downloaded on a per-page and per-component level and is then cached for later use. You can start users on the server, download dotnet.wasm in the background, and let the application use it by setting the render mode of the component to auto, which decides the render mode at runtime: the first time it loads it will use the server-side mode, but once reloaded it will see the .NET WebAssembly runtime has been downloaded and will use that instead.

Full-stack web UI with Blazor: server-side rendering, enhanced navigation and form handling, streaming rendering, adding client interactivity per component or page, and choosing the component render mode. You can also generate static HTML content with components: use components for templating and static HTML rendering, render a component directly to a string or a stream, and render outside the context of ASP.NET Core; this will be used for static site generation in Blazor in the future. You could use this for any kind of templating wherever you need it.

Microsoft is making improvements in .NET WebAssembly, with partial JIT support via the Jiterpreter giving 20% faster UI rendering and 2x faster JSON deserialisation, support for SIMD and exception handling, multi-threading, enhancements to Hot Reload, applications that are fully CSP compatible, and Webcil packaging to allow Blazor to run in more environments. Additional enhancements include QuickGrid, sections, routing to a named element, monitoring circuit activity, and improved authentication where you can completely customise the identity UI using Blazor.

Backend development with .NET 8

.NET 8 has had a lot of focus on identity and authorisation as well as APIs. APIs support form handling, where you can take the contents of a form and post them to a minimal API, which can also be used for file uploads, along with the problem details service, a standard way of returning errors from a REST endpoint. There is also the addition of an endpoint explorer, so you can test your API endpoints within Visual Studio, plus helpful analysers that tell you when you are making a mistake with a minimal API.

You could build a backend with a minimal API and a frontend with JavaScript. You can expose endpoints with MapGroup, which lets you provide multiple grouped endpoints. You can see the different parts of a route better with improved syntax highlighting, and you get code completion to see what constraints are available; routes are not just strings but can be interacted with programmatically, and you can use your own types as parameters, with the tooling informing you of the need to implement TryParse for those values. You can find endpoints more easily with the endpoint explorer and navigate to the place in the code where they are defined, and you can even generate a request, creating a “.http” file to interact with an API and see what it is doing.
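A minimal sketch of grouped endpoints with MapGroup and a route parameter bound via TryParse; the “todos” endpoints are hypothetical, and this assumes a project using the ASP.NET Core web SDK:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using System;

var app = WebApplication.Create(args);

// MapGroup collects related endpoints under a common prefix.
var todos = app.MapGroup("/todos");
todos.MapGet("/", () => Results.Ok(new[] { "buy milk" }));
todos.MapGet("/{id:int}", (int id) => Results.Ok($"todo {id}")); // {id:int} is a route constraint

// Types with a TryParse method (here DateOnly) can bind directly from the route.
app.MapGet("/on/{date}", (DateOnly date) => Results.Ok(date.ToString("yyyy-MM-dd")));

app.Run();
```

The `{id:int}` constraint and the `DateOnly` parameter are where the improved syntax highlighting, code completion and TryParse analysers described above come into play.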

Server and middleware features include request timeouts, where you can define a policy that issues a cancellation token you can deal with, and short-circuit middleware, where you can return a response directly. HTTP/3 is now supported by default, and named pipes are supported as a means of communication between Windows processes, so you can have interop between applications you have written. There are authentication and authorisation improvements, especially for Single Page Applications, including client-friendly endpoints for identity management, support for tokens in self-host scenarios (without an OIDC server), removal of IdentityServer from the templates (with OIDC as an option), and simplified custom policies with IAuthorizationRequirementData. There is an identity manager, a set of APIs you interact with to manage an identity, along with an identity store, and endpoints have been added so the client UI can interact with these.

What's new in C# 12 and beyond - Mads Torgersen & Dustin Campbell

You can check out the C# documentation, which has a “What's new in C# 12” section that will be filled out as features are announced ahead of the launch in November. Most updates to the language have focused on removing boilerplate.

Using aliases are something that hadn't been fleshed out since C# 1.0: with a using alias you can now use a keyword type without having to spell it out, and you can also use other syntactic constructs, such as tuple syntax with named tuples. You can even alias pointers, but as these must be used in an unsafe context you add the unsafe modifier to the using.
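A small sketch of the "alias any type" idea, assuming C# 12 (the alias names here are made up for illustration):

```csharp
// A using alias can now name a tuple type, including element names...
using Measurement = (string Units, int Distance);
// ...and keyword types directly.
using Count = int;

Count total = 3;
Measurement m = ("km", total);
Console.WriteLine($"{m.Distance} {m.Units}"); // prints "3 km"
```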

There is something that records have that classes don't, which is primary constructors. In a record the parameters automatically show up as properties, but that isn't what you want with a class, where you choose what to expose. So you can now use primary constructors in classes, but they don't auto-generate exposed properties; instead you can use the parameters throughout the instance members of the class. The values are captured after the enclosing initialisation has occurred, remain in scope in those other places, and behave as parameters assigned to members would if you had implemented this yourself. When you chain constructors, you must always get to the primary constructor, and the primary constructor then calls base, so this works as expected. This had to be less of a feature than it is for records, yet it was harder to design than it was for records. Primary constructors would have been added in C# 6, but there was a choice between that and interpolated strings, which were added instead.
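A minimal sketch of a primary constructor on a class, assuming C# 12 (the Distance type is a made-up example):

```csharp
var d = new Distance(3, 4);
Console.WriteLine(d.Magnitude); // prints 5

// The parameters dx and dy are captured, not exposed as properties,
// but they are in scope across all instance members.
public class Distance(double dx, double dy)
{
    public double Magnitude => Math.Sqrt(dx * dx + dy * dy);

    // A chained constructor must route through the primary constructor.
    public Distance() : this(0, 0) { }
}
```

Unlike `public record Distance(double dx, double dy)`, there is no generated `d.dx` property here; the class decides what to expose.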

Creating new lists can be awkward, as arrays have an empty literal but lists don't. Now you can initialise a list with a collection literal, with square brackets around the items, and the target type is inferred; you can use [] for an empty list, and the same literal syntax also works with arrays. Pattern matching already supported this square-bracket syntax, so the way objects are created now mirrors the way they are consumed in a pattern: the team started with the pattern and carried it over into the syntax for creating a collection. You can also use .. when defining lists to merge an existing collection into a collection literal, and it is the same syntax to get values out with a pattern match. Being able to assign collection literals to var is still being figured out. As for a Dictionary being assigned with a collection literal by using a colon between key and value, this is in active development and as yet undecided.
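A short sketch of the symmetry being described, assuming C# 12 collection expressions and list patterns:

```csharp
int[] a = [1, 2, 3];            // collection literal with inferred target type
List<int> none = [];            // empty list literal
int[] merged = [0, .. a, 4];    // .. spreads an existing collection in

Console.WriteLine(string.Join(",", merged)); // prints "0,1,2,3,4"

// The same bracket syntax runs in reverse as a list pattern.
string Describe(int[] xs) => xs switch
{
    [] => "empty",
    [var first, ..] => $"starts with {first}",
};
Console.WriteLine(Describe(merged)); // prints "starts with 0"
```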

It is more common now for features to be added to C# in stages: ship something high value, see how it is used, and then add more to the language later. For example, lambda expressions originally didn't have a natural type, but this was eventually added; if something isn't there yet, it can arrive in the next or a future release. Pattern matching was made as efficient as it possibly could be and can be more efficient than something you would write yourself, and collection literals, when they have a target type, will be the most efficient way of creating that collection, so performance is no reason to avoid either feature.

Extension methods were introduced in C# 3; they are amazing, have been a very big hit, and allow code to be layered differently, so other domains can augment a class without changing the original class. But you can only do this with instance methods, and the team has been trying to extend the idea to other kinds of members. You could declare an explicit extension for an object and then add methods to perform functionality, so it acts like a transparent wrapper for something specific, and you could have different extensions that do different things, such as parse a string as XML or parse it as JSON. You can thinly wrap values with such an “extension” and extend them with extra members. You could also have an implicit extension, where all instances of the type get the extra members, giving you extension everything; you don't need to wrap a new thing around the old thing, because it is the same object underneath. The real power is that you can dot into it, see the IntelliSense, and get discoverability. You could even allow something to implement an interface such as IParsable without the underlying object needing to know it exists. This is the thinking around extensions, and it hasn't been fully decided how to implement it, but it could have the expressive power to do the separation of concerns and layering of software that gives the next breakthrough in software composition.
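To make the explicit/implicit distinction concrete, here is a sketch in the spirit of the syntax being discussed. This feature had not shipped at the time of the talk, so this is pseudocode that will not compile, and every name in it is illustrative:

```csharp
// Explicit extension: a transparent wrapper you opt into for a specific use.
explicit extension XmlString for string
{
    public XDocument ParseXml() => XDocument.Parse(this);
}

// Implicit extension: every string gets the extra member, same object underneath.
implicit extension JsonString for string
{
    public bool LooksLikeJson => this.TrimStart().StartsWith("{");
}
```

With an explicit extension you would view a particular string *as* an XmlString; with an implicit one, the member simply shows up in IntelliSense on any string.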