Qudos .NET Meetup Newcastle - March 2024

AI in the Modern Enterprise - Dan Oliver

Dan works for EY, where he runs the Azure and AI practice and spends his time speaking to finance directors and CFOs at top companies about using AI with their supply chains. EY now comprises a number of companies, including a tech branch. This session is about getting AI working in big companies, but you can take the ideas into a company of any size. It will cover machine learning, generative AI, document intelligence and AI adoption challenges, as most AI comes with a data science conversation. Microsoft doesn't own OpenAI, but it has a large shareholding in the company and offers OpenAI's models as a service in Azure, alongside Microsoft's own services such as Copilot.

People in the North East are used to overcast skies and being out when it is freezing cold, but because of climate change the weather has improved and rental companies are seeing more demand for convertible cars. Demand is spiky, and companies don't know if they should get into this market, as buying convertibles is expensive. Their challenge was what stock levels and supply levels they should hold and how to allocate those resources; these are impulse rentals, so they needed to figure out the decisions behind them. Customers only rent convertibles on warm days, and the warmer it gets the more they rent, but not everyone rents one; when there was no rain and it wasn't a working day, people would rent a convertible.

There are more features and labels behind renting convertibles than just the temperature, and if it is too low or too high demand drops off, so how do you take something this complex and turn those factors into a model that predicts demand? Use machine learning: take the data set, look at historical data such as whether it was a working day and the number of convertibles rented, then use the Met Office API to get the maximum temperature and humidity, and pull all of this into the model. With Azure Machine Learning you can create a model from the data you have; rather than writing the code yourself, you let the service build the model, come to its own conclusions, find the most optimal solution, and then test how accurate it is. This helps answer the question of how many cars they should have in stock to meet demand.
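As a rough sketch of the regression idea described here, the snippet below fits daily rentals against the weather and calendar features with ordinary least squares. All of the numbers are invented for illustration, and the real model was produced by Azure Machine Learning rather than hand-written:

```python
import numpy as np

# Invented historical rows: max temperature (°C), humidity (%),
# working day (0/1), rainfall (mm) -- the features mentioned in the talk.
X = np.array([
    [12, 80, 1, 5.0],
    [18, 70, 1, 0.0],
    [22, 60, 0, 0.0],
    [25, 55, 0, 0.0],
    [28, 50, 0, 0.0],
    [15, 75, 1, 2.0],
])
y = np.array([0, 2, 9, 12, 15, 1])  # convertibles rented that day

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(row):
    """Predicted rentals for one day's features."""
    return float(np.append(1.0, row) @ coef)

# A warm, dry, non-working day should predict far more rentals
# than a cold, wet working day.
print(f"warm dry day: {predict([24, 58, 0, 0.0]):.1f} rentals")
print(f"cold wet day: {predict([12, 80, 1, 5.0]):.1f} rentals")
```

The point is only the shape of the problem: features in, a number out, and historical labels to learn from.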

The first thing you need for machine learning is the resources and runtime, and Microsoft Azure makes this really easy: there is a section for AI and Machine Learning services, some third-party and many created by Microsoft, including the Azure Machine Learning service. You don't need to be a guru on Azure infrastructure to set this up. You pick the subscription that will be billed for what runs in the cloud, then deploy into a resource group, which is a unit in Azure for keeping related resources logically close together. The service is a platform-as-a-service offering and by default is exposed to the internet, so you need to give it a unique name, then choose where to deploy from a list of thirty to forty regions, with at least two in the United Kingdom. Choose a region close to your users to keep network latency down, but remember that whatever data you process or store is subject to the laws of that country. There are a couple of supporting components: a storage account for storing data, and a key vault, a highly secure location for secrets used to authenticate users or encrypt data. You can also add Application Insights, which is logging on steroids and lets you see what is going wrong and where. The deployment also includes the workspace where you build the model and a container registry to host the model using a microservices architecture, and you can choose the SKU of infrastructure you need.
When creating the workspace you can make it, for example, public or private with outbound internet access. For encryption you can trust Microsoft with Microsoft-managed keys, or if you don't trust them, use a customer-managed key that you create and manage yourself. You can then choose the credential type for access, tag the resource accordingly, and create it to build the resource.

Once the resource is created you have a Machine Learning workspace. You can check the networking configuration and change it to private access if needed, see the logs that are populated shortly after the resources have been provisioned and are in use, and grant other developers access to the workspace. You can then open Machine Learning studio to build a solution; this is a nice, neat user interface and a brilliant repository of pre-built and pre-trained models. There is also the ability to create and build a model to deploy onto your infrastructure, and software developers can work against the workspace from their developer tools if needed. There are different project types: with Notebooks, if you feel like coding, you can write Python or R and run it like a function, or with Automated ML you give it the data and let the machine learning work out how the model should be built. You can create a new Automated ML job and train a model to predict how many convertibles may be rented on a given day by looking at the data. You can pick from different model types: a classification model if you know your data falls into classes, a computer vision model trained on labelled pictures to recognise what is in them, or a regression model. With a regression model you have a data set and need to tell it what kind of data it is, tabular for something like a semi-structured database or a CSV or Excel file, and you can point it at locations in Azure or on the web, or just upload a CSV file with the information.
Once the data has been pulled in it will figure out the patterns in the data, but if there are any problems you can modify the description to match the pattern of the data you have got. It will also identify the data types and then bring the data into the storage account.
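A toy illustration of the kind of data-type detection described here; the studio's real inference is far more capable, and the column names, sample values and integer/string rule below are all invented:

```python
import csv
import io

# A small stand-in for the uploaded CSV (values are made up).
raw = """temperature,humidity,working_day,rentals
22,60,0,9
25,55,0,12
15,75,1,1
"""

def infer_schema(text):
    """Read the CSV and guess a type for each column."""
    rows = list(csv.DictReader(io.StringIO(text)))
    schema = {}
    for col in rows[0]:
        values = [r[col] for r in rows]
        schema[col] = "integer" if all(v.lstrip("-").isdigit() for v in values) else "string"
    return rows, schema

rows, schema = infer_schema(raw)
print(schema)  # every column in this sample parses as an integer
```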

You can then create a training job based on the imported data; you need to tell it which column is the target to predict from the input information. When you create a model you can have it iteratively build different models and pick the best one. You can use 90% of the data to train the model and the last 10% to validate it, which tells you how accurate it is, because you know what values should come out. Training needs really intense compute resources for a very short period of time, so you could spin up a really powerful and expensive instance, get the data out of it and shut it down; an instance that costs £75,000 per month works out at around £100 if you only use it for an hour, though generally you use something with just the resources needed to train the model. Once the resources have been deployed, the data can be analysed with different models to see which is the most effective. You do need to provide clean data to the model and make sure that data is consistent.
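The 90/10 split and accuracy check can be sketched like this; the data is synthetic and the model is plain least squares, standing in for whatever Automated ML would actually select:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic history: rentals rise with temperature, plus noise (invented).
temps = rng.uniform(5, 30, size=100)
rentals = np.clip(temps - 14 + rng.normal(0, 1.5, size=100), 0, None)

# Train on 90% of the rows, hold back 10% for validation.
idx = rng.permutation(100)
train, valid = idx[:90], idx[90:]

A = np.column_stack([np.ones(train.size), temps[train]])
coef, *_ = np.linalg.lstsq(A, rentals[train], rcond=None)

# Score the held-back rows, where we already know the right answer.
pred = coef[0] + coef[1] * temps[valid]
mae = np.mean(np.abs(pred - rentals[valid]))
print(f"validation mean absolute error: {mae:.2f} rentals")
```

Because the validation rows were never seen during training, the error is an honest estimate of how the model would do on a new day's forecast.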

To work with Azure ML models through APIs, you authenticate, call the deployed model on the Azure Machine Learning resource that runs it, and get JSON back from the API, which you can consume using SDKs in any language such as C#; you can take this kind of prediction model into any business. You can also automate ingesting new data to retrain the model, compare the results, and if a more accurate model emerges, replace the old one with it. You don't just build it and walk away; you keep looking at it. Oversight and transparency are critical, your organisation's culture has to change when using AI, and you need to supervise it and be responsible for what it is doing.
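A sketch of what calling such an endpoint looks like, here with Python's standard library for brevity (the talk used C# SDKs, but the pattern is the same). The URI, key and body schema below are placeholders; the real values come from the model's deployment page in Azure Machine Learning studio:

```python
import json
from urllib import request

# Placeholders -- the real scoring URI and key come from the deployed endpoint.
SCORING_URI = "https://example.azureml.example/score"
API_KEY = "<endpoint-key>"

def build_scoring_request(rows):
    """Build an authenticated JSON request for the deployed model."""
    body = json.dumps({"input_data": rows}).encode("utf-8")
    return request.Request(
        SCORING_URI,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )

req = build_scoring_request([[24, 58, 0, 0.0]])
print(req.get_full_url())
# Actually sending it would be: json.loads(request.urlopen(req).read())
```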

Microsoft Copilot allows them to keep all their data in the same place. You can deploy OpenAI services in Azure, but Microsoft also offers Bing Copilot, which you can sign in to and use as a generative AI. GitHub Copilot can perform performance checks and improve CI/CD pipelines, look at code and estimate what CPU resources it may use, look at analytics to see what resources you might need, and even help you write code; you can also create Kubernetes clusters using a Copilot, for example. What Bing does is a lot closer to a cognitive search engine, and to what Bill Gates originally intended when it started. You can use Copilot for Microsoft 365 with a Microsoft account and get information from Teams, Word, Excel, PowerPoint and Outlook; you get access not only to the public internet but also to your company data, and with a company account the Copilot has access to everything you have access to. The Azure OpenAI Service supports the various OpenAI models such as DALL-E, GPT-3.5 / 4 and Embeddings, and you can use the Graph API to connect to enterprise data or connect to cloud applications with third-party APIs.

Barriers to AI adoption include lack of vision, the skills gap, the culture for AI success, legal and ethical worries, compute power, access to data, privacy and security, and the ability to execute change. Most people think they don't need AI, when paying just a few pounds a month could save hundreds of pounds per month. The hardest thing is the culture for AI success: leaders have to be happy to see their staff being less busy or needing to work less, and look at their productivity, not their presence. Consultancy and leadership can demonstrate the benefits and get people buying into the vision. Governance and a Centre of Excellence are how you make things happen and get access to the data; if you operate a tool and make a decision, you are responsible for it. Policy says you must do this or must not do that, but governance says these tools exist, with guard rails to make sure you don't do something stupid and that people don't make mistakes. Cloud computing helps with the resources needed to make AI work and there are a lot of options out there, but it is all about understanding productivity.

Azure IoT with .NET nanoFramework - Peter Shaw

Peter Shaw has been a .NET developer for many years and is part of LIDNUG. The aim is to get the .NET nanoFramework onto an IoT device (a stock ESP32), then connect it to Azure IoT Hub, where you can take advantage of Application Insights and so on. What is the .NET nanoFramework? It is your favourite language, C#, but for tiny things. Meadow IoT was the first attempt at running C# on an IoT device, but you had to use their boards with the required amount of memory and processor. The nanoFramework is built from the open-source, full-blown C# code base in such a way that it compiles for and runs on tiny IoT and embedded devices. It won't run on everything out there, such as Arduino, but it will run on a 32-bit ARM CPU with at least 64K of RAM and 64K of Flash, and anything from the past couple of years will run it just fine.

To get the .NET nanoFramework, add the nanoFramework extension to your Visual Studio setup and use the "nanoff" flashing tool to put the appropriate firmware on your device. Pretty much all ESP32 devices are supported, as are the vast majority of STM32 boards, although the blue-pill version doesn't have enough RAM, with only 20KB. WiFi-enabled devices are recommended, and for connectivity the System.Net namespace is used with the nanoFramework prefix. If a board is not supported, it is fairly easy to create a support package based on an existing board; most if not all official STM Discovery boards are fully supported, and Raspberry Pi Pico support is currently in testing and will be released soon, so you will be able to program in C# to provide output on an HDMI-enabled TV.

One tool to rule them all: the flashing tool "nanoff" is a dotnet-compatible tool that can be installed and updated using the dotnet CLI, and it has many options you can change if needed. To flash an ESP32 you can use "nanoff --platform esp32 --serialport COMxx --update", which will get the latest platform firmware for the ESP32 onto the device. If you need any assistance there is a Discord group, or you can go to github.com/nanoframework; the documentation is absolutely fantastic, with articles that will answer your questions and sets of pages for different families of devices.

What is Azure IoT Hub? It is MQTT on steroids. MQTT is pub/sub for small devices; you can use it to transfer lightweight data, and it was designed to transfer command sets between devices. It uses a concept called topics: on the server you might create a topic, which can hold any data you want, from simple values to JSON to Base64-encoded data. When a client connects and queries the topic it gets that data; clients listen on a topic, and when the topic is updated, each listener gets an event saying there is a new value. Azure IoT Hub is basically an MQTT instance managed by Microsoft Azure, but it is not "classic" MQTT: you don't get to name the topics you subscribe or push to, and Azure uses its own authentication system, with a SAS key, for the device to communicate with the IoT Hub.
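A toy in-memory sketch of the topic model just described; real MQTT is a network protocol with a broker, and all the names here are invented for illustration:

```python
class Broker:
    """Minimal stand-in for an MQTT broker's topic behaviour."""

    def __init__(self):
        self.topics = {}     # topic name -> latest retained value
        self.listeners = {}  # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        self.listeners.setdefault(topic, []).append(callback)
        if topic in self.topics:          # deliver the retained value
            callback(topic, self.topics[topic])

    def publish(self, topic, payload):
        self.topics[topic] = payload
        for cb in self.listeners.get(topic, []):
            cb(topic, payload)            # every listener gets the update

broker = Broker()
received = []
broker.subscribe("garden/temperature", lambda t, v: received.append(v))
broker.publish("garden/temperature", {"celsius": 21.5})
print(received)  # [{'celsius': 21.5}]
```

Note the two behaviours the paragraph describes: a listener gets an event whenever the topic changes, and a client that queries (here, subscribes to) an existing topic gets its current value.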

Azure IoT Hub also has specialist message topics and provides first-class interaction between your device and the cloud. Devices use the SDK to hook up to the various events and topics, which means all you need to write is very simple event handlers. The MQTT connection lets you route your device's telemetry and other message streams directly to most elements of the Azure cloud platform, so you can send to blob storage, a service bus or even an AI system. Azure IoT Hub can support hundreds if not thousands of devices simultaneously, and far larger deployments of up to millions of devices with sub-second processing when multiple devices transmit at the same time; it provides several channels for two-way messaging that are designed to be used in specific ways, and you need to make sure that what you write can also scale with the number of devices.

Azure IoT Hub supports several channels for two-way messaging, each designed for different scenarios. Telemetry is the most common device-to-cloud channel for any text-based content you want to send; this is the channel where your device sends regular readings to Azure to be processed, though it doesn't have to be telemetry, as anything you need to send to your cloud app can go across this channel. Cloud-to-device messaging is the opposite of the telemetry channel: it is the primary route for your back-end application to send messages to an individual device, and the payload can be anything from JSON to Base64-encoded binary. Both of these channels can be fire and forget; there does not have to be a response if you don't want one.
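A small sketch of the two channels; the payload fields are invented, and with the real device SDKs the telemetry JSON would be wrapped in a message object and sent over the hub connection rather than printed:

```python
import json

def make_telemetry(device_id, celsius):
    """Device-to-cloud: a regular text-based reading (fields invented)."""
    return json.dumps({"deviceId": device_id, "temperature": celsius})

def handle_cloud_message(raw):
    """Cloud-to-device: fire and forget, so act on it and reply to no one."""
    command = json.loads(raw)
    return f"sample interval set to {command['intervalSeconds']}s"

print(make_telemetry("greenhouse-01", 21.5))
print(handle_cloud_message('{"intervalSeconds": 30}'))
```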

Azure IoT Hub has twin properties, which are like configuration values and come in two types, desired and reported. Desired properties configure the device with runtime parameters, similar to the .NET application configuration system, and can be sent at device start-up to configure the device when it connects to the Azure IoT Hub. Reported properties are things the device would like to report to the hub that wouldn't go through the normal channels, such as radio signal strength and device temperature. Some deployments use a reported event to notify the hub that the desired properties were received and what the values were, and you could even use machine learning to predict when devices are about to fail based on those parameters.
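A toy sketch of the desired/reported split; the property names are invented, and the device SDKs deliver desired properties as a dictionary-like patch much like this:

```python
# Device defaults, overridden by desired twin properties at start-up.
DEFAULTS = {"sampleIntervalSeconds": 60, "ledEnabled": True}

def apply_desired(config, desired):
    """Overlay desired properties on the config and build a reported reply."""
    # Hub metadata keys such as "$version" are not configuration values.
    merged = {**config, **{k: v for k, v in desired.items() if not k.startswith("$")}}
    # Report back what was applied, plus device-side facts the hub can't
    # see for itself, such as signal strength.
    reported = {**merged, "rssiDbm": -67}
    return merged, reported

config, reported = apply_desired(DEFAULTS, {"sampleIntervalSeconds": 15, "$version": 4})
print(config)    # {'sampleIntervalSeconds': 15, 'ledEnabled': True}
print(reported)  # config plus rssiDbm: -67
```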

Azure IoT Hub supports direct commands, a command-response structure that allows you to build fully-fledged IPC-based communication with your device. Direct commands are connected directly to methods in your device firmware; when a command is received, the method runs and the result it produces is returned. Unlike the other types of message, direct commands must be responded to, and the methods must return data, so you can't have void methods. The .NET nanoFramework SDK handles this for you so you don't have to worry about it, but other implementations require a lot of plumbing, which makes them more complicated to implement. You can install the .NET nanoFramework Visual Studio extension and then create a Blank Application (.NET nanoFramework); you can build the whole application in one file if you want, but you can also have class libraries and unit test projects. In Visual Studio you also get the Device Explorer, which shows any connected device, though depending on the device's connectivity you may need a driver, and typically a device has to show up as a serial port to appear. Before running the project you need to select a device in the Device Explorer, as it won't tell you that you don't have one selected. When installing firmware with nanoff you don't have to worry about killing your device by installing the wrong firmware; you can simply install the correct one to get it working. If you install the latest firmware you will need to update your packages to the latest versions, as the firmware is tied to NuGet package versions. When you connect to WiFi you can use DHCP and at the same time do a time update so that the date and time of the device are synchronised correctly; a lot of the messaging relies on the time, so you need to make sure it is correct for those requests.
There is a 400,000-message quota on Azure IoT Hub; once you go over it, you will be charged per message after that point. You can route messages on to other systems such as Service Bus and even external services where needed. When calling direct commands, any exceptions won't crash the device itself but will be returned as a message, so they won't put the device into a stopped state.
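The direct-command pattern can be sketched as a dispatch table; the command names and payloads are invented, and in C# the nanoFramework SDK wires command names to firmware methods for you, including returning exceptions as messages rather than crashing the device:

```python
# Hypothetical commands: every handler must return a result (no void methods).
HANDLERS = {
    "GetTemperature": lambda payload: {"celsius": 21.5},
    "Reboot": lambda payload: {"rebootInSeconds": payload.get("delay", 0)},
}

def invoke(command, payload):
    """Run a direct command and always produce a response."""
    handler = HANDLERS.get(command)
    if handler is None:
        return 404, {"error": f"unknown command {command}"}
    try:
        return 200, handler(payload)
    except Exception as exc:  # returned to the hub; the device keeps running
        return 500, {"error": str(exc)}

print(invoke("GetTemperature", {}))  # (200, {'celsius': 21.5})
print(invoke("SelfDestruct", {}))    # (404, ...)
```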