Qudos .NET Meetup Newcastle - June 2025

Power up your AI projects: Azure Integration Services - Mike Stephenson

Mike has been working with Azure for ten years and was at an event recently where it was said that 70% of AI projects never make it into production; this is a combination of not knowing what we are trying to do and not knowing how to turn it into a real-world scenario. Microsoft talks a lot about the relationship between the data platform and AI, but has missed a trick by not talking about integrating AI into processes. Mike also talks about cost management and saving money, and works with Turbo360, who help manage Azure and also do a lot of things on YouTube, such as how to decide which technology to use to solve a problem, for example Data Factory vs Logic Apps, with deep dives into what those questions are about. The talk covers app modernisation, agentic AI workflows and MCP, and aims to demystify agents. There has been a journey from ChatGPT in 2022, followed by Copilots and Azure OpenAI in 2023, and more since then. A common scenario in the integration space involves a third party sending documents, where Document Intelligence can look at those documents using models that specialise in certain document types and extract data from them.
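
As a rough illustration of that last scenario (not code from the talk), this sketch uses the Azure.AI.FormRecognizer SDK and its prebuilt invoice model; the resource endpoint, key variable and document URL are placeholders:

```csharp
// Hedged sketch: analyse an invoice a third party has sent, using a model that
// specialises in invoices, and print the extracted fields.
using Azure;
using Azure.AI.FormRecognizer.DocumentAnalysis;

var client = new DocumentAnalysisClient(
    new Uri("https://my-docintel-resource.cognitiveservices.azure.com"),   // hypothetical resource
    new AzureKeyCredential(Environment.GetEnvironmentVariable("DOCINTEL_KEY")!));

var operation = await client.AnalyzeDocumentFromUriAsync(
    WaitUntil.Completed, "prebuilt-invoice", new Uri("https://example.com/invoice-123.pdf"));

foreach (var document in operation.Value.Documents)
{
    foreach (var (name, field) in document.Fields)
    {
        Console.WriteLine($"{name}: {field.Content}"); // extracted data, e.g. InvoiceTotal
    }
}
```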

Azure Integration Services can be used to AI-enable existing non-AI applications, for example AI-enabling Minecraft by creating a bot that builds things using ComputerCraft. There is a builder (a turtle) that has an inventory it can use, can talk to other things, and can be given commands from a tablet in the game, for example to build a tree house by calling a model via an API. You send a message to the bot, which calls out via Azure Service Management and a Function App to Azure OpenAI to build what you've asked for; the response is code that controls the turtle to create the object. The model produced nicer, more reusable code but not the nicest object; DeepSeek, for example, built a better object but produced the least understandable code.

Behind the scenes is a Logic App, which is what gets called; the key step is a chat completion HTTP call where the system role carries the description of what you want the model to do and the user prompt asks it to perform the action, which is quite simple and just uses the right API call for the use case. There is an art to writing a really good system prompt, such as telling the model not to describe what it is doing but just return the raw code, and stripping the three backticks if they are returned; you could equally tell a model to return C# or JSON data.
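
As a rough sketch of that key step (not Mike's actual Logic App, and with placeholder resource, deployment and api-version values), a chat completion call against the Azure OpenAI REST API with that kind of system prompt might look like this in C#:

```csharp
// Minimal sketch of the chat completion call; the key is read from an environment variable.
using System.Net.Http;
using System.Net.Http.Json;
using System.Text.Json;

var endpoint = "https://my-openai-resource.openai.azure.com";   // hypothetical resource
var deployment = "gpt-4o";                                      // hypothetical deployment
var apiVersion = "2024-06-01";

using var http = new HttpClient();
http.DefaultRequestHeaders.Add("api-key", Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY"));

var body = new
{
    messages = new object[]
    {
        new { role = "system", content =
            "You write ComputerCraft Lua for a Minecraft turtle. " +
            "Do not describe what you are doing; return only the raw code with no code fences." },
        new { role = "user", content = "Build a tree house next to the turtle." }
    }
};

var response = await http.PostAsJsonAsync(
    $"{endpoint}/openai/deployments/{deployment}/chat/completions?api-version={apiVersion}", body);
response.EnsureSuccessStatusCode();

// Pull the generated code out of the first choice in the response.
using var json = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
var code = json.RootElement.GetProperty("choices")[0]
    .GetProperty("message").GetProperty("content").GetString();
Console.WriteLine(code);
```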

An agent is a construct around an LLM, with inputs and system messages, plus tools, which are how you plug things into the model and make it more powerful, for example Fabric for knowledge or Logic Apps for actions; you can build actions the model can execute, which takes it beyond telling you something to dynamically executing things. In Azure AI Foundry you can plug in knowledge or actions such as a Function or a Logic App, and if you move things into tools it allows you to have more context for the user, which is better.

Semantic Kernel is a framework for writing a custom application that allows you to call into a model and ask questions, and you can register your own classes as plugins; the call goes from the user to the model, with the app making the plugins available. Semantic Kernel makes it easy to implement a plugin, which is a .NET class decorated with some attributes; the model will create a call to this functionality when it needs something from the class, allowing the model to interact with it. Function Apps can register skills with a model, so when a prompt hits the model it calls back to the Function App for assistance, which is the same idea as a plugin; this lets a model dynamically execute code in the Function App, effectively creating a microservice on top of the model.
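
A minimal sketch of what a plugin looks like in code, assuming the Microsoft.SemanticKernel NuGet package; the plugin, deployment and endpoint names here are invented for illustration:

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;

// Register the plugin and let the model call it when it needs order data.
var builder = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(
        deploymentName: "gpt-4o",                                   // hypothetical deployment
        endpoint: "https://my-openai-resource.openai.azure.com",    // hypothetical resource
        apiKey: Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!);
builder.Plugins.AddFromType<OrderLookupPlugin>();
var kernel = builder.Build();

var settings = new PromptExecutionSettings { FunctionChoiceBehavior = FunctionChoiceBehavior.Auto() };
var result = await kernel.InvokePromptAsync("What is the status of order 42?",
    new KernelArguments(settings));
Console.WriteLine(result);

// The plugin itself is just a .NET class decorated with attributes.
public class OrderLookupPlugin
{
    [KernelFunction, Description("Gets the current status of an order by its id.")]
    public string GetOrderStatus([Description("The order id")] string orderId)
        => $"Order {orderId} is out for delivery."; // a real plugin would call a database or API
}
```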

Build agents with Logic Apps to create AI agent loops: a Logic App normally has a deterministic workflow with predetermined steps and an expected set of things that will happen, but in the modern world you also need non-deterministic workflows, where the logic is dynamic and you may not know the steps needed to produce the output. Logic Apps Standard is where AI agent loops are supported; you can do something such as evaluate a customer order that may have different ways of being processed, and call out to an agent where the model dynamically executes the tools available, for example sending a notification if it is a high-value order, or emailing the customer if needed. Make sure you pick the right AI solution for the right AI problem; you don't need to use AI for everything, as deterministic workflows are fine without it and agents are better suited to non-deterministic problems. AI Agent vs Agentic AI: Agentic AI is more goal oriented, such as onboarding a customer, with continuous learning, and manages dynamic complexity.

What Logic Apps bring to the equation includes run history, where you can see what the AI did and then tune things, and the large number of connectors available; a connector performs a specific action that can be encapsulated and reused. There is durability in the workflow to try again later, manual retry is available, and there is parallel processing at scale so low- or high-volume processing is possible, along with security, governance and observability. When should you use an agent and when not? Agents are ideal for unstructured data, conversational use cases, decisions that require a human in the loop, and overly complex use cases; workflows suit industry-standard, clear and repeatable processing, high throughput with low latency, and larger payloads.

MCP - what's next for AI? MCP is the Model Context Protocol, which makes it easier for models to communicate (like an API standard for models); you can build an MCP server that a model can call, and you can build these servers in Functions and Logic Apps, in preview now or soon. An MCP host is an application that hosts a model, such as Copilot or Visual Studio Code; it is the application that interacts with an agent and calls MCP servers when they are needed. You can create an MCP server in Azure where API Management acts as a proxy to an existing API, so if you can build APIs you can then put them into an agent using API Management, which could enable people to automate processes from chat, or you could integrate the functionality into Copilot.
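
As a very rough sketch only, the preview ModelContextProtocol C# SDK exposes tools through attributes and a hosted server; since the SDK is still in preview the exact names below may shift, and the order tool is just an invented example:

```csharp
// Rough sketch based on the preview ModelContextProtocol C# SDK; attribute and
// extension method names may change while the SDK is in preview.
using System.ComponentModel;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using ModelContextProtocol.Server;

var builder = Host.CreateApplicationBuilder(args);
builder.Services
    .AddMcpServer()                 // register the MCP server
    .WithStdioServerTransport()     // talk to the host (e.g. VS Code) over stdio
    .WithToolsFromAssembly();       // expose every [McpServerTool] in this assembly
await builder.Build().RunAsync();

// A made-up tool the model can call once the server is wired into an MCP host.
[McpServerToolType]
public static class OrderTools
{
    [McpServerTool, Description("Gets the status of an order by its id.")]
    public static string GetOrderStatus(string orderId)
        => $"Order {orderId} is out for delivery."; // a real tool would call your API here
}
```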

Failure is not an option: Durable Execution + Dapr - Marc Duiker

Failure is inevitable; things are failing every day and sometimes they fail big time, but it is up to us to make sure our users aren't impacted by these failures and aren't even aware of them. Marc works for Diagrid as a developer using Dapr, creates educational content, and is a Dapr Community Manager, creating content and inviting users to talk about how Dapr is used in production.

Lessons from a Decade of IT Failures is a blog post from ten years ago that explores the many ways in which IT failures have squandered money, wasted time and generally disrupted people's lives, such as a rogue algorithm that led to a $440,000,000 loss in 45 minutes back in 2012. We are typically building distributed applications with microservices which need to communicate, and sometimes these are offline.

The fallacies of distributed computing include the network is reliable, latency is zero, bandwidth is infinite, and more. It is also important to understand that there are different types of failure: transient failures are temporary and usually resolve themselves without human intervention, while permanent failures are longer lasting and require human intervention to resolve.
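
A small illustrative sketch (not from the talk) of how a transient failure is typically handled, retrying with exponential backoff and treating repeated failure as permanent; the URL is a placeholder:

```csharp
using System.Net.Http;

var http = new HttpClient();
var maxAttempts = 4;

for (var attempt = 1; attempt <= maxAttempts; attempt++)
{
    try
    {
        var response = await http.GetAsync("https://example.com/api/orders"); // placeholder URL
        response.EnsureSuccessStatusCode();
        Console.WriteLine(await response.Content.ReadAsStringAsync());
        break; // success, stop retrying
    }
    catch (HttpRequestException) when (attempt < maxAttempts)
    {
        // Transient failure: wait longer each time (1s, 2s, 4s) before retrying.
        await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt - 1)));
    }
    // The final failed attempt rethrows: that looks permanent and needs human intervention.
}
```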

Dapr is a distributed application runtime for building secure and reliable microservices. Dapr is not part of your solution; it runs next to your solution and you make API calls to it. It is like a developer toolbelt with lots of capabilities you can take advantage of, such as pub/sub messaging or state management, so you can just try the one thing you need if you want to. Dapr applications typically run on Kubernetes, and you can use any language that supports HTTP calls to make use of Dapr.
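
For example, a minimal sketch of calling the sidecar's state API over plain HTTP, assuming the sidecar listens on its default port 3500 and a state store component named "statestore" exists:

```csharp
using System.Net.Http;
using System.Net.Http.Json;

var daprHttp = new HttpClient { BaseAddress = new Uri("http://localhost:3500") };

// Save state: the body is an array of key/value pairs.
await daprHttp.PostAsJsonAsync("/v1.0/state/statestore",
    new[] { new { key = "order-42", value = new { Status = "Created" } } });

// Read it back by key.
var saved = await daprHttp.GetStringAsync("/v1.0/state/statestore/order-42");
Console.WriteLine(saved); // {"Status":"Created"}
```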

What problem are we solving? We know systems fail, so we need to recover from failure and limit the impact of failure. Durable execution means a guarantee that code runs to completion even if the process running the code terminates, by ensuring that another process is created and the code is executed to completion successfully. Workflow systems are used to automate processes, and they implement the durable execution concept. When you schedule a workflow, it is stored along with its inputs and outputs, and it can then be replayed using the stored data to make sure that everything has been executed, so there is a lot of I/O for the workflow part of your application.

Dapr Workflow enables durable execution of potentially long-running business processes, works seamlessly with the other Dapr APIs, lets you author workflows in languages such as C#, and supports patterns such as chaining, fan-in and fan-out. A workflow can start with a message, make another call, wait for an incoming event, execute tasks (including many in parallel) and store results. Dapr uses workflow as code, which is easily understood by developers, can be tested and is part of source control, but there is no visual representation of the workflow.
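
A minimal sketch of a workflow authored in C# with the Dapr.Workflow package, chaining two activities; the workflow, activity and record names are invented for illustration:

```csharp
using Dapr.Workflow;

public record Order(string Id, double Amount);

// The workflow chains two activities: reserve stock, then charge payment.
public class OrderWorkflow : Workflow<Order, string>
{
    public override async Task<string> RunAsync(WorkflowContext context, Order order)
    {
        var reserved = await context.CallActivityAsync<bool>(nameof(ReserveStockActivity), order);
        if (!reserved)
        {
            return $"Order {order.Id} rejected: no stock";
        }

        await context.CallActivityAsync<object?>(nameof(ChargePaymentActivity), order);
        return $"Order {order.Id} completed";
    }
}

public class ReserveStockActivity : WorkflowActivity<Order, bool>
{
    public override Task<bool> RunAsync(WorkflowActivityContext context, Order order)
        => Task.FromResult(true); // a real activity would check inventory here
}

public class ChargePaymentActivity : WorkflowActivity<Order, object?>
{
    public override Task<object?> RunAsync(WorkflowActivityContext context, Order order)
        => Task.FromResult<object?>(null); // a real activity would call a payment service
}

// In Program.cs the workflow and activities are registered with the Dapr workflow runtime:
// builder.Services.AddDaprWorkflow(options =>
// {
//     options.RegisterWorkflow<OrderWorkflow>();
//     options.RegisterActivity<ReserveStockActivity>();
//     options.RegisterActivity<ChargePaymentActivity>();
// });
```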

Workflow patterns include task chaining, where the order of execution is important, and fan-out/fan-in, where there is no dependency between activities and, as soon as all activities are done, you can aggregate their output. The monitor pattern is for when a recurring activity is executed; you can use this to periodically check the status of a system and perform an action based on that status. External system interaction is where a workflow can pause until an external event is triggered, for example by a person who needs to approve a step in a business process, such as approving a new laptop because the screen has a crack in it. Child workflows are where workflows can call other workflows, which allows the creation of more complex workflows or the composition of smaller, reusable workflows that can be individually tested.
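
A sketch of the fan-out/fan-in and external system interaction patterns inside a single workflow, again assuming the Dapr.Workflow package with invented names:

```csharp
using System.Linq;
using Dapr.Workflow;

public class BatchApprovalWorkflow : Workflow<string[], string>
{
    public override async Task<string> RunAsync(WorkflowContext context, string[] items)
    {
        // Fan-out: schedule one activity per item, with no dependency between them...
        var tasks = items
            .Select(item => context.CallActivityAsync<int>(nameof(ProcessItemActivity), item))
            .ToList();

        // ...fan-in: wait for them all and aggregate the results.
        var results = await Task.WhenAll(tasks);
        var total = results.Sum();

        // External system interaction: pause until a person raises an "approval" event.
        var approved = await context.WaitForExternalEventAsync<bool>("approval");
        return approved ? $"Batch total {total} approved" : "Batch rejected";
    }
}

public class ProcessItemActivity : WorkflowActivity<string, int>
{
    public override Task<int> RunAsync(WorkflowActivityContext context, string item)
        => Task.FromResult(item.Length); // placeholder work
}
```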

The Dapr workflow engine works with the Dapr sidecar, which sits next to the application; a gRPC stream is set up from the workflow app to schedule workflow activities and return results. The Dapr workflow engine is built on Dapr Actors, with the workflow engine, an orchestrator actor and an activity actor. You could have a workflow that works with a shipping application, with activities that perform actions based on conditions. Dapr applications have a unique identifier, along with identifiers for parts of the workflow, and every query against Dapr uses an instance id for a particular workflow execution. For workflow management, Dapr has a workflow management API where you can start a workflow instance, get the instance state, and pause, resume and terminate a workflow instance. This is done using an API with an instance id; there is no batch way of doing this at the moment, so you will need to capture these instance ids to perform these actions.
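
A sketch of driving those management operations from .NET using DaprWorkflowClient from the Dapr.Workflow package (the pause and resume operations are exposed as suspend and resume on this client); it reuses the invented OrderWorkflow from the earlier sketch:

```csharp
using Dapr.Workflow;

public class WorkflowAdmin
{
    private readonly DaprWorkflowClient _client;
    public WorkflowAdmin(DaprWorkflowClient client) => _client = client;

    public async Task RunAsync()
    {
        // Start an instance and remember its id; every later call is keyed on it.
        var instanceId = await _client.ScheduleNewWorkflowAsync(
            nameof(OrderWorkflow), null, new Order("order-42", 99.95));

        // Query its state.
        var state = await _client.GetWorkflowStateAsync(instanceId);
        Console.WriteLine(state.RuntimeStatus);

        // Pause, resume or terminate the instance.
        await _client.SuspendWorkflowAsync(instanceId, "maintenance window");
        await _client.ResumeWorkflowAsync(instanceId, "maintenance finished");
        await _client.TerminateWorkflowAsync(instanceId);
    }
}
```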

Workflow challenges and tips: workflows should be deterministic, which does impose restrictions, so that running with the same input produces the same output; don't use non-deterministic operations in a workflow such as date/times or GUIDs. Activities should be idempotent, with no negative side effects if an activity is run multiple times. Versioning workflows is hard, as it is easy to introduce a breaking change that results in a mismatch between the persisted data and the runtime model when instances are in flight; solutions include adding version suffixes to the workflow name (the client also needs to be aware of this), waiting until there are no in-flight instances before deploying a new version, or doing blue/green deployments using separate apps where the old app can be removed once its in-flight instances have completed. Workflow payloads should be kept small, as inputs and outputs are persisted to the state store, so use ids or small classes to help with this.
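
A sketch of what that determinism guidance looks like in a Dapr workflow, assuming the Dapr.Workflow package and invented names: replay-safe time comes from the workflow context, and GUID creation moves into an activity:

```csharp
using Dapr.Workflow;

public class ExpiryWorkflow : Workflow<string, string>
{
    public override async Task<string> RunAsync(WorkflowContext context, string orderId)
    {
        // Don't use DateTime.UtcNow here: it would change on every replay.
        // The context provides a deterministic timestamp instead.
        var deadline = context.CurrentUtcDateTime.AddHours(24);

        // Don't call Guid.NewGuid() in the workflow either; let an activity create it,
        // because activity results are persisted and replayed as-is.
        var confirmationId = await context.CallActivityAsync<string>(
            nameof(CreateConfirmationIdActivity), orderId);

        return $"Order {orderId} confirmed as {confirmationId}, expires {deadline:O}";
    }
}

public class CreateConfirmationIdActivity : WorkflowActivity<string, string>
{
    public override Task<string> RunAsync(WorkflowActivityContext context, string orderId)
        => Task.FromResult(Guid.NewGuid().ToString("N")); // safe here: runs once, result is stored
}
```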