TechNExt 2025 - Data and AI Hub

Build a Podcast AI with Blazor using GitHub Models - Peter Bull
Podcast AI with Blazor using GitHub Models was my own workshop, available from tutorialr.com, which hosts other workshops along with talks and tutorials. I am also co-founder of Cluarantonn, which helps people start, grow or feature on a podcast, connects people with podcasting or related services, and offers group or one-to-one support plus regular free monthly events. You can find out about all this and more at cluarantonn.com.
Podcast AI with Blazor using GitHub Models involved building a Blazor Podcast AI by setting up .NET, which includes Blazor, and the Visual Studio Code editor, then using GitHub Models, accessed with a new or existing GitHub account, to provide the AI functionality. The result was an AI-powered podcast assistant that could be modified to become any custom AI-powered assistant. The workshop can be found at tutorialr.com/workshops/blazor-podcast-ai.
The workshop involved getting set up and started with .NET, the free-to-use platform from Microsoft for creating web, mobile and desktop applications. .NET can target Windows, macOS, iOS, Android and Linux, applications can be built using the innovative and easy-to-learn C# programming language, and existing functionality can be integrated using packages. You can find out more about .NET, including where to download the latest .NET SDK, at dot.net.
Blazor, which is part of .NET, is for building modern interactive web applications that deliver content for browsers using HTML and CSS, enhanced with C# to create dynamic web applications. Blazor applications can run in a web browser, on the web with ASP.NET Core, or both. Blazor supports components that can be reused throughout web applications and packages to integrate existing functionality. You can find out more about Blazor at blazor.net.
Packages can be used to integrate functionality, such as Microsoft.Extensions.AI.OpenAI, which allows you to use AI models from OpenAI with features such as chat, including the fast and cost-efficient GPT-4o mini, which supports text inputs and responses. Packages can be consumed or produced via NuGet, the central location for packages. You can find out more about the packages available at nuget.org.
Visual Studio Code is a free code editor supporting every major programming language, including C#. It reduces overwhelm with a clean and friendly interface and supports extensions that provide functionality such as AI assistance with GitHub Copilot. It is supported on Windows, macOS and Linux. You can find out more about Visual Studio Code at code.visualstudio.com.
GitHub is an online platform for developers to store, manage and collaborate on code projects that can be shared publicly or privately, along with being able to track or revert changes and contribute improvements with pull requests, where other developers can fix issues. You can find out more about GitHub at github.com.
GitHub Models is a platform that allows developers to experiment with a free catalogue of AI models, including those from OpenAI, to prototype applications. These include the models that power services such as ChatGPT, generating responses based on user input, and they can be accessed with a Personal Access Token from a GitHub account. You can find out more about GitHub Models, including a playground to try out models, at github.com/models.
A class in C# groups together code and can represent real-world objects with properties; for example, a Car class could have properties for colour or number of doors. Methods are blocks of code that perform an action and can be used within a class or a component. Components are self-contained elements that display content for browsers using HTML and CSS, with logic written in C# that can be updated without refreshing the browser.
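The class idea described above can be sketched in a few lines; the workshop itself uses C#, but the same concept of a class with properties and a method looks like this in Python (the Car example is the one from the talk, the exact properties are illustrative):

```python
# Illustrative only: the workshop uses C#; this is the same class concept in Python.
class Car:
    def __init__(self, colour, doors):
        self.colour = colour  # property: the car's colour
        self.doors = doors    # property: the number of doors

    def describe(self):
        # method: a block of code that performs an action
        return f"A {self.colour} car with {self.doors} doors"

car = Car("red", 3)
print(car.describe())  # → A red car with 3 doors
```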
Prompts such as system prompts define how an AI model behaves and guide its responses. They include establishing the role, how the model should communicate and the format of responses. There are also user prompts to help define, refine or provide more context, along with being able to structure prompts as questions and answers and then change any answers or expand on responses.
Updating a system prompt will dramatically change the behaviour of an AI, from being a podcast assistant to something else entirely. The system prompt can be changed to create an AI-powered assistant to help with anything, along with being able to change the type, content and tone of responses, such as for social media posts, or even to have fun with model behaviour, such as getting it to talk like a pirate.
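The system prompt versus user prompt split can be sketched as a message list in the style of OpenAI-compatible chat APIs; the prompt texts below are hypothetical, and swapping only the system message is what changes the assistant's whole behaviour:

```python
# Hypothetical system prompts: swapping one for the other changes the AI's behaviour.
podcast_assistant = {
    "role": "system",
    "content": "You are a podcast assistant. Suggest, refine and expand on podcast ideas.",
}
pirate_assistant = {
    "role": "system",
    "content": "You are a helpful assistant. Answer every question like a pirate.",
}

def build_messages(system_prompt, user_input):
    # The system prompt establishes the role; the user prompt adds context.
    return [system_prompt, {"role": "user", "content": user_input}]

messages = build_messages(pirate_assistant, "Suggest a podcast name")
print(messages[0]["role"])  # → system
```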
Podcast AI with Blazor using GitHub Models was a great opportunity to install the .NET SDK, which includes Blazor, and the Visual Studio Code editor, along with leveraging AI functionality from GitHub Models using a Personal Access Token from a new or existing GitHub account. It involved implementing a provider class with the functionality needed to use the AI model and adding components to interact with, and output from, the AI model to generate, refine or expand on podcast ideas, plus being able to customise the podcast assistant to create any AI-powered assistant. The workshop can be found at tutorialr.com/workshops/blazor-podcast-ai.
AI agents: opportunity and threats - Gordon Murray & Marianne O'Loughlin
AI agents are systems designed to autonomously make non-deterministic decisions to perform actions and roles. If you wanted to deploy an agentic AI for customer support to increase customer satisfaction, you could articulate the role, the outcomes and how you want it to work, then give it access to tools and integrations to help resolve tickets directly with customers or escalate to human agents. It can work with users without a line of code if needed, creating digital coworkers that don't need to eat or sleep but do need supervision.
Sam Altman from OpenAI, Microsoft and others are talking about agentic AI being absorbed into platforms, software delivery tools and operations. Many organisations are thinking about helping organisations to adopt AI, but few are putting in place guardrails to make sure agents do the jobs they were created to do. Agentic AI runs reasoning loops in a continuous cycle: given high-level objectives, agents translate these into sub-goals and the steps they need to take, maintain the memory they need, and then refine future steps.
Why all the hype? Agentic tool chains have been around for a while, but the lack of capabilities in frontier models slowed progress; in the last six months this has exploded, leveraging that functionality to reduce the burden on people and allow them to do more creative things. AI agents are expected to be in one third of enterprise applications by 2028, up from 1% in 2024, and everything will be impacted by agentic AI.
Agentic AI can work 24/7 with no breaks and its potential cannot be overstated, such as scheduling appointments and reminding patients about medications, automating claims for financial services and adjusting risk models as new information comes in, or helping users navigate government services with personalised information.
Implementation works best with organisations who start their journey with a simple use case and, as confidence grows, expand to see which parts of the organisation it can be used in. Good guardrails and monitoring are critical success factors for getting real insight, which can be hard when thinking models cannot tell you what they are thinking. Guardian agents can be used to intercept bad behaviour, and there are also meta review agents to monitor the way an agentic workflow is happening, feed this back into context and refine as you go.
Each agent is given a specific role in the workflow, where a supervisor pattern can be used to manage all the other agents and the entire process. It treats each agent like a specialist member of a team and keeps an audit trail of each part of the task. There could be rules agents or domain-specific language models for different purposes, or an agent might not be connected to another model at all but instead look at a database or backend system, narrowing down to the systems needed. A meta review agent critiques agents and performs corrections, and risk management decides the outcome, with human interaction at points such as risk escalation to the human in the loop, which could be a dashboard or report.
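The supervisor pattern described above can be sketched in a few lines: a supervisor routes each step to a specialist agent and records an audit trail of each part of the task. The agent names and tasks here are hypothetical; real specialists would be models or backend-system lookups rather than simple functions:

```python
# Minimal sketch of the supervisor pattern (specialist names are illustrative).
audit_trail = []  # the supervisor records every delegated step

def rules_agent(task):
    return f"rules checked for: {task}"

def domain_agent(task):
    return f"domain answer for: {task}"

SPECIALISTS = {"rules": rules_agent, "domain": domain_agent}

def supervisor(steps):
    results = []
    for agent_name, task in steps:
        output = SPECIALISTS[agent_name](task)
        # keep an audit trail of which specialist did what
        audit_trail.append((agent_name, task, output))
        results.append(output)
    return results

supervisor([("rules", "claim #1"), ("domain", "claim #1")])
print(len(audit_trail))  # → 2
```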
Challenges and risks: agentic AI has the same risks as generative AI, but with agentic AI the human in the loop isn't as present, as they are not managing it directly. You need to balance the opportunities of agentic AI with the challenges and risks: are you sharing data with the model in a way that is private, who is accountable, and what are the legal implications if it gets something wrong? Consider how bad actors could misuse the service; automating workflows is a new opportunity, but websites could use prompt injection against your workflow to extract information from your organisation.
One thing is understanding the degree of reliance on agents and how much trust to place in what they do and how well they perform in that role; you could have a simple agentic workflow to begin with. You should also ensure that your data is good, as if it is bad then the decisions will be bad, and choose the right shape and size of large language model, since a generalised large language model might not be the right choice.
Thinking about business value with the AI value pyramid means moving up the maturity scale, and the business potential is huge, such as business-critical processes, but this can take time, so there is a maturity burden as you scale. You need to make sure you are making the right choices for the business and users in terms of compliance, transparency and trust, and the more human control is removed, the better your plan needs to be.
When introducing AI agents, you are essentially redesigning your business, so this needs to be strategically planned with deliberate change management. You need to deal with concerns, talk to your workforce, and make sure platforms are scalable and can be integrated. There are job roles that didn't exist before, so think about the AI product roadmap; this isn't just an AI project but reaches wider, with dedicated people and budget for the process of taking a pilot to scale. Start small and learn fast, and choose the use cases that will work best for your business.
People-centred adoption is where Opencast looks at what the problem to solve is and what the best way to solve it is. When adopting AI agents ethically, you need clear policies along with dedicated people for compliance, and the landscape for this is changing rapidly; for transparency, AI agents have to show how they reach conclusions. For the business, you need to bring people with you, have leadership models for key decision making, and have feedback channels for domain experts. People-centred implementation means you need to constantly assess what you are doing for the people in an organisation.
You need to engage non-technical stakeholders and define what agents are without treating them as people: define personas which outline, in non-technical terms, the goal of an agent and what it can and cannot do, which is good for cross-team working when bringing technical and non-technical people together. You also need to consider the potential pitfalls when driving things out at scale and navigating your way through agentic AI, but there are opportunities when adopting it. Be brave, experiment, start small and scale out, but most importantly think about the challenges.
AI Voices That Engage: How Voice Direction Can Enhance Speech Models & User Trust - Nic Redman
Nic introduced the session, welcomed everyone and asked who had experience of AI voice and who was involved in tech and AI. Eight years ago, they were working on a Voice Over Social podcast, and at the time the industry was in turmoil as AI voices had arrived; they had heard about one AI voice that was intensely realistic. Renowned for getting to the bottom of things, they looked into the AI voice, which even then sounded realistic.
Now they are still a voice-over artist and are also a highly qualified voice nerd and voice coach. AI voice should be better now, but there is an inconsistency; they had an example of a car AI that wasn't great despite using the latest technology. With AI voice models we will get there, as the tech and algorithms are improving at an exponential rate, but has the process of collecting voice data kept up, or are we relying on the tech to do this instead?
Nic was lucky enough to be on a text-to-speech project for an AI assistant, with a brief for a specific volume range, specific tonal quality and emotional energy that sounded consistent and neutral across random sentences. Their role was making sure the voice was consistent, but this was a long project, so vocal health mattered due to the range of voice needed. The process eight years ago was different: voice artists were pushed like actors and it was emotionally rigorous, but it produced a tonal and realistic voice.
Voice directors have been used then and now, and it is no surprise this is successful, as the success of AI models depends on the quality of the data going in; if the data going in isn't expressive, the AI will be boring as well. How can we prepare that data better and more rigorously? You need someone who shapes the voice as it goes in, so it comes out well.
What was wrong with the AI in their car: was it pulling in the wrong phoneme, or did it not have the right phoneme in the first place? The interesting thing is that you can refine the synthetic voice before it gets to the public, with voice direction for the performance itself, or bring directors in earlier to structure phrases in a more conversational way at the scripting stage, or to spot who is phoning it in when recording a specific sound. Ensure the creative and linguistic expertise is brought in early, before it is too late.
Nic asked if anyone had been fooled by an AI voice; the voice sounded convincing, but everything it said was said in the same way. Can you be fooled by an AI voice, and how can you tell if something is AI: is it the algorithm, or is it the input data? You can't talk about voice in AI without ethics, such as those voice-over artists who have had their voices scraped and used.
The creation of AI voices from original input data is more important than ever if they are to dupe us into believing them; you might even have AI voices trained on an AI voice, but if you are going to create something good you have to invest in good-quality data. What is it that people want from their AI voices: do they want something bespoke that doesn't sound like anyone else? The tech is phenomenal, but with all that tech are we forgetting what makes it work? It is the quality of the data, which needs to be in line with the tech.
Are we involving directors at the right point, and can they stop low-quality voice output from happening? There is a bigger existential question about the immediate influence on humans: people with Botox are losing the ability to empathise as they cannot move their faces. The media we listen to impacts our communication and always has done; it could be American accents influencing the way we speak, and people could grow up listening to inconsistent AI voices and start speaking like them. We have a responsibility to ensure we are representing communication in the right way.
The AI Revolution: Decisions That Shape Our World - Colin Tempest
We need to understand the process and the potential of AI. Algorithms are everywhere, such as Netflix using them since 2007 to curate lists, but they are also used behind the scenes to make sometimes life-or-death decisions: when you apply for personal finance you get an instant decision, when you turn up to hospital an algorithm can decide whether you will be admitted based on requirements, and they are used for fraud detection and management along with job screening.
AI hype includes talk of a computer that can be conscious of its own existence and reproduce itself, which was a description of the Perceptron, a computer from 1958 that could be taught to learn shapes without having the shapes programmed into it, but was limited by the technology of the time; large language models require a lot of processing power. The foundations were laid in the 1950s, but the field went quiet for fifty years until the development of ImageNet in 2009, which provided a good set of training data for visual recognition. Processing moved from CPUs to GPUs, which became ubiquitous in 2017, and then the big data era provided massive datasets.
Innovations include deep learning, a subset of machine learning that mimics the structure and function of the human brain. It uses weights to get an outcome and, if the outcome is wrong, passes the error back and adjusts the weights to get the desired outcome. Deep learning has a concept of different layers, which in a visual model represent different visual concepts that get increasingly sophisticated, from pixels and edges right up to the identity of an individual.
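The "weights adjusted to get the desired outcome" idea can be shown with a single toy neuron: it computes a weighted sum, and when the output misses the target, each weight is nudged against its input (a crude one-step version of what backpropagation does across many layers; the numbers are made up):

```python
# Toy single-neuron illustration of weight adjustment (not a real network).
def neuron(inputs, weights):
    # weighted sum of the inputs
    return sum(i * w for i, w in zip(inputs, weights))

inputs, target = [1.0, 2.0], 1.0
weights = [0.5, 0.5]  # initial guesses

for _ in range(50):
    output = neuron(inputs, weights)
    error = output - target
    # pass the error back: nudge each weight in proportion to its input
    weights = [w - 0.1 * error * i for w, i in zip(weights, inputs)]

print(round(neuron(inputs, weights), 3))  # → 1.0
```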
Another innovation was moving from classification to generation: instead of starting from an image and getting to a tag, you start with an image, add random noise to it, and then train a model to take noise and get back to an image. This sounds simple but works better than expected, and it can apply to video, with models that eliminate flickering and keep content consistent. This can produce reasonably convincing results.
Innovations also include the transformer architecture for text, which is different from images: words can have near and far relationships, which can be hard to process, whereas with images spatial correlation is really high. Google had the idea of attention mechanisms, which capture linguistic associations and higher-level concepts such as characters, locations and narrative arcs, but this requires a lot of processing power; it takes around a trillion calculations to get the next word from ChatGPT. Training a model this way would normally need someone to go through, check whether the output is correct and give feedback, which takes a lot of time, but what is easy is seeing what the next word in a sequence is: given a prompt, what is the next likely word?
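The attention idea can be illustrated in miniature: each word scores its relationship to every other word, and a softmax turns those scores into weights, so near and far words can both contribute. The scores below are invented for illustration; in a real transformer they come from learned query/key projections:

```python
# Toy attention: softmax over made-up relationship scores.
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# How much the word "it" attends to each word in the sentence (hypothetical scores)
words = ["the", "cat", "sat", "because", "it"]
scores = [0.1, 2.0, 0.3, 0.1, 0.5]

weights = softmax(scores)  # weights sum to 1
most_attended = words[weights.index(max(weights))]
print(most_attended)  # → cat
```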
Another innovation was fine-tuning, which starts with a pretrained model that learns language patterns by guessing the next word. You then add fine-tuning to refine the model with supervised learning to follow instructions, and reinforcement learning takes this concept further: a human scores an output on whether it is right, accurate and helpful, and the model uses this to adjust its weights to be more in line with what a human would expect. It wasn't until this was added that we got today's capabilities.
ChatGPT would be good at chess, as chess games, books and articles are in the training data, but noughts and crosses is hardly discussed at all. AI answers confidently, so the model will never say it doesn't know; it will give you an answer. It isn't doing anything wrong, it is making a prediction, and it gets it right more than it gets it wrong.
The AI reliability problem means it is hard to know what a model will do, and bad predictions can stem from limited training data, including when things change so that the training data differs from the environment. There is a lack of transparency about why predictions have been made; people have figured out that if you have shelves behind you on a video you'll do better. There's a risk in using AI without oversight, including a case where a tax authority accused child benefit claimants of fraud without any evidence, with a strong racial bias where certain people were disproportionately affected. There is also data leakage, where information not available in the real world has been included in the training data, giving the AI access to privileged information.
Predicting social outcomes can be challenging. One study provided a model with 10,000 data points per case and compared it against a simple four-data-point baseline model, with 160 teams taking part, but the vast majority were around 50% accurate and nothing outperformed the basic models, even with more data available. So this is hard, but it doesn't stop people trying it for criminal outcomes, even though it is difficult.
Believe Housing's AI use cases include using large language models to automate chatbots and classify complaints, such as someone complaining about a repair, along with using Microsoft Copilot across the organisation, chatbots for HR advice and automation for sharing child benefit information. Longer term, they want to use AI in a predictive way, using smart sensors to help make early interventions and prevent issues.
They have a risk assessment for AI models: they need to be transparent and accurate to explain decisions, with bias mitigation through diverse training data, inclusive design teams and regular fairness audits, along with human oversight with critical decision review, compliance with data privacy regulations, and valid, reliable results.
On the impact on work: there is a pilot shortage, so Airbus and other airline operators are working to remove the copilot from aircraft for single-pilot operations. Banking is using AI chatbots to handle many routine enquiries and offer tailored recommendations, which generates more revenue, and Goldman Sachs has reduced advanced, complicated regulatory-filing processes to just minutes.
Next-generation AI is decreasing the cost of ownership of models, for example DeepSeek R1, which contained innovative features such as chain-of-reasoning and a mixture of experts, using different parts of the model to solve different problems, resulting in a model a hundredth the size of GPT that needs less hardware. You may be able to run models on modest hardware such as desktops, where instead of using generative AI services you will be using your own custom models. Agentic AI is layering strategy and planning capabilities on top of existing models: an AI system that uses other AI tools to plan and execute processes, such as a smart assistant that comes up with a plan and then executes it.
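The mixture-of-experts idea mentioned above can be sketched as a router that sends each query to a small specialist instead of running one giant model. In a real model the router is learned and the experts are sub-networks; the keyword rule and expert names here are purely hypothetical:

```python
# Rough sketch of mixture-of-experts routing (experts and rule are illustrative).
def maths_expert(query):
    return "maths answer"

def code_expert(query):
    return "code answer"

def route(query):
    # a real router is a learned layer; this keyword check just shows the idea
    if any(word in query for word in ("sum", "solve", "equation")):
        return maths_expert(query)
    return code_expert(query)

print(route("solve this equation"))  # → maths answer
```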
On consciousness and superintelligence: AI does not think the way we think, it is calculating and crunching numbers, but intelligence is not an all-or-nothing thing, it is a scale, and because AI can grasp its environment it is on that scale somewhere. Whether AI will reach human intelligence is not if but when, and the pace of change and the ability to scale up mean it will be a singular event; after that it will reach genius level and beyond. This could be in the next few years or decades, but it depends on breakthroughs that have not yet happened. Physical AI is hard; robotics and self-driving cars are taking much longer to deliver on their promises.
Preparing for an AI-driven future means recognising that AI will get significantly better, especially agentic AI, that a lot will come from China, and that AI will change the nature of jobs sooner than people may think, so learn as much about the technology and prepare as much as possible.
Making AI Work: Mindset, Fit, and Real Value - Elisio Silva plus Panel Hosted by Erin Kinnee with Elisio Silva, Rohan Nakashe & Danielle Armstrong
Elisio has worked for Accenture for five years, holds a masters in AI and has worked on delivering AI and electronic solutions for over a decade. There are people working with AI and people working in the field of AI. The talk looked at where AI delivers real value, where projects fail, and where to apply these technologies. The peak of inflated expectations includes claims such as AGI or that AI will take all jobs; there are two extremes, that AI is overhyped or that it is underdelivering. Many projects have been abandoned, but AI is transformative. It isn't magic: it can automate, accelerate and augment, but it needs clean data and well-defined goals, so treat it as a tool; it can deliver massive value, but if you treat it as a miracle you will be disappointed. The problem-first mindset is starting with the problem: identify pain points, link to business outcomes and avoid tech-first thinking, then evaluate AI tools that can help. AI technologies are all about a mindset.
OCR, optical character recognition, converts scans into data and has been around for decades, but AI has made a big difference in reducing manual entry: it digitises text and understands structures such as tables, checkboxes, signatures and more, which works really well and can help any industry that works with paper. Structured text intelligence is finding meaning in text, where you can extract names, dates and organisations to enrich unstructured content and also detect personally identifiable information, with applications in contracts, support tickets and more. Generative potential includes drafting content, summarising documents, coding and more, but you have to be careful, as these models think they know things; if you don't check, there may be something in there you don't expect, so when deploying into a live environment, have guardrails and direction about how the AI should behave.
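The PII-detection idea can be sketched with simple pattern matching over extracted text. Real services use trained models; the regexes below are illustrative only and will miss many formats:

```python
# Minimal sketch of PII detection over text (patterns are illustrative, not exhaustive).
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b(?:\d[ -]?){10,11}\b"),  # crude UK-style phone match
}

def find_pii(text):
    found = {}
    for label, pattern in PII_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            found[label] = matches
    return found

print(find_pii("Contact jo@example.com or 0191 496 0000"))
```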
LLMs with real data are useful for retrieving from a knowledge base: the LLM uses retrieved context and answers with the most relevant information for your query. You give it context, such as what is needed to answer a question, which can reduce hallucinations and increase trust. Agentic AI is the next layer, using agents which plan and execute multi-step tasks, but this is still a work in progress, as we don't quite have the best solution yet; eventually whole workflows will be automated, and agents are a good and interesting technology.
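The retrieval step described above can be sketched by scoring each knowledge-base entry against the query and handing the best match to the LLM as context. The documents are made up, and real systems score with embeddings rather than word overlap:

```python
# Toy retrieval step of a retrieval-augmented setup (documents are invented).
KNOWLEDGE_BASE = [
    "Refunds are processed within 14 days of a return.",
    "Support is available Monday to Friday, 9am to 5pm.",
    "Deliveries outside the UK take up to 10 working days.",
]

def retrieve(query):
    q_words = set(query.lower().split())
    def overlap(doc):
        # real systems use embedding similarity; word overlap shows the idea
        return len(q_words & set(doc.lower().split()))
    return max(KNOWLEDGE_BASE, key=overlap)

# The retrieved text would be passed to the LLM as context for its answer.
context = retrieve("when are refunds processed")
print(context)  # → Refunds are processed within 14 days of a return.
```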
What really matters is having the right problem-first mindset, not trend-chasing, and then the cost versus benefit: is AI the best return on investment for the task, and can it scale affordably? AI is not free, as it needs good infrastructure to run, and these models are quite heavy and need a lot of resources, so make sure it makes sense and is better than a simpler solution. AI is only as good as the data it receives, with the idea of garbage in, garbage out, so have good data to work with these models, protect privacy, make sure what you are doing is right, and know where your data goes.
Erin hosted the panel discussion. Danielle has been with Accenture for six years, has a background in computer science, has been working with machine learning for four or five years and has worked on different projects with a variety of clients. Rohan has been with Accenture for ten years, has spent a lot of time in public service, has built automations and AI tools for public service clients and is enthusiastic about the tech stack, including machine learning.
Erin asked if anyone had any questions, and one was about what applications have been worked on for the public sector, which is nervous about deploying AI tools, so it is a challenge to gain the trust of clients, but having built credibility you can then create automations for a public service client, which could also be used to detect fraud and for other purposes. Elisio talked about a generative AI for DWP, which was a first for a public sector client on AWS in the United Kingdom: DWP receives unclassified letters with forms, and this solution can read these letters and understand what they are about, classifying them or identifying vulnerable people to escalate their cases so they get the support they need. Processing these letters at speed is important, as some of these people are so desperate they may just give up, so it allows them to be supported quickly.
Danielle mentioned that one of the things behind the explosion of AI was open-source tooling. Hugging Face is a catalogue of AI models where you can find the code to run a model along with an explanation of what it is doing; you can search by model or by use case, and find how to use a model with AWS, Google or Docker. They use Python and Jupyter notebooks to run models, but you can also use Google Colab, which gives access to higher-specification machines out of the box, or pay for even higher usage.
The panel was asked how much of their personal workload is taken up with keeping up to date, and with knowing enough about the technology to pitch to clients. Rohan mentioned that they are focused on delivering for clients, so they need to keep up themselves, such as when DeepSeek and every other LLM became available; the team works on specific use cases, so they try to keep up with understanding whether a technology is proven, but it also depends on how risk-averse clients are. It took a while to recommend something like Amazon CodeWhisperer, which can write code on its own, but it boils down to who you are recommending a solution to. Elisio said it depends on how much time you have to look at information, such as news and following blogs, to understand what is going on, so you have to keep yourself online and keep checking and testing things.
Danielle talked about how many rules it takes to solve a problem: if the number is reasonably manageable, then AI isn't the right choice, but if you don't know, or you need something to reason about cases you didn't anticipate, then you could design a system like this with machine learning, and you can also have a feedback loop where you iterate over the data.
Erin asked what the most creative and surprising thing was. Danielle was trying to create some synthetic data of bank documents to run through their systems but created images of banks instead. Erin tried to create their own GPT for their team as an advice-giving tool, but it would answer slyly or add a sinister laugh, and they couldn't figure out how to get rid of this. Rohan talked about generating a non-existent car auto show that seemed like a real car show, but using the exact same prompt didn't create the same volume; it took lots of iterations and used up all the credits they had.
Danielle mentioned that they use Figma, which has an AI tool where you can design an application and have the code generated from the design. There is a lot of resistance from clients to letting an application do this; especially in the public sector they don't like code being generated with Copilot, although they are getting more accepting. Developers don't always write the best code, and that is what the models have been trained on. Rohan mentioned that the latest tech trend is companies writing code with AI, and Facebook has said that half of its code will be created with AI, which may increase in the future.
The panel were asked if AI is stealing jobs. Rohan said it is more a case of reskilling; this happened with the move to automation before, so it will mean assessing the reskilling of the workforce to make people AI literate, and people can find their niche through reskilling. Elisio doesn't believe AI will end the job of developers, but developers who don't have AI skills will be replaced by those who do. They have been using AI to develop, and it is awesome as long as it has clear instructions and the task is not something really new; they haven't seen it create something no one has seen before, but for what exists and needs to be reasoned over and recombined, it is awesome. Erin mentioned she has seen new roles being created out of necessity, and that we have a responsibility to take people along for the journey.
Advice for those out of college, from Danielle, is that Accenture is actively recruiting junior and apprentice engineers and the space is still there for graduates, but you have to stay on board with AI technology and do your own learning and research to stay competitive, as it is fast moving and constantly changing, so there needs to be a real passion there, along with seeing AI as a tool.