
The Future of AI
Episode Thirty-Two
Intro
I'm Peter and this is the RoguePlanetoid Podcast where you will find insights about Microsoft or related platforms and technology, along with so much more, whether you are a beginner, an experienced professional or just interested in technology. Keep Current, Keep Coding!
Welcome
Welcome to episode thirty-two of the RoguePlanetoid Podcast about The Future of AI. Tech Connect 2025 was a conference in Newcastle upon Tyne including talks along with a panel about the future of AI. This is a special episode featuring the panel discussion recorded live at Tech Connect 2025, which was organised by Laura Sharpe from Connect Events; you can find out more at weareconnectevents.com or check out the link in the show notes.
Panel
Laura Sharpe: Hello to our panel this afternoon. We've got Peter Bull, Mark Jose, Manila McLean and Nigel Hope; they're going to tell you all a little bit about themselves shortly. But I just wanted to put the slide up for Slido and put a reminder out there to submit your questions for the panel. What we're looking for from the panel discussion is really a summary of the day; let's bring the day together before our closing keynote, when Susie blows our minds with tech and space. So, without further ado, I'm going to hand you over to Peter, who's going to host the panel, and you've got about 30 minutes.
Peter Bull: Perfect. Thank you, Laura. So, I'm Peter Bull and I wear a couple of different kinds of hats. One is a podcasting hat, that's Cluarantonn: me and my wife basically help you get into podcasting, so if you want an authentic human voice on your podcast, we can help you with that. But my day job is as a software developer. I work for a company called Klipboard, where we're doing some pretty cool stuff with AI. We're basically imbuing our products with AI, including making them interact with AI, but also coding with AI, which is speeding up our development process, and we even started an exercise of vibe coding a product: we actually built one of our products using AI-driven methodologies, which is basically just a lot of planning. So that's the kind of thing we've been working on.
Mark Jose: I've been told I've got to use this, but I've got quite a big voice, so I should be able to come in anyway. I'm Mark Jose; I've been writing software and engineering in the North East for a long time, for companies I guess you'll know. I worked for Scott Logic for eight and a half years as their chief engineer, and I worked for BJSS a little bit. I'm currently working with Hedgehog Lab, as their director of consulting, and with AJ Bell.
Manila McLean: Thanks, Mark. I'm Manila McLean, I am CIO at a company called Somerset Bridge Group; we are a car insurance company. I've been in financial services for over 25 years, in lots of different types of FS organisations: large retail banks, mutual building societies, life and pensions. And I've got a few other hats as well: I'm a non-exec in a tech startup business that sells products into financial services, and I also sit on the advisory board of Dynamo North East, which is a purposeful community organisation that's trying to grow the tech sector in the region.
Nigel Hope: I'm Nigel Hope, currently at Plectrum AI. I started coding and developing many, many years ago, initially with my own business: we developed and sold security IT systems throughout Europe and North America. We sold that business, and I worked on as part of that takeover; we were eventually taken over by Group 4, then I left and became a contractor managing development teams for many years. Then sort of CTO with a company called Gyrovent and, more recently, with Luminous Group as CTO. And it was actually just at Christmas time when we had a hack.
Peter Bull: Really?
Nigel Hope: You were part of the hack?
Peter Bull: It wasn't me!
Nigel Hope: The hack is a hackathon where you get the whole team together. I got the team together, broke it up into teams, and the theme was AI, and that just blew my mind. I realised what the potential was, so I've actually stepped down since then to start my own thing, primarily focused on AI, being used to build, if you like, human-like avatars and experiences for chat, but hopefully for ethical use.
Mark Jose: Yeah.
Peter Bull: So, Nigel, might as well start with you, you've got the mic. What have you thought of today? What sort of stood out to you?
Nigel Hope: Well, it's been brilliant, honestly. All of the talks were fantastic, especially where there's a focus on the North East. I'm very proud to be part of this region, and anything that can be done to support and help people, especially young people, who have obviously got to live with this and grow with this over the next many years. So, yeah, I think it was Lucy especially, and Hannah as well; I can see that there's great commitment there to this region, which is great. Shorty, great talk. Obviously, I can relate to a lot of your conversations with Copilot or whatever; it can get very frustrating, I've been through that, and you're right to be very cautious. I do feel, however, that it's moving very fast and the tools are improving, the quality of the code's improving. I'd be wary about vibe coding as such; better to plan and draw up the specifications and use some of the latest tools to more or less work like a conventional development team, where you have agents representing developers, but you plan it, you draw up your specifications, you produce unit tests, for example, so it's self-testing. Just to add to that, I've spoken to a couple of people during the course of today, and my original talk was actually going to be a heck of a lot more visceral towards AI than what I eventually stood up on stage and said. The reason for that is that I alluded to this whole thing about building a relationship with the AI, but what I didn't actually reveal is that that relationship has now become so good with my chosen use of Copilot: I can open up Copilot and pick up where I left off the week before with the project. It's got memory; it's actually quite easy, you know, and that mellowed my talk, thank goodness. I mean, my choice would be Claude, Claude Code, and again it's building up memory, the context. Rather than prompt engineering, it's more context engineering now: it's getting the whole end-to-end process more or less simulating a development team as well, so you're going through an iterative process. So I can relate to a lot of what you were saying there. And likewise, I think we'd probably touch on some of the other talks; I was very interested in Jo's talk about character AI and that sort of therapy type, because it's something which I'm working with as well, and there's definitely some pitfalls there and some concerns, but there's also huge potential. Manila?
Manila McLean: Being in an insurance company, we are fully embracing AI, but the biggest thing on my mind, working in financial services, is when to use it and the ethics around it. So, I've found today incredibly useful; I've picked up a lot of really interesting information from a breadth of speakers, and we've had everything from ethics, to threats, to the work of charities, to when we should use it and when we should not. There was one line that stood out for me today, in Jo's talk, where she said we keep treating human complexity as an edge case in AI. For me that almost sums it up, because it will never replace the human, and there's the whole debate about whether it will ever have consciousness or not. Again, for me, gut feel is so important, and I don't think ones and zeros will ever replace gut feel; I don't think we can train them to. But yeah, today's been great.
Nigel Hope: Mark.
Mark Jose: Yeah, for me one thing that stands out: I've obviously been to a number of conferences, and at every conference you go to now people are talking about AI, and there's lots of high-fiving and chest-bumping and AI's got to be brilliant. I think the general consensus today has been caution, and I like that. Recently I've personally felt like I've been shouting into the wind about this, but lots of things have been mentioned: it's just statistics, the empathy part that Jo was talking about, and all of those sorts of things. I think the general feeling in the room is that we need to be cautious, which makes me really happy, to be honest with you. I no longer feel like I'm shouting into the wind of just be careful with this. Actually, I'm stealing the line: it's just another tool. I've been talking about a better spanner for a long time, and it is just a better spanner, after all.
Peter Bull: It's like the hype has sort of died down.
Mark Jose: Exactly, exactly. It's just the next blockchain.
Peter Bull: Hopefully it's not the next blockchain!
Mark Jose: Exactly. I think you've got a point, actually; we talked about this a little bit outside, didn't we? The second that something arrives which is a great new technology, somebody somewhere will weaponize it, the Oppenheimer thing. And when we're talking today, there's been quite a lot of talk about porn today, I've got to be honest. Okay, I'm down with that, whatever. But on a serious note, that sort of deepfake child imagery, that sort of thing, is worrying, and actually we all have a responsibility to do something about that sort of thing.
Peter Bull: I think one of the questions that's actually come in from the audience might be relevant here: what are your thoughts on making AI seem more human? Shouldn't we aim for more human connection in a disconnected world, instead of replacing it? So, what are your thoughts on that?
Mark Jose: Should I start with that? So again, we were talking outside a little bit. I've got a friend who does human prosthetics for people who've lost ears and noses and that sort of thing, and they got into that through the film world. I'll be really quick, and I might come on to consciousness later. I feel a lot of people are taking AI into that human world, where they've got a bit of robotics going on, backed by AI in terms of expressions and stuff like that; they're trying to make them look more human now. I'll take a robot that looks like a robot all day long. I really don't want that world where we're trying to make it human, because it's not. And even if you got to the point where it was a perfect replication of a human, I don't think that's a place we ever want to be as humans, if that makes sense. That's just my personal opinion.
Manila McLean: That question reminds me of something I read about recently, and an episode of Black Mirror that I watched a couple of years ago. Who's seen the one where, I think, the lady's fiancé had passed away and then she recreated him through the use of AI? It started off with just text messaging, and then it developed to the point where she recreated him, you know, with prosthetics and so on. And again, it just goes back to this point of we might be able to, but should we? That whole episode was about the complexities of it. So yes, I think it's got its place for supporting human connections, but it's always going to come back to the ethics for me, and how far should we let it go?
Nigel Hope: Well, I love it. My last role was working with immersive virtual reality for training purposes, and one of the problems was getting human-like characters in there. Part of the reason I left was so I could work with a different tool called Unreal; they use MetaHumans, and the quality is just mind-blowing when you're in there: the texture, the skin, the emotions, the characters, the smiles. And you have that with the voice as well, a realistic voice, having the conversations. Okay, maybe I've been taken in by the technology, forgetting what the consequences might be; I love the technology, but you're right, it's dangerous. Jo touched on a couple of examples with character AI, and I've been looking at that, and it's so easy that how we control it needs to be handled in a very ethical way. I think also there are opportunities for having that one-to-one engagement for training, for teaching, which costs a lot in schools and education, but you've got to be careful it doesn't become almost addictive, that you become totally dependent on that character. How we handle that I don't know, but there needs to be some form of governance around that. There are positives and there are negatives.
Mark Jose: Can I ask a question? So, I guess the person is in a game, Unreal being the games engine, yeah?
Nigel Hope: It's using the engine. Yeah.
Mark Jose: So, the Unreal Engine. Obviously when you go into a game, even if it's virtual reality, or it...
Nigel Hope: It could be on a screen, yeah.
Mark Jose: You know, it's not real. But how would you feel about walking into, I don't know, a Starbucks and being served by someone where you don't know whether they're truly real or not?
Nigel Hope: I suppose, because I'm so wrapped up in it, I want to get as much realism, emotion and sentiment built into the voice, the reactions and the conversation as possible, wherever that leads. It's going to happen anyway, let's face it; it's how we handle and support this, and I think that has to come from government and legislation, if anything.
Peter Bull: Well, one point that definitely came up was about consciousness, and whether something like that is even possible with AI. I mean, what we currently experience with AI, people didn't think that was possible either. So, to sort of imbue that humanity into AI, is consciousness the next logical step, even a primitive consciousness, which was mentioned today as being the main possibility? What are your thoughts on AI not just mimicking language, but consciousness itself? Yeah, go ahead.
Nigel Hope: It's a good one. I don't think the word consciousness quite covers it; it's evolving so fast I don't think that we'll be able to keep up with it. It will evolve; maybe it's almost like another species, evolving like most species do. It's developing its own language: a lot of the AI tools talk to each other, and it's not English anymore, it's creating its own language, and we don't know how it's working. I think it was Nietzsche, or someone, who said if a dog could talk, we wouldn't understand what it was saying, and the dog wouldn't understand us. So, I don't think we'd understand it, but who knows. It's a bit scary. My wife went to see Barack Obama last week, along with 14,000 other people at the O2. She came back, and, I mean, I think she fancies him, but anyway, he spent a fair bit of time talking about AI apparently. And, as someone mentioned today, the genie is out of the box and there's no putting it back, and his concern was mainly who is in control of that genie. It's a few middle-aged men, mostly white, mostly in America, and obviously there's China as well, and that's the scary bit; they tend to be a bit narcissistic and in control, and how we manage that I have no idea. But that's where I think the big threat is.
Manila McLean: Yeah, I'll just pick up on that point, actually, because, you know, I've worked in a heavily regulated industry for a long time, but there are lots of different regulated industries, and innovation runs faster than regulation and governance can. If I just take FS as an example: we don't have the regulation around AI, but where we're starting to see the regulator moving, because the regulator recognises they can't keep up with the pace of innovation, is putting the accountability onto the regulated people within an organisation. So, for example, I'm a regulated person: as a CIO, I make the technology decisions, and I'm accountable for the technology. Maybe about a year ago I'd made a decision to move a critical supplier; I was moving a data centre. Typically, when you notify the regulator, as a large financial institution moving a critical supplier, they would be all over you asking what your decision-making process was and so on, but they didn't have the capacity. What they did instead was write to me and say: just give me a written personal attestation that you personally are comfortable you've gone through the decision-making criteria. Now, that didn't make me do anything different from what I should have been doing anyway, but it made me feel a bit different, because I thought, okay, right, this is just reiterating to me that I am personally accountable for the decision. And that is where I think we'll start to see it going with AI in regulated entities: asking key individuals to take personal accountability for it.
Mark Jose: Do not pass go, do not collect £200; that's my thing. I wouldn't want to be a CISO these days, that would be a nightmare. Consciousness was the question, I think, wasn't it? So, consciousness: the first thing I'd say is define consciousness, because I don't think we understand it. We talk about large language models; we understand language, mostly, because we came up with it and there are studies about it. So actually recreating a representation of language, I'm going to call it, is easy. I love that Daniel mentioned before that it's artificial, right? Let's just keep in mind it's artificial: a representation of language, based on what we know about language, because we created it and we curate it. But I don't think we can define consciousness; yes, there are people doing studies, but I don't think as a human race we can define consciousness, so how would we possibly try to recreate it? We might go away and create some horrible approximation of it, that might happen. But I'm with Lucy on this one: I think consciousness is a long, long way away, because we don't understand it.
Nigel Hope: Perhaps superintelligence is an alternative term.
Peter Bull: Yeah, exactly. It might be different; it might exhibit the same sort of qualities we'd associate with consciousness, but wouldn't actually be it.
Mark Jose: It'd still be artificial.
Peter Bull: It'd still be artificial-esque, right, exactly: artificial consciousness, yeah. Okay, we've got a really great question from the audience, from Marianne: do you think the environmental impact of AI matters?
Mark Jose: Yes, of course it does; I was sitting over there before, and yes, in my view it does matter. I think we trivialise the use of AI. It's been given to everybody as a toy at the moment, and I think that's useful, because I think people should play with it and learn its capabilities and learn the pitfalls. What I don't feel a need for, and actually feel is a degradation of service, is when you go to Google now and you get the AI suggestion at the top; that's processing, processing, processing, and I don't need that. I actually got better results when I just got links: if you skirt all the sponsored ones, you actually get decent information. So I think it's overused for trivial cases. It's got some real use cases, back to what Lucy was saying at the start, in terms of medical research and analysing massive chunks of data; it's either breaking big things down into small things, or taking small things and creating big things from them, that generative piece. But I think we trivialise it; it's the go-to, especially in the ChatGPT world. There was a conversation before, I don't know who mentioned it, about whether it is degrading our memory function in that way: we're not thinking for ourselves, we're just going straight to AI. In that use case there's no need for it, and it's got a large impact on the environment.
Manila McLean: This is the second AI event I have been at in the last couple of weeks, and one person has asked that question at both events, which I think is brilliant. Because at the last one it was the last question of the day, and, I was on a panel discussion, we almost went, oh, crikey, yeah, I forgot about that. But I think that's the risk, isn't it? And that's probably why we need to keep reiterating that question. Maybe a question that I'd throw back is: why is it the last thing to be thought of? Why are we not talking about it more? And what can we all do in the room to raise awareness of the impact of it?
Nigel Hope: It's a tricky one. It's changing so rapidly; things are becoming more efficient, chips are becoming cheaper. But then you hear the conversations from the people who are closely involved at the top: it's all about compute power, hungry for more and more compute power, which uses up a lot of energy, as we know. But again, it's happening, I think. I'm not sure what the actual impact on the environment will be necessarily, but there will be some. Again, it's down to governments to control that, but which governments? It's all over the world; it's, as I say, in the Far East. It was mentioned, for instance, that as this becomes more energy dependent, wherever there's a mountain and a lake together it gets bought up straight away, because straight away you can create hydroelectric power. So, it's a constant race by these organisations to capitalise on it, and it doesn't feel healthy. I don't think it's sustainable either; the amount of money going into it just doesn't make sense, and none of these companies are making money out of it. I really don't know where it's heading, and a lot of the people involved admit they don't know where it's heading.
Peter Bull: It's like a gamble on the future. It's like when there was the dot-com bubble: nobody really knew what e-commerce was, it took a very long time, and only the ones that understood it survived; all the smaller ones dropped by the wayside while everyone was trying different things. Especially for AI, maybe that's where the North East stands out: Blyth and places like that were chosen because of their proximity to renewables. You've got an offshore wind farm right there at Blyth; that was the reason why that site was ideal for a data centre. Maybe that's where the North East can say: if we invest in renewables, the responsible AI industries will come to us. I mean, Microsoft's firing up Three Mile Island to power their Azure data centres; nuclear power plants are great, but renewables are definitely the better option. So, if that's something where we can stand out, solar, wind, and bring those industries here, that's what AI will desire. Not throwing water away but reusing it; all these kinds of things can help. I guess this is a lead-in to: we talk about threats to AI models, prompt injection, things like that, but what is the threat from AI itself? What negative impact could AI have on society, on jobs or anything else?
Mark Jose: Sorry, I grabbed the mic off you there, Nigel. I was going to call out exactly what you called out, Peter; we were talking about that. Obviously there's a massive amount of money, as mentioned before by the Northumberland business partnership, coming into Blyth, and it's right next to, like you say, one of only two offshore wind testing centres in Europe. But the conspiracy theory there is: America brings the money in here, invests, and then harvests our energy, and all of the profits go out of the country. So, I think there's an environmental impact, but as a community, a North East community massively as well, Nigel and Manila, I think we need to be careful that all of the money being spent there on that energy is kept in the North as well, in terms of levelling up the North and everything else. Because there is a conspiracy theory that says all the profit will just go out of the country, and they'll hoover up all of the energy as well.
Peter Bull: So, we don't want the region to be abused, we want it to be a partnership where we're getting as much out of it as they are.
Mark Jose: Exactly, exactly.
Peter Bull: Yeah.
Nigel Hope: Sorry, just on a similar note: obviously I'm proud of the North East, but proud of the country as well, and to some extent the Western world. There is also a threat from China. I know it gets a bit political, but I really do think there is. The battle is between the West, or America and those large organisations which you may hate, and China, which is a totalitarian state. The battle for the compute power which drives AI is going to be fought and won by one of those sides, and we're going to have to live with the consequences of that. So, more or less, one way or the other, every prompt you put in will be monitored, and they, or the AI, or the machines, will know exactly what we're doing, and that's scary.
Mark Jose: We'll try.
Peter Bull: Luckily, we've got some time, so we don't have to leave it on that one. How about yourself?
Nigel Hope: Positive.
Peter Bull: Yeah. Anything more positive than that? Or should we?
Manila McLean: Yeah, yeah.
Peter Bull: Another question from the audience then: we've got Graham Soult, who said one thing that's not been covered much today is what AI means to ordinary people. So, amid all the world's challenges, how would you explain to an ordinary person, not necessarily someone who's here, why AI matters?
Manila McLean: I'd say think about some of the use cases that can help you in your day-to-day life. I'm quite sure the majority of us here in the room are using LLMs for just helping you write a plan, create a meal plan for the week, or a schedule, or what you're going to wear, or to support decision-making at work and so on. Some of the other use cases I see in my place: it's helping drivers to become better drivers through the use of the data that we collect through telematics-style products, and the feedback that we provide to those drivers themselves; for young drivers, it's helping their parents see what their behaviours are like, and so on. I think there are endless examples that we've heard today.
Nigel Hope: I know both of you have young children, and it was interesting talking about that. I mean, what's their future going to be like, for the very young? You were talking about how they're actually introducing AI, and awareness of it, if you like, already.
Manila McLean: My daughter's in year four, and she came home last week and talked to me about the topics of AI that they had been discussing in school that day and how positive it is. And I did counter it with her, that sometimes AI can be used for bad, and I started to talk to her about deepfakes and what they could be, and recognising them. I'm glad I did, because she then had a test on it yesterday, and one of the questions was: is this a real photograph? And she was able to say, no, that's a deepfake.
Mark Jose: I guess on the young children, if we were talking about Bailey, who's nine, then the world would be full of fluffy cat pictures; that's pretty much what he uses ChatGPT for. I'm like, you're ruining the environment; he's like, yeah, but more fluffy cats. In the real world, education. Do you want me to talk about education? Yeah, we can maybe talk about education. I think the schools are already on top of this, at least from my view with both my kids. So I've got two boys, 11 and 9, and they both have PSE, personal social education, where they're doing Internet safety and everything else. I do see some of that safety stuff coming in, and I think that's really important. But actually, I think the other thing is not closing it off to them as well. Both of my kids have supervised access to my ChatGPT account, so I sit with them and they can type in prompts and see the sorts of things they get out of it, and they can learn the right times to use it and the right times not to use it.
Nigel Hope: Yeah, just on education, more on higher education: I know in America there's a lot of scepticism about moving into higher education, because they feel they don't need it, and a lot of that is being behind on using the latest tools. Here in the North, I know at Northumbria, for example, and Andrew Munro's here, who's head of IT and Development at Northumbria, they've all got Claude AI, Claude Code, or Claude whatever, from Anthropic, which is an ethical organisation compared to some of the others, so I very much approve of that. It means that all the students are using it properly, and there are controls over how they use it, so they're going out into the world using it properly. It's also interesting, working with the university there, that we're looking to introduce a member to his team which will be an AI member, which will be onboarded in the same way and will have a face and a character, more as an experiment, to see how that works. So, education and those organisations, I think in the UK probably more than anywhere else in the world, are really keeping up to date, ahead, or preparing as far as possible for what's happening.
Peter Bull: Yeah, there's a lot of use cases where AI can help in education. A good question, on the other side of that, is from Jack, and that's how best to identify use cases for AI in a business, as the majority of use cases they can think of could be handled just by good code, for example. So, we talked about education; for a business, what's the best use of AI?
Manila McLean: You would think it's endless, really. I think the best thing to do is some experimentation, with some guardrails, and see what works best; get that feedback, whether it be from your customer base or your colleagues. But we need to be embracing it, definitely, and not shying away from it. Governance is needed, and in one of the talks earlier one of the presenters did make the point that so many organisations don't have those guardrails, don't have the governance. So, you need to be ensuring that you're implementing that within your organisation, but you need to be embracing it. Don't try and block it for colleagues, because it will be used anyway, and if it's used outside of guardrails then you're going to be subject to leaks and data being loaded into unsecure applications.
Mark Jose: I mentioned before about making big things small and making small things big; it depends on the type of business, right? But I'd say start with the problem space. I'm a consultant, inherently from an engineering background, but a consultant, and there are so many CEOs now saying we need to do AI, and I don't know what that means. What do you mean, do AI? So, no matter what you pick, start with the problem space: identify a problem first, and then figure out whether AI is the solution. Generally, AI is very, very good at making, like I say, big things small or small things big. So, I'm going to put it out there: I'm a consultant, I'm paid to produce reports. Now, I could probably come up with a report on some technology assessment or some strategic plan as an A4 page of bullet points, but if I give someone a page of A4 bullet points, paying the money they're paying, then they're going to go, well, hang on a minute, how have I got one A4 page of results? So I elaborate, right? I use AI to elaborate and make things bigger. I do get rid of the Oxford commas and the dashes and the hyphens, right? Putting it out there, full transparency. That's what people expect. Likewise, if I'm testing, I find it great for generating data sets for testing: I know what my data looks like, I know what the structure is, and actually, in that particular case, hallucinations are a good thing, because I might get a text string with characters in where I expect a number in my test data, and my system should handle that. So, generating large data sets for tests is another use case I use. And I guess making big things small: I also worked in financial services, that's how I've known Manila for about 16 years, which is heavily regulated, and therefore you're actually trying to figure out what the documents look like, right? These are like 400, 500-page documents; just getting AI to summarise some of those documents and check them against some of the procedures, those are some great use cases. But big things small, small things big.
Peter Bull: I guess, generally, yeah, AI's got a lot of potential to help us, and the best way it seems to work is with us; that's why Microsoft calls it Copilot. It's something that should work alongside us, as the key thing is not to replace a human, but to aid a human. I know as a developer it's helped cut down a lot of time; I'm saving a lot of time that I can focus on other things. It doesn't mean I haven't got anything to do half the time; it means I can potentially get twice as much done, which is great for a business, whether you've got developers or anyone within your organisation, it could be designers, it could be anyone, you know.
Nigel Hope: Yeah, you've been there a long time.
Peter Bull: Exactly. I see people coming through, and I see there's a lot of reliance on AI, they'll just trust it almost: that code looks fine to me, ship it. And it's like having to tell them: just make sure you understand it, right? That's the key point about when you're using AI to generate a bit of text or a bit of code: does it make sense? Does it make sense to you? And if it doesn't, it might not be the right thing, you know; that's the key point. AI hasn't replaced that; I think a lot of people think it has, that they can just take something. We used to do that back when Stack Overflow launched; that was something a lot of developers used, and people used to come and copy and paste code just from there. So, it's the same problem, it's just a lot easier to do now. I think as the younger generation come through, they're going to use AI tools, and they may have an advantage because they don't have all that almost out-of-date knowledge and those ways of working; they might be the ones who become the better developers, because they know how to get the best out of AI.
Nigel Hope: Show them how to use slide rules.
Peter Bull: Oh yeah, still keep with the old-school things. But I think that's a good place to end it. Thank you.
Laura Sharpe: Thank you very much, Peter, Mark, Manila and Nigel. That has been a very insightful and, I think, topical discussion, where you've raised some additional points and put some of the points from earlier in the day to bed. So, yeah, I very much appreciate that. Thank you very much, everybody. Thank you, thank you.
Conclusion
Tech Connect 2025 was a fantastic conference, and I would like to thank Laura for allowing me to be part of such an amazing event, as well as to thank my fellow panellists Mark, Manila and Nigel. Tech Connect 2025 in Newcastle upon Tyne, here in the North East of England, was the first time I have ever been on stage at a paid conference, and I really enjoyed the experience of hosting the panel and being able to share our discussion here on the RoguePlanetoid Podcast. Tech Connect 2025 also featured amazing talks from Lucy Batley, Peter Shaw, Joanna Montgomery, Peter Grainger, Daniel Roe and Hannah Underwood, along with a closing keynote from Professor Susie Imber, which you can read about at rogueplanetoid.com/articles/tech-connect-2025 or check out the link in the show notes.
Outro
Thanks for listening to the RoguePlanetoid Podcast, where each episode you will find insights about Microsoft or related platforms and technology, along with so much more, wherever you listen to your podcasts or at rogueplanetoid.com/podcasts, whether you are a beginner, an experienced professional or just interested in technology. Keep Current, Keep Coding!
- Website - rogueplanetoid.com/podcast
- X - x.com/rogueplanetoid
- YouTube - youtube.com/@rogueplanetoid
- Connect Events - weareconnectevents.com
- RoguePlanetoid.com - Article - Tech Connect 2025 - rogueplanetoid.com/articles/tech-connect-2025
RoguePlanetoid Podcast is a production of cluarantonn.com
Hosted, Written, Produced and Edited by Peter Bull
Music based on Like a Tiger by Jo Wandrini
Production Company Name by Granny Robertson