AI and the Philosopher's Role
Large Language Models are the most recent and most prominent development in Artificial Intelligence, the best known being ChatGPT from OpenAI, which produced impressive results from a simple typed prompt, whether that was a question to be answered or a request to create something. Since then the technology has been adopted by companies such as Microsoft with Copilot, introduced first in Bing, then in Windows, and across Microsoft 365 from Word to PowerPoint and Excel. Other Large Language Models have been created by companies such as Google with Gemini, and many start-ups are either building their own models or building on those created by OpenAI, Google and others. These models continue to increase in sophistication and ability, and they can draw on everything from the latest publicly available information on the world wide web to your own company information, which can be kept separate and confidential.
Large Language Models are essentially language calculators: they take an input in the form of a prompt, break it down into units known as tokens, predict the most likely next token given what has come before, and repeat that process until a stopping condition is reached. In many early and current models this gives the illusion of typing, but the delay is simply the time it takes for each next token to be calculated. A setting known as temperature controls the randomness of the results: a lower temperature gives more precise output, a higher temperature gives more creative output, and turning it up too far makes the results chaotic. Balanced correctly, a Large Language Model will produce something close to the output required, and if not, users can keep refining or extending their prompt until they get what they want. Because a model will attempt to produce whatever output is requested of it, a system prompt can be added to constrain its behaviour, so a flight booking assistant is told to help only with that activity, and a customer support assistant restricts itself to help and information for its own domain and declines anything outside it.
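As a rough illustration of that token-by-token loop, here is a minimal sketch in Python; the tiny vocabulary, its scores and the stop token are all invented for the example, but it shows how temperature scaling changes how deterministic the choice of the next token is.

```python
import math
import random

# Toy "model": fixed scores (logits) for a tiny invented vocabulary.
# A real model would compute these from the prompt and everything
# generated so far; here they are hard-coded purely for illustration.
VOCAB_LOGITS = {"the": 2.0, "flight": 1.5, "is": 1.2, "booked": 1.0, "<stop>": 0.5}

def sample_next_token(logits, temperature=1.0):
    """Pick the next token: softmax over logits scaled by temperature."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    total = sum(math.exp(s) for s in scaled.values())
    probs = {tok: math.exp(s) / total for tok, s in scaled.items()}
    return random.choices(list(probs), weights=probs.values())[0]

def generate(prompt, temperature=1.0, max_tokens=10):
    """Repeat next-token prediction until a stopping condition is reached."""
    output = []
    for _ in range(max_tokens):
        token = sample_next_token(VOCAB_LOGITS, temperature)
        if token == "<stop>":  # the stopping condition
            break
        output.append(token)
    return prompt + " " + " ".join(output)

print(generate("Book me a flight:", temperature=0.2))  # more precise
print(generate("Book me a flight:", temperature=1.5))  # more creative
```

With a low temperature the highest-scoring token wins almost every time; with a high temperature the unlikely tokens are chosen far more often, which is where output starts to drift towards the chaotic.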
Large Language Models do have problems, because they are the sum of all the human knowledge they could ingest from public sources, and they offer that knowledge to any system built on top of them. DPD learned this when its delivery assistant gave more than just information about deliveries: it would swear and write songs about the company in a less than flattering manner. Another recent example was Air Canada's flight booking chatbot, which told a customer something that was not consistent with the airline's policy. The case ended up in court, where the airline tried to defend the mistake by arguing it was not responsible for the answers the chatbot gave, implying the chatbot was a separate legal entity responsible for its own actions. The court disagreed, treating it as just another source of information that should be as reliable as a page on the airline's website. The chatbot drew on the airline's own data, but it had hallucinated the information it gave the customer, which turned out not to be correct yet still had to be honoured by the airline.
Large Language Models also have another problem: rather than getting things wrong, the information they have ingested reflects the less idealistic parts of society, and all its injustice, bias and unfairness is reflected back at users like a mirror. We expect AI to be a perfect reflection of what we want, but it instead returns results tarnished by sexism, racism and other inequalities. An example of this is a recruitment website that used AI to help target jobs and was recommending secretarial roles to women more than men; in the data it had ingested, those were historically the people who most often held such jobs, so the AI perpetuated the stereotype. The job should have been shown equally regardless of gender, but the data did not tell the model that, so it did not behave that way. In that case the system was explicitly steered through its system prompt to recommend the job evenly to everyone regardless of gender, to help overcome the bias inherent in the data behind the recommendations. Large Language Models are not the only models that have had problems; image generation models have too, as Google found with Gemini, which tried to make sure the results of user prompts were diverse images but sometimes did so where that was completely inappropriate. It was a noble aim to ensure Gemini did not return stereotypical images based on user prompts, but it did not consider the contexts where that is not appropriate, and again the model would have needed to be told not to do it.
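As a rough sketch of how that kind of steering can look in practice, here is a hypothetical system prompt passed through the OpenAI Python client; the model name, the wording of the prompt and the recruitment scenario are assumptions made up for illustration, not the actual system described above.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

# Hypothetical system prompt steering a job-recommendation assistant
# away from the bias present in its historical training data.
SYSTEM_PROMPT = (
    "You are a job recommendation assistant. Recommend roles based only on "
    "a candidate's skills, experience and stated preferences. Never use or "
    "infer gender, age or other protected characteristics when ranking or "
    "suggesting roles, and present every role as equally open to everyone."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    temperature=0.2,      # keep recommendations precise rather than creative
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "I have five years of office administration "
                                    "experience. What roles should I look at?"},
    ],
)

print(response.choices[0].message.content)
```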
Philosophy is where the fundamental questions about existence, reasoning, knowledge and more are discussed; it is literally the love of wisdom, and it is that wisdom that is sorely needed with the rise of Large Language Models. They reflect society as it is all too well, with all its historical biases: at the moment a model is often a perfect reflection of the information it has been given or has access to, when it needs to be an imperfect one, told to override the bias contained in that information. These models need to be better than we were to allow us to be better than we are. Large Language Models developed now or in the future need a philosopher's role to ensure they reflect the ideals of society, not just its ideas, and to understand that even if one group was historically favoured over another, that should not continue. Developers can ensure a Large Language Model works at all so that users can get the information they want, but the role of a philosopher may be required to make sure these models behave as they should, not as they have been. We are rightly appalled when images are more or less diverse than they should be, when answers are more biased than they should be, or when results are more warped by history than they need to be. We can address this by bringing humanity into Artificial Intelligence through philosophy: when asked about historical information a model should be accurate and show us the values of the past, but when producing new information it should be based on the best possibilities. That may seem idealistic, but it may produce a Large Language Model whose results help better society rather than perpetuate its ills.
Philosophers, or philosophy, should be a key part of designing Large Language Models. As a developer myself I can understand how to write a system prompt or programme the interface and get it to function, and others can drive the creation and innovation of these models and make sure they include as much or as little information as they need to operate as expected. But the philosopher can make sure the system produces output that represents its goals, whether that is giving only information about parcel deliveries, accurate information when booking flights, recommending jobs evenly and fairly, or producing content with the appropriate optimism or pessimism. It sadly is not possible to solve all the injustice, unfairness, inequality and bias in the world, nor should it be hidden away where it should not be. The goal of anything driven by Artificial Intelligence should be to deliver what we need and want most, even if that means an imperfect reflection of us that produces a more beneficial and balanced outcome, one that may even, over time, allow the reflection itself to improve as the information becomes less tainted by the mistakes of the past, so that future Large Language Models could be greatly improved thanks to philosophy, the love of wisdom, and the philosopher's role.