ChatGPT was launched just two short years ago, but its impact on business and society has been undeniable. Whether you hate it or love it, artificial intelligence (AI) is a part of all our daily lives, both professional and personal.
However, even setting aside the ethical and environmental concerns surrounding this quickly evolving technology, AI isn’t perfect. That would be perfectly acceptable for any other tool viewed through a lens of innovation: a work in progress, carefully tested as it’s implemented into different systems. Yet the pressure on businesses to adopt AI is mounting, and many are rushing to bring the technology into unprepared systems and workflows, which can lead to some catastrophic situations.
Why do we feel the pressure to AI-everything?
More than just hype or a misguided thirst for innovation, business leaders are increasingly concerned with the very real risk of falling behind. Technology adoption comes in waves, and early adopters tend to gain a competitive advantage over slower, more cautious investors.
The reality is that AI is here today. It’s not a futuristic utopia or a dystopian scary story. It is, at its core, simply a tool that can and should be utilised to solve real business problems and to free up humans to work on what they do best, eliminating pointless, manual tasks.
But AI isn’t human. Nor, we might add, does it try to be. And it’s in deciding just how much responsibility we should place on artificial intelligence that most problems occur.
Language models have an overconfidence problem
Although some steps have been taken to ensure that Large Language Models (LLMs) such as ChatGPT inform you when their answers may be inaccurate or further human input is needed, the vast majority of queries are answered with absolute certainty. This includes a phenomenon we call hallucinations: instances where the AI presents a false statement as factual. In addition, AI models carry inherent biases, because the people whose writing they learn from have them too; since the models are trained on real human information, those biases are passed on to their answers. Despite all this, if you ask an LLM a question, it’s unlikely to come back with “I don’t know” or “I don’t have enough information to answer accurately”. Instead, you will be given a fully confident output which may or may not be real.
The results of this overconfidence range from making up imaginary references to back up specific arguments, to making incorrect predictions, sharing false news stories and even offering misleading or downright dangerous advice.
This can have very real consequences for businesses, as Air Canada learned after its chatbot promised a passenger a discount that didn’t exist in 2022. While the airline tried to argue that the chatbot was “responsible for its own actions”, a Canadian tribunal ultimately ordered it to pay the customer damages and fees.
In another instance, a lawyer in the US was fined for using ChatGPT to draft a legal brief that turned out to be riddled with false references and citations.
How do LLMs work?
One of the main criticisms of LLMs is that they are simply a more complex version of autocomplete, trying to “guess” what humans want them to say rather than actually providing accurate information. And although that may be true to some extent, it is also a gross oversimplification.
LLMs are only as accurate as the data they are trained on, because they work through pattern detection. Most LLMs are built on the transformer architecture, a design that lets the model weigh each term in the input according to its relevance to the rest of the context. This means that, at its current stage of development, an AI like ChatGPT can’t understand whether what it’s saying is true or false, merely how well an output fits the patterns in the information it was trained on. So if the information used to train the LLM is inaccurate, biased or otherwise unfounded, so will the output be.
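To make that concrete, here is a toy sketch of the attention step that gives the transformer its name. The tokens and numbers are invented for illustration; real models learn these weights across billions of parameters.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy 4-dimensional vectors standing in for the tokens of a prompt.
tokens = ["the", "cat", "sat"]
q = np.array([[0.1, 0.2, 0.0, 0.4],   # queries: what each token is "looking for"
              [0.9, 0.1, 0.3, 0.0],
              [0.2, 0.8, 0.5, 0.1]])
k = q.copy()                           # keys: in self-attention, tokens attend to each other
v = np.random.default_rng(0).normal(size=(3, 4))  # values: the content to blend

# Each token scores every other token for relevance (scaled dot product),
# and softmax turns those scores into attention weights that sum to 1.
scores = q @ k.T / np.sqrt(k.shape[1])
weights = np.apply_along_axis(softmax, 1, scores)

# The output is a relevance-weighted blend of the value vectors:
# a statistical "what fits here", with no notion of true or false.
output = weights @ v
print(np.round(weights, 2))
```

Nothing in that computation checks a fact; it only measures how well pieces of text fit together, which is exactly why a confident-sounding output can still be wrong.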
However, the pool of information from which AI learns is ever-growing, since – unless otherwise stated – every time a user inputs text into an LLM, that data can be used to further train the model (which is also a reason businesses should be careful about how they use AI tools). As such, AI models are becoming increasingly capable, but it is unclear whether we will ever reach a world where we can expect close to 100% accuracy.
Just recently, however, OpenAI previewed “o1”, a generative pre-trained transformer that allows ChatGPT to “think” through and “consider” its replies before offering them to the user. This has brought greatly improved accuracy, fact-checking, mathematical capability and more.
As AI technology continues to evolve every single day, businesses should prepare for the different scenarios in which AI will directly affect their day-to-day operations. Doing so ahead of the curve provides a competitive edge and better preparation.
Security considerations
As we’ve previously mentioned, AI models are constantly being fed new information by their users. This means that individuals and companies who may not be aware of how the technology works are often populating external training datasets with sensitive or confidential information.
If an employee runs a confidential document through an AI for whatever reason, that document can end up in the model’s training data, and there’s really no way to “get it back”. The same thing can happen with proprietary code, which, when run through ChatGPT, can become part of the training database. This actually happened to technology giant Samsung, which had to ban ChatGPT after employees entered sensitive code into the platform. But there’s even more to it than that: LLMs that access the internet can now read websites, which means anything published online is potentially training data.
This naturally poses an immense challenge for data privacy and cybersecurity, as well as for regulatory compliance obligations such as GDPR. And because LLMs can also help with tasks such as writing code, intellectual property protection becomes an issue too.
There are ways around this, of course, but companies still need to be aware of the potential risk, especially those handling particularly sensitive information. OpenAI, for example, says it doesn’t use data from its business and API tiers to train its models. You can also opt for a provider you already trust, giving you more confidence that your data will not be used for training purposes. The safest option might even be deploying your own LLM on a private server, which should carry no additional risk beyond the normal cybersecurity risks of any software. However, you should always take risk into consideration when using any AI tool and, ideally, have a cybersecurity expert help you navigate the pros and cons of this technology.
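Another simple safeguard is to strip obviously sensitive strings before anything leaves your systems. Below is a minimal sketch of a pre-send redaction filter; the patterns and the send_to_llm call are illustrative placeholders, not a complete data-loss-prevention solution.

```python
import re

# Illustrative patterns only -- a real deployment would cover far more
# (names, addresses, internal identifiers) and be properly tested.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace likely-sensitive substrings with labelled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

prompt = "Refund client john.doe@example.com, IBAN GB82WEST12345698765432."
safe_prompt = redact(prompt)
print(safe_prompt)
# -> "Refund client [REDACTED EMAIL], IBAN [REDACTED IBAN]."
# Only safe_prompt would ever leave the building:
# response = send_to_llm(safe_prompt)   # hypothetical API call
```

A filter like this would sit alongside access controls and a vetted list of what may be shared at all; regular expressions alone won’t catch every leak.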
Should businesses use AI?
The short answer is “yes, but -”.
Like most aspects of business and life, context is everything. On the one hand, businesses that fail to adapt to technology will always struggle to keep up with the rest of the market, and run the very real risk of falling behind. On the other hand, that is not to say businesses should blindly incorporate technology into their operations without careful thought and consideration.
Companies should think about why they’re integrating AI in the first place. If the goal is client satisfaction or engagement, make sure your customers actually want AI before you implement it. It’s easy to get caught up in the hype and think you know the answer when in reality you don’t.
Consider risks versus benefits first and foremost, both internal and external. Some industries, particularly highly regulated ones, will need to consider that people with ill intent are also using AI models; if a business needs to actively prevent nefarious activity, it must adapt to a quickly evolving landscape. A good example is Anti-Money Laundering (AML) and fraud prevention, an essential part of financial services and other regulated businesses, which is currently in an arms race to keep up with novel ways of committing financial crime.
Make AI part of the conversation before moving on to solutions. Create an AI mindset throughout the business and integrate it into company culture, even if the technology isn’t ready to keep up yet. For example, make it a habit to ask whether AI could help solve the business problems that pop up in meetings. Use this time to build the “AI muscle” so you’re ready to implement when the time is right. Educate yourself and your employees, and make sure everyone is aware of both the opportunity and the threat that come with this technology.
Which brings us to why businesses should adopt AI, albeit in a carefully planned manner – because you might not be able to afford not to.
In compliance, for example, the cost of getting it wrong is in most cases too high right now to risk using AI. In the near future, however, you may not be able to remain compliant without it, given the way each market evolves. In that case, making the best of expert guidance and ensuring every single step towards modernisation is rooted in rigorous testing and training will ensure businesses can keep up with market expectations and are ready for new challenges.
AI brings plenty of positive changes to the business world when implemented correctly. Companies can free up their teams for more complex, focus-intensive work by eliminating time wasters like manual, repetitive tasks. Businesses are thus able to cut unnecessary costs, personalise at scale, be more creative, bring broader ideas to the table and make more informed decisions.
How can we mitigate the risk?
Technology in business is usually built in a silo: either outsourced or built in-house but separately from other business activities and client relationships. However, LLMs and AI models are only as good as the data they are built from, so keeping the technology in close proximity to the business side of operations is key.
Secondly, testing continuously and gathering more data that can then be used to fine-tune the model or ground it with retrieval-augmented generation (RAG) is imperative, as is keeping humans in the loop as much as possible until you can confidently reduce their involvement. AI is built as a support tool for humans, not as a replacement; these systems need supervision and guidance, as well as constant, up-to-date data inputs.
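For the retrieval part, the core idea fits in a few lines. The sketch below stubs out the embedding and generation steps with toy placeholders (embed here is a deliberately crude stand-in, and the commented-out llm call is an assumption, not a specific provider’s API): your documents are turned into vectors, the most relevant ones are fetched for a question, and only those are handed to the model as grounding context.

```python
import numpy as np

# The internal policy corpus -- in practice this would be your own,
# regularly reviewed documents.
documents = [
    "Refund requests over EUR 500 require manager approval.",
    "Bereavement fares can be requested before travel only.",
    "Office hours are 9am to 5pm CET.",
]

def embed(text: str) -> np.ndarray:
    """Toy embedding: a normalised letter-frequency vector.
    A real system would call an embedding model instead."""
    vec = np.zeros(26)
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the question."""
    scores = doc_vectors @ embed(question)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

question = "Can I get a bereavement fare after my flight?"
context = "\n".join(retrieve(question))
prompt = (
    "Answer using ONLY the policy below.\n"
    f"{context}\n\nQuestion: {question}"
)
# response = llm(prompt)   # hypothetical call to your model provider
print(prompt)
```

This is also where the human-in-the-loop point bites: the model only answers as well as the documents you retrieve for it, so the corpus itself needs constant curation and review.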
Don’t forget to consider whether LLMs are the solution you are looking for at all. They are the most prominently featured technology of the moment, but other approaches, such as more traditional machine learning models, might be better suited to your specific situation. Make sure you opt for the most suitable technology for your business, rather than the “flavour of the week”.
Similarly, take into account the “personality” of the LLM or model you’re implementing and the biases it might have inherited. Most LLMs today have been built by fairly homogenous teams and trained on specific sets of data, which will influence their outputs and answers. Be mindful of this, particularly when using AI to reach out to customers directly.
Finally, consider business continuity and AI regulations. Evaluate the use of AI against external and internal risks and your specific set of circumstances. Establishing an AI governance protocol might be helpful in ensuring your company keeps up with current technology while minimising risk and making sure you are ready to face the threats and opportunities of the coming years.
Eman Zerafa from Cleverbit Software