Balancing act: Ethical considerations and responsible use of AI in iGaming

Ethical considerations are key

April 11, 2024 | Times of Malta | 5 min read

Brought to you by Alexander Assies


AI in its many shapes and sizes is taking the world by storm. Only two years after ChatGPT gained 100 million users in record time, it is hard to find a company that is not looking into how it can apply AI in its industry or niche.

One of the most visible applications of AI comes in the form of assistants and AI-generated content: a chatbot on a website helping you find the information you want, a tool in your office software that compiles reports and adds graphs and tables in seconds, or a generator that can create text, images and even videos from a single line of text.

In another sense, many companies have already been using AI for years. Think of the algorithms that streaming services and social media platforms use to suggest more for users to watch or listen to. Increasing concerns about screen time and social media use not only show how effective these techniques are, but also that we should start considering boundaries.

One of the most discussed algorithms of the last few years must be TikTok’s, which is even said to be able to ‘read your mind’. Unfortunately, this knowledge is not used to give viewers exactly what they want or need to see, but rather to serve them videos that make them want to watch the next one, and the next.

What is responsible AI?

Responsible AI is about taking the broader impacts of AI on society into consideration and preventing negative effects. Not only can AI be ‘too successful’, leading to overconsumption as discussed above, it also comes with certain other fundamental risks.

Over the last two years, many examples of ‘AI hallucinations’ have emerged. This is when Large Language Models (LLMs) make up, or create, false information. These hallucinations range from entirely harmless mistakes, such as saying that Tuesday comes before Monday, to potentially deadly ones, such as suggesting a recipe for a refreshing drink that is in fact a toxic cocktail.

Other life-changing impacts can occur when an AI system is biased due to biased training data, for example in organisations where AI is used to detect fraud. There are several heartbreaking examples of people whose lives were ruined after being falsely flagged by an algorithm because of a foreign surname or physical characteristics that such systems struggle with.

In other words, the main problem with AI revolves around trust. When can we rely on AI systems to provide information that is good enough to confidently base important decisions on? Responsible AI therefore needs to ensure that AI systems are open, fair and safe.

Openness

Any system should be open and transparent about how it works: how accurate are its predictions; are they traceable; can we understand what is happening, when and why? This is needed to help recognise bias and make the necessary adjustments by ‘educating’ the system.

Fairness

As AI is increasingly used for important decisions, it must be ensured that it is trained on data that is relevant and representative of the people it will affect. Not only does the data need to be diverse, but so do the teams that develop AI. Different perspectives are essential in helping to recognise and address bias.

Safety

Finally, AI must be designed to be safe. Whether intentional or not, AI systems must be able to withstand manipulation that could lead to dangerous outcomes. Privacy should of course also be taken into consideration when processing data: not only do people share ever more personal or sensitive information with such systems, there is also the risk that a model reveals sensitive information it was trained on.

For these reasons, in many fields, including the iGaming industry, a hybrid application of AI, in which AI assists humans rather than replacing them completely, is the best way to ensure its ethical and responsible use.

Applications of AI in iGaming

In the iGaming sector, AI in its many forms could be applied almost anywhere; however, the industry has been reluctant to do so. A possible reason is that operators are wary of AI performing too well, which could lead to several negative effects and cross the ethical line. Let’s have a look at some examples.

Fraud and security

AI can analyse vast amounts of data in near real time, which makes it an ideal tool for spotting patterns that could indicate problems. Although preventing and fighting fraud inevitably benefits an organisation, bringing in AI could have several drawbacks, such as:

  • Risk of bias and discrimination
  • Overreliance and complacency
  • Regulatory compliance
  • Financial and resource requirements

Proper implementation of AI that takes all these matters and more into account would require an enormous investment of both time and money. The question is whether the current solutions are not more cost-effective for the moment.
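
As a rough illustration of the kind of near-real-time pattern detection described above, the sketch below flags a deposit that deviates sharply from a player’s recent history. The field names, thresholds and the simple z-score rule are illustrative assumptions for the sake of the example, not a description of any operator’s actual system.

```python
from statistics import mean, stdev

def flag_suspicious(deposits, threshold=3.0):
    """Flag the latest deposit if it deviates sharply from recent history.

    `deposits` is a list of amounts, oldest first; `threshold` is a
    z-score cut-off. Both the schema and the cut-off are assumptions.
    """
    if len(deposits) < 5:
        return []  # too little history to judge

    history, latest = deposits[:-1], deposits[-1]
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return []

    z = (latest - mu) / sigma
    # A large z-score only suggests review by a human analyst --
    # the hybrid approach discussed above, not an automatic block.
    return [("review", latest, round(z, 2))] if z > threshold else []

# Example: a player who normally deposits around 50 suddenly deposits 500.
print(flag_suspicious([45, 50, 55, 48, 52, 500]))
```

Even a toy rule like this shows where the drawbacks listed above come in: the cut-off has to be chosen, audited and explained, and a human still has to decide what a flag means.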

Responsible gambling

Recognising and acting on problematic behaviour is the key to responsible gaming; however, it directly touches the bottom line. For this reason, responsible gaming in practice is mostly aimed at window dressing and compliance, not at protecting players as well as possible. Using player data to recognise problematic gambling would raise the bar to a level where the financial impact would be felt immediately.

This could, however, also be seen as an opportunity to raise the bar when it comes to business ethics. Truly acting with the interests of your customers in mind is always the most sustainable approach in the long term.
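
As a minimal sketch of what using player data to recognise problematic gambling might look like, the example below applies a few simple rules to a week of session data. The field names and thresholds are illustrative assumptions, not regulatory limits, and any marker would prompt human follow-up rather than an automated sanction, in line with the hybrid approach described earlier.

```python
# Illustrative sketch: rule-based markers of potentially problematic play.
# Schema and thresholds are assumptions for demonstration only.

def assess_player(sessions):
    """Return risk markers from session dicts with 'duration_min',
    'deposits' and 'hour_started' keys (assumed schema)."""
    markers = []

    if sum(s["duration_min"] for s in sessions) > 20 * 60:
        markers.append("more than 20 hours played this week")

    if sum(s["deposits"] for s in sessions) > 10:
        markers.append("frequent re-deposits within sessions")

    if any(2 <= s["hour_started"] <= 5 for s in sessions):
        markers.append("late-night sessions")

    # Markers trigger a human review and a conversation with the player.
    return markers

week = [
    {"duration_min": 300, "deposits": 4, "hour_started": 22},
    {"duration_min": 540, "deposits": 5, "hour_started": 3},
    {"duration_min": 420, "deposits": 3, "hour_started": 23},
]
print(assess_player(week))
```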

Personalisation and marketing

This brings us back to our first example. AI systems with their feedback loops can lead to an irresistible offering, giving users exactly the right dopamine rush at the right time to get them hooked and wanting more. That is one thing when it comes to social media, but quite another when dealing with demerit goods: goods which are harmful, such as smoking, drinking and gambling.

Offering your customers exactly what they want, how they want it, makes them want more, but the question is when enough is enough, taking the risks of overconsumption into account. Can the player rely on the casino, or can the casino rely on the player, when it comes to setting responsible limits?

To conclude, ethical considerations are key when it comes to the iGaming industry implementing AI. AI certainly offers many possibilities, but these stand to benefit the player more than the operators and, most importantly, their bottom line.

Disclaimer: Play responsibly. Players must be over 18. For help visit https://www.gamcare.org.uk/
