The launch of ChatGPT, a revolutionary AI chatbot released by OpenAI, has been met with widespread excitement and anticipation. So much so that one million people signed up in its first week, and its servers have struggled to keep up with demand ever since.

This advanced technology goes beyond traditional search engines like Google, offering specific answers to questions rather than a list of documents. The potential applications for ChatGPT are vast, from providing personalised recommendations to writing computer code and even passing university exams!

But while the excitement surrounding ChatGPT is understandable, it’s important to temper our expectations and consider the limitations of this technology.

One of the most significant limitations is that ChatGPT lacks an understanding of the meaning behind its responses, which means it can spread inaccurate or misleading information. This is especially dangerous in a world where people are increasingly reliant on online sources for information and are not well-versed in evaluating the credibility of those sources.


For example, when asked if there is a month with more than 40 days, ChatGPT might initially respond with: “Most months have 30 or 31 days, but February has 28 days (or 29 days in a leap year)”, which is correct. But interestingly, it then continues by saying, “but there is only one month that has 40 days, and that is October”.

This answer is wrong, yet ChatGPT presents this as fact. Thus, it’s not hard to imagine a malicious actor using ChatGPT to generate convincing but false narratives.

Another limitation of ChatGPT is its authoritative language style, which can make its incorrect responses appear more credible. This further emphasises the need for users to evaluate the information provided by the system critically.

In education, it is only a matter of time before children discover and use this tool. Therefore, educators must shift their approach to questioning, emphasising thought-provoking inquiries rather than those that simply require memorisation.

It is also essential for parents to supervise their children's use of such technology to ensure that they are using it appropriately and not just copy-pasting the answers; otherwise, they will not learn anything!

It is also vital to consider the implications of such a powerful technology on the job market. ChatGPT could automate many tasks typically done by humans, such as writing, journalism, and even legal work. The future of work might be fundamentally different with such a chatbot, leading to significant redundancies.

In conclusion, while ChatGPT is a highly advanced technology with great potential, it’s important to remember that it’s not a panacea for all our problems. We must critically evaluate the information it provides and be aware of its limitations.

A new version of GPT is brewing on the horizon, which will be 1,000 times more powerful. Thus, it is vital to learn to live with these tools, use them responsibly and not let them replace our ability to think. They are meant to enhance our productivity, not to reduce us to intellectual zombies!

Alexiei Dingli is a renowned AI expert and professor at the University of Malta with over 20 years of experience. He has helped many companies implement AI solutions and received recognition from international organisations such as the European Space Agency, the World Intellectual Property Organization and the United Nations. He has also played a significant role in the Malta.AI task force, working to make Malta a global leader in AI.


Did you know?

• Global corporate investment in artificial intelligence rose from $67.85 billion in 2020 to $93.5 billion in 2021.

• One of the first AI programs, DENDRAL, was created in 1965 by Edward Feigenbaum, Joshua Lederberg and Carl Djerassi at Stanford. It automatically identified the structure of unknown organic molecules from mass-spectrometry data.

• One of the main problems with AI is that even its creators cannot understand some of the decisions that AI software makes.

• A supercomputer identified 77 chemicals that could prevent coronavirus from spreading.

For more trivia, see: www.um.edu.mt/think



Sound bites

• Seeking to better understand the origins and movement of bubonic plague, in ancient and contemporary times, researchers have completed a painstaking granular examination of hundreds of modern and ancient genome sequences, creating the largest analysis of its kind.

https://www.sciencedaily.com/releases/2023/01/230119112819.htm

• The artificial intelligence algorithms behind the chatbot program ChatGPT – which has drawn attention for its ability to generate humanlike written responses to some of the most creative queries – might one day be able to help doctors detect Alzheimer’s disease in its early stages. Research recently demonstrated that OpenAI’s GPT-3 program can identify clues from spontaneous speech that are 80 per cent accurate in predicting the early stages of dementia.

https://www.sciencedaily.com/releases/2022/12/221222162415.htm

For more sound bites, listen to Radio Mocha on www.fb.com/RadioMochaMalta/.
