An Australian mayor may become the first person to sue the creators of ChatGPT over claims the AI-powered chatbot falsely labelled him a criminal.
Brian Hood sent a legal notice to parent company OpenAI last month after its web app wrongly implicated him in a bribery and corruption scandal -- a crime he actually blew the whistle on.
ChatGPT has thrust artificial intelligence tools into the headlines, with intrepid users harnessing its impressive power for everything from answering mundane emails to cheating on school exams.
But the mayor of Hepburn Shire -- an area about two hours' drive northwest of Melbourne -- said Thursday that ChatGPT's flaws had shocked him.
"I was horrified; I was stunned at first that it was so incorrect," Hood told Australia's national broadcaster.
He said he had been alerted to the "disturbing" results from the app by friends and colleagues.
"It's one thing to get something a little bit wrong; it's entirely something else to be accusing someone of being a criminal and having served jail time when the truth was the exact opposite," he said.
Before taking office, Hood had helped expose bribery and other crimes at his former employer, Note Printing Australia, leading to criminal charges against several people.
But when ChatGPT was asked about the bribery scandal and Hood's role, it returned results that falsely claimed he had been jailed for corruption, his lawyers said in a statement.
When AFP put a comparable question about the mayor's role to the free version of the AI tool on Thursday, it was still returning a similar false answer.
Hood said he had yet to receive a response from OpenAI after his lawyers issued a legal notice on March 21 demanding a fix to avoid court action.
OpenAI has not yet responded to a request for comment, but its chatbot does carry a disclaimer warning that it "may produce inaccurate information about people, places, or facts".
James Naughton from Gordon Legal, who is representing Hood, said the information supplied by ChatGPT was defamatory and had damaged the mayor's reputation.
"This critical error is an eye-opening example of the reputational harm that can be caused by AI systems such as ChatGPT," Naughton said.