Technology and digital innovation can play a significant role in combatting hate speech when paired with robust legislation that protects end-users, Meta public policy manager Flavio Arzarello has said.

Arzarello, who works for the company previously known as Facebook, was speaking during a panel discussion on safeguarding against digital threats at a conference organised by the Malta Communications Authority. 

He said that, over the years, Meta has invested heavily in combatting harmful content on its platform and, in the last three quarters, managed to reduce hate speech on Facebook by 50 per cent.

“Protecting users from harm has always been one of our top priorities. Over the years, Facebook has had to work to address serious ethical matters and major societal issues that balance digital privacy with security,” he said. 

“Where do we draw the line between freedom of expression and harmful content? How do we distinguish between what is truthful and what is a distraction? We are often in a position to make decisions on content which we would much rather avoid making alone. So, this is why we ask for more regulations because we are fully aware that we, as a platform, should not be in charge of these decisions.”

Arzarello also highlighted that there was a difference between unlawful content and what Facebook considered to be harmful content, primarily because what is unlawful is set out by law and when contested is decided by a court. 

“What we consider to be harmful content depends on a number of factors,” he continued.  “The same content in different contexts can be either harmful or hilarious and this is challenging for us to interpret.” 

He said that robust community standards and innovative technology, running primarily on artificial intelligence, have been game-changers in combatting the spread of harmful content, but that the inclusion of human moderators remains crucial in keeping things balanced.

“To enforce community standards, we employ automatic tools, AI technology with human reviewers,” Arzarello said. 

“Human reviewers will remain crucial because they are the only solution that completely understands the element of balance. We are aware that achieving the optimal solution for everyone is almost impossible but what we can do is do better and have more people working on it.”

Arzarello said that, with 40,000 human reviewers working on safety and security and an investment of $13 billion since 2016, the prevalence of hate speech stood at 0.03 per cent, or about three out of every 10,000 pieces of content viewed, with Facebook taking down three million fake accounts with the help of its advanced AI systems.
