Twitter shares analysis of racist abuse that followed Euro 2020 final


Twitter claims identity verification would not have prevented the torrent of racist abuse directed at Black players on England’s football team after the side’s loss in the Euro 2020 final last month. According to an update posted by Twitter UK today, the majority of the accounts suspended for abusive content during the tournament were not anonymous.

“Of the permanently suspended accounts from the Tournament, 99 percent of account owners were identifiable,” said Twitter. The company also said that, while the racist tweets came from around the world, the majority originated from the United Kingdom.

According to Twitter, its automated tools identified and removed 1,622 racist tweets during the final and in the 24 hours after. Of the removed tweets, only 2 percent were viewed more than a thousand times, the company said.

Twitter has had a long-standing problem with abuse on the platform. Following a boycott in 2017, CEO Jack Dorsey pledged that Twitter would take a “more aggressive stance” in the enforcement of its rules. Since then, the company has rolled out more granular features in attempts to curb abuse, like letting people hide replies or limit who can reply to their tweets.

Twitter continues to work on ways to prevent abusive tweets from being sent, including rolling out reply prompts that ask people if they’re sure about using potentially harmful language. Twitter is also developing a feature that “temporarily autoblocks accounts using harmful language, such that they’re stopped from being able to interact with your account.”