Facebook ramps up defences against ugly intrusions into the platform ranging from hate speech to graphic violence
Facebook has committed to spending more on technological solutions to help the social media giant find and remove inappropriate or illegal content on its platform.
Artificial intelligence (AI) has been playing a key role in helping Facebook take action on content in six areas: graphic violence, adult nudity and pornography, terrorist propaganda, hate speech, spam, and fake accounts.
Simon Harari, public policy manager for content, Facebook APAC, said Facebook uses a combination of technological means and reports from the user community to identify violating content on the platform.
“We are also investing heavily in AI to help proactively detect violating content, complementing the reports we receive from our community,” said Harari.
He said fake accounts and spam are mostly detected by AI technology, while hate speech is the most challenging area for AI detection because of the local context involved.
According to Facebook's Community Standards Enforcement Report covering January to March 2018, the company disabled 583 million fake accounts, usually within minutes of registration – 99 per cent of them before they were reported.
“We took down 837 million pieces of spam, 21 million pieces of adult nudity, and 3.5 million pieces of violent content, amounting to nearly 100 per cent, 96 per cent and 86 per cent, respectively, that we found and flagged before they could be reported,” said Harari.
He said the firm removed 2.5 million pieces of hate speech, some 38 per cent of which were flagged by technology, and took down 1.9 million pieces of terrorist propaganda, 99.5 per cent of which were identified by technology.
“The results showed that our AI systems are getting extremely good at identifying some types of violating content, such as fake accounts and nudity, but that areas like hate speech, which require more local context and understanding, still need human review. The next report will be published this November,” said Harari.
Hate speech remains the most challenging problem for Facebook to handle with technology, since judging it depends heavily on local context.
“We do not allow hate speech on Facebook because it creates an environment of intimidation and exclusion and in some cases may promote real-world violence,” Harari said.
“We define hate speech as a direct attack on people based on what we call the protected characteristics under Facebook’s community standards, including race, religion, national origin, gender identity, caste, sex, ethnicity, serious disability, and sexual orientation.”
Facebook’s content policy team is responsible for developing Facebook’s community standards. The team is based in offices around the world and is made up of subject matter experts on topics such as terrorism, hate speech, and child safety. The team also convenes the Content Standards Forum to discuss and pass policy recommendations.
“Facebook's mission is to give people the power to build community and bring the world closer together. We know that people will only come to Facebook to do this if they feel safe when using our services, which is why our community standards are so crucial in helping us achieve our mission,” said Harari.
“Facebook has always had a public version of our community standards available to the community around the world. We want to be absolutely clear that harmful language, images, and videos have no place on Facebook.”
He said that earlier this year the company published a much more detailed version of the community standards for three main reasons. First, to make it clear that Facebook “absolutely does not allow harmful content such as hate speech, bullying and extremist content on the platform”. Second, to show where Facebook draws the line on more nuanced and complex issues. And last, to start a dialogue about Facebook’s community standards and encourage feedback.
“Our content policy team is responsible for developing our community standards. They are a diverse group made up of people from many different backgrounds, including former prosecutors, human rights lawyers and safety experts, and they are based in offices around the world,” said Harari.
“But they don't develop our community standards in a vacuum. They seek out regular input from external experts, including NGOs, safety experts and academics, who we consult for every new policy recommendation.”
Facebook has published more details about the community standards and released a Community Standards Enforcement Report for the first time. Reported content is assessed by content reviewers located across the globe, working 24 hours a day, seven days a week, in over 50 languages, including Thai.
“We have doubled the number of people working around the world on safety and security issues at Facebook, which will reach 20,000 by the end of this year. We are also working to enhance the work we do proactively, training our classifiers and using machine learning to automatically surface images and keywords for human review,” said Harari.
Alongside these detailed community standards, Facebook this year also expanded its appeals channels, allowing the community to ask Facebook to take a second look at take-downs of individual posts when there is disagreement over the decision.
Channels for appeal
“We announced this year that we are expanding appeals, giving people the opportunity to appeal against content decisions,” said Harari.
“People on Facebook will now be able to appeal content decisions. We have launched appeals for content that was removed for nudity, sexual activity, hate speech, bullying and harassment, and violence. Until now, appeals were only available to people whose profiles, pages, or groups had been taken down.”
Facebook has also put more focus on harassment, bullying and suicide.
Clair Deevy, director of policy programmes for Facebook APAC, said that Facebook had updated its policies on the bullying of private individuals, and that content aimed at a public figure in Messenger would also be treated as harassment.
“The violating messages will not be deleted, but we will let the sender know that they have violated our policies and will prevent them from sending messages for 24 hours, and the victim will also receive a message letting them know that we took action against the perpetrator,” said Deevy.
She said that earlier this year the company expanded its policies to guard against the harassment of young public figures on Facebook.
“In the coming weeks, we will further expand our policies to better protect public figures against harassment regardless of age,” said Deevy.
On the issue of suicide, Deevy said this tragedy affects people all over the world, and that is why Facebook is also focused on suicide prevention. She said Facebook has been working with experts in suicide prevention for over 10 years to make sure that it offers the best possible support to the community. This includes meeting with experts around the world to discuss the best way to help people in crisis when they are using Facebook Live. Experts stress the importance of not cutting off the stream too early, so that “people viewing can reach out and people broadcasting can receive support”.
Deevy said Facebook is looking into pattern recognition to help accelerate reviews and prioritise reports of potential suicide risk faster.