Facebook said it took down 583 million fake profiles in the first three months of the year, usually within minutes of their creation.
The company also scrubbed 837 million pieces of spam and acted on 2.5 million instances of hate speech, it said on Tuesday in its first-ever report on how effectively it’s enforcing community standards.
Facebook came under intense scrutiny earlier this year over the use of private data and the impact of unregulated content on its community of 2.2 billion monthly users, with governments around the world questioning the Menlo Park, California-based company’s policies. Today’s report, which will come out twice a year, can also show how well Facebook’s artificial intelligence (AI) systems learn to flag items that violate the rules before anyone on the site can see them.
The conclusion from the first metrics: some problems are better suited to computerised solutions than others. Almost 100% of the spam and 96% of the adult nudity were flagged for takedown, with the help of technology, before any Facebook users complained. But only 38% of hate speech was noticed by the AI. Hate speech is harder to deal with because computers often can't understand the meaning of a sentence — such as the difference between someone using a racial slur to attack somebody, and someone telling a story about that slur.
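The percentages above are what Facebook calls a proactive rate: of all items it ultimately acted on, the share its systems flagged before any user reported them. A minimal sketch of that arithmetic, using round figures taken from this article (the exact flagged counts are illustrative assumptions, not disclosed numbers):

```python
def proactive_rate(flagged_before_report: int, total_actioned: int) -> float:
    """Share of actioned items that automated systems caught first."""
    return flagged_before_report / total_actioned

# Hate speech: the article reports 2.5 million instances acted on,
# of which 38% were flagged by AI first — i.e. roughly 950,000 items.
hate_rate = proactive_rate(flagged_before_report=950_000,
                           total_actioned=2_500_000)
print(f"hate speech proactive rate: {hate_rate:.0%}")  # 38%
```

The same ratio applied to the 837 million spam items, nearly all machine-flagged, is what yields the "almost 100%" figure.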
“It’s a work in progress always,” Guy Rosen, Facebook’s vice president of product management, said in a briefing. “These are the same metrics we’re using internally to guide the metrics of the teams. We’re sharing them here because we think we need to be accountable.”
Chief Executive Officer Mark Zuckerberg faced several questions during his April congressional testimony about content removal. Why, for example, was it possible for people to sell opiates on the site, even though Facebook says that content is banned? Why are certain people banned, even if they did nothing wrong? Zuckerberg explained that Facebook is hiring thousands of people who can, over the course of millions of content decisions, train a better artificial intelligence system. Recently, Facebook released for the first time the internal rules governing what stays up and what comes down.
The enforcement of those rules has been spotty, especially in regions where Facebook hasn’t hired enough people who speak local languages, or in subjects unfamiliar to its AI program. The company has come under fire for failing to remove content that has incited ethnic violence in Myanmar, leading Facebook to hire more Burmese speakers. A Bloomberg report last week showed that while Facebook says it’s become effective at taking down terrorist content from al-Qaeda and the Islamic State, recruitment posts for other US-designated terrorist groups are found easily on the site.
While AI is getting more effective at flagging content, Facebook’s human reviewers still have to finish the job. A photo with nudity may be porn, or it may be art, and human eyes can usually tell the difference. The company expects to have 20 000 people working on security and content moderation by the end of the year.
Facebook says it’s going to measure the size of its problems based on “prevalence” of content — the percentage of overall content views on Facebook that included violating material. For every 10 000 content views, an estimated 22 to 27 contained graphic violence and 7 to 9 contained adult nudity and sexual activity that violated the rules, the company said. The estimate is taken from a global sampling of all content in the first quarter, weighted by popularity of that content. Facebook doesn’t yet have a prevalence metric for other types of content.
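The prevalence figures above can be read as a simple rate per 10 000 views. A hedged sketch of that calculation — the sample counts below are hypothetical, and Facebook's actual estimate weights its global sample by how often each piece of content is viewed:

```python
def prevalence_per_10k(violating_views: int, total_views: int) -> float:
    """Violating content views per 10,000 total views sampled."""
    return violating_views / total_views * 10_000

# Reproducing the article's graphic-violence estimate of 22 to 27
# per 10,000 views from a hypothetical sample of one million views:
low = prevalence_per_10k(violating_views=2_200, total_views=1_000_000)
high = prevalence_per_10k(violating_views=2_700, total_views=1_000_000)
print(low, high)  # 22.0 27.0
```

Measuring by views rather than by item count means one widely shared violating post weighs more than many obscure ones, which is why the sampling is popularity-weighted.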
“I’m hoping we have more numbers by the next time we report,” Alex Schultz, vice president of data analytics, said in the briefing. “We should measure them well, and be good at explaining to you why they have moved.”