Facebook Removed 3.2 Billion Fake Accounts in Q2, Q3 2019


Social media giant Facebook has released the fourth edition of its Community Standards Enforcement Report, detailing the steps it has taken over the past two quarters to ensure that content violating its community standards doesn't remain on the site.

In the report, the company highlights that it removed a whopping 3.2 billion fake accounts over the last two quarters, i.e., from April to September this year. These accounts were caught before they could become active on Facebook, which is why they aren't reflected in the company's reported user figures. The company estimates that about 5% of its massive 2.45 billion user base still consists of fake accounts.
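For a sense of scale, a back-of-the-envelope calculation from the figures above (the 5% share is Facebook's own estimate, and the resulting count is an approximation, not a number the company discloses) looks like this:

# Rough estimate of fake accounts remaining in the active user base,
# derived from the figures reported above. Illustrative only.
user_base = 2_450_000_000   # ~2.45 billion users
fake_share = 0.05           # ~5% estimated to be fake

print(f"~{user_base * fake_share:,.0f} fake accounts")  # ~122,500,000

That works out to roughly 120 million fake accounts in the active user base at any given time, on top of the billions blocked at sign-up.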

The company has also, for the first time, included data from Instagram in its community standards report. According to the report, Facebook removed over 4.5 million pieces of content relating to self-injury and suicide from its main platform, while on Instagram it removed a total of 1.68 million pieces of content that encouraged self-injury or suicide.

Facebook also removed 11.6 million pieces of content relating to child abuse and child pornography from its platform, up from 5.8 million in Q1 2019. The company claims it was able to proactively discover and remove 99% of posts relating to child abuse and exploitation. On Instagram, it removed 1.26 million pieces of content relating to child nudity and the sexual exploitation of children.

Moreover, the company said that over the last two years it has invested in technologies that can proactively detect hate speech on its platform, allowing Facebook to remove such content without someone having to report it, and in many cases, says Facebook, before anyone even sees it. To do this, Facebook matches new posts against known violations, "identifying images and identical strings of text that have already been removed as hate speech," and uses "machine-learning classifiers that look at things like language, as well as the reactions and comments to a post, to assess how closely it matches common phrases, patterns and attacks that we've seen previously in content that violates our policies against hate."
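To make the matching half of that description concrete, here is a minimal sketch of how exact matching against previously removed text can work: normalize a post, fingerprint it with a hash, and check the fingerprint against a set built from content that was already taken down. This illustrates the general technique only; Facebook hasn't published its implementation, and all names below (normalize, fingerprint, removed_fingerprints) are hypothetical.

import hashlib

# Hypothetical sketch of hash-based matching against previously
# removed content. Facebook describes "identifying ... identical
# strings of text that have already been removed"; this shows the
# general idea for text only, not the company's actual system.

def normalize(text: str) -> str:
    """Collapse whitespace and lowercase so trivial edits still match."""
    return " ".join(text.lower().split())

def fingerprint(text: str) -> str:
    """Stable fingerprint of the normalized text."""
    return hashlib.sha256(normalize(text).encode("utf-8")).hexdigest()

# Fingerprints of strings previously removed as hate speech (illustrative).
removed_fingerprints = {
    fingerprint("example of previously removed text"),
}

def matches_removed_content(post_text: str) -> bool:
    """True if this post is an identical (normalized) copy of
    content that was already removed."""
    return fingerprint(post_text) in removed_fingerprints

# A re-posted copy is flagged before anyone has to report it.
print(matches_removed_content("Example of   previously removed TEXT"))  # True
print(matches_removed_content("some unrelated post"))                   # False

Exact matching like this only catches identical copies; that is why Facebook pairs it with the machine-learning classifiers it describes, which can flag paraphrased or previously unseen content.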

Source: Facebook Newsroom