As part of its continuing effort to scrub its platform of terrorism-related content, Facebook has announced that it removed or flagged 1.9 million pieces of extremist, violent and dangerous content linked to the notorious terrorist groups Al-Qaeda and ISIS in the first quarter of 2018.

According to the company, that figure represents roughly a twofold increase over the previous quarter. Facebook further says that only one percent of those posts were reported by users; the remaining 99 percent were identified by Facebook's own reviewers and by the company's automated content-detection algorithms.

Under fire for not doing enough to combat extremism-related content on its platform, the world's largest social media network announced last year that it was training its AI software to identify extremist content specifically from those two terrorist groups. The company also said at the time that 99 percent of such posts get removed by the automated mechanism before users even flag them.

Meanwhile, in its latest post, Facebook says that it has added 50 people to its counter-terrorism team and intends to hire many more going forward. The company further states that the steps it has taken over the past year have made a significant difference in how fast it can now remove violent propaganda from its platform. According to Facebook, "We've made significant strides finding and removing (terrorism-related) propaganda quickly and at scale".

It's worth noting that even as the company focuses on removing terrorist content from its site, its definition of 'terrorism' covers only "non-governmental organization(s)" and leaves out governments, no matter how oppressive. The company, however, says that it will remove "certain content around state-sponsored violence" that breaches its graphic violence policy.