Facebook Has Banned 583M Fake Accounts So Far This Year

The firm just released its first content moderation report

Facebook has been enforcing its community standards since its inception, but only now is the social media juggernaut releasing figures on how far it goes when it comes to enforcing the rules on its site.

The report comes after Facebook published its internal community enforcement guidelines last month. The report, which will be published quarterly, breaks down the company’s efforts to stamp out undesirable content that violates its terms of service. There are six main areas that Facebook concentrates on: graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam and fake accounts.

Facebook reported that it removed around 837 million pieces of spam and 583 million fake accounts in Q1 of this year alone, which is a staggeringly large amount of spam and fake users. The company also acted on 21 million pieces of content containing adult nudity and sexual activity, 3.5 million posts that displayed violent content, 2.5 million posts of hate speech and 1.9 million pieces of terrorist propaganda.

While some policing of posts and accounts is done by people, Facebook relies heavily on AI to remove many of the offending posts. Facebook’s AI systems managed to spot 100 percent of spam and terrorist propaganda posts, nearly 99 percent of fake accounts and 96 percent of posts with adult nudity and sexual activity. It also did a good job of filtering graphic violence, with automated systems catching 86 percent of reported posts.

Facebook’s AI struggles when it comes to hate speech, however, with only 38 percent of such posts being caught by its automated systems.

You can read Facebook’s entire report here.
