Facebook Closed 583M Fake Accounts in Q1'18


In Facebook's first quarterly Community Standards Enforcement Report, the company said most of its moderation activity was waged against fake accounts and spam posts, with 837 million spam posts and 583 million fake accounts acted upon.

Facebook's vice president of product management, Guy Rosen, said that the company's systems are still in development for some of the content checks, and framed the report as a way of being "accountable to the community".

Facebook said it released the report to start a dialogue about harmful content on the platform and how it enforces community standards to combat it.

On Tuesday, Facebook said it took action on some 2.5 million pieces of hateful content in the first three months of 2018, up from 1.6 million in the last three months of 2017.

Facebook today revealed for the first time how much sex, violence and terrorist propaganda has infiltrated the platform, and whether the company has successfully taken the content down.

Facebook says AI has played an increasing role in flagging this content. Last week, Alex Schultz, the company's vice president of growth, and Rosen walked reporters through exactly how the company measures violations and how it intends to deal with them.

Now, however, artificial intelligence technology does much of that work. "This is especially true where we've been able to build artificial intelligence technology that automatically identifies content that might violate our standards".

In the area of adult nudity and sexual activity, between 0.07% and 0.09% of views during the first quarter were of content that violated standards; in other words, roughly 7 to 9 of every 10,000 views.

For the most part, Facebook has not provided further details on its hiring plan for reviewers, including how many will be full-time Facebook employees and how many will be contractors.

"We took down or applied warning labels to about three and a half million pieces of violent content in Q1 2018, 86 per cent of which was identified by our technology before it was reported to Facebook".

The company said most of the increase was the result of improvements in detection technology.

The social network estimates that it found and flagged 85% of that content before users saw and reported it, a higher rate than previously thanks to technological advances. In total, it took action on 21 million pieces of content in Q1, a figure similar to Q4. The content screening has nothing to do with privacy protection, though; it is aimed at maintaining a family-friendly atmosphere for users and advertisers.

Hate speech proved much harder to catch, with only 38% of it flagged by Facebook's algorithms. "By comparison, we removed two and a half million pieces of hate speech in Q1 2018, 38 percent of which was flagged by our technology". The report also does not cover how much inappropriate content Facebook missed.

"Hate speech content often requires detailed scrutiny by our trained reviewers to understand context and decide whether the material violates standards, so we tend to find and flag less of it, and rely more on user reports, than with some other violation types", the report says. It says it found and flagged almost 100% of spam content in both Q1 and Q4.

However, it said that most of the 583 million fake accounts were disabled "within minutes of registration" and that it prevents "millions of fake accounts" from registering every day.
