Facebook released its first ever Transparency Report, detailing the amount of content it identified as violating its Community Standards between October 2017 and March 2018. According to the data, Facebook took action against the majority of offending content before users reported it.
The report is part of the Community Standards enforcement effort Facebook first announced in April. Violations are divided into six categories: graphic violence, adult nudity and sexual activity, terrorist propaganda (ISIS, al-Qaeda, and their affiliates), hate speech, spam, and fake accounts.
Facebook says it uses a combination of machine learning automation and human reviewers to identify content that violates its Community Standards. The company has said several times that it plans to hire at least 10,000 safety and security professionals by the end of 2018 to work on this effort.
Facebook's transparency report breaks down, for each category, the amount of violating content it took action on in the fourth quarter of 2017 and the first quarter of 2018, along with the share of that content it identified before users reported it. It also lists the prevalence of violating content in the graphic violence and nudity and sexual activity categories, as well as the prevalence of fake accounts. Here is an overview of the data, followed after the lists by a brief sketch of how these percentages relate to raw counts:
How much content, and how many accounts, did Facebook take action on?
- Graphic violence – Q4 2017: 1.2 million | Q1 2018: 3.4 million
- Nudity and sexual activity – Q4 2017: 21 million | Q1 2018: 21 million
- Terrorist propaganda (ISIS, al-Qaeda, and affiliates) – Q4 2017: 1.1 million | Q1 2018: 1.9 million
- Hate speech – Q4 2017: 1.6 million | Q1 2018: 2.5 million
- Spam – Q4 2017: 727 million | Q1 2018: 837 million
- Fake accounts – Q4 2017: 694 million | Q1 2018: 583 million
Percentage identified before users reported the content or accounts
- Graphic violence – Q4 2017: 72% | Q1 2018: 86%
- Nudity and sexual activity – Q4 2017: 94% | Q1 2018: 96%
- Terrorist propaganda (ISIS, al-Qaeda, and affiliates) – Q4 2017: 97% | Q1 2018: 99.5%
- Hate speech – Q4 2017: 24% | Q1 2018: 38%
- Spam – Q4 2017: 100% | Q1 2018: 100%
- Fake accounts – Q4 2017: 98.5% | Q1 2018: 99.1%
Prevalence of content that violates Facebook's Community Standards
- Graphic violence – Q4 2017: 0.16% to 0.19% | Q1 2018: 0.22% to 0.27%
- Nudity and sexual activity – Q4 2017: 0.06% to 0.08% | Q1 2018: 0.07% to 0.09%
- Terrorist propaganda (ISIS, al-Qaeda, and affiliates) – Data not available
- Hate speech – Data not available
- Spam – Data not available
- Fake accounts – Facebook estimates that fake accounts represented about 3% to 4% of monthly active users (MAU) on Facebook in both Q4 2017 and Q1 2018.
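To see how the two rates above relate to raw counts, here is a minimal sketch with hypothetical figures. The function names and numbers are illustrative assumptions, not Facebook's actual methodology, which the full report describes in more detail.

```python
# Minimal sketch (not Facebook's actual methodology): how the report's two
# key rates relate to underlying counts. All numbers here are hypothetical.

def proactive_rate(actioned_total: int, flagged_by_systems: int) -> float:
    """Share of actioned content that Facebook's own systems flagged
    before any user reported it."""
    return flagged_by_systems / actioned_total

def prevalence(violating_views: int, total_views: int) -> float:
    """Estimated share of content views that were of violating content
    (the basis for the prevalence ranges above)."""
    return violating_views / total_views

# Hypothetical: 3.4M pieces of content actioned, 2.924M flagged proactively
print(f"proactive rate: {proactive_rate(3_400_000, 2_924_000):.0%}")  # 86%

# Hypothetical: 22 violating views per 10,000 sampled views
print(f"prevalence: {prevalence(22, 10_000):.2%}")  # 0.22%
```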
In every category except hate speech, Facebook took action against most offending content before users reported it. Hate speech was the notable outlier in the share of content Facebook proactively removed: while more than 90% of violating content was removed without being reported in almost every other category, less than 40% of content identified as hate speech (38% in Q1 2018, 24% in Q4 2017) was actioned before being reported. In other words, more than half of the hate speech violations identified on the platform were reported by users rather than caught by Facebook's own systems.
Facebook notes at the beginning of the report that it is still refining the internal methodology it uses to measure these efforts, and that it expects the numbers to become more accurate over time.