In brief: Facebook has faced intense criticism for years over the way it moderates content on its social platforms. After a series of reports earlier this month about worrying internal research findings the company allegedly ignored for years, Facebook is trying to protect its image by supplying what it says is missing context.

Earlier this month, a series of scathing reports from The Wall Street Journal revealed that Facebook knows its social platforms have mechanisms that can cause harm. Sifting through leaked internal documents, the publication found the company has been giving special treatment to an elite group of high-profile users, playing down Instagram's harmful effects on teenage girls, and making costly mistakes in trying to bring users together and boost positive interactions between them.

During an interview with Axios' Mike Allen, Nick Clegg – Facebook's vice president of global affairs – said the reports gave the company no benefit of the doubt and framed a series of complex, difficult-to-solve issues as evidence of a conspiracy.

Clegg also wrote a direct response to The Wall Street Journal's revelations, in which he described the series as full of "deliberate mischaracterizations" of what the company has been doing in light of internal research showing the negative aspects of its social platforms.

Today, Facebook sought to clarify that it has always aimed to innovate responsibly and that it has made progress on major challenges over the past few years. For context, the company claims it has invested over $13 billion in safety and security measures since 2016, and that more than 40,000 people now work in this area alone.

The safety and security teams include outside contractors who focus on content moderation, 5,000 of whom were added in the past two years. They are aided by advanced AI systems that can understand the same concept across multiple languages and can now remove 15 times more harmful content than they could in 2017.

Overall, Facebook is trying to show it has become more proactive about tackling safety and security challenges early in the product development process. The company notes it removed over three billion fake accounts and 20 million pieces of Covid-19 misinformation in the first half of this year, in addition to adding time management features that remind users to take a break from the Facebook and Instagram apps.