Friday, March 14, 2025

Can Social Media Platforms Be Trusted to Regulate Misinformation Themselves?


 

By Lilian H. Hill

Social media platforms wield immense influence over public discourse, serving as primary venues for news, political debate, and social movements. While they once advertised policies intended to combat misinformation, hate speech, and harmful content, their willingness and ability to enforce those policies effectively are questionable. The fundamental challenge is that these companies operate as profit-driven businesses, so their primary incentives do not always align with the public good. Myers and Grant (2023) observed that many platforms are investing fewer resources in combating misinformation. For example, Meta, which operates Facebook, Instagram, and Threads, recently announced that it has ended its fact-checking program and will instead rely on crowdsourcing to monitor misinformation (Chow, 2025). Likewise, X, formerly known as Twitter, slashed its trust and safety staff in 2022. Experts worry that dismantling safeguards once implemented to combat misinformation and disinformation decreases trust online (Myers & Grant, 2023).

 

Key Challenges in Self-Regulation

There are four key challenges to social media platforms’ self-regulation: 

 

 

1.    Financial Incentives and Engagement-Driven Algorithms

Social media platforms generate revenue primarily through advertising, which depends on user engagement. Unfortunately, research has shown that sensationalized, misleading, or divisive content often drives higher engagement than factual, nuanced discussions. This creates a conflict of interest: aggressively moderating misinformation and harmful content could reduce engagement, ultimately affecting their bottom line (Minow & Minow, 2023).

 

For example, Facebook’s own internal research (revealed in the Facebook Papers) found that its algorithms promoted divisive and emotionally charged content because it kept users on the platform longer. YouTube has been criticized for its recommendation algorithm, which has in the past directed users toward conspiracy theories and extremist content to maximize watch time. Because of these financial incentives, social media companies often take a reactive rather than proactive approach to content moderation, making changes only when public pressure or regulatory threats force them to act.

 

 

2.    Inconsistent and Arbitrary Enforcement

Even when platforms enforce their policies, they often do so inconsistently. Factors like political pressure, public relations concerns, and high-profile users can influence moderation decisions. Some influential figures or accounts with large followings receive more leniency than average users. For instance, politicians and celebrities have been allowed to spread misinformation with little consequence, while smaller accounts posting similar content face immediate bans. Enforcement of community guidelines can vary across different regions and languages, with content in English often being moderated more effectively than in less widely spoken languages. This leaves many vulnerable communities exposed to harmful misinformation and hate speech (Minow & Minow, 2023).

 

 

3.    Reduction of Trust and Safety Teams

In recent years, many social media companies have cut back on their Trust and Safety teams, reducing their ability to effectively moderate content. These teams are responsible for identifying harmful material, enforcing policies, and preventing the spread of misinformation. With fewer human moderators and fact-checkers, harmful content is more likely to spread unchecked, especially as AI-driven moderation systems still struggle with nuance, context, and misinformation detection (Minow & Minow, 2023).

 

 

4.    Lack of Transparency and Accountability

Social media companies rarely provide full transparency about how they moderate content, making it difficult for researchers, policymakers, and the public to hold them accountable. Platforms often do not disclose how their algorithms work, meaning users don’t know why they see certain content or how misinformation spreads. When harmful content spreads widely, companies often deflect responsibility, blaming bad actors rather than acknowledging the role of their own recommendation systems. Even when they do act, platforms tend not to share details about why specific moderation decisions were made, leading to accusations of bias or unfair enforcement (Minow & Minow, 2023).

 

 

What Can Individuals Do?

Disinformation and “fake news” pose a serious threat to democratic systems by shaping public opinion and influencing electoral discourse. You can protect yourself from disinformation by:

 

1.     Engaging with diverse perspectives. Relying on a limited number of like-minded news sources restricts your exposure to varied viewpoints and increases the risk of falling for hoaxes or false narratives. While not foolproof, broadening your sources improves your chances of accessing well-balanced information (National Center for State Courts, 2025).

 

2.     Approaching news with skepticism. Many online outlets prioritize clicks over accuracy, using misleading or sensationalized headlines to grab attention. Understanding that not everything you read is true, and that some sites specialize in spreading falsehoods, is crucial in today’s digital landscape. Learning to assess news credibility helps protect against misinformation (National Center for State Courts, 2025).

 

3.     Fact-checking before sharing. Before passing along information, verify the credibility of the source. Cross-check stories with reliable, unbiased sources known for high factual accuracy to determine what, and whom, you can trust (National Center for State Courts, 2025).

 

4.     Challenging false information. If you come across a misleading or false post, speak up. Addressing misinformation signals that spreading falsehoods is unacceptable. By staying silent, you allow misinformation to persist and gain traction (National Center for State Courts, 2025).

 

What Can Be Done Societally?

As a society, we all share the responsibility of preventing the spread of false information. Since self-regulation by social media platforms has proven unreliable, a multi-pronged approach is needed to ensure responsible content moderation and combat misinformation effectively. This approach includes:

 

1. Government Regulation and Policy Reform

Governments and regulatory bodies can set clear guidelines for social media companies by implementing stronger content moderation laws that require action against misinformation, hate speech, and harmful content. Transparency requirements can force platforms to disclose how their algorithms function and how moderation decisions are made. Financial penalties for failure to remove harmful content could incentivize more responsible practices. However, regulation must be balanced to avoid excessive government control over speech. It should focus on ensuring transparency, fairness, and accountability rather than dictating specific narratives (Balkin, 2021).

 

2. Public Pressure and Advocacy

Users and advocacy groups can push social media companies to do better by demanding more robust moderation policies that are fairly enforced across all users and regions. Independent oversight bodies could audit content moderation practices and hold platforms accountable. A recent poll conducted by Boston University’s College of Communication found that 72% of Americans believe it is acceptable for social media platforms to remove inaccurate information, and more than half of Americans distrust the efficacy of crowdsourced monitoring of social media (Amazeen, 2025). Improved fact-checking partnerships are also needed to counter misinformation more effectively.

 

3. Media Literacy and User Responsibility

Since social media platforms alone cannot be relied upon to stop misinformation, individuals must take steps to protect themselves. They can verify information before sharing by checking multiple sources and relying on reputable fact-checking organizations. Other actions include diversifying news sources rather than depending on a single platform or outlet, reporting misinformation and harmful content by flagging false or dangerous posts, and educating others by encouraging media literacy in their communities (Suciu, 2024).

 

Conclusion

Social media companies cannot be fully trusted to police themselves, as their financial interests often clash with the need for responsible moderation. While they have taken some steps to curb misinformation, enforcement remains inconsistent, and recent cuts to moderation teams have worsened the problem. The solution lies in a combination of regulation, public accountability, and increased media literacy to create a more reliable and trustworthy information ecosystem.

 

References

Amazeen, M. (2025). Americans expect social media content moderation. The Brink: Pioneering Research of Boston University. https://www.bu.edu/articles/2025/americans-expect-social-media-content-moderation/

Balkin, J. M. (2021). How to regulate (and not regulate) social media. Journal of Free Speech Law, 1, 71-96. https://www.journaloffreespeechlaw.org/balkin.pdf

Chow, A. R. (2025, January 7). Why Meta’s fact-checking change could lead to more misinformation on Facebook and Instagram. Time. https://time.com/7205332/meta-fact-checking-community-notes/

Minow, M., & Minow, N. (2023). Social media companies should pursue serious self-supervision — soon: Response to Professors Douek and Kadri. Harvard Law Review, 136(8). https://harvardlawreview.org/forum/vol-136/social-media-companies-should-pursue-serious-self-supervision-soon-response-to-professors-douek-and-kadri/

Myers, S. L., & Grant, N. (2023, February 14). Combating disinformation wanes at social media giants. The New York Times. https://www.nytimes.com/2023/02/14/technology/disinformation-moderation-social-media.html

Suciu, P. (2024, January 2). How media literacy can help stop misinformation from spreading. Forbes. https://www.forbes.com/sites/petersuciu/2024/01/02/how-media-literacy-can-help-stop-misinformation-from-spreading/


Friday, March 7, 2025

How to Report Misleading and Inaccurate Content on Social Media

 



By Lilian H. Hill

 

Misinformation and disinformation, often called "fake news," spread rapidly on social media, especially during conflicts, wars, and emergencies. “Fake news” and disinformation campaigns injure the health of democratic systems because they can influence public opinion and electoral decision-making (National Center for State Courts, n.d.). With the overwhelming volume of content shared on these platforms, distinguishing truth from falsehood has become challenging. This issue has worsened as some social media companies have downsized their Trust and Safety teams, neglecting proper content moderation (Center for Countering Digital Hate, 2023).

 

Users can play a role in curbing the spread of false information. The first step is to verify before sharing, being mindful of what we amplify and engage with. Equally important is reporting misinformation when we come across it. Social media platforms allow users to flag posts that promote falsehoods, conspiracies, or misleading claims, with each platform enforcing its own Community Standards to regulate content (Center for Countering Digital Hate, 2023).

 

Reporting misleading content on social media platforms is essential in reducing the spread of misinformation. Unfortunately, some platforms fail to act on reported content (Center for Countering Digital Hate, 2023). Nonetheless, users should still report when misinformation and disinformation flood their timelines.

 

Here’s how to report misleading content on some of the most widely used platforms:

1. Facebook

  • Click on the three dots (•••) in the top-right corner of the post.
  • Select "Find support or report post."
  • Choose "False Information" or another relevant category.
  • Follow the on-screen instructions to complete the report.

 

2. Instagram

  • Tap the three dots (•••) in the top-right corner of the post.
  • Select "Report."
  • Choose "False Information" and follow the steps to submit your report.

 

3. X (formerly known as Twitter)

  • Click on the three dots (•••) on the tweet you want to report.
  • Select "Report Tweet."
  • Choose "It’s misleading" and specify whether it relates to politics, health, or other misinformation.
  • Follow the prompts to complete the report.

 

4. TikTok

  • Tap and hold the video or click on the share arrow.
  • Select "Report."
  • Choose "Misleading Information" and provide details if necessary.

 

5. YouTube

  • Click on the three dots (•••) below the video.
  • Select "Report."
  • Choose "Misinformation" and provide any additional details required.

 

6. Reddit

  • Click on the three dots (•••) or the "Report" button below the post or comment.
  • Select "Misinformation" if available or choose a related category.
  • Follow the instructions to submit your report.

 

7. LinkedIn

  • Click on the three dots (•••) in the top-right corner of the post.
  • Select "Report this post."
  • Choose "False or misleading information."

 

8. Threads

  • Click More (•••) next to the post.
  • Click Report and follow the on-screen instructions.

 

After reporting, the platform will review the content and take action if it violates its misinformation policies. Users can also enhance these efforts by sharing fact-checked sources in the comments or encouraging others to report the same misleading content.

 

References

Center for Countering Digital Hate (2023, October 24). How to report misinformation on social media. https://counterhate.com/blog/how-to-report-misinformation-on-social-media/

National Center for State Courts. (n.d.). Disinformation and the public. https://www.ncsc.org/consulting-and-research/areas-of-expertise/communications,-civics-and-disinformation/disinformation/for-the-public

