Friday, March 14, 2025

Can Social Media Platforms Be Trusted to Regulate Misinformation Themselves?


 

By Lilian H. Hill

Social media platforms wield immense influence over public discourse, acting as primary venues for news, political debate, and social movements. While they once advertised policies intended to combat misinformation, hate speech, and harmful content, their willingness and ability to enforce these policies effectively are questionable. The fundamental challenge is that these companies operate as profit-driven businesses, meaning their primary incentives do not always align with the public good. Myers and Grant (2023) observed that many platforms are investing fewer resources in combating misinformation. For example, Meta, which operates Facebook, Instagram, and Threads, recently announced that it has ended its fact-checking program and will instead rely on crowdsourcing to monitor misinformation (Chow, 2025). Likewise, X, formerly known as Twitter, slashed its trust and safety staff in 2022. Experts worry that dismantling safeguards once implemented to combat misinformation and disinformation decreases trust online (Myers & Grant, 2023).

 

Key Challenges in Self-Regulation

There are four key challenges to social media platforms’ self-regulation: 

 

 

1.    Financial Incentives and Engagement-Driven Algorithms

Social media platforms generate revenue primarily through advertising, which depends on user engagement. Unfortunately, research has shown that sensationalized, misleading, or divisive content often drives higher engagement than factual, nuanced discussions. This creates a conflict of interest: aggressively moderating misinformation and harmful content could reduce engagement, ultimately affecting their bottom line (Minow & Minow, 2023).

 

For example, Facebook’s own internal research (revealed in the Facebook Papers) found that its algorithms promoted divisive and emotionally charged content because it kept users on the platform longer. YouTube has been criticized for its recommendation algorithm, which has in the past directed users toward conspiracy theories and extremist content to maximize watch time. Because of these financial incentives, social media companies often take a reactive rather than proactive approach to content moderation, making changes only when public pressure or regulatory threats force them to act.
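To make that incentive concrete, here is a deliberately simplified sketch in Python. It is a toy model, not any platform's actual ranking code (real systems are proprietary and far more complex): it orders posts purely by predicted engagement, so accuracy never enters the ranking at all.

```python
# Toy model of an engagement-optimized feed ranker. All names and numbers
# are invented for illustration; no real platform's code is public.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # expected clicks, comments, and shares
    factual_accuracy: float      # hypothetical 0.0-1.0 fact-check score

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order posts by predicted engagement alone.

    Note that factual_accuracy never enters the sort key, so content
    engineered to provoke outrage can outrank careful reporting.
    """
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

feed = rank_feed([
    Post("Nuanced explainer on new vaccine trial data", 1.2, 0.95),
    Post("SHOCKING: what THEY don't want you to know!", 8.7, 0.10),
])
for post in feed:
    print(f"{post.predicted_engagement:5.1f}  {post.text}")
```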

 

 

2.    Inconsistent and Arbitrary Enforcement

Even when platforms enforce their policies, they often do so inconsistently. Factors like political pressure, public relations concerns, and high-profile users can influence moderation decisions. Some influential figures or accounts with large followings receive more leniency than average users. For instance, politicians and celebrities have been allowed to spread misinformation with little consequence, while smaller accounts posting similar content face immediate bans. Enforcement of community guidelines can vary across different regions and languages, with content in English often being moderated more effectively than in less widely spoken languages. This leaves many vulnerable communities exposed to harmful misinformation and hate speech (Minow & Minow, 2023).

 

 

3.    Reduction of Trust and Safety Teams

In recent years, many social media companies have cut back on their Trust and Safety teams, reducing their ability to effectively moderate content. These teams are responsible for identifying harmful material, enforcing policies, and preventing the spread of misinformation. With fewer human moderators and fact-checkers, harmful content is more likely to spread unchecked, especially as AI-driven moderation systems still struggle with nuance, context, and misinformation detection (Minow & Minow, 2023).
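A deliberately naive sketch can illustrate why automated systems stumble on nuance and context. The banned phrases and example posts below are invented; real moderation models are far more sophisticated, but they wrestle with the same underlying problem.

```python
# Toy keyword filter: a naive stand-in for automated content moderation.
# Phrases and examples are invented for illustration.
BANNED_PHRASES = ["the vaccine causes", "the election was stolen"]

def naive_flag(post: str) -> bool:
    """Flag a post containing a banned phrase, ignoring all context."""
    text = post.lower()
    return any(phrase in text for phrase in BANNED_PHRASES)

# A fact-check and the false claim it corrects are flagged identically,
# which is exactly the kind of context blindness human reviewers catch:
print(naive_flag("Fact check: the claim that the vaccine causes infertility is false."))  # True
print(naive_flag("The vaccine causes infertility!"))                                      # True
print(naive_flag("Vaccines are safe and effective."))                                     # False
```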

 

 

4.    Lack of Transparency and Accountability

Social media companies rarely provide full transparency about how they moderate content, making it difficult for researchers, policymakers, and the public to hold them accountable. Platforms often do not disclose how their algorithms work, meaning users don’t know why they see certain content or how misinformation spreads. When harmful content spreads widely, companies often deflect responsibility, blaming bad actors rather than acknowledging the role of their own recommendation systems. Even when they do act, platforms tend not to share details about why specific moderation decisions were made, leading to accusations of bias or unfair enforcement (Minow & Minow, 2023).

 

 

What Can Individuals Do?

Disinformation and “fake news” pose a serious threat to democratic systems by shaping public opinion and influencing electoral discourse. You can protect yourself from disinformation by:

 

1.     Engaging with diverse perspectives. Relying on a limited number of like-minded news sources restricts your exposure to varied viewpoints and increases the risk of falling for hoaxes or false narratives. While not foolproof, broadening your sources improves your chances of accessing well-balanced information (National Center for State Courts, 2025).

 

2.     Approaching news with skepticism. Many online outlets prioritize clicks over accuracy, using misleading or sensationalized headlines to grab attention. Understanding that not everything you read is true, and that some sites specialize in spreading falsehoods, is crucial in today’s digital landscape. Learning to assess news credibility helps protect against misinformation (National Center for State Courts, 2025).

 

3.     Fact-checking before sharing. Before passing along information, verify the credibility of the source. Cross-check stories with reliable, unbiased sources known for high factual accuracy to determine what, and whom, you can trust (National Center for State Courts, 2025).

 

4.     Challenging false information. If you come across a misleading or false post, speak up. Addressing misinformation signals that spreading falsehoods is unacceptable. By staying silent, you allow misinformation to persist and gain traction (National Center for State Courts, 2025).

 

What Can Be Done Societally?

As a society, we all share the responsibility of preventing the spread of false information. Since self-regulation by social media platforms has proven unreliable, a multi-pronged approach is needed to ensure responsible content moderation and combat misinformation effectively. This approach includes:

 

1. Government Regulation and Policy Reform

Governments and regulatory bodies can play a role in setting clear guidelines for social media companies by implementing stronger content moderation laws that can require companies to take action against misinformation, hate speech, and harmful content. Transparency requirements can force platforms to disclose how their algorithms function and how moderation decisions are made. Financial penalties for failure to remove harmful content could incentivize more responsible practices. However, regulation must be balanced to avoid excessive government control over speech. It should focus on ensuring transparency, fairness, and accountability rather than dictating specific narratives (Balkin, 2021).

 

2. Public Pressure and Advocacy

Users and advocacy groups can push social media companies to do better by demanding more robust moderation policies that are fairly enforced across all users and regions. Independent oversight bodies could audit content moderation practices and hold platforms accountable. A recent poll conducted by Boston University’s College of Communication found that 72% of Americans believe it is acceptable for social media platforms to remove inaccurate information, while more than half of Americans distrust the efficacy of crowdsourced monitoring of social media (Amazeen, 2025). Improved fact-checking partnerships are also needed to counter misinformation more effectively.

 

3. Media Literacy and User Responsibility

Since social media platforms alone cannot be relied upon to stop misinformation, individuals must take steps to protect themselves. They can verify information before sharing it by checking multiple sources and relying on reputable fact-checking organizations. Other actions include diversifying news sources to avoid reliance on a single platform or outlet, reporting misinformation and harmful content by flagging false or dangerous posts, and educating others by encouraging media literacy in their communities (Suciu, 2024).

 

Conclusion

Social media companies cannot be fully trusted to police themselves, as their financial interests often clash with the need for responsible moderation. While they have taken some steps to curb misinformation, enforcement remains inconsistent, and recent cuts to moderation teams have worsened the problem. The solution lies in a combination of regulation, public accountability, and increased media literacy to create a more reliable and trustworthy information ecosystem.

 

References

Amazeen, M. (2025). Americans expect social media content moderation. The Brink: Pioneering Research of Boston University. https://www.bu.edu/articles/2025/americans-expect-social-media-content-moderation/

Balkin, J. M. (2021). How to regulate (and not regulate) social media. Journal of Free Speech Law, 1, 71–96. https://www.journaloffreespeechlaw.org/balkin.pdf

Chow, A. R. (2025, January 7). Why Meta’s fact-checking change could lead to more misinformation on Facebook and Instagram. Time. https://time.com/7205332/meta-fact-checking-community-notes/

Minow, M., & Minow, N. N. (2023). Social media companies should pursue serious self-supervision — soon: Response to Professors Douek and Kadri. Harvard Law Review, 136(8). https://harvardlawreview.org/forum/vol-136/social-media-companies-should-pursue-serious-self-supervision-soon-response-to-professors-douek-and-kadri/

Myers, S. L., & Grant, N. (2023, February 14). Combating disinformation wanes at social media giants. New York Times. https://www.nytimes.com/2023/02/14/technology/disinformation-moderation-social-media.html

Suciu, P. (2024, January 2). How media literacy can help stop misinformation from spreading. Forbes. https://www.forbes.com/sites/petersuciu/2024/01/02/how-media-literacy-can-help-stop-misinformation-from-spreading/


Friday, March 7, 2025

How to Report Misleading and Inaccurate Content on Social Media

 



By Lilian H. Hill

 

Misinformation and disinformation, often called "fake news," spread rapidly on social media, especially during conflicts, wars, and emergencies. “Fake news” and disinformation campaigns damage the health of democratic systems because they can influence public opinion and electoral decision-making (National Center for State Courts, n.d.). With the overwhelming volume of content shared on these platforms, distinguishing truth from falsehood has become challenging. This issue has worsened as some social media companies have downsized their Trust and Safety teams, neglecting proper content moderation (Center for Countering Digital Hate, 2023).

 

Users can play a role in curbing the spread of false information. The first step is to verify information before sharing it, being mindful of what we amplify and engage with. Equally important is reporting misinformation when we come across it. Social media platforms allow users to flag posts that promote falsehoods, conspiracies, or misleading claims, and each enforces its own Community Standards to regulate content (Center for Countering Digital Hate, 2023).

 

Reporting misleading content on social media platforms is essential in reducing the spread of misinformation. Unfortunately, some platforms fail to act on reported content (Center for Countering Digital Hate, 2023). Nonetheless, users should still report when misinformation and disinformation flood their timelines.

 

Here’s how to report misleading content on some of the most widely used platforms:

1. Facebook

  • Click on the three dots (•••) in the top-right corner of the post.
  • Select "Find support or report post."
  • Choose "False Information" or another relevant category.
  • Follow the on-screen instructions to complete the report.

 

2. Instagram

  • Tap the three dots (•••) in the top-right corner of the post.
  • Select "Report."
  • Choose "False Information" and follow the steps to submit your report.

 

3. X (formerly known as Twitter)

  • Click on the three dots (•••) on the tweet you want to report.
  • Select "Report Tweet."
  • Choose "It’s misleading" and specify whether it relates to politics, health, or other misinformation.
  • Follow the prompts to complete the report.

 

4. TikTok

  • Tap and hold the video or click on the share arrow.
  • Select "Report."
  • Choose "Misleading Information" and provide details if necessary.

 

5. YouTube

  • Click on the three dots (•••) below the video.
  • Select "Report."
  • Choose "Misinformation" and provide any additional details required.

 

6. Reddit

  • Click on the three dots (•••) or the "Report" button below the post or comment.
  • Select "Misinformation" if available or choose a related category.
  • Follow the instructions to submit your report.

 

7. LinkedIn

  • Click on the three dots (•••) in the top-right corner of the post.
  • Select "Report this post."
  • Choose "False or misleading information."

 

8. Threads

  • Click More (•••) next to the post.
  • Select "Report" and follow the on-screen instructions.

 

After reporting, the platform will review the content and take action if it violates its misinformation policies. Users can also enhance these efforts by sharing fact-checked sources in the comments or encouraging others to report the same misleading content.

 

References

Center for Countering Digital Hate (2023, October 24). How to report misinformation on social media. https://counterhate.com/blog/how-to-report-misinformation-on-social-media/

National Center for State Courts (n.d.). Disinformation and the public. https://www.ncsc.org/consulting-and-research/areas-of-expertise/communications,-civics-and-disinformation/disinformation/for-the-public


Friday, February 28, 2025

Who Consumes News on Social Media and Why?

 


By Lilian H. Hill

 

 

Social media has become a key source of news for Americans, with half of U.S. adults reporting that they sometimes rely on it for news, according to a 2023 Pew Research Center survey (Pew Research Center, 2024). A significant majority of U.S. adults (86%) report getting news from a smartphone, computer, or tablet at least occasionally, with 57% saying they do so frequently.

 

People who consume news on social media cite several benefits, including its convenience, rapid updates, and ability to engage with others through discussions and shared content (Pew Research Center, 2024). However, many also express concerns about news accuracy, quality, and political bias on these platforms. Notably, the percentage of users considering misinformation the most significant drawback has risen from 31% to 40% over the past five years.

 

Benefits and Constraints of Social Media News

Getting news through social media offers both advantages and drawbacks. One of its most significant benefits is convenience and accessibility, as it provides instant access to breaking news from anywhere, keeping users informed in real time. Additionally, social media exposes individuals to diverse perspectives, allowing them to access news from independent journalists, global outlets, and citizen reporters. The ability to receive real-time updates ensures that users stay informed as events unfold. Social media also fosters engagement and interactivity, enabling people to comment, share, and discuss news with others, thereby promoting public discourse. Personalization is another advantage, as algorithms curate news based on user preferences, making content more relevant to individual interests. Moreover, social media platforms offer cost-free access to news, bypassing paywalls common on many traditional news websites.

 

However, there are significant downsides to relying on social media for news. One primary concern is the prevalence of misinformation and fake news, as these platforms often host misleading information, deepfakes, and propaganda. Bias and echo chambers also pose a risk, as algorithms reinforce users' beliefs by prioritizing content that aligns with their views, limiting exposure to diverse perspectives. Unlike traditional journalism, many social media sources lack rigorous fact-checking, increasing the risk of spreading inaccurate information. Sensationalism and clickbait are also typical, as platforms prioritize engagement, often amplifying emotionally charged or exaggerated content over factual reporting. Privacy and data concerns are another issue, with social media companies collecting vast amounts of personal data that can be used for targeted advertising or political manipulation. Additionally, the short-form nature of social media news consumption can lead to shallow understanding, as users are less likely to analyze complex issues deeply.

 

In a study, Thorson and Battocchio (2023) explored how young adults in the U.S. shape and manage their personal media environments across digital platforms and the impact of these practices on their news consumption. Based on 50 in-depth interviews with individuals aged 18-34, along with an analysis of their most-used social media platforms, the study highlights the various efforts young users invest in constructing and curating their online presence across both “public” and “private” spaces, with particular focus on the architectural strategies that minimize their exposure to news content.

 

Generational Use of Social Media for News

Different generations consume news from various sources, reflecting technological shifts, media consumption habits, and trust in traditional versus digital platforms. Recent studies by the American Press Institute indicate that while Gen Z and Millennials still engage with local and national news from traditional sources, they are more likely to frequently access news and information through social media (Media Insight Project, 2022). Gen Z consumes news daily on social platforms at a higher rate than older Millennials, with 74% doing so compared to 68%. According to the Pew Research Center (2024), the percentage of Americans who regularly get news from television has remained steady at 33%, while reliance on radio and print publications continues to decline. In 2024, only 26% of U.S. adults reported often or sometimes getting their news in print.

 

However, this does not mean these groups rely exclusively on social media for complete or accurate news coverage (Castle Group, 2025; Pew Research Center, 2024). Many consumers follow news outlets and journalists on social platforms, clicking through to full articles when they appear in their feeds. Some people use a free monthly article allowance or continue researching a story beyond the app where they first encountered it. To maintain audience engagement, news organizations have adapted their approach to social media, moving beyond simple headline previews or article snippets to offer more dynamic and interactive content.

 

Here’s a breakdown of where different age groups typically obtain their news (Pew Research Center, 2024):

 

Baby Boomers, born between 1946 and 1964, primarily rely on television for news, favoring broadcast and cable networks such as CNN, Fox News, and NBC. While they still engage with print newspapers, this habit is declining. They also turn to radio sources like NPR and talk radio for updates and are gradually accessing digital news websites, though at lower rates than younger generations.

 

Generation X, born between 1965 and 1980, splits its news consumption between television and online sources, including news websites and apps. While they engage with social media for news, they tend to be more skeptical than younger generations. Many continue to listen to radio news, especially during commutes, and some still read print newspapers, though digital consumption is on the rise.

 

Millennials, born between 1981 and 1996, prefer online news sources, including digital newspapers, news apps, and streaming news content. They are heavy users of social media platforms such as Facebook, Twitter (X), Instagram, and Reddit for news updates. Increasingly, they rely on podcasts and YouTube for in-depth analysis and alternative viewpoints. Compared to older generations, they are less likely to watch traditional television news or read print newspapers.

 

Generation Z, born between 1997 and 2012, primarily consumes news via social media platforms such as TikTok, Instagram, X (formerly known as Twitter), and Snapchat. They favor short-form video content from influencers, independent journalists, and content creators. Many engage with news aggregators like Apple News and Google News, while traditional television news and print newspapers play a minimal role in their media consumption. Instead, they prefer digital and interactive content that aligns with their fast-paced and visually engaging media habits.

 

Each generation's news consumption habits reflect broader shifts in media technology and trust in different sources. While traditional news outlets still hold influence, digital and social media platforms continue to attract younger audiences. It is too soon to predict the social media behavior of Generation Alpha, born between 2010 and 2024, or of Generation Beta, born in 2025 and after.

 

Mitigating Problems of Social Media News Consumption

Yaraghi (2019) commented that it is naive to view social media as purely neutral content-sharing platforms without any responsibility, but argued that it is unreasonable to hold them to the same editorial standards as traditional news media. Mitigating the problems associated with social media news content requires a multi-pronged approach involving media literacy, platform accountability, and user responsibility. Improving media literacy is essential, as people need to develop critical thinking skills to evaluate sources, detect bias, and distinguish between credible journalism and misinformation. Encouraging a fact-checking culture by verifying information through reliable sources like Snopes, PolitiFact, or Reuters Fact Check can help reduce the spread of false narratives. Additionally, users should be aware of manipulative tactics such as deepfakes, clickbait headlines, and out-of-context images that contribute to misinformation.

 

Social media platforms must also take responsibility by ensuring greater algorithm transparency, disclosing how they prioritize news content, and implementing measures to reduce the spread of misinformation. Stronger content moderation, powered by both AI and human reviewers, is necessary to flag and remove misleading content while still protecting free speech. Yaraghi (2019) stated that while social media companies can moderate or restrict content on their platforms, they cannot fully control how ideas are shared online or disseminated offline. Clear labeling and warnings for unverified or misleading content, like how X and Facebook sometimes provide context to viral posts, can further help users make informed decisions.

 

Encouraging responsible journalism is another crucial step. Supporting trusted news outlets and prioritizing fact-based reporting over sensationalized headlines can help counteract misinformation. Journalists should also uphold ethical reporting standards by rigorously verifying sources and avoiding the spread of misleading information.

 

Users themselves play a vital role in combating misinformation. Taking a moment to verify news before sharing, especially if it provokes a strong emotional reaction, can prevent the spread of false content. Diversifying news sources rather than relying on a single perspective helps reduce the risk of being trapped in an echo chamber. Additionally, users should actively report misleading content to social media platforms to ensure that misinformation does not gain traction.

 

By combining education, regulation, and individual responsibility, we can foster a more informed and resilient digital society that mitigates the negative impact of social media news content.

 

 

References

 

Castle Group (2025, January 31). How social media, Gen Z, and millennials are changing the news media landscape. https://www.thecastlegrp.com/how-social-media-gen-z-and-millennials-are-changing-the-news-media-landscape/

Media Insight Project (2022, August 22). The news consumption habits of 16- to 40-year-olds. American Press Institute. https://americanpressinstitute.org/the-news-consumption-habits-of-16-to-40-year-olds/

Pew Research Center (2024, September 17). News Platform Fact Sheet. https://www.pewresearch.org/journalism/fact-sheet/news-platform-fact-sheet/

Thorson, K., & Battocchio, A. F. (2023). “I use social media as an escape from all that”: Personal platform architecture and the labor of avoiding news. Digital Journalism, 12(5), 613–636. https://doi.org/10.1080/21670811.2023.2244993

Yaraghi, N. (2019, April 9). How should social media platforms combat misinformation and hate speech? Brookings Institution. https://www.brookings.edu/articles/how-should-social-media-platforms-combat-misinformation-and-hate-speech/


Thursday, February 13, 2025

Digital Architecture of Disinformation

 

By Lilian H. Hill

 

Fake news and disinformation are not new, but their rapid spread is unprecedented. Many individuals struggle to distinguish between real and fake news online, leading to widespread confusion (Hetler, 2025). Disinformation architecture refers to the systematic and strategic methods used to create, spread, and amplify false or misleading information. It combines technology, social networks, human effort, and psychological manipulation to shape public perception, influence behavior, and achieve specific political, financial, or ideological goals.

 

Gal (2024) stated that over the last few decades, social media platforms have transformed from basic networking sites into influential entities that shape public opinion, sway elections, impact public health, and influence social cohesion. For example, during the recent U.S. presidential election, platforms like X played a key role in disseminating both accurate information and misinformation, mobilizing voters, and affecting turnout. Likewise, during the COVID-19 pandemic, social media was instrumental in sharing public health guidelines but also became a hotspot for the spread of misinformation regarding vaccines and treatments.

 

Bossetta (2024) stated that a platform's digital architecture, the technical framework that facilitates, restricts, and shapes user behavior online, influences political communication on social media. This generally refers to what platforms enable, prevent, and structure in online communication, such as likes, comments, retweets, and sharing. Ong and Cabañes (2018) commented that the basic blueprint of political disinformation campaigns strongly resembles corporate branding strategy. However, political disinformation requires its purveyors to make moral compromises, including distributing revisionist history, silencing political opponents, and hijacking news media attention.

 

The primary goals of disinformation campaigns are political manipulation, social division, economic gains, and the erosion of trust in institutions such as the media, science, and democracy. Their impacts are far-reaching, leading to increased polarization, manipulation of democratic processes, reputational damage, and harm to individuals' mental well-being (Bossetta, 2018).

 

Influence of Disinformation Architecture

Disinformation has far-reaching consequences, including the erosion of trust in key institutions such as journalism, science, and governance. By spreading misleading narratives, it undermines public confidence in credible sources of information. Additionally, disinformation fuels polarization by deepening societal divisions and promoting extreme or one-sided perspectives, making constructive dialogue more difficult. It also plays a significant role in manipulating democracies, influencing elections and policy debates through deceptive tactics that mislead voters and policymakers. Beyond its societal impacts, disinformation can cause direct harm to individuals by targeting their reputations, personal safety, and mental well-being, often leading to harassment, misinformation-driven fear, and public distrust.

 

Components of Disinformation Architecture

Disinformation architecture consists of several key components that manipulate public perception. It begins with reconnaissance, where the target audience and environment are analyzed to tailor the disinformation campaign effectively. Once this understanding is established, the necessary infrastructure is built, including creating believable personas, social media accounts, and groups to disseminate false information. Content creation follows, ensuring a continuous flow of misleading materials such as posts, memes, videos, and articles that support the disinformation narrative.

 

The core aspects of disinformation architecture include content creation, amplification channels, psychological tactics, targeting and segmentation, infrastructure support, and feedback loops. Content creation involves fabricating fake news, manipulating media, and employing deepfake technology to mislead audiences. Amplification is achieved through social media platforms, bot networks, and echo chambers that reinforce biased narratives. Psychological tactics exploit emotions, cognitive biases, and perceived authority to gain trust and engagement. Targeting and segmentation enable microtargeting strategies, exploiting demographic vulnerabilities to maximize influence. Infrastructure support includes data harvesting, dark web resources, and monetization channels that sustain disinformation campaigns. Feedback loops ensure that engagement algorithms prioritize viral and sensationalist content, keeping misinformation in circulation.

 

Amplification is crucial in spreading this content widely, utilizing bots, algorithms, and social-engineering techniques to maximize reach. Engagement is then sustained through interactions that deepen the impact of disinformation, often through trolling or disruptive tactics. Eventually, mobilization occurs, where unwitting users are encouraged to take action, leading to real-world consequences.
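A short simulation can make this feedback loop visible. The rates below are invented assumptions, not measured platform dynamics, but they show how an algorithm that converts engagement back into reach lets sensational content compound its advantage round after round.

```python
# Toy simulation of an engagement feedback loop. All parameters are
# illustrative assumptions, not measurements of any real platform.
def simulate(label: str, reach: float, engagement_rate: float,
             boost: float = 2.0, rounds: int = 5) -> None:
    print(label)
    for t in range(rounds):
        engagement = reach * engagement_rate  # reactions earned this round
        reach += engagement * boost           # algorithm rewards engagement with reach
        print(f"  round {t + 1}: reach = {reach:,.0f}")

# Two posts start with identical audiences; the sensational one earns
# five times the engagement per impression and quickly pulls away.
simulate("sober report:", reach=1_000, engagement_rate=0.02)
simulate("sensational falsehood:", reach=1_000, engagement_rate=0.10)
```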

 

Mitigation of Disinformation Architecture

To mitigate disinformation, several strategies must be implemented. Regulation and policy measures should enforce platform transparency rules and penalize the deliberate spread of harmful content. According to Gal (2024), because social media platforms play an increasingly central role in information dissemination, ensuring the integrity of that information has become more urgent than ever, making discussions about regulation essential. Given their profound influence on nearly every aspect of society, these platforms should be treated as critical infrastructure—like energy grids and water supply systems—and subject to the same level of scrutiny and regulation to safeguard information integrity. Just as a power grid failure can cause widespread disruption, large-scale social media manipulation can erode democratic processes, hinder public health initiatives, and weaken social trust.

 

Technological solutions like AI-driven detection systems and verification tools can help identify and flag false information. Public awareness efforts should promote media literacy, encouraging individuals to critically evaluate information and question sensationalist narratives (Hetler, 2025). Finally, platform responsibility must be strengthened by modifying algorithms to prioritize credible sources and enhancing content moderation to limit the spread of disinformation. Understanding these mechanisms is essential to developing effective countermeasures against the growing threat of disinformation in the digital age.

 

References

Bossetta, M. (2018). The digital architectures of social media: Comparing political campaigning on Facebook, Twitter, Instagram, and Snapchat in the 2016 U.S. election. Journalism & Mass Communication Quarterly, 95(2), 471–496. https://doi.org/10.1177/1077699018763307

Bossetta, M. (2024, October 16). Digital architecture, social engineering, and networked disinformation on social media. EU Disinfo Lab. https://www.disinfo.eu/outreach/our-webinars/webinar-digital-architectures-social-engineering-and-networked-disinformation-with-michael-bossetta/

Gal, U. (2024, November 17). Want to combat online misinformation? Regulate the architecture of social media platforms, not their content. ABC. https://www.abc.net.au/religion/uri-gal-online-misinformation-democracy-social-media-algorithms/104591278

Hetler, A. (2025, January 7). 11 ways to spot disinformation on social media. TechTarget. https://www.techtarget.com/whatis/feature/10-ways-to-spot-disinformation-on-social-media

Ong, J. C., & Cabañes, J. V. A. (2018). The architecture of networked disinformation: Behind the scenes of troll accounts and fake news production in the Philippines. The Newton Tech4Dev Network. https://newtontechfordev.com/wp-content/uploads/2018/02/ARCHITECTS-OF-NETWORKED-DISINFORMATION-FULL-REPORT.pdf


Thursday, February 6, 2025

Digital Architecture of Social Media Platforms


 

By: Lilian H. Hill

 

The architecture of an environment is known to influence human behavior. The relationship between structure and agency extends beyond physical spaces and encompasses how individuals engage with and navigate online environments (Bossetta, 2018). How social media platforms are designed and mediated varies, and these differences influence people’s online activities. For example, some social media platforms favor visual communication, while others favor textual communication.

Bossetta (2018) divided the digital architecture of social media platforms into four key categories:

 

1. Network Structure defines how connections between accounts are established and maintained. Social media enables users to connect with peers (“Friends” on Facebook, “Followers” on X [formerly known as Twitter]), as well as with public figures, brands, or organizations, which often operate specialized accounts with advanced tools (e.g., Facebook Pages, Instagram Business Profiles).

 

This structure influences three key aspects:

  1. Searchability – How users discover and follow new accounts.
  2. Connectivity – The process of forming connections. For example, Facebook’s mutual Friend model mirrors offline networks, while X’s one-way following system fosters networks with weaker real-life ties.
  3. Privacy – Users' control over search visibility and connection interactions. Snapchat prioritizes private ties, while platforms like Instagram and X default to open networks but allow customizable privacy settings.

 

These elements shape the platform’s network dynamics, user relationships, and the content generated (Bossetta, 2018).

 

2. Functionality defines how content is mediated, accessed, and distributed on social media platforms. It encompasses five key components:

  1. Hardware Access – Platforms are accessed via devices like mobiles, tablets, desktops, and wearables, influencing user behavior. For instance, tweets from desktops tend to show more civility than those from mobile devices.
  2. Graphical User Interface (GUI) – The visual interface shapes navigation, homepage design, and interaction tools like social buttons (e.g., X Retweets, Facebook Shares), simplifying content sharing.
  3. Broadcast Feed – Aggregates and displays content, varying in centralization (e.g., Facebook's News Feed) and interaction methods (e.g., scrolling vs. click-to-open).
  4. Supported Media – Includes supported formats (text, images, videos, GIFs), size limits (character counts, video length), and hyperlinking rules.
  5. Cross-Platform Integration – Enables sharing of the same content across multiple platforms.

 

These elements shape content creation, network behavior, and platform norms, influencing user expectations and interactions. Political actors, for example, must align with platform-specific norms to avoid appearing out-of-touch or inauthentic, which could harm their credibility and electability.

 

3. Algorithmic Filtering determines how developers prioritize posts’ selection, sequence, and visibility. This involves three key concepts:

  1. Reach – How far a post spreads across feeds or networks, which algorithms can enhance or restrict.
  2. Override – Pay-to-promote services, like Facebook's "boosting," allow users to bypass algorithms and extend a post's reach.
  3. Policy – Platform policies on fact-checking are subject to change, which can permit the spread of fake news.

 

These factors are most relevant on platforms with one-to-many broadcast feeds (e.g., Facebook, X, Instagram). Platforms focused on one-to-one messaging (e.g., Snapchat, WhatsApp) are less affected by algorithmic filtering. However, when algorithms dictate content visibility, they influence users' perceptions of culture, news, and politics.
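As a rough illustration of how reach and override interact, the hypothetical sketch below treats a post's reach as its algorithmic score times its audience, with paid boosting layered on top. The function, scores, and figures are invented; actual promotion systems are proprietary.

```python
# Hypothetical sketch of reach vs. paid override ("boosting").
# Numbers are invented for illustration only.
def effective_reach(organic_score: float, followers: int,
                    paid_boost_impressions: int = 0) -> float:
    """Estimate how many feeds a post enters.

    organic_score: the ranking algorithm's rating of the post (0.0-1.0).
    paid_boost_impressions: impressions bought to bypass the algorithm.
    """
    return organic_score * followers + paid_boost_impressions

# A poorly rated post can out-reach a well-rated one simply by paying
# to override the algorithm:
print(effective_reach(organic_score=0.8, followers=10_000))       # 8000.0
print(effective_reach(organic_score=0.1, followers=10_000,
                      paid_boost_impressions=20_000))             # 21000.0
```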

 

4. Datafication is how user interactions are transformed into data points for modeling. Every social media interaction leaves digital traces that can be used for advertising, market research, or improving platform algorithms. Maintaining a social media presence in political campaigns is less about direct interaction with voters and more about leveraging user data. Campaigns can analyze digital traces to inform persuasion and mobilization strategies.
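The sketch below illustrates datafication in miniature. The actions, weights, and field names are invented for the example, but the pattern is the general idea: raw interactions are aggregated into an interest profile that can then drive targeted persuasion or mobilization.

```python
# Toy illustration of datafication: digital traces aggregated into an
# interest profile. All events, weights, and categories are invented.
from collections import Counter

interactions = [
    {"user": "u42", "action": "like",    "topic": "climate"},
    {"user": "u42", "action": "share",   "topic": "climate"},
    {"user": "u42", "action": "comment", "topic": "local_sports"},
]

# Shares are weighted as a stronger interest signal than likes here.
ACTION_WEIGHTS = {"like": 1, "comment": 2, "share": 3}

def build_profile(events: list[dict]) -> Counter:
    """Sum weighted interactions into per-topic interest scores."""
    profile = Counter()
    for event in events:
        profile[event["topic"]] += ACTION_WEIGHTS[event["action"]]
    return profile

print(build_profile(interactions).most_common())
# [('climate', 4), ('local_sports', 2)]
```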

 

Kent and Taylor (2021) commented that the design of many social media platforms limits meaningful discussions on complex issues. Deep, deliberative debates on complex problems like climate change or economic inequality are difficult on platforms optimized for advertising and data monetization.


References

Bossetta, M. (2018). The digital architectures of social media: Comparing political campaigning on Facebook, Twitter, Instagram, and Snapchat in the 2016 U.S. election. Journalism & Mass Communication Quarterly, 95(2), 471–496. https://doi.org/10.1177/1077699018763307

Kent, M. L., & Taylor, M. (2021). Fostering dialogic engagement: Toward an architecture of social media for social change. Social Media + Society, 7(1). https://orcid.org/0000-0001-5370-1896


Friday, January 24, 2025

Information Pollution: Determining When Information is Accurate and Meaningful


 

By Lilian H. Hill


Information pollution is the spread of misleading, irrelevant, or excessive information that disrupts people's ability to find accurate and meaningful knowledge. The United Nations Development Programme (2024) defines information pollution as the “spread of false, misleading, manipulated and otherwise harmful information” and further states that it is “threatening our ability to make informed decisions, participate in democratic processes, and contribute to the building of inclusive, peaceful and just societies” (para. 1).

In an earlier blog, we described the information ecosystem: the complex network of processes, technologies, individuals, and institutions involved in creating, distributing, consuming, and regulating information. Just as environmental pollution contaminates the physical world, information pollution clutters digital and cognitive spaces, making it difficult to distinguish between useful content and noise. When so much information is false and deceptive, people begin to distrust almost everything in the news.

 

Evolution of the News

The shift of news to social media has accelerated changes that were already reshaping journalism. In the 1950s and 1960s, TV news was treated as a public service, and news anchors were considered authoritative. By the 1980s, however, the entertainment conglomerates that purchased news stations prioritized profits, leading to the 24-hour news cycle and a focus on attention-grabbing stories. Pundits, offering opinions rather than facts, became prominent, altering the industry and public expectations of news (U.S. PIRG Education Fund, 2023). The PIRG Education Fund states that “misinformation that seems real - but isn’t - rapidly circulates through social media” (para. 1). When anyone with a camera and a computer can produce content, the supply of news information becomes virtually limitless, fueling social media feeds with countless 24-hour cycles. Unlike traditional opinion sections or dedicated pundit programs, social feeds blend opinions and facts indiscriminately, and the most sensational stories tend to thrive (U.S. PIRG Education Fund, 2023).

 

Types of Information Pollution

  • Misinformation: Inaccurate or false information shared unintentionally.

Example: Sharing outdated or incorrect medical advice without malicious intent.

  • Disinformation: False information deliberately spread to deceive.

Example: Fake news campaigns or propaganda.

  • Malinformation: Information that is based on reality but is deliberately shared with the intent to cause harm, manipulate, or deceive.

Example: Leaking private messages or emails that are factually accurate but shared publicly to harm someone's reputation or cause embarrassment intentionally.

  • Irrelevant Information: Content that distracts from meaningful or necessary knowledge.

Example: Clickbait articles that prioritize attention over substance.

  • Noise: Poorly organized, redundant, or low-quality data that hampers clarity.

Example: Forums with repetitive threads or unmoderated social media discussions.

 

Consequences of Information Pollution

Misinformation, disinformation, and malinformation, along with the rise of hate speech and propaganda, are fueling social divisions and eroding trust in public institutions. Consequences include cognitive overload, which strains mental resources, leading to stress and poor decision-making. Information pollution breeds mistrust as people struggle to verify the accuracy of available information. They may waste time and energy by trying to sift through low-quality content. Information pollution also increases susceptibility to emotional or ideological manipulation.

 

More consequences include:

  • Erosion of Trust in Institutions. The spread of false or manipulated information undermines public confidence in governments, media outlets, and other institutions. Misinformation can mislead voters, distort public debates, and interfere with fair elections.
  • Polarization and Social Divisions. Polarizing narratives deepen ideological divides, fueling hostility and hindering collaboration between groups. Hate speech and propaganda can push individuals toward extremist ideologies or actions.
  • Public Health Crises. False claims about medical treatments or vaccines can result in public health risks, such as reduced vaccination rates or harmful self-medication practices. Inaccurate information can lead to slow or ineffective responses during pandemics or natural disasters.
  • Economic Impacts. Companies may face reputational harm from false accusations or smear campaigns. Misinformation about investments or markets can lead to significant financial losses.
  • Undermining Knowledge and Education. The prevalence of false information blurs the lines between credible and unreliable sources, making it harder for people to discern the truth. Exposure to misinformation, particularly among younger audiences, can disrupt educational efforts and critical thinking.
  • Psychological and Emotional Toll. Exposure to alarming or false information can heighten public fear and anxiety. Persistent negativity and misinformation can make individuals feel alienated or distrustful of their communities.
  • Threats to National Security. States or organizations can exploit information pollution to destabilize societies or manipulate populations for political or strategic gains. Targeted campaigns can sow confusion during emergencies, hindering coordinated responses.

Mitigating Information Pollution

Addressing these consequences requires robust efforts, including promoting media literacy, enhancing regulation of online platforms, and fostering critical thinking skills to create a more informed and resilient society. Reducing information pollution in specific contexts like education and social media requires targeted strategies that promote clarity, trust, and meaningful engagement.

Strategies for combating information pollution include:

  1. Teach Media Literacy: Integrate critical thinking and fact-checking skills into educational curricula. Encourage students to evaluate sources based on credibility, bias, and evidence.
  2. Simplify and Organize Content: Present information in structured, digestible formats (e.g., summaries, infographics). Avoid overloading students with redundant materials.
  3. Use Curated Resources: Recommend vetted textbooks, articles, and tools. Leverage reputable platforms like Google Scholar or PubMed for research.
  4. Promote Inquiry-Based Learning: Encourage students to ask questions and seek evidence-based answers. Use the Socratic method to stimulate deeper understanding and engagement.
  5. Teach Digital Hygiene: Show students how to manage their digital consumption (e.g., limiting screen time, avoiding multitasking). Encourage mindful engagement with technology.

 

References

United Nations Development Programme (2024, February 5). Combating the crisis of information pollution: Recognizing and preventing the spread of harmful information. https://www.undp.org/egypt/blog/combating-crisis-information-pollution-recognizing-and-preventing-spread-harmful-information

U.S. PIRG (Public Interest Research Group) Education Fund (2023, August 14). How misinformation on social media has changed news. https://pirg.org/edfund/articles/misinformation-on-social-media/

