
Friday, June 27, 2025

Information Warfare, Virtual Politics, and Narrative Dominance


 

By Lilian H. Hill

As the Internet becomes more advanced, it is giving rise to new challenges for democracy. Social media platforms sort users into like-minded groups, forming echo chambers that reinforce existing beliefs. Pariser (2011) states that in a world shaped by personalization, we are shown news that aligns with our preferences and reinforces our existing beliefs. Because these filters operate invisibly, we may remain unaware of what information is excluded. This dynamic contributes to the growing disconnect between individuals with differing political views, making mutual understanding more difficult. It also enables extremist groups to harness these platforms for harmful purposes. While diverse opinions are inherent to politics, social media has created a fast-paced, ever-evolving space where political discord is continuously generated (De’Alba, 2024).
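
To make the filter-bubble mechanism concrete, here is a minimal sketch in Python, assuming a toy feed in which posts carry topic tags and a user's history is a list of tags from posts they previously engaged with. The function and data names are hypothetical and do not represent any platform's actual ranking code; the point is only that ranking purely by similarity to past engagement steadily filters out opposing viewpoints.

```python
from collections import Counter

def personalize_feed(candidate_posts, engagement_history, top_k=5):
    """Rank candidate posts by overlap with topics the user has already engaged with.

    candidate_posts: list of (post_id, set_of_topic_tags)
    engagement_history: list of topic tags from posts the user liked or shared
    """
    preference = Counter(engagement_history)  # how often each topic was engaged with

    def score(post):
        _, topics = post
        # Posts matching prior interests score high; unfamiliar topics score zero.
        return sum(preference[t] for t in topics)

    return sorted(candidate_posts, key=score, reverse=True)[:top_k]

# A user who engages only with one political viewpoint:
history = ["party_a", "party_a", "party_a", "sports"]
posts = [
    ("p1", {"party_a", "economy"}),
    ("p2", {"party_b", "economy"}),   # opposing view scores 0 and never surfaces
    ("p3", {"party_a"}),
    ("p4", {"sports"}),
]
print(personalize_feed(posts, history, top_k=2))  # only like-minded content is shown
```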

Information warfare is the strategic use of information to influence, disrupt, or manipulate public opinion, decision-making, or infrastructure, often in service of political, military, or economic goals. Instead of physical force, information warfare targets the cognitive and informational environments of adversaries. Pai (2024) comments that information warfare has become central to international politics in the Information Age in which society is shaped by the creation, use, and impact of information. According to Rid (2020), information warfare aims to undermine trust between individuals and institutions. It includes tactics like propaganda, disinformation, cyberattacks, and psychological operations. In today’s digital era, state and non-state actors use social media, news platforms, and digital technologies to conduct disinformation campaigns, often blurring the lines between truth and manipulation (Pomerantsev, 2019).

Virtual politics refers to the strategic use of digital technologies, including social media, artificial intelligence, and data analytics, to shape political perceptions, simulate democratic engagement, and manipulate public opinion. Originally coined in the post-Soviet context, the term captured how political elites created fake parties, opposition figures, and civil society groups to manufacture the illusion of pluralism and democratic process (Krastev, 2006). Contemporary virtual politics functions through multiple mechanisms. One tactic is the creation of simulated political actors and events, where governments or interest groups establish fake NGOs, social movements, or social media accounts to fragment opposition or feign civic engagement. These simulations create an illusion of public discourse while neutralizing dissent (Krastev, 2006). A contemporary example is Russia’s promotion of fake social media accounts and organizations during the 2016 U.S. presidential election. Russian operatives created false personas, Facebook pages, Twitter accounts, and even staged events that appeared to be organized by grassroots American groups (Mueller, 2019).

Another core feature is the widespread use of disinformation and memetic warfare. Ascott (2020) notes that while internet memes may appear harmless, memetic warfare involves the deliberate circulation of false or misleading content to polarize populations or erode trust in institutions (Marwick & Lewis, 2017). A popular meme, Pepe the Frog, is a green anthropomorphic frog usually portrayed with a humanoid body wearing a blue T-shirt. Originally apolitical, it expressed simple emotions like sadness and joy. The symbol was appropriated by the alt-right (alternative right), a far-right white nationalist movement. During the 2016 U.S. presidential election, some alt-right and white nationalist groups co-opted Pepe for propaganda, using edited versions to spread hateful or extremist messages. Another common meme, the NPC Wojak, is an expressionless, grey-headed figure with a blank stare, a triangular nose, and a neutral mouth. NPC is an acronym for non-player character, a term derived from video games. The NPC Wojak meme first appeared in 2018 to mock groups seen as conformist and gained traction before the 2018 U.S. midterm elections amid right-wing outrage over alleged social media censorship. Conservatives used it to portray liberals as unthinking “bots,” meaning individuals who lack an internal monologue, unquestioningly accept authority, engage in groupthink, or adopt positions that reflect conformity and obedience.

The most insidious aspect of virtual politics lies in data-driven psychological manipulation. Social media and other platforms collect vast amounts of personal data that are used for targeted marketing and psychological persuasion. When such targeting exploits emotional and cognitive vulnerabilities rather than appealing to reasoned judgment, persuasion shades into manipulation, eroding the foundation of informed democratic decision-making. Moreover, the performative nature of online political engagement often reduces participation to reactive, emotionally charged interactions, such as likes, shares, and outrage, instead of reasoned deliberation or civic dialogue (Sunstein, 2017).

 

Narrative Dominance and Virtual Politics

Narrative dominance refers to the phenomenon in which a particular storyline, interpretation, or framework becomes the prevailing lens through which events and realities are understood and perceived. It reflects the power to shape meaning, frame discourse, and control the perceived legitimacy of knowledge or truth. A contemporary example of narrative dominance is China’s global media campaign to reshape global perception of its handling of the COVID-19 pandemic, deflect blame, criticize Western failures, spread alternative origin theories, and suppress dissenting domestic narratives (Zhou & Zhang, 2021).

 

In media, politics, and culture, dominant narratives can marginalize alternative viewpoints and solidify ideological control. In the digital age, virtual politics is a key arena in which narrative dominance is exercised and contested. Virtual politics involves the creation and circulation of curated realities that prioritize perception over policy or truth and thrive on controlling emotional responses and engagement.

 

Virtual Politics and Democracy

The consequences of information warfare, virtual politics, and narrative dominance for democracy are profound. Together, they diminish trust in public institutions and blur the distinction between reality and fiction. As digital platforms become the dominant venue for political communication, traditional forms of accountability, such as investigative journalism, public debate, and civic literacy, are weakened. In authoritarian regimes, virtual politics serves as a tool for controlling dissent while projecting a false image of openness. Even in democratic societies, the same tools sway elections, fragment publics, and distort political will (Bennett & Livingston, 2018). The challenge for democratic societies, then, is to develop regulatory, technological, and civic strategies to counteract the manipulative aspects of virtual politics without undermining legitimate political speech.

 

Narrative dominance in virtual politics involves creating an environment in which alternative realities are delegitimized or neglected. Narrative dominance reflects a shift from a politics of substance to a politics of spectacle and emotional resonance. Understanding this dynamic is essential for analyzing contemporary media landscapes, political behavior, and the challenges of democratic resilience in the digital era. Virtual politics is not merely about politics taking place online; it represents a fundamental transformation in how political reality is constructed, experienced, and contested. Because public life is mediated by screens, algorithms, and data, understanding the mechanics of virtual politics is critical to preserving democratic integrity and fostering genuine political engagement.

 

References

Ascott, T. (2020, February 16). How memes are becoming the new frontier of information warfare. The Strategist. https://www.aspistrategist.org.au/how-memes-are-becoming-the-new-frontier-of-information-warfare/

Bennett, W. L., & Livingston, S. (2018). The disinformation order: Disruptive communication and the decline of democratic institutions. European Journal of Communication, 33(2), 122–139. https://doi.org/10.1177/0267323118760317

De’Alba, L. M. (2024, April 15). The virtual realities of politics: Entrenched narratives and political entertainment in the age of social media. Uttryck Magazine. https://www.uttryckmagazine.com/2024/04/15/the-virtual-realities-of-politics-entrenched-narratives-and-political-entertainment-in-the-age-of-social-media/

Gerbaudo, P. (2018). The digital party: Political organisation and online democracy. Pluto Press.

Isaak, J., & Hanna, M. J. (2018). User data privacy: Facebook, Cambridge Analytica, and privacy protection. Computer, 51(8), 56–59. https://doi.org/10.1109/MC.2018.3191268

Krastev, I. (2006). Virtual politics: Faking democracy in the post-Soviet world. Post-Soviet Affairs, 22(1), 63–67.

Marwick, A., & Lewis, R. (2017). Media manipulation and disinformation online. Data & Society Research Institute. https://datasociety.net/library/media-manipulation-and-disinfo-online/

Mueller, R. S. (2019). Report on the investigation into Russian interference in the 2016 presidential election. U.S. Department of Justice.

Pariser, E. (2011). The filter bubble: What the internet is hiding from you. Penguin.

Pomerantsev, P. (2019). This is not propaganda: Adventures in the war against reality. PublicAffairs.

Rid, T. (2020). Active measures: The secret history of disinformation and political warfare. Farrar, Straus and Giroux.

Sunstein, C. R. (2017). #Republic: Divided democracy in the age of social media. Princeton University Press.

Zhou, L., & Zhang, Y. (2021). China’s global propaganda push: COVID-19 and the strategic use of narrative. Journal of Contemporary China, 30(130), 611–628.

 

 

Friday, March 14, 2025

Can Social Media Platforms Be Trusted to Regulate Misinformation Themselves?


 

By Lilian H. Hill

Social media platforms wield immense influence over public discourse, acting as primary sources of news, political debate, and social movements. While they once advertised policies intended to combat misinformation, hate speech, and harmful content, their willingness and ability to enforce these policies effectively are questionable. The fundamental challenge is that these companies operate as profit-driven businesses, meaning their primary incentives do not always align with the public good. Myers and Grant (2023) commented that many platforms are investing fewer resources in combating misinformation. For example, Meta, which operates Facebook, Instagram, and Threads, recently announced that it has ended its fact-checking program and will instead rely on crowdsourcing to monitor misinformation (Chow, 2025). Likewise, X, formerly known as Twitter, slashed its trust and safety staff in 2022. Experts worry that dismantling safeguards once implemented to combat misinformation and disinformation decreases trust online (Myers & Grant, 2023).

 

Key Challenges in Self-Regulation

There are four key challenges to social media platforms’ self-regulation: 

 

 

1.    Financial Incentives and Engagement-Driven Algorithms

Social media platforms generate revenue primarily through advertising, which depends on user engagement. Unfortunately, research has shown that sensationalized, misleading, or divisive content often drives higher engagement than factual, nuanced discussions. This creates a conflict of interest: aggressively moderating misinformation and harmful content could reduce engagement, ultimately affecting their bottom line (Minow & Minow, 2023).

 

For example, Facebook’s own internal research (revealed in the Facebook Papers) found that its algorithms promoted divisive and emotionally charged content because it kept users on the platform longer. YouTube has been criticized for its recommendation algorithm, which has in the past directed users toward conspiracy theories and extremist content to maximize watch time. Because of these financial incentives, social media companies often take a reactive rather than proactive approach to content moderation, making changes only when public pressure or regulatory threats force them to act.
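
To make this incentive structure concrete, the sketch below is a deliberately simplified toy example in Python, not any platform's real ranking system; all names, weights, and numbers are hypothetical. It scores posts only on predicted engagement, so a sensational, low-credibility post outranks careful reporting by construction, because nothing in the objective penalizes it.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float      # estimated from past engagement with similar content
    predicted_comments: float
    predicted_shares: float
    credibility: float           # 0.0 (unverified/sensational) to 1.0 (well-sourced)

def engagement_score(post):
    # Revenue roughly tracks attention, so only engagement signals are weighted;
    # credibility does not enter the objective at all.
    return 1.0 * post.predicted_clicks + 2.0 * post.predicted_comments + 3.0 * post.predicted_shares

def rank_feed(posts):
    return sorted(posts, key=engagement_score, reverse=True)

feed = [
    Post("Measured policy analysis", 40, 5, 2, credibility=0.9),
    Post("Outrage-bait rumor", 90, 60, 45, credibility=0.2),
]
for p in rank_feed(feed):
    print(round(engagement_score(p), 1), p.text)
# The low-credibility rumor ranks first because the objective rewards attention alone.
```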

 

 

2.    Inconsistent and Arbitrary Enforcement

Even when platforms enforce their policies, they often do so inconsistently. Factors like political pressure, public relations concerns, and high-profile users can influence moderation decisions. Some influential figures or accounts with large followings receive more leniency than average users. For instance, politicians and celebrities have been allowed to spread misinformation with little consequence, while smaller accounts posting similar content face immediate bans. Enforcement of community guidelines can vary across different regions and languages, with content in English often being moderated more effectively than in less widely spoken languages. This leaves many vulnerable communities exposed to harmful misinformation and hate speech (Minow & Minow, 2023).

 

 

3.    Reduction of Trust and Safety Teams

In recent years, many social media companies have cut back on their Trust and Safety teams, reducing their ability to effectively moderate content. These teams are responsible for identifying harmful material, enforcing policies, and preventing the spread of misinformation. With fewer human moderators and fact-checkers, harmful content is more likely to spread unchecked, especially as AI-driven moderation systems still struggle with nuance, context, and misinformation detection (Minow & Minow, 2023).

 

 

4.    Lack of Transparency and Accountability

Social media companies rarely provide full transparency about how they moderate content, making it difficult for researchers, policymakers, and the public to hold them accountable. Platforms often do not disclose how their algorithms work, meaning users don’t know why they see certain content or how misinformation spreads. When harmful content spreads widely, companies often deflect responsibility, blaming bad actors rather than acknowledging the role of their own recommendation systems. Even when they do act, platforms tend not to share details about why specific moderation decisions were made, leading to accusations of bias or unfair enforcement (Minow & Minow, 2023).

 

 

What Can Individuals Do?

Disinformation and “fake news” pose a serious threat to democratic systems by shaping public opinion and influencing electoral discourse. You can protect yourself from disinformation by:

 

1.     Engaging with diverse perspectives. Relying on a limited number of like-minded news sources restricts your exposure to varied viewpoints and increases the risk of falling for hoaxes or false narratives. While not foolproof, broadening your sources improves your chances of accessing well-balanced information (National Center for State Courts, 2025).

 

2.     Approaching news with skepticism. Many online outlets prioritize clicks over accuracy, using misleading or sensationalized headlines to grab attention. Understanding that not everything you read is true, and that some sites specialize in spreading falsehoods, is crucial in today’s digital landscape. Learning to assess news credibility helps protect against misinformation (National Center for State Courts, 2025).

 

3.     Fact-checking before sharing. Before passing along information, verify the credibility of the source. Cross-check stories with reliable, unbiased sources known for high factual accuracy to determine what, and whom, you can trust (National Center for State Courts, 2025).

 

4.     Challenging false information. If you come across a misleading or false post, speak up. Addressing misinformation signals that spreading falsehoods is unacceptable. By staying silent, you allow misinformation to persist and gain traction (National Center for State Courts, 2025).

 

What Can Be Done Societally?

As a society, we all share the responsibility of preventing the spread of false information. Since self-regulation by social media platforms has proven unreliable, a multi-pronged approach is needed to ensure responsible content moderation and combat misinformation effectively. This approach includes:

 

1. Government Regulation and Policy Reform

Governments and regulatory bodies can play a role in setting clear guidelines for social media companies by implementing stronger content moderation laws that can require companies to take action against misinformation, hate speech, and harmful content. Transparency requirements can force platforms to disclose how their algorithms function and how moderation decisions are made. Financial penalties for failure to remove harmful content could incentivize more responsible practices. However, regulation must be balanced to avoid excessive government control over speech. It should focus on ensuring transparency, fairness, and accountability rather than dictating specific narratives (Balkin, 2021).

 

2. Public Pressure and Advocacy

Users and advocacy groups can push social media companies to do better by demanding more robust moderation policies that are fairly enforced across all users and regions, and by calling for independent oversight bodies to audit content moderation practices and hold platforms accountable. A recent poll conducted by Boston University’s College of Communication found that 72% of Americans believed it is acceptable for social media platforms to remove inaccurate information, and more than half of Americans distrust the efficacy of crowd-sourced monitoring of social media (Amazeen, 2025). Improved fact-checking partnerships are also needed to counter misinformation more effectively.

 

3. Media Literacy and User Responsibility

Since social media platforms alone cannot be relied upon to stop misinformation, individuals must take steps to protect themselves. They can verify information before sharing it by checking multiple sources and relying on reputable fact-checking organizations. Other actions include diversifying news sources and avoiding reliance on a single platform or outlet for information, reporting misinformation and harmful content by flagging false or dangerous posts, and educating others by encouraging media literacy in their communities to help reduce the spread of misinformation (Suciu, 2024).

 

Conclusion

Social media companies cannot be fully trusted to police themselves, as their financial interests often clash with the need for responsible moderation. While they have taken some steps to curb misinformation, enforcement remains inconsistent, and recent cuts to moderation teams have worsened the problem. The solution lies in a combination of regulation, public accountability, and increased media literacy to create a more reliable and trustworthy information ecosystem.

 

References

Amazeen, M. (2025). Americans expect social media content moderation. The Brink: Pioneering Research of Boston University. https://www.bu.edu/articles/2025/americans-expect-social-media-content-moderation/

Balkin, J. M. (2021). How to regulate (and not regulate) social media. Journal of Free Speech Law, 1(71), 73-96. https://www.journaloffreespeechlaw.org/balkin.pdf

Chow, A. R. (2025, January 7). Why Meta’s fact-checking change could lead to more misinformation on Facebook and Instagram. Time. https://time.com/7205332/meta-fact-checking-community-notes/

Minow & Minow. (2023). Social media companies should pursue serious self-supervision — soon: Response to Professors Douek and Kadri. Harvard Law Review, 136(8). https://harvardlawreview.org/forum/vol-136/social-media-companies-should-pursue-serious-self-supervision-soon-response-to-professors-douek-and-kadri/

Myers, S. L., & Grant, N. (2023, February 14). Combating disinformation wanes at social media giants. The New York Times. https://www.nytimes.com/2023/02/14/technology/disinformation-moderation-social-media.html

Suciu, P. (2024, January 2). How media literacy can help stop misinformation from spreading. Forbes. https://www.forbes.com/sites/petersuciu/2024/01/02/how-media-literacy-can-help-stop-misinformation-from-spreading/


Friday, March 7, 2025

How to Report Misleading and Inaccurate Content on Social Media

 



By Lilian H. Hill

 

Misinformation and disinformation, often called "fake news," spread rapidly on social media, especially during conflicts, wars, and emergencies. “Fake news” and disinformation campaigns injure the health of democratic systems because they can influence public opinion and electoral decision-making (National Center for State Courts, n.d.). With the overwhelming volume of content shared on these platforms, distinguishing truth from falsehood has become challenging. This issue has worsened as some social media companies have downsized their Trust and Safety teams, neglecting proper content moderation (Center for Countering Digital Hate, 2023).

 

Users can play a role in curbing the spread of false information. The first step is to verify information before sharing it and to be mindful of what we amplify and engage with. Equally important is reporting misinformation when we come across it. Social media platforms allow users to flag posts that promote falsehoods, conspiracies, or misleading claims, each enforcing its own Community Standards to regulate content (Center for Countering Digital Hate, 2023).

 

Reporting misleading content on social media platforms is essential in reducing the spread of misinformation. Unfortunately, some platforms fail to act on reported content (Center for Countering Digital Hate, 2023). Nonetheless, users should still report when misinformation and disinformation flood their timelines.

 

Here’s how to report misleading content on some of the most widely used platforms:

1. Facebook

  • Click on the three dots (•••) in the top-right corner of the post.
  • Select "Find support or report post."
  • Choose "False Information" or another relevant category.
  • Follow the on-screen instructions to complete the report.

 

2. Instagram

  • Tap the three dots (•••) in the top-right corner of the post.
  • Select "Report."
  • Choose "False Information" and follow the steps to submit your report.

 

3. X (formerly known as Twitter)

  • Click on the three dots (•••) on the tweet you want to report.
  • Select "Report Tweet."
  • Choose "It’s misleading" and specify whether it relates to politics, health, or other misinformation.
  • Follow the prompts to complete the report.

 

4. TikTok

  • Tap and hold the video or click on the share arrow.
  • Select "Report."
  • Choose "Misleading Information" and provide details if necessary.

 

5. YouTube

  • Click on the three dots (•••) below the video.
  • Select "Report."
  • Choose "Misinformation" and provide any additional details required.

 

6. Reddit

  • Click on the three dots (•••) or the "Report" button below the post or comment.
  • Select "Misinformation" if available or choose a related category.
  • Follow the instructions to submit your report.

 

7. LinkedIn

  • Click on the three dots (•••) in the top-right corner of the post.
  • Select "Report this post."
  • Choose "False or misleading information."

 

8. Threads

  • Click More next to the post.
  • Click Report and follow the on-screen instructions.

 

After reporting, the platform will review the content and take action if it violates its misinformation policies. Users can also enhance these efforts by sharing fact-checked sources in the comments or encouraging others to report the same misleading content.

 

References

Center for Countering Digital Hate (2023, October 24). How to report misinformation on social media. https://counterhate.com/blog/how-to-report-misinformation-on-social-media/

National Center for State Courts (n.d.). Disinformation and the public. https://www.ncsc.org/consulting-and-research/areas-of-expertise/communications,-civics-and-disinformation/disinformation/for-the-public


Thursday, February 13, 2025

Digital Architecture of Disinformation

 

By Lilian H. Hill

 

Fake news and disinformation are not new, but their rapid spread is unprecedented. Many individuals struggle to distinguish between real and fake news online, leading to widespread confusion (Hetler, 2025). Disinformation architecture refers to the systematic and strategic methods used to create, spread, and amplify false or misleading information. It involves a combination of technology, human effort, and coordinated tactics to manipulate public opinion, sow discord, or achieve specific political or social goals. This architecture leverages technology, social networks, and psychological manipulation to shape public perception, influence behavior, or achieve specific objectives, such as political, financial, or ideological gains.

 

Gal (2024) stated that, over the last few decades, social media platforms have transformed from basic networking sites into influential entities that shape public opinion, sway elections, impact public health, and influence social cohesion. For example, during the recent U.S. presidential election, platforms like X played a key role in disseminating both accurate information and misinformation, mobilizing voters, and affecting turnout. Likewise, during the COVID-19 pandemic, social media was instrumental in sharing public health guidelines but also became a hotspot for the spread of misinformation regarding vaccines and treatments.

 

Bossetta (2024) stated that a platform's digital architecture, meaning the technical frameworks that facilitate, restrict, and shape user behavior online, influences political communication on social media. Digital architecture determines what a platform enables, prevents, and structures in online communication, for example through likes, comments, retweets, and sharing. Ong and Cabañes (2018) commented that the basic blueprint of political disinformation campaigns strongly resembles corporate branding strategy. However, political disinformation requires its purveyors to make moral compromises, including distributing revisionist history, silencing political opponents, and hijacking news media attention.

 

The primary goals of disinformation campaigns are political manipulation, social division, economic gains, and the erosion of trust in institutions such as the media, science, and democracy. Their impacts are far-reaching, leading to increased polarization, manipulation of democratic processes, reputational damage, and harm to individuals' mental well-being (Bossetta, 2018).

 

Influence of Disinformation Architecture

Disinformation has far-reaching consequences, including the erosion of trust in key institutions such as journalism, science, and governance. By spreading misleading narratives, it undermines public confidence in credible sources of information. Additionally, disinformation fuels polarization by deepening societal divisions and promoting extreme or one-sided perspectives, making constructive dialogue more difficult. It also plays a significant role in manipulating democracies, influencing elections and policy debates through deceptive tactics that mislead voters and policymakers. Beyond its societal impacts, disinformation can cause direct harm to individuals by targeting their reputations, personal safety, and mental well-being, often leading to harassment, misinformation-driven fear, and public distrust.

 

Components of Disinformation Architecture

Disinformation architecture consists of several key components that manipulate public perception. It begins with reconnaissance, where the target audience and environment are analyzed to tailor the disinformation campaign effectively. Once this understanding is established, the necessary infrastructure is built, including creating believable personas, social media accounts, and groups to disseminate false information. Content creation follows, ensuring a continuous flow of misleading materials such as posts, memes, videos, and articles that support the disinformation narrative.

 

The core aspects of disinformation architecture include content creation, amplification channels, psychological tactics, targeting and segmentation, infrastructure support, and feedback loops. Content creation involves fabricating fake news, manipulating media, and employing deepfake technology to mislead audiences. Amplification is achieved through social media platforms, bot networks, and echo chambers that reinforce biased narratives. Psychological tactics exploit emotions, cognitive biases, and perceived authority to gain trust and engagement. Targeting and segmentation enable microtargeting strategies, exploiting demographic vulnerabilities to maximize influence. Infrastructure support includes data harvesting, dark web resources, and monetization channels that sustain disinformation campaigns. Feedback loops ensure that engagement algorithms prioritize viral and sensationalist content, keeping misinformation in circulation.

 

Amplification is crucial in spreading this content widely, utilizing bots, algorithms, and social-engineering techniques to maximize reach. Engagement is then sustained through interactions that deepen the impact of disinformation, often through trolling or disruptive tactics. Eventually, mobilization occurs, where unwitting users are encouraged to take action, leading to real-world consequences.

 

Mitigation of Disinformation Architecture

To mitigate disinformation, several strategies must be implemented. Regulation and policy measures should enforce platform transparency rules and penalize the deliberate spread of harmful content. According to Gal (2024), because social media platforms play an increasingly central role in information dissemination, ensuring the integrity of that information has become more urgent than ever, making discussions about regulation essential. Given their profound influence on nearly every aspect of society, these platforms should be treated as critical infrastructure—like energy grids and water supply systems—and subject to the same level of scrutiny and regulation to safeguard information integrity. Just as a power grid failure can cause widespread disruption, large-scale social media manipulation can erode democratic processes, hinder public health initiatives, and weaken social trust.

 

Technological solutions like AI-driven detection systems and verification tools can help identify and flag false information. Public awareness efforts should promote media literacy, encouraging individuals to critically evaluate information and question sensationalist narratives (Hetler, 2025). Finally, platform responsibility must be strengthened by modifying algorithms to prioritize credible sources and enhancing content moderation to limit the spread of disinformation. Understanding these mechanisms is essential to developing effective countermeasures against the growing threat of disinformation in the digital age.
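
A minimal sketch of the kind of algorithmic adjustment described above, assuming a platform has per-source credibility scores available (for example, from independent fact-checking or verification signals): engagement is scaled by credibility, so low-credibility items lose visibility and are routed to human review. The function, field names, weights, and threshold here are hypothetical illustrations, not an actual moderation pipeline.

```python
def rerank_with_credibility(posts, source_credibility, review_threshold=0.4):
    """Re-rank a feed so that source credibility, not raw engagement, dominates.

    posts: list of dicts with 'id', 'source', and 'engagement' keys
    source_credibility: dict mapping a source to a score in [0, 1]
    Returns the re-ranked feed and a list of post ids flagged for human review.
    """
    flagged = []
    for post in posts:
        cred = source_credibility.get(post["source"], 0.5)  # unknown sources get a neutral prior
        post["adjusted_score"] = post["engagement"] * cred  # credibility scales visibility
        if cred < review_threshold:
            flagged.append(post["id"])                      # low-credibility items go to moderators

    ranked = sorted(posts, key=lambda p: p["adjusted_score"], reverse=True)
    return ranked, flagged

feed = [
    {"id": "a", "source": "public_health_agency", "engagement": 120},
    {"id": "b", "source": "anonymous_rumor_site", "engagement": 900},
]
credibility = {"public_health_agency": 0.95, "anonymous_rumor_site": 0.1}
ranked, flagged = rerank_with_credibility(feed, credibility)
print([p["id"] for p in ranked], flagged)  # ['a', 'b'] ['b']
```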

 

References

Bossetta, M. (2018). The digital architectures of social media: Comparing political campaigning on Facebook, Twitter, Instagram, and Snapchat in the 2016 U.S. election. Journalism & Mass Communication Quarterly, 95(2), 471–496. https://doi.org/10.1177/1077699018763307

Bossetta, M. (2024, October 16). Digital architecture, social engineering, and networked disinformation on social media. EU Disinfo Lab. https://www.disinfo.eu/outreach/our-webinars/webinar-digital-architectures-social-engineering-and-networked-disinformation-with-michael-bossetta/

Gal, U. (2024, November 17). Want to combat online misinformation? Regulate the architecture of social media platforms, not their content. ABC. https://www.abc.net.au/religion/uri-gal-online-misinformation-democracy-social-media-algorithms/104591278

Hetler, A. (2025, January 7). 11 ways to spot disinformation on social media. TechTarget. https://www.techtarget.com/whatis/feature/10-ways-to-spot-disinformation-on-social-media

Ong, J. C., & Cabañes, J. V. A. (2018). The architecture of networked disinformation: Behind the scenes of troll accounts and fake news production in the Philippines. The Newton Tech4Dev Network. https://newtontechfordev.com/wp-content/uploads/2018/02/ARCHITECTS-OF-NETWORKED-DISINFORMATION-FULL-REPORT.pdf


Friday, November 29, 2024

When Misinformation Causes Harm

 

Image Credit: Pexels

By Lilian H. Hill

 

“We’re learning again what we’ve always known: Words have consequences.”

President Biden, March 19, 2021

The phrase "words have consequences" reflects a widely understood concept about the power of language and its impact on people and situations. While the quote may not have a single origin, its essence is found in numerous historical and philosophical texts and contemporary discussions. The phrase is particularly relevant to misinformation, as it highlights the real-world impact of false or misleading information on individuals and society. Misinformation, when spread through various channels, especially social media, news outlets, and word of mouth, can cause harm in several ways, mainly affecting people's beliefs, actions, and decisions.

We are seeing the results of misinformation in the ongoing recovery from Hurricanes Helene and Milton, both of which made landfall in Florida. On September 26, Hurricane Helene came ashore in the Big Bend region of Florida, near Perry, with maximum sustained winds of 140 mph. Hurricane Milton made landfall on Florida's west coast with wind speeds of 120 mph less than two weeks later. This blog post was written two months after the hurricanes, which are already old news in the information ecosystem. For the people dealing with their aftermath, however, recovery is daily life.

Following major weather disasters, misinformation frequently surges. With Hurricane Helene impacting several battleground states, the spread of false claims has intensified. Some of the most extreme conspiracy theories circulating online suggest that politicians manipulated the weather to target Republican regions and that the government aims to seize land in North Carolina for lithium mining (Tarrant, 2024).

Misinformation during hurricane recovery has severe and far-reaching consequences, as it complicates efforts to provide accurate information, distribute resources, and ensure the safety of affected communities. For example, the Federal Emergency Management Agency, or FEMA, had to address the rumor that the $750.00 Serious Needs Assistance payment would be the only assistance hurricane victims would receive. In reality, Serious Needs Assistance is disbursed as an “upfront, flexible payment for essential items like food, water, baby formula, breastfeeding supplies, medication and other serious disaster-related needs” (FEMA, 2024, para. 1).

Following that, “FEMA may provide money and other services to help you recover from losses caused by a presidentially declared disaster, such as damage to your home, car, and other personal items” (FEMA, 2024). FEMA can provide funds for temporary housing, repair or replacement of owner-occupied primary residences, and hazard mitigation assistance, depending on individual needs. Rumors about limited assistance can prevent people from applying for the help they need. The problem is so pervasive that FEMA maintains a Hurricane Rumors Response webpage in 12 languages that is updated with each new hurricane landfall.

Some key ways in which misinformation impacts hurricane recovery include:

 

1. Public Safety Risks

Misinformation about evacuation orders, shelter availability, or road conditions can put lives at risk. For example, if false information spreads that certain areas are safe to return to when they are not, people might expose themselves to dangerous flooding, structural instability, or other hazards. Similarly, misleading updates about ongoing storms can leave people unprepared for secondary dangers like storm surges or flash floods.

 

2. Strain on Emergency Services

False claims about the availability of emergency services or relief supplies can overwhelm first responders. If people are misinformed about where they can receive aid or assistance, they may flood the wrong locations or resources, further straining already limited services. In extreme cases, this can divert attention from critical rescue efforts or supply distribution, delaying recovery for those in real need.

 

3. Confusion Around Relief Resources

Misinformation about accessing federal or state disaster relief can hinder recovery efforts. False claims about the steps needed to apply for financial assistance (e.g., FEMA aid), insurance processes, or donation sites may lead to frustration and slow the distribution of funds and resources. Additionally, scammers often take advantage of these situations, spreading fake donation links or relief fund drives, which siphon resources away from legitimate efforts.

 

4. Economic and Community Impact

Post-hurricane recovery efforts often rely on accurate information about damaged infrastructure, business reopening, and rebuilding efforts. Misinformation about these topics can lead to prolonged economic hardship for communities, as people may hesitate to return or invest in rebuilding due to fear or uncertainty caused by false information. Additionally, misinformation about insurance claims or rebuilding permits can delay recovery for homeowners and businesses.

 

5. Health and Well-being

During recovery, misinformation can affect the physical and mental health of individuals. For example, false information about contaminated water sources, unapproved medications, or unverified health risks can cause unnecessary fear or lead people to take inappropriate actions that worsen their situation. In some cases, rumors or unverified claims about medical conditions (such as exposure to mold or diseases post-hurricane) can prevent people from seeking proper medical care.

In summary, misinformation during hurricane recovery can exacerbate existing challenges, delay crucial response efforts, and even result in loss of life. It underscores the importance of accurate communication and the responsible sharing of information during disaster response.

 

References

Biden, J. (2021, March 19). Remarks by President Biden at Emory University. White House Briefing. https://www.whitehouse.gov/briefing-room/speeches-remarks/2021/03/19/remarks-by-president-biden-at-emory-university/

FEMA (2024, October 8). Addressing Hurricane Helene rumors and scams. https://www.fema.gov/blog/addressing-hurricane-helene-rumors-and-scams

Tarrant, R. (2024, October 7). Misinformation has surged following Hurricane Helene. Here's a fact check. CBS News. https://www.cbsnews.com/news/hurricane-helene-fact-check-misinformation-conspiracy-theories/

 

Friday, June 21, 2024

Infodemics: How Misinformation and Disinformation Spread Disease


 

 

By Lilian H. Hill

 

An infodemic refers to an overabundance of information, both accurate and false, that spreads rapidly during an epidemic or crisis, making it difficult for people to find trustworthy sources and reliable guidance. The term is a blend of "information" and "epidemic". It highlights how the proliferation of information can parallel the spread of disease, creating additional challenges in managing the primary crisis. The term rose to prominence in 2020 during the COVID-19 pandemic. During epidemics, accurate information is even more critical than in normal times because people need it to adjust their behavior to protect themselves, their families, and their communities from infection (World Health Organization, 2020).

 

Contradictory messages and conflicting advice can create confusion and mistrust among the public (Borges et al., 2022). An infodemic can intensify or lengthen outbreaks when people are unsure about what they need to do to protect their health and the health of people around them. The situation is so dire that the World Health Organization (2020) published guidance to help individuals, community leaders, governments, and the private sector understand some key actions they can take to manage the COVID-19 infodemic.

 

Characteristics of Infodemics

Infodemics result in more information than most people can process effectively, especially those with low health literacy. With growing digitization, information spreads more rapidly. Alongside accurate information, a significant amount of misinformation (false or misleading information shared without harmful intent) and disinformation (false information deliberately spread to deceive) is disseminated. Information spreads quickly, particularly through interconnected social media and digital platforms, reaching global audiences instantaneously. Infodemics often feature highly emotional, sensational, or alarming content that captures attention but may not be accurate or helpful.

 

Examples of Infodemics

Three global epidemics have occurred in recent memory, each accompanied by infodemics:

 

  1. COVID-19 Pandemic: During the COVID-19 pandemic, an infodemic emerged with vast amounts of information about the virus, treatments, vaccines, and public health measures. This included a significant spread of misinformation and conspiracy theories.

 

  2. Ebola Outbreaks: Past Ebola outbreaks have seen infodemics where misinformation about the disease’s transmission and treatments spread rapidly, complicating response efforts.

 

  3. Zika Virus: The Zika virus outbreak was accompanied by an infodemic, with rumors and false information about the virus’s effects and prevention measures.

 

Understanding and addressing infodemics is crucial for effective crisis management and public health response, ensuring that accurate information prevails and supports informed decision-making by individuals and communities. With human encroachment on natural areas, the likelihood of future epidemics is high (Shafaati et al., 2023).

 

Consequences of Infodemics

The flood of conflicting information can cause confusion, anxiety, and stress, making it hard for individuals to know how to respond appropriately to the crisis. Trust in authorities, experts, and media can be eroded when people encounter inconsistent messages or feel they are being misled. Misinformation can lead to harmful behaviors, such as using unproven treatments, ignoring public health advice, or spreading conspiracy theories. The spread of false information can hamper public health responses and crisis management efforts, as resources may be diverted to combat misinformation instead of focusing solely on the crisis. The plethora of unreliable health information delays care provision and increases the occurrence of hateful and divisive rhetoric (Borges et al., 2022). Infodemics can exacerbate social divisions, as different groups may cling to varying sets of information and beliefs, leading to polarized views and conflicts.

 

Managing Infodemics

Another new term is “infodemiology,” a combination of information and epidemiology. Epidemiology, a fundamental aspect of public health, is the study of the distribution of health and disease patterns within populations so that this information can be used to address health issues. It aims to minimize the risk of adverse health outcomes through community education, research, and health policy development (World Health Organization, 2024). Infodemiology is the study of the flood of information and how to manage it for public health. Infodemic management involves systematically applying risk- and evidence-based analyses and strategies to control the spread of misinformation and mitigate its effects on health behaviors during health crises.

 

For example, in their systematic review of publications about health infodemics and misinformation, Borges et al. (2022) commented that “social media has been increasingly propagating poor-quality, health-related information during pandemics, humanitarian crises and health emergencies. Such spreading of unreliable evidence on health topics amplifies vaccine hesitancy and promotes unproven treatments” (p. 556). However, they noted that social media has also been successfully employed for crisis communication and management during emerging infectious disease pandemics and significantly improved knowledge awareness and compliance with health recommendations. For governments, health authorities, researchers, and clinicians, promoting and disseminating reliable health information is essential to counteract false or misleading health information spread on social media.

Image Credit: Anna Shvets, Pexels

 

Strategies for Combating Infodemics

For government officials, public health professionals, and educators, preparation is essential to prevent the next pandemic disaster (Shafaati et al., 2023). Strengthening public health services and investing in research and development for new medications and vaccines are crucial steps. Expanding access to education and resources in vulnerable communities is also necessary to enhance understanding and encourage preventive actions. Additionally, investing in international cooperation is vital to support countries at risk of outbreaks and provide economic assistance to those affected by pandemics.

 

  1. Promoting Accurate Information: Authorities and experts must provide clear, accurate, and timely information. This includes regular updates from trusted sources like public health organizations.

 

  2. Media Literacy: Enhancing public media literacy can help individuals critically evaluate the information they encounter, recognize reliable sources, and avoid sharing unverified claims.

 

  3. Fact-Checking and Verification: Fact-checking organizations and platforms are crucial in verifying information and debunking false claims. Prominent placement of fact-checked information can help correct misconceptions.

 

  4. Algorithmic Adjustments: Social media platforms and search engines can adjust their algorithms to prioritize credible sources and reduce the visibility of misleading content.

 

  5. Collaboration and Coordination: Effective communication and coordination among governments, health organizations, media, and tech companies are essential to manage the flow of information and combat misinformation.

 

  6. Public Engagement: Engaging with communities and addressing their concerns directly can build trust and ensure accurate information reaches diverse audiences. This may include town hall meetings, Q&A sessions, and community-specific communications.

 

References

Borges do Nascimento, I. J., Pizarro, A. B., Almeida, J. M., Azzopardi-Muscat, N., Gonçalves, M. A., Björklund, M., & Novillo-Ortiz, D. (2022). Infodemics and health misinformation: A systematic review of reviews. Bulletin of the World Health Organization, 100(9), 544–561. https://doi.org/10.2471/BLT.21.287654

Shafaati, M., Chopra, H., Priyanka, Khandia, R., Choudhary, O. P., & Rodriguez-Morales, A. J. (2023). The next pandemic catastrophe: Can we avert the inevitable? New Microbes and New Infections, 52, 101110. https://doi.org/10.1016/j.nmni.2023.101110

World Health Organization (2020). Managing the COVID-19 infodemic: A call for action. https://iris.who.int/bitstream/handle/10665/334287/9789240010314-eng.pdf?sequence=1

World Health Organization (2024). Let’s flatten the infodemic curve. https://www.who.int/news-room/spotlight/let-s-flatten-the-infodemic-curve

 


