Showing posts with label Misinformation. Show all posts

Friday, June 20, 2025

Data Literacy and Data Justice

By Lilian H. Hill

Data literacy is a fundamental skill set entailing the ability to read, write, understand, and communicate data in context. It empowers individuals and organizations to derive meaning from data, make informed decisions, and solve problems. An interdisciplinary competency, data literacy integrates elements of mathematics, science, and information technology and requires understanding data sources and constructs, analytical methods, and AI techniques (Stobierski, 2021). Being data literate does not mean being a data scientist; it means having a general understanding of data concepts and how to apply them effectively.

The rapid expansion of digital information in today’s world has triggered a significant shift in how knowledge and skills are valued, making the ability to understand, interpret, and extract meaningful insights from data a vital competency. Schenck and Duschl (2024) comment that data increasingly drive decisions across all sectors of society, and promoting data literacy has become essential to preparing individuals to participate actively and thoughtfully in the digital age. In education, this changing environment calls for a reimagined approach that goes beyond conventional literacies, positioning data literacy as a core skill necessary for future success.

Skills of Data Literacy

Building data literacy skills is an essential process in today’s data-driven world. It begins with learning the fundamentals of data, including understanding different types such as quantitative versus qualitative data, and recognizing basic statistical concepts like mean, median, standard deviation, and correlation. Familiarity with common data formats (e.g., CSV, JSON, Excel files) lays the groundwork for deeper analytical work (Mandinach & Gummer, 2016). Introductory courses from platforms like Coursera or edX, as well as open-access tutorials and videos, offer accessible entry points for building this foundational knowledge.

To apply data literacy practically, individuals should become familiar with commonly used tools. Beginners might start with spreadsheets like Microsoft Excel or Google Sheets to learn basic data manipulation and chart creation. As comfort grows, they can explore more advanced platforms such as Tableau or Power BI for data visualization or learn coding languages like Python (using libraries such as Pandas) and SQL for deeper analysis. Practicing with real-world data available from open sources like government portals or World Bank Open Data helps bridge theory and application.
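A minimal sketch of the kind of Pandas work described above might look like the following, assuming the pandas library is installed; the tiny CSV dataset here is invented for illustration rather than drawn from a real open-data portal:

```python
import io
import pandas as pd

# A small CSV in the common format mentioned above (invented data)
csv_text = """country,year,literacy_rate
Brazil,2020,93.2
Brazil,2021,93.5
Kenya,2020,81.5
Kenya,2021,82.6
"""

# Load the CSV into a DataFrame (reading from a string stands in for a file)
df = pd.read_csv(io.StringIO(csv_text))

# Basic manipulation: filter rows for one year, then summarize by group
recent = df[df["year"] == 2021]
by_country = df.groupby("country")["literacy_rate"].mean()
print(recent)
print(by_country)
```

The same filter-and-summarize pattern carries over directly to spreadsheet formulas and SQL `WHERE`/`GROUP BY` queries, which is why practicing it in any one tool transfers to the others.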

A crucial next step is learning to interpret data visualizations. Charts, graphs, and dashboards are the primary means of communicating data, and understanding how to read them critically is crucial for avoiding misinterpretation. Tools such as Gapminder or data stories from Our World in Data provide engaging ways to practice understanding patterns and trends visually (Knaflic, 2015).

Equally important is the development of critical thinking skills about data itself. This means asking questions such as: Where did the data come from? Is the sample size sufficient? Is there potential for bias or missing information? Cultivating skepticism and inquiry when reviewing data sources helps prevent the spread and influence of misinformation (Bhargava et al., 2021).

Communication is another fundamental part of data literacy. It’s not enough to understand data. The ability to clearly and ethically explain insights is equally important. This involves selecting appropriate visuals, simplifying complex ideas, and telling compelling data-driven stories (Knaflic, 2015). Platforms like Flourish or Datawrapper can help users experiment with design and narrative techniques that enhance data communication.

Ultimately, data literacy must be maintained and continually updated through ongoing learning. Schenck and Duschl (2024) call for a transformative change in educational practices, recommending a move away from formal, theory-first instruction toward contextual, inquiry-based learning. This change is viewed as crucial for equipping students with the practical skills necessary to apply data literacy effectively in real-world situations. Data literacy is not only a technical skill but also a civic and ethical one, enabling people to make informed decisions and engage in democratic processes.

Data Literacy and Social Justice

One of the core connections between big data analytics and data literacy lies in the ability to manage and critically evaluate the quality and relevance of data. Big data involves massive, unstructured datasets sourced from sensors, social media, transactional records, and more. This can introduce biases, inconsistencies, and privacy risks. Data-literate individuals are better equipped to ask critical questions: Where does the data come from? Is it representative? What algorithms are being applied? Who might be harmed by this analysis? These questions are especially important in fields like healthcare, criminal justice, education, and marketing, where big data can amplify existing societal inequities if not interpreted responsibly (boyd & Crawford, 2012).

Data justice aims to ensure that data practices do not perpetuate or exacerbate structural inequities and social injustices, but instead promote human rights, dignity, and democratic participation (Dencik & Sanchez-Monedero, 2022). The increasing dependence on data-driven technologies in all aspects of social life is a driving force behind major shifts in science, government, business, and civil society. While these changes are frequently promoted for their potential to improve efficiency and decision-making, they also introduce profound societal challenges. Data justice refers to the fair and equitable treatment of individuals and communities in the collection, analysis, use, and governance of data. It emphasizes that data are not neutral. How data are gathered, interpreted, and applied often reflect existing power structures, biases, and inequalities. Data justice has emerged as a critical framework for addressing these challenges through a lens centered on social justice. For example, if a predictive policing algorithm unfairly targets neighborhoods based on biased crime data, it may lead to over-policing in communities of color. A data justice approach would question the assumptions behind the data, advocate for community oversight, and explore alternative models that prioritize community safety without reinforcing systemic bias.

Finally, data literacy supports democratic participation in a big data society. As governments and corporations increasingly rely on data to guide decisions, including pandemic response, urban planning, and surveillance, citizens need the skills to engage with data-related policies, challenge unfair uses, and advocate for transparency and accountability. Without broad-based data literacy, power becomes concentrated in the hands of a few data-literate experts and institutions, potentially reinforcing social and economic inequalities (D’Ignazio & Klein, 2020).

References

Bhargava, R., Kadouaki, R., Bhargava, E., Castro, G., & D’Ignazio, C. (2021). Data murals: Using the arts to build data literacy. The Journal of Community Informatics, 17(1), 1–15. https://doi.org/10.15353/joci.v17i1.4602

boyd, d., & Crawford, K. (2012). Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society, 15(5), 662–679. https://doi.org/10.1080/1369118X.2012.678878

Dencik, L., & Sanchez-Monedero, J. (2022). Data justice. Internet Policy Review, 11(1). https://doi.org/10.14763/2022.1.1615

D’Ignazio, C., & Klein, L. F. (2020). Data feminism. MIT Press.

Jones, B. (2025). Data literacy fundamentals: Understanding the power and value of data (2nd ed.). Data Literacy Press.

Knaflic, C. N. (2015). Storytelling with data: A data visualization guide for business professionals. Wiley.

Mandinach, E. B., & Gummer, E. S. (2016). Data literacy for educators: Making it count in teacher preparation and practice. Teachers College Press.

Schenck, K. E., & Duschl, R. A. (2024). Context, language, and technology in data literacy. Routledge Open Research, 3(19). https://doi.org/10.12688/routledgeopenres.18160.1

Stobierski, T. (2021). Data literacy: An introduction for business. Harvard Business Review Online. https://online.hbs.edu/blog/post/data-literacy

Taylor, L. (2017). What is data justice? The case for connecting digital rights and freedoms globally. Big Data & Society, 4(2). https://doi.org/10.1177/2053951717736335

 

Friday, March 14, 2025

Can Social Media Platforms Be Trusted to Regulate Misinformation Themselves?



By Lilian H. Hill

Social media platforms wield immense influence over public discourse, acting as primary sources of news, political debate, and social movements. While they once advertised policies intended to combat misinformation, hate speech, and harmful content, their willingness and ability to enforce these policies effectively are questionable. The fundamental challenge is that these companies operate as profit-driven businesses, meaning their primary incentives do not always align with the public good. Myers and Grant (2023) commented that many platforms are investing fewer resources in combating misinformation. For example, Meta, which operates Facebook, Instagram, and Threads, recently announced that it has ended its fact-checking program and will instead rely on crowdsourcing to monitor misinformation (Chow, 2025). Likewise, X, formerly known as Twitter, slashed its trust and safety staff in 2022. Experts worry that diminishing the safeguards once implemented to combat misinformation and disinformation decreases trust online (Myers & Grant, 2023).

 

Key Challenges in Self-Regulation

There are four key challenges to social media platforms’ self-regulation: 

 

 

1.    Financial Incentives and Engagement-Driven Algorithms

Social media platforms generate revenue primarily through advertising, which depends on user engagement. Unfortunately, research has shown that sensationalized, misleading, or divisive content often drives higher engagement than factual, nuanced discussions. This creates a conflict of interest: aggressively moderating misinformation and harmful content could reduce engagement, ultimately affecting their bottom line (Minow & Minow, 2023).

 

For example, Facebook’s own internal research (revealed in the Facebook Papers) found that its algorithms promoted divisive and emotionally charged content because it kept users on the platform longer. YouTube has been criticized for its recommendation algorithm, which has in the past directed users toward conspiracy theories and extremist content to maximize watch time. Because of these financial incentives, social media companies often take a reactive rather than proactive approach to content moderation, making changes only when public pressure or regulatory threats force them to act.

 

 

2.    Inconsistent and Arbitrary Enforcement

Even when platforms enforce their policies, they often do so inconsistently. Factors like political pressure, public relations concerns, and high-profile users can influence moderation decisions. Some influential figures or accounts with large followings receive more leniency than average users. For instance, politicians and celebrities have been allowed to spread misinformation with little consequence, while smaller accounts posting similar content face immediate bans. Enforcement of community guidelines can vary across different regions and languages, with content in English often being moderated more effectively than in less widely spoken languages. This leaves many vulnerable communities exposed to harmful misinformation and hate speech (Minow & Minow, 2023).

 

 

3.    Reduction of Trust and Safety Teams

In recent years, many social media companies have cut back on their Trust and Safety teams, reducing their ability to effectively moderate content. These teams are responsible for identifying harmful material, enforcing policies, and preventing the spread of misinformation. With fewer human moderators and fact-checkers, harmful content is more likely to spread unchecked, especially as AI-driven moderation systems still struggle with nuance, context, and misinformation detection (Minow & Minow, 2023).

 

 

4.    Lack of Transparency and Accountability

Social media companies rarely provide full transparency about how they moderate content, making it difficult for researchers, policymakers, and the public to hold them accountable. Platforms often do not disclose how their algorithms work, meaning users don’t know why they see certain content or how misinformation spreads. When harmful content spreads widely, companies often deflect responsibility, blaming bad actors rather than acknowledging the role of their own recommendation systems. Even when they do act, platforms tend not to share details about why specific moderation decisions were made, leading to accusations of bias or unfair enforcement (Minow & Minow, 2023).

 

 

What Can Individuals Do?

Disinformation and “fake news” pose a serious threat to democratic systems by shaping public opinion and influencing electoral discourse. You can protect yourself from disinformation by:

 

1.     Engaging with diverse perspectives. Relying on a limited number of like-minded news sources restricts your exposure to varied viewpoints and increases the risk of falling for hoaxes or false narratives. While not foolproof, broadening your sources improves your chances of accessing well-balanced information (National Center for State Courts, 2025).

 

2.     Approaching news with skepticism. Many online outlets prioritize clicks over accuracy, using misleading or sensationalized headlines to grab attention. Understanding that not everything you read is true, and that some sites specialize in spreading falsehoods, is crucial in today’s digital landscape. Learning to assess news credibility helps protect against misinformation (National Center for State Courts, 2025).

 

3.     Fact-checking before sharing. Before passing along information, verify the credibility of the source. Cross-check stories with reliable, unbiased sources known for high factual accuracy to determine what, and whom, you can trust (National Center for State Courts, 2025).

 

4.     Challenging false information. If you come across a misleading or false post, speak up. Addressing misinformation signals that spreading falsehoods is unacceptable. By staying silent, you allow misinformation to persist and gain traction (National Center for State Courts, 2025).

 

What Can Be Done Societally?

As a society, we all share the responsibility of preventing the spread of false information. Since self-regulation by social media platforms has proven unreliable, a multi-pronged approach is needed to ensure responsible content moderation and combat misinformation effectively. This approach includes:

 

1. Government Regulation and Policy Reform

Governments and regulatory bodies can play a role in setting clear guidelines for social media companies by implementing stronger content moderation laws that can require companies to take action against misinformation, hate speech, and harmful content. Transparency requirements can force platforms to disclose how their algorithms function and how moderation decisions are made. Financial penalties for failure to remove harmful content could incentivize more responsible practices. However, regulation must be balanced to avoid excessive government control over speech. It should focus on ensuring transparency, fairness, and accountability rather than dictating specific narratives (Balkin, 2021).

 

2. Public Pressure and Advocacy

Users and advocacy groups can push social media companies to do better by demanding more robust moderation policies that are enforced fairly across all users and regions. Independent oversight bodies could audit content moderation practices and hold platforms accountable. A recent poll conducted by Boston University’s College of Communication found that 72% of Americans believe it is acceptable for social media platforms to remove inaccurate information, while more than half of Americans distrust the efficacy of crowdsourced monitoring of social media (Amazeen, 2025). Improved fact-checking partnerships are also needed to counter misinformation more effectively.

 

3. Media Literacy and User Responsibility

Since social media platforms alone cannot be relied upon to stop misinformation, individuals must take steps to protect themselves. They can verify information before sharing by checking multiple sources and relying on reputable fact-checking organizations. Other actions include diversifying news sources rather than relying on a single platform or outlet, reporting misinformation by flagging false or dangerous content, and encouraging media literacy in their communities to help reduce the spread of misinformation (Suciu, 2024).

 

Conclusion

Social media companies cannot be fully trusted to police themselves, as their financial interests often clash with the need for responsible moderation. While they have taken some steps to curb misinformation, enforcement remains inconsistent, and recent cuts to moderation teams have worsened the problem. The solution lies in a combination of regulation, public accountability, and increased media literacy to create a more reliable and trustworthy information ecosystem.

 

References

Amazeen, M. (2025). Americans expect social media content moderation. The Brink: Pioneering Research of Boston University. https://www.bu.edu/articles/2025/americans-expect-social-media-content-moderation/

Balkin, J. M. (2021). How to regulate (and not regulate) social media. Journal of Free Speech Law, 1(71), 73-96. https://www.journaloffreespeechlaw.org/balkin.pdf

Chow, A. R. (2025, January 7). Why Meta’s fact-checking change could lead to more misinformation on Facebook and Instagram. Time. https://time.com/7205332/meta-fact-checking-community-notes/

Minow, M., & Minow, N. (2023). Social media companies should pursue serious self-supervision — soon: Response to Professors Douek and Kadri. Harvard Law Review, 136(8). https://harvardlawreview.org/forum/vol-136/social-media-companies-should-pursue-serious-self-supervision-soon-response-to-professors-douek-and-kadri/

Myers, S. L., & Grant, N. (2023, February 14). Combating disinformation wanes at social media giants. New York Times. https://www.nytimes.com/2023/02/14/technology/disinformation-moderation-social-media.html

Suciu, P. (2024, January 2). How media literacy can help stop misinformation from spreading. Forbes. https://www.forbes.com/sites/petersuciu/2024/01/02/how-media-literacy-can-help-stop-misinformation-from-spreading/


Friday, March 7, 2025

How to Report Misleading and Inaccurate Content on Social Media

 



By Lilian H. Hill

 

Misinformation and disinformation, often called "fake news," spread rapidly on social media, especially during conflicts, wars, and emergencies. “Fake news” and disinformation campaigns injure the health of democratic systems because they can influence public opinion and electoral decision-making (National Center for State Courts, n.d.). With the overwhelming content shared on these platforms, distinguishing truth from falsehood has become challenging. This issue has worsened as some social media companies have downsized their Trust and Safety teams, neglecting proper content moderation (Center for Countering Digital Hate, 2023).

 

Users can play a role in curbing the spread of false information. The first step is to verify information before sharing it, being mindful of what we amplify and engage with. Equally important is reporting misinformation when we come across it. Social media platforms allow users to flag posts that promote falsehoods, conspiracies, or misleading claims, each enforcing its own Community Standards to regulate content (Center for Countering Digital Hate, 2023).

 

Reporting misleading content on social media platforms is essential in reducing the spread of misinformation. Unfortunately, some platforms fail to act on reported content (Center for Countering Digital Hate, 2023). Nonetheless, users should still report when misinformation and disinformation flood their timelines.

 

Here’s how to report misleading content on some of the most widely used platforms:

1. Facebook

  • Click on the three dots (•••) in the top-right corner of the post.
  • Select "Find support or report post."
  • Choose "False Information" or another relevant category.
  • Follow the on-screen instructions to complete the report.

 

2. Instagram

  • Tap the three dots (•••) in the top-right corner of the post.
  • Select "Report."
  • Choose "False Information" and follow the steps to submit your report.

 

3. X (formerly known as Twitter)

  • Click on the three dots (•••) on the tweet you want to report.
  • Select "Report Tweet."
  • Choose "It’s misleading" and specify whether it relates to politics, health, or other misinformation.
  • Follow the prompts to complete the report.

 

4. TikTok

  • Tap and hold the video or click on the share arrow.
  • Select "Report."
  • Choose "Misleading Information" and provide details if necessary.

 

5. YouTube

  • Click on the three dots (•••) below the video.
  • Select "Report."
  • Choose "Misinformation" and provide any additional details required.

 

6. Reddit

  • Click on the three dots (•••) or the "Report" button below the post or comment.
  • Select "Misinformation" if available or choose a related category.
  • Follow the instructions to submit your report.

 

7. LinkedIn

  • Click on the three dots (•••) in the top-right corner of the post.
  • Select "Report this post."
  • Choose "False or misleading information."

 

8. Threads

  • Click the three dots (•••) next to the post.
  • Click Report and follow the on-screen instructions.

 

After reporting, the platform will review the content and take action if it violates its misinformation policies. Users can also support these efforts by sharing fact-checked sources in the comments or encouraging others to report the same misleading content.

 

References

Center for Countering Digital Hate (2023, October 24). How to report misinformation on social media. https://counterhate.com/blog/how-to-report-misinformation-on-social-media/

National Center for State Courts (n.d.). Disinformation and the public. https://www.ncsc.org/consulting-and-research/areas-of-expertise/communications,-civics-and-disinformation/disinformation/for-the-public


Friday, November 29, 2024

When Misinformation Causes Harm

 

Image Credit: Pexels

By Lilian H. Hill

 

“We’re learning again what we’ve always known: Words have consequences.”

President Biden, March 19, 2021

The phrase "words have consequences" reflects a widely understood idea about the power of language and its impact on people and situations. While the quote may not have a single origin, its essence appears in numerous historical and philosophical texts and contemporary discussions. The phrase is particularly relevant to misinformation, as it highlights the real-world impact of false or misleading information on individuals and society. Misinformation, when spread through various channels, especially social media, news outlets, and word of mouth, can cause harm in several ways, mainly affecting people's beliefs, actions, and decisions.

We are seeing the results of misinformation in the ongoing recovery from Hurricanes Helene and Milton, both of which made landfall in Florida. On September 26, Hurricane Helene came ashore in the Big Bend region of Florida, near Perry, with maximum sustained winds of 140 mph. Less than two weeks later, Hurricane Milton made landfall on Florida's west coast with wind speeds of 120 mph. This blog post was written two months after these events, by which time they were old news in the information ecosystem; for the people dealing with the hurricanes' aftermath, however, they remain daily life.

Following major weather disasters, misinformation frequently surges. With Hurricane Helene impacting several battleground states, the spread of false claims has intensified. Some of the most extreme conspiracy theories circulating online suggest that politicians manipulated the weather to target Republican regions and that the government aims to seize land in North Carolina for lithium mining (Tarrant, 2024).

Misinformation during hurricane recovery has severe and far-reaching consequences, as it complicates efforts to provide accurate information, distribute resources, and ensure the safety of affected communities. For example, the Federal Emergency Management Agency (FEMA) had to address the rumor that the $750 Serious Needs Assistance payment would be the only assistance hurricane victims would receive. In reality, Serious Needs Assistance is an “upfront, flexible payment for essential items like food, water, baby formula, breastfeeding supplies, medication and other serious disaster-related needs” (FEMA, 2024, para. 1).

Beyond that, “FEMA may provide money and other services to help you recover from losses caused by a presidentially declared disaster, such as damage to your home, car, and other personal items” (FEMA, 2024). Depending on individual needs, FEMA can fund temporary housing, repair or replacement of owner-occupied primary residences, and hazard mitigation assistance. Rumors about limited assistance can prevent people from applying for the help they need. The problem is so pervasive that FEMA maintains a Hurricane Rumors Response webpage in 12 languages, updated with each new hurricane landfall.

Some key ways in which misinformation impacts hurricane recovery include:

 

1. Public Safety Risks

Misinformation about evacuation orders, shelter availability, or road conditions can put lives at risk. For example, if false information spreads that certain areas are safe to return to when they are not, people might expose themselves to dangerous flooding, structural instability, or other hazards. Similarly, misleading updates about ongoing storms can leave people unprepared for secondary dangers like storm surges or flash floods.

 

2. Strain on Emergency Services

False claims about the availability of emergency services or relief supplies can overwhelm first responders. When people are misinformed about where to receive aid or assistance, they may flood the wrong locations or resources, further straining already limited services. In extreme cases, this can divert attention from critical rescue efforts or supply distribution, delaying recovery for those in real need.

 

3. Confusion Around Relief Resources

Misinformation about accessing federal or state disaster relief can hinder recovery efforts. False claims about the steps needed to apply for financial assistance (e.g., FEMA aid), insurance processes, or donation sites may lead to frustration and slow the distribution of funds and resources. Additionally, scammers often take advantage of these situations, spreading fake donation links or relief fund drives, which siphon resources away from legitimate efforts.

 

4. Economic and Community Impact

Post-hurricane recovery efforts often rely on accurate information about damaged infrastructure, business reopening, and rebuilding efforts. Misinformation about these topics can lead to prolonged economic hardship for communities, as people may hesitate to return or invest in rebuilding due to fear or uncertainty caused by false information. Additionally, misinformation about insurance claims or rebuilding permits can delay recovery for homeowners and businesses.

 

5. Health and Well-being

During recovery, misinformation can affect the physical and mental health of individuals. For example, false information about contaminated water sources, unapproved medications, or unverified health risks can cause unnecessary fear or lead people to take inappropriate actions that worsen their situation. In some cases, rumors or unverified claims about medical conditions (such as exposure to mold or diseases post-hurricane) can prevent people from seeking proper medical care.

In summary, misinformation during hurricane recovery can exacerbate existing challenges, delay crucial response efforts, and even result in loss of life. It underscores the importance of accurate communication and the responsible sharing of information during disaster response.

 

References

Biden, J. (2021, March 19). Remarks by President Biden at Emory University. White House Briefing Room. https://www.whitehouse.gov/briefing-room/speeches-remarks/2021/03/19/remarks-by-president-biden-at-emory-university/

FEMA (2024, October 8). Addressing Hurricane Helene rumors and scams. https://www.fema.gov/blog/addressing-hurricane-helene-rumors-and-scams

Tarrant, R. (2024, October 7). Misinformation has surged following Hurricane Helene. Here's a fact check. CBS News. https://www.cbsnews.com/news/hurricane-helene-fact-check-misinformation-conspiracy-theories/

 

Friday, June 21, 2024

Infodemics: How Misinformation and Disinformation Spread Disease



By Lilian H. Hill

 

An infodemic refers to an overabundance of information, both accurate and false, that spreads rapidly during an epidemic or crisis, making it difficult for people to find trustworthy sources and reliable guidance. The term is a blend of "information" and "epidemic". It highlights how the proliferation of information can parallel the spread of disease, creating additional challenges in managing the primary crisis. The term rose to prominence in 2020 during the COVID-19 pandemic. During epidemics, accurate information is even more critical than in normal times because people need it to adjust their behavior to protect themselves, their families, and their communities from infection (World Health Organization, 2020).

 

Contradictory messages and conflicting advice can create confusion and mistrust among the public (Borges et al., 2022). An infodemic can intensify or lengthen outbreaks when people are unsure about what they need to do to protect their health and the health of people around them. The situation is so dire that the World Health Organization (2020) published guidance to help individuals, community leaders, governments, and the private sector understand some key actions they can take to manage the COVID-19 infodemic.

 

Characteristics of Infodemics

Infodemics result in more information than most people can process effectively, especially those with low health literacy. With growing digitization, information spreads more rapidly. Alongside accurate information, a significant amount of misinformation (false or misleading information shared without harmful intent) and disinformation (false information deliberately spread to deceive) is disseminated. Information spreads quickly, particularly through interconnected social media and digital platforms, reaching global audiences instantaneously. Infodemics often feature highly emotional, sensational, or alarming content that captures attention but may not be accurate or helpful.

 

Examples of Infodemics

Three global epidemics have occurred in recent memory, each accompanied by infodemics:

 

  1. COVID-19 Pandemic: During the COVID-19 pandemic, an infodemic emerged with vast amounts of information about the virus, treatments, vaccines, and public health measures. This included a significant spread of misinformation and conspiracy theories.

  2. Ebola Outbreaks: Past Ebola outbreaks have seen infodemics in which misinformation about the disease’s transmission and treatments spread rapidly, complicating response efforts.

  3. Zika Virus: The Zika virus outbreak was accompanied by an infodemic, with rumors and false information about the virus’s effects and prevention measures.

 

Understanding and addressing infodemics is crucial for effective crisis management and public health response, ensuring that accurate information prevails and supports informed decision-making by individuals and communities. With human encroachment on natural areas, the likelihood of future epidemics is high (Shafaati et al., 2023).

 

Consequences of Infodemics

The flood of conflicting information can cause confusion, anxiety, and stress, making it hard for individuals to know how to respond appropriately to the crisis. Trust in authorities, experts, and media can be eroded when people encounter inconsistent messages or feel they are being misled. Misinformation can lead to harmful behaviors, such as using unproven treatments, ignoring public health advice, or spreading conspiracy theories. The spread of false information can hamper public health responses and crisis management efforts, as resources may be diverted to combat misinformation instead of focusing solely on the crisis. The plethora of unreliable health information delays care provision and increases the occurrence of hateful and divisive rhetoric (Borges et al., 2022). Infodemics can exacerbate social divisions, as different groups may cling to varying sets of information and beliefs, leading to polarized views and conflicts.

 

Managing Infodemics

Another new term is “infodemiology,” a combination of “information” and “epidemiology.” Epidemiology, the study of the distribution and determinants of health and disease within populations, is a fundamental aspect of public health; its findings are used to minimize the risk of adverse health outcomes through community education, research, and health policy development (World Health Organization, 2024). Infodemiology applies a similar lens to the flood of information itself, studying how it spreads and how it can be managed for public health. Infodemic management involves systematically applying risk- and evidence-based analyses and strategies to control the spread of misinformation and mitigate its effects on health behaviors during health crises.
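Because infodemiology borrows its vocabulary from epidemiology, the spread of a rumor is sometimes modeled the same way an infection is. The sketch below is purely illustrative (it is not a method from the sources cited here, and all parameter values are invented): a simple SIR-style model where “sharing” individuals actively spread a false claim and “corrected” individuals have stopped after seeing reliable information.

```python
# Illustrative SIR-style model of rumor spread through a population.
# All parameter values are hypothetical, chosen only to show the dynamics.
def simulate_rumor(days=60, population=10_000, beta=0.4, gamma=0.1):
    """beta: daily sharing (transmission) rate; gamma: daily correction rate."""
    s, i, r = population - 1, 1, 0  # susceptible, sharing, corrected
    history = []
    for _ in range(days):
        new_shares = beta * s * i / population   # contacts that start sharing
        new_corrections = gamma * i              # sharers who see a correction
        s -= new_shares
        i += new_shares - new_corrections
        r += new_corrections
        history.append((round(s), round(i), round(r)))
    return history

# With these (invented) rates, sharing peaks and then declines as
# corrections accumulate, mirroring an epidemic curve.
peak_sharing = max(day[1] for day in simulate_rumor())
```

The point of such toy models is the shape of the curve, not the numbers: lowering the sharing rate (media literacy, platform friction) or raising the correction rate (fact-checking, trusted messengers) both "flatten the infodemic curve," the phrase the World Health Organization (2024) uses.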

 

For example, in their systematic review of publications about health infodemics and misinformation, Borges et al. (2022) commented that “social media has been increasingly propagating poor-quality, health-related information during pandemics, humanitarian crises and health emergencies. Such spreading of unreliable evidence on health topics amplifies vaccine hesitancy and promotes unproven treatments” (p. 556). However, they noted that social media has also been successfully employed for crisis communication and management during emerging infectious disease pandemics and significantly improved knowledge awareness and compliance with health recommendations. For governments, health authorities, researchers, and clinicians, promoting and disseminating reliable health information is essential to counteract false or misleading health information spread on social media.

Image Credit: Anna Shvets, Pexels

 

Strategies for Combating Infodemics

For government officials, public health professionals, and educators, preparation is essential to prevent the next pandemic disaster (Shafaati et al., 2023). Strengthening public health services and investing in research and development for new medications and vaccines are crucial steps. Expanding access to education and resources in vulnerable communities is also necessary to enhance understanding and encourage preventive actions. Additionally, investing in international cooperation is vital to support countries at risk of outbreaks and provide economic assistance to those affected by pandemics.

 

  1. Promoting Accurate Information: Authorities and experts must provide clear, accurate, and timely information. This includes regular updates from trusted sources like public health organizations.

  2. Media Literacy: Enhancing public media literacy can help individuals critically evaluate the information they encounter, recognize reliable sources, and avoid sharing unverified claims.

  3. Fact-Checking and Verification: Fact-checking organizations and platforms are crucial in verifying information and debunking false claims. Prominent placement of fact-checked information can help correct misconceptions.

  4. Algorithmic Adjustments: Social media platforms and search engines can adjust their algorithms to prioritize credible sources and reduce the visibility of misleading content.

  5. Collaboration and Coordination: Effective communication and coordination among governments, health organizations, media, and tech companies are essential to manage the flow of information and combat misinformation.

  6. Public Engagement: Engaging with communities and addressing their concerns directly can build trust and ensure accurate information reaches diverse audiences. This may include town hall meetings, Q&A sessions, and community-specific communications.
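The algorithmic-adjustment strategy above can be made concrete with a toy example. The scoring rule, domain names, and credibility values below are all invented for illustration; real platforms combine far more signals than a single per-domain score.

```python
# Toy re-ranking of content by source credibility.
# Domains and scores are hypothetical, for illustration only.
CREDIBILITY = {
    "who.int": 0.95,
    "cdc.gov": 0.90,
    "example-blog.com": 0.30,
    "rumor-mill.net": 0.10,
}

def rerank(results, default_credibility=0.5):
    """Order (domain, engagement) pairs by engagement weighted by credibility."""
    def score(result):
        domain, engagement = result
        return engagement * CREDIBILITY.get(domain, default_credibility)
    return sorted(results, key=score, reverse=True)

# A highly shared rumor is outranked by a less viral but credible source.
results = [("rumor-mill.net", 9000), ("who.int", 2000), ("example-blog.com", 4000)]
ranked = rerank(results)
```

Here raw engagement alone would put the rumor first; weighting by credibility pushes the trusted source to the top, which is the essence of what "prioritize credible sources" means in practice.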

 

References

Borges do Nascimento, I. J., Pizarro, A. B., Almeida, J. M., Azzopardi-Muscat, N., Gonçalves, M. A., Björklund, M., & Novillo-Ortiz, D. (2022). Infodemics and health misinformation: A systematic review of reviews. Bulletin of the World Health Organization, 100(9), 544-561. https://doi.org/10.2471/BLT.21.287654

Shafaati, M., Chopra, H., Priyanka, Khandia, R., Choudhary, O. P., & Rodriguez-Morales, A. J. (2023). The next pandemic catastrophe: Can we avert the inevitable? New Microbes and New Infections, 52, 101110. https://doi.org/10.1016/j.nmni.2023.101110

World Health Organization. (2020). Managing the COVID-19 infodemic: A call for action. https://iris.who.int/bitstream/handle/10665/334287/9789240010314-eng.pdf?sequence=1

World Health Organization. (2024). Let’s flatten the infodemic curve. https://www.who.int/news-room/spotlight/let-s-flatten-the-infodemic-curve

 


