Misinformation

At Nexus, we recognize the importance of fostering an environment where information can be shared freely while minimizing the potential harm that misinformation can cause. Unlike other forms of restricted content, misinformation presents unique challenges due to the constantly evolving nature of knowledge and the varying levels of understanding among individuals. What is considered factual today may change with new discoveries, and people may unknowingly share false or misleading information with good intentions. Because of these complexities, we do not enforce a blanket prohibition on misinformation but instead focus on identifying and mitigating content that poses a risk of imminent harm or interferes with essential societal functions such as public health and elections.

Rather than attempting to create an exhaustive list of prohibited claims, our approach involves categorizing misinformation based on the risks it presents and developing policies that balance the values of free expression, public safety, dignity, authenticity, and privacy. We rely on independent third-party experts, including global health organizations, human rights groups, and election authorities, to assess the validity of claims and determine their potential impact.

How We Handle Misinformation:

1. Misinformation That Poses a Risk of Physical Harm

We remove misinformation or unverifiable rumors when independent experts determine that it is likely to contribute to a risk of imminent violence or physical harm. This includes:

  • False claims that incite violence or encourage attacks against individuals or groups.

  • Fabricated or manipulated content that falsely depicts acts of violence, military conflicts, or public safety threats.

  • Misinformation that promotes harmful or dangerous activities, such as false safety instructions during emergencies.

In areas experiencing heightened political, civil, or social unrest, we work closely with local experts and organizations to assess how misinformation may contribute to real-world harm. In such cases, we take proactive steps to identify and remove content that could incite violence or escalate existing tensions.

2. Health-Related Misinformation

We collaborate with leading public health authorities to identify and remove misinformation that poses an imminent risk to health and safety. This includes:

  • False claims about vaccines, such as misinformation about their safety, effectiveness, or ingredients.

  • Misinformation during public health emergencies, such as pandemics or outbreaks, where false claims could lead to risky behavior, refusal of treatment, or failure to take necessary precautions.

  • Promotion of harmful or unproven treatments, such as substances with no medical benefit that can cause serious harm (e.g., bleach promoted as a cure for disease).

While we recognize the importance of discussions around medical treatments, we do not classify opinions and personal experiences as misinformation unless they contain verifiably false claims that could lead to harm.

3. Misinformation That Interferes With Elections and Census Participation

Maintaining the integrity of elections and census processes is essential to a functioning democracy. We remove misinformation that directly misleads people about how to participate in these processes, including:

  • False claims about the time, date, location, or method of voting or voter registration.

  • Misrepresentations about who is eligible to vote or register, or about what documentation is required.

  • False claims about whether a vote will be counted or the validity of voting methods (e.g., mail-in ballots, early voting).

  • False claims about the census process, including misleading information about who can participate and how census data is used.

  • Rumors that discourage participation, such as false warnings about law enforcement presence at polling stations or census collection sites.

Beyond misinformation, we also prohibit content that incites violence, promotes illegal voter suppression tactics, or encourages coordinated interference in elections, as outlined in other sections of our Community Standards.

Managing Manipulated Media and Misleading Content:

Digitally Created or Altered Content

Media manipulation is increasingly sophisticated, and certain types of digital alterations can mislead the public on matters of public importance. To address this, we:

  • Require users to disclose the use of AI-generated or digitally altered content when it is designed to appear photorealistic or uses realistic-sounding audio.

  • Apply informative labels to manipulated media that could mislead people about major public events, politics, or health-related matters.

  • Restrict the distribution of content that intentionally misrepresents reality, even if it does not violate other policies.

While some digital alterations are harmless (e.g., cropping, adding background music, or artistic edits), content that misleads users in a significant or deceptive manner is subject to restrictions.

How We Limit the Spread of Misinformation:

We acknowledge that not all misinformation poses an imminent threat. People may exaggerate, speculate, or misinterpret information without malicious intent, and some false claims may circulate within humorous, satirical, or opinion-based contexts. Rather than removing such content outright, we focus on:

  • Reducing the visibility of misleading content through fact-checking partnerships with independent organizations that assess the accuracy of viral claims.

  • Providing context and authoritative information alongside questionable claims, directing users to reliable sources.

  • Encouraging digital literacy so users can evaluate information critically before sharing it.

Consequences for Repeatedly Sharing Misinformation:

Accounts, Pages, and Groups that persistently share misinformation covered by our removal policies may face escalating enforcement actions, including:

  • Decreased visibility and reduced distribution of their content.

  • Limitations on advertising and monetization to prevent financial incentives for spreading false claims.

  • Temporary restrictions on posting or engagement features.

  • Permanent removal for repeated or egregious violations.

Evolving Our Approach to Misinformation:

As digital communication and information-sharing continue to evolve, so too must our approach to managing misinformation. We regularly reassess our policies based on expert input, research, and emerging threats, ensuring that our standards remain relevant and effective.

By combining proactive misinformation removal, content moderation strategies, and educational initiatives, we strive to create a more informed and trustworthy online environment. Our goal is not to censor differing perspectives but to prevent harm, uphold the integrity of public discourse, and ensure that our platform remains a place where users can engage in meaningful, fact-based conversations.
