Two social media platforms, Discord and Reddit, have developed strategies to curb the spread of misinformation. Each has taken steps to moderate content, educate users, and enforce policies, but both also reveal the limits of current digital governance.
Discord, known for its private, community-centered servers, rolled out a detailed Misinformation Policy Explainer in October 2023. The company defines misinformation as content that is demonstrably false or misleading and likely to result in physical or societal harm, with special attention to public health, safety, elections, and civil movements. A visual of this policy page shows a user-friendly, scenario-based guide to how Discord interprets harm and intent, a more nuanced approach than blanket bans. Discord makes it clear that its goal is not to shut down debate but to protect users from dangerous falsehoods.
Discord’s moderation is led by a centralized Trust & Safety team that uses both user reports and proactive tools. In its Q4 2023 Transparency Report, the platform revealed it had removed 919 servers for scams and misinformation and disabled over 6,000 accounts involved in deceptive practices. The report includes visual breakdowns of violations, and a screenshot of the enforcement section highlights “harmful misinformation” grouped under broader categories like “deceptive practices,” showing that while action is being taken, the exact scope of misinformation moderation is harder to pinpoint.
Another feature that supports this effort is Discord’s collaboration with independent fact-checkers like PolitiFact and Snopes, especially during spikes in high-risk topics. However, because most of Discord’s content is not publicly searchable, detecting misinformation in real time becomes a problem of scale. Private servers, especially large ones with lax moderation, can become echo chambers. This structure makes proactive misinformation detection harder, meaning Discord has to rely heavily on its community to report issues.
To strengthen its approach, Discord could add moderator-facing tools such as automated content scanners or dynamic warning systems that flag suspicious messages for review, as sketched below. User education also remains underdeveloped; onboarding new users with examples of what counts as harmful misinformation, delivered through in-app prompts or server notices, could help build a more informed base of reporters and moderators. Finally, future transparency reports should disaggregate misinformation from other deceptive content categories to give users and researchers a clearer picture of its prevalence.
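To make the idea of a moderator-facing scanner concrete, here is a minimal Python sketch of a review queue that flags messages matching a watchlist and holds them for human moderators rather than deleting them automatically. The watchlist patterns, IDs, and class names are invented for illustration and are not part of any existing Discord tooling.

```python
import re
from dataclasses import dataclass, field

# Hypothetical watchlist of high-risk phrasings; a real deployment would
# source and update these continuously, e.g. from fact-checking partners.
WATCHLIST = [
    re.compile(r"vaccines?\s+cause\s+autism", re.IGNORECASE),
    re.compile(r"election\s+was\s+stolen", re.IGNORECASE),
]

@dataclass
class FlaggedMessage:
    server_id: int
    author_id: int
    content: str

@dataclass
class ReviewQueue:
    """Holds flagged messages for human moderator review instead of auto-removal."""
    pending: list[FlaggedMessage] = field(default_factory=list)

    def scan(self, server_id: int, author_id: int, content: str) -> bool:
        """Queue the message and return True if any watchlist pattern matches."""
        if any(pattern.search(content) for pattern in WATCHLIST):
            self.pending.append(FlaggedMessage(server_id, author_id, content))
            return True
        return False

# Example: a moderator bot would call scan() on each new message it sees.
queue = ReviewQueue()
if queue.scan(server_id=1234, author_id=5678, content="The election was stolen!"):
    print("Message queued for moderator review")
```

Keeping humans in the loop is the point of the design: the scanner only surfaces candidates, so a false match costs a moderator a few seconds rather than silencing a legitimate conversation.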
Reddit, on the other hand, operates on a more decentralized model. Each subreddit is governed by volunteer moderators who can create, enforce, or ignore rules within their communities. Still, overarching platform-wide policies are enforced by Reddit administrators. The official Content Policy prohibits users from knowingly posting false or misleading content that may cause harm, particularly around civic processes and health information. A screenshot of the policy page displays Reddit’s emphasis on civic integrity and community trust, setting a tone for enforcement that prioritizes societal harm.
Enforcement comes in two forms: volunteer subreddit moderators and intervention by Reddit admins when issues grow beyond what a community can handle. The subreddit r/NoNewNormal was taken down for propagating disinformation about COVID-19, while r/GenZedong was quarantined for repeated content violations. When a subreddit is quarantined, visitors receive a warning message, such as the one shown in a Time article on Reddit bans, stating that the community may contain misinformation or hate speech.
Reddit’s H2 2023 Transparency Report disclosed that AutoModerator, a tool moderators use to automate rule enforcement, was responsible for over 72% of all removals. This automated system can flag certain keywords, patterns, or behaviors but has clear limitations when it comes to context or nuance. Screenshots from the report show a strong reliance on automated moderation, with visual charts illustrating subreddit-level removals and the balance between volunteer and admin interventions.
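To see why keyword matching struggles with context, consider a minimal Python approximation of a keyword-based removal rule. This is not Reddit’s actual AutoModerator implementation (real rules live in a moderator-facing configuration format), and the banned phrases are hypothetical.

```python
import re

# Rough stand-in for a keyword-based removal rule (hypothetical phrases).
BANNED_PHRASES = [r"5g\s+causes\s+covid", r"drink\s+bleach"]
RULES = [re.compile(p, re.IGNORECASE) for p in BANNED_PHRASES]

def should_remove(comment: str) -> bool:
    """Flag a comment if any banned phrase appears, regardless of context."""
    return any(rule.search(comment) for rule in RULES)

# The blind spot: a comment debunking the claim is flagged just like one spreading it,
# while a reworded version of the same claim slips through entirely.
print(should_remove("5G causes COVID, wake up!"))               # True  (intended catch)
print(should_remove("No, 5G causes COVID is a debunked myth"))  # True  (false positive)
print(should_remove("Radio waves can't create a virus"))        # False (reworded claim missed)
```

The same rule produces both a false positive and a miss, which is the context problem the report’s reliance on automation glosses over.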
Reddit’s model allows communities to develop their own norms, which can foster nuance and conversation. However, it also opens the door to uneven enforcement. Subreddits with disengaged or ideologically biased moderators can become breeding grounds for misinformation. Meanwhile, communities with proactive moderation can remain largely misinformation-free, but there is no guarantee of this without consistent enforcement standards.
To address these inconsistencies, Reddit could launch platform-wide moderator training modules focused specifically on identifying and handling misinformation. These could include case studies, decision trees, and real examples of enforcement in action. Integrating more sophisticated detection tools that go beyond keyword-based AutoMod filters could help moderators flag emerging misinformation trends even when they are subtly worded, as sketched below. Reddit could also issue temporary “platform advisories” for major misinformation waves, such as during election cycles or pandemics, signaling communities to be more vigilant and providing guidance on how to moderate.
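One direction such tooling could take is fuzzy matching against a curated list of known false claims, so that reworded variants are still surfaced for review. The sketch below uses Python’s standard-library SequenceMatcher; the claims, threshold, and function names are illustrative assumptions, not an existing Reddit feature.

```python
from difflib import SequenceMatcher

# Hypothetical list of known false claims, e.g. curated from fact-checkers.
KNOWN_FALSE_CLAIMS = [
    "the election was rigged by voting machines",
    "masks cause oxygen deprivation",
]

def similarity(a: str, b: str) -> float:
    """Rough string similarity between 0.0 and 1.0, case-insensitive."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_for_review(post: str, threshold: float = 0.6) -> bool:
    """Flag posts that closely resemble a known false claim, even if reworded."""
    return any(similarity(post, claim) >= threshold for claim in KNOWN_FALSE_CLAIMS)

# A reworded variant that an exact keyword filter would miss:
print(flag_for_review("the election got rigged by the voting machines"))  # True
print(flag_for_review("I voted early this year"))                         # False
```

In practice the similarity threshold would need careful tuning: set it too low and moderators drown in false positives, set it too high and the tool degrades back into an exact-match filter.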
Ultimately, both Discord and Reddit show that while platform policies are becoming more sophisticated, enforcement remains the sticking point. Detection is still largely reactive, scale is a challenge, and misinformation evolves faster than moderation strategies. Still, the steps taken so far, whether through centralized enforcement or empowered community moderators, show that platforms are no longer ignoring the problem. They just haven’t fully solved it yet.