Two major social media platforms are piloting community-driven content moderation systems that allow trusted users to vote on whether flagged posts violate platform guidelines. The approach mirrors models used by some online forums and aims to reduce reliance on automated moderation.

Initial data from the pilot programs shows that community moderators agree with professional content review teams' decisions in approximately 85 percent of cases, with the greatest divergence occurring on politically sensitive content.

Critics argue that crowdsourced moderation may introduce majority bias and produce inconsistent enforcement, while supporters say it increases transparency and builds user trust in platform governance.