What is Moderation?
Moderation is the safety and quality process for reviewing flags, AI suggestions, disputed claims, and risky content.
Definition
Moderation is the process that keeps SevaPremi's public record accurate, safe, civil, and privacy-aware. It includes user Flags, automated screening, human Editorial review, and recorded actions. Moderation can apply to Problems, Ideas, Stakeholders, help content, future mission updates, and comments once that feature ships. It is not a popularity contest, and it is not meant to suppress ordinary disagreement.
Why It Matters in SevaPremi
The platform deals with real people, real institutions, and real public issues. False claims can erode trust. Spam can bury genuine Problems. Personal information can put people at risk. SevaPremi's moderation model exists so users can contribute openly while still having guardrails. It also enforces the AI rule: AI may classify or suggest, but it never silently writes outcomes to domain tables. Human review remains the authority for any trust-relevant action.
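To make the AI rule concrete, here is a minimal TypeScript sketch of one way a screening result could be stored as a suggestion rather than applied directly. All names here (AiSuggestion, AiVerdict, recordAiScreening, the suggestions store) are illustrative assumptions, not SevaPremi's actual schema.

```typescript
// Hypothetical shape for an AI screening result stored as a suggestion.
type AiVerdict = "looks_ok" | "possible_spam" | "possible_pii" | "needs_review";

interface AiSuggestion {
  contentId: string;   // the Problem, Idea, etc. that was screened
  verdict: AiVerdict;  // the classifier's suggestion, not a decision
  confidence: number;  // 0..1, surfaced to the human reviewer
  createdAt: Date;
  reviewedBy?: string; // set only after a human acts on it
}

// The AI path writes ONLY to a suggestions store; it never touches the
// domain tables (Problems, Ideas, Stakeholders) that hold the public record.
async function recordAiScreening(
  suggestions: { insert(s: AiSuggestion): Promise<void> },
  contentId: string,
  verdict: AiVerdict,
  confidence: number
): Promise<void> {
  await suggestions.insert({ contentId, verdict, confidence, createdAt: new Date() });
  // No write to domain tables here: a human Editorial action is the only
  // path that changes the visible record.
}
```

Keeping suggestions in their own store makes the human-approval step structurally unavoidable instead of a matter of discipline.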
What Happens When You Interact
When you Flag content, your report enters the moderation workflow privately. When AI screens new content, its result should land as a suggestion for human handling, not as a direct change. Editorial may approve, hide, remove, request edits, merge duplicates, or escalate the item, and serious actions should be auditable (as sketched below). If your own content is moderated, you may receive an explanation or a next step, depending on the surface. Moderation quality directly affects whether citizens, experts, stakeholders, and funders can trust the platform.
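As a rough illustration of what an auditable Editorial action could look like, here is a small TypeScript sketch. It assumes each action is appended to an audit log along with the human actor and a reason; the names (EditorialAction, AuditEntry, applyModeration) are hypothetical, not SevaPremi's real API.

```typescript
// The set of Editorial actions named above, as a hypothetical union type.
type EditorialAction =
  | "approve"
  | "hide"
  | "remove"
  | "request_edits"
  | "merge_duplicate"
  | "escalate";

interface AuditEntry {
  contentId: string;
  action: EditorialAction;
  actorId: string; // the human reviewer, never an AI identity
  reason: string;  // shown back to the author where the surface allows it
  timestamp: Date;
}

function applyModeration(
  auditLog: AuditEntry[],
  contentId: string,
  action: EditorialAction,
  actorId: string,
  reason: string
): AuditEntry {
  // Every serious action leaves a record, so decisions can be reviewed later.
  const entry: AuditEntry = { contentId, action, actorId, reason, timestamp: new Date() };
  auditLog.push(entry);
  return entry;
}
```

An append-only log like this is what makes "serious actions should be auditable" checkable: anyone reviewing a dispute can see who did what, when, and why.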
