Trust, Safety, and Transparency
Our commitment to your well-being is deliberate and practical: we use AI as a guardrail, backed by clear human oversight.
Protective AI mediation
Our AI helps flag patterns of manipulation, dishonesty, and harassment. The objective is user protection, not engagement inflation.
Human oversight and appeals
Significant moderation outcomes are reviewed by human reviewers. If you believe a decision is wrong, you can submit an appeal.
Pre-emptive safety
Safety begins at onboarding with profile consistency checks, account signals, and behavior-aware trust systems.
Transparency commitments
We publish an AI Transparency Statement and explain enforcement boundaries in plain language.
Safety by design