Cyberbullying Detector

NLP classifiers and a Streamlit moderation console that flag harmful content with 92% accuracy.

NLP · SVM · Streamlit · Hugging Face

Problem

Moderators struggled to review thousands of community posts daily. The team needed a lightweight assistant that could triage toxic content while providing transparent explanations to human reviewers.

Approach

Impact

The detector reduced manual review queues by 60%, surfaced repeat offenders automatically, and provided sentiment trends that informed new community guidelines.