Content Moderation Policy

Effective Date: January 7, 2026


Overview

MyBot.ai, operated by Helios Technologies, LLC, is committed to maintaining a safe platform. This policy explains how we moderate content. All content must comply with our Terms of Use, which detail prohibited content and user responsibilities.


How We Moderate

Automated Systems

We use AI-powered tools to:

  • Detect and block content that violates our Terms of Use before it is posted
  • Filter AI responses to prevent generation of prohibited content
  • Scan uploaded images for policy violations

Human Review

Our Trust & Safety team reviews:

  • Content flagged by automated systems
  • User reports
  • Appeals from users

User Reporting

Users can report suspected violations using the in-platform reporting tools available on any character, message, or image. Reports are reviewed promptly by our Trust & Safety team.


Enforcement

When violations are identified, we may:

  • Issue warnings
  • Remove content
  • Restrict or suspend accounts
  • Permanently terminate accounts
  • Report illegal activity to law enforcement

Enforcement actions are proportional to the severity and frequency of the violation. Users may appeal enforcement decisions by contacting support@mybot.ai.


Cooperation with Authorities

As stated in our Terms of Use (Section 8), we cooperate with law enforcement and will report illegal content—particularly content involving minors—to appropriate authorities including the National Center for Missing & Exploited Children (NCMEC).


Contact

To report violations: Use in-platform tools or email support@mybot.ai

For DMCA requests: privacy@mybot.ai (see Terms of Use, Section 10)