Safety & Child Protection
Last updated: February 9, 2026
Our Commitment
Hello is built with safety as a foundational requirement, not an afterthought. Every system in Hello — from account creation through social interaction — is designed to verify real identities, prevent impersonation, and protect users from predatory behavior, fraud, and harmful content.
Our platform serves users aged 13 and older. We recognize the heightened responsibility that comes with building a social platform accessible to minors, and we have implemented technical safeguards, policy controls, and human oversight processes specifically designed to protect young users.
This document describes the technical systems, policies, and procedures we use to keep Hello safe. It supplements our Terms of Service and Privacy Policy.
1. Identity Verification Pipeline
Before any social features unlock, every user must complete our multi-step identity verification pipeline. This pipeline is entirely server-controlled — the mobile app cannot self-approve, skip, or bypass any step.
Step 1: Liveness Challenges
Users complete real-time face movement prompts (turn head, smile, blink, move closer) while multiple camera frames are captured. These challenges are randomized and must be completed within a time window to prevent pre-recorded or manipulated submissions.
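As an illustration, server-side issuance of randomized, time-boxed prompts can be sketched as follows. The prompt names and the 30-second window are assumptions for illustration, not Hello's actual values:

```python
import random
import time

# Prompt names and window length are illustrative assumptions.
CHALLENGES = ["turn_head_left", "turn_head_right", "smile", "blink", "move_closer"]
CHALLENGE_WINDOW_SECONDS = 30

def issue_challenge_sequence(num_prompts: int = 3) -> dict:
    """Issue a randomized sequence of liveness prompts with a completion deadline."""
    issued_at = time.time()
    return {
        "prompts": random.sample(CHALLENGES, k=num_prompts),
        "issued_at": issued_at,
        "expires_at": issued_at + CHALLENGE_WINDOW_SECONDS,
    }

def is_submission_timely(challenge: dict, submitted_at: float) -> bool:
    """Reject submissions outside the window, defeating pre-recorded replays."""
    return challenge["issued_at"] <= submitted_at <= challenge["expires_at"]
```

Because the prompt order is sampled fresh per attempt and the deadline is enforced server-side, a pre-recorded clip cannot match both the sequence and the window.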
Step 2: Anti-Spoofing Detection
Our AI model analyzes all captured frames for indicators of fraud, including: screens or displays held in front of the camera, printed photographs, 3D masks or prosthetics, pre-recorded video playback, deepfake or face-swap technology, and any other synthetic or manipulated imagery. If spoofing is detected, the verification attempt is immediately rejected and logged.
Step 3: Motion Confirmation
Both AI visual analysis and on-device sensor data (head rotation angles, face bounding-box size changes, eye openness metrics) must independently confirm real physical movement between challenge prompts. This dual-validation approach makes it significantly harder to bypass verification with static or pre-generated content.
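The dual-validation rule can be sketched as a conjunction of two independent checks. The field names and thresholds below are illustrative assumptions, not Hello's actual parameters:

```python
from dataclasses import dataclass

@dataclass
class SensorDeltas:
    """On-device measurements between two challenge prompts (illustrative fields)."""
    head_rotation_deg: float     # change in head rotation angle
    bbox_size_change_pct: float  # change in face bounding-box size
    eye_openness_change: float   # change in eye-openness metric

# Illustrative thresholds only.
MIN_ROTATION_DEG = 5.0
MIN_BBOX_CHANGE_PCT = 3.0

def sensors_confirm_motion(d: SensorDeltas) -> bool:
    """At least one physical-motion signal must exceed its threshold."""
    return (abs(d.head_rotation_deg) >= MIN_ROTATION_DEG
            or abs(d.bbox_size_change_pct) >= MIN_BBOX_CHANGE_PCT)

def motion_confirmed(ai_confirms: bool, deltas: SensorDeltas) -> bool:
    """Both the AI visual check and the sensor check must independently pass."""
    return ai_confirms and sensors_confirm_motion(deltas)
```

Static or pre-generated content must now defeat two unrelated signal paths at once, which is the point of the dual-validation design.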
Step 4: AI Age Estimation
Our AI estimates the user's apparent age from facial features across multiple frames and compares the estimate to the date of birth provided during onboarding. A discrepancy greater than 4 years between the AI estimate and the stated age results in automatic rejection. This safeguard is specifically designed to prevent underage users from claiming to be older and adults from misrepresenting themselves as younger.
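The discrepancy rule above can be expressed directly. This is a minimal sketch; it assumes ages are compared in years:

```python
# From the policy: a discrepancy greater than 4 years is rejected.
MAX_AGE_DISCREPANCY_YEARS = 4

def age_check_passes(ai_estimated_age: float, stated_age: int) -> bool:
    """Pass only when the AI estimate and stated age differ by at most 4 years."""
    return abs(ai_estimated_age - stated_age) <= MAX_AGE_DISCREPANCY_YEARS
```

Note the check is symmetric: it rejects both an account claiming to be older than the estimate and one claiming to be younger.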
Step 5: Identity Anchoring
After successful verification, a single face capture is stored in our private, encrypted server-side storage. This capture serves as the user's identity anchor — all future profile photos must match this verified face. The anchor cannot be accessed, modified, or deleted by the user through the client application. Only our server, using a privileged service-role key, can read or modify identity anchors.
2. Content Moderation
Profile Photo Moderation
Every profile photo uploaded to Hello goes through mandatory AI-powered moderation before it becomes visible to other users. No profile photo is published without passing all checks. Our moderation system enforces the following requirements:
- Single person: Exactly one person must appear in the photo.
- Face visibility: The subject's face must be clearly visible and unobstructed.
- Content safety: No explicit, sexually suggestive, violent, drug-related, or otherwise inappropriate content.
- Image quality: The photo must meet minimum quality standards (resolution, lighting, focus).
- Identity match: The person in the photo must match the user's verified face capture from the identity anchoring step. This prevents impersonation and catfishing.
Photos that fail any check are rejected with a clear explanation. The user is given the opportunity to upload a compliant photo. Repeated moderation failures may trigger additional scrutiny or account review.
Moderation decisions are made server-side. The mobile app does not perform local moderation and cannot bypass the server-side review process.
Chat Message Moderation
In addition to profile photo moderation, all chat messages sent through Hello are subject to real-time AI-powered content moderation. Every message is moderated; there is no sampling or selective review. Each message is evaluated before delivery and assigned one of the following statuses:
- Approved: The message meets community guidelines and is delivered to the recipient.
- Pending review: The message has been flagged for additional review by our safety team before delivery.
- Rejected: The message violates community guidelines and is not delivered. The sender is notified that the message was blocked.
Sender verification status is checked on every message send. Users who have not completed identity verification cannot send messages. This ensures that only verified, real users participate in chat.
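The delivery decision described above (verification gate first, then the three-way moderation outcome) can be sketched as follows. The statuses come from this page; the function and return strings are illustrative assumptions:

```python
from enum import Enum

class ModerationStatus(Enum):
    """The three moderation outcomes described in this section."""
    APPROVED = "approved"
    PENDING_REVIEW = "pending_review"
    REJECTED = "rejected"

def handle_message(sender_verified: bool, status: ModerationStatus) -> str:
    """Server-side delivery decision for a single chat message."""
    if not sender_verified:
        # Unverified users cannot send at all; moderation is never consulted.
        return "blocked_unverified_sender"
    if status is ModerationStatus.APPROVED:
        return "delivered"
    if status is ModerationStatus.PENDING_REVIEW:
        return "held_for_review"
    return "blocked_and_sender_notified"
```

The verification gate runs before moderation, so an unverified sender is rejected without their content ever entering the delivery pipeline.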
3. Security Architecture
Hello's security architecture is designed around the principle of server-side authority. Trust decisions are never delegated to the client application.
- Server-side trust decisions: All verification approvals, photo moderation outcomes, face-match determinations, and trust score adjustments are computed and enforced exclusively on our servers. The mobile app submits data for evaluation but cannot influence or override the outcome.
- Request authentication & signing: Every mobile API request requires a valid authentication token (JWT). Requests are additionally protected by SHA-256 payload signing with timestamp-based replay prevention, ensuring that requests cannot be forged, tampered with, or replayed.
- Short-lived session tokens: Sensitive API operations use short-lived session tokens that expire within minutes. This limits the window of exposure if a token is compromised.
- Rate limiting: Every verification and moderation endpoint is rate-limited per user and per IP address to prevent automated abuse, brute-force attacks, and resource exhaustion.
- Database-level isolation: Row Level Security (RLS) policies on our PostgreSQL database ensure that users can only read their own data. Write access to trust-critical tables (verifications, wallets, audit logs) is restricted to our server's privileged service-role connection. Users cannot directly insert, update, or delete verification records.
- Private storage: Verification photos and identity anchors are stored in a private, encrypted storage bucket. Files are accessible only via our server using a privileged service-role key and are never exposed via public URL. Storage policies prevent users from overwriting or deleting verification files.
- Encryption: All data in transit is encrypted via TLS 1.2+. Data at rest is encrypted using AES-256 by our hosting provider.
4. Audit Trail & Monitoring
Hello maintains a comprehensive audit trail for all trust-critical events. Every verification attempt, moderation decision, and security-relevant action is logged with the following information:
- Event type (e.g., liveness success, liveness failure, spoofing detection, age mismatch, face-match result, moderation outcome)
- Unique verification identifier
- Timestamp
- Anonymized metadata relevant to the event
- Originating IP address
Audit logs are stored in a dedicated table with Row Level Security enabled and no client-facing read policies. Only authorized personnel with service-role access can review audit data. Audit records are retained for 2 years for safety investigations and regulatory compliance, after which they are permanently deleted.
This audit trail enables us to detect patterns of abuse, investigate safety reports, support law enforcement investigations, and continuously improve our safety systems.
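The record fields listed above can be sketched as a simple immutable structure. The field names are illustrative, not Hello's actual schema:

```python
import time
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AuditEvent:
    """Shape of one trust-critical audit record (illustrative field names)."""
    event_type: str       # e.g. "liveness_failure", "spoofing_detected"
    verification_id: str  # unique verification identifier
    ip_address: str       # originating IP address
    metadata: dict        # anonymized, event-specific details
    timestamp: float = field(default_factory=time.time)
```

Making the record frozen mirrors the append-only intent of an audit trail: once written, an event is never mutated, only read by authorized reviewers.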
5. Zero Tolerance Policy
Hello enforces a strict zero-tolerance policy for the following conduct. Any violation results in immediate and permanent account termination, with no option for appeal or reinstatement:
- Child sexual abuse material (CSAM): Creation, distribution, possession, or solicitation of any material depicting or suggesting the sexual exploitation of a minor.
- Grooming & solicitation: Any attempt to build a relationship with a minor for the purpose of sexual exploitation, including but not limited to romantic or sexual communication, gift-giving intended to build trust for exploitative purposes, or attempts to move conversations to private or unmonitored channels.
- Sexual exploitation: Sextortion, non-consensual sharing of intimate images, sexual coercion, or any form of sexual abuse.
- Predatory behavior: Age-inappropriate contact initiated by adults toward minors, persistent unwanted contact despite rejection, or any pattern of behavior that suggests predatory intent.
- Verification fraud: Using photographs, screens, masks, deepfakes, or any other technology to deceive the liveness verification system; impersonating another person; creating accounts using another person's identity.
- Harassment & threats: Bullying, threats of violence, stalking, intimidation, doxxing, or any sustained pattern of hostile contact.
- Hate speech: Content or conduct that attacks individuals or groups based on race, ethnicity, national origin, religion, gender, sexual orientation, disability, or other protected characteristics.
- Explicit & harmful content: Sexually explicit material, graphic violence, glorification of self-harm or suicide, promotion of dangerous activities, or content that is otherwise harmful to the safety and well-being of users.
6. Minor-Specific Protections
Hello implements additional safeguards for users under 18:
- Mandatory age verification: AI age estimation at the point of onboarding prevents users from misrepresenting their age. A discrepancy of more than 4 years between the AI estimate and the stated date of birth results in rejection.
- Parental consent requirement: Users under 18 must confirm that a parent or legal guardian has reviewed and consented to the Terms of Service and Privacy Policy before using the platform.
- Enhanced content moderation: Content interactions involving minor accounts receive heightened scrutiny.
- No users under 13: Hello does not knowingly permit users under the age of 13 to create accounts. If we determine that a user is under 13, their account is terminated and all associated data is deleted within 48 hours.
- Real identity requirement: The identity verification pipeline ensures that every user is a real person whose face matches their profile. This makes it significantly harder for adults to pose as minors or for predators to create anonymous or fake accounts.
7. Reporting & Response
In-App Reporting
Users can report abuse, suspicious behavior, or safety concerns directly within the Hello app. Reports can be submitted against specific users, messages, or content. Every report is logged and reviewed.
In chat conversations, users can report individual messages directly from the message interface. Report reasons include: harassment, inappropriate content, spam, suspected underage user, and other. Each report captures the specific message content, the conversation context, and both user identities for review.
Blocking
Users can block other users directly from chat or profile screens. Blocks are bidirectional — once a block is in place, neither party can send messages to the other, and blocked users are removed from discovery and matching results. Blocks take effect immediately and persist until the blocking user chooses to unblock.
Report Prioritization
Reports are triaged by severity. Reports involving potential harm to minors, sexual exploitation, threats of violence, or other high-severity concerns are flagged for immediate review and escalation. Our goal is to review and act on high-severity reports within 24 hours.
Response Actions
Depending on the severity and nature of the reported conduct, our response may include:
- Warning the reported user
- Temporarily restricting the reported user's access to social features
- Reducing the reported user's trust score
- Removing violating content
- Requiring re-verification
- Permanently terminating the reported user's account
- Preserving evidence and reporting to law enforcement or NCMEC
External Reporting
Safety concerns can also be reported via email at safety@thehelloapp.us. Parents, guardians, educators, and other concerned parties may use this address to report concerns about a minor's safety on the platform.
8. NCMEC & Mandatory Reporting
Hello complies fully with all applicable child safety reporting obligations, including those mandated under United States federal law.
In accordance with 18 U.S.C. § 2258A, we report all instances of apparent child sexual abuse material (CSAM) and conduct involving the sexual exploitation of minors to the National Center for Missing & Exploited Children (NCMEC) via the CyberTipline. Reports include all relevant information available to us, including account data, content, metadata, IP addresses, and any other information that may assist in the identification of victims and offenders.
We also comply with applicable state-level mandatory reporting laws and will report suspected child abuse or neglect to the appropriate local authorities when required by law.
Data associated with accounts involved in CSAM or exploitation reports is preserved in accordance with legal requirements, even after account termination, to ensure availability for law enforcement investigations.
9. Law Enforcement Cooperation
Hello cooperates with law enforcement agencies investigating crimes against children, crimes involving the safety of our users, and other serious offenses. Our cooperation includes:
- Legal process response: We respond promptly to valid subpoenas, court orders, search warrants, and other lawful requests for user data. We aim to respond to routine requests within 10 business days and to expedited requests within 24 hours.
- Emergency disclosure: In cases involving an imminent threat to the life or physical safety of a person, particularly a child, we will disclose relevant information to law enforcement without a court order, as permitted under 18 U.S.C. § 2702(b)(8).
- Evidence preservation: Upon receipt of a valid preservation request, we will preserve relevant account data for 90 days (renewable upon request) pending issuance of formal legal process.
- Proactive referral: When our safety team identifies conduct that indicates an imminent risk to a child or any user, we may proactively refer the matter to the appropriate law enforcement agency.
Law enforcement agencies may submit requests to legal@thehelloapp.us. For emergency requests involving imminent danger to a child, contact safety@thehelloapp.us with "EMERGENCY" in the subject line.
10. Trust Score System
Every Hello user has a trust score — a numerical indicator that reflects account integrity and behavioral history. Trust scores are computed and maintained exclusively on our servers and cannot be viewed or manipulated by users.
Trust scores may be affected by:
- Successful identity verification (increases trust)
- Profile photo passing moderation and face matching (increases trust)
- Reports filed against the user (decreases trust)
- Content moderation violations (decreases trust)
- Suspicious behavior patterns detected by our systems (decreases trust)
Users with low trust scores may have limited access to social features, receive enhanced scrutiny on content uploads, or be required to re-verify their identity. Trust score decisions are made at our sole discretion and are not subject to appeal through the client application.
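A trust score update along the lines described above can be sketched as a signed adjustment clamped to a bounded range. The event names, weights, and the 0-100 range are illustrative assumptions; Hello's real values are internal:

```python
# Illustrative adjustment weights; signs follow the list above.
ADJUSTMENTS = {
    "identity_verified": +20,
    "photo_approved": +5,
    "report_received": -10,
    "moderation_violation": -15,
    "suspicious_pattern": -25,
}

def apply_trust_event(score: int, event: str,
                      floor: int = 0, ceiling: int = 100) -> int:
    """Apply the event's signed adjustment and clamp into [floor, ceiling]."""
    return max(floor, min(ceiling, score + ADJUSTMENTS.get(event, 0)))
```

Clamping keeps repeated positive events from inflating a score without bound, and keeps a heavily reported account from underflowing past the floor.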
11. Continuous Improvement
Safety is not a one-time implementation — it is a continuous process. We are committed to:
- Regularly reviewing and updating our verification, moderation, and anti-spoofing systems to address emerging threats and attack vectors.
- Monitoring industry best practices, regulatory developments, and guidance from organizations like NCMEC, the Internet Watch Foundation (IWF), the Tech Coalition, and the WePROTECT Global Alliance.
- Investing in new safety technologies, including enhanced AI detection capabilities and additional verification layers.
- Engaging with child safety experts, advocacy groups, and researchers to identify areas for improvement.
- Conducting regular internal audits of our safety systems and incident response procedures.
12. Transparency
We believe transparency builds trust. We are committed to being open about how our safety systems work, what data we collect and why, and how we handle safety incidents. This Safety & Child Protection page, along with our Privacy Policy and Terms of Service, provides a comprehensive description of our approach to safety.
Account Deletion
Users have the right to delete their account at any time through the app's profile settings. Account deletion is a self-service action that permanently removes all associated data, including verification photos, identity anchors, profile information, chat messages, and any other user-generated content. Once deletion is initiated, the process is irreversible and all data is purged from our systems, except where retention is required by law (such as data preserved in connection with NCMEC reports or active law enforcement investigations).
We will update this page as our safety practices evolve. Material changes will be reflected in the "Last updated" date above.
Report a Safety Concern
If you have a safety concern, need to report potential abuse, or are a parent or guardian concerned about a minor's experience on Hello, contact our safety team at safety@thehelloapp.us.
If a child is in immediate danger, contact your local emergency services (911 in the United States) or the NCMEC CyberTipline at 1-800-843-5678.