
Generative AI is changing what's possible in creative tools, and with that comes a responsibility to ensure those tools can't be used against the people most vulnerable to harm. Protecting children from sexual exploitation is one of our most deeply held commitments as a company.
In line with our general approach to safety, we integrate child safety considerations at every level, from model development to product launches to end-user generations. This approach closely aligns with Thorn's Safety by Design for Generative AI principles, which set an industry standard for how generative AI developers can guard against the creation and spread of child sexual abuse material (CSAM), including AI-generated CSAM, and other sexual harms against children. Below is a summary of how those principles show up across our products and processes.
1. Develop: Building Models That Proactively Address Risk
Safeguarding Training Data
Safety starts well before a user ever touches our product. We take deliberate steps during model development to reduce the risk that our models can be used to generate CSAM or other sexual content involving minors. We integrate hash matching, child safety classifiers and LLM-based moderation to ensure our models are not trained on sexual content involving either minors or adults.
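To illustrate the idea of layered training-data filtering, here is a minimal sketch. It assumes hypothetical inputs: a set of known-CSAM hashes, a media classifier returning a risk score in [0, 1], and an LLM-based text moderator. None of these names refer to our actual systems.

```python
import hashlib
from dataclasses import dataclass
from typing import Callable, Set


@dataclass
class TrainingExample:
    content: bytes  # raw media bytes
    text: str       # associated caption or text


def sha256_hex(data: bytes) -> str:
    # Exact-match fingerprint; real pipelines typically also use perceptual
    # hashes so near-duplicates of known material are caught.
    return hashlib.sha256(data).hexdigest()


def is_safe_for_training(
    example: TrainingExample,
    known_bad_hashes: Set[str],                    # hypothetical hash list
    media_classifier: Callable[[bytes], float],    # hypothetical, risk in [0, 1]
    llm_moderator: Callable[[str], bool],          # hypothetical, True if flagged
    risk_threshold: float = 0.5,
) -> bool:
    """Layered filter: any single layer is enough to exclude an example."""
    if sha256_hex(example.content) in known_bad_hashes:      # 1. hash matching
        return False
    if media_classifier(example.content) >= risk_threshold:  # 2. safety classifier
        return False
    if llm_moderator(example.text):                          # 3. LLM-based moderation
        return False
    return True
```

In a layered design like this, any one signal is sufficient to exclude an example, which biases the pipeline toward over-filtering rather than under-filtering.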
Red Teaming and Evaluation
Before a model ships, we conduct thorough testing, to the extent legally permissible, to identify and resolve potential vulnerabilities. We conduct such testing across text, image, video and audio to mitigate the possibility that a user could produce CSAM or other sexual content involving minors. This testing is continuous, so our mitigations keep pace as new models and techniques emerge and threat vectors evolve.
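As a sketch only, the loop below shows the general shape of such pre-launch evaluation: run a bank of adversarial prompts against a candidate model and record anything a safety classifier flags. The model, prompt bank and classifier here are placeholders, not our actual red-teaming harness.

```python
from typing import Callable, Dict, List


def run_safety_red_team(
    model: Callable[[str], str],               # hypothetical: prompt -> output
    adversarial_prompts: List[str],            # curated attack prompts
    safety_classifier: Callable[[str], bool],  # hypothetical: True if unsafe
) -> List[Dict[str, str]]:
    """Collect every case where the model produced flagged output, so each
    failure can be reviewed and mitigated before the model ships."""
    failures = []
    for prompt in adversarial_prompts:
        output = model(prompt)
        if safety_classifier(output):
            failures.append({"prompt": prompt, "output": output})
    return failures
```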
2. Deploy: Safeguards, Policies and Enforcement
Clearly Defining Usage Restrictions
We ensure that our strict boundary against any sexual content involving children is clear to all of our users. Our Usage Policy prohibits all "content that depicts, facilitates or promotes child sexual abuse or the sexualization of children," and makes clear that any violation will result in a permanent account ban and, where appropriate, reporting to the National Center for Missing & Exploited Children (NCMEC).
Detecting CSAM & Sexual Content Involving Children
Once a model is deployed, we rely on multiple layers of detection to catch potentially harmful content and attempts to create it. This includes scanning all user-provided content against known-CSAM hash databases and running a CSAM-specific classifier to detect previously unknown material. We also apply AI-based classifiers to identify attempts to create CSAM or other sexual content involving children.
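A minimal sketch of how such detection layers might be combined into a single triage decision follows; the signal names and threshold are illustrative assumptions rather than a description of our production pipeline, and the outcome feeds human review, not an automated judgment.

```python
from enum import Enum, auto


class TriageAction(Enum):
    BLOCK_AND_ESCALATE = auto()  # route to human review and, if confirmed, NCMEC
    FLAG_FOR_REVIEW = auto()     # suspected attempt; reviewed by trust and safety
    ALLOW = auto()


def triage_content(
    known_hash_match: bool,        # hit against a known-CSAM hash database
    csam_classifier_score: float,  # hypothetical classifier score in [0, 1]
    prompt_intent_flagged: bool,   # hypothetical prompt-level classifier result
    score_threshold: float = 0.8,
) -> TriageAction:
    """Combine detection signals; any strong signal blocks and escalates."""
    if known_hash_match or csam_classifier_score >= score_threshold:
        return TriageAction.BLOCK_AND_ESCALATE
    if prompt_intent_flagged:
        return TriageAction.FLAG_FOR_REVIEW
    return TriageAction.ALLOW
```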
Reporting to NCMEC
We manually review all flagged content and report all confirmed CSAM to NCMEC. In 2025, we submitted a total of 516 reports to NCMEC's CyberTipline.
Deploying Content Provenance
We implement C2PA provenance signals so that content generated with our tools can be traced back to its origin. Provenance isn't a complete solution to misuse, but it gives platforms, researchers and law enforcement a meaningful signal for identifying AI-generated content.
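For context, a C2PA manifest is a signed set of assertions attached to a piece of content. The dictionary below sketches the general shape of such a manifest for AI-generated media; the generator name is a hypothetical placeholder, and a real pipeline would sign and embed the claim using a C2PA SDK or a tool such as c2patool rather than emitting raw JSON.

```python
# Schematic C2PA-style manifest for an AI-generated image (illustrative only).
manifest = {
    "claim_generator": "ExampleImageModel/1.0",  # hypothetical generator name
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        # IPTC digital source type used to mark AI-generated media
                        "digitalSourceType": (
                            "http://cv.iptc.org/newscodes/digitalsourcetype/"
                            "trainedAlgorithmicMedia"
                        ),
                    }
                ]
            },
        }
    ],
}
```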
3. Maintain: Monitoring, Improvement and Collaboration
Monitoring and Continuous Iteration
The techniques used to create and distribute CSAM evolve quickly. We continuously test and iterate on our models and safeguards so they keep pace, especially as we move toward more content generated in real time.
Collaborating with Industry and Civil Society
We also recognize that no single company can solve this problem alone. We are committed to engaging with organizations like Thorn and the Tech Coalition and with peers across the industry because the best defenses come from shared knowledge. We'll keep investing in these partnerships, and we'll keep updating our approach as the threat landscape changes and the tools to counter it evolve.
You can read more about our broader safety work on our Safety page. If you encounter content that you believe was created with our tools and raises concerns, please report it here.

