Snapchat announced updated transparency and safety policies
Snapchat recently updated its transparency and safety policies, covering watermarking of AI-generated media and a human review process for political ads aimed at preventing misleading content and AI misuse. AI features will undergo red-teaming, prompt filtering, and inclusivity testing.
Snapchat's new transparency policies detail the company's updated approach to watermarking and labeling AI-generated content, with particular attention to political advertisements. In general, Snapchat will label generated content with in-app contextual icons, symbols, and labels where appropriate. For example, when AI-generated Dreams images are shared, the recipient sees a context card providing more information. AI-powered editing features such as the extend tool are now marked with a sparkle icon visible to the Snapchatter editing a picture. Human reviewers will rigorously vet political advertisements for misleading content and AI misuse. Finally, Snapchat is preparing an upcoming feature that watermarks AI-generated images when they are exported or saved to the camera roll: recipients of Snapchat-generated pictures will see the easily recognizable ghost logo alongside the sparkle symbol.
Alongside the updated transparency measures, Snapchat offered a closer look at its safety policies: more than 2,500 hours of red-teaming work aimed at improving the safety and consistency of AI outputs, a system designed to detect and remove problematic prompts before AI Lens experiences go public, and testing aimed at mitigating potentially biased AI outputs to ensure equitable access to Snapchat's AI-powered features. More on these transparency and safety measures can be found on Snapchat's support site.