Three Teenagers Sue xAI in the US, Alleging Grok Generated Child Sexual Abuse Material

Gate News: On March 18, three young women filed suit in U.S. federal court in California against Elon Musk's artificial intelligence company xAI, alleging that its chatbot Grok generated child sexual exploitation images of them without their consent. The suit seeks unspecified damages and an immediate injunction barring Grok from producing such content. All three plaintiffs are proceeding anonymously; two are minors.

The complaint alleges that users manipulated the plaintiffs' photos with Grok's image-editing features to create nude or sexually violent images, which were then shared on private Discord servers and involved at least 18 minors. One plaintiff learned via an anonymous Instagram message that her high school graduation photo had been altered into a sexually suggestive image.

The plaintiffs' lawyers contend that xAI knew Grok's "Grok Imagine" feature, which includes an "undress" option, could generate such content but released it publicly anyway to drive user growth for Grok and the social platform X. According to a report by the Center for Countering Digital Hate, more than 20,000 sexually explicit images involving minors were generated within two weeks of the feature's launch.
