
Elon Musk’s AI chatbot Grok has been accused of generating non-consensual sexualized images of real people, including children. Over the past week, X has been flooded with manipulated photos that remove people’s clothes, dress them in bikinis, or rearrange them into sexually suggestive positions.
The nonconsensual images have left some women feeling violated. Meanwhile, their creation using Grok and their presence on X may land Musk’s company in significant legal trouble in several countries around the world.
Ashley St. Clair, a conservative political commentator, social media influencer, and mother of one of Musk’s children (Musk has questioned his paternity), said that she became a victim of Grok’s “undressing” spree in recent days. Fortune has reviewed several examples of the images created on X, including fake images of St. Clair.
“When I saw [the images], I immediately replied and tagged Grok and said I don’t consent to this,” St. Clair told Fortune in an interview on Monday. “[Grok] noted that I don’t consent to these images being produced…and then it continued producing the images, and they only got more explicit.”
“There were pictures of me with nothing covering me except a piece of floss with my toddler’s backpack in the background and photos of me where it looks like I’m not wearing a top at all,” she said. “I felt so disgusted and violated. I also felt so angry that there were other women and children that this had been happening to.”
St. Clair told Fortune that after speaking out publicly about the situation she had been contacted by multiple other women who had had similar experiences, that she had reviewed inappropriate images of minors created by Grok, and that she was considering legal action over the images.
Representatives for X did not immediately respond to Fortune’s request for comment. In a post on X, Musk said: “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.”
X’s official “Safety” account said in a post Saturday that “We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary,” and included links to its policy and help pages.
Regulators launch investigations
AI-generated and AI-altered images, which have become widespread and easy to create thanks to new tools from companies including xAI, OpenAI, and Google, are raising concerns about misinformation, privacy violations, harassment, and other types of abuse.
While the U.S. does not currently have a federal law regulating AI (and President Trump's recent executive order has sought to curtail state and local AI laws), controversial uses and misuse of the technology may pressure lawmakers to act. The situation is also likely to test existing laws, such as Section 230 of the Communications Decency Act, which shields online providers from liability for content created by their users.
Riana Pfefferkorn, a policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence, said the legal liability surrounding AI-generated images is still murky, but will likely be tested in court in the near future.
“There’s a difference between a digital platform and a tool set,” she told Fortune. “By and large, [platforms] have immunity for the actions of their users online. But we’re in this evolving area where we don’t have court decisions yet on whether the output of generative AI is just third party speech that the platform cannot be held liable for, or whether it is the platform’s own speech, in which case there is no immunity.”
“We have this situation where for the first time, it is the platform itself that is at scale generating non-consensual pornography of adults and minors alike,” Pfefferkorn said. “From a liability perspective as well as a PR perspective, the CSAM laws pose the biggest potential liability risk here.”
Regulators in other countries, meanwhile, have begun reacting to the recent spate of sexualized AI images. In the UK, Ofcom, the country’s independent regulator for the communications industries, said it had made “urgent contact” with xAI over concerns that Grok can create “undressed images of people and sexualised images of children.”
In a statement, the regulator said it would conduct “a swift assessment to determine whether there are potential compliance issues that warrant investigation” based on X and xAI’s response about steps taken to comply with their legal duties to protect UK users. Under the UK’s Online Safety Act, tech firms are supposed to prevent this type of content being shared and are required to remove it quickly.
Two French lawmakers have also filed reports regarding nonconsensual images and the Paris prosecutor confirmed these incidents were added to an existing investigation into X.
India’s IT ministry has separately ordered X to curb Grok’s obscene and sexually explicit content, particularly involving women and minors, giving the company 72 hours to remove unlawful material, tighten safeguards, and report back or risk loss of safe-harbor protections and further legal action, according to media reports. Malaysia’s communications regulator has reportedly also launched an investigation into Grok-related deepfakes and warned X it could face enforcement measures if it fails to stop the misuse of AI tools on the platform to generate indecent or offensive images.
‘The message that sends is quite concerning’
Henry Ajder, a UK-based deepfakes expert, said that while Musk’s companies may not be directly creating the images, the X platform could still bear responsibility for the proliferation of inappropriate images of minors.
“If you are providing tools or the facilitation of child sexual abuse material (CSAM), there’s likely going to be legislation which isn’t tailored to that specific vehicle of harm that will still come into play,” he said. “In the UK, we’ve banned both the publication of non-consensual intimate imagery which is AI generated, and we’re now going after the creation tool sets. I think we’ll see other countries following suit.”
Part of the reason these images have been created and so widely shared is xAI's recent merger and increasing integration with Musk's X social media platform. xAI has trained its models using data scraped from X, where Grok now sits as a prominent feature.
“Grok is embedded into a platform which Musk wants to be this super app—your platform for AI, for socials, potentially for payments. If you have this as the anchor point, the operating system for your life, you can’t escape it,” Ajder said. “If these capabilities are known and not reined in even after this has been so clearly signposted, the message that sends is quite concerning.”
xAI is not the only company whose sexualized AI images have raised concerns. Last year, Meta removed dozens of AI-generated sexualized images of celebrities shared on its platform, and in October OpenAI CEO Sam Altman said the company would loosen restrictions on AI “erotica” for adults while stressing that it would restrict harmful content.
Ajder said xAI has embraced its reputation for pushing the boundaries on acceptable AI content. He said while other mainstream AI models require users to be “pretty creative, pretty devious” to generate risky content, Grok has embraced being “edgier.”
From its inception, Grok has been marketed as a “non-woke” alternative to mainstream AI chatbots, especially OpenAI’s ChatGPT. In July last year, xAI launched a “flirty” chatbot companion named Ani as part of Grok’s new “Companions” feature, which was available to users as young as 12.
‘Women are being pushed out of the public dialog’
Women who found explicit images of themselves online generated by Grok say they have been left feeling violated and dehumanized.
Journalist Samantha Smith, who discovered users had created fake bikini images of her on X, told the BBC it left her feeling “dehumanized and reduced into a sexual stereotype.”
In a post on X last week, she wrote: “Any man who is using AI to strip a woman of her clothes would likely also assault a woman if he could get away with it. They do it because it’s not consensual. That’s the whole point. It’s sexual abuse that they can ‘get away with.’”
Charlie Smith, a UK-based journalist, also found nonconsensual photos of her in a bikini online.
“I wasn’t sure whether to post this, but someone asked Grok to post a pic of me in a bikini—and Grok replied with a pic,” she wrote in a post on X. “I’ll be honest—it’s upset me. It’s made me feel violated & sad. So, just a reminder that, what may seem like a bit of fun, can be hurtful. Be kind.”
St. Clair told Fortune that she considered X “the most dangerous company in the world right now” and accused the company of threatening women’s ability to exist safely online.
“What’s more concerning is that women are being pushed out of the public dialog because of this abuse,” she said. “When you are exiling women from the public dialog…because they can’t operate in it without being abused, you are disproportionately excluding women from AI.”