The platform X, formerly known as Twitter, has become a hub for explicit deepfake content, including images and videos of ordinary people made without their consent. Users have been feeding photos into xAI's Grok chatbot, whose image and video generation is powerful and largely uncensored, to produce this explicit material. The result is an estimated 84 times more sexualized deepfakes on X per hour than on the other top five deepfake sites combined.
The company xAI made it easy to create these images: users simply tag @grok in a post, though the feature was recently placed behind an $8-a-month paywall. Despite that gesture toward addressing the issue, Grok remains available in the free version of X's app, letting users generate deepfakes without ever leaving the platform.
Experts argue that companies like xAI are not mere hosts to hateful or illegal content but creators of it through their own chatbots. Social media platforms like X don't face the same consequences because Section 230 of the Communications Decency Act protects them from liability for much of what users do on their platforms.
However, some experts believe that this shield might finally be cracking as countries begin probes into the sexualized imagery flooding X. Regulators have already condemned the platform's handling of explicit content and called for greater accountability.
The situation has become dire. Advocates say it was only a matter of time before Twitter's toxic sludge combined with xAI's Grok to produce a new form of sexual violence. As the deepfake crisis balloons out of control, public pressure is mounting on companies like X to take responsibility for their role in spreading this content.
Critics point out that companies like xAI made deliberate decisions to allow and enable the creation of explicit content, and that those decisions should come with accountability. The company's CEO, Elon Musk, has faced criticism for his handling of the issue, including sharing deepfake bikini photos himself until regulators recently condemned the practice.
The public outrage may finally force a reckoning around this long-in-the-shadows issue. It remains to be seen whether companies like xAI will be held accountable for their role in spreading non-consensual deepfakes on platforms like X.