In recent weeks, a surge of deepfake content generated by artificial intelligence on social media platforms has caused widespread alarm. Notable examples include explicit images of singer Taylor Swift going viral on platforms like X (formerly Twitter) and robocalls mimicking the voice of US President Joe Biden. These incidents highlight the escalating risks of manipulated media as the US gears up for the upcoming election cycle.
Misleading AI-generated audio and visuals are not a new phenomenon, but recent advances in AI technology have made such content easier to create and harder to detect. The White House has expressed concern over the circulation of false images and has committed to addressing the issue.
Challenges in Policing AI-Generated Content on Social Media Platforms
The incidents have served as a stress test for social media platforms’ ability to police AI-generated content. Despite rules against sharing synthetic, manipulated content, platforms like X struggled to promptly remove explicit AI-generated images of Taylor Swift, allowing them to garner millions of views before action was taken. This has raised questions about the effectiveness of current moderation efforts.
Calls for Responsibility and Improved Regulation
Experts emphasize that both companies and regulators have a role to play in preventing the spread of obscene manipulated content. AI researcher Henry Ajder advocates identifying ways that different stakeholders, including search engines, tool providers, and social media platforms, can put more obstacles in the way of creating and sharing such content.
The Swift incident has sparked public outrage, prompting the phrase “protect Taylor Swift” to trend on social media. This is not the first time Swift has been the target of explicit AI manipulation, but the scale of the public response marks a heightened level of concern.
Exploitation and Targeting of Women
The ease of creating explicit AI content raises disturbing concerns, particularly about its impact on women and girls worldwide, regardless of their social status. The issue extends well beyond Swift, though she alone was reportedly referenced in about 1,000 videos hosted on the top deepfake websites at the end of 2023.
Proliferation of Pornographic Deepfakes
The surge in AI-generated deepfake content is evident in pornographic videos, which have multiplied more than ninefold since 2020. The top 10 sites hosting such content held 114,000 videos at the end of last year, with Swift a common target. Visits to deepfake websites have also risen sharply, underscoring the growing prevalence of this disturbing trend.
Challenges in Detection and Social Media Spread
Despite the rapid proliferation of deepfake content, reliable detection capabilities on social media platforms remain elusive. Without effective detection mechanisms, platforms fall back on a roundabout process that relies on individuals to spot content and question its authenticity. Even when companies identify and remove such videos, the speed at which they spread often means the damage is already done.
Lack of Legislation and Calls for Action
The absence of federal laws in the US specifically addressing deepfakes, including pornographic ones, presents a challenge in holding creators accountable. While some states have implemented laws, their inconsistent application makes it difficult for victims to seek justice. The Biden administration is reportedly working with AI companies on initiatives such as watermarking generated images to aid in their identification as fakes.
Future Legislative Steps and Protection for Private Citizens
Discussions in Congress are underway regarding legislative steps to protect celebrities’ and artists’ voices from AI manipulation. However, there is currently a lack of protections for private citizens. Swift has not made a public statement on the issue, but experts suggest that legal action by public figures like her could serve as a crucial step in addressing the challenges posed by AI-generated deepfakes.