Breeze Liu, an AI startup founder and digital rights advocate, fought an uphill battle to remove nonconsensual explicit content of herself from the internet, and got Microsoft to act only after a direct, in-person confrontation.
The Start of a Digital Nightmare
In April 2020, Breeze Liu received a call that would change her life. A former classmate informed her that an explicit video of her was circulating online without her consent. The footage, allegedly recorded when she was a minor, had been uploaded to adult content platforms and soon spread across the web. Worse yet, it was later used to generate AI-powered deepfake videos, further amplifying her distress.
A Battle Against the Internet’s Gatekeepers
Determined to reclaim her digital identity, Liu embarked on a years-long effort to erase approximately 400 unauthorized images and videos. While several platforms complied with takedown requests, Microsoft, a dominant force in cloud hosting, remained unresponsive. Despite repeated requests that the company remove roughly 150 explicit images stored on its Azure cloud services, Liu was met with silence and bureaucratic roadblocks.
The Turning Point: Confronting Microsoft
After months of frustration, Liu and her colleague Andrea Powell took matters into their own hands. At a digital safety conference, they directly confronted a senior Microsoft executive about the company’s inaction. This face-to-face interaction proved pivotal. Within days, Microsoft escalated the case, and the explicit content finally started disappearing from its servers.
The Broader Implications of Liu’s Struggle
Liu’s ordeal highlights the systemic failures in addressing digital abuse, particularly when victims fall into legal gray areas. The slow response from tech companies, inconsistent policies, and lack of enforcement leave survivors vulnerable. Many platforms lack automated tools to swiftly detect and remove nonconsensual content, making it an ongoing battle for affected individuals.
Pushing for Legislative Change
Recognizing the need for systemic reform, Liu has advocated for stronger legislation. A bipartisan group of lawmakers recently reintroduced a bill requiring websites to remove reported explicit content within 48 hours. The proposed law, which has gained traction in Congress, aims to impose stricter penalties on platforms that fail to act swiftly.
Looking Ahead: AI and Digital Safety
As AI technology continues to evolve, the risk of deepfake exploitation grows. Liu’s experience underscores the urgent need for tools that can detect and prevent the misuse of personal imagery online. Some companies, such as IndyKite with its AI Control Suite, are investing in AI-driven security products. However, regulatory frameworks must evolve alongside the technology to ensure comprehensive protection.
A Call for Accountability
Liu’s victory against Microsoft is a testament to the power of persistence. Yet her struggle is far from over: explicit content featuring her remains on some platforms, and she continues to push for stronger protections for victims of digital abuse. Her journey serves as a warning about the dangers of unchecked AI technology, while also offering a blueprint for other survivors.
Until tech companies adopt more efficient and ethical AI-driven content moderation solutions, victims like Liu will be left to navigate a broken system on their own.