Artificial Intelligence (AI) holds immense potential, but misuse by humans is fast becoming the greater threat. From unintentional over-reliance to deliberate exploitation, the risks posed by AI are multiplying because of human decisions, not the technology itself.
The Real Danger: Human Misuse of AI
While many, including tech leaders such as Sam Altman and Elon Musk, anticipate the arrival of artificial general intelligence (AGI) in the coming years, the true concern lies elsewhere. Researchers broadly agree that 2025 will not bring superintelligent AI but rather an escalation of risks stemming from human misuse. These risks are far from hypothetical; they are already manifesting across diverse sectors.
Unintentional Misuse: Over-Reliance on AI
AI tools are increasingly used in professional environments, but over-reliance on these systems has led to significant errors. For example, lawyers have faced sanctions for submitting fabricated citations generated by AI tools like ChatGPT. In one case, British Columbia lawyer Chong Ke included fictitious cases in a legal filing, leading to penalties. Similarly, two New York attorneys were fined $5,000 for relying on AI-generated false citations. Such examples highlight the danger of assuming AI-generated content is inherently accurate.
Intentional Misuse: The Rise of Deepfakes
The malicious use of AI tools has also surged. In early 2024, social media platforms were flooded with non-consensual deepfake images of Taylor Swift, bypassing Microsoft’s AI safeguards through simple misspellings of her name. Despite subsequent fixes, this incident underscores how easily AI tools can be manipulated. More broadly, open-source tools for creating deepfakes are widely accessible, amplifying the spread of such harmful content.
These deepfakes not only harm individuals but also contribute to what's known as the "liar's dividend": powerful figures can dismiss genuine evidence of wrongdoing by claiming it is AI-generated. Tesla, for instance, argued during a legal case that a 2016 video of Elon Musk could have been a deepfake. Similarly, Indian politicians and January 6 defendants in the US have claimed that incriminating videos were deepfakes, often without basis.
Unfit AI Applications: Denying Rights and Opportunities
Many AI systems currently in use are not fit for their intended purposes, yet they are employed to make significant decisions about people's lives. In the Netherlands, an AI algorithm wrongly accused thousands of parents of welfare fraud, leading to financial devastation and, ultimately, the resignation of the government. Similar issues have arisen in hiring, where AI tools like Retorio claim to assess candidates but can be swayed by superficial changes, such as a candidate wearing glasses or altering the video background.
The Road Ahead: Managing AI Risks
As AI-generated content becomes increasingly indistinguishable from reality, the challenges in mitigating these risks will grow. Companies, governments, and society at large have a monumental task ahead in regulating and managing AI misuse. The focus must remain on addressing real-world threats rather than being distracted by sci-fi scenarios of rogue AI systems.
AI's broader impact compounds the challenge: balancing the efficiency gains AI offers against sustainability goals, for instance, has become a pressing concern, underscoring the dual potential and risk the technology carries across sectors.
Ultimately, the future of AI will be shaped not by the technology itself, but by how responsibly humans choose to wield it.