A recent report from blockchain intelligence firm Elliptic titled “AI-enabled crime in the cryptoasset ecosystem” has shed light on the emerging threats posed by artificial intelligence (AI) in cryptocurrency crime.
The report, supported by case studies, identifies five emerging types of AI-enabled crimes, ranging from deepfake scams to state-sponsored cyberattacks, emphasizing that these threats are still in their infancy.
AI Deepfake Scams
While AI has the potential to significantly transform the global economy, it also brings risks. According to Elliptic, threat actors are already exploiting AI for illicit activities within the cryptoasset ecosystem.
One of the report’s findings is the use of AI to create deepfakes. These highly realistic videos and images are being used by scammers to impersonate high-profile individuals such as celebrities, politicians, and industry leaders to legitimize fake projects.
As the report notes: “Crypto giveaway and doubling scams are increasingly using deepfake videos of crypto CEOs and celebrities to encourage victims to send funds to scam crypto addresses.”
Specific instances mentioned in the report include deepfakes targeting Ripple (XRP) and its CEO, Brad Garlinghouse, particularly following the company’s legal victory against the U.S. SEC in July 2023.
Other individuals who have been targeted by deepfake scams include Elon Musk, former Singaporean Prime Minister Lee Hsien Loong, and the 7th and 8th Presidents of Taiwan, Tsai Ing-wen and Lai Ching-te.
Anne Neuberger, the U.S. Deputy National Security Advisor for Cyber and Emerging Technologies, also addressed the growing concerns about AI’s misuse. She highlighted that AI is being used not only for everyday scams but also for more sophisticated criminal activities.
“Some North Korean and other nation-state and criminal actors have been observed trying to use AI models to accelerate the creation of malicious software and identifying vulnerable systems,” Neuberger stated.
GPT Tokens and Dark Web Activities
The hype around AI has also given rise to GPT-themed tokens, which are often promoted with promises of high returns. Elliptic warns that while some of these tokens may be legitimate, many are promoted in amateur trading forums with false claims of official association with AI products such as ChatGPT or the companies behind them.
The report also reveals discussions on dark web forums about leveraging AI models to reverse-engineer crypto wallet seed phrases and bypass authentication for various services.
“Throughout numerous dark web cybercrime forums, Elliptic has identified chatter that explores the use of LLMs to reverse-engineer crypto wallet seed phrases, bypassing authentication for services such as OnlyFans, and providing alternatives to image ‘undressing’ manipulation services such as DeepNude.”
Elliptic emphasizes that most AI-related threats in the cryptocurrency sector remain in their early stages, underscoring the need for vigilance and proactive measures to counter these developing forms of crypto crime.