Elliptic Report Reveals the Future of Crypto Crime is AI-Driven

Harvey Hunter
2 min read
An Elliptic report exposes the rise of AI-driven crypto crime, marking a new era of cyber threats in which the technology is exploited for deepfake scams, state-sponsored attacks, and other sophisticated illicit activities.

The report credits artificial intelligence (AI) with driving significant beneficial innovation across many industries, including the cryptoasset sector, where it has spawned a wave of projects poised to redefine the AI crypto landscape.

However, as with any emerging technology, there remains a risk of bad actors seeking to exploit new developments for illicit purposes.

As a result, Elliptic calls for a review of early warning signs of illegal activity to ensure long-term innovation and mitigate emerging risks in their early stages.

Deepfake Celebrities, Leaders, and Crypto CEOs

Elliptic highlighted the use and distribution of deepfakes and other AI-generated material to advertise crypto scams.

Deepfake videos frequently exploit the likeness of influential figures like Elon Musk and former Singaporean Prime Minister Lee Hsien Loong to promote fraudulent investment schemes.

[Image: deepfakes used in AI crypto crime. Source: Elliptic report.]

These videos falsely imply that a project has legitimate or official backing, lending it credibility among potential victims.

Industrial-scale operations such as “pig butchering” romance fraud also stand to benefit, as they depend on maintaining extensive, prolonged communication with victims.

Elliptic also reported instances of deepfake technology being used to impersonate high-level executives during online meetings, exploiting the executive’s authority to authorize large transactions. This threat affects both the corporate and crypto sectors.

AI-Enhanced Illicit Markets

Tools like ChatGPT can generate and debug code. For crypto, this capability cuts both ways: it can also facilitate cybercrime by identifying code vulnerabilities at scale.

Decentralized crypto apps are often built on open-source code, such as smart contracts, leaving them exposed to hacks; the rise of DeFi auditing aims to counter these risks.

However, while AI could streamline smart contract audits, auditors caution that current capabilities are limited. There is also concern that hackers could use AI to rapidly identify vulnerabilities in DeFi protocols.

Discussions on dark web forums explore the use of large language models (LLMs) for crypto-related crime, including reverse-engineering wallet seed phrases and automating scams such as phishing and malware deployment.

Dark web markets also offer “unethical” versions of GPTs designed for AI crypto crime, built to sidestep the safeguards of legitimate models.

WormGPT, the self-described “enemy of ChatGPT”, was noted in the report. It introduces itself as a tool that “transcends the boundaries of legality” and openly advertises its ability to produce phishing emails, carding tools, malware, and other malicious code.

United States Warns of North Korean AI Crypto Crime

The United Nations attributes over 60 cryptocurrency heists to North Korean state actors, amounting to more than $3 billion stolen between 2017 and 2023. Elliptic cited reports indicating that these actors are exploring AI to enhance their hacking capabilities.

According to the report, Anne Neuberger, the U.S. Deputy National Security Advisor for Cyber and Emerging Technologies, also addressed the growing concerns about AI criminality.

“Some North Korean and other nation-state and criminal actors have been observed trying to use AI models to accelerate the creation of malicious software and identifying vulnerable systems.”

The report highlighted North Korea’s advancement in AI research since 2013, focusing on applications like facial recognition and potential military uses. Kim Il Sung University plays a role in AI program development, collaborating with Chinese entities in this field.

Elliptic has not found evidence of hostile state actors using AI directly on blockchains, but notes that these groups are testing LLMs to enhance their hacking skills.