Binance Faces ChatGPT-Powered Smear Campaign Linking Founder to Chinese Communist Party – Here’s the Latest

Ruholamin Haqshanas
2 min read
Source: AdobeStock / Iryna Budanova

Binance has become the target of a smear campaign involving an AI-powered chatbot that spread false information about its founder, Changpeng Zhao (CZ), and his supposed affiliation with the Chinese Communist Party.

Binance has reportedly received dozens of inquiries from congressional offices regarding CZ and his connection to the China National Petroleum Corporation (CNPC) over the past couple of days.

The requests stem from a ChatGPT conversation claiming that CZ created a social media platform for CNPC.

However, Patrick Hillmann, Binance's Chief Strategy Officer, said the information has no factual basis.

“To all the crypto and AI sleuths out there, here is the ChatGPT thread if someone wants to dig in. As you’ll see ChatGPT pulls this from a fake LinkedIn profile and a non-existent Forbes article. We can’t find any evidence of this story nor the LinkedIn page ever existing.”

According to Hillmann, ChatGPT pulled the claim from a fabricated LinkedIn profile and a non-existent Forbes article.

Hillmann criticized ChatGPT for misleading people and damaging Binance's reputation.

“Like any new technology, bad actors will try and take advantage of it in the early days for profit or political gain. I hope raising awareness of issues like this prevents people from sole-sourcing an AI-generated response when seeking to disparage someone.”

He also noted that Binance has been channeling significant time and resources into exploring how blockchain technology and AI can work together.

AI Chatbots Explode in Popularity but They Are Not Immune to Mistakes

Since the release of OpenAI’s ChatGPT in November last year, AI chatbots have taken the internet by storm.

Despite their vast potential and functionality, these tools are not immune to mistakes. In fact, AI chatbots from major tech companies have already made some high-profile errors.

For instance, in its very first demo, Google's AI chatbot Bard wrongly claimed that the James Webb Space Telescope took the first pictures of a planet outside our solar system.

Since AI's ability to generate false or misleading information can cause real damage to individuals and organizations, many have called for safeguards against the spread of disinformation by such tools.

The Biden administration has recommended five principles that companies should uphold when developing AI technologies, laid out in a voluntary "bill of rights."

These principles include data privacy, protections against algorithmic discrimination, and transparency around when and how automated systems are being utilized.

The National Institute of Standards and Technology has also provided voluntary guardrails that companies can use to limit the risk of AI tools causing harm to the public.