Cybercriminals Using ChatGPT to Build Hacking Tools, Write Code

Both expert and novice cybercriminals have already started using OpenAI’s chatbot, ChatGPT, in a bid to build hacking tools, security analysts have said.

In one documented example, the Israeli security company Check Point spotted a thread on a popular underground hacking forum in which a hacker said he was experimenting with the AI chatbot to “recreate malware strains.”

The hacker went on to compress ChatGPT-written Android malware and share it across the web. The malware was capable of stealing files of interest, Forbes reports.

The same hacker showed off a further tool that installed a backdoor on a computer and could infect the machine with additional malware.

Check Point noted in its assessment of the situation that some hackers were using ChatGPT to create their first scripts. In the aforementioned forum, another user shared Python code that he said could encrypt files and had been written using ChatGPT. It was the first script he had ever written, he said.

While such code could be used for harmless purposes, Check Point said it could “easily be modified to encrypt someone’s machine completely without any user interaction.”

The security company stressed that while ChatGPT-coded hacking tools appeared “pretty basic,” it is “only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools for bad.”


A third case of ChatGPT being used for fraudulent activity flagged by Check Point involved a cybercriminal who showed it was possible to build a Dark Web marketplace with the AI chatbot’s help. The hacker posted in the underground forum that he had used ChatGPT to create a piece of code that calls a third-party API to retrieve up-to-date cryptocurrency prices, for use in the Dark Web market’s payment system.
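Check Point’s write-up does not include the script itself, but the general idea is straightforward. The minimal sketch below polls a public price endpoint for a few coins; the choice of CoinGecko is purely an assumption for illustration, as the report only mentions “a third-party API.”

```python
import requests

# Illustrative sketch only (not the actual forum code): fetch current prices
# for a few cryptocurrencies from CoinGecko's public API. The specific API is
# an assumption; Check Point's report only refers to "a third-party API".
COINGECKO_URL = "https://api.coingecko.com/api/v3/simple/price"

def get_prices(coins=("bitcoin", "ethereum", "monero"), currency="usd"):
    resp = requests.get(
        COINGECKO_URL,
        params={"ids": ",".join(coins), "vs_currencies": currency},
        timeout=10,
    )
    resp.raise_for_status()
    # Returns a mapping like {"bitcoin": {"usd": 43000.0}, ...}
    return resp.json()

if __name__ == "__main__":
    for coin, price in get_prices().items():
        print(coin, price)
```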

ChatGPT’s developer, OpenAI, has implemented some controls that block obvious requests for the AI to build spyware. However, the chatbot has come under yet more scrutiny after security analysts and journalists found it could write grammatically correct phishing emails without typos.

OpenAI did not immediately respond to a request for comment.
