A hot potato: Despite being just a beta release, ChatGPT is already showing impressive capabilities across the entire spectrum of internet communities, including underground forums, where seasoned cyber-criminals have demonstrated that the AI can make creating functional malware much easier.

While New York City schools have decided to ban ChatGPT from their network and devices, the internet criminal underground is looking into adopting the new technology to create malware faster. The machine learning-based chatbot was designed to interact in a conversational way, answer follow-up questions, and admit its mistakes, and it seems OpenAI's researchers did such a good job that the service can even write code that works with just a few adjustments here and there.

Security firm Check Point recently scoured cyber-crime forums in search of ChatGPT-assisted pieces of malicious code. They found what they were searching for, as ChatGPT is seemingly being used both as an "educational" tool and as a pure malware-creation platform.

Thanks to OpenAI's chatbot, users of the underground hacking forum analyzed by Check Point were able to create a Python-based stealer that searches for common file types, copies them to the Temp folder, ZIPs them and uploads them to a hardcoded FTP server. Later analysis confirmed that the malicious code could work.

A second sample, created by the same user, was a Java-based code snippet capable of downloading an SSH/Telnet client (PuTTY) and then running it covertly on the system using PowerShell - functionality that could be modified to download and run any program. Other, less capable "threat actors" used ChatGPT to create an encryption tool so that they could easily generate cryptographic keys, encrypt files, compare hashes and more.

ChatGPT can even be (ab)used to "facilitate fraud activity," Check Point warned, as the service was also able to create marketplace scripts for the Dark Web, using third-party APIs to "get up-to-date cryptocurrency (Monero, Bitcoin and Ethereum) prices as part of the Dark Web market payment system."

Check Point previously tried its hand at automating an entire infection flow, complete with a phishing email and malicious Excel VBA code. The researchers also used Codex - another AI-based code-generation system - to create other types of complex, potentially malicious code.

Regarding ChatGPT, the researchers say it's still too early to tell whether the chatbot "will become the new favorite tool for participants in the Dark Web." However, the underground community has already shown significant interest in "jumping into this latest trend to generate malicious code." ChatGPT does include safeguards meant to prevent abuse, but malware authors and script kiddies have shown they can bypass those safeguards with little effort.