How Cybercriminals Are Using ChatGPT to Create Malware


ChatGPT is being used by cybercriminals to create malware.

According to cybersecurity specialists, cybercriminals have begun leveraging OpenAI’s artificially intelligent chatbot ChatGPT to quickly build hacking tools. Scammers are also testing ChatGPT’s ability to build other chatbots designed to impersonate young women and ensnare targets, according to one specialist who monitors criminal forums. Many early ChatGPT users worried that the app, which went viral in the days following its release in December, could write malicious code capable of spying on users’ keystrokes or encrypting their data.

According to a report from the Israeli security firm Check Point, underground criminal forums have now caught on. In one forum post reviewed by Check Point, a hacker who had previously released Android malware showcased code written with ChatGPT that stole files of interest, compressed them, and sent them across the internet. The same hacker demonstrated another tool that installed a backdoor on a computer and could upload further malware to the infected machine.

Another user on the same forum posted Python code that could encrypt files, claiming that OpenAI’s app had helped them write it, and said it was the first script they had ever written. According to Check Point’s analysis, such code can serve entirely benign purposes, but it could also “easily be modified to encrypt someone’s machine completely without any user interaction,” similar to how ransomware works. Check Point noted that the same forum member had previously offered access to hacked enterprise servers and stolen data. Another user discussed “abusing” ChatGPT by using it to help design features for a dark web marketplace similar to Silk Road or AlphaBay, demonstrating how the chatbot could quickly create an app that monitored cryptocurrency prices for a hypothetical payment system. According to Alex Holden, founder of the cyber intelligence firm Hold Security, dating scammers are also using ChatGPT to construct convincing personas. “They plan to create chatbots to impersonate mostly girls to go further in conversations with their marks,” he explained. “They are trying to automate idle chit-chat.” At the time of publication, OpenAI had not responded to a request for comment.
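The forum post described above reportedly had ChatGPT produce an app that watched cryptocurrency prices for a payment system. A minimal sketch of what such a price monitor might look like is below; the `fetch_price` callback, symbol name, and 5% threshold are illustrative assumptions, not details from the forum post, and the fetcher is pluggable so the logic can run without any live exchange API:

```python
# Hypothetical sketch of a simple cryptocurrency price monitor of the kind
# described in the forum post. The price source is passed in as a callable,
# so any exchange or public API could be plugged in.
from typing import Callable, Tuple


def check_price_move(fetch_price: Callable[[str], float],
                     symbol: str,
                     last_price: float,
                     threshold: float = 0.05) -> Tuple[float, bool]:
    """Fetch the latest price for `symbol` and report whether it moved by
    at least `threshold` (as a fraction) relative to `last_price`."""
    price = fetch_price(symbol)
    moved = abs(price - last_price) / last_price >= threshold
    return price, moved


# Example with a stub fetcher standing in for a real API call:
price, moved = check_price_move(lambda s: 105.0, "BTC", last_price=100.0)
# a 5% move against last_price trips the default threshold
```

A real version would poll on a timer and call out to an exchange's public price endpoint, but the alerting logic itself reduces to the comparison shown here.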

While the ChatGPT-coded tools appeared “pretty basic,” Check Point said it was only a matter of time before more sophisticated hackers found ways to exploit the AI. According to Rik Ferguson, vice president of security intelligence at the American cybersecurity firm Forescout, ChatGPT does not yet appear capable of coding anything as complex as the major ransomware strains seen in significant hacking incidents in recent years, such as Conti, which was infamously used in the breach of Ireland’s national health system. Even so, Ferguson said, OpenAI’s tool will lower the barrier to entry for newcomers to the criminal market by helping them create more basic but still potent malware.

He also expressed concern that, rather than being used to write code that steals victims’ data, ChatGPT could be used to help build websites and bots that trick users into handing over their information. It has the potential to “industrialize the creation and personalization of malicious web pages, highly-targeted phishing campaigns, and social engineering-based scams,” according to Ferguson. Check Point threat intelligence expert Sergey Shykevich told Forbes that ChatGPT will be a “great tool” for Russian hackers who don’t speak English to craft legitimate-looking phishing emails. As for safeguards against criminal use of ChatGPT, Shykevich said they would eventually, and “sadly,” have to be enforced through regulation. OpenAI has put filters in place to block obvious requests to write malware, returning policy violation notices, but hackers and journalists have found ways around those safeguards. According to Shykevich, companies such as OpenAI may have to be legally compelled to train their AI to detect such abuse.


