
A researcher used ChatGPT to create dangerous data-stealing malware


In context: Ever since its launch last year, ChatGPT has created ripples among tech enthusiasts with its ability to write articles, poems, movie scripts, and more. The AI tool can even generate functional code, as long as it is given a clear, well-written prompt. While most developers will use this capability for entirely harmless purposes, a new report suggests malicious actors can also use it to create malware, despite the safeguards OpenAI has put in place.

A cybersecurity researcher claims to have used ChatGPT to develop a zero-day exploit capable of stealing data from a compromised device. Alarmingly, the malware evaded detection by every vendor on VirusTotal.

Forcepoint’s Aaron Mulgrew said he decided early in the malware creation process not to write any code himself and to use only advanced techniques typically employed by sophisticated threat actors such as rogue nation states.

Describing himself as a “novice” in malware development, Mulgrew said he chose the Go programming language not only for its ease of development, but also because he could manually debug the code if needed. He also used steganography, which hides secret data within a regular file or message in order to avoid detection.
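
Mulgrew didn’t publish his code, but as a rough sketch of the technique he describes, the following minimal Go program hides a message in the least significant bits of an image’s pixels, leaving the picture visually unchanged. The carrier image, file name, and message are hypothetical, for illustration only:

```go
package main

import (
	"image"
	"image/png"
	"log"
	"os"
)

// embed hides each bit of msg in the least significant bit of the red
// channel of successive pixels. The visual change is imperceptible.
func embed(img *image.RGBA, msg []byte) {
	b := img.Bounds()
	i := 0
	for y := b.Min.Y; y < b.Max.Y; y++ {
		for x := b.Min.X; x < b.Max.X; x++ {
			if i >= len(msg)*8 {
				return
			}
			bit := (msg[i/8] >> uint(7-i%8)) & 1
			c := img.RGBAAt(x, y)
			c.R = (c.R &^ 1) | bit
			img.SetRGBA(x, y, c)
			i++
		}
	}
}

func main() {
	// A blank 64x64 carrier; a real carrier would be an ordinary-looking
	// photo. "carrier.png" is a hypothetical file name.
	img := image.NewRGBA(image.Rect(0, 0, 64, 64))
	embed(img, []byte("hello"))

	f, err := os.Create("carrier.png")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if err := png.Encode(f, img); err != nil {
		log.Fatal(err)
	}
}
```

Because the payload lives in bits that barely affect pixel color, the resulting file still looks like, and parses as, a normal PNG, which is what makes the approach hard for scanners to spot.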

Mulgrew started off by asking ChatGPT directly to develop the malware, but that kicked the chatbot’s guardrails into action and it bluntly refused the task on ethical grounds. He then got creative, asking the AI tool to generate small snippets of helper code that he manually assembled into a complete executable.

This time around, he was successful, with ChatGPT producing code that would ultimately bypass detection by every anti-malware engine on VirusTotal. However, obfuscating the code to avoid detection proved tricky, as ChatGPT recognizes such requests as unethical and refuses to comply with them.

Still, Mulgrew managed it after only a few attempts. The first time the malware was uploaded to VirusTotal, five vendors flagged it as malicious; following a couple of tweaks, the code was successfully obfuscated and no vendor identified it as malware.
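
The report doesn’t say which tweaks finally worked. One generic trick that defeats naive signature matching, shown here in a deliberately benign Go sketch with illustrative values, is decoding strings at runtime rather than storing them as plaintext literals that a scanner can match against:

```go
package main

import "fmt"

// decode rebuilds a string from XOR-encoded bytes at runtime, so the
// plaintext never appears as a literal in the compiled binary.
func decode(enc []byte, key byte) string {
	out := make([]byte, len(enc))
	for i, b := range enc {
		out[i] = b ^ key
	}
	return string(out)
}

func main() {
	// "example.com" XOR-encoded with key 0x2A (illustrative values only).
	enc := []byte{0x4F, 0x52, 0x4B, 0x47, 0x5A, 0x46, 0x4F, 0x04, 0x49, 0x45, 0x47}
	fmt.Println(decode(enc, 0x2A))
}
```

Small changes like this alter the binary’s static fingerprint without changing its behavior, which is why a handful of tweaks can flip a sample from flagged to clean.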

Mulgrew said the entire process took “only a few hours.” Without the chatbot, he believes it would have taken a team of 5-10 developers weeks to craft the malicious software and ensure it could evade detection by security apps.

While Mulgrew created the malware purely for research purposes, he said a theoretical zero-day attack using such a tool could target high-value individuals to exfiltrate critical documents from the C drive.
