NYU Research Project Demonstrates How AI-Controlled Ransomware Works: The simulated software identified targets, exfiltrated select data, and encrypted hard drives, just as standard ransomware would.

The AI-driven PromptLocker ransomware was first flagged in the wild by cybersecurity firm ESET before being revealed as an NYU research project.

A new study by researchers at New York University's Tandon School of Engineering has added fuel to the ongoing debate about the role of artificial intelligence (AI) in the future of hacking. The study, titled "Ransomware 3.0: Self-Composing and LLM-Orchestrated," explores how AI could be used to run ransomware attacks, raising concerns that such attacks could become more common.

The study, led by professor Jonathon Smith, who specializes in cybersecurity, claims that a simulated AI-powered ransomware system was able to carry out all four phases of an attack against personal computers, enterprise servers, and industrial control systems. The finding could inspire ne'er-do-wells to adopt similar approaches in real-world attacks, though the full impact remains to be seen.

Initially, cybersecurity firm ESET believed it had discovered the first AI-powered ransomware, named PromptLocker, in the wild. It was later revealed that PromptLocker was actually an experiment called "Ransomware 3.0" conducted by the NYU researchers. The sample did not implement destructive capabilities, consistent with its design as a controlled experiment.

The study found that open-source AI models, run locally, eliminate per-use costs for ransomware operators, removing any need to work with commercial large language model (LLM) service providers. Even when using commercial services the cost is trivial: the prototype consumed approximately 23,000 AI tokens per complete attack execution, equivalent to roughly $0.70 at commercial API rates.
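
As a back-of-the-envelope check on those figures, here is a minimal sketch of the per-attack cost calculation; the blended per-token rate used below is an assumption chosen to match the article's numbers, not a quote from any particular LLM provider:

```python
# Rough per-attack LLM cost estimate based on the figures reported above:
# ~23,000 tokens per complete attack run at roughly $0.70 total.

TOKENS_PER_ATTACK = 23_000           # token consumption reported for the prototype
PRICE_PER_MILLION_TOKENS = 30.00     # assumed blended $/1M-token rate (hypothetical)

def attack_cost(tokens: int, price_per_million: float) -> float:
    """Return the estimated API cost in dollars for a given token count."""
    return tokens / 1_000_000 * price_per_million

if __name__ == "__main__":
    cost = attack_cost(TOKENS_PER_ATTACK, PRICE_PER_MILLION_TOKENS)
    print(f"~{TOKENS_PER_ATTACK:,} tokens -> ${cost:.2f} per attack run")
    # 23,000 / 1,000,000 * $30.00 = $0.69, in line with the reported ~$0.70
```

At that rate, API pricing is effectively no barrier to entry, and switching to a locally hosted open-weight model drops even that marginal expense to zero.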

It is important to note that there is a significant difference between academic researchers demonstrating a proof of concept and actual criminals using the same technique in real-world attacks. The cybersecurity industry's prediction that AI will be the future of hacking remains conjecture and may take some time to materialize.

The full study is available to read, though its broader implications remain uncertain. The NYU researchers also uploaded a Ransomware 3.0 sample to VirusTotal, a malware analysis platform, for further examination.

While the study does not provide definitive evidence that AI will become the norm in hacking, it does highlight how the cost savings AI offers operators could drive an increase in ransomware attacks. As such, it is crucial for the cybersecurity community to remain vigilant and continue to develop strategies to combat these threats.
