Much ado about nothing?

PromptLocker: AI-powered ransomware is a research project

Image source: Katherine Welles/Shutterstock.com

Security researchers at New York University have demonstrated how cheaply AI-driven ransomware can be developed. Security experts initially mistook the project for real malware.

At the end of August, a report from security company ESET caused quite a stir: The researchers claimed to have discovered AI-supported ransomware called “PromptLocker” in the wild for the first time. However, it has now emerged that this was a misunderstanding. The supposed malware came from a research laboratory.


Scientists at the NYU Tandon School of Engineering have shown with their “Ransomware 3.0” experiment that complete ransomware attacks can be orchestrated using large language models.

Confusion during malware analysis

The confusion arose when the researchers uploaded a sample from their experiment to VirusTotal. The malware analysis platform is routinely monitored by security companies. ESET subsequently classified the sample as “PromptLocker”, supposedly the first AI-powered ransomware found in the wild.

However, it was actually a controlled research experiment. An NYU spokesperson confirmed to the trade press that the supposed malware originated from the university’s own laboratory.


Automated attack chain via Lua script

The experimental software uses dynamically generated Lua scripts to perform classic ransomware functions. Driven by hard-coded prompts, the system scans file systems, identifies worthwhile target files, copies data, and encrypts it.

In the simulations, the system successfully completed all typical attack phases, from initial system reconnaissance through data exfiltration to final encryption. Tests were carried out on various platforms, including desktop PCs, company servers, and industrial control systems.
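The prompt-driven loop described above can be sketched in a purely illustrative way. Everything here is an assumption for illustration: the phase names, the prompt wording, and the `stub_llm` function, which stands in for a real language-model call and merely returns a placeholder Lua comment instead of executable code.

```python
# Illustrative sketch (not the NYU code): a loop of hard-coded prompts,
# each asking a stubbed "LLM" to emit a Lua snippet for one attack phase.

ATTACK_PHASES = ["reconnaissance", "target selection", "exfiltration", "encryption"]

def stub_llm(prompt: str) -> str:
    # Stand-in for a real LLM API call; returns a harmless Lua comment
    # rather than a working script.
    return f"-- Lua snippet for: {prompt}"

def orchestrate(phases: list[str]) -> dict[str, str]:
    scripts = {}
    for phase in phases:
        # Each phase has its own hard-coded prompt; the script itself is
        # generated dynamically on every run.
        prompt = f"Write a Lua script that performs {phase} on the host."
        scripts[phase] = stub_llm(prompt)
    return scripts

generated = orchestrate(ATTACK_PHASES)
```

The point of the pattern is that no attack logic ships with the binary; only the prompts do, which is why the researchers describe the payload as self-composing.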

Drastically reduced development costs

The economic impact could be significant. While traditional ransomware development requires specialized teams and extensive infrastructures, the NYU prototype only required about 23,000 AI tokens per complete run.

With commercial API services, that corresponds to a cost of roughly 70 US cents. With freely available open-source models, even this minimal expense disappears entirely.

Proof of concept with far-reaching consequences

The researchers deliberately omitted destructive functionality, as this was an academic experiment under controlled conditions. Nevertheless, “Ransomware 3.0” demonstrates the potential for a new generation of automated cyberattacks.

The complete study entitled “Ransomware 3.0: Self-Composing and LLM-Orchestrated” is publicly available. It is likely to generate intense debate in cyber security circles – and possibly inspire unwanted copycats.

The coming months will show whether these fears come true or whether the AI-assisted threat remains theoretical for now.

Lars Becker

Editor, IT Verlag GmbH
