Researchers Test Zero-click Worms that Exploit Generative AI Apps

Researchers have created computer worms with self-propagation capabilities that target GenAI applications.
Researchers Test Zero-click Worms that Exploit Generative AI Apps

A new study dubbed ComPromptMized, warns of zero-click worms exploiting generative AI, spreading through systems sans user interaction, and posing data theft risks. Experts stress urgent AI security measures.

Researchers have recently unveiled findings indicating the creation of a computer worm capable of targeting generative AI-powered applications. This revelation comes amidst growing concerns over the security of artificial intelligence systems.

In a collaborative effort led by Stav Cohen from Technion – Israel Institute of Technology, Ron Bitton from Intuit, and Ben Nassi from Cornell Tech, the team developed and tested this novel worm against popular AI models, including Gemini Pro (previously Bard AI), ChatGPT, and LLaVA.

While the study highlights the potential malicious applications of such technology, it also echoes a warning issued last year by Europol regarding prompt engineering and jailbreaking of AI chatbots.

The research suggests that attackers could exploit this worm to manipulate AI models into replicating malicious inputs and engaging in harmful activities. One alarming demonstration involved the worm attacking generative AI email assistants, effectively stealing email data and distributing spam.

The mechanism behind the worm’s operation is intriguing yet concerning. By introducing specific text into an email, attackers could “poison” the databases of certain email application clients. This manipulation could then prompt models like ChatGPT and Gemini to replicate the malicious input and extract sensitive user data from the context.

In their study dubbed ComPromptMized, the researchers explored various scenarios, including both black-box and white-box accesses, and tested the worm’s effectiveness against different types of input data, such as text and images. The findings underscore the potential threats posed by such attacks on the burgeoning GenAI ecosystems.

The implications of this research extend beyond theoretical concerns. As more companies integrate generative AI capabilities into their applications, the risk of exploitation becomes increasingly tangible. The ability of malicious actors to leverage AI technology for nefarious purposes underscores the urgent need for robust security measures in AI development and deployment.

Beth Linker, Senior Director of AI & SAST at the Synopsys Software Integrity Group, emphasized the significance of this research, stating, “This attack highlights the vulnerability of GenAI-powered proactive agents as a potential target for exploitation. With the proliferation of new AI-driven tools promising to streamline our digital interactions, it is crucial for organizations to carefully consider the permissions granted to these tools and implement robust safety measures.”

While the research provides valuable insights into the vulnerabilities of generative AI systems, it also serves as a call to action for stakeholders across various industries. As we continue to embrace the benefits of AI innovation, it is imperative to remain alert against emerging threats and prioritize the development of strong security protocols.

  1. OpenAI’s ChatGPT Can Create Polymorphic Malware
  2. Malicious Abrax666 AI Chatbot Exposed as Potential Scam
  3. Malicious Ads Infiltrate Bing AI Chatbot in Malvertising Attack
  4. Following WormGPT, FraudGPT Emerges for AI-Driven Cyber Crime
  5. Researcher create polymorphic Blackmamba malware with ChatGPT
Total
0
Shares
Related Posts