Report Uncovers Massive Sale of Compromised ChatGPT Credentials

Group-IB Report Warns of Evolving Cyber Threats Including AI and macOS Vulnerabilities and Ransomware Attacks.


Group-IB’s Threat Intelligence has released a “Hi-Tech Crime Trends 23/24” report highlighting a dramatic surge in ransomware attacks, macOS system threats, and the growing use of AI by cyber criminals.

The report revealed that Asia-Pacific was the primary target for advanced persistent threat (APT) groups in 2023, which carried out 523 attacks worldwide. APAC organizations accounted for 34% of global attacks, followed by Europe and the Middle East in second place and Africa in third.

A 70% rise in public ads selling zero-day exploits was recorded during 2022-2023. Threats like CVE-2023-38831, a zero-day vulnerability in WinRAR's handling of ZIP archives, remained popular among advanced cybercrime groups and nation-state actors conducting cyber-espionage.

The report also warned of growing criminal interest in AI systems, particularly ChatGPT credentials, as a route to sensitive corporate data, since accounts on public Large Language Models (LLMs) are often not protected with multi-factor authentication (MFA).
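Since the report ties this exposure partly to missing MFA, the usual mitigation is a second factor such as a time-based one-time password (TOTP, RFC 6238). The following is a minimal standard-library sketch of the TOTP computation, not taken from the report; the secret shown in the usage note is the RFC 6238 test secret, used purely for illustration:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, t=None, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32.upper())
    # Counter = number of time steps since the Unix epoch.
    counter = int((t if t is not None else time.time()) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset from the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

With the RFC 6238 SHA-1 test secret (`GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ`), the vector at T=59 yields the 8-digit code `94287082`. Even a stolen password is useless to a buyer of infostealer logs without the device holding this secret.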

Compromised AI accounts can expose confidential information, including internal source code, financial data, and trade secrets, as well as the stored history of conversations between employees and these systems.

Over 225,000 infostealer logs containing compromised ChatGPT credentials were detected between January and October 2023. Four ChatGPT-style tools have been developed since mid-2023 to facilitate such activities: WolfGPT, DarkBARD, FraudGPT, and WormGPT. FraudGPT and WormGPT are popular for social engineering and phishing, while WolfGPT focuses on generating malicious code and exploits.

Researchers detected around 130,000 compromised hosts with ChatGPT access between June and October 2023, a 36% increase from the previous year. The LummaC2 information stealer was responsible for the majority of these logs.

Researchers also detected 4,583 companies whose information, files, and data were published on ransomware dedicated leak sites (DLSs), a 74% increase from the previous year, with North American companies being the biggest victims.


Global threat actors, primarily APT groups, are increasingly targeting Apple platforms, with underground sales of macOS information stealers rising fivefold. The most concerning cyber risks for 2024 include zero-day exploits and malicious service use.

Cybersecurity firms have long raised the alarm over the continuous expansion of the global cyber threat landscape. In June 2023, Group-IB researchers identified over 100,000 infostealer-infected devices with saved ChatGPT credentials, including 26,802 compromised accounts recorded in May 2023 alone. The Asia-Pacific region had the highest concentration of compromised credentials. By default, ChatGPT stores user queries and AI responses, which can expose confidential information.
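Because stored chat histories can leak whatever employees paste into them, one practical safeguard is scrubbing obvious secrets before a prompt ever leaves the host. Below is a minimal, hypothetical sketch of such a pre-submission filter; the patterns and placeholder labels are illustrative assumptions, not an exhaustive data-loss-prevention solution:

```python
import re

# Illustrative patterns only; real deployments need far broader coverage.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
    "ip": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely secrets with labeled placeholders before submission."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()}]", prompt)
    return prompt
```

For example, `redact("mail admin@corp.example from 10.0.0.1")` returns `"mail [EMAIL] from [IP]"`, so even if the account is later compromised, the stored history contains placeholders rather than live identifiers.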

In January 2024, Kaspersky Digital Footprint Intelligence reported that threat actors were exploiting AI technologies, particularly ChatGPT and other LLMs, for illegal activities: sharing jailbreaks, repurposing legitimate tools for malicious ends, and using language models for malware development and other illicit purposes.

The latest report highlights a significant rise in ransomware attacks targeting the manufacturing, real estate, healthcare, government, and military sectors; the growing sophistication of cybercriminals targeting macOS systems; and the potential misuse of AI by cybercriminals to automate tasks, personalize attacks, and bypass security measures.

Businesses should invest in robust security solutions such as firewalls, endpoint protection software, and intrusion detection systems, and train employees to recognize cyber threats.

Related Posts

  1. OpenAI’s ChatGPT Can Create Polymorphic Malware
  2. Malicious Abrax666 AI Chatbot Exposed as Potential Scam
  3. Malicious Ads Infiltrate Bing AI Chatbot in Malvertising Attack
  4. Following WormGPT, FraudGPT Emerges for AI-Driven Cyber Crime
  5. Researchers create polymorphic Blackmamba malware with ChatGPT