Based on the available information about the sale of the Abrax666 AI chatbot, cybersecurity researchers believe the chatbot is most likely a scam.
Since late October 2023, cybercriminals have been advertising a new malicious AI chatbot called Abrax666 on the dark web and underground hacking forums. The developer of Abrax666 has been touting it as a perfect multitasking tool for both ethical and unethical activities.
The Abrax666 AI chatbot is currently priced at €299 per month or €2,499 per year, with its source code on sale for €4,999. However, according to cybersecurity researchers at SlashNext, the chatbot is likely fake and could be an attempt to scam buyers.
It is worth noting that after the launch of OpenAI's ChatGPT, cybercriminals attempted to exploit the chatbot for malicious purposes; Russian hackers in particular tried to leverage it to create malware and phishing pages. While these attempts saw some success, cybercriminals escalated their efforts by launching their own malicious chatbots, including WormGPT and FraudGPT.
In the latest report, SlashNext cybersecurity researcher Daniel Kelley highlighted several red flags suggesting that the Abrax666 AI chatbot could be a scam despite its bold claims. The chatbot is being sold by a threat actor using the alias Abrax on a notorious Russian-language forum whose rules require sellers to deposit funds before initiating a sale.
However, Abrax not only failed to deposit the funds but also refused to let interested parties review the chatbot before purchase. One potential buyer, using the alias ‘SocketSilence’, stated that Abrax provided no evidence of having previously sold the chatbot or of whether it actually works.
Despite video demonstrations shared by the seller, the authenticity of the chatbot’s capabilities remains in question. The videos, according to Kelley, do not exhibit standard AI chatbot behaviour and appear more like a standard tool.
“The only potentially credible evidence that has caused us to slightly defer our verdict here, are videos being circulated by ‘Abrax’ that allegedly show the AI chatbot in use,” Kelley explained in a blog post.
“However, even these videos do not appear to showcase the standard output one would expect from an AI chatbot of this nature. The output appears to look more like a standard tool that is not capable of real-time communication and does not accept prompts but arguments and flags instead,” the researcher added.
Abrax’s attempts to sell the chatbot on other cybercrime forums resulted in the removal of the thread, possibly because administrators detected malicious intent. Furthermore, the topic initiated by Abrax had been locked by forum administrators at the time of writing.
Overall, the SlashNext report casts doubt on the legitimacy of the Abrax666 AI chatbot, cautioning potential buyers against trusting its advertised capabilities. However, if the seller provides evidence that the product is legitimate and lives up to its claims, researchers could revise their assessment.