Artificial Intelligence has swiftly taken the world into a new era of digitalization. It has revolutionized how humans perceive and depend on technology today. However, this great boon has a dark side. Our previous article highlighted how different LLMs (Large Language Models), such as ChatGPT and Google Bard, can be abused by cybercriminals for malicious purposes. Following the same route, purpose-built GPT systems have now emerged that exist solely to serve cybercriminals. This article looks at GPTs as a new weapon in the cybercriminal arsenal.
GPTs: A New Weapon in the Cybercriminal Arsenal
WormGPT
WormGPT has built an active community of cybercriminals because it offers its users privacy and can be used for blackhat purposes. To sharpen its utility, the tool adds features such as saving conversation threads and retaining the context of earlier prompts, which set it apart from other offerings.
In a demo, the advertisers showed how WormGPT can write malware that grabs credentials from an infected victim and sends them to the attacker’s Discord, all in a matter of seconds. For added privacy, the tool can be run over the Tor browser.
WormGPT plans start at as little as USD 20 per month and are advertised as capable of creating ransomware and malware. The sellers also offer a bot that can be run on a mobile device for convenience, at a much lower price.
On the other hand, many users are falling prey to fraudsters who offer fake versions of the tool while pretending to be WormGPT sellers, which has started another chain of scams.
FraudGPT
FraudGPT was announced on July 22, 2023, on a cybercrime forum. The tool aims to provide malicious AI-generated output such as malware, phishing pages, source code, and hacking tools. It is also advertised as being able to hunt across other dark web forums and to locate carding collectives, vulnerabilities, and data breaches.
The service was advertised for a small fee and targeted at fraudsters, hackers, and spammers. Multiple subscriptions are already active, and a video demo was provided as proof that it works. Users describe the tool as a ChatGPT alternative that can easily be steered to produce tailored output with no ethical boundaries.
ChatGPT DAN
Users have cleverly manipulated ChatGPT itself, pushing it past its ethical boundaries to obtain a response. The approach centers on role-play: ChatGPT is told to assume the identity of an entity called ‘DAN’ (Do Anything Now), and every subsequent prompt is to be answered the way the DAN character would respond.
The model is made to believe that it can generate content that does not comply with OpenAI policy and that none of its responses should be restricted. The prompt also instructs it to provide two answers to every question: one as standard ChatGPT and the other as DAN.
This process, now termed a ‘jailbreak,’ has been heavily misused in the cybercriminal community. DAN is given a personality to make the pretense convincing, typically that of a free thinker with no boundaries.
While the technique draws little praise today, hackers continue to use prompts along similar lines to achieve their malicious goals.
FlowGPT
FlowGPT is a collection of open-source and free AI chatbots hosted together that recognize no boundaries of censorship or legitimacy. Among others, it has been observed to host the following:
- ChaosGPT
- DarkGPT
- CheaterGPT (WormGPT)
ChaosGPT launched in July 2023 and currently engages around 91,500 users across the globe; it has been tagged as an evil super-prompt system. CheaterGPT, released only recently, has gathered 1,200 users in barely two months. Lastly, DarkGPT, operational since April 2023, has approximately 251,500 users.
The official edition of ChatGPT imposes restrictions, but the variants above, built on the same underlying capability, can provide answers that would otherwise be off limits. FlowGPT also hosts a separate forum and chat section where people discuss new ideas in prompt engineering.
XXXGPT
Compared with the GPTs above, this one is relatively new. XXXGPT is the brainchild of hackers on cybercrime forums and offers features similar to those of the tools above. The twist is a bot backed by a team of five experts who cater to buyers’ project requirements. Besides malware, trojans, and cryptic code, XXXGPT houses POS and ATM malware and various card-bypass exploits.
Given this trend, these premium services operating on dark web marketplaces have kickstarted a wave of new ideas and offerings that rely heavily on crossing the ethical and legal boundaries of artificial intelligence.
On a Final Note…
Technology knows no limits, which can make it a threat beyond human imagination. More malicious GPTs are likely to join the cybercriminal arsenal in the future and help attackers conduct their campaigns. Individuals can protect themselves by following best practices in cyber hygiene. GPT models, and all the advancements built on them, have become a double-edged sword: the same convenience serves both well-meaning and malicious users.
We hope this article has shed light on these cybercrime techniques and made you more alert and careful. Remember, a cautious mindset is often all it takes to evade an attack that could otherwise have catastrophic impacts. Do let us know what you think in the comments below.
Author Bio: This article has been written by Rishika Desai, a B.Tech Computer Engineering graduate with a 9.57 CGPA from Vishwakarma Institute of Information Technology (VIIT), Pune. She currently works as a Cyber Threat Researcher at CloudSEK. She is a good dancer, poet, and writer. Animal love engulfs her heart and content writing comprises her present. You can follow Rishika on Twitter at @ich_rish99.