Artificial intelligence (AI) is reshaping cybersecurity in opposing ways. On the one hand, predictive models help detect anomalies and accelerate incident response; on the other, the democratisation and customisation of large language models (LLMs) have created a black market in which AI is exploited to generate malware, launch phishing campaigns and clone voices. Independent analyses report that discussions about jailbreaking public versions of ChatGPT and similar models surged by 52 % in 2024, while mentions of malicious AI tools rose by 219 %. This article draws on aggregated data from security vendors, academic research and United Nations reports to describe a global trend; metrics vary by region and the data are partial, and this multi-source approach is adopted precisely to keep those methodological limitations visible.
Underground generative tools have taken the form of AI‑as‑a‑Service: monthly subscriptions granting access to LLMs stripped of ethical filters. WormGPT, an uncensored clone based on GPT‑J, is one example; by the end of 2024 its Telegram bot counted nearly 3,000 paying users. Pre‑packaged phishing kits such as 16shop, cited by the UNODC report, have been sold to over 70,000 criminals in 43 countries and have generated more than 150,000 fraudulent domains. These services not only write convincing phishing emails but also generate malware code, create fake identities and produce disinformation in seconds. The explosion of dark AI drastically lowers the barrier to entry: even individuals without development skills can launch attacks by paying a monthly subscription or buying kits priced at US $60–$150. The KELA report also notes that conversations about jailbreaks and uncensored AI models have shifted to private channels on Discord and Telegram, where actors share specific techniques for bypassing safeguards.
Academic research and United Nations reports show how the integration of AI is radically transforming criminal techniques. IBM's DeepLocker proof of concept showed that AI can hide a payload inside a legitimate application and activate it only when the victim is recognised via facial recognition or geolocation. The UNODC report highlights the emergence of AI‑enabled botnets capable of automatically scanning for vulnerabilities and bypassing CAPTCHA tests. In parallel, Palo Alto Networks researchers have used LLMs to generate proof‑of‑concept malware for Windows, macOS and Linux that replicates techniques catalogued in the MITRE ATT&CK framework. Together, these factors suggest that the next wave of malware will be more autonomous, harder to detect and customised for its targets.
Key observations
- Exponential growth of malicious models – KELA's research found that mentions of malicious AI tools in underground forums increased by 219 % in 2024. These tools are often jailbroken versions of public models, or open‑source models fine‑tuned on malicious data. UNODC reports indicate that Southeast Asia has become a hotspot for transnational fraud networks using automation and AI to extend their operations.
- Industrialisation of cyber‑crime – WormGPT and similar platforms show that unfiltered generative models can be marketed as subscription services, enabling even low‑skilled operators to conduct attacks. Phishing kits such as 16shop, sold to thousands of users and capable of automatically creating cloned websites of well‑known brands, further reduce the need for technical skills and transform cyber‑crime into an industry. In the same context, the KELA report cites the development of voice‑phishing bots and automated IVR systems, sold for around US $2,000, which enable thousands of fraudulent calls.
- AI‑generated malware: complexity and polymorphism – Palo Alto Networks researchers confirmed that AI can generate advanced malware, replicating MITRE ATT&CK techniques and testing them on multiple operating systems. Polymorphic variants produced by AI change their code on each execution, rendering signature‑based engines ineffective (a short sketch after this list illustrates why). The UNODC notes that criminals incorporate models trained on stolen passwords to generate plausible credentials. In addition, the 2025 KELA report describes offerings such as EvilAI: Telegram bots that generate ransomware and infostealers at prices between US $10 and US $60.
- Democratisation of phishing and fraud – Generative AI is supercharging phishing, social engineering and fraud. Malicious actors use models to write grammatically flawless messages, craft personalised emails and generate audio and video deepfakes, making scams harder to spot. A 2025 analysis by SQ Magazine reports that 68 % of analysts consider AI‑generated phishing harder to detect and that such campaigns have risen by 67 %. Voice‑cloning attacks linked to business email compromise grew by 81 %. The KELA report adds that the cost of automated phishing has dropped by more than 95 %, enabling attackers to send previously unimaginable volumes of email.
- Escalating attack metrics – In 2025 the number of AI‑assisted attacks grew by 47 % globally, with the financial sector absorbing one‑third of incidents. Forty‑one per cent of ransomware families use AI components to adapt payloads. The UNODC notes that AI‑enabled botnets can sustain DDoS campaigns with minimal oversight and bypass human verification systems. Moreover, nearly a quarter of analysed payloads contain autonomous malware capable of adapting to the host.
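To make the polymorphism point concrete, here is a minimal Python sketch using harmless placeholder bytes: a single mutation defeats a hash-based signature, while a behavioural fingerprint (here, a toy set of observed API calls) survives. The payload contents and call names are illustrative stand-ins, not real malware or real telemetry.

```python
import hashlib

# Two stand-ins for functionally identical payloads; the second is "mutated"
# with junk padding, mimicking how a polymorphic engine varies each sample.
# The byte strings are harmless placeholders, not real malware.
variant_a = b"PLACEHOLDER_PAYLOAD"
variant_b = b"PLACEHOLDER_PAYLOAD" + b"\x90" * 16  # junk bytes appended

# Signature-based detection: a static hash comparison.
print(hashlib.sha256(variant_a).hexdigest() ==
      hashlib.sha256(variant_b).hexdigest())        # False: signature defeated

# Behaviour-based detection: compare what the samples *do*, not what they are.
# Here "behaviour" is a toy set of API calls observed in a sandbox.
behaviour_a = {"CreateRemoteThread", "HttpSendRequest"}
behaviour_b = {"CreateRemoteThread", "HttpSendRequest"}
print(behaviour_a == behaviour_b)                   # True: fingerprint survives
```

The same logic explains why vendors increasingly pair static signatures with sandbox-derived behavioural indicators.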
Why this matters
- Speed and adaptability of malware – AI enables the rapid generation of malicious code for multiple operating systems and its adaptation to the target environment. Examples like DeepLocker show how a payload can remain dormant until it recognises its victim. The ability to create polymorphic malware that changes its code with each infection undermines signature‑based systems and demands behavioural detection methods, as the behavioural‑rule sketch after this list illustrates.
- Impersonation, psychological warfare and ethics – Models fed with malware analyses or public data can mimic known campaigns and plant false flags that confuse analysts. Deepfakes and voice clones erode trust in communications. Ethically, malicious AI raises issues of privacy, accountability and human rights; international frameworks such as the OECD AI Principles and the European AI Act urge transparent and responsible use of AI.
- Lowering the barrier to entry – Subscription services and pre‑packaged phishing kits sold for US $60–$150 enable criminals with little skill to orchestrate large‑scale attacks. Access to unlocked LLMs reduces the need for coding or technical knowledge. EvilAI, FraudGPT and similar offerings highlight the expansion of a crime‑as‑a‑service model.
- Amplification of fraud and ransomware – The combination of deepfakes, automated spam and malware yields highly effective fraud campaigns. SQ Magazine reports a 62 % increase in synthetic‑content fraud in 2025 and an average breach cost of US $5.72 million. Ransomware families leverage AI to modulate payload release and negotiate ransom demands.
- Convergence of autonomous attacks and industry sectors – Nearly a quarter of payloads are autonomous. Sectors such as manufacturing (25.7 %), finance (18.2 %), energy (11.1 %) and healthcare (6.3 %) are particularly targeted. The UNODC notes that emerging digital infrastructures in Southeast Asia and the Pacific, characterised by limited regulatory capacity, offer new exploitation opportunities.
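As a concrete illustration of the behavioural detection mentioned above, the sketch below flags any process whose event stream contains a known-suspicious call sequence in order, regardless of the binary's hash. The call names follow the classic process-injection pattern; the telemetry itself is invented for the example, and a real implementation would consume events from an EDR sensor.

```python
# A minimal behavioural rule over hypothetical EDR event streams: flag any
# process whose events contain a suspicious call sequence in order, no
# matter how the binary itself mutates.
SUSPICIOUS = ("OpenProcess", "VirtualAllocEx",
              "WriteProcessMemory", "CreateRemoteThread")  # process injection

def contains_in_order(events, pattern):
    """True if every step of `pattern` appears in `events`, in order."""
    it = iter(events)  # a shared iterator enforces the ordering constraint
    return all(step in it for step in pattern)

observed = ["NtQuerySystemInformation", "OpenProcess", "VirtualAllocEx",
            "WriteProcessMemory", "Sleep", "CreateRemoteThread"]
print(contains_in_order(observed, SUSPICIOUS))  # True -> raise an alert
```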
Looking ahead
- Continued evolution of malicious models – As open‑source models proliferate and jailbreak techniques continue to be shared, new generations of dark LLMs will emerge. Academic studies demonstrate that these models can generate complex, polymorphic malware. In future they may integrate reinforcement‑learning algorithms to refine tactics automatically and exploit real‑time data from campaigns.
- AI‑as‑a‑Service as a criminal industry – Trends highlighted by WormGPT and 16shop suggest that dark‑web marketplaces will evolve into full platforms with customer support, updates and technical assistance. Subscription access lowers costs and broadens the attacker base. The as‑a‑service model may extend to vulnerability research and penetration‑testing services.
- Strengthening dynamic defences – Behaviour‑based and dynamic‑analysis techniques will become essential. Palo Alto Networks experts recommend investing in tools that leverage AI to identify abnormal behaviour in real time and generate dynamic detection rules; a minimal sketch of this idea follows this list. Organisations must also strengthen staff digital literacy and AI‑ethics awareness so that employees can recognise AI‑generated emails, voices and videos.
- Regulation and international cooperation – The speed at which threats evolve demands privacy and security regulation that accounts for AI. United Nations reports call for transnational cooperation to share indicators of compromise and close regulatory gaps. In Europe, the AI Act will become fully applicable in 2026, introducing risk categories and compliance requirements; in the United States, initiatives such as the Blueprint for an AI Bill of Rights guide responsible implementation. Regional divergences could incentivise criminals to relocate their infrastructure to more permissive jurisdictions.
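As one minimal sketch of what such AI-assisted behavioural tooling can look like, the example below trains an unsupervised anomaly detector on synthetic per-process telemetry. It assumes scikit-learn is available, and the features (child processes spawned, outbound connections, files written per minute) are entirely hypothetical.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic baseline telemetry: 500 normal processes, three hypothetical
# features (children spawned, outbound connections, files written/minute).
rng = np.random.default_rng(42)
baseline = rng.normal(loc=[2.0, 3.0, 5.0], scale=[1.0, 1.0, 2.0], size=(500, 3))

# Fit the detector on normal activity only; no labelled malware needed.
model = IsolationForest(contamination=0.01, random_state=42).fit(baseline)

# A ransomware-like burst: mass spawning, beaconing and file writes.
suspect = np.array([[40.0, 60.0, 300.0]])
print(model.predict(suspect))  # [-1] -> flagged as anomalous
```

In production, the same idea would run over streaming EDR telemetry, with flagged processes feeding dynamic detection rules rather than a print statement.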
Our perspective
- Dual‑use AI and ethical responsibility – The same technology that produces useful content can be exploited to create malware and disinformation. Promoting a culture of responsible AI and developing guidelines that balance innovation and security are essential.
- The dark web as a criminal innovation lab – Underground communities experiment with AI faster than legitimate organisations, using open‑source models and jailbreaks to build new tools. Monitoring these spaces enables early identification of emerging techniques and the anticipation of counter‑measures.
- Polymorphism, anonymity and attribution – AI‑generated variants and impersonation techniques complicate forensic analysis and attribution. Defending against them requires a multidisciplinary approach that includes behavioural analysis, threat hunting and cross‑sector collaboration.
- Methodological transparency and plural sources – Understanding emerging trends requires combining data from vendors, academic studies and international agencies. Reporting the provenance and limitations of statistics improves replicability and reduces biases stemming from commercial interests.
- Training as primary defence – Beyond technological investments, organisations must educate employees about the risks of malicious AI. Simulations of AI‑generated phishing, exercises with deepfakes and ethical training can improve resilience.
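A phishing-simulation programme only improves resilience if its results are measured. The toy sketch below computes click and report rates from hypothetical exercise records; the users and outcomes are invented, and a real exercise platform would supply this data.

```python
# Toy results from a hypothetical internal phishing simulation:
# (user, clicked the lure, reported the lure).
records = [
    ("alice", False, True),   # spotted and reported the lure
    ("bob",   True,  False),  # clicked, never reported
    ("carol", False, False),  # ignored it
    ("dave",  True,  True),   # clicked first, reported afterwards
]

clicked  = sum(c for _, c, _ in records)
reported = sum(r for _, _, r in records)
print(f"click rate:  {clicked / len(records):.0%}")   # 50%
print(f"report rate: {reported / len(records):.0%}")  # 50%
```

Tracking these two rates over successive campaigns gives a simple, comparable measure of whether the training is working.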
Trend analysis shows that AI is transforming the threat landscape and that the dark web has become an incubator for intelligent malware. As a threat‑intelligence brand, Toralya examines adversarial use of AI across reconnaissance, deception and malware development, tracks early indicators from underground forums and outlines mitigation strategies. Our insights enable clients to anticipate AI‑driven attacks and prepare before they materialise. In a context where generative models will continue to evolve, investing in proactive, ethical analysis and dynamic solutions is no longer optional: it is the key to securing the digital future.
