Stop Sleeping On AI: Why Security Teams Should Embrace The Technology
Integrating AI into defense workflows can improve the speed and accuracy of threat detection and response, freeing security teams to automate routine work and focus on strategic, proactive measures. In a landscape where attackers already use AI at scale, organizations that fail to adopt it leave themselves exposed to increasingly sophisticated attacks.

Ron Williams is the CEO and founder of Kindo.Ai.

Artificial intelligence (AI) is no longer a futuristic tool for cybersecurity; it has gone mainstream. Threat actors have integrated AI into their operations with alarming success, using tools like WormGPT and GhostGPT, and even abusing legitimate platforms like Google’s Gemini AI, to scale their attacks. Google’s Threat Intelligence Group recently detailed how state-sponsored actors have abused Gemini to enhance reconnaissance, scripting, and privilege escalation. These developments point to a harsh reality: the asymmetry of AI capability between attackers and defenders is growing, and security teams are falling behind.
If defenders don’t start using AI to automate workflows, mitigate threats, and improve incident response, they risk being perpetually outpaced by modern attackers. The time to act is now, not after attackers have perfected the use of AI in their operations.
AI in Cybercrime
ChatGPT democratized consumer access to AI, revolutionizing a whole range of industries. Cybercriminals quickly recognized its potential for abuse: within a year of its launch, discussions about exploiting AI exploded across cybercrime forums, and hundreds of thousands of compromised ChatGPT accounts were bought and sold on underground markets. By mid-2023, WormGPT, a malicious chatbot designed to enhance business email compromise attacks and spear-phishing campaigns, sent shockwaves through the industry. Marketed as an AI tool trained on malicious datasets to improve cybercrime operations, it prompted headlines warning of a rise in AI-powered cybercrime. And WormGPT was just the beginning: variants like FraudGPT, DarkBERT (not to be confused with DarkBART), and GhostGPT followed.
Fast-forwarding to today, cybercriminals have found multiple ways to weaponize AI for their operations:
- Bypassing ethical constraints
- Passing off jailbroken versions of legitimate chatbots as purpose-built malicious chatbots
- Training AI models on malicious datasets
Challenges and Opportunities
Despite clear evidence of AI’s role in advancing cybercrime, many security teams remain hesitant to embrace AI defenses. This reluctance typically stems from three key concerns: lack of trust in AI, implementation complexity, and fears about job security. Many cybersecurity professionals view AI as a “black box” technology and worry that it is difficult to predict how it will behave in a live security environment.
Another major roadblock is the perceived difficulty of integrating AI into legacy security infrastructure. A lot of organizations assume that AI adoption requires a fundamental overhaul of existing systems, which is daunting and expensive. However, security teams can start small by identifying repetitive, time-consuming tasks that AI can automate.
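Starting small might look like the sketch below: automating alert deduplication and prioritization, one of the most repetitive SOC tasks, before any AI model enters the picture. The `triage_alerts` helper and the dict-based alert format are hypothetical illustrations, not a real product API; the ranked output is exactly the kind of condensed input an LLM-based summarizer could later be plugged into, without overhauling existing systems.

```python
from collections import Counter

def triage_alerts(alerts):
    """Deduplicate raw alerts and rank them by severity, then frequency,
    so an analyst (or a downstream AI summarizer) sees the signal first.

    Each alert is a hypothetical dict: {"rule": str, "severity": str}.
    """
    severity_rank = {"critical": 0, "high": 1, "medium": 2, "low": 3}
    # Collapse identical (rule, severity) pairs into one entry with a count.
    counts = Counter((a["rule"], a["severity"]) for a in alerts)
    deduped = [
        {"rule": rule, "severity": sev, "count": n}
        for (rule, sev), n in counts.items()
    ]
    # Highest severity first; within a severity, most frequent first.
    deduped.sort(key=lambda a: (severity_rank.get(a["severity"], 99), -a["count"]))
    return deduped

alerts = [
    {"rule": "impossible-travel", "severity": "high"},
    {"rule": "failed-login-burst", "severity": "medium"},
    {"rule": "failed-login-burst", "severity": "medium"},
    {"rule": "ransomware-ioc", "severity": "critical"},
]
print(triage_alerts(alerts)[0]["rule"])  # highest-priority rule prints first
```

Nothing here requires ripping out legacy tooling: a script like this can sit in front of an existing queue, and an AI summarization step can be added behind it later, one increment at a time.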
Some security professionals fear that widespread AI adoption could automate them out of a job. While discussions about AI replacing analysts entirely are common in the industry, AI should be viewed as an augmentation tool rather than a replacement: it absorbs volume and repetition, while analysts retain the judgment calls.
Benefits of AI in Cybersecurity
The benefits of AI and autonomous agents extend beyond the security operations center (SOC): AI can also improve web application security, agile security in software development lifecycles, penetration testing, and threat intelligence gathering. Security teams don’t need to overhaul their entire infrastructure overnight; incremental AI adoption delivers immediate benefits. AI is not a passing trend—it’s the present and future of cybersecurity.
Attackers are not waiting for defenders to catch up. They are actively refining AI-augmented attack methods, making their operations faster, more scalable, and more effective. Security teams must recognize that the only way to counter AI-based cyber threats is to fight fire with fire.