Shining A Light On Shadow AI
Shadow AI is the unauthorized use of artificial intelligence systems within an organization. Because it bypasses IT policy and oversight, it can lead to security breaches and data leaks, so companies need to monitor and control how AI tools are used. Addressing shadow AI helps mitigate these threats and safeguard sensitive information.

Alex de Minaur of Australia casts a shadow as he serves to Arthur Cazaux of France during their second round men's singles match at the Wimbledon Tennis Championships in London, Thursday, July 3, 2025.
Shadow AI is illuminating. In some ways, the use of unregulated artificial intelligence services that fail to align with an organization's IT policies and wider country-specific data governance controls might be seen as a positive; that is, it's a case of developers and data scientists innovating to bring hitherto unexplored efficiencies to a business. But mostly, unsurprisingly, shadow AI (like most forms of shadow technology and bring-your-own-device activity) is viewed as a negative, an infringement and a risk.
The Current State of AI Development
The problem today is that AI is still nascent, still embryonic, and only really starting to enjoy its first wave of implementation. With many users' exposure to AI limited to amusing images generated by ChatGPT and similar tools, we've yet to reach the point where widespread enterprise use of AI tools is the norm. Although that time is arguably very close, the current state of AI development means that some activity is being driven undercover.
The Challenge of Unsanctioned AI Tools
The unsanctioned use of AI tools by developers is becoming a serious issue as application development continues to evolve at a rapid pace. Scott McKinnon, CSO for UK&I at Palo Alto Networks, emphasizes the necessity of embedding clear, enforceable AI governance and oversight into the continuous delivery pipeline to balance speed with security.
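What embedding AI governance into a delivery pipeline might look like in practice can be sketched as a simple pre-merge gate that flags calls to AI endpoints that governance has not approved. This is a minimal illustration, not Palo Alto Networks' method; the host lists and the sample code are assumptions.

```python
import re

# Hypothetical allowlist of AI service hosts approved under governance policy.
APPROVED_AI_HOSTS = {"api.internal-llm.example.com"}

# Hosts of well-known public AI APIs that, in this sketch, policy flags for review.
FLAGGED_AI_HOSTS = {"api.openai.com", "api.anthropic.com"}

URL_PATTERN = re.compile(r"https?://([\w.-]+)")

def scan_source(text: str) -> list[str]:
    """Return flagged AI hosts referenced in source text that are not approved."""
    hosts = set(URL_PATTERN.findall(text))
    return sorted(h for h in hosts if h in FLAGGED_AI_HOSTS and h not in APPROVED_AI_HOSTS)

sample = 'resp = requests.post("https://api.openai.com/v1/chat/completions", json=payload)'
violations = scan_source(sample)
if violations:
    print("Unapproved AI endpoints found:", violations)  # a real pipeline would fail the build here
```

A check like this would typically run as one step in continuous integration, so unsanctioned AI usage is caught before code ships rather than discovered after the fact.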
Risks and Concerns
One of the major concerns is supply chain integrity: opaque software dependencies can introduce vulnerabilities and open the door to prompt injection attacks. Organizations must secure their AI development environments and vet tools rigorously to ensure trust and safety in AI-driven applications.
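Rigorous dependency vetting can be as simple as auditing declared packages against an allowlist of reviewed components. The sketch below assumes a plain requirements.txt-style listing; the package names and the vetted set are illustrative, not real policy.

```python
import re

# Hypothetical set of packages that have passed security review.
VETTED_PACKAGES = {"numpy", "requests", "pandas"}

def audit_dependencies(requirements_text: str) -> list[str]:
    """Return declared dependencies that are not on the vetted allowlist."""
    unvetted = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Strip version specifiers and extras to recover the bare package name.
        name = re.split(r"[<>=!~\[;]", line, maxsplit=1)[0].strip().lower()
        if name not in VETTED_PACKAGES:
            unvetted.append(name)
    return unvetted

reqs = "numpy==1.26\nsome-unvetted-llm-sdk>=0.1\nrequests"
print(audit_dependencies(reqs))  # → ['some-unvetted-llm-sdk']
```

In practice this kind of check would sit alongside signature verification and vulnerability scanning, since a name allowlist alone says nothing about what a dependency actually does.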
The Solution: Platform Approach
To address the risks posed by unsanctioned AI use, organizations need to move towards a unified platform approach that consolidates AI governance, system controls, and developer workflows. This approach enables organizations to enforce consistent policies, detect risky behaviors early, and provide developers with safe, approved AI capabilities within their existing workflows.
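Enforcing consistent policies from one place might be sketched as a single policy object that every workflow consults before an AI request goes out. The policy fields and tool names below are illustrative assumptions, not a description of any vendor's platform.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AIPolicy:
    """A central, immutable policy applied uniformly across developer workflows."""
    approved_tools: frozenset
    allow_sensitive_data: bool = False

def check_request(policy: AIPolicy, tool: str, contains_sensitive_data: bool) -> tuple[bool, str]:
    """Evaluate one AI request against the central policy; returns (allowed, reason)."""
    if tool not in policy.approved_tools:
        return False, f"tool '{tool}' is not approved"
    if contains_sensitive_data and not policy.allow_sensitive_data:
        return False, "sensitive data may not be sent to AI tools"
    return True, "ok"

policy = AIPolicy(approved_tools=frozenset({"internal-assistant"}))
print(check_request(policy, "internal-assistant", contains_sensitive_data=False))  # allowed
print(check_request(policy, "public-chatbot", contains_sensitive_data=False))      # denied
```

The design point is that the policy lives in one place: rather than each team re-implementing its own checks, every workflow routes through the same gate, which is what makes early detection of risky behavior feasible.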
Model Poisoning and Shadow AI
At its worst, shadow AI can lead to model poisoning, where attackers manipulate AI models by altering training data to produce biased or dangerous results. This underscores the risks of ungoverned AI use and the importance of comprehensive AI governance.
Implications and Analysis
Shadow AI presents challenges to network and system health, creates oversight gaps for IT teams, and can introduce biased AI models. Organizations must address these challenges by implementing robust AI governance and oversight to reduce risk while fostering innovation.