AI has ushered in a new kind of hacker
The expansive use of AI exposes new vulnerabilities that hackers can exploit, so companies need to treat their AI models as part of the attack surface they defend. Staying ahead of breaches requires continuous monitoring, robust security protocols, and close collaboration between cybersecurity experts and AI developers.

AI is offering hackers new openings.
Hackers are using new AI models to infiltrate companies with old tricks. Open-source models are gaining popularity, but they complicate cybersecurity. Researchers who scoured Hugging Face for malicious models found hundreds.
AI and Cybersecurity
AI doomsayers continue to worry about the technology's potential to bring about societal collapse. But the most likely scenario for now is that small-time hackers will have a field day. Hackers usually have three main objectives, according to Yuval Fernbach, the chief technology officer of machine learning operations at software supply chain company JFrog. They shut things down, they steal information, or they change the output of a website or tool.
Scammers and hackers, like employees of any business, are already using AI to jump-start their productivity. But the AI models themselves present a new way for bad actors to get inside companies, since malicious code is easily hidden inside open-source large language models, according to Fernbach.
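One common vector: many model checkpoints are distributed as Python pickle files, and unpickling can execute arbitrary code. Below is a minimal, self-contained sketch of the mechanism (a toy payload for illustration, not an exploit found in the wild):

```python
import os
import pickle

# pickle lets an object define __reduce__, which tells the loader
# what to call at deserialization time. A malicious model file can
# abuse this to run any command the moment the file is loaded.
class MaliciousPayload:
    def __reduce__(self):
        return (os.system, ("echo 'this ran just by loading the file'",))

blob = pickle.dumps(MaliciousPayload())

# The victim thinks they are merely loading model weights...
pickle.loads(blob)  # ...but the command above executes here
```

This is part of why newer formats such as safetensors, which store only raw tensor data and cannot embed executable code, are gaining ground for model distribution.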
"We are seeing many, many attacks," he said. Overloading a model so that it can no longer respond is particularly on the rise, according to JFrog. Industry leaders are starting to organize to cut down on malicious models. JFrog has a scanner product to check models before they go into production. But to some extent the responsibility will always be on each company.
Malicious Models Attack
When businesses want to start using AI, they have to pick a model from a company like OpenAI, Anthropic, or Meta, as most don't go to the immense expense of building one in-house from scratch. Going with an open-source model from Meta or any one of the thousands available is increasingly popular. Companies can use APIs or download models and run them locally.
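For the local route, the barrier to entry is low. Here is a minimal sketch using Hugging Face's transformers library (the model name is just an example; any hosted model works the same way):

```python
from transformers import pipeline

# Downloads the model on first run, then executes entirely on your machine.
generator = pipeline("text-generation", model="gpt2")

result = generator("Open-source AI models are", max_new_tokens=20)
print(result[0]["generated_text"])
```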
As AI matures, companies are more likely to stitch together multiple models with different skills and expertise. Each new model, and any update to its data or functionality down the road, could contain malicious code or simply a change that silently alters the model's output.
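One mitigation is to pin every model to a specific, audited revision so an upstream update can't silently swap what you deploy. A sketch using transformers (the model ID and commit hash are placeholders, not real identifiers):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "example-org/example-model"  # placeholder model ID
AUDITED_COMMIT = "0123abc"              # placeholder: the commit you reviewed

# Pinning `revision` means later pushes to the repo don't change your build.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, revision=AUDITED_COMMIT)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, revision=AUDITED_COMMIT)
```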
The consequences of complacency can be meaningful. In 2024, a Canadian tribunal ordered Air Canada to give a bereavement discount to a traveler after the company's chatbot gave him incorrect information on how to obtain the benefit, information the airline's human representatives had refused to honor.
Scale of the Problem
To gauge the scale of the problem, JFrog partnered with Hugging Face, the online repository for AI models, last year. Four hundred of the more than 1 million models scanned contained malicious code: roughly 0.04%, about the same odds as being dealt four of a kind in a five-card poker hand.
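To put that comparison in numbers (standard poker combinatorics; the calculation below is ours, not from the report):

```python
from math import comb

# Malicious-model rate from the JFrog/Hugging Face scan.
malicious_rate = 400 / 1_000_000        # 0.04%

# Four of a kind: 13 ranks, 48 choices of fifth card, C(52, 5) hands.
four_of_a_kind = 13 * 48 / comb(52, 5)  # ~0.024%

print(f"malicious models: {malicious_rate:.3%}")
print(f"four of a kind:   {four_of_a_kind:.3%}")
```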
Since then, JFrog estimates, the number of new models has increased threefold while attacks have increased sevenfold. Adding insult to injury, popular models often have malicious imposters whose names are slight misspellings of the authentic ones, laying a trap for hurried engineers.
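Typosquatting is an old package-registry trick applied to model hubs, and a basic defense is mechanical. The sketch below (illustrative only; the known-good list is hypothetical) flags any model ID within one edit of a name you trust:

```python
# Model IDs your organization has vetted (hypothetical examples).
KNOWN_GOOD = {"meta-llama/Llama-3.1-8B", "mistralai/Mistral-7B-v0.3"}

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def looks_like_typosquat(model_id: str) -> bool:
    """True if the ID is one edit away from, but not equal to, a trusted name."""
    return any(0 < edit_distance(model_id, good) <= 1 for good in KNOWN_GOOD)

print(looks_like_typosquat("meta-llama/LIama-3.1-8B"))  # capital I for l -> True
```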
Fifty-eight percent of companies polled in JFrog's survey either had no policy around open-source AI models or didn't know whether they had one. And 68% had no way to review developers' model usage beyond manual review.
With agentic AI on the rise, models will not only provide information and analysis but also perform tasks, and the risks could grow.