NYS lacks adequate AI guidance for state agencies, says comptroller
The New York State Comptroller warns that state agencies lack sufficient guidance on implementing artificial intelligence technology. Without clear regulations and oversight, he says, agencies' growing use of AI carries avoidable risks, and he calls for comprehensive statewide policies to close the gap.

QUEENSBURY – In a new audit, NYS Comptroller Thomas DiNapoli said the state should improve the guidance it provides to state agencies on their use of artificial intelligence, or AI, for various services. He said agencies are mostly on their own when it comes to AI and are taking a “patchwork of approaches” to oversight.
Main Findings
“New York state agencies are using AI to monitor prisoners’ phone calls, catch fraudulent driver’s license applications, assist older adults, and support government services,” DiNapoli said. “Our audit found insufficient central guidance and oversight to check that these systems are reliable and accurate, and no inventory of what AI the state is using. This audit is a wake-up call. Stronger governance over the state’s growing use of AI is needed to safeguard against the well-known risks that come with it.”
The audit reviewed AI use at the Office for the Aging, DOCCS, the DMV, and the Department of Transportation (DOT) and found that policies and approaches to managing AI risk varied among the agencies. “These incomplete approaches to AI governance do not ensure that the State’s use of AI is transparent, accurate, and unbiased and avoids disparate impacts,” the audit report found. “For example, none of the agencies required or developed specific procedures to test AI systems in order to evaluate whether outputs were accurate or biased.”
Specific Agency Findings
According to the audit report, the Office for the Aging uses an AI companion called ElliQ, a voice-operated device that initiates conversations and remembers what users say. It is designed to combat loneliness and social isolation among seniors. In 2023, 808 of these AI companions were distributed to 530 program participants.
DOCCS uses an AI system called Investigator Pro to monitor prisoners’ voices on phone calls and detect incarcerated individuals placing calls with another person’s personal identification number. The audit said that DOCCS has addressed certain AI-related risks in its contract terms for Investigator Pro, but the contract does not address bias mitigation, which may lead to false positives and increased investigations. DOCCS also does not monitor or measure error rates, according to the audit.
The DMV uses facial recognition technology to deter identity fraud by using computer modeling to compare facial measurements of new commercial license applicants with those already on file. And while the DOT has not formally adopted any AI systems, it has been piloting three for potential use.
ITS Governance and Policies
Use of AI in the state is governed by the Office of Information Technology Services (ITS), which issued its AI policy in January 2024. The comptroller’s audit noted that three of the four agencies use ITS’s definition of AI, but the DMV established its own AI definition and did not consider its facial recognition software as AI. The DMV also did not consult with ITS to determine whether its use of facial recognition software qualified as AI.
ITS defines an AI system as something that “perceive[s] real and virtual environments; abstract[s] such perceptions into models through analysis in an automated manner; and use[s] model inferences to formulate options for information or action.” Unlike the other agencies audited, the DMV has established formal AI policies for risk management. The DMV and the DOT have also established internal AI committees to manage the use of AI and other developing technologies within those agencies.
Overall Audit Conclusions
Overall, the audit decried the lack of uniformity in policy among state agencies that have incorporated or are considering using AI for various services. “A major problem with the AI Policy is that it leaves agencies free to determine what is, or is not, responsible use of AI. Conflicting and confusing guidance regarding the use of confidential information with AI systems as well as lack of staff training also create opportunities for inadvertent noncompliance and contribute to concerns about unintended uses and consequences,” DiNapoli’s office said in its announcement of the audit report.
The full, 37-page audit report can be found on the comptroller’s website.