Trump’s push for AI deregulation could put financial markets at risk
Without proper AI safeguards, financial institutions may struggle to assess risk accurately and make informed decisions, raising the prospect of financial instability and market disruption. Robust safeguards are needed to protect the integrity of the financial system; failing to put them in place could have far-reaching consequences for both the institutions and the broader economy.

As Canada moves toward stronger AI regulation with the proposed Artificial Intelligence and Data Act (AIDA), its southern neighbour appears to be taking the opposite approach. AIDA, part of Bill C-27, aims to establish a regulatory framework to improve AI transparency, accountability, and oversight in Canada, although some experts have argued it doesn’t go far enough.
Meanwhile, United States President Donald Trump is pushing for AI deregulation. In January, Trump signed an executive order aimed at eliminating any perceived regulatory barriers to “American AI innovation.” The executive order replaced former president Joe Biden’s prior executive order on AI. Notably, the U.S. was also one of two countries, along with the U.K., that didn’t sign a global declaration in February to ensure AI is “open, inclusive, transparent, ethical, safe, secure, and trustworthy.”
The Power of AI in Financial Markets
AI’s potential in financial markets is undeniable. It can improve operational efficiency, perform real-time risk assessments, generate higher income, and forecast economic shifts.
My research has found that AI-driven machine learning models outperform conventional approaches not only in identifying financial statement fraud, but also in detecting abnormalities quickly and effectively. In another study, my co-researcher and I found that AI models like artificial neural networks and classification and regression trees can predict financial distress with remarkable accuracy.
Artificial neural networks are brain-inspired algorithms that learn patterns from data, while classification and regression trees are decision-making models that split data into branches based on important features to identify outcomes. Our artificial neural network models predicted financial distress among Toronto Stock Exchange-listed companies with a staggering 98 per cent accuracy, suggesting AI’s immense potential in providing early warning signals that could help avert financial downturns before they start.
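To make the two model families concrete, here is a minimal sketch in Python using scikit-learn on synthetic data. The feature set, labels and model settings are illustrative assumptions only; this is not the dataset or methodology behind the studies described above.

```python
# Hypothetical sketch: train a neural network and a CART-style decision tree
# to flag "financially distressed" firms on synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier      # artificial neural network
from sklearn.tree import DecisionTreeClassifier        # classification and regression tree
from sklearn.metrics import accuracy_score

# Synthetic stand-in for firm-level financial ratios (e.g., leverage, liquidity),
# with a binary label: 1 = financially distressed, 0 = healthy.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

# Brain-inspired model: layers of weighted connections tuned during training.
ann = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
ann.fit(X_train, y_train)

# Tree model: recursively splits the data on the most informative features.
cart = DecisionTreeClassifier(max_depth=5, random_state=0)
cart.fit(X_train, y_train)

print("ANN accuracy: ", accuracy_score(y_test, ann.predict(X_test)))
print("CART accuracy:", accuracy_score(y_test, cart.predict(X_test)))
```

In practice, researchers would use real financial ratios, cross-validation and careful handling of class imbalance rather than a synthetic dataset.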
The Risks of Deregulation
Trump’s push for deregulation could result in Wall Street and other major financial institutions gaining significant power over AI-driven decision-making tools with little to no oversight. When profit-driven AI models operate without appropriate ethical boundaries, the consequences could be severe. Unchecked algorithms, especially in credit evaluation and trading, could worsen economic inequality and generate systemic financial risks that traditional regulatory frameworks cannot detect.
Algorithms trained on biased or incomplete data may reinforce discriminatory lending practices, denying loans to marginalized groups and widening wealth and inequality gaps. Furthermore, unregulated AI-driven risk models might overlook economic warning signals, resulting in substantial errors in monetary and fiscal policy.
A Blueprint for Financial Stability
My research underscores the importance of integrating machine learning methods within strong regulatory systems to improve financial oversight, fraud detection, and prevention. Durable, well-designed regulatory frameworks are required to turn AI from a potential disruptor into a stabilizing force.
By implementing policies that prioritize transparency and accountability, policymakers can maximize the advantages of AI while lowering its risks. A federal AI oversight body in the U.S. could serve as an arbitrator, much as Canada’s Digital Charter Implementation Act of 2022 proposes establishing an AI and Data Commissioner.
Mandating transparency through explainable AI standards, guidelines aimed at making AI systems’ outputs more understandable to humans, would require financial institutions to open the “black box” of AI-driven decision-making.
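As one illustration of what such a requirement could look like in practice, the sketch below uses permutation importance, a model-agnostic technique that measures how much each input feature drives a model’s predictions. The credit-scoring framing and feature names here are hypothetical, not drawn from any regulator’s standard.

```python
# Hypothetical sketch: rank which features a credit model relies on,
# a simple form of explainability for an otherwise opaque model.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

feature_names = ["income", "debt_ratio", "credit_history", "employment_years"]
X, y = make_classification(n_samples=1000, n_features=4, n_informative=3,
                           n_redundant=0, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

model = RandomForestClassifier(random_state=1).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=20,
                                random_state=1)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```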
Crisis Prevention or Catalyst?
Will AI be the key to foreseeing and preventing the next economic crisis, or will the lack of regulatory oversight trigger a financial disaster? As financial institutions continue to adopt AI-driven models, the absence of strong regulatory guardrails raises pressing concerns. Policymakers must act swiftly to regulate the growing influence of AI before deregulation paves the way for an economic disaster.
Without decisive action, the rapid adoption of AI in finance could outpace regulatory efforts, leaving economies vulnerable to unforeseen risks and potentially setting the stage for another global financial crisis.
Sana Ramzan does not work for, consult, own shares in, or receive funding from any company or organization that would benefit from this article and has disclosed no relevant affiliations beyond their academic appointment.