U.S. Federal Reserve Urges Careful Management of AI Risks in Finance

Vice Chair Michael Barr calls for responsible adoption, clear governance and industry-wide collaboration

Speaking at the Singapore Fintech Festival, Federal Reserve Vice Chair for Supervision Michael S. Barr outlined why AI could reshape the financial sector, and why regulators and firms must manage its risks with care. He described AI as “transformational,” but warned that its rapid spread across financial services demands strong governance, clear oversight and an industry-wide commitment to responsible use. 


Balancing Opportunity with Uncertainty 


Barr noted that AI is developing faster than many expected. Generative AI tools, first popularised in 2022, are now used by “three in four large companies.” Yet, he said, the economic impact of AI remains hard to predict. It may simply support existing work, or it may “transform the nature of work and leisure” in ways that reshape entire industries. 


For now, the picture is mixed. AI skills already appear in “one in 10” job postings across financial services. But fast adoption could create short-term disruption, especially if firms cannot adapt their processes or if workers lack the necessary skills. 


AI in Financial Services: Progress and Pressure 


Barr said financial institutions are moving quickly to integrate AI, especially for tasks like document review, fraud detection and customer support. These early use cases can streamline operations and support productivity. But as firms begin to use AI in core areas, such as credit decisions, risk modelling and trading, the stakes rise. 


He stressed that AI-driven systems must meet the same legal and supervisory standards as any other critical process. Decisions must be “well controlled, numerically and legally precise, explainable and replicable.” Today, he noted, developers “still struggle” to consistently meet all these conditions. 


Barr also highlighted several risk areas: 


  • Consumer fairness: AI models may unintentionally reinforce bias in lending 
  • Market integrity: Algorithmic trading powered by AI could lead to tacit collusion, manipulation or sudden spikes in volatility 
  • Operational resilience: Firms need governance that can keep pace with new models and rapid deployment cycles 


He said that innovation must be “responsive to these risks” for AI to become a sustainable part of financial operations. 


Strengthening AI Governance in Central Banking 


Barr also discussed the Federal Reserve’s approach to adopting AI internally. He said central banks must understand how the technology works, not only to regulate it, but also to use it responsibly. 


The Fed has launched an organisation-wide AI programme and governance framework. It follows a “learning-by-doing” model while keeping clear guardrails in place. Early uses include writing support, research, software development and technology modernisation. 


One focus is updating legacy systems. Barr said the Fed is using AI tools to translate old code, generate tests and accelerate cloud migration. These efforts have already led to “faster delivery, improved quality, and enhanced developer experience.” 


The Fed is also studying how AI can support financial-stability analysis, supervisory models and payment-system resilience. 


Keeping Pace with a Fast-Moving Technology 


Barr closed by stressing that central banks must stay closely engaged as AI reshapes economic activity. “Given AI’s current and prospective role, we are devoting the necessary resources to understanding it,” he said. AI will continue to influence markets, labour and financial stability, and regulators must be prepared for that future. 


“AI has the potential to fundamentally change the economy and society,” he said. “And as central bankers, we need to keep up.” 


Learn How to Use AI to Make Compliance Smarter and More Efficient 


Get in touch with our team to explore how CUBE can help your organisation integrate AI-driven compliance and risk solutions combined with expert human oversight.