Governance in artificial intelligence: It’s about trust.
Author: David Hook, Director - Financial Services, Australia & New Zealand
“It’s morning in the age of AI,” opens IBM’s 2024 AI in Action report. In a survey of over 2,000 organisations, two-thirds of leaders report that AI has already driven revenue growth of more than 25%. Seventy-two percent say their C-suite and IT leadership are aligned on how to achieve AI maturity. And 85% of organisations – amongst which financial services is one of the best-represented industries – are following a set roadmap rather than taking a more opportunistic approach.
But that doesn’t mean their customers are on board. According to KPMG, roughly 3 in every 5 people are wary of trusting AI at all. Commercial organisations are amongst those least trusted to develop, use, and regulate it. And only 1 in 2 people believe the benefits outweigh the risks.
Risk, governance, and ethics may not capture the imagination the way gen AI’s potential does, but they are the clearest and most direct path to building trust amongst consumers. So before gen AI flings us into a new tomorrow, building that foundation of trust will be crucial to ensuring the industry – and its stakeholders – arrive there safely.
Governance is the foundation of trust in AI
Right now, we’re generally pretty happy to throw a few ideas at ChatGPT to see what it can come up with. But how would you feel about giving it your credit card number? Would you let AI make a financial decision that could impact you for decades, and still trust it’s coming from a place of informed and moderated intelligence?
Financial services and technology providers are no strangers to distrust – our highly regulated environment attests to that. But good, internally driven governance is critical because while humans are fallible and granted sympathy for their mistakes, machines are held to a higher standard of precision. And how do we meet that standard? Through explainable reasoning; through smaller, more interpretable models trained on specific, owned data; through adherence to regulatory principles; and through a commitment to ethical behaviour in AI design and deployment.
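To make that concrete – and this is a minimal sketch with hypothetical feature names and synthetic data, not IBM’s framework – here is what a “smaller, more interpretable model” can look like in practice: a simple credit-decision scorer whose coefficients double as the explanation for every decision it makes.

```python
# Minimal sketch (hypothetical features, synthetic data): an interpretable
# credit model whose coefficients serve as the explanation for each decision.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

FEATURES = ["income", "debt_ratio", "years_at_address", "missed_payments"]

rng = np.random.default_rng(42)
X = rng.normal(size=(500, len(FEATURES)))                 # stand-in for owned data
y = (X[:, 0] - 2 * X[:, 3] + rng.normal(size=500)) > 0    # synthetic outcomes

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def explain(applicant):
    """Return each feature's signed contribution to the decision's
    log-odds, so every outcome ships with a readable rationale."""
    z = scaler.transform(applicant.reshape(1, -1))[0]
    contributions = model.coef_[0] * z
    return sorted(zip(FEATURES, contributions), key=lambda t: -abs(t[1]))

for name, weight in explain(X[0]):
    print(f"{name:>18}: {weight:+.3f}")
```

The point of a model this small is that the explanation is the model: there is no post-hoc guesswork about why an applicant was declined.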
Managing risk without compromising ethics
Consider the use of AI for critical decisions like credit scoring, fraud detection, or investment strategy: what happens when the system makes a mistake? In an industry where a misplaced decimal point can result in catastrophic losses, are you fully prepared to manage those risks and ensure regulatory compliance? And that’s before counting the reputational cost of bad decision-making (which can be just as damaging to the bottom line).
The challenge gen AI presents is that these models learn and change over time. So it’s critical to get your foundations right from the start to lessen the chance of the AI drifting away from them – and to keep measuring that drift, as the sketch below shows.
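One common way financial institutions track that drift is the population stability index (PSI), which compares a model’s live score distribution against the one it produced at deployment. A hedged sketch follows – the scores are synthetic and the 0.1 / 0.25 thresholds are industry rules of thumb, not a regulatory standard.

```python
# Sketch of drift monitoring via the population stability index (PSI).
# Synthetic data; 0.1 / 0.25 thresholds are rules of thumb, not a standard.
import numpy as np

def psi(expected, actual, bins=10):
    """Compare the live ('actual') distribution of a score or feature
    against the distribution it had at deployment ('expected')."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e = np.histogram(np.clip(expected, edges[0], edges[-1]), edges)[0] / len(expected) + 1e-6
    a = np.histogram(np.clip(actual, edges[0], edges[-1]), edges)[0] / len(actual) + 1e-6
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # scores at deployment
live = rng.normal(0.3, 1.2, 10_000)      # scores six months later

score = psi(baseline, live)
status = "stable" if score < 0.1 else "investigate" if score < 0.25 else "act"
print(f"PSI = {score:.3f} -> {status}")
```

Anything above the “investigate” line is a prompt to recalibrate or retrain before the system strays further from the foundations you set.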
The Federal Government recently published its Voluntary AI Safety Standard because it wants what we all want: a safe and secure banking system. The golden rule is simple: don’t put the system at risk. It’s a wise rule to live by.
Right now, the biggest boom in gen AI within financial services – and its greatest potential – is happening behind the scenes. Which is to say, institutions can deploy it far more confidently in the back office than in front of customers. Gen AI might not take customer calls on your behalf (at least, not yet), but it can equip your service staff with real-time information and tools to deliver excellent service faster. The amount of analytical, compliance, and regulatory knowledge required of staff is a heavy burden on institutions, and gen AI can ease it by surfacing the right answers at the right time.
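As a hedged illustration of what “the right answers at the right time” might look like – the policy passages, query, and retrieval approach below are hypothetical stand-ins, not a production design – a simple retrieval step can surface the most relevant internal guidance before any generative model drafts a reply.

```python
# Illustrative sketch: retrieve the most relevant internal policy passages
# for a staff query before any generative model drafts an answer.
# The documents and query below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

POLICY_SNIPPETS = [
    "Identity must be re-verified before lifting a daily transfer limit.",
    "Hardship applications are acknowledged within two business days.",
    "Card disputes over $500 must be referred to the fraud team.",
]

vectoriser = TfidfVectorizer().fit(POLICY_SNIPPETS)
doc_vectors = vectoriser.transform(POLICY_SNIPPETS)

def top_passages(query, k=2):
    """Rank stored policy passages by similarity to the staff query."""
    scores = cosine_similarity(vectoriser.transform([query]), doc_vectors)[0]
    return [POLICY_SNIPPETS[i] for i in scores.argsort()[::-1][:k]]

for passage in top_passages("customer wants their transfer limit raised"):
    print("-", passage)
```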
But the answer has to be trusted.
It’s why IBM spent 40 years developing an AI governance framework that includes not only a global ethics board and charter, but intentionally smaller models whose decisions can be explained and trusted. And it’s why we talk so insistently about putting the right governance structures in place before you dip your toe into the waters of AI.
Because without a strong governance framework in place, that trust will never be fully realised, and the future will remain a distant matter of science fiction.