The Risks and Rewards of AI in Banking
Turn on the television. I bet you can find at least one movie about artificial intelligence’s potential impact on the world — both for good and for bad. The good includes everything from Disney’s WALL-E to Marvel superheroes. The bad ranges from I, Robot to Skynet in the Terminator movies. Even J.Lo battled and worked with AI in her most recent Netflix film.
While we’re a far cry from Skynet, we’re closer than ever to a reality where AI is a pervasive part of our everyday lives. You might have even used AI to help you find a movie about AI. Or, maybe you asked Google or Alexa to “play a movie about AI.”
Over the past decade, we’ve seen the world evolve from basic chatbots, to a niche group of early ChatGPT users, to widespread use of artificial intelligence across organizations in nearly every industry.
The retail industry leverages AI to deliver personalized ads and shopping recommendations to customers based on browsing and purchase histories. In healthcare, AI helps analyze medical images and detect conditions like cancer, heart disease, and more. In the automotive industry, AI is used to plot travel routes and now, with advances in autonomous vehicles, even drive the car for you. The list of examples goes on — agriculture, manufacturing, entertainment, utilities, and beyond.
Within the financial industry, the vast majority of financial services companies (91%) are either assessing AI or already using it in production, according to NVIDIA’s annual State of AI in Financial Services report.
Like other industries, financial services leaders are looking to AI to help them deliver new operational efficiencies, reduce costs, automate repetitive tasks, improve customer experiences, and drive new product and service innovations. And, many financial providers are already seeing results across use cases like fraud detection, customer service, and risk management.
However, financial services organizations face a two-fold challenge when it comes to leveraging business data and consumer-permissioned financial data.
First, intelligence locked away is intelligence wasted. A recent commissioned study from Forrester Consulting, which surveyed 150 financial services industry leaders and decision makers in the U.S. and Canada, states:
“...most financial institutions struggle considerably to utilize financial data today. In part, this stems from an inability — or unwillingness — to evolve their data programs. More than 60% of respondents say their organization largely still uses data the same way they always have. This creates challenges with delivering personalized experiences at scale (60% say this is an issue), wasting data (52% say they have substantial amounts of data they are not using), and using their consumer data to make better products, services, and experiences (38% wrestle with this).”
What’s more, nearly half (44%) of financial services data leaders say they have lost customers due to poor data usage. This wasted or misused data is a huge opportunity for AI to help companies more effectively use the data at their fingertips. But that brings us to the second challenge ...
Artificial intelligence relies on the data used to feed its system. If that data is inaccurate, confusing, or outdated, financial providers are left making decisions, running operations, or serving customers based on bad information. In the case of AI, intelligence poorly managed is intelligence made dumb.
Regardless of the use case, the biggest challenge — and opportunity — that the majority of AI models and financial services companies face is being able to tap into and leverage data in a way that delivers true value.
AI Risks: When Data is Dumb, Deficient, or Deceitful
Risk 1: Data Quality
AI is only as good as the data that fuels it. Poor data quality, incomplete data sets, and disjointed information can be costly for businesses. For instance, a recent survey from Vanson Bourne and Fivetran found that models trained on inaccurate, incomplete, and low-quality data caused misinformed business decisions that impacted an organization's global annual revenue by 6%, or $406 million on average.
So how many companies are struggling with this “bad data in, bad decisions out” problem? The vast majority (77%) of organizations have data quality issues, and 91% report this impacts their company’s performance, according to a 2022 survey of 500 data professionals from Great Expectations. And, a recent report from Informatica points to data quality as the top issue preventing companies from succeeding with generative AI initiatives.
Risk 2: AI Hallucinations and Misinformation
Earlier this year, the World Economic Forum stated that “false and misleading information supercharged with cutting-edge artificial intelligence … is the top immediate risk to the global economy.” According to NewsGuard, an organization that tracks misinformation, websites hosting AI-created false articles have increased by more than 1,000% from May 2023 to December 2023.
Whether AI is intentionally misled by malicious actors or simply making things up to satisfy human prompts and questions, the risk is real. Primary examples of misinformation and AI hallucinations (incorrect or misleading results that AI models generate) include everything from an Air Canada chatbot inventing a refund policy to nonexistent court cases fabricated by ChatGPT and cited in a U.S. District Court filing.
Bottom line? Verify what you read. In fact, when it comes to AI hallucinations, popular AI models like ChatGPT and Google’s PaLM make up facts anywhere from 3% to 27% of the time, depending on the tool. And, 89% of machine learning (ML) engineers who work with generative AI say their models show signs of hallucination.
Risk 3: Data-Generated Bias
While it’s clear AI can be easily misled by inaccurate information, it can also be led astray by the same stereotypes and inherent biases that humans struggle to combat. This can be due to the data used to train the AI model, the algorithms, or cognitive bias from the creators of an AI model. Gartner estimates that 85% of AI projects provide false results due to bias built into the data or the algorithms, or that exist in the professionals managing those deployments.
In addition, a team of researchers from the University of Southern California’s Information Sciences Institute found bias in nearly 39% of the data in two AI databases. One oft-cited example of bias appears in AI-generated images, with results skewed by race and gender when AI was asked to create images of people in specialized professions.
For financial services, bias becomes a real risk — so much so that the Consumer Financial Protection Bureau (CFPB) has expanded its definition of “unfair” acts and practices to include discriminatory conduct, even by AI. Bias in AI use cases for financial services could impact credit and loan decisions, financial advice and investments, customer service, and more. Last year, the Center for Financial Inclusion identified harmful gender bias within financial advice for women versus men within popular AI tools.
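One simple, widely used check for this kind of outcome bias is the adverse impact ratio: the approval rate for a protected group divided by the approval rate for a reference group, with values below 0.8 (the classic “four-fifths” rule of thumb from U.S. employment guidelines) often treated as a warning sign. This is an illustration, not a method named by the sources above, and the approval counts below are hypothetical:

```python
def adverse_impact_ratio(approved_a, total_a, approved_b, total_b):
    """Approval rate of group A (e.g., a protected class) divided by
    the approval rate of group B (a reference group)."""
    rate_a = approved_a / total_a
    rate_b = approved_b / total_b
    return rate_a / rate_b

# Hypothetical loan decisions from an AI credit model
ratio = adverse_impact_ratio(approved_a=60, total_a=100,
                             approved_b=90, total_b=100)
flagged = ratio < 0.8  # below the four-fifths threshold: review the model
```

Here the ratio is 0.6 / 0.9 ≈ 0.67, well under the 0.8 threshold, so this hypothetical model would warrant a fairness review before deployment.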
Risk 4: Data Privacy and Security
Privacy and security are already top concerns related to data — the use of AI shines an even bigger spotlight on the need for strong data privacy and security protocols and tools. The risks are two-fold:
- Protecting against AI-powered cyberattacks, threats, and fraud
- Educating on proper AI usage and establishing policies to protect against data breaches and leaks
When it comes to protecting against AI attacks, a survey of 1,800 security leaders and practitioners from Darktrace found 60% believe their organizations are inadequately prepared to defend against AI-powered threats and attacks. For financial services, American Banker reports just over half of financial institutions say they lost between $5 million and $25 million to AI-based threats in 2023.
The other primary risk is related to how companies and their employees leverage AI. According to a 2022 Gartner report, 40% of organizations experienced an AI-related privacy breach. Of those, only 1 in 4 was considered malicious. This means the majority of those breaches were likely caused by irresponsible internal practices. For example, Cisco’s 2024 Data Privacy Benchmark study found 48% of organizations are entering non-public information about their company into GenAI apps. Another 38% say they’ve put customer information into them as well.
Beyond putting restrictions and policies in place, companies — particularly those with access to personal information like a consumer’s finances — have work to do to educate employees and establish the right protocols to protect against inadvertent harm.
Risk 5: Mistrust and Reputation Risk
Could using AI hurt your reputation and damage customer trust? Maybe… but it depends on the company and situation. At a high level, a recent Harvard Business Review study found most people (57%) don’t trust AI. Additionally, a survey from KPMG and the University of Queensland found that 61% are wary about trusting AI systems.
While the majority don’t trust AI today, consumers are starting to trust AI to take the place of human interactions in certain cases. For example, PwC found that about half of consumers would trust GenAI to collate product information before they make a purchase (55%) or to provide product recommendations (50%). However, there is still room to grow in gaining trust with AI for more complex topics — only 23% trust GenAI to assist them with legal advice, and just 27% trust it to execute financial transactions.
As trust in AI grows, financial services providers have opportunities to leverage AI responsibly to augment activities like customer service, financial advice, and automated insights or notifications.
However, this means protecting against the other four risks outlined above to ensure any AI missteps or mishaps don’t damage consumer trust or have a negative impact on the company’s reputation. According to the 2024 Edelman Trust Barometer, when innovations like AI are mismanaged, trust and enthusiasm for these technologies wane. When AI is managed well, Edelman found that 38% of consumers would embrace it. On the flip side, when it is managed poorly, this drops by more than 10 points — just 26% of global respondents would embrace AI.
AI Rewards: When Data is Intelligent, Integrated, and Inclusive
Reward 1: Intelligence Amplified
Financial services providers and most businesses face an increasingly difficult challenge: how to capture, analyze, and take action on a growing mountain of data. A single financial transaction generates dozens of data points. All of these data points across the millions of transactions occurring each day can provide valuable insights into consumer behaviors, product adoption, and engagement. However, more than half (52%) of financial services providers admit they have substantial amounts of consumer financial data they are not using, according to a commissioned survey from Forrester Consulting.
Artificial intelligence can augment and enhance current data analysis processes and tools to enable smarter, faster insights based on large sets of data. It can also help uncover hidden patterns and previously unrealized opportunities that can drive significant value for organizations. McKinsey & Company notes that AI’s potential to create value for financial services organizations is one of the largest across industries, potentially unlocking $1 trillion of incremental value for banks annually.
By automating data analysis with AI tools, what would previously take an individual hours to do can be reduced to minutes or even seconds. In fact, IBM’s 2022 Global AI Adoption Index found 65% of businesses are using AI to reduce manual or repetitive tasks. And, this takes us to our next key benefit.
Reward 2: Operational Efficiencies
According to IBM, 54% of organizations are seeing cost savings and efficiencies from using AI in IT, business, or network processes. For the financial industry, McKinsey’s 2023 banking report estimates generative AI could enhance productivity in the banking sector by up to 5% and reduce global expenditures by up to $300 billion.
These operational efficiencies can take the form of everything from automating repetitive tasks and reducing manual errors to filling labor and skills shortages and enabling chatbots to handle less complex customer service inquiries. With the productivity gains possible with AI, financial providers can speed up innovation and transform how they create and maintain sustainable growth.
Reward 3: Financial Inclusion
While the risk of AI-generated bias in financial advice is real, AI can also break down barriers to financial access for unbanked and underbanked communities in a number of ways:
- Democratizing access to financial products and advice
- Empowering better money management with personalized insights and tools
- Expanding credit access to those with little or no credit history
“Technology drives inclusion in financial services. It facilitates widespread education, accessibility, and affordability that were previously reserved for a wealthy few, opening doors and breaking down barriers worldwide.” — World Economic Forum
Democratized Access: Not only can operational efficiencies gained from using AI help financial providers bring down the cost of delivering financial services, it also opens up new ways of delivering products and services to consumers that can meet them where they are. For instance, AI can make it easier for consumers with disabilities or language barriers to gain access to financial tools and advice.
Better Money Management: AI-powered insights embedded into the digital and mobile banking apps consumers already use can help them better understand and manage their money. In addition, AI-powered chatbots and virtual financial advisors can provide a free option for consumers who want advice on how to create a budget, manage an investment portfolio, and more. And, consumers are interested in using AI tools to help them manage their money. A Forbes Advisor survey found more than 40% of consumers would use artificial intelligence to get financial advice.
Expanded Credit Access: The U.S. Chamber of Commerce says that “more inclusive financial services cannot happen if institutions are not equipped to handle increased applications and subsequent decisions. One way to efficiently work with this additional data is to utilize artificial intelligence and machine learning.” For instance, PwC reports that lenders using advanced credit modeling techniques are “already seeing 15% to 30% increases in credit approvals with no change in loss rates. On the financial inclusion side, these approvals often include once overlooked borrowers who can now access credit from the formal banking system.”
Reward 4: Fraud Detection
A survey from Mastercard and Fintech Nexus reveals that “increased fraud detection” ranks as the primary driver (63%) behind AI investment among financial institutions. AI can improve fraud detection and prevention by analyzing patterns and anomalies in transaction data more quickly and efficiently. In fact, a study by IBM found that AI-powered fraud detection systems can reduce false positives by up to 70%, resulting in significant cost savings and improved customer experiences.
And, organizations are already seeing the results. J.P. Morgan has been using AI-powered large language models for payment validation screening for more than two years. The technology also speeds up processing in other ways by reducing false positives and enabling better queue management. The result has been lower levels of fraud and a better customer experience, with account validation rejection rates cut by 15% to 20%.
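As a toy illustration of the “patterns and anomalies” approach described above (not J.P. Morgan’s actual system), a fraud filter can flag transactions that deviate sharply from an account’s spending history. Production systems use far richer features and learned models, but a minimal statistical sketch looks like this, with hypothetical transaction amounts:

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Flag transactions whose amount sits more than `threshold`
    standard deviations from the account's historical mean."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return [False] * len(amounts)  # no variation, nothing stands out
    return [abs(a - mu) / sigma > threshold for a in amounts]

# Routine small purchases plus one outsized transfer
history = [12.50, 48.20, 9.99, 31.00, 22.75, 18.40, 5400.00]
flags = flag_anomalies(history)  # only the 5400.00 transfer is flagged
```

A real AI system replaces this single z-score rule with models trained on many signals (merchant, location, timing, device), which is what drives the reduction in false positives reported above.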
Reward 5: Personalization at Scale
Eighty percent of consumers say customer experiences should be better considering all the data companies collect, according to Salesforce’s State of the Connected Customer report. However, it’s difficult to truly understand customers and deliver personalization at scale when most financial providers only see a small picture of a consumer’s financial life.
Consumers have money in more places than ever before, averaging five to seven different financial accounts, and as a result, often don’t have a holistic view of their financial lives. AI can help financial providers deliver personalized experiences at scale by helping to automate and personalize support, as well as translate financial data points into actionable intelligence.
A Capgemini report on GenAI found that 67% of executives said that generative AI can improve customer service through automated and personalized support. In addition, Salesforce’s research found 68% of service professionals use AI to create and personalize service communications with customers.
Conclusion: It’s All about the Data
Across all risks and rewards that AI can bring to financial services, it starts with being able to utilize consumer-permissioned financial data and business intelligence to mitigate the risks and maximize the opportunities. As financial services companies embrace AI for a multitude of use cases, they must also become truly data-driven organizations.
“Consumer financial data programs are vital to continued success, but all too often, organizations are limited by their inability to turn data into actionable intelligence.” — Opportunity and Growth in Financial Services: Turning Data Insights into Actionable Intelligence