
Banks Seek Ways To Manage Data Quality For AI Services


The use of AI in finance has been a hot topic at this year’s Sibos conference, as banks consider how AI can transform the way financial services are delivered and consumed and, more urgently, how the data that underpins it is managed.

The use of Large Language Models (LLMs) to increase efficiency, improve customer service, and enhance decision-making has been part of the conversation since the launch of ChatGPT by OpenAI in November 2022.  

While LLMs represent a significant advancement in the capabilities of AI, particularly in how machines understand and interact with human language, banks are responding cautiously owing to concerns around regulatory compliance, data privacy and security, model accuracy and reliability, and bias and fairness.

“Banks have been awash with data forever, but there is no prioritization, and tagging is incomplete or inconsistent,” states Andy Schmidt, vice president and global industry lead for banking at CGI. “To be able to simply train a large language model to find the data, you need to have enough confidence in the data that it is usable enough.”

“I think the important part that people need to sort out first is really getting the data sorted out. Setting out your data governance and making sure that the data is of a decent quality. Being able to de-dupe it and then figuring out where you need to enrich it,” he says.  
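The steps Schmidt describes can be illustrated with a minimal sketch. The example below is hypothetical: the column names, quality rules, and reference data are assumptions for illustration, not anything described by CGI or the banks at Sibos. It uses pandas to de-duplicate customer records, flag rows whose tagging is incomplete, and enrich missing tags from a reference source.

```python
# Hypothetical data-quality pass ahead of any AI use: de-duplicate records,
# flag incomplete tagging, enrich from reference data. Illustrative only.
import pandas as pd

records = pd.DataFrame([
    {"customer_id": "C001", "name": "Acme Ltd",  "segment": "corporate", "country": "GB"},
    {"customer_id": "C001", "name": "Acme Ltd",  "segment": "corporate", "country": "GB"},  # duplicate row
    {"customer_id": "C002", "name": "Beta GmbH", "segment": None,        "country": "DE"},  # missing tag
])

# 1. De-duplicate on the business key.
deduped = records.drop_duplicates(subset="customer_id", keep="first").copy()

# 2. Flag rows whose tagging is incomplete or inconsistent.
required_tags = ["segment", "country"]
deduped["needs_enrichment"] = deduped[required_tags].isna().any(axis=1)

# 3. Enrich missing tags from a (hypothetical) reference source.
reference_segments = {"C002": "corporate"}
deduped["segment"] = deduped["segment"].fillna(deduped["customer_id"].map(reference_segments))

print(deduped)
```

In practice the de-duplication keys, required tags, and enrichment sources would come from the bank's own data-governance framework; the point of the sketch is simply that these checks happen before any model sees the data.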

Standard Chartered is delivering AI-driven solutions, and Margaret Harwood-Jones, global head of financing and securities services, says the bank has been working hard to solve data-management challenges. “You get so many instruction requests that come in in a very unstructured format, so we are using AI to turn those into structured data formats that we can then process efficiently.”
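Standard Chartered has not published how its system works, but the general pattern of turning free-text instructions into structured records can be sketched as below. The model name, prompt, and field schema are assumptions for illustration, using the OpenAI Python client as a stand-in LLM.

```python
# Illustrative sketch only: extract structured fields from a free-text
# settlement instruction with an LLM. The model, prompt, and schema are
# assumptions; this does not describe Standard Chartered's actual system.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

raw_instruction = (
    "Pls settle 500 shrs ABC Corp vs payment USD 25,000 value 12 Nov, "
    "deliver to custodian XYZ, a/c 998877."
)

prompt = (
    "Extract the settlement instruction below into JSON with the keys "
    "security, quantity, currency, amount, value_date, custodian, account. "
    f"Instruction: {raw_instruction}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
    response_format={"type": "json_object"},  # request machine-readable output
)

structured = json.loads(response.choices[0].message.content)
print(structured)  # e.g. {"security": "ABC Corp", "quantity": 500, ...}
```

The structured output can then be validated against the bank's own reference data before it is passed into downstream processing.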

At a Women in Tech Sibos event hosted by EY, panelists argued that the only way to avoid bias in AI is to build LLMs that represent everyone from the outset, not just white males, and that doing so requires employing a more diverse range of staff.

IBM believes that organizations need to proactively detect and mitigate risks, monitoring for fairness, bias, and drift. Updates to Granite Guardian 3.0, the guardrail models in IBM’s Granite family, allow developers to implement safety guardrails by checking user prompts and LLM responses for risks such as social bias, hate, toxicity, profanity, violence, and jailbreaking, with the checks applicable to 10 of the largest LLMs.
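The guardrail pattern itself is straightforward, even though the sketch below does not reproduce Granite Guardian’s actual API: screen the prompt before it reaches the model, then screen the model’s answer before it reaches the user. The `score_risks` function here is a hypothetical, keyword-based placeholder for whatever risk-detection model or moderation service a bank actually deploys.

```python
# Generic guardrail wrapper, not IBM's Granite Guardian API: block unsafe
# prompts before generation and withhold unsafe responses afterwards.
from typing import Callable, Dict

RISK_DIMENSIONS = ["social_bias", "hate", "toxicity", "profanity", "violence", "jailbreak"]
THRESHOLD = 0.5  # illustrative cut-off

def score_risks(text: str) -> Dict[str, float]:
    """Placeholder scorer: a real deployment would call a guardrail model here."""
    flagged = ["ignore previous instructions", "hate", "kill"]  # toy keyword list
    hit = float(any(kw in text.lower() for kw in flagged))
    return {dim: hit for dim in RISK_DIMENSIONS}

def guarded_completion(prompt: str, generate: Callable[[str], str]) -> str:
    """Run `generate(prompt)` only if both the prompt and the response pass the checks."""
    prompt_risks = score_risks(prompt)
    if any(prompt_risks[d] > THRESHOLD for d in RISK_DIMENSIONS):
        return "Request blocked: prompt failed safety checks."

    response = generate(prompt)

    response_risks = score_risks(response)
    if any(response_risks[d] > THRESHOLD for d in RISK_DIMENSIONS):
        return "Response withheld: output failed safety checks."

    return response
```

Wrapping every model call this way keeps the safety logic in one place, so risk dimensions and thresholds can be tightened without touching the applications that consume the LLM.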

Given the potential risks and ethical implications, banks need to take responsible AI seriously, and that starts with a stringent approach to their data.
