A Blog by Jonathan Low


Apr 10, 2024

Banks' Gen AI Lets Investors Interact With Data, Build Models, Test Scenarios

More investment firms and financial institutions are employing generative AI to help investors interact with vast amounts of data in real time in order to test assumptions and build scenarios which can influence venture and other investments. 

This should benefit not only startups and early-stage companies creating such models but also those in the business of determining which of these applications are most likely to succeed while generating excess returns for their investors. JL 

Matt Marshall reports in Venture Beat:

State Street, one of the US's largest banks and the biggest custody bank in the world, with roughly 15% of the world's assets in custody (acting as safekeeper for investors' investment assets), is bringing gen AI applications to market to empower investors to interact with their data more easily and efficiently. Using gen AI, clients can pull indicators into their models to build scenarios, and then “change assumptions, see them visualized in real time, and have it drive decisions.” Gen AI lets users “interact in a very natural way with large amounts of data, on the fly, and build scenarios… in a way that would take much more time in a traditional way.” State Street’s research tracks 1.8 million indicators daily, and its chatbot will let customers get information about things like the latency difference between indicators.

Forget having to sift through spreadsheets for hours. State Street, one of the nation’s largest banks, is bringing several generative AI applications to market, in a push to empower large investors to more easily and efficiently interact with their investment data.

Caroline Arnold, a State Street executive vice president, who drives some of the bank’s efforts around innovation and user experience, told the audience of VentureBeat’s AI Impact Tour event in Boston that the company is starting by adding a generative AI interface to its Alpha platform, which is the central data repository for State Street’s custody customers.

State Street is the biggest custody bank in the world, with about 12 to 15% of the world’s assets custodied there. In the financial world, “custody” means acting as a safekeeper for assets – a high-security vault for investors’ financial holdings, including stocks, bonds, cash and other investment assets. 

“What gen AI allows you to do is to interact in a very natural way with large amounts of data, on the fly, and build scenarios… in a way that would take you much more time in a traditional way,” Arnold said. Anyone who custodies at State Street will be able to analyze their data holdings much more quickly, she said. For example, they’ll be able to ask: “‘What’s my exposure to the Brazilian Real at this moment?’, and get an answer and begin to build on that,” she said.
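The article doesn’t describe how State Street wires this up, but the pattern Arnold sketches – a natural-language question mapped onto a structured query over custody holdings – can be illustrated with a minimal, entirely hypothetical Python sketch. The holdings table, the `exposure_by_currency` helper, and the hard-coded intent matching (standing in for an LLM parser) are all invented for illustration:

```python
# Hypothetical sketch only: State Street has not published its implementation.
# A gen-AI layer typically parses a natural-language question into a
# structured query; here the parsing is stubbed with simple keyword matching,
# and the "custody data" is a toy in-memory table.

HOLDINGS = [
    {"asset": "Petrobras equity", "currency": "BRL", "value_usd": 1_200_000},
    {"asset": "US Treasury note", "currency": "USD", "value_usd": 5_000_000},
    {"asset": "Vale bond",        "currency": "BRL", "value_usd": 800_000},
]

def exposure_by_currency(holdings, currency):
    """Sum the USD value of holdings denominated in the given currency."""
    return sum(h["value_usd"] for h in holdings if h["currency"] == currency)

def answer(question: str) -> str:
    # In a real system an LLM would map the question to (intent, parameters);
    # this sketch hard-codes a single intent for illustration.
    if "Brazilian Real" in question or "BRL" in question:
        total = exposure_by_currency(HOLDINGS, "BRL")
        return f"BRL exposure: ${total:,.0f}"
    return "This sketch only answers BRL exposure questions."

print(answer("What's my exposure to the Brazilian Real at this moment?"))
# prints "BRL exposure: $2,000,000"
```

The point of the pattern is that the LLM never touches the numbers itself: it only selects and parameterizes a deterministic query, so the figures that come back are as trustworthy as the underlying data.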


Arnold’s optimistic comments reflected similar stories about potentially game-changing technologies we’ve been hearing in other stops of our AI Impact Tour. In monthly stops around the country’s major cities, VentureBeat is inviting regionally based companies to share their journeys working with generative AI. In San Francisco, Wells Fargo’s CIO Chintan Mehta said that the bank is pushing forward aggressively with the technology. And in New York in February, executives at Citi and New York Presbyterian shared stories about the significant internal enthusiasm around generative AI. (VentureBeat takes its tour to Atlanta tomorrow, with a focus on the intersection of security and AI.)


Boston-based State Street, which ranks as the 11th or 12th largest bank in the U.S., is also building a generative AI-based chatbot that allows State Street’s broader group of customers to more easily interact with the bank’s research site, Arnold said. State Street’s research site tracks about 1.8 million indicators daily, she said, many of which are followed globally by investors, including inflation data watched closely by central banks. The chatbot will let customers ask questions like “What is State Street’s position on whether or not there’s going to be a recession?,” Arnold said, and get information about things like the latency difference between indicators. That might sound simple, Arnold said, but it isn’t, because the bot has to be able to track the sum total of all the analysis State Street has done on the economy.


Arnold did not say when these two new applications would be released publicly but said they are “well on their way,” and that the bank is working with regulators on legal and compliance precautions it is putting in place for them. State Street says it has 400 AI engineers working on a range of AI applications.

State Street’s initial tests of the research site chatbot found that “within a few weeks it was answering questions better than the helpdesk,” Arnold said. The bot, based on open-source large language models (LLMs), did occasionally offer up some “weird answers” and the bank had to fine-tune it, she said. For this reason, the bank’s structured data made a better initial data source for the bot; only later will the bank test it in areas where data is less curated.


Ultimately, using gen AI, State Street clients should be able to pull indicators into their models to build scenarios, and then “they should just be able to build those on the fly, change their assumptions, see those things rendered and visualized in real time, and have it drive decisions,” Arnold said.
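Stripped of the interface, the “change assumptions, see it re-rendered” workflow amounts to re-running a model whenever an input changes, instead of rebuilding a spreadsheet. A minimal sketch with invented numbers, using a textbook bond-pricing formula as the stand-in model:

```python
# Illustrative sketch only (model choice and numbers are invented):
# scenario analysis as "change one assumption, recompute instantly".

def bond_price(face, coupon_rate, yield_rate, years):
    """Present value of a plain-vanilla annual-coupon bond."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + yield_rate) ** t
                     for t in range(1, years + 1))
    pv_face = face / (1 + yield_rate) ** years
    return pv_coupons + pv_face

# Baseline, then a "rates up 100bp" what-if by changing one assumption.
base = bond_price(face=1000, coupon_rate=0.05, yield_rate=0.04, years=10)
shock = bond_price(face=1000, coupon_rate=0.05, yield_rate=0.05, years=10)
print(f"baseline {base:.2f}, +100bp scenario {shock:.2f}, "
      f"delta {shock - base:.2f}")
```

In Arnold’s framing, the gen AI layer sits on top of models like this one, translating “what if yields rise a point?” into the changed parameter and re-rendering the result.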

Finally, Arnold said, State Street is close to signing a contract with Microsoft to introduce that company’s Copilot product – based on OpenAI’s LLMs – to help guide employees and reduce friction in several other workplace and internal application areas. “If you need a conference room, if you need a day off, if you need the answer to a policy question, you should really just have a chatbot concierge that can understand what you mean and actually take an action for you,” she said. She said the company’s Office of Architecture is putting in place what’s needed to scale the use of the Microsoft product at the bank.

The data challenge: ensuring accuracy for AI

In other comments, Arnold addressed steps the bank was taking to ensure its data was prepared adequately for usage in generative AI applications. Accountability is an internal pillar for data integrity at State Street, Arnold said. This means the company’s business units need to “own” the data when building these applications because it’s those applications that often generate the company’s most valuable data, or what Arnold called the crown jewels. “I think that’s actually new for a lot of people in the business world. They think of their IT person, or their database administrator, as owning the data.”


She added: “No, the data creators own the data, they own the governance of the data, they own the quality of that data, and they own understanding how that data is used and consumed by others.”

She said data must be accurate, complete and timely. However, banks have been helped here by legislation passed in the wake of the 2008 financial crisis forcing more transparency and documentation of data origin and lineage. When it comes to generative AI, she said one area the bank is working on is the explainability of LLM outputs, which is still a challenge. “For us in the financial industry, we have to be able to explain decisions and how they’re made. And if we say, [here’s] an insight that you can bank your business on, we have to know where it came from.”

Despite these data challenges, Arnold said generative AI is creating greater expectations as people are getting a taste of its transformative potential. The biggest problem, she said, will be that the demand for generative AI “will be overwhelming.”


Organizationally, the answer is to democratize as much of these technologies as you can, while putting guardrails in place, Arnold said. But there’s a real tension here. She said there’s a tendency for the cadre of skilled people in the main architectural team to want to wrap every generative AI prompt to ensure security, but that can slow down innovation. She said the same tension was seen during the introduction of cloud technologies. It’s necessary to have a real discussion with the developers of applications at the edges of the organization and those who want to control generative AI about where the real risks are. “If you hold it too close, you won’t be able to compete. There is an arms race brewing out there. It’s similar to the early days of the internet when you either got to the internet or you didn’t, and if you didn’t you’re probably not here today.”


In remarks following Arnold’s, Microsoft corporate vice president Kathleen Mitford, who is responsible for marketing Azure to enterprise companies, agreed that the biggest danger is “moving too slow.” She reinforced a point made by Arnold that the new generation of technologies, including GPUs, cloud and Internet of Things (IoT), can allow you to compute huge amounts of data quickly in ways that weren’t expected before. “All these things make it a lot easier to move fast with generative AI,” Mitford said. “From idea to execution, I’ve never seen anything like it in my technology career.” (Disclosure: Microsoft was a sponsor of the event, which was held on March 27.)

She agreed with Arnold that companies need to move responsibly and keep humans in the loop, because of risks that generative AI models still carry. Both executives said it is better to see generative AI as an assistant than as something that will replace jobs. Arnold conceded that AI will change the job market, but she said she believes there is more to say yes to than no around the technology. She said it would help accelerate learning for many people. If you’re an auditor and have a co-pilot, for example, you get past having to “go hat in hand” to ask people for help or look through a manual. “You’re a better auditor, faster.”

Separately, David Clifford, head of data science and applied machine learning at Biogen, a Boston-based biotech company, spoke about the opportunities of generative AI in that sector, including ways to develop new drugs with AI. Here the challenges are greater because of the need for better industry-wide data benchmarks, and for data that is more representative and complete when it comes to how people respond to drugs. The company is taking a cautious approach, he said. Another big risk is around intellectual property, where an LLM might provide a recommendation that essentially regurgitates a patent. “That can get organizations in a lot of trouble,” he said. “Our organization is going to take a really hard look at anything that might highlight key risks.”

