A Blog by Jonathan Low


Nov 8, 2019

How AI Is Being Trained To Detect Fraudulent Stock Trading

It's very good in situations where human experience suggests there may be issues that are difficult to identify. JL

Jonathan Vanian reports in Fortune:

Nasdaq introduced a tool that uses deep learning to flag suspicious trades. The technology makes it easier to identify fraud amid the billions of trades that investors make annually. Nasdaq has put the new software to work alongside its more conventional systems for spotting stock market crimes like insider trading. One risk with using A.I. for spotting scams is a flood of false positives—overwhelming the staff who review flagged trades. What sets deep learning apart is that it’s “good at finding things that are very hard to describe.”
An investor submits a mountain of online orders to sell shares in a grocery chain. It may look like another routine trade, but it’s a scam.
What he’s actually done is spook others into selling their stock in the company, causing its value to tumble. After canceling his own sales, the scammer quickly swoops in to buy the thousands of shares flooding the market at a bargain price.
This illegal scheme, known as spoofing, is well known. But it’s difficult to police because shady traders can use algorithms to place and swiftly cancel orders, over and over again.
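The pattern described above—a burst of orders on one side of the book, mass cancellation, then an execution on the opposite side—can be sketched as a simple rule-based screen of the kind exchanges used before turning to deep learning. The field names and thresholds below are illustrative assumptions, not Nasdaq's actual criteria.

```python
from collections import defaultdict

# Illustrative thresholds -- real surveillance systems calibrate these per market.
CANCEL_RATIO_THRESHOLD = 0.95   # share of a trader's new orders that were cancelled
MIN_ORDERS = 100                # ignore traders with little activity

def flag_possible_spoofing(events):
    """Flag traders whose order flow matches a crude spoofing pattern.

    `events` is an iterable of dicts with hypothetical fields:
    trader, side ('buy'/'sell'), action ('new'/'cancel'/'trade').
    """
    stats = defaultdict(lambda: {"new": 0, "cancel": 0, "opposite_trades": 0, "spoof_side": None})
    for e in events:
        s = stats[e["trader"]]
        if e["action"] == "new":
            s["new"] += 1
            s["spoof_side"] = e["side"]          # remember the heavily quoted side
        elif e["action"] == "cancel":
            s["cancel"] += 1
        elif e["action"] == "trade" and s["spoof_side"] and e["side"] != s["spoof_side"]:
            s["opposite_trades"] += 1            # execution on the other side of the book

    flagged = []
    for trader, s in stats.items():
        if s["new"] >= MIN_ORDERS:
            cancel_ratio = s["cancel"] / s["new"]
            if cancel_ratio >= CANCEL_RATIO_THRESHOLD and s["opposite_trades"] > 0:
                flagged.append((trader, cancel_ratio))
    return flagged
```

Rules like this are easy for algorithmic spoofers to evade by varying their behavior, which is part of why exchanges are experimenting with learned models instead.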
In October, stock exchange Nasdaq introduced a tool that uses deep learning, a subset of artificial intelligence, to flag such suspicious trades. The technology is supposed to make it easier to identify fraud amid the billions of trades that investors make annually.
Nasdaq has put the new software to work alongside its more conventional systems for spotting stock market crimes like insider trading. What sets deep learning apart, says Nasdaq’s machine-learning chief Michael O’Rourke, is that it’s “good at finding things that are very hard to describe.”
To train its deep learning system, Nasdaq fed it order and trade data from its exchange as well as nonpublic information. After a year of testing, the team that created the technology judged it reliable enough to use more broadly.
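The article does not describe the model itself, but one common way to apply deep learning to this kind of problem is to treat each trader's recent order events as a sequence and train a recurrent classifier on labelled examples. The PyTorch sketch below is a minimal illustration under that assumption; the feature count, architecture, and synthetic tensors are all placeholders.

```python
import torch
import torch.nn as nn

class OrderFlowClassifier(nn.Module):
    """Minimal sequence classifier: order-event features -> manipulation score."""

    def __init__(self, n_features: int = 8, hidden: int = 64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)        # one logit: suspicious vs. benign

    def forward(self, x):                       # x: (batch, seq_len, n_features)
        _, (h, _) = self.encoder(x)
        return self.head(h[-1])                 # one logit per sequence

# Training sketch on synthetic tensors standing in for labelled order flow.
model = OrderFlowClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

x = torch.randn(32, 200, 8)                    # 32 traders, 200 events, 8 features each
y = torch.randint(0, 2, (32, 1)).float()       # labels, e.g. from past enforcement cases

for _ in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```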
One potential risk with using A.I. for spotting scams is a flood of false positives—overwhelming the staff who review flagged trades. “You can’t miss, but you don’t want to overshoot,” says O’Rourke. However, it’s too soon to calculate a meaningful error rate, he adds, and he thinks a limited number of false positives is acceptable.
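O'Rourke's "can't miss, but don't overshoot" tension is the familiar trade-off between missed cases and false alarms, controlled in practice by where the alert threshold is set. A hypothetical illustration: sweep the threshold over model scores and count how many benign trades the review team would have to clear at each setting.

```python
def alert_tradeoff(scores, labels, thresholds=(0.5, 0.7, 0.9)):
    """For each threshold, count caught manipulations vs. false alarms.

    `scores` are model outputs in [0, 1]; `labels` are 1 for known manipulation.
    Both are illustrative -- a real evaluation needs adjudicated cases.
    """
    for t in thresholds:
        alerts = [(s, y) for s, y in zip(scores, labels) if s >= t]
        caught = sum(y for _, y in alerts)
        false_alarms = len(alerts) - caught
        missed = sum(labels) - caught
        print(f"threshold {t:.1f}: {caught} caught, {missed} missed, "
              f"{false_alarms} false alarms for reviewers")

# Toy example: 6 scored sequences, 2 of them true manipulations.
alert_tradeoff(
    scores=[0.95, 0.85, 0.75, 0.65, 0.55, 0.40],
    labels=[1, 0, 1, 0, 0, 0],
)
```

Raising the threshold reduces the reviewers' workload but eventually starts missing real cases, which is exactly the balance O'Rourke describes.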
In the end, it’s up to federal regulators to pursue wrongdoers—a challenge considering the Securities and Exchange Commission has limited resources. Still, Nasdaq’s team says the new software will help by providing the agency with better and timelier information.
Nasdaq isn’t alone in using machine learning to identify stock fraud. The University of Michigan, for example, is also developing A.I.-powered statistical techniques to counter fraudsters. The problem, however, is that it lacks access to the market data that is so critical to training its system.
“There are some daunting obstacles to building algorithms to detect manipulation when you don’t have a realistic toy market to play with and when you don’t have access to real-time data,” says Gabriel Rauterberg, an assistant law professor at the university. (Nasdaq rival New York Stock Exchange may also be well positioned to develop deep learning for detecting fraud, but it declined to comment to Fortune.)
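Rauterberg's point about lacking "a realistic toy market to play with" can be made concrete: without access to real order flow, researchers often fall back on simulators that generate synthetic activity and inject labelled manipulation episodes. The generator below is one such assumption-laden toy, not the University of Michigan's actual approach; its output matches the hypothetical event format used in the earlier screening sketch.

```python
import random

def simulate_order_flow(n_traders=50, n_events=5000, spoofer_share=0.05, seed=0):
    """Generate a synthetic event stream with a few labelled spoofing traders.

    Purely illustrative: real markets have richer dynamics (prices, depth, latency).
    """
    rng = random.Random(seed)
    traders = [f"T{i}" for i in range(n_traders)]
    spoofers = set(rng.sample(traders, max(1, int(n_traders * spoofer_share))))
    events = []
    labels = {t: (t in spoofers) for t in traders}

    for _ in range(n_events):
        t = rng.choice(traders)
        if t in spoofers and rng.random() < 0.6:
            # Spoofing burst: many sell orders, cancel them all, buy on the other side.
            events += [{"trader": t, "side": "sell", "action": "new"} for _ in range(20)]
            events += [{"trader": t, "side": "sell", "action": "cancel"} for _ in range(20)]
            events.append({"trader": t, "side": "buy", "action": "trade"})
        else:
            # Ordinary activity: mostly orders that trade, occasionally cancelled.
            side = rng.choice(["buy", "sell"])
            action = rng.choices(["new", "trade", "cancel"], weights=[5, 4, 1])[0]
            events.append({"trader": t, "side": side, "action": action})
    return events, labels

events, labels = simulate_order_flow()
```

Synthetic data like this can be used to prototype detectors, but as Rauterberg notes, nothing substitutes for the real-time market data that exchanges hold.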
Eventually, Nasdaq says, it plans to sell versions of its new technology to other exchanges, something it already does with other software it has developed. 
