A Blog by Jonathan Low

 

Oct 4, 2019

How AI Determines If You Land A Job, Get A Loan, Or Land In Jail

AI is being deployed with greater frequency, yet there is no legal obligation to notify consumers when they are being evaluated - for whatever reason - by an algorithm rather than a human.

There is growing concern that without any sort of transparency or possibility of appeal, these systems could perpetuate inaccuracies and inequities. JL 

Dalvin Brown reports in USA Today:

AI can match employees who have the ideal skill sets for a specific work environment with employers who may be too busy to have humans screen candidates. "Meaningful bits" of information include "how a person will work, how long they will stay, will they be a top sales performer or a high-quality worker." It is (also) touted as a faster and more accurate assessment of a potential borrower as it sifts through tons of data in seconds. AI is used in predictive analysis, in which a computer reveals how likely a person is to commit a crime. The technique has faced scrutiny over whether it improves safety or simply perpetuates inequities. 
Businesses across almost every industry deploy artificial intelligence to make jobs simpler for staff and tasks easier for consumers. 
Computer software teaches customer service agents how to be more compassionate, schools use machine learning to scan for weapons and mass shooters on campus, and doctors use AI to map the root cause of diseases.
Sectors such as cybersecurity, online entertainment and retail use the tech in combination with wide swaths of customer data in revolutionary ways to streamline services. 
Though these applications may seem harmless, perhaps even helpful, the AI is only as good as the information fed into it, which can have serious implications.
You might not realize it, but AI helps determine whether you qualify for a loan in some cases. There are products in the pipeline that could have police officers stopping you because software identified you as someone else.
Imagine if people on the street could take a photo of you, then a computer scanned a database to tell them everything about you, or if an airport's security camera flagged your face while a bad guy walked clean through TSA.
Those are real-world possibilities when the tech that’s supposed to bolster convenience has human bias baked into the framework.
"Artificial intelligence is a super powerful tool, and like any really powerful tool, it can be used to do a lot of things – some of which are good and some of which can be problematic," said Eric Sydell, executive vice president of innovation at Modern Hire, which develops AI-enabled software.
"In the early stages of any new technology like this, you see a lot of companies trying to figure out how to bring it into their business," Sydell said, "and some are doing it better than others."
Artificial intelligence tends to be a catch-all term to describe tasks performed by a computer that would usually require a human, such as speech recognition and decision making. 
Whether it's intentional or not, humans make judgments that can spill over into the code created for AI to follow. That means AI can contain implicit racial, gender and ideological biases, which has prompted an array of federal and state regulatory efforts.

Criminal justice 

In June, Rep. Don Beyer, D-Va., offered two amendments to a House appropriations bill that would prevent federal funds from covering facial recognition technology by law enforcement and require the National Science Foundation to report to Congress on the social impacts of AI.
"I don’t think we should ban all federal dollars from doing all AI. We just have to do it thoughtfully," Beyer told USA TODAY. He said computer learning and facial recognition software could enable police to falsely identify someone, prompting a cop to reach for a gun in extreme cases. 
"I think very soon we will ask to ban the use of facial recognition technology on body cams because of the real-time concerns," Beyer said. "When data is inaccurate, it could cause a situation to get out of control."
AI is used in predictive analysis, in which a computer reveals how likely a person is to commit a crime.  Though it's not quite to the extent of the "precrime" police units of the Tom Cruise sci-fi hit "Minority Report," the technique has faced scrutiny over whether it improves safety or simply perpetuates inequities. 
Americans have voiced mixed support for AI applications, and the majority (82%) agree that it should be regulated, according to a study this year from the Center for the Governance of AI and Oxford University’s Future of Humanity Institute.
When it comes to facial recognition specifically, Americans say law enforcement agencies will put the tech to good use. 

Jobs 

Numerous studies suggest that automation will destroy jobs for humans. For example, Oxford academics Carl Benedikt Frey and Michael Osborne estimated that 47% of American jobs are at high risk of automation by the mid-2030s. 
While some workers worry about being displaced by computers, others are being hired thanks to AI-enabled software.
The technology can match employees who have the ideal skill sets for a specific work environment with employers who may be too busy to have humans screen candidates.
Modern Hire uses data gathered from tests, audio interviews and resumes to predict how a person might behave on the job.
"Meaningful bits" of information include "how a person will work, how long they will stay, will they be a top sales performer or a high-quality worker," Sydell said.
Using AI, "we can get rid of processes that don't work well or are redundant. And we can give candidates a better experience by giving them real-time feedback throughout the process," Sydell said. 
He said if AI is deployed poorly, it can make the job environment worse, but if it's done thoughtfully, it can lead to fairer workplaces.

Finance 

For better or worse, artificial intelligence affects the financial decisions people make, and it has for years. It plays an increasingly significant role in how traders invest, and it's particularly effective at preventing credit card fraud, experts said.
Where things get questionable is when the tech is used to decide whether you're worthy of borrowing money from a bank.
"Whenever you apply for a loan, there may be AI to figure out if that loan should be given or not," said Kunal Verma, co-founder of AppZen, an AI platform for finance teams with clients including WeWork and Amazon.
The technology is often touted as a faster and more accurate assessment of a potential borrower because it can sift through tons of data in seconds. However, there's room for error.
If the information fed into an algorithm shows that you live in an area where a lot of people have defaulted on their loans, the system may determine you are not reliable, Verma said.
"It may also happen that the area may have a lot of people of certain minorities or other characteristics that could lead to a bias in the algorithm," Verma said.

Solutions to bias 

Bias can creep in at almost every stage of the deep-learning process; however, algorithms can also help reduce disparities caused by poor human judgment.
One type of solution involves altering sensitive attributes in a data set to offset the outcome. Another is prescreening data to maintain accuracy. Either way, the more data a company has, the fairer AI can be, Sydell said.
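As one hedged illustration of the first approach: a simple way to "alter" a sensitive attribute is to neutralize the column before training so the model cannot condition on it. The sketch below shows that idea under those assumptions; it is not Sydell's or Modern Hire's method, and the names are invented.

```python
# A sketch of one simple reading of "altering sensitive attributes":
# overwrite a sensitive (or proxy) column with its overall mean so every
# row carries the same value and the model cannot condition on it.
import numpy as np

def neutralize_column(X: np.ndarray, col: int) -> np.ndarray:
    """Return a copy of the feature matrix with one column flattened
    to its mean, removing that attribute's influence on training."""
    X_fair = X.copy()
    X_fair[:, col] = X_fair[:, col].mean()
    return X_fair

# Example: neutralize the ZIP-level default rate (column 1) from the
# earlier loan sketch before fitting the model.
# X_fair = neutralize_column(X, col=1)
# model = LogisticRegression().fit(X_fair, y)
```

Note that this is crude: other features correlated with the neutralized one can still leak the same information, which is why prescreening and auditing the data, the second approach Sydell names, matters as well.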
"There’s a reason why Google, Facebook and Amazon are leaders in AI," Sydell said. "It’s because they have tons of data to crunch. Other companies have access to the same type of AI technology, but they may not have massive amounts of data to use and apply it to. That’s the stumbling block." 
Beyer, the politician who wants to regulate AI, is in favor of having humans double-check decisions made by computers "until the technology is perfect, if it ever is." 
He said it may be worth questioning whether AI should be the go-to solution to every problem, including whether someone goes to jail. 
"And when it’s perfect, we have to start thinking about privacy. Like, is it reasonable to take a photo of someone and run that through a database?" Beyer said. "If AI can read an X-ray much more quickly, much more accurately and with less bias than a human, that’s terrific. If we give AI the ability to declare war, we’re in big trouble."
