A Blog by Jonathan Low

 

Nov 29, 2019

Your Next Car Will Be Watching You As Much As It's Watching the Road

Observing what is going on inside the relatively compact interior of a car is much less complex than managing what is going on outside. And the data generated is just as resaleable, if not more so. JL

Ben Dickson reports in Gizmodo:

“Being ‘aware’ of the environment inside the car is a closer proposition than completely autonomous self-driving cars—the reason being the lower risk of the operation.” In-car cameras powered with computer vision algorithms can perform complex tasks such as analyzing the state of drivers and passengers and detecting their interactions with different objects. “This enables car manufacturers and ridesharing companies to adapt to complex human states, with the goal of improving road safety and more personalized transportation experiences.”
When you think of artificial intelligence and cars, the first things that likely come to mind are the ambitious self-driving vehicle projects of tech giants like Google, Uber, and probably Apple. Most of these companies are leveraging AI to create cars that can understand their environments and navigate roads under different conditions, and hopefully, make driving safer—eventually. Some day. Probably.
What’s received less attention is the use of AI inside cars. Thanks to advances in deep learning, it has become possible to develop technologies that can determine what is happening inside vehicles and make the ride safer and more pleasant—all while creating new privacy and security risks.
For better or worse, many applications of in-car AI are right around the corner. In the near future, you can expect cars to help detect distracted drivers, recognize their rightful owners, and improve the ride by tuning the car’s environment to the preferences of its passengers. But as we know all too well, technological advancements never come without impactful tradeoffs.

Safety first

“Being ‘aware’ of the environment inside the car is a closer proposition than completely autonomous self-driving cars—the reason being the lower risk of the operation,” says Emrah Gultekin, CEO of Chooch, a computer vision company.
Currently, what we mostly have is narrow AI, algorithms that can perform limited tasks very well but are not very good at dealing with open environments. AI that can understand and deal with the uncertainties of open roads might still be years away. But inside the car is a much more limited space, which makes it suitable for narrow AI.
“Using AI to understand what’s happening with people in vehicles is relevant not only for autonomous vehicles of the future but also for cars on the road today,” says Rana el Kaliouby, CEO and co-founder of Affectiva, a company that uses AI to measure human emotions.


In-car cameras powered with computer vision algorithms can perform complex tasks such as analyzing the state of drivers and passengers and detecting their interactions with different objects. “This enables car manufacturers, fleet operators, and ridesharing companies to build next-generation mobility that adapts to complex human states, with the goal of improving road safety and delivering better, more personalized transportation experiences,” el Kaliouby says.
Affectiva says it has developed an AI system that can detect various expressions and emotions in human faces. Earlier this year, the company raised $26 million to apply its object- and emotion-detection technology inside cars. The company expects its technology to enter production in the next two to three years. Here’s how it’s supposed to work: A camera installed near the steering wheel monitors the driver’s behavior. Affectiva’s AI measures the frequency and length of eye blinks to determine whether a driver is drifting into drowsiness; if so, it signals a warning and recommends playing music, changing the temperature, or pulling over.
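Affectiva hasn’t published the internals of its model, but a common open approach to blink-based drowsiness detection uses the eye aspect ratio (EAR) computed from facial landmarks. Below is a minimal Python sketch, assuming an upstream landmark detector (such as dlib’s 68-point model) supplies six points per eye each frame; every threshold here is illustrative rather than calibrated:

```python
import time
from collections import deque

import numpy as np

def eye_aspect_ratio(eye):
    """EAR: ratio of vertical to horizontal eye-landmark distances.
    `eye` is a (6, 2) array of landmark coordinates; the ratio
    collapses toward zero while the eye is closed."""
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

class DrowsinessMonitor:
    """Flags drowsiness when blinks grow long or frequent over a
    rolling window. Thresholds are illustrative guesses."""

    EAR_CLOSED = 0.21       # below this, the eye counts as closed
    LONG_BLINK_SECS = 0.5   # a closure this long suggests a microsleep
    WINDOW_SECS = 60.0

    def __init__(self):
        self.blinks = deque()       # (timestamp, duration) of recent blinks
        self._closed_since = None

    def update(self, ear, now=None):
        """Feed one frame's EAR; returns True when the driver looks drowsy."""
        now = time.monotonic() if now is None else now
        if ear < self.EAR_CLOSED:
            if self._closed_since is None:
                self._closed_since = now
            return False
        if self._closed_since is not None:
            self.blinks.append((now, now - self._closed_since))
            self._closed_since = None
        # keep only the last minute of blinks
        while self.blinks and now - self.blinks[0][0] > self.WINDOW_SECS:
            self.blinks.popleft()
        long_blinks = sum(1 for _, d in self.blinks if d > self.LONG_BLINK_SECS)
        return long_blinks >= 3 or len(self.blinks) > 25
```

A production system would feed this drowsy signal into exactly the responses described above: a warning chime, a music or temperature suggestion, or a pull-over prompt.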
The AI is also being developed to detect distractions, such as when drivers are texting, eating, talking on the phone, or turning their heads to talk to passengers. This capability can tie in with other road safety technologies such as automatic lane control.
AI could also soon ensure that only the people who are supposed to get inside a car actually can. “The ability of a car to detect known drivers is an important near-future safety feature of vehicles. Matching faces with identity cards within a vehicle is the key to this,” Gultekin says.
Chooch is developing a facial recognition system to detect the rightful owners of cars. When someone is renting a car, they hold up their passport and show their face to the car’s camera. The car’s built-in AI then uses facial recognition to identify them and make sure the right person is sitting behind the steering wheel.
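Chooch’s pipeline is proprietary, but the core matching step, comparing a face embedding from the ID document with one from the in-cabin camera, can be sketched with the open-source face_recognition library (0.6 is that library’s conventional distance tolerance):

```python
import face_recognition  # open-source wrapper around dlib's face embeddings

def verify_driver(id_photo_path, cabin_frame_path, tolerance=0.6):
    """Compare the face on an ID document with the face behind the wheel.
    Returns True when the closest embedding distance is under `tolerance`."""
    id_image = face_recognition.load_image_file(id_photo_path)
    cabin_image = face_recognition.load_image_file(cabin_frame_path)

    id_encodings = face_recognition.face_encodings(id_image)
    cabin_encodings = face_recognition.face_encodings(cabin_image)
    if not id_encodings or not cabin_encodings:
        return False  # no face found in one of the images

    distances = face_recognition.face_distance(id_encodings, cabin_encodings[0])
    return float(min(distances)) <= tolerance
```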
Gultekin further says that by scanning the passengers of the car and their activities, AI algorithms could regulate their environment. “This applies to everything from auto-adjusting interior lights to locking doors to changing the volume of music in dangerous driving conditions,” he says. “A car can be alerted, or even slow down when there is threatening language or cursing in the car. When kids are detected in the back of a car, the car can auto-lock the windows and doors, or change the channel to kids’ programming.”
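Once the perception side has classified who is in the cabin and what is happening, the control logic itself can be simple. A toy policy layer, with hypothetical state fields and action names mirroring Gultekin’s examples:

```python
from dataclasses import dataclass

@dataclass
class CabinState:
    """Hypothetical outputs of an in-cabin perception stack."""
    children_in_rear: bool = False
    threatening_speech: bool = False
    dangerous_conditions: bool = False

def cabin_policy(state: CabinState) -> list[str]:
    """Map detected cabin states to vehicle actions. The action
    names are illustrative, not any vendor's real API."""
    actions = []
    if state.children_in_rear:
        actions += ["lock_rear_windows", "lock_rear_doors",
                    "switch_to_kids_programming"]
    if state.threatening_speech:
        actions += ["alert_driver", "slow_down"]
    if state.dangerous_conditions:
        actions += ["lower_music_volume", "adjust_interior_lights"]
    return actions
```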

The future commute

“One question that we and our automotive partners spend a lot of time thinking about is: in next-generation vehicles—with semi-autonomous capabilities, robo shuttles, or ridesharing—how will people want to spend their time?” el Kaliouby says. “Some will want to work, others may want to relax, watch content, sleep, or socialize with other people in the car.”
This is where AI can help, el Kaliouby suggests, by providing a deep analysis of the emotions and cognitive states of passengers and their interactions with each other and the in-cabin systems.
During the 2019 Consumer Electronics Show, Kia introduced its Real-time Emotion Adaptive Driving (R.E.A.D.) technology, an AI-powered interactive cabin that reacts and adjusts itself to the emotional state of the passengers. The system uses cameras and sensors to read the facial expressions, heart rate, and electrodermal activity of the passengers. It then tailors the interior environment according to its assessment to create a more pleasant mobility experience.
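Kia hasn’t detailed how R.E.A.D. weighs its sensors, but one simple way to picture the idea is late fusion: each modality yields a normalized stress score, a weighted sum combines them, and the fused value selects cabin settings. The weights and bands below are purely illustrative assumptions:

```python
# Hypothetical per-modality stress scores in [0, 1], e.g. from a
# facial-expression model, a heart-rate sensor, and an EDA sensor.
WEIGHTS = {"facial_expression": 0.5, "heart_rate": 0.3, "electrodermal": 0.2}

def fused_stress(scores: dict[str, float]) -> float:
    """Weighted late fusion of modality scores into one stress estimate."""
    return sum(WEIGHTS[m] * scores.get(m, 0.0) for m in WEIGHTS)

def adapt_cabin(stress: float) -> dict:
    """Choose cabin settings from the fused estimate (illustrative bands)."""
    if stress > 0.7:
        return {"lighting": "soft_blue", "music": "calm", "temp_delta_c": -1.0}
    if stress < 0.3:
        return {"lighting": "neutral", "music": "keep", "temp_delta_c": 0.0}
    return {"lighting": "warm", "music": "ambient", "temp_delta_c": -0.5}
```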
“Emotion AI can provide an understanding of people’s preferences and optimize the in-cabin environment to offer a personalized experience,” el Kaliouby says.

That eerie feeling

If all of this sounds like a recipe for disaster, that’s because it has all the ingredients.
One of the challenges of developing AI is algorithmic bias, the tendency of deep learning algorithms to pick up overt and covert biases contained in their training data sets. For instance, a deep learning algorithm trained on too many white faces will become less accurate in detecting faces with darker skin tones. Algorithmic bias can lead to discrimination against demographics who are not well represented in the training data.
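A first-pass audit for this kind of bias is to break accuracy out by demographic group instead of reporting a single aggregate number; a large gap between groups signals that the training data underrepresents someone. A minimal sketch:

```python
import numpy as np

def accuracy_by_group(y_true, y_pred, groups):
    """Per-group accuracy for a bias audit. Gaps between groups
    point at underrepresentation in the training data."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {
        str(g): float((y_pred[groups == g] == y_true[groups == g]).mean())
        for g in np.unique(groups)
    }

# Toy data: the model is far less accurate on group "B".
print(accuracy_by_group(
    y_true=[1, 1, 0, 1, 0, 1],
    y_pred=[1, 1, 0, 1, 1, 0],
    groups=["A", "A", "A", "B", "B", "B"],
))  # {'A': 1.0, 'B': 0.3333...}
```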
Recognizing this, Affectiva has analyzed more than 8.5 million faces in 87 countries. “This helps ensure that our algorithms work with high accuracy regardless of age, gender and ethnicity,” says el Kaliouby. “Mitigating bias in AI is critical to ensure technology works in a global world.”
Also, like other deep learning applications, building AI systems that can monitor and determine what’s happening inside a vehicle requires massive amounts of data. Companies collect and store user data on their servers, where they run it through their AI algorithms. In recent years, there have been several cases where the collection of consumer data has resulted in privacy scandals. For example, last year, news broke that Amazon’s Alexa assistant had accidentally recorded a private conversation of an Oregon couple and sent it to a random person on their contacts list.


There have also been several cases where companies have made user data available to outside contractors without explicitly warning the users. Companies often employ contractors to annotate user data, which they then use to train their AI algorithms.
One solution that several companies are exploring is edge AI, specialized hardware that can run deep learning algorithms locally without needing a link to the cloud. Edge AI obviates the need to send data to the cloud and store it on company servers. “We recognize that people’s emotions and states are extremely personal,” el Kaliouby says, explaining that Affectiva’s technology runs locally on automotive-grade embedded systems. “It does not require data to be sent to the cloud,” she adds.
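The edge pattern itself is easy to illustrate. This sketch uses TensorFlow Lite’s inference-only Python runtime with a hypothetical driver-state model file; the point is that the loop makes no network calls, so camera frames never leave the vehicle. It is a generic example, not Affectiva’s actual stack:

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter  # on-device inference runtime

# "driver_state.tflite" is a hypothetical model file; any vision model
# exported to TensorFlow Lite follows the same pattern.
interpreter = Interpreter(model_path="driver_state.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def classify_frame(frame: np.ndarray) -> np.ndarray:
    """Run one camera frame through the local model. Nothing here
    touches the network, so the raw frame stays in the vehicle."""
    interpreter.set_tensor(inp["index"], frame.astype(np.float32)[np.newaxis, ...])
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])[0]
```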
Edge AI can also enhance the security of AI-powered vehicles. “There is still a major fear that cars can be hacked and hijacked by bad actors,” Gultekin says. The security of internet-connected cars has become a major concern in recent years. Researchers have shown that, with enough resources, they can hack cars remotely and endanger their passengers.
“The way to overcome this in a risky environment like driving is by completely disconnecting the vehicle from processing on the cloud during driving. That’s why the AI, for the most part, needs to run on the edge,” Gultekin says.
But there still remain fears that tech companies can use your data for other sinister purposes. “We need to think carefully about the meaning of fundamental rights and how to protect them in light of the continued desire for efficiency,” warns Bernhardt Trout, a chemical engineering professor who teaches an AI ethics course at the Massachusetts Institute of Technology. “These companies can use their AI systems to target us with ads and manipulate us with the aim of controlling us in every way according to the vision of the company.”
Trout describes AI-powered smart cities and cars as “being utterly effective in controlling the behavior of users” and potentially paving the way for “Stalin-style surveillance.”
“Only, Stalin couldn’t read our minds,” he says.
Moving forward, transparency will play an important role in building trust in the AI systems that will slowly find their way into our cars—if we choose to trust them at all. El Kaliouby stresses that automakers and mobility services providers must be clear about in-cabin sensing technology and educate consumers on what the technology does, what data it collects, and how it stores and uses the data.
“We strongly believe in the need for clear opt-in and consent to help build consumer confidence with this technology,” el Kaliouby says. “Any AI that is designed to interact with humans must be evaluated with regards to the ethical and privacy implications for these technologies.”
