A Blog by Jonathan Low


Apr 17, 2018

Google Creates Way for Artificial Intelligence To Isolate Voices In a Crowd

So much for blending into the crowd. JL

Jeff Dunn reports in Ars Technica:

Google researchers attempted to replicate the cocktail party effect, or the human brain's ability to focus on one source of audio while filtering out others—just as you would while talking to a friend at a party. Google's system reads the "face thumbnails" of people speaking in each video frame and a spectrogram of that video's soundtrack, and is able to sort out which audio source belongs to which face.
Google researchers have developed a deep-learning system designed to help computers better identify and isolate individual voices within a noisy environment.
As noted in a post on the company's Google Research Blog this week, a team within the tech giant attempted to replicate the cocktail party effect, or the human brain's ability to focus on one source of audio while filtering out others—just as you would while talking to a friend at a party.
Google's method uses an audio-visual model, so it is primarily focused on isolating voices in videos. The company posted a number of YouTube videos showing the tech in action.

The company says the tech works on videos with a single audio track and can isolate voices either algorithmically, based on who's talking, or by having a user manually select the face of the person whose voice they want to hear.
Google says the visual component here is key, as the tech watches for when a person's mouth is moving to better identify which voices to focus on at a given point and to create more accurate individual speech tracks for the length of a video.
According to the blog post, the researchers developed this model by gathering 100,000 videos of "lectures and talks" on YouTube, extracting nearly 2,000 hours' worth of segments of unobstructed speech from those videos, then mixing that audio to create a "synthetic cocktail party" with artificial background noise added.
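As a rough illustration of that mixing step, here is a minimal sketch of how one such training example might be assembled. The function name, the SNR handling, and the equal-length inputs are assumptions for illustration, not details from Google's actual pipeline:

```python
import numpy as np

def mix_cocktail_party(speech_a, speech_b, noise, snr_db=0.0):
    """Overlay two clean speech clips, then add background noise scaled
    to a target speech-to-noise ratio (in dB). Inputs are 1-D NumPy
    arrays of equal length at the same sample rate."""
    mixture = speech_a + speech_b
    speech_power = np.mean(mixture ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12  # guard against silent noise
    # Solve for the gain that puts the noise snr_db below the speech.
    gain = np.sqrt(speech_power / (noise_power * 10.0 ** (snr_db / 10.0)))
    return mixture + gain * noise

# Example with random placeholder signals standing in for real clips:
sr = 16000
a, b = np.random.randn(3 * sr) * 0.1, np.random.randn(3 * sr) * 0.1
party = mix_cocktail_party(a, b, np.random.randn(3 * sr) * 0.05, snr_db=10)
```

The appeal of this setup is that the clean source clips double as ground truth, so the model can be trained to recover them from the mixture.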
Google then trained the tech to split that mixed audio by reading the "face thumbnails" of people speaking in each video frame and a spectrogram of that video's soundtrack. The system is able to sort out which audio source belongs to which face at a given time and create separate speech tracks for each speaker. Whew.
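To make the faces-plus-spectrogram idea concrete, here is a hedged PyTorch sketch of a two-speaker separator. The class name, layer sizes, and two-speaker setup are all assumptions for illustration:

```python
import torch
import torch.nn as nn

class AudioVisualSeparator(nn.Module):
    """Sketch of the idea: fuse per-frame face embeddings with the
    mixture spectrogram and predict a mask per speaker. All sizes are
    illustrative assumptions, not Google's published architecture."""

    def __init__(self, face_dim=1024, freq_bins=257, hidden=400):
        super().__init__()
        self.audio_proj = nn.Linear(freq_bins, hidden)   # spectrogram stream
        self.face_proj = nn.Linear(face_dim, hidden)     # visual stream
        self.fusion = nn.LSTM(hidden * 3, hidden,
                              batch_first=True, bidirectional=True)
        self.mask_head = nn.Linear(hidden * 2, freq_bins * 2)

    def forward(self, spectrogram, face_a, face_b):
        # spectrogram: (batch, time, freq_bins) magnitudes of the mixture
        # face_a, face_b: (batch, time, face_dim) per-frame face embeddings
        fused = torch.cat([self.audio_proj(spectrogram),
                           self.face_proj(face_a),
                           self.face_proj(face_b)], dim=-1)
        states, _ = self.fusion(fused)
        masks = torch.sigmoid(self.mask_head(states))
        mask_a, mask_b = masks.chunk(2, dim=-1)
        # Masking the mixture yields one speech spectrogram per speaker.
        return mask_a * spectrogram, mask_b * spectrogram
```

The published model is more elaborate (it processes the spectrogram with dilated convolutions and predicts complex-valued masks), so treat this as the shape of the idea rather than a faithful reimplementation.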

Google singled out closed-captioning systems as one area where this system could be a boon, but the company says it envisions "a wide range of applications for this technology" and that it is "currently exploring opportunities for incorporating it into various Google products." Hangouts and YouTube seem like two easy places to start. It's not hard to see how the tech could work when applied to a pair of smart glasses, à la Google Glass, or to voice-amplifying earbuds, either.
Aiding smart speakers like the Google Home in their ability to recognize individual voices seems like another use case, but because this model is focused on video, it would likely work better with a speaker that has a display, like Amazon's Echo Show. Earlier this year, Google opened up the Google Assistant to "smart display" devices like the Echo Show, but the company hasn't released one itself.

In any case, the privacy ramifications of this kind of tech seem just as obvious as the potential use cases. Google's voice isolation is far from bulletproof in the examples above, but with some more fine-tuning, it could make for a powerful eavesdropping and surveillance tool in the wrong hands.
That's a lot of speculation for now, though. Here's hoping this research at least lessens the need to shout at Google Home in the future.
