A Blog by Jonathan Low

 

Jun 15, 2019

How 6G Will Stream Human Brain Caliber AI To Wireless Devices

Getting there is half the fun...JL

Jeremy Horwitz reports in VentureBeat:

“Terahertz frequencies will be the first wireless spectrum that can provide the real time computations needed for wireless remoting of human cognition.” A wireless drone with limited on-board computing could be remotely guided by a server-sized AI as capable as a top human pilot, or a building could be assembled by machinery directed by computers far from the construction site. Submillimeter wave spectra will enable enhancements of see-in-the-dark millimeter wave cameras, high-definition radar, and terahertz security body scanning. The high bandwidth will enable transition from fiber cable to “wireless fiber” for connectivity.

As 5G networks continue to expand in cities and countries across the globe, key researchers have already started to lay the foundation for 6G deployments roughly a decade from now. This time, they say, the key selling point won’t be faster phones or wireless home internet service, but rather a range of advanced industrial and scientific applications — including wireless, real-time remote access to human brain-level AI computing.
That’s one of the more interesting takeaways from a new IEEE paper published by NYU Wireless’s pioneering researcher Dr. Ted Rappaport and colleagues, focused on applications for 100 gigahertz (GHz) to 3 terahertz (THz) wireless spectrum. As prior cellular generations have continually expanded the use of radio spectrum from microwave frequencies up to millimeter wave frequencies, that “submillimeter wave” range is the last collection of seemingly safe, non-ionizing frequencies that can be used for communications before hitting optical, x-ray, gamma ray, and cosmic ray wavelengths.
Dr. Rappaport’s team says that while 5G networks should eventually be able to deliver 100Gbps speeds, signal densification technology doesn’t yet exist to eclipse that rate — even on today’s millimeter wave bands, one of which offers access to bandwidth that’s akin to a 500-lane highway. Consequently, opening up the terahertz frequencies will provide gigantic swaths of new bandwidth for wireless use, enabling unthinkable quantities and types of data to be transferred in only a second.
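For a sense of scale, here is a rough back-of-envelope sketch, not taken from the paper, that applies the textbook Shannon capacity formula to the spectrum range in question; the channel width and signal-to-noise figures are illustrative assumptions only:

```python
import math

# Illustrative back-of-envelope math only; the channel width and SNR below
# are assumptions, not numbers from the Rappaport paper.

total_spectrum_hz = 3e12 - 100e9      # 100 GHz to 3 THz, the range the paper covers
channel_bw_hz = 10e9                  # assume a single 10 GHz channel (illustrative)
snr_db = 10                           # assume a modest 10 dB signal-to-noise ratio
snr_linear = 10 ** (snr_db / 10)

# Shannon capacity: C = B * log2(1 + SNR)
channel_capacity_bps = channel_bw_hz * math.log2(1 + snr_linear)
print(f"One 10 GHz channel at 10 dB SNR: {channel_capacity_bps / 1e9:.0f} Gbps")

# Scale the same spectral efficiency across the full 2.9 THz of new spectrum
efficiency_bps_per_hz = channel_capacity_bps / channel_bw_hz
aggregate_bps = total_spectrum_hz * efficiency_bps_per_hz
print(f"Entire 100 GHz-3 THz range at that efficiency: {aggregate_bps / 1e12:.0f} Tbps")
```

Even at that deliberately modest efficiency, the arithmetic lands around 10 Tbps of aggregate capacity, which is the sense in which the new bands dwarf today's allocations.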
The most relatable application would enable wireless devices to remotely transfer quantities of computational data comparable to a human brain in real time. As the researchers explain it, “terahertz frequencies will likely be the first wireless spectrum that can provide the real time computations needed for wireless remoting of human cognition.” Put another way, a wireless drone with limited on-board computing could be remotely guided by a server-sized AI as capable as a top human pilot, or a building could be assembled by machinery directed by computers far from the construction site.
Some of that might sound familiar, as similar remote control concepts are already in the works for 5G — but with human operators. The key with 6G is that all this computational heavy lifting would be done by human-class artificial intelligence, pushing vast amounts of observational and response data back and forth. By 2036, the researchers note, Moore’s law suggests that a computer with human brain-class computational power will be purchasable by end users for $1,000, the cost of a premium smartphone today; 6G would enable earlier access to this class of computer from anywhere.
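That claim is easy to sanity-check with simple arithmetic. The sketch below just runs Moore's law backwards from the paper's 2036 price point; the classic two-year doubling period is the only assumption:

```python
# Sanity-checking the Moore's-law claim with simple arithmetic.
# Assumption: price/performance doubles every 2 years (classic Moore's-law pacing).

years = 2036 - 2019            # horizon from the article's publication
doubling_period = 2            # years per doubling (assumed)
factor = 2 ** (years / doubling_period)

target_price = 1_000           # the paper's 2036 price point, in dollars
implied_price_today = target_price * factor
print(f"Improvement factor over {years} years: {factor:,.0f}x")
print(f"Implied cost of brain-class compute today: ${implied_price_today:,.0f}")
# ~362x, i.e. roughly $360,000 today -- server-rack territory, which is
# consistent with the article's image of a "server-sized AI".
```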
Dr. Rappaport’s team also expects that the submillimeter wave spectra will enable enhancements of existing technologies, such as see-in-the-dark millimeter wave cameras, high-definition radar, and terahertz (rather than millimeter wave) security body scanning. The incredibly high bandwidth will also enable a transition from reliance on fiber cable infrastructure to “wireless fiber” for network backhaul and datacenter connectivity.
There are, of course, significant practical challenges to overcome before 6G can move from theoretical to real, including miniaturization of the core technologies and health studies to confirm that terahertz frequencies are as safe as currently believed. Additionally, like millimeter wave transmissions, submillimeter wave frequencies will require highly directional antennas, in part because they’re highly susceptible to interference from the atmosphere, particularly above 800 GHz.
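The directionality requirement falls out of basic link-budget physics. As a minimal sketch, not from the paper: free-space path loss grows with the square of frequency, so every step up the spectrum has to be bought back with antenna gain.

```python
import math

C = 299_792_458  # speed of light, m/s

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

# Compare a familiar Wi-Fi frequency with submillimeter-wave links over 100 m
for freq in (2.4e9, 100e9, 1e12):
    print(f"{freq / 1e9:>6.1f} GHz over 100 m: {fspl_db(100, freq):6.1f} dB")
# The ~52 dB gap between 2.4 GHz and 1 THz is what high-gain, highly
# directional antennas must recover, and atmospheric absorption
# (especially above ~800 GHz) adds further loss on top of this.
```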
But the researchers note that overcoming those challenges, as was successfully accomplished with millimeter wave over the past decade, will lead to great benefits for users. Data transmissions will consume far less energy, and ultra-high gain antennas will be able to be made “extremely small.” That will pave the way for tinier devices, including military-grade secure communications links that are “exceedingly difficult” to intercept or eavesdrop upon.
In March, the FCC unanimously voted to open the 95GHz to 3THz range for “6G, 7G, or whatever is next,” though commissioners suggested the speculative uses of the frequencies at that point made the vote akin to “designating zoning laws for the moon.” If history is any guide, Dr. Rappaport and others will be at the forefront of transitioning these concepts from science fiction to science fact — in the foreseeable if not immediate future.

Amazon sends Alexa developers on quest for ‘holy grail of voice science’

Above: Amazon VP of devices David Limp onstage at the re:Mars conference held at the Aria Resort and Casino in Las Vegas on June 4, 2019
Image Credit: Khari Johnson / VentureBeat
At Amazon’s re:Mars conference last week, the company rolled out Alexa Conversations in preview. Conversations is a module within the Alexa Skills Kit that stitches together Alexa voice apps into experiences that help you accomplish complex tasks.
Alexa Conversations may be Amazon’s most intriguing and substantial pitch to voice developers in years. Conversations will make creating skills possible with fewer lines of code. It will also do away with the need to understand the many different ways a person can ask to complete an action, as a recurrent neural network will automatically generate dialogue flow.
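Amazon had not published the final Conversations API at the time of writing, so what follows is a purely hypothetical sketch of the idea being described: the developer supplies a few annotated sample dialogues, and a learned model generalizes them into a full dialogue flow. Every name and structure below is invented for illustration.

```python
# Hypothetical sketch only -- not the actual Alexa Skills Kit API.
# Idea per the article: the developer writes a few annotated sample dialogues,
# and a recurrent neural network generalizes them into a full dialogue flow,
# replacing hand-written state-machine code.

sample_dialogues = [
    [
        ("user", "Get me two tickets for {movie} tonight"),
        ("assistant", "Which theater would you like?"),
        ("user", "{theater}, please"),
        ("assistant", "Booked {movie} at {theater}. Want dinner nearby?"),
    ],
    [
        ("user", "Book a table for two at {restaurant} at {time}"),
        ("assistant", "Done. Should I get you a ride there?"),
    ],
]

# The braced slot names are placeholders the model learns to fill from context.
# Trained on samples like these, the model can handle phrasings and turn
# orderings the developer never enumerated -- e.g. a user who volunteers the
# theater up front gets prompted only for what is still missing, rather than
# being forced back through a rigid, hand-coded flow.
```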
For users, Alexa Conversations will make it easier to complete tasks that require the incorporation of multiple skills and will cut down on the number of interactions needed to do things like reserve a movie ticket or order food.
Amazon VP David Limp refers to Conversations as a great next step forward. “It has been sort of the holy grail of voice science, which is how can you make a conversation string together when you didn’t actually programmatically think about it end-to-end. […] I think a year or two ago I would have said we didn’t see a way out of that tunnel, but now I think the science is showing us that [although] it will take us years to get more and more conversational, […] this breakthrough is very big for us, tip of the iceberg,” Limp said.

It begins with a night out and casual conversation

The Alexa Conversations journey is first emerging with a night-out scenario. In an onstage demo last week at re:Mars, a woman buys a movie ticket, makes dinner reservations, and hails a ride in about one minute. (Atom Tickets, Uber, and OpenTable are early Alexa Conversations partners.)
The night-out scenario is the first of what Amazon says will become a collection of bundled experiences to get things done.
Conversations may someday power more difficult tasks, such as the weekend-trip scenario Limp demonstrated last fall at an event introducing nearly a dozen new Alexa-powered devices. The holy grail Limp speaks of is a transformation every major tech company with an AI assistant is chasing: evolving assistants from a voice interface that completes basic tasks one at a time into an assistant that can handle complex, multi-step tasks.
Two years ago, during a rare onstage gathering of current and former leaders from the Alexa, Google Assistant, Siri, and Cortana teams, Viv cofounder and Siri co-creator Adam Cheyer, a person who’s pondered the future of voice assistants since the 1990s, wondered aloud about an assistant that can guide you through the scenario of planning for your sister’s wedding. (Samsung acquired Viv in October 2016 to enhance its Bixby AI assistant.)
At the event, Cheyer talked about how voice will define the next decade of computing and the importance of bridging first-party AI assistant services with a third-party voice app ecosystem. “I don’t want to have to remember what a car assistant can do, the TV system do, the Alexa versus Cortana versus … too much. I want one assistant on every device to access every service without any differentiation between what’s core and what’s third-party,” Cheyer said.
Amazon is working toward that end, starting by reducing the number of interactions you need to get things done with Alexa. Last fall, Amazon introduced Follow-Up Mode, so you can engage in multiple interactions but only have to say the “Alexa” wake word once. With Conversations, the night-out scenario is cut from about 40 back-and-forth interactions to roughly a dozen.
To further increase the perception that Alexa is capable of natural conversation, the AI assistant learned to whisper when a person is whispering, and can now respond to name-free skill invocation. That means you can say “Get me a ride” instead of first having to launch the skill by saying, “Alexa, launch the Uber skill.”

Creating the perception of intelligence

Amazon isn’t alone in its ambition to make an assistant capable of fluid conversation like the kind you’d expect from another person. Google introduced Continued Conversation so you don’t have to say the wake word to continue talking about something. Alexa Conversations also gives Amazon’s AI assistant the power to quickly take care of things or engage in commerce, akin to Google Assistant’s new food ordering powers and Google’s Duplex. Duplex for the Web and deep connections between Android apps and Google Assistant made their debut last month. Microsoft is also bringing similar intelligence to workplace assistants with Semantic Machines, a startup it acquired in 2018.
It all points to the same issue: more complex tasks require more than a single exchange, a point Alexa AI senior product manager Sanju Pancholi emphasized. “When you’re starting to solve more complex problems, there is more give and take of information, there are more decisions at each point in time, and hence there are multiple actions that can come in context of the same conversation with different individuals,” he said.
He led a session at re:Mars to make a pitch for Alexa Conversations for businesses and developers, and talked about an assistant that can “solve their product and service needs in the moment of recognition when they realize they need it.”
To be seen as intelligent, Amazon thinks an assistant should understand natural language, remember context, and make proactive predictive suggestions, traits that can prove an assistant is smart enough to accomplish more complex tasks. Doing away with a need to repeat yourself is also critical.
“If you make [customers] repeat information again and again and again, you are forcing them to believe that they are talking to a dumb entity, and if that’s the rapport you’re building with them from the get-go, the chances are they’re never going to delegate higher order tasks to you, because they will never think you’re capable of solving higher-order problems for them,” he said.
The Alexa Skills Store now has more than 90,000 skills, and 325,000 developers have used the Alexa Skills Kit, Pancholi said. Alexa is now available on 100 million devices.
Pancholi shared with developers that potential next steps for Alexa Conversations scenarios may include collections of skills to help people watch content at home, get food delivered, or buy a gift.

Skills on skills

In an interview with VentureBeat, Alexa chief scientist Rohit Prasad declined to share details about which use cases may be taken up next, but believes they could include ways to help plan a weekend. Prasad, who has led Alexa AI initiatives for language understanding and emotional intelligence, said Conversations is designed to stitch together the voice app ecosystem, increasing engagement for skills and Alexa alike.
“The developer proposition is that you start getting more traffic and more discovery as the more cross skilled we become, like the fact that night out experience is now getting you to order a cab. So Uber and Lyft will see more traffic as well and more customer engagement. So that, and plus skill discovery will happen naturally as part of that. So that’s a huge piece of our value proposition in this case,” Prasad said.
Even Blueprints — voice app templates for private, custom Echo skills — may soon incorporate Conversations, Prasad said. Batches of custom skills for the home could, for example, walk kids through multi-step routines, help with chores, and count down to important dates.
The first proactive Alexa features — Hunches, which suggests event reminders and smart home actions, and Alexa Guard, which listens for the sound of breaking glass or a smoke alarm — were rolled out last fall.
Conversations could someday also become part of Amazon’s workplace voice assistant offering if the module is incorporated into Alexa for Business, which added support for Blueprints in March.

Brands, indie developers, and skill discovery

In January 2018, CNBC reported that Amazon was in talks with brands like Procter & Gamble and Clorox to ink deals to promote their products to Alexa users.
Amazon Alexa VP Steve Rabuchin insists there’s no way for businesses or developers to get prioritized by Alexa’s voice app recommendation system, but the Alexa voice app ecosystem may face another problem. Because voice apps often operate without a screen, bundling skills together means some skills may inevitably be left out or go unranked.
This matters more for voice apps than elsewhere: unlike searching for apps on a smartphone, Alexa’s voice app recommendation engine only serves up three skills at a time.
“Our vision isn’t to end up where it’s just the biggest brands or most popular,” Rabuchin said in an interview with VentureBeat. “A lot of our most popular skills are indie developers, individual developers.”
Amazon’s skill recommendation engine, which responds when you say things like “Alexa, get me a ride,” recommends voice apps based on measurements like engagement levels, which Amazon started paying developers for in 2017.
Conversations will incorporate skill quality measurements like user ratings and engagement levels. Factors like regional significance, whether a skill works on a smart display, and personal information may also help decide which skills appear during Alexa Conversations interactions.
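Amazon hasn’t published its ranking formula, so the following is only a guess at its general shape: a weighted score over the signals described above, with every weight and field name invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical sketch of a skill-ranking heuristic. Amazon has not published
# its formula; the signals below are the ones the article names, but the
# weights and field names are invented for illustration.

@dataclass
class Skill:
    name: str
    user_rating: float        # 0-5 star average
    engagement: float         # normalized engagement level, 0-1
    regional_match: bool      # relevant in the user's region?
    supports_display: bool    # works on a smart display?

def score(skill: Skill, on_display_device: bool) -> float:
    s = 0.4 * (skill.user_rating / 5) + 0.4 * skill.engagement
    if skill.regional_match:
        s += 0.1
    if on_display_device and skill.supports_display:
        s += 0.1
    return s

def recommend(skills: list[Skill], on_display_device: bool) -> list[Skill]:
    # Per the article, Alexa only serves up three skills at a time.
    return sorted(skills, key=lambda s: score(s, on_display_device),
                  reverse=True)[:3]
```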
“I think we have a good playbook to start from like, I don’t think it’s a perfect playbook, but it’s a great one to start with,” Prasad said.
 
 
