A Blog by Jonathan Low


Jul 18, 2013

The Cognitive Desktop: From Siri's Creators, A Predictive Digital Personal Assistant

Are we getting busier or lazier? Or maybe some of each.

To the extent that we live in the present, we employ an array of technologies to extend our knowledge, productivity and reach. Our questions get answered, our directions are plotted and our work gets done. But it is clearly not enough for devices to do our thinking for us. Now we want them to anticipate our needs and wants, not merely satisfy them.

Predictability has always been the holy grail of business. If you could accurately determine what someone wants or needs and then effectively provide the solution, you could probably charge more, earn higher margins and secure a loyal customer in the process. So there are manifest incentives to bring that vision to reality. The challenge is designing a system that can achieve that goal. And as if that were not difficult enough, since we are living in the real world, it is preferable that this magical savant operate on limited amounts of data which it can process in very short periods of time.

Siri became a breakthrough success after initial skepticism because the voice activation feature made it more convenient than typing. There was a certain wow factor involved, but anything that reduces what economists call friction - the quotidian tasks and features that make stuff work but slow things down - is worth paying extra for. Adding to that capability by reaching into the future could be even more exciting. The risk is that choices may be limited and the joys of serendipity constrained. But as a society we have spoken: we are deeply committed to reductions of effort, for good or ill, and we will reward anyone or anything that eases our burdens. JL  

Rachel Metz reports in MIT Technology Review:

An intelligent assistant that could someday know what information you need before you even ask.
In a small, dark, room off a long hallway within a sprawling complex of buildings in Silicon Valley, an array of massive flat-panel displays and video cameras track Grit Denker’s every move. Denker, a senior computer scientist at the nonprofit R&D institute SRI, is showing off Bright.
Initially, Bright is meant to cut down on the cognitive overload faced by workers in high-stress, data-intensive jobs like emergency response and network security. Bright may, for instance, aid network administrators in trying to stop the spread of a fast-moving virus by quickly providing crucial infection information, or help 911 operators send the right kind of assistance to the scene of an accident. But like many other technologies developed at SRI, such as the digital personal assistant Siri (now owned by Apple), Bright could eventually trickle down to laptops and smartphones. It might take the form of software that automatically brings up listings for your favorite shows when it thinks you’re about to sit down and watch TV, or searches the Web for information relevant to your latest research project without requiring you to lift a finger.
Already some assistant software, such as Google Now for Android smartphones, tries to predict what information a user may need and serve it up automatically. It does this by, for example, recognizing that the user is waiting at a bus stop and delivering bus timetables. The aim of Bright is to develop something even more sophisticated and capable in an office setting. But the big challenge for Bright and similar projects is this: how do you learn from a relatively small amount of information?
Originally created by Stanford University as a research institution in 1946 (it’s been operating independently since 1970), SRI International, based in Menlo Park, California, has developed key technologies including the computer mouse, the LCD, and even the first twinklings of the Internet, called ARPAnet. In recent years, it has had success in the artificial-intelligence field with Siri, which was spun out of a project SRI did for the Department of Defense’s Defense Advanced Research Projects Agency, or DARPA, called CALO (that’s “cognitive agent that learns and organizes”).
Denker describes Bright as a “cognitive desktop” and “a desktop that really understands what you’re doing, and not just for you, but also in a collaborative setting for people.” In its current setup, three cameras stare out at her; a monitor shows where she’s looking and displays a real-time log of every action she takes, as well as a familiar-looking computer desktop of files and folders. When she uses the monitor in front of her to open an e-mail from Wells Fargo bank requesting a meeting, for example, Bright records all her actions on a monitor off to the left, noting that she opened the message, that she spent time looking at it (rather than just gazing elsewhere on the screen), and that she closed it.
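The action log Denker demonstrates - noting that a message was opened, dwelled on, and closed - can be pictured as a timestamped event stream. The sketch below is purely illustrative (the class and method names are my own, not SRI's), showing how dwell time on an item might be recovered from such a log:

```python
from dataclasses import dataclass, field
import time

@dataclass
class ActionEvent:
    action: str      # e.g. "open", "gaze", "close"
    target: str      # e.g. "email:meeting-request"
    timestamp: float

@dataclass
class ActionLog:
    events: list = field(default_factory=list)

    def record(self, action, target, timestamp=None):
        """Append one user action; timestamps default to the current time."""
        self.events.append(ActionEvent(action, target, timestamp or time.time()))

    def dwell_time(self, target):
        """Seconds between the first 'open' and first 'close' of a target."""
        opened = closed = None
        for e in self.events:
            if e.target != target:
                continue
            if e.action == "open" and opened is None:
                opened = e.timestamp
            if e.action == "close" and closed is None:
                closed = e.timestamp
        if opened is not None and closed is not None:
            return closed - opened
        return 0.0

log = ActionLog()
log.record("open", "email:meeting-request", timestamp=100.0)
log.record("gaze", "email:meeting-request", timestamp=101.5)
log.record("close", "email:meeting-request", timestamp=130.0)
print(log.dwell_time("email:meeting-request"))  # 30.0
```

Distinguishing genuine attention (gaze on the message) from mere screen time is what the extra cameras in Bright's setup are for; a log like this is only the raw material.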
As Denker demonstrates Bright’s nascent capabilities, it’s not hard to imagine the technology easing everything from scheduling tasks to searching the Web. She explains that her team is trying to adapt existing computer science techniques that try to increase efficiency by anticipating what information will be needed next and testing different actions in advance to speed up response time. Bright, she says, uses the same ideas to anticipate what the user will want to do, so it requires additional equipment to monitor the user. A touch-sensitive display can track finger touches, and hand motions—such as waving—are tracked too.
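The anticipation technique Denker alludes to - predicting what will be needed next so it can be prepared in advance - resembles classic prefetching. A minimal sketch, assuming a simple first-order model of which action tends to follow the current one (this is an illustration of the general idea, not SRI's implementation):

```python
from collections import defaultdict, Counter

class SuccessorPredictor:
    """Learns which user action most often follows each other action."""

    def __init__(self):
        self.followers = defaultdict(Counter)
        self.prev = None

    def observe(self, action):
        """Record one action in sequence, updating transition counts."""
        if self.prev is not None:
            self.followers[self.prev][action] += 1
        self.prev = action

    def predict(self, action):
        """Most frequently observed successor of `action`, or None if unseen."""
        counts = self.followers.get(action)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

p = SuccessorPredictor()
for a in ["open_email", "open_calendar", "open_email", "open_calendar",
          "open_email", "reply"]:
    p.observe(a)
print(p.predict("open_email"))  # "open_calendar"
```

A system could use such a prediction to warm up the likely next resource - fetching the calendar before the user asks for it - which is exactly the response-time win the prefetching literature targets.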
While it is being developed for cybersecurity and emergency response, Bright could be tailored for other types of users. In schools, for example, Bright might be able to determine that a student is struggling and adjust itself to better meet his or her needs.
There’s a long way to go, however. The system is currently focused on “cognitive indexing”—the mechanism that ties various clues together and then tries to predict what is important. The team behind Bright also needs to build its abilities to predict interests and automate tasks. And before it can be rolled out anywhere, Bright needs to learn how to study what you’re using your computer for.
Getting to know a user is difficult, says Bill Mark, vice president of information and computing sciences at SRI and one of the principal investigators behind CALO. Mark calls this the “small-data problem”; while “big data” efforts focus on gleaning insights from mountains of information, systems like Bright are looking for patterns in much smaller quantities, and this can be very tricky. The limited data set, combined with users’ tendency to change behavior, is very unfriendly to pattern-finding algorithms, he says: “We’re not putting in that much data. These machine-learning algorithms like to generalize over very large amounts of data.”
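Mark's "small-data problem" can be made concrete with a toy calculation: a handful of observations make apparent patterns look far more decisive than they are. One standard hedge (my illustration, not SRI's method) is add-one smoothing, which pulls sparse estimates back toward uniform:

```python
def smoothed_probability(count, total, num_options):
    """Add-one (Laplace) smoothed estimate of P(option) from sparse counts."""
    return (count + 1) / (total + num_options)

# Suppose a user opened email in 3 of 4 observed morning sessions,
# with 5 apps they might plausibly open.
raw = 3 / 4                               # 0.75 -- looks decisive
smoothed = smoothed_probability(3, 4, 5)  # (3+1)/(4+5), roughly 0.44
print(raw, round(smoothed, 2))
```

With thousands of sessions the smoothed and raw estimates converge, which is why, as Mark notes, machine-learning algorithms "like to generalize over very large amounts of data" - and why a few dozen observations of a habit-changing human are so unfriendly to them.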
There are plenty of other challenges. Krzysztof Gajos, an assistant professor of computer science at Harvard who also spent a year working on CALO, notes that one of the difficulties in building intelligent interactive systems is figuring out how to distinguish mandatory tasks like office work from voluntary tasks like playing games. For office-related tasks, he says, it’s hard to design automation in a way that leaves the user feeling in control and seems worth using even though it will occasionally screw up.
“If you look back to systems like the Microsoft Clippy, you can see an example of a system that failed at that,” Gajos says. “The few times it failed were just so aggravating that it overshadowed any benefits the system might have provided for many users.”
