Mobile Is So Now: The Future of Technology is Contextual
It's like that old Carly Simon song says: what we need, and what we are increasingly demanding, is 'Anticipation.'
We are sensory beings but, as the following article explains, our senses may not have kept evolutionary pace with the developments that rule our lives. The result is that our ability to discern threats and opportunities is degraded by our inability to understand them in the contemporary context. Which makes the future even more worrisome.
For good or ill, software and hardware designers have become aware of this and are beginning to address it. Some will find this intrusive and uncomfortable. Some will be annoyed by the presumption. And some will hardly notice.
The concern is that this digital prestidigitation may deny us opportunities that could interest us or expand our awareness. On the other hand, our modern handmaiden, convenience, will be enhanced. Whatever one's attitude or outlook, the fact is that these decisions are already being made for us. Our task will be to determine whether we like them or not - and then to see if there is anything we can do about it. JL
Pete Mortensen reports in Fast Company:
It’s called situational awareness. The way we respond to the world around us is so seamless that it’s almost unconscious. Our senses pull in a multitude of information, contrast it with past experience and personality traits, and present us with a set of options for how to act or react. Then the brain selects and acts upon the preferred path. This process--our fundamental ability to interpret and act on the situations in which we find ourselves--has barely evolved since we were prelingual primates living on the veldt.

You’re walking home alone on a quiet street. You hear footsteps approaching quickly from behind. It’s nighttime. Your senses scramble to help your brain figure out what to do. You listen for signs of threat or glance backward. What you learn may prompt you to turn down another street, confront the person, or relax. Whether he or she turns out to be a mugger or a jogger, your brain rapidly cycles through many scenarios seeking an answer.
Here’s the rub: Our senses aren’t attuned to modern life. A lot of the data
needed to make good decisions are unreliable or nonexistent. And that’s a
problem.
In the coming years, there will be a shift toward what is now known as
contextual computing, defined in large part by Georgia Tech researchers Anind Dey and Gregory Abowd about a
decade ago. Always-present computers, able to sense the objective and subjective
aspects of a given situation, will augment our ability to perceive and act in
the moment based on where we are, who we’re with, and our past experiences.
These are our sixth, seventh, and eighth senses.
Hints of this shift are already arriving. Mobile devices with GPS deliver location-based services, setting a baseline for the many ways your phone can gather information it will use to make your life easier down the line. Amazon’s and Netflix’s recommendation engines, while not magnificently intuitive, feed you book and video recommendations based on your behavior and ratings.
Facebook’s and Twitter’s valuations are premised on the notion that they can
leverage knowledge of your acquaintances and interests to push out relevant
content and market to you in more effective ways.
These merely scratch the surface. The adoption of contextual
computing--combinations of hardware, software, networks, and services that use
deep understanding of the user to create tailored, relevant actions that the
user can take--is contingent on the spread of new platforms. Frankly, it depends
on the smartphone. Mobile technology isn’t interesting because it’s a new form
factor. It’s interesting because it’s always with the user and because it’s
equipped with sensors. Compared with future platforms designed from the ground up for contextual computing, today’s phones with their cool tools will seem closer to toys.
For that to happen, computer scientists, technology companies, and users all
need to understand and buy into the requirements and possibilities of contextual
computing. It’s a cultural moment not dissimilar to the way graphical and then networked computing were introduced in conceptual and technical form a decade before reaching commercial success.
At Jump, we’ve identified four data graphs essential to the rise of
contextual computing: social, interest, behavior, and personal. Some are
well-established and others have emerged seemingly out of thin air in the last
few years. By mastering all four of these graphs, players seeking to dominate
the next era of the web will be wildly successful.
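To make the framework concrete, here is a minimal sketch, in Python, of how a contextual service might represent the four graphs for a single user. The class name, fields, and sample data are hypothetical illustrations of the idea, not any product described in this article.

from dataclasses import dataclass, field

@dataclass
class UserContext:
    """Hypothetical container for one user's four data graphs."""
    social: dict = field(default_factory=dict)    # person -> relationship strength
    interest: dict = field(default_factory=dict)  # subject -> declared affinity
    behavior: dict = field(default_factory=dict)  # activity -> observed frequency
    personal: dict = field(default_factory=dict)  # value or belief -> self-reported answer

# A toy example of what a populated context might look like.
alice = UserContext(
    social={"Bob": 0.9, "Carol": 0.4},
    interest={"travel:china": 0.8, "fiction": 0.6},
    behavior={"vacation:europe": 5, "jogging": 2},
    personal={"values_privacy": True},
)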
There are legitimate ethical concerns about each of these graphs. They throw
into relief the larger questions of privacy policy we’re currently wrestling
with as a culture: Too much disclosure of the social graph can lead to friends
feeling that you’re tattling on them to a corporation. The interest graph can
turn your passions into a marketing campaign. The behavior graph can allow
people who wish you harm to know where you are and what you’re doing. And
revealing the personal graph can make it feel like an outside entity is quite
literally reading your mind. We’re all trying to understand what to do about
this from an individual standpoint, let alone a legal one.
Despite the ethical ambiguity around contextual computing, what matters is
that companies are actively constructing these graphs already. These products
and services are in the market today, but most in existence target only one or
two of these graphs. Few are pursuing all four, both because the space is immature and because there are no clear targets to shoot for. This has the unintentional
effect of highlighting the risks of using such services, without demonstrating
their benefits. For the potential of contextual computing to be realized, these
data sets must be integrated.
The social graph shows how you connect to other people and how they are connected to one another. It also reveals the nature and emotional relevance of those connections. Most people associate this graph with Facebook, but it’s an idea, and a data set, that has spread far beyond Facebook’s walls. In an ideal contextual-computing world, this graph would be complete--so that gentle nudges from software and services could bring together two strangers who could get along brilliantly and who happen to be in the same place at the same time. Imagine two people who share a friend and who simultaneously move to Omaha, where neither knows a soul.
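As a minimal sketch of the kind of "gentle nudge" described above, the Python below searches a toy social graph for pairs of strangers who share a mutual friend and have just turned up in the same city. The data structures and function name are assumptions made for illustration; no real service's API is implied.

from itertools import combinations

# Hypothetical data: friendships as adjacency sets, plus each person's current city.
friends = {
    "Ann":  {"Bob"},
    "Cleo": {"Bob"},
    "Bob":  {"Ann", "Cleo"},
}
current_city = {"Ann": "Omaha", "Cleo": "Omaha", "Bob": "Chicago"}

def suggest_introductions(friends, current_city):
    """Yield pairs of strangers who share a friend and are in the same city."""
    for a, b in combinations(friends, 2):
        strangers = b not in friends.get(a, set())
        mutual = friends.get(a, set()) & friends.get(b, set())
        same_place = current_city.get(a) == current_city.get(b)
        if strangers and mutual and same_place:
            yield a, b, mutual

for a, b, mutual in suggest_introductions(friends, current_city):
    print(f"Introduce {a} and {b} (shared friends: {', '.join(mutual)})")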
Only when this graph is open to a wide variety of services will it reach its
potential. And all the social data in the world won’t be helpful in the
slightest if you know little about a specific person’s beliefs, activities, and
interests.
The personal graph is the set of data relating to a person’s most deeply held beliefs, core values, and personality. It’s what makes a person unique in the world, just as the social graph helps to show what makes her similar to others. This data set is underdeveloped at the moment, and it’s quite difficult to design for, even conceptually.
Given that psychology still struggles to explain exactly how our personal
identities function, it’s not surprising that documenting such information in a
computable form is slow to emerge. There are early indicators that this will
change, however. For example, Proust.com, a relatively new (and struggling)
social-networking service, asks users to document intimate details of their
lives and their beliefs based on the idea of the famed Proust Questionnaire.
People have, quite reasonably, been reluctant to share such information on a publicly viewable social network.
A more successful example is Evernote, which has built a large business based
on making it incredibly easy and secure to document both recently consumed
information and your innermost thoughts. Scraping such intimate files for data
is currently the questionable realm of the NSA, however. Entirely new solutions
will need to be created if the potential of the personal graph is to be
reached.
The interest graph organizes your tastes and preferences around the subjects that tend to correlate with one another. It also captures the overlaps in taste between you and the individuals whose lives closely resemble your own. Many companies have made early bets in this arena; Twitter is a fan and believes it’s well on its way to fully charting how all subjects connect to all others.
For now, such applications are notoriously narrow. For example, a book site
like Goodreads.com is capable
of predicting what other books you might read based on your expressed interests.
What’s problematic is that the interest graph falls far short of depicting your
real interests and tastes. It cannot yet account for the way your curiosity might lead you in new directions. And it could never effectively recommend a
restaurant or a vacation spot based on what it knows you read.
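To see just how narrow a single-domain interest graph is, consider this minimal sketch of the kind of co-occurrence recommender a book site might run. All titles and counts are invented; the point is that the graph only ever links books to other books, so it has nothing to say about restaurants or vacations.

from collections import Counter

# Hypothetical interest graph: for each book, which other books its readers also shelved.
also_shelved = {
    "Dune": Counter({"Foundation": 12, "Hyperion": 7}),
    "Foundation": Counter({"Dune": 12, "I, Robot": 9}),
}

def recommend_books(liked_titles, top_n=3):
    """Recommend books that co-occur with the user's expressed interests."""
    scores = Counter()
    for title in liked_titles:
        scores.update(also_shelved.get(title, Counter()))
    for title in liked_titles:        # never re-recommend what the user already likes
        scores.pop(title, None)
    return [title for title, _ in scores.most_common(top_n)]

print(recommend_books(["Dune"]))  # ['Foundation', 'Hyperion'] -- books only, never a restaurant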
The behavior graph is built from data about what you actually do, as opposed to what you claim to do. Sensors do the job, as do, if less elegantly, self-reporting mechanisms.
This data can sit in pivotal contrast to the interest graph, allowing computers
to know, perhaps better than you, how likely you are to go for a jog. It would
be useful, too, for a travel site that notes how you tell friends you’d like to
visit China but records that you only vacation in Europe. Rather than uselessly
recommending vacation deals to Beijing, a smart travel app would instead feed
you deals to Paris or Berlin. The behavior graph provides the foundation, to
some extent, of Google Search, Netflix recommendations, Amazon recommendations,
iTunes Genius, Nike+ run tracking, Foursquare, Fitbit, and the entire "quantified self" movement. When mashed up against the other three graphs, it offers the potential for real insight.
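The China-versus-Europe example amounts to weighting observed behavior more heavily than stated interest when ranking options. Here is a hedged Python sketch of that idea; the weights, destination tags, and signals are invented for illustration rather than drawn from any real travel service.

# Hypothetical signals for one user.
stated_interest = {"china": 0.9, "europe": 0.2}   # what the user says (interest graph)
observed_trips = {"europe": 4, "china": 0}        # where the user actually went (behavior graph)

deals = {"Beijing": "china", "Paris": "europe", "Berlin": "europe"}

def rank_deals(deals, stated_interest, observed_trips, behavior_weight=0.7):
    """Score each deal, trusting observed behavior more than stated interest."""
    max_trips = max(observed_trips.values()) or 1
    scores = {}
    for city, region in deals.items():
        interest = stated_interest.get(region, 0.0)
        behavior = observed_trips.get(region, 0) / max_trips
        scores[city] = behavior_weight * behavior + (1 - behavior_weight) * interest
    return sorted(scores, key=scores.get, reverse=True)

print(rank_deals(deals, stated_interest, observed_trips))  # ['Paris', 'Berlin', 'Beijing']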
The real potential of contextual computing isn’t about just one of these
graphs. It’s about connections that resonate between them and which get tailored
to different kinds of experiences. Early entrants like Google’s Now and Glass
projects, Highlig.ht, and Siri are just beginning to experiment with these
technologies. Just as the visionaries at Xerox PARC (who developed the
foundational technologies of every desktop PC) could not have fully grasped the
long-term impact of the mouse and graphical computing when they began working on
them in 1973, we cannot say now which contextual applications will emerge as
most vital. The way to the future will be paved with many thousands of interesting failures.
Granted, true contextual computing is a little further around the corner than
the most optimistic pundits would have you believe. But that should not be taken to mean it is unlikely ever to arrive. As Bill Gates astutely pointed
out, “There’s a tendency to overestimate how much things will change in two
years and underestimate how much change will occur over 10 years.” (Notably, the
tablet computers he introduced in 2001 didn’t achieve commercial success until
the launch of the iPad in 2010.)
Within a decade, contextual computing will be the dominant paradigm in
technology. Even office productivity will move to such a model. By combining a
task with broad and relevant sets of data about us and the context in which we
live, contextual computing will generate relevant options for us, just as our
brains do when we hear footsteps on a lonely street today. Then and only then
will we have something more intriguing than the narrow visions of wearable
computing that continually surface: We’ll have wearable intelligence.
As a Partner and Co-Founder of Predictiv and PredictivAsia, Jon specializes in management performance and organizational effectiveness for both domestic and international clients. He is an editor and author whose works include Invisible Advantage: How Intangibles are Driving Business Performance. Learn more...