A Blog by Jonathan Low

 

Aug 3, 2017

To Realize Their Potential, ChatBots Need Deep Learning

The vision of knowledge embedded in chatbot-driven conversation is becoming a reality. But the data required to perfect these models still needs considerable refinement before it can usefully power machine learning algorithms. JL

Mazdak Rezvani reports in VentureBeat:

If a modern conversation engine hopes to go beyond answering simple, one-level questions, it must blend the most prominent techniques emerging from the field of deep learning with solid statistics, linguistics, other machine learning techniques, and more structured classical techniques, such as semantic parsing and program induction. The first stop in building an intelligent conversational system is data. While endless streams of data are constantly being generated, most of it is too raw to be of immediate use for machine learning algorithms.
Most tech giants are investing heavily in both applications and research, hoping to stay ahead of the curve of what many believe to be an inevitable AI-led paradigm shift. At the forefront of this resurgence are the fields of conversational interactions (personal assistants or chatbots) and computer vision and autonomous navigation, which — thanks to advances in hardware, data availability, and revolutionary machine learning techniques — have enjoyed tremendous progress within the span of just a few years. AI advances are turning problems previously thought to lie beyond the realm of what machines could tackle into commodities that are percolating into our everyday life.
Tailing the remarkable growth in popularity enjoyed by AI, a new generation of chatbots has recently flooded the market, and with them the promise of a world where many of our online interactions won’t happen on a website or in an app, but in a conversation. Helping turn this promise into reality is a combination of better user interfaces, the omnipresence of smartphones, and new, state-of-the-art machine learning techniques that simulate conversation, even across other conversations between the bot and the consumer. Moreover, business goals and the intent of the consumer can influence the kind of response the bot will give.
If a modern conversation engine hopes to go beyond answering simple, one-level questions, it must blend the most prominent techniques emerging from the field of deep learning with solid statistics, linguistics, other machine learning techniques, and more structured classical techniques, such as semantic parsing and program induction.
The first stop in building an intelligent conversational system is data. In particular, deep learning is notorious for needing vast amounts of high quality data before it can unleash its true potential. But while we live in an era where endless streams of data are constantly being generated, most of it is too raw to be of immediate use for machine learning algorithms.
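To make that concrete, here is a minimal sketch, entirely illustrative and not from the article, of the kind of cleanup raw chat logs typically need before a learning algorithm can use them; the specific rules and the sample messages are assumptions.

```python
import re

def clean_utterance(text: str) -> list[str]:
    """Turn one raw chat message into a list of lowercase tokens.

    The rules here (dropping URLs, stripping punctuation, keeping only
    word characters and apostrophes) are illustrative; a real pipeline
    tunes them to its own domain and language.
    """
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)    # drop URLs
    text = re.sub(r"[^a-z0-9' ]+", " ", text)    # strip punctuation and symbols
    return text.split()

raw_logs = [
    "Hey!! can u check my order #1234?? http://example.com/track",
    "Sure - what's the email on the account?",
]
print([clean_utterance(m) for m in raw_logs])
```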
Unsupervised learning, the subfield of machine learning devoted to extracting information from raw data without human assistance, is a promising alternative to relying solely on hand-labeled examples. Among its many uses, it can be employed to build an embedding model. In plain English, these techniques allow data to be represented in a less complex form, so patterns can be discovered more easily.
While unsupervised learning is already ubiquitous in machine learning, deep learning offers additional, innovative ways to build such embedding models, delivering state-of-the-art performance. Refining these techniques can reduce the need for large volumes of expensive, high-quality labeled data, which is otherwise essential to getting artificially intelligent chatbots to perform well.
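As a rough illustration of what an embedding model does, the sketch below learns word vectors without any labels by factorizing a co-occurrence matrix with SVD. This is a classical, non-deep technique chosen only because it is compact; the toy corpus and the choice of three dimensions are assumptions, and the deep learning variants the article alludes to replace the factorization with a trained neural network.

```python
import numpy as np

# Toy corpus standing in for "endless streams of raw data".
corpus = [
    "where is my order",
    "track my order status",
    "cancel my order please",
    "what is the refund status",
]
tokens = [s.split() for s in corpus]
vocab = sorted({w for sent in tokens for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Count how often words co-occur within the same utterance.
cooc = np.zeros((len(vocab), len(vocab)))
for sent in tokens:
    for a in sent:
        for b in sent:
            if a != b:
                cooc[idx[a], idx[b]] += 1

# Unsupervised "embedding": keep only the top-k singular directions,
# a less complex representation in which related words end up close together.
k = 3
U, S, _ = np.linalg.svd(cooc)
embeddings = U[:, :k] * S[:k]

def similarity(w1, w2):
    v1, v2 = embeddings[idx[w1]], embeddings[idx[w2]]
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9))

print(similarity("order", "track"), similarity("order", "refund"))
```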
However, the standard approach in deep learning involves collecting a large, highly specific dataset, which is subsequently used to train a network with a mostly static architecture. Once trained, the network maps directly from input to a fixed set of outputs that are known in advance. Despite being the foundation of remarkably powerful systems, this approach isn’t flexible enough to handle the kind of information needed to carry a realistic conversation. This brings us to the next big obstacle in the way of truly human-like chatbots: the ability to maintain and reason with an internal model of the world.
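The "fixed set of outputs known in advance" is easy to picture in code. The sketch below, a hypothetical intent classifier rather than anything from the article, can only ever answer with one of the intents it was built around; handling a genuinely new kind of request means changing the architecture and retraining.

```python
import torch
import torch.nn as nn

INTENTS = ["track_order", "cancel_order", "refund_status"]  # fixed, known in advance

class StaticIntentClassifier(nn.Module):
    """Maps a bag-of-words vector directly to one of the fixed intents.

    Once trained, it cannot represent anything outside INTENTS; this is the
    static, input-to-fixed-output mapping described above.
    """
    def __init__(self, vocab_size: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(vocab_size, hidden),
            nn.ReLU(),
            nn.Linear(hidden, len(INTENTS)),  # output layer sized to the fixed label set
        )

    def forward(self, bow):
        return self.net(bow)  # logits over the fixed intents

model = StaticIntentClassifier(vocab_size=100)
logits = model(torch.zeros(1, 100))
print(INTENTS[int(logits.argmax(dim=-1))])
```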
We humans are constantly (and usually subconsciously) checking every new piece of information we receive from our surroundings against an internal model of the world — a model of what is normal and what is not, of how entities are related, how we can make logical inferences involving said entities, and so on. If, when driving, we see a ball rolling down the street, we immediately know we should slow down and remain in a state of alert, looking out for the possibility that a distracted child will soon pop out of nowhere while chasing their ball. This kind of intuition is built on top of an understanding of how entities relate to each other, combined with the ability to make logical connections along a knowledge graph and come up with a conclusion that requires multiple reasoning steps.
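To show what "making logical connections along a knowledge graph" might look like mechanically, here is a toy sketch of my own: a handful of hand-written facts and a breadth-first traversal that chains them into a multi-step conclusion. A real system would learn such relations from data rather than have them hand-written, but the shape of the problem is the same.

```python
from collections import deque

# Toy knowledge graph: (subject, relation, object) facts.
FACTS = [
    ("ball", "found_near", "children"),
    ("children", "may_enter", "street"),
    ("street", "part_of", "driving_environment"),
]

graph = {}
for subj, rel, obj in FACTS:
    graph.setdefault(subj, []).append((rel, obj))

def reasoning_chain(start, goal):
    """Breadth-first search for a chain of relations linking start to goal."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for rel, nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(node, rel, nxt)]))
    return None

# A ball in the street connects, over several hops, to something that matters
# for driving -- the kind of multi-step inference described above.
print(reasoning_chain("ball", "driving_environment"))
```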
This level of automatic and extremely broad reasoning still eludes AI researchers and is perhaps one of the last frontiers in the way of truly intelligent and autonomous AI agents, conversational bots included. The ability to reason is central to accomplishing that goal.
Finally, the ability to put it all together is yet another frontier waiting for a solution. Unlike a search engine where the user is content with being presented a list of matches ordered by relevance, a conversation engine must be more specific. Simply using NLP to identify a set of relevant information is insufficient. It should be able to parse the input, break it down, and present a response to the user that is not only clear and concise, but highly relevant to their taste — rinse and repeat.
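One way to picture that parse, break it down, respond loop is the pipeline sketch below; the keyword matching and the preference weighting are crude placeholders standing in for the NLP and ranking models a real conversation engine would use, and the candidate answers are made up for illustration.

```python
import re

def parse(utterance: str) -> set[str]:
    """Stand-in for real NLP: reduce the input to a set of keywords."""
    return set(re.findall(r"[a-z']+", utterance.lower()))

# Candidate answers with crude topic tags; a real system would retrieve these
# from a knowledge base rather than a hard-coded list.
CANDIDATES = [
    ("Your order shipped yesterday and arrives Friday.", {"order", "shipping"}),
    ("You can request a refund from the billing page.", {"refund", "billing"}),
    ("Our support team is available 24/7.", {"support"}),
]

def respond(utterance: str, user_prefs: dict[str, float]) -> str:
    """Pick the single most relevant answer, biased by the user's preferences."""
    keywords = parse(utterance)
    def score(item):
        text, topics = item
        overlap = len(keywords & topics)                            # relevance to the question
        preference = sum(user_prefs.get(t, 0.0) for t in topics)   # personal taste
        return overlap + preference
    best = max(CANDIDATES, key=score)
    return best[0]

print(respond("Where is my order?", user_prefs={"shipping": 0.5}))
```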
We are still in the early stages of the AI-powered conversational revolution, and it is fair to assume some problems that seem insurmountable today will likely be solved in the coming years. We are quickly moving toward a world in which you will be able to have long and complex interactions with your AI assistants, which will not only understand what you want to say but will know your preferences and tailor your experience accordingly.
To do so, we must merge multiple disciplines, including deep learning, statistics, and others, building technology that blends consumer preferences, environment, and language into one piece of intelligent, flexible software.
