A Blog by Jonathan Low

 

Oct 24, 2019

Can Artificial Intelligence Think?

In human terms, it probably could, but it doesn't yet. JL


Daniel Shapiro comments in Forbes:

When interacting with artificial intelligence interfaces at our current level of AI technology, our human inclination is to treat them like vending machines, rather than to treat them like a person. Today's AI systems learn to think fast and automatically (like System 1), but artificial intelligence as a science doesn’t yet have a good handle on how biases, shortcuts, and generalizations get baked into the “thinking” machine during learning. With today’s AI, there is no deliberative step by step thinking process going on.
Sci-fi and science can’t seem to agree on the way we should think about artificial intelligence. Sci-fi wants to portray artificial intelligence agents as thinking machines, while businesses today use artificial intelligence for more mundane tasks like filling out forms with robotic process automation or driving your car. When interacting with these artificial intelligence interfaces at our current level of AI technology, our human inclination is to treat them like vending machines, rather than to treat them like a person. Why? Because thinking of AI like a person (anthropomorphizing) leads to immediate disappointment. Today’s AI is very narrow, and so straying across the invisible line between what these systems can and can’t do leads to generic responses like “I don’t understand that” or “I can’t do that yet”. Although the technology is extremely cool, it just doesn’t think in the way that you or I think of as thinking.
Let’s look at how that “thinking” process works, and examine how there are different kinds of thinking going on inside AI systems.
First, let me convince you that thinking is a real thing. Putting aside the whole conversation about consciousness, there is a pretty interesting philosophical argument that thinking is just computing inside your head. As it turns out, this has been investigated, and we can draw some conclusions beyond just imagining what thinking might really be. In the book “Thinking, Fast and Slow”, Nobel laureate Daniel Kahneman talks about the two systems in our brains that do thinking: a fast, automatic thinking system (System 1), and a slow, more deliberative thinking system (System 2). Just like we have a left and right brain stuck in our one head, we also have these two types of thinking systems baked into our heads, talking to each other and forming the way we see the world. And so thinking is not so much about being right as it is about two different ways of making decisions. Today's AI systems learn to think fast and automatically (like System 1), but artificial intelligence as a science doesn’t yet have a good handle on how to do the slow thinking we get from System 2. Also, today’s AI systems make the same sorts of mistakes as System 1, where biases, shortcuts, and generalizations get baked into the “thinking” machine during learning. With today’s AI, there is no deliberative, step-by-step thinking process going on. So how can AI “think” when a major component of what thinking is all about isn’t ready for primetime?
Now that we have a bit more definition of what thinking is, how can we make more human-like artificial intelligence? Maybe representing feedback loops will get us to a sort of thinking machine like System 2. Well, as it turns out, we have not cracked that yet. AI models don’t contain common knowledge about the world. For example, I recall Yann LeCun, a “founding father” of modern AI, giving the example sentence “He walked through the door” and pointing out that today’s AI models can’t decide what it means. There is a silly interpretation where we conclude that a person crashed through a door like a superhero, smashing it to pieces. There is another interpretation where either the door was open or the person opens the door to walk through the doorway. Unfortunately, without common knowledge, you don’t really know which situation is more likely. This shows us that even “thinking fast” situations can go poorly using the tools we have available today.
We live in a world where fast thinking AI is the norm, and the models are slowly trained on huge amounts of data. The reason you can’t make a better search engine than Google is not the secrecy of their search algorithms. Rather, it is the fact that they have data you don’t have, from excellent web crawlers to cameras on cars driving around your neighborhood. Currently, the value in AI is the data, and the algorithms are mostly free and open source. Gathering masses of data is not necessarily enough to ensure a feature works. Massive efforts at human labor are often required. In the future, thinking algorithms that teach themselves may themselves represent most of the value in an AI system, but for now, you still need data to make an AI system, and the data is the most valuable part of the project. Thinking is not easily separated from the human condition, but we humans are also far from perfect. We may be smart on average, but as individuals, we are not built to do statistics. There's some evidence for the wisdom of crowds, but a crowd holding pitchforks and torches may change your mind. As it turns out, we are adapted through the generations to avoid being eaten by lions, rather than being adapted to be the best at calculus. We humans also have many biases and shortcuts built into our hardware. It’s well documented. For example, correlation is not causation, but we often get them mixed up. A colleague of mine has a funny story from her undergraduate math degree at a respected university, where the students would play a game called “stats chicken”, where they delay taking their statistics course until the fourth year, hoping every year that the requirement to take the course will be dropped from the program.
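To make the correlation-versus-causation mix-up concrete, here is a minimal sketch in Python. The variable names are made up for illustration: a hidden third factor drives two otherwise unrelated quantities, and they end up looking tightly linked.

```python
# Minimal sketch: a hidden confounder makes two unrelated variables look correlated.
# The variable names (temperature, ice_cream_sales, sunburns) are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
temperature = rng.normal(25, 5, size=10_000)                    # the confounder
ice_cream_sales = 2.0 * temperature + rng.normal(0, 3, size=10_000)
sunburns = 0.5 * temperature + rng.normal(0, 3, size=10_000)

# Ice cream sales do not cause sunburns, yet they correlate strongly,
# because both are driven by temperature.
print(np.corrcoef(ice_cream_sales, sunburns)[0, 1])             # noticeably positive
```

A System 1 style of thinking, human or machine, happily reads that positive number as "ice cream causes sunburns"; it takes deliberate, slow thinking to go looking for the confounder.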
Given these many limitations on our human thinking, we are often puzzled by the conclusions reached by our machine counterparts. We “think” so differently from each other. When we see a really relevant movie or product recommendation, we feel impressed by this amazing recommendation magic trick, but don’t get to see the way the magic trick is performed. And one is tempted to conclude that machine-based thinking is better or cleaner than our messy biological process, because it is built on so much truth and mathematics. In many situations that’s true, but that truth hides a dark underlying secret. In many cases, it is not so clear why artificial intelligence works so well. The engineering got a bit ahead of the science, and we are playing with tools we don’t fully understand. We know they work, and we can test them, but we don’t have a good system for proving why things work. In fact, there are some accusations even in respected academic circles (slide 24, here) that the basic theory of artificial intelligence as a field of science is not yet rigorously defined. It is not just name-calling or jealousy being hurled by the mathematicians at the engineers. AI is a bunch of fields stuck together, and there really is a lack of connection in the field between how to make things work and proving why they work. And so the question about thinking and AI is also a question about knowledge. You can drive a car without knowing exactly how it works inside, and so maybe you can think, even if you don’t know why your thinking works.
Assuming we don’t have a concrete theory underlying the field of artificial intelligence, how can engineers get anything done? Well, there are very good ways to test and train AI models, which is good enough for today’s economy. There are many types of AI, including supervised learning, unsupervised learning, reinforcement learning, and more. Engineers don’t tend to ask questions like “is it thinking?”, and instead ask questions like “is it broken?” and “what is the test score?”
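For a sense of what “what is the test score?” looks like in practice, here is a minimal sketch, assuming the scikit-learn library and one of its bundled toy datasets: train a model on one slice of data, then score it on data it never saw.

```python
# Minimal sketch of the "what is the test score?" habit: hold out data the model
# never saw during training and measure accuracy on it. Assumes scikit-learn.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)                  # "learning" happens here, offline

print("test accuracy:", model.score(X_test, y_test))  # the number engineers care about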
Supervised learning is a very popular type of artificial intelligence that makes fast predictions in some narrow domain. The state-of-the-art machinery for doing supervised learning on large datasets is feed-forward deep neural networks. This type of system does not really think. Instead, it learns to pick a label (for classification) or a number (for regression) based upon a set of observations. The way decisions are baked into neural networks during “learning” is not obvious without a strong validation step. More transparent AI models have been around for a long time, for example, in areas such as game theory for military planning. Explicit models like decision trees are a common approach to developing an interpretable AI system, where a set of rules is learned that defines your path from observation to prediction, like a choose your own adventure story where each piece of data follows a path from the beginning of the book to the conclusion.
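To illustrate the interpretable end of the spectrum, here is a minimal sketch, again assuming scikit-learn, that trains a small decision tree and prints its learned rules as readable if/else paths, the “choose your own adventure” structure described above.

```python
# Minimal sketch of an interpretable model: a small decision tree whose learned
# rules can be printed and read like a "choose your own adventure" path.
# Assumes scikit-learn and its bundled iris dataset.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Each branch is an explicit if/else rule from observation to prediction.
print(export_text(tree, feature_names=load_iris().feature_names))
```

Contrast that printout with a deep neural network, whose millions of learned weights offer no comparable path from observation to prediction.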
Another type of artificial intelligence, called reinforcement learning, involves learning the transition from one decision to the next based on what’s going on in the environment and what happened in the past. We know that without much better “environment” models of the world, these approaches are going to learn super slowly, even for the most basic tasks. Systems that learn to solve problems this way rely heavily on accurate models of how the world works. When dealing with a problem related to humans, they need lots of data on what those humans do, or like, or think. For example, you can’t learn to generate amazing music without data on what humans like to listen to. In a game-playing simulator an AI model can play against itself very quickly to get smart, but in human-related applications the slow pace of data collection gums up the speed of the project. And so in a broad sense, the AI field is still under construction at the same time as we are plugging lots of things into it.
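To give a flavor of that trial-and-error loop, here is a minimal, self-contained sketch of tabular Q-learning on a made-up toy corridor, not any production system. Even this tiny task needs hundreds of episodes of sparse feedback before the policy settles.

```python
# Minimal sketch of reinforcement learning: tabular Q-learning on a toy 1-D corridor.
# The agent starts at cell 0 and earns a reward only on reaching the last cell,
# which is why learning from such sparse feedback is slow.
import numpy as np

n_states, n_actions = 10, 2          # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy choice: mostly exploit the best-known action, sometimes explore.
        a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
        s_next = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Update the value estimate from what just happened in the environment.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))   # the learned policy: move right toward the reward
```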
Regardless of the underlying technical machinery, when you interact with a trained artificial intelligence model in the vast majority of real-life applications today, the model is pre-trained and is not learning on the fly. This is done to improve the stability of your experience, but it also hides the messiness of the underlying technology. The learning tends to happen in a safe space where things can be tested, and you experience only the predictions (also called inference) as a customer of the AI system.
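The train-once, predict-many pattern can be sketched in a few lines. The model below is illustrative, but the shape is the same in most deployed systems: training and testing happen offline, and the customer-facing code only loads the frozen model and asks it for predictions.

```python
# Minimal sketch of the train-once, predict-many pattern: the model is trained and
# frozen in a "safe space", and the customer-facing code only runs inference.
# Assumes scikit-learn; the dataset and model choice are illustrative.
import pickle
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# --- offline training, done ahead of time and tested ---
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# --- online serving: no learning on the fly, just predictions (inference) ---
with open("model.pkl", "rb") as f:
    frozen_model = pickle.load(f)
print(frozen_model.predict(X[:3]))   # the model's parameters never change here
```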
Despite the hype, AI models that think like we do are not just around the corner, about to exceed humanity in every way. Truly thinking machines are definitely worthy of research, but they are not here just yet. Today, AI models and human analysts work side by side, with the analyst giving their opinion while assisted by an AI model. It is useful to consider more general mathematical models, like those for rainfall estimation and sovereign credit risk, to see how such models are carefully designed by humans, encoding huge amounts of careful and deliberative human thinking. The practice of building AI systems involves a lot of reading and creativity. It's not just coding away at the keyboard.
I feel that AI software developers gradually build a sense for how to think about what an AI model is doing, and it isn’t “thinking”. I wanted to get some input from someone unrelated to me in the artificial intelligence field, to see if they feel the same way. Through the CEO of DTA, I set up a talk with Kurt Manninen about his work on an AI product called AstraLaunch. I asked Kurt a lot of technology questions, leading up to the question “Does the system think like people do?”
AstraLaunch is a pretty advanced product involving both supervised and unsupervised learning for matching technologies with company needs on a very technical basis. A complicated technology like this is a good area to be thinking about “thinking”. The system has an intake process that leads into a document collection stage, and then outputs a chart of sorted relevant documents and technologies. What I wanted to understand from Kurt is the way he thinks about what the matching technology is doing. Is the system thinking when it maps the needs of NASA to the technology capabilities of companies? When diagnosing an incorrect prediction, does he think about the model as making a mistake, or is the origin of the mistake with the model maker and/or the data?
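For readers who want something concrete, here is a minimal sketch of one generic way a needs-to-technology matching step could be prototyped, using TF-IDF vectors and cosine similarity. This is an illustrative assumption on my part, not AstraLaunch’s actual method, and the example documents are made up.

```python
# Minimal sketch of generic need-to-technology matching with TF-IDF and cosine
# similarity. NOT AstraLaunch's actual method; the documents are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

need = "radiation-hardened sensor for monitoring spacecraft hull temperature"
technologies = [
    "thermal imaging camera for industrial furnaces",
    "radiation-tolerant temperature sensor for satellite structures",
    "autonomous drone navigation software",
]

vectorizer = TfidfVectorizer()
vectors = vectorizer.fit_transform([need] + technologies)
scores = cosine_similarity(vectors[0], vectors[1:]).ravel()

# Rank candidate technologies by similarity to the stated need.
for score, doc in sorted(zip(scores, technologies), reverse=True):
    print(f"{score:.2f}  {doc}")
```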
Kurt’s answer was really close to what I expected from my own experience. Technology like AstraLaunch involves humans and AI models working together to leverage the strengths of each information processing approach. But Kurt felt strongly, as I do, that bugs in AI models are the fault of people, not the model. An AI developer can see where the training wasn’t set up properly to understand language or vocabulary, or where the dataset collection went wrong, and so on.
Returning to the original question about artificial intelligence and thinking, I think we can solidly conclude that these systems don’t do thinking at all. If we only have fast and automatic (System 1) artificial intelligence to work with, can we think of an AI model as a gifted employee that thinks differently about the world? Well, no. AI will probably cheat if the training is unmanaged, and so it is a lazy, deceptive employee. It will take the easy way out to get the highest score on every test, even if the approach is silly or wrong. As we try to build a “System 2” that thinks more like us, we need to remember that thinking is not about passing a test. Instead, consider this quote:
The test will last your entire life, and it will be comprised of the millions of decisions that, when taken together, will make your life yours. And everything, everything, will be on it.
John Green
