Despite the avowed need for collaboration, alliances and partnerships in a vast, ever-growing global economic network, the response of most enterprises, professions and individuals is to tighten the framework within which they operate. This may be, in part, because the breadth and changeability of worldwide opportunities and threats triggers a reflexive clinch. Or it could be because of a fear that without a better-defined set of skills and objectives, institutions and the people who make them go will be shunted aside in favor of a more easily understood operational story.
There is logic behind this concern. The capital markets have, for several decades now, steadily reduced their support for the conglomerate approach of the 1960s and 70s. Then, scale and mass were considered the primary drivers of value in a nascent global web. Increasingly, however, the markets have reversed that focus, signaling that investors, rather than corporate executives, are in the best position to make such portfolio judgments.
Speed, too, has hastened this trend. With algorithmic trading, investors can assemble a diverse and complementary set of holdings based on their own needs and desires. They can change that portfolio almost instantaneously rather than waiting months for negotiations over mergers and acquisitions.
It would appear that the result of all this inexorable focusing would be greater simplicity. But the opposite has occurred, as the following article explains. The very natural human desire to differentiate oneself in order to generate attention and, presumably, greater value has caused enterprises and individuals to become almost ridiculously specific about the complexity justifying their supposed worth. Just as the biological imperative to attract a mate has created genetic markers, so the economic imperative has embraced a reductio ad absurdum of qualifications and verifications.
One practical side-effect is that organizations set increasingly unattainable hiring standards based on this ever-narrowing set of requirements, then claim that they cannot find employable candidates due to the failings of the workforce rather than re-examining the pinhole absurdity of their demands. Enterprises, too, attempt to offset the impact of global competition by making themselves 'understandable' but in so doing, simply deepen the mystery of their value proposition by layering on ever more categorical yet incomprehensible details.
The reality is that institutions and individuals must increasingly rely on others in their networks to survive and grow. The key to doing so is reducing barriers to understanding and increasing the opportunities to share, communicate and collaborate. That value proposition is actually pretty simple. JL
Roger Martin comments in Harvard Business Review:
Each narrow knowledge domain develops analytical tool-sets that deepen the narrow knowledge domain.
People who make it their business to study large-scale problems (business theorists and economists among them) seem to be in broad agreement that the world is growing ever more complex — and that this trend makes their work harder. If this is true, then we should be grateful for their ongoing efforts and to a large extent let them off the hook for failing to make more progress. But is it true?
The claim can be hard to evaluate given the number of meanings that attach to the word complexity. But if we start from a solid, shared definition it becomes easier to consider. Of all the definitions, I like Peter Senge’s old but simple one best. He spells it out in The Fifth Discipline, by way of explaining why seemingly sophisticated forecasting tools so often miss the mark:
[T]hey are all designed to handle the sort of complexity in which there are many variables: detail complexity. But there are two types of complexity. The second type is dynamic complexity, situations where cause and effect are subtle and where the effects over time of interventions are not obvious. Conventional forecasting, planning and analysis methods are not equipped to deal with dynamic complexity.

Senge’s distinction between detail complexity (driven by the number of variables) and dynamic complexity (driven by subtlety in the links between cause and effect) is not only key to explaining why some overhyped tools don’t deliver. More broadly, it is consistent with how growing knowledge in a field inherently advances and generates complexity.
The starting point for knowledge is mystery. Everything we now know started as a mystery in which we couldn’t even discern the variables that mattered, and therefore had no capacity to understand cause and effect. Think of how the world was baffled, for example, in the very early days of the AIDS crisis. We didn’t know how to think about this new and horrible condition.
But in due course, as is the case in many domains of knowledge, AIDS became less of a mystery. With hard work and study we advanced to a heuristic — that is, we started to understand which variables mattered and developed a sense of the cause and effect. We came to the conclusion that it is an acquired immune-deficiency condition transmitted primarily through sexual contact. This enabled researchers to focus on the relevant variables and better understand cause-and-effect relationships — for example, the relationship between unprotected sex and transmission.
Some knowledge gets advanced all the way to algorithm, in which every relevant variable is specified and the cause-and-effect relationships are precisely defined. This has happened, for example, with polio. We figured out what causes it and developed a vaccine that, once taken, gives the individual lasting protection against the disease. We haven’t driven AIDS knowledge to an algorithm yet. It is not entirely clear what all the relevant variables are, and there are plenty of cause-and-effect subtleties still to untangle. But our understanding is far advanced from a mystery — and hence the many treatments that help HIV-infected patients avoid developing full-blown AIDS.
AIDS researchers and every other scientist since Aristotle have sought to ferret out cause and effect because they want to explain how the world works. They want to drive knowledge toward an algorithm like E = mc², with all the subtlety gone.
The question is: How do they do it? How do they eliminate the subtlety between cause and effect in order to drive knowledge toward algorithm? Typically, the approach is to tackle cause and effect (dynamic complexity) by reducing the number of variables considered (detail complexity).
My own clan — the economists — is particularly inclined in this direction. There are a thousand economists working on partial equilibrium problems for every one working on a general equilibrium problem. This is despite the fact that no one would contest that general equilibrium clarity is the most valuable knowledge by far. Why? Because it is really difficult to specify any general equilibrium cause-and-effect relationships.
Instead, most of the guns deployed in modern knowledge advancement are aimed at narrow problems for which the cause-and-effect relationship is specified with the famous “all other things being equal” proviso.
Each narrow domain develops ever more algorithmic knowledge, and those developing the knowledge are extremely confident that they are right because they are so specialized within their own domain. The liver expert is completely confident that he or she is correct even if it is the interaction with another condition that threatens your health most.
This approach has created another kind of complexity: inter-domain complexity. Every field is segmented into multiple domains, each with deep algorithmic knowledge, specialized tools, and experts in the domain who think they are absolutely right. And they are indeed right, as long as we ignore the reality of detail complexity.
However, the real world we live in, and have always lived in, is a world of detail complexity. So when we sacrifice dealing with detail complexity to focus on dynamic complexity, the solutions don’t produce the outcomes that we really want. For all their great work, it is unclear that economists have actually helped government officials with the complex task of managing a national economy any better than before. And despite massive advances in narrow domains of medical knowledge, actual health outcomes have been difficult to improve, especially in areas of high detail complexity.
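The blind spot described above can be made concrete with a toy numerical sketch (all numbers and variable names here are invented for illustration, not drawn from any real study). Suppose an outcome depends on two variables that interact. Each specialist studies one variable while holding the other fixed — the "all other things being equal" proviso — and each is, within that frame, entirely correct. But adding their per-domain findings together mispredicts what happens when both variables move at once, because neither specialist ever observes the interaction term:

```python
# Toy model: outcome = 2*x + 2*y - 3*x*y.
# The interaction term (-3*x*y) is invisible to any analysis that
# varies one variable while freezing the other at zero.

def outcome(x, y):
    return 2 * x + 2 * y - 3 * x * y

# Specialist A varies x with y held fixed at 0: x looks purely beneficial.
effect_of_x_alone = outcome(1, 0) - outcome(0, 0)   # 2

# Specialist B varies y with x held fixed at 0: y looks purely beneficial.
effect_of_y_alone = outcome(0, 1) - outcome(0, 0)   # 2

# Naively summing the two domain findings predicts a gain of 4
# from raising both variables together...
predicted = effect_of_x_alone + effect_of_y_alone   # 4

# ...but the actual joint effect is much smaller, because the
# interaction term neither specialist saw now dominates.
actual = outcome(1, 1) - outcome(0, 0)              # 2 + 2 - 3 = 1

print(predicted, actual)  # prints: 4 1
```

Both specialists are right within their own domain, and the joint prediction is still wrong — a miniature version of inter-domain complexity.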
This is, I believe, what makes it feel as though complexity has increased. I absolutely do not believe that the subtlety between cause and effect has increased at all in the world. But the negative manifestations of largely unaddressed inter-domain complexity make it feel as if massive, unaddressable complexity is overwhelming us.
In other words, we are bedeviled by manufactured complexity — complexity that could have been avoided but has instead been amplified by the pursuit of narrow knowledge in a broad world.
It is vital, therefore, to our ability to make progress against large-scale problems that we figure out how to tackle inter-domain complexity.