A Blog by Jonathan Low

 

Jun 22, 2019

Artificial Intelligence And the Threat Of Diminishing Returns

What if, by the nature of its benefits, many of artificial intelligence's gains have already been realized? JL

Kurt Cagle reports in Forbes:

Artificial intelligence is deflationary - it improves the efficiency of processes and augments people's abilities well beyond their native capabilities. AI provides huge benefits initially, but eventually reaches a stage where further investment results in diminishing returns of efficiency. We have likely passed the mid-point of this evolution. This does not imply that there are no reasons for investment in computational innovation, only that expectations need to be adjusted about what artificial intelligence and computational efficiency are likely to bring in the years ahead.
Artificial intelligence, the use of computer processes to infer and make decisions on information about the world that is not necessarily explicitly given, has been a hallmark of much of this decade. From word processors that went from simple spell check to office suites that now have a significant hand in the production process, from cruise control to self-driving vehicles, from halting speech recognition software to fully integrated video/audio concept recognition, AI and its related technologies have quietly but perhaps irrevocably changed our relationship with computers far more than most people realize.
Yet as the information revolution continues, the impacts that it is having upon our economy are now reaching an extent where most of the models that economists have formulated about how that economy works are being thrown out. We're in terra incognita at this stage, and this, in turn, is forcing politicians, policy makers, economists, business leaders and everyday people to rethink many of the fundamental assumptions on which we base our notions of work, value and utility.
Almost everything that deals with intellectual property has gone digital in the last three decades. Not only do the physical products that used to convey that IP - from novels to newspapers to movies to music - no longer require physical media to transport their content, but increasingly the IP being produced today cannot be transcribed back to physical media at all.
Digitization means networking, live (and changing) content, annotations, and interactions not only with the producers but with other consumers of that IP. Cable companies are throwing in the towel as wireless bandwidth becomes fast enough to compete with (and ultimately outcompete) the ability of fixed-line companies to provide bandwidth to highly mobile customers. This might not seem to be an AI issue at first, but AI is increasingly being called upon to better recommend content in an ever more competitive space of products, from books and movies to toothpaste and deodorant, while at the same time attempting to find the sweet spot that moves the greatest amount of product for the least amount of money.
This is leading to declining margins for more and more producers and first-tier distributors, even as it also leads to declining actual production. AI is there too - the ability to more accurately track and fulfill orders means that the need to produce excess product (which is the foundation of mass production) also declines. This makes its way back up the supply chain, as fewer intermediate products are needed, and from there fewer raw materials. There is, of course, a lower limit to this: the point where supply chains are minimized to be as efficient as possible.
However, even here that limit is not a hard one. 3D printing, which makes extensive use of AI as well as the raw computing power that AI also requires, is reshaping manufacturing profoundly. If the number of parts needed for a particular product drops below a certain minimum, it becomes increasingly economical to make those parts using 3D printing, shaping and assembly processes. This lower floor reduces the need for third-party suppliers, which continues to impact manufacturing.
Ironically, at a time when the political rhetoric seems to be all about immigrants taking away jobs, the reality is far more sobering. Re-onshoring - companies bringing production back after a couple of decades of offshoring - is gaining momentum primarily because finishing companies are reducing their supply chain exposure, incorporating both AI and 3D manufacturing to bring overall costs below what it would take to import the intermediate components from overseas.
Put another way, it's cheaper to manufacture in the US again - not because of labor costs, but because of the reduced need for labor and a significantly shorter supply chain in the first place.
The Era of Diminishing Returns
Most people have a somewhat skewed view of what inflation is. At its core, you have to ask: inflation of what, relative to what? In the consumer sense, inflation occurs when the cost of the same goods and services increases over time. This is goods inflation, and several factors contribute to it. Wage inflation means that, in general, the cost of hiring people with the same skill set is increasing. It is an economic truism that wage inflation leads to goods inflation, as there is more demand for the same limited set of products, so people are willing to pay higher prices. In actuality, goods inflation is only marginally coupled to wage inflation, a fact that was known even sixty years ago when the truism was first promoted.
The cost of producing a particular physical product actually comes from several factors - how much the raw materials and intermediate components cost, how much infrastructure and energy is necessary to process and assemble those pieces, how much gets siphoned off for research, financing, marketing, distribution and support, and how much gets paid back to investors in the form of dividends. There are also psychological factors - most purchasers of finished goods tend to overestimate the actual cost because they have generally been conditioned to accept a given price as being the final price, especially those people, like the author, who are still thinking in terms of the pre-computer economy.
Here's the paradox, however. When supply chains are shrinking, when fewer items are being produced because they don't need to be, and when both IP products and physical products with IP components are reducing the overall requirement to buy physical things like DVD players, demand is dropping. When demand drops, prices should drop. In purely IP circles, prices have been dropping - the number of publishing companies has exploded, but they exist primarily on tiny margins, sufficient to take care of the basic needs of the author and publisher but only just (and that's if they're successful).
The reality, though, is that most of the gains from profits over decreasing production costs have gone into stock markets and dividends, which accounts for the explosive growth in most of the major exchange indices over the last decade. In effect, we're seeing stock market inflation, as investors chase companies that produce the highest dividends despite the fact that, except for a few long-term bets, the actual potential of these companies to continue paying such high dividends (through sales) is decreasing pretty dramatically.
To put things into perspective, the potential for artificial intelligence in 1975 to change the world was high. Computers were large, heavy, expensive and slow. Even then, however, people were using them to format books, perform calculations, generate billing statements and so forth. It would take another decade, and the introduction of both the personal computer and networking protocols, before that potential began to be realized, and the impact it had even over the course of that decade was already leading to massive dislocation as jobs disappeared and people made the transition to the nascent digital economy.
What would follow was a textbook logistic curve, with the seemingly slow plodding of the information revolution suddenly going all hockey-stick-like, leading to Gordon Moore's famous dictum about processing power doubling every eighteen months. Logistic curves occur all the time in natural settings and can be thought of as the behavior of a species as it uses, then exhausts, its available resources.
In the information sphere, this can be thought of as the availability of niches to which computer technology could be applied. A niche, in turn, can be thought of as the food of the logistic curve. When the food is plentiful, the potential for innovation is high. As the food (the available niches) gets more crowded, it takes more energy to find and exploit an "uneaten" niche. Eventually, the system reaches an equilibrium, where population growth slows, then stops altogether.
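For readers who want the shape of that curve, here is a minimal sketch of the standard logistic function, with the carrying capacity K standing in for the pool of available niches (the parameter names and values are purely illustrative):

import math

def logistic(t, K=1.0, r=1.0, t0=0.0):
    # Standard logistic curve: slow start, rapid "hockey stick" growth,
    # then saturation as the carrying capacity K (the available niches) is exhausted.
    return K / (1.0 + math.exp(-r * (t - t0)))

# Illustrative only: growth looks exponential early on, then flattens out.
for t in range(-6, 7, 2):
    print(t, round(logistic(t), 3))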
Now, this is a very simplified model, especially once you factor in AI's ability to enhance other domains. Bio-engineering, the commercialization of space, autonomous vehicles and drones - it would seem the future is written with AI crayons. Yet in reality, most of the ramifications of artificial intelligence have already been worked out, and in many respects the actual value that AI brings, at least as defined in today's terms, is fairly disappointing. Better recommendations, facial recognition, more surveillance, voice activation. It brings more channels for interacting with the noosphere, but as cable proved conclusively, going from a handful of channels to twenty was game-changing, while going from twenty to six hundred ... not so much.
We're now in a period where technological innovations have diminishing returns. Self-driving cars sound cool, but the basic role of such cars - personal transportation - does not change significantly whether the driver has to pay attention to the road or can watch a movie on the way to work. Autonomous lawn mowers sound like a winner in the AI category, yet unless you happen to be a groundskeeper for a golf course (and perhaps not even then), the utility of buying one versus paying a kid to mow the lawn with a traditional lawnmower still works out in favor of the kid for most people.
You can see this in the lagging IoT space. Earlier this decade IoT hype reached a fever pitch, despite the fact that computers (and even robots) have been connected to networks for decades. What came out of it was the next generation of programmable thermostats (which are now so intelligent that they are almost impossible to program), voice-activated digital assistants that let you replace five seconds of keyboard time with fifteen seconds of voice commands, and the ability to turn the lights on and off with your smartphone - all very useful things, right? Well, arguably, no. The ability to create a programmable thermostat in the first place is useful, but the ability to control it from your cell phone (and an app, always an app) is gee-whiz, look-at-that technology - technology in desperate search of a problem.
The real innovation was the smartphone, and herein lies the dilemma. By sheer happenstance, the search to create a better phone, one not tethered to a phone line, created what amounted to the perfect form factor for a user interface - something small enough to fit in the hand and still provide useful information, and, somewhat coincidentally, something that you can hold up to your ear. For us bipedal apes, it was about the best thing since sliced bread.
Unfortunately, now that we have settled on that form factor, subsequent would-be innovators have discovered that, well, most other form factors just don't do it for us. Smartwatches are too small for most purposes, don't have a large enough surface to make a decent speaker feasible, and frankly, the Dick Tracy-esque talking into your sleeve just looks dorky. VR glasses and goggles are generally highly distracting, lending themselves well to gaming (and similar immersive environments) but not to most day-to-day activities.
This is not to say that there isn't still room for innovation, only that such innovations are unlikely to be radical changes to what already exists. Holographics - the ability to project 3D - is still out there, but holographics faces the same problem that virtual reality does: it looks very cool when used as a movie effect, but most environments are too busy for holographics to show up well, and in most cases, quasi-holographics (3D within a screen) usually gives the same ability without the distractions and with greater control.
Indeed, this becomes obvious if you look at the marketing efforts for these technologies: "better", "improved", "faster", "better resolution". These are all signals that the core technology, engineering and science are well understood, and that any major breakthrough will give only an incremental improvement; this, in turn, places an upper boundary on good enough. When you replace a screen that's 640x480 with one that's 1280x960, the difference is profound, but the next jump, to 2560x1920, while squeezing four times as many pixels into the same space at roughly the same cost, does not appreciably improve the quality of the image. Diminishing returns rears its ugly head once again.
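The arithmetic behind that example is easy to check; the sketch below (with the same illustrative resolutions) shows how each jump quadruples the pixel count even as the perceptual payoff shrinks:

# Each step quadruples the number of pixels the hardware must drive,
# while the visible improvement gets smaller and smaller.
resolutions = [(640, 480), (1280, 960), (2560, 1920)]
previous = None
for width, height in resolutions:
    pixels = width * height
    note = "baseline" if previous is None else f"{pixels // previous}x more pixels"
    print(f"{width}x{height}: {pixels:,} pixels ({note})")
    previous = pixels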
This article has focused on consumer electronics, but the same effect is occurring in most high-tech fields. Once the good-enough threshold is reached, the demand for new products drops. Put another way, the price should come down. In reality, technology prices tend to be fairly inelastic - retailers are disinclined to lower prices directly (for many reasons), but the number of sales and specials increases in order to reduce otherwise unmovable inventory.
Devolution of the Retailer
Beyond this mechanism, a second one worth watching is the shift toward subscription services in support of online purchases. Amazon Prime is a good example: by buying into a subscription, you are in effect getting discounts that, over the course of the membership, significantly reduce the cost of the goods you are buying. The effect, economically, is to make prices more elastic, which makes it harder for sellers to arbitrage the inelasticity of prices. The exchange makes markets more efficient, but because markets have traditionally been weighted toward retailers, such efficiencies are not always welcome.
At the same time, this has been a major boon to suppliers, who were themselves disintermediated during much of the 20th century as the rise of retail distribution ate into their margins. Ironically, such exchanges, which do require increasingly sophisticated AI, give smaller and more bespoke vendors a place to sell their products without the barrier-to-entry costs that made mass production feasible only for large investors.
This has created increasingly narrow markets targeted to those with both the means and the interest to support those markets. For instance, many film production houses are bypassing the large studios and movie theatres altogether and creating special-run shows specifically for home theater markets, along with other specialized venues such as custom children's programming for hospitals or even thirty-second mini-casts for gas stations. The whole cosplay movement, for example, would be unable to exist without these channels, as the market is simply too small and fragmented for a large retailer to take seriously.
The subscription model also helps these micro-producers get exposure that they otherwise wouldn't. Subscription services reduce the overall amount paid to the supplier, but at the same time they have the potential to make up the difference in volume of sales due to that exposure. This holds especially true in the arena of intellectual property - software, literary works, courses, music, video, any product that can be created once and then distributed without the supplier having to pay the conversion cost of turning bits into atoms. The cost to promote (what fuels advertising) drove twentieth-century economics, but the twenty-first century is shaping up to be very different, with the product itself frequently serving as the advertisement - the first few books of a series (or episodes or versions) get people hooked on how the story ends, and the denouement (or payday for the author/creator) comes in part at the end, and in part from customers in micro-markets who want to support the creation of the rest of the story.
Ironically, this process is also fueling a new kind of supply chain, one in which software is sold to makers who in turn use that as grist for their own products. To illustrate this, I do a fair amount of 3D rendering, in part because I'm looking for something distinct to illustrate the articles and books that I write, and in part to sell other images to exchanges where other people may turn them into book covers, advertising, game characters or just raw stock. I do some of my own 3D modeling and rigging as well, but I also buy models from third-party developers, some of which can run to a significant amount of money over time. In the end, I make a small profit from it, which usually goes into buying more models or HDRI landscapes or utility packages to produce a better product.
This may seem like a minor hobbyist use, but consider that most car manufacturers now take the CAD artwork for their vehicles and convert it into formats that can be worked with in 3D modeling/rendering environments such as the Unreal game platform. These models (generally reduced in complexity from the originals) are then sold for upwards of several hundred dollars to advertisers, who use them to create hyperrealistic views of those cars in just about any setting. Chances are pretty good that a third or more of the automobiles in car advertisements in various media (plus movies and shows) don't actually exist as anything but a CGI model.
This completes the circle on 3D printing and AI. Already, automobile and aircraft companies are relying upon 3D printing to create seamless parts that don't require riveting or welding (always potential points of failure), in sizes up to and including aircraft wings and fuselages. Increasingly, you're also seeing the same artisanal factories creating bespoke aircraft, cars and similar goods without ever tooling up a production line. The models exist (in many cases similar or identical to the ones animators use to drive a car along an imaginary road) to instruct the manufactories how to create the components, often taking advantage of AI to reduce or eliminate the need for bolts or screws in the first place.
Such goods are (occasionally) more expensive, but given that they are wholly customized this is hardly surprising, and the cost of tooling up a traditional plant to create such bespoke goods would not be even remotely competitive. Again, however, this translates into fewer goods being created, because the buyer can establish the needs and requirements ahead of time, and it potentially means longer product life spans because, once created, such goods can be reconfigured without the need to replace everything (for example, swapping out the body of a car without replacing the engine or electrical system).
The Augmentation Economy
Given all of this, how will AI-induced deflation affect the economy? The bad news is that it is destroying the economy that existed in the twentieth century. The good news is that the evolution is largely done, although the societal impacts are now playing out.
First, this economy does not favor investors seeking a large return on their investment, except in a few key areas, though it is an arena where a larger number of smaller investors can still make a killing, primarily by helping popular small businesses get past modest barriers to entry. Markets are fragmenting and (perhaps worse) fractalizing, making it harder to put larger sums of money to work effectively. Markets (as pricing mechanisms) are becoming more efficient, but they have also become decoupled from traditional equity exchanges, which work on the assumption of large companies providing long-term returns through mass production. Microsoft, Amazon and Apple are now valued as trillion-dollar companies, but given the largely digital nature of what they produce, the likelihood that they will actually return that kind of value as dividends is vanishingly small.
It is also an economy that is starting to favor makers and analysts over shapers and mediators and will continue to do so for some time. Shapers, in general, are decision-makers, those who establish policy and direction within an organization or agency. Mediators are salespeople or marketers, whether they are on the floor doing retail or are brokers overseeing billion dollar contracts. Makers are for the most part designers and craftspeople, working in either the physical or virtual realms to create templates, intellectual works and software. Analysts identify patterns and use those patterns to better predict future behavior, identify opportunities, determine malfeasance or establish best courses of action. They are also "explainers", people who educate others about specific topics or concepts, and as such include teachers, librarians and journalists.
In the twentieth century, the mediators and shapers dominated. However, exchanges such as Amazon, the shrinking of supply chains, and the shift of sales to the online world (and to AI) in general are dealing death blows to the mediator professions - marketers, salespeople, recruiters, agents, really anyone who makes a living trying to facilitate a deal. Shapers are currently only mildly affected by these changes, but that is changing as ever more of their mandate can be accomplished via AI, and as it requires the services of analysts to establish the parameters of the problems they are attempting to shape. This can be seen in the rise of the COO as a critical part of an organization - the person who basically makes the organization work - and it is likely that over time more shapers will come up through the ranks of the analytical COO rather than through marketing and sales, as was the case in the past.
As different classes of professions have gone through the digitalization mill, their original roles and valuations basically collapsed. Making a living as a journalist even ten years ago was damn near impossible because journalism was one of the first fields to be significantly rewritten by the move to the cloud. However, the power and influence of online journalists, the ones that made the transition earliest, has been notable in recent years.
Enterprising Millennials are making a decent if not great living as journalists, analysts or influencers. Actuaries, long considered one of the most boring jobs in the world, have been remade as rock star data scientists. Authors who watched their earnings collapse over the last couple of decades are now beginning to make respectable money again, albeit at the cost of shifting from working on epic blockbusters to cranking out smaller, bite-sized novelettes. Librarians are becoming data stewards, and many sculptors and artists are now working in Hollywood (or perhaps closer to Skywalker Ranch) or in gaming.
What's significant is that most of these fields require some form of AI to practice. In general, fields that have been least impacted by digitalization still obey old-economy rules, but they are decaying in value. If you are a farmer in the old economy, you're watching your income disappear; the people entering agriculture today, however, more often than not have Masters degrees or even PhDs in genetics, biology or systems science, and they are the ones succeeding. The automotive industry is hamstrung by the lack of people with solid engineering degrees, design skills and the ability to work with 3D tools. Even professional sports is becoming more science than art, with computerized tools to analyze swings, throws and shots, how athletes run or dance or swim, and so forth.
The critical point to observe here is that the jobs involved are not mainly programming jobs. Indeed, there are signs that coding itself has reached an equilibrium point with some growth but with the commoditization of what had once been hot fields.
In effect, even programming has become so specialized that coders have become web developers or data developers or devops or testing engineers, each working within specific frameworks and generally only peripherally concerned with most of what they likely learned in college. For them, artificial intelligence can be found in compiler (or transpiler) design, in which software uses increasingly sophisticated "emergent behavior" mechanisms to optimize performance or flexibility. Put another way, a coder can tell that their code runs more efficiently, but increasingly without understanding why it does so.

So What Comes Next?
In the short to intermediate term, non-augmented workers are becoming less valuable in the marketplace, while those workers who have the most complex augmentation requirements are generally doing well. This cuts across all industries and sectors. The digital divide will continue to grow, and access to both software and hardware will become one of the major determining factors in how successful a given person will end up being, regardless of socioeconomic level.
At the same time, the benefits due to productivity, which have largely fueled the imbalance in the economy and the potential for large financial gains, will continue to decline as the population of the technologically literate stabilizes. ROI is not yet negative - investing in new technology will still bring an advantage in the marketplace - but because most such investments today spread very quickly, the competitive advantage from any one investment will be smaller (perhaps considerably smaller) than it would have been ten, twenty or thirty years ago. We are reaching a mature market. Instead, such investment mostly increases the quality and resolution of the products created - better analysis, more realistic renderings, more intelligent driving systems.
Additionally, the energy costs of computation are beginning to become an issue. Training a sophisticated machine learning system can require enough computing power to generate as much CO2 as a round-trip flight across the US, while bitcoin mining is proving nearly as destructive to the environment as strip-mining by the same metric. Moore's law no longer really holds - computational power is not doubling every eighteen months or even every two years, but closer to every five, and that figure already factors in improvements in routers, lines, stacked arrays of chips and similar "tricks". Political factors, including trade wars with China, Mexico and Canada, are likely to pinch the supply of both hardware and software, raising the overall costs of such goods and decreasing the ROI from tech investment further.
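To see how much that change in doubling time matters, a quick back-of-the-envelope sketch (the time span and doubling rates are illustrative, not measurements):

# Compound growth over the same ten-year span at different doubling times:
# an 18-month doubling yields roughly 100x, a five-year doubling only about 4x.
years = 10
for doubling_time in (1.5, 2.0, 5.0):
    gain = 2 ** (years / doubling_time)
    print(f"doubling every {doubling_time} years -> about {gain:.0f}x in {years} years")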
This means that the next few years will be marked by consolidation, slower rates of innovation and diminishing returns on investment. During this period, a lot of the knowledge that analysts and data scientists brought to the table will be converted into software. Similarly, the next decade will mark the rise of the impresario, as tools for the creation of entertainment and informational content become powerful enough to eliminate the need for expensive rendering and post-production suites, and as the entertainment cloud - which combines stories, electronic games and video production into a single seamless whole - makes the creation of good content possible at a small fraction of the cost it would have taken a decade ago.
This leads to an obvious conundrum, however. Pre-industrial and industrial economies work upon two precepts: scarcity increases the value of resources, and people's labor is treated as a resource, with specialized skills having more value than unspecialized skills due to their scarcity. Efficiencies due to AI have driven down the costs of extracting those resources dramatically. For instance, in nominal terms the cost of a gallon of gas in 1970 was 33 cents. Fifty years later, the same gallon should cost about $2.30 in 2019 dollars, adjusted for inflation (it's about $2.80 in most of the country), despite the fact that the cost of extracting a barrel of oil has more than doubled since then in the same inflation-adjusted terms.
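The inflation adjustment itself is straightforward; here is a minimal sketch of the calculation, using approximate annual-average CPI-U values (illustrative figures, not the author's source data):

# Convert a 1970 price into 2019 dollars: price_then * (CPI_now / CPI_then).
price_1970 = 0.33                     # nominal price per gallon in 1970
cpi_1970, cpi_2019 = 38.8, 255.7      # approximate annual-average CPI-U values
adjusted = price_1970 * (cpi_2019 / cpi_1970)
print(f"${price_1970:.2f} in 1970 is roughly ${adjusted:.2f} in 2019 dollars")

This lands at roughly $2.17, in the same ballpark as the $2.30 figure quoted above.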
If AI is facing diminishing returns, however, this does not bode well for the cost of things. Again the oil field example provides a useful analogy. Early AI and related technology (satellites, gravimetric detectors, 3D visualization software and so forth), when they started becoming widely deployed in the 1990s, made it possible to locate an oil field to within ten kilometers on land and perhaps twenty kilometers underwater. That resolution improved steadily, to the point where a field can now be pinned down to perhaps tens of meters. This has been a boon to oil exploration, as each failed drilling represents a sunk cost of tens or even hundreds of millions of dollars.
However, much beyond where the technology is at the moment, improving the resolution by another 50% will not yield any significant improvement in the ability to find oil. This means that at a certain point AI no longer provides enough benefit to balance out the scarcity (and may very well have increased the scarcity in the process, cf. the Jevons paradox). Oil, water, various metals, rare earths, arable land - all of these climb in price quickly once that happens, because the process of extracting these resources has reached the maximum efficiency possible and so no longer masks the underlying costs. The result is true commodity inflation, which very quickly translates into general price inflation as these costs get passed into housing, vehicle, food, apparel and other costs.
Price inflation has been comparatively tame for the last quarter century. The period from 1970 to 1995 saw the dollar lose 65% of its value; from 1995 to 2020, the dollar lost only 33% of its value. I am arguing here that artificial intelligence, in its various manifestations, was responsible for that drop, as the cost of resource extraction and processing was made far more efficient. Yet because most of the efficiencies to be gained have been gained (perhaps 80%), it is very likely that the next 25 years will see a resurgence in inflation, with the dollar losing 75% or more of its value between now and 2045. Again, to put this in perspective, if you had an income of $50K in 1995, your income would need to be $100K today to have the equivalent purchasing power. In 2045, an equivalent income would need to be $200K, with the poverty line at about $85,000 a year.
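A minimal sketch of the purchasing-power arithmetic at work in these comparisons (the inputs are illustrative; plug in whichever loss-of-value estimate you prefer):

# Income needed later to match an earlier year's purchasing power,
# given the fraction of the dollar's value lost over the period.
def equivalent_income(income, fraction_of_value_lost):
    return income / (1.0 - fraction_of_value_lost)

# Illustrative: a 50% loss of value doubles the income needed; a 75% loss quadruples it.
print(equivalent_income(50_000, 0.50))   # 100000.0
print(equivalent_income(50_000, 0.75))   # 200000.0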
When will this inflation happen? Inflation's funny - for 2019, actual commodity inflation is fairly benign at about 1% a year. It is possible that commodity inflation may in fact go negative in the face of a recession (which is looking increasingly likely this year or in early 2020), as it did in 2008 and the near-recession of 2013. However, by 2030, barring some major advances in materials science or the advent of fusion, the limits of AI-driven efficiency will be clearly felt, with inflation potentially moving above 5% for the first time since 1983.
Conclusion
Artificial intelligence, in its various manifestations, is deflationary - it improves the efficiency of processes and augments people's abilities well beyond their native capabilities. However, the benefits to be gained from AI are fundamentally logistic in nature - huge initially, but eventually reaching a stage where further investment in AI yields diminishing returns of efficiency. We have likely passed the mid-point of this evolution, and indeed are perhaps three-quarters of the way to the break-even point, where the benefits of AI are no longer worth the investment in it. As this is a process that will likely take a couple more decades to fully play out (and has been ongoing since the invention of the computer in the 1940s), this does not imply that there are no reasons to invest in computational innovation, only that expectations need to be adjusted about what artificial intelligence and computational efficiency are likely to bring in the years ahead.
