Monday, October 23, 2017

AI Will Take Decades to Produce Clear Productivity Results

General purpose technologies (GPTs) tend to be important for economic growth because they transform how consumers and businesses do things. The issue is whether artificial intelligence is going to be a GPT.

The steam engine, electricity, the internal combustion engine, and computers are each examples of important general purpose technologies. Each increased productivity directly, but also led to important complementary innovations.

The steam engine initially was developed to pump water from coal mines. But steam power also revolutionized ship propulsion, enabled railroads and increased the power of factory machinery.

Those applications then led to innovations in supply chains and mass marketing, and to the creation of standard time, which was needed to manage railroad schedules.

Some argue AI is a GPT, which would mean significant impact across multiple layers of the economy.

Machine learning and applied artificial intelligence already show operational improvements in all sorts of ways. Error rates in labeling the content of photos on ImageNet, a collection of more than 10 million images, have fallen from over 30 percent in 2010 to less than five percent in 2016 and most recently as low as 2.2 percent, according to Erik Brynjolfsson, MIT Sloan School of Management professor.


Likewise, error rates in voice recognition on the Switchboard speech recording corpus, often used to measure progress in speech recognition, have improved from 8.5 percent to 5.5 percent over the past year. The five-percent threshold is important because that is roughly the performance of humans at each of these tasks, Brynjolfsson says.

A system using deep neural networks was tested against 21 board-certified dermatologists and matched their performance in diagnosing skin cancer, a development with direct implications for medical diagnosis using AI systems.

On the other hand, even if AI becomes a GPT, will we be able to measure its impact? That is less clear, as it has generally proven difficult to quantify the economic impact of other GPTs, at least in year-over-year terms.

It took 25 years after the invention of the integrated circuit for the U.S. computer capital stock to reach ubiquity, for example.

Likewise, at least half of U.S. manufacturing establishments remained unelectrified until 1919, about 30 years after the shift to alternating current began.

The point is that really fundamental technologies often take decades to reach mass adoption levels.

In some cases, specific industries could see meaningful changes in as little as a decade. In 2015, there were about 2.2 million people working in over 6,800 call centers in the United States, and hundreds of thousands more worked as home-based call center agents or in smaller sites.

Improved voice-recognition systems coupled with intelligent question-answering tools like IBM’s Watson might plausibly be able to handle 60 percent to 70 percent or more of the calls. If AI reduced the number of call center workers by 60 percent, it would increase U.S. labor productivity by about one percent over a decade.
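A back-of-the-envelope calculation shows where that one percent comes from (a rough sketch in Python; the total-employment figure and the exact home-based headcount are illustrative assumptions, not numbers from the sources above):

# Rough sketch of the call center arithmetic. Assumed figures (not from
# the article): total U.S. employment of ~150 million in 2015, and
# ~300,000 home-based agents standing in for "hundreds of thousands."
call_center_workers = 2.2e6     # on-site agents, per the article
home_based_workers = 0.3e6      # assumption
total_us_employment = 150e6     # assumption

displaced = 0.60 * (call_center_workers + home_based_workers)

# If total output were unchanged while that labor was freed up,
# output per worker would rise by this factor:
productivity_gain = total_us_employment / (total_us_employment - displaced) - 1

print(f"One-time gain: {productivity_gain:.1%}")                   # ~1.0%
print(f"Spread over a decade: {productivity_gain / 10:.2%}/year")  # ~0.10%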

But it also is quite possible that massive investment in AI could fail to show up as higher measured productivity over a decade or so.

It might well be far too early to draw conclusions, but labor productivity growth rates in a broad swath of developed economies fell in the mid-2000s and have stayed low since then, according to Brynjolfsson.

Aggregate labor productivity growth in the United States averaged only 1.3 percent per year from 2005 to 2016, less than half of the 2.8 percent annual growth rate sustained from 1995 to 2004.
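Compounding those two rates over the same 12-year span shows how large that gap becomes (a quick illustrative calculation, not a figure from Brynjolfsson):

# Compound the 1995-2004 pace and the 2005-2016 pace over 12 years.
years = 12
fast = 1.028 ** years    # 2.8 percent per year, the earlier trend
slow = 1.013 ** years    # 1.3 percent per year, the observed pace

print(f"At 2.8%/yr: +{fast - 1:.0%} cumulative")                     # ~+39%
print(f"At 1.3%/yr: +{slow - 1:.0%} cumulative")                     # ~+17%
print(f"Shortfall versus the earlier trend: {fast / slow - 1:.0%}")  # ~19%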

Fully 28 of 29 other countries for which the OECD has compiled productivity growth data saw similar decelerations.

So some will reach pessimistic conclusions about the economic impact of AI generally. To be sure, there are four principal candidate explanations for the discontinuity between advanced technology deployment and productivity increases: false hopes, mismeasurement, concentrated distribution and rent dissipation, and implementation and restructuring lags.

The first explanation, false hopes, holds that new technology simply will not be as transformative as expected. The second explanation is that productivity has increased, but we are not able to measure it. One obvious example: as computing devices have gotten more powerful, their cost has decreased. We can only measure the retail prices, which are lower; we cannot quantify the qualitative gains people and organizations enjoy.

The actual use cases and benefits might come from “time saved” or “higher quality insight,” which cannot be directly quantified.

Other possible explanations are concentrated distribution (benefits are reaped by a small number of firms) and rent dissipation (where the race to capture gains is inefficient, as massive amounts of investment chase incrementally smaller returns).

The final explanation is that there is a necessary lag between the introduction of a disruptive technology and all the other changes in business processes that allow the new technology to effectively cut costs, improve agility and create new products and business models.

Consider e-commerce, which was recognized as a major trend before 2000. In 1999, though, its actual share of retail commerce was trivial: 0.2 percent of all retail sales. Only now, after 18 years, have significant shares of retailing shifted to online channels.

In 2017, retail e-commerce might represent eight percent of total retail sales (excluding travel and event tickets).


Two decades; eight percent market share. Even e-commerce, as powerful a trend as any, needed that long to claim that much of retail commerce.
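To put the pace in perspective (an illustrative calculation using the share figures above): going from 0.2 percent in 1999 to eight percent in 2017 implies that e-commerce's share compounded at roughly 23 percent per year, and it still took 18 years to reach single digits.

# Implied compound annual growth in e-commerce's retail share,
# using the 0.2% (1999) and 8% (2017) figures from the article.
start_share, end_share, years = 0.002, 0.08, 18
cagr = (end_share / start_share) ** (1 / years) - 1
print(f"Implied annual growth in share: {cagr:.0%}")  # ~23%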

Something like that is likely to happen with artificial intelligence as well. If AI really is a general purpose technology with huge ramifications, it will take decades for the full benefits to be seen.

It will not be enough to apply AI to “automate” existing business processes and supply chains. Those processes and supply chains have to be recrafted fundamentally to incorporate AI. Personal computers could only add so much value when they were substitutes for typewriters. They became more valuable when they could run spreadsheets to model outcomes based on varying inputs.

Computing devices arguably became more valuable still when coupled with the internet, cloud-based apps, video, rich graphics, transaction capability and a general shift to online retailing.
