As we jump into the fourth quarter of 2023, we’ve just released another exciting episode of Behind the Markets. Professor Daniel Rock, an Assistant Professor of Operations, Information, and Decisions at the Wharton School of the University of Pennsylvania, is a returning guest to the show, and he brought with him Anton Korinek, a Professor in the Department of Economics at the University of Virginia. 

The topic was artificial intelligence (AI), viewed through an economic rather than technical lens. 

AI could be helpful for productivity
Professor Jeremy Siegel, WisdomTree’s Senior Economist, made a point about AI and productivity. AI could be a helpful solution to the ‘productivity paradox’: whilst the economic environment has had many positives in recent years, productivity growth has been non-existent. Notably, he suggested that productivity growth could hit 1.5% or even 2.0%. 

From current levels, and given the scale on which this data point typically moves, that would be a massively positive economic development if it occurred. 
Anton shared his view that it is possible to go above 2.0%. It was interesting to hear him discuss aspects of a ‘measurement challenge.’ He noted that he believes the new AI tools, the large language models, make him roughly 20% more productive. We can accept his word at face value, but this 20% would not automatically be reflected in the statistics just because he says it; he would need to be paid commensurately, 20% above his current level. 
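To see why a shift of this size in productivity growth would matter so much, a simple compounding sketch helps. The rates below are the hypothetical figures discussed in the episode, not forecasts, and the 20-year horizon is purely illustrative:

```python
# Illustrative only: compound annual productivity growth over 20 years,
# comparing a near-stagnant rate with the 1.5% and 2.0% figures discussed.
horizon_years = 20  # hypothetical horizon chosen for illustration

for annual_growth in (0.005, 0.015, 0.02):
    # Output per hour relative to today after compounding for the horizon
    cumulative = (1 + annual_growth) ** horizon_years
    print(f"{annual_growth:.1%} growth -> {cumulative:.2f}x output after {horizon_years} years")
```

At 0.5% a year, output per hour rises roughly 10% over two decades; at 2.0%, it rises nearly 50%, which is why even a move from 1.5% to 2.0% is economically significant.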

Productivity: historical context to bring us to present day
Many discussions regarding productivity cite a variety of historical periods. It’s important to understand, for example, that those measuring productivity growth will see that the period from the end of World War Two through the 1970s was particularly strong. From the 1970s to the present day, the data have largely trended lower, even if there were a few upward moves in the 1990s. 

It was thought that the upward move in productivity in the 1990s was less about the internet or ‘dot-com’ craze and more about the general computerisation of the economy. A tricky part of any discussion of a new technology, be it the computer, the internet or now AI, is dissemination. 
Take electricity as an example. We would all likely agree that it was a big positive for the global economy, but its usage did not simply follow from its invention. An entire infrastructure had to be built, and people had to shift their habits and learn how to extract the maximum benefits. Building infrastructure, shifting habits and gaining skills are not things that happen instantly. 

With AI, we are in a position where interesting new technologies have emerged and we expect to see a flurry of further use cases. During this discussion, it was estimated that we are a few weeks away from seeing Google DeepMind’s new model, called Gemini, and we will then have to evaluate whether it is simply slightly better than GPT-4 or whether it brings entirely new capabilities to the table. Even if we cannot say exactly what Gemini will bring, we do know that computing power is doubling roughly every six months. Many people did not initially realise how capable these large language models could be, but when something doubles at that rate, it becomes very difficult to predict exactly what will occur, especially over a longer-term period. 
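The difficulty of forecasting under that kind of growth can be made concrete. As a back-of-the-envelope sketch (the six-month doubling period is the figure cited in the discussion; the horizons are illustrative):

```python
# Illustrative only: a quantity that doubles every six months
# grows by 2**(2 * years), a pace that quickly outruns intuition.
def growth_factor(years, doubling_months=6):
    doublings = years * 12 / doubling_months
    return 2 ** doublings

for years in (1, 3, 5):
    print(f"after {years} year(s): {growth_factor(years):,.0f}x")
```

One year of six-month doublings is a fourfold increase; five years is already a factor of more than a thousand, which is why longer-term predictions become so unreliable.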

A thought for the future
AI is, in our opinion, the most exciting megatrend because it has the chance to intersect with and magnify other megatrends, increasing their impact. A final talking point of the discussion was robotics. The focus was on the concept of taking future versions of the large language models we are seeing and using them to power different kinds of more general-purpose robots. If a robot could be leased to do a particular job, finish, and then be leased by another client to do another job, this would increase efficiency and make the use of robots much more economical. Businesses would not need to fully buy and then depreciate these expensive capital assets, and robotics firms would not need to make a different robot for every use case. 

On top of that, getting the robots up to speed faster, that is, able to perform the actual job needed sooner, could also be a productivity enhancer. 

It was an excellent episode, available here, and we look forward to continuing to track and examine the progress on this topic as it continues to evolve.

Related blogs

Post Nvidia’s Q2 2023 earnings, will the AI hype keep driving equity performance?

What Cisco teaches us for today’s ‘AI star’, Nvidia
