Transcript
The following transcript has been edited lightly for clarity.
Michael Fernandez:
All right, welcome everyone. Thank you for joining us today for our next Emerging Tech Economic Research Network, or EERN event. I’m Mike Fernandez and I serve as Executive Vice President of Operations and Safety here at the Federal Reserve Bank of San Francisco.
For us at the San Francisco Fed, the EERN initiative gives us insights into how developments from emerging technologies, and particularly artificial intelligence, can affect various sectors of the economy. For example, I had the opportunity to meet with senior business leaders at a leading multinational manufacturing company earlier this year. They taught me how advances in technology are changing the production processes for microchips, which in turn drive powerful artificial intelligence systems used by many businesses across the US. These intelligence-gathering efforts help inform our understanding of emerging technologies and, together with events like the one you are attending today, provide important information about the future economy. Past events have explored topics such as job matching in the age of AI and the disruptive economics of AI.
Today’s installment of our EERN event series will feature Professor Chad Jones, to help explore the potential consequences of AI for long-term economic growth as well as AI’s potential risks. He is the STANCO 25 Professor of Economics at the Graduate School of Business at Stanford University. Following his presentation, Professor Jones will discuss the research with our host moderator, Huiyu Li, Co-Head of EERN and research advisor at the SF Fed. As a reminder, this event is being recorded and can be accessed on our EERN website following the discussion. Finally, please note that the views you hear today are those of our speakers and do not necessarily represent the views of the Federal Reserve Bank of San Francisco or the Federal Reserve System. So with that, let’s begin. And over to you, Huiyu.
Huiyu Li:
All right, thank you Mike for that kind introduction. It is my pleasure to welcome Professor Chad Jones to our EERN seminar series. I’ve had the privilege of knowing Chad for many years, beginning with taking his economic growth class at Stanford. So I’ll take the liberty of referring to him as Chad.
I’m particularly excited about today’s presentation on the potential growth impact and risks associated with AI. Chad is a true thought leader in the field of economic growth. His research spans a wide range of topics, from the fundamental theories of what drives growth, to the relationship between growth and inequality, to alternative measures of growth beyond GDP, to why healthcare spending increases with growth, and much more. As you will see from his presentation, Chad brings a wide range of perspectives to the discussion about AI.
I also enjoy listening to Chad talk about growth because it is clear that he loves thinking and researching in this area. In economist parlance, Chad derives utility from growth research. So let us welcome Chad and share his enthusiasm for growth research.
Prof. Chad Jones:
Wonderful. Well thanks Huiyu. Thanks Mike. It’s really a pleasure to be back here at the San Francisco Fed. Let’s see. Are my slides coming up? Good. Yeah. So what I’d like to do is talk about AI, the topic du jour, and economic growth. I have one slide on the labor market. I know that’s something everyone’s very curious about, my kids are very curious about. And then also a topic that we talk less about but I think is very important, sort of the catastrophic risk side of AI.
So just to jump in, let me talk about our theories of economic growth apart from AI, and then we can think about how AI might affect things. So your grandmother told you when you were in kindergarten that economic growth comes from the discovery of new ideas and new technologies. So that’s not a surprise. Exactly how that works was something that we didn’t figure out until relatively recently. And Paul Romer got the Nobel Prize for this insight. What he realized is that ideas are different from every other good in economics that we study.
So standard goods in economics are rival. So think about a laptop computer, an hour of a surgeon’s time. This clicker, this room, right? If you’re using it, I can’t use it. There’s an apple on the table, you can eat it or I can eat it. We can’t both eat the apple at the same time. And that rivalry gives rise to scarcity, which is the sort of topic of economics. How do we allocate scarce resources? Ideas are different. Ideas are non-rival, or what I like to say infinitely usable. You invent the idea once and then it’s possible to use it any number of times. Any number of people can use the idea simultaneously. So we’ve seen lots of great examples of this recently.
So think about the design of the mRNA COVID vaccine, right? We invent it once and then literally billions of people benefit from the vaccine. Or ChatGPT’s latest algorithm. They invent the algorithm once and then they serve it to billions of people very quickly after that. So this infinite usability of ideas is actually when you follow through the details carefully, the key to understanding economic growth. What it means is that living standards are determined by the total number of ideas we’ve ever invented. And that total word is actually the key thing.
If you think about trying to make workers more productive with computers. Well, if you give one worker one computer, you make the worker more productive. If you want to make a million workers more productive in the same way, you need a million computers. In contrast, if you invent one new idea, you can make any number of workers more productive. So if they’ve got solar panels or semiconductors or the World Wide Web. Okay, so that’s the first point.
Where do ideas come from? This is the easy part. Ideas come from people. Researchers, entrepreneurs, inventors, business people, all produce ideas. And so when you put these two ingredients together, what you see is that income per person, living standards in the long run, are determined by the total number of ideas. And the total number of ideas is determined by the number of people searching for ideas. Each of those links seems totally obvious, but what's interesting is that when you connect the last part to the first part, you see that income per person depends on the number of people.
And then if we apply growth rates to that relationship, the growth rate in income per person depends on the growth rate of people, or population growth. And that’s a link that I think is not quite so intuitive. But you see we got there following a chain of logic that kind of makes sense. So the growth rate in living standards in the long run is determined by the growth rate in the number of people hunting for ideas, that is, by population growth.
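For readers who want that chain of logic in symbols, here is a minimal textbook-style sketch of the semi-endogenous growth reasoning being described; the particular functional forms and the parameters delta, lambda, phi, and sigma are illustrative notation rather than the model in any specific paper.

```latex
% Ideas A_t are produced by researchers L_t, who grow at the population growth rate n.
% phi < 1 captures the idea that ideas are getting harder to find.
\dot{A}_t = \delta\, L_t^{\lambda} A_t^{\phi}, \qquad \phi < 1
% Living standards depend on the total stock of ideas ever invented:
y_t = A_t^{\sigma}
% On a balanced growth path the growth rate of ideas settles down, giving
g_y \;=\; \sigma\, g_A \;=\; \frac{\sigma \lambda\, n}{1-\phi}
```

The notable feature is that the level of research effort does not appear in the long-run growth rate, only its growth rate does, which is the sense in which long-run growth in living standards is tied to population growth.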
In other parts of my research, I worry about the declining rates of population growth, maybe even population growth turning negative, and what the consequences are for economic growth. That’s not our topic today. Instead, our topic is AI. And one thing that I think is very obvious, and we’re seeing it already, is that this sort of AI could be very helpful to people finding new ideas. So let me give you an example of this from my own life. So before last October, like everyone, I would play with ChatGPT. And it was useful, but it was kind of a toy for me. I think the best use I got out of it was writing a birthday card to my mother-in-law. And it was great for that. But I would play with it once a month to see what it was up to.
Last October, OpenAI released their o1 reasoning model. And the claim was that this model could do math and be useful. And so as Huiyu mentioned, my favorite thing is to sit at the whiteboard in front of my office and write down math equations with growth models. And I’d been doing that on this problem of optimal population. Like how many people should we have in the future? For me, it’s a complicated problem. For most economists, it’s not that complicated a problem. But it takes me an hour or two on the whiteboard to work through and get the answer. And I’m always changing exactly the way I do it, so I have to solve a bunch of these models.
So I had done one that morning, took me a couple of hours. I gave the same problem to o1 Pro, the reasoning model. It thinks for five minutes and spits out five pages of beautifully formatted equations that get the right answer. And that’s when I said, “Holy cow, this is doing what I can do.” Now, I’ve played with it a lot since then and it does make mistakes. I kind of got lucky the first time that it got it right. But it makes mistakes. I make mistakes too. It catches my mistakes. My research assistants make mistakes. And the thing is, it thinks in five minutes.
And so I can ask it to do these things a bunch. And so I use it all the time. But this notion that it’s going to improve my productivity at finding new ideas, that’s already true. Is it going to replace me at finding ideas? Given that it can already do a lot of what I’m doing, in two years or five years or 10 years, could it do everything I do? That’s an intriguing possibility.
Okay, so turning to AI itself, let me highlight two points that I think are useful for background. So the first is that automation is something that’s been going on for hundreds of years. Automation’s not a new thing. The Industrial Revolution was all about replacing labor at weaving with machines. And so, AI is in some sense the latest form of an automation process that’s been going on for 200 years. And what that suggests is we can use the history of automation maybe to help us speculate about the consequences of AI.
So in the past it was textile looms and steam engines and electric power. In the future, driverless cars. I’m sure you’ve all taken the Waymo here in San Francisco. It’s amazing. Paralegals, pathologists, maybe researchers, maybe everyone, right? So those are the things that are on the table. So that’s the first point, use the history to help us think about the future.
The second point is that AI can be limited by bottlenecks. And economists have a phrase for this called Baumol’s cost disease. But it’s basically the insight that economic growth is constrained not by what we do well, but rather by what’s essential and yet hard to improve, the bottlenecks. These bottlenecks are the source of scarcity, and maybe even the source of high returns. So I’ll come back to each of these points in my discussion that follows.
Okay. So I want to give you two views of the future of economic growth. One’s going to be kind of AI does amazing things, and the other’s going to be why AI might not do the amazing things. And then I’ll try to tell you how I think about it. But I think these two possibilities are absolutely on the table. They’re things that might happen. So first let me give you the AI accelerates economic growth. What does that world look like?
So in the near term, I think what we’re seeing already is AI’s boosting productivity at many activities. It’s boosting my productivity as a researcher. It’s certainly boosting software programmers’ productivity in writing computer code. Claims are that they’re 25% more productive already. In the next decade, is it plausible that the continuation of AI getting better and better and better is going to mean we have AI agents that can automate most coding? Yeah, that seems baked in. That wouldn’t be a surprising prediction to make.
Notice that gives rise to a virtuous circle. Once AI can automate most coding, you tell it, “Hey, come up with better algorithms. Come up with ChatGPT-8. Come up with better ways to design computer chips.” Right? Each of those things is an idea, and once you invent it once, you can scale it up billions of times, benefiting from Moore’s Law as computers get cheaper and cheaper and cheaper. And so it doesn’t seem out of the question that we could have billions of virtual research assistants in the future, right?
What would these research assistants do? Well, anything I can ask my research assistant to do on a Zoom call, anything they can do on their computer. It seems like that could also be done by an AI agent rather than a human researcher. So, automating all cognitive tasks or most cognitive tasks, again, in the next decade, is that within the realm of possibility? Seems like it absolutely is. And once you’re automating lots of cognitive tasks, all the things that inventors are doing with computers, well, there are a lot of things you can do better.
You can design faster computer chips. You can simulate robots in virtual space and design better grippers and pickers and the things that robots do. You can take AlphaFold and AlphaFold 3 and the next versions of those things and invent new medical technologies. So there are a bunch of ideas that you can invent. Now we know there are bottlenecks in the real world. Sometimes you need to run experiments. But the AI can help us design the experiments. They can help us evaluate the experiments. They can tell us what experiments to run next, right? They can help us design better robots. As I mentioned, once you have robots that work well in the real world and you have those robots being designed and controlled by an AI that’s incredibly smart, well then it can do lots of physical tasks as well.
And so, it seems to me as I think through this reasoning, each step is not totally obvious. Maybe the next step doesn’t come, maybe it breaks down. But it’s also not out of the range of possibility. And so it seems to me, when you get to a world where AI can do all cognitive tasks, and AI plus robots can do all physical tasks, well that’s a world where AI could raise productivity, raise living standards, raise economic growth quite substantially. Okay. So that I think is the optimistic story that we would tell. And yeah, like I said, it’s not clear to me where that has to break down.
Okay, what about the pessimistic story? Well, I think there are two lessons from economic history, from this history of automation partly that are relevant here. The first one is economic historians tell us that the invention of new technologies, the invention of electricity, the invention of vacuum tubes and semiconductors and computers, that these things take decades to diffuse throughout the economy, and to have real-world economic impact that raises GDP, right?
And one of the reasons for that is there are all these complementary innovations that you have to do in order to have computers or electricity really change things. So you need to change the design of the factory. You need to make organizational changes to take advantage of the information. So the lesson from economic history is these things matter over the course of decades, not over the course of two or three years. That’s the first point. And I think that’s very likely to be relevant for AI.
The second point, again, coming back to the thing I began with, automation has been going on for 150 years. And yet one of my favorite graphs in all of economics, this was in my PhD dissertation eons ago, is this chart. So what this is, this is average living standards in the United States, income per person plotted on a log scale or a ratio scale, right? And on a ratio scale, a straight line is constant exponential growth. And so what you see in this graph is this amazing fact that over the last 150 years living standards in the United States have grown at 2% per year, and we just haven’t gotten away from that very far either on the upside or the downside. Right, 2% growth, you stay close to that orange line, that straight line for 150 years.
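As a quick aside on the ratio scale: a path growing at a constant exponential rate is exactly a straight line in logs, which is why 150 years of roughly 2% growth shows up as a single straight line. The arithmetic below is just the implication of an exact 2% rate, not a statement about the underlying data.

```latex
y_t = y_0 e^{g t} \;\Longrightarrow\; \log y_t = \log y_0 + g\,t
\quad \text{(a straight line with slope } g \text{ on a log or ratio scale)}
% At g = 0.02 over 150 years: e^{0.02 \times 150} = e^{3} \approx 20,
% so income per person would end up roughly 20 times higher.
```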
Well remember all the things that are happening over the last 150 years. We’ve got the adoption of electricity throughout the economy, right? In 1870, barely using it at all. And by 1940, by 1960, by 1980, it’s completely changed the economy. Or the discovery of antibiotics or vacuum tubes and transistors and semiconductors and computers and information technology. The internet, the smartphone. All those amazing technologies, general purpose technologies that really affect a lot of things in the economy are here in this graph.
None of those things raised the growth rate. Or none of those things caused us to depart from a 2% growth rate line. And so how do we understand that? Well, I think one of the answers is that ideas are getting harder to find. It’s harder and harder to find the next thing. And if all we had was the steam engine or all we had was electricity, the economy would run into diminishing returns very quickly. And we kind of need the next big idea to keep us on the 2% trend. So by this logic, maybe AI is just the latest next big idea. Absent AI, growth would slow; we’re already seeing growth slowing in the last couple of decades. But AI is the latest thing that might allow the 2% trend to continue for another 50 years. So that I think is kind of the pessimistic view.
As I said, I’ve spent my career on this graph, and so you might think I’m much more on the pessimistic side than the optimistic side. That would be my bias for sure. As I think about it though, I think AI is going to change things profoundly over the next… not necessarily the next five years, but over the next 25 years. So I do find myself persuaded by the first case that I laid out, that this could be different. The thing that we’ve seen for 150 years, no change from this 2% line, I think that’s the worst-case scenario. And the best-case scenario is maybe this time is different.
Okay, let me turn now to talk about the labor market, jobs, and meaningful work. And I know whenever anyone talks about AI, this is the most important thing that many of us have in mind. This is the thing my kids have in mind. I wish I had more to talk to you about. This is not the thing I research on my whiteboard every day. But I do want to give you some thoughts, and then we can talk more about it in the Q&A.
So first, the world where AI changes everything. That kind of first scenario that I laid out, that’s a world where GDP is incredibly high. It is a world of abundance. And so the world where we have to worry about AI replacing most jobs is a world of abundance. And so, there will be a lot of… There’s a big pie to be shared. We already share our pie to some extent. I think there are deals to be made there that kind of make everyone better off. It’s a political economy question, not an economics question. And we would be right to be concerned about that. The transition may be hard, but as an economist, I say, “Look, the size of the pie is going to be really big. So there ought to be a way to make that work out well.”
Okay, next point that we learn from macroeconomic history. As we get richer, we naturally work less. And this is a good thing, not a bad thing. In our economic models, work is a bad, not a good. That’s why they have to pay us to do it. Okay. And so working less, taking more leisure in a world of abundance where we have a lot of income to enjoy hiking in Yosemite, and reading books, and playing music, that’s also kind of a good world.
On the other hand, as I already told you, as Huiyu already mentioned, my utility comes from sitting in front of the whiteboard, coming up with growth models. Very quickly I think I’m going to be replaced at doing that. And I’ve wondered, how am I going to find meaning in the world when I can no longer invent new growth models that are really at the frontier? So there is this question of meaningful work. What do we do in a world where AI can do everything we can do? Well, first of all, it’s important to notice that just because it’s possible for AI to do everything we can do doesn’t mean we’ll use the AI for everything.
So chess is a great example. My iPhone can beat Magnus Carlsen at chess now. And yet chess has never been more popular than it is today. People watch chess on YouTube all the time. It’s just this incredibly thriving community. Why is that? Well, because we place inherent value on watching humans do certain things. And I think chess is really the tip of the iceberg there. So experiences involving people are likely to be much more valuable in the future. Arts, music, sports. I want to watch Messi play for FC Barcelona, not robots kicking the soccer ball around, right? So I think things like that are going to be there.
Finally, as I reflect on my own experience working, I think we get meaning from self-improvement. We get meaning from striving to achieve a goal and achieving it. And the goal doesn’t have to be inventing a growth model that no one’s ever seen before. The goal can be: the AI invents new growth models, and my friends and I sit around having the AI teach us the way the world works, right? Learning and improving ourselves, improving our understanding, I think that’s also something where meaning will come from. So I realize this doesn’t address the, “hey, our kids are going to have trouble finding jobs and what are they going to do,” question. But in the long run, I do think there are reasons for optimism along these lines.
Okay, another thing I want to come back to is this bottleneck question. One of the concerns that people often raise is, once AI and robots can do everything we can do, maybe wages plummet. Maybe the share of GDP earned by people falls to zero, and all the income goes to AI and big corporations. And what are we going to do about that world? Obviously redistribution is part of it. I wanted to highlight that it’s not totally obvious that that view of the world is correct. And we’ve seen in our history another example that at least calls some of that into question.
So ask yourself, what’s happened to the share of GDP that we paid to computers over the last 25 years? Right? Computers are everywhere. The quantity of computers is rising like crazy. But we also know computers are getting cheaper. Well, you might’ve thought the share of GDP paid as compensation to computers is rising. After all, the share of GDP paid to capital is actually rising a little bit. The labor share is falling. The capital share is rising a little bit. You might’ve thought, yeah the share of GDP going into computers is probably rising. No. Actually not.
What we see when we look at the data is during the 1990s, the Dotcom boom, okay, the share of GDP paid to computers did rise from kind of 2.5% to 3.1%. But over the last 25 years, the share of GDP paid to computers has fallen from 3% to 2%, right? Why is that? Well, computers are all over the place. The quantity of computers is rising, but the price is falling so fast that the share of our income we pay to computers is going down rather than up, right? So maybe that’s the future as well. AI is going to be incredible. AI is going to be everywhere, but it’s so productive that its price may fall.
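To make the mechanism concrete, here is a toy price-times-quantity calculation. The growth rates and starting share below are made-up illustrative numbers, not the BLS series behind the chart; they are only meant to show how a fast-falling price can pull the income share down even while the quantity explodes.

```python
# Toy illustration of the computer share of GDP as price * quantity / GDP.
# All numbers are made up for illustration; they are not the BLS data cited in the talk.

def computer_share(price, quantity, gdp):
    """Share of income paid to computing: (price per unit * units) / GDP."""
    return price * quantity / gdp

years = 25
p0, q0, gdp0 = 1.0, 3.0, 100.0   # normalized so the share starts at 3%

# Suppose the quantity of computing grows 8% per year, the quality-adjusted
# price falls 10% per year, and nominal GDP grows 4% per year.
p = p0 * (1 - 0.10) ** years
q = q0 * (1 + 0.08) ** years
gdp = gdp0 * (1 + 0.04) ** years

print(f"share at start:        {computer_share(p0, q0, gdp0):.1%}")
print(f"share after {years} years: {computer_share(p, q, gdp):.1%}")
# The quantity ends up roughly 7 times higher, yet the share of income falls,
# because the price effect outruns the quantity effect.
```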
The things that command the high prices are the bottlenecks. The places where things are scarce, the places that we reserve for humans for whatever reason. So I don’t know what the answer to this is, but I think the common narrative that says, “Yeah, the AI share is going to go to 100% and the labor share is going to zero.” I’m not sure that’s right. So it’s an interesting question.
All right. Finally, I’m an optimistic person by nature. I’ve kind of given you an optimistic story. I’m going to end on a pessimistic note. I want to talk about the risks associated with AI and the possible downsides. Because even in that optimistic scenario, even if AI is incredible, I don’t think it has to end well for us humans. And that’s a point that I think needs to be talked about a lot more. I’ve written two papers on it, and the question I ask in the two papers is, “Well, can economic analysis help us think about these serious risks?” And to my surprise, I learned a lot of things doing it. So I want to share the insights that I learned by thinking about it.
So first, when we talk about catastrophic risk or existential risk, what do we mean? Well, one thing to appreciate, and I think we’ve all seen this, the people who founded these AI companies, Sam Altman, Demis Hassabis, Geoff Hinton, Dario Amodei, they originally got into thinking about AI and artificial general intelligence or artificial super intelligence because they thought it was this amazing technology. Maybe more important than the internet or electricity. But they all recognized it might be more dangerous than nuclear weapons. So OpenAI was founded with a mission of safety. Okay. So all those experts, the Nobel Prize winners, Demis Hassabis, Geoff Hinton, they take these safety issues very, very, very seriously. And I think if they do, we should too.
Okay, so in one slide, what do these risks look like? I think there are two kinds of risks, broadly speaking. One is bad actors. Right? So the AI doesn’t have to itself do damage. But think about ChatGPT-7 or ChatGPT-8. Someday we will have this amazing technology that can answer any question that the best human could answer, and maybe even better than the best human. Well, what if some bad actor gets ahold of this technology? After all, it’s available on the internet for $20 a month. And they ask it to design a new virus that’s extremely lethal, way more lethal than COVID or smallpox, and that takes three weeks to display symptoms. And oh, give me the cure, but don’t share the cure with anyone else. And then they release this virus. Right?
So nuclear weapons were manageable because we kind of had two people, two groups of people that needed to be monitored. And we had third party verification, we had other things. We barely got away with that. Well, what if you give a red button to 8 billion people? Are we sure none of the 8 billion people press the red button? That seems risky. So that’s the first class of risk that I think is absolutely something we need to be worried about. And people are worried about this.
The second class of risk is more speculative, but I also think it is something very important to worry about. And this falls in the category of alien intelligence. So as an analogy, suppose we found out this afternoon that there’s a spaceship near Pluto on its way to earth. How would we feel? We’d be pretty excited. I think that would be amazing. But then on second thought, I would say, “Wow, maybe there’s a chance, a 10% chance, that this doesn’t end well for us.” After all, whenever a more advanced people encounters a much less advanced people, or a less advanced species, it often doesn’t end well on earth. Maybe it wouldn’t end well here.
Well, according to this view, AI is an alien form of intelligence, right? We don’t know what its goals are. We try to shape it, but maybe… And so there’s a computer scientist at Berkeley, Stuart Russell, he’s the co-author of one of the best-selling books for PhD machine learning classes. Very serious computer scientist. I was on a panel with him and he had a quote that I really like. I thought it captured a nice way of thinking about it. He said, “How do we have power over entities more powerful than us forever?” I’m not sure there’s a good answer to that question, but it’s definitely a question worth thinking about.
So in these two research papers, I think about these questions. The first one is kind of a thought experiment that I think about as the Oppenheimer question. You remember in the Oppenheimer movie, before they test the first atomic bomb, the physicists say, “Hey, what if this chain reaction ignites the atmosphere and kills us all?” And they say, “Okay, hold on, let’s go back and do some calculations.” They go to their whiteboard. They come back and they say, “Good news. The chance of that’s really small.” It’s like, well, okay, but how small is really small? Well, that’s the question I ask in this paper.
So suppose AI’s amazing. Suppose it raises growth rates from 2% per year, what we’ve had for the last 150 years, to 10% per year, right? Suppose AI is amazing, but there’s a one-time chance that when you use the AI, it kills everyone on earth. What chance are you willing to take? And I should have prefaced all my remarks by saying: I’m an economist, and we like to consider simple models, toy models, to speculate on how we understand things. This is definitely in that category, right? So it’s a toy model. What chance are you willing to take to get 10% growth? A 1% chance of killing everyone, a 10% chance?
Well, I can ask our models, the kind of models we use to study business cycles, and asset pricing, and labor markets. What do the models say? It turns out you learn interesting things from that. So the first thing you learn is about log utility, our standard utility function in lots of economics. There are lots of diminishing returns there, right? Utility from income diminishes. The first thousand dollars is really valuable, the last thousand dollars, much less so. So, log utility is just a functional form there that we often use.
It turns out with that standard model, you’re willing to take a one in three chance of killing everyone on earth to get 10% growth. And that taught me, wow, I’m not log utility at all. Because there’s no way I would take a one in three chance. So it surprises me that our standard model would say something like that. But it’s not exactly our only standard model. We have other models where we say: suppose risk aversion, suppose the curvature, is even a little bit higher. So log utility is curvature of one. You can have curvature of two or three. Turns out the answer is very non-linear in that.
If I make the curvature two, risk aversion of two, then that 33% plummets to 2%. At risk aversion three, it plummets to half a percent. So one, two, and three are not nearly as close as we thought they were in this space. And why is this? Well, it’s for a very intuitive reason. There are sharply diminishing returns to income or consumption, right? And so I don’t need a fourth flat screen TV, or a third iPhone. I need more days of life to enjoy living in the Bay Area, right? My life in the Bay Area is great. Just don’t kill me. I don’t need another iPhone. Just don’t kill me, right? And that’s kind of what the model says. Okay, so risk aversion two or three looks good.
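For readers who want to see the mechanics, here is a back-of-the-envelope version of this thought experiment. It is not the calibration in the underlying paper: the discount rate rho and the flow value of being alive u_bar are illustrative assumptions, and the exact percentages move around with them. What the sketch does show is the mechanism described above: with log utility the acceptable extinction risk is very large, and it collapses as the curvature gamma rises.

```python
# Back-of-the-envelope version of the "what extinction risk would you accept
# for 10% growth?" thought experiment. NOT the calibration in the paper:
# rho (discounting) and u_bar (the flow value of being alive, relative to a
# death normalization of zero) are illustrative assumptions.

def lifetime_utility(g, gamma, rho=0.02, u_bar=7.0):
    """Present value of flow utility u_bar + c^(1-gamma)/(1-gamma)
    (u_bar + log c when gamma == 1) along a path c_t = exp(g * t),
    discounted at rate rho over an infinite horizon."""
    if gamma == 1.0:
        return u_bar / rho + g / rho**2
    return u_bar / rho + 1.0 / ((1 - gamma) * (rho - (1 - gamma) * g))

def acceptable_risk(gamma, g_low=0.02, g_high=0.10):
    """Largest one-time extinction probability p such that the AI gamble
    (high growth with prob 1-p, utility of zero with prob p) still beats
    the safe low-growth path."""
    return 1 - lifetime_utility(g_low, gamma) / lifetime_utility(g_high, gamma)

for gamma in (1.0, 2.0, 3.0):
    print(f"curvature gamma = {gamma:.0f}: "
          f"acceptable extinction risk ~ {acceptable_risk(gamma):.1%}")
# With these illustrative parameters the log-utility case accepts roughly a
# one-in-three chance, and the number drops sharply as gamma rises; the exact
# figures for gamma = 2 and 3 depend on the calibration, but the pattern is
# the point.
```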
But then I thought more about it and I realized, look, the same world where we get 10% growth in GDP is probably a world where the AI is going to invent amazing medical progress, right? Cures for cancer, cures for heart disease. Suppose along with that 10% growth, it cuts mortality rates in half. It lets us have life expectancy twice as long as we have today. Now again, that’s very speculative, but 10% growth was pretty speculative too.
Well, it turns out even with that risk aversion parameter of three, now you’re willing to take a one in four chance of killing everyone to get the halving of mortality. And if you think about it, you could sort of see why. Well, now we’re not comparing lives versus consumption. I said lives versus consumption, we pick lives every time. Right? Now we’re comparing lives versus lives. I don’t care what kills me, I care about not dying. Right? So if the AI can cut my mortality rate in half, reduce cancer, heart disease, etc., well I’m willing to take more of the existential type risk and so is everyone else. At least if we’re all identical. Right? And so this surprised me. I thought, no, I was a half a percent person maybe before. And now even my preferences say maybe I should be a 25% chance. So that was surprising.
Okay, next point. Of course, we don’t want to take this risk if we don’t have to, right? So the second project I worked on is about AI safety. How much should we be investing to mitigate these risks, right? We’d like to get that 25% as low as possible. And are we massively under-investing? That was the question I asked myself. Again, kind of after seeing this o1 Pro. o1 Pro made me realize, wait, this is changing a lot faster than I thought.
So how much should we invest to mitigate these risks and are we under-investing? Well, it turns out we’ve experienced something quite analogous to the AI risk just recently, and that was COVID-19. During the COVID pandemic, we as a society faced a probability of dying of 0.3%. Less than 1%. 0.3% of people died in the COVID pandemic. Kind of small. But remember, we stopped going out for a year and a half. We shut down the economy for a while. We spent 4% of GDP mitigating that risk to avoid dying. Okay.
I suspect the probability of a bad outcome from AI is bigger than 0.3%. But we’re not spending anything like 4% of GDP to mitigate that risk, so maybe we’re under-investing. You might say, “How do we know the 4% was the right amount to spend? Maybe we overdid it.” So like all economists, I wrote a paper on the economics of COVID with some co-authors. And we kind of asked how much should we be willing to spend to mitigate that COVID risk? And it turns out the 4% was, if anything, probably conservative. What do I mean by that?
Well, if you ask what’s the value of life? And again, this is another question that economists are much more comfortable asking than typical people. But the government has to ask this question every day. When we’re deciding how safe to make the bridge, how safe to make the highway, how many pollution scrubbers to require firms to install, we’re trading off lives versus dollars. How do we make that trade-off? Go to the Environmental Protection Agency, the Department of Transportation. They’ll tell you they’re using a value of life of $10 million.
So $10 million for the average American, say a 40-year-old American. Well suppose someone faces a risk of dying of 1%. And suppose you value your life the way the government does at $10 million, how much are you willing to pay to completely avoid that risk? Well, 1% of $10 million, that’s $100,000. That’s more than 100% of per capita income. Per capita income is around $90,000, right? So we’re willing to pay more than 100% of GDP per person to avoid a 1% mortality risk, right?
Well, that’s a big number. That’s way bigger than the 4%. What that suggests to me is, okay, the existential risk associated with AI maybe applies over 20 years or something. We’re not sure exactly what the timeframe is. But could that justify spending 5% of GDP a year for 20 years? It’s possible. That’s within the ballpark of being entertained here. Now, there are various caveats. I haven’t talked about how effective the mitigation is. The reason there’s a paper is you have to think about, well, if we spend another billion dollars, how much does that reduce the probability? That’s a tricky question. So I have a more detailed analysis.
But here’s the way I like to think about it. Could we justify spending $100 billion, which is about 0.3% of GDP, to mitigate this risk? Yeah. Even just from a selfish standpoint, putting zero weight on future generations, just us caring about not dying, you would easily justify $100 billion a year.
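The arithmetic behind that comparison is simple enough to write down; the value-of-statistical-life and income figures below are the round numbers used in the talk, treated here as illustrative inputs rather than precise estimates.

```python
# Willingness-to-pay arithmetic behind the comparison in the talk.
# VSL and income per person are the round numbers used above; illustrative only.

VSL = 10_000_000            # value of a statistical life used by US agencies, in dollars
INCOME_PER_PERSON = 90_000  # approximate US income per person, in dollars

def wtp_to_avoid(mortality_risk):
    """Willingness to pay to eliminate a small mortality risk ~ risk * VSL."""
    return mortality_risk * VSL

for label, risk in [("COVID-scale risk (0.3%)", 0.003), ("a 1% risk", 0.01)]:
    wtp = wtp_to_avoid(risk)
    print(f"{label}: ${wtp:,.0f}, about {wtp / INCOME_PER_PERSON:.0%} of income per person")
# A 0.3% risk is "worth" roughly a third of a year's income per person, far more
# than the roughly 4% of GDP spent on COVID mitigation; a 1% risk is worth more
# than a full year's income, which is the comparison drawn above for AI risk.
```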
Okay, so let me conclude now with just some thoughts that sum up. How much did the internet change the world between 1990 and 2020? I think a lot. Now, again, in that 2% growth chart, you don’t really see it, but I think we would all agree the internet changed the world profoundly. How much will AI change things between 2015 and 2045? More or less? I think probably even more. Okay. Now, just because the change takes 30 years rather than five years to happen doesn’t mean there’s not a profound change on the way. And so for the short horizons people ask about, “Is it going to change the world tomorrow?” Probably not. But that doesn’t mean it’s not incredibly important.
I think we’re massively under-investing. I think $100 billion, it’d be easy to spend that amount. If you ask how much we’re spending on AI safety now? Maybe a billion dollars across all the different companies and things. So 100x under-investment in AI safety, probably.
Externalities and race dynamics. So at each of these labs, there’s a dynamic here that I think is pernicious and negative. And it’s kind of like a prisoner’s dilemma in economics. Sam Altman, Dario Amodei, Demis Hassabis, the people running these different labs, they each say, “Look, I’m worried about safety. Maybe I should stop.” But then they say, “If I stop, there are five other people racing, and if we’re going to die, we’re going to die. So maybe I should race too. Because, after all, I probably care more about safety than the average person, and I’ll do it carefully. And if it works out and we don’t all die, I’ll be the most famous person in history.” So everyone makes that calculation. Everyone races, when we would all agree, if we could coordinate, to slow down. So there are externalities that they’re imposing on all of us.
If you ask about policies, I think: tax NVIDIA’s computer chips. Put a big tax on the GPUs that are used to train these models, and use the revenue to subsidize AI safety. That would slow things down and give us money to do things in a safer way. I think that would probably be a good idea. Let me stop there and happy to take more questions. Thank you.
Huiyu Li:
All right, thank you Chad for that very engaging talk. I thought the figure with the declining share of computer income was really interesting. I guess 1995 to 2005 was a period of high productivity growth that is usually attributed to the adoption of IT technology. You showed that the computer share of income declined after 2005. But computer usage, as in the number of computers per person, per worker, continued to increase. So can you just talk us through a little bit how to think about why the quantity keeps on increasing, but the share of income declines?
Prof. Chad Jones:
Yeah. No, it’s a great question. I think that’s a graph that’s readily available if you look in the right place. Again, I think it’s underappreciated in economics. So I was really happy when we asked the question and found that answer. So in some sense, total spending on computers is P times Q. The price of computers, times the number of computers you purchase. Right? You could divide that by GDP. So total sales of computers divided by GDP. Now, that’s not exactly the number I’m plotting. We’re plotting the returns, but if you think about it as the same as P times Q, that’s probably a useful place to start.
We know the quantity is rising like crazy. Right? We all have multiple computers now, between our iPhone and our iPad and our laptop and our desktop. Right? The quantity of computers is rising like crazy. That leads us to think the share of spending on computers is probably rising like crazy too. The other side, though, is computers are getting cheaper and cheaper and cheaper. And the price effect evidently outweighs the quantity effect when you look at payments to computers. That’s what that graph says. And I’m not manipulating any data; I’m just taking a number from the Bureau of Labor Statistics and showing it, basically.
So I think that’s the explanation. It’s just that the price effect dominates. This is Moore’s Law, right? Moore’s Law says the number of transistors we can fit on a computer chip is doubling every two years. Our iPhones today are way more powerful than the best computer in the world 30 years ago, right? And so the price of each computation is falling like a rock. And so that price effect dominates the quantity effect, and the spending share evidently in the last 25 years has gone down rather than up.
The intriguing question is that AI is using lots of computers. AI is also getting better and better, like Moore’s Law. It’s everywhere. It’s going to be even more widely diffused in the future. It’s going to be in all of our devices in the future. That might make you think the spending share is going to go up. And what I want to suggest is maybe not. The fact that computers in the last 25 years are getting less of our GDP rather than more, maybe the price effect dominates there. And the way to think about it in the models is those bottlenecks, right? So what is it that people do that computers and machines and AI can’t do? And maybe the answer to that is nothing eventually, if you wait long enough. But for a while, there are definitely things they can’t do.
I tell my MBA students, “I think I’m going to be automated in the next five years. It can already solve my growth models. It hasn’t learned to ask the right questions yet. But give it five years of progress and it’ll ask the right questions.” But managing companies, that strikes me as much harder to do, and maybe that gets automated much later than me. So I think that the managers of companies are an example of something that’s going to be automated much later, if ever. Maybe we choose to just say we want all companies to be run by a human. Or the human has to check the AI decision. And then that’s the bottleneck, and that human still gets good returns.
Huiyu Li:
You also have a series of papers that look at how to measure well-being beyond GDP, considering leisure, inequality, mortality, etc. Where do you see AI’s effects on well-being beyond this growth effect?
Prof. Chad Jones:
Yeah, this is a great question. There’s a famous quote about GDP from JFK’s brother, the original Bobby Kennedy. People have been dumping on GDP for a long time. You’d think the politicians, maybe they don’t know what GDP is. But no, Bobby Kennedy was a smart guy, and he said, “You know, GDP measures everything except the stuff that we hold valuable.” GDP doesn’t measure poetry. GDP doesn’t measure music. GDP doesn’t measure me walking in the park with my favorite person. Right? And so GDP misses a lot of things that are incredibly important.
And so economists are aware of this point, and they say, “Well, is it highly correlated with the things that are important? With this rise in GDP per person at 2% per year for 150 years, are we really better off, or are we missing things so badly that we’re actually not any better off?” So economists are very concerned with this question. And so we try to think about taking other things into account. So the paper that you mentioned says, well, what if, in addition to measuring GDP, we measure life expectancy, we measure leisure, and we measure inequality? Behind some veil of ignorance, you could be born into a world with lots of inequality or less inequality. Well, we’d like some insurance against that. So inequality could be bad for that reason.
It turns out it does change some of the things you think about, but it doesn’t change other things. So if you look across countries, the correlation between GDP per person and this broader measure is 0.95. They’re very highly correlated. If you can only know one number, GDP per person is a great number to know. It’s highly correlated with the things we care about. On the other hand, it does make a big difference.
So the poster child for that paper was France versus the United States. When I go visit Paris, it seems like this amazing place, right? It’s great. The living standard seems very high. When you look at the GDP statistics, France has about two-thirds the GDP per person of the United States. Okay. Much lower than my impression when I visit Paris. Well, if you take into account leisure, life expectancy, and inequality, it changes the picture. So in France, they work a lot less. Remember, working is a bad in our model, not a good. Leisure is the good, so they get a lot more leisure. So you ought to value that.
Life expectancy. People in France live on average two and a half years longer than people in the United States. That’s also very valuable. And inequality is lower, right? If you take each of these things into account, each of them adds about 10 percentage points to welfare in France. And that puts France at kind of 95% of the United States, or, depending on the numbers, right, much more on an equal footing. And so I totally agree that these other things matter.
And that was kind of reflected in my remarks when I looked at what if AI cures cancer and heart disease? The life expectancy effects I think could be much more important than the consumption effects, because of diminishing returns. Again, give me a faster iPhone, fine. My iPhone’s like an iPhone 13, and they’ve never made a better one. In case anyone here has some influence. I like the small iPhone. I don’t want this big thing in my pocket, right? And so I don’t need the fastest iPhone, but I do need to live a long time and enjoy the Bay Area, and hiking in Yosemite.
Huiyu Li:
Yes. On the speed of adoption and mitigating risk, I think you mentioned investment. I guess one form of investment is giving up a little bit of the fast productivity growth in the near term to just check the AI capabilities before releasing them. But then I guess you also mentioned it’s hard for private actors to coordinate. So what do you see as potential mechanisms that can help to mitigate risk?
Prof. Chad Jones:
Yeah. So I do think there’s a chance that if you put all these experts leading the labs in a room and said, “Look, let’s all slow down.” And if they could, in a verifiable way, be convinced that everyone else was slowing down, I think they would say, “Yeah, it’s a good idea to slow down.” I think again, they all founded these companies being very concerned about AI safety. So I think this negative race dynamic, I think they understand it and probably feel it.
One of the responses, by the way, that everyone gives when you mention these things is, “Sure, they would all agree. But what about China? China’s going to race ahead. They’re going to get AI first. They’re going to take over the world. And wouldn’t that be bad?” That’s a valid concern. But I think it misses an important point, which is once you reduce the game to two actors, us versus China, then we’re back in the nuclear risk scenario, and then we negotiate with China. China doesn’t want to race ahead and kill everyone either, right? And so China would also say, “Yeah, we’ll slow down. We’ll tax NVIDIA’s computer chips, or we’ll tax Huawei’s computer chips as well.”
Now we’d need third party verification. But just like in the nuclear weapons scenario, yeah, get some expert third parties. Give them access to all the labs. Have them verify that we’re taxing computer chips by a large amount. I think that’s a problem that’s solvable. The thing I like about taxing the computer chips is it seems to me it’s verifiable, and it slows things down. And by taxing, I’m going to throw out a number: I don’t mean a 10% tax. How about a 10x tax? People want lots of computer chips; they’re paying a lot for these computer chips now. So a 10% tax probably isn’t going to do anything.
Put a huge tax on there. I think that’ll slow things down. Use the revenue to help fund safety research. I think there’s some kind of deal like that to be struck, and I think it would delay the best scenario. Right? That’s unfortunate. I’d like to cure cancer sooner rather than later. But I’d also like to do it in a way that’s safe. And it feels like there is a deal to be struck there.
One other thing on this point. Again, some people say, “What would you spend the safety funding on? Are there really things you could spend $100 billion on?” Clearly, I don’t know. Each of these labs has people that are working on safety. OpenAI at one point said the safety team would get 20% of the compute to work on safety, and then maybe that didn’t work out. But with the right tax and subsidy, we could get more compute on safety.
The other thing, I think, if you look at how DeepMind started out with AlphaFold, the protein folding, right? That’s not this general intelligence that can solve my math problems and write the birthday card to my mother-in-law. But it’s a narrow AI that can help cure cancer, right? So do more of the narrow AI that’s targeted toward the medical side, where the returns are highest, I think. That’s, again, probably safer and more targeted toward things that are better. So I do think smarter people than me can come up with good ideas along these lines that would probably move us in a better direction.
Huiyu Li:
Okay, thank you. So let’s take some questions from the audience. We have pre-submitted questions as well as questions from the live audience. I’ll give priority to the live audience. So first question, “AI can come up with new ideas. How do you know what percent of those ideas are going to be good ideas?”
Prof. Chad Jones:
No, this is a great question. When Huiyu was a graduate student and we were teaching growth, we talked about growth as a good thing. That ideas are all good and that’s the way things work. If I talk to MBAs or my parents or other students, people other than economists, they’ve long realized maybe not all ideas are good. There was the Oppenheimer question, the Trinity test. Or the Cuban Missile Crisis in the early 1960s. Historians look back at the Cuban Missile Crisis and estimate there was a one in three chance that we had a nuclear exchange, right?
I think our view of technology in the 20th century would be very different if that coin toss had gone in the wrong direction. We’re very lucky it didn’t. But had we had some major nuclear exchange, we would view technology very differently. So I do think there are bad ideas as well as good ideas and we need to guard against the bad ideas. And partly that insight has shown up in various things I’ve done. It’s hard to tell what are the good ideas and what are the bad ideas.
Marie Curie invents radium or discovers radium, the element. And after she discovered it, people are using it in jewelry. They’re making earrings and bracelets and necklaces because it looks great, and it turns out it causes cancer, right? Or asbestos. There are lots of ideas that at first looked good that turn out to be bad. So you just have to have the scientific and medical establishments judge these things and figure them out as soon as possible. I think it’s not obvious, but it certainly matters a lot more than I’d appreciated. That I would say.
Huiyu Li:
One of the common questions we get from the pre-submitted questions is about potential policies to mitigate the hard transition on the labor market side. I was wondering if you have any thoughts on that?
Prof. Chad Jones:
Yeah, I wish I did. So you probably saw there was a paper that some economists at Stanford, Erik Brynjolfsson, Bharat Chandar, and a third author whose name I’m forgetting right now, just came out with that used the ADP data. This is labor market data from a private company that does paycheck processing. So they have access to, I think, let’s say two thirds of the private payrolls of the United States. So they could really look at big data to see, are we noticing places where AI is disrupting the labor market? And before their paper, I think we didn’t have any evidence, actually.
So if you look at employment overall in areas where we think AI would have big effects, it was growing rather than falling. Okay. So before this paper, there were kind of small data sets where you couldn’t look finely. With this big data, they broke it out by age group. And what they found is that employment is growing in all these areas where you think AI would be having impacts, but all the growth is for people age 30 and above. If you look at new entrants to the labor market, the 22-to-25-year-olds or 25-to-30-year-olds, employment among those groups in jobs where we think AI is going to have an impact has actually fallen, by 15% for the youngest group, the 22-to-25-year-olds.
And so there’s a question of whether that’s really AI having its impact. If you look at those groups, their wages are going up, not down. And if you thought it was AI displacing people, you’d think the employment would go down and the wages would go down. So we don’t see it on the wage side, so it’s not totally clear what that is. But we’re looking hard for the first indications of AI displacement, and this is kind of the best evidence so far. So we don’t have great evidence yet, but I think everyone thinks this is something to be worried about.
I don’t know what the answer is. I think it’s an important topic. As I said, I think some types of skills are easier to automate away and some types of skills are less easy to automate away. And investing in the skills that are less easy to automate away seems like a good thing for young people to be doing right now. There are all sorts of other things that people are going to be able to do, right?
So I was reading a speech by David Deming, who’s one of the experts on AI and labor markets. He’s now the dean of Harvard College, and he was giving a welcome speech to the new generation of students, and they were very concerned with this question. And he said, “I know people say it can be difficult, and it can be, but there are also opportunities here.” So creating new companies has probably never been easier, right? Your ability to create something of enormous value, given that if you do something enormously valuable you can sell it to a billion people tomorrow. There are lots of opportunities out there as well. So I think be aware of the opportunities.
As time goes on, I do think some redistribution may very well be necessary. You give a share of the S&P 500 to every child when they’re born. And maybe something like that is going to be enough. But I think we do redistribute. We may need to do it in the future even more, and that’s something to be aware of and look out for. But yeah, we’re not there yet.
Huiyu Li:
Okay. Well, this is related to one of the most courteous questions I saw in the pre-submitted questions. It begins with, “Dear Dr. Jones, I’m very excited to listen to your presentation and learn about the potential consequences of AI. I’m a student who is looking for opportunities to work in AI fields. Given this speech on AI and economic growth, it would be very helpful to know what skills you would recommend new generations gain.”
Prof. Chad Jones:
Yeah, that’s a great question. I do think some of my remarks are relevant here as well. Remember, one of the lessons of economic history is that the full adoption, the full diffusion of these technologies throughout the economy to where they have visible impacts on GDP often takes a decade or two decades or three decades. It’s a gradual process. And I think we see these AI models coming, we all use them, so they are having their impact. But they haven’t yet diffused through to every job, every aspect of the corporate world, every aspect of every business.
I think that integration, the integration of AI into businesses is going to create enormous value. But it’s also the kind of thing that’s going to take 20 years. And so I think, if you ask programmers and software engineers, are they going to have jobs two years from now when AI can do lots of coding? I think they will because I think integrating AI into business is going to be a multi-decade process. And there are going to be lots of jobs there to help facilitate that. And so I do think that, for example, is the kind of thing that the returns are going to be high rather than low.
Huiyu Li:
Okay. I think we have time for one more question. I see a lot of questions in our pre-submitted group that talk about learning, the incentive to learn. Now that you can type into ChatGPT to get your answers, where do students, or even we, get the incentive to learn?
Prof. Chad Jones:
Yeah, these are fantastic questions. As teachers, we face these kinds of questions very, very… they’re right in our face right now. Because for the last 10 years when I taught MBAs, I would give them a take-home exam. We’d give them four or five questions that let them go gather some data. Integrate the things that we’d talked to them about. Think about how to apply something to a business model. And they were great questions that we could ask them to do. And I think they were things they were going to be doing when they went back to the business world. So I thought the ability to ask great questions on take-home exams was a wonderful way to test students and make sure they learned what we think they need to learn.
You can’t do that now. The AI can answer every take-home question that we’re going to ask, better than 95% of our MBAs. Right? It’s just incredibly good at this stuff. So that worries me. How do we ensure students have the right incentives to learn? Because I think all of us go into learning with the best of intentions. So I say, look, first of all, AI as a tutor is amazing. Right? There’s never been a better time to learn than now. Anything you want to learn, you ask the AI to teach you.
And I can tell it, “No, teach me about eigenvalues in the context of growth models, not in the context of predator-prey models or physics models.” It’ll give me an explanation of exactly what I’m asking about, in my area of expertise. And it can quiz you. It can go back and forth. Learning has never been easier than it is today. And yet, if I were writing a research paper or doing a problem set, I want to learn, I have the best of intentions. And yet, I think this is like the negative race dynamic of the AI labs.
I think, okay, even if I do my best, the AI is better than me. And all my classmates are going to use the AI and they’re going to get a better grade and I’m going to get the worst grade because I tried to learn and they didn’t. And so what do I have to do? I have to use the AI instead. And then once I’m using the AI, the incentive for me to learn, it’s easy for it to slide away. Because I’ve got lots of other things to do in my life.
So I definitely worry that even though learning’s never been easier than today, the incentives are distorted. So what we’ve done with our MBAs, and none of us like this, is we’ve gone back to blue book exams in class. So you come into the classroom for three hours at the end of the quarter, I give you a blue book with some questions. You don’t get a phone, you don’t get anything. It’s just you and the blue book, the way we all learned. And that provides high-powered incentives, because they know they’re going to be tested on what’s in their brain. And I think that’s actually what you need to do. So I think there are good answers to these questions, but it’s very much something that everyone has to worry about now.
Huiyu Li:
Yes, a lot of things to figure out. Well, thank you very much Chad, for a very engaging discussion.
Prof. Chad Jones:
Great.
Huiyu Li:
Thank you.
Prof. Chad Jones:
Thank you very much.
Huiyu Li:
Thank you. So our next event is on September 19th. We will see a presentation on AI Implications for Workforce Development and Economic Mobility. I hope you can all join us. Thank you.
Summary
Chad Jones, the STANCO 25 Professor of Economics at the Graduate School of Business at Stanford University, delivered a live presentation on AI and its potential consequences for economic growth over the next two decades on September 2, 2025.
Following his presentation, Professor Jones answered live and pre-submitted questions with our host moderator, Huiyu Li, co-head of the EmergingTech Economic Research Network (EERN) and research advisor at the Federal Reserve Bank of San Francisco.
You can view the full recording on this page.
About the Speaker

Charles I. (Chad) Jones is the STANCO 25 Professor of Economics at the Graduate School of Business at Stanford University. He is a member of the American Academy of Arts and Sciences, a Fellow of the Econometric Society, a former co-editor of Econometrica, and the author of two undergraduate textbooks. His research centers on the causes and consequences of long-run economic growth. Most recently, he’s focused on how population and economic growth interact and the economics of artificial intelligence, growth, and existential risk.