Anton Korinek | The Economics of Transformative AI

Date

Tuesday, Apr 22, 2025

Time

9:00 a.m. PT

Location

San Francisco, CA

ADDITIONAL INFORMATION

Download the Slides (2.6 MB)

Transcript

The following transcript has been edited lightly for clarity.

Niel Willardson:

Welcome, everyone. Thank you for joining us for the next EmergingTech Economic Research Network, or EERN, event today. I’m Niel Willardson. I serve as the Interim Executive Vice President of Supervision and Credit here at the Federal Reserve Bank of San Francisco. For those of you who have yet to experience one, EERN events are opportunities to exchange ideas, learn about research, and share insights with those who are interested in studying the economic impacts of emerging technologies. For those of us at the Fed, they give us insight into how developments from emerging technologies, and particularly artificial intelligence, can affect productivity and the labor market. This understanding is critical to achieving the Fed’s dual monetary policy mandate of stable prices and full employment. Past events have focused on such topics as job matching in the age of AI, the future of work with AI, and the impacts of AI on real-world productivity.

Today’s installment of our EERN event series will explore the economics of transformative AI technologies to dig into the possible impacts on economic growth as well as the use of AI as an economic research tool. We’ll hear from Anton Korinek of the University of Virginia, Department of Economics and Darden School of Business. Following this presentation, Professor Korinek will discuss his research with our host/moderator, Huiyu Li, co-head of EERN and research advisor at the San Francisco Fed. As a reminder, this event is being recorded and will be available on our EERN website following the discussion. Finally, please note that the views you hear today are those of our speakers and do not represent the views of the Federal Reserve Bank of San Francisco or the Federal Reserve System. With that, we’re ready to get started, and over to you, Huiyu.

Huiyu Li:

All right, thank you very much. Okay. Today, we have a lot of exciting information to share with you, so we’ll start right away. Over to you, Anton.

Anton Korinek:

Thank you very much, Huiyu, and thanks to all your colleagues who’ve made this possible. I thought I’d prepare a couple of slides to give a kickoff presentation on the economics of transformative AI, which is more or less what some people call AGI or powerful AI; there are lots of words for very similar concepts. I want to start with some introduction and context. We have seen roughly four paradigms in AI over the past 15 years. The 2010s brought the paradigm of deep learning, which gave rise to lots of specialized or narrow AI systems that could do one thing, but only one thing, like, say, image recognition. Then in 2022, LLMs emerged and we had the ChatGPT moment. Of course, we had preliminary LLMs much earlier, but that’s when they entered public consciousness. And what I wanted to point out about them was that they were the first type of AI systems that were general purpose.

You could ask ChatGPT for a recipe, or to help you with economic research, or with a medical diagnosis. They’re general in the sense that they span a whole range of essentially cognitive capabilities that we humans have, although, of course, at present at a level that’s still not quite where we are. In 2024, last fall essentially, a third paradigm started: the paradigm of reasoning models, which addressed many of the shortcomings of the first wave of LLMs, many of the hallucinations, the difficulties with basic analytic tasks. And now, in 2025, we are starting to see the emergence of the first AI agents. What was this all driven by? Well, I’m an economist, so I have to show you some charts and numbers. This is a chart from Epoch AI, which shows the amount of computational power that has been used to train some of the leading AI systems at a given time over the past 15 years.

This is a logarithmic scale, so a straight line essentially corresponds to exponential growth, and it is exponential growth at a really breathtaking speed, at a rate of 4.5x per year. That means after five years, the amount of computational power has increased more than a thousandfold. And this has gone hand in hand with rapid improvements in algorithmic efficiency, at a rate that’s estimated to be something like 2.5x per year. So if you multiply those two numbers, you can see that the effective amount of computational power going into training our leading AI systems has grown at a rate of 10x or more every single year for the past 15 years. And right now, there is no end in sight to this trend.
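To make the arithmetic concrete, here is a back-of-the-envelope check of those two growth rates; a minimal sketch using the 4.5x and 2.5x estimates quoted above:

```python
# Back-of-the-envelope check of the scaling arithmetic quoted above.
compute_growth = 4.5   # training compute growth per year (Epoch AI estimate)
algo_growth = 2.5      # algorithmic efficiency growth per year (estimate)

print(compute_growth ** 5)           # ~1845: more than a thousandfold in five years
print(compute_growth * algo_growth)  # ~11.25: effective compute grows >10x per year
```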

One manifestation of this is that we have seen efficiency increase rapidly and the cost of AI systems go down quite significantly. This is a chart from a paper of mine from last December, where you can see basically how much cheaper just one particular AI system, GPT-4 (later GPT-4o), became within 18 months. You can see here that the costs of digesting text and of producing text have decreased by 80 and 90%, respectively, while at the same time the quality of the outputs, shown here in the right-hand graph, has grown very significantly. The measure in this graph is the so-called LMSYS score, which is equivalent to an Elo score in chess. The difference between 1186 and 1340 tells us that the system from August 2024 would beat the one from March 2023 in almost three quarters of all cases.
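For readers who want to verify that “three quarters” figure: LMSYS-style arena ratings build on the standard Elo expected-score formula, which can be checked in a couple of lines (the two ratings are the ones quoted above):

```python
# Standard Elo expected-score formula, as used for chess ratings and
# LMSYS-style chatbot arena scores.
def elo_win_prob(rating_a: float, rating_b: float) -> float:
    """Probability that the player/model with rating_a beats rating_b."""
    return 1.0 / (1.0 + 10.0 ** (-(rating_a - rating_b) / 400.0))

print(elo_win_prob(1340, 1186))  # ~0.71: the Aug '24 model wins about 71% of the time
```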

Now, the reasoning models that emerged last fall have given rise to new scaling laws. Scaling laws tell us how much AI systems improve when we throw more computational power at them. Last fall, there were concerns that we may see the end of scaling, but what we saw instead is that the old scaling laws, which described how advances in AI derive from more and more computing power in training, have in fact been complemented by two new scaling laws: inference scaling, which comes from the AI being given more time to think about its responses, and post-training scaling, which comes from the AI being trained longer on how to reason. And based on these two new scaling laws, we have seen rapid advances in the ability of reasoning models just in the past six months. So what has this led to? In the past six months, it’s led to stark improvements in analytic benchmarks.

Those were the things that large language models were traditionally really bad at. At this point, the latest reasoning model by OpenAI can solve questions from the American Invitational Mathematics Examination at a rate of 99.5%, which would place it essentially among the top 10 to 20 humans every year. We have even had to create new benchmarks that are harder than anything we have seen before, and the AI is starting to really make inroads in those benchmarks, for example, 15 to 19% on so-called FrontierMath. However, I should also point out that this comes hand in hand with new economics. Sometimes solving a single one of these really difficult FrontierMath questions, which I, and probably Huiyu, couldn’t solve, requires the AI to produce so much text, so many tokens, that it amounts essentially to writing several hundred books, and then at the end of it, it spits out the right response.

So, this is how these new reasoning models work. We’ve also seen rapid progress in coding, to the extent that it has given rise to the vibe coding trend, where people just let the AI write things and don’t even check it anymore. And one of the leading labs, Anthropic, is predicting that 2026 will be when they’re going to write the last line of code. So let me turn to the AI agents. What are AI agents? Well, they are essentially systems that autonomously pursue longer-term goals rather than producing just a single response in a chat, like what we are used to from the traditional ChatGPT. And how do they do that? They can engage in strategic planning, they can use a certain extent of long-term memory, and they can also employ external tools like visiting the internet and clicking on things, or using a compiler, writing and then executing their own computer programs, and so on.

So, these agents build on the increase in the reliability of generative AI systems and on their ability to process longer texts and to operate much faster. And they also have a growing number of use cases in economics, which is what I will turn to in two or three minutes. But first, since I’m an economist, I can’t help but speak about the comparative advantage of the AIs. Well, at this point in April 2025, and this is moving so fast, I would say that the leading AI systems have broad general world knowledge, but maybe not quite as deep as we humans have in a specific context. And I would also say the leading AI systems already have superhuman performance in processing information within the context window, meaning within everything that we can upload for them to process at once, and that’s a pretty limited amount of information. So, in some sense, I would say we have AGI subject to context window limits.

Well, what’s our comparative advantage as humans? We have significantly narrower world knowledge, like I know much less about, say, quantum physics than the leading large language models, but our specialization means we can go deeper. So right now, I believe that I still have more specialized knowledge in my domain of expertise in economics. And, of course, one of the big advantages of being human is that our knowledge persists, and we don’t wake up with a blank-slate context window every morning. So comparative advantage means that we should let the AI do what the AI is good at and we humans should focus on what we are good at, and that’s going to make for the most productive working relationship. However, one of the big bottlenecks, and I think a lot of the systems that people are currently working on deal with that, is the information exchange between humans and AI.

So, for example, AI agents deal with this by letting the AI do more on its own before it has to go back to the human. And there are a number of systems that try to address this along various dimensions. So in the second part of my presentation, I want to focus on generative AI for economic research. And I should say I have an ongoing project with one of our journals in economics, the Journal of Economic Literature, for which I am writing an update on the latest use cases and the latest capabilities of generative AI every half year. If you are interested in this, if you are a fellow economist, you can sign up for it using the QR code that I’m showing here at the bottom left. And the update I’m currently working on is essentially about agentic AI and will come out in the next few weeks.

So, in this line of research, I have identified six categories of capabilities that AI can help with: ideation and feedback, background research, then the meat of research, which is coding, data analysis, and math, and then, at the end of it, writing it all up. I don’t want to bore you with that, but there is a long list of all the capabilities that I’m documenting carefully. Here, I’ll just highlight two of them, which have been possible only for roughly the past six months, thanks to the reasoning models that I mentioned. The first one is when I asked OpenAI’s o1 to log-linearize an equation that essentially represents an arbitrage relationship between short-term and long-term interest rates. Now, for those of you who have done log-linearization, it’s a pretty painful, arduous process. Well, the AI solved it in 53 seconds and gave me the correct result.
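To give a flavor of what such a task involves, here is a minimal sketch of a log-linearization of a standard two-period expectations-hypothesis relation between short- and long-term rates; the specific equation is an illustrative textbook assumption, not the one from the talk:

```latex
% Illustrative only: assume a standard two-period arbitrage relation between
% the long rate R_t and the short rates r_t and E_t r_{t+1}:
%   (1 + R_t)^2 = (1 + r_t)(1 + E_t r_{t+1}).
% Taking logs and using log(1 + x) \approx x for small rates yields the
% log-linearized form: the long rate is approximately the average of
% current and expected short rates.
\[
  (1 + R_t)^2 = (1 + r_t)\bigl(1 + \mathbb{E}_t r_{t+1}\bigr)
  \quad\Longrightarrow\quad
  R_t \approx \tfrac{1}{2}\bigl(r_t + \mathbb{E}_t r_{t+1}\bigr)
\]
```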

My co-author didn’t want to believe it and did it manually. It took him about two and a half hours, and he got to the same solution. A second example is when I used o1 to produce code to simulate one of the fundamental growth models in economics, the Ramsey model. The difficulty there is that you have to solve for a dynamic system that is saddle path stable. I won’t explain the details, but essentially, one year ago, every AI system I tested on that failed; these reasoning models, actually all of them, aced it. And here, o1-preview had to think for just 34 seconds, and then it correctly simulated the Ramsey growth model and gave me really beautifully written and documented code.
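For readers curious what “saddle path stable” means in practice, here is a minimal sketch of the kind of code involved: a shooting-method simulation of the Ramsey model. The functional forms and parameter values are illustrative assumptions, not those from the talk or from o1’s output:

```python
# Shooting-method simulation of the Ramsey growth model: bisect on initial
# consumption c0 until the path converges to the saddle-path steady state.
# Functional forms and parameter values are illustrative assumptions.
alpha, delta, rho, theta = 0.33, 0.05, 0.03, 2.0  # technology and preferences

def f(k):        # production function y = k^alpha
    return k ** alpha

def f_prime(k):  # marginal product of capital
    return alpha * k ** (alpha - 1)

# Steady state: f'(k*) = delta + rho and c* = f(k*) - delta * k*
k_star = (alpha / (delta + rho)) ** (1 / (1 - alpha))
c_star = f(k_star) - delta * k_star

def shoot(c0, k0, dt=0.01, T=200.0):
    """Integrate k' = f(k) - delta*k - c and the Euler equation
    c'/c = (f'(k) - delta - rho) / theta forward from (k0, c0).
    Returns +1 if capital overshoots k* (c0 too low), -1 if capital
    starts falling or c overshoots c* (c0 too high), 0 otherwise."""
    k, c = k0, c0
    for _ in range(int(T / dt)):
        dk = (f(k) - delta * k - c) * dt
        dc = c * (f_prime(k) - delta - rho) / theta * dt
        k, c = k + dk, c + dc
        if k > k_star:
            return +1            # consuming too little: capital overshoots
        if dk < 0 or c > c_star:
            return -1            # consuming too much: capital falls back
    return 0                     # numerically on the saddle path

k0 = 0.5 * k_star                # start below the steady state
lo, hi = 0.0, f(k0)              # bracket for initial consumption
for _ in range(60):              # bisection on c0
    c0 = 0.5 * (lo + hi)
    verdict = shoot(c0, k0)
    if verdict > 0:
        lo = c0
    elif verdict < 0:
        hi = c0
    else:
        break

print(f"k* = {k_star:.3f}, c* = {c_star:.3f}, saddle-path c(k0) = {c0:.4f}")
```

The bisection exploits the saddle-path property directly: any initial consumption above the stable path sends capital crashing, any below sends it past the steady state, so the converging path can be bracketed from both sides.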

All right. As I have observed, the way that economic researchers have been using these tools for the past two and a half years is to automate specific tasks. And I listed these tasks here in the rectangles that you can see: ideation, background research, writing, coding, math, and so on. Increasingly, we can define workflows and produce agents that can do the same thing. And that means you as a researcher, or anyone in any area of cognitive work, can essentially produce an army of virtual workers that can do the work for you. I also want to spend a moment on the economic implications of these shifting paradigms. For the past two and a half years, many of us have paid $20 a month to use systems like ChatGPT, Claude, Gemini, and so on. And for us in advanced countries, $20 a month is not terribly much, right? However, late last year, OpenAI came out with their ChatGPT Pro subscription, and Anthropic just recently came out with their Claude Max subscription, and they’re suddenly charging $200 a month.

So, that’s quite an increase: in one and a half years, the cost of using the leading AI systems in an almost unlimited manner has grown tenfold. Well, at 200 bucks, lots of people are starting to wonder, “Do I really want to pay that every month?” But that’s by no means the ceiling. Sam Altman has recently proclaimed that he will soon offer a $20,000-a-month PhD-level scientist that you can subscribe to and that will perhaps be able to solve many problems that professors and PhD researchers like myself are able to solve. So this brings me right to the economic implications of where this is all going. And whenever you see a really fundamental shift in the world, I think it is useful to take a step back and not just extrapolate what has been happening in the past couple of years or even the past few decades, but to ask more fundamentally: what is going to change, what will be different under the new paradigm that we may be about to enter?

And for that, I want to focus on the long arc of economic history. I want to start before the Industrial Revolution, during the Malthusian Age; think the Middle Ages in Europe or something like that. From an economic perspective, the Malthusian Age was characterized by technology that was pretty much stagnant. We economists usually label technology by the letter A. And the two most important factors of production, what we used to produce output, were land and labor. Back during those times, land was in fact the bottleneck. If you were a landowner, you were among the most powerful people on earth. And if you didn’t have land, if you only had labor, then you were, economically speaking, almost dispensable. It was a really brutal age: the marginal worker earned just enough to survive, and the vast majority of the population lived at subsistence levels.

Well then, 250 years ago, something really beautiful happened. We entered the Industrial Age. All of a sudden, technology started to advance quite rapidly, at a pace of 2 to 3% a year. We still don’t know for sure what this was driven by, but it probably had something to do with the scientific method, with the Age of Enlightenment, and so on. And the other beautiful thing that happened was that land, the constraining factor that held us back from further expanding output, was replaced by capital as the second most important factor of production. So now, our two factors of production were labor and capital, and capital can be reproduced: you can just build more machines, more factories, and so on. That means labor was suddenly the bottleneck factor to the expansion of output; labor was special, and labor became very valuable. By some measures, the average worker today earns about 20 times as much as they earned at the beginning of the Industrial Revolution.

So, you can say the big beneficiary of the Industrial Age was labor. Well, now, we are about to enter a new age, the age of AI, and I think we can be almost certain that technology is going to accelerate further and capital is going to continue to be reproducible. But the big change is that if we have machines that are as powerful as our minds, labor is suddenly also reproducible. And that means labor may lose its special status, its bottleneck status, its scarcity, because we can produce virtual workers with the push of a button. And if we have the requisite robots, which may take a few years longer than AI but are almost certain to come if we have powerful AI, then physical labor is also reproducible using capital and can just be accumulated ad infinitum. So labor is going to lose its special status. Now, what will this imply? Well, there is a lot of uncertainty right now as to how far and how fast AI is going to advance.

So, the best approach to deal with this uncertainty is scenario planning, and I want to propose three useful scenarios to you. The first one is what I want to call a business-as-usual scenario: generative AI is going to give us a productivity boost kind of akin to the internet boom, but it’s not going to fundamentally change our world. And a lot of people believe that this is what’s going to happen. However, scenarios two and three are scenarios in which we develop artificial general intelligence, either within 10 to 20 years or, in the fastest, aggressive AGI scenario, within two to five years. That means AI would advance either gradually or rapidly to the point where it can do everything that our brains can do.

And in fact, the third scenario is probably the closest to the predictions coming out of the frontier labs in San Francisco. So, what would this imply for output and wages? Well, for output, you can see that the blue line is the baseline and the yellow and red are the slow and aggressive AGI scenarios. Output would grow much faster if we have AGI, because you would suddenly have all these virtual and machine workers that can perform what only humans currently can perform. On the other hand, if we are close to reaching AGI, wages would first stagnate and then decline and potentially even collapse as machines can do everything that we can do. So, what would this imply for income distribution? There is the potential for unprecedented levels of income concentration. The benefits of AGI may accrue primarily to capital owners, and the economy will produce what is demanded.

So, if humans lose a significant fraction of their income, the economy would retool: it would probably produce lots and lots of server farms, and we would probably have fewer animal farms producing human goods. So, I think preparing for AGI means that we need to really fundamentally rethink our mechanisms of income distribution and social insurance. For example, I have floated a proposal for a seed UBI (universal basic income). Right now, we don’t need a UBI and we don’t want a UBI, because there are still so many tasks that we need humans for. But if AGI is developed really quickly, then we may want to have a system in place that we can employ to make sure that income distribution does not deteriorate too much. So one of the fundamental questions that we would have to face in an AGI world is whether humans actually need work if they can’t earn a living from it, or whether we can be happy finding a different way of spending our time.

Now, we would also have to rethink our macroeconomic frameworks under AGI. We would have to rethink aggregate demand management tools like the Phillips curve. Ultimately, in an AGI economy, human unemployment may not be the best indicator of capacity utilization anymore; we may need other indicators for that, and we’ll have to adapt our monetary policy frameworks. We would probably also have to shift fiscal policy, since right now the prime source of revenue for governments is labor. And then, we would also have to redefine our economic indicators and measurements. So, I want to also speculate a little bit about the short to medium term, and I should emphasize that there is so much uncertainty that I really don’t have a lot of confidence, but my median expectation is that the rise of AI agents will lead to noticeable productivity and labor market effects within the coming 12 months.

My personal expectation is that we are going to see a productivity increase from this that will be reflected in macro statistics. And then, if or when AGI is reached, I expect that the digital or AI part of the economy will just skyrocket. But we need to keep in mind that that part is still relatively small compared to overall GDP. So we are essentially adding up a rapidly growing exponential that starts from a small base with a much slower exponential, which is traditional output growth. And what that means is that even under an aligned takeoff of AI, I believe the full macro effects will take several years to be reflected in things like growth rates, and robotics and physical actuators would be a major bottleneck, especially at first.

So, I want to conclude by emphasizing that if the predictions coming out of the frontier labs are right, then it’s crunch time now, and all of us, economists and people working in related areas, need to ask ourselves some very fundamental questions. We need to be prepared for these scenarios. And so I want to ask you: if you knew that AGI would be achieved within two to three years, what would we need to understand better? What should you or your organization do now to be prepared for that scenario? And how would you prepare personally? Thank you. I’ll just leave a slide with further readings up for a moment, and then I will switch to the overall EERN slide. Back to you, Huiyu.

Huiyu Li:

All right, thank you very much, Anton. That was very thought-provoking. First, I must say I read the Journal of Economic Literature piece carefully, and it’s a great public service. It gave me a lot of excitement because there were a lot of tools that we could use. I’m also a little bit scared, because you mentioned a tool in computer science that can generate a research paper from ideation all the way to the end and even have it peer reviewed. So then I ask myself, what’s my job? Where’s that going? So I think there are a lot of things to think about. Let me just start with something that’s a little bit more concrete. There are so many tools out there, and they’re all changing very quickly. What would be your advice for an economist, or any professional, on choosing which tool to use?

Anton Korinek:

Yeah. So, I should say first, as you emphasized, that the field is moving rapidly, and that means if you use one tool today, then tomorrow another one may already be better. But what I recommend is to use probably two, or maybe even three if you want to be very ambitious, of the frontier chatbots. That includes OpenAI’s ChatGPT, Anthropic’s Claude, and Google DeepMind’s Gemini. See what works best for you; try things out on different systems. And I also want to say: make sure that you keep yourself up to date. I personally spend probably an hour every day just reading about the recent advances in AI, and I feel like I’m having difficulty keeping up. But at the same time, if I didn’t, I know that my workflows would be much more inefficient and I would actually be wasting a lot of time, because I would be doing things that the AI can already do.

Huiyu Li:

I see. You mentioned a score, LMSYS. How well does that correlate with usefulness for economic research? I mean, is that a number I can use, so that if I go for the highest number, that’s likely going to be the tool that gives the biggest payoff? Or do we really have to try things to figure it out?

Anton Korinek:

Overall, it’s a pretty good benchmark, but yes, it has been gamed at times, and it captures more or less the traditional functioning of chatbots. So for these new reasoning models, I would suggest that you look at other benchmarks, like, for example, performance on the AIME or performance on FrontierMath. Or if you want to do lots of coding, it’s also useful to look at coding benchmarks.

Huiyu Li:

I see. And then, in economics, different fields tend to coordinate on one tool. Do you see something similar happening with these AI tools, where people eventually coordinate on one particular set of tools?

Anton Korinek:

I don’t think I can see any of that right now. And the main reason is probably that in some ways, right now, they’re pretty interchangeable. Let’s say you are a ChatGPT user today, and then you find out tomorrow that Anthropic has released a chatbot that’s really an order of magnitude better. The switching cost for you is very low. And that’s why I think we don’t see one clear front-runner: you have very low switching costs.

Huiyu Li:

Okay, that’s great. That’s different from switching from MATLAB to Python.

Anton Korinek:

Yes. That’s absolutely right.

Huiyu Li:

That was painful. That’s painful.

Anton Korinek:

And the AI can translate your code from MATLAB to Python, and it’s very good at that.

Huiyu Li:

All right, I guess I want to ask a little bit about potential bottlenecks for using these tools. I mean, the reasoning models and agents are big progress, but in my own experimentation with AI, I find that there’s a question about reproducibility. Say you write a paper with these tools, and we need to produce code that other people can use to reproduce the results; I wonder if there’s any limitation to that. And another one is transparency. For example, if I use these AI tools to come up with a policy recommendation, there’s a question about what exactly generates that recommendation, what the mechanism behind it is, that gives people confidence to believe those tools. Could those be bottlenecks, or do you think these reasoning models could actually help us break those bottlenecks?

Anton Korinek:

No, I think you’re right that those are bottlenecks, and there’s always a risk of excessively anthropomorphizing these tools. But I think it’s similar to when you hire a research assistant: they will produce stuff that is going to be different from what another RA produces. Let’s say they rate text; they will rate it slightly differently than someone else would. And at some level, if you hire a research assistant, there’s also a bit of that black box problem. How did they get to something? And it’s very similar with these AI tools.

So we have to use the same mechanisms to deal with that. We have to document everything very carefully and essentially share and be transparent about everything we can be transparent about, like our prompts, our unprocessed data files, and so on. But ultimately, there’s always going to be some element of fundamental irreproducibility, and I think that’s something that we humans regularly face in our lives.

Huiyu Li:

I see. And then, on transparency: right now, if we use our economic models or data to produce a policy recommendation, we need to answer what exactly is in our model and what exactly in the data is driving that result. I find it a little bit difficult to explain what in the AI’s answer is driving results. Are there any improvements or advancements in the technology that allow us to talk more about what’s underlying the results?

Anton Korinek:

Yeah, it’s the same problem that you face when you ask a human what’s underlying their answer. Usually, they can give you some explanation, but sometimes the explanation is actually an after-the-fact explanation. Now, if you have a system where you have accumulated lots of information and you use the AI for retrieval, a kind of retrieval-augmented generation system, then I think we have the attribution and the sourcing already under control nowadays. But if you just ask a plain vanilla language model, “tell me about X,” then these models can’t really reliably tell you where they got the information from.
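To make that retrieval-augmented generation pattern concrete, here is a minimal sketch: retrieve the most relevant stored passages, then hand them to the model together with the question so the answer can cite its sources. The word-overlap scorer and the `call_llm` stub are illustrative stand-ins, not a real embedding index or chat API:

```python
# Minimal sketch of the retrieval-augmented generation (RAG) pattern.
def score(query: str, passage: str) -> float:
    """Toy relevance score: fraction of query words found in the passage.
    Real systems use embedding similarity instead."""
    q, p = set(query.lower().split()), set(passage.lower().split())
    return len(q & p) / max(len(q), 1)

def call_llm(prompt: str) -> str:
    """Placeholder for a real chat-model call, so the sketch runs end to end."""
    return "(model answer, citing its [sources], would go here)\n" + prompt

def rag_answer(query: str, corpus: dict, k: int = 2) -> str:
    # Pick the k most relevant passages and pass them, with labels, to the model.
    top = sorted(corpus, key=lambda src: score(query, corpus[src]), reverse=True)[:k]
    context = "\n".join(f"[{src}] {corpus[src]}" for src in top)
    return call_llm(f"Answer using only these sources and cite them:\n{context}\n\nQ: {query}")

docs = {"notes_2024.txt": "The committee maintained the target range for rates.",
        "outlook.txt": "Participants projected slower growth next year."}
print(rag_answer("What happened to the target range?", docs))
```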

Huiyu Li:

Okay, I see. And then, at the Fed right now, we’re experimenting with using AI tools. I’m trying Copilot within my computer system. So, I guess the question is about how we protect data, like classified data and confidential data, while we interact with these tools to take advantage of the potential productivity gains. Do you have any thoughts on that?

Anton Korinek:

Yeah, I think in many organizations that’s a really critical problem, and sometimes it’s an important bottleneck. I’ll say two things about how to minimize it. The first one is that you can run your own local model: there are some open-source models that are actually very good already and that you can just download and then run on your closed computer system, even without any internet connection. That would be the safest thing of all, because you can really guarantee that there’s not going to be any information leakage. The second thing is that a lot of organizations have agreements; for example, you mentioned Copilot.

Lots of organizations have an agreement with Microsoft under which they have a dedicated Azure server that is only for them and where Microsoft promises that no information leaks out. If you store your files on it, I think you can be equally confident that a language model running on that server and processing your information will be as secure as the file storage. Now, an institution like a central bank probably won’t want to do that with everything, but for a lot of things, if you upload them to the cloud that belongs to your institution, you can also use language models on them.

Huiyu Li:

So, this is kind of a segue into the second part of your talk, about the implications for workers. If I look at your Journal of Economic Literature piece, it seems like what the AI tools can do now can replace a pre-doc, or I guess an RA who hasn’t had PhD training, or maybe even one with PhD training, since they can do the Ramsey growth model now. But maybe there are still things that the economists have to tinker with to come up with the final solution. Is that the right way of thinking about the status right now, that AI can replace an RA but not yet an economist?

Anton Korinek:

Yeah, I would say a year ago, a pre-doc was probably a good description, but this is such a fast-moving target. When I wrote the last update in December, I would’ve probably said something like a second-year graduate student. And increasingly, I feel like whatever I can do, if I can fit it into the AI’s context window, the AI can do as well. So the only superior ability that I perceive right now is that I can make connections beyond that context window, beyond, let’s say, the three papers that I’m uploading into the AI; and I have contacts, I know data sources that the AI can’t download from the internet, and so on. Honestly, I think our comparative advantage as researchers, I should say our absolute advantage, is shrinking fast, and of course, we don’t know how fast it’s going to continue to shrink in the future. That’s the million-dollar question.

Huiyu Li:

Yeah, that’s the scary part. After reading your survey piece, I think this is probably not just economics; in many fields, people are feeling a little bit scared or worried about their job prospects. We got a lot of pre-submitted questions along these lines, so I’m going to ask some questions from our audience about job security. First, we had a question from a college professor asking: are college professors going to be replaced by these AI tools? What are your views on that? And do you see that happening in the next five years, the next 10 years? I guess, what’s the scenario planning for a college professor?

Anton Korinek:

Yeah, being a college professor myself, I have to admit I have spent some time thinking about that question. And again, there’s tremendous uncertainty. Let me start with what in some ways is the worst-case scenario, but in some ways the best-case scenario: the possibility that we will have something like AGI within two years, or three, or five. I think the first thing that would happen is that essentially our ability to do research is going to be devalued quite significantly. Now, the question is what that will imply for education, because we are both researchers and teachers. I do believe that part of the value of the education we provide is a civic value, and that there will still be demand for educating our students, not just because it makes them valuable workers, but also because it makes them better educated citizens. I think that role will be preserved, but the fact that college graduates won’t necessarily earn the same skill premium that they earn today will probably reduce demand for education and, by extension, demand for college professors.

Huiyu Li:

Yes, and I think this relates to thinking about RAs or pre-docs or even graduate students. Part of their work is not just doing the task but learning how to do it. So, if this generation or the incoming generation is replaced by AI, then I’m also worried that there won’t be any future college professors to teach the following generation. So, I wonder if that could put a bottleneck on how long humans can actually continue to maintain their comparative advantage. I was wondering what your thoughts are on that part, I mean, on passing down knowledge to humans.

Anton Korinek:

Yeah, I can see this very much. There are lots of tasks that I asked RAs to do, let’s say two or three years ago, that I just hand over to the AI nowadays because the AI is just so quick. Now, talking specifically about skill formation, the good thing is that if you ask the AI, “can you explain to me how to do X, Y, or Z?” or “can you teach me this?”, it can also do that. So in some sense, especially before we reach something like AGI, the AI can act as a tutor for students and can actually allow them to learn more efficiently. That’s the positive side. The flip side is that, yes, if we do reach artificial general intelligence, then maybe it won’t be necessary to learn a whole range of things anymore, and the value of teaching them is going to go down.

Huiyu Li:

I see. Still along the lines of job security: actually, a large part of our audience are college students. They’re thinking about what they should learn and how they should prepare for a future with AI. So one of the questions we had was from students asking for your advice on what they should learn now and what types of AI fields they should get involved in. Do you have any advice for them?

Anton Korinek:

Yeah. So, I think in the short term, there is one winning strategy: whatever you are interested in, whether that is doing economics or producing movies or studying quantum physics, learn how to use the AI effectively for doing that. I think that’s going to be a winning strategy for at least the next year or two. Then after that, all bets are off in some ways. I think at some level we are going to have to step back and ask ourselves: how should we structure our economy and our world?

If we have these amazingly smart machines, does it make sense for us to try and compete with them, or should we limit ourselves to understanding them, to some oversight role, to ensure that there’s some level of human control? In some sense, we may be facing what a lot of manual physical workers faced over the past century: suddenly there are machines that can do what they used to do much more efficiently, much better, and much cheaper. And when that happens, I guess the reasonable thing to do is to let the machines do what they’re good at.

Huiyu Li:

I see. You mentioned that there are potential bottlenecks in terms of robotics. I guess once you have these AI agents, they could actually send commands to different robots, if the robots can understand them. How is that technology going? Is there still a gap, so that there could be physical types of work remaining for humans for, say, the next five years?

Anton Korinek:

Yeah, there’s definitely a gap, but I should also say that with the really rapid progress in foundation models over the past couple of years, robots have also become significantly better. Essentially, the brains of the robots were a significant bottleneck, and now that they have better brains, they have become a lot better. Now, if the aggressive scenarios coming out of the leading labs are correct, then I would think we would have AGI before we have robots that can do everything. And that means there would be this kind of in-between period. I don’t think it would be long, because the AIs would be really good at designing better robots, but maybe a year or two in which there’s going to be a significant premium on physical work, because it will be something that only humans can do and the AI can’t do yet.

Huiyu Li:

Okay. We had another set of questions related to productivity. One is about how the productivity gains from advances in AI will be measured in the data. Will they be captured in the data, or are we missing some important gains?

Anton Korinek:

How will it be captured? It will be captured very badly. And I think the reason is that in some ways our GDP statistics were designed for the industrial age. They were developed in the 1930s, at a time when the vast majority of our economy was industrial. You could count the number of machines or cars or whatever you produced, easily assign values to that, and then easily create productivity statistics. In a digital economy, that’s much, much harder. So just to make this tangible: I don’t know, Huiyu, you have probably felt that AI has made you significantly more productive, right?

Huiyu Li:

Somewhat. I’m still learning. I’m still learning how to use that.

Anton Korinek:

Yeah, I have certainly felt that. But the problem is that we usually measure the value of cognitive labor by the salaries that people are paid, and my productivity increases haven’t been reflected in my salary yet. So these are all unmeasured productivity gains, and that’s probably true for lots and lots of cognitive workers.

Huiyu Li:

Okay. Relating to productivity gains, this is kind of related to innovation and what you can do with these AI tools. One of the questions was that these tools are trained on historical data. So are there limitations in these tools’ ability to adapt to, say, rapid changes or to come up with paradigm-shifting new ideas? Or do you think it’s actually going in the direction where they’re able to do these things as well?

Anton Korinek:

I’ll anthropomorphize a little bit again: that’s one of the big bottlenecks for us humans too, right? We have been trained on historical data. In fact, I took six years of Latin when I went to school, and there are lots and lots of new things that we need to acquire. And it’s the same with AIs. Now, I think they’re actually faster at processing new information than we are, so I think they have the upper hand when it comes to absorbing new things.

Huiyu Li:

All right, we have some questions from the live audience here with us. I will just go through some of those questions. First, a commenter says, “Fantastic and thought-provoking. What do you think rising costs do to the speed of adoption and use? Does the technology get ahead of the practical applications in the economy because of the cost hurdle?”

Anton Korinek:

Yeah, so costs have been rising quite rapidly. I showed this one graph of the scaling where basically we’ve deployed 4.5x as much compute every single year; now, it has also gotten cheaper to deploy compute, so the actual cost has been rising only at a rate of 3x per year. So there are two things that are rising rapidly: one is the cost and one is the benefit, and there is indeed a horse race between the two. I can see a scenario where the benefits don’t quite keep up with costs if the AI disappoints. Last fall in particular, there were lots of people worried about that. Right now, it’s not my mainline scenario, but if that were to happen, then I think it would be plausible that we’ll see a correction, maybe even a little bit of an AI mini-winter, because investments would adjust with the adjustment of expectations. I personally think that over the next 12 months, the probability of that is less than 20%.

Huiyu Li:

Okay. Another question, is the shrinking availability of new training data creating a plateau for model performance?

Anton Korinek:

It is something that makes it harder to make further progress. And one of the ways the labs are dealing with it is lots of synthetic data. The analogy I want to offer is that when you first go to school, and then college, and then maybe even do a PhD, you first absorb lots and lots of material, all the material that has been researched in the past, to make your way to the frontier of knowledge. And after that, further progress is much slower. So in some sense, you can think of the AI as currently being pretty close to the frontier of all the knowledge in the world, and making further progress is harder, just like it’s harder for a PhD student to write their thesis than for a master’s student to read and study a book that has already been written. I think we are at that frontier now, and we can see the first glimmers of the AI making progress and creating new knowledge on its own. But yes, it’s more difficult than just absorbing knowledge.

Huiyu Li:

Okay. And then last question, how is AI changing the way economists collect and understand data?

Anton Korinek:

I think it hasn’t changed it enough yet. In some ways, our profession has been slow to adopt these tools. Now, when we say AI these days, we often mean large language models, and large language models have opened a whole new avenue of creating quantitative information from text. That has been one of the exciting areas of economic research in the past two years: we essentially have lots and lots of new data sources that we can analyze, sources that were just not possible to analyze before, because we can now automatically use LLMs to parse them. But I want to emphasize that language models are not the only type of AI system.

They have received so much attention now, but AI is broader than language models, and especially in the scientific domain and in economics, we can use it for lots of other things. That means we can use, for example, neural networks to solve very complicated models. We can use neural networks to parse relationships in very complicated supply networks and things like that. So I think there are lots and lots of applications of AI that people are only starting to figure out. And as AI gets better and better, we are going to increasingly deploy AI agents both to collect and to analyze the data that is relevant for us as economists.

Huiyu Li:

Thank you very much. Thank you for sharing your insight with us. Thank you.

Anton Korinek:

Thanks for having me.

Huiyu Li:

That concludes our seminar. Thank you very much.

Summary

Anton Korinek, Professor at the University of Virginia, Department of Economics and Darden School of Business, delivered a live presentation on the economics of transformative AI on April 22, 2025.

Following his presentation, Professor Korinek answered live and pre-submitted questions with our host moderator, Huiyu Li, co-head of the EmergingTech Economic Research Network (EERN) and research advisor at the Federal Reserve Bank of San Francisco.

You can view the full recording on this page.



About the Speaker

Anton Korinek is a Professor at the University of Virginia, Department of Economics and Darden School of Business as well as a Visiting Scholar at the Brookings Institution, a Senior Researcher at the Complexity Science Hub Vienna, a Research Associate at the NBER, and a Research Fellow at the CEPR. He received his PhD from Columbia University in 2007 after several years of work experience in the IT and financial sectors. He has also worked at Johns Hopkins and at the University of Maryland and has been a visiting scholar at Harvard University, the World Bank, the IMF, the BIS and numerous central banks.

His research analyzes how to prepare for a world of transformative AI systems and has been featured in the New York Times, Washington Post, Wall Street Journal, the Economist, and TIME Magazine. He investigates the implications of advanced AI for economic growth, labor markets, inequality, and the future of our society. In his past research, he investigated the mechanics of financial crises and developed policy measures to prevent future crises, including an influential framework for capital flow regulation in emerging economies.

