Monday, Dec 15, 2025
9:00 a.m. PT
San Francisco, CA
Transcript
The following transcript has been edited lightly for clarity.
Laura Monfredini:
Welcome everyone. Thank you for joining us today at our EmergingTech Economic Research Network, or EERN, event. I'm Laura Monfredini, and I serve as Executive Vice President and General Counsel here at the Federal Reserve Bank of San Francisco. I am so pleased to be kicking off our last event of 2025. We have had quite a year, with conversations with leading economists and thought leaders on the economic implications of artificial intelligence.
For us, at the Fed, the EERN initiative gives us insights into how developments from emerging technologies, such as AI, affect productivity and the labor market across various sectors of the economy. These events are just great opportunities to exchange ideas, learn about research, and share insights with those who are interested in studying the economic impacts of emerging tech.
Today's installment of our EERN event series will explore a really timely topic: management and re-skilling in the age of AI. We're here with Raffaella Sadun, the Charles E. Wilson Professor of Business Administration at Harvard Business School. Following her presentation, Professor Sadun will discuss her research with our host moderator, Huiyu Li, co-head of EERN and Research Advisor at the SF Fed. Today's discussion on re-skilling and management strategies in the AI era aligns closely with ongoing efforts across all sectors of the economy to prepare teams for the future of work in a rapidly changing technological landscape.
As a reminder, we are recording this event and it can be accessed on our EERN website following this discussion. Also, I would be remiss if I did not note that the views you hear today are those of our speakers and do not necessarily represent the views of the Federal Reserve Bank of San Francisco or the Federal Reserve System. So with that, let’s begin. Over to you, Huiyu.
Huiyu Li:
Thank you, Laura. I'm very excited to have Professor Raffaella Sadun with us today. Raffaella is a leading expert on the managerial and organizational drivers of productivity, both at the firm level and at the macro level. She also co-leads the Digital Reskilling Lab at Harvard Business School. Management and re-skilling, I believe, are at the heart of how AI could diffuse across workers and firms. For AI to raise productivity, firms have to adopt it successfully. For AI not to displace workers, workers will probably need to re-skill. I'm very excited to hear what Raffaella has to say about all of this. Welcome to EERN, Raffaella. Over to you.
Raffaella Sadun:
Thank you very much. It's a real honor to be with you. This is a talk that may be a little bit more managerial than the ones you've had in the past, but you assured me that it was okay for me to go into this space, so I'll go there.
Let me start just by framing a little bit what the central question is right now. I believe it's really the question of whether AI creates value for the economy. I think it's actually very important to ask ourselves, and to try to probe, this question of economic value and societal value. The main tension that we are experiencing is really between two realities, if you like. One is the reality of the massive potential of AI, which is often exemplified by voices like the CEO of Klarna (this is a tweet from about a year ago), but also reiterated by other people in the business community who see tremendous potential in generative AI in particular, so much so that they believe that at some point, not so far in the future, generative AI will be able to substitute for CEOs. So maybe in the future, rather than training MBA students, I'll be training bots, who knows? But that's the reality that they are portraying for us as potentially possible over short time periods.
The other piece that really creates a lot of excitement around generative AI is, as you know, this massive adoption at the individual level. Here I'm reporting one graphic from the Generative AI Adoption Tracker, which is generated and maintained by the team at the Harvard Project on the Workforce. It compares the adoption rates of generative AI at the individual level (these are the two orange points that you see here on the left) to other previous technological breakthroughs. So these are all signs of an impending potential technological revolution.
Now, on the other hand, I give tremendous credit to the Census Bureau, for example, which is really able to create metrics on adoption that are representative of the economy, not just of larger firms or the most advanced firms. Looking at the census numbers, it's clear that the diffusion of generative AI to produce goods and services (this is the Census categorization) is way behind individual adoption. The latest numbers, the ones that I saw from September '25, were giving us an aggregate adoption number of 10% at the firm level. That of course hides heterogeneity, and we'll talk about the importance of this heterogeneity, but I think it's a sobering number compared to the hype.
I think it's also really interesting that there are papers now rigorously trying to estimate the potential displacement, or the impact more generally, of generative AI on jobs. They are telling us that maybe this is a change that is happening in the background, but we don't yet see it in the labor statistics. The paper by Humlum and Vestergaard in particular, which is based on Danish data, goes quite far in telling us that even among the firms that are adopting AI, we don't see massive changes in occupational composition or, for example, in wages.
And then lastly, and perhaps more frustratingly, there is this article (one of the co-authors is a colleague of mine here at HBS) that was able to look at the adoption of generative AI, Copilot licenses across 66 firms, using credible statistical methods to look at the heterogeneity of the adoption decision. Even there, we spend less time writing emails, but unfortunately the time spent in meetings stays the same. So these are all signs that this impending change is not there yet.
So I asked myself, are we at a point at which this is the Segway moment? I don't know if the people in the audience remember, but when the Segway came out, at least one very important business person said it would be as big a deal as the PC. And that person was actually Steve Jobs, in 2001. Or, and this is where I'm going with this talk, maybe we just need to think about the diffusion of what seems to be a new general purpose technology through the lens of the J-curve.
And so, as we know from past technological revolutions, the adoption of a new technology is never actually linear. There are moments of experimentation, and there are also moments where nothing seems to happen, followed by moments of intense change. With this talk today, I want to spend the next 20 minutes going through three questions that I think are important for us as researchers, but also for managers. In fact, this is a talk that I often give to practitioners and execs, not just academics. As you will see, the talk is varied and the content is very thematic.
The first question is, stepping back for a moment beyond the technology, what can generative AI do, or what is it doing, in practice? The core question is then: what is the role of management and organizations in mediating the diffusion and the impact of the technology, based on what we know from the past? And lastly, what does this imply for re-skilling and policy; that is, how should we think about investments in human capital in this context?
Starting from the first question: early on in the discussion of generative AI, it was very inspiring to read David Autor's comment, a short paper that he wrote about how he saw generative AI differently from previous technologies, and the PC in particular. I think it really captures the essence of what's captivating about these technologies.
What generative AI introduces for organizations is this idea of scaling knowledge across all layers of the organization at a fraction of the cost of human experts. Having spent 20 years looking at organizations as knowledge hierarchies, I can say this is a really big deal. I mean, if this is really what generative AI can do, it has very profound implications for how we promote people, how we select people, who has power in an organization, and how we learn.
In practice, what we are seeing in concrete terms from this big idea of scaling knowledge is two distinct applications. One is perhaps the one we've all been more familiar with, this idea of co-pilots, where you can work with an AI research assistant. The other concrete application, which has been discussed quite intensively this year (more in marketing terms, I think, than in actual adoption), is this idea of agentic AI, which is really an agent that works independently on complex tasks.
What is interesting is that the underlying technology behind the two applications is the same, but the actual implications for humans are very different. On the one hand, we've gained an apprentice, a complement, potentially a team member. On the other hand, we are potentially witnessing the emergence of a substitute. And so I want to go a little bit deeper into a couple of examples of co-pilots and some examples of what agents are, and then think about the implications for organizations.
The first example is a co-pilot. I think this is the first paper in economics that looked at a rigorous application of co-pilots in a call center. This is 'Generative AI at Work', which was recently published in the QJE (The Quarterly Journal of Economics). Basically, the two key findings of this paper are, first…
… changes in resolutions per hour on the order of 30 to 60%. So massive increases in productivity. The second piece of this paper, which again speaks to this idea of generative AI being a force for the diffusion of knowledge, is the fact that these improvements were highly concentrated among workers who had the least skills at the deployment of the technology. As you can see from this graph, there are very large changes concentrated in the lower part of the distribution of the relative skills of the agents at deployment. This is really saying something about generative AI being important for novices and helping novices be more productive early in their careers.
Another example of this complementarity for novices is something that I explored together with colleagues, where we looked at the diffusion of knowledge not across different hierarchical levels (novices versus experts), but across different functions. In this experiment that we ran with Procter & Gamble, we were interested in understanding whether AI could help facilitate the diffusion of knowledge across functions. We worked with the R&D function at Procter & Gamble, and their interest was rooted in the fact that their innovation processes are based on collaborations across functions, in particular marketing and R&D. These collaborations are costly, they're lengthy, and they create all sorts of potential friction. So we explored whether providing individual experts with access to generative AI could help them get up to speed and gain the same type of expertise that they would normally get by talking with another functional expert. We randomized the allocation of generative AI across individuals employed in the R&D function or the marketing function.
What we did is compare the extent to which AI could help these individuals reach the same level of innovation quality that they would normally reach if they worked in a team of humans. The interesting piece is the answer to that question, using a proxy for the quality of the innovations that came out of these experiments. The caveat is that this was all done within a single hackathon, and so we are now working on seeing whether these results extrapolate into the wild, if you like. But it seems that providing individuals with generative AI brings them to the same level of innovation quality that they would normally get if they worked in teams of humans. That's what led us to believe that, in this case, we have some evidence that generative AI can be like a cybernetic teammate; that's the essence of its complementarity.
And in the data, you see why this might be the case: there is really an exchange of expertise. You can see that the proposals they come up with, rather than reflecting individual expertise, are more of a combination of different types of expertise. It's a bit like telling a microeconomist that they don't need to sit down with a macroeconomist; they can talk to an LLM and get some of that expertise reflected in their frameworks.
Now, the issue with AI agents is that things are not that complementary, unfortunately. Instead, we're talking here about potential substitution. The issue is that the substitution really depends on the level of expertise of the AI. If the AI comes with a low level of expertise, it's a potential substitute for low-level experts. But in some instances, AI can get to higher levels of expertise. In that case, we have some theoretical work at this point that traces how this could really change the allocation of expertise and the labor market implications, both within firms and across firms. And so things are quite a bit more complicated.
One example of this is what we are seeing in one company, Klarna. The CEO that I mentioned at the beginning is a fervent supporter of the adoption of AI in his firm. It basically introduced a generative AI system in the customer service function and replaced the work of approximately 700 human agents through this system, with massive increases in productivity. So this is really the tension, I think, when we talk about the potential impact of generative AI on expertise and humans. We are caught between these two potential applications, and the reality is that we still don't know which one of the two dominates in the aggregate.
Some of this uncertainty, I think, is leading to scary extrapolations. In particular, the interpretations that load more on these agentic capabilities are the ones associated with calls for universal basic income, for example, or even Dario Amodei, the CEO of Anthropic, predicting the disappearance of entry-level jobs. I have to say, I find these extrapolations jarring, and not just because I'm afraid for myself as well, right? If that's true, I think even professors are at risk. But I think they are basically extrapolations that ignore what we know about general purpose technologies, and in particular the fact that their diffusion and their impact is not just a technological matter. It actually depends on the presence of complementary skills, systems, and processes that don't exist yet.
And so the concept of the J-curve, I think, is especially relevant for where we are now, because in a sense we don't have just one single J-curve. Every firm engages in its own J-curve, with different steepness and shallowness along the curve. What we know from the past is that this is a process that shows up in the aggregate, but it takes time and masks a ton of heterogeneity.
I know that with this audience I don't really have to go too much into history, but the work of Paul David was really important in helping us understand how electricity diffused throughout the economy and how long it took, about 40 years. It went from point solutions, which is basically just manufacturing plants substituting the steam engine with a different source of energy and not seeing much impact, to what Josh Gans and his co-authors call application solutions, meaning moving machines away from the central shaft through which energy was transmitted in the steam engine era, thinking a little bit more about complementarities and specialization, and then finally to system solutions, which is a complete rethinking of the factory.
So in a sense, I think the analogy is that potentially generative AI is a new source of electricity, and it's going to take a while for business people and for workers to figure out how to use it. If it took 40 years with electricity, then with ICT (information and communication technology) there are some very interesting parallels drawn by Erik Brynjolfsson and co-authors: it took about 20 years for ICTs to diffuse and show up in the productivity statistics, with the changes within jobs and the emergence of new jobs that that led to.
The other point I want to make is that not only do we know that general purpose technologies take a while to diffuse, but, and this is the point that I think is often missed because it's hard to measure, the diffusion and adoption process is incredibly heterogeneous across firms. Working in the temple of management here, I think I've seen all sorts of pathologies, but let me mention four that I think are really important in these cases of emerging new technologies with uncertain potential.
The first is that not everybody sees the potential, so there is an awareness problem. Then there is an incentive problem, and this was documented incredibly well by Paul David. Part of the reason why it took 40 years for electricity to diffuse is that people had vested interests in a different type of production paradigm. It was only when new entrepreneurs, who didn't have the sunk costs and the existing plants, had the opportunity to apply this new technology that they figured out how to completely reconfigure the plant for this new opportunity.
But I think with generative AI there are also two additional blind spots. The first one is really the skills that are necessary to move to a new system, and the coordination between workers and managers; it's really a cultural component. In fact, we know from the past with information and communication technology (this was chapter one of my dissertation back in the day) that the adoption of ICT in Europe was mediated by differences in managerial capabilities. Some companies had enough flexibility, and especially HR processes, that allowed them to figure out how to use computers. We're talking still about basic software and basic computers: how to allocate new skills, how to change workflows. Those managerial practices were a really important complement to the effective use of ICTs. And my sense is that a lot of this is showing up with generative AI. Let me tell you a little bit why.
The first reason is that this is still a really uncertain technology. The value and the applications of generative AI are certain in some cases and very uncertain in others. Even within the same tasks, I think we are facing what my colleagues here at HBS call the Jagged Technological Frontier. You may get some tasks done incredibly well with generative AI, but then you move just a step and you fall well inside the productivity frontier, because this is new. It's really a technology that is prone to mistakes.
The second, directly related to the novelty and uncertainty of the technology, and this is what I'm seeing every day with companies engaged in this adoption process, is that adoption right now is really a strategic issue and an experimental issue more than anything else. What do I mean by that? Well, potentially you could invest money everywhere, but I'm seeing tremendous differences across firms between those that know where they should invest, that have the strategic clarity to understand where to invest and for what reason, and those engaging in a hundred different types of experiments. Also, some companies are just not wired for experimentation, and I think this is really a differentiator at this point in time. Since the playbook doesn't exist, experimentation is incredibly valuable, and we are seeing differences in experimental capabilities across firms. It doesn't come naturally.
And finally, there is a very strong tension inside some organizations, because in a sense the fear of replacement is a real barrier to the training of new AI systems, even when they can have complementary applications. My sense is that where in the past, with ICTs, the relevant complementarities were perhaps HR practices that were more operational in spirit, here a lot of the heterogeneity that is playing a role lies in the strategic and experimental capabilities of firms. And then finally we get to what I think is obvious, which is the technological complementarities, the quality of the data and so forth. But I'd say, in the admittedly selected but I think quite substantial interactions that I'm having with firms, that these nodes here in the middle, these strategic and cultural complementarities, are where the game is being played.
We have some evidence of this heterogeneity with generative AI. Again, looking at census data, it is fascinating to see differences in adoption across firm size, but also firm age. Where the action is, it is hard to say in the aggregate. I think this is a moment where, as researchers, we really need to go to the frontier of where the technology is happening and try to get a sense of that.
This is one specific example. It's a case study, but again, it goes a little bit deeper into what AI adoption looks like inside firms. JPMorgan Chase started adopting generative AI for their wealth function in 2023, initially with a simple chat interface; this was a proprietary use of ChatGPT within their walls. In 2024, they started introducing agents in support of human advisors; these would be agents that automatically prepared reports ahead of client meetings. And then just this year, they introduced managers for these AI agents, agent orchestrators that take on some of the coordination tasks that we would typically assign to humans.
So you can see that this is one firm, a very large firm investing a lot in AI, but these are the types of organizational innovations that are happening on the ground. And again, I want to stress the point: this is a process that is slow and incredibly heterogeneous, so it is hard to capture just from afar.
Let me conclude and go to the third question, which I think is really the central one: what does this imply for re-skilling and policy? If you take my description of what is happening at face value, the real point here is that what it means for a company to successfully adopt generative AI is a story of reorganization, or re-optimization, rather than simple substitution.
And so this clearly has implications for the workforce, but not necessarily in terms of requiring specific skills. I think it's more about the ability to experiment, to exercise judgment (remember the Jagged Technological Frontier), and, more importantly, the ability to adapt. These are unfortunately not things you can get a degree in, so it's hard to… We really should think about how we train for these types of capabilities, but this seems to be the critical point at the early stages of the adoption of a new technology.
One point here is that, paradoxically, this is where I see tremendous value in thinking about investments in the current workforce, and in attention to upskilling and re-skilling as part of this change-management toolkit. I think it's a bit of a mix between opportunity and challenge. The opportunity is really the fact that we need firm-specific knowledge to inform the adoption of new technologies, and we also need, as I mentioned earlier, to create the right incentives for this technological knowledge transfer to happen. That requires effort; it's not something that is going to happen automatically. The other side is that finding the right skills for AI adoption inside firms is actually not trivial; I have a hard time even articulating what these are. That combination of opportunity and challenge is what, in my sense, would tilt the investment in human capital more towards internal talent rather than necessarily going outside the firm.
Now the issue is, and you'll recognize this as my practitioner slide: I ask my students in executive education to do a reality check. I say, "It's 8:00 AM, your company's new generative AI skills training course is starting. How do you feel?" Maybe I can ask you the same question, with answers ranging from incredibly excited, to skeptical, to resigned. Unfortunately, I cannot take a virtual poll now, but typically the answers range between B and C. And that's a reality that I think reflects the fact that investments in upskilling and re-skilling often under-deliver on their promises.
There is an emerging academic literature looking at this, but I think there are essentially three problems we have to contend with. The first is low take-up, and in particular low take-up among the people who need skilling the most; Chris Stanton here at HBS is doing very interesting work on this. The others are low completion rates and limited tangible outcomes. What this means is that current training approaches are probably not enough; they're not really what's needed to facilitate AI adoption.
This is early work, but we've been doing some research, mostly talking with companies that seem to be approaching training differently. The main difference that we see, and again I want to stress that this is absolutely not a representative sample, and it's more qualitative than quantitative evidence at this point, is that there is an alternative way to think about training: much more like a strategic lever. You first decide how you want to compete and where you want to compete, and then you think about your training investments. They're not just training investments that pertain to the HR function, which, for better or for worse, is often not connected with technology and strategy. These are also, I think, training investments, or human capital investments, that take very seriously the need to incentivize both middle managers and employees to be part of the story.
And here in personnel economics, we have really interesting papers that look at this potentially important but often unfulfilled role of middle managers in fostering investments in human capital. Ingrid Haegele has really interesting work looking at how middle managers can essentially be talent hoarders rather than fostering investments in human capital among their direct reports. But equally, I think it's important to recognize that investments in training, and in particular in re-skilling, which essentially implies an occupational choice, are very risky from the perspective of employees. What we are observing is that firms often lack the right understanding of how employees perceive the riskiness of these investments.
So what's interesting is that the organizations rethinking training and re-skilling are really tracing a direct line between the strategic objectives, the incentives of middle managers, and the incentives of employees, and they're casting these investments like you would a strategic investment rather than a one-off investment.
I want to show you just one piece, because maybe there are many middle managers in the audience, who knows. One thing that we are noticing, and this is research that is actually very quantitative, based on more than 100,000 workers across three firms in Latin America, concerns one specific question: how important are middle managers for the training investments of their direct reports? What we are seeing is that middle managers are actually the single most important factor in determining whether people take up training opportunities.
This is just one of the graphs from the paper, but it shows the effect of the arrival of a middle manager who is basically a coach. We can trace the differences across middle managers within the same firm, and we see that some middle managers are really more like coaches than anything else. The arrival of these coaches is associated with a very large increase in take-up rates of training opportunities among their direct reports. I think this organizational lens on training shows this very clearly.
Now the issue is, as I said, that the companies we spoke with are not representative. One part of the heterogeneity that we are seeing, and will continue to see, is due to the fact that investments in human capital are perceived very differently across firms. In particular, in a survey that we did a couple of years ago with chief human resources officers of US companies, we could see very heterogeneous perceptions, for example in taking into account the incentives of employees for training, and re-skilling in particular. This is, I think, one of the underlying reasons behind this heterogeneity.
I'll conclude by saying that understanding training inside and across firms is an area where my sense is we really need more evidence. We are doing a lot of work here at HBS, but it's interesting to see how enormous the noise-to-signal ratio for this topic is. I think this is one area where good economics research can really make a lot of difference.
So I'll conclude with three messages. The first is that generative AI is a transformative technology, something that can fundamentally change organizations. How to cope with this goes well beyond just training; I think there is a strategic and organizational question behind the heterogeneity that we already see. Training can play a very important role if seen as an investment to create competitive advantage, but the difficulty, and where I think we really need to do more work, is how to think about a new training playbook. It's going to be about much more than just dollars allocated to training investments; it's really how you do it, how you communicate it, and how you implement it inside the firm that is going to make a difference.
And one policy comment, which relates to work that we are doing at the lab: generic and un-targeted training subsidies are quite ineffective in creating the incentives to train. It's much more about professional identity and information barriers than just wage prospects. I'll conclude here. Thank you very much.
Huiyu Li:
Thank you very much, Raffaella. That was very insightful. We are actually lucky enough to have many of our executives here, so maybe we could take the poll you mentioned. No, but actually, in our audience we have economists, but we also have executives and, I guess, what you would call middle managers. So I feel like what you presented was very insightful, even for the Fed as we think about how we might adopt AI.
Just following up on that, can you recap what you see as the roles of different levels of management? I can see that adopting AI might require changes at the organization level, which seems to sit more at the level of the CEO's vision, but you also mentioned the importance of middle managers. Can you help us understand the different roles for different managers?
Raffaella Sadun:
Yeah. I mean, I think the main point, and the starting point for these investments to be effective, is really the top. I teach strategy, so I might be biased in what I’m telling you, but if you’re not able to communicate to the rest of the organization why the company exists or what the broad objectives are, it’s going to be very hard, I think, to create the right incentives for training. Because training is costly, and it’s not necessarily something that average adults want to do. I think this is a missing link in many of the discussions I hear about training, which sort of take for granted that the incentives to learn are there. And in fact, that’s not true. So I think it really starts from the top. It’s more of a cultural piece.
I think the middle manager part is very important because middle managers are the link that makes these commitments and these strategies real for people on the ground. In the research I was mentioning before, we were surprised to see tremendous heterogeneity in training take-up within the same firms, with the same policies and the same incentives, and to see the importance of this human link in fostering investments in training. So I think the key piece here is the translation of what comes from the top to the rest of the organization. These things are not automatic, and I think they still need that human connection.
Huiyu Li:
You mentioned talent hoarding. Could you clarify that a little bit for our audience, as an example of how middle managers really matter?
Raffaella Sadun:
Oh yeah, sure. So this is more the negative side of middle managers. I think at the end of the day, we just have to accept that there is a lot of heterogeneity. But what’s the negative side? How can middle managers hurt? This is work that Ingrid Haegele, who is at Munich, has done. First of all, for training specifically, not everybody is happy to see their workers take time for learning, so there is an immediate productivity or workforce loss when people go off to learn new things. But more dynamically, especially if you have a good person, you might want to retain that person on your team rather than see them promoted or moved to different tasks. And this is the talent hoarding phenomenon that she documents.
In practice, I think this is very common. I’ve seen organizations overcome this issue by making it very explicit that middle managers are in charge of the development of their own people, and so creating incentives against talent hoarding. Again, we go back to the earlier point: there needs to be an explicit recognition of why learning is important, and there need to be the right incentives, not just for the workers, but also for middle managers.
Huiyu Li:
Okay. You are an expert on measuring management practices. You’ve measured management practices in the US and all around the world. So in this case, for the adoption of AI, what’s a good way of tracking whether AI is being adopted successfully? Do you need metrics beyond the ones you used before?
Raffaella Sadun:
Whether people click on co-pilots, for example, or use co-pilots, is not necessarily a great metric. I would say that the most interesting metrics in a phase of experimentation and learning, which is where we are now, are probably not about compliance but about whether you are using the technology to address real business needs. I don’t think we are yet at the stage where we can have metrics that are standardized and unified across the firm. I think it’s much more about working with the experimenters inside the firm to understand the different use cases, and seeing how much of the technology is being experimented with at the local level. Not compliance, but experimentation, I would say, is the most important indicator right now.
Huiyu Li:
Yes, I fully agree. I think you showed a graph where adoption seems higher when we ask workers than when we ask firms, so there’s some sort of gap in measured adoption depending on whom you ask, right? Yeah, fully agree. Okay. I’m going to take the opportunity to get a little bit of free education from Harvard.
Yeah. But jokes aside, you are an educator of executives, and now AI has arrived. How are you thinking about educating executives? Are there any changes you think will be made to the curriculum for educating them?
Raffaella Sadun:
For sure. And by the way, we are also seeing a lot of demand for education precisely because we’re at a time when things are very uncertain, and I think there is no playbook. That’s what I tell everybody. And whoever tells you that they already have a playbook is probably lying, because it doesn’t exist.
For people who are already executives, I think this is the time to double down on strategy and on organizational learning, which are really about value creation and the capability to learn inside the firm; these are very soft concepts. But again, we tell them, for example, if you can’t articulate your strategy in 100 words or less, it’s probably the case that nobody in your company understands what they should be doing and why they’re there. You have to articulate the trade-offs very clearly. And the reason is that the opportunities for experimentation, as we’ve discussed, are numerous, but you need some strategic lens on where to experiment and how to learn.
Now, the second piece is experimentation inside the firm. Again, not every company is geared for that. I would say that 90% of companies are probably not geared for experimentation. So that really means organizing teams in ways that are cross-functional, where you can bring together the technology side of things as well as the customer experience, the legal expertise, and so forth. And that implies different organizational structures for firms.
Huiyu Li:
Let me shift gears a little bit to re-skilling. I think you mentioned in your writing, as well as in the talk, that re-skilling is very difficult and requires taking workers’ time away from current work and putting it into re-skilling. So what provides the incentive for companies to actually go for re-skilling, as opposed to just laying off workers and trying to hire someone who’s already skilled?
Raffaella Sadun:
I go back to the point that it really depends on the objectives of the company and also on the constraints of the company. For example, if tacit internal knowledge, firm-specific knowledge, is really important for the company, that would tilt it more towards re-skilling, because you need to find a way for that knowledge to be leveraged inside the organization. There are also constraints. I work with a lot of European companies that, for labor law reasons, cannot lay off people. And so for them, understanding how to retrain their own workers is more of a necessity.
At the end of the day, I think the problem I see is an implementation problem. Companies are spending money on training, but often without much evidence to guide these investments. So if I can make an appeal to my fellow economists: I think we need to step up, because this is a massive market. It’s a very consequential market, and there isn’t enough evidence to understand where to invest and how to invest. So I think there is an arbitrage opportunity, and we should bring a little more rigor into how these investments are made.
Huiyu Li:
Thank you very much. We received many pre-submitted questions, and we also received questions from our live audience, so let me shift to those to make sure we have enough time. First, a question from our live audience: what can fill the void left by the elimination of entry-level jobs for college graduates? They need some way of gaining experience in order to actually become productive in a company. How can we allow them to gain that experience?
Raffaella Sadun:
Look, I think this is a great question, and again one where we don’t have a lot of evidence yet on which policy is best suited. What I’m seeing is companies talking more and more about apprenticeships, which is a model that is more common in certain parts of Europe than here in the United States. The idea is getting students to experience the reality of work earlier in their trajectory rather than waiting until the completion of their college studies. In part, this is to learn the skills that will actually be quite valuable if we get to the point of sustained AI adoption, which are judgment, experience, communication, and so forth. But it’s also because I think getting your foot into the labor market will potentially be more valuable than just the degree on its own.
Huiyu Li:
Thank you. Okay. Another question is about culture. I guess from an economist’s perspective, culture is kind of vague; how do we even think about it? But I think the question is valid. How do you help a company build a culture that is okay with experimenting and adopting AI, while at the same time thinking carefully about the risks and potential downsides?
Raffaella Sadun:
Yeah. Look, I think there is always a very wide dispersion of attitudes towards experimentation. It’s going to be very difficult to bring the left tail, those who really see experimentation as anathema, to the table to experiment. I think that’s a lost cause. Where I see a lot of value, and we are doing a lot of work in partnership with companies now at HBS, is with companies that are willing to learn, that understand they are at the point where they need to figure out how AI can create value for them, but don’t have the instruments or the knowledge to structure experiments inside the company. So I think the marginal company where we can make a difference is really one where the culture is already there, but the tools are not.
And so it’s been very interesting over the past year. I’ve been working with different large organizations in the US and in Europe that already have a scientific mindset but lack something as basic as the ability to run power calculations or to think about hypotheses to test. And it’s been relatively easy to bring them there.
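[Editor’s note: for readers unfamiliar with the term, a power calculation answers "how many participants does an experiment need to reliably detect a given effect?" The sketch below is an editorial illustration, not part of the talk; it uses the standard normal approximation for a two-arm comparison of means, with effect size expressed as a standardized difference (Cohen’s d).]

```python
from statistics import NormalDist
import math

def sample_size_per_arm(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate participants needed per arm for a two-sample test of means.

    effect_size: standardized difference between arms (Cohen's d).
    Uses the normal approximation: n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for two-sided test
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    n = 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2
    return math.ceil(n)

# Detecting a modest productivity lift (d = 0.2) at 5% significance, 80% power:
print(sample_size_per_arm(0.2))  # 393 per arm
# A larger effect (d = 0.5) needs far fewer participants:
print(sample_size_per_arm(0.5))  # 63 per arm
```

The practical point mirrors the discussion above: without a calculation like this, a firm piloting an AI tool on a handful of employees has little chance of distinguishing a real effect from noise.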
Huiyu Li:
I see. Okay. And then business strategy, this is your expertise. What is the most effective way for companies to assess current employee skills and match them with future AI-augmented tasks?
Raffaella Sadun:
Again, this is an area where I don’t think there is a unique answer. I wish I knew it; I would create a company immediately. I think Erik Brynjolfsson has done this. But my perception is that, because things are so dependent on the specific task, and occupations are combinations of tasks, one approach we are following here, for example, is to think about the production function of specific occupations. I’ll give you one concrete example. We’re doing some work with companies where we are experimenting with the use of AI for their sales function. We actually work with them to characterize the production function of, in this case, a sales engineer, and the scenarios they face, and then think about where AI can add value. In this case, it’s the formulation of a proposal for a customer, which is typically something that relies on tacit knowledge; AI could potentially help there.
So because we are at a very early stage, my hunch is that it’s going to be important to understand, at the occupation level and then at the task level, where the technology can make a difference, and then test and assess from there whether there is an impact.
Huiyu Li:
Thank you. Another question: as AI tools become more embedded in core business practices, which human skills or managerial behaviors do you expect to become the most critical for firms to remain competitive?
Raffaella Sadun:
Okay, great question. We actually have one paper from David Deming and co-authors that has looked at this experimentally. I’m going to quote that, because it’s evidence I found very interesting. They looked at the management of AI agents and measured, first of all, in an experiment, tremendous heterogeneity in humans’ abilities to manage AI agents, which is already quite interesting. And then they measured what types of skills were associated with better task completion or productivity of the AI agents. And funnily enough, these were very human skills.
For example, the ability to ask questions, the ability to probe and go deeper, communication: these are exactly the skills that make managers of humans successful. And so if we think of AI agents as novices, a little bit arrogant and overconfident, that can work for us, then I think the abilities to communicate, ask, probe, and exercise judgment are going to be very important.
Huiyu Li:
I see. I can see that for economists working with RAs. You can imagine the AI agent being an RA; you need to train them and be patient. Yes. Okay. And then, from your discussions with business folks, are there any misconceptions about AI or AI re-skilling that you believe leaders need to correct?
Raffaella Sadun:
Who am I to say if it’s a mistake? Okay. But I have to say that the biggest misconception, which is common across different eras of technology adoption, is the idea that the investment in the technology is what determines whether you benefit from it or not. I think this is the biggest misconception I see, because technology, as we know, is one part of a bundle of managerial and organizational complements. My sense is that many people actually get this and work on culture before technology adoption, but it’s easy to get caught up in the enthusiasm or the sci-fi scenarios, which lead you to completely underestimate the human frictions in technology adoption, which I think are just there.
Now, a different issue is companies that are AI-native. I think that’s a different case. Those organizations are very interesting because they start from completely different premises, and they are probably organizing very differently because of that.
Huiyu Li:
Thank you. So one last question. We talked about entry-level positions; there’s also a question about mid-career employees. How do they get re-skilled? How can management make sure that mid-career employees are able to re-skill and continue to be successful at their companies?
Raffaella Sadun:
Yeah. I mean, I think this is a really difficult problem for companies, and I’ll tell you why. It’s not so much an inability to learn, because mid-career employees actually have a lot of tacit knowledge that is incredibly valuable right now; it’s creating the right incentives to learn.
We know that the incentive to train goes down with age and tenure. I think the key here will be providing people with the right time horizon and with clarity on what happens, on what opportunities open up for them, if they train and re-skill. Now, this seems obvious, and we have economic models that make very clear the necessity of commitments for people to make these uncertain investments in human capital. But I am always astounded by the lack of clarity and communication I often see in organizations that make significant investments in training but don’t make the same kind of investment in making sure those training opportunities are well understood and well received at the lower levels of the organization.
Huiyu Li:
Thank you very much, Raffaella. It was a really good learning experience hearing from you. Okay. Since this is our final event of 2025, I would like to thank everyone who participated in EERN this year. We saw significant increases in engagement; our subscription list actually tripled, so thank you very much.
Next year, we’ll continue to hold the same type of events. We will have events like the ones we’ve had this past year, which I encourage you to check out in our archive. We’ve had past seminars with Anton Korinek, Avi Goldfarb, and Chad Jones; those recordings are on our website, so please feel free to check them out. Next year we’ll bring you more events like that. Thank you again for staying connected with EERN. Happy holidays and happy new year.
Summary
Raffaella Sadun, Charles E. Wilson Professor of Business Administration at Harvard Business School, delivered a live presentation on management and re-skilling in the age of AI on December 15, 2025.
Following her presentation, Professor Sadun answered live and pre-submitted questions with our host moderator, Huiyu Li, co-head of the EmergingTech Economic Research Network (EERN) and research advisor at the Federal Reserve Bank of San Francisco.
This was a virtual event hosted by the EmergingTech Economic Research Network (EERN). You can view the full recording on this page.
About the Speaker
