Can ChatGPT Decipher Fedspeak?

Authors

Anne Lundgaard Hansen and Sophia Kazinnik

March 1, 2024

Federal Reserve Research: Richmond

This paper investigates the ability of Generative Pre-trained Transformer (GPT) models to decipher Fedspeak, a term for the technical language the Federal Reserve uses to communicate about monetary policy decisions. We evaluate the ability of GPT models to classify the policy stance of Federal Open Market Committee announcements relative to human assessment. We show that GPT models deliver a considerable improvement in classification performance over other commonly used methods. We then demonstrate that the GPT-4 model can provide explanations for its classifications that are on par with human reasoning. Finally, we show that the GPT-4 model can be used to identify macroeconomic shocks using the narrative approach of Romer and Romer (1989, 2023).

Read the Paper