Insurance companies and pension funds have liabilities that extend far into the future, typically well beyond the longest-maturity bonds trading in fixed-income markets. Such long-lived liabilities still need to be discounted, and yield curve extrapolations based on the information in observed yields can be used for this purpose. We use dynamic Nelson-Siegel (DNS) yield curve models to extrapolate risk-free yield curves for Switzerland, Canada, France, and the U.S. We find slight biases in extrapolated long bond yields of a few basis points. In addition, the DNS model allows the generation of useful financial risk metrics, such as ranges of possible yield outcomes over projection horizons commonly used for stress-testing purposes. Therefore, we recommend using DNS models as a simple tool for generating extrapolated yields for long-term interest rate risk management.
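As a rough sketch of the curve structure underlying DNS extrapolation, the Nelson-Siegel form writes the yield at maturity tau as a level factor plus slope and curvature terms whose loadings decay with maturity, so very long extrapolated yields converge toward the level factor. The parameter values below are illustrative, not the paper's estimates.

```python
import math

def nelson_siegel_yield(tau, level, slope, curvature, lam=0.5):
    """Nelson-Siegel zero-coupon yield at maturity tau (in years)."""
    x = lam * tau
    slope_loading = (1.0 - math.exp(-x)) / x       # decays from 1 toward 0
    curv_loading = slope_loading - math.exp(-x)    # hump-shaped loading
    return level + slope * slope_loading + curvature * curv_loading

# Beyond the longest observed maturity, the slope and curvature loadings
# shrink toward zero, so extrapolated yields approach the level factor.
y30 = nelson_siegel_yield(30.0, level=0.03, slope=-0.02, curvature=0.01)
y100 = nelson_siegel_yield(100.0, level=0.03, slope=-0.02, curvature=0.01)
```

The extrapolation property is visible directly: the 100-year yield sits closer to the 3 percent level factor than the 30-year yield does.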
We examine the effect of negative nominal interest rates on bank profitability and behavior using a cross-country panel of over 5,100 banks in 27 countries. Our data set includes annual observations for Japanese and European banks between 2010 and 2016, which covers all advanced economies that have experienced negative nominal rates, including currency union members as well as both fixed and floating exchange rate countries. When we compare negative nominal interest rates with low positive rates, banks experience losses in interest income that are almost exactly offset by savings on deposit expenses and gains in non-interest income, including capital gains on securities and fees. We find heterogeneous effects of negative rates: floating exchange rates, small banks, and banks with low deposit ratios drive most of our results. Low-deposit banks have enjoyed particularly striking gains in non-interest income, likely from capital gains on securities. There have been only modest differences between high and low deposit-ratio banks’ changes in interest expenses; high deposit banks do not seem disproportionately vulnerable to negative rates. Overall, our results indicate surprisingly benign implications of negative rates for commercial banks thus far.
Fiscal deficits, elevated debt-to-GDP ratios, and high inflation rates suggest hyperinflation could have potentially emerged in many European countries after World War I. We demonstrate that economic policy uncertainty was instrumental in pushing a subset of European countries into hyperinflation shortly after the end of the war. Germany, Austria, Poland, and Hungary (GAPH) suffered from frequent uncertainty shocks – and correspondingly high levels of uncertainty – caused by protracted political negotiations over reparations payments, the apportionment of the Austro-Hungarian debt, and border disputes. In contrast, other European countries exhibited lower levels of measured uncertainty between 1919 and 1925, allowing them more capacity with which to implement credible commitments to their fiscal and monetary policies. Impulse response functions show that increased uncertainty caused a rise in inflation contemporaneously and for a few months afterward in GAPH, but this effect was absent or much more limited for the other European countries in our sample. Our results suggest that elevated economic uncertainty directly affected inflation dynamics and the incidence of hyperinflation during the interwar period.
The introduction of macroprudential responsibilities at central banks and financial regulatory agencies has created a need for new measures of financial stability. While many have been proposed, they usually require further transformation for use by policymakers. We propose a transformation based on transition probabilities between states of high and low financial stability. Forecasts of these state probabilities can then be used within a decision-theoretic framework to address the implementation of a countercyclical capital buffer, a common macroprudential policy. Our policy simulations suggest that given the low probability of a period of financial instability at year-end 2015, U.S. policymakers need not have engaged this capital buffer.
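A stylized illustration of how state-transition forecasts can feed a decision-theoretic rule of this kind: iterate a two-state (stable/unstable) transition matrix forward over the policy horizon, then engage the buffer only when the expected crisis loss exceeds the buffer's cost. All probabilities and loss figures below are hypothetical, not the paper's estimates.

```python
# Hypothetical two-state transition probabilities (each row sums to one).
P = {("stable", "stable"): 0.95, ("stable", "unstable"): 0.05,
     ("unstable", "stable"): 0.30, ("unstable", "unstable"): 0.70}

def prob_unstable(current_state, horizon, p=P):
    """Probability of being in the unstable state after `horizon` periods."""
    probs = {"stable": float(current_state == "stable"),
             "unstable": float(current_state == "unstable")}
    for _ in range(horizon):
        probs = {s: sum(probs[r] * p[(r, s)] for r in probs) for s in probs}
    return probs["unstable"]

BUFFER_COST = 0.02   # illustrative output cost of raising the buffer
CRISIS_LOSS = 0.10   # illustrative loss if instability hits without the buffer

def engage_buffer(current_state, horizon=4):
    """Engage the buffer when expected crisis loss exceeds the cost of acting."""
    return prob_unstable(current_state, horizon) * CRISIS_LOSS > BUFFER_COST
```

Under these illustrative numbers, a low current probability of instability (as at year-end 2015 in the paper) leaves the rule inactive, while starting from the unstable state triggers the buffer.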
In the U.S. Treasury market, the most recently issued, or so-called “on-the-run,” security typically trades at a price above those of more seasoned but otherwise comparable securities. This difference is known as the on-the-run premium. In this paper, yield spreads between pairs of Treasury Inflation-Protected Securities (TIPS) with identical maturities but of separate vintages are analyzed. Adjusting for differences in coupon rates and values of embedded deflation options, the results show a small, positive premium on recently issued TIPS – averaging between one and four basis points – that persists even after new similar TIPS are issued and hence is different from the on-the-run phenomenon observed in the nominal Treasury market.
The ability of the usual factors from empirical arbitrage-free representations of the term structure — that is, spanned factors — to account for interest rate volatility dynamics has been much debated. We examine this issue with a comprehensive set of new arbitrage-free term structure specifications that allow for spanned stochastic volatility to be linked to one or more of the yield curve factors. Using U.S. Treasury yields, we find that much realized stochastic volatility cannot be associated with spanned term structure factors. However, a simulation study reveals that the usual realized volatility metric is misleading when yields contain plausible measurement noise. We argue that other metrics should be used to validate stochastic volatility models.
A common assumption in the academic literature and in the actual supervision of banking systems worldwide is that franchise value plays a key role in limiting bank risk-taking. As the underlying source of franchise value is assumed to be market power, reduced competition has been considered to promote banking stability. Boyd and De Nicolo (2005) propose an alternative view where concentration in the loan market could lead to increased borrower debt loads and a corresponding increase in loan defaults that undermine bank stability. Martinez-Miera and Repullo (2007) encompass both approaches by proposing a nonlinear relationship between competition and bank risk-taking. Using unique datasets for the Spanish banking system, we examine the empirical nature of that relationship. After controlling for macroeconomic conditions and bank characteristics, we find that standard measures of market concentration do not affect the ratio of non-performing commercial loans (NPL), our measure of bank risk. However, using Lerner indexes based on bank-specific interest rates, we find a negative relationship between loan market power and bank risk. This result provides evidence in favor of the franchise value paradigm.
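The Lerner index referenced above has a simple form: the markup of price over marginal cost as a share of price, equal to zero under perfect competition and rising with market power. In the paper it is built from bank-specific interest rates and cost estimates; the numbers below are purely illustrative.

```python
def lerner_index(price, marginal_cost):
    """Lerner index of market power: the markup over marginal cost as a
    share of price (0 under perfect competition, higher with more power)."""
    return (price - marginal_cost) / price

# E.g., a 6% loan rate against a 4.5% marginal cost of funds and servicing:
li = lerner_index(0.06, 0.045)
```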
Access to external finance is a key determinant of a firm’s ability to develop, operate and expand. To date, the literature has examined a variety of macroeconomic and microeconomic factors that influence firm financing. In this paper, we examine access by Spanish firms to external financing, both from bank and non-bank sources. We use dynamic panel data estimation techniques to estimate our models over a sample of 60,000 firms during the period from 1992 to 2002. We find that Spanish firms are quite dependent on short-term non-bank financing (such as trade credit), which makes up about 65 percent of total firm debt. Our results indicate that this type of financing is less sensitive to firm characteristics than short-term bank financing. However, we also find that short-term bank debt seems to be accessed more during economic expansions, which may suggest a substitution away from non-bank financing as firm conditions improve. Short-term bank debt also seems to be accessed more as funding rates rise, possibly again suggesting a substitution away from higher-priced non-bank alternatives. Using data from the Spanish Credit Register maintained by the Banco de Espana, we find that the impact of funding costs on access to external financing, whether from banks or non-banks, is affected by the nature of borrowing firms’ bank relationships and collateral. In particular, we provide evidence of a potential hold-up problem in loan markets. Moreover, collateral plays a key role in making long-term finance available to firms.
The need to monitor aggregate financial stability was made clear during the global financial crisis of 2008-2009, and, of course, the need to monitor individual financial firms from a microprudential standpoint remains. In this paper, we propose a procedure based on mixed-frequency models and network analysis to help address both of these policy concerns. We decompose firm-specific stock returns into two components: one that is explained by observed covariates (or fitted values), the other unexplained (or residuals). We construct networks based on the co-movement of these components. Analysis of these networks allows us to identify time periods of increased risk concentration in the banking sector and determine which firms pose high systemic risk. Our results illustrate the efficacy of such modeling techniques for monitoring and potentially enhancing national financial stability.
We use an arbitrage-free term structure model with spanned stochastic volatility to determine the value of the deflation protection option embedded in Treasury inflation protected securities (TIPS). The model accurately prices the deflation protection option prior to the financial crisis when its value was near zero; at the peak of the crisis in late 2008 when deflationary concerns spiked sharply; and in the post-crisis period. During 2009, the average value of this option at the five-year maturity was 41 basis points on a par-yield basis. The option value is shown to be closely linked to overall market uncertainty as measured by the VIX, especially during and after the 2008 financial crisis.
To support the economy, the Federal Reserve amassed a large portfolio of long-term bonds. We assess the Fed’s associated interest rate risk — including potential losses to its Treasury securities holdings and declines in remittances to the Treasury. Unlike past examinations of this interest rate risk, we attach probabilities to alternative interest rate scenarios. These probabilities are obtained from a dynamic term structure model that respects the zero lower bound on yields. The resulting probability-based stress test finds that the Fed’s losses are unlikely to be large and remittances are unlikely to exhibit more than a brief cessation.
In response to the global financial crisis that started in August 2007, central banks provided extraordinary amounts of liquidity to the financial system. To investigate the effect of central bank liquidity facilities on term interbank lending rates, we estimate a six-factor arbitrage-free model of U.S. Treasury yields, financial corporate bond yields, and term interbank rates. This model can account for fluctuations in the term structure of credit risk and liquidity risk. A significant shift in model estimates after the announcement of the liquidity facilities suggests that these central bank actions did help lower the liquidity premium in term interbank rates.
We examine the impact of foreign underwriting activity on bond markets using issue-level data in the Japanese “Samurai” and euro-yen bond markets. Firms choosing Japanese underwriters tend to be Japanese, riskier, and smaller. We find that Japanese underwriting fees, while higher overall on average, are actually lower after conditioning for issuer characteristics. Moreover, firms tend to sort properly in their choice of underwriter, in the sense that a switch in underwriter nationality would be predicted to result in an increase in underwriting fees. Finally, we conduct a matching exercise to examine the 1995 liberalization of foreign access to the “Samurai” bond market, using yen-denominated issues in the euro-yen market as a control. Foreign entry led to a statistically and economically significant decrease in underwriting fees in the Samurai bond market, as spreads fell by an average of 23 basis points. Overall, our results suggest that the market for underwriting services is partially segmented by nationality, as issuers appear to have preferred habitats, but entry increases market competition.
We construct probability forecasts for episodes of price deflation (i.e., a falling price level) using yields on nominal and real U.S. Treasury bonds. The deflation probability forecasts identify two “deflation scares” during the past decade: a mild one following the 2001 recession and a more serious one starting in late 2008 with the deepening of the financial crisis. The estimated deflation probabilities are generally consistent with those from macroeconomic models and surveys of professional forecasters, but they also provide high-frequency insight into the views of financial market participants. The probabilities can also be used to price the deflation protection option embedded in real Treasury bonds.
Bond Currency Denomination and the Yen Carry Trade
In Asia and China in the Global Economy, ed. by Y W Cheung and G Ma, 2012. 245-282 | With Candelaria and Spiegel
We examine the determinants of issuance of yen-denominated international bonds over the period from 1990 through 2010. This period was marked by low Japanese interest rates that led some investors to pursue “carry trades,” which consisted of funding investments in higher interest rate currencies with low interest rate, yen-denominated obligations. In principle, bond issuers that have flexibility in their funding currency could also conduct a carry-trade strategy by funding in yen during this low interest rate period. We examine the characteristics of firms that appeared to have adopted this strategy using a data set containing almost 80,000 international bond issues. Our results suggest that there was a movement towards issuing in yen in the international bond markets starting in 2003, but this appears to have ended with the outbreak of the global financial crisis in 2007. Furthermore, the breakdown of carry-trade conditions in 2007 corresponds to a resurgence in the ability of economic fundamentals, such as the volume of trade with Japan, to explain the decision to issue international bonds denominated in yen.
Inflation Expectations and Risk Premiums in an Arbitrage-Free Model of Nominal and Real Bond Yields
Journal of Money, Credit, and Banking 42, September 2010, 143-178 | With Christensen and Rudebusch
Differences between yields on comparable-maturity U.S. Treasury nominal and real debt, the so-called breakeven inflation (BEI) rates, are widely used indicators of inflation expectations. However, better measures of inflation expectations could be obtained by subtracting inflation risk premiums from the BEI rates. We provide such decompositions using an estimated affine arbitrage-free model of the term structure that captures the pricing of both nominal and real Treasury securities. Our empirical results suggest that long-term inflation expectations have been well anchored over the past few years, and inflation risk premiums, although volatile, have been close to zero on average.
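The decomposition rests on simple identities: the breakeven rate is the spread between comparable-maturity nominal and real yields, and model-implied expected inflation is the breakeven less the estimated inflation risk premium. A minimal sketch with illustrative inputs (in the paper, the premium itself comes from the estimated affine model):

```python
def breakeven_inflation(nominal_yield, real_yield):
    """BEI rate: spread between comparable-maturity nominal and real yields."""
    return nominal_yield - real_yield

def expected_inflation(nominal_yield, real_yield, inflation_risk_premium):
    """Model-implied expected inflation: BEI less the inflation risk premium.
    The premium is an input here; the paper estimates it from an affine
    arbitrage-free term structure model."""
    return breakeven_inflation(nominal_yield, real_yield) - inflation_risk_premium

# Illustrative: 4.5% nominal, 2.0% real, 30bp estimated premium.
exp_infl = expected_inflation(0.045, 0.020, 0.003)
```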
Empirical Analysis of Corporate Credit Lines
Review of Financial Studies 22(12), December 2009, 5069-5098 | With Jimenez and Saurina
Since bank credit lines are a major source of corporate funding, we examine the determinants of their usage with a comprehensive database of Spanish corporate credit lines. A line’s default status is a key factor driving its usage, which increases as firm financial conditions worsen. Firms with prior defaults access their credit lines less, suggesting that bank monitoring influences firms’ usage decisions. Line usage has an aging effect that causes it to decrease by roughly 10% per year of its life. Lender characteristics, such as the length of a firm’s banking relationships, as well as macroeconomic conditions affect usage decisions.
Empirical Analysis of the Average Asset Correlation for Real Estate Investment Trusts
The credit risk capital requirements within the current Basel II Accord are based on the asymptotic single risk factor (ASRF) approach. The asset correlation parameter, defined as an obligor’s sensitivity to the ASRF, is a key driver within this approach, and its average values for different types of obligors are to be set by regulators. Specifically, for commercial real estate (CRE) lending, the average asset correlations are to be determined using formulas for either income-producing real estate or high-volatility commercial real estate. In this paper, the value of this parameter was empirically examined using portfolios of U.S. publicly traded real estate investment trusts (REITs) as a proxy for CRE lending more generally. CRE lending as a whole was found to have the same calibrated average asset correlation as corporate lending, providing support for the recent U.S. regulatory decision to treat these two lending categories similarly for regulatory capital purposes. However, the calibrated values for CRE categories, such as multifamily residential or office lending, varied in important ways. The comparison of calibrated and regulatory values of the average asset correlations for these categories suggests that the current regulatory formulas generate parameter values that may be too high in most cases.
U.S. bank supervisors conduct comprehensive inspections of bank holding companies and assign them a supervisory rating, known as a BOPEC rating prior to 2005, meant to summarize their overall condition. We develop an empirical model of these BOPEC ratings that combines supervisory and securities market information. Securities market variables, such as stock returns and bond yield spreads, improve the model’s in-sample fit. Debt market variables provide more information on supervisory ratings for banks closer to default, while equity market variables provide useful information on ratings for banks further from default. The out-of-sample accuracy of the model with securities market variables is little different from that of a model based on supervisory variables alone. However, the model with securities market information identifies additional ratings downgrades, which are of particular importance to bank supervisors who are concerned with systemic risk and contagion.
Regional Economic Conditions and Aggregate Bank Performance
In Research in Finance, 24, ed. by A. Chen | Bingley, UK: Emerald Group Publishing, 2008. 103-127 | With Daly and Krainer
The idea that a bank’s overall performance is influenced by the regional economy in which it operates is intuitive and broadly consistent with historical bank performance. Yet, micro-level research on the topic has borne mixed results, failing to find a consistent link between various measures of bank performance and regional economic variables. This chapter attempts to reconcile the intuition with the micro-level data by aggregating bank performance, as measured by nonperforming loans, up to the state level. This level of aggregation reduces the influence of idiosyncratic bank effects sufficiently so as to examine more clearly the influence of state-level economic variables. We show that regional variables, such as employment growth and changes in real estate prices, are not particularly useful for predicting changes in bank performance, but that coincident indicators developed to track a state’s gross output are quite useful. We find that these coincident indicators have a statistically significant and economically important influence on state-level, aggregate bank performance. In addition, the coincident indicators potentially contribute to the out-of-sample forecasts of the relative riskiness of state-level bank portfolios, which should be of interest to bankers and bank supervisors.
In China and Asia: Economic and Financial Interactions, Proceedings of the 2006 Asian Pacific Economic Association Conference, ed. by Yin-Wong Cheung and Kar-Yiu Wong | London: Routledge, 2008. 197-214 | With Spiegel
We examine foreign intermediation activity in Japan during the so-called “lost decade” of the 1990s, contrasting the behavior of lending by foreign commercial banks and underwriting activity by foreign investment banks over that period. Foreign bank lending is shown to be sensitive to domestic Japanese conditions, particularly Japanese interest rates, more so than their domestic Japanese bank counterparts. During the 1990s, foreign bank lending in Japan fell, both in overall numbers and as a share of total lending. However, there was marked growth in foreign underwriting activity in the international yen-denominated bond sector. A key factor in the disparity between these activities is their different clientele: While foreign banks in Japan lent primarily to domestic borrowers, international yen-denominated bond issuers were primarily foreign entities with yen funding needs or opportunities for profitable swaps. Indeed, low interest rates that discouraged lending activity in Japan by foreign banks directly encouraged foreign underwriting activity tied to the so-called “carry trades.” Regulatory reforms, particularly the “Big Bang” reforms of the 1990s, also play a large role in the growth of foreign underwriting activity over our sample period.
Managing the credit risk inherent to a corporate credit line is similar to that of a term loan, but with one key difference. For both instruments, the bank should know the borrower’s probability of default (PD) and the facility’s loss given default (LGD). However, since a credit line allows the borrowers to draw down the committed funds according to their own needs, the bank must also have a measure of the line’s exposure at default (EAD). Our study, which is based on a census of all corporate lending within Spain over the last 20 years, provides the most comprehensive overview of corporate credit line use and EAD calculations to date. Our analysis shows that defaulting firms have significantly higher credit line usage rates and EAD values up to five years prior to their actual default. Furthermore, we find that there are important variations in EAD values due to credit line size, collateralization, and maturity. While our results are derived from data for a single country, they should provide useful benchmarks for further academic, business and policy research into this underdeveloped area of credit risk management.
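A common convention consistent with the quantities described here expresses EAD as the drawn balance plus a credit conversion factor (CCF) applied to the undrawn commitment. A sketch with hypothetical numbers:

```python
def exposure_at_default(limit, drawn, ccf):
    """EAD under the credit-conversion-factor convention: the current drawn
    balance plus an assumed share (CCF) of the undrawn commitment."""
    return drawn + ccf * (limit - drawn)

# Hypothetical line: 100 committed, 40 drawn, 50% CCF on the undrawn part.
ead = exposure_at_default(100.0, 40.0, 0.5)
```

The paper's finding that defaulting firms run higher usage rates years before default maps directly into higher drawn balances, and hence higher EAD, in this convention.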
Alternative Measures of the Federal Reserve Banks’ Cost of Equity Capital
Journal of Banking and Finance 30(6), June 2006, 1687-1711 | With Barnes
The Monetary Control Act of 1980 requires the Federal Reserve System to provide payment services to depository institutions through the 12 Federal Reserve Banks at prices that fully reflect the costs a private-sector provider would incur, including a cost of equity capital (COE). Although Fama and French [Fama, E.F., French, K.R., 1997. Industry costs of equity. Journal of Financial Economics 43, 153-193] conclude that COE estimates are “woefully” and “unavoidably” imprecise, the Reserve Banks require such an estimate every year. We examine several COE estimates based on the CAPM model and compare them using econometric and materiality criteria. Our results suggest that the benchmark CAPM model applied to a large peer group of competing firms provides a COE estimate that is not clearly improved upon by using a narrow peer group, introducing additional factors into the model, or taking account of additional firm-level data, such as leverage and line-of-business concentration. Thus, a standard implementation of the benchmark CAPM model provides a reasonable COE estimate, which is needed to impute costs and set prices for the Reserve Banks’ payments business.
Foreign exchange rates are examined using cointegration tests over various time periods linked to regime shifts in central bank behavior. The number of cointegrating vectors varies across these regime changes within the foreign exchange market. For example, cointegration is generally not found prior to the Plaza Agreement of September 22, 1985, but it is present after that date. The significance of these changes is tested using a likelihood ratio procedure proposed by Quintos (1997). The changing nature of these cointegrating relationships indicates that certain types of central bank activity do have long-term effects on exchange rates.
We examine the relationship between indicators of financial development and economic performance for a cross-country panel over long and short periods. Our long-term results are consistent with much of the literature in that we find a positive relationship between financial development and economic growth. However, we fail to find a significant positive relationship after accounting for disparities in factor accumulation. These results therefore indicate that the primary channel for financial development to facilitate growth over the long run is through physical and human capital accumulation. We also identify a significant negative relationship between financial development and income volatility, suggesting that financial development does mitigate economic fluctuations in the long run. We then turn to short-run analysis, concentrating on the period immediately surrounding the 1997 Asian financial crisis. Unlike our long-term results, our short-term panel analysis fails to find a significant relationship between financial development and economic performance during this period, both for a broad sample of countries and for a small sample of developing Asian nations. Taken as a whole, our analysis appears to support a relatively new idea in the literature that while financial development is beneficial over the long run, it may exacerbate short-term volatility in isolated episodes. One reason for this discrepancy may be that financial liberalizations are typically only partial, resulting in increased financial market distortions. We analyze the Korean experience in the period surrounding the Asian financial crisis and argue that this experience supports the idea of distortionary partial liberalization.
A key component of managing international interest rate portfolios is forecasts of the covariances between national interest rates and accompanying exchange rates. How should portfolio managers choose among the large number of covariance forecasting models available? We find that covariance matrix forecasts generated by models incorporating interest-rate level volatility effects perform best with respect to statistical loss functions. However, within a value-at-risk (VaR) framework, the relative performance of the covariance matrix forecasts depends greatly on the VaR distributional assumption, and forecasts based just on weighted averages of past observations perform best. In addition, portfolio variance forecasts that ignore the covariance matrix generate the lowest regulatory capital charge, a key economic decision variable for commercial banks. Our results provide empirical support for the commonly used VaR models based on simple covariance matrix forecasts and distributional assumptions.
We examine whether equity market variables, such as stock returns and equity-based default probabilities, are useful to U.S. bank supervisors for assessing the condition of domestic bank holding companies. We develop a model of supervisory ratings that combines supervisory and equity market information. We find that the model’s forecasts anticipate supervisory rating changes by up to four quarters. Relative to simply using supervisory variables, the inclusion of equity market variables in the model does not improve forecast accuracy. However, we argue that equity market information should still be useful for forecasting supervisory ratings and should be incorporated into supervisory monitoring models.
The asymptotic single risk factor approach is a framework for determining regulatory capital charges for credit risk, and it has become an integral part of the second Basel Accord. Within this approach, a key parameter is the average asset correlation. I examine the empirical relationship between this parameter, firm probability of default, and firm asset size measured by the book value of assets. Using data from year-end 2000, credit portfolios consisting of U.S., Japanese, and European firms are analyzed. The empirical results suggest that average asset correlation is a decreasing function of probability of default and an increasing function of asset size. The results suggest that these factors may need to be accounted for in the final calculation of regulatory capital requirements for credit risk.
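For reference, the ASRF framework maps PD, LGD, and the asset correlation rho into a capital charge via the stressed default probability at a high confidence level. A sketch using the standard Basel II corporate formula (omitting the maturity adjustment); the inputs below are illustrative, not values from the paper.

```python
import math
from statistics import NormalDist

N = NormalDist()  # standard normal

def asrf_capital(pd_, lgd, rho, conf=0.999):
    """Per-unit-exposure capital under the ASRF model: LGD times the
    stressed default probability at the chosen confidence level, less
    expected loss (Basel II formula without the maturity adjustment)."""
    stressed_pd = N.cdf((N.inv_cdf(pd_) + math.sqrt(rho) * N.inv_cdf(conf))
                        / math.sqrt(1.0 - rho))
    return lgd * (stressed_pd - pd_)
```

The role of the average asset correlation is visible directly: holding PD and LGD fixed, the charge rises with rho, which is why calibrating this parameter by obligor type matters.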
According to the 1980 Monetary Control Act, the Federal Reserve Banks must establish fees for their priced services to recover all operating costs as well as the imputed costs of capital and taxes that would be incurred by a profit-making firm. Since 2002, the Federal Reserve has made fundamental changes to the calculations used to set the imputed costs. This article describes and analyzes the current approach, which is based on a simple average of three methods as applied to a peer group of bank holding companies. The methods estimate the cost of equity capital from three perspectives: the historical average of comparable accounting earnings, the discounted value of expected future cash flows, and the equilibrium price of investment risk as per the capital asset pricing model. The authors show that the current approach provides stable and sensible estimates of the cost of equity capital over the past 20 years.
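The three perspectives reduce to simple formulas, which can be sketched as follows. All inputs are illustrative placeholders, not the Reserve Banks' actual figures.

```python
def capm_coe(risk_free, beta, equity_premium):
    """CAPM perspective: risk-free rate plus beta times the equity premium."""
    return risk_free + beta * equity_premium

def dcf_coe(dividend_next, price, growth):
    """Discounted-cash-flow (Gordon growth) perspective: D1/P + g."""
    return dividend_next / price + growth

def coe_estimate(capm, dcf, accounting_roe):
    """The current approach: a simple average of the three method estimates,
    with the accounting perspective proxied by average comparable ROE."""
    return (capm + dcf + accounting_roe) / 3.0
```

Averaging across methods is a hedge against the imprecision of any single estimate, in the spirit of the Fama-French critique cited above.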
Forecasting Supervisory Ratings Using Securities Market Information
In Corporate Governance: Implications for Financial Services Firms. The 39th Annual Conference on Bank Structure and Financial Services Firms | Chicago: FRB Chicago, 2003 | With Krainer
Approximately once a year, bank supervisors in the United States conduct a comprehensive on-site inspection of a bank holding company and assign it a supervisory rating meant to summarize its overall condition. We develop an empirical forecasting model of these ratings that combines accounting and financial market data. We find that securities market variables, such as stock returns and changes in bond yield spreads, improve the model’s in-sample fit. Both equity and debt market variables appear to be useful for explaining upgrades and downgrades. We conclude that stock and bond market investors possess different but complementary information about bank holding company condition. In an out-of-sample forecasting exercise, we find that the forecast accuracy of the model with both equity and debt variables is little different from the accuracy of a model based on accounting and lagged supervisory data alone.
Supervisory and Regulatory Concerns Regarding Bank Internal Ratings Systems
In Credit Ratings: Methodologies, Rationale and Default, ed. by Ong | London: Risk Books, 2002. 305-314 | With Saidenberg
Internal rating systems are one of the oldest and most widely used credit risk measurement tools at commercial banks. These systems distill the information on potentially thousands of borrowers into common ratings that summarize risk characteristics and permit comparisons across the entire loan portfolio. Many large banks use their ratings in several aspects of credit risk management, such as loan origination and pricing, credit portfolio monitoring, profitability analysis, and management reporting. Since internal ratings are such a key element of credit risk management systems, it is not surprising that they have attracted greater attention from international bank regulators and supervisors. In particular, the ongoing work of the Basel Committee on Banking Supervision to update international regulatory capital requirements, commonly referred to as the Basel II process, has brought internal ratings to the center of regulatory concerns. This increased emphasis on and expanded use of internal ratings should clearly heighten supervisory attention, since national bank supervisors will be faced with the task of implementing these regulatory requirements and monitoring bank adherence to them over time. In this paper, we discuss the supervisory concerns that already exist and outline the regulatory issues arising from the Basel II process. We attempt to highlight where the Basel II process will affect supervisory concerns the most and discuss possible future directions for supervisory and regulatory concerns regarding internal rating systems.
Covariance matrix forecasts of financial asset returns are an important
component of current practice in financial risk management. A wide variety of models, ranging from matrices of simple summary measures to covariance matrices implied from option prices, are available for generating such forecasts. In this paper, we evaluate the relative accuracy of different covariance matrix forecasts using standard statistical loss functions and a value-at-risk (VaR) framework. This framework consists of hypothesis tests examining various properties of VaR models based on these forecasts as well as an evaluation using a regulatory loss function.
Using a foreign exchange portfolio, we find that implied covariance matrix forecasts appear to perform best under standard statistical loss functions. However, within the economic context of a VaR framework, the performance of VaR models depends more on their distributional assumptions than on their covariance matrix specification. Of the forecasts examined, simple specifications, such as exponentially weighted moving averages of past observations, perform best with regard to the magnitude of VaR exceptions and regulatory capital requirements. These results provide empirical support for the commonly used VaR models based on simple covariance matrix forecasts and distributional assumptions.
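The exponentially weighted moving-average forecasts that perform well in the VaR framework can be sketched in a few lines. The recursion below is the standard EWMA update (the decay factor 0.94 is the classic RiskMetrics daily value); the portfolio weights, return series, and confidence level are hypothetical, and the normal distributional assumption is one of the simple assumptions the abstract refers to.

```python
import numpy as np

def ewma_covariance(returns, lam=0.94):
    """One-step-ahead EWMA covariance forecast from a (T, N) return matrix,
    oldest observation first: cov_t = lam * cov_{t-1} + (1 - lam) * r_t r_t'."""
    cov = np.cov(returns.T, bias=True)              # initialize at the sample covariance
    for t in range(returns.shape[0]):
        r = returns[t][:, None]                     # column vector r_t
        cov = lam * cov + (1.0 - lam) * (r @ r.T)   # recursive update
    return cov

def portfolio_var(weights, cov, z=2.326):
    """One-day VaR under a normal assumption; z = 2.326 is the 99% quantile."""
    return z * np.sqrt(weights @ cov @ weights)

# Hypothetical three-currency portfolio with synthetic daily FX returns
rng = np.random.default_rng(0)
rets = rng.normal(0.0, 0.01, size=(500, 3))
cov = ewma_covariance(rets)
w = np.array([0.5, 0.3, 0.2])
print(portfolio_var(w, cov))   # 99% one-day VaR as a fraction of portfolio value
```

Counting the days on which realized losses exceed this VaR figure (the "exceptions") is the basis of the hypothesis tests and the regulatory loss function used in the evaluation.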
Standard statistical loss functions, such as mean-squared error, are commonly used for evaluating financial volatility forecasts. In this paper, an alternative evaluation framework, based on probability scoring rules that can be more closely tailored to a forecast user’s decision problem, is proposed. According to the decision at hand, the user specifies the economic events to be forecast, the scoring rule with which to evaluate these probability forecasts, and the subsets of the forecasts of particular interest. The volatility forecasts from a model are then transformed into probability forecasts of the relevant events and evaluated using the selected scoring rule and calibration tests. An empirical example using exchange rate data illustrates the framework and confirms that the choice of loss function directly affects the forecast evaluation results.
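The transformation from a volatility forecast to a probability forecast can be made concrete. Under a zero-mean normal assumption (one possible choice, not necessarily the paper's), a volatility forecast implies a probability that the absolute return exceeds a user-chosen threshold, and those probabilities can be scored with the quadratic probability (Brier) score. The forecasts, returns, and threshold below are hypothetical.

```python
import math
import numpy as np

def exceed_prob(sigma, c):
    """P(|r| > c) when r ~ N(0, sigma^2): turns a volatility forecast
    into a probability forecast of the user-specified event."""
    phi = 0.5 * (1.0 + math.erf(c / (sigma * math.sqrt(2.0))))  # standard normal CDF
    return 2.0 * (1.0 - phi)

def brier_score(probs, outcomes):
    """Quadratic probability scoring rule; lower is better."""
    probs, outcomes = np.asarray(probs), np.asarray(outcomes)
    return np.mean((probs - outcomes) ** 2)

# Hypothetical daily volatility forecasts and realized returns
sigmas = np.array([0.010, 0.012, 0.008, 0.015])
rets = np.array([0.005, -0.020, 0.001, 0.030])
c = 0.015                                    # event of interest: |return| > 1.5%
probs = np.array([exceed_prob(s, c) for s in sigmas])
hits = (np.abs(rets) > c).astype(float)      # realized event indicators
print(brier_score(probs, hits))
```

Different users can plug in different events, scoring rules, or forecast subsets here, which is exactly the tailoring to the decision problem that the framework allows.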
This article provides an example of how geometric concepts can help visualize and interpret the sometimes complex relations between financial variables. We illustrate the power and the elegance of the geometric approach to statistical concepts in finance by analyzing the volatilities and correlations in a currency trio, i.e., a set of three currencies. We expand previous work in this area by providing further insight into the relationship between volatilities and correlations in a currency trio and by analyzing differences in the correlation structure across currency trios and over time. We also present a graphical method for comparing the predictive ability of correlation forecasts from several competing models. The geometric approach towards analyzing correlation structures and correlation forecasts may be particularly helpful for financial institutions. As these institutions survive on their ability to react to the massive amount of data generated by financial markets and management information systems, they can take advantage of the human capability to instantaneously understand pictures by transforming such data into graphics.
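The geometric constraint behind a currency trio can be stated directly: because log cross rates add (log S_AC = log S_AB + log S_BC), the three volatilities obey a law-of-cosines relation, so the correlation between two of the return series is pinned down by the trio's volatilities and equals the cosine of the angle between the return vectors. The sketch below, with hypothetical volatility values, illustrates this identity rather than the article's full graphical apparatus.

```python
import math

def implied_correlation(vol_ab, vol_bc, vol_ac):
    """Correlation between A/B and B/C returns implied by the trio's
    volatilities, via vol_ac^2 = vol_ab^2 + vol_bc^2 + 2*rho*vol_ab*vol_bc."""
    return (vol_ac**2 - vol_ab**2 - vol_bc**2) / (2.0 * vol_ab * vol_bc)

def angle_degrees(rho):
    """Geometric view: correlation is the cosine of the angle between returns."""
    return math.degrees(math.acos(rho))

# Hypothetical annualized volatilities for the three exchange rates in a trio
rho = implied_correlation(0.10, 0.12, 0.15)
print(rho, angle_degrees(rho))
```

Plotting the angles rather than the raw correlations is one way such data can be transformed into the instantly readable graphics the article advocates.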
Over the past decade, commercial banks have devoted many resources to developing
internal models to better quantify their financial risks and assign economic capital. These efforts
have been recognized and encouraged by bank regulators. Recently, banks have extended these
efforts into the field of credit risk modeling. However, an important question for both banks and
their regulators is evaluating the accuracy of a model’s forecasts of credit losses, especially given
the small number of available forecasts due to their typically long planning horizons. Using a
panel data approach, we propose evaluation methods for credit risk models based on cross-sectional
simulation. Specifically, models are evaluated not only on their forecasts over time, but
also on their forecasts at a given point in time for simulated credit portfolios. Once the forecasts
corresponding to these portfolios are generated, they can be evaluated using various statistical methods.
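The cross-sectional idea can be sketched as follows: from a single cross-section of borrowers, resample many simulated sub-portfolios and compare the model's forecast loss with the realized loss on each one, so that a model can be evaluated even with few time periods. The data below are synthetic and the loss-given-default of one is a simplifying assumption, not the paper's calibration.

```python
import numpy as np

def resample_portfolios(losses, forecasts, n_portfolios=1000, size=100, seed=0):
    """Cross-sectional evaluation sketch: draw simulated credit portfolios
    (with replacement) and record forecast-minus-realized loss for each."""
    rng = np.random.default_rng(seed)
    errors = []
    for _ in range(n_portfolios):
        idx = rng.integers(0, len(losses), size=size)   # one simulated portfolio
        errors.append(forecasts[idx].mean() - losses[idx].mean())
    return np.array(errors)

# Synthetic cross-section: true default probabilities and realized defaults
rng = np.random.default_rng(1)
pd_true = rng.uniform(0.01, 0.05, size=2000)        # borrower default probabilities
realized = rng.binomial(1, pd_true).astype(float)   # realized losses (LGD = 1)
forecast = pd_true                                  # a well-calibrated model
err = resample_portfolios(realized, forecast)
print(err.mean(), err.std())   # near-zero mean error for a calibrated model
```

The distribution of these portfolio-level errors is what the statistical tests then examine; a biased model would show a mean error shifted away from zero.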
The vast majority of the term structure literature has focused on modeling the risk-free term structure implied by Treasury bond yields. Because fixed-income markets are interconnected, we combine the modeling of Treasury yields with a model of the common factors present in representative risky credit spread term structures derived from Bloomberg corporate bond data. The question we address is whether we can improve our understanding of, and our ability to forecast, Treasury yields by incorporating information from the corporate bond market. We use the arbitrage-free dynamic version of the Nelson-Siegel yield curve model derived in Christensen, Diebold, and Rudebusch (2007) to model Treasury yields and corporate bond spreads across rating and industry categories. In addition to the three Nelson-Siegel factors for Treasury yields, we find that two common factors, a level and a slope factor, are required to capture the time series dynamics of aggregated credit spreads. We find that the preferred specifications of the joint dynamics of all five factors have feedback effects from the Treasury factors to the credit risk factors, as well as feedback effects from the credit risk factors to the Treasury factors. To determine the significance of these feedback effects, we perform an out-of-sample forecast exercise. The results so far suggest that the preferred Treasury yield model can easily beat the random walk and that adding information from the credit markets improves forecast performance even further for forecast horizons up to 26 weeks.
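The level, slope, and curvature factors referred to above enter the static Nelson-Siegel yield curve as y(tau) = L + S(1-e^(-lam*tau))/(lam*tau) + C[(1-e^(-lam*tau))/(lam*tau) - e^(-lam*tau)]. The sketch below evaluates this curve at a few maturities; the factor values are hypothetical, and fixing the decay parameter at 0.0609 (with maturities in months) follows the common Diebold-Li convention rather than this paper's estimates.

```python
import numpy as np

def nelson_siegel(tau, level, slope, curvature, lam=0.0609):
    """Nelson-Siegel yield at maturity tau (months). The slope loading goes
    to 1 as tau -> 0 and to 0 as tau -> infinity; the curvature loading is
    hump-shaped; the level loading is constant."""
    x = lam * tau
    load_s = (1.0 - np.exp(-x)) / x      # slope factor loading
    load_c = load_s - np.exp(-x)         # curvature factor loading
    return level + slope * load_s + curvature * load_c

taus = np.array([3, 12, 60, 120, 360])   # maturities in months
print(nelson_siegel(taus, level=5.0, slope=-2.0, curvature=1.0))
```

In the dynamic (DNS) version, the three coefficients become time-varying state variables, and the joint model here adds two more states for the credit spread level and slope, giving the five factors whose feedback dynamics the paper estimates.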