Will Fiscal Risk Analysis Cause the Next Global Crisis?

Timothy Irwin

October 6, 2010


In the wake of the financial crisis, the models that banks use to estimate their exposure to risk have come in for a lot of criticism. Nassim Taleb has said that one "put the world at risk," while Felix Salmon described another as "instrumental in causing the unfathomable losses that brought the world financial system to its knees."[1] Underlying these claims are at least three concerns: first, that it is next to impossible to accurately estimate the probabilities of very unlikely events because, inevitably, there is little data on them; second, that financial models often assume for simplicity that price changes are normally distributed, while their true distribution has fatter tails, making extreme price movements more common than the models imply; and third, that people naively assume the models are more accurate than they are, creating a false sense of security.

What are the implications of this critique for the estimation of fiscal risks? Will fiscal risk analysis cause the next global crisis?

One reason for thinking not is that sophisticated modeling plays a modest role in fiscal risk analysis. Whereas banks typically have estimates of the current values of their assets and liabilities, and want to know the probability distributions of future values, ministries of finance often have little information on the current values of some of the government’s important assets and liabilities. So, much work that goes by the name of fiscal risk analysis is devoted to estimating expected future cash flows related to unrecorded assets and liabilities, such as those associated with pensions, future health spending, and public-private partnerships. Work on government guarantees and other contingent liabilities is more likely to focus on collecting and publishing basic information about liabilities than on deriving probabilistic estimates of their values or how those values might evolve. Other work on fiscal risk just estimates the sensitivity of forecasts of spending and revenue to changes in GDP and other variables.[2]
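To give a flavor of that last, simplest kind of analysis, a constant revenue-to-GDP elasticity can translate assumed GDP shocks into revenue changes. The function and all of its numbers below are hypothetical, a minimal sketch rather than any government's actual method:

```python
def revenue_sensitivity(baseline_revenue, elasticity, gdp_shocks):
    """Map each GDP shock (in percentage points) to the implied change in
    revenue, using a constant revenue-to-GDP elasticity. A common
    simplification in budget sensitivity tables; parameters illustrative."""
    return {shock: baseline_revenue * elasticity * shock / 100
            for shock in gdp_shocks}

# Hypothetical inputs: baseline revenue of 100, elasticity of 1.1,
# and GDP shocks of -2 to +2 percentage points.
table = revenue_sensitivity(100.0, 1.1, [-2, -1, 1, 2])
```

A real sensitivity exercise would also cover spending, interest rates, and exchange rates, but the mechanics are of this simple kind.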

Yet some fiscal risk analysis does involve more-sophisticated modeling. The US government, for example, uses stochastic models to estimate the cost of several loan-guarantee programs, while Chile's government uses them to estimate the costs and cash-flow risks associated with the revenue guarantees it has granted to toll-road and airport concessionaires. The New Zealand Treasury estimates value-at-risk in the part of the government's debt portfolio that is matched to financial assets, and it has experimented with estimates of value-at-risk in the government's total portfolio of assets and liabilities. The IMF has also in some cases used stochastic models, for example, to gauge the vulnerability of a government's budget to changes in oil prices and to test how likely governments are to run into problems repaying their debt. Some of these models are more or less direct adaptations of models developed for the financial sector, and most assume that risks are normally distributed.
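To give a flavor of what such a stochastic model can look like, here is a minimal Monte Carlo sketch of the expected payout under a minimum-revenue guarantee. Everything in it (the revenue floor, the lognormal revenue distribution, and its parameters) is an illustrative assumption, not a description of Chile's or anyone else's actual model:

```python
import random
import statistics

def expected_guarantee_cost(n_sims=100_000, floor=90.0, seed=42):
    """Monte Carlo estimate of the expected payout on a minimum-revenue
    guarantee: the government covers any shortfall of revenue below the
    floor. Revenue is drawn from a lognormal distribution with a median
    near 100 and roughly 20% volatility (illustrative numbers only)."""
    rng = random.Random(seed)
    payouts = [max(floor - rng.lognormvariate(4.6, 0.2), 0.0)
               for _ in range(n_sims)]
    return statistics.mean(payouts)
```

A fuller model would simulate revenue over the life of the concession and discount the payouts; the sketch only conveys the basic mechanics of valuing a guarantee by simulation.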

Should these more ambitious attempts at estimating fiscal risks be abandoned for fear that they will lead governments to take on more risk than they should? Abandoning modeling is unappealing, because the alternative to using models to estimate risks is to rely on intuition, and the flaws of intuitive judgments about risks have been convincingly demonstrated by the psychologists Daniel Kahneman, Amos Tversky, and their colleagues. Governments may underestimate risks if they rely on faulty models, but they may underestimate them even more if they don't. Moreover, governments have less ability than private firms to avoid exposure to extreme adverse events. Whether they like it or not, they tend to be lenders and risk-bearers of last resort. Thus it matters greatly to governments how big and how probable those events are.

But the problems with the assumption of normality suggest, first, that fiscal risk analysts should use models that can generate fat tails. Using simple models based on the normal distribution is natural when governments are just starting to model risks, and there are cases in which it is defensible even if the government has the capacity to use more-complex models: governments are often more concerned about long-term changes than the short-term changes that preoccupy banks, and the distribution of long-term changes tends to be less fat-tailed than the distribution of short-term changes.[3] But assuming that returns are normally distributed creates too rosy a view of many risks.
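The difference fat tails make can be seen in a small simulation. The sketch below compares the frequency of four-standard-deviation moves under a normal distribution and under a Student-t distribution with 3 degrees of freedom (one common choice of fat-tailed distribution, used here purely for illustration), with both rescaled to unit variance:

```python
import math
import random

def tail_frequency(draws, k=4.0):
    """Fraction of draws more than k standard deviations from zero."""
    return sum(abs(x) > k for x in draws) / len(draws)

def simulate_tails(n=200_000, df=3, seed=1):
    """Draw unit-variance samples from a normal distribution and from a
    Student-t distribution with `df` degrees of freedom (df > 2 so the
    variance exists). The t draws use the standard construction
    Z / sqrt(ChiSquare(df) / df), then rescale to unit variance."""
    rng = random.Random(seed)
    normal = [rng.gauss(0.0, 1.0) for _ in range(n)]
    rescale = math.sqrt(df / (df - 2))  # Var(t_df) = df / (df - 2)
    student_t = [
        rng.gauss(0.0, 1.0)
        / math.sqrt(rng.gammavariate(df / 2, 2.0) / df)  # ChiSquare(df)/df
        / rescale
        for _ in range(n)
    ]
    return normal, student_t
```

With these settings the t draws breach the four-sigma threshold far more often than the normal draws, which is exactly the gap that a normality assumption hides when extreme outcomes are what matter.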

Second, the problems of risk modeling also suggest taking great care in the presentation and use of fiscal risk models. Models that don't assume normality may get closer to the truth, but they will still be wrong, especially in what they say about very unlikely events. Even when the analyst who developed a model understands its flaws, the model's users may put too much faith in it. It is not hard to imagine politicians, impressed by fancy models of the risks, exposing their countries to too much financial risk.


[1] Nassim Taleb, The Black Swan: The Impact of the Highly Improbable, second edition, p. 225; Felix Salmon, "Recipe for Disaster: The Formula That Killed Wall Street," Wired, 23 February 2009.

[2] See, for example, the sections on "risks and scenarios" and "fiscal risks" in the New Zealand government's twice-yearly economic and fiscal updates.

[3] See, for example, John Y. Campbell, Andrew W. Lo, and A. Craig MacKinlay, The Econometrics of Financial Markets, 1997, ch. 1.

Note: The posts on the IMF PFM Blog should not be reported as representing the views of the IMF. The views expressed are those of the authors and do not necessarily represent those of the IMF or IMF policy.