André Reslow defends his thesis Electoral Incentives and Information Content in Macroeconomic Forecasts


André Reslow defends his thesis Electoral Incentives and Information Content in Macroeconomic Forecasts on Friday, 19 March, at 10:15 in Lecture Hall 2 at Ekonomikum, Kyrkogårdsgatan 10, Uppsala. Please note that the defense is a digital event.


The thesis deals with the behavior and performance of macroeconomic forecasters. It introduces political incentives in macroeconomic forecasts and shows that forecasters who prefer a particular outcome in an election or a referendum will try to influence voters by publishing biased forecasts. The thesis also studies forecasters’ behavior and performance with a focus on information content. Specifically, it investigates how forecasters use information from competitors and the importance of considering the timing of the publication and the information available when assessing forecast performance.

The discussant is Professor Gisle Natvik, BI Norwegian Business School, Oslo, and the grading committee members are Professor Sven Oskarsson, Department of Government, Uppsala University; Professor Oskar Norström Skans, Department of Economics, Uppsala University; and Associate Professor Anna Seim, Department of Economics, Stockholm University.

The advisors are Professor Mikael Carlsson, Department of Economics, Uppsala University, and Assistant Professor Jesper Lindé, Sveriges Riksbank.


Essay I (with Davide Cipullo): This essay introduces macroeconomic forecasters as new political agents and suggests that they use their forecasts to influence voting outcomes. The essay develops a probabilistic voting model in which voters do not have complete information about the future economy and rely on professional forecasters when forming beliefs. The model predicts that forecasters with economic interests (stakes) and influence optimally publish biased forecasts before a referendum. The theory is tested using data surrounding the Brexit referendum. The results show that forecasters with stakes and influence released more pessimistic and less accurate estimates of GDP growth conditional on the Leave outcome than other forecasters did.

Essay II (with Davide Cipullo): This essay documents the existence of Political Forecast Cycles. A theoretical model of political selection shows that governments release overly optimistic GDP growth forecasts ahead of elections to increase the reelection probability. The theory is tested using forecast data from the United States, the United Kingdom, and Sweden. The results confirm key model predictions and show that governments overestimate short-term GDP growth by 10 to 13 percent during campaign periods. Moreover, the bias is larger when the incumbent is not term-limited or constrained by a parliament led by the opposition. Furthermore, election timing determines the size of the bias at different forecast horizons.

Essay III: This essay assesses to what extent forecasters use competitors’ forecasts efficiently. Empirical results from a large panel of forecasters suggest that forecasters underuse information from their competitors when forecasting GDP growth and inflation. The results also show that forecasters pay more attention to competitors when releasing short-term forecasts than medium-term forecasts. A belief-updating model with noisy and private information supports the underuse interpretation and predicts that it is optimal to pay sizable attention to competitors’ work. Furthermore, the essay shows that a revision cost model can only match the observed behavior if asymmetric horizon discounting between revision costs and forecast-error losses is assumed.

Essay IV (with Michael K. Andersson and Ted Aranki): This essay proposes a method to account for differences in release dates when assessing an unbalanced panel of forecasters. Cross-institutional forecast evaluations may be severely distorted because forecasts are made at different points in time and thus with different amounts of information. The proposed method estimates the timing effect and each forecaster’s ability (performance) simultaneously. Simulations demonstrate that evaluations that do not adjust for these differences in information may be misleading. The method is also applied to a real-world data set of 10 Swedish forecasters, and the results show that the forecasters’ ability ranking is affected by the proposed adjustment.
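The joint-estimation idea behind Essay IV can be illustrated with a small simulation. This is only a minimal sketch, not the thesis’s actual method: it assumes a simple linear specification in which a forecaster’s error is the sum of a forecaster-specific ability level and a timing effect proportional to the release lead time, and all names and parameter values are illustrative.

```python
import numpy as np

# Sketch: jointly estimate per-forecaster ability and a common timing
# effect from forecast errors, where forecasts released further ahead of
# the outcome are assumed to carry larger errors.
rng = np.random.default_rng(0)

n_forecasters, n_targets = 5, 40
ability = rng.normal(1.0, 0.3, n_forecasters)  # true per-forecaster error level
gamma = 0.02                                   # true timing effect per day of lead time

rows, y = [], []
for f in range(n_forecasters):
    for t in range(n_targets):
        lead = rng.integers(5, 120)            # days between release and outcome
        err = ability[f] + gamma * lead + rng.normal(0, 0.1)
        dummies = np.zeros(n_forecasters)      # forecaster fixed effects
        dummies[f] = 1.0
        rows.append(np.append(dummies, lead))
        y.append(err)

# One least-squares fit recovers both ability and timing simultaneously.
X = np.array(rows)
coef, *_ = np.linalg.lstsq(X, np.array(y), rcond=None)
est_ability, est_gamma = coef[:-1], coef[-1]

# Ranking forecasters by raw average error mixes ability with release
# timing; ranking by est_ability removes the timing component.
print(est_ability, est_gamma)
```

The point of the sketch is that a forecaster who releases early can look worse on raw errors than a genuinely less able forecaster who releases late; estimating both components in one regression separates the two.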

Download the thesis here

Read more about André on his personal website

Last modified: 2022-10-07