Forecasting uncertainty in practice: Snowperbole

Example of snow forecast communicating levels of uncertainty; image from the Capital Weather Gang

Because making accurate predictions is extremely difficult, embracing the uncertainty involved in a forecast can dramatically improve both its accuracy and the effectiveness of communication about it. This allows decision-makers to use the information available while understanding the limits of those predictions.

Following forecasts for a “potentially historic” storm set to hit New York and New England, public officials in New York City went to great lengths to emphasize the dangers of the storm. The Governor shut down New York’s subways in anticipation of the storm (highlighting one of the quirks of New York’s transit governance: local transit is under state control).

There was just one problem: the storm mostly missed NYC.

In their forecast post-mortem, the Washington Post’s Capital Weather Gang highlighted the key shortcoming of the forecast: a failure to present the level of uncertainty involved.

Why were the forecasts so bad?

It’s simple: Many forecasters failed to adequately communicate the uncertainty in what was an extremely complicated forecast. Instead of presenting the forecast as a range of possibilities, many outlets simply presented the worst-case scenario.

For New York City especially, some computer model forecasts were extremely dire, predicting upwards of 30 inches of snow – enough to shatter all-time snowfall records. The models producing these forecasts (the NAM and the European model) had a good enough track record to be taken seriously.

However, other models (e.g. the GFS) signaled reason for caution, predicting closer to a foot of snow.
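
To make the contrast concrete, here is a minimal sketch (not any forecaster’s actual method; the inch values are hypothetical, loosely echoing the model spread above) of summarizing competing model forecasts as a percentile range rather than reporting only the worst case:

```python
# A minimal sketch, assuming hypothetical snowfall numbers for NYC that
# loosely echo the spread above (NAM/European near 30", GFS near 12").
# It summarizes the ensemble as a range instead of just the maximum.
import statistics

ensemble_inches = [30, 28, 24, 18, 14, 12, 11]  # hypothetical model runs

def forecast_range(members, low=10, high=90):
    """Return the (low percentile, median, high percentile) of an ensemble."""
    cuts = statistics.quantiles(members, n=100)  # 99 percentile cut points
    return cuts[low - 1], statistics.median(members), cuts[high - 1]

lo, med, hi = forecast_range(ensemble_inches)
print(f'Worst case only: {max(ensemble_inches)}"')
print(f'Range: {lo:.0f}"-{hi:.0f}", median around {med:.0f}"')
```

Presented this way, the same model guidance reads as “likely a foot, possibly much more” rather than “30 inches are coming.”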

Part of the challenge is that most of the forecast was accurate. This was a historic storm; it simply tracked a bit further to the east than expected. Areas like New York City were right on the margins, where a small change to the inputs can mean a large change in the outcome – and the forecast did not adequately convey that uncertainty. Add in the fact that the forecast miss happened in the largest city in the United States, and you have a very public error.

When a forecast is this sensitive to small changes (eastern Long Island, not far away, received 30-plus inches), it is imperative to convey loudly that small shifts could have profound effects on what actually happens.
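
As a toy illustration (the numbers and the linear falloff are entirely invented, just to show the shape of the problem), consider a model where snow totals drop off sharply west of the storm’s heavy-snow band; a modest eastward shift in the track barely changes totals inside the band but slashes them at a city on the margin:

```python
# A toy illustration with invented numbers: snowfall falls off linearly
# west of the storm's heavy-snow band, so a small eastward track shift
# barely matters inside the band but is dramatic on the western margin.
def snow_total(miles_west_of_band, peak_inches=30.0):
    """Hypothetical total: lose 1 inch for every 3 miles west of the band."""
    return max(0.0, peak_inches - miles_west_of_band / 3.0)

for shift_east in (0, 50):  # storm track: as forecast vs. 50 miles east
    nyc = snow_total(20 + shift_east)              # NYC starts 20 mi west of the band
    east_li = snow_total(max(0, shift_east - 60))  # eastern Long Island, inside it
    print(f'track +{shift_east} mi east: NYC {nyc:.0f}", eastern Long Island {east_li:.0f}"')
```

In this made-up geometry, a 50-mile track shift cuts NYC’s total from roughly 23 inches to 7 while eastern Long Island still gets the full 30 – exactly the kind of margin-of-the-storm sensitivity a point forecast hides.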

It’s easy to second-guess public officials making key decisions like closing transit systems after the fact (and after the forecast bust), but they can only act on the information in front of them. It’s tempting to argue that it is better to be safe than sorry (and it is), but there is a real risk of eroding public confidence in these kinds of decisions when the forecast doesn’t pan out. (It doesn’t help that, despite being closed to passengers, the subways kept running: the MTA’s snow plan called for empty trains to stay in operation to keep the tracks clear of snow.)

As some meteorologists suggest, conveying uncertainty should be a larger element of both the forecast itself and how it is communicated. Good forecasting is not just a matter of using the best information available, but also of understanding the uncertainty involved.
