Tag Archives: uncertainty

672,228 – DC’s growth continues – short-term trend or long-term shift?

Just before the end of the year, the US Census Bureau releases its state-level population estimates. Thanks to DC’s city-state status, we get an early view of the District’s population trends before other major cities. DC’s 2015 estimate clocks in at 672,228 people – a 1.9% increase over 2014.

In 2009 and early 2010, I had a chance to help coordinate the District’s local outreach for the decennial census, emphasizing the importance of getting an accurate count of the city’s population. Back then, we were hoping to see a number above 600,000. Five years later, we’ve blown past that, climbing back to the city’s population in 1977:

DC (and Baltimore) population estimates, hovered over 1977. Screenshot from a Google search for DC Population; data from the US Census Bureau.

(There’s also a great deal of uncertainty to contend with. Census estimates are often revised as better data is collected.)

DC’s press release about the data breaks down the components of the recent growth: roughly one third came from natural increase, one third from net domestic migration, and one third from net international migration:

According to the US Census Bureau, the main driver of the increase was domestic and international migration—people moving to the District from other parts of the United States, and from abroad. Between July 2014 and July 2015, in addition to the natural increase (births minus deaths) of 4,375 residents, a total of 8,282 more people moved into the District than moved out. Of these 8,282 net new residents to the city, 3,731 more people moved from other U.S. states than moved out and 4,551 more moved to the District from other countries than the number of residents that left the District for other countries. While net international migration made a greater contribution to the District’s population growth than net domestic migration, net domestic migration has grown four times its previous year total and demonstrates that the District continues to attract residents from other U.S. states.
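The “thirds” framing checks out with a bit of arithmetic – a quick sketch in Python, using the Census figures quoted above:

```python
# Components of DC's population change, July 2014 - July 2015
# (figures from the Census Bureau, via DC's press release)
natural_increase = 4375    # births minus deaths
net_domestic = 3731        # net movers from other US states
net_international = 4551   # net movers from abroad

net_migration = net_domestic + net_international
total_change = natural_increase + net_migration
print(total_change)        # 12657 (of which 8282 is net migration)

for name, value in [("natural increase", natural_increase),
                    ("net domestic", net_domestic),
                    ("net international", net_international)]:
    print(f"{name}: {value / total_change:.0%}")
# natural increase: 35%
# net domestic: 29%
# net international: 36%
```

Each component really is in the neighborhood of one third, though international migration leads and domestic migration trails.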

Back in 2013, DC’s Chief Financial Officer forecast a slowdown in the District’s growth, citing slower economic growth in the region (thanks to decreased Congressional spending) as well as a slowdown in new housing starts. Part of the CFO’s job is to be appropriately conservative in these forecasts, but the Census Bureau’s estimates bucked the CFO’s forecast.

Part of the question is whether this growth represents a flash in the pan or a real long-term shift in migration patterns. Last week saw some hearty Twitter debate over this piece by Lyman Stone, questioning the narratives about a major shift away from suburbs and towards more urban locations (examples: here, here and a counter-example here). Stone argues that the data doesn’t support the conclusion of a major shift towards urban living. And given the macro-trends, it’s hard to argue against his broad conclusion.

Consider the analogue of driving: a sustained period of high gas prices and a weak economy put a serious dent in US vehicle miles traveled, spawning all sorts of theories that we had passed ‘peak car.’ But as soon as oil prices dropped, VMT surged (never mind the negative consequences of cheap gas). The broad narratives about a paradigm shift away from car usage leaned on anecdotes about Millennials using smartphones instead of cars, rather than on the broader trends of where people live and work (which hadn’t changed much). Beware of reading too much into the data, or of missing the outside factor.

However, the smaller-scale evidence is also hard to dismiss. Apartments in DC are sprouting like mushrooms (where they are allowed by zoning), and DC’s population can only increase as fast as the city’s housing stock can expand. And even with the District’s sustained growth, rents and home prices continue to rise, indicating that demand for urban living exceeds the available supply.

Those peak-car arguments might accurately assess our desires to drive less, but the driving data is based on the reality of housing and transportation options available, rather than the options we might wish were available. Likewise, urban migration patterns are based on available housing, not what migrants might wish were available.

 

Forecasting uncertainty in practice: Snowperbole

Example of snow forecast communicating levels of uncertainty; image from the Capital Weather Gang

Because making accurate predictions is extremely difficult, embracing the uncertainty involved can both improve a forecast’s accuracy and enable effective communication about it. This allows decision-makers to use the information available while understanding its limits.

Following forecasts for a “potentially historic” storm set to hit New York and New England, public officials in New York City went to great lengths to emphasize the dangers of the storm. The Governor closed down New York’s subways in anticipation of the storm (showing one of the quirks of New York’s transit governance: local transit is under state control).

There was just one problem: the storm mostly missed NYC.

In their forecast post-mortem, the Washington Post’s Capital Weather Gang highlighted the key shortcoming of the forecast: a failure to present the level of uncertainty involved.

Why were the forecasts so bad?

It’s simple: Many forecasters failed to adequately communicate the uncertainty in what was an extremely complicated forecast. Instead of presenting the forecast as a range of possibilities, many outlets simply presented the worst-case scenario.

Especially for New York City, some computer model forecasts were extremely dire, predicting upwards of 30 inches of snow – shattering all-time snowfall records. The models producing these forecasts (the NAM model and European model) had a good enough track record to be taken seriously.

However, some model forecasts (e.g. the GFS model) signaled reason for caution. They predicted closer to a foot of snow.
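The gap between those model camps is exactly the spread a forecast headline could have communicated. Here’s a minimal sketch in Python, with illustrative snowfall numbers loosely based on the model guidance described above:

```python
# Hypothetical model snowfall forecasts for NYC, in inches --
# illustrative numbers only (NAM/European ran near 30", GFS near 12")
model_forecasts = {"NAM": 30, "European": 28, "GFS": 12}

# Worst-case headline: one scary number, no context
worst_case = max(model_forecasts.values())

# Range-based headline: communicates the uncertainty
low = min(model_forecasts.values())
high = max(model_forecasts.values())

print(f'Worst case: {worst_case}" of snow')
print(f'Forecast range: {low}" to {high}" of snow')
```

Same underlying data; the second headline tells the public that a foot of snow is also on the table.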

Part of the challenge here is that most of the forecast was accurate. This was a historic storm; it simply tracked a bit further to the east. Areas like New York City were right on the margins, where a small change to the inputs can mean a large change in the outcome – and the forecast did not adequately convey that uncertainty. Add in the fact that the forecast miss happened in the largest city in the United States, and you have a very public error.

When a forecast is so sensitive to small changes (eastern Long Island, not far away, received 30-plus inches), it is imperative to loudly convey the reality that small changes could have profound effects on what actually happens.

It’s easy to second-guess public officials making key decisions like closing transit systems after the fact (and after the forecast bust), but they can only act on the information in front of them. Better safe than sorry is a fair argument – and it’s true – but there is a real risk of eroding public confidence in these kinds of decisions when the forecast doesn’t pan out. (It doesn’t help that, despite closing the subways to passengers, the MTA’s snow plan called for trains to keep running empty to keep the tracks clear of snow.)

As some meteorologists suggest, conveying the uncertainty in their forecasts should be a larger element of both the forecast and communication. It’s not just a matter of using the best information available, but also understanding the uncertainty involved.

The cone of uncertainty

One of the elements that makes prediction difficult is uncertainty. In one of the chapters of Donald Shoup’s The High Cost of Free Parking (adapted for Access here), Professor Shoup poses the question:

HOW FAR IS IT from San Diego to San Francisco? An estimate of 632.125 miles is precise—but not accurate. An estimate of somewhere between 400 and 500 miles is less precise but more accurate because the correct answer is 460 miles. Nevertheless, if you had no idea how far it is from San Diego to San Francisco, whom would you believe: someone who confidently says 632.125 miles, or someone who tentatively says somewhere between 400 and 500 miles? Probably the first, because precision implies certainty.

Shoup uses this example to illustrate the illusion of certainty present in the parking and trip generation estimates from the Institute of Transportation Engineers. Many of the rates are based on small samples of potentially unrepresentative cases – often with a very wide range of observed parking/trip generation. Shoup’s concluding paragraph states:

Placing unwarranted trust in the accuracy of these precise but uncertain data leads to bad policy choices. Being roughly right is better than being precisely wrong. We need less precision—and more truth—in transportation planning.
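Shoup’s precision-versus-accuracy distinction is easy to make concrete. A small Python sketch (the `is_accurate` helper is hypothetical, treating an estimate as accurate if it contains the true value):

```python
true_distance = 460  # miles, San Diego to San Francisco

precise_estimate = 632.125   # very precise, but wrong
range_estimate = (400, 500)  # less precise, but contains the truth

def is_accurate(estimate, truth):
    """Hypothetical test: an estimate counts as accurate if it
    matches the true value (point) or contains it (range)."""
    if isinstance(estimate, tuple):
        low, high = estimate
        return low <= truth <= high
    return estimate == truth

print(is_accurate(precise_estimate, true_distance))  # False
print(is_accurate(range_estimate, true_distance))    # True
```

Precision implies certainty, but only the humble range actually captures reality.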

Part of the challenge is not just knowing the limitations of the data, but also understanding the ultimate goals for policy. David Levinson notes that most municipalities simply adopt these rates as requirements for off-street parking. This translation of parking estimates to hard-and-fast regulation is “odd” in and of itself. What is the purpose of a parking requirement? To meet the demand generated by new development?

Parking demand for a given building varies over the course of a day and a year, and demand across any given building category itself falls within a large range. That range is reality, but it doesn’t translate neatly into simple, codified regulations.

In the previous post, I discussed the challenges of accurate prediction, specifically referencing Nate Silver’s work documenting the many failures and few successes of forecasting. One area where forecasting has improved tremendously is meteorology – weather forecasts have been getting steadily better – and a large part of that is disclosing the uncertainty involved. One example is hurricane forecasting, where instead of publicizing just the predicted track, forecasters also show the ‘cone of uncertainty’ where the hurricane might end up:

Example of a hurricane forecast with the cone of uncertainty – image from NOAA.

So, why not apply these methods to city planning? A few ideas: as hypothesized above, the primary goal of parking regulations isn’t to develop the most accurate forecast, and the incentives for weather forecasting are different. The shift to embrace uncertainty stems from a desire to find the most effective way to communicate the forecast to the public. There are a whole host of forecast models that can predict a hurricane track, but their individual results can be a bit messy, producing a ‘spaghetti plot’ with often divergent results. The cone of uncertainty embraces the lack of precision in the forecast while also simplifying communication.
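The cone itself is conceptually simple: at each forecast step, collapse the spaghetti of individual model tracks into the band they span. A toy sketch with made-up ensemble numbers:

```python
# Hypothetical ensemble of storm-track forecasts: each model's predicted
# eastward position offset (degrees) at 24-hour intervals
ensemble_tracks = [
    [0.0, 1.0, 2.5, 4.0],
    [0.0, 1.2, 3.0, 5.5],
    [0.0, 0.8, 2.0, 3.5],
]

# Collapse the "spaghetti" into a cone: at each step, keep only the
# low/high bounds across all models; the spread widens with lead time
cone = [(min(step), max(step)) for step in zip(*ensemble_tracks)]
print(cone)  # [(0.0, 0.0), (0.8, 1.2), (2.0, 3.0), (3.5, 5.5)]
```

The individual tracks disappear, and what’s left is an honest, easy-to-read statement of where the storm might plausibly go.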

For zoning, a hard-and-fast requirement doesn’t lend itself to any cone of uncertainty. Expressing demand in terms of a plausible range means that the actual requirement would need to be set at the low end of that range – and in urban examples, the low end of potential parking demand for any given project could be zero. Of course, unlike weather forecasts, these regulations and policies are political creations, not scientific predictions.
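If a zoning code did try to honor a demand range, the binding rule would have to sit at that range’s low end, since any higher minimum forces some projects to overbuild. A hypothetical sketch (the function and the numbers are illustrative, not drawn from any actual code):

```python
def requirement_from_range(demand_low, demand_high):
    """Hypothetical range-aware parking minimum: bind at the low end
    of plausible demand, so no project is forced to overbuild."""
    return demand_low

# Observed demand for a hypothetical urban building type might run
# anywhere from 0 to 2 spaces per 1,000 sq ft
minimum = requirement_from_range(0, 2)
print(minimum)  # 0 -- in urban settings the low end can be zero
```

Which is to say: taking the uncertainty seriously often means the defensible minimum is no minimum at all.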

Meteorologists also have the benefit of immediate feedback. We will know how well hurricane forecasters did within a matter of days, and even then we will have the benefit of several days of iterations to better hone that forecast. Comparatively, many cities added on-site parking requirements to their zoning codes in the 1960s; regulations that often persist today. Donald Shoup didn’t publish his parking opus until 2005.

There’s also the matter of influencing one’s environment. Another key difference between a hurricane forecast and zoning codes is that the weather forecasters are looking to predict natural phenomena; ITE is trying to predict human behavior – and the very requirements cities impose based on those predictions will themselves influence human behavior. Build unnecessary parking spaces, and eventually those spaces will find a use – inducing the very demand they were built to satisfy. There, the impacts of ignoring uncertainty can be long-lasting.

Here’s to embracing the cone of uncertainty!