
The cone of uncertainty

One of the things that makes prediction difficult is uncertainty. In a chapter of Donald Shoup’s The High Cost of Free Parking (adapted for Access here), Professor Shoup poses the question:

HOW FAR IS IT from San Diego to San Francisco? An estimate of 632.125 miles is precise—but not accurate. An estimate of somewhere between 400 and 500 miles is less precise but more accurate because the correct answer is 460 miles. Nevertheless, if you had no idea how far it is from San Diego to San Francisco, whom would you believe: someone who confidently says 632.125 miles, or someone who tentatively says somewhere between 400 and 500 miles? Probably the first, because precision implies certainty.

Shoup uses this example to illustrate the illusion of certainty present in the parking and trip generation estimates from the Institute of Transportation Engineers. Many of the rates are based on small samples of potentially unrepresentative cases – often with a very wide range of observed parking/trip generation. Shoup’s concluding paragraph states:

Placing unwarranted trust in the accuracy of these precise but uncertain data leads to bad policy choices. Being roughly right is better than being precisely wrong. We need less precision—and more truth—in transportation planning.
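Shoup’s distinction between precision and accuracy can be made concrete with a toy calculation. The 460-mile figure and the two estimates come straight from the quote above; everything else is just illustration:

```python
# Toy illustration of precision vs. accuracy, using Shoup's example.
# The "true" distance from San Diego to San Francisco (per the quote).
TRUE_DISTANCE = 460

# A precise but inaccurate point estimate.
precise_estimate = 632.125

# A less precise but more accurate interval estimate.
interval_estimate = (400, 500)

# The precise point estimate misses the truth by a wide margin...
point_error = abs(precise_estimate - TRUE_DISTANCE)
print(f"Point estimate error: {point_error:.3f} miles")  # 172.125

# ...while the vaguer interval actually contains the truth.
lo, hi = interval_estimate
print(f"Interval contains truth: {lo <= TRUE_DISTANCE <= hi}")  # True
```

The three decimal places on 632.125 convey confidence, but the honest answer is the interval: it is less precise and more accurate at the same time.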

Part of the challenge is not just knowing the limitations of the data, but also understanding the ultimate goals for policy. David Levinson notes that most municipalities simply adopt these rates as requirements for off-street parking. This translation of parking estimates to hard-and-fast regulation is “odd” in and of itself. What is the purpose of a parking requirement? To meet the demand generated by new development?

Parking demand for a given building varies over the course of a day and a year, and demand across any given building category itself falls within a large range. That range is the reality, but it unfortunately doesn’t translate into simple, codified regulations.

In the previous post, I discussed the challenges of accurate prediction, specifically referencing Nate Silver’s work documenting the many failures and few successes of forecasting. One area where forecasting has improved tremendously is meteorology – weather forecasts have been getting steadily better – and a large part of that improvement comes from disclosing the uncertainty involved in the forecasts. One example is hurricane forecasting: instead of publicizing just the predicted hurricane track, forecasters also show the ‘cone of uncertainty’ where the hurricane might end up:

Example of a hurricane forecast with the cone of uncertainty - image from NOAA.


So, why not apply these methods to city planning? A few ideas: as hypothesized above, the primary goal of parking regulations isn’t to develop the most accurate forecasts. The incentives in weather forecasting are different: the shift to embrace uncertainty stems from a desire to find the most effective way to communicate the forecast to the public. A whole host of forecast models can predict a hurricane track, but their individual results can be messy – producing a ‘spaghetti plot,’ often with divergent results. The cone of uncertainty both embraces the lack of precision in the forecast and simplifies communication.
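The relationship between the messy spaghetti plot and the tidy cone can be sketched in a few lines. This is a purely illustrative toy – real hurricane models are vastly more sophisticated – but it shows the basic move: run an ensemble, then collapse it to a percentile band at each forecast hour:

```python
import random

random.seed(42)

# Toy ensemble of forecast "tracks": each model run is a random walk of
# a hurricane's cross-track position (in miles) over five days.
HOURS = range(0, 121, 24)
N_RUNS = 200

ensemble = []
for _ in range(N_RUNS):
    position, track = 0.0, []
    for _h in HOURS:
        track.append(position)
        position += random.gauss(0, 30)  # model divergence per 24h step

    ensemble.append(track)

def percentile(values, p):
    """Crude percentile: p-th fraction of the sorted sample."""
    values = sorted(values)
    return values[int(p * (len(values) - 1))]

# Collapse the spaghetti plot into a "cone": at each forecast hour,
# report the 10th-90th percentile band of ensemble positions.
for i, h in enumerate(HOURS):
    positions = [track[i] for track in ensemble]
    lo, hi = percentile(positions, 0.1), percentile(positions, 0.9)
    print(f"hour {h:3d}: cone spans {lo:7.1f} to {hi:7.1f} miles")
```

The band starts at zero width and grows with lead time – exactly the widening cone in the NOAA graphic. The individual tracks are too noisy to communicate; the band is honest about uncertainty while remaining legible.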

For zoning, a hard-and-fast requirement doesn’t lend itself to any cone of uncertainty. Expressing demand as a plausible range means the actual requirement would need to be set at the low end of that range – and in urban settings, the low end of potential parking demand for a given project could be zero. Of course, unlike weather forecasts, these regulations and policies are political creations, not scientific predictions.

Meteorologists also have the benefit of immediate feedback. We know how well hurricane forecasters did within a matter of days, and along the way they have several days of iterations to hone the forecast. By comparison, many cities added on-site parking requirements to their zoning codes in the 1960s – regulations that often persist today. Donald Shoup didn’t publish his parking opus until 2005.

There’s also the matter of influencing one’s environment. Another key difference between a hurricane forecast and a zoning code is that weather forecasters are predicting natural phenomena, while ITE is trying to predict human behavior – and the very requirements cities impose based on those predictions will themselves influence that behavior. Build unnecessary parking spaces, and eventually those spaces will find a use – inducing the very demand they were built to satisfy. Here, the impacts of ignoring uncertainty can be long-lasting.

Here’s to embracing the cone of uncertainty!

A matter of language – defining congestion and sprawl

The 405, Los Angeles – CC image by Atwater Village Newbie.

Ahh, the power of creeping bias in language (hat tip to Jarrett Walker):

Everyone at the City should strive to make the transportation systems operate as efficiently as possible. However, we must be careful how we use efficient because that word is frequently confused with the word faster. Typically, efficiency issues are raised when dealing with motor vehicles operating at slow speeds. The assumption is that if changes were made that increase the speeds of the motor vehicles, then efficiency rises. However, this assumption is highly debatable. For example, high motor vehicle speeds lead to urban sprawl, motor vehicle dependence, and high resource use (land, metal, rubber, etc.) which reduces efficiency. Motor vehicles burn the least fuel at about 30 miles per hour; speeds above this result in inefficiencies. In urban areas, accelerating and decelerating from stopped conditions to high speeds results in inefficiencies when compared to slow and steady speeds. There also are efficiency debates about people’s travel time and other issues as well. Therefore, be careful how you use the word efficient at the City. If you really mean faster then say faster. Do not assume that faster is necessarily more efficient. Similarly, if you mean slower, then say slower.

Of course, biased language can be very useful when advocating for a certain point of view.  The real challenge is in sifting through biased language that poses as an objective statement.

Along those lines, Streetsblog notes how various congestion metrics, posing as an unbiased measure of the inadequacy of our transportation infrastructure, are actually misleading in terms of the impacts on our commutes and our land use choices.  They look at a recent report from CEOs for Cities:

The key flaw is a measurement called the Travel Time Index. That’s the ratio of average travel times at peak hours to the average time if roads were freely flowing. In other words, the TTI measures how fast a given trip goes; it doesn’t measure whether that trip is long or short to begin with.

Relying on the TTI suggests that more sprawl and more highways solve congestion, when in fact it just makes commutes longer. Instead, suggests CEOs for Cities, more compact development is often the more effective — and more affordable — solution.

Take the Chicago and Charlotte metro areas. Chicagoland has the second worst TTI in the country, after Los Angeles. Charlotte is about average. But in fact, Chicago-area drivers spend more than 15 minutes less traveling each day, because the average trip is 5.5 miles shorter than in Charlotte. Charlotte only looks better because on average, its drivers travel closer to the hypothetical free-flowing speed.
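The critique can be made concrete with a back-of-the-envelope comparison. The numbers below are invented for illustration – they are not the actual Chicago or Charlotte figures – but they reproduce the mechanism: a region with longer trips can post a better TTI while its drivers spend more time behind the wheel:

```python
# Toy comparison of two hypothetical metros, showing how the Travel Time
# Index (TTI) can rank the longer-commute region as "better".
# All numbers are invented for illustration.

def travel_time_index(peak_minutes, free_flow_minutes):
    """TTI = average peak-hour travel time / free-flow travel time."""
    return peak_minutes / free_flow_minutes

# "Compact" metro: short trips, heavy peak congestion.
compact_peak, compact_free = 24.0, 16.0  # minutes per commute

# "Sprawling" metro: long trips, lighter relative congestion.
sprawl_peak, sprawl_free = 35.0, 30.0    # minutes per commute

print(f"Compact TTI: {travel_time_index(compact_peak, compact_free):.2f}")  # 1.50
print(f"Sprawl TTI:  {travel_time_index(sprawl_peak, sprawl_free):.2f}")    # 1.17

# By TTI, the sprawling metro looks far better -- yet its drivers
# actually spend more time traveling at peak (35 vs. 24 minutes).
print(f"Peak minutes: compact {compact_peak}, sprawl {sprawl_peak}")
```

Because the denominator is the region’s own free-flow time, lengthening every trip (sprawl plus wider highways) can improve the index even as total time in traffic rises – which is exactly the flaw CEOs for Cities identifies.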

The Streetsblog Network chimes in, as well:

The problem was, the analysis inevitably concluded — without fail! — that expanding a road would reduce air pollution.

That’s because the formula only accounted for short-term air quality impacts. Any given road project was likely to reduce congestion in the short-term and provide an immediate reduction in vehicle emissions. But the formula ignored long-term impacts of highway expansion — sprawl, longer commutes — which run directly counter to the cause of air quality.

Precisely.

Minneapolis LRV, a project built with New Starts before the total focus on the CEI.  CC image from joelplutchak on flickr.


Following up from previous discussions of precision and accuracy, Elana Schor at Streetsblog delves deeper into the subject.

While addressing the U.S. Conference of Mayors, assistant transport secretary for policy Polly Trottenberg was asked by the mayor of Clearwater, Florida, to outline how the agency might “quantify livability” in its upcoming rulemaking.

“Not everything can be measured,” Trottenberg said, adding that her colleagues wanted to avoid making the “mistake of false precision.”

She also addressed the pitfalls of relying on in-house economic predictions to assess transit projects. Several local rail lines have quickly exceeded initial federal ridership projections, casting doubt on the models used for the so-called New Starts program.

“Sometimes we’ve gotten so tangled up in the perfect mathematical science — we did it in New Starts,” Trottenberg said.

Data is good.  But we cannot limit our information inputs to just quantitative measures.  And when we do use them, we need to understand their limits.  I’m eager to see how the FTA decides to evaluate the livability criteria, but the acknowledgment that the numbers have limits is a big step forward.

Cost-effectiveness

Streetcar tracks, H St NE - CC image from flickr


Over the past couple of days, there have been lots of reactions to the DOT’s decision to lessen the importance of its cost-effectiveness measures in decisions on New Starts funding (TTP, Yglesias, TNR, TOW, Streetsblog), almost all of them positive. There are, however, some key points to consider. With the emphasis shifting from cost-effectiveness to livability, the question now becomes how to measure that livability. Jarrett Walker notes:

Great news, perhaps, but I look forward to seeing how FTA is going to turn something as subjective as livability into a quantifiable measure that can be used to score projects, particularly since the payoffs lie in development that a proposed transit line might be expected to trigger, but that usually isn’t a sure thing at the point when you’re deciding to fund the line.  And of course, travel time does still matter.

Measurement is indeed the key.  Part of the problem of the Bush Administration’s emphasis on the CEI was an expansive definition of costs and a rather narrow definition of ‘effectiveness.’

The other problem is one that Donald Shoup discusses extensively in his book, The High Cost of Free Parking. Namely, imprecise data points are often given undue precision, reflecting a bias toward quantifiable results and numbers – precision and accuracy are two different things, and it is important not to conflate them:

HOW FAR IS IT from San Diego to San Francisco? An estimate of 632.125 miles is precise—but not accurate. An estimate of somewhere between 400 and 500 miles is less precise but more accurate because the correct answer is 460 miles. Nevertheless, if you had no idea how far it is from San Diego to San Francisco, whom would you believe: someone who confidently says 632.125 miles, or someone who tentatively says somewhere between 400 and 500 miles? Probably the first, because precision implies certainty.

This doesn’t disprove Jarrett’s point – there are still metrics that can capture more qualitative factors – but the larger issue is the move away from false precision and toward outcomes that are more accurate: outcomes that better reflect the true (qualitative and quantitative) nature of cities.

With that in mind, it’s interesting to read some reactions published in the National Journal (h/t Planetizen).

Anthony Shorris: The new approach laid out by Secretary LaHood should force a re-thinking of all of our evaluative tools — cost-benefit analysis, alternatives analysis, environmental impact statements — with an eye toward re-balancing them away from an excessive reliance on only those measures that can be readily quantified.  This re-thinking should be inter-departmental (including other agencies and OMB) and inter-disciplinary (including the perspectives of urban planners and designers as well as economists).  One thing the financial crash should have taught us is that there are limitations to even the most seemingly sophisticated financial models, and that apparently crisp spreadsheets are no substitute for the prudent exercise of judgment that the American people have a right to expect of their leaders.

William Millar, APTA: With the action taken by DOT to consider all the factors required by law, transit projects can now be looked at from a holistic perspective. By judging a project on the multiple benefits it offers (i.e. mobility, economic development, environmental impact, land use improvements etc.), a well-rounded and more informed decision can be made. By removing the barrier that the Bush Administration implemented, the process is now in alignment with how it was originally intended to be.

Projects must still be cost effective and meet at least an overall medium rating in project justification and local financing. However, now, instead of a narrow prism through which to judge a project, a wider lens will offer a larger perspective. It should encourage innovative projects to be proposed and funded.