Tag Archives: Congestion

Prediction is hard – so why do we make key decisions based on bad information?

Comparison of USDOT predictions for Vehicle Miles Traveled, compared to actual values. Chart from SSTI.

Back in December, David Levinson put up a wonderful post with graphical comparisons of predictions against reality. The results aren’t good for the predictors. Lots of official forecasts call for increased vehicle travel, while many places have seen stagnant or declining VMT. It’s not just a problem for traffic engineers, but for a variety of professions (I took note of similar challenges for airport traffic here previously).

Prediction is hard. What’s curious for cities is that despite the inherent challenges of developing an accurate forecast, we nonetheless bet the house on those numbers: we write expensive regulations (e.g. requiring off-street parking to meet projected demand) and build expensive projects (adding road capacity to relieve congestion) on the strength of bad information and incorrect assumptions.

One of the books I’ve included in the reading list is Nate Silver’s The Signal and the Noise, Silver’s discussion of why most efforts at prediction fail. In Matt Yglesias’s review of the book, he summarizes Silver’s core argument: “For all that modern technology has enhanced our computational abilities, there are still an awful lot of ways for predictions to go wrong thanks to bad incentives and bad methods.”

Silver rose to prominence by successfully forecasting US elections based on available polling data. In the process, he argued the spin of pundits added nothing to the discussion; political analysts were seldom held accountable for their bad analysis. Yet, because of the incentives for punditry, these analysts with poor track records continued to get work and airtime.

Traffic forecasts have a lot in common with political punditry – many of the projections are woefully incorrect, and the methods for predicting are based more on ideology than on observation and analysis.

More troubling, for city planning, is the tendency to take these kinds of projections and enshrine them in our regulations, such as the way that the ITE (Institute of Transportation Engineers) projections for parking demand are translated into zoning code requirements for on-site parking. Levinson again:

But this requirement itself is odd, and leads to the construction of excess off-street parking, since at least some of that parking is vacant 300, 350, 360, or even 364 days per year depending on how tight you set the threshold and how flat the peak demand is seasonally. Is it really worth vacant paved impervious surface 364 days so that 1 day there is no spillover to nearby streets?

In other words, the ideology behind the requirement wants to maximize parking.
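
To make the arithmetic behind Levinson’s point concrete, here is a minimal sketch in Python. The demand distribution and stall counts are invented for illustration, not drawn from any real site; it simply shows what happens when a parking requirement is sized to the busiest day of the year:

```python
import random

# Hypothetical illustration of peak-based parking requirements. The demand
# numbers are invented for demonstration, not drawn from any real site.
random.seed(0)
typical_days = [random.gauss(100, 15) for _ in range(350)]  # ordinary demand
peak_days = [random.gauss(180, 10) for _ in range(15)]      # seasonal peaks
daily_demand = typical_days + peak_days

# Size the requirement to the single busiest day of the year.
required_stalls = max(daily_demand)

# Count the days on which some of that mandated supply sits empty.
vacant_days = sum(1 for d in daily_demand if d < required_stalls)
print(f"Required stalls (sized to peak): {required_stalls:.0f}")
print(f"Days with vacant mandated stalls: {vacant_days} of 365")
```

Pegging the requirement to the peak guarantees that some of the mandated supply sits empty on nearly every other day of the year.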

It’s not just the ideology behind these projections that is suspect; the methods are also questionable at best. In the fall 2014 issue of Access, Adam Millard-Ball discusses the methodological flaws of ITE’s parking generation estimates. (Streetsblog has a summary available.) Millard-Ball notes that the “seemingly mundane” work of traffic analysis has enormous consequences for the shape of our built environment, due to the associated requirements for new development. Indeed, the trip generation estimates for any given project appear to massively overestimate its actual impact on traffic.

There are three big problems with the ITE estimates. First, they massively overestimate the actual traffic generated by new development, due to non-representative samples and small sample sizes. Second, the estimates confuse marginal and average trip generation: build a replacement courthouse, Millard-Ball notes, and you won’t generate new trips to the court – you’ll just move them. Third, the rates have a big issue with scale: are we concerned with the trips generated to determine the impact on a local street, a neighborhood, the city, or the region?
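
The average-versus-marginal confusion is easy to illustrate. This sketch uses invented numbers – the rate, floor area, and relocated share are all hypothetical – to show how an average rate counts every observed trip as new traffic, even when a replacement facility mostly relocates existing trips:

```python
# Hypothetical illustration of the average-vs-marginal confusion.
# All numbers are invented for demonstration, not taken from ITE manuals.

floor_area_ksf = 50          # replacement courthouse, in 1,000s of sq ft
average_rate = 30            # ITE-style trips per 1,000 sq ft per day

# Average method: every trip observed at the site counts as "new" traffic.
forecast_new_trips = average_rate * floor_area_ksf

# Marginal view: a replacement facility mostly relocates existing trips.
relocated_share = 0.95       # assumed share of trips merely moved, not new
marginal_new_trips = forecast_new_trips * (1 - relocated_share)

print(f"Average-rate forecast of new daily trips: {forecast_new_trips:.0f}")
print(f"Genuinely new daily trips (marginal view): {marginal_new_trips:.0f}")
```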

What is clear is that these estimates aren’t accurate. Why do we continue to use them as the basis of important policy decisions? Why continue to make decisions based on bad information? A few hypotheses:

  • Path dependence and sticky regulations: Once these kinds of regulations and procedures are in place, they are hard to change. Altering parking requirements in a zoning code can seem simple, but could take a long time. In DC, the 2006 Comprehensive Plan recommended a review and re-write of the zoning code. That process started in earnest in 2007. Final action didn’t come until late in 2014, with implementation still to come – and even then, only after some serious alterations of the initial proposals.
  • Leverage: Even if everyone knows these estimates are garbage, forecasts of large traffic impacts give cities and citizens useful leverage to extract improvements and other contributions from developers. As Let’s Go LA notes, “traffic forecasting works that way because politicians want it to work that way.”
  • Rent seeking: There’s money to be made by consultants and others in developing these inaccurate estimates and then proposing remedies for them.

Driverless cars: implications for city planning and urban transportation

Nevada autonomous vehicle license plate. CC image from National Museum of American History.

Building on the implications of driverless cars for car ownership, as well as the notion that planners aren’t preparing for the rise of autonomous vehicles, I wanted to dive further into the potential implications of widespread adoption of the technology. Nat Bottigheimer in Greater Greater Washington argues that city planning as a profession is unprepared for autonomous vehicles:

Self-driving cars address many of the safety and travel efficiency objections that Smart Growth advocates often make about road expansion, or the use of limited street space.

Part of Bottigheimer’s concern is a lack of quantitative analysis, particularly as it relates to the impacts of self-driving cars. However, the real debate is about the qualitative values that feed into our analysis.

The officials responsible for parking lot and garage building, transit system growth, bike lane construction, intersection expansions, sidewalk improvements, and road widenings need to analyze quantitatively how self-driving cars could affect their plans, and to prepare alternatives in case things change.

There is one over-arching problem with this approach: our current quantitative analysis is all too often bad pseudo-science. Donald Shoup has extensively documented the problems with minimum parking requirements in zoning codes, for example. Here, poor policy with vast unintended consequences rests on flawed quantitative analysis – the kind that does not acknowledge the inherent uncertainty in our understanding or in our ability to project the future. Instead, the analysis is built on assumptions, and those assumptions are really value-laden statements that carry a great deal of weight.

Even the very structure of planning and regulation for the future carries a bias: a requirement to provide parking spaces in anticipation of future demand will, by its nature, ignore the complexity of the marketplace for off-street parking and the natural range of parking demand.

Bottigheimer is also concerned about the impacts of self-driving cars on future land use forecasts:

Planners need to examine how travel forecasting tools that are based on current patterns of car ownership and use will need to change to adapt to new statistical relationships between population, car ownership, trip-making, car-sharing, and travel patterns.

By all means, we need to adjust our forecasting tools. However, we shouldn’t be doing so simply based on the arrival of a new technology. We should adjust them because they’re not particularly accurate and their erroneous projections have large impacts on how we plan. Driverless cars aren’t the problem here. The problem is in our assumptions, our inaccurate analysis, and our decision-making processes that rely on such erroneous projections.

Leaving the limitations of quantitative analysis aside for the moment, we can still hypothesize (qualitatively, perhaps) about the future world of driverless cars. Assuming that autonomous vehicles do indeed reduce car ownership and begin to serve as robo-taxis, we can sketch out plausible scenarios for the future: car ownership decreases, but vehicle-miles traveled may well increase.

City Planning and Street Design:

One of Bottigheimer’s chief concerns is that “planners and placemaking advocates will need to step up their game” given the potential benefits driverless cars offer for safety and increased vehicle capacity.

As mentioned above, many of the ‘safety’ benefits assume cars operating in car-only environments (e.g. highways), while the real safety challenges are on streets with mixed traffic: pedestrians, bikes, cars, and buses all sharing the same space. In this case, the values planners and placemaking advocates are pushing for remain the same, regardless of who – or what – is driving the cars. The laws of physics won’t change; providing a safe environment for pedestrians will still depend on the lowest common denominator for safe speeds.

The biggest concern should be in the environments that aren’t highways, yet aren’t city streets, either. Will driverless cars forever push stroads into highway territory? Borrowing Jarrett Walker’s phrasing, technology can’t change geometry, except in some cases at the margins.

Instead of a technical pursuit of maximum vehicle throughput (informed by quantitative analysis), the real question is one of values. The values that inform planning for a place or a street will set the tone for the quantitative analysis that follows. Maximizing vehicle throughput is not a neutral, analytical goal.

Congestion: 

Congestion is a more interesting case, as it will still be an economic problem – centralized control might help mitigate some traffic issues, but it doesn’t solve the fundamental economic conundrum of congestion. Here, too, the economic solutions in a world of human-driven cars will have the same framework as one with computers behind the wheel.

Driverless cars might change the exact price points, but they don’t alter the basic logic behind congestion-mitigation measures like a cordon charge in London or Stockholm, or like Uber’s surge pricing (efficient and rational as it might be, but perhaps too honest). Again, technology can’t fundamentally change geometry. Cars will still be cars, and even if driverless cars improve on the current capacity limitations of highways, they do not eliminate such constraints.
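
The underlying logic of a cordon charge fits in a toy model: raise the price until demand falls to what the streets can carry. The linear demand curve and all parameters below are invented for illustration; real pricing schemes estimate these relationships empirically:

```python
# Toy model of a cordon charge with a linear demand curve.
# All parameters are invented for illustration.

capacity = 8000          # vehicles/hour the cordoned streets can carry
demand_at_zero = 12000   # vehicles/hour entering when the charge is $0
slope = 400              # vehicles/hour deterred per $1 of charge

def entries(charge):
    """Linear demand: entries fall as the cordon charge rises."""
    return max(demand_at_zero - slope * charge, 0)

# The market-clearing charge equates demand with capacity.
clearing_charge = (demand_at_zero - capacity) / slope
print(f"Cordon charge needed: ${clearing_charge:.2f}")
print(f"Entries at that charge: {entries(clearing_charge):.0f}/hour")
```

Driverless cars might shift the capacity or the demand curve – and with them the clearing price – but the pricing logic itself is untouched.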

Qualitative Concerns:

Instead of twisting ourselves in knots over projections about the future that are sure to be wrong, planning for autonomous cars should focus on our values and the kinds of places we want to plan for. We should adjust our policies to embrace the values of our communities (which alone is a challenging process). We should be aware of the poor accuracy of forecasts and work to build policies with the flexibility to adapt.

A matter of language – defining congestion and sprawl

The 405 in Los Angeles. CC image by Atwater Village Newbie.

Ahh, the power of creeping bias in language (hat tip to Jarrett Walker):

Everyone at the City should strive to make the transportation systems operate as efficiently as possible. However, we must be careful how we use efficient because that word is frequently confused with the word faster. Typically, efficiency issues are raised when dealing with motor vehicles operating at slow speeds. The assumption is that if changes were made that increase the speeds of the motor vehicles, then efficiency rises. However, this assumption is highly debatable. For example, high motor vehicle speeds lead to urban sprawl, motor vehicle dependence, and high resource use (land, metal, rubber, etc) which reduces efficiency. Motor vehicles burn the least fuel at about 30 miles per hour; speeds above this result in inefficiencies. In urban areas, accelerating and decelerating from stopped conditions to high speeds results in inefficiencies when compared to slow and steady speeds. There are also efficiency debates about people’s travel time and other issues as well. Therefore, be careful how you use the word efficient at the City. If you really mean faster then say faster. Do not assume that faster is necessarily more efficient. Similarly, if you mean slower, then say slower.
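
The memo’s claim that fuel use is minimized around 30 miles per hour follows from two competing losses: fixed overhead (idling, engine friction) dominates at low speeds, and aerodynamic drag dominates at high speeds. A toy curve with invented constants reproduces the shape; this is an illustration of the trade-off, not an engineering model:

```python
# Toy fuel-economy curve, illustrating why ~30 mph minimizes fuel use.
# The functional form and constants are invented for illustration.

def gallons_per_mile(mph):
    rolling = 0.02             # baseline losses, roughly flat with speed
    idle_overhead = 0.5 / mph  # fixed hourly burn, spread over more miles at speed
    drag = 0.00001 * mph ** 2  # aerodynamic losses grow with the square of speed
    return rolling + idle_overhead + drag

for mph in (10, 20, 30, 45, 60, 75):
    print(f"{mph:2d} mph -> {1 / gallons_per_mile(mph):4.1f} mpg")
```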

Of course, biased language can be very useful when advocating for a certain point of view.  The real challenge is in sifting through biased language that poses as an objective statement.

Along those lines, Streetsblog notes how various congestion metrics, posing as an unbiased measure of the inadequacy of our transportation infrastructure, are actually misleading in terms of the impacts on our commutes and our land use choices.  They look at a recent report from CEOs for Cities:

The key flaw is a measurement called the Travel Time Index. That’s the ratio of average travel times at peak hours to the average time if roads were freely flowing. In other words, the TTI measures how fast a given trip goes; it doesn’t measure whether that trip is long or short to begin with.

Relying on the TTI suggests that more sprawl and more highways solve congestion, when in fact it just makes commutes longer. Instead, suggests CEOs for Cities, more compact development is often the more effective — and more affordable — solution.

Take the Chicago and Charlotte metro areas. Chicagoland has the second worst TTI in the country, after Los Angeles. Charlotte is about average. But in fact, Chicago-area drivers spend more than 15 minutes less traveling each day, because the average trip is 5.5 miles shorter than in Charlotte. Charlotte only looks better because on average, its drivers travel closer to the hypothetical free-flowing speed.
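
A quick worked example makes the TTI’s blind spot explicit. The inputs below are invented in the spirit of the Chicago/Charlotte comparison, not taken from the report:

```python
# Worked example of the Travel Time Index's blind spot. The inputs are
# invented in the spirit of the Chicago/Charlotte comparison above.

def daily_minutes(trip_miles, free_flow_mph, tti, trips_per_day=2):
    """Total daily driving time when peak speed = free-flow speed / TTI."""
    peak_speed = free_flow_mph / tti
    return trips_per_day * (trip_miles / peak_speed) * 60

# Metro A: badly congested (high TTI) but with short average trips.
a = daily_minutes(trip_miles=8.0, free_flow_mph=40, tti=1.40)
# Metro B: an "average" TTI but much longer average trips.
b = daily_minutes(trip_miles=13.5, free_flow_mph=40, tti=1.15)

print(f"Metro A (TTI 1.40): {a:.0f} minutes/day behind the wheel")
print(f"Metro B (TTI 1.15): {b:.0f} minutes/day behind the wheel")
```

The metro with the worse index still spends less time behind the wheel, because its trips are shorter – exactly the distinction the TTI cannot see.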

The Streetsblog Network chimes in, as well:

The problem was, the analysis inevitably concluded — without fail! — that expanding a road would reduce air pollution.

That’s because the formula only accounted for short-term air quality impacts. Any given road project was likely to reduce congestion in the short-term and provide an immediate reduction in vehicle emissions. But the formula ignored long-term impacts of highway expansion — sprawl, longer commutes — which run directly counter to the cause of air quality.