One of the elements that makes prediction difficult is uncertainty. In one of the chapters of Donald Shoup’s The High Cost of Free Parking (adapted for Access here), Professor Shoup poses the question:
HOW FAR IS IT from San Diego to San Francisco? An estimate of 632.125 miles is precise—but not accurate. An estimate of somewhere between 400 and 500 miles is less precise but more accurate because the correct answer is 460 miles. Nevertheless, if you had no idea how far it is from San Diego to San Francisco, whom would you believe: someone who confidently says 632.125 miles, or someone who tentatively says somewhere between 400 and 500 miles? Probably the first, because precision implies certainty.
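Shoup’s distinction between precision and accuracy is easy to express in a few lines (a minimal sketch using the mileage figures from the quote above):

```python
# Precision vs. accuracy, using Shoup's San Diego-San Francisco example.
# A precise point estimate can be badly wrong; a wide interval can contain the truth.

true_distance = 460  # miles, per Shoup

precise_estimate = 632.125       # precise (three decimal places) but inaccurate
accurate_interval = (400, 500)   # imprecise (a wide range) but accurate

# The point estimate misses by a wide margin...
error = abs(precise_estimate - true_distance)

# ...while the interval actually contains the true value.
interval_is_accurate = accurate_interval[0] <= true_distance <= accurate_interval[1]

print(error)                 # 172.125 miles off
print(interval_is_accurate)  # True
```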
Shoup uses this example to illustrate the illusion of certainty present in the parking and trip generation estimates from the Institute of Transportation Engineers. Many of the rates are based on small samples of potentially unrepresentative cases – often with a very wide range of observed parking/trip generation. Shoup’s concluding paragraph states:
Placing unwarranted trust in the accuracy of these precise but uncertain data leads to bad policy choices. Being roughly right is better than being precisely wrong. We need less precision—and more truth—in transportation planning.
Part of the challenge is not just knowing the limitations of the data, but also understanding the ultimate goals for policy. David Levinson notes that most municipalities simply adopt these rates as requirements for off-street parking. This translation of parking estimates to hard-and-fast regulation is “odd” in and of itself. What is the purpose of a parking requirement? To meet the demand generated by new development?
Parking demand for a given building will range throughout the course of a day and a year, and demand for any given building category will itself fall within a large range. That range is reality, but it unfortunately doesn’t translate into simple, codified regulations.
In the previous post, I discussed the challenges of accurate prediction and specifically referenced Nate Silver’s work documenting the many failures and few successes in accurate forecasting. One area where forecasting has improved tremendously is meteorology – weather forecasts have been steadily getting better – and a large part of that improvement comes from disclosing the uncertainty involved. One example is in hurricane forecasts: instead of publicizing just the predicted hurricane track, forecasters also show the ‘cone of uncertainty’ where the hurricane might end up:
Example of a hurricane forecast with the cone of uncertainty – image from NOAA.
So, why not apply these methods to city planning? A few ideas: as hypothesized before, the primary goal for parking regulations isn’t to develop the most accurate forecasts. The incentives for weather forecasting are different. The shift to embrace uncertainty stems from a desire to find the most effective way to communicate the forecast to the public. There are a whole host of forecast models that can predict a hurricane track, but their individual results can be messy – producing a ‘spaghetti plot,’ often with divergent results. The cone of uncertainty embraces the lack of precision in the forecast while also simplifying communication.
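The relationship between the spaghetti plot and the cone can be sketched as a simple ensemble summary (a toy illustration with invented track positions, not an actual NOAA method):

```python
import statistics

# Hypothetical ensemble: each model's predicted storm position (in degrees
# longitude) at a single forecast hour. The divergent raw list is the
# 'spaghetti plot'; the summary band is the 'cone' at this hour.
ensemble = [-80.2, -79.5, -81.0, -80.7, -79.9, -80.4, -78.8, -81.3]

center = statistics.mean(ensemble)    # the published track point
spread = statistics.stdev(ensemble)   # how much the models disagree

# A simple cone: center +/- 2 standard deviations, wide enough
# to contain most ensemble members.
cone = (center - 2 * spread, center + 2 * spread)

inside = sum(cone[0] <= x <= cone[1] for x in ensemble)
print(f"center {center:.2f}, cone {cone[0]:.2f} to {cone[1]:.2f}, "
      f"{inside}/{len(ensemble)} members inside")
```

The point of the exercise: publishing `center` alone mimics a hard-and-fast requirement, while publishing `cone` communicates both the best guess and how much to trust it.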
For zoning, a hard and fast requirement doesn’t lend itself to any cone of uncertainty. Expressing demand in terms of a plausible range means that the actual requirement would need to be set at the low end of that range – and in urban examples, the low end of potential parking demand for any given project could be zero. Of course, unlike weather forecasts, these regulations and policies are political creations, not scientific predictions.
Meteorologists also have the benefit of immediate feedback. We know how well hurricane forecasters did within a matter of days, and along the way they get several days of iteration to hone the forecast. Comparatively, many cities added on-site parking requirements to their zoning codes in the 1960s – regulations that often persist today. Donald Shoup didn’t publish his parking opus until 2005.
There’s also the matter of influencing one’s environment. Another key difference between a hurricane forecast and zoning codes is that the weather forecasters are looking to predict natural phenomena; ITE is trying to predict human behavior – and the very requirements cities impose based on those predictions will themselves influence human behavior. Build unnecessary parking spaces, and eventually those spaces will find a use – inducing the very demand they were built to satisfy. There, the impacts of ignoring uncertainty can be long-lasting.
Here’s to embracing the cone of uncertainty!
Comparison of USDOT predictions for Vehicle Miles Traveled, compared to actual values. Chart from SSTI.
Back in December, David Levinson put up a wonderful post with graphical representations looking to match predictions to reality. The results aren’t good for the predictors. Lots of official forecasts call for increased vehicle travel, while many places have seen stagnant or declining VMT. It’s not just a problem for traffic engineers, but for a variety of professions (I took note of similar challenges for airport traffic here previously).
Prediction is hard. What’s curious for cities is that despite the inherent challenges of developing an accurate forecast, we nonetheless bet the house on those numbers with expensive regulations (e.g. requiring off-street parking to meet demand) and projects (building more road capacity to relieve congestion) based on bad information and incorrect assumptions.
One of the books I’ve included in the reading list is Nate Silver’s The Signal and the Noise, Silver’s discussion of why most efforts at prediction fail. In Matt Yglesias’s review of the book, he summarizes Silver’s core argument: “For all that modern technology has enhanced our computational abilities, there are still an awful lot of ways for predictions to go wrong thanks to bad incentives and bad methods.”
Silver rose to prominence by successfully forecasting US elections based on available polling data. In the process, he argued the spin of pundits added nothing to the discussion; political analysts were seldom held accountable for their bad analysis. Yet, because of the incentives for punditry, these analysts with poor track records continued to get work and airtime.
Traffic forecasts have a lot in common with political punditry – many of the projections are woefully incorrect; the methods for predicting are based more on ideology than on observation and analysis.
More troubling, for city planning, is the tendency to take these kinds of projections and enshrine them in our regulations, such as the way that the ITE (Institute of Transportation Engineers) projections for parking demand are translated into zoning code requirements for on-site parking. Levinson again:
But this requirement itself is odd, and leads to the construction of excess off-street parking, since at least some of that parking is vacant 300, 350, 360, or even 364 days per year depending on how tight you set the threshold and how flat the peak demand is seasonally. Is it really worth vacant paved impervious surface 364 days so that 1 day there is no spillover to nearby streets?
In other words, the ideology behind the requirement wants to maximize parking.
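Levinson’s vacancy point follows directly from sizing a requirement to the single peak day. A minimal sketch with invented daily demand figures:

```python
import random

random.seed(1)

# Hypothetical daily parking demand (spaces needed) over one year,
# with a single peak day -- say, the day after Thanksgiving.
demand = [random.randint(40, 70) for _ in range(364)] + [100]

# A requirement sized to the peak eliminates spillover on that one day...
requirement = max(demand)  # 100 spaces

# ...at the cost of vacant, paved, impervious surface every other day.
days_with_vacancy = sum(d < requirement for d in demand)

print(requirement, days_with_vacancy)  # 100 364
```

Any ordinary demand pattern produces the same result: satisfying one peak day means carrying empty spaces the other 364.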
It’s not just the ideology behind these projections that is suspect; the methods are also questionable at best. In the fall 2014 issue of Access, Adam Millard-Ball discusses the methodological flaws of ITE’s parking generation estimates. (Streetsblog has a summary available.) Millard-Ball notes that the “seemingly mundane” work of traffic analysis has enormous consequences for the shape of our built environment, due to the associated requirements for new development. Indeed, the trip generation estimates for any given project appear to massively overestimate the actual impact on traffic.
There are three big problems with the ITE estimates: first, they massively overestimate the actual traffic generated by a new development, due to non-representative samples and small sample sizes. Second, the estimates confuse marginal and average trip generation. Build a replacement court house, Millard-Ball notes, and you won’t generate new trips to the court – you’ll just move them. Third, the rates have a big issue with scale. Are we concerned about the trips generated to determine the impact on a local street, or on a neighborhood, or the city, or the region?
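The marginal-versus-average confusion reduces to simple arithmetic. A hypothetical sketch (all figures invented for illustration, not ITE’s actual rates):

```python
# Average vs. marginal trip generation, hypothetical courthouse example.
# An ITE-style average rate predicts trips per unit of building size;
# a replacement building mostly relocates existing trips.

avg_rate_per_ksf = 50      # hypothetical: average daily trips per 1,000 sq ft
building_size_ksf = 100    # a 100,000 sq ft replacement courthouse

# What the average rate implies the 'impact' will be:
predicted_trips = avg_rate_per_ksf * building_size_ksf

# Trips that already went to the old courthouse and simply move:
existing_trips_relocated = 5000

# The actual new traffic added to the network:
marginal_new_trips = predicted_trips - existing_trips_relocated

print(predicted_trips)     # 5000 -- the impact the average rate reports
print(marginal_new_trips)  # 0 -- the marginal impact, if trips just move
```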
What is clear is that these estimates aren’t accurate. Why do we continue to use them as the basis of important policy decisions? Why continue to make decisions based on bad information? A few hypotheses:
- Path dependence and sticky regulations: Once these kinds of regulations and procedures are in place, they are hard to change. Altering parking requirements in a zoning code can seem simple, but could take a long time. In DC, the 2006 Comprehensive Plan recommended a review and re-write of the zoning code. That process started in earnest in 2007. Final action didn’t come until late in 2014, with implementation still to come – and even then, only after some serious alterations of the initial proposals.
- Leverage: Even if everyone knows these estimates are garbage, forecasts of large traffic impacts provide useful leverage for cities and citizens to extract improvements and other contributions from developers. As Let’s Go LA notes, “traffic forecasting works that way because politicians want it to work that way.”
- Rent seeking: There’s money to be made from consultants and others in developing these inaccurate estimates and then proposing remedies to them.
Several months ago, Charlie Gardner had an excellent, thought-provoking post asking why American cities have seen the demise of the duplex. In a time when growing cities are bursting at the seams and facing severe affordability challenges, this incremental kind of development might be welcome in many cities, offering new housing while allowing an evolutionary pace of change to a neighborhood’s physical fabric instead of the abrupt transition of large-scale redevelopment. So why don’t we see more of it?
Consider international comparisons of small-scale incremental development: Charlie Gardner compares the built form on both sides of the US-Mexico border, noting how on the Mexican side houses grow incrementally over time, often adding new uses along the street. The net result is a slow transformation of the entire neighborhood, evolving towards denser development patterns. Gardner speculates on reasons for the difference with standard American development patterns (including finance and regulation), noting that small-scale development opens the door to homeownership at a much lower price threshold.
Conversely, there are examples of American neighborhoods adding units on a relatively small scale. Let’s Go LA has been tweeting highlights from Wallace Frances Smith’s “The Low-Rise Speculative Apartment,” published in 1964. The book documents the replacement of single-family homes with low-rise speculative apartments (often in the form of dingbats), concluding that this small-scale, relatively low-cost form of construction plays an important role in adding housing supply to the market. Without requiring challenging lot consolidation or more-expensive construction methods, this kind of incremental, small-scale development allowed neighborhoods of single-family homes to evolve into denser places – even without large incomes in the neighborhoods to afford expensive new construction.
Despite the small scale of each individual building, the net result was a substantial increase in housing production overall.
So, why don’t we see more of this today? While various New Urbanists might not like the specific dingbat product, the idea of small-scale urban density is still appealing. The so-called ‘missing middle’ forms, such as townhouses, flats, and small apartment buildings are all lauded as contextually-friendly ways to add housing and increase density in already developed areas. So, why are these housing types missing?
As Let’s Go LA points out, much of this kind of development has been regulated out of existence. In LA, large portions of the city have been downzoned; the newer zoning no longer allows for by-right development of dingbats and other small-scale apartment buildings. In aggregate, the result is a huge decrease in the potential development allowed in LA.
Much of that LA zoning potential would’ve been in the hands of small-scale landowners rather than large real estate development firms. One consequence of removing that development potential is to erode the ‘franchise’ for incremental development. Let’s Go LA notes that “by zoning small developments out of existence, we’ve made land development a much less democratic process, in the sense that far fewer individuals in the community are able to participate economically.” Instead, 20% of LA’s recent growth has been absorbed in the relatively small confines of downtown. While this is good for downtown (thanks to regulatory changes such as LA’s adaptive re-use ordinance and relaxation of off-street parking requirements – discussed previously here), limiting growth to such a small area of the city has consequences: “when growth is restricted across so much of the rest of the city, there will still be pressure on regional housing prices, and gentrification will continue.”
The phenomenon isn’t limited to LA or to dingbats. Stephen Smith, writing at New York YIMBY, looks at the demise of small-scale development (buildings smaller than five units) in New York: “Put simply: New York City’s small builders have been nearly eradicated. The segment of the market that normally produces about half the city’s new building stock has all but vanished.”
New York City building permits, by number of units. Chart from New York YIMBY, data from the US Census Bureau.
Smith considers several hypotheses for this decline in small-scale development, including the end of some tax abatement programs and weak markets in some parts of the city. Smith also hypothesizes that New York’s recent ‘contextual rezonings’ removed development potential from areas ripe for small-scale development:
The result is that many neighborhoods that were once full of redevelopment opportunities are now closed off to anything but the smallest of one- or two-family projects on vacant lots. This sort of redevelopment was largely banned after the implementation of the 1961 zoning code, but throughout her tenure Amanda Burden closed off the last few areas where it was still allowed.
DC is seeing similar conversations. Demand for additional housing often leads to ‘pop-up’ development, often in the form of vertical additions to existing rowhouses. The term even gets used as a catch-all for any kind of smaller scale infill development. Many existing residents are concerned about the changes (though others are supportive).
Responding to political pressure and resident requests, the Office of Planning proposed their own version of a contextual rezoning. However, during a hearing on the measure, one of the zoning commissioners expressed deep concern about the overall impact of reducing this development potential in a city with a growing population and decreasing housing affordability. Greater Greater Washington’s summary of the exchange captures the concern: “I just don’t think we have a comprehensive housing policy in this city and I’m worried about all the unintended consequences of [this proposal].”
While Charlie Gardner contrasted American urbanism to Mexico, there are other options as well. This paper from Sonia Hirt looks at German land use regulations. German zoning is guided by federal standards; localities have some flexibility within those standards but cannot add restrictions to the basic zoning classifications. One end result is that there is no such thing as a residential zone devoted solely to single-family homes. Likewise, even residential zones must accommodate commerce to meet the “daily needs” of the neighborhood.
In outlining potential routes for zoning reform in the United States building off of lessons learned from Germany, Hirt suggests that instead of relatively small areas of mixed-use zoning, planners could focus on a wider area of limited flexibility for residential development – something that might not look that different from the small, speculative apartment developments of the 50s and 60s; or of duplex development.
CC image from carnagenyc.
The confluence of events in my life (new apartments, travel, wedding planning, etc.) hasn’t left time for much blogging recently. However, there’s always time to read. With that in mind, a few additions to the reading list (and correcting one egregious omission):
The New Geography of Jobs: Enrico Moretti (2012)
Berkeley economist Enrico Moretti delivers a concise and readable summary of the economic geography of innovative industries – the kinds of jobs that produce what Jane Jacobs referred to as “New Work” (Moretti cites Jacobs’ books on urban economics repeatedly). This transition to the ‘innovation sector’ means a profound shift in the economic geography of the US, just as past shifts from agriculture to manufacturing had large impacts on where and how we live. Moretti also explains how these innovative jobs tend to cluster together and the paradox of location and local interactions becoming more and more important in a world of globalization and ever-improving communication technologies.
Also, credit to Moretti for writing such an accessible book. In the acknowledgements, he notes that “serious economists are not supposed to write books – they are supposed to write technical papers.” Yet, such papers don’t easily spread outside of the academia bubble and into the hands of planners and policy-makers.
Edge City: Life on the New Frontier: Joel Garreau (1991)
First, a confession: despite Edge City‘s place in the urban planning canon, I had never read the entire thing (just a chapter here and there as a part of grad school assignments). With the opening of the Metro’s Silver Line through the quintessential Edge City, Tysons Corner, I wanted to correct my own reading list gap. It was also an opportunity to look at Garreau’s work nearly 25 years after he wrote about these places.
Edge City describes the rise of the suburban office/retail node, usually located at a key transportation intersection, obtaining a critical mass of jobs and retail and pulling the business focus away from the traditional downtowns and business districts. Garreau’s description of the thought process behind development deals is insightful (as well as the impacts of unintended consequences, development following the path of least resistance, etc), but hardly limited to the suburban context of edge city.
Some statements from 1990 seem laughable now (“there is no petrochemical analyst around who thinks there is any supply-and-demand reason… that the price of oil should go higher than $30 a barrel in constant dollars in this generation.”), but others seem prescient: speaking of Tysons Corner, Garreau notes that parking lots alone represent a massive land bank, just waiting for a “higher and smarter and more economic use.”
The error, however, seems to be in thinking of places like Tysons as fundamentally decentralized, rather than strengthening centers in a polycentric metropolis. The future of an edge city like Tysons has more in common with urbanism than with the model Garreau describes. Nevertheless, his description of these places is an important element of the grand American suburban experiment.
The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger: Marc Levinson (2006)
Levinson’s history of the shipping container is a fascinating look behind the scenes of how we move goods around. The consequences for cities were many: containers made old break-bulk piers in Manhattan, San Francisco, and other ports obsolete; lower shipping costs enabled greater trade; and intermodal shipping eventually enabled all sorts of new models for trade and distribution.
Levinson documents the challenges of overcoming proprietary interests to develop a series of standards that ensure interoperability, as well as the economic and institutional challenges (from port operators to unions to shipping companies to regulators) in embracing the new model. Levinson provides an insightful account of the difficulties in implementing new systems.
The Power Broker: Robert Moses and the Fall of New York: Robert Caro (1974)
I’m not sure how I missed including this in the reading list. It’s not a recent read for me, but after reading Cap’n Transit’s post on the book and its reminder of Caro’s focus on the use of power rather than a personal, David v. Goliath struggle between Moses and Jane Jacobs, I realized that I didn’t have it on the list. Here’s to correcting that omission.
More than just a documentation of Moses’s life and his use of the institutions to wield power, Caro’s book provides an excellent history of New York City and the background for so many of the institutions that shaped and continue to shape the city to this day. Caro’s focus on the institutional levers of power (a theme he carried through to his biographies of LBJ) gives the book applicability to any major city.
CC image from Thomas Hawk.
Some great articles on the challenges to affordable housing in high-demand cities over the past few days, worthy of some reflection:
Kim-Mai Cutler’s epic Tech Crunch article addresses all sides of the affordability problems facing San Francisco: noting that the situation isn’t unique to the Bay Area nor is it caused solely by tech-industry demand; the regulatory and political constraints to growth not just in the city but in the entire region; rent control, Prop 13, evictions, etc. After thorough documentation of this complex and multifaceted issue, Cutler circles back to the core issue of supply and demand:
[W]ithout serious additions to the entire region’s housing supply, these crisis measures just make San Francisco’s existing middle- and working-class a highly-protected, but endangered population in the long-run. With such limited rental stock available on the market at any time, what kind of person can afford to move here today when the city’s median rent is $3,350?
For the more extreme groups, you cannot logically fight both development and displacement. The real estate speculation running through the city right now is just as much a bet on political paralysis in the face of a long-term housing shortage as it is on San Francisco’s desirability as a place to live.
Cutler’s article lists a whole host of other potential actions, but concludes that any path forward must work towards adding more housing units to the region’s overall supply. Unfortunately, even this broad conclusion isn’t shared by everyone. In section #5 of Cutler’s article, she notes “parts of the progressive community do not believe in supply and demand.”
Ryan Avent notes that this denial of market dynamics, no matter the motive, is not only misguided but also counterproductive: “However altruistic they perceive their mission to be, the result is similar to what you’d get if fat cat industrialists lobbied the government to drive their competition out of business.” This extraction of economic rent from those that own the land and embrace tight land use regulations only aids those with capital:
The housing dynamic in San Francisco raises the capital intensity of consumption. That contributes to an increase in the capital share of income and to the stock of wealth in the economy. Zoning restrictions are a tool of the oligarchy, effectively. I’m only one-fourth kidding. But they are; they are a means by which owners of capital extract an outsized share of the surplus generated by job creation.
Emphasis added. Yet, not everyone is convinced.
This exact denial of economics confounds Let’s Go LA:
It’s important to recognize that the “supply and demand doesn’t apply” argument is wrong, because if we don’t identify the right problems, we can’t develop solutions that work. And in fact, the housing markets in places like LA and SF are operating pretty much how you’d expect them to work if you accept the basic principles of supply and demand as constrained by the regulatory environment.
For example, why are developers only building for the high end of the market? Well, the zoning and permitting requirements make it difficult, time-consuming, and costly to build. Therefore, only a little new supply is going to get built every year.
This point is particularly important, because without agreement on the nature of the problem, it’s hard to even talk about potential policy solutions. And there are a whole host of potential policy solutions once we get over that hump. Unfortunately, discussion about supply constraints in cities (via exclusionary zoning, high construction costs, neighborhood opposition to development, etc) means the conversation naturally focuses on the constraint. Advocating for loosening the constraints can easily be mistaken for (or misconstrued as) mere supply-side economics, a kind of trickle-down urbanism.
This doesn’t need to be the case. Let’s Go LA writes:
Admitting that supply matters doesn’t mean you have to favor unrestrained urban development…
Admitting that supply matters also doesn’t mean you have to favor eliminating existing rent-controlled or rent-stabilized units, and it doesn’t mean that no government intervention is necessary…
Finally, this doesn’t mean that we don’t understand and appreciate the efforts of affordable housing advocates and planners operating within the current zoning and regulatory environment, trying to make sure that low income folks have at least some access to the opportunity of the city…
Another definitional problem when talking about affordability is the very term itself: are we talking about affordable housing? Or are we talking about Affordable Housing? As Dan Keshet notes, affordable housing (lowercase) refers simply to housing that people can afford at market rates – it is both relative to a household’s income (and therefore represents something slightly different for everyone) and also the kind of affordability important to the middle class. Affordable Housing, however, refers to a broad set of subsidized housing programs, ranging from rapid rehousing for the homeless to inclusionary zoning to housing units available for families at 80% of the Area Median Income ($68,500 for a family of four in DC).
Perhaps it’s because of a desire to frame these various subsidy programs more favorably (“affordable housing” sells better than “public housing” or “housing subsidies” – who would be against housing that is affordable?), but the same language that frames subsidy policies favorably can confuse the issue analytically.
The same can be said for housing supply in cities – perhaps the analytic focus isn’t a great selling point or a way to frame the issue.
Cass Gilbert’s Woolworth Building. CC image from Wiki.
Cass Gilbert famously defined a skyscraper as “a machine that makes the land pay,” the kind of structure justified (and often required) by high land values. Gilbert’s distillation of the logic behind these buildings is inherently economic (hat tip to Kazys Varnelis):
Speaking of such enterprises from the financial aspect it is a rule that holds almost invariably that where the building costs less than the land, if properly managed, it is a success and where its costs more than the land it is usually a failure. The land value is established by its location and desirability from a renter’s standpoint hence high rentals make high land values and conversely. The building is merely the machine that makes the land pay. The more economical the machine both in construction and operation provided it fulfills the needs the more profitable the land. At the same time one must not lose sight of the fact that the machine is none the less a useful one because it has a measure of beauty and that architectural beauty judged even from the economic standpoint has an income bearing value.
The economic logic still holds. For private development, you need a building that can make the land pay. The challenge, however, is when such a building isn’t feasible – or isn’t allowed. Consider the dilemma of high land prices, high construction costs, and zoning that constrains the allowable building space. Payton Chung raises this issue, investigating why DC doesn’t see more affordable mid-rise construction:
The Height Act limit for construction in outlying parts of Washington, DC, enacted back in 1899, is 90′ — effectively 7-8 stories. This particular height poses a particularly vexing cost conundrum for developers seeking to build workforce housing in DC’s neighborhoods, since it’s just beyond one of the key cost thresholds in development: that between buildings supported with light frames vs. heavy frames…
In most other cities, the obvious solution is to go ever higher. Once a building crosses into high-rise construction, the sky’s ostensibly the limit. In theory, density can be increased until the additional space brings in enough revenue to more than offset the higher costs. As Linsey Isaacs writes in Multifamily Executive: “Let’s say you have a property on an urban infill site that costs $100 per square foot of land. Wood may cost 10 percent less than its counterpart materials, but by doing a high-rise on the site, you get double the density and the land cost is cut in half.”
In other words, the cost of building taller is not linear. Once you enter the realm of Type I construction, the marginal cost of an additional floor is relatively low. However, Type I construction is substantially more expensive in DC than the mid-rise methods; and many of the 7-9 story buildings ubiquitous in DC fall into a range that requires more expensive construction methods, yet does not allow for the kind of height/density those structures can achieve.
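The density arithmetic behind ‘go higher’ is easy to sketch (a hypothetical illustration; the $100-per-square-foot land figure follows Isaacs’s example, while the lot size and floor counts are invented):

```python
# How density dilutes land cost: land is a fixed cost, so doubling the
# buildable floor area halves the land cost attributed to each square
# foot built. Illustrative numbers only.

land_cost_per_lot_sqft = 100   # $/sq ft of land, per the quoted example
lot_size = 10_000              # sq ft lot (hypothetical)
land_cost = land_cost_per_lot_sqft * lot_size   # $1,000,000, fixed

def land_cost_per_buildable_sqft(floors):
    """Land cost spread over total buildable floor area."""
    buildable_area = lot_size * floors
    return land_cost / buildable_area

mid_rise = land_cost_per_buildable_sqft(5)    # 5 floors -> $20/buildable sq ft
high_rise = land_cost_per_buildable_sqft(10)  # double the density -> $10

print(mid_rise, high_rise)  # 20.0 10.0
```

The catch, as the paragraph above notes, is that in DC the jump to the taller, cheaper-per-floor building type is capped by the Height Act before the savings arrive.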
The challenge, Payton notes, is where land is pricey enough to justify high-rise densities, but rents in that area cannot support the construction cost. It’s DC’s version of ‘the viability trap.’
There are a few options to break the logjam: lowering construction costs and adjusting policies. Payton makes the case for new building technology to lower construction costs – prefabrication, new materials, and so on. In the policy realm, reducing the required parking can also substantially reduce costs, providing a pathway out of the viability trap.
For real-world examples, consider Metro’s recent request for development proposals for station-adjacent land the agency owns. Metro’s requirement that the developer replace 422 parking spaces at Fort Totten (in addition to parking required by zoning and/or demanded by the market) likely pushed any development proposal beyond feasibility. That parcel didn’t get any bids. In practice, this isn’t any different from a large minimum parking requirement via the zoning code.
Another policy change is increasing the allowed height and density. In DC’s consideration of altering the city’s height limit, the benefits of scale with taller construction become apparent:
Per square foot construction costs for new office and apartment buildings at 130, 160, 200 and 250 feet peak at 200 feet but begin to decrease at 250 feet due to cost efficiencies that occur at taller heights. Beyond the cost of construction, other conditions need to be in place to make it financially attractive for a developer or property owner to be willing to tear down an existing building with tenants and build new and taller. These conditions include a substantial increase in rentable space due to taller height; the potential for higher rents; major leases expiring or the opportunity to attract a new anchor tenant; or the need for major investment into an obsolete building. There are also a number of constraints that affect new construction, such as the need to pre-lease a major portion of a new building to obtain financing and the inadequacies of existing transportation and utility infrastructure.
A few feet of height can make a big difference.
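The height study’s logic can be sketched with a few lines of arithmetic. Per-square-foot costs rise with height (peaking near 200 feet in the study), but the extra rentable area spreads a fixed land cost across more square footage. Every number below is hypothetical – chosen only to illustrate how added height can flip a site from infeasible to feasible:

```python
# Hypothetical illustration (all numbers invented): why extra height can
# flip a project's feasibility even though per-square-foot costs rise.
SITE_SF = 20_000        # lot area, sq ft (assumed)
VALUE_PSF = 700         # capitalized value of rentable space, $/sf (assumed)
LAND_COST = 55_000_000  # acquisition cost of the site, $ (assumed)

# (height in feet, buildable floor-area ratio, construction cost $/sf)
# Cost $/sf peaks at 200 ft and dips at 250 ft, echoing the study's finding.
scenarios = [(130, 8, 400), (160, 10, 430), (200, 12, 450), (250, 15, 440)]

for height, far, cost_psf in scenarios:
    gross_sf = SITE_SF * far
    profit = gross_sf * (VALUE_PSF - cost_psf) - LAND_COST
    status = "feasible" if profit > 0 else "infeasible"
    print(f"{height} ft: {status} (residual ${profit / 1e6:,.1f}M)")
```

With these invented inputs, only the 200- and 250-foot scenarios pencil out – the shorter buildings cannot cover the same land cost, which is exactly the viability trap described above.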
The more things change, the more they remain the same.
DC is nearing the end of a lengthy process to re-write the city’s zoning code. The re-write is mostly a reorganization, combining overlays and base zones in an effort to rationalize a text that’s been edited constantly over the better part of half a century. While there are a number of substantive policy changes (all good and worth supporting – reducing parking requirements, allowing accessory dwelling units, allowing corner stores, etc.), the intent of the re-write is to look at the structure and policy of the code, rather than look for areas of the city where the zoning classification should change.
Actual re-zoning will require an update to the city’s comprehensive plan (as all zoning changes must be consistent with the comprehensive plan). As promising as the policy changes in the zoning re-write may be, they do not represent any kind of change to the basic city layout – areas currently planned for high density will see more development, and areas zoned for single-family homes will not.
Last year, the District Government and the National Capital Planning Commission worked on dueling reports (see the documents from DC and NCPC) at the request of Congress on the potential for changing DC’s federally-imposed height limit. Leaving aside the specific merits and drawbacks of this law, the planning team needed first to identify areas that would likely see taller buildings if the height limit were to change.
I’ve borrowed the title of this post from Charlie Gardner, to try to show how little room we’ve planned in our cities for change. Even with the perception of runaway development in growing cities, the amount of space that’s set aside for a physical transformation is remarkably small. Zoning is a relatively new force shaping our cities – about a century old. We’re now seeing the effects of this constraint.
Consider the following examples of freezing city form in place via zoning codes:
Old Urbanist – The zoning straightjacket, part II, writing about Stamford, Connecticut:
In general, the zoning maps continue to reflect the land use patterns and planning dogma of the 1920s, with a small, constrained downtown business district hemmed in by single-use residential districts through which snake narrow commercial corridors.
This, if nothing else, seems like a fundamental, if not the only, purpose and challenge of city planning: accommodating population growth in a way that takes into account long-term development prospects and the political difficulty of upzoning low-density SFD areas. In light of this, can a zoning code like Stamford’s, with a stated purpose of preserving existing neighborhoods in their 1960s form, and resistant to all but changes in the downtown area, really be called a “planning” document at all? The challenges that Stamford faces are not unique, but typical, and progress on them, as zoning approaches its 100th birthday, remains the exception rather than the rule.
Better Institutions – Look at the Amount of Space in Seattle Dedicated to Single-Family Housing, writing about Seattle:
Putting aside the issue of micro-housing and apodments, [ed – I wrote about Seattle’s apodments here] what I’d actually like you to draw your attention to is everything that’s not colored or shaded — all the grey on that map. [ed – here is a link to the map] That’s Single-Family Seattle. That’s the part of the city where most people own their homes, and where residents could actually financially benefit from the property value-increasing development necessary to keep Seattle affordable. It’s also the part of the city that’s off-limits to essentially any new residential construction because preserving single-family “character” is so important. And it’s why residents in the remaining 20% of the city can barely afford their rents.
Dan Keshet – Zoning: the Central Problem, in Austin, Texas:
Zoning touches on most issues Austin faces. But with these maps in mind, I think we can get more specific: one of the major zoning problems Austin faces is the sea of low-density single-family housing surrounding Austin’s islands of high residential density.
Daniel Hertz – Zoning: It’s Just Insane, in Chicago, Illinois:
So one thing that happens when I bring up the fact that Chicago, like pretty much all American cities, criminalizes dense development to the detriment of all sorts of people (I’m great at parties!) is that whoever I’m talking to expresses their incredulity by referencing the incredible numbers of high-rises built in and around downtown over the last decade or so. Then I try to explain that, while impressive, the development downtown is really pretty exceptional, and that 96% of the city or so doesn’t allow that stuff, or anything over 4 floors or so, even in neighborhoods where people are lining up to live, waving their money and bidding up housing prices.
Chris D.P. – The High Cost of Strict Zoning, in Washington, DC:
Across town, the Wesley Heights overlay zone strictly regulates the bulk of the buildings within its boundaries for the sake of preserving the neighborhood character. Is it ethical for the city government to mandate, essentially, that no home be built on less than $637,500 worth of land in certain residential neighborhoods?
The largest concentration of overly restrictive zoning (from an economic perspective) appears to be downtown, along Pennsylvania Ave and K Streets NW. If we value our designated open spaces, and won’t concede the exclusivity of certain neighborhoods, but understand the environmental and economic benefits of compact development, then isn’t downtown as good a place as any to accommodate the growth this city needs?
DC’s height study shows a similar pattern. The very nature of the thought exercise – modeling hypothetical scenarios for taller, denser buildings in DC – requires first identifying areas that might be appropriate for taller buildings. As a part of this exercise, the DC Office of Planning identified areas not appropriate for additional height based on existing plans, historic districts, etc.
These excluded areas included: all federal properties, all historic landmarks and sites; low density areas in historic districts; all remaining low density areas, including residential neighborhoods; institutional sites and public facilities. Those areas are illustrated in the Figure 4 map below. The project team determined that sites already designated as high and medium density (both commercial and residential) were most appropriate for the purposes of this study to model increased building heights because those areas had already been identified for targeting growth in the future through the District’s prior Comprehensive Plan processes.
Put this on a map, and the excluded areas cover 95% of the city:
Now, this isn’t analogous to the comparisons to areas zoned for single-family homes in other cities, nor are all of the areas in red insulated from substantial physical change. However, it does illustrate just how limited the opportunities for growth are. It broadly parallels the city’s future land use map from the Comprehensive Plan, where large portions of the city are planned for low/medium density residential uses (click to open PDF):
The plan’s generalized policy map also illustrates the extent of the planned and regulatory conservation of the existing city form (click to open PDF):
The areas without any shading are neighborhood conservation areas.
All of this should be reassuring to those concerned about the proposed zoning changes, since all changes must be consistent with the comprehensive plan.
Toronto is looking to Honolulu for transit inspiration – looking to tap into the potential for elevated rapid transit to improve the city’s transit expansion plans. However, key city officials are extremely concerned about the impacts of elevated transit on the city. Skepticism is good, and may be required to ensure that elevated rail is successfully integrated into an urban environment, but it shouldn’t be an automatic disqualifier for the kinds of improvements that make rapid transit possible. From the Toronto Star:
Toronto chief planner Jennifer Keesmaat cites the shadow that a structure like the [elevated Gardiner expressway] casts on the street below. She also brandishes one of the chief arguments for building Toronto’s LRTs in the first place.
“From a land use planning perspective, if our objective in integrating higher order transit into our city is to create great places for walking, for commerce, living,… elevated infrastructure doesn’t work so well for any of those objectives,” she said.
It’s true that making elevated rail work in urban areas is a challenge, but it shouldn’t be so easily dismissed. Of particular concern is the willingness to equate the visual impact of the six-lane Gardiner Expressway with a potential two-track elevated rail structure. The other key concern is the conflation of at-grade light rail with grade-separated transit.
Toronto seems full of transit terminology confusion these days. Embattled Mayor Rob Ford has been pushing for subways as the only kind of transit that matters (SUBWAYS SUBWAYS SUBWAYS!) regardless of context or cost. Meanwhile, the transit agency is looking to implement a ‘light rail’ project that features full grade separation and an exclusive right of way – in other words, a subway. Ford opposes the light rail plan in favor of an actual, tunneled line with fewer stations and higher cost. Much of the rhetoric seems focused on equating light rail with Toronto’s legacy mixed-traffic streetcar network.
However, just as Ford’s dogmatic insistence on subways at any cost is irresponsible, Keesmaat’s suggestion that at-grade LRT can accomplish the same transit outcomes as grade-separated LRT is equally misleading. Remember the differences between Class/Category A, B, and C right of way (from Vukan Vuchic, summarized here by Jarrett Walker), paraphrased here:
- Category C – on-street in mixed traffic: buses, streetcars, trams, all operating in the same space as other street users.
- Category B – partially separated tracks/lanes: exclusive right of way for transit, but not separated from cross-traffic. Vuchic dubs this “Semirapid Transit,” often seen with busways or light rail.
- Category A – right of way exclusive to transit, separated from all cross traffic: This is required for rapid transit. Examples include subways/metro systems and some grade-separated busways.
Transit system types by class of right-of-way. X-axis is system performance (speed, capacity, and reliability), Y-axis is the investment required.
The distinction matters because the quality of the transit service is substantially different. Service in Class A right of way will be faster and more reliable than Class B, at-grade LRT. Part of the planning challenge is matching the right level of investment (and ROW category) to the goals for the system. However, even with the need to balance transit goals with those for urban design, planners like Keesmaat shouldn’t categorically dismiss the possibility of building Class A transit facilities.
Part of the confusion might be from the technology. A catenary-powered rail vehicle can operate in Class A, B, or C right of way, and fill the role of streetcar, light rail, or metro – all with little change in technology. Consider San Francisco, where Muni trains operate in all three categories – in mixed traffic, in exclusive lanes, and in a full subway. The virtue of light rail technology is flexibility, but that flexibility can also confuse discussions about the kind of transit system we’re talking about. The vehicle technology isn’t as important as the kind of right-of-way. Indeed, many of the streetcar systems that survived the rise of buses did so precisely because they operated in Class A and B rights-of-way.
Keesmaat certainly appreciates the difference between the kind of regional rapid transit you’ll see in Honolulu and at-grade LRT:
“The Honolulu transit corridor project is really about connecting the city with the county…. It’s about connecting two urban areas. That’s very different from the context we imagine along Eglinton where we would like to see a significant amount of intensification along the corridor,” said Keesmaat.
At the same time, the kind of transit she’s describing and the kind of land use intensity aren’t mutually exclusive at all – quite the opposite.
Subways are nice, but require a high level of density/land use intensity. Payton Chung put it succinctly: “no subways for you, rowhouse neighborhoods.” Payton cites Erick Guerra and Robert Cervero’s research on the cost/benefit break points for land use density around transit lines. This table to the right shows the kind of density needed to make transit cost-effective at various per-mile costs.
The door swings both ways. Rowhouse densities might not justify subways, but they could justify the same Class A transit at elevated-rail construction costs. Lowering high US construction costs would be ideal, but given their systemic increase, elevated transit offers a practical way to extend Class A rights-of-way to areas with less density.
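The break-even logic behind Guerra and Cervero’s table can be sketched simply: the cheaper the line per mile, the lower the station-area density required for benefits to cover costs. The formula and all parameter values below are invented for illustration, not taken from their paper:

```python
# Hedged sketch (invented parameters): lower capital cost per mile
# lowers the density needed around a transit line for it to pencil out.
ANNUAL_COST_FACTOR = 0.06   # annualized share of capital cost (assumed)
RIDES_PER_RESIDENT = 100    # annual transit trips per nearby resident (assumed)
BENEFIT_PER_RIDE = 5.0      # $ benefit per ride (assumed)
CATCHMENT_ACRES = 320       # acres within walking distance, per mile of line (assumed)

def breakeven_density(capital_cost_per_mile: float) -> float:
    """People per acre needed for annual benefits to cover annualized cost."""
    annual_cost = capital_cost_per_mile * ANNUAL_COST_FACTOR
    benefit_per_person = RIDES_PER_RESIDENT * BENEFIT_PER_RIDE
    return annual_cost / (benefit_per_person * CATCHMENT_ACRES)

# Roughly: elevated vs. mid-range vs. tunneled construction costs
for cost in (50e6, 100e6, 250e6):
    print(f"${cost / 1e6:.0f}M/mile -> {breakeven_density(cost):.0f} people/acre")
```

The point of the sketch is the shape of the relationship, not the specific numbers: halving per-mile cost halves the density threshold, which is why cheaper elevated construction can bring Class A transit within reach of rowhouse neighborhoods.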
Instead of categorically dismissing elevated rail, work to better integrate it into the urban environment. Consider the potential for the mode to transform suburban areas ripe for redevelopment. Wide rights-of-way along suburban arterials are readily available for elevated rail; redevelopment can not only turn these places into walkable station areas, but also help integrate elevated rail infrastructure into the new built environment.
Keesmaat’s concerns about elevated rail in Toronto stem from the impact on the street:
“The Catch-22 with elevating any kind of infrastructure – a really good example of this is the subway in Chicago – not only is it ugly, it creates really dark spaces,” she said.
It’s not just the shadow but the noise of elevated transit lines that can be problematic, said TTC CEO Andy Byford. If you build above the street you’ve also got to contend with getting people there, that means elevators or escalators.
First, it’s not clear what Byford is talking about: accessing underground subway stations also requires elevators and escalators. The nature of grade-separated rights-of-way is that they are separated from the grade of the street.
Keesmaat’s concerns about replicating Chicago’s century-old Els are likely misplaced. No one is building that kind of structure anymore – and a quick survey of newer elevated rail shows slimmer, less intrusive structures. Reducing the visual impact and integrating the transit into the cityscape is the real challenge, but the price advantage and the benefits of Class A right-of-way cannot be ignored. It’s not a surprise that the Star paraphrases UBC professor Larry Frank: “On balance… elevated transit should probably be considered more often.”
Nevada autonomous vehicle license plate. CC image from National Museum of American History.
Building on the implications of driverless cars on car ownership, as well as the notion that planners aren’t preparing for the rise of autonomous vehicles, I wanted to dive further into potential implications of widespread adoption of the technology. Nat Bottigheimer in Greater Greater Washington argues that city planning as a profession is unprepared for autonomous vehicles:
Self-driving cars address many of the safety and travel efficiency objections that Smart Growth advocates often make about road expansion, or the use of limited street space.
Part of Bottigheimer’s concern is a lack of quantitative analysis, particularly as it relates to the impacts of self-driving cars. However, the real debate is about the qualitative values that feed into our analysis.
The officials responsible for parking lot and garage building, transit system growth, bike lane construction, intersection expansions, sidewalk improvements, and road widenings need to analyze quantitatively how self-driving cars could affect their plans, and to prepare alternatives in case things change.
There is one over-arching problem with this approach: our current quantitative analysis all too often is nothing but bad pseudo-science. Donald Shoup has extensively documented the problems with minimum parking requirements in zoning codes, for example. Here, poor policy with vast unintended consequences is based on some level of flawed quantitative analysis, the kind that does not acknowledge the inherent uncertainty in our understanding or ability to project the future. Instead, the analysis is based on assumptions, yet the assumptions are really value-laden statements that carry a great deal of weight.
Even the very structure of the planning and regulation for the future carries a bias: a requirement to provide parking spaces in anticipation of future demand will, by nature, ignore the complexity of the marketplace for off-street parking and the natural range of parking demand.
Bottigheimer is also concerned about the impacts of self-driving cars on future land use forecasts:
Planners need to examine how travel forecasting tools that are based on current patterns of car ownership and use will need to change to adapt to new statistical relationships between population, car ownership, trip-making, car-sharing, and travel patterns.
By all means, we need to adjust our forecasting tools. However, we shouldn’t be doing so simply based on the arrival of a new technology. We should adjust them because they’re not particularly accurate and their erroneous projections have large impacts on how we plan. Driverless cars aren’t the problem here. The problem is in our assumptions, our inaccurate analysis, and our decision-making processes that rely on such erroneous projections.
Leaving the limitations of quantitative analysis aside for the moment, we can still hypothesize (qualitatively, perhaps) about the future world of driverless cars. Assuming that autonomous vehicles do indeed reduce car ownership and begin to serve as robo-taxis, we can sketch out plausible scenarios for the future. We assume car ownership will decrease, but vehicle-miles traveled may increase.
City Planning and Street Design:
One of Bottigheimer’s chief concerns is that “planners and placemaking advocates will need to step up their game” given the potential benefits of self-driving cars for safety and increased car capacity.
As mentioned above, much of the ‘safety’ benefits are about cars operating in car-only environments (e.g. highways), when the real safety challenges are in streets with mixed traffic: pedestrians, bikes, cars, and buses all sharing the same space. In this case, the values planners and placemaking advocates are pushing for remain the same, regardless of who – or what – is driving the cars. The laws of physics won’t change; providing a safe environment for pedestrians will still be based on the lowest common denominator for safe speeds, etc.
The biggest concern should be in the environments that aren’t highways, yet aren’t city streets, either. Will driverless cars forever push stroads into highway territory? Borrowing Jarrett Walker’s phrasing, technology can’t change geometry, except in some cases at the margins.
Instead of a technical pursuit of maximum vehicle throughput (informed by quantitative analysis), the real question is one of values. The values that inform planning for a place or a street will set the tone for the quantitative analysis that follows. Maximizing vehicle throughput is not a neutral, analytical goal.
Congestion is a more interesting case, as it will still be an economic problem – centralized control might help mitigate some traffic issues, but it doesn’t solve the fundamental economic conundrum of congestion. Here, too, the economic solutions in a world of human-driven cars will have the same framework as one with computers behind the wheel.
Driverless cars might change the exact price points, but they don’t alter the basic logic behind congestion-mitigation measures like a cordon charge in London or Stockholm, or like Uber’s surge pricing (efficient and rational as it might be, but perhaps too honest). Again, technology can’t fundamentally change geometry. Cars will still be cars, and even if driverless cars improve on the current capacity limitations of highways, they do not eliminate such constraints.
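The economics of a cordon charge can be reduced to a few lines: price peak demand down until entering trips no longer exceed capacity. The linear demand curve and every number below are hypothetical, meant only to show that the structure of the problem survives the switch to driverless cars:

```python
# Minimal sketch (all parameters hypothetical): a cordon charge prices
# peak demand down to road capacity. Driverless cars might shift the
# demand curve or its slope, but not the logic.
FREE_FLOW_DEMAND = 120_000          # trips wanting to enter the cordon at $0 (assumed)
CAPACITY = 90_000                   # trips the network absorbs without breakdown (assumed)
TRIPS_DETERRED_PER_DOLLAR = 4_000   # linear demand slope (assumed)

def clearing_toll(demand: int, capacity: int, slope: int) -> float:
    """Smallest toll at which entering trips no longer exceed capacity."""
    excess = demand - capacity
    return max(0.0, excess / slope)

toll = clearing_toll(FREE_FLOW_DEMAND, CAPACITY, TRIPS_DETERRED_PER_DOLLAR)
print(f"Toll to hold entries at capacity: ${toll:.2f}")  # prints $7.50 here
```

Swap in a different demand curve for a fleet of robo-taxis and the toll level changes, but the mechanism – London’s cordon, Stockholm’s, or Uber’s surge pricing – does not.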
Instead of twisting ourselves in knots over projections about the future that are sure to be wrong, planning for autonomous cars should instead focus on the values and the kind of places we want to plan for. We should adjust our policies to embrace the values of our communities (which alone is a challenging process). We should be aware of the poor accuracy of forecasts and work to build policies with the flexibility to adapt.
CC image from the Museum of American History.
To date, most of the writing about driverless cars seems to focus on technology’s potential to make driving safer by eliminating collisions between vehicles. The thinking is similar to other auto safety improvements such as air bags or anti-lock brakes. These technological advances (endorsed by the US DOT) incrementally improve the safety of those driving – assuming that you are using a narrowly focused definition of ‘safety.’ However, an auto-centric definition of safety only works in auto-centric environments; in urban environments where cars and bikes and pedestrians are all sharing the same space, the definition of safety cannot solely focus on eliminating collisions between high-tech cars (more on this later).
Other articles predict that driverless cars mean the end of transit – an unlikely scenario that ignores the basic geometry of car-based systems and the capacity advantages of transit (imagine shutting down New York’s transit system and trying to fill that role with nothing but taxis – good luck). Furthermore, if driverless cars make vehicle automation easy, then it should also help drive down the costs for automating transit itself (among other potential uses) and unlock the benefits of automated transit.
The far more interesting scenario is one where autonomous vehicles completely upset the benefits of owning your own car. In the Atlantic Cities, Eric Jaffe questions the assumptions of car ownership in a world of driverless cars:
But we’re not so far away from this future that it’s too early to start considering what it might look like. As Matt Yglesias wrote at Slate in August, Google, the leaders in autonomous car technology, must have had some vision in mind to shell out $258 million for the car-slash-ridesharing service Uber: “ubiquitous taxis — summoned via smartphone or weird glasses — that are so cheap they make car ownership obsolete.”
Think about this world of shared autonomous vehicles for a moment. You wake up and get ready for work, and a few minutes before it’s time to leave you press a button and order an SAV [Shared Autonomous Vehicle]. The car has been strategically positioned to wait in high-demand areas, so you don’t have to wait long. You might share the ride with a couple travelers just as you share an elevator, or perhaps pay a premium to ride alone. Either way, you clear your inbox or read the paper during the commute, which is safer and more reliable than it used to be.
So, basically Robo-Uber. Or Auto-Car2go. Or Johnny Cab. This kind of behavior seems to be a far more likely outcome of the technology than the continued paradigm of each individual owning a car for personal use. Just as transit consultant Jarrett Walker talks about the importance of frequent transit service in providing freedom for users, the on-demand nature of the personal car is similarly freeing – but it has required a) ownership of the car to ensure on-demand use, and b) the owner to actually do the driving.
But what kind of changes in behavior can we expect from this shift away from car ownership? Writing at Greater Greater Washington, Nat Bottigheimer notes that planners haven’t even begun to address the issue. Jaffe’s article, however, cites some preliminary research from Austin on the impact of robotaxis.
Civil engineer Kara M. Kockelman of the University of Texas at Austin recently modeled the potential ownership change with grad student Daniel Fagnant…
The results offer an enticing glimpse of a world without car-ownership. Each SAV in the Austin model replaced about 11 conventional household vehicles. The roughly 20,000 people who made up this shared network, formerly owners of roughly as many cars, were now served by a mere 1,700 SAVs. Travelers waited an average of only 20 seconds for their ride to arrive, and you could literally count the number who waited more than 10 minutes on one hand (three). That’s to say nothing of personal savings in terms of cost (insurance, parking, gas) and time.
“Even when we doubled or quadrupled or halved or quartered that trip-making, we didn’t have big changes in our key variables,” says Kockelman. “This replacement rate, this eleven-to-one, those things were very stable.”
Kockelman is quick to point out the caveats. The biggest is that for all the savings in private car-ownership, vehicle-miles traveled doesn’t go down in the Austin model. In fact, it goes up about 10 percent. That’s because not only are SAVs making all the trips people used to make on their own, but they’re repositioning themselves in between trips to reduce wait times (see below). The additional wear also means manufacturers produce about the same number of cars, too, though each new fleet is no doubt a bit smaller and cleaner than the last.
So: a huge decrease in the total number of cars (presumably with a corresponding decrease in parking demand, making the already-questionable logic behind zoning-code parking requirements even more dubious), but an increase in total vehicle miles traveled – meaning such technology won’t be a magic cure for congestion. It won’t spell the end of public transit in our cities. And if the safety benefits accrue mostly to highway travel, it won’t change the need for safer streets where pedestrians, bikes, and cars mix.
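The headline figures from the quoted Austin simulation are easy to sanity-check against each other:

```python
# Sanity check on the quoted Austin simulation: ~20,000 travelers who
# formerly owned roughly as many cars, served by ~1,700 SAVs, with VMT
# rising about 10% from empty repositioning trips.
FORMER_OWNERS = 20_000
SAV_FLEET = 1_700
VMT_INCREASE = 0.10

replacement_rate = FORMER_OWNERS / SAV_FLEET
print(f"Each SAV replaces about {replacement_rate:.1f} household vehicles")
# ~11.8, consistent with the study's "eleven-to-one" figure

fleet_reduction = 1 - SAV_FLEET / FORMER_OWNERS
print(f"Fleet shrinks by {fleet_reduction:.0%}, even as VMT rises {VMT_INCREASE:.0%}")
```

A fleet more than 90% smaller alongside a 10% rise in driving is exactly the combination that undercuts parking requirements without solving congestion.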
The next question is on the impacts of driverless cars on cities and city planning.