CC image from carnagenyc.
The confluence of events in my life (new apartments, travel, wedding planning, etc) hasn’t left time for much blogging recently. However, there’s always time to read. With that in mind, a few additions to the reading list (and correcting one egregious omission):
The New Geography of Jobs: Enrico Moretti (2012)
Berkeley economist Enrico Moretti delivers a concise and readable summary of the economic geography of innovative industries – the kinds of jobs that produce what Jane Jacobs referred to as “New Work” (Moretti cites Jacobs’ books on urban economics repeatedly). This transition to the ‘innovation sector’ means a profound shift in the economic geography of the US, just as past shifts from agriculture to manufacturing had large impacts on where and how we live. Moretti also explains how these innovative jobs tend to cluster together and the paradox of location and local interactions becoming more and more important in a world of globalization and ever-improving communication technologies.
Also, credit to Moretti for writing such an accessible book. In the acknowledgements, he notes that “serious economists are not supposed to write books – they are supposed to write technical papers.” Yet such papers don’t easily spread outside the academic bubble and into the hands of planners and policy-makers.
Edge City: Life on the New Frontier: Joel Garreau (1991)
First, a confession: despite Edge City‘s place in the urban planning canon, I had never read the entire thing (just a chapter here and there as a part of grad school assignments). With the opening of the Metro’s Silver Line through the quintessential Edge City, Tysons Corner, I wanted to correct my own reading list gap. It was also an opportunity to look at Garreau’s work nearly 25 years after he wrote about these places.
Edge City describes the rise of the suburban office/retail node, usually located at a key transportation intersection, obtaining a critical mass of jobs and retail and pulling the business focus away from the traditional downtowns and business districts. Garreau’s description of the thought process behind development deals is insightful (as well as the impacts of unintended consequences, development following the path of least resistance, etc), but hardly limited to the suburban context of edge city.
Some statements from 1990 seem laughable now (“there is no petrochemical analyst around who thinks there is any supply-and-demand reason… that the price of oil should go higher than $30 a barrel in constant dollars in this generation.”), but others seem prescient: speaking of Tysons Corner, Garreau notes that parking lots alone represent a massive land bank, just waiting for a “higher and smarter and more economic use.”
The error, however, seems to be in thinking of places like Tysons as fundamentally decentralized, rather than strengthening centers in a polycentric metropolis. The future of an edge city like Tysons has more in common with urbanism than with the model Garreau describes. Nevertheless, his description of these places is an important element of the grand American suburban experiment.
The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger: Marc Levinson (2006)
Levinson’s history of the shipping container is a fascinating look behind the scenes of how we move goods around. The consequences for cities were profound: containers made the old break bulk piers in Manhattan, San Francisco, and other ports obsolete; lower shipping costs enabled greater trade; and intermodal shipping eventually enabled all sorts of new models for trade and distribution.
Levinson documents the challenges of overcoming proprietary interests to develop a series of standards that ensure interoperability, as well as the economic and institutional challenges (from port operators to unions to shipping companies to regulators) in embracing the new model. Levinson provides an insightful account of the difficulties in implementing new systems.
The Power Broker: Robert Moses and the Fall of New York: Robert Caro (1974)
I’m not sure how I missed including this in the reading list. It’s not a recent read for me, but reading Cap’n Transit’s post on the book and the reminder of Caro’s focus on the use of power rather than a personal, David v. Goliath struggle between Moses and Jane Jacobs, I realized that I didn’t have it on the list. Here’s to correcting that omission.
More than just a documentation of Moses’s life and his use of institutions to wield power, Caro’s book provides an excellent history of New York City and the background for so many of the institutions that shaped and continue to shape the city to this day. Caro’s focus on the institutional levers of power (a theme he carried through to his biographies of LBJ) gives the book applicability to any major city.
CC image from Thomas Hawk.
Some great articles on the challenges to affordable housing in high-demand cities over the past few days, worthy of some reflection:
Kim-Mai Cutler’s epic Tech Crunch article addresses all sides of the affordability problems facing San Francisco: noting that the situation isn’t unique to the Bay Area nor is it caused solely by tech-industry demand; the regulatory and political constraints to growth not just in the city but in the entire region; rent control, Prop 13, evictions, etc. After thorough documentation of this complex and multifaceted issue, Cutler circles back to the core issue of supply and demand:
[W]ithout serious additions to the entire region’s housing supply, these crisis measures just make San Francisco’s existing middle- and working-class a highly-protected, but endangered population in the long-run. With such limited rental stock available on the market at any time, what kind of person can afford to move here today when the city’s median rent is $3,350?
For the more extreme groups, you cannot logically fight both development and displacement. The real estate speculation running through the city right now is just as much a bet on political paralysis in the face of a long-term housing shortage as it is on San Francisco’s desirability as a place to live.
Cutler’s article lists a whole host of other potential actions, but concludes that any path forward must work towards adding more housing units to the region’s overall supply. Unfortunately, even this broad conclusion isn’t shared by everyone. In section #5 of Cutler’s article, she notes “parts of the progressive community do not believe in supply and demand.”
Ryan Avent notes that this denial of market dynamics, no matter the motive, is not only misguided but also counter-productive: “However altruistic they perceive their mission to be, the result is similar to what you’d get if fat cat industrialists lobbied the government to drive their competition out of business.” This extraction of economic rent by those who own the land and embrace tight land use regulations only aids those with capital:
The housing dynamic in San Francisco raises the capital intensity of consumption. That contributes to an increase in the capital share of income and to the stock of wealth in the economy. Zoning restrictions are a tool of the oligarchy, effectively. I’m only one-fourth kidding. But they are; they are a means by which owners of capital extract an outsized share of the surplus generated by job creation.
Emphasis added. Yet, not everyone is convinced.
This exact denial of economics confounds Let’s Go LA:
It’s important to recognize that the “supply and demand doesn’t apply” argument is wrong, because if we don’t identify the right problems, we can’t develop solutions that work. And in fact, the housing markets in places like LA and SF are operating pretty much how you’d expect them to work if you accept the basic principles of supply and demand as constrained by the regulatory environment.
For example, why are developers only building for the high end of the market? Well, the zoning and permitting requirements make it difficult, time-consuming, and costly to build. Therefore, only a little new supply is going to get built every year.
This point is particularly important, because without agreement on the nature of the problem, it’s hard to even talk about potential policy solutions. And there are a whole host of potential policy solutions once we get over that hump. Unfortunately, discussion about supply constraints in cities (via exclusionary zoning, high construction costs, neighborhood opposition to development, etc) means the conversation naturally focuses on the constraint. Advocating for loosening the constraints can easily be mistaken for (or misconstrued as) mere supply-side economics, a kind of trickle-down urbanism.
This doesn’t need to be the case. Let’s Go LA writes:
Admitting that supply matters doesn’t mean you have to favor unrestrained urban development…
Admitting that supply matters also doesn’t mean you have to favor eliminating existing rent-controlled or rent-stabilized units, and it doesn’t mean that no government intervention is necessary…
Finally, this doesn’t mean that we don’t understand and appreciate the efforts of affordable housing advocates and planners operating within the current zoning and regulatory environment, trying to make sure that low income folks have at least some access to the opportunity of the city…
Another definitional problem when talking about affordability is the very term itself: are we talking about affordable housing? Or are we talking about Affordable Housing? As Dan Keshet notes, affordable housing (lowercase) refers simply to housing that people can afford at market rates – it is both relative to a household’s income (and therefore represents something slightly different for everyone) and also the kind of affordability important to the middle class. Affordable Housing, however, refers to a broad set of subsidized housing programs, ranging from rapid rehousing for the homeless to inclusionary zoning to housing units available for families at 80% of the Area Median Income ($68,500 for a family of four in DC).
Perhaps it’s because of a desire to frame these various subsidy programs more favorably (“affordable housing” sells better than “public housing” or “housing subsidies” – who would be against housing that is affordable?), but the same language that frames subsidy policies favorably can confuse the issue analytically.
The same can be said for housing supply in cities – perhaps the analytic focus isn’t a great selling point or a way to frame the issue.
Cass Gilbert’s Woolworth Building. CC image from Wiki.
Cass Gilbert famously defined a skyscraper as “a machine that makes the land pay,” the kind of structure justified (and often required) by high land values. Gilbert’s distillation of the logic behind these buildings is inherently economic (hat tip to Kazys Varnelis):
Speaking of such enterprises from the financial aspect it is a rule that holds almost invariably that where the building costs less than the land, if properly managed, it is a success and where its costs more than the land it is usually a failure. The land value is established by its location and desirability from a renter’s standpoint hence high rentals make high land values and conversely. The building is merely the machine that makes the land pay. The more economical the machine both in construction and operation provided it fulfills the needs the more profitable the land. At the same time one must not lose sight of the fact that the machine is none the less a useful one because it has a measure of beauty and that architectural beauty judged even from the economic standpoint has an income bearing value.
The economic logic still holds. For private development, you need a building that can make the land pay. The challenge, however, is when such a building isn’t feasible – or isn’t allowed. Consider the dilemma of high land prices, high construction costs, and zoning that constrains the allowable building space. Payton Chung raises this issue, investigating why DC doesn’t see more affordable mid-rise construction:
The Height Act limit for construction in outlying parts of Washington, DC, enacted back in 1899, is 90′ — effectively 7-8 stories. This particular height poses a particularly vexing cost conundrum for developers seeking to build workforce housing in DC’s neighborhoods, since it’s just beyond one of the key cost thresholds in development: that between buildings supported with light frames vs. heavy frames…
In most other cities, the obvious solution is to go ever higher. Once a building crosses into high-rise construction, the sky’s ostensibly the limit. In theory, density can be increased until the additional space brings in enough revenue to more than offset the higher costs. As Linsey Isaacs writes in Multifamily Executive: ”Let’s say you have a property on an urban infill site that costs $100 per square foot of land. Wood may cost 10 percent less than its counterpart materials, but by doing a high-rise on the site, you get double the density and the land cost is cut in half.”
In other words, the cost of building taller is not linear. Once you enter the realm of Type I construction, the marginal cost of an additional floor is relatively low. However, Type I construction is substantially more expensive in DC than the mid-rise methods; and many of the 7-9 story buildings ubiquitous in DC fall into a range that requires the more expensive construction methods, yet does not allow for the kind of height/density those structures can achieve.
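The land-cost side of the Isaacs quote can be sketched with a bit of arithmetic. All of the figures below beyond the $100 per square foot land price are assumptions for the sake of illustration, not numbers from the article:

```python
# Spreading a fixed land price over more built floor area cuts the land
# cost embedded in each built square foot -- which can offset pricier
# high-rise construction. The FAR values are hypothetical.

def land_cost_per_built_sqft(land_price_per_sqft: float, far: float) -> float:
    """Land cost allocated to each built square foot at a given floor-area ratio (FAR)."""
    return land_price_per_sqft / far

land = 100.0  # $/sq ft of land, per the quote

midrise = land_cost_per_built_sqft(land, 4.0)   # mid-rise wood frame at FAR 4
highrise = land_cost_per_built_sqft(land, 8.0)  # high-rise at double the density

print(midrise, highrise)  # 25.0 12.5
```

Doubling the density cuts the land cost per built square foot in half, exactly the trade the quote describes.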
The challenge, Payton notes, is where land is pricey enough to justify high-rise densities, but rents in that area cannot support the construction cost. It’s DC’s version of ‘the viability trap.’
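The trap can be sketched as a toy feasibility check. Every number here is a hypothetical placeholder; a real pro forma would include vacancy, operating costs, financing, and fees:

```python
# Minimal sketch of the 'viability trap': land prices justify high-rise
# density, but achievable rents can't cover high-rise construction costs.
# All inputs are illustrative assumptions.

def capitalized_value(rent_per_sqft_yr: float, sqft: float, cap_rate: float) -> float:
    """Rough building value: annual rent capitalized at a market cap rate."""
    return rent_per_sqft_yr * sqft / cap_rate

def is_feasible(rent_per_sqft_yr: float, sqft: float, cap_rate: float,
                land_cost: float, construction_cost_per_sqft: float) -> bool:
    """True when the finished value covers land plus construction."""
    total_cost = land_cost + construction_cost_per_sqft * sqft
    return capitalized_value(rent_per_sqft_yr, sqft, cap_rate) >= total_cost

# Same site, same rents: mid-rise construction pencils out, high-rise does not.
land, cap, rent = 20_000_000.0, 0.06, 30.0
print(is_feasible(rent, 200_000, cap, land, 250.0))  # mid-rise:  True
print(is_feasible(rent, 400_000, cap, land, 500.0))  # high-rise: False
```

The high-rise adds floor area, but at these assumed rents the extra value never catches up to the jump in per-square-foot construction cost.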
There are a few options to break the logjam: lowering construction costs and adjusting policies. Payton makes the case for new building technology to lower construction costs – prefabrication, new materials, and so on. Each holds the promise of decreasing construction costs. In the policy realm, reducing the required parking can also substantially reduce costs, providing a pathway out of the viability trap.
For real-world examples, consider Metro’s recent request for development proposals for station-adjacent land the agency owns. Metro’s requirement that the developer replace 422 parking spaces at Fort Totten (in addition to parking required by zoning and/or demanded by the market) likely pushed any development proposal beyond feasibility. That parcel didn’t get any bids. In practice, this isn’t any different from a large minimum parking requirement via the zoning code.
Another policy change is increasing the allowed height and density. In DC’s consideration of altering the city’s height limit, the benefits of scale with taller construction become apparent:
Per square foot construction costs for new office and apartment buildings at 130, 160, 200 and 250 feet peak at 200 feet but begin to decrease at 250 feet due to cost efficiencies that occur at taller heights. Beyond the cost of construction, other conditions need to be in place to make it financially attractive for a developer or property owner to be willing to tear down an existing building with tenants and build new and taller. These conditions include a substantial increase in rentable space due to taller height; the potential for higher rents; major leases expiring or the opportunity to attract a new anchor tenant; or the need for major investment into an obsolete building. There are also a number of constraints that affect new construction, such as the need to pre-lease a major portion of a new building to obtain financing and the inadequacies of existing transportation and utility infrastructure.
A few feet of height can make a big difference.
The more things change, the more they remain the same.
DC is nearing the end of a lengthy process to re-write the city’s zoning code. The re-write is mostly a reorganization, combining overlays and base zones in an effort to rationalize a text that’s been edited constantly over the better part of half a century. While there are a number of substantive policy changes (all good and worth supporting – reducing parking requirements, allowing accessory dwelling units, allowing corner stores, etc.), the intent of the re-write is to look at the structure and policy of the code, rather than look for areas of the city where the zoning classification should change.
Actual re-zoning will require an update to the city’s comprehensive plan (as all zoning changes must be consistent with the comprehensive plan). As promising as the policy changes in the zoning re-write may be, they do not represent any kind of change to the basic city layout – areas currently planned for high density will see more development, and areas zoned for single-family homes will not.
Last year, the District Government and the National Capital Planning Commission worked on dueling reports (see the documents from DC and NCPC) at the request of Congress on the potential for changing DC’s federally-imposed height limit. Leaving aside the specific merits and drawbacks of this law, the planning team needed first to identify areas that would likely see taller buildings if the height limit were to change.
I’ve borrowed the title of this post from Charlie Gardner, to try to show how little room we’ve planned in our cities for change. Even with the perception of runaway development in growing cities, the amount of space that’s set aside for a physical transformation is remarkably small. Zoning is a relatively new force shaping our cities – about a century old. We’re now seeing the effects of this constraint.
Consider the following examples of freezing city form in place via zoning codes:
Old Urbanist – The zoning straightjacket, part II, writing about Stamford, Connecticut:
In general, the zoning maps continue to reflect the land use patterns and planning dogma of the 1920s, with a small, constrained downtown business district hemmed in by single-use residential districts through which snake narrow commercial corridors.
This, if nothing else, seems like a fundamental, if not the only, purpose and challenge of city planning: accommodating population growth in a way that takes into account long-term development prospects and the political difficulty of upzoning low-density SFD areas. In light of this, can a zoning code like Stamford’s, with a stated purpose of preserving existing neighborhoods in their 1960s form, and resistant to all but changes in the downtown area, really be called a “planning” document at all? The challenges that Stamford faces are not unique, but typical, and progress on them, as zoning approaches its 100th birthday, remains the exception rather than the rule.
Better Institutions – Look at the Amount of Space in Seattle Dedicated to Single-Family Housing, writing about Seattle:
Putting aside the issue of micro-housing and apodments, [ed – I wrote about Seattle’s apodments here] what I’d actually like you to draw your attention to is everything that’s not colored or shaded — all the grey on that map. [ed – here is a link to the map] That’s Single-Family Seattle. That’s the part of the city where most people own their homes, and where residents could actually financially benefit from the property value-increasing development necessary to keep Seattle affordable. It’s also the part of the city that’s off-limits to essentially any new residential construction because preserving single-family “character” is so important. And it’s why residents in the remaining 20% of the city can barely afford their rents.
Dan Keshet – Zoning: the Central Problem, in Austin, Texas:
Zoning touches on most issues Austin faces. But with these maps in mind, I think we can get more specific: one of the major zoning problems Austin faces is the sea of low-density single-family housing surrounding Austin’s islands of high residential density.
Daniel Hertz – Zoning: It’s Just Insane, in Chicago, Illinois:
So one thing that happens when I bring up the fact that Chicago, like pretty much all American cities, criminalizes dense development to the detriment of all sorts of people (I’m great at parties!) is that whoever I’m talking to expresses their incredulity by referencing the incredible numbers of high-rises built in and around downtown over the last decade or so. Then I try to explain that, while impressive, the development downtown is really pretty exceptional, and that 96% of the city or so doesn’t allow that stuff, or anything over 4 floors or so, even in neighborhoods where people are lining up to live, waving their money and bidding up housing prices.
Chris D.P. – The High Cost of Strict Zoning, in Washington, DC:
Across town, the Wesley Heights overlay zone strictly regulates the bulk of the buildings within its boundaries for the sake of preserving the neighborhood character. Is it ethical for the city government to mandate, essentially, that no home be built on less than $637,500 worth of land in certain residential neighborhoods?
The largest concentration of overly restrictive zoning (from an economic perspective) appears to be downtown, along Pennsylvania Ave and K Streets NW. If we value our designated open spaces, and won’t concede the exclusivity of certain neighborhoods, but understand the environmental and economic benefits of compact development, then isn’t downtown as good a place as any to accommodate the growth this city needs?
DC’s height study shows a similar pattern. The very nature of the thought exercise (modeling hypothetical scenarios for taller and denser buildings in DC) requires first identifying areas that might be appropriate for taller buildings. As a part of this exercise, the DC Office of Planning identified areas not appropriate for additional height based on existing plans, historic districts, etc.
These excluded areas included: all federal properties, all historic landmarks and sites; low density areas in historic districts; all remaining low density areas, including residential neighborhoods; institutional sites and public facilities. Those areas are illustrated in the Figure 4 map below. The project team determined that sites already designated as high and medium density (both commercial and residential) were most appropriate for the purposes of this study to model increased building heights because those areas had already been identified for targeting growth in the future through the District’s prior Comprehensive Plan processes.
Put this on a map, and the excluded areas cover 95% of the city:
Now, this isn’t analogous to the comparisons to areas zoned for single-family homes in other cities, nor are all of the areas in red insulated from substantial physical change. However, it does illustrate just how limited the opportunities for growth are. It broadly parallels the city’s future land use map from the Comprehensive Plan, where large portions of the city are planned for low/medium density residential uses (click to open PDF):
The plan’s generalized policy map also illustrates the extent of the planned and regulatory conservation of the existing city form (click to open PDF):
The areas without any shading are neighborhood conservation areas.
All of this should be reassuring to those concerned about the proposed zoning changes, since all changes must be consistent with the comprehensive plan.
Toronto is looking to Honolulu for transit inspiration – looking to tap into the potential for elevated rapid transit to improve the city’s transit expansion plans. However, key city officials are extremely concerned about the impacts of elevated transit on the city. Skepticism is good, and may be required to ensure that elevated rail is successfully integrated into an urban environment, but it shouldn’t be an automatic disqualifier for the kinds of improvements that make rapid transit possible. From the Toronto Star:
Toronto chief planner Jennifer Keesmaat cites the shadow that a structure like the [elevated Gardiner expressway] casts on the street below. She also brandishes one of the chief arguments for building Toronto’s LRTs in the first place.
“From a land use planning perspective, if our objective in integrating higher order transit into our city is to create great places for walking, for commerce, living,… elevated infrastructure doesn’t work so well for any of those objectives,” she said.
It’s true that making elevated rail work in urban areas is a challenge, but it shouldn’t be so easily dismissed. Of particular concern is the willingness to equate the visual impact of the six-lane Gardiner Expressway with that of a potential two-track elevated rail structure. The other key concern is the conflation of grade-separated transit with at-grade light rail.
Toronto seems full of transit terminology confusion these days. Embattled Mayor Rob Ford has been pushing for subways as the only kind of transit that matters (SUBWAYS SUBWAYS SUBWAYS!) regardless of context or cost. Meanwhile, the transit agency is looking to implement a ‘light rail’ project that features full grade separation and an exclusive right of way – in other words, a subway. Ford opposes the light rail plan in favor of an actual, tunneled line with fewer stations and higher cost. Much of the rhetoric seems focused on equating light rail with Toronto’s legacy mixed-traffic streetcar network.
However, just as Ford’s dogmatic insistence on subways at any cost is irresponsible, Keesmaat’s suggestion that at-grade LRT can accomplish the same transit outcomes as grade-separated LRT is equally misleading. Remember the differences between Class/Category A, B, and C right of way (from Vukan Vuchic, summarized here by Jarrett Walker), paraphrased here:
- Category C – on-street in mixed traffic: buses, streetcars, trams, all operating in the same space as other street users.
- Category B – partially separated tracks/lanes: exclusive right of way for transit, but not separate from cross-traffic. Vuchic dubs this “Semirapid Transit,” often seen with busways or light rail.
- Category A – right of way exclusive to transit, separated from all cross traffic: This is required for rapid transit. Examples include subways/metro systems and some grade-separated busways.
Transit system types by class of right-of-way. X-axis is system performance (speed, capacity, and reliability), Y-axis is the investment required.
The distinction matters because the quality of the transit service is substantially different. Service in Class A right of way will be faster and more reliable than Class B, at-grade LRT. Part of the planning challenge is matching the right level of investment (and ROW category) to the goals for the system. However, even with the need to balance transit goals with those for urban design, planners like Keesmaat shouldn’t categorically dismiss the possibility of building Class A transit facilities.
Part of the confusion might be from the technology. A catenary-powered rail vehicle can operate in Class A, B, or C right of way, and fill the role of streetcar, light rail, or metro – all with little change in technology. Consider San Francisco, where Muni trains operate in all three categories – in mixed traffic, in exclusive lanes, and in a full subway. The virtue of light rail technology is flexibility, but that flexibility can also confuse discussions about the kind of transit system we’re talking about. The vehicle technology isn’t as important as the kind of right-of-way. Indeed, many of the streetcar systems that survived the rise of buses did so precisely because they operated in Class A and B rights-of-way.
Keesmaat certainly appreciates the difference between the kind of regional rapid transit you’ll see in Honolulu and at-grade LRT:
“The Honolulu transit corridor project is really about connecting the city with the county…. It’s about connecting two urban areas. That’s very different from the context we imagine along Eglinton where we would like to see a significant amount of intensification along the corridor,” said Keesmaat.
At the same time, the kind of transit she’s describing and the kind of land use intensity aren’t mutually exclusive at all – quite the opposite.
Subways are nice, but require a high level of density/land use intensity. Payton Chung put it succinctly: “no subways for you, rowhouse neighborhoods.” Payton cites Erick Guerra and Robert Cervero’s research on the cost/benefit break points for land use density around transit lines, tabulating the kind of density needed to make transit cost-effective at various per-mile costs.
The door swings both ways. Rowhouse densities might not justify subways, but they could justify the same Class A transit if it were built at elevated-rail construction costs. Lowering the systemically high cost of US construction would be the ideal fix, but in the meantime, elevated transit offers a way to extend Class A rights-of-way to areas with less density.
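The logic of that trade-off can be sketched with a toy linear model: the break-even density near stations scales with construction cost per mile. The per-mile cost figures and the scaling constant below are illustrative assumptions, not Guerra and Cervero’s published estimates:

```python
# Toy model: cheaper construction per mile lowers the station-area density
# needed for a line to pencil out. The constant k and the cost figures are
# hypothetical, chosen only to illustrate the proportional relationship.

def breakeven_density(cost_per_mile: float, k: float = 1e-4) -> float:
    """People + jobs per square mile needed near stations to justify a given cost."""
    return k * cost_per_mile

subway = breakeven_density(600_000_000)    # assumed ~$600M/mile tunneled
elevated = breakeven_density(250_000_000)  # assumed ~$250M/mile elevated

print(subway, elevated)  # 60000.0 25000.0
```

Under these assumptions, elevated construction cuts the required density by more than half, which is exactly why it can bring Class A service within reach of rowhouse neighborhoods.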
Instead of categorically dismissing elevated rail, work to better integrate it into the urban environment. Consider the potential for the mode to transform suburban areas ripe for redevelopment. Wide rights-of-way along suburban arterials are readily available for elevated rail; redevelopment can not only turn these places into walkable station areas, but also help integrate elevated rail infrastructure into the new built environment.
Keesmaat’s concerns about elevated rail in Toronto stem from the impact on the street:
“The Catch-22 with elevating any kind of infrastructure – a really good example of this is the subway in Chicago – not only is it ugly, it creates really dark spaces,” she said.
It’s not just the shadow but the noise of elevated transit lines that can be problematic, said TTC CEO Andy Byford. If you build above the street you’ve also got to contend with getting people there, that means elevators or escalators.
First, it’s not clear what Byford is talking about: accessing subway stations also requires elevators and escalators. The nature of grade separated rights-of-way is that they are separated from the grade of the street.
Keesmaat’s concerns about replicating Chicago’s century-old Els are likely misplaced. No one is building that kind of structure anymore – and a quick survey of newer elevated rail shows slimmer, less intrusive structures. Reducing the visual impact and integrating the transit into the cityscape is the real challenge, but the price advantage and the benefits of Class A right-of-way cannot be ignored. It’s not a surprise that the Star paraphrases UBC professor Larry Frank: “On balance… elevated transit should probably be considered more often.”
Nevada autonomous vehicle license plate. CC image from National Museum of American History.
Building on the implications of driverless cars on car ownership, as well as the notion that planners aren’t preparing for the rise of autonomous vehicles, I wanted to dive further into potential implications of widespread adoption of the technology. Nat Bottigheimer in Greater Greater Washington argues that city planning as a profession is unprepared for autonomous vehicles:
Self-driving cars address many of the safety and travel efficiency objections that Smart Growth advocates often make about road expansion, or the use of limited street space.
Part of Bottigheimer’s concern is a lack of quantitative analysis, particularly as it relates to the impacts of self-driving cars. However, the real debate is about qualitative values that feed into our analysis.
The officials responsible for parking lot and garage building, transit system growth, bike lane construction, intersection expansions, sidewalk improvements, and road widenings need to analyze quantitatively how self-driving cars could affect their plans, and to prepare alternatives in case things change.
There is one over-arching problem with this approach: our current quantitative analysis all too often is nothing but bad pseudo-science. Donald Shoup has extensively documented the problems with minimum parking requirements in zoning codes, for example. Here, poor policy with vast unintended consequences is based on some level of flawed quantitative analysis, the kind that does not acknowledge the inherent uncertainty in our understanding or ability to project the future. Instead, the analysis is based on assumptions, yet the assumptions are really value-laden statements that carry a great deal of weight.
Even the very structure of the planning and regulation for the future carries a bias: a requirement to provide parking spaces in anticipation of future demand will, by nature, ignore the complexity of the marketplace for off-street parking and the natural range of parking demand.
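To make that bias concrete, here is a toy sketch (all numbers are hypothetical, not drawn from any actual code or study) of how a parking minimum pegged to a fixed ratio overbuilds relative to the natural range of demand:

```python
# Toy illustration: a fixed parking minimum versus a realistic range of
# daily peak demand. All numbers are hypothetical.

# Zoning code: 4 spaces per 1,000 sq ft, applied to a 50,000 sq ft building
required_spaces = 4 * (50_000 / 1_000)  # 200 spaces, fixed by rule

# Observed daily peak occupancy varies widely in practice
observed_daily_peaks = [95, 110, 120, 130, 125, 140, 105]

avg_peak = sum(observed_daily_peaks) / len(observed_daily_peaks)
avg_surplus = required_spaces - avg_peak

print(f"Required by code: {required_spaces:.0f} spaces")
print(f"Average observed peak: {avg_peak:.0f} spaces")
print(f"Average surplus: {avg_surplus:.0f} spaces sit empty")
```

The point of the sketch is that the single number written into the code never responds to the distribution of actual demand below it.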
Bottigheimer is also concerned about the impacts of self-driving cars on future land use forecasts:
Planners need to examine how travel forecasting tools that are based on current patterns of car ownership and use will need to change to adapt to new statistical relationships between population, car ownership, trip-making, car-sharing, and travel patterns.
By all means, we need to adjust our forecasting tools. However, we shouldn’t be doing so simply based on the arrival of a new technology. We should adjust them because they’re not particularly accurate and their erroneous projections have large impacts on how we plan. Driverless cars aren’t the problem here. The problem is in our assumptions, our inaccurate analysis, and our decision-making processes that rely on such erroneous projections.
Leaving the limitations of quantitative analysis aside for the moment, we can still hypothesize (qualitatively, perhaps) about the future world of driverless cars. Assuming that autonomous vehicles do indeed reduce car ownership and begin to serve as robo-taxis, we can sketch out plausible scenarios for the future. We assume car ownership will decrease, but vehicle-miles traveled may increase.
City Planning and Street Design:
One of Bottigheimer’s chief concerns is that “planners and placemaking advocates will need to step up their game” given the potential benefits of autonomous vehicles for safety and increased car capacity.
As mentioned above, much of the ‘safety’ benefits are about cars operating in car-only environments (e.g. highways), when the real safety challenges are in streets with mixed traffic: pedestrians, bikes, cars, and buses all sharing the same space. In this case, the values planners and placemaking advocates are pushing for remain the same, regardless of who – or what – is driving the cars. The laws of physics won’t change; providing a safe environment for pedestrians will still be based on the lowest common denominator for safe speeds, etc.
The biggest concern should be in the environments that aren’t highways, yet aren’t city streets, either. Will driverless cars forever push stroads into highway territory? Borrowing Jarrett Walker’s phrasing, technology can’t change geometry, except in some cases at the margins.
Instead of a technical pursuit of maximum vehicle throughput (informed by quantitative analysis), the real question is one of values. The values that inform planning for a place or a street will set the tone for the quantitative analysis that follows. Maximizing vehicle throughput is not a neutral, analytical goal.
Congestion is a more interesting case, as it will still be an economic problem – centralized control might help mitigate some traffic issues, but it doesn’t solve the fundamental economic conundrum of congestion. Here, too, the economic solutions in a world of human-driven cars will have the same framework as one with computers behind the wheel.
Driverless cars might change the exact price points, but they don’t alter the basic logic behind congestion-mitigation measures like a cordon charge in London or Stockholm, or like Uber’s surge pricing (efficient and rational as it might be, but perhaps too honest). Again, technology can’t fundamentally change geometry. Cars will still be cars, and even if driverless cars improve on the current capacity limitations of highways, they do not eliminate such constraints.
Instead of twisting ourselves in knots over projections about the future that are sure to be wrong, planning for autonomous cars should instead focus on the values and the kind of places we want to plan for. We should adjust our policies to embrace the values of our communities (which alone is a challenging process). We should be aware of the poor accuracy of forecasts and work to build policies with the flexibility to adapt.
CC image from the Museum of American History.
To date, most of the writing about driverless cars seems to focus on technology’s potential to make driving safer by eliminating collisions between vehicles. The thinking is similar to other auto safety improvements such as air bags or anti-lock brakes. These technological advances (endorsed by the US DOT) incrementally improve the safety of those driving – assuming that you are using a narrowly focused definition of ‘safety.’ However, an auto-centric definition of safety only works in auto-centric environments; in urban environments where cars and bikes and pedestrians are all sharing the same space, the definition of safety cannot solely focus on eliminating collisions between high-tech cars (more on this later).
Other articles predict that driverless cars mean the end of transit – an unlikely scenario that ignores the basic geometry of car-based systems and the capacity advantages of transit (imagine shutting down New York’s transit system and trying to fill that role with nothing but taxis – good luck). Furthermore, if driverless cars make vehicle automation easy, then it should also help drive down the costs for automating transit itself (among other potential uses) and unlock the benefits of automated transit.
The far more interesting scenario is one where autonomous vehicles completely upset the benefits of owning your own car. In the Atlantic Cities, Eric Jaffe questions the assumptions of car ownership in a world of driverless cars:
But we’re not so far away from this future that it’s too early to start considering what it might look like. As Matt Yglesias wrote at Slate in August, Google, the leaders in autonomous car technology, must have had some vision in mind to shell out $258 million for the car-slash-ridesharing service Uber: “ubiquitous taxis — summoned via smartphone or weird glasses — that are so cheap they make car ownership obsolete.”
Think about this world of shared autonomous vehicles for a moment. You wake up and get ready for work, and a few minutes before it’s time to leave you press a button and order an SAV [Shared Autonomous Vehicle]. The car has been strategically positioned to wait in high-demand areas, so you don’t have to wait long. You might share the ride with a couple travelers just as you share an elevator, or perhaps pay a premium to ride alone. Either way, you clear your inbox or read the paper during the commute, which is safer and more reliable than it used to be.
So, basically Robo-Uber. Or Auto-Car2go. Or Johnny Cab. This kind of behavior seems to be a far more likely outcome of the technology than the continued paradigm of each individual owning a car for personal use. Just as transit consultant Jarrett Walker talks about the importance of frequent transit service in providing freedom for users, the on-demand nature of the personal car is similarly freeing – but it required a) ownership of the car to ensure on-demand use, and b) the owner to actually do the driving.
But what kind of changes in behavior can we expect from this shift away from car ownership? Writing at Greater Greater Washington, Nat Bottigheimer notes that planners haven’t even begun to address the issue. Jaffe’s article, however, cites some preliminary research from Austin on the impact of robotaxis.
Civil engineer Kara M. Kockelman of the University of Texas at Austin recently modeled the potential ownership change with grad student Daniel Fagnant…
The results offer an enticing glimpse of a world without car-ownership. Each SAV in the Austin model replaced about 11 conventional household vehicles. The roughly 20,000 people who made up this shared network, formerly owners of roughly as many cars, were now served by a mere 1,700 SAVs. Travelers waited an average of only 20 seconds for their ride to arrive, and you could literally count the number who waited more than 10 minutes on one hand (three). That’s to say nothing of personal savings in terms of cost (insurance, parking, gas) and time.
“Even when we doubled or quadrupled or halved or quartered that trip-making, we didn’t have big changes in our key variables,” says Kockelman. “This replacement rate, this eleven-to-one, those things were very stable.”
Kockelman is quick to point out the caveats. The biggest is that for all the savings in private car-ownership, vehicle-miles traveled doesn’t go down in the Austin model. In fact, it goes up about 10 percent. That’s because not only are SAVs making all the trips people used to make on their own, but they’re repositioning themselves in between trips to reduce wait times (see below). The additional wear also means manufacturers produce about the same number of cars, too, though each new fleet is no doubt a bit smaller and cleaner than the last.
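The headline numbers from the Austin simulation are easy to check back-of-the-envelope. The figures below come from the article as quoted above; the fleet-reduction calculation is my own illustration:

```python
# Reported figures from the Austin SAV model (rounded, per the article)
owned_cars = 20_000  # the ~20,000 travelers formerly owned roughly as many cars
savs = 1_700         # shared autonomous vehicles serving the same network

replacement = owned_cars / savs          # roughly the "eleven-to-one" rate
fleet_reduction = 1 - savs / owned_cars  # share of vehicles eliminated

# VMT rises about 10% because empty SAVs reposition between trips
vmt_growth = 0.10

print(f"Each SAV replaces about {replacement:.0f} household vehicles")
print(f"The fleet shrinks by {fleet_reduction:.0%}, yet VMT grows {vmt_growth:.0%}")
```

Far fewer cars, but each one working much harder: that is the combination that cuts parking demand without cutting traffic.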
So, a huge decrease in the total number of cars (presumably, with a corresponding decrease in parking demand, making the already-questionable logic behind zoning code parking requirements even more dubious) but an increase in the total vehicle miles traveled indicates that such technology won’t be a magic cure for congestion. It won’t spell the end of public transit in our cities. If the safety benefits accrue mostly to highway travel, it won’t change the need for safer streets where pedestrians, bikes, and cars mix.
The next question is on the impacts of driverless cars on cities and city planning.
I’m working through my pile of books I collected at the end of the year. I just finished George Packer’s The Unwinding, a book telling the story of the Great Recession through the eyes of several main characters (factory worker turned organizer Tammy Thomas; civil servant turned lobbyist turned civil servant again Jeff Connaughton; truck stop owner turned biodiesel entrepreneur Dean Price) as well as vignettes of famous ones (Jay-Z, Oprah, Robert Rubin, Elizabeth Warren, among others).
The fourth main character in Packer’s story isn’t a single person, but the story of Tampa, Florida. Packer weaves several individuals together as a part of the storyline, including Mike Van Sickler. Van Sickler now writes for the Tampa Bay Times’ Tallahassee bureau, but reported extensively on foreclosures, mortgage robo-signing, and general planning and development issues in sprawling Tampa. Packer introduces Van Sickler, the journalist who once pondered a career change:
When he was covering city hall at The Palm Beach Post, he’d gotten deeply interested in urban planning – for a while he even thought about switching careers, until he realized that city planners had even less clout than reporters.
I had mixed emotions reading this. It’s a shot at my chosen profession that strikes awfully close to home, but also because it speaks to the challenges facing our institutions across the board – not just those involved in planning, development, and all things urban. It’s one of those uncomfortable statements we know to be true.
Packer’s focus on narrative means telling the story from the viewpoint of the characters, rather than offering an overarching analytical framework. This approach threw off Chris Lehmann (“a chronicle of the fraying of our productive lives that shuns cogent ideological or political explanations of the causes of our present crisis in favor of a thick narrative description of its symptoms”), accusing Packer of letting Robert Rubin off too easily for his role in the unwinding.
Lehmann clearly doesn’t prefer the subtlety of Packer’s method, using the perspective of different characters to critique someone like Rubin, rather than state so explicitly. Packer isn’t trying to be Chris Hayes (another good read, by the way) and lay out a theory of institutional decline. Even for Lehmann, however, adding Van Sickler’s character to the story helped provide some critical thinking:
Van Sickler’s story led to a high-profile federal indictment of Kim on money laundering and fraud charges, but the reporter wasn’t satisfied. He pushed against the complacent truisms about the mortgage meltdown that were being retailed by the other prominent outposts of his profession: “We don’t know why, we just got really greedy, and everybody wanted a house they couldn’t afford,” he says, summing up the prevailing consensus in the mediasphere. Van Sickler adds, “I think that’s lazy journalism. That’s a talking point for politicians who want to look the other way. We’re not all to blame for this.”
After Kim pleaded guilty, the United States attorney for Florida’s Middle District announced that more indictments, of far bigger fish in the mortgage food chain, were in the offing. They never came. “Where are the big arrests?” Van Sickler wonders. “Where are the bankers, the lawyers, the real estate professionals?” Packer finishes the thought for him, in a refrain his readers by now know quite well: “Kim was just one piece of a network—what about the institutions?”
Of course, urban planning isn’t separate from the unwinding. The foreclosure crisis, sprawl, and the decline of the middle class are all linked and all have spatial consequences. And these outcomes are all shaped by our institutions, often with substantial unintended consequences. Perhaps that was part of Van Sickler’s hesitation about a career change. What does that say about the planner’s role, both operating within our institutions and outside of them?
As WMATA moves forward on their next generation fare payment system (selecting Accenture to manage a pilot program), there are a few lessons to learn from transit operators around the world. During my most recent trip to Europe, I had the chance to use a number of technologies, showing the direction that operators like WMATA are interested in going with their next generation fare systems.
The wonders of technology:
Part of WMATA’s reasoning for replacing the existing fare system is the need to accommodate a wider range of fare media and fare structures. When WMATA experimented with their peak-of-the-peak rail fare surcharge, the additional coding to implement the fares introduced a noticeable lag for customers tapping their SmarTrip cards at the faregates.
At the same time, technology is not fare policy. Customers and advocates have been asking for unlimited ride pass products that mesh with WMATA’s distance-based fare structure. They’re now offering a ‘short trip’ pass available on SmarTrip cards, but it still doesn’t offer the full coverage of the rail system’s price points (no sense in getting this pass if most of your rail trips are shorter and thus cheaper), nor does it include bus fares. WMATA indicates that they’ve reached the technical limits of what the current SmarTrip card technology can do.
Beyond those current limitations, the NEPP is also interested in making SmarTrip cards usable for proof-of-payment systems. The DC area’s existing commuter rail operators currently use paper-based tickets, manually checked by conductors. Maryland’s Purple Line and DC’s streetcar introduce two more candidates for proof-of-payment in the regional transit mix – both of which would benefit from easy SmarTrip card connections to the existing faregate-based rail system. The NEPP’s goal is to provide the required back-end systems for all of these capabilities.
Two versions of the OV-chipkaart. CC image from Elisa Triolo.
Consider the Netherlands. The Dutch don’t have a particularly large country, but they’ve managed to implement one single farecard for the entire country. The OV-chipkaart (literally, ‘public transport chip card‘ – so much for cutesy branding) is used by all of the public transit agencies and private operators in the Netherlands, as well as the national rail operator, Nederlandse Spoorwegen. For all trips, regardless of mode (or the presence of faregates), you must check in to board/enter and check out to alight/leave. Transfers are handled automatically. Customers can load money onto the cards and pay as they go, or load pass products from any of the partner agencies (such as these examples from GVB in Amsterdam).
The use of check-in/check-out on all modes (including surface transport like buses and trams) is the kind of fare policy that takes advantage of the technology. It enables mixing different collection systems together (such as faregates and validator targets). The busiest national rail stations are equipped with fare gates (though most are locked in the open position for now), while smaller stations have simple pylons with validators. For surface transit without large stations, validators for check-in/out are located near all doors.
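A minimal sketch of that check-in/check-out logic might look like the following. The boarding fee and per-kilometer rate are hypothetical placeholders, not actual OV-chipkaart fares; the point is that every leg on every mode runs through the same calculation, which is what makes automatic transfers and mixed collection systems possible:

```python
# Hypothetical check-in/check-out fare calculation, loosely modeled on a
# distance-based national farecard: a base boarding fee plus a per-km rate,
# charged at check-out regardless of mode or the presence of faregates.

BASE_FARE = 1.08  # hypothetical boarding fee, euros
PER_KM = 0.17     # hypothetical distance rate, euros per km

class Card:
    def __init__(self, balance):
        self.balance = balance
        self.checked_in_at = None  # km marker where the trip began

    def check_in(self, km_marker):
        # Record the entry point; the fare is unknown until check-out
        self.checked_in_at = km_marker

    def check_out(self, km_marker):
        # Compute the fare from distance traveled and deduct it
        distance = abs(km_marker - self.checked_in_at)
        fare = BASE_FARE + PER_KM * distance
        self.balance -= fare
        self.checked_in_at = None
        return round(fare, 2)

card = Card(balance=20.00)
card.check_in(0)
print(card.check_out(12))  # a 12 km trip
```

A faregate and a pylon-mounted validator can both be thin front ends to this one back-end rule, which is exactly the flexibility the NEPP is after.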
Fare media and fare policy are not the same:
Technology is part of the challenge, but it alone cannot overrule fare policy decisions. WMATA is an excellent case, where the technical capabilities of the SmarTrip platform limit the complexity and type of unlimited ride passes, but that doesn’t explain fare policy decisions that penalize transfers between modes. This is a policy decision, not one based on technical limits.
Integrating fares across a transit network is critical in shaping the behavior of users. New York has big ideas for infill commuter rail stations that could make better use of existing infrastructure for transit purposes, but without an integrated fare system (so that intra-city regional rail rides are cost-effective for passengers compared to the subway) the idea will never reach its full potential.
T+ ticket for Paris Metro and RER. CC image from josh.
Consider Paris, where all transit is part of the same fare structure. From the passenger’s standpoint, there’s no difference between using the RER and the Metro within the city. The T+ ticket is easily available to visitors and makes use of the universal faregates shared by the Metro and RER. The unified technology enables a unified fare policy, and the specific policies in turn allow and encourage passengers to use RER services within the city.
Paris has a smartcard, branded as NaviGo. The first version was available only to residents, but worked for the Metro, RER and the Parisian bikeshare system, Velib (something New York is hoping to do with the MTA’s planned open payment system).
Oyster Card. CC image from David King.
Consider London, where the addition of rapid transit service, branding (inclusion on the Tube map; use of the roundel and other brand elements), and fare policy to legacy commuter and mainline rail infrastructure created the Overground. The Overground is now expanding, thanks to its success. London’s Crossrail project will apply some of the same principles, but with new tunnels akin to the Paris RER.
London’s smartcard, Oyster, takes advantage of the system’s technical ability to simplify a complicated fare system for users. Capping daily fares at the price of an equivalent day pass ensures that passengers using pay-as-you-go (particularly visitors) won’t get stiffed. It helps those unfamiliar with the system, demystifying the fares and zones. Like other unlimited use products, it encourages use of the system.
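The capping logic is simple to express. The fares below are hypothetical placeholders, not actual TfL prices: each pay-as-you-go journey is charged until the day’s running total reaches the equivalent day-pass price, after which further journeys that day are free:

```python
# Daily fare capping, Oyster-style: pay-as-you-go charges accumulate until
# they reach the equivalent day-pass price, then stop. Fares here are
# hypothetical placeholders, not actual TfL prices.

DAILY_CAP = 7.70  # hypothetical day-pass-equivalent cap

def charge_day(journey_fares, cap=DAILY_CAP):
    """Return the per-journey charges for one day, applying the cap."""
    total = 0.0
    charges = []
    for fare in journey_fares:
        charge = max(min(fare, cap - total), 0.0)  # never charge past the cap
        charges.append(round(charge, 2))
        total += charge
    return charges

# A heavy travel day: five single fares would cost 12.50 uncapped,
# but the rider pays only up to the cap.
print(charge_day([2.50, 2.50, 2.50, 2.50, 2.50]))
```

The rider never has to decide in advance whether a day pass is worth it; the system makes the pass the worst-case price automatically.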
Buying a fare card:
As great as these products are, they’re not always easy to obtain. The Paris NaviGo isn’t marketed to visitors. In other cities, cards are available through ticket vending machines, but those TVMs likely won’t accept American magstripe credit cards. We can hope that recent fraud incidents will speed the transition to chip-and-PIN credit cards.
Beyond just chip and PIN, American transit agencies like WMATA and New York’s MTA are looking at using contactless credit and debit cards to collect fares directly. Even London is looking to retire the Oyster card as a separate fare medium, keeping the daily fare cap while tracking journeys solely through bank-issued contactless cards.
Concerns for Future Technology:
Each of the European fare card systems has drawn plenty of criticism. However, none of the problems with London’s Oyster card seem as severe as the issues with Chicago’s new Ventra card (replacing the older contactless Chicago Card). Ventra’s rollout has been plagued with errors, but more concerning is Ventra’s wide range of hidden fees. Coming from a system under the transit agency’s control, such fees are alarming – and it’s hard to see how you could avoid similar fees in a fully open payment system, such as London’s proposal, where the banks are issuing the fare media.
There’s also a concern about the ability of transit agencies to continue to offer useful unlimited ride pass products if they turn over the production of all fare media to banks and other payment providers. Good technology can’t magically craft good fare policy, but the two are linked.
CC image from carnagenyc.
Reading and writing about Vishaan Chakrabarti’s A Country of Cities reminded me that I need to add a few titles to the reading list. I’ve read several of these in the past year but since I haven’t been the most diligent in updating the list, there are also several that I’ve read (and written about) a while ago – such as John Kasarda and Greg Lindsay’s Aerotropolis.
It’s a rather wide range, including a whole string of economics-influenced books. Daniel Kahneman’s Thinking Fast and Slow specifically mentioned Nassim Taleb’s The Black Swan, which led to reading his other books, which led to reading Thaler and Sunstein’s Nudge, and so on.
Here are the additions, presented in no particular order. As always, I’m open to suggestions for books to add and/or books to read.
Zoned Out: Regulation, Markets, and Choices in Transportation and Metropolitan Land-Use – Jonathan Levine (2006)
A concise re-framing of the debate about market outcomes in planning and development. Levine disputes the idea that sprawl is a free-market outcome, arguing instead that it is a product of regulation. Arguments in favor of more traditional urban growth often need to prove that it won’t increase traffic (as one example) to justify alterations to the rules that demand auto-centric development. Levine argues that because free-market sprawl is a myth, reforms to allow more urban development should be framed as market-friendly and as improving consumer choice. Doing so shifts the default option for urban development.
Levine was one of my graduate school professors at the University of Michigan.
Fooled by Randomness: The Hidden Role of Chance in Life and in Markets – Nassim Nicholas Taleb (2001)
The first installment of Taleb’s trilogy starts with the premise that humans are oblivious (thanks to our cognitive biases) to the role of randomness in our lives and that we make mistakes about the causality of events all the time. Given the assumptions about causality baked into numerous decision-making points in the city planning process, as well as the role of randomness in any sort of complex system (like a city), this is an excellent read to better understand the limits of our own understanding.
The Black Swan: The Impact of the Highly Improbable – Nassim Nicholas Taleb (2007)
The second book in Taleb’s series discusses the impacts of improbable events. A Black Swan is a surprise event with a large impact, and one that can be rationalized after the fact. Taleb posits that these unexpected changes (events, by definition, that we cannot predict) are tremendously consequential. One of the more interesting arguments for cities is the narrative fallacy, where we use stories to explain things, even if the explanation is wrong.
Taleb’s tone is often openly antagonistic towards establishment figures (more so than in his first book, Fooled by Randomness). You can find an excerpt from the book introducing the concept here.
Antifragile: Things That Gain from Disorder – Nassim Nicholas Taleb (2012)
Taleb’s third and most recent book builds on the previous two, looking not just for random events of large significance, but for things that gain from that chaos. The mythical version would be the Hydra: cut off one head, and it grows two more. It is a different concept from resiliency, because the disorder must actually make the subject stronger. The idea can apply to some cities and urban economies, where creative destruction makes the end result stronger.
Nudge: Improving Decisions About Health, Wealth, and Happiness – Richard Thaler and Cass Sunstein (2008)
Lays out the way we make decisions and the powerful influence of default options on eventual outcomes. Thaler and Sunstein call this a ‘choice architecture.’ The implications of choice architecture for cities are numerous, both in terms of individual behavior (such as travel mode choice) and at the institutional level, such as zoning codes and development decisions (and the unintended consequences therein).
Sunstein also wrote about his government service in the Obama administration, applying these principles of choice architecture and libertarian paternalism to government, but Nudge is by far the more interesting book. Wikipedia’s summary provides a good synopsis of the book’s argument.
The Signal and the Noise – Nate Silver (2012)
This book from the popular election forecaster, baseball statistician, poker player, and quantitative analysis guru covers prediction across all sorts of fields (macroeconomics, meteorology, elections, baseball, global warming, and geology) and the relative successes and failures of each. Some fields fare better than others, some express more confidence in their predictions than others (and that confidence doesn’t necessarily correlate with accuracy), and some are complete failures.
Given the outsized role of prediction in planning for the future, understanding the limits of those predictions is key in shaping policies and plans. Don Shoup’s takedown of the pseudo-science of parking minimum requirements in The High Cost of Free Parking hits on the same themes of the lack of accuracy and precision; some blog discussion on those topics here and here.
The Warmth of Other Suns – Isabel Wilkerson (2010)
A history of the Great Migration of African Americans from the South to northern industrial cities and California. Told through the eyes of three individuals who left the South to establish new lives outside of the direct influence of Jim Crow, it tells the story of a key part of urban history in the US. For more, read Ta-Nehisi Coates’s initial reactions to the book.
Why Nations Fail: The Origins of Power, Prosperity, and Poverty – Daron Acemoglu and James A. Robinson (2012)
This isn’t a book about cities per se, but it does speak to economies and governance, with lessons for cities, not just nations. The authors posit that the main difference between prosperous societies and impoverished ones is the development of inclusive political and economic institutions, spreading power across the society, rather than extractive institutions controlled by a few. A common critique is that the book shortchanges other factors, such as geography.
Aerotropolis: The Way We’ll Live Next – John D. Kasarda and Greg Lindsay (2011)
A story about globalization and the power of agglomeration economies in urban development, told through the lens of a boom in air travel around the world. The description of the value of air travel is persuasive, but Kasarda’s prescription for additional aerotropoli is a tad formulaic. Nevertheless, Lindsay’s account of how air travel enables agglomeration and helps concentrate economic activity is an important story.
Discussed in the blog here and here; also see the aerotropolis tag.