2035 Predictions for Washington Avenue Offer Precision Without Accuracy

Yesterday, Brendon Slotterback (my colleague here on Streets.mn) tweeted something that caught my brain. It was about the plans for Washington Avenue through downtown Minneapolis:

brendon-tweet

The comment was a reaction to a debate taking place online this week (in admittedly small circles) about the recently released Hennepin County study on traffic projections and alternatives for Washington Avenue (downtown Minneapolis, between Hennepin and 35W). The street is up for a complete reconstruction this year, and over the past few months there’s been a long-running discussion over how the street will be redesigned. Will it be the 7-lane status quo, a 6-lane version, a 5-lane boulevard with bike lanes, or something even more pedestrian-friendly (e.g. something with cycletracks or extended sidewalks)?

This has proved to be a hot topic, and city and county politicians have probably spent some time sweating it out. It’s a major issue for the bicycle coalition, for downtown residents, for city council candidates, and for planners and pedestrian activists. And I imagine there are interested parties with deeper pockets. (Zygi Wilf springs to mind, or Target…)

Recently the debate has turned rather wonky. The Washington Avenue study is a 73-page document filled with maps and charts of inscrutable detail. Reading it provides little pleasure, and left to itself, it would be just another of the thousands of such documents produced yearly by Departments of Transportation or Public Works across the country.

forecast-ADT-for-2035

This one seems different, though, because it’s so important. It’s refreshing that in the Twin Cities, we have people willing to read, understand, and critique these kinds of analyses, challenging both their methodology and their conclusions. For example, Brendon posted earlier this week about the study’s assumptions about increasing VMT. (It’s something I’ve written about before.) And later this week, David suggested that the traffic numbers were slightly “massaged,” and Janne over at the MBC posted an epic rebuttal of the study’s conclusions, arguing that the reported results contradict themselves. For anyone interested in the fine details of urban planning and transportation debates, this is an excellent time to be online.

 

Textbook Case of Precision Without Accuracy?


I could think of nothing to add until Brendon’s tweet got me thinking about traffic modeling more generally: the methodological approach and effects of studies like this one. This study seems to be an excellent example of what Donald Shoup, in his tome on parking policy, calls “precision without accuracy.” (I’ve written about this before, too.) In his chapter on the origin of parking minimums, Shoup wades hip-deep into where these requirements come from: why hair salons require one space per chair, restaurants require two spaces per thousand square feet, or swimming pools require one space per cubic meter of water (or whatever the exact numbers might be).

shoup-1

Shoup’s example of meaningless statistical inference.

As Shoup explains, most of these requirements come from studies with shoddy methodology. Most parking studies have very small sample sizes with widely varying contexts and low statistical significance. Yet in parking policy documents, they’re reported with a high degree of precision, often with a ridiculous number of significant digits.

(See the example at right: the “parking generation rate” for a fast-food restaurant is reported to be “9.95 spaces per 1,000 square feet of leasable floor area on a weekday”, yet the R-squared value of the study is a meaningless 0.038. For almost all land uses, the range of possible parking outcomes varies widely, and few of these studies can make a meaningful claim to predict parking demand.)
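You can see how this happens with a toy example. The sketch below uses completely made-up numbers (not Shoup’s actual data; the site counts, floor areas, and parking figures are all invented for illustration): fit a rate to a handful of sites where floor area explains almost nothing about parking demand, and you still get a figure with two tidy decimal places.

```python
import numpy as np

rng = np.random.default_rng(42)

# 18 hypothetical fast-food sites: floor area in 1,000s of leasable sq ft,
# and observed peak parking occupancy that is essentially unrelated to size.
floor_area = rng.uniform(1.5, 4.0, size=18)
peak_parking = rng.uniform(15, 60, size=18)

# "Parking generation rate": a least-squares rate forced through the origin,
# expressed as spaces per 1,000 sq ft -- the way these tables report it.
rate = np.sum(floor_area * peak_parking) / np.sum(floor_area ** 2)

# How much does floor area actually explain? The squared correlation,
# i.e. the R-squared of a simple linear fit.
r_squared = np.corrcoef(floor_area, peak_parking)[0, 1] ** 2

# The rate prints with impressive-looking digits; the R-squared says
# the fit explains almost nothing.
print(f"parking generation rate: {rate:.2f} spaces per 1,000 sq ft")
print(f"R-squared: {r_squared:.3f}")
```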

Shoup correctly mocks parking minimum studies as quasi-science, but mourns the fact that these figures are treated as scripture by the civil engineers who use them every day. He suggests that cities should rid themselves of these faux-statistics. Instead, he advises “accuracy without precision”: parking studies should offer up a range of possible outcomes that reflects the statistical rigor of the data. Such an approach would say that, for example, restaurants require between 2 and 20 parking spaces per thousand square feet (or something like that). That would give policy makers a range of possible outcomes, allowing cities to choose the kind of built environment they want and freeing them from the yoke of parking and traffic predictions.
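What would that look like in practice? Here’s a rough sketch of the same idea, again with invented numbers: rather than publishing one rate, publish the spread the sample actually supports.

```python
import numpy as np

rng = np.random.default_rng(7)

# Same kind of hypothetical sample as above: floor area (1,000s of sq ft)
# and peak parking demand at 18 made-up sites.
floor_area = rng.uniform(1.5, 4.0, size=18)
peak_parking = rng.uniform(15, 60, size=18)

# Per-site demand, normalized by floor area.
spaces_per_1000_sqft = peak_parking / floor_area

# Instead of one rate with two decimal places, report the range the data
# actually supports (here, the middle 80% of observed sites).
low, high = np.percentile(spaces_per_1000_sqft, [10, 90])
print(f"observed demand: roughly {low:.0f} to {high:.0f} spaces per 1,000 sq ft")
```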

Reading through the Washington Avenue study, I think the same principle applies. Even without knowing the details of the traffic prediction model used in this study, how can any analysis pretend to know, to incredible degrees of precision, what someone’s commute time will be in 2035, twenty-three years into the future? The variables are intricate beyond measure. Twenty-three years ago it was 1990, and much has changed since then. Hardly anyone had heard of the internet, a gallon of gas cost $1.16, and a barrel of oil was under $20. How will our commuting patterns, transportation choices, and economic decision-making change between now and 2035, when I’m 67 years old and yelling at kids to get off my lawn? How many trends could you project forward that would completely change this model’s outcome? Google cars, car sharing services, telecommuting, downtown population increases, increases in transit, increases in the gas price… any of these could change the rules of the game. How can we possibly say, down to a precise number of seconds, what a driver moving this particular mile through downtown Minneapolis will experience in 2035? This is surely another case of precision without accuracy.

In 1990, this was cool.

But what are the alternatives? What might this study look like if it provided “accuracy without precision”? Without being a traffic engineer, I’d guess that, based on similar streets around the country and around the world, there would be a wide range of possible outcomes for a street like this. Can we even say that Washington Avenue will be congested at 5:30 in the afternoon on Tuesday, June 13th, 2034? The odds are good, but it’s not a given. At the very least, we have to admit uncertainty. Ideally, a study like this would say that the 2035 traffic flows will fall within a range of X to Y (say, between 23,000 and 35,000 cars per day), depending on a number of land use and planning variables.
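To make the point concrete, here’s a back-of-the-envelope Monte Carlo sketch. Every number in it is invented (the base traffic volume, the growth assumptions, all of it, and none of it comes from the county’s model); the only point is that small, plausible uncertainties about each year’s growth compound into a wide band by 2035.

```python
import numpy as np

rng = np.random.default_rng(2035)

base_adt = 25_000   # hypothetical present-day average daily traffic on the corridor
years = 23          # roughly 2012 to 2035
n_runs = 10_000     # number of simulated futures

# Each year's traffic growth is uncertain: gas prices, telecommuting,
# downtown population, transit, car sharing, and so on. These bounds are
# invented for illustration only.
annual_growth = rng.normal(loc=0.002, scale=0.02, size=(n_runs, years))

# Compound 23 years of uncertain growth in each simulated future.
forecast_2035 = base_adt * np.prod(1.0 + annual_growth, axis=1)

# Report a band, not a point estimate.
low, mid, high = np.percentile(forecast_2035, [10, 50, 90])
print(f"2035 ADT: roughly {low:,.0f} to {high:,.0f} vehicles/day (median {mid:,.0f})")
```

Run it a few times with different growth assumptions and the band moves around, which is exactly the point.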

An “accuracy without precision” approach would give city and county officials a bit more breathing room, more of a sense that they can affect the transportation future of Minneapolis. (Today, this report reads like one of Moses’ tablets.) By 2035, the experience of driving or walking down Washington Avenue might be radically different, based on what we do now. As Brendon tweeted yesterday, in what other fields are we trying to predict the interaction of complex human systems to within 90 seconds, 20 years into the future? If you’ve stuck with this article this far, I challenge you to come up with an answer.

 

2035-traffic-estimates

The 2035 estimates for cars in each lane at each intersection.

 

PS. There’s a public meeting about this on Tuesday (5/14), at the Mill City Museum, at 5:00.

PPS. The answer is, of course, astrology.

Bill Lindeke

About Bill Lindeke

Pronouns: he/him

Bill Lindeke has been writing and blogging about sidewalks and cities since 2005, ever since he read Jane Jacobs. He is a lecturer in Urban Studies at the University of Minnesota Geography Department, the Cityscape columnist at MinnPost, and has written multiple books on local urban history. He was born in Minneapolis, but has spent most of his time in St. Paul. Find him on Twitter @BillLindeke or on Facebook.