A recent report by researchers Greg Lindsey, Kristopher Hoff, Steve Hankey, and Xize Wang from the University of Minnesota is available for download from the Center for Transportation Studies website here. The report is titled Understanding the Use of Non-Motorized Transportation Facilities, and it is a fantastic read for anyone interested in bicycle and pedestrian counts in the Twin Cities.
The report may be a bit academic for casual blog readers, but it is also surprisingly readable. Even if the discussion of statistics isn’t your cup of tea, anyone familiar with the Minneapolis cycling scene will appreciate that all the data used in this study was collected on local trails at locations even casual cyclists will recognize. You’ve probably even seen some of the counting technology trailside and not known what it was. This report is an excellent and accessible document that will tell you everything you ever wanted to know about bike counts and prediction models in the Twin Cities, including how accurate they are.
The report starts by giving a brief overview of various bicycle and pedestrian counting strategies, then explains which methods are in use in the Twin Cities (manual, induction loop, infrared).
If you’ve ever wondered how accurate the various automated counters are, take a look at Chapter 3. Turns out (spoiler alert!) that the data collected by the induction loops is all over the place (mean error of 45% at one location), but the infrared counters are pretty reliably undercounting by about 10%*. In addition, the induction loops are only collecting usable data around 80% of the time, while the infrared counters are collecting usable data about 90% of the time.
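For readers curious how an error figure like that is arrived at, here's a minimal sketch of comparing an automated counter against manual "ground truth" counts. The ~10% infrared undercount comes from the report; the sample numbers and function name below are hypothetical, made up purely for illustration.

```python
def mean_percentage_error(manual, automated):
    """Average of per-observation percentage errors relative to manual counts."""
    errors = [(a - m) / m * 100 for m, a in zip(manual, automated)]
    return sum(errors) / len(errors)

# Hypothetical hourly counts at one location: infrared running ~10% low.
manual_counts = [100, 120, 80]
infrared_counts = [90, 108, 72]

print(round(mean_percentage_error(manual_counts, infrared_counts), 1))  # -10.0
```

A consistent negative error like this is arguably less troublesome than the induction loops' scatter, since a systematic undercount can be corrected with a single adjustment factor.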
Infrared: 1 Induction: 0
Oh, and manual counts? About 1.4% error.
Chapter 4 is a smattering of descriptive statistics about the bike and pedestrian counts based on factors such as the presence of a bus line or roadway functional classification. There’s nothing groundbreaking about this info, but it’s certainly an interesting read. For example, the average estimated number of pedestrians on roadways with a bus route is 1,071, but the average estimated number of pedestrians on roadways without a bus route is 547. The report allows readers to draw their own conclusions.
Chapter 5 establishes procedures for estimating daily (and monthly and annual) bike counts from a limited amount of field data. The authors calculate scaling factors that can be used to estimate 12-hour bike or pedestrian counts from only one or two hours of collected data. Here’s the short version: if you can only collect one hour’s worth of bike data at a location, your best bet is to collect data from 4:00-5:00 PM and multiply the number of bikes you count by 8.4.
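The expansion-factor arithmetic is simple enough to sketch in a few lines. The 8.4 multiplier for a 4:00-5:00 PM count is from the report; the function name and the sample count of 55 bikes are my own illustrative assumptions.

```python
# 12-hour bikes ≈ 8.4 × bikes counted 4:00-5:00 PM (per the report's Chapter 5)
PEAK_HOUR_FACTOR = 8.4

def estimate_12hr_count(peak_hour_count, factor=PEAK_HOUR_FACTOR):
    """Expand a single peak-hour count to an estimated 12-hour count."""
    return round(peak_hour_count * factor)

print(estimate_12hr_count(55))  # e.g., 55 bikes counted 4-5 PM -> 462 estimated
```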
They also compared weekend traffic with weekday traffic by computing a ratio (weekend/weekday) to estimate the mix of recreational vs. utilitarian trips on the Midtown Greenway. Generally, the ratios fell somewhere between 1 and 2. The authors roughly equate weekend trips with recreational trips, which is surely true to some extent, but it is one of the sketchier assumptions in the paper.
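The ratio itself is straightforward; here's a sketch under the same assumption the authors make (weekend use as a proxy for recreation). The sample averages are hypothetical, not figures from the report.

```python
def weekend_weekday_ratio(avg_weekend_count, avg_weekday_count):
    """Ratio > 1 suggests relatively heavier weekend (roughly recreational) use."""
    return avg_weekend_count / avg_weekday_count

# Hypothetical average daily counts for a trail segment.
ratio = weekend_weekday_ratio(3000, 2000)
print(round(ratio, 2))  # 1.5, i.e., within the report's typical 1-2 range
```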
Chapter 6 is the real meat of the paper (and the real contribution to academia), which develops several quantitative models for predicting bicycle and pedestrian counts based on a host of other measurable variables. You’ll have to read the report to get the full effect, but the result is this graphic, which attempts to predict bicycle and pedestrian travel on Minneapolis roadways and trails:
Readers, please take a moment to review this article if you have an interest in bicycle and pedestrian counts. In the comments section below, I’d love to hear some of your own thoughts about the study.
*Either I’m reading it wrong, or there is an error in the column headings for Table 3.4 (mean hourly manual count and mean hourly TMI count should be swapped)