Researchers at the University of Minnesota have published in Physical Review Letters[link][bib] an article claiming a “universal power law” governing pedestrian behavior. Their website has some nice videos.
Abstract: We address a long-standing hypothesis as to what is the fundamental nature of the laws that drive human interactions in a crowd. Here, we take a “big-data” approach to resolve this question, performing the largest meta-study of existing crowd data to date. The resulting analysis reveals a simple, universal power law governing pedestrian behavior. Applications of this law include simulating crowd flows and predicting pedestrian behaviors, which could potentially lead to safer buildings, improved pedestrian tracking, and ultimately new ways to understand what drives human behaviors.
The folks at Taylor and Francis have given me 50 electronic copies of my papers published by journals they own to distribute. Since papers don’t earn interest sitting on the shelf, please download if you have any interest in reading the published version. Of course, the preprints of all my papers are always available.
There are now a slew of reports that rank congestion. The grand-daddy of these reports is the Urban Mobility Report from the Texas A&M Transportation Institute (TTI) at Texas A&M University. TTI also performs a separate analysis of congestion for the Federal Highway Administration. Some of the highway GPS data vendors (TomTom, INRIX) put out their own reports. Maybe I have missed some.
These reports are useful for comparisons across cities (which city is most congested?) and ideally for comparisons across time (is city X getting more or less congested over time?). Comparison is more difficult at a national level, because then you need to weight the congestion by those who experience it, recognizing that many people experience very little congestion or are not in a monitored place.
The TTI Urban Congestion Report for FHWA (data here) uses Highway Performance Data: probes (INRIX, I assume) in recent years and loop detectors in earlier years.
Reading the Urban Congestion Report we find that congestion increased from 2007 pre-recession to 2015, the travel time index (cleverly abbreviated TTI) rising from 1.26 to 1.37 over that time period on the roads considered. Congestion is getting worse. However, the hours of congestion dropped from 5:20 to 4:33. Congestion is getting better.
Reading the Urban Mobility Report the travel time index rose over the same years from 1.21 to 1.22. Congestion is getting worse. Delay per commuter remains at 42 hours. Congestion is unchanged.
TomTom does not, as far as I can tell, produce a national average. [Disclosure: TomTom has a partnership with the Accessibility Observatory; their disaggregate data are inputs to our National Accessibility Evaluation.]
INRIX says the average commuter wastes 50 hours per year in congestion, but does not report a Travel Time Index. We can try to reverse-engineer one. At 250 days of commuting, this INRIX estimate of 50 hours is 1 hour per week, or 12 minutes per day, or 6 minutes in each direction. A commute at the national average of about 25 minutes each way then implies a freeflow time of 19 minutes: 25/19 ≈ 1.32. But 6 minutes is not really that much (though certainly it is a cost), and averages can be misleading; it is more likely something like 3 minutes on 4 days a week and 18 minutes on 1 day. Unreliability is an issue.
In contrast, the TTI estimate of 42 hours is 10 minutes per day, or 5 minutes each way, which gives us an estimate of about 1.25. This is lower than the 1.37 TTI they report. The estimate also differs by about 20% from the delay implied by INRIX. Recognizing the different methodologies, this is still a large gap given they are presumably using the same underlying data.
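The reverse-engineering above can be sketched as a short calculation. This is a minimal illustration, assuming the figures used in the text (250 commute days per year, a 25-minute one-way commute); `implied_tti` is a hypothetical helper name, not anything the reports themselves publish.

```python
def implied_tti(annual_delay_hours, commute_days=250, one_way_minutes=25.0):
    """Back out a travel time index (congested time / freeflow time)
    from an annual per-commuter delay figure."""
    delay_per_day = annual_delay_hours * 60.0 / commute_days  # minutes of delay per day
    delay_one_way = delay_per_day / 2.0                       # minutes per direction
    freeflow = one_way_minutes - delay_one_way                # implied freeflow time
    return one_way_minutes / freeflow

# INRIX: 50 hours/year -> 6 minutes each way -> 25/19
print(round(implied_tti(50), 2))   # ~1.32
# TTI: 42 hours/year -> ~5 minutes each way -> ~25/20
print(round(implied_tti(42), 2))   # ~1.25
```

The point of writing it out is that the answer is quite sensitive to the assumed base commute time and day count, which is part of why the sources disagree.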
We can compare individual cities as well. I will pick two that I have familiarity with: Minneapolis and Washington for the most recent year.
The TTI values are the same in the two reports here (though they differ above). I am not clear why they should be identical.
INRIX doesn’t produce a travel time index (the ratio of congested to freeflow travel time), so I will construct one from the 75 hours of congestion, 250 commute days per year, and a one-way commute time for the Washington DC metro area of 30 minutes. 75 hours of congestion implies 9 minutes of congestion per one-way commute, or a 21-minute freeflow time in DC. 30/21 ≈ 1.43.
TomTom produces a congestion level. I will add 1 to it to get a Travel Time Index equivalent. It is reported for the morning and evening peaks separately; I average the two.
      TTI   INRIX  TOMTOM
MSP  1.39    X     1.35
DCA  1.59   1.43   1.50
The general trends are similar; DC is, not surprisingly, more congested than MSP. The published numbers describe the same thing with different methodologies and slightly different sources.
So in terms of which source reports the most congestion for these two cities, TTI comes in first and INRIX comes in third. Yet nationally, INRIX says we are more congested than TTI does.
TomTom doesn’t report details on every city, but it profiles selected cities, for instance Pittsburgh, which it says has 81 hours of extra travel per year. But that is not just “commuting”. INRIX does not report Pittsburgh. TTI says 39 hours per commuter per year. These numbers are not directly comparable either: most travel is not work travel, and some of that non-work travel also experiences congestion. 81 hours implies about 13 minutes per day over a 365-day year.
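The incomparability comes down to the divisor: per-commuter figures spread delay over roughly 250 commute days, while all-travel figures spread it over the calendar year. A quick sketch, using only the numbers and day-count assumptions already stated in the text:

```python
def delay_minutes_per_day(annual_hours, days_per_year):
    """Convert an annual delay total into minutes of delay per day."""
    return annual_hours * 60.0 / days_per_year

# TomTom-style figure for Pittsburgh: all travel, over a 365-day year
print(round(delay_minutes_per_day(81, 365), 1))   # ~13.3 minutes/day
# TTI-style figure: per commuter, over ~250 commute days
print(round(delay_minutes_per_day(39, 250), 1))   # ~9.4 minutes/day
```

Even before accounting for the different underlying data, the choice of denominator alone moves the daily figure by several minutes.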
Someone with more time and/or funding could do this analysis more systematically.
One additional problem with the GPS-based data sets is the sparseness of data in earlier years, and thus the short time trend available. The problem with loop detectors is that they more or less cover only freeways, and older data are likely suspect given how few detectors were deployed in the 1980s and 1990s.
Many state DOTs have their own congestion reports (e.g. Minnesota). These are undoubtedly more accurate locally, but cannot be compared between places in other states due to differences in methodologies.
Now, I certainly don’t believe congestion reduction is an appropriate goal for transportation, but it may be a means to achieve a more appropriate goal like accessibility. There are many ways to reduce congestion, I have identified 21 strategies in an earlier post.
The measures are usually place-weighted, so they don’t account for trips, and cannot address whether 30 miles at 30 miles per hour is better or worse than 60 miles at 60 miles per hour. Both take an hour, but the latter results in twice as many vehicle miles traveled (and may also provide higher accessibility). For that you need more information.
Still, people like these indices, and they could be useful as a time series indicator of whether traffic is worsening or not. But given the high variability between sources, my advice is to remain skeptical.
• Social equity is increasingly incorporated as a long-term objective into urban transportation plans.
• This research proposes a set of new transit accessibility measures that incorporates both travel time and transit fares.
• It then uses this measure to evaluate the equitable distribution of accessibility by transit in Montreal, Canada.
• Travel time accessibility measures estimate a higher number of jobs that can be reached compared to combined travel time and cost measures.
• The degree and impact of these measures varies across the social deciles.
• Residents of socially disadvantaged areas have more equitable accessibility to jobs using transit.
Social equity is increasingly incorporated as a long-term objective into urban transportation plans. Researchers use accessibility measures to assess equity issues, such as determining the number of jobs reachable by marginalized groups within a defined travel time threshold and comparing these measures across socioeconomic categories. However, allocating public transit resources in an equitable manner is not only related to travel time, but also to the out-of-pocket cost of transit, which can represent a major barrier to accessibility for many disadvantaged groups. Therefore, this research proposes a set of new accessibility measures that incorporates both travel time and transit fares. It then applies those measures to determine whether people residing in socially disadvantaged neighborhoods in Montreal, Canada experience the same levels of transit accessibility as those living in other neighborhoods. Results are presented in terms of regional accessibility and trends by social indicator decile. Travel time accessibility measures estimate a higher number of jobs that can be reached compared to combined travel time and cost measures. However, the degree and impact of these measures varies across the social deciles. Compared to other groups in the region, residents of socially disadvantaged areas have more equitable accessibility to jobs using transit; this is reflected in smaller decreases in accessibility when fare costs are included. Generating new measures of accessibility combining travel time and transit fares provides more accurate measures that can be easily communicated by transportation planners and engineers to policy makers and the public, since it translates accessibility measures to a dollar value.
I found this old video — Accessibility* – An Attainable Goal, from the Montgomery County Department of Transportation and uploaded it.** It’s about the challenges of traveling by wheelchair in an environment that just doesn’t care. The Americans with Disabilities Act addresses this to some extent on new construction, but there is so much work to do. While this is from the 1990s, the general problems remain, although hopefully not as widespread as they once were.
* Accessibility for the physically disabled, not access to jobs or other destinations, the way the term is normally used on this blog.
To reduce racial bias in traffic stops, we in the transport community need to respond: Not in Our Name. We need to offer solutions to this problem. One way to start is to reduce traffic stops. While traffic safety remains important, many stops are not genuinely necessary from a safety perspective, and may not be the most effective way to achieve that safety. Below I identify some strategies to this end; undoubtedly there are others.
Automated Enforcement. Many traffic violations, like speeding and red-light running, can be effectively enforced by tireless and largely unbiased machines, avoiding the need for human interaction. Unfortunately, these are not legal in Minnesota, but they are widespread in other parts of the US. The revenue from these should be returned to the taxpayer directly, not used as a piggy bank for local government. (See Fine reform, below.)
Inspections. Many states have standard annual vehicle inspection programs. Many vehicle violations (tail lights, mufflers, and so on) can be addressed at that time, and need not require police attention at all. Just as police do not enforce pollution control violations on a regular basis (although these may be about as deadly in the aggregate), they need not enforce minor vehicle condition issues.
Secondary offenses. Most vehicle violations can be treated as secondary offenses rather than primary offenses, the way some states deal with seatbelts. “Secondary” seat belt laws “state that law enforcement officers may issue a ticket for not wearing a seat belt only when there is another citable traffic infraction.” In other words, if broken-equipment violations were secondary offenses, vehicles would not be stopped for them; tickets would be written only when a stop was made for a more serious driving violation.
Evidence-based safety regulation. Much of the safety regulation in the existing legal code was drafted in the middle of the 20th century, based on engineering judgment rather than empirical evidence. It is time to review the implicit safety claims of all existing traffic laws, and decide which are important enough to be enforced, which can be enforced through an annual vehicle inspection process rather than through vehicle stops, and which can be secondary rather than primary offenses. An objective study, for instance by the Transportation Research Board of the National Academies, should review existing laws against existing empirical evidence. The aim would be to ensure traffic laws are evidence-based. After such a study, laws supported by evidence as cost-effective ways to increase safety should be retained, laws lacking evidence should receive further study, and laws that evidence finds waste resources or are counter-productive should be changed.
Fine reform. All revenue earned from traffic fines should be returned to citizens of the municipality, rather than kept by government agencies. This reduces the bureaucracy’s incentive for enforcement solely for the purpose of municipal finance. Fines are not taxes, and create perverse incentives, as was seen in Ferguson, Missouri, among other places. The reforms should put safety rather than revenue at the core. The redistribution could follow the model of the Alaska Permanent Fund, which pays a dividend to each year-long resident of the state. Cities and states that need revenue should raise taxes, rather than depending on, or worse encouraging, their citizens to break the law. A profit-maximizing strategy for fines, which expects and induces violations, rather than a safety-maximizing strategy for enforcement, undoubtedly trades away safety to gain revenue.
Clearly these changes need to be made state-by-state, though the federal government should provide a carrot and stick through making them a condition of receiving highway funds, the way many other safety changes, like seatbelt and drunk driving laws, have been introduced, and by supporting review of traffic and vehicle codes. Eventually, with autonomous vehicles, some of these issues might become moot. But that hasn’t happened yet, and these changes should happen soon.
This will remove a tool from the police, taking away an excuse for stopping vehicles which may possibly be driven by criminals. So does the Fourth Amendment.
There are many other reforms that are needed, and better police training and methods are required. Lots has been written, much will continue to be written. We in the transport community can make progress here.
The US will soon face the choice of monarchy vs. fascism.*
Should the US continue down the unfortunate path of nominating people from the same family to serve as President because they have a brand? Note, this is hardly unique. In the context of the United States, Clinton follows the Presidents Bush, Kennedy, Roosevelt, Harrison, and Adams (and Governors Romney, we might add) who have had same-named family members follow the same path in seeking the Presidency with more or less success. The dynastic nature of politics also must consider positions like Senator, Representative, and Governor which are often family businesses.
A possible mechanism for familial brands is that the brand name gives a leg up, bringing an established marketing and funding machine with it, and in theory the second or nth member of a family would reproduce similar policies and governance styles and adhere to the commitments of their forebears. Alternatively, many people might just be sheep who like monarchs, and this is as close as it gets.
The Clintons, as is their wont, have found yet another loophole, this one around the 22nd Amendment limiting Presidents to two terms. You may say Hillary and Bill are separate individuals, but the law considers marriage such a tight bond that spouses cannot be compelled to testify. “Two for the price of one,” as one of them once said.
My ancestors had an expression: Best government: Good Czar, Worst government: Bad Czar. The point I think is that dictators (who in the Russian case combine fascism with monarchy) can be more efficient and effective, but lack checks and balances that democracy provides.
So in addition to the normal questions of who will govern better or who aligns with my policy preferences or who would I rather drink a beer with (which is perhaps the stupidest reason to support someone), the question arises: who is more likely to turn over power in 4 or 8 years, the monarch or the fascist? I think the evidence in the US is that the successful dynasts, as bad as they have generally been (FDR aside, and he was only a fifth cousin of Theodore Roosevelt), did not try to take over the government, and all willingly turned over power (FDR aside again, though one assumes had he lived he would not have run in 1948, and that had he lost he would have relinquished the office).
Even if the dynasts did try to hang on to power, the military would have been unlikely to support them. For the fascist, American experience is insufficient, and the evidence abroad is that fascists do not tend to turn over power willingly, but instead change the constitution to retain office “legally”.
The question is not simply how bad they will be in office, but how long they or their cronies or children will be in office. For the first time in a while there’s a risk they will end the Republic. So much of America just feels like End of Empire days now; this election is really not helping.
* The US also has third party candidates, whose chances are more hopeful than usual given the climate, but who remain disfavored under the electoral regime where a vote for a third place candidate is considered “wasted”. Ranked choice voting would help, but it will not be in place for national elections any time soon. Full disclosure: I’ll probably vote for the Governors, Johnson/Weld (Libertarian) myself, unless the election looks close in Minnesota with the Libertarians in a distant 3rd.
Highways also carry a particular resonance for the grievances of black civil rights activists today, given that many deadly encounters with police, such as Castile’s, began with traffic stops (this pattern has also prompted a new cry from transportation planners: “not in our name!”).