A Model of the Rise and Fall of Roads

Recently published:


Abstract: This paper analyzes the relationship between network supply and travel demand and describes a road development and degeneration mechanism microscopically at the link (road-segment) level. A simulation model of transportation network dynamics is developed, involving iterative evolution of travel demand patterns, network revenue policies, cost estimation, and investment rules. The model is applied to a real-world congesting network for Minneapolis-St. Paul, Minnesota (Twin Cities), which comprises nearly 8000 nodes and more than 20,000 links, using network data collected since 1978. Four experiments are carried out with different initial conditions and constraints, the results of which allow us to explore model properties such as computational feasibility, qualitative implications, potential calibration procedures, and predictive value. The hypothesis that road hierarchy is an emergent property of transportation networks is corroborated and the underlying reasons discovered. Spatial distribution of capacity, traffic flow, and congestion in the transportation network is tracked over time. Potential improvements to the model, in particular, and future research directions in transportation network dynamics, in general, are also discussed.

Comparing measures of auto accessibility

This note compares the original Access Across America, using 2010 data (A2010), released in April 2013 with the most recent Access Across America: Auto 2015 (A2015), which was released in September 2016.

Auto accessibility to jobs at 8:00 in 2015.

As we write in A2015:

An earlier report in the Access Across America series took a different methodological approach to evaluating accessibility across multiple metropolitan areas. [A2010] used average speed values and average job densities across entire metropolitan areas, rather than detailed block-level travel time calculations, to estimate job accessibility. Because of these differences, the results of that report cannot be directly compared with those presented here.

Thus it is important to note that almost all differences between the two reports can be attributed to differences in methodology, rather than changes over time due to different land use patterns or travel speeds. We anticipate that as subsequent reports come out annually in the 2016-2020 time period, a methodologically consistent time series will be available enabling substantive comparisons of accessibility over time across the United States.

A2010 used a macroscopic approach to computing accessibility, as a function of average network speed, circuity by time band, and average employment density for the region, constrained so that accessibility did not exceed the region’s number of jobs.

A_t = min( E , ρ_emp · π · (V_n · t / C_t)^2 )

A_t = Jobs accessible within time threshold t.

E = Total regional employment (the cap ensuring accessibility does not exceed the region’s jobs).

ρ_emp = Urban area employment density (jobs · km^−2).

t = time threshold.

V_n = Average network velocity in km · h^−1.

C_t = Average circuity of trips (ratio of actual distance to Euclidean distance) within the time threshold (e.g., a 20-minute threshold uses the circuity of trips of 0–20 minutes).

The methodology is outlined in detail in Network Structure and City Size.
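
For concreteness, here is a small sketch (in Python, which is an assumption of convenience, not how A2010 was produced) of the macroscopic calculation described above; all input values are purely illustrative, not figures from the report.

```python
import math

def macro_accessibility(rho_emp, v_n, c_t, t_minutes, total_jobs):
    """Macroscopic job accessibility within a travel time threshold.

    rho_emp    : average employment density (jobs per km^2)
    v_n        : average network speed (km/h)
    c_t        : average circuity for trips in this time band (>= 1)
    t_minutes  : travel time threshold (minutes)
    total_jobs : regional employment ceiling (accessibility cannot exceed this)
    """
    # Straight-line radius reachable in t: network distance divided by circuity
    radius_km = v_n * (t_minutes / 60.0) / c_t
    reachable_area = math.pi * radius_km ** 2         # km^2
    return min(total_jobs, rho_emp * reachable_area)  # cap at regional jobs

# Illustrative inputs only (not A2010 values)
for t in (10, 20, 30, 40, 50, 60):
    print(t, round(macro_accessibility(rho_emp=250, v_n=45, c_t=1.25,
                                       t_minutes=t, total_jobs=1_500_000)))
```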

In addition, A2010 assumes that regions do not interact. So a traveler in the Baltimore region, for instance, only has available the jobs in Baltimore, not the jobs in Washington, DC. If the representative traveler for Baltimore could reach all of Baltimore’s jobs in 40 minutes, then the 50- and 60-minute accessibility would be the same. This phenomenon is especially important in Baltimore, Hartford, Los Angeles, Providence, Raleigh, Riverside, San Francisco, and San Jose.

A2010 used 2010 travel speeds (metropolitan averages on arterials and freeways from Inrix) and 2010 Census data.

A2015 used a microscopic approach to computing accessibility, making a separate computation for each census block in the United States. A2015 used 2015 link-level travel speeds (essentially every road segment in the US has a speed provided by TomTom, based on their GPS data collection program). Jobs by census block came from the Census LEHD program, using 2013 data.

There are additional differences in how the two reports defined metropolitan geographies, as Census definitions of metropolitan areas may have changed between the two reports.

While the results are similar in terms of order of magnitude, they differ in specific ways.

If we accept A2015 as “ground truth”, as it is far more detailed in its use of data and more accurate in its assumptions, the expected biases of A2010 are as follows:

Relative to A2015, A2010 underestimates access in regions that adjoin other regions, particularly at longer travel time thresholds, because it does not consider jobs in other regions as available and because it uses average speeds, which are too high for short trips and too low for long trips.

Relative to A2015, A2010 overestimates access for short travel time thresholds, because it uses regional average speeds, rather than the lower speeds actually available for short trips that typically use less freeway.

As shown in the figure below, this pattern is largely, but not uniformly, borne out by comparison of the metropolitan averages. On average, A2010 is higher than A2015 for thresholds of 10, 20, and 30 minutes, but lower for 40, 50, and 60 minutes. The averages are very close between 30 and 40 minutes, longer than the average daily commute in these cities (a bit under 30 minutes on average).

I am actually surprised (and pleased) at how close the overall average is, given all of the assumptions that went into the macroscopic estimate. Macroscopic approximations can work well for a general understanding of the patterns of cities, but they remain macroscopic approximations. More detailed analysis is necessary to get at spatial differences and irregularities in those general patterns that we illustrate on the maps, differences that give life to individual cities.

 

Comparison of metropolitan average auto accessibility by travel time threshold: A2010 vs. A2015.

Population exposure to ultrafine particles: size-resolved and real-time models for highways

Recently published:

Highlights

• This study estimates individual and population exposure to ultra-fine particles in the Minneapolis – St. Paul (Twin Cities) metropolitan area, Minnesota.
• We combine a model of on-highway size-resolved UFP concentrations with GPS travel patterns.
• The most significant exposures were found at freeway interchanges.
• Peak hour concentration is about 10 times larger than the off-peak hour concentration.
• Peak hour exposure is about 100 times its off-peak hour counterpart.

Abstract

Prior research on ultrafine particles (UFP) emphasizes that concentrations are especially high on-highway, and that time spent on highways contributes disproportionately to total daily exposures. This study estimates individual and population exposure to ultrafine particles in the Minneapolis – St. Paul (Twin Cities) metropolitan area, Minnesota. Our approach combines a real-time model of on-highway size-resolved UFP concentrations (32 bins, 5.5–600 nm); individual travel patterns, derived from GPS travel trajectories collected in 144 individual vehicles (123 h at locations with UFP estimates among 624 vehicle-hours of travel); and loop-detector data, indicating real-time traffic conditions throughout the study area. The results provide size-resolved spatial and temporal patterns of exposure to UFP among freeway users. On-highway exposures demonstrate significant variability among users, with the highest concentrations during commuting peaks and near highway interchanges. Findings from this paper could inform future epidemiological studies of on-road exposure to UFP by linking personal exposures to traffic conditions.

First best, second best, and third parties

There is a lot of noise about how voting for a third party is “wasting your vote” and taking it away from the only real candidates in this election, those of the two largest parties (The Democrats and the Republicans). Only one person can be elected, and only one other person has a reasonable shot of being elected if the first one isn’t.

In some sense, this is true. Gary Johnson (Libertarian) and, save us, Jill Stein (Green) are highly unlikely to win. So a vote for them prevents you from voting for the alternative you prefer between the two candidates who might plausibly win.

Leaving aside arguments about how well Ross Perot, George Wallace, Theodore Roosevelt, and Abraham Lincoln have done historically, which suggest there is a possible role for Third Parties in US Presidential Elections, just as there is in Governor races (Jesse Ventura), I want to propose another argument.

The system of two major parties will never be broken if everyone always votes for a candidate from the two major parties. It’s basically a mutually-reinforcing belief system. If you believe everyone else believes that a third party cannot win, you yourself will believe that a third party cannot win, and you will not vote for a third party, in an effort to make your vote count. The system cannot change under that logic.

We are stuck in what Richard Lipsey and Kelvin Lancaster dubbed a “second best” solution (the best solution given the imperfections of the rest of the world differs from the best solution if the rest of the world were optimal).

The only way to change beliefs about what is possible in the future is to act differently. If we want what we perceive as a first-best world where the candidate we like might win, we are going to have to vote for those candidates in earlier elections to change the belief structure of other voters about their electability.

Now this election is especially risky (they all are, but this one more so), in that a demagogue is closer to power than usually occurs. However the likelihood of any one vote being the marginal vote is infinitesimally small.

People will trot out the 2000 election and Nader and Gore on the risk of protest votes. Gore lost for lots of reasons; there are many ‘butfors’, Nader possibly among them, but hardly the only one. (Sigh: Bill Clinton, butterfly ballots, hanging chads, the US Supreme Court, Gore’s team asking for a partial but not total recount, and so on.)

Yet there are a few million people in this country who think we should have a Green Party government. I think they are wrong. I think the Greens should elect some actual Legislators and Senators before they try to run for the Presidency (they do have some Minneapolis City Council members, including one in my own district). But I also think they should have a voice. And if, bless your soul, you think Moscow should run US foreign policy and that vaccines are on net bad, you should vote for Jill Stein (and rethink your life choices). Or even if you think the Greens would be better in the long run, and government should be more green, and want to move the Overton Window in the green direction, you can justify a vote for Stein.

Similarly, this year Gary Johnson is the Libertarian nominee, and he is a more serious threat to the political party establishment, in that he (and his VP) have more governing experience than the major party candidates, and they are polling in the neighborhood of 10%. If your first-best world has a Libertarian President and Congress (or you simply think we should move farther in the direction of lower-case “l” libertarianism without being absolutely upper-case “L” Libertarian), you should vote for Johnson/Weld. Again, it would be good for the Libertarians to show success at lower levels of government and Congress (beyond Ron Paul) before trying to take over the Executive Branch, but we can’t always get what we want.

A serious showing in 2016 helps a third party’s candidates in 2020. It reframes people’s expectations. It moves towards your first-best world in the long run, even if it is suboptimal from a second-best perspective in the short run. Once a third party gets in the 30% range of support, it becomes a plausible, unwasted vote in the short term. The third party will be unlikely to get 30% support before it gets 20% or 10%. That could all happen in one election cycle, or it could take multiples.

Of course, if you like either the Democratic or Republican nominees best, you should vote for them.

Politics may seem like a one-shot game, but it is in fact repeated. If all goes well, I will spend as much of my life in the next administration (2017 to 2021) as in the following administration (2021 to 2025). The rules for electing Presidents in the US may be nuts, but they are the only rules we have right now.

Adding transit costs to the accessibility equation offers a better gauge of transportation equity

CTS Catalyst reports on our recent paper: The cost of equity: Assessing transit accessibility and social disparity using total travel cost. Transportation Research Part A: Policy and Practice, 2016. Reprinted below:

Access to opportunities such as jobs and services is one of the main benefits of public transit. To ensure this benefit is maximized, transportation planners are increasingly seeking to distribute transportation resources as fairly as possible in order to provide a variety of options to commuters and increase their access to opportunities.

Typically, transportation accessibility is measured using the number of opportunities that can be reached within a given time threshold. For example, a planner might look at how many jobs residents of a socially disadvantaged neighborhood can reach within 45 minutes to see where improvements might be needed. However, these traditional accessibility measurements have a significant shortcoming.

“Research shows us that low-income and socially disadvantaged individuals are the most likely to be transit dependent and face barriers to accessing their desired destinations,” says David Levinson, a professor in the Department of Civil, Environmental and Geo-Engineering. “If we only look at time as a constraint on accessibility, we leave the crucial factor of financial access out of the equation. For low-income populations, transit fares can present a major barrier to accessibility, since fares can consume a large share of individuals’ budgets. As a result, planners and researchers may overestimate job accessibility, particularly for low-income riders.”

In recent research, Levinson and his co-authors developed a set of innovative accessibility measures that incorporate both travel time and transit fares. Then, they applied those measures to determine whether people living in socially disadvantaged neighborhoods—in this case, in Montreal, Canada—experienced the same levels of transit accessibility as those living in other neighborhoods. Finally, they compared the results of their new measurement with traditional accessibility measures that account only for travel time.
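
As an illustration of the idea, rather than the paper’s actual measures or parameters, the sketch below counts the jobs reachable within both a travel time threshold and a fare budget; the zones, times, and fares are hypothetical.

```python
def jobs_accessible(destinations, time_threshold_min, fare_budget):
    """Cumulative-opportunities accessibility with a time and a fare constraint.

    destinations: iterable of (jobs, travel_time_min, fare) tuples for one origin.
    """
    return sum(jobs for jobs, t, fare in destinations
               if t <= time_threshold_min and fare <= fare_budget)

# Hypothetical origin: three destination zones (jobs, minutes, fare in $)
zones = [(12_000, 25, 3.25), (8_500, 40, 3.25), (20_000, 44, 6.50)]

time_only  = jobs_accessible(zones, 45, fare_budget=float("inf"))  # ignores fares
with_fares = jobs_accessible(zones, 45, fare_budget=3.25)          # single-fare budget
print(time_only, with_fares)  # adding the fare constraint lowers measured accessibility
```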

“We found that accessibility measures relying solely on travel time estimate a higher number of jobs than our measure,” says Levinson. “For the most socially disadvantaged residents, factoring in a single fare reduces job accessibility by 50 percent; adding a monthly pass reduces it by 30 percent.”

The study also found that public transit generally favors vulnerable populations in Montreal. “Low-income populations generally reside in the central city, near transit stations and job concentrations. Higher-income populations are concentrated in suburban areas, and suburban fares are much more expensive. So in this case, residents of socially disadvantaged areas have more equitable accessibility to jobs even when fare cost is included,” he explains.

The new accessibility measure offers several benefits for transportation planners, Levinson says. First, it will allow them to better explain to policymakers the number of jobs a resident can reach for a given cost, thereby allowing fare structures and hourly wages to be judged against the cost of commuting. In addition, it can help planners identify neighborhoods that need transportation benefits the most and provide broader insight for the transportation community into how combined measures of accessibility can be used to better understand the impact of transportation planning decisions.

This research was conducted as a collaboration between Levinson and the Transportation Research at McGill (TRAM) group, which is led by Ahmed El-Geneidy, associate professor at McGill University in Montreal and a former U of M researcher.



Elements of Access: Introduction to Topology

Networks play a role in nearly all facets of our daily lives, particularly when it comes to transportation. Even within the transportation realm lies a relatively broad range of different network types such as air networks, freight networks, bus networks, and train networks (not to mention the accompanying power and communications networks). We also have the ubiquitous street network, which not only defines how you get around a city, but also provides the form upon which our cities are built and experienced. Cities around the world that are praised for having good street networks come in many different configurations, ranging from the medieval patterns of cities like Prague and Florence, to the organic networks of Boston and London, and the planned grids of Washington D.C. and Savannah, Georgia. But how do researchers begin to understand and quantify the differences in such networks?

PRIMAL STREET NETWORK FOR METROPOLIS WITH INTERSECTIONS AS NODES AND SEGMENTS AS LINKS

The primary scientific field involved with the study of shapes and networks is called topology. Based in mathematics, topology is a subfield of geometry that allows one to transform a network via stretching or bending. Under a topological view, a network that has been stretched like a clock in a Salvador Dali painting would be congruent with the original, unstretched network. This would not be the case in conventional Euclidean geometry where differences in size or angle cannot be ignored. The transportation sector typically models networks as a series of nodes and links (Levinson & Krizek, 2008). The node (or vertex) is the fundamental building block of the model; links (or edges) are not independent entities but rather are represented as connections between two nodes. Connectivity – and the overall structure of the network that emerges from that connectivity – is what topology is all about. In other words, topology cares less about the properties of the objects themselves and more about how they come together.

For instance if we look at the topology of a light rail network, the stations would typically be considered the nodes and the rail lines would be the links. In this case, the stations are the actors in the network, and the rail lines represent the relationships between the actors (Neal, 2013). Those relationships – and more specifically, those connections – embody what is important. Taking a similar approach with a street network, we might identify the intersections as the nodes and the street segments as the links as shown in the network based on an early version of Metropolis above (Fleischer, 1941). For most street networks, however, the street segments are just as important as the intersections, if not more so. The ‘space syntax’ approach takes the opposite (or ‘dual’) approach with street networks: the nodes represent the streets, and the lines between the nodes (i.e. the links or edges) are present when two streets are connected at an intersection, as shown below using the same Metropolis network (Jiang, 2007).

DUAL STREET NETWORK VIEW OF METROPOLIS WITH SEGMENTS AS NODES AND INTERSECTIONS AS LINKS
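
To make the primal/dual distinction above concrete, here is a minimal sketch using Python and the networkx library (an assumption of convenience, not a tool referenced in the text); the toy street grid is invented, and the dual view is approximated with a line graph, in which each street segment becomes a node and two segments are linked where they meet at an intersection.

```python
import networkx as nx

# Primal view: intersections are nodes, street segments are links
primal = nx.Graph()
primal.add_edges_from([
    ("A", "B"), ("B", "C"),   # an east-west street
    ("B", "D"), ("D", "E"),   # a north-south street crossing it at B
])

# Dual (line-graph) view: segments become nodes, shared intersections become links
dual = nx.line_graph(primal)

print(primal.number_of_nodes(), primal.number_of_edges())  # 5 intersections, 4 segments
print(dual.number_of_nodes(), dual.number_of_edges())      # 4 segment-nodes, linked where they meet
```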

Initial theories related to topology trace back to 1736 with Leonhard Euler and his paper on the Seven Bridges of Königsberg. Graph theory based topological measures first debuted in the late 1940s (Bavelas, 1948) and were initially developed in papers analyzing social networks (Freeman, 1979) and the political landscape (Krackhardt, 1990). Since then, topological analyses have been widely adopted in attempting to uncover patterns in biology (Jeong, Tombor, Albert, Oltvai, & Barabasi, 2000), ecology (Montoya & Sole, 2002), linguistics (Cancho & Sole, 2001), and transportation (Carvalho & Penn, 2004; Jiang & Claramunt, 2004; Salingaros & West, 1999). Topology represents an effort to uncover structure and pattern in these often complex networks (Buhl et al., 2006). The topological approach to measuring street networks, for instance, is primarily based upon the idea that some streets are more important because they are more accessible, or in the topological vernacular, more central (Porta, Crucitti, & Latora, 2006). Related to connectivity, centrality is another important topological consideration. A typical Union Station, for example, is a highly central (and important) node because it acts as a hub for connecting several different rail lines. Some common topological measures of centrality include Degree and Betweenness, which we will discuss in more detail in subsequent sub-sections.
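
As a preview of those measures, this sketch computes Degree and Betweenness centrality for a hypothetical hub-and-spoke rail network using networkx; the network itself is made up for illustration.

```python
import networkx as nx

# Hypothetical hub-and-spoke rail network: 'Union' is the hub station
g = nx.Graph()
g.add_edges_from([("Union", s) for s in ("North", "South", "East", "West")])
g.add_edge("North", "East")  # one cross-town connection

degree = nx.degree_centrality(g)            # share of other nodes each node touches
betweenness = nx.betweenness_centrality(g)  # share of shortest paths passing through a node

print(max(degree, key=degree.get))            # 'Union' has the most connections
print(max(betweenness, key=betweenness.get))  # 'Union' also lies on the most shortest paths
```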

There are also some peculiarities worth remembering when it comes to topology.

When thinking about the Size of a network, our first inclination might be measures that provide length or area. In topological terms, however, Size refers to the number of nodes in a network. Other relevant size-related measures include: Geodesic Distance (the fewest number of links between two nodes); Diameter (the highest geodesic distance in a network); and Characteristic Path Length (the average geodesic distance of a network).
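
A minimal sketch of these size measures on a toy five-node chain (an invented example), with geodesic distance counted in links as defined above:

```python
import networkx as nx

g = nx.path_graph(["A", "B", "C", "D", "E"])  # a simple 5-node chain

size = g.number_of_nodes()                             # Size: 5 nodes
geodesic_a_e = nx.shortest_path_length(g, "A", "E")    # Geodesic distance A-E: 4 links
diameter = nx.diameter(g)                              # Diameter: longest geodesic = 4
char_path_length = nx.average_shortest_path_length(g)  # Characteristic path length: 2.0

print(size, geodesic_a_e, diameter, char_path_length)
```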

Density is another tricky term in the topological vernacular. When talking about the density of a city, we usually seek out measures such as population density, intersection density, or land use intensity. In most cases, these metrics are calculated in terms of area (e.g. per km2). In topology, however, Density refers to the density of connections. In other words, the density of a network can be calculated by dividing the number of links by the number of possible links. Topologically, the fully-gridded street networks of Portland, OR and Salt Lake City, UT are essentially the same in terms of Density; with respect to transportation and urbanism, however, there remain drastic functionality differences between the 200’ (~60 m) Portland blocks and the 660’ (~200 m) Salt Lake City blocks.
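
In code, topological Density is simply links divided by possible links; the short sketch below (again using networkx, with an invented grid) makes the point that the value is indifferent to block lengths or area.

```python
import networkx as nx

g = nx.grid_2d_graph(4, 4)  # a 4x4 gridded street network (topology only, no lengths)

n = g.number_of_nodes()
links = g.number_of_edges()
possible = n * (n - 1) / 2   # possible links in an undirected network
density = links / possible   # same value whether blocks are 60 m or 200 m long

print(density, nx.density(g))  # manual calculation matches networkx's built-in measure
```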

As illustrated with the Portland/Salt Lake City example, one limitation of topology is that it ignores scale. However, this can also be an advantage. For instance, Denver might be much closer to Springfield, IL than Washington, DC as the crow flies, but a combination of several inexpensive options for direct flights to DC and relatively few direct flight options for Springfield mean that DC is essentially closer in terms of network connectivity. Topology captures such distinctions by focusing on connectedness rather than length.

While topological analyses such as the above are scale-free, we also need to be careful about use of this term because scale-free networks are not equivalent to scale-free analyses. In topological thinking, scale-free networks are highly centralized. More specifically, if we plot the number of connections for each node, the resulting distribution for what is known in topology as a scale-free network would resemble a Power law distribution with some nodes having many connections but most having very few. A hub-and-spoke light-rail system, for instance, tends to exhibit scale-free network qualities with relatively few stations connecting many lines. The nodes in a random network, on the other hand, tend to have approximately the same number of connections. For instance, when we define the intersections of a street network as the nodes and the segments as the links, the result tends towards a random network. If we flip the definition again, so that the streets are the nodes and the intersections the links, we trend back towards a scale-free network (Jiang, 2007; Jiang & Claramunt, 2004).
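
A quick way to see the contrast is to compare degree distributions of standard generator models: a Barabási–Albert (scale-free) graph and an Erdős–Rényi (random) graph of similar size. These are stand-ins for illustration, not models of any particular street or rail network.

```python
import networkx as nx

n = 1000
scale_free = nx.barabasi_albert_graph(n, 2, seed=1)  # preferential attachment
random_net = nx.gnp_random_graph(n, 4 / n, seed=1)   # similar average degree

sf_degrees = [d for _, d in scale_free.degree()]
rn_degrees = [d for _, d in random_net.degree()]

# Scale-free: a few heavily connected hubs pull the maximum degree far above the mean
print(sum(sf_degrees) / n, max(sf_degrees))
# Random: degrees cluster near the mean, with no comparable hubs
print(sum(rn_degrees) / n, max(rn_degrees))
```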

One reason to look at connectivity in these terms has to do with the critical issues of resilience and vulnerability. In general, robustness is associated with connectivity. When we have good connectivity, removing one node or link does not make much of a difference in terms of overall network performance. In contrast, scale-free networks are more susceptible to strategic attacks, failures, or catastrophes. However, as shown in a recent paper about urban street network topology during a Zombie apocalypse, good connectivity could actually be a double-edged sword (Ball, Rao, Haussman, & Robinson, 2013).
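
One simple way to probe that vulnerability is to remove a network's most connected node (a "strategic attack") and see how much of the network stays connected; the sketch below does this for the same two stand-in network types, purely as an illustration of the method.

```python
import networkx as nx

def largest_component_share(g):
    """Fraction of nodes remaining in the largest connected component."""
    return max(len(c) for c in nx.connected_components(g)) / g.number_of_nodes()

def remove_top_hub(g):
    """Copy the graph and delete its highest-degree node (a 'strategic attack')."""
    g = g.copy()
    degrees = dict(g.degree())
    g.remove_node(max(degrees, key=degrees.get))
    return g

scale_free = nx.barabasi_albert_graph(500, 2, seed=1)
random_net = nx.gnp_random_graph(500, 4 / 500, seed=1)

# Comparing the surviving largest components hints at each network's sensitivity to losing a hub
print(largest_component_share(remove_top_hub(scale_free)))
print(largest_component_share(remove_top_hub(random_net)))
```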

 

Shifting Economic Gears in Transport

You can’t swing a dead cat without hitting another story about driverless cars, shared taxis, new mobility or other tech oriented developments in passenger travel. Invariably, these stories treat the technological advance as the Big Change for travel, where being able to summon a cab on your mobile phone changes everything we know about the world. However, looking at the future of transportation through just a technological lens misses the biggest economic shift underway—the shift from high average cost/low marginal cost travel to low average cost/high marginal cost travel. While this shift is enabled by technology, it is not just technology.

Eliminating drivers will not automatically reduce transportation costs to something approaching zero (I will explain this in a different post, but the short explanation is that drivers do far more than just drive). Cars will still cost money to build, maintain, and operate. Recently I saw a quote from Emily Castor of Lyft in which she said that even if autonomous vehicles cost $500,000 each, the cost of a Lyft ride will remain the same as it is today. This claim suggests that even companies in the middle of “disrupting” transportation don’t really know where the field is going or what the relationships between capital and operating costs are. In short, capital costs cannot be separated from operating costs.

Consider the current cost of driving in the US. Once you buy and insure a car, driving is extremely cheap for most trips. Most trips begin and end with free parking, and to drive a few miles requires fuel costs measured in nickels and dimes, not dollars. (A car that gets 20 miles per gallon will use $0.50 in fuel for a five mile trip if gas is $2 per gallon, for instance.) Automobility in the US features high average costs (some fraction of the fixed costs of buying and insuring plus the cost of travel) and very low marginal costs (the cost of an additional trip).

Now consider the cost of taxis (or transit). You don’t pay any fixed costs, but the cost of an individual trip is high. A taxi trip of five miles might cost $10 or more, where the marginal cost of driving the same trip was $0.50. A transit trip will cost a few dollars as well, plus additional travel time. Considering these differences in marginal costs, I may drive for a bag of potato chips if I’m in the mood, but I’m not going to pay an Uber or taxi to take me to the Circle K and back. Potato chips aren’t that valuable.
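
The fuel arithmetic above, in code, using the numbers from the text ($2 per gallon, 20 miles per gallon, a five-mile trip) alongside the roughly $10 taxi fare for comparison:

```python
def fuel_cost(trip_miles, mpg, gas_price_per_gallon):
    """Marginal fuel cost of a single car trip."""
    return trip_miles / mpg * gas_price_per_gallon

car_marginal = fuel_cost(trip_miles=5, mpg=20, gas_price_per_gallon=2.00)  # $0.50
taxi_fare = 10.00  # illustrative fare for the same five-mile trip

print(car_marginal, taxi_fare, taxi_fare / car_marginal)  # the taxi trip costs about 20x more
```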

This shift in costs will affect the travel we undertake, and many firms will aim to get the costs of travel as low as possible. In short, as the cost of travel increases per trip, we will travel less.

As an example, when I lived in New York City, I would sometimes use Amazon Prime Now for two hour deliveries since they charged $6 to deliver while my subway fares would have been $5.50. Now that I live in Phoenix, I either ride my bike or make a short drive with free parking that saves me most of the delivery fee. In NYC I transferred my trip to a delivery person. Hopefully that person will use the transport systems more efficiently than a bunch of individual travelers out for sacks of coffee beans.


The shift to higher trip costs that reflect fixed and operating costs will be the factor that affects how and how much we travel in the future. If trip costs are high, then we will travel less and demand for effectively zero cost travel—think walking and biking—will increase. If trip costs remain low, then we will travel more by car.

How low can firms get the costs of travel? Probably not as low as they think. There are costs they don’t control, including paying for infrastructure through fuel taxes, road pricing, parking charges, congestion pricing, etc. Plus someone has to buy and maintain all those vehicles that will get hired, and those costs won’t get lost in the haze of public subsidy (hopefully). Somebody will want a return on their capital investment, just like a real company!

(This is a guest post by David King)

Access Across America: Auto 2015

The September 2016 issue of CTS Catalyst just came out and announces our Access Across America: Auto 2015 study: Study estimates accessibility to jobs by auto in U.S. cities. The article is reprinted below:

Map of Accessibility to jobs by auto in U.S.
Accessibility to jobs by auto

A new report from the University’s Accessibility Observatory estimates the accessibility to jobs by auto for each of the 11 million U.S. census blocks and analyzes these data in the 50 largest (by population) metropolitan areas.

“Accessibility is the ease and feasibility of reaching valuable destinations,” says Andrew Owen, director of the Observatory. “Job accessibility is an important consideration in the attractiveness and usefulness of a place or area.”

Travel times are calculated using a detailed road network and speed data that reflect typical conditions for an 8 a.m. Wednesday departure. Additionally, the accessibility results for 8 a.m. are compared with accessibility results for 4 a.m. to estimate the impact of road and highway congestion on job accessibility.

Map of U.S. showing reduced job accessibility due to congestion
Reduced job accessibility due to congestion

Rankings are determined by a weighted average of accessibility, with a higher weight given to closer, easier-to-access jobs. Jobs reachable within 10 minutes are weighted most heavily, and jobs are given decreasing weights as travel time increases up to 60 minutes.
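
The report's exact weighting function is not reproduced here; the sketch below shows the general idea using hypothetical, decreasing weights applied to cumulative job counts at 10-minute thresholds, with all numbers invented for illustration.

```python
# Hypothetical, decreasing weights by travel-time threshold (not the report's values)
weights = {10: 1.00, 20: 0.75, 30: 0.50, 40: 0.30, 50: 0.15, 60: 0.05}

def weighted_accessibility(jobs_by_threshold, weights):
    """Weighted average of cumulative job accessibility across time thresholds."""
    total_weight = sum(weights.values())
    return sum(weights[t] * jobs_by_threshold[t] for t in weights) / total_weight

# Illustrative cumulative job counts for one metro area
jobs = {10: 120_000, 20: 450_000, 30: 900_000, 40: 1_400_000, 50: 1_900_000, 60: 2_300_000}
print(round(weighted_accessibility(jobs, weights)))
```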

Based on this measure, the research team calculated the 10 metropolitan areas with the greatest accessibility to jobs by auto (see sidebar).

A similar weighting approach was applied to calculate an average congestion impact for each metropolitan area. Based on this measure, the team calculated the 10 metropolitan areas where workers experience, on average, the greatest reduction in job access due to congestion (see sidebar).

Areas with the greatest loss in job accessibility due to congestion

  1. Los Angeles
  2. Boston
  3. Chicago
  4. New York
  5. Phoenix
  6. Houston
  7. Riverside
  8. Seattle
  9. Pittsburgh
  10. San Francisco

Metropolitan areas with the greatest job accessibility by auto

  1. New York
  2. Los Angeles
  3. Chicago
  4. Dallas
  5. San Jose
  6. San Francisco
  7. Washington, DC
  8. Houston
  9. Boston
  10. Philadelphia

“Rather than focusing on how congestion affects individual travelers, our approach quantifies the overall impact that congestion has on the potential for interaction within urban areas,” Owen explains.

“For example, the Minneapolis–St. Paul metro area ranked 12th in terms of job accessibility but 23rd in the reduction in job access due to congestion,” he says. “This suggests that job accessibility is influenced less by congestion here than in other cities.”

The report—Access Across America: Auto 2015—presents detailed accessibility and congestion impact values for each metropolitan area as well as block-level maps that illustrate the spatial patterns of accessibility within each area. It also includes a census tract-level map that shows accessibility patterns at a national scale.

The research was sponsored by the National Accessibility Evaluation Pooled-Fund Study, a multi-year effort led by the Minnesota Department of Transportation and supported by partners including the Federal Highway Administration and 10 state DOTs.



Demand for Future Transport

There are differing beliefs about the effects of autonomous vehicles on travel demand. On the one hand, we expect that automation by itself is a technology that makes travel easier; it pushes the demand curve to the right. For the same general cost, people are more willing to travel. Exurbanization has a similar effect (and automation and exurbanization form a nice positive feedback system as well).

Demand vectors for vehicle travel in a changing technological and socio-economic environment.

On the other hand, the move from private vehicle ownership to mobility-as-a-service, which is likely in larger cities, means that the marginal cost of a trip might rise from very low (since the vehicle is already owned) to high (since the cost of the vehicle has to be recovered on a per-trip basis). This moves the demand curve to the left. It is similar in effect to urbanization (and urbanization and mobility-as-a-service form a nice positive feedback system). Lots of other changes also move the demand curve to the left, including demographic trends; substituting information technologies for work, socializing, and shopping; and dematerialization.

Income moves the willingness to pay for the same amount of travel up or down.

Changes in the price structure of travel move us along the demand curve.

This is one scheme for thinking about the effects of new technologies on travel demand. How these vectors net out is a problem that could be solved with analytical geometry, if only we knew their relative magnitudes. In The End of Traffic and the Future of Transport, we argue demand in the US is generally moving a bit more to the left than the right (though the last year saw sharp reductions in fuel costs and higher incomes, and thus moved us more to the right than the left). But we also note that new automation technologies change the available capacity of roads through improved packing of vehicles in motion and smaller vehicles. Less demand plus more supply reduces congestion effects on net.
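
If the relative magnitudes were known, netting out the vectors would be simple arithmetic; the sketch below sums purely hypothetical horizontal demand shifts (positive = rightward/more travel, negative = leftward/less travel) just to show the bookkeeping, not to assert any actual magnitudes.

```python
# Hypothetical demand-curve shifts at a given price, as shares of current travel.
# All magnitudes are invented for illustration.
shifts = {
    "automation":            +0.06,
    "exurbanization":        +0.03,
    "mobility_as_a_service": -0.05,
    "demographic_trends":    -0.02,
    "ict_substitution":      -0.03,  # information technologies replacing some trips
    "dematerialization":     -0.01,
}

net_shift = sum(shifts.values())
direction = "right (more travel)" if net_shift > 0 else "left (less travel)"
print(f"net shift: {net_shift:+.2f} -> demand moves {direction}")
```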