Intra-household Bargaining for School Trip Accompaniment of Children: A Group Decision Approach

Recently published:

Highlights

• The significance of model misspecification is underscored in terms of policy outcomes.
• Children aged 12–18 are more dominant in their travel decisions than children aged 6–12.
• In 63% of the cases the unitary household model underestimates the results.
• Results of models differ in both magnitude and sign of coefficients in some cases.

Abstract

This paper tests a group decision-making model to examine the school travel behavior of students 6–18 years old in the Minneapolis-St. Paul metropolitan area. The school trip information of 1737 two-parent families with a student is extracted from Travel Behavior Inventory data collected by the Metropolitan Council between fall 2010 and spring 2012. The model has four distinct characteristics: (1) considering the student explicitly in the model, (2) allowing for bargaining or negotiation within households, (3) quantifying the intra-household interaction among family members, and (4) determining the decision weight function for household members. This framework also covers a household with three members, namely a father, a mother, and a student, and unlike other studies it is not limited to dual-worker families. To test the hypotheses we build two models, one with and one without the group-decision approach. The models are built separately for different age groups, namely students 6–12 and 12–18 years old. This study considers a wide range of variables such as work status of parents, age and gender of students, mode of travel, and distance to school. The findings demonstrate that the elasticities of the two modeling approaches differ not only in value but, in some cases, in sign. In 63% of the cases the unitary household model underestimates the results. More precisely, the elasticities of the unitary household model are as much as two times those of the group-decision model in 20% of cases. This is a direct consequence of model misspecification, which misleads both long- and short-term policies when intra-household bargaining and interaction are overlooked in travel behavior models.

A Model of the Rise and Fall of Roads

Recently published:


Abstract: This paper analyzes the relationship between network supply and travel demand and describes a road development and degeneration mechanism microscopically at the link (road-segment) level. A simulation model of transportation network dynamics is developed, involving iterative evolution of travel demand patterns, network revenue policies, cost estimation, and investment rules. The model is applied to a real-world congesting network for Minneapolis-St. Paul, Minnesota (Twin Cities), which comprises nearly 8000 nodes and more than 20,000 links, using network data collected since 1978. Four experiments are carried out with different initial conditions and constraints, the results of which allow us to explore model properties such as computational feasibility, qualitative implications, potential calibration procedures, and predictive value. The hypothesis that road hierarchy is an emergent property of transportation networks is corroborated and the underlying reasons discovered. Spatial distribution of capacity, traffic flow, and congestion in the transportation network is tracked over time. Potential improvements to the model, in particular, and future research directions in transportation network dynamics, in general, are also discussed.
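For readers who want the flavor of the mechanism, here is a minimal, self-contained sketch in Python. It is not the paper's model: the demand rule, toll, and cost parameters are hypothetical stand-ins, chosen only to show how link-level investment and disinvestment can let a hierarchy emerge.

```python
"""Toy sketch of the link-level "rise and fall" mechanism described above.

Illustrative reconstruction, not the paper's code: the demand, revenue, cost,
and investment rules below are hypothetical stand-ins for the model components
named in the abstract."""
import random

class Link:
    def __init__(self, capacity):
        self.capacity = capacity

def assign_demand(links, total_trips=100_000):
    """Crude stand-in for the travel demand model: larger links attract a
    disproportionate share of trips."""
    weights = [l.capacity ** 1.5 for l in links]
    total = sum(weights)
    return {l: total_trips * w / total for l, w in zip(links, weights)}

def evolve(links, years=30, toll=0.1, unit_cost=0.5):
    """Each year: assign demand, collect revenue, estimate cost, and expand or
    contract each link depending on whether its revenue covers its cost."""
    for _ in range(years):
        flows = assign_demand(links)
        for link in links:
            revenue = toll * flows[link]        # revenue policy
            cost = unit_cost * link.capacity    # maintenance/expansion cost
            link.capacity *= 1.05 if revenue > cost else 0.95

links = [Link(random.uniform(500, 1500)) for _ in range(20)]
evolve(links)
# A hierarchy tends to emerge: initially larger links grow, smaller ones shrink.
print(sorted(round(l.capacity) for l in links))
```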

Comparing measures of auto accessibility

This note compares the original Access Across America, using 2010 data (A2010), released in April 2013 with the most recent Access Across America: Auto 2015 (A2015), which was released in September 2016.

Auto accessibility to jobs at 8:00 in 2015.

As we write in A2015:

An earlier report in the Access Across America series took a different methodological approach to evaluating accessibility across multiple metropolitan areas. [A2010] used average speed values and average job densities across entire metropolitan areas, rather than detailed block-level travel time calculations, to estimate job accessibility. Because of these differences, the results of that report cannot be directly compared with those presented here.

Thus it is important to note that almost all differences between the two reports can be attributed to differences in methodology, rather than changes over time due to different land use patterns or travel speeds. We anticipate that as subsequent reports come out annually in the 2016-2020 time period, a methodologically consistent time series will be available, enabling substantive comparisons of accessibility over time across the United States.

A2010 used a macroscopic approach to computing accessibility, as a function of average network speed, circuity by time band, and average employment density for the region, constrained so that accessibility did not exceed the region’s number of jobs.

A_t = min( ρ_emp · π · (V_n · t / C_t)^2 , E )

where E is the region's total number of jobs (the cap described above), and:

ρ_emp = urban area employment density (jobs · km^−2)

t = travel time threshold

V_n = average network velocity (km · h^−1)

C_t = average circuity of trips (ratio of actual network distance to Euclidean distance) within the time threshold (e.g., the 20-minute threshold measures the circuity of trips of 0–20 minutes)

The methodology is outlined in detail in Network Structure and City Size.
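As a rough illustration of the macroscopic calculation above, the following sketch implements the formula with made-up regional parameters; the speed, circuity, density, and total-jobs values are placeholders, not figures from the report.

```python
"""Minimal sketch of the A2010-style macroscopic accessibility calculation.
All parameter values below are hypothetical placeholders."""
import math

def macro_accessibility(t_min, v_kmh, circuity, emp_density_km2, total_jobs):
    """Jobs reachable within t_min minutes, assuming travel at the average
    network speed v_kmh, deflated by the circuity factor, over a uniform
    employment density; capped at the region's total number of jobs."""
    radius_km = (v_kmh * t_min / 60.0) / circuity      # Euclidean reach
    reachable_area = math.pi * radius_km ** 2           # circle of that radius
    return min(emp_density_km2 * reachable_area, total_jobs)

# Illustrative (hypothetical) regional parameters:
for t in (10, 20, 30, 40, 50, 60):
    print(t, round(macro_accessibility(t, v_kmh=55, circuity=1.25,
                                       emp_density_km2=250,
                                       total_jobs=1_500_000)))
```

Note that at the 60-minute threshold the cap binds in this toy example, reproducing the flattening effect described in the next paragraph.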

In addition, A2010 assumes that regions do not interact. A traveler in the Baltimore region, for instance, only has available the jobs in Baltimore, not the jobs in Washington, DC. If the representative traveler for Baltimore could reach all of Baltimore's jobs in 40 minutes, then the 50- and 60-minute accessibility would be the same. This phenomenon is especially important in Baltimore, Hartford, Los Angeles, Providence, Raleigh, Riverside, San Francisco, and San Jose.

A2010 used 2010 travel speeds (metropolitan averages on arterials and freeways from Inrix) and 2010 Census data.

A2015 used a microscopic approach to computing accessibility, making a separate computation for each census block in the United States. A2015 used 2015 link-level travel speeds (essentially every road segment in the US has a speed provided by TomTom, based on its GPS data collection program). Jobs were drawn from the Census LEHD program at the census block level, using 2013 data.

There are additional differences in how the two reports defined geographic areas, as Census definitions of metropolitan areas may have changed.

While the results are similar in terms of order of magnitude, they differ in specific ways.

If we accept A2015 as “ground truth”, as it is far more detailed in its use of data and more accurate in its assumptions, the expected biases of A2010 are as follows:

Relative to A2015, A2010 underestimates access in regions that adjoin other regions, particularly at longer travel time thresholds, because it does not consider jobs in other regions as available and because it uses average speeds, which are too high for short trips and too low for long trips.

Relative to A2015, A2010 overestimates access for short travel time thresholds, because it uses regional average speeds rather than the lower speeds actually available for short trips, which typically use less freeway.

As shown in the figure below, this pattern is largely, but not uniformly, borne out by comparison of the metropolitan averages. On average, A2010 is higher than A2015 for thresholds of 10, 20, and 30 minutes, but lower for 40, 50, and 60 minutes. The averages are very close between 30 and 40 minutes, longer than the average daily commute in these cities (a bit under 30 minutes on average).

I am actually surprised (and pleased) at how close the overall average is, given all of the assumptions that went into the macroscopic estimate. Macroscopic approximations can work well for a general understanding of the patterns of cities, but they remain approximations. More detailed analysis is necessary to get at the spatial differences and irregularities in those general patterns, which we illustrate on the maps: differences that give life to individual cities.

 

Comparison of metropolitan average auto accessibility by travel time threshold: A2010 vs. A2015.

Population exposure to ultrafine particles: size-resolved and real-time models for highways

Recently published:

Highlights

• This study estimates individual and population exposure to ultra-fine particles in the Minneapolis – St. Paul (Twin Cities) metropolitan area, Minnesota.
• We combine a model of on-highway size-resolved UFP concentrations with GPS travel patterns.
• The most significant exposures were found at freeway interchanges.
• Peak hour concentration is about 10 times larger than the off-peak hour concentration.
• Peak hour exposure is about 100 times its off-peak hour counterpart.

Abstract

Prior research on ultrafine particles (UFP) emphasizes that concentrations are especially high on-highway, and that time spent on highways contributes disproportionately to total daily exposures. This study estimates individual and population exposure to ultrafine particles in the Minneapolis – St. Paul (Twin Cities) metropolitan area, Minnesota. Our approach combines a real-time model of on-highway size-resolved UFP concentrations (32 bins, 5.5–600 nm); individual travel patterns, derived from GPS travel trajectories collected in 144 individual vehicles (123 h at locations with UFP estimates among 624 vehicle-hours of travel); and loop-detector data, indicating real-time traffic conditions throughout the study area. The results provide size-resolved spatial and temporal patterns of exposure to UFP among freeway users. On-highway exposures demonstrate significant variability among users, with the highest concentrations during commuting peaks and near highway interchanges. Findings from this paper could inform future epidemiological studies of on-road exposure to UFP by linking personal exposures to traffic conditions.
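A simplified sketch of the exposure logic implied by the abstract follows: exposure accumulates as concentration times time along a GPS trajectory. The data layout and concentration values are hypothetical, and population exposure would further weight trips by how many people are traveling at each time of day (which is why the peak/off-peak exposure ratio exceeds the concentration ratio).

```python
"""Illustrative trip-level exposure calculation; all numbers are hypothetical."""

def trip_exposure(trajectory):
    """trajectory: list of (dwell_seconds, ufp_concentration_per_cm3) pairs,
    one per GPS fix matched to a modeled on-highway concentration.
    Returns exposure in (particles/cm^3) * hours."""
    return sum(dt_s / 3600.0 * conc for dt_s, conc in trajectory)

peak_trip = [(30, 1.0e5)] * 40      # 20 minutes at peak-hour concentrations
offpeak_trip = [(30, 1.0e4)] * 40   # same duration, ~10x lower concentration
print(trip_exposure(peak_trip), trip_exposure(offpeak_trip))
```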

First best, second best, and third parties

There is a lot of noise about how voting for a third party is “wasting your vote” and taking it away from the only real candidates in this election, those of the two largest parties (The Democrats and the Republicans). Only one person can be elected, and only one other person has a reasonable shot of being elected if the first one isn’t.

In some sense, this is true. Gary Johnson (Libertarian) and, save us, Jill Stein (Green) are highly unlikely to win. So a vote for them prevents you from voting for whichever you prefer of the two candidates who might plausibly win.

Leaving aside arguments about how well Ross Perot, George Wallace, Theodore Roosevelt, and Abraham Lincoln did historically, which suggest there is a possible role for third parties in US presidential elections, just as there is in governors' races (Jesse Ventura), I want to propose another argument.

The system of two major parties will never be broken if everyone always votes for a candidate from the two major parties. It’s basically a mutually-reinforcing belief system. If you believe everyone believes that a third party cannot win, you yourself will believe that a third party cannot win and not vote for a third party in an effort to win. The system cannot change under that logic.

We are stuck in what Richard Lipsey and Kelvin Lancaster dubbed a “second best” solution (the best solution given the imperfections of the rest of the world differs from the best solution if the rest of the world were optimal).

The only way to change beliefs about what is possible in the future is to act differently. If we want what we perceive as a first-best world where the candidate we like might win, we are going to have to vote for those candidates in earlier elections to change the belief structure of other voters about their electability.

Now this election is especially risky (they all are, but this one more so), in that a demagogue is closer to power than usually occurs. However the likelihood of any one vote being the marginal vote is infinitesimally small.

People will trot out the 2000 election and Nader and Gore on the risk of protest votes. Gore lost for lots of reasons; there are many ‘butfors’, Nader possibly among them, but hardly the only one. (Sighs: Bill Clinton, butterfly ballots, hanging chads, the US Supreme Court, Gore’s team asking for a partial but not total recount, and so on.)

Yet there are a few million people in this country who think we should have a Green Party government. I think they are wrong. I think Greens should elect some actual Legislators and Senators before they try to run for the Presidency (they do have some Minneapolis City Council members, including in my own district). But I also think they should have a voice. And if, bless your soul, you think Moscow should run US foreign policy and that vaccines are on net bad, you should vote for Jill Stein (and rethink your life choices). Or even if you think the Greens would be better in the long run, and government should be more green, and want to move the Overton Window in the green direction, you can justify a vote for Stein.

Similarly, this year Gary Johnson is the Libertarian nominee, and he is a more serious threat to the political party establishment, in that he and his VP have more governing experience than the major-party candidates and are polling in the neighborhood of 10%. If your first-best world has a Libertarian President and Congress (or you simply think we should move farther in the direction of lower-case “l” libertarianism without being absolutely upper-case “L” Libertarian), you should vote for Johnson/Weld. Again, it would be good for the Libertarians to show success at lower levels of government and in Congress (beyond Ron Paul) before trying to take over the Executive Branch, but we can’t always get what we want.

A serious showing in 2016 helps a third party’s candidates in 2020. It reframes people’s expectations. It moves towards your first-best world in the long run, even if it is suboptimal from a second-best perspective in the short run. Once a third party gets in the 30% range of support, it becomes a plausible, unwasted vote in the short term. The third party will be unlikely to get 30% support before it gets 20% or 10%. That could all happen in one election cycle, or it could take multiples.

Of course, if you like either the Democratic or Republican nominees best, you should vote for them.

Politics may seem like a one-shot game, but it is in fact repeated. If all goes well, I will spend as much of my life in the next administration (2017 to 2021) as in the following one (2021 to 2025). The rules for electing Presidents in the US may be nuts, but they are the only rules we have right now.

Adding transit costs to the accessibility equation offers a better gauge of transportation equity

CTS Catalyst reports on our recent paper: The cost of equity: Assessing transit accessibility and social disparity using total travel cost, Transportation Research Part A: Policy and Practice, 2016. Reprinted below:

Access to opportunities such as jobs and services is one of the main benefits of public transit. To ensure this benefit is maximized, transportation planners are increasingly seeking to distribute transportation resources as fairly as possible in order to provide a variety of options to commuters and increase their access to opportunities.

Typically, transportation accessibility is measured using the number of opportunities that can be reached within a given time threshold. For example, a planner might look at how many jobs residents of a socially disadvantaged neighborhood can reach within 45 minutes to see where improvements might be needed. However, these traditional accessibility measurements have a significant shortcoming.

“Research shows us that low-income and socially disadvantaged individuals are the most likely to be transit dependent and face barriers to accessing their desired destinations,” says David Levinson, a professor in the Department of Civil, Environmental and Geo-Engineering. “If we only look at time as a constraint on accessibility, we leave the crucial factor of financial access out of the equation. For low-income populations, transit fares can present a major barrier to accessibility, since fares can consume a large share of individuals’ budgets. As a result, planners and researchers may overestimate job accessibility, particularly for low-income riders.”

In recent research, Levinson and his co-authors developed a set of innovative accessibility measures that incorporate both travel time and transit fares. Then, they applied those measures to determine whether people living in socially disadvantaged neighborhoods—in this case, in Montreal, Canada—experienced the same levels of transit accessibility as those living in other neighborhoods. Finally, they compared the results of their new measurement with traditional accessibility measures that account only for travel time.
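The following toy sketch conveys the core idea, not the paper's actual implementation: count jobs reachable within both a travel-time budget and a fare budget, so that an expensive fare can remove otherwise time-feasible jobs. The zones, times, fares, and job counts below are invented for illustration.

```python
"""Cumulative-opportunities accessibility with a time budget and a fare budget.
Data structure and numbers are hypothetical, not the paper's."""

def accessible_jobs(origin_zone, od_table, time_budget_min, fare_budget):
    """od_table maps (origin, destination) -> (minutes, fare, jobs)."""
    total = 0
    for (o, d), (minutes, fare, jobs) in od_table.items():
        if o == origin_zone and minutes <= time_budget_min and fare <= fare_budget:
            total += jobs
    return total

od = {("A", "B"): (25, 3.25, 40_000),
      ("A", "C"): (40, 3.25, 25_000),
      ("A", "D"): (44, 6.50, 60_000)}   # D requires a more expensive fare

print(accessible_jobs("A", od, time_budget_min=45, fare_budget=10.0))  # 125000
print(accessible_jobs("A", od, time_budget_min=45, fare_budget=4.0))   # 65000: the fare cuts access
```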

“We found that accessibility measures relying solely on travel time estimate a higher number of jobs than our measure,” says Levinson. “For the most socially disadvantaged residents, factoring in a single fare reduces job accessibility 50 percent; adding a monthly pass reduces it 30 percent.”

The study also found that public transit generally favors vulnerable populations in Montreal. “Low-income populations generally reside in the central city, near transit stations and job concentrations. Higher-income populations are concentrated in suburban areas, and suburban fares are much more expensive. So in this case, residents of socially disadvantaged areas have more equitable accessibility to jobs even when fare cost is included,” he explains.

The new accessibility measure offers several benefits for transportation planners, Levinson says. First, it will allow them to better explain to policymakers the number of jobs a resident can reach for a given cost, thereby allowing fare structures and hourly wages to be judged against the cost of commuting. In addition, it can help planners identify neighborhoods that need transportation benefits the most and provide broader insight for the transportation community into how combined measures of accessibility can be used to better understand the impact of transportation planning decisions.

This research was conducted as a collaboration between Levinson and the Transportation Research at McGill (TRAM) group, which is led by Ahmed El-Geneidy, associate professor at McGill University in Montreal and a former U of M researcher.


Related Link

Access Across America: Auto 2015

CTS Catalyst September 2016 just came out, and announces our Access Across America: Auto 2015 study: Study estimates accessibility to jobs by auto in U.S. cities. The article is reprinted below:

Map of Accessibility to jobs by auto in U.S.
Accessibility to jobs by auto

A new report from the University’s Accessibility Observatory estimates the accessibility to jobs by auto for each of the 11 million U.S. census blocks and analyzes these data in the 50 largest (by population) metropolitan areas.

“Accessibility is the ease and feasibility of reaching valuable destinations,” says Andrew Owen, director of the Observatory. “Job accessibility is an important consideration in the attractiveness and usefulness of a place or area.”

Travel times are calculated using a detailed road network and speed data that reflect typical conditions for an 8 a.m. Wednesday departure. Additionally, the accessibility results for 8 a.m. are compared with accessibility results for 4 a.m. to estimate the impact of road and highway congestion on job accessibility.

Map of U.S. showing reduced job accessibility due to congestion
Reduced job accessibility due to congestion

Rankings are determined by a weighted average of accessibility, with a higher weight given to closer, easier-to-access jobs. Jobs reachable within 10 minutes are weighted most heavily, and jobs are given decreasing weights as travel time increases up to 60 minutes.

Based on this measure, the research team calculated the 10 metropolitan areas with the greatest accessibility to jobs by auto (see sidebar).

A similar weighting approach was applied to calculate an average congestion impact for each metropolitan area. Based on this measure, the team calculated the 10 metropolitan areas where workers experience, on average, the greatest reduction in job access due to congestion (see sidebar).
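A sketch of one plausible reading of this weighted-average approach follows; the weights and job counts are illustrative placeholders, not the report's published values.

```python
"""Weighted accessibility: jobs reached in closer time bands count more.
Weights and cumulative job counts below are hypothetical placeholders."""

# Hypothetical weights for 10-minute bands out to 60 minutes (sum to 1).
WEIGHTS = {10: 0.45, 20: 0.25, 30: 0.15, 40: 0.08, 50: 0.05, 60: 0.02}

def weighted_access(cumulative_jobs):
    """cumulative_jobs: dict of threshold (minutes) -> jobs reachable within it.
    Weight the *new* jobs added in each band by that band's weight."""
    score, prev = 0.0, 0
    for t in sorted(WEIGHTS):
        score += WEIGHTS[t] * (cumulative_jobs[t] - prev)
        prev = cumulative_jobs[t]
    return score

msp = {10: 80_000, 20: 320_000, 30: 760_000, 40: 1_200_000,
       50: 1_500_000, 60: 1_700_000}   # hypothetical cumulative values
print(round(weighted_access(msp)))
```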

Areas with the greatest loss in job accessibility due to congestion

  1. Los Angeles
  2. Boston
  3. Chicago
  4. New York
  5. Phoenix
  6. Houston
  7. Riverside
  8. Seattle
  9. Pittsburgh
  10. San Francisco

Metropolitan areas with the greatest job accessibility by auto

  1. New York
  2. Los Angeles
  3. Chicago
  4. Dallas
  5. San Jose
  6. San Francisco
  7. Washington, DC
  8. Houston
  9. Boston
  10. Philadelphia

“Rather than focusing on how congestion affects individual travelers, our approach quantifies the overall impact that congestion has on the potential for interaction within urban areas,” Owen explains.

“For example, the Minneapolis–St. Paul metro area ranked 12th in terms of job accessibility but 23rd in the reduction in job access due to congestion,” he says. “This suggests that job accessibility is influenced less by congestion here than in other cities.”

The report—Access Across America: Auto 2015—presents detailed accessibility and congestion impact values for each metropolitan area as well as block-level maps that illustrate the spatial patterns of accessibility within each area. It also includes a census tract-level map that shows accessibility patterns at a national scale.

The research was sponsored by the National Accessibility Evaluation Pooled-Fund Study, a multi-year effort led by the Minnesota Department of Transportation and supported by partners including the Federal Highway Administration and 10 state DOTs.


Related Links

Demand for Future Transport

There are differing beliefs about the effects of autonomous vehicles on travel demand. On the one hand, we expect that automation itself is a technology that makes travel easier; it pushes the demand curve to the right. For the same general cost, people are more willing to travel. Exurbanization has a similar effect (and automation and exurbanization form a nice positive feedback system as well).

Demand vectors for vehicle travel in a changing technological and socio-economic environment.

On the other hand, the move from private vehicle ownership to mobility-as-a-service, which is likely in larger cities, means that the marginal cost of a trip might rise from very low (since the vehicle is already owned) to high (since the cost of the vehicle has to be recovered on a per-trip basis). This moves the demand curve to the left. It is similar in effect to urbanization (and urbanization and mobility-as-a-service form a nice positive feedback system). Lots of other changes also move the demand curve to the left, including demographic trends, the substitution of information technologies for work, socializing, and shopping, and dematerialization.

Income moves the willingness to pay for the same amount of travel up or down.

Changes in the price structure of travel move along the demand curve as shown here.

This is one scheme for thinking about the effects of new technologies on travel demand. How these vectors net out is a problem that could be solved with analytical geometry, if only we knew their relative magnitudes. In The End of Traffic and the Future of Transport, we argue demand in the US is generally moving a bit more to the left than the right (though the last year saw sharp reductions in fuel costs and higher incomes, and thus moved us more to the right than the left). But we also note that new automation technologies increase the available capacity of roads through improved packing of vehicles in motion and smaller vehicles. Less demand plus more supply reduces congestion effects on net.

Network Econometrics and Traffic Flow Analysis

Congratulations to Alireza Ermagun for successfully defending his dissertation: “Network Econometrics and Traffic Flow Analysis.” He will soon be off to do a post-doc.

 

Short-term traffic forecasting aims to predict the number of vehicles on a link during a given time slice, typically less than an hour. For decades, transportation analysts have tackled the forecasting of traffic conditions while focusing on the temporal dependence of traffic conditions on a road segment. Following the emergence of spatial analysis in traffic studies, there has been growing interest in embedding spatial information in forecasting methods. These approaches generally take advantage of the information that many of the cars that will soon be on one link are already on the network upstream of the relevant location, and of typical patterns of flow.

While embedding the spatial component in forecasting methods acts as a catalyst, its functioning is hindered by the constraints of spatial weight matrices. The positivity of components in spatial weight matrices postulates that traffic links have a positive spatial dependency. In essence, this hypothesis is necessary to represent complementary (upstream and downstream) traffic links. For simple single-facility corridors, this may be sufficient. On the flip side of the coin is the competitive nature of traffic links: competitive links bear a significant proportion of diverted vehicles when one of them is saturated or closed. Short-term traffic forecasting has traditionally been confined to scrutinizing complementary links. In consequence, the competitive nature of traffic links has been overlooked in the spatial weight matrix configuration and in short-term traffic forecasting.

This dissertation overcomes this challenge by introducing concepts, theories, and methods of network econometrics to gain a deeper understanding of how the components of a complex network interact. More precisely, it introduces distinct, yet conceptually related, network weight matrices to extract the existing spatial dependency between traffic links. The network weight matrices stem from the concepts of betweenness centrality and vulnerability in network science. Their elements are a function not simply of proximity, but of network topology, network structure, and demand configuration. The network weight matrices are tested in congested and uncongested traffic conditions in both simulation-based and real-world environments.
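To make the contrast concrete, here is a toy example (not drawn from the dissertation, whose weights are derived from betweenness and vulnerability measures) of a conventional spatial weight matrix, which is constrained to be nonnegative, beside a hypothetical network weight matrix in which a competitive, parallel link takes a negative weight.

```python
"""Toy contrast between a conventional spatial weight matrix and a network
weight matrix with signed entries. The numbers are illustrative only."""
import numpy as np

# Three links: an upstream and a downstream link in series (complementary),
# plus a third link parallel to the downstream one (competitive).
links = ["upstream", "downstream", "parallel_alternative"]

# Conventional spatial weight matrix: 1 for adjacent links, 0 otherwise.
W_spatial = np.array([[0, 1, 1],
                      [1, 0, 0],
                      [1, 0, 0]], dtype=float)

# Hypothetical network weight matrix: complementary pairs positive,
# competitive (parallel) pairs negative.
W_network = np.array([[0.0,  0.8,  0.8],
                      [0.8,  0.0, -0.6],   # downstream vs. parallel: competitive
                      [0.8, -0.6,  0.0]])

# Row-standardize while preserving sign, as is common before spatial regression.
W_network /= np.abs(W_network).sum(axis=1, keepdims=True)
print(W_spatial)
print(W_network)
```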

 

From the simulation-based viewpoint, a 3 × 3 grid network and the Nguyen-Dupuis network are designed and adopted as the main test networks, along with several toy networks for pedagogical purposes. To simulate traffic flow, a macroscopic traffic flow model is selected, in keeping with the purpose of this dissertation, which deals with traffic flow on a link in a specific time slice and does not include single vehicle-driver units. From the real-world viewpoint, a grid-like sub-network is selected from the Minneapolis – St. Paul highway system, comprising 687 detectors and 295 traffic links. The traffic flow of each link is extracted in 30-second increments for 2015, the most recent year available.

The results of the analysis lead to a clear and unshakable conclusion that traditional spatial weight matrices are unable to capture the realistic spatial dependency between traffic links in a network. Not only do they overlook the competitive nature of traffic links, but they also ignore the role of network topology and demand configuration in measuring the spatial dependence between traffic links. Neglecting these elements is not simply information loss; it has nontrivial impacts on the outcomes of research and policy decisions. Using the proposed network weight matrices as a substitute for traditional spatial weight matrices, however, exhibits the capability to overcome these deficiencies. The network weight matrices are theoretically defensible on account of their grounding in traffic theory. As the elements of the network weight matrix more closely reflect the dependence structure of the traffic links, the weight matrix becomes more accurate and defensible. They capture the competitive and complementary nature of links and embed additional network dynamics such as the cost of links and demand configuration.

Building on real-world data analysis, the results contribute inexorably to the conclusion that in a network comprising links in parallel and in series, both negative and positive correlations show up between links. The strength of the correlation varies by time-of-day and day-of-week. Strong negative correlations are observed in rush hours, when congestion affects travel behavior. This correlation occurs mostly in parallel links, and in far-upstream links where travelers receive information about congestion (for instance from media, variable message signs, or personal observation of propagating shockwaves) and are able to switch to substitute paths. Irrespective of time-of-day and day-of-week, a strong positive correlation is observed between upstream and downstream sections. This correlation is stronger in uncongested regimes, as traffic flow passes through consecutive links in a shorter time and there is no congestion effect to shift or stall traffic.
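The following synthetic example (invented flow series, not the dissertation's Minneapolis–St. Paul detector data) shows how such correlations can be computed, and why series links tend to correlate positively while parallel links can correlate negatively as traffic diverts.

```python
"""Correlations between link flows on synthetic data: a series (upstream/
downstream) pair versus a competitive parallel pair."""
import numpy as np

rng = np.random.default_rng(0)
demand = 1000 + 300 * np.sin(np.linspace(0, 2 * np.pi, 48))     # daily pattern
upstream = demand + rng.normal(0, 20, 48)
downstream = upstream + rng.normal(0, 20, 48)                   # series: flow passes through
parallel = 1800 - 0.6 * upstream + rng.normal(0, 20, 48)        # diversion when the main route is busy

print(np.corrcoef(upstream, downstream)[0, 1])  # strongly positive
print(np.corrcoef(upstream, parallel)[0, 1])    # negative (competitive links)
```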

 

Although this dissertation tests and validates the network weight matrices in the road traffic network problem to derive the realistic spatial dependency between traffic links, they have potential for implementation in other disciplines such as geography, regional, and social network sciences. The network weight matrices have further applications not only in models of physical flow, but also in social networks for which links or nodes may be either competitive or complementary with each other.

You have seen other work we have done together, including research related to this dissertation, earlier on the blog. He has quite a few publications with me from only two years at Minnesota (and that is not counting papers with others since he's been here), including: