At a recent USDOT Summit, I heard Secretary Foxx, citing his agency’s Beyond Traffic report, say that the population of the US will increase by 70 million people by 2040 and freight will increase by 45%. Why should population increase by some 22% but freight by twice as much over this period?
Well, certainly there is a rise in teleshopping, so local logistics will increase. The amount of this is unclear. Currently e-shopping is on the order of (and under) 10% of retail sales, but it is growing.
There are several aspects of this. There is the shipment from factory to distribution center, from one distribution center to another, and from the final distribution center to the final destination (usually home), the “last mile”. If the total goods consumed remain the same, the first two stages are essentially unchanged.
If the freight delivery system currently covers every street (that is, the UPS guy comes down your street once a day, every day), it will continue to cover every street, just with more vehicles dispatched from the dispatch center to handle the last mile(s): with more deliveries there will be more trucks dispatched, and each truck will have a shorter but more intensive route. Ignoring automation in this field (and surely there will be some), once the appropriate optimizations are made in terms of grouping shipments, this still has to be more efficient than individuals going out and coming back from shopping trips.
A decline in shopping trips counts as a reduction in personal travel, and the corresponding UPS shipments count as an increase in freight, but how much is this? Shopping is less than 10% of personal travel.
We might also see a delivery system covering every street twice a day, or four times a day, as real-time deliveries become more significant. I am skeptical Amazon Prime Now-type services will be a thing for most people most of the time (really, I can wait for my lightbulb if it saves some money), but nevertheless, if those trucks are not optimally filled, it would increase total freight ton-miles.
There is then the question of whether more material will be consumed overall. My sense is that total matter shipped should decline on a per capita basis. By the time period in question, 2040, the US should be off of coal and oil, replaced with renewable electricity (whose transmission does not count as transportation, unlike coal, oil, or gas). This will decimate the railroad industry, which will then try to move into markets now served by trucks.
Further, think about the things you use. Many of them are smaller than their equivalents of 25 years ago (phones, TVs, computers, cars). Now we may have more of them, and we might need more furniture to occupy our larger houses, but that is relatively small in the scheme of things; most freight consists of things that are consumed daily (food products, energy), not long-term capital goods.
We might also increase freight ton-miles if we increase the distances that freight is shipped, but keep the quantity the same. Can our supply chains become even more global? Will they? With automation, the advantages of cheap labor in the production system will diminish, and it will be easier to manufacture locally (to reduce transportation costs and make just-in-time more viable). With cheap energy, things that are now difficult and expensive (like growing exotic fruits and vegetables indoors) will become more viable.
The net effect is uncertain; we cannot know whether freight shipments will grow faster or slower than population. But to predict, nay assert, a 45% increase is an assumption that should be pushed back on. It is used to justify government investments in highways (and, to a lesser extent, ports and railroads) for freight, investments that can no longer be justified based on rising per capita passenger travel.
• The significance of model misspecification is underscored in terms of policy outcomes.
• Children aged 12–18 have more influence over their travel decisions than children aged 6–12.
• In 63% of the cases the unitary household model underestimates the results.
• Results of models differ in both magnitude and sign of coefficients in some cases.
Abstract
This paper tests a group decision-making model to examine the school travel behavior of students 6–18 years old in the Minneapolis-St. Paul metropolitan area. The school trip information of 1737 two-parent families with a student is extracted from Travel Behavior Inventory data collected by the Metropolitan Council between Fall 2010 and Spring 2012. The model has four distinct characteristics: (1) it considers the student explicitly, (2) it allows for bargaining or negotiation within households, (3) it quantifies the intra-household interaction among family members, and (4) it determines the decision weight function for household members. This framework also covers a household with three members, namely a father, a mother, and a student, and unlike other studies it is not limited to dual-worker families. To test the hypotheses we build models with and without the group-decision approach, separately for two age groups, students 6–12 and 12–18 years old. This study considers a wide range of variables such as work status of parents, age and gender of students, mode of travel, and distance to school. The findings of this study demonstrate that the elasticities of the two modeling approaches differ not only in value, but in some cases in sign. In 63% of the cases the unitary household model underestimates the results. More precisely, the elasticities of the unitary household model are as much as 2 times more than those of the group-decision model in 20% of cases. This is a direct consequence of model misspecification, which misleads both long- and short-term policies when intra-household bargaining and interaction are overlooked in travel behavior models.
Abstract: This paper analyzes the relationship between network supply and travel demand and describes a road development and degeneration mechanism microscopically at the link (road-segment) level. A simulation model of transportation network dynamics is developed, involving iterative evolution of travel demand patterns, network revenue policies, cost estimation, and investment rules. The model is applied to a real-world congesting network for Minneapolis-St. Paul, Minnesota (Twin Cities), which comprises nearly 8000 nodes and more than 20,000 links, using network data collected since 1978. Four experiments are carried out with different initial conditions and constraints, the results of which allow us to explore model properties such as computational feasibility, qualitative implications, potential calibration procedures, and predictive value. The hypothesis that road hierarchy is an emergent property of transportation networks is corroborated and the underlying reasons discovered. Spatial distribution of capacity, traffic flow, and congestion in the transportation network is tracked over time. Potential improvements to the model, in particular, and future research directions in transportation network dynamics, in general, are also discussed.
This note compares the original Access Across America, using 2010 data (A2010), released in April 2013 with the most recent Access Across America: Auto 2015 (A2015), which was released in September 2016.
Auto accessibility to jobs at 8:00 in 2015.
As we write in A2015:
An earlier report in the Access Across America series took a different methodological approach to evaluating accessibility across multiple metropolitan areas. [A2010] used average speed values and average job densities across entire metropolitan areas, rather than detailed block-level travel time calculations, to estimate job accessibility. Because of these differences, the results of that report cannot be directly compared with those presented here.
Thus it is important to note that almost all differences between the two reports can be attributed to differences in methodology, rather than changes over time due to different land use patterns or travel speeds. We anticipate that as subsequent reports come out annually in the 2016-2020 time period, a methodologically consistent time series will be available enabling substantive comparisons of accessibility over time across the United States.
A2010 used a macroscopic approach to computing accessibility, as a function of average network speed, circuity by time band, and average employment density for the region, constrained so that accessibility did not exceed the region’s number of jobs.
ρ_emp = Urban area employment density (jobs · km^−2).
t = Time threshold.
V_n = Average network velocity (km · h^−1).
C_t = Average circuity of trips (ratio of actual distance to Euclidean distance) within the time threshold (e.g., the 20-minute threshold uses the circuity of trips of 0–20 minutes).
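Putting these elements together, the macroscopic measure can be sketched as a cumulative calculation over the area reachable within the threshold, capped at the region’s total employment (E, my notation for the constraint described above). This is a sketch based on the variable definitions, not the exact published formula:

A(t) = min( E , ρ_emp · π · ( V_n · t / C_t )² )

where A(t) is the estimated number of jobs reachable within time threshold t.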
In addition, A2010 assumes that regions do not interact. So a traveler in the Baltimore region, for instance, only has available the jobs in Baltimore, not the jobs in Washington, DC. If the representative traveler for Baltimore could reach all of Baltimore’s jobs in 40 minutes, then the 50- and 60-minute accessibility would be the same. This phenomenon is especially important in Baltimore, Hartford, Los Angeles, Providence, Raleigh, Riverside, San Francisco, and San Jose.
A2010 used 2010 travel speeds (metropolitan averages on arterials and freeways from Inrix) and 2010 Census data.
A2015 used a microscopic approach to computing accessibility, making a separate computation for each census block in the United States. It used 2015 link-level travel speeds (essentially every road segment in the US has a speed provided by TomTom, based on their GPS data collection program) and jobs by census block from the 2013 Census LEHD data.
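The block-level measure, by contrast, takes the standard cumulative-opportunities form; the notation below is mine, not quoted from the report:

A_i(t) = Σ_j E_j over all blocks j with t_ij ≤ t

where E_j is LEHD employment in block j and t_ij is the travel time from block i to block j computed over the link-level network.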
There are additional differences in how the two reports defined geographic areas, as Census definitions of metropolitan areas may have changed.
While the results are similar in terms of order of magnitude, they differ in specific ways.
If we accept A2015 as “ground truth”, as it is far more detailed in its use of data and more accurate in its assumptions, the expected biases of A2010 are as follows:
Relative to A2015, A2010 underestimates access in regions that adjoin other regions, particularly at longer travel time thresholds, because it does not consider jobs in other regions as available and because it uses average speeds, which are too high for short trips and too low for long trips.
Relative to A2015, A2010 overestimates access for short travel time thresholds, because it uses regional average speeds, rather than the lower speeds actually available for short trips that typically use less freeway.
As shown in the figure below, this pattern is largely, but not uniformly, borne out by comparison of the metropolitan averages. On average, A2010 is higher than A2015 for thresholds of 10, 20, and 30 minutes, but lower for 40, 50, and 60 minutes. The averages are very close between 30 and 40 minutes, which is longer than the average daily commute in these cities (a bit under 30 minutes on average).
I am actually surprised (and pleased) at how close the overall average is, given all of the assumptions that went into the macroscopic estimate. Macroscopic approximations can work well for a general understanding of the patterns of cities, but they remain macroscopic approximations. More detailed analysis is necessary to get at the spatial differences and irregularities in those general patterns, which we illustrate on the maps, differences that give life to individual cities.
• This study estimates individual and population exposure to ultra-fine particles in the Minneapolis – St. Paul (Twin Cities) metropolitan area, Minnesota.
• We combine a model of on-highway size-resolved UFP concentrations with GPS travel patterns.
• The most significant exposures were found at freeway interchanges.
• Peak-hour concentration is about 10 times the off-peak concentration.
• Peak-hour exposure is about 100 times its off-peak counterpart.
Abstract
Prior research on ultrafine particles (UFP) emphasizes that concentrations are especially high on-highway, and that time spent on highways contributes disproportionately to total daily exposures. This study estimates individual and population exposure to ultrafine particles in the Minneapolis – St. Paul (Twin Cities) metropolitan area, Minnesota. Our approach combines a real-time model of on-highway size-resolved UFP concentrations (32 bins, 5.5–600 nm); individual travel patterns derived from GPS travel trajectories collected in 144 individual vehicles (123 h at locations with UFP estimates among 624 vehicle-hours of travel); and loop-detector data indicating real-time traffic conditions throughout the study area. The results provide size-resolved spatial and temporal patterns of exposure to UFP among freeway users. On-highway exposures demonstrate significant variability among users, with the highest concentrations during commuting peaks and near highway interchanges. Findings from this paper could inform future epidemiological studies of on-road exposure to UFP by linking personal exposures to traffic conditions.
There is a lot of noise about how voting for a third party is “wasting your vote” and taking it away from the only real candidates in this election, those of the two largest parties (The Democrats and the Republicans). Only one person can be elected, and only one other person has a reasonable shot of being elected if the first one isn’t.
In some sense, this is true. Gary Johnson (Libertarian) and, save us, Jill Stein (Green) are highly unlikely to win. So a vote for them prevents you from voting for the alternative you prefer between the two candidates who might plausibly win.
Leaving aside arguments such as how well Ross Perot, George Wallace, Theodore Roosevelt, and Abraham Lincoln have done historically, suggesting there is a possible role for Third Parties in US Presidential Elections, just as there is in Governors’ races (Jesse Ventura), I want to propose another argument.
The system of two major parties will never be broken if everyone always votes for a candidate from the two major parties. It’s basically a mutually-reinforcing belief system. If you believe everyone else believes that a third party cannot win, you yourself will believe that a third party cannot win, and you will not vote for a third party, wanting your vote to count toward a winner. The system cannot change under that logic.
We are stuck in what Richard Lipsey and Kelvin Lancaster dubbed a “second best” solution (the best solution given the imperfections of the rest of the world differs from the best solution if the rest of the world were optimal).
The only way to change beliefs about what is possible in the future is to act differently. If we want what we perceive as a first-best world where the candidate we like might win, we are going to have to vote for those candidates in earlier elections to change the belief structure of other voters about their electability.
Now this election is especially risky (they all are, but this one more so), in that a demagogue is closer to power than usually occurs. However the likelihood of any one vote being the marginal vote is infinitesimally small.
People will trot out the 2000 election and Nader and Gore to illustrate the risk of protest votes. Gore lost for lots of reasons; there are many ‘butfors’, Nader possibly among them, but hardly the only one. (Sighs: Bill Clinton, butterfly ballots, hanging chads, the US Supreme Court, Gore’s team asking for a partial but not total recount, and so on.)
Yet there are a few million people in this country who think we should have a Green Party government. I think they are wrong. I think Greens should elect some actual Legislators and Senators before they try to run for the Presidency (they do have some Minneapolis City Council members, including one from my own district). But I also think they should have a voice. And if, bless your soul, you think Moscow should run US foreign policy and that vaccines are on net bad, you should vote for Jill Stein (and rethink your life choices). Or even if you think the Greens would be better in the long run, and government should be more green, and want to move the Overton Window in the green direction, you can justify a vote for Stein.
Similarly, this year Gary Johnson is the Libertarian nominee, and he is a more serious threat to the political party establishment, in that he (and his VP) have more governing experience than the major party candidates, and they are polling in the neighborhood of 10%. If your first-best world has a Libertarian President and Congress (or you simply think we should move farther in the direction of lower-case “l” libertarianism without being absolutely upper-case “L” Libertarian), you should vote for Johnson/Weld. Again, it would be good for the Libertarians to show success at lower levels of government and in Congress (beyond Ron Paul) before trying to take over the Executive Branch, but we can’t always get what we want.
A serious showing in 2016 helps a third party’s candidates in 2020. It reframes people’s expectations. It moves towards your first-best world in the long run, even if it is suboptimal from a second-best perspective in the short run. Once a third party gets in the 30% range of support, it becomes a plausible, unwasted vote in the short term. The third party will be unlikely to get 30% support before it gets 20% or 10%. That could all happen in one election cycle, or it could take multiples.
Of course, if you like either the Democratic or Republican nominees best, you should vote for them.
Politics may seem like a one-shot game, but it is in fact repeated. If all goes well, I will spend as much of my life in the next administration, between 2017 and 2021, as in the following administration, between 2021 and 2025. The rules for electing Presidents in the US may be nuts, but they are the only rules we have right now.
Access to opportunities such as jobs and services is one of the main benefits of public transit. To ensure this benefit is maximized, transportation planners are increasingly seeking to distribute transportation resources as fairly as possible in order to provide a variety of options to commuters and increase their access to opportunities.
Typically, transportation accessibility is measured using the number of opportunities that can be reached within a given time threshold. For example, a planner might look at how many jobs residents of a socially disadvantaged neighborhood can reach within 45 minutes to see where improvements might be needed. However, these traditional accessibility measurements have a significant shortcoming.
“Research shows us that low-income and socially disadvantaged individuals are the most likely to be transit dependent and face barriers to accessing their desired destinations,” says David Levinson, a professor in the Department of Civil, Environmental and Geo-Engineering. “If we only look at time as a constraint on accessibility, we leave the crucial factor of financial access out of the equation. For low-income populations, transit fares can present a major barrier to accessibility, since fares can consume a large share of individuals’ budgets. As a result, planners and researchers may overestimate job accessibility, particularly for low-income riders.”
In recent research, Levinson and his co-authors developed a set of innovative accessibility measures that incorporate both travel time and transit fares. Then, they applied those measures to determine whether people living in socially disadvantaged neighborhoods—in this case, in Montreal, Canada—experienced the same levels of transit accessibility as those living in other neighborhoods. Finally, they compared the results of their new measurement with traditional accessibility measures that account only for travel time.
“We found that accessibility measures relying solely on travel time estimate a higher number of jobs than our measure,” says Levinson. “For the most socially disadvantaged residents, factoring in a single fare reduces job accessibility 50 percent; adding a monthly pass reduces it 30 percent.”
The study also found that public transit generally favors vulnerable populations in Montreal. “Low-income populations generally reside in the central city, near transit stations and job concentrations. Higher-income populations are concentrated in suburban areas, and suburban fares are much more expensive. So in this case, residents of socially disadvantaged areas have more equitable accessibility to jobs even when fare cost is included,” he explains.
The new accessibility measure offers several benefits for transportation planners, Levinson says. First, it will allow them to better explain to policymakers the number of jobs a resident can reach for a given cost, thereby allowing fare structures and hourly wages to be judged against the cost of commuting. In addition, it can help planners identify neighborhoods that need transportation benefits the most and provide broader insight for the transportation community into how combined measures of accessibility can be used to better understand the impact of transportation planning decisions.
This research was conducted as a collaboration between Levinson and the Transportation Research at McGill (TRAM) group, which is led by Ahmed El-Geneidy, associate professor at McGill University in Montreal and a former U of M researcher.
Networks play a role in nearly all facets of our daily lives, particularly when it comes to transportation. Even within the transportation realm lies a relatively broad range of different network types, such as air networks, freight networks, bus networks, and train networks (not to mention the accompanying power and communications networks). We also have the ubiquitous street network, which not only defines how you get around a city but also provides the form upon which our cities are built and experienced. Cities around the world that are praised for having good street networks come in many different configurations, ranging from the medieval patterns of cities like Prague and Florence, to the organic networks of Boston and London, and the planned grids of Washington D.C. and Savannah, Georgia. But how do researchers begin to understand and quantify the differences in such networks?
PRIMAL STREET NETWORK FOR METROPOLIS WITH INTERSECTIONS AS NODES AND SEGMENTS AS LINKS
The primary scientific field involved with the study of shapes and networks is called topology. Based in mathematics, topology is a subfield of geometry that allows one to transform a network via stretching or bending. Under a topological view, a network that has been stretched like a clock in a Salvador Dali painting would be congruent with the original, unstretched network. This would not be the case in conventional Euclidean geometry where differences in size or angle cannot be ignored. The transportation sector typically models networks as a series of nodes and links (Levinson & Krizek, 2008). The node (or vertex) is the fundamental building block of the model; links (or edges) are not independent entities but rather are represented as connections between two nodes. Connectivity – and the overall structure of the network that emerges from that connectivity – is what topology is all about. In other words, topology cares less about the properties of the objects themselves and more about how they come together.
For instance if we look at the topology of a light rail network, the stations would typically be considered the nodes and the rail lines would be the links. In this case, the stations are the actors in the network, and the rail lines represent the relationships between the actors (Neal, 2013). Those relationships – and more specifically, those connections – embody what is important. Taking a similar approach with a street network, we might identify the intersections as the nodes and the street segments as the links as shown in the network based on an early version of Metropolis above (Fleischer, 1941). For most street networks, however, the street segments are just as important as the intersections, if not more so. The ‘space syntax’ approach takes the opposite (or ‘dual’) approach with street networks: the nodes represent the streets, and the lines between the nodes (i.e. the links or edges) are present when two streets are connected at an intersection, as shown below using the same Metropolis network (Jiang, 2007).
DUAL STREET NETWORK VIEW OF METROPOLIS WITH SEGMENTS AS NODES AND INTERSECTIONS AS LINKS
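To make the primal/dual distinction concrete, here is a minimal sketch in Python using the networkx library. The toy 2×3 grid is my own stand-in, not the Metropolis network from the figures, and the dual is approximated by the line graph, in which each segment becomes a node and two segments are linked when they meet at an intersection:

```python
# Minimal sketch: primal vs. dual views of a toy street grid (illustrative only).
import networkx as nx

# Primal view: intersections are nodes, street segments are links.
primal = nx.grid_2d_graph(2, 3)  # a small block of 6 intersections

# Dual (segment-based) view: each segment becomes a node; two segments are
# linked when they share an intersection. The line graph gives this directly.
dual = nx.line_graph(primal)

print("Primal:", primal.number_of_nodes(), "nodes,", primal.number_of_edges(), "links")
print("Dual:  ", dual.number_of_nodes(), "nodes,", dual.number_of_edges(), "links")
```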
Initial theories related to topology trace back to 1736 with Leonhard Euler and his paper on the Seven Bridges of Königsberg. Graph theory based topological measures first debuted in the late 1940s (Bavelas, 1948) and were initially developed in papers analyzing social networks (Freeman, 1979) and the political landscape (Krackhardt, 1990). Since then, topological analyses have been widely adopted in attempting to uncover patterns in biology (Jeong, Tombor, Albert, Oltvai, & Barabasi, 2000), ecology (Montoya & Sole, 2002), linguistics (Cancho & Sole, 2001), and transportation (Carvalho & Penn, 2004; Jiang & Claramunt, 2004; Salingaros & West, 1999). Topology represents an effort to uncover structure and pattern in these often complex networks (Buhl et al., 2006). The topological approach to measuring street networks, for instance, is primarily based upon the idea that some streets are more important because they are more accessible, or in the topological vernacular, more central (Porta, Crucitti, & Latora, 2006). Related to connectivity, centrality is another important topological consideration. A typical Union Station, for example, is a highly central (and important) node because it acts as a hub for connecting several different rail lines. Some common topological measures of centrality include Degree and Betweenness, which we will discuss in more detail in subsequent sub-sections.
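As a quick illustration of those two measures (a toy hub-and-spoke example of my own, not drawn from the sources cited above), Degree and Betweenness can be computed directly with networkx, and the hub unsurprisingly dominates both:

```python
# Sketch: degree and betweenness centrality on a toy hub-and-spoke network,
# illustrating why a Union Station-like hub scores as highly central.
import networkx as nx

hub_and_spoke = nx.star_graph(5)  # node 0 is the hub; nodes 1-5 are spokes

degree = nx.degree_centrality(hub_and_spoke)
betweenness = nx.betweenness_centrality(hub_and_spoke)

print("Hub degree centrality:      ", round(degree[0], 2))       # 1.0
print("Hub betweenness centrality: ", round(betweenness[0], 2))  # 1.0
print("Spoke degree centrality:    ", round(degree[1], 2))       # 0.2
```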
There are also some peculiarities worth remembering when it comes to topology.
When thinking about the Size of a network, our first inclination might be measures that provide length or area. In topological terms, however, Size refers to the number of nodes in a network. Other relevant size-related measures include: Geodesic Distance (the fewest number of links between two nodes); Diameter (the highest geodesic distance in a network); and Characteristic Path Length (the average geodesic distance of a network).
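A brief sketch of these size measures on a toy 4×4 grid (my own illustrative example, with networkx’s functions mapped onto the definitions above):

```python
# Sketch: topological "size" measures on a toy 4x4 street grid.
import networkx as nx

g = nx.grid_2d_graph(4, 4)  # 16 intersections

size = g.number_of_nodes()                             # Size: number of nodes
geodesic = nx.shortest_path_length(g, (0, 0), (3, 3))  # Geodesic distance between opposite corners
diameter = nx.diameter(g)                              # Diameter: largest geodesic distance
cpl = nx.average_shortest_path_length(g)               # Characteristic path length

print(size, geodesic, diameter, round(cpl, 2))
```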
Density is another tricky term in the topological vernacular. When talking about the density of a city, we usually seek out measures such as population density, intersection density, or land use intensity. In most cases, these metrics are calculated in terms of area (e.g. per km2). In topology, however, Density refers to the density of connections. In other words, the density of a network can be calculated by dividing the number of links by the number of possible links. Topologically, the fully-gridded street networks of Portland, OR and Salt Lake City, UT are essentially the same in terms of Density; with respect to transportation and urbanism, however, there remain drastic differences in functionality between the 200’ (~60 m) Portland blocks and the 660’ (~200 m) Salt Lake City blocks.
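In the same toy setting, topological Density is simply links divided by possible links; note that a Portland-style grid and a Salt Lake City-style grid with the same node and link counts would be indistinguishable here, whatever their block lengths:

```python
# Sketch: topological density = links / possible links (undirected, no self-loops).
import networkx as nx

g = nx.grid_2d_graph(4, 4)
n, m = g.number_of_nodes(), g.number_of_edges()

density_by_hand = m / (n * (n - 1) / 2)
print(round(density_by_hand, 3), round(nx.density(g), 3))  # both give 0.2
```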
As illustrated with the Portland/Salt Lake City example, one limitation of topology is that it ignores scale. However, this can also be an advantage. For instance, Denver might be much closer to Springfield, IL than Washington, DC as the crow flies, but a combination of several inexpensive options for direct flights to DC and relatively few direct flight options for Springfield mean that DC is essentially closer in terms of network connectivity. Topology captures such distinctions by focusing on connectedness rather than length.
While topological analyses such as the above are scale-free, we also need to be careful about the use of this term, because scale-free networks are not equivalent to scale-free analyses. In topological thinking, scale-free networks are highly centralized. More specifically, if we plot the number of connections for each node, the resulting distribution for what is known in topology as a scale-free network would resemble a power law distribution, with some nodes having many connections but most having very few. A hub-and-spoke light-rail system, for instance, tends to exhibit scale-free network qualities, with relatively few stations connecting many lines. The nodes in a random network, on the other hand, tend to have approximately the same number of connections. For instance, when we define the intersections of a street network as the nodes and the segments as the links, the result tends towards a random network. If we flip the definition again, so that the streets are the nodes and the intersections the links, we trend back towards a scale-free network (Jiang, 2007; Jiang & Claramunt, 2004).
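As a rough sketch of the scale-free/random contrast (an abstract toy comparison of my own, not a reproduction of Jiang’s street-network analysis), compare the degree distribution of a grid with that of a preferential-attachment network:

```python
# Sketch: degree distributions of a grid-like network (roughly uniform degrees)
# versus a scale-free Barabasi-Albert network (a few hubs, many low-degree nodes).
from collections import Counter
import networkx as nx

grid = nx.grid_2d_graph(10, 10)                 # degrees cluster around 2-4
scale_free = nx.barabasi_albert_graph(100, 2)   # power-law-like degree distribution

grid_degrees = Counter(d for _, d in grid.degree())
sf_degrees = Counter(d for _, d in scale_free.degree())

print("Grid degree counts:      ", sorted(grid_degrees.items()))
print("Scale-free degree counts:", sorted(sf_degrees.items()))
```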
One reason to look at connectivity in these terms has to do with the critical issues of resilience and vulnerability. In general, robustness is associated with connectivity. When we have good connectivity, removing one node or link does not make much of a difference in terms of overall network performance. In contrast, scale-free networks are more susceptible to strategic attacks, failures, or catastrophes. However, as shown in a recent paper about urban street network topology during a Zombie apocalypse, good connectivity could actually be a double-edged sword (Ball, Rao, Haussman, & Robinson, 2013).
You can’t swing a dead cat without hitting another story about driverless cars, shared taxis, new mobility, or other tech-oriented developments in passenger travel. Invariably, these stories treat the technological advance as the Big Change for travel, where being able to summon a cab on your mobile phone changes everything we know about the world. However, looking at the future of transportation through just a technological lens misses the biggest economic shift underway—the shift from high average cost/low marginal cost travel to low average cost/high marginal cost travel. While this shift is enabled by technology, it is not just technology.
Eliminating drivers will not automatically reduce transportation costs to something approaching zero. (I will explain this in a different post, but the short explanation is that drivers do far more than just drive.) Cars will still cost money to build, maintain, and operate. Recently I saw a quote from Emily Castor of Lyft where she said that even if autonomous vehicles cost $500,000 each, the cost of a Lyft ride will remain the same as it is today. This claim suggests that even companies in the middle of “disrupting” transportation don’t really know where the field is going or what the relationships between capital and operating costs are. In short, capital costs cannot be separated from operating costs.
Consider the current cost of driving in the US. Once you buy and insure a car, driving is extremely cheap for most trips. Most trips begin and end with free parking, and to drive a few miles requires fuel costs measured in nickels and dimes, not dollars. (A car that gets 20 miles per gallon will use $0.50 in fuel for a five mile trip if gas is $2 per gallon, for instance.) Automobility in the US features high average costs (some fraction of the fixed costs of buying and insuring plus the cost of travel) and very low marginal costs (the cost of an additional trip).
Now consider the cost of taxis (or transit). You don’t pay any fixed costs, but the cost of an individual trip is high. A taxi trip of five miles might cost $10 or more, where the same marginal cost for driving was $0.50. A transit trip will cost a few dollars as well, plus additional travel time. Considering these differences in marginal costs, I may drive for a bag of potato chips if I’m in the mood, but I’m not going to pay an Uber or taxi to take me to the Circle K and back. Potato chips aren’t that valuable.
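A back-of-the-envelope sketch of that contrast, using the round numbers from the text plus assumed annual fixed costs (the fixed-cost and mileage figures below are illustrative assumptions, not data):

```python
# Illustrative sketch: average vs. marginal cost of a 5-mile car trip vs. a taxi.
ANNUAL_FIXED_COST = 7000.0   # assumed: depreciation + insurance, $/year
ANNUAL_MILES = 10000.0       # assumed annual mileage
FUEL_PRICE = 2.0             # $/gallon (from the text)
MPG = 20.0                   # miles per gallon (from the text)

trip_miles = 5.0
marginal_cost = trip_miles / MPG * FUEL_PRICE                      # fuel only: $0.50
average_cost = trip_miles * (ANNUAL_FIXED_COST / ANNUAL_MILES) + marginal_cost

taxi_fare = 10.0  # from the text

print(f"Marginal cost of driving: ${marginal_cost:.2f}")
print(f"Average cost of driving:  ${average_cost:.2f}")
print(f"Taxi fare, same trip:     ${taxi_fare:.2f}")
```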
This shift in costs will affect the travel we undertake, and many firms will aim to get the costs of travel as low as possible. In short, as the cost of travel increases per trip, we will travel less.
As an example, when I lived in New York City, I would sometimes use Amazon Prime Now for two hour deliveries since they charged $6 to deliver while my subway fares would have been $5.50. Now that I live in Phoenix, I either ride my bike or make a short drive with free parking that saves me most of the delivery fee. In NYC I transferred my trip to a delivery person. Hopefully that person will use the transport systems more efficiently than a bunch of individual travelers out for sacks of coffee beans.
The shift to higher trip costs that reflect fixed and operating costs will be the factor that affects how and how much we travel in the future. If trip costs are high, then we will travel less and demand for effectively zero cost travel—think walking and biking—will increase. If trip costs remain low, then we will travel more by car.
How low can firms get the costs of travel? Probably not as low as they think. There are costs they don’t control, including paying for infrastructure through fuel taxes, road pricing, parking charges, congestion pricing, etc. Plus someone has to buy and maintain all those vehicles that will get hired, and those costs won’t get lost in the haze of public subsidy (hopefully). Somebody will want a return on their capital investment—just like a real company!