We are pleased to announce the launch of a new section of Findings: Urban Findings, following the Findings model of short, to-the-point research findings in the broad field of urbanism. The Editorial Board is here, along with an inaugural set of papers here.
At the start of the process, we sent out a call for papers through our Editorial Board members. We thank them for generously putting their own time and effort into these papers, reaching out to their networks and students for contributions, and helping out with the review process. We are all the more grateful because this process unfolded at a time that was extremely busy for every academic around the globe, as we grappled with the shifted reality of combining online and face-to-face teaching and the new reality of virtual conferences.
The papers focus on a diverse set of issues around urbanism. As these papers demonstrate, the application and novel use of new sources of data, and the development of models and methods in quantitative urbanism, are growing by leaps and bounds. A topical theme was COVID-19, which we have all witnessed this past year, and which has understandably changed, and will continue to change, the way we think of cities. The papers span broad application and method areas: from model-based creation and evaluation of synthetic cities, to empirical research on people, cities, and housing across Australia, the US, and Canada, to large-scale survey design and application, and even meta-analyses such as tracking the presence of urbanism on social media and the interaction of climate change and housing. Also observed in the papers is the recurring theme of the close interaction between transport and locational behaviours, and the resulting area of land use and transport interactions; we truly cannot think of cities without thinking of location (urban) and movement (transport) as an integrated whole. This brings us full circle to the reason why we thought Urban Findings should sit under a common umbrella with Transport Findings.
So, along with the launch, this is a call for regular submissions to Urban Findings (and Transport Findings): short, to-the-point research focussed on cities. We look forward to some excellent work!
I recently received the following from an Elsevier editor at a prominent journal.
Dear Prof. Levinson, I am writing to ask you to reconsider your decision to decline the invitation to review the above paper. As an author who has a paper submitted to Transportation Research Part A, you should know how important is it to have good and prompt reviews. This is possible only if reviewers accept the invitation to review papers. As a very experienced past editor in chief told me once “if you wish your paper to be reviewed, you need to do your share for the journal”. I believe this is a fair comment. Hope you can reconsider your decision. Best wishes,
The implication: it would be a shame if you ever submitted to this journal again; the editors might not look favourably.
I have edited, for free, i.e. engaged in unpaid labour, for Elsevier’s Transportation Research Part A 31 times, according to my incomplete records. I have published in this same journal 15 times over the course of my career, usually with coauthors, providing free content which Elsevier resells.
I think I have done my “share” for the journal, owned by one of the most profitable companies in the world. But sure, if that’s how they want to play it, I am done. I am out. No more Transportation Research Part A submissions from me. I won’t stand for this kind of guilt-tripping combined with implicit threat, this distorted version of ‘pay to play’. The editors of the other Transportation Research parts have never been quite so blatant about demanding this-for-that. I said “no”; that should have been the end of it.
To be clear, when a reviewer declines to review a new paper, the editor can ask nicely again if they need to; it is even more important to ask on the second round. As an editor and founder of two open access journals, the Journal of Transport and Land Use and Transport Findings, I know finding responsive reviewers can be difficult. I wish there were more open access journals in transport, so we could spread the wealth.
But I also know what I don’t know. I don’t know the other demands on a reviewer’s time. I don’t know whether they have sick or disabled family members at home, have a book coming out, face project or proposal deadlines, are recovering from earthquakes or other natural disasters, have retired, are physically ill, have a conflict of interest with the paper, are reviewing for 100 other journals, or anything else.
What is the optimal size of a research paper? The answer, of course, is that it depends. Some research findings are complex, difficult to explain, and highly intertwined. Others are much more straightforward, using well-understood methods to observe something new. However, most papers in most journals are expected to be of a certain length. In transport journals, for instance, this length is typically 3500–8000 words. This leaves a lot of words to fill, and people often stuff them, or are asked to stuff them, with repetition of well-known and well-established theory, regurgitation of self-explanatory tables and figures, citation of tangentially related research, and other matters describing what was not done in the research. Without a tight word count restriction, authors have little recourse but to include filler at the behest of the almighty reviewers, or in anticipation of such a demand.
Transport Findings takes the opposite approach. With a 1000-word cap (plus a maximum of 3 figures and 3 tables), it demands authors get to the core of their results: what did they measure, and how did they measure it?
It’s really surprising what you can say in a few words. The Gettysburg Address was about 270 words, depending on the version.
Some people think fewer words means less work. The opposite is often true. Omitting needless words requires editing, and good editing takes time. The amount of time spent typing is not anywhere close to the amount spent reviewing, revising, and redacting in a brief text. We do that not for ourselves, but for our readers, to save them time, to help them see the point clearly without having to wade through a morass of miscellany and nonsense.
There are other reasons for short papers in addition to the benefits for the reader. They are faster to review, and so can go from conception to publication in less time than it takes some journals to move an article from their inbox to the review queue. I’d hypothesize (without any actual data, but impressionistically) that review time increases with the square of article length. So a 4000-word article will take reviewers on average 16 times longer to review than a 1000-word article, neglecting the fixed costs of getting people to read their email. Several things factor into this, most obviously the interactions of words in a text (a 4000-word article has far more textual interactions than a 1000-word article), but also the dread of reading a long rather than a short document for precisely that reason. And words beget words: a long document citing everyone but me can easily be made just a bit longer.
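The hypothesized quadratic relationship can be sketched in a few lines (a toy model, not measured data; the function name and the 1000-word baseline are illustrative assumptions):

```python
def relative_review_time(words: int, baseline_words: int = 1000) -> float:
    """Hypothesized review effort relative to a baseline-length article.

    Assumes review time grows with the square of article length,
    neglecting the fixed cost of getting a reviewer to open the email.
    """
    return (words / baseline_words) ** 2

# A 4000-word article versus a 1000-word article:
print(relative_review_time(4000))  # 16.0 times the baseline effort
```

On this assumption, halving a paper's length cuts the expected review burden by a factor of four, which is the economic case for the 1000-word cap.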
We are now able in the academic community to produce many different kinds of research outputs, ranging from raw data, to figures and charts, to regression analyses, to texts and papers. These can all be put online at data conservancies and given permanent identifiers. Peer review still has some cachet as a quality filter; let’s not waste the scarce time of volunteer reviewers with noise.
Harry Frankfurt wrote a book, “On Bullshit”, which Wikipedia summarizes as saying “bullshit is speech intended to persuade without regard for truth.” I think the problem is deeper than that. There is work generated for the sake of saying that work was done.
Consider peer review. I recently received reviews for a paper I co-authored in a good journal. The reviews were positive, except Reviewer #2 said the word X was not the right word. X is of course exactly the right word, but in order to get accepted we had to change the paper to make Reviewer #2 happy. [I refuse to accept the charitable view that this was Reviewer #2’s genuine belief; it is truly nonsense.]
We complied. We wasted our time to increase the utility of anonymous Reviewer #2 in order to satisfy the editor. Reviewer #2’s ego is boosted by having enforced compliance, increasing his relative status at the expense of ours, but since he is anonymous, only he knows. Reviewer #2 could have just said “Accept”, but that would be too easy; he felt he had to say something to prove he reviewed the paper. (R2 could be female, but he feels male.)
Now Reviewer #2 is not operating in a vacuum. Undoubtedly some unreasonable reviewer of one of his papers made him go through what he felt were ridiculous contortions, and this rolls downhill. To salvage ego, the abused child becomes an abuser, creating a new generation of abused spouses and children.
What we have here is a cycle of peer review violence: as more and more research is produced due to the increased productivity of academics (in part due to the rise of information technologies, but mostly the publish-or-perish culture driven by university ranking systems, driven by the desire to attract international students, driven by revenue), more review requests are generated, more reviewers get more annoyed at the requests, and more hoops are laid out before us.
This Reviewer #2 was actually not so bad. Many others are unhappy if you don’t regurgitate all scientific knowledge up until the present day, and lay out all prospective policy outcomes going forward. This attitude has led to an explosion of paper lengths.
Reviewers would be much more polite were reviews not anonymous. But this raises other problems: junior people would be afraid to confront senior people in cases which were not bullshit. Unless everything is open, an open (non-blind) review policy at a single journal cannot extract fair reviews; retaliation, on say grant reviews or promotion reviews, which remain anonymous, is a risk.
Without any peer review, Gresham’s Law of Journal Articles (bad knowledge drives out good) would surely apply. Peer review of some form is a Good Housekeeping Seal of Approval for scientific articles. But that said, what is really required here? If you, a journal, trust me enough to review (or edit) other people’s work, why do you not trust me enough to publish my own? There are a few arguments in favor of review:
First, we have Linus’s Law: “given enough eyeballs, all bugs are shallow,” and so some editorial review will improve quality and find problems. Good authors want good editors.
Second, the anticipation of peer review improves quality, as you know a paper will have to get through review to be published.
In contrast, however, if we have a peer review system where nothing passes the first round (regardless of how good it is), but many papers go into revise-and-resubmit limbo, authors will, in fact, submit lower quality work, wait for the reviewers to make their recommendations, and spend their scarce time trying to satisfy those reviewers instead of themselves. In short, we have constructed a system where peer review lowers the initial quality of submissions. We have become so afraid of publishing false positives (a wrong paper) that we create many false negatives (declining competent papers). History can judge false positives retrospectively just fine; we don’t actually need to spend so many resources doing this prospectively.
I have talked previously about how peer review also costs society knowledge, through the inevitable delay of the Revise and Resubmit round and the cost of going back to closed projects. Instead of rewarding academics by which journals they published in, reward them for how important their work is. This is either known, because history rewards them with citations, or arguable, because colleagues believe now that the future will recognize them.
Instead of over-relying on peer review, we should view it as a filter to ensure wrong or poorly written papers are not published, not a filter to ensure only perfect papers are published. We should have a system that rewards the creation of small (or large) academic building blocks, and lets scientists and engineers and even economists file their work respectably, at the appropriate length, as they develop it, without feeling the need to expand every research output into a whole new theory of civilization.
History can be the evaluator; it is attention which is now scarce, not the number of pages in a journal. Compliance with systems built for another age should be tossed out with those systems.
Seeking letters for promotion cases aims to ensure that an outsider (someone not at your university) says you do good work, because for some reason the university cannot trust its own people to make such a judgment. My promotion cases required 10 or 11 letters from academics at other universities saying that my work was good enough to warrant promotion, and I have written numerous anonymous letters myself. I have not retaliated (nor had the opportunity to ‘retaliate’ against a youngster I was offended by, or their senior allies) by trying to undermine a promotion case, but I can certainly see how some senior people might if they were offended by a junior faculty member somewhere, or by that junior’s senior colleagues, much as in peer review.
I understand that such letters help assure that promotion is warranted, but imagine Apple asking Microsoft, Google, and Facebook to write letters in support of promoting its own software engineers. That’s absurd. The evidence of my research is in my publications, and in other people’s citations of those publications, not in whether someone else says my work is good. My colleagues should be able to judge that. The evidence of my teaching is in whether my students learned (and retained) anything, not in end-of-semester surveys.
But if I, as a junior faculty member, know that I have to get 10 senior people to write letters for me, I will spend effort currying favour by doing things like reviewing papers when assigned by editors, serving on committees, and so on. In short, I will comply in advance so that the favour will be returned. Letter writing structurally enforces compliance on the part of junior faculty.
Let universities take on the burden themselves of deciding whom they should promote, rather than offloading this to the community. If they don’t feel comfortable assessing their own staff, maybe that’s a field they shouldn’t be in.
Sometimes compliance-enforcement takes an even more ridiculous turn. I recently had a conference paper at an Australasian conference accepted on its substance but declined because of some mysterious MS Word formatting problem that I refused to spend even more time rectifying after two previous revision attempts. Despite my using the organisers’ template, they decided the paper somehow didn’t meet the correct format, and so it was rejected. Obviously it’s their loss: they’ll miss me and our research (and my student, who would also have presented something else), and the revenue we would have paid to attend the conference. If the papers were to be published in a book, I might understand why this matters, but that was not in fact the case; it was simply for electronic distribution, and for the aesthetic judgment of the organiser, which is lacking (obviously, as it was a pretty ugly MS Word template to begin with).
Now I understand Tyler Cowen’s quote (I can’t find the original, but essentially):
“The most important thing I learned in my PhD was to get the margins on the pages right.”
Back in the day, an older woman in the registrar’s office would go through your thesis or dissertation with a ruler and measure to make sure the margins on each page were just so. And if not, the dissertation would be turned back, and you got to reformat it. This was the University’s final lesson in compliance.
But really, why does marginal perfectionism matter? We did it because the system required it. Fortunately, this particular requirement has disappeared, but why did the system require it? One imagines so that reproductions of the dissertation on a smaller sheet of paper would not lose important information. There may have once been a good reason, but margin size enforcement was promulgated as a rule that lasted long past the original need.
At a major university with which I have an affiliation, I am working on establishing a new degree program. This is a relatively cost-free enterprise for the university; the units of study are almost entirely already offered. However, to get the program established, we have to have an Expression of Interest vetted by the three faculties involved, including two committees in my faculty as well as two committees at the university level. Then we have a proposal, whose form is 57 pages, and then we need to go through all the same committees again. I am told the 57-page form is designed to dissuade people who aren’t serious. But for those who are, that form, and all the meetings, for something so technically simple to implement, is pointless.
One of the faculty committees has about 50 members, whose job, apparently, is to ratify what the other committees said and supervise one or two full-time staff members.
Let a thousand degrees flourish, and if they don’t succeed, they can be cancelled.
Just as universities accredit students (who undoubtedly think exams, homework, and projects are a nuisance), degree programs often go through accreditation themselves to show that the curriculum they require students to engage in comports with what the industry associations who control ABET think is important (or was important, as this is an exceptionally conservative process designed to stifle innovation). The last time I went through this (fortunately I did not have to lead it in my Department), each required course produced a notebook with samples of poor, average, and good student work for each assignment, as well as printouts of the assignments and other miscellany. It is pretty clear the review panel did not actually review the contents of each notebook; they may have sampled them. The wall of notebooks was there to demonstrate compliance. Each assignment was cross-referenced against the objectives and qualities students were supposed to attain by successfully completing it. While this sounds good in principle, it is basically a database exercise, labelling things as satisfying objectives rather than changing things to meet objectives.
Let universities produce students whose value is that they graduated from a university that taught what it thought important; if that aligns with market demand, all the better. It is not as if students don’t also have to take and pass exams to become a Professional this-or-that (and an education that helped would be appropriately recognized), or as if universities don’t have well-established and largely self-fulfilling reputations.
Academics do nothing if not evaluate each other’s work. The amount of time spent writing letters of recommendation, evaluating promotion cases, reviewing proposals and each other’s programs, and conducting peer reviews of articles is surprisingly, and in my view unnecessarily, high. It is academia generating work for academics who ought to be in the primary business of creating and transmitting knowledge, not evaluating knowledge creation and transmission. It is, in economic terms, a deadweight loss. If all this evaluation sufficiently improved the quality of knowledge production or transmission, it might not be, but I see no evidence that this is the case. We adopt the forms because those before us adopted the forms.
which garnered many likes. But of course Twitter is no place to have a discussion like this. So this is what I am thinking:
Journal Name: Transport Findings
Open Access. A flat $50 fee payable on submission (with no guarantee of acceptance) and $50 payable on acceptance. This filters out the cranks and covers limited typesetting, article charges, hosting, etc. See the Scholastica website for their costs (the platform looks good for this); if I read it right, this price would more or less cover fixed costs if we had 50 articles per year. This handbook is also of interest.
Maximum word count of 1000 (including References). Maximum Figure count of 3, Table count of 3.
The new journal would not be affiliated with existing journals (this creates confusion on the part of authors and reviewers).
Peer Review by 1 Reviewer drawn from the Editorial Advisory Board. (We add to the EAB if we cannot find someone who can review the article). Everyone who has reviewed in the past 3 years stays on the EAB. The Review should be done in 1 month. So while the Review is anonymous, the reviewers overall are all known.
Articles must be either New Question, New Method, New Data, or New Finding (i.e. it can almost exactly replicate a previous study and find something different), or some combination of the above.
The acceptance test is whether it satisfies the above and appears scientifically correct (no obvious mistakes/flaws) and replicable, and quality of English.
The journal has Accept/Reject decisions only. (Obviously people can submit again if they want to change the manuscript, however NEW submission, NEW reviewer, NEW fee). Acceptance Letters can add some minor comments. No Revise & Resubmit.
Scope: Findings in the broad field of transport
All data must be publicly available if possible (goes to replicability, caveats for personally identifying information)
No special issues, themes, or anything like that, the journal is basically just a list of peer-reviewed short articles in reverse chronological order.
There is a standard template for article submission (I would say a web form, but those handle equations, figures, and tables poorly), something like:
AUTHORS (NAME, AFFILIATION, CONTACT)
1. QUESTION AND HYPOTHESES
2. METHODS AND DATA
No sections titled Introduction, Literature Review, Theory, Discussion, or Conclusions.
Comments on Twitter, I guess.
Now I am not thinking I should run this journal (I already have my hands full), but that it should exist. I am happy to help if someone has the energy to organize it. It should be fairly straightforward and mostly self-organizing to the point of being self-sustaining, but it does need an initial investment of energy to get there.
This leads me to the hypothesis that the primary purpose of academic Peer Review is not to review papers and give feedback to authors. It is instead to induce authors to submit work of high quality because they believe someone will read it.
Journals want to ensure good (e.g. novel and important) papers are accepted and bad (e.g. wrong or trivial) papers are rejected. In addition to this evaluative goal, peer review may also have a developmental goal, making papers better, as any paper can be improved. While this seems reasonable enough as a goal, its costs are unnecessarily high.
There are two sources of error that can occur, analogous to Type I and Type II errors in statistics (which is which depends on what you take as the null hypothesis, rejection or acceptance):
Error 1: Bad papers are accepted. … This is a false positive.
Error 2: Good papers are rejected. … This is a false negative.
There has been a great deal of ink spilled about the acceptance of bad papers and the retraction of wrong papers. Obviously we would prefer, as a community, not to accept bad papers, as doing so is embarrassing and may mislead researchers and the general public.
However, we spend so much time poring over papers (the amount of time academics spend reviewing other academics’ work would surprise an outsider) to ensure bad papers are rejected that we inevitably cast our net wide enough to reject good papers. And so we almost never accept good papers on the first round.
Any rejected paper can always be resubmitted, and a second (third, fourth, fifth) journal can get an opportunity to review it. This costs time. But more than that, it costs a significant amount of mental effort. When the paper was originally submitted, it was immediately after the research was completed. The ideas were fresh in the mind. The authors were somewhat enthusiastic about the topic. By submitting the paper, the authors mentally closed this project and opened the next one. But then 3 or 6 or 9 or 12 months later (or in one sad case of mine, 8 years!) the reviews come back. And the reviewers want some change; the reviewers always see some way the paper can be improved. And no doubt, in a perfect world with infinite time in a day, we would agree not only that this is an improvement, but that it is worth doing.
But instead, we are apathetic, antagonistic, or busy with other things, as what was closed has now been needlessly reopened for what is, most of the time, a very minor improvement, made so that the reviewer feels his or her fingerprints have affected the outcome of the paper.
Some of my coauthors are also faculty members, who should have motivation to revise and resubmit; this may be a few hours’ to a day’s worth of work in many cases, and is a far faster way to get a paper accepted than starting from scratch. But the mental burden and the pains of reopening work from 1 or 3 or 5 or 10 years ago are that great. I have more understanding for coauthors who are in industry, where the rewards from peer-reviewed publication are another line on the CV and maybe an attaboy (attagirl) and a beer from colleagues, but not existential in the way tenure is.
But instead of revising the paper, it sits.
I currently have about 10 papers in this state (almost enough to move someone from Assistant Professor to Associate Professor at many US universities), ignoring papers that have been fully abandoned and excluding papers that I have some confidence or hope will actually be revised and resubmitted soon. My coauthors have not yet made the necessary revisions (nor have I, but they were the lead authors and it was really their work), so the revisions were not done in a timely way, the original reviews effectively expired, and we have not sent the papers elsewhere. There are always reasons, with which I have empathy: coauthors have young families, new jobs, or are otherwise busy. In the end it is a question of priorities, and the personal benefit of publication for non-academics is not especially great; the benefits accrue to science and society at large. The positive spillovers cannot be captured.
And this is after I encourage, cajole, nag, and flog students and former students to revise and resubmit. And I suspect I am more systematic about this than most people. The amount of knowledge buried on people’s hard drives because of the peer review ‘revise and resubmit’ system is a huge loss to humanity and scientific progress.
Abstract: This study evaluates routes followed by residents of the Minneapolis–St. Paul metropolitan area, as measured by the Global Positioning System (GPS) component of the 2010/11 Twin Cities Travel Behavior Inventory (TBI). It finds that most commuters used paths longer than the shortest path. This is in part a function of trip distance (+, longer distance trips deviate more), trip circuity (−, more circuitous trips deviate less), number of turns (+, trips with more turns per kilometer deviate more), age of driver (−, older drivers deviate less), employment status (+, part-time workers deviate more), flexibility in work hours (+, more flexibility deviate more), and household income (−, higher-income travelers deviate less). Some reasons for these findings are conjectured.
Author keywords: Global positioning system (GPS); Shortest path; Route choice; Wardrop’s principles; Travel behavior.
The proper metric for an academic’s influence on the academic world of academic publishing is academic citations. An academic might make many (say 100) small contributions, each cited a small number (say 10) of times, or one contribution cited widely (say 1000 times). Neither is inherently superior, despite claims to the contrary, and for the academic in question, it was probably easier to write one widely cited piece than 100 smaller ones, but that was unpredictable at the time.
Academic citations are a cumulative count: they can never go down (they can with retractions, but we will neglect that). So by this measure senior academics on average appear more influential than younger academics, which of course they are. But this is not a useful measure for filtering prospective candidates for hiring and promotion, which is why these metrics exist: to sort people based on productivity and establish a social hierarchy.
So to begin, we have two corrections to make. First, senior academics have more opportunities to write papers. A junior academic simply has not had the cumulative time to author 100 papers. Second, the senior academic’s papers have had more time to accumulate citations. So I suggest dividing total citations by Years^2 to account for these two temporal accumulating factors.
But which “Years”? Years since terminal degree? This favors the young, who start publishing before their degree. Years since they began their degree? Almost no one has any paper in year 1 of their graduate career. So we can estimate and split the difference: years since graduation with the terminal degree, plus 2, on the theory that by the time you graduate you should have had at least 3 papers, which means you started publishing about 2 years before graduation. This is still highly sensitive to assumptions for younger academics; it will wash out for older academics. Domains will of course vary in publishing culture.
There are other problems, for instance co-authorship. At the extreme, all 108 billion people who ever lived have contributed fractionally to every paper, but they don’t all get co-authorship (except on experimental physics papers). Someone who puts all of their PhD students on all of their group’s papers is gaming the system to the detriment of those who assign more individually authored papers. So each citation should be divided by the fraction of authorship that the academic in question deserves. While this is impossible to assess exactly (promotion files sometimes ask for percentages on co-authored papers, but this is never systematically estimated or consistent), dividing by the number of authors on the paper is a good surrogate.
I am not in this business of bibliometrics, I will leave that to others. But hopefully someone in the industry (Scopus, Web of Science, Google Scholar) can run the proposed corrections on these databases and produce a normalized citation measure as a standard output.
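A minimal sketch of the proposed normalization, combining the author-count surrogate with the Years-squared correction (the function name and example numbers are hypothetical; the +2 career offset follows the estimate above):

```python
def normalized_citations(papers, years_since_degree):
    """Career-corrected citation score for one academic.

    papers: list of (citations, n_authors) pairs, one per paper.
    years_since_degree: years since graduation with the terminal degree.

    Each paper's citations are divided by its author count (the equal-share
    surrogate), and the total is divided by career length squared: one factor
    for the time available to write papers, one for the time citations have
    had to accumulate. The +2 offset assumes publishing began about two
    years before graduation.
    """
    career_years = years_since_degree + 2
    author_share = sum(citations / n_authors for citations, n_authors in papers)
    return author_share / career_years ** 2
```

On these assumptions, a junior academic three years past the degree with two papers, (40 citations, 2 authors) and (25 citations, 1 author), scores (20 + 25) / 25 = 1.8, directly comparable to a senior colleague's corrected score.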