
News

News Blog

Trying to eliminate bias within the reviewing process

As empirical social scientists we hope to be non-biased and guided by data, but the reality is that we all have our own biases which can influence the review process.

Prof. Barbara B. Flynn



For instance, some potential authors rightfully worry that reviewers may perceive that certain types of data, or data from specific places, are not reliable.


We hope that JSCM’s record of publishing all types of empirical research, using data from many parts of the globe, counters that concern, while acknowledging that each reviewer still brings their own biases to the process.

Some recent examples of different methods:

Case studies: A Mid‐Range Theory of Control and Coordination in Service Triads, by Marko Bastl, Mark Johnson & Max Finne

Secondary data: Supply Chain Power and Real Earnings Management: Stock Market Perceptions, Financial Performance Effects, and Implications for Suppliers, by Danny Lanier Jr., William F. Wempe & Morgan Swink

Experiments: Organizational Communication and Individual Behavior: Implications for Supply Chain Risk Management, by Scott DuHadway, Steven Carnovale & Vijay R. Kannan

Engaged research: Ramp Up and Ramp Down Dynamics in Digital Services, by Henk Akkermans, Chris Voss & Roeland van Oers

Recent examples using data from outside North America:

Data from Germany and Japan: Managing Coopetition in Supplier Networks – A Paradox Perspective, by Miriam Wilhelm & Jörg Sydow

Data from China: Inside the Buying Firm: Exploring Responses to Paradoxical Tensions in Sustainable Supply Chain Management, by Chengyong Xiao, Miriam Wilhelm, Taco van der Vaart & Dirk Pieter van Donk

Data from Korea: The Effects of Supply Chain Integration on the Cost Efficiency of Contract Manufacturing, by Yoon Hee Kim & Tobias Schoenherr

As editors we cannot control these individual biases, but we can and do try to provide a fair and developmental review process by doing the following:

First, our reviewer pool, like the JSCM community, is global. Our Editorial Review Board includes members from more than 18 countries.


Second, all reviews are double blind. We don’t want an early-stage researcher discounted for not yet having a name, nor a researcher at a university with a low research profile judged on that profile. Research should be judged on its contribution and fit with JSCM, not on who the authors are or where they work. Double-blind review helps ensure this happens.

Third, we generally use a review team composed of three reviewers plus an AE. While we all bring our biases to the process, having four people rather than two or three involved reduces the influence of any one person. In addition, using three reviewers makes it easier to maintain our standards: if a reviewer does not turn in a developmental review, or if their review is otherwise unprofessional, we need not use it. We also follow up with reviewers who are not being developmental to make sure our expectations are clear. Reviewers who do not meet our community’s standards will not be reviewers for JSCM for long.

Fourth, reviewers are selected based on their topical or methodological expertise; the reviewers will typically have worked in the same space and/or have used the same empirical tools. Another benefit of the larger review team is that it gives us more room to include both topical and methodological experts. Finally, when we use new reviewers, we typically add them as a fourth reviewer in case they don’t yet understand the community’s expectations, and new reviewers are given feedback from the AE and the Editor to help them improve.


Is this process bias free? Of course not. That is why we are trying to engage with the community more frequently and in new ways. So if you have actionable suggestions on how we can improve, we would like to hear them.


Jacqueline Jago