Summary of “Our Scholarly Recognition System Doesn’t Still Work” panel

This post summarizes the “Our Scholarly Recognition System Doesn’t Still Work” panel, held at the Science of Team Science Conference (SciTS) on Friday, June 5, 2015, in Bethesda, Maryland, USA.

The panel organizers were Daniel S. Katz (U. of Chicago & Argonne National Laboratory), Amy Brand (Digital Science), Melissa Haendel (Oregon Health & Science University), and Holly J. Falk-Krzesinski (Elsevier).

The panel speakers were Robin Champieux (Oregon Health & Science University), Holly Falk-Krzesinski (Elsevier), Daniel S. Katz (U. of Chicago & Argonne National Laboratory), and Philippa Saunders (University of Edinburgh).

The slides from the panel are available at: http://www.slideshare.net/danielskatz/panel-our-scholarly-recognition-system-doesnt-still-work  (Slide numbers below refer to these slides.)  The notes on the talks are by Kenneth Gibbs (NIH), as enhanced by the speakers; the notes on the questions and discussion are by Daniel S. Katz (U. of Chicago & Argonne National Laboratory).

Daniel S. Katz started with an introduction (slides 2-6)

  • System recognizes individual accomplishment
  • “Author” no longer simply means the person who writes; the questions are who contributes, and which “contributors” become authors
    • Authorship order (differs by discipline)

Next, Holly Falk-Krzesinski talked about Rewarding Team Science (slides 7-10)

  • “Everyone on the team needs to get the same big, gaudy ring.” The Chicago Blackhawks’ equipment managers, trainers, and medical staff also got rings, in addition to the players and coaches.
  • Emphasis on individual accomplishment
    • Contributorship model (vs. authorship model)
  • Elsevier recognizes the distinct cultures and behaviors of each discipline
  • Impacts on bibliometrics and scientometrics for new contributorship models
  • Unintended consequences
    • Certain statuses for certain contributorship models

Daniel S. Katz next spoke for Amy Brand, who was unable to attend, on Beyond Authorship, CRediT taxonomy (slides 11-26)

  • What counts as “published” research: papers? Software?
  • Publications in physics can have as many as 3,000 authors
  • Standard tags for contributions in biomedical journals
  • To understand differences across fields, we need to know what the contributions are
  • ORCID is moving toward the CRediT taxonomy; tags need to be in NCBI JATS (Journal Article Tag Suite); a rough sketch of how such tagged records could be used follows this list
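
As a purely illustrative aside (not from the panel): the value of standard contribution tags is that contributor records become queryable and comparable across journals and fields. Below is a minimal sketch in Python, assuming hypothetical records tagged with CRediT-style role names; the record structure, people, and roles shown are invented for this example.

```python
from collections import Counter

# Hypothetical contributor records tagged with CRediT-style roles.
# The structure, names, and role strings are invented for illustration.
papers = [
    {"title": "Paper A",
     "contributors": [
         {"name": "Alice", "roles": ["Conceptualization", "Writing - original draft"]},
         {"name": "Bob",   "roles": ["Software", "Data curation"]},
     ]},
    {"title": "Paper B",
     "contributors": [
         {"name": "Carol", "roles": ["Formal analysis", "Writing - review & editing"]},
         {"name": "Bob",   "roles": ["Software"]},
     ]},
]

# With standard tags, the same tally can be run over any corpus and
# compared across journals or fields.
role_counts = Counter(
    role
    for paper in papers
    for contributor in paper["contributors"]
    for role in contributor["roles"]
)

for role, count in role_counts.most_common():
    print(f"{role}: {count}")
```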

Then Daniel S. Katz talked about Transitive Credit (slides 27-34)

  • Science relies on activities that aren’t fully recognized
    • Crediting indirect contributions to work
  • The existing citation system (designed for publications) doesn’t work well for things like software
  • How to do it: (A) decide what to credit, (B) determine how much credit each gets, and (C) have the person who registers the product also register its credit map (a minimal sketch follows this list)
    • The credit map can include both people and things (e.g., other products)
  • Will this work? Take a specific field and see.
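
To make this concrete, here is a minimal, hypothetical sketch of a transitive credit calculation in Python; the product names, people, weights, and credit maps are invented for illustration and are not from the panel. Each product registers a credit map assigning fractional weights to the people and products that contributed to it, and credit given to a product flows through those maps to the people behind its dependencies.

```python
# Hypothetical credit maps: each registered product assigns fractional
# weights (summing to 1.0) to the people and products that contributed to it.
# All names and numbers here are invented for illustration.
credit_maps = {
    "paper-X":   {"alice": 0.6, "bob": 0.2, "library-Y": 0.2},
    "library-Y": {"carol": 0.7, "dataset-Z": 0.3},
    "dataset-Z": {"dave": 1.0},
}

def transitive_credit(product, weight=1.0, totals=None):
    """Distribute `weight` units of credit for `product` to people,
    following credit maps through contributing products recursively."""
    if totals is None:
        totals = {}
    for contributor, fraction in credit_maps[product].items():
        share = weight * fraction
        if contributor in credit_maps:   # the contributor is itself a product
            transitive_credit(contributor, share, totals)
        else:                            # the contributor is a person
            totals[contributor] = totals.get(contributor, 0.0) + share
    return totals

# One unit of credit for paper-X is split among all the people behind it
# and its dependencies (values rounded for display).
totals = transitive_credit("paper-X")
print({person: round(share, 3) for person, share in totals.items()})
# {'alice': 0.6, 'bob': 0.2, 'carol': 0.14, 'dave': 0.06}
```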

Next, Robin Champieux spoke about the Force11 Attribution Working Group (slides 35-42)

  • Little of the information that contributes to science is captured or queryable; thus our understanding of scientific contribution is incomplete, and the decisions we make as a result are also incomplete
  • Force11 (http://force11.org)
  • What products do we need to consider? What contributions will be considered? What kinds of questions will people ask of the data?

To close the panelist presentations, Philippa Saunders talked about the Team Science Project (Academy of Medical Sciences) (slides 43-48)

  • Grew out of the Academy’s careers committee
  • Culture change
    • Example of culture change: open access through Wellcome Trust
  • Remove disincentives; train people so they can lead teams effectively

Finally, there were audience questions and discussion:

Q: How can we work together on these problems?
A: The Force11 Attribution Working Group is one possible venue

Q: How do we reach out to provosts and other key university figures who determine how promotion and tenure work, since we think that the current system doesn’t work well for those involved in team science?
A: Academy of Medical Sciences Careers Committee Task Group’s Team Science Project has done this in the UK
A: Individual university members have tried to reach out to their own administrations
A: There could be a workshop in the US to bring together VPRs and provosts to discuss this
A: Encourage policy makers to use funding pressures to push desired changes
A: Impact statements that funding agencies may require can be used in P&T discussions too

Additional points the audience members made include:

  • Increasingly fine-grained discussion of credit may hurt the functioning of teams that want to produce something that’s more than the sum of the individual contributions
  • The best teams may not include the most highly rated people or the best scientists; teams also need diversity and people who contribute strongly to team efforts
    • How to measure this – qualitative and quantitative measures may both help
  • Need to ensure the cost (in time) of quantitative measurements is lower than the reward that comes from measuring contributions
    • Want to do this as easily as possible when the manuscripts, software, etc. are being submitted/registered, not as a different step
  • What data do we already have that could be used to understand credit? For example, what would the IMDB dataset tell us about movie credit?

Disclaimer: Some of the author’s work was supported by the National Science Foundation (NSF) while he was working at the Foundation; any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the NSF.

Published by:

Daniel S. Katz

Chief Scientist at NCSA, Research Associate Professor in CS, ECE, and the iSchool at the University of Illinois Urbana-Champaign; works on systems and tools (aka cyberinfrastructure) and policy related to computational and data-enabled research, primarily in science and engineering
