“Our Scholarly Recognition System Still Doesn’t Work” is the title of a panel at the Science of Team Science Conference that I’m co-organizing with Amy Brand (Digital Science), Melissa Haendel (Oregon Health & Science University), and Holly J. Falk-Krzesinski (Elsevier).
If you are interested in this topic, you may want to consider attending the conference and the panel. The conference is June 2-5, 2015, at NIH in Bethesda, Maryland, just outside Washington DC. There is no registration fee. Our panel is June 5, at 3:15 pm in “Balcony A.”
Our submitted abstract is:
With a historical focus on individual disciplinary achievements, the scientific community has been slow to figure out how to adequately recognize and reward accomplishments by individuals that occur across disciplines and in the context of collaborative work. This is a serious impediment to fostering effective team science.
This panel will discuss issues around scholarly recognition, specifically authorship, contribution, credit, and attribution as associated with the development of scholarly products. Current models of shared authorship and attribution are an obstruction to scientific collaboration. When there are multiple authors, we tend to rely on the order in which names are listed to determine the most significant contribution, when in fact name-ordering practices are not consistent from one field to another. Yet who gets credit for research and discovery has a tremendous impact on people’s lives. It affects career advancement and tenure, as well as the transparency and integrity of the permanent research record. Even in fields such as economics, where author order is alphabetical and any implication about relative contributions is removed, it has been shown that you are more likely to get tenure or win a prestigious prize if your last name begins with a letter earlier in the alphabet (http://pubs.aeaweb.org/doi/pdfplus/10.1257/089533006775526085).
How we apportion credit for collaborative works today is highly subjective, open to abuse, and often determined more by lab politics or seniority than by effort or contribution. Junior researchers and those making non-traditional research contributions, such as data and code, tend to lose out most on deserved recognition. As interdisciplinary collaboration and multi-authorship increase across all fields of research, we clearly need a better system for representing collaborative contribution to published works; film credits are one alternative model. If such efforts are ultimately successful, there will be fewer barriers to team science, fewer authorship disputes, and fewer disincentives for sharing data and code, because those contributions will be more reliably recognized. Hence they could positively influence both the cooperative culture of research and academic incentive structures more generally.
“We will need to find better ways to do team science and reward it if we are to solve large overarching problems. Everybody on the team needs to get the same big gaudy championship ring…” [AG Gilman. Silver Spoons and Other Personal Reflections. Annu. Rev. Pharmacol. Toxicol., 2012]
This area is of interest to funding agencies such as NSF (for example, see http://www.nsf.gov/pubs/2014/nsf14059/nsf14059.jsp) and NIH in the US, as well as publishers, university administrations, scientific researchers, and the science-of-science research community. Much work is under way in this area, for example, a recent effort to develop an open standard for tagging contributor roles in multi-author research publications (see http://credit.casrai.org/proposed-taxonomy/). Project CRediT (projectcredit.net) formally launched in 2014 to address the groundswell of interest among researchers, funding agencies, academic institutions, and editors in increasing the transparency of research contributions, and in more fine-grained attribution and associated credit tracking. The taxonomy is now being actively piloted. Another example is the concept of transitive credit, where similarly, all contributions to a product are registered, but in this case, quantitatively rather than qualitatively. Efforts in VIVO and eagle-i have aimed to relate a person to the things they do and create, in support of expertise finding and attribution. A Force11 working group (https://www.force11.org/group/attributionwg) has been created to bring some of these efforts together.
This panel will include relatively brief (5 min) statements from some of the research projects and researchers in this area, followed by a substantive discussion between the panelists and the audience.
- Amy Brand (Digital Science) (will not be available, slides to be presented by Dan Katz)
Project CRediT: recording contributor roles, see http://www.nature.com/news/publishing-credit-where-credit-is-due-1.15033 and http://projectcredit.net for more details.
- Robin Champieux (Oregon Health & Science University)
Force11 Attribution Working Group: linking attribution research and implementation activities, see https://www.force11.org/group/attributionwg for more details.
- Holly Falk-Krzesinski (Elsevier)
Team science reward and recognition, and publishers’ role in clarifying attribution in a digital world. See the Mendeley Science of Team Science (SciTS) group, specifically the subgroups “Credit_Promotion and Tenure” and “Authorship_Publishing Issues” (available via desktop client), for more details.
- Daniel S. Katz (University of Chicago & Argonne National Laboratory)
Transitive Credit: recording weighted credit for both contributors and resources, see http://dx.doi.org/10.5334/jors.be for more details.
- Philippa Saunders (University of Edinburgh)
The Academy of Medical Sciences Team Science policy project: examining researchers’ incentives and disincentives to participate in large collaborative projects, focusing on how such contributions can be better recognized in career-relevant decision making, see http://www.acmedsci.ac.uk/policy/policy-projects/team-science/ for more details.
Some of the author’s work was supported by the National Science Foundation (NSF) while he was working at the Foundation; any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the NSF.