New Ways to Measure Science

The value of a scientist's contributions extends beyond simple measures like citations. Social Dimensions blogger and mathematician Samuel Arbesman explores how we might better measure a researcher's contributions.

Leo Szilard, a physicist involved in the Manhattan Project, was known for his generosity of ideas and helpfulness toward his colleagues. This generosity was extremely fruitful, contributing to such inventions as the electron microscope and the nuclear reactor, even though Szilard published fewer than 30 scientific papers in his lifetime. Yet despite his importance, Szilard is little known today compared with physicists such as Einstein and Oppenheimer, and he was not generally given his due even in his own time.

For too long, the measurement of scientific contribution has centered on the publication. Whether through the number of articles, the citations those articles receive from other articles, or even far more complicated metrics, most scientists are still measured by some derivative of the research article, the basic technology of scientific publishing that is well over 300 years old.
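To make this concrete, consider one well-known article-derived measure, the h-index: the largest number h such that a scientist has h papers with at least h citations each. The short sketch below, using made-up citation counts, shows how purely publication-centric such a metric is.

```python
# Illustrative only: the h-index is the largest h such that a scientist
# has h papers with at least h citations each. Citation counts are made up.
def h_index(citation_counts: list[int]) -> int:
    """Largest h such that at least h papers have >= h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank
        else:
            break
    return h

print(h_index([25, 8, 5, 3, 3, 1]))  # the top 3 papers each have >= 3 citations, so h = 3
```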

But science is much more than that. It’s ultimately about being involved in making discoveries and creating new knowledge. It's creating data, helping others, commenting on previous work, and even using Twitter and blogging. If you help someone out or mentor a student, isn’t that worthwhile as well? How can we begin to measure a person such as Szilard?

Let’s take the example of being helpful. Alexander Oettl of Georgia Tech has studied the importance of this trait, which often goes unappreciated. He combed through acknowledgments in the immunology literature to find the most helpful scientists: those who read article drafts, provided research advice, or simply acted as sounding boards for ideas. Then he looked at what happened when these extremely helpful people died.

Oettl found that even if these people had only been moderately productive when it came to actually authoring papers, the productivity of their collaborators dropped by over 10 percent when these cooperative scientists died. Unfortunately, while simply being helpful is an important contribution to science, it often gets overlooked in academia.

Luckily, there is a growing movement within the scientific establishment to better measure and reward all the different ways that people contribute to the messy and complex process of scientific progress. This movement has begun to gather loosely around the banner of “altmetrics,” which was born out of a simple recognition: Many of the traditional measurements are too slow or simplistic to keep pace with today’s internet-age science.

Altmetrics are being explored in a whole host of areas not served by the traditional article. The simplest example is data. The lifeblood of quantitative research is the availability of datasets that can be used to test hypotheses and reach novel conclusions. But the accumulation of data, even if it doesn’t result in a publication, is an important contribution to the scientific endeavor. There are now projects, such as DataCite, working to create a framework and culture in which it is both acceptable and relatively straightforward to cite other researchers’ data.
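As a minimal sketch of what citable data looks like in practice, the snippet below looks up a dataset's metadata by its DOI and formats a simple citation. It assumes DataCite's public REST API at api.datacite.org/dois/<doi>, and the example DOI is purely illustrative.

```python
# A minimal sketch: look up a dataset's metadata by DOI so it can be
# cited like any paper. Assumes DataCite's public REST API returns
# JSON metadata under data -> attributes; the DOI below is illustrative.
import json
import urllib.request

def fetch_dataset_citation(doi: str) -> str:
    """Return a simple human-readable citation for a dataset DOI."""
    url = f"https://api.datacite.org/dois/{doi}"
    with urllib.request.urlopen(url) as response:
        record = json.load(response)

    attrs = record["data"]["attributes"]
    creators = ", ".join(c.get("name", "") for c in attrs.get("creators", []))
    title = attrs["titles"][0]["title"] if attrs.get("titles") else "Untitled dataset"
    year = attrs.get("publicationYear", "n.d.")
    return f"{creators} ({year}). {title}. https://doi.org/{doi}"

# Example (hypothetical DOI):
# print(fetch_dataset_citation("10.5061/dryad.example"))
```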

Unpublished work can now be part of the scientific conversation, whether in the form of working papers or the slides a scientist shares from a conference presentation. There are even ways to publish experimental procedures in a peer-reviewed format. Jason Priem, a graduate student at UNC-Chapel Hill who is helping to spearhead the movement toward altmetrics, is exploring how to extend the scientific seal of approval of peer review to such non-traditional forms as blog posts. If Priem’s idea of making peer review available for non-traditional contributions works, it would open whole new areas of acceptable scientific contribution. As Priem argues, the “identification of good science need not be limited by venue.”

Even more informal contributions to science, such as mentorship, have potential for quantification. There are proposals and initial implementations of metrics that count how many students a scientist has advised or how many doctoral committees they have served on, and of metrics that allow the citation success of one’s students to redound to one’s own record.
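As a toy illustration of what such a mentorship metric might look like, the sketch below counts advisees and committee service and credits a fraction of each student's citations back to the mentor. The data structures, weights, and names are hypothetical, not an existing standard.

```python
# A toy mentorship metric: advisees and committee seats count directly,
# and a share of each advisee's citation success is credited back to
# the mentor. Weights and data structures are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Student:
    name: str
    citations: int  # citations to the student's own papers

@dataclass
class Mentor:
    name: str
    advisees: list[Student] = field(default_factory=list)
    committees_served: int = 0

def mentorship_score(mentor: Mentor, citation_share: float = 0.1) -> float:
    """Advisees and committee seats count directly; a share of each
    advisee's citations is credited back to the mentor."""
    base = len(mentor.advisees) + 0.5 * mentor.committees_served
    inherited = citation_share * sum(s.citations for s in mentor.advisees)
    return base + inherited

# Example with made-up numbers:
mentor = Mentor("Dr. Example",
                advisees=[Student("A", citations=120), Student("B", citations=40)],
                committees_served=8)
print(mentorship_score(mentor))  # 2 + 4 + 0.1 * 160 = 22.0
```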

Of course, there are some shortcomings. Many of these metrics are not yet accepted. And even fairly well-established measurements, like number of patents, are only now becoming acceptable for some scientists’ tenure packages. For example, Texas A&M has only included patents in the tenure process since 2006. These changes will take time, led by the early adopters, who pull the rest along.

Ultimately, properly measuring the multifaceted contributions to science, and rewarding them accordingly, opens a whole host of possibilities. While there is always the concern that tenure committees might reduce this all to a single number, the profusion of metrics now gives universities more nuanced options. As Priem noted to me, if a university only cares about citations, or even grant money, it can measure that. But if a university values blogging, online conversations, or the kind of informal helpfulness that only gets mentioned in acknowledgments, it now has the tools to measure that as well.

The world of altmetrics allows us to move from rewarding what we can easily see to finally having a discussion about what truly furthers science, and what we ultimately value when it comes to the scientific endeavor.

Image: drtran/Flickr/CC-licensed