This week, we’re continuing a series (started Tuesday) of guest blogs from our Mozilla Fellows for Science, with Christie Bahlai. You can read more about Christie on the 2015 fellows’ page, and read on to learn her thoughts on open science and her interaction with its discontents. Reach out to her on Twitter @cbahlai, or read her blog here.
The scientific process, or at least the one we typically use in western academia, goes something like this. A scientist, wondering about some aspect of how the world works, does an experiment or makes observations, producing data. She then does some kind of analysis on the data to see what sort of patterns emerge. She then writes a report or paper where she describes what she did, what she saw, and what she thinks it means in the context of what we already know. Then she sends the paper to a journal, and the journal editor decides if the paper should be published with the help of peer reviewers.1 If it is published, it will be made available to other scientists through university libraries that subscribe to that journal.
Typically, this final paper is what we treat as the main product of research, but a lot of work happened behind the scenes to produce it. As you can see, this process can lead to a lot of problems- namely, other scientists don’t have access to the behind-the-scenes products, so they can’t build on the first scientist’s work, and the public, unless they have an in at the university library, can’t access the research paper at all. This means the people who need science most- the ones making decisions that affect human health, livelihoods, and the environment- do not have access to all the information produced by scientists to help solve these problems.
Closed practice is pervasive in academic science. At every level of rank and organization, the infrastructure is built to place little value on open practice, and sometimes to outright deter it. The culture of academic science reinforces secrecy through fear2 – I remember, even as an undergrad, hearing grad students voice concerns that their work would be ‘scooped’ by others. There was an oral tradition in which students passed down this message: that science was primarily an adversarial pursuit- you had to hold your cards close, lest your competitors use your data to solve their problems before you. These messages get reinforced as you pass along the pipeline and up through the academic ranks. High-impact-factor papers and grant funding are the currency of success in academia- there is little recognition for inclusivity or reproducibility. Because these cultural forces are so dominant, I believe the key to changing the culture is gentle shifts in regulation and the reward structure- and then aiming for the bulk of the change to occur in early-career scientists.
Open science can address the reproducibility and accessibility problems of the current system, but it requires training, advocacy, and a reward system. In an ‘open’ model, scientists use the tools and connectivity available to them through the internet to document and share all steps of the scientific process- posting raw data, the code used to analyze the data, and the final publication- and inviting comment at every stage. Most scientists agree that learning to use technology to improve the reproducibility of their work is a good thing, but there is a lot of pushback against open science in my field, for two big reasons:
- The learning curve associated with taking a whole new approach to science is not trivial.
- There are risks to open practice, both perceived and real, and the rewards can be difficult to quantify under conventional academic metrics.
The first factor, I feel, is fairly easily addressed. Academics are used to doing things that are hard. Offering training in open science early in their careers makes it less hard to learn, and they can then follow that path as they grow as scientists. I’m less able to address the second point, because these are real structural problems that are harder to overcome. I feel we need to change the value system- how people are evaluated- in academia to tip the ratio in these cost-benefit analyses.
I think one of the key steps in bringing open science to my field of organismal ecology involves breaking down the hesitance towards technology I’ve observed among many people in my field. To do this, I think the best approach is to start small: show them simple, small steps that make their lives easier or more efficient- be it better documenting their data, scripting an analysis so that it automatically processes observations from a new experiment, or making their contributions easier to integrate with a collaborator’s work.
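To make the “scripting an analysis” step concrete, here is a minimal sketch of what such a reusable script might look like- the function, file name, and column names are hypothetical, just to illustrate that once the analysis is written as code, re-running it on a new experiment’s observations takes a single call rather than a fresh round of manual work:

```python
# A small, reusable analysis step: group observation records and
# report the mean count per group. The column names ("site", "count")
# are hypothetical stand-ins for whatever an experiment records.
from statistics import mean

def summarize_counts(rows, group_key="site", value_key="count"):
    """Return the mean of value_key for each distinct group_key."""
    groups = {}
    for row in rows:
        groups.setdefault(row[group_key], []).append(float(row[value_key]))
    return {group: mean(values) for group, values in groups.items()}

# Inline example data; in practice the rows might come from
# csv.DictReader(open("observations.csv")) for each new experiment.
rows = [
    {"site": "A", "count": "10"},
    {"site": "A", "count": "14"},
    {"site": "B", "count": "7"},
]
print(summarize_counts(rows))  # {'A': 12.0, 'B': 7.0}
```

The point is not this particular calculation, but that a scripted analysis is documented, repeatable, and shareable- exactly the small win that can lower the barrier to more open practice.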
Open science has the potential to change both society and academia for the better. It will place scientific evidence in the hands of the people who need it most, from those working on more efficient agricultural systems in developing countries to those who want to learn better, evidence-based ways to treat medical conditions. It will create an environment where scientists build on each other’s work and can draw on the skills and ideas of the broader community.
1. The peer reviewers are asked to evaluate a paper based on a variety of criteria that vary with the journal. Some are good criteria, like “was the experiment competently performed and adequately described?” Some are bad, like “does this paper represent a significant novel contribution to the field?”- which is a rabbit hole I don’t want to go down right now.
2. And, if we’re being honest, through appeals to ego and a high degree of competition. Getting your paper published in a ‘high-impact journal’ is not just a way to feel like you’re winning the game; it also helps distinguish you in a cutthroat job market.