“All science is either physics or stamp collecting” – Ernest Rutherford
by Shannon – When I started college, I was not a chemistry major. In fact, I didn’t declare my major until the last possible moment, in my third year of college. Before that, I was studying both chemistry and music performance. I had it in my mind that I might end up on Broadway someday. But by my junior year, I realized how difficult it is to be a professional performer. You have to contend with tremendous, and highly subjective, rejection. Sometimes I was turned down for a role just because my face was too round or my nose too small. One time, I was given a role solely for those reasons.
So I decided to focus on my other love – science. I saw science as an objective analysis of our observable universe. Scientists are unbiased: observations are reported, experiments performed, hypotheses rejected. Only much later did I fully realize that science is really no different from those auditions I went to long ago.
There are four stages to experimental science. The first three are observation, experimentation, and analysis. But the fourth is arguably the most essential – communication. Communicating results is what sustains scientific discourse. Yet, while the experimentation itself strives for objectivity, the communication does not.
The modus operandi for scientific communication has, for over 100 years now, been publication in a peer-reviewed journal. To be published, every piece of scientific data that is produced must pass through a series of highly subjective steps:
- Which journal will find this work most appropriate?
- Will the editor find this work interesting?
- Who will the editor choose to review the article?
- Will the reviewers have the proper experience to assess the article accurately?
For each of these steps, scientists have developed a number of mechanisms to reduce the chance of rejection. First, there are thousands of journals covering every possible scientific subject. If you submit your work to a journal focused on that subject, your chance of getting past the first two hurdles is much higher. Second, almost all journals ask the authors for suggested reviewers. By selecting appropriate reviewers who can accurately assess your research, you reduce the chance of a misstep at the last two hurdles.
For most studies, these precautions work relatively well. But my students and I have recently uncovered a serious, and sometimes fatal, flaw in this process. What do you do with interdisciplinary research?
Will studying a blade of grass help you map a golf course?
Our lab, by its very design, mixes relatively disparate fields together: chemistry and behavior, neuroscience and evolution, engineering and ecology. By nature, our experiments are very “big picture”, seeking to blend fields together to answer large questions about our world. That means, of course, that none of our experiments are designed to probe deeply into a particular field, but rather to use the tools and concepts of multiple fields in new and interesting ways.
This method has (if we do say so ourselves) produced some fantastic insights into the natural world. It has allowed us to build new multisensory virtual reality systems, develop sustainable agriculture for plantation crops, identify pollinators’ innate recognition of flowers, understand the impact of air pollution on wild pollinators, and complete a number of other fun projects. The problem comes when we try to communicate those results.
On one hand, our “big picture” studies are easy to communicate to policy makers and the general public, who generally don’t have all the details of a particular field anyway. This also helps us get past the second editorial stage of publication. Our trouble lies in the last two stages. First, many journals don’t select only from the suggested reviewers presented to them. Often, they have a pool of dedicated reviewers who specialize in different topics. Second, the pool of scientists truly trained in interdisciplinary research is exceedingly small. The vast majority of scientists have been trained over the last 50+ years to dig ever deeper into a question, probing the intricate details of a cell, or crystal, or mathematical formula. With this mindset, it can be very difficult to step back and “see the forest for the trees”, so to speak.
For those of you who are not professional scientists, let me give you an analogy. Imagine you are an automotive engineer who has spent your life learning about, designing, and building car engines. You know everything there is to know about diesel or gas turbine engines, pistons, crankshafts – you name it. Now, imagine you are asked to evaluate Lamborghini’s newest model for a car review. You have a few options here. One, you can stick to your knowledge base and look only at the engine specifications. Two, you can step outside yourself and try to appreciate the other features, such as the design, transmission, and steering. Or, not knowing much about these other features, you might rely on other experts to evaluate those parts and stick to what you know.
The problem is, Lamborghini has built a car, not just an engine. It is possible that they left some non-essential features out of the engine design in order to focus on other aspects, like steering. Should you give the car a bad review because it doesn’t have the best engine? If the engine is badly designed, surely you should. But what if it simply lacks the bells and whistles of the best engine available – is that enough to condemn the entire car? Such is the nature of our conundrum as a lab.
Over the past year, we have repeatedly received reviews from broad-minded scientists who appreciate our work, accompanied by one review from a specialist who is dissatisfied with the level of detail in their particular field. We recently had a molecular biologist question why a highly active, non-toxic mosquito repellent is of any interest to science at all. We have had a neuroscientist wonder why we didn’t use optogenetic techniques in an evolutionary system (where they aren’t possible anyway). Their concern is never that our reported results are invalid, but rather that we didn’t dig deep enough into their field. And most of the time their suggestions are either not possible in our system for technical reasons, or far afield of the questions we are asking. Unfortunately, it is nearly impossible to overturn reviews once they are given. Editors (and reviewers) do not like being told they are wrong.
We have a few options here. First, we could try to answer every detail in every field. This is difficult, but not impossible. The problem comes in reporting the results – the manuscript then becomes a behemoth tome that few journals will publish. Second, we can break the experiments up into tiny parts. The issue here is that the story is lost – the individual parts make up a larger concept that we want to convey. Third, we can submit to less-rigorous journals, but in science, less rigorous generally means less respected as well. Fourth, we can just keep submitting our manuscript until we hit that magic blend of reviewers who can mutually see the bigger picture. This, unfortunately, is the solution we most often take, trying to improve the manuscript with any useful suggestions along the way.
The real solution is slow, and painful, but needed. And it is the purpose of our lab. We need to train new scientists who can specialize in particular areas while maintaining an appreciation for the bigger picture. We need to stop training scientists who confuse accuracy with hair-splitting. Accuracy does not necessarily equate to detail, but rather to the appropriate choice of experiments to address a particular question. We need to engender a mentality that appreciates both detailed inquiry into a particular topic and the synthetic integration of multiple ideas. And we need to teach them that it is not always necessary to answer every little detail to address a bigger question.
Because that’s not science. That’s stamp collecting.