This past week was the Goddard Symposium. It is a fantastic place for a scientist to learn how to talk effectively about their science and research at a high level. The audience is quite different from that of a science meeting: more business suits, more panel discussions, a broader set of work and academic backgrounds, and more commercials. That means jargon, acronyms, and initialisms are not allowed. However, many movers and shakers from across industry, government, policy centers, and academia attend this meeting. You also get some amazing quotes. Day 1 was filled with them, but one resonated and stuck with me. It was stated by Dr. Scott Pace (and paraphrased by me): "Collaborate where we can and compete at a high level where we must."
Science has reached a point where you must distinguish yourself as an anomaly, a genius, or an innovator like no other to "have made it." However, science moves forward in little steps. To be hired, promoted, acknowledged, and honored by our communities, we must be distinguished and stand above others in our fields. Moreover, framing oneself relative to others is dangerous, leading to unrealistic expectations for workload and for one's value within the community. This culture of having to be, always and in all ways, the best in a field of few is impossible to achieve, let alone maintain. Furthermore, when competition becomes fierce, many turn to putting others down in order to be seen as unique. We have seen this happen in other fields and need to ensure this does not happen in ours.
Our metrics for success drive some of this competition and its negative effects on our culture. Competition can lead to the cultural view that if you succeed, I have failed. Let us work through an example. Many experimental/observational scientists, and a growing number of modelers and theorists, see getting a rocket/CubeSat/larger mission as an indicator of their success in the field. Historically, being part of a larger mission has provided significant funding, so one can dedicate their focus and time to a single set of science questions and increase their likelihood of future proposal success. In other words, being a PI often translates to career and funding stability. If we assume there are 10,000 heliophysicists in our field, perhaps a conservative estimate is that half would want a significant role in a mission as a PI or instrument PI. Every year, NASA funds maybe 5 CubeSats, so over a 30-year career there are ~150 CubeSat opportunities. Perhaps there are about as many opportunities for rockets and fewer opportunities for larger missions.
Nevertheless, let us stick with CubeSats. If everyone is equally deserving of having their CubeSat mission funded, and everyone gets only one mission apiece, that is 150 opportunities for 5,000 aspiring PIs: a 3% chance that you will PI a CubeSat mission once in your life. Each CubeSat might carry another five instruments, which raises the likelihood that you get to participate in a significant role to ~10-20%. Moreover, this assumes everyone gets only one mission, which maximizes these estimates; in reality, we see many of the same people as PIs and instrument PIs on multiple missions. Thus, too few opportunities are available for people within our field to see themselves as successful using this metric, reinforcing the feeling that if you succeed, then I have failed.
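To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is one of the rough assumptions above, not a real statistic:

```python
# Back-of-the-envelope odds of leading or joining a CubeSat mission.
# All inputs are rough assumptions from the text, not measured data.

aspiring_pis = 10_000 // 2        # assume ~half of ~10,000 heliophysicists
cubesats_per_year = 5             # assumed NASA selection rate
career_years = 30
instruments_per_cubesat = 5       # assumed extra instrument-PI slots per mission

missions_per_career = cubesats_per_year * career_years        # ~150
pi_chance = missions_per_career / aspiring_pis                # ~3%
participation_slots = missions_per_career * (1 + instruments_per_cubesat)
participation_chance = participation_slots / aspiring_pis     # ~18%

print(f"Chance of ever PIing a CubeSat: {pi_chance:.0%}")
print(f"Chance of any significant mission role: {participation_chance:.0%}")
```

Even under these most generous assumptions, where no one leads more than one mission, the odds land at roughly 3% and 18%, the upper ends of the ranges above.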
Now, the metric of PIing a mission or instrument, no matter how large, is perhaps not a metric we should use. Adjusting how we measure and define success can help mitigate competition and improve collaboration. For example, a metric that tracks how a team grows in membership throughout a mission's life would help foster collaboration after the initial competition. This might help ensure that the outcome is not all-or-nothing for those whose missions are not selected. However, such a metric would necessitate additional funding of the science and science collaborations as the team grows and provides more leadership opportunities within the mission.
In addition to the adverse environment it creates, hyper-competition has other impacts. Proposals are becoming more and more risk-averse. When there is less than a coin-flip chance of being funded, people start playing it safe, proposing work that is guaranteed to produce results. Implicitly, hyper-competition disincentivizes anything but these smaller, safer steps.
Additionally, as the field becomes more competitive, people grow hesitant to share what they are working on. When I first joined the field, workshops were a place where people would share what they were currently working on, discuss events and problems they did not yet understand, and bring up issues they could not solve. Recently, however, we have seen people sharing only what has already been published. There is a growing fear that you will be scooped if you share what you are working on. This fear pushes against the idea of open science, because competition, not collaboration, is the norm.
For too long, we have built the structures of our community around the idea that competition leads to better ideas and better science. The examples here are not the only consequences of increased competition. I worry that we now compete everywhere we can and collaborate only when we must.
Perhaps it is time that we try something different. Perhaps it is time that we follow Scott's advice: collaborate everywhere we can, and compete only where we must.