In mid-1934, the Hungarian nuclear physicist Leo Szilard filed a patent for one of the reactions he was studying: the nuclear chain reaction. Szilard filed the patent not for personal monetary gain, but to protect the idea from misuse. As a refugee from the rising fascist threat in mainland Europe, he was concerned about the weaponization of this idea. He unsuccessfully lobbied the British War Office to keep the patent secret, arguing that it “contains information which could be used in the construction of explosive bodies . . . very many thousand times more powerful than ordinary bombs”. Over the following decade, as the drums of war beat louder over Europe, Szilard worked to advance the field of nuclear physics whilst warning those around him of the potential dangers these advances posed. Indeed, in 1938, following the discovery of nuclear fission, Szilard led a push for both scientists and scientific journals to suppress publication of the new findings on the grounds that they might be misused. Szilard’s arguments were rejected by many physicists, particularly those in France such as Jean Frédéric Joliot, who viewed the move as a suppression of scientific autonomy that would only aid Hitler and Mussolini in destroying another precious liberty in Europe. Ultimately the ideas were shared widely, and various powers including the US and Germany began nuclear weapons programs, culminating in the infamous attacks on Hiroshima and Nagasaki in August 1945 and the deaths of up to 225,000 individuals, mostly civilians.
Whether Szilard was right to push against the publication of the new findings about nuclear chain reactions, and what suppression of that information would have achieved, is debated. For philosophers of science and ethicists, the period poses a broader question about the nature of scientific freedom and scientific responsibility. What individual responsibility should scientists bear for the consequences of their research? What, if any, restrictions should there be on the freedom of scientists to choose their own research projects and publish their results?
I am your host, Dr Rachael Brown, and welcome to The P-Value. In this episode we shall look more closely at the ethics around what are dubbed “dual-use dilemmas”: when one and the same piece of scientific research has the potential to be used for harm as well as for good. What, if any, obligation do scientists have to think about this? How should they respond to such dilemmas when they arise?
In the case of nuclear weapons, international non-proliferation agreements restrict research, but nuclear technology is not the only place that dual-use dilemmas arise. An accidental discovery close to home illustrates the challenge well. Ever since the house mouse was introduced to Australia with European colonists in the 18th century, populations of the species have undergone cycles of boom and bust, with mouse plagues being a reasonably regular threat. In 2021, for example, increased rainfall across Eastern Australia due to a La Niña event resulted in large crop loads and, as a knock-on effect, caused one of the worst mouse plagues in the nation’s history. Not only did the mice threaten hundreds of millions of dollars of crops, but they were so bad that they were biting people in their beds, eating through electrical systems, and threatening native species. Farmers and householders could do little in the face of a literal grey-brown tsunami across their land and into their homes.
A biological control solution for the mouse plague problem is one of the holy grails of Australian agricultural research. In 2001, researchers at ANU and the CSIRO working on just this challenge genetically modified a common, and usually relatively benign, mouse virus, mousepox, in an attempt to use it as a mouse contraceptive. Instead of inducing effective infertility in infected mice, however, the virus totally suppressed their immune response and killed them. Not only that, but the virus proved deadly even to some mice that had been vaccinated against mousepox. Effectively, the researchers had accidentally stumbled upon a gene that could be inserted into mousepox to make it both far more virulent and able to evade existing mousepox vaccines. This alone was a huge finding; it was the first example of a virus overcoming vaccination in this way. But it raised the spectre of an even more worrying possibility. There was strong reason to expect that the gene would have a similar effect if inserted into a related disease: human smallpox. The researchers were faced with a dilemma. This was a significant scientific finding which, on pure scientific grounds, should be published; but publication seemed to carry with it a very real potential for misuse, with potentially catastrophic results.
Smallpox is a highly contagious disease which kills about 3 in 10 of those infected. Its eradication, through vaccination and other measures, is lauded as one of the great public health victories of the 20th century; the last naturally occurring case was in the 1970s. There are, however, samples of the virus remaining in secure labs around the world, and it is not hard to imagine a modified, vaccine-resistant smallpox virus being used as a weapon for bioterrorism.
The Australian mousepox researchers knew as soon as they started seeing mice dying that they had created something scary. It wasn’t totally unexpected; all work at the time had to be approved by what was then the Genetic Manipulation Advisory Committee [the predecessor of the Australian government’s Office of the Gene Technology Regulator], and in their original application they had indicated that there was a possibility that this virus could be highly immuno-suppressive, resulting in a lethal infection. They had thought it was highly unlikely, though. Regardless, the researchers struggled with the question of whether or not to publish their findings, discussing the issue with close contacts and even contacting the military (who didn’t respond). Eventually they did submit the findings for publication, figuring that they weren’t qualified to decide such an ethical question, and that there was already so much out there that bioterrorists could use in this area that this wouldn’t make a huge difference. The paper came out in the Journal of Virology in 2001. Through a combination of chance and timing (recall this was 2001, just prior to September 11), the results ultimately ended up as the focus of a New Scientist article with the provocative title “Killer virus: An engineered mouse virus leaves us one step away from the ultimate bioweapon”. This prompted a raft of discussions, which continue today, about the autonomy of scientific practice and scientific publishing and the responsibility of scientists for the use and misuse of their results. Indeed, one of the threads of debate in the COVID-19 pandemic has concerned both the importance of studies into ways that the virus could change in the future to become more virulent and the balancing of the risks that such information could be misused.
What do you think? Should the researchers have published their results? Were their actions ethical in your eyes?
All research has two distinct potential users. The original or intended users: those who use the research for the purposes intended by the researcher. And the secondary users: those who use the research for some purpose not intended by the researcher. In the case of the dual-use dilemma, the original or intended usage of the research is beneficial, but there is some secondary usage that is harmful. Although in the cases discussed thus far, mousepox and nuclear fission research, the dilemma has arisen in the context of the decision to publish, the dilemma can also arise concerning the decision to undertake research in the first place. For example, in the mousepox case, given that the researchers knew in advance that there was a risk of creating a lethal version of the mousepox virus, there is a sense in which they already faced the dilemma prior to publication, without fully recognising it. Indeed, in subsequent discussions with ethicists, the researchers themselves noted that they had really been dealing with dual-use technology for some time without fully appreciating it, and that there were many external factors, such as September 11, which fed into the focus on this particular study. Indeed, some argue that in the area of virology, dual-use research is so ubiquitous as to make discussions of the possibility of misuse almost pointless: either we do the research, accepting the across-the-board risks, or we don’t; interrogating individual pieces of research for dual-use risk is simply a waste of time. Others point to the broader challenges of dual-use dilemmas beyond virology. All sorts of technology, from CRISPR-Cas9 gene editing to AI to drone technology, seem to pose dual-use dilemmas, raising the question of who should decide what scientists can and can’t do in research, and whether they should be sharing it. Can you think of any other possible sources of dual-use dilemmas?
For many, just as for the French physicists of the 1930s, the spectre of government regulation of science raises questions about the right to intellectual inquiry and the role of free speech and enquiry in the university context. There are a number of views about research freedom which relate to this. Some are broader concerns to do with intellectual freedom simpliciter; others relate to the nature of science itself. There are various examples where the control of science by government has had a clear negative effect on scientific progress (for example, the role of Marxist ideology in the adoption of Lysenkoism by biologists in the Soviet Union in the 20th century), which have resulted in the view amongst some that scientific autonomy is fundamental to science. Many of these arguments rest on difficult-to-assess claims about the impact of limits upon scientific creativity, creativity being seen as fundamental to scientific progress. These arguments also tend to rely on extreme examples to get off the ground; government regulation need not take the oppressive form it did in the Soviet Union during the 20th century. Indeed, as we have seen over the past two podcasts, in biomedicine at least, researchers are very familiar with government regulation of research through various ethics and review boards for things such as recombinant DNA work and the use of animals and humans in research. Reflecting on our earlier discussion, in many ways the mousepox case is an example of how these bodies are seen as a way to manage the responsibility of researchers. There is governmentally mandated institutional oversight of this research, which shapes research agendas in some ways but also frees researchers by taking on much of the responsibility for the societal impact of research.
Perhaps the solution to the dual-use dilemmas being raised in other areas where technology is opening huge new avenues for research, such as artificial intelligence, is simply to have more such boards and ethics bodies? In Australia, whilst there is a government-endorsed list of voluntary principles for safe, secure, and reliable AI, there are no specific regulations of research into this technology akin to those for research on human or animal subjects. What do you think? Should there be more regulation of research into new technology? What would the costs and benefits be? Are they worth it?
You have been listening to The P-Value podcast. I am your host, Dr Rachael Brown. Thank you for listening.
The P-Value is an initiative of the Centre for Philosophy of the Sciences at the Australian National University.