How to Fail Well

27th May 2024

Wealth Okete on the benefits and opportunities of failure in research

Experiments fail all the time, and they do so for many reasons. Sometimes the reasons are obvious and avoidable, such as a poorly designed method or shoddy technique. Other times the reasons for failure may be unclear or elusive. Some scientists consider inconclusive experiments to be failures; it could also be argued that seemingly conclusive experiments that cannot be replicated are equally worthless. And many experiments fail simply because things turn out to be more complex and costly than first imagined.

Some ‘failures’, of course, famously lead scientists down new and exciting avenues. Sir Alexander Fleming’s 1928 discovery of penicillin began when he returned from holiday to find his experimental cultures of Staphylococcus had been contaminated with fungus. His perceptive response to this setback helped usher in the antibiotic era, saving many millions of lives and earning him a Nobel Prize.

In the late 1970s a group of scientists investigating cultures of Pseudomonas syringae believed a faulty fridge must be causing their odd results. Cultures supposedly held at temperatures of between 1°C and 4°C kept coming out icy and frozen. It was only after multiple ‘unsuccessful’ attempts at changing the experiment’s outcome that they began to accept the idea that the bacteria were helping the water form ice at above-zero temperatures. Today, the ability of bacteria to induce ice formation has important applications in agriculture, bioprecipitation and frost protection.

However, the unexpected discovery of a life-saving compound or an ecologically important organism from the apparent ruins of an experiment is an exceptional case. The majority of experiments fail without throwing up signs of a profound new discovery. Scientists are neither required to report failures nor incentivised to explore them, no matter how costly those failures are. Rather, scientists are rewarded for science that works – when it answers a question, solves a problem or charts a new course in a field.

CASE STUDY: SUPPORTING EVIDENCE
Anonymous MSc student

This student was tasked with predicting the structure and function (or activity) of a bacterial protein isolated from Legionella pneumophila, the most common cause of legionellosis (or Legionnaires’ disease).

Once the sequence and structure had been predicted, the student found that it matched a sequence within the human histone acetyltransferase (HAT), an enzyme complex that attaches acetyl groups to proteins to control gene expression. He decided to test the hypothesis that the bacterial protein had a similar activity.

Experimental data suggested the protein actually had the opposite function, acting as a deacetylase, detaching acetyl groups from proteins rather than attaching them. It was a confusing and contradictory result.

However, the result spurred the student to revisit both the bacterial and human proteins to study their sequences and structures in greater depth. He realised that the sequence of the bacterial protein in question actually aligned with a regulatory, deacetylase ‘motif’ within the greater HAT acetylase complex. So his finding that the protein functioned as a deacetylase did not contradict what was known about the HAT sequences; it was in fact supported by it.


Few journals are interested in publishing results that do not come to a clear conclusion, and it is not typical to consider how failure may be beneficial beyond the ‘stop’ or ‘redirect’ signal it gives. Researchers will mainly worry about wasted time and resources, or the impact on their career prospects, and understandably prefer to channel their efforts into more fruitful research avenues and into finding the best journals in which to publish their successful findings.

Nonetheless, science is built on the back of failure – it is literally part of the scientific process. The mistakes, the errors, the dead ends, the incorrect hypotheses – all of these play a crucial role in getting to robust and replicable explanations. And considering the enormous investment of time, energy and funds that go into research – one study estimated the cost of irreproducible biology at roughly $28bn a year – all scientists should learn to understand and reflect on the nature and potential value of their failures.

What is failure?

Jonas Salk, famous for his work on polio vaccines, is said to have told his laboratory staff that it was impossible for an experiment to fail, as “learning what doesn’t work is a necessary step to learning what does”. However, Sophien Kamoun, professor of biology and senior scientist at the Sainsbury Laboratory, asserts that experiments can and do fail on certain grounds, such as technical hitches and poor research methods. To him, the greatest scientific disappointment lies in inconclusive experiments. When recently discussing failed experiments in an article on the blogging platform Medium, Kamoun expressed concern that a great many students seem to believe that experiments that contradict an expected result or hypothesis are failures too.

“My message is that we need to appreciate the nuances between a failed, inconclusive and a conclusive experiment,” he writes. “If you have a solid, robust result that contradicts your hypothesis, then it’s good news, because you can draw a solid conclusion – your hypothesis is wrong and you need a new hypothesis… it is one step forward in discovering what the real mechanism/model is.”

Kamoun believes that the constant incentive to produce data that supports a theory – which is of course more likely to be published in a major journal – is creating the dangerous idea that research that finds contradictory or difficult-to-explain results is somehow failing. “In biology, there is this rush to generate data to ‘prove’ preconceived models,” he writes. “Sometimes, the tail starts wagging the dog … The whole process of science as an effort to falsify a given hypothesis – no matter how enamoured the scientist is with that hypothesis – is turned upside down.”

Dealing with real failures

When an experimental failure is linked to a weak or poorly designed methodology, an obvious first response is to review the methodology. But how does one respond to a failure emerging from a standard experimental procedure? It still does not hurt to check the procedure for unintentional errors, or simply to repeat the experiment to be sure the observed result is not due to a failure of technique or reagents.

It is also worth turning to the literature after an unusual or inexplicable result. There are many scientific resources to consult for guidance while running experiments, but post-experiment a fresh literature sweep can connect you to alternative approaches, or to others who have reported similarly puzzling findings (see ‘Binding light’, below, and ‘Supporting evidence’, above).

Reflecting more deeply on failed experiments can help scientists enhance their critical and analytical thinking skills. While the reasons for experimental failure are not always obvious, deliberate observation and reflection can often be key to unearthing new ideas or paths forward. And even when such reflection does not lead to clear answers, it lays the groundwork for a new direction that takes into account what has not worked before.

Documenting both successes and failures in detail is important too. It is typical for scientists to keep records of experimental procedures and the results they yield, but intentionally keeping detailed records of all experiments – including those that are aborted or best forgotten – can prove valuable in unexpected ways. Adding a ‘failure notebook’ to the scientist’s arsenal, and sharing interesting observations even when they are not what you had intended to focus on, can be useful to you or your colleagues in future, and may even warrant publication somewhere.

CASE STUDY: BINDING LIGHT
Adebayo Bello, postdoctoral fellow at Liverpool John Moores University

During his PhD, Bello was working on developing synthetic promoter sequences to increase the expression of a specific enzyme. The overall goal of the experiment was to enhance the binding activity of a naturally occurring promoter and thereby boost expression of the target enzyme.

The final steps in Bello’s experiment involved visualising how much of the target enzyme had been expressed via gel electrophoresis. He was disappointed to find that no bands appeared on the gel. The target enzyme had not been expressed. Had the synthetic promoter been a total failure?

Bello turned to the literature for answers. He realised that heavy methylation of ‘CpG islands’ – regions of DNA rich in the bases cytosine and guanine – could have accounted for the failure he encountered. The second phase of his experiment, engineering the ribosomal binding site that directly influences protein synthesis, was more successful. But it was the realisation that CpG islands influenced the activity of promoter sequences that was an eye-opener. It not only helped him revisit his view of the failed experiment, but also unravelled a new angle that would help him better understand the expression of the target enzyme in future.


Sharing’s caring

Finally, don’t forget to publicise your experience. Most scientists are comfortable sharing unsuccessful laboratory experiences with their supervisors, colleagues and students, and it is an obvious route to feedback that can help get research back on track. However, social media means scientists no longer have to limit this to their immediate colleagues or networks. The internet offers ample opportunities for discussing failure publicly while reaching a highly diverse, knowledgeable and interested audience.

It does not matter whether the experience was the result of a silly mistake or lacks the potential to be a ground-breaking discovery. What matters is that it is communicated. One researcher’s missteps and mistakes might prove invaluable to someone else embarking on a related experiment, null results can help other researchers avoid experimental avenues that have already proven to be dead ends, and sharing failures can just help others feel better about their own difficulties. Some senior researchers have even started posting ‘alternative CVs’, which include details of their many scientific failures in the hope of inspiring greater openness and honesty about the complex paths that research careers really take.

The AllTrials campaign has made significant progress in its call for all clinical trial data to be reported in case it’s useful to others. Beyond clinical trials, new publishing models, such as preprints and specialist ‘null or negative result’ journals, are helping to overcome the ‘publication bias’ that sees only certain types of finding ever make it into the literature.

The message is: failed experiments do not equate to wasted efforts. When scientists choose to rethink and redefine their experiences of lab failures, they can find new opportunities for repurposing them, unravel new insights or enhance their skills. By probing failures and the idea of failure further, researchers can often contribute to the advancement of science in less direct but still meaningful ways.

In fact, publicising failure might hold the key to strengthening a culture of honesty, openness and ultimately trust in science.

Further reading

1) Barwich, A. S. The value of failure in science: the story of grandmother cells in neuroscience. Front. Neurosci. 13 (2019).

2) Baker, M. Irreproducible biology research costs put at $28 billion per year. Nature (2015).

3) Kluger, J. Why scientists should celebrate failed experiments. Time (2014).

4) Kamoun, S. What’s a failed experiment? Medium (2021).

5) Details of the ‘alternative CVs’ mentioned in this piece can be found in ‘Failure in Science and the Science of Failure’. Association for Women in Science (2024).

Wealth Okete is a science writer and host of the podcast Immunology Africa.