Interview with Professor John Ioannidis
Two MiRoR research fellows, Cecilia Superchi and David Blanco (Universitat Politècnica de Catalunya – Barcelona Tech, Spain), had the opportunity to interview Professor John Ioannidis, co-director of the Meta-Research Innovation Center at Stanford (METRICS). Research practices, quality in research, and researcher commitment are among the topics covered in this inspiring interview. The transcript and the audio recording are available below.
In the 2015 BMJ editorial you defined yourself as an “uncompromising gentle maniac”[1]. Why? And how does your personality influence the way you conduct research?
I am trying to conduct research in a way that I get maximum enjoyment, and I try to learn as much as possible from colleagues. Maybe I am a maniac in terms of not settling for doing less; I am always striving to do more. I try to be gentle because most of the issues about bias, transparency, and quality can get people upset when you find out that there are too many biases or the quality is horrible. I am uncompromising because, practically, when you are dealing with science and scientific methods you want to have no compromise in how strictly you follow the scientific method and how rigorous you want to be about it.
What was your main motivation to pursue a career in the field of evidence-based medicine? Can you attribute your choice to a certain episode, such as a meeting with a specific person?
In life, choices are not randomized experiments, so it is not easy to identify a particular intervention or occurrence that had a clear causal effect. The turning point for me was probably about 25 years ago, when I met the late Thomas Chalmers, who was a very charismatic personality. At that time, he had published a paper on cumulative meta-analysis with Joseph Lau [2]. In the same year that I met Joe and Tom, evidence-based medicine was coined as a term and became more widely used at McMaster; the Cochrane Collaboration was being launched, and there was a lot of interest in moving in that direction. I found all of that really fascinating. As I say, there is probably some oversimplification here, and clearly there is always recall bias when we try to explain why we did something.
You were born in New York City but raised in Athens. What brought you back to the United States?
I have never lived for a long period of time in New York per se. I have ping-ponged in my life between Europe and the USA. Initially it was the East Coast, and now the West Coast. I moved to Stanford seven years ago, and I think it is a very exciting environment, very open to new high-risk ideas. Increasingly, I see myself as split between different continents but also unified in science and evidence. I think it is world citizenship that we are talking about when it comes to science, evidence, and humanity.
You were an essential part of the creation of METRICS in 2014. To date, how satisfied are you with the impact of METRICS? And how does it compare to what you hoped for?
METRICS was indeed launched about three years ago. When we joined forces with Steve Goodman and several other colleagues, we thought the time was right to put together a connector hub in an overarching effort on research on research: studying research practices and ways to improve efficiency, transparency, and reproducibility in research, with a primary focus on biomedicine but also with ramifications and an influx of ideas and concepts from other fields that face similar challenges. When we got started, we had a pretty ambitious program for changing the world, and three years later our ambition has probably just increased further. I am not sure that it would be objective to judge my own effort or center, but over these three years we have clearly seen far more people sensitized to these issues, both in science and among other stakeholders related to science and evidence. I think it would have been very difficult to imagine three years ago all the possibilities that have emerged nowadays. I think we are at a crossroads where there are a lot of possibilities for improving research practices, scientific methods, transparency, and reproducibility. Almost every week I see something new. There are new opportunities to brainstorm and join forces with talented colleagues around the world in helping shape and evolve that agenda. I have probably seen more action than I would have hoped, even though we were very ambitious up front.
In the BMJ editorial, you said that your best career move was switching from bench research to evidence-based medicine and research methods. Why? And how would you encourage any bench researcher who reads that sentence to keep on working ambitiously?
I have enjoyed practically all types of research that I have done. I think that it is important to try to learn from all our research experiences, and I feel privileged to have had different opportunities to do research and to work with wonderful colleagues in all of these areas. I believe that my shift to evidence-based medicine and research methods topics was beneficial because it exposed me to questions that are more generic and that may have more impact across multiple fields, rather than the focused set of questions that is more characteristic of bench science. However, this does not mean that research should be discredited or credited just because of the questions it tries to address. Bench researchers can do fantastic work. In fact, many of the questions we are trying to address on research methods are highly pertinent to bench research, and the boundaries between disciplines are very blurred at the moment. Some of the greatest contributions in bench research may come from theorists or people working with new approaches to data or new statistical tools. Conversely, some of the opportunities that arise from new bench methods or new measurement technologies result in some very interesting debates about evidence and research methods, by offering empirical handles to think about aspects that would have been unheard of otherwise. I think researchers should enjoy what they do and continue working with all the joy and ambition science can offer them.
We all agree that defining quality in research is extremely difficult. Based on your own experience, what does research quality mean to you?
Over the years, I have become very skeptical of the term “quality”. It is some sort of Holy Grail, and if we had a single quality scale or some way to measure quality reliably, reproducibly, unambiguously, and consistently, it would be fantastic. But we do not really have that. We have different approaches and different tools that look at various aspects of research work, relating eventually to quality. We have ways to measure and understand risk of bias, to track the recording of research, to probe into different biases… So for each type of design and study we need to ask ourselves what the main issues involved are and what the main threats and risks are to transparency, reproducibility, precision, and many other aspects. Sometimes, in a non-transparent environment there is very little you can say: for example, much research is not published at all, so if it is not available, how can you even think about judging its quality? Even when it is published, this is more like an advertisement: there are just four, five, or six pages behind which there is a whole universe of actions, activities, data collection, speculations, protocol, lack of protocol, analysis, manipulation of data, and heavy interpretation with potentially unbelievable spin. This five- or six-page published product is more of a footprint, and its quality may not be very transparent. Producing perfect-quality research is more of a final goal to me, but this has to be operationalized to make it tangible on a case-by-case basis.
In 2015, your colleagues Douglas Altman and David Moher provided four proposals to help improve medical research literature, such as introducing publication officers or developing competencies for editors and peer reviewers. In your opinion, how have these proposals been implemented so far? What further proposals do you have?
David and Doug have worked for a number of years trying to optimize our research practices in many respects, including reporting and peer review, which brings us back to the term quality: the quality of publications. I think their views in the specific PLOS Medicine paper are very interesting; basically, they can be categorized under the bigger theme of having people who are knowledgeable. How do you make people knowledgeable? How do you improve their skills, their understanding, and their ability to come up with cogent research and write up their protocols and papers? How do you train other people in the chain of the production of evidence, like editors and peer reviewers, to understand what they are doing and to improve research? There is plenty of room to improve on all of these fronts. The question is: how exactly should that be done? For example, the idea of a publication officer can be operationalized in many different ways, and I do not think that you need someone who is a committed publication officer to train people on how to write manuscripts. It is a question of how you improve the standards of education, training, and knowledge for people who are at the core of producing and disseminating research. In 2014, I wrote a paper in PLOS Medicine about approaches to increase the credibility of research [3]. I listed 12 families of such possibilities, including large-scale collaborative research, adoption of a replication culture, registration practices, strengthening of sharing, reproducibility checks, finding ways to contain conflicted sponsors and authors, using more appropriate statistical methods, standardizing definitions and analyses, using more stringent thresholds for claiming discoveries or successes, improving study design standards, improving peer review, reporting, and dissemination of research, and better training of the scientific workforce in methods and statistical literacy.
There are other initiatives: for example, a few months ago we published a paper in Nature Human Behaviour along with several other colleagues. We came up with a manifesto for improving reproducible research, and again we went through a number of proposals. Many of them were about improving the cycle from conceiving an idea until it gets published and disseminated. I think we will hear more and more such ideas, and the real question is: how many of those can we adopt? How many of those should we prioritize? And, especially, how can we test which ones are the best? While some of these ideas have empirical support, others are very speculative. Can we map evidence on improving research-on-research practices, much as we do for drugs and devices in clinical trials? Can we test whether having someone who is trained to peer review, or who is trained as an editor, would improve some particular outcomes? This is where the main challenge is. And there is some action in trying to test all of these proposals with experimental methods.
Nowadays, productivity and efficiency are demanded of every researcher more than ever. Do you think it is possible to teach a new generation of researchers to produce more honest, relevant, and high-quality science if everyone around us talks about being productive and efficient?
I think that productivity and efficiency are wonderful. […] If anything, when we try to get the best evidence, we try to improve efficiency, to get things done better, faster, at less cost, and with better outcomes. Regarding productivity, I am not in favor of not publishing, because that would just promote extreme publication bias. The main challenge is to connect productivity and efficiency with transparency, sharing, reproducibility, and real translational potential for improving outcomes, which in terms of health and health care means lives saved and lives with better quality of life. Can we work in a way that these other features are also promoted and still remain efficient and productive, still make genuine, reproducible, transparent, and verifiable progress? It is a matter of rewards and incentives: if we reward people just to publish more papers, they will just publish more papers. In fact, after some time they will probably start cutting corners to publish yet another paper as quickly as possible, even though it will not be transparent, shareable, or reproducible, and will not have real translational potential. Adding these other dimensions would make a difference, and then we could have a new generation really aligned with promoting these extremely important values. I am very optimistic, and I think that the large majority of scientists realize that this is where we need to go.
With regard to tackling research misconduct, where do you think resources should be directed: into educating researchers from an early career stage via dedicated training, or into establishing stricter mechanisms to monitor the quality and conduct of research as it is undertaken?
I think this remains an open question. There are many interventions that can be adopted at very different levels and stages of the whole process of producing research results, disseminating them, and implementing them. It is important to understand that some of the proposed interventions may even be harmful. For example, if we want to go down the path of monitoring the quality and conduct of research as it is undertaken, one option might be to audit everything: in every lab, we would have an auditor looking at everything that has been done. You would need to double the research force and give half of the resources to that auditor. But this would not be science, as it completely jeopardizes the joy of science, of doing something altruistically. Then, how can we strike the right balance? We of course need to educate. But when and where do we intervene? At the end of the chain or earlier? What is “early”? We have to be very careful about what we propose and how we verify that it will do more good than harm.
Lastly, what career advice would you give to early-stage researchers such as the MiRoR fellows?
I would probably say that research is a very demanding, life-long enterprise; it takes a lot of effort and commitment. Whatever you do, make sure that you do something you are very excited about and that gives you both joy and mental satisfaction. Science is the best thing that has happened to humans. Trying to improve scientific research can have major repercussions for humans, so try to be inspired by that. Don’t quit because of difficulty. You will be rejected again and again – it has happened to me over a thousand times in terms of how many rejections I have got. Don’t quit no matter what the adversity is.
[1] John Ioannidis: uncompromising gentle maniac. BMJ [Internet]. 2015;351:h4992. Available from: http://www.bmj.com/lookup/doi/10.1136/bmj.h4992
[2] Lau J, Schmid CH, Chalmers TC. Cumulative meta-analysis of clinical trials builds evidence for exemplary medical care. J Clin Epidemiol. 1995;48(1):45–57.
[3] Ioannidis JPA. How to Make More Published Research True. PLoS Med. 2014;11(10).