13 Reasons the Journal Editors Will Reject Your COVID Study

April 11, 2020, 1:02 p.m.

As COVID-19 engulfs our planet, the medical community is struggling to save thousands of patients from premature death. High-quality research, published quickly, is essential to providing the best care. Researchers, journal editors, and clinicians are all working together toward that goal.

This week I had a chance to chat with one of my mentors, Sam Stratton, the editor of Prehospital and Disaster Medicine, about what researchers do that slows down or even destroys the path to publication. These 13 things are near guarantees of rejection.

1. Not Following the Instructions for Authors: Professor Stratton assures me that editors and reviewers will use a checklist of items your paper must meet to be considered for publication. At the top of the list is study formatting. Not formatting your paper as per the instructions for authors is often an automatic cause for rejection. Journals publish detailed instructions for authors on their website. Don't forget to use them. Pay particular attention to how sections of the paper are named and how the references are formatted.

2. Not Obtaining Ethics Approval: If the paper involves human subjects or health data, it must have Health Research Ethics Board approval. This includes any human data. Yes, even surveys, interviews, questionnaires, or chart reviews. Yes, even if the data comes from students or colleagues.

3. Double Submitting: Professor Stratton assured me that most journals use some sort of electronic tool to detect plagiarism, including checking that your paper has not been previously published. Changing a few words in the title and abstract is not enough. Although most journals require authors to declare that the manuscript has not been previously published, many authors try to game the system, often by first paying to publish the article in one of the so-called "predatory" journals and then seeking publication in a more reputable journal. Most journals do not permit this, and the editors have ways to track you down.

4. Having a Vague Research Question: Papers with vague research questions and unclear hypotheses are both hard to read and hard to review. When working on a study design, one of the first questions I ask as a statistician is: "what are you trying to prove?" The introduction of the paper should make it very clear what the researchers are trying to prove.

5. Failing to Define the Study Population: Studies should include details of the inclusion criteria, exclusion criteria, and how the study population was defined.

6. Biasing the Study by Design: Bias in a paper is something that experienced editors can spot quickly. You will not be able to hide it. Bias can manifest itself in such forms as an unrepresentative sample, letting the researchers' own expectations shape the study, and purposely looking for data that supports a hypothesis.

7. Dredging the Data: Quality scientific papers always start with a hypothesis that was developed before the data was inspected. Most common statistical analyses are completely invalid when the hypotheses are developed after looking at the data. Professor Stratton assures me that experienced editors can usually spot when researchers develop their hypotheses after looking at the data.
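One reason post-hoc hypotheses are so dangerous is multiple comparisons: the more relationships you test, the more likely a "significant" result appears by chance alone. A minimal sketch of the familywise false-positive rate (the test counts here are illustrative assumptions, not figures from the article):

```python
# Probability of at least one false positive when testing many
# independent hypotheses, each at alpha = 0.05.
def familywise_error_rate(n_tests: int, alpha: float = 0.05) -> float:
    """Chance of >= 1 spurious 'significant' result by luck alone."""
    return 1 - (1 - alpha) ** n_tests

for n in (1, 5, 20):
    print(n, round(familywise_error_rate(n), 2))
# With 20 exploratory tests, the chance of at least one spurious
# "finding" is roughly 64% -- which is why hypotheses developed after
# inspecting the data are so untrustworthy.
```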

8. Failing to Address Issues of Power: A small study with low power that fails to reject its null hypothesis is an instant "red flag" in the editorial world. By far the best solution is to ensure that a power calculation is done before conducting the study. If this is not done, researchers should pay very careful attention to the precision of their data by looking at the confidence intervals.
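As a sketch of what an a priori power calculation looks like, here is the standard normal-approximation formula for the sample size needed to compare two means. The effect size, alpha, and power values are illustrative assumptions, not numbers from any particular study:

```python
# n per group = 2 * (z_alpha/2 + z_beta)^2 / d^2, where d is the
# standardized effect size (expected difference divided by SD).
from math import ceil
from scipy.stats import norm

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided critical value
    z_beta = norm.ppf(power)            # quantile for the desired power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

print(n_per_group(0.5))  # moderate effect (0.5 SD): 63 patients per group
print(n_per_group(0.2))  # small effect (0.2 SD): 393 patients per group
```

Note how quickly the required sample size grows as the expected effect shrinks: this is exactly why small studies so often end up underpowered.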

9. Choosing the Wrong Analysis: Editors will cringe when they see ordinal data described with means and standard deviations. Are you using a Likert scale? Make sure it is analyzed correctly. Choosing a statistical test is not easy; ask for help if you need it. For qualitative studies, this means making sure that the themes that emerge are developed in a methodologically sound manner and that the process is detailed in the study methods.
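For Likert-type data, one defensible approach is to report medians and compare groups with a rank-based test such as Mann–Whitney U instead of a t-test on the means. A minimal sketch with invented responses (the data and group names are hypothetical):

```python
# Comparing 5-point Likert responses between two hypothetical groups:
# report medians, and use a rank-based test rather than a t-test.
from statistics import median
from scipy.stats import mannwhitneyu

group_a = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4]  # hypothetical survey responses
group_b = [2, 3, 3, 2, 4, 3, 2, 3, 3, 2]

stat, p = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"medians: {median(group_a)} vs {median(group_b)}, p = {p:.4f}")
```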

10. Ignoring the Variation in the Data: Editors can quickly spot situations where the variation in the data is not addressed. Errors include claiming a study shows no significant difference when in fact the uncertainty in the data would allow a large difference to go undetected. Likewise, large studies can produce statistically significant p-values even when the confidence intervals show that the magnitude of the difference is very small.
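The first error above is easiest to see by computing a confidence interval for the difference in means. In this sketch (a normal approximation, with all trial numbers invented for illustration), the interval crosses zero, so the study is "non-significant," yet it is still compatible with a large effect:

```python
# 95% CI for a difference in means (normal approximation). A small,
# "non-significant" study can still be compatible with a large effect.
from math import sqrt
from scipy.stats import norm

def diff_ci(mean1, mean2, sd1, sd2, n1, n2, conf=0.95):
    diff = mean1 - mean2
    se = sqrt(sd1**2 / n1 + sd2**2 / n2)   # standard error of the difference
    z = norm.ppf(1 - (1 - conf) / 2)
    return diff - z * se, diff + z * se

# Hypothetical small trial: 5 mmHg observed difference, SD 15, n = 12 per arm
lo, hi = diff_ci(125, 120, 15, 15, 12, 12)
print(f"95% CI: ({lo:.1f}, {hi:.1f}) mmHg")
# The interval crosses zero yet extends to roughly 17 mmHg: the data
# cannot rule out a clinically important difference.
```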

11. Minimizing or Hiding the Limitations: Professor Stratton stated that the limitations section is a very important part of the paper. Aside from protecting the reader from jumping to overly broad conclusions, it also signals to the reviewer that the researchers understand the topic and study design. Experienced editors will quickly see the limitations of your study. When the authors themselves do not discuss these limitations, it signals to the editor either that the authors are trying to hide something or that they do not understand the topic. Just remember: if you have thought of a limitation, write about it. You won't be able to hide it.

12. Failing to Address the Clinical or Practical Significance: Having a study with a great p-value is fantastic, but what about the practical or clinical significance? Is a reduction in systolic blood pressure of 2 mmHg truly clinically significant? Many of my mentors call this the "So What?" question, although I prefer the phrasing "Who Cares?" Professor Stratton assures me that editors expect data to be both statistically and clinically significant before it merits conclusions about effectiveness.
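The blood-pressure example above can be made concrete with a standardized effect size. In this sketch (a normal approximation, with the SD and sample size invented for illustration), a 2 mmHg difference reaches a tiny p-value with enough patients, while Cohen's d shows the effect is trivial:

```python
# With enough patients, even a clinically trivial 2 mmHg difference
# becomes statistically significant; the effect size stays tiny.
from math import sqrt
from scipy.stats import norm

diff, sd, n = 2.0, 15.0, 5000          # hypothetical BP trial, n per arm
cohens_d = diff / sd                   # standardized effect size
se = sqrt(2 * sd**2 / n)               # SE of the difference in means
p = 2 * norm.sf(diff / se)             # two-sided p (normal approximation)
print(f"d = {cohens_d:.2f}, p = {p:.2g}")
# d is about 0.13 -- well below the conventional "small" threshold of
# 0.2 -- yet p is far below 0.05. Statistically significant, but "So What?"
```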

13. Making Sweeping Conclusions from Your Data: Professor Stratton stated that the best conclusions simply answer the study problem or hypothesis. Make sure to explicitly answer your study question. Keep your conclusions focused on your population and study setting. Don't claim that your conclusions generalize to every situation. Don't make claims about anything you did not specifically address in your study design. It's okay to state what future research on the topic is needed, but don't claim those results in advance.

With COVID-19 straining our medical system in ways our generation has never before experienced, the need for high-quality, reproducible, and clinically significant data has never been greater. Please watch for these 13 errors to ensure that your research has the best chance of being published and that journal editors can quickly see the value of your findings.

Are you working on a high impact COVID research project? Book a free, no-obligation consult with us and we will help you optimize your impact factor and publish your study as fast as possible.

By: Jeffrey Franc

Categories: Peer Review
