Insights From an Editor: Choosing a Journal and Publishing Your Paper
In his 12 years on the editorial board of JNeurosci, David Perkel — a senior editor for the journal and a professor at the University of Washington — has acquired numerous insights into science publishing.
Here he answers commonly asked questions about the publishing process and shares advice applicable to neuroscientists at any career stage, whether you’re preparing to publish your first paper or looking to keep up with best practices.
Below, he shares his advice for choosing where to submit and increasing the value of your paper to the scientific community.
Once you have a clear set of data that suggests a publishable finding, it is valuable to meet, usually multiple times, with coauthors to shape the scope of a paper iteratively. What are the figures going to look like? What's the overall story you're going to tell? That helps fill in what control experiments you might need, or what additional details you might need to address in the study. You’ll do this over and over to refine the scope and contents of the manuscript.
Elements that can justify authorship include conceiving the idea, designing the experiments, carrying out the experiments — collecting and analyzing the data, including providing new analyses nobody thought of before — and preparing the figures and manuscript. In some cases, authorship is granted for developing a reagent or piece of equipment that was crucial for the study. No single element of that necessarily guarantees authorship, but if you've done several of those, and have contributed substantially to the paper, that would earn authorship.
Typically, the senior author is last on the authorship order of the manuscript, and the first author is the person who has driven the project, done most of the work, and usually written the first draft.
Co-first authorship is fairly well accepted now. An asterisk or a symbol denotes two or three first authors contributed equally to the manuscript. This approach has its challenges, but it's commonly done and a way to compromise about authorship order.
Think about the papers you cite most and what journals they were published in. Read the instructions for authors and the mission statement for each journal you’re considering. Some mission statements don't align exactly with the journal’s title because their goal is evolving in some way.
Different journals have different goals, target readerships, and expectations.
For example, eNeuro explicitly states the journal will publish negative results. It will publish replication studies if there's a good justification. It will publish non-replication studies if somebody finds results at odds with something that's been published elsewhere. It will publish phenomena without full explanation or full mechanistic understanding.
That's distinct from JNeurosci. Typically, we want some advance in our understanding of how a mechanistic process explains some higher-level phenomenon, or a fundamental advance in our understanding of how something works.
It can be particularly helpful for trainees to talk with their adviser and colleagues about what the appropriate journal may be for their paper, once they’ve drafted the manuscript but before they've finalized it.
Also consider whether you want to publish in an open-access journal. Open-access journals have risen in number, driven by a growing demand for immediate access to publicly funded science.
Immediately upon publication, papers in open-access journals are made available to all readers around the world for free; the publisher is supported financially by publication fees paid by authors, not by readers. Some journals have a permanent paywall — they charge subscribers for access and require nonsubscribers to pay a substantial fee to view or download an article. JNeurosci has a hybrid model: For six months a paper is behind a paywall, and after that it’s open access.
Impact factor isn’t always a good measure of true scientific impact.
It’s a statistic, calculated by a ratings agency, that gives the average number of citations a journal's papers from the previous two years have received. It has come to be a shorthand to say, "If this paper was published in that journal, it must have had a high impact." That’s driven people to work hard to publish papers in journals with a high impact factor, even if a particular paper doesn't have that much impact.
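As a rough sketch of that definition, the two-year impact factor divides the citations a journal's recent articles received in a given year by the number of citable articles it published in the previous two years. The numbers below are invented purely for illustration:

```python
# Hypothetical sketch of a two-year impact factor calculation.
# All numbers are invented for illustration, not real journal data.
citations_in_2023 = 1200        # citations in 2023 to articles from 2021-2022
citable_articles_2021_2022 = 200  # citable articles published in 2021-2022

impact_factor = citations_in_2023 / citable_articles_2021_2022
print(impact_factor)  # 6.0
```

The same average can hide a very skewed distribution: a handful of highly cited papers can lift the ratio for every paper in the journal, which is one reason the shorthand misleads.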
In the last few years, though, there has been a clearer understanding that impact factor is not all it could be. Even so, some authors still see it as a disadvantage to publish in journals whose goals aren’t aligned with maximizing impact factor, even when those journals carry strong scientific impact.
It can easily be argued that JNeurosci has a lower impact factor than represents its true impact. Part of that is because we represent a society and publish a lot of manuscripts. Even so, our top manuscripts have just as much scientific impact as the top manuscripts published in other journals, as can be demonstrated by bibliometrics that are more meaningful than impact factor. For example, JNeurosci is rated first among journals in the neurosciences category when using the Eigenfactor Score, which is a more contemporary and sophisticated approach to ranking impact.
I think selection committees are changing their practices to focus less on impact factor and more on actual scientific impact. Numerically, you can ask how many citations or what types of citations an individual paper has had. Any hiring committee, fellowship selection committee, or granting agency should be using individual evaluations by experts in the field.
Sometimes people have a manuscript that may not be quite ready for submission, but it’s close. Preprint servers allow them to post it online for free. This method shares your idea, approach, findings, and conclusions without peer review.
The preprint manuscript may or may not be citable, but it allows you to say, "I've marked this territory." There’s a risk someone could scoop your work, but even if somebody publishes before you, the preprint shows you got your findings out there first.
Many journals used to resist accepting a submission after the manuscript had appeared as a preprint, saying, "Well, you've already published it." Many journals have moderated that stance now. For example, at JNeurosci we have a button that allows you to submit your paper from a preprint server without reuploading it.
Note also that publishing on a preprint server won’t sway an editor’s opinion one way or the other. I haven’t detected any bias for or against those papers. They’re treated just the same way others are.
Many trainees have heard there's a large fraction of papers that don't get sent out for review. That’s true at some, but not all, journals. JNeurosci, for example, reviews over 90 percent of manuscripts submitted.
Most of the decisions not to send out a manuscript are based on the manuscript’s being out of the scope of neuroscience — if it’s entirely biophysical, not addressing something that occurs in neurons, or solely behavioral, without any plausible argument or theory about the underlying neural mechanisms. Another reason a paper can be determined to be out of the scope of JNeurosci is that it describes phenomena without any mechanistic explanation.
Understanding the editorial process can help all scientists to write clearer manuscripts. If you know who's reading it and what they're looking for, it helps you address the immediate editorial and reviewer readership. That's important, because different journals are looking for different kinds of papers.
Sometimes the process is imperfect, and authors perceive an error has been made. If a paper is rejected and authors don’t agree with the basis for rejection or there was a factual error or evidence of bias, journals may consider an appeal.
Plan your statistical analysis before you do experiments. That's a good practice. In some cases, that's not possible because you're doing an exploratory study. You're measuring something that's never been measured before, so you couldn't imagine what the variance of those values would be. But you can do the study and get those measurements, maybe make a preliminary finding, and then you can do it again with a targeted, appropriate, and rigorous statistical plan.
Reporting all the statistical values is now required by most journals. It's important to understand what statistical test you did and articulate why you chose that test. That's increasingly under scrutiny by editors, reviewers, funding agencies, and authors.
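As one minimal illustration of reporting complete statistics, the sketch below (with invented data) computes Welch's two-sample t statistic and its degrees of freedom by hand. Welch's test is a reasonable default when you can't assume equal variances between groups, which is the kind of justification reviewers look for; in practice you would obtain the p-value from a statistics package rather than by hand:

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic and degrees of freedom.
    Unlike Student's t test, it does not assume equal variances."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

group_a = [1.0, 2.0, 3.0]  # invented example data
group_b = [2.0, 3.0, 4.0]
t, df = welch_t(group_a, group_b)
print(f"t({df:.1f}) = {t:.3f}")  # prints "t(4.0) = -1.225"
```

Reporting the test name, the statistic, the degrees of freedom, and the exact p-value together is what lets a reader or reviewer check that the test was appropriate.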
The fundamental core value of a publication is to share scientific information to allow people to replicate the experiments and use the data in other scientific processes. Publishing in 10 or 20 years is going to look different from the way it does now. The formats will change, but these values should stay the same.
Readers follow scientific reports more easily if the authors state directly the rationale for manipulating specific variables and for measuring specific dependent variables, and then present those measurements in as clear a fashion as possible.
At JNeurosci, we have a policy for graphical representation of data. We urge authors as much as possible to go beyond the simple bar graph. For example, if there are 10 experiments contributing to an average that led to that bar, include dots for every single one. In many cases, authors can show every data point and the reader gets much more information.
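A minimal matplotlib sketch of that "show every data point" idea, using invented measurements: each group gets a horizontal tick at its mean plus a jittered dot for every individual experiment, rather than a bare bar:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt
import random

random.seed(0)
# Invented data: one value per experiment for two hypothetical conditions.
control = [random.gauss(1.0, 0.2) for _ in range(10)]
treated = [random.gauss(1.5, 0.3) for _ in range(10)]

fig, ax = plt.subplots()
for i, values in enumerate([control, treated]):
    mean = sum(values) / len(values)
    # Horizontal tick marking the group mean
    ax.plot([i - 0.2, i + 0.2], [mean, mean], color="black")
    # Every underlying data point, jittered so overlapping values stay visible
    xs = [i + random.uniform(-0.1, 0.1) for _ in values]
    ax.scatter(xs, values, alpha=0.7)

ax.set_xticks([0, 1])
ax.set_xticklabels(["control", "treated"])
ax.set_ylabel("measurement (a.u.)")
fig.savefig("dots_over_bars.png")
```

With the raw points visible, a reader can judge the spread, sample size, and any outliers directly, which a bar showing only the mean conceals.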
People are exploring and enhancing data appreciation even beyond the big data movement. There are ways of improving data presentation even for small data that I’d encourage everybody to think about creatively.
Even if you're writing for a specialty journal, you should use the official, correct terminology in your field, but not jargon. Any practicing neuroscientist should be able to understand the basics of your paper — certainly the abstract and introduction.
One common error is starting the introduction too narrowly, diving into the details too quickly, and not stating the big question. One way to avoid that is, prior to submission, to have colleagues at your institution who aren’t in your lab or field read the manuscript, even if it's just the abstract and introduction. It's not a big request to say, "Can you look at this and give me some comments on this portion of my manuscript?" Another helpful approach would be to read some significance statements and introductions of related papers in JNeurosci before submitting your manuscript.
Make your title as general as possible without overstretching. Ensure your abstract and introduction are understandable by any neuroscientist. There may be some technical information in the abstract that maybe not every neuroscientist gets, but they should be able to understand most of it.
Also, set the expectations appropriately for what you’re going to deliver. By reading the introduction, I should have a pretty clear expectation of what's coming in the paper. I build the mental model of what I will see in the results and figures. If it sets me up for success in understanding the paper, everything goes smoothly. If it sets my expectations too high, or too low, or there's a big mismatch with what comes later, it's problematic.
Don't omit relevant prior work in the introduction and then bring it into the discussion, thinking you're going to avoid someone saying, "Some of this work was done before." People will also bury references in the discussion so they can say, "Well, I did mention it."
Visit the Publishing and Peer Review collection for more advice on publishing a paper and improving your skills as a peer reviewer.