FUNDING: How many proposals should you submit?

I have been in my current position for nine years. I submitted plenty of research proposals before I came to MSU, but I have now established a long-ish record of proposal writing and grantsmanship at one institution. I did a quick search through our proposal database and figured out that I submitted, either as PI or co-PI, at least 68 proposals between August 2006 and December 2014. That is an average of about 8.5 proposals per year. Here is a nifty graph of submitted proposals from 2006-2014 contrasted with the number of those proposals that were actually funded:

[Figure: proposals submitted vs. proposals funded, by year, 2006-2014]

Of the 68 proposals shown in the graph, 13 were funded. That is a 19% funding rate. I was pretty intrigued to see that this rate is on par with the overall EHR funding rate – most of my funding runs through EHR, and in 2013-14, EHR funding rates were 17-18%! [Note to my colleagues in geoscience who claim that I am in a field where it is “easy to get funding”: um, no. For the 2006-2014 time frame, EHR’s highest funding rate was 29%, while GEO had a whopping high of 45%. That’s an almost 50-50 chance of getting funded!]

So, how many proposals should you submit? If you are like me, you should pay attention to the funding rates of the programs you submit to and plan your submissions accordingly. At current rates, I can hope to be funded once for every five proposals I submit to EHR. What are the funding rates like in your field?
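As a back-of-the-envelope sketch, here is the arithmetic behind that "once for every five" estimate, using the numbers from this post. (Treating each proposal as an independent draw at a fixed rate is obviously a simplification – peer review is not a coin flip – but it gives a rough feel for the submission volume a given funding rate implies.)

```python
import math

# Numbers from the post: 2006-2014 record
submitted = 68
funded = 13

rate = funded / submitted
print(f"Funding rate: {rate:.0%}")  # ~19%, i.e. roughly one award per five proposals

# Toy model (assumes independent, identically likely proposals): how many
# submissions before there is a 90% chance of at least one award?
target = 0.90
n_needed = math.ceil(math.log(1 - target) / math.log(1 - rate))
print(f"Submissions for a {target:.0%} chance of at least one award: {n_needed}")
```

At a 19% rate this toy model says it takes about 11 submissions to have a 90% chance of landing at least one award – which is why a steady pipeline of eight or nine proposals a year is not as excessive as it might sound.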

EDIT: June 1, 2015

My colleague, Titus Brown, was nice enough to share his proposal submission and funding numbers to add to this conversation. Titus works at the intersection of bioscience and computer science. Between 2008 and 2014, he submitted 43 proposals, of which 11 were funded – roughly a 26% funding rate. Here is his nifty graph – it looks similar to mine:

[Figure: Titus Brown's proposals submitted vs. proposals funded, by year, 2008-2014]

Where Should A DBER Scholar Publish?

A couple of years ago the question of where discipline-based education research (DBER) should be published came up in conversation, and I did an informal survey of DBER faculty at my institution to determine where they publish and what they read. In essence, which journals are part of their scholarly conversations? I compiled the list of unique journals that my colleagues suggested and wanted to share it more broadly. I also added a few additional reputable publishing opportunities that have arisen in the meantime. Note that this is NOT a comprehensive list of all good journals. Rather, it is a list that originated from a consensus of top journals used by DBER scholars. Email me if you think a top-quality new journal needs to be added!

Interestingly, only a subset (65%) of these journals is indexed by Thomson Reuters – anyone else wish they would expand their offerings or abandon the practice of assigning impact altogether? My institution seems to want to discount my scholarship because my field has a non-ISI journal as its main journal. Whether a journal is indexed by Thomson Reuters or not, always check Beall’s list to make sure your work is published in the highest quality places!

DBER JOURNALS

Bioscience
Advances in Physiology Education
Biochemistry and Molecular Biology Education
CBE Life Science Education
Evolution: Education and Outreach
Journal of Microbiology and Biology Education
Journal of Natural Resources and Life Sciences Education (also listed in Earth System Science)

Chemistry
Chemical Education Research and Practice
Chemical Engineering Education (also listed in Engineering)
Journal of Chemical Education

Earth System Science/Environmental Science
Bulletin of the American Meteorological Society
Environmental Education Research
Geosphere (special theme)
International Journal of Environmental and Science Education
Journal of Environmental Education
Journal of Geography in Higher Education
Journal of Geoscience Education
Journal of Natural Resources and Life Sciences Education (also listed in Bioscience)

Engineering
Advances in Engineering Education
Chemical Engineering Education (also listed in Chemistry)
International Journal of Engineering Education
Journal of Engineering Education

Mathematics
The College Mathematics Journal
Journal for Research in Mathematics Education
Research in Mathematics Education
Notices of the American Mathematical Society

Physics
American Journal of Physics
The Physics Teacher
Physical Review Special Topics – Physics Education Research

HIGHER EDUCATION/EDUCATION/PSYCHOLOGY JOURNALS
Active Learning in Higher Education
American Educational Research Journal
Assessment and Evaluation in Higher Education
British Journal of Educational Psychology
Cognition and Instruction
Educational Researcher
International Journal of Higher Education
International Journal of Science Education
Journal of Educational Psychology
Journal of Research in Science Teaching
Journal of Science Education and Technology
Journal of Science Teacher Education
Journal of the Learning Sciences
Research in Science Education
Review of Educational Research
Science Education
Teaching and Teacher Education

Citations and impact? Who says your research is valuable?

I cannot stress enough how much I DISLIKE the focus research universities place on journal impact factors and ISI citation counts. Both of these are really the work of one organization, Thomson Reuters. The Science Citation Index (and the Social Sciences Citation Index, etc.) offered a great service to researchers before web-based and open access publishing hit the planet in a big way. Rather than being one of many resources for identifying related bodies of work, however, ISI citations and journal impact factors are being used to make and break people’s careers, as might be the case at Northeastern. Although impact factors and ISI citation counts are only two sources of data about the impact of research on the local and broader community, these metrics are being used to decide whether or not research is valuable. This is extremely problematic for several reasons:

1. Not all good journals are indexed by Thomson Reuters. This means that a good publication is deemed “not-so-good” simply by virtue of not having the Thomson Reuters special seal of approval. This is funny, since people have been pointing out for years that Google Scholar offers a more accurate, holistic view of scholarly impact.

2. Disciplines are not equally indexed by Thomson Reuters. Way back in 2010, Larsen and von Ins noted that the Science Citation and related indexes simply do not provide the kind of coverage that open access indexes like Google Scholar can offer. The traditional sciences have more coverage than the social sciences, and disciplines that use alternative publication venues are highly underrepresented. For example, this means that computer science – which has great traction with conference proceedings AND is ahead of the curve on open access publishing, use of Creative Commons, and innovative strategies for getting research out – is “not valuable” if Thomson Reuters citation counts and journal impact factors are the de facto metric.

3. Most people who publish, including some pretty important journal editors, see journal impact factors as a poor way to assess research value. Or, as the DATA PUB blog would say, impact factors are a broken system. All sorts of web-based alternatives exist, but somehow aren’t being valued by the administrators, granting agencies, and other people who make decisions based on “research impact”.

4. Most frightening, organizations like the Association of American Universities (AAU) are setting a precedent that gives Thomson Reuters power over the kinds of research universities are willing to invest in. AAU is considered to be pretty elite; their website explains that “AAU member universities are on the leading edge of innovation, scholarship, and solutions that contribute to the nation’s economy, security, and well-being.  The 60 AAU universities in the United States award more than one-half of all U.S. doctoral degrees and 55 percent of those in the sciences and engineering.” Getting into or falling out of the AAU is a BIG DEAL. Look what happened when University of Nebraska was kicked out, and the jealousy other schools felt when Georgia Tech was let in.

How does AAU decide if an institution is elite enough to be a member? They have a very nice membership policy document, published in November 2012, that you can download from their website. AAU puts universities through two stages of analysis. The first, more quantitative stage looks at four metrics – to quote AAU directly:

1. Competitively funded federal research support.
2. Membership in the National Academies (NAS, NAE, IOM).
3. Faculty awards, fellowships, and memberships.
4. Citations: Thomson Reuters InCites™.

Stage 2 metrics are more complicated, but let’s be clear: ONE of only FOUR criteria used to initially decide if universities are elite enough to be in the AAU is based on…Thomson Reuters’ metrics. Let’s think about this logically:

1. Universities want to be in the AAU, much like college football teams want to play in the Rose Bowl.

2. To be in the AAU, universities have to get lots of federal grants; employ people who are in the National Academies; employ people who receive awards, fellowships, and elite memberships (apparently, there is a list of such things that count); and must be affiliated with publications that are indexed by Thomson Reuters.

3. Universities that want to get into (or stay in) the AAU must increase their metrics. Faculty at research institutions already seek and receive federal funding, National Academy membership, and awards. Faculty also publish – but not necessarily in so-called “ISI journals”.

4. Thomson Reuters indexes only a fraction of all of the articles published each year.

5. Universities seeking to increase AAU ranking may be tempted to treat ISI publications as more valuable than publications in venues not indexed by Thomson Reuters.

Which means that: Research universities could fall into a trap of allowing Thomson Reuters to indirectly set research agendas! How on earth did we reach a point where a third-party company has such power to control the types of research that are valued, funded, and supported by our academic institutions?

I wonder how things would change if Thomson Reuters dropped the evaluation process for journals and simply started indexing everything? Oh, wait – Google Scholar already does that.

I should note that I publish in both ISI and non-ISI journals. Since my work is interdisciplinary, my personal decision on where to publish reflects which communities I want to reach, and I often have to make a judgement call based on where the work will have the most impact. This is not impact as reflected by some outside metric, but impact as I think it should be viewed: Who needs to see my research? What other scholars could be impacted by my research? Which community will have the greatest impact on related future work? In essence, I need to figure out with whom I want to have a scholarly conversation, and publish accordingly. I would be ignoring many valuable colleagues and groups if I limited my publication to ISI journals, so I have always simply refused to allow my publishing decisions to be dictated by an elitist metric.