Short review of “Spamming in Scholarly Publishing: A Case Study”

Interesting: a researcher, Marcin Kozak, receives a lot of unsolicited email (spam) trying to convince him to publish in a particular journal or with a particular publisher, and decides to investigate these journals and publishers.

Kozak, M., Iefremova, O. and Hartley, J. (2015), Spamming in scholarly publishing: A case study. Journal of the Association for Information Science and Technology. doi: 10.1002/asi.23521

The abstract covers it well:

Spam has become an issue of concern in almost all areas where the Internet is involved, and many people today have become victims of spam from publishers and individual journals. We studied this phenomenon in the field of scholarly publishing from the perspective of a single author. We examined 1,024 such spam e-mails received by Marcin Kozak from publishers and journals over a period of 391 days, asking him to submit an article to their journal. We collected the following information: where the request came from; publishing model applied; fees charged; inclusion or not in the Directory of Open Access Journals (DOAJ); and presence or not in Beall’s (2014) listing of dubious journals. Our research showed that most of the publishers that sent e-mails inviting manuscripts were (i) using the open access model, (ii) using article-processing charges to fund their journal’s operations; (iii) offering very short peer-review times, (iv) on Beall’s list, and (v) misrepresenting the location of their headquarters. Some years ago, a letter of invitation to submit an article to a particular journal was considered a kind of distinction. Today, e-mails inviting submissions are generally spam, something that misleads young researchers and irritates experienced ones.

Some details were missing, however. I think good methodologies for assessing a publisher's or journal's trustworthiness are necessary, so it would be great if people researching these methodologies got the details right.

The location of the headquarters was determined by various means, one of them being a lookup of the domain name holder's (or registrant's) country in a WHOIS system. The authors conclude that this is not a reliable method, but do not explain why. A few sentences earlier they do suggest that the registrant's country is the country the publisher or journal is based in, or that WHOIS shows the location of the server. Exactly which information from WHOIS was used is not described.
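For illustration only (the paper does not say which WHOIS fields or tools were used), a registrant-country lookup could be scripted against the standard whois command-line client; the "Registrant Country" field name and the example domain below are assumptions.

    # Sketch of a registrant-country lookup via the "whois" CLI.
    # Assumes "whois" is installed and that the registry's reply contains
    # a "Registrant Country:" line (not every TLD or registrar provides one).
    import re
    import subprocess

    def registrant_country(domain):
        """Return the registrant's country code from WHOIS output, or None."""
        output = subprocess.run(["whois", domain],
                                capture_output=True, text=True).stdout
        match = re.search(r"^Registrant Country:\s*(\S+)", output,
                          re.MULTILINE | re.IGNORECASE)
        return match.group(1) if match else None

    # print(registrant_country("example-publisher.com"))  # hypothetical domain

Even where such a lookup succeeds, the registrant's country need not be where the publisher actually operates, which is presumably why the authors call the method unreliable.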

Another way of determining the headquarters' location was to look up the information on the journal's or publisher's website. How they decided whether that information was present or missing is not mentioned.

One of the conclusions is that “the average time claimed for peer review was 4 weeks or less.” I don’t see how this follows from the summary table of claimed peer-review times: the table contains N/A values, so no average can be computed from it, and what it does show is that nearly all claimed times are 4 weeks or less. The statement should be phrased as such, not as an average.

Finally, I would have liked to see a reason for not including the dataset. I can only guess why the authors deliberately did not provide the names of journals and publishers.

I think the conclusions hold (except for the one mentioned above), and that work should be done to improve the methodology for judging journal quality. Eventually, such work could be automated and easily repeated over time. Results from such automated checks could be added to the DOAJ.
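As a minimal sketch of what such an automated check could look like, the snippet below asks DOAJ whether a given ISSN is listed; the endpoint, query syntax and response field are assumptions based on DOAJ's public search API and should be checked against the current API documentation.

    # Sketch: check whether an ISSN appears in DOAJ via its public search API.
    # The URL pattern and the "total" field are assumptions; consult the
    # current DOAJ API documentation before relying on this.
    import json
    import urllib.request

    def listed_in_doaj(issn):
        """Return True if DOAJ's journal search reports at least one hit for the ISSN."""
        url = "https://doaj.org/api/search/journals/issn:" + issn
        with urllib.request.urlopen(url) as response:
            results = json.load(response)
        return results.get("total", 0) > 0

    # print(listed_in_doaj("1234-5678"))  # hypothetical ISSN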

Short review of the International Journal of Digital Library Services

If you do not like Elsevier’s misinterpretation of the Creative Commons licences, then stay away from the International Journal of Digital Library Services.

Reviewing this journal was easy. (I was partially inspired by Jeffrey Beall’s list of things to look for when determining ‘predatoriness’.) The website features animated GIFs and other very generic images as ‘context’ on its homepage, capitalises its titles, uses the ISSN in title references, and contains many spelling and grammar mistakes. But the most important point, and the only real reason to recommend against doing business with this journal, is that its copyright page reads:

Articles which are published in IJODLS are under the terms and conditions of the Creative Commons [Attribution] License. Aim of IJODLS is to disseminate information of scholarly research published in related to library and information science.
The submission of the manuscript means that the authors automatically agree to assign exclusive copyright to Editor-in-Chief of IJODLS for printed and electronic versions of IJODLS, if the manuscript is accepted for publication. The work shall not then be published elsewhere in any language without the written consent of the publisher. The articles published in IJODLS are protected by copyright, which covers translation rights and the exclusive right to reproduce and distribute all of the articles printed in the journal.

In other words: this journal (“intellectual property rights” being one of its keywords in DOAJ) doesn’t get licences right: it claims articles are published under a Creative Commons Attribution licence, yet at the same time demands an exclusive transfer of copyright. Any journal that requires a transfer of copyright for publication will not get my recommendation, but this copyright statement makes me distrust the publisher altogether.

Hot topic: trust & quality in science, science publishing et al.

While I’m preparing a presentation on standards for and certification of Trustworthy Digital Repositories for a workshop on preservation metadata (certification being all about demonstrating the trustworthiness of digital repositories), others are discussing trust and quality too. (This is not an extensive or necessarily balanced review; it is simply what caught my attention.)

Richard Smith asks how researchers can be judged on the quality of their work, rather than on its supposed impact. Neither the Impact Factor nor altmetrics should be used as a metric to judge a researcher’s performance, he argues.

Jeffrey Beall doesn’t trust the intentions (and with them, the quality) of another publisher, but Peter Murray-Rust disputes Beall’s conclusions because the quality of Beall’s reasoning is sub-par. This is a good debate to have in general: I think the trustworthiness and quality of reviews are important enough to discuss in the context of science publishing.

After a sting showed that many open access academic journals were keen to publish bogus science for money, two major academic publishers recently removed bogus papers from their collections. Was there peer review in these cases? If there was, its quality was far too low.

Therefore you should be able to review the reviews too. SciRev lets researchers do just that: the quality and speed of a journal’s review process can be rated, together with the outcome of the review (accepted, rejected, withdrawn). Alternatively, the quality of peer review can be expressed in a number called preSCORE, according to preSCORE (Inc.?). I’m not sure whether either method suffices to judge a journal.

Finally, for now, some are scrutinising the whole system of academia and/or science publishing. Sydney Brenner talks to Elizabeth Dzeng about this in King’s Review, Michael White writes about it in Pacific Standard, and Robbert Dijkgraaf, in his (paywalled) column in NRC Handelsblad, compares publishers’ Big Deals to a hypothetical supermarket that forces customers to buy the entire contents of the store.

Will things change now?

Update, 2014-03-05: yesterday my (now former) colleague Frank van der Most presented some of his results from ACUMEN (Academic Careers Understood through Measurement and Norms), a Europe-sponsored research project. He interviewed academics at different levels of seniority, as well as deans and HR managers, about research data sharing and its place in the evaluation of researchers. One thing is certain: it is not yet part of standard evaluations.