Short review of the International Journal of Digital Library Services

If you do not like Elsevier’s misinterpretation of the Creative Commons licences, then stay away from the International Journal of Digital Library Services as well.

Reviewing this journal was easy. (I was partially inspired by Jeffrey Beall’s list of things to look for when determining ‘predatoriness’.) The website features animated GIFs and other generic images as ‘context’ on its homepage, uses capitalised titles, cites the ISSN in title references, and contains many spelling and grammar mistakes. But the most important issue, and the only real reason to advise against doing business with this journal, is the copyright page, which reads:

Articles which are published in IJODLS are under the terms and conditions of the Creative Commons [Attribution] License. Aim of IJODLS is to disseminate information of scholarly research published in related to library and information science.
The submission of the manuscript means that the authors automatically agree to assign exclusive copyright to Editor-in-Chief of IJODLS for printed and electronic versions of IJODLS, if the manuscript is accepted for publication. The work shall not then be published elsewhere in any language without the written consent of the publisher. The articles published in IJODLS are protected by copyright, which covers translation rights and the exclusive right to reproduce and distribute all of the articles printed in the journal.

In other words: this journal (“intellectual property rights” being one of its keywords in DOAJ) doesn’t get licences right. Any journal that requires the transfer of copyright as a condition of publication will not get my recommendation, but claiming a Creative Commons licence while simultaneously demanding exclusive copyright makes me distrust the publisher altogether.

Hot topic: trust & quality in science, science publishing et al.

While I’m preparing to present standards for, and certification of, Trustworthy Digital Repositories at a workshop on preservation metadata (certification being a way of demonstrating the trustworthiness of digital repositories), others are discussing trust and quality too. (This is not an extensive or necessarily balanced review; it is simply what caught my attention.)

Richard Smith asks how researchers can be judged on the quality of their work, rather than on its supposed impact. Neither the Impact Factor nor altmetrics should be used as a metric to judge a researcher’s performance, he argues.

Jeffrey Beall doesn’t trust the intentions (and with them, the quality) of another publisher, but Peter Murray-Rust disputes Beall’s conclusions because the quality of Beall’s reasoning is sub-par. This is a good debate to have in general; I think trust in reviews, and their quality, are important enough to discuss in the context of science publishing.

After a sting showed that many Open Access academic journals were keen to publish bogus science for money, two major academic publishers recently removed bogus papers from their collections. Was there peer review in these cases? If there was, its quality was far too low.

You should therefore be able to review the reviews too. SciRev lets researchers do exactly that: the quality and speed of a journal’s review process can be rated, together with the outcome of the review (accepted, rejected, withdrawn). Alternatively, the quality of peer review can be expressed in a single number called preSCORE, according to preSCORE (the company?). I’m not sure whether either method suffices to judge a journal by.

Finally, for now, some are scrutinising the whole system of academia and/or science publishing. Sydney Brenner talks to Elizabeth Dzeng about this in King’s Review, Michael White writes about it in Pacific Standard, and in his (paywalled) column in NRC Handelsblad, Robbert Dijkgraaf compares publishers’ Big Deals to a hypothetical supermarket that forces customers to buy the contents of the whole store.

Will things change now?

Update, 2014-03-05: Yesterday my (now former) colleague Frank van der Most presented some of his results from ACUMEN (Academic Careers Understood through Measurement and Norms), a European-funded research project. He interviewed academics at different levels of seniority, as well as deans and HR managers, about the role of research data sharing in the evaluation of researchers. One thing is certain: it is not yet part of standard evaluations.