Hot topic: trust & quality in science, science publishing et al.

While I’m preparing to present standards for and certification of Trustworthy Digital Repositories – certification being about demonstrating the trustworthiness of digital repositories – at a workshop on preservation metadata, others are discussing trust and quality too. (This is not an extensive or necessarily balanced review – just what caught my attention.)

Richard Smith asks how researchers can be judged on the quality of their work, rather than its supposed impact. Neither the Impact Factor nor Altmetrics, he argues, should be used as a metric of a researcher’s performance.

Jeffrey Beall doesn’t trust the intentions (and with them, the quality) of another publisher, but Peter Murray-Rust disputes Beall’s conclusions because he finds the quality of Beall’s reasoning sub-par. This is a good debate to have in general – the trust and quality of reviews are important enough to discuss in the context of science publishing.

After a sting operation showed that many Open Access academic journals were keen to publish bogus science for money, two major academic publishers recently removed bogus papers from their collections. Was there peer review in these cases? If there was, its quality was far too low.

Therefore you should be able to review the reviews too. SciRev lets researchers do just that: the quality and speed of a journal’s review process can be rated, together with the outcome of the review (accepted, rejected, withdrawn). Alternatively, the quality of peer review can be expressed in a number called preSCORE – according to preSCORE (Inc.?). I’m not sure whether either method suffices for judging a journal.

Finally, for now, some are scrutinising the whole system of academia and/or science publishing. Sydney Brenner talks to Elizabeth Dzeng about this in King’s Review, Michael White writes about it in Pacific Standard, and in his column in NRC Handelsblad (paywalled) Robbert Dijkgraaf compares publishers’ Big Deals to a hypothetical supermarket that forces customers to buy the store’s entire contents.

Will things change now?

Update, 2014-03-05: yesterday my (now former) colleague Frank van der Most presented some of his results from the European research project ACUMEN (Academic Careers Understood through Measurement and Norms). He interviewed academics at various levels of seniority, as well as deans and HR managers, about the role of research data sharing in the evaluation of researchers. One thing is certain: it is not yet part of standard evaluations.