[Todos] Fwd: Thought this might interest you. PLOS piece on the impact factor

Hugo Scolnik hugo en dc.uba.ar
Mar Oct 15 13:38:47 ART 2013


[image: plos biology]<http://retractionwatch.files.wordpress.com/2013/10/plos-biology.png>

We’ve sometimes said, paraphrasing Winston Churchill<http://wais.stanford.edu/Democracy/democracy_DemocracyAndChurchill%28090503%29.html>, that pre-publication peer review is the worst way to vet science, except for all the other ways that have been tried from time to time.

The authors of a new paper in *PLOS Biology*, Adam Eyre-Walker<http://www.lifesci.sussex.ac.uk/home/Adam_Eyre-Walker/Website/Welcome.html> and Nina Stoletzki<https://twitter.com/Nstoletzki>, compared three of those other ways<http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001675> to judge more than 6,500 papers published in 2005:

subjective post-publication peer review, the number of citations gained by
a paper, and the impact factor of the journal in which the article was
published
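For readers unfamiliar with the third measure: the standard two-year journal impact factor is a simple ratio. A minimal sketch, using invented numbers for a hypothetical journal (not data from the paper):

```python
# Standard two-year journal impact factor: citations received in year Y
# to items the journal published in years Y-1 and Y-2, divided by the
# number of citable items published in Y-1 and Y-2.

def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 1,200 citations in 2005 to its 2003-2004 papers,
# of which there were 400 citable items.
print(impact_factor(1200, 400))  # -> 3.0
```

Note that this is a property of the journal, not of any individual article, which is precisely why its use as a per-paper merit measure is contested.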

Their findings?

We conclude that the three measures of scientific merit considered here are
poor; in particular subjective assessments are an error-prone, biased, and
expensive method by which to assess merit. We argue that the impact factor
may be the most satisfactory of the methods we have considered, since it is
a form of pre-publication review. However, we emphasise that it is likely
to be a very error-prone measure of merit that is qualitative, not
quantitative.

(Disclosure: Ivan worked at Thomson Reuters, whose Thomson Scientific
division owns the impact factor, from 2009 until the middle of this year,
but was at Reuters Health, a completely separate unit of the company.)

Or, put another way, as Eyre-Walker told<http://www.theaustralian.com.au/higher-education/scientists-not-good-judges-of-research/story-e6frgcjx-1226734935006#sthash.11ZDHFgx.dpuf> *The Australian*:

Scientists are probably the best judges of science, but they are pretty bad
at it.

In an accompanying editorial<http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001677>, Jonathan Eisen<http://twitter.com/phylogenomics>, Catriona MacCallum<http://www.plos.org/staff/catriona-maccallum/>, and Cameron Neylon<http://cameronneylon.net/> call the paper “important” and acknowledge that the authors found that impact factor “is probably the least-bad metric amongst the small set that they analyse,” but note some limitations:

The subjective assessment of research by experts has always been considered
a gold standard—an approach championed by researchers and funders
alike [3]<http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001677#pbio.1001677-Hochberg1>–[5]<http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001677#pbio.1001677-US1>, despite its problems [6]<http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001677#pbio.1001677-Smith1>. Yet a key conclusion of the study is that the scores of two assessors of the same paper are only very weakly correlated (Box 1<http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001677#pbio-1001677-box001>).
As Eyre-Walker and Stoletzki rightly conclude, their analysis now raises
serious questions about this process and, for example, the ~£60 million
investment by the UK Government into the UK Research Assessment Exercise
(estimated for 2008), where the work of scientists and universities are
largely judged by a panel of experts and funding allocated accordingly.
Although we agree with this core conclusion and applaud the paper, we take
issue with their assumption of “merit” and their subsequent argument that
the IF (or any other journal metric) is the best surrogate we currently
have.

We have, as Retraction Watch readers may recall, extolled the virtues of post-publication peer review<http://www.nature.com/nature/journal/v480/n7378/full/480449a.html> before.

-- 
Dr.Hugo D.Scolnik
Consulting Full Professor
Departamento de Computación
Facultad de Ciencias Exactas y Naturales
Universidad de Buenos Aires
www.dc.uba.ar
Tel.  : +5411 4576 3359
Mobile: +5411 4970 6665
------------ next part ------------
An HTML attachment was scrubbed...
URL: http://mail.df.uba.ar/pipermail/todos/attachments/20131015/4ddf2a79/attachment.html 


More information about the Todos mailing list