I am not a fan of metrics in research performance evaluation. I am an enemy of any kind of metrics when it comes to using the existing frameworks that deal with “publications” to measure teaching performance. I assume any rational individual will understand why it is completely idiotic to assess teaching performance by measuring its impact, or by measuring the so-called impact of someone’s publications. I don’t want to insist on that. What I find even worse is that people are judged by the performance of a journal. For example, in Romania, if you publish in a B+ journal you get a certain number of points, which are eventually added to your annual performance evaluation. If you published with one of your colleagues, you get half, and so on. Moreover, in some cases, publishing a critical edition earns you the same points as publishing a translation. And these metrics do not differentiate between translation projects: translating from Arabic or Old French is one thing; translating from contemporary English is quite another.
In Romania, and I have seen this trend almost all over the world, people are obsessed with ISI/MISI/PISI and other indexing schemes and metrics. One of my theses is that some of those promoting this type of metrics are on the payroll of the big organisations. Otherwise, I do not understand why they promote them so aggressively or defend them so fiercely, especially when they have enjoyed certain positions.
But today I have a fragment from a Thomson Reuters report that says it all:
For more than four decades, Thomson Reuters has published its Journal Citation Reports™, annually imparting the Journal Impact Factor (JIF) of the titles covered in its indexing. And for nearly as long, the JIF has been a source of controversy. Originally a metric intended to help librarians track the usage of journals in their local collections, JIF was soon seized upon by publishers and authors alike for purposes of publicity and prestige.
Although Thomson Reuters has unswervingly maintained that JIF is a specific measurement of a journal’s utility as viewed by the research community, much has been made of the figure in a manner beyond the company’s control and approval. One particularly erroneous application is the use of JIF as a proxy for an author’s overall performance. In other words, an author notes that his paper appeared in X journal, which carries a JIF of Y, and therefore his work must automatically be judged as superior. This is a misperception that Thomson Reuters has consistently endeavored to correct.
In fact, as noted above, JIF provides a specific measurement of journal impact over a specific time period. As with other resources within InCites and built on Web of Science data, the Journal Citation Reports (JCR) now feature an expanded array of metrics to provide a more extended, nuanced picture of journal impact. (Source: “Impact Measures and How to Use Them. What Can Research Metrics Really Tell Us?”)
Is it clear enough?