MeCCSA’s response to the ESRC dated 26/06/2014 in light of the above review.
I write as Hon. Sec. of MeCCSA, the subject association representing students in media, communication and cultural studies in UK higher education. The above Review has been discussed by the Association Executive, and the following comments arise from that discussion and represent the Association’s views on the issues being considered by the Review. We submit these in the form of a letter rather than responding point by point to the questions listed in the reply template.
Our fundamental concerns about the role of metrics in research assessment were raised in relation to the 2008 RAE. At that time HEFCE decided that bibliometric data were neither robust nor mature enough to provide the basis for research assessment, and we concurred with this judgment. While some improvements have since been made in the methodology and generation of bibliometric data, we continue to feel that fundamental concerns remain, and we wish the Review team to consider them fully. The most important are as follows:
- Bibliometric data can inform and assist the formation of judgments about research quality but can never replace them. Whatever they measure, and however precise and accurate the calibration behind them, such data are not measures of quality. If the excellence of research is to remain the core of its assessment, then no methodological improvements in bibliometric data can justify using them in place of peer review. The use of ‘altmetrics’ and similar advances in taking account of a variety of forms of research dissemination does not diminish this central concern.
- The Consultation, and HEFCE research documentation after 2008, anticipated many of the methodological difficulties of citation counts, difficulties that have rapidly acquired familiar shorthand epithets: ‘cold fusion’ for high citation counts attached to disproved science (counts of citation cannot, after all, deal with the nature of the citation); ‘sleeping beauties’ for important work whose recognition is delayed; and so on. The difficulties for early career researchers (whose work may be outstanding but too recent to have achieved high citation counts), for collaborative work across institutions, and for interdisciplinary work are increasingly well rehearsed. We are concerned that these difficulties remain intractable, and have not been removed by advances in bibliometric methodology.
- Citations measure one form of impact only: influence on debate and discussion within a field, as recorded in a particular set of outputs (mainly conventional academic journals). We are especially concerned with the proper recognition and assessment of practice-led research, which provoked much debate in the context of the 2008 RAE. A system driven by bibliometric measures of citation counts could have very deleterious effects on the assessment of practice-based research, and in the field of media and culture such work is of great importance.
- The new dimension of research assessment concerned with the impact of research is explicitly about the use and effect of research beyond the academy; this cannot be measured at all by bibliometric data. Similarly, the assessment of research environments, while usefully supported by data on PGR numbers, research income, facilities, and so on, is not assisted by bibliometric data.
- Research assessment in its current form commands a fragile respect from the academic community. While often criticized both for its practices and effects and for its intellectual and political underpinning, the process has acquired a degree of acceptance and legitimacy, very largely based on its foundation in peer review. As a nominating body in the formation of sub-panels, we would find any transfer of significance from peer review towards metrics extremely difficult to defend, and almost bound to have a very damaging effect on the credibility of research assessment and on confidence in its basis among those whose work is assessed.
- I confirm that the Association would, as requested in the Consultation, be interested in participating in a workshop or similar event to discuss the use of metrics in research assessment and management.