Submitted 21st March 2016
This response is on behalf of MeCCSA (the Media, Communications and Cultural Studies Association), which is the subject association for academics and students in these fields in UK Higher Education. It draws on extensive independent research into the changing higher education environment and how it has affected the field of media, communications and cultural studies, on discussion within the Association’s Executive, and on consultation with members. The research study, cited in some answers, employed both quantitative and qualitative methods to capture the personal experiences of senior academics working in the fields of media, communications and cultural studies in higher education.
1. What changes to existing processes could more efficiently or more accurately assess the outputs, impacts and contexts of research in order to allocate QR? Should the definition of impact be broadened or refined? Is there scope for more or different use of metrics in any areas?
This question splits into three:
1. Efficiency of process. We regard fairness and validity as of greater importance than efficiency in such an important exercise, and would assess any proposal for greater efficiency against those criteria. The crux of any such exercise, if it is to retain broad acceptance and legitimacy, is its basis in peer review. While the details of how this is achieved attract much criticism and debate, the principle that peer review lies at the core of research assessment remains crucial.
2. There remains concern that a narrow view of impact may shorten the time-frame of research impact from idea to research to implementation to suit some commercial requirements. A simple linear link from academic inquiry to innovation to external action is often inappropriate to much research in the arts, humanities, and social sciences, and we welcome the broader and more varied guidance on impact provided by Panels in the 2014 REF. Indeed, it would be beneficial to extend the definition of research impact further to include the impact on teaching (at present excluded except where the teaching takes place beyond the institution in which the research is conducted), further demonstrating the links between research activity and pedagogy.
As the REF subject overview for this field notes, “Impact was a strong element of submissions because in many of the fields within the sub-panel’s remit research impact, and the deployment of research results, have long been inherent aspects of research in these fields. Thus, the sub-panel considered that impact had for many years been an integral aspect of the research undertaken in many areas within its remit, and in describing the impact of research institutions had successfully demonstrated how far such benefits beyond the academy were already well established features of much of this research. The relatively strong scores recorded here thus reflected the maturity and established character of research impact in many of the areas considered”.
In the survey mentioned above many respondents pointed to the significant increase in workload that impact activities had required without an increase in available time or resource. Over 60% of respondents reported an increase in public engagement activities like seminars, 51% reported an increased emphasis on connections and partnerships with industry, 53% reported increased connections with civil society, 49% increased outreach to media and related publicity efforts, 47% noted an increase in online and social media activity and 38% an increase in connections with government bodies and quangos.
Some members have also expressed concern that impact criteria for practice research in areas such as filmmaking or photography are extremely unclear and would benefit from further guidance.
There are also worrying emergent signs that the way impact is construed and managed can lead to discriminatory practices. Anecdotal evidence from members points to pressures to be more publicly visible resulting in harassment and abuse particularly for feminist scholars. We return to this issue in response to Qn. 9.
3. We believe the very limited, and in many cases zero, use of bibliometrics by sub-panels in the 2014 REF reflects careful consideration of their limitations. The conclusion after the 2008 RAE that bibliometrics were as yet unsatisfactory, immature and insufficient for reliable assessment is still widely regarded as true, and we would oppose any move to greater use of or reliance on bibliometrics in any future assessment. Intractable difficulties relating to database coverage and other limitations, academic names and publishing conventions, young or early career researchers, women researchers and others with career breaks, citations to poor research (the ‘cold fusion’ problem), recent research, and so on, all give rise to problems that make bibliometrics a wholly unacceptable proxy for quality judgement. At the very most they might inform, but should never determine, assessments.
2. If REF is mainly a tool to allocate QR at institutional level, what is the benefit of organising an exercise over as many Units of Assessment as in REF 2014, or in having returns linking outputs to particular investigators? Would there be advantages in reporting on some dimensions of the REF (e.g. impact and/or environment) at a more aggregate or institutional level?
We do not think that elements of REF should be assessed only at the institutional level, as strong departments or individuals in relatively ‘weak’ universities would run the risk of being under-valued, wrongly assessed, or not assessed at all. While this is true of all disciplines, it is especially true of areas of research where concentration of expensive plant or equipment is unnecessary. The measured quality of research in UK universities would be massively inaccurate if assessment were restricted to, or conducted increasingly at, institutional level. However, measures of institutional support for research should continue to be assessed and could be given greater emphasis. This may ensure better support and accountability at this level.
3. What use is made of the information gathered through REF in decision making and strategic planning in your organisation? What information could be more useful? Does REF information duplicate or take priority over other management information?
Our answer to this question derives from our research among member departments. All HEIs now use external performance indicators as a means to differentiate themselves to fee-paying students. League tables of all types (constructed by the press or bodies other than HEFCE) are viewed with much suspicion amongst academics, yet academics are expected both to monitor and to improve ratings in every audit measure year on year. To ensure the greatest possible success in REF, most HEIs engage in a fairly constant process of reflection and analysis, with some HEIs beginning dry runs for the next REF almost immediately the last one is over. Such practices serve only to distract from actual research:
“If you talk to any HoD they will say it’s soul-destroying. The form-filling is soul-destroying. Everyone’s clever. So all you’re doing is writing to fit that particular game plan, that particular agenda. A lot of it is meaningless twaddle. But you’re writing it to fit the discourse that will tick the box that will get you your funding or your acknowledgement. And we spend unbelievable amounts of time doing it. And that’s what’s depressing. Most of us, what we like most, is teaching, and writing, and doing research. They’re the things that you do least of nowadays. Actually the teaching I do is miniscule. The research and writing I do is ever decreasing. So the rest of the stuff takes up so much time. And that’s when you start to take away the value, the really inherent value of higher education. If you look at everybody, that’s the whole purpose – is to survive through audit. Compulsory audit. Then that’s a fairly grim place to be.” (HoD Pre-1992 HEI)
In this manner, REF contributes to the tyranny of constant audit and shapes the research agenda as well as university strategies. In many institutions, individual performance management of staff is now tied directly to the REF even though individual scores of authors are not published. This discourages collegiality, inter-disciplinary working and greater collaboration between institutions. Likewise, university research strategies mirror those of funding bodies, reproducing the same priorities across universities, with the very real risk of the research environment becoming increasingly monotone and ever-more instrumental.
We would also counsel against a restricted exercise assessing research undertaken only in institutions, or UoAs, above a threshold of success in the last Exercise. This would stultify innovation, demoralize many excellent researchers, and inhibit change and development in fast-moving fields like many of those we represent. Our conclusion is that assessment at institutional level, or conducted selectively, would be highly damaging.
4. What data should REF collect to be of greater support to Government and research funders in driving research excellence and productivity?
All that any stakeholders need to know, whether students, research users, or Government, is the nature and quality of research undertaken in UK universities. This is provided in enormous detail by current research assessment, and we would be very wary of any greater emphasis on particular data, supplementary to that now gathered, whose primary purpose was to increase the utility of research to Government. The truism that research is either applied or ‘yet to be applied’ remains crucial, and we would oppose any dilution of the Haldane principle that decisions about research (not just its funding but its excellence) should be made by other researchers in the first instance.
5. How might the REF be further refined or used by Government to incentivise constructive and creative behaviours such as promoting interdisciplinary research, collaboration between universities, and/or collaboration between universities and other public or private sector bodies?
We question whether the purpose of REF is to incentivise research rather than to assess research quality, but clearly one way of increasing incentives to do research is to increase QR funding. Our answer to previous questions suggests the danger that certain changes to research assessment could have the perverse effect (as many procedures do now) of discouraging inter-institutional collaboration. Any departure from the core principles of seeking to reward excellence wherever it is practiced and on the basis of peer review runs the risk of discouraging good practice of the kind instanced in the question.
6. In your view how does the REF process influence, positively or negatively, the choices of individual researchers and / or higher education institutions? What are the reasons for this and what are the effects? How do such effects of the REF compare with effects of other drivers in the system (e.g. success for individuals in international career markets, or for universities in global rankings)? What suggestions would you have to restrict gaming the system?
REF indisputably affects career choices and research. The immediate consequence is the concentration of resources in those departments deemed excellent, limiting the ability of the rest to maintain research activity. But it also has a detrimental effect on early career academics, who are often on insecure contracts and expected to be ever-more research productive in order to keep their job or get another. While all departments surveyed for our research reported that full-time permanent academic staff are expected to make a significant research contribution, 70% also report that fixed-term staff, or those on non-permanent contracts, are also expected to do research on top of the scholarship normally required for teaching, without necessarily being paid for the time required to do it. Specific and notional time allocated to staff to undertake research has remained fairly constant in theory, but the pressure significantly to increase financial inputs and published outputs has markedly intensified:
“… if you’re one of those departments that doesn’t attract lots of funding it really impacts on your status [….]. So in that sense it’s an unofficial implicit thing. The other not so implicit thing is just in terms of promotion. Everyone knows that you’re not going to get promoted unless you’ve got another book. And even better if you’ve got research funding, better even so if the two come together. So if you’ve got a major monograph coming out based on an AHRC grant, you’ll get a promotion. Whereas if you’ve just been beavering away with another 4-5 journal articles out, based on sheer effort, doing research when you’ve got time, you’re unlikely to get promoted.” (HoD pre-1992 HEI)
“But there’s a realisation – what has happened is that there’s a proper professional progression pathway for staff which has been introduced [around research]. So if you want to follow this route you will have to start publishing regularly. You’ll have to start getting outputs in the REF and start getting bids for funding and bringing research funding in, stuff like that. That approach tends to be more self-selective for staff who are interested in that as a career pathway.” (HoD post-1992 HEI)
“There is a pretty transparent promotion process at X and research performance figures very prominently. Don’t even bother to apply unless you’ve produced this amount of work. And produced something more than what you were on than when you were appointed to this pay grade that you were already on. There are other kinds of things you have to do to get promoted, it’s not purely research, but it is research focused. There’s a kind of unwritten expectation that you will produce at least 2 peer-reviewed journal articles a year.” (HoD pre-1992 HEI)
The pressure placed on academics to perform for the REF has made for an intensely competitive and arduous work environment. Metrics assumed to be significant for the REF have now become tools of management in very many institutions. Of those surveyed, 56% of respondents report more pressure to publish in ‘high impact’ journals and 36% report an increased pressure to publish monographs since 2010/11. So, even though REF only requires 4 outputs, institutions require more and more evidence of research activity.
“I’ve protected [notional research time for staff]. It would be eroded otherwise. So the expectation is to do everything longer and better with less time and less resources and less support. Be a better teacher. Be a better researcher. Work harder. Work longer. Take less holiday. Produce produce produce. Increase everything without having any time to do it.” (HoD pre-1992 HEI)
“The pressure is systematically organized. You keep being asked every two months, ‘have you published anything in the last two months?’ If you haven’t published anything you still have to fill in this form and this box that says ‘zero return’. There’s pressure like that that didn’t used to be in the institution. There’s certainly pressure to apply for the money even though there’s less money around and it takes a lot of time to apply for whatever money is actually left.” (HoD pre-1992 HEI)
Anticipatory preparations for REF 2020 are already well underway. The Times Higher recently reported that one in six HEIs now has grant capture targets for individual researchers, bringing with them a climate of fear and stress that frequently serves to reinforce inequities of gender and ethnicity and to block career advancement.
7. In your view how does the REF process influence the development of academic disciplines or impact upon other areas of scholarly activity relative to other factors? What changes would create or sustain positive influences in the future? Much of REF focuses on the retrospective analysis of success achieved by institutions either through output or impact. Yet the resources provided anticipate continued success based on that track record. Are there means of better addressing forward-looking institutional plans and priorities, and how these might feed in to national policy?
REF does appear to affect the development of disciplines in our fields and increases pressure to undertake the type of research that appears more readily applicable to REF criteria. This results in a growing emphasis on applied research to the detriment of critical and/or theoretical research.
“There is much more pressure to become ‘instrumental’ which can be intellectually constraining” (HoD pre-1992 HEI)
“While it’s generally positive to be more outward facing, there is some concern over how the need to show impact would cloud or compromise ‘pure’ research that might not have a public face – non-impact research areas like historical, theoretical or critical is not as desired… there is less scope for speculative research” (HoD pre-1992 HEI)
Departments that previously had control over their research funds and allocations expressed dismay that monies are increasingly dispensed from a central source within the institution – creating a more competitive internal culture for whatever amounts might be available. Academics in many departments must now apply to the institutional administrative centre for financial allocations and sabbatical time, and there is an expectation that internal monies should not be requested until all available external sources have been exhausted. While 60% of survey respondents report an increase in institutional support for grant applications, the amount of time and administration that making such applications requires is increasing.
8. How can the REF better address the future plans of institutions and how they will utilise QR funding obtained through the exercise? The Review is keen to hear of creative ideas and insights and to be open in its approach.
It is not the role of REF to instruct HEIs on how to use QR money. However, guidance on minimal expectations could be beneficial in ensuring that QR money is used directly for research purposes. REF panels provided clear guidance on how plans would be assessed, and we would not wish to depart from methods that require plans to be credible and concrete, requiring that institutions use their QR monies in a creative, transparent, and equitable fashion.
9. Are there additional issues you would like to bring to the attention of the Review?
a. Metrics: We believe that peer review should remain the primary mechanism for research assessment. The independent review of metrics concludes that “no metric can currently provide a like-for-like replacement for REF peer review.”
b. Subpanel 36 was an unlikely amalgamation of disciplines, including media, communication, cultural studies, and library and information sciences, that did not fit well together and could not be easily assessed in the same panel. The field of media and communications should be separated out from library and information studies. This case was accepted for previous assessment exercises, and the amalgamation was an unfortunate, and now widely regarded as mistaken, consequence of the reduction from 67 to 36 units of assessment between 2008 and 2014.
c. The workload of REF panellists should be taken into account by institutions, and adequate relief from institutional duties should be a requirement of panel membership. Without this, the work of panellists is severely constrained.
d. The algorithm by which QR monies are distributed between subject areas should be more transparent, and should not inappropriately lead to the under-funding of research in some areas in ways that have nothing to do with the quality of assessed research in those areas.
e. Many of our answers, and much of the feedback in our survey, illustrate the extent to which the wilful or inadvertent use and abuse of REF planning by institutional management has had, and is having, regressive impacts on researchers, and in turn on the institutions they serve and the students they teach. While these effects are not the direct result of requirements within the REF process itself, it is important that when guidance is published for the next exercise, it seeks deliberately and carefully to ensure that research assessment is not misunderstood by HEI managements, and that it does not have perverse effects on the quality of professional life or on the very high standard of research produced by UK academics, not least in internationally leading fields like cultural, communication and media research.