Tuesday, February 9, 2016

Stop publishing your research!

The 'Watt review' - or the Review of Research Policy and Funding Arrangements - has broken the link between publications and funding. Since the mid-1990s, publications have informed a component of the research block grants for universities. In 2010, ERA provided an additional avenue for publications to inform block funding allocations. The Watt review has recommended that publications be removed from the Higher Education Research Data Collection (HERDC) and that the Sustainable Research Excellence (SRE) fund be removed from the block grant. Together, these recommendations mean that universities will no longer receive block funding based on publications.

When publications were introduced to the block grant allocations there was a rapid increase in the volume of publications produced, but the quality of those publications was low: the quantity went up but the quality didn't. ERA introduced a quality component to the block grant allocation, albeit a modest one, which saw an increase in journal article output (compared with conference papers and books) and an increase in articles in 'A*' and 'A' ranked journals.

So it seems that publication behaviour changes as the policy and incentives change. It will be interesting to see what impact this newest change has on publication behaviour. Should universities tell their academics to stop publishing? Well, probably not. There are many good reasons to keep publishing, not least of which is that researchers tend to like publishing, and it is still a powerful way to disseminate new knowledge. Beyond this, promotions and recruitment are often influenced by publication record, and grant success and university rankings are also linked to publication output.

So maybe don't stop publishing just yet. But watch this space to see what happens to publishing across Australian universities.

Thursday, February 4, 2016

Assessment of Impact and Engagement

We have really come full circle in a short amount of time. It wasn't all that long ago that Australia was in the midst of a research excellence and impact evaluation called the Research Quality Framework (RQF). This was to be Australia's first comprehensive evaluation of the quality and impact of its universities' research. With a change in government, though, came the cancellation of the RQF, amid concerns that it was too complex and too burdensome for the university sector. Almost as quickly as the RQF was cancelled, it was replaced with the Excellence in Research for Australia (ERA). This would go on to become the first comprehensive evaluation of the research quality of Australia's universities - note that impact was removed.

Now, thanks to the recommendations of the Watt review of research funding and policy, we find ourselves returning once again to an impact evaluation. The Watt review recommends we implement a 'companion piece' to ERA called the Assessment of Impact and Engagement (AIE). The AIE will be a mixed-methods evaluation combining quantitative and qualitative components, moderated by an expert advisory group. What will the evaluation look like? Most likely it will be informed by metrics, along the lines of the ATSE Research Engagement for Australia proposal; it will include case studies, as per the UK's REF; and it will be moderated by expert review, just like the ATN/Go8 Excellence in Innovation for Australia.

Of particular importance to the evaluation will be how the terms 'impact' and 'engagement' are defined. Ask any researcher what they think the terms mean and you will get a different answer almost every time. This means quite an education piece will be required to let us all know what the AIE is actually evaluating. And what is it evaluating? What will it tell us about the impact of university research? Most statements about Australian university research mention the same high-profile impacts over and over - Cochlear, Gardasil, Atlassian - fantastic impacts, but we already know about them; we don't need an evaluation to tell us about them again. An evaluation may uncover a goldmine of unknown impacts. What tends to happen, though, is that the high-profile impacts need very little, if any, time to be evaluated as top of the pile, while the rest of the effort, resources, and expense is consumed by the less impactful projects - the ones that never get mentioned in the media... so is it really worth it?