Saturday, May 7, 2016

Web of Science used by Australian Research Council for Analysis of Benefits from University Research

According to this announcement, the Australian Research Council (ARC) will use Web of Science data as part of the next ERA and the Engagement and Impact Evaluation - see the media release below.

Media Release:

The Australian Research Council (ARC) has recently obtained the Thomson Reuters Web of Science™ Core Collection as one of the data sources for analyses that the ARC will use to support the development of a national impact and engagement assessment of the benefits derived from university research. This national assessment exercise is being introduced as part of the Australian government’s National Innovation and Science Agenda. This was announced today by the Intellectual Property & Science business of Thomson Reuters.

In 2016 the ARC will work with the higher education research sector, industry and other end-users of research to develop quantitative and qualitative measures of impact and engagement of university research. The Web of Science Core Collection provides source data for records such as topic, title and author information which will be used by ARC to support work around sector and ERA analysis in order to derive a model for national assessment. The ARC will conduct a national assessment as a companion exercise to the Excellence in Research for Australia (ERA), the country’s national research evaluation framework which identifies and promotes excellence across the full spectrum of research activity in Australia’s higher education institutions.

Jeroen Prinsen, vice president and head of Australia & New Zealand, IP & Science, Thomson Reuters said, “As a strong advocate of research collaboration and partner of Australia’s research community, we are pleased to support this important national impact and engagement assessment of university research which will ultimately promote high-quality research that will drive Australia’s innovation and economic growth. We are honored that the ARC will utilize source data from the Web of Science Core Collection, the world’s most trusted source of citation databases.”

Tuesday, February 9, 2016

Stop publishing your research!

The 'Watt review' - or the Review of Research Policy and Funding Arrangements - has broken the link between publications and funding. Since the mid-1990s, publications have informed a component of the research block grants for universities, and from 2010 ERA provided an additional avenue for publications to inform block funding allocations. The Watt review has recommended that publications be removed from the Higher Education Research Data Collection (HERDC) and that the Sustainable Research Excellence (SRE) fund be removed from the block grant. Together, these recommendations mean that universities will no longer receive block funding based on publications.

When publications were introduced to the block grant allocations there was a rapid increase in the volume of publications produced; however, the quality of those publications was low - in other words, the quantity went up but the quality didn't. ERA introduced a quality component to the block grant allocation, albeit a modest one, which saw an increase in journal article output (compared with conference papers and books) and an increase in articles in 'A*' and 'A' ranked journals.

So it seems that publication behaviour changes as the policy and incentives change. It will be interesting to see what impact this newest change has on publication behaviour. Should universities tell their academics to stop publishing? Well, probably not - there are many good reasons to keep publishing, not least of which is that researchers tend to like publishing and it is still a powerful way to disseminate new knowledge. Beyond this, there are a number of other reasons: promotions and recruitment are often influenced by publication record, and grant success and university rankings are also linked to publication output.

So maybe don't stop publishing just yet. But watch this space to see what happens to publishing across Australian universities.

Thursday, February 4, 2016

Assessment of Impact and Engagement

We have really come full circle in a short amount of time. It wasn't all that long ago that Australia was in the midst of a research excellence and impact evaluation called the Research Quality Framework (RQF). This was to be Australia's first comprehensive evaluation of the quality and impact of its universities' research. With a change in government, though, came the cancellation of the RQF, amid concerns that it was too complex and too burdensome for the university sector. As quickly as the RQF was cancelled, though, it was replaced with the Excellence in Research for Australia (ERA). This would go on to become the first comprehensive evaluation of the research quality of Australia's universities - note that impact was removed.

Now, thanks to the recommendations of the Watt review of research funding and policy, we find ourselves returning once again to an impact evaluation. The Watt review recommends we implement a 'companion piece' to the ERA called the Assessment of Impact and Engagement (AIE). The AIE will be a mixed-methods evaluation combining quantitative and qualitative components, moderated by an expert advisory group. What will the evaluation look like? Most likely it will be informed by metrics - along the lines of the ATSE Research Engagement for Australia proposal. It will include case studies - as per the UK's REF - and it will be moderated by expert review - just like the ATN/Go8 Excellence in Innovation for Australia.

Of particular importance to the evaluation will be how the terms 'impact' and 'engagement' are defined. Ask any researcher what they think the terms mean and you will get a different answer almost every time. This means there will be quite an education piece required to let us all know what the AIE is actually evaluating. And what is it evaluating? What will it tell us about the impact of university research? Most statements about Australian university research mention the same high-profile impacts over and over - Cochlear, Gardasil, Atlassian - fantastic impacts, but we already know about them; we don't need an evaluation to tell us about them again. An evaluation may uncover a goldmine of unknown impacts - but what tends to happen is that the high-profile impacts need very little, if any, time to be ranked top of the pile, while the rest of the effort, resources and expense is consumed by the less impactful projects - the ones that never get mentioned in the media... so is it really worth it?