Tuesday, September 12, 2017

The ARC has released the draft ERA 2018 Submission Guidelines

The ARC has now released the draft ERA 2018 submission guidelines for consultation; a copy is available on the ARC website.

There are not too many changes to the submission process, which should please universities - especially as they are gearing up for the Engagement and Impact assessment at the same time. Guidelines for the Engagement and Impact assessment are still pending at this stage.

A few interesting additions include:

  • Reporting of ORCID (optional)
  • Reporting of DOI (optional)
  • Aboriginal and Torres Strait Islander research section - although it would be nice to have a clear definition of Aboriginal and Torres Strait Islander research to work with.
Also interesting to see the addition of this to the guidelines:

Institutions agree to allow the ARC to publish any submitted data from ERA 2018. In addition, institutions must agree to publish their submission, with the exception of their staff data, on 5 February 2019.

It will be good to get some clarity on what form this would take.

Tuesday, August 22, 2017

Clarivate selected as citation provider for ERA 2018

Chief Executive Officer (CEO) of the Australian Research Council (ARC), Professor Sue Thomas, has today announced that the ARC has selected Clarivate Analytics to provide citation information for the 2018 round of Excellence in Research for Australia.

Saturday, May 7, 2016

Web of Science used by Australian Research Council for Analysis of Benefits from University Research


According to this announcement, the Australian Research Council (ARC) will use Web of Science data as part of the next ERA and the Engagement and Impact evaluation - see the release below.

http://thomsonreuters.com/en/press-releases/2016/may/web-of-science-source-data.html


Media Release:

The Australian Research Council (ARC) has recently obtained Thomson Reuters Web of Science™ Core Collection as one of the data sources to contribute to analyses that will be utilized by the ARC to support development of national impact and engagement assessment to assess the benefits derived from university research. This national assessment exercise is being introduced as part of the Australian government’s National Innovation and Science Agenda.  This was announced today by the Intellectual Property & Science business of Thomson Reuters.

In 2016 the ARC will work with the higher education research sector, industry and other end-users of research to develop quantitative and qualitative measures of impact and engagement of university research. The Web of Science Core Collection provides source data for records such as topic, title and author information which will be used by ARC to support work around sector and ERA analysis in order to derive a model for national assessment. The ARC will conduct a national assessment as a companion exercise to the Excellence in Research for Australia (ERA), the country’s national research evaluation framework which identifies and promotes excellence across the full spectrum of research activity in Australia’s higher education institutions.

Jeroen Prinsen, vice president and head of Australia & New Zealand, IP & Science, Thomson Reuters said, “As a strong advocate of research collaboration and partner of Australia’s research community, we are pleased to support this important national impact and engagement assessment of university research which will ultimately promote high-quality research that will drive Australia’s innovation and economic growth. We are honored that the ARC will utilize source data from the Web of Science Core Collection, the world’s most trusted source of citation databases.”

Tuesday, February 9, 2016

Stop publishing your research!

The 'Watt review', or the Review of Research Policy and Funding Arrangements, has broken the link between publications and funding. Since the mid 1990s publications have informed a component of the research block grants for universities. In 2010 ERA provided an additional avenue for publications to inform block funding allocations. The Watt review has recommended that publications be removed from the Higher Education Research Data Collection (HERDC) and that the Sustainable Research Excellence (SRE) fund be removed from the block grant. These recommendations mean that universities will no longer receive block funding based on publications.

When publications were introduced to the block grant allocations there was a rapid increase in the volume of publications produced, but the quality of those publications was low - quantity went up while quality did not. ERA introduced a quality component to the block grant allocation, albeit a modest one, which saw an increase in journal article output (compared with conference papers and books) and an increase in articles in 'A*' and 'A' ranked journals.

So it seems that publication behaviour changes as the policy and incentives change. It will be interesting to see what impact this newest change has on publication behaviour. Should universities tell their academics to stop publishing? Probably not - there are many good reasons to keep publishing, not least of which is that researchers tend to like publishing and it is still a powerful way to disseminate new knowledge. Beyond this there are a number of other reasons: promotion and recruitment decisions are often influenced by publication record, and grant success and university rankings are also linked to publication output.

So maybe don't stop publishing just yet. But watch this space to see what happens to publishing across Australian universities.

Thursday, February 4, 2016

Assessment of Impact and Engagement

We have really come full circle in a short amount of time. It wasn't all that long ago that Australia was in the midst of a research excellence and impact evaluation called the Research Quality Framework (RQF). This was to be Australia's first comprehensive evaluation of the quality and impact of its universities' research. With a change of government, though, came the cancellation of the RQF amid concerns that it was too complex and too burdensome for the university sector. Almost as quickly as the RQF was cancelled it was replaced with Excellence in Research for Australia (ERA). This would go on to become the first comprehensive evaluation of the research quality of Australia's universities - note that impact was removed.

Now, thanks to the recommendations of the Watt review of research funding and policy, we find ourselves returning once again to an impact evaluation. The Watt review recommends we implement a 'companion piece' to ERA called the Assessment of Impact and Engagement (AIE). The AIE will be a mixed-methods evaluation combining quantitative and qualitative components moderated by an expert advisory group. What will the evaluation look like? Most likely it will be informed by metrics, along the lines of the ATSE Research Engagement for Australia proposal; it will include case studies, as per the UK's REF; and it will be moderated by expert review, just like the ATN/Go8 Excellence in Innovation for Australia trial.

Of particular importance to the evaluation will be how the terms 'impact' and 'engagement' are defined. Ask any researcher what they think the terms mean and you will get a different answer almost every time. This means quite an education piece will be required to let us all know what the AIE is actually evaluating. And what is it evaluating? What will it tell us about the impact of university research? Most statements about Australian university research mention the same high-profile impacts over and over again - Cochlear, Gardasil, Atlassian - fantastic impacts, but we already know about them; we don't need an evaluation to tell us about them again. An evaluation may uncover a goldmine of unknown impacts, but what tends to happen is that the high-profile impacts need very little time, if any, to be placed at the top of the pile, while the rest of the effort, resource and expense is consumed by the less impactful projects - the ones that never get mentioned in the media. So is it really worth it?

Sunday, November 1, 2015

Journal Quality Lists: ecological fallacy or convenient and cost-effective evaluation tool?

University managers are constantly seeking simple ways to measure and evaluate the research output of their university’s academics. While peer review of scholarly research papers is arguably the best way to determine the quality of any individual research output, it is also acknowledged that peer review is time-consuming, expensive and subjective. Journal-level metrics, such as a journal quality list, present managers with a convenient, objective and inexpensive tool for determining the quality of scholarly articles. However, managers relying on journal-level metrics to evaluate articles may be committing the ecological fallacy. The ecological fallacy occurs when conclusions are drawn about individuals based only on analyses of group data - in this case, judging the quality of an individual article based only on the journal in which it is published.
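
To see the fallacy in action, consider a small simulation. The citation counts below are invented purely for illustration; real citation distributions are similarly skewed, but these are not real data, and the 'top'/'mid' journals are hypothetical:

    import random

    random.seed(1)

    # Invented data: per-article citation counts are highly skewed, so a
    # journal's mean citation rate is driven by a handful of big papers.
    def simulate_journal(n_articles, mu, sigma):
        """Draw per-article citation counts from a log-normal distribution."""
        return [int(random.lognormvariate(mu, sigma)) for _ in range(n_articles)]

    top_journal = simulate_journal(200, mu=2.0, sigma=1.2)  # an 'A*'-style journal
    mid_journal = simulate_journal(200, mu=1.2, sigma=1.2)  # a 'B'-style journal

    mean_top = sum(top_journal) / len(top_journal)
    mean_mid = sum(mid_journal) / len(mid_journal)

    # Journal-level view: the 'top' journal looks clearly better.
    print(f"mean citations: top={mean_top:.1f}, mid={mean_mid:.1f}")

    # Article-level view: many individual articles in the 'top' journal are
    # cited less than the 'mid' journal's average article. Judging each
    # article by its journal's mean gets every one of those cases wrong.
    below = sum(1 for c in top_journal if c < mean_mid)
    print(f"{below} of {len(top_journal)} 'top' journal articles are cited "
          f"less than the 'mid' journal's average article")

In this toy example roughly half of the 'top' journal's articles sit below the 'mid' journal's average - exactly the inference that a journal-level view gets wrong.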

Using journal-level metrics to evaluate research quality is not a new phenomenon, with the earliest examples of journal quality lists found as far back as the late 1960s and early 1970s. Journal-level metrics often take the form of lists of scholarly journals that have been ranked against particular criteria. While there is no consensus on how a journal list should be compiled, many lists have been created using methodologies ranging from perceptual and peer-review-based rankings through to objective citation-based rankings.

The use and misuse of journal rankings is well documented in the literature. Within Australia, and internationally, the academic community is shifting away from the use of journal metrics to evaluate research. Australian academics were introduced to the ranked journal list as part of the national research evaluation exercise, Excellence in Research for Australia (ERA). The rankings were considered highly influential in determining a university’s ERA outcome, so many institutions began to provide incentives for staff to publish in ‘A*’ and ‘A’ ranked journals. The ranked journal list quickly became the most contentious issue of the ERA, and in 2011 the then Minister for Innovation, Industry, Science and Research, Kim Carr, announced that it would be discontinued because its ‘existence was focusing ill-informed undesirable behaviour in the management of research’. In 2010, Australia’s other major research funding agency, the NHMRC, released a statement saying that the Journal Impact Factor would no longer be accepted in applications for funding or used in the peer review of individual applications. The statement went on to say that the Journal Impact Factor is ‘not a sound basis upon which to judge the impact of individual papers’.

Internationally, the San Francisco Declaration on Research Assessment (DORA), originating from the December 2012 meeting of the American Society for Cell Biology, put forward a number of recommendations for funding agencies, universities and researchers regarding the use of metrics in research evaluation. Amongst its other recommendations, DORA aims to halt the use of journal-based metrics in the evaluation of individual researchers. As of August 2015 the declaration had over 12,500 individual and 588 institutional signatories.

While there are some compelling reasons to use journal quality lists to evaluate the research performance of academics, including convenience, objectivity and cost savings, there are also disadvantages. These include reduced academic freedom, promotion of outlet targeting, driving research in the direction of publisher preference, and disadvantaging specialist journals and specialist fields of research.


Whether we like them or not, journal quality lists have been part of research evaluation for the past 50 years and their legacy persists today. As the requirement for convenient and cost-effective research evaluation mechanisms increases, it is possible that journal quality lists will continue to play a part in research evaluation into the future. For examples of journal lists from around the world visit www.researchimpact.com.au/viewforum.php?f=20.

Thursday, August 6, 2015

Research Evaluation – an argument for a ‘census’ driven collection of publications

In this age of accountability no one questions the idea that data on university research publications should be collected and reported. Research publications are no longer only a mechanism for disseminating research findings; they are now also a measure of research performance.

It is hardly surprising, then, that discussions arising from a recent review by PhillipsKPA are not about whether we should collect research publication data but how we can collect it more efficiently. Australian universities currently report research publications data through the Higher Education Research Data Collection (HERDC) and Excellence in Research for Australia (ERA). One of the 27 recommendations from the PhillipsKPA Review of University Reporting Requirements is to streamline the collection of research data into a single collection. Combining the two into a single collection will only be worthwhile if it improves the efficiency, integrity, transparency and utility of the data being collected. Any consultation document will hopefully clarify for the sector how the combined collection would achieve this.

While both mechanisms currently collect research publication and research income data, that is really where the similarities stop. The type of data collected, the level of detail collected and, importantly, the purpose of the collections are quite different.

The purpose of the HERDC is to collect research income and publications data to inform the distribution of research block grants to universities based on their relative performance on each measure. The HERDC only reports on publication volume and does not consider the field of research or the quality of the research – at best it provides a proxy for the volume of research activity across Australian universities.

According to the ARC’s ERA documentation, the objectives of ERA are much broader than those of the HERDC. They are to:

  • establish an evaluation framework that gives government, industry, business and the wider community assurance of the excellence of research conducted in Australian higher education institutions;
  • provide a national stocktake of discipline-level areas of research strength and areas where there is opportunity for development in Australian higher education institutions;
  • identify excellence across the full spectrum of research performance;
  • identify emerging research areas and opportunities for further development; and
  • allow for comparisons of research in Australia, nationally and internationally, for all discipline areas.

If we focus on the collection of research publications data in each collection, there is one main difference. In the HERDC, data are collected for all publications that acknowledge the university with which the author is affiliated on the publication itself – for example through an author byline – regardless of whether the author is currently employed at the university or not. I will refer to this as an ‘address’ based collection. In the ERA, publications data are collected for all publications authored by researchers employed by a university at a census date (usually 31 March of the year preceding the ERA collection) – regardless of whether the university is acknowledged within the publication or not. I refer to this as a ‘census’ based collection. The difference between an ‘address’ based collection and a ‘census’ based collection may not at first seem significant – but it is.
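
For the programmatically minded, the difference boils down to two different filters over the same set of publication records. The sketch below is purely illustrative – the record structure, field names and helper functions are my own invention, not the actual HERDC or ERA business rules:

    from dataclasses import dataclass

    # Hypothetical records - the field names are illustrative only.
    @dataclass
    class Publication:
        title: str
        byline_affiliations: list  # universities named on the paper itself
        author_ids: list           # persistent researcher IDs (e.g. ORCID)

    def address_based(pubs, university):
        """'Address' (HERDC-style) collection: a publication counts for the
        university if it appears in the byline, even if every author has
        since left."""
        return [p for p in pubs if university in p.byline_affiliations]

    def census_based(pubs, staff_ids_at_census):
        """'Census' (ERA-style) collection: a publication counts if any author
        was on the university's staff at the census date, even if the
        university never appears in the byline."""
        return [p for p in pubs
                if any(a in staff_ids_at_census for a in p.author_ids)]

    # A researcher publishes at Uni A, then moves to Uni B before the census.
    pub = Publication("Example paper", ["Uni A"], ["orcid-0001"])
    print(len(address_based([pub], "Uni A")))        # 1 - Uni A keeps the credit
    print(len(census_based([pub], {"orcid-0001"})))  # 1 - counts for Uni B, whose
                                                     #     census-date staff list
                                                     #     includes the author

Same paper, two different owners – depending entirely on which filter is applied.
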
Consider the case of the HERDC – data are collected on publications only where the university has been acknowledged on the publication, regardless of whether the researcher or research group still works at the university. Once the data have been collected and reported to the Department of Education, the numbers are used to distribute block grant funding (approximately $1,700 per publication point). This effectively rewards universities for the volume of publications they can report that list the university in the byline. No consideration is made of the quality or the focus of the research in the publications – just the volume. If the researchers who produced the publications have since left the university, the university is still credited with their research activity.
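
As a back-of-envelope illustration of that incentive (the $1,700 per point figure is the approximation above; the publication counts and weights are invented for the example):

    # Back-of-envelope HERDC-style allocation. The per-point dollar value is
    # the approximation quoted above; publication counts and weights are
    # invented for illustration.
    POINT_VALUE = 1700  # dollars per weighted publication point (approx.)

    # (count, weight) pairs for a hypothetical university's annual return
    outputs = {
        "books": (20, 5.0),
        "journal articles": (900, 1.0),
        "conference papers": (400, 1.0),
    }

    total_points = sum(count * weight for count, weight in outputs.values())
    print(f"{total_points:.0f} points -> ~${total_points * POINT_VALUE:,.0f}")
    # 1400 points -> ~$2,380,000 - the same amount whether the papers are
    # excellent or mediocre, and whether or not their authors still work there.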

A big advantage of the address based collection is that it is easy to determine which outputs are eligible for collection and which university they belong to. An address based collection could also be conducted by a third party (for example through a citation data provider like Scopus or Web of Science). The disadvantage is that the collection is retrospective: the researchers may have left the university but their output still contributes to the block grant allocation. This is fine if you are rewarding past performance but problematic if you are trying to profile the current research strengths of Australian universities or to fund future research success.

Now consider the ERA, which also collects publications data. These publications are the ones produced by the current cohort of staff at the university, not just the ones with the university listed in the byline. When the publications are reported they are assigned to fields of research and subsequently given a quality score by a national evaluation panel. This allows the Department (and the public) to see where research excellence exists in Australian universities and where research strengths may be emerging. The main disadvantage of a census based collection is that it requires more administrative work, as publications are not readily identifiable by a university byline within the publication, so a census based collection cannot easily be performed by a third party. Its advantages are that it represents the current research profile of the university and encourages universities to strategically recruit researchers who contribute to that profile.

In the case of a new ‘combined collection’ for research publications data it is not immediately clear whether it would be based on the ‘address’ or the ‘census’ approach. Each has its pros and cons and each is useful for a different purpose. However, I would argue that the ‘census’ based approach is more appropriate in this case, for the following reasons:

1. It allows universities to demonstrate a current research profile based on researchers who are actually working at the university, rather than a retrospective profile where researchers may well have left the university since the evaluation.

2. It allows universities to respond strategically to changes in the research landscape and funding environment, for example by recruiting researchers to complement or strengthen their existing research profile.

3. It trusts universities with the responsibility of presenting their research in a meaningful way, based on their own knowledge of their researchers and their research, rather than leaving it up to a third party or a generic business process relying on accurately recorded address data.

4. It aligns with the national uptake of a universal researcher ID in Australia, such as the ORCID.

I would conversely argue that using an ‘address’ based collection for a research evaluation may result in what I call ‘phantom’ units of evaluation. A publication can have multiple authors and therefore multiple universities listed in the byline, and each author may also have multiple bylines. This means each author of each publication can potentially contribute to multiple university research evaluations. A ‘phantom’ evaluation would be one where a university appears to have the minimum volume of publication output (for ERA this is 50 publications to trigger an evaluation) based only on the fact that the university’s name appears at least once on 50 publications. While the byline appears on the publication, the author may not actually work at the university – for example, if the author has since left, or where the author has multiple bylines which include other university affiliations in addition to the one they work at.

The ‘address’ based collection would also potentially disadvantage universities that have strategically invested in recruiting new researchers. In this case, while the new university is paying the salary of the researchers, those researchers’ publications would be contributing to another university’s research evaluation based on their previous bylines.
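
To make the ‘phantom’ scenario concrete, here is a contrived sketch; the publication data are invented, and 50 is simply the ERA low-volume threshold mentioned above:

    # Contrived illustration of a 'phantom' unit of evaluation: fifty
    # publications carry "Uni A" in a byline, but every author has since
    # moved on. All data here are invented.
    LOW_VOLUME_THRESHOLD = 50  # ERA's minimum output to trigger an evaluation

    # (byline universities, author still employed at Uni A?) per publication
    publications = [(["Uni A", "Uni B"], False)] * 50

    byline_count = sum("Uni A" in bylines for bylines, _ in publications)
    census_count = sum(employed for _, employed in publications)

    print("address-based:", byline_count,
          "evaluated" if byline_count >= LOW_VOLUME_THRESHOLD
          else "not evaluated")  # 50 -> evaluated: a phantom unit
    print("census-based:", census_count,
          "evaluated" if census_count >= LOW_VOLUME_THRESHOLD
          else "not evaluated")  # 0 -> not evaluated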