Could ERA submissions be auto-generated in the near future? The new ERA specifications released by the ARC hint that the answer may be yes.
Australia's national research evaluation exercise, Excellence in Research for Australia (ERA), is conducted roughly once every three years with a large investment of time and money from the university sector and the ARC. The cost of running ERA to the sector has been variously estimated at between $30 million and $100 million.
Universities are required to submit information and data relating to their research activities over the preceding six years. This includes publications, research projects and grants, and research staff, along with a raft of related indicators such as patents and commercialisation activity.
Much of the information universities submit as part of the exercise is already available from other sources - either publicly (e.g. grant outcomes from the ARC and NHMRC, HERDC income returns, ABS R&D expenditure surveys) or from third-party suppliers (e.g. the Scopus or Clarivate publications databases).
If we were able to link researchers, their publications and grant funding to universities and fields of research, then an ERA submission could, in theory, be generated automatically without the time and expense currently incurred by universities.
The Australian Research Council (ARC) website now includes the ERA 2018 Technical Specifications and Submission Guidelines. Of note is the optional inclusion of information such as unique author identifiers (ORCIDs) and unique article identifiers (DOIs). A combination of ORCIDs, DOIs, citation data and fields of research (e.g. from the ERA Journal List) could in theory be used to auto-generate ERA submissions for universities. Not only could this be less expensive for the sector, it would also offer a more contemporary data set than the retrospective ERA as it currently stands.
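To make the idea concrete, here is a minimal sketch in Python of how such a linkage might work. It assumes an institution holds a staff list keyed by ORCID and a feed of publication records carrying DOIs and journal titles; the record structures, sample values and the journal-to-field-of-research lookup are illustrative assumptions, not the ARC's actual data model or process.

# Hypothetical sketch: assembling an ERA-style return by linking ORCIDs
# (researchers) and DOIs (outputs) to fields of research via a journal list.
# All record structures and sample values here are illustrative assumptions.
from collections import defaultdict

# Staff list: ORCID -> institution (in practice sourced from HR systems).
staff = {"0000-0002-1825-0097": "Example University"}

# Journal list lookup: journal title -> field-of-research (FoR) codes.
journal_for_codes = {"Journal of Example Studies": ["0807"]}

# Publication feed: DOI records with contributing ORCIDs and journal title.
publications = [
    {
        "doi": "10.1000/example.123",
        "orcids": ["0000-0002-1825-0097"],
        "journal": "Journal of Example Studies",
    },
]

def build_submission(staff, journal_for_codes, publications):
    """Group each institution's outputs by FoR code via ORCID/DOI links."""
    submission = defaultdict(lambda: defaultdict(list))
    for pub in publications:
        codes = journal_for_codes.get(pub["journal"], [])
        for orcid in pub["orcids"]:
            institution = staff.get(orcid)
            if institution is None:
                continue  # author not on a submitting institution's staff list
            for code in codes:
                submission[institution][code].append(pub["doi"])
    return submission

result = build_submission(staff, journal_for_codes, publications)
print({inst: dict(by_for) for inst, by_for in result.items()})

The hard part in practice would be data quality - incomplete ORCID coverage, missing DOIs and multi-disciplinary journals would all need handling before anything like this could replace a manual submission.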
So perhaps we will see an auto-generated ERA in 2021...
You can view the ERA guidelines for yourself at the ARC's website.
Saturday, September 23, 2017
Tuesday, September 12, 2017
The ARC has released the draft ERA 2018 Submission Guidelines
The ARC has now released the draft ERA 2018 submission guidelines for consultation. You can find a copy at their website here.
There are not really too many changes to the submission, which should please universities - especially as they are gearing up for the Impact and Engagement assessment at the same time. Guidelines for the Impact and Engagement assessment are still pending at this stage.
A couple of interesting additions include:
- Reporting of ORCID (optional)
- Reporting of DOI (optional)
- Aboriginal and Torres Strait Islander research section - although it would be nice to have a clear definition of Aboriginal research to work with.
Also interesting to see the addition of this to the guidelines:
Institutions agree to allow the ARC to publish any submitted data from ERA 2018. In addition, institutions must agree to publish their submission, with the exception of their staff data, on 5 February 2019.
It will be good to get some clarity on what form this would take.
Tuesday, August 22, 2017
Clarivate selected as citation provider for ERA 2018
Chief Executive Officer (CEO) of the Australian Research Council (ARC), Professor Sue Thomas, has today announced that the ARC has selected Clarivate Analytics to provide citation information for the 2018 round of Excellence in Research for Australia.
Saturday, May 7, 2016
Web of Science used by Australian Research Council for Analysis of Benefits from University Research
According to this announcement, the Australian Research Council (ARC) will use Web of Science data as part of the next ERA and the Engagement and Impact evaluation - see the release below.
http://thomsonreuters.com/en/press-releases/2016/may/web-of-science-source-data.html
Media Release:
The Australian Research Council (ARC) has recently obtained Thomson Reuters Web of Science™ Core Collection as one of the data sources to contribute to analyses that will be utilized by the ARC to support development of national impact and engagement assessment to assess the benefits derived from university research. This national assessment exercise is being introduced as part of the Australian government’s National Innovation and Science Agenda. This was announced today by the Intellectual Property & Science business of Thomson Reuters.
In 2016 the ARC will work with the higher education research sector, industry and other end-users of research to develop quantitative and qualitative measures of impact and engagement of university research. The Web of Science Core Collection provides source data for records such as topic, title and author information which will be used by ARC to support work around sector and ERA analysis in order to derive a model for national assessment. The ARC will conduct a national assessment as a companion exercise to the Excellence in Research for Australia (ERA), the country’s national research evaluation framework which identifies and promotes excellence across the full spectrum of research activity in Australia’s higher education institutions.
Jeroen Prinsen, vice president and head of Australia & New Zealand, IP & Science, Thomson Reuters said, “As a strong advocate of research collaboration and partner of Australia’s research community, we are pleased to support this important national impact and engagement assessment of university research which will ultimately promote high-quality research that will drive Australia’s innovation and economic growth. We are honored that the ARC will utilize source data from the Web of Science Core Collection, the world’s most trusted source of citation databases.”
Tuesday, February 9, 2016
Stop publishing your research!
The 'Watt review' - the Review of Research Policy and Funding Arrangements - has broken the link between publications and funding. Since the mid-1990s publications have informed a component of the research block grants for universities. In 2010 ERA provided an additional avenue for publications to inform block funding allocations. The Watt review has recommended that publications be removed from the Higher Education Research Data Collection (HERDC) and that the Sustainable Research Excellence (SRE) fund be removed from the block grant. These recommendations mean that universities will no longer receive block funding based on publications.
When publications were introduced to the block grant allocations there was a rapid increase in the volume of publications produced - however, the quality of those publications was low. In other words, quantity went up but quality didn't. ERA introduced a quality component to the block grant allocation, albeit a modest one, which saw an increase in journal article output (compared with conferences and books) and an increase in articles in 'A*' and 'A' ranked journals.
So it seems that publication behaviour changes as policy and incentives change. It will be interesting to see what impact this newest change has on publication behaviour. Should universities tell their academics to stop publishing? Well, probably not - there are many good reasons to keep publishing, not least of which is that researchers tend to like publishing and it is still a powerful way to disseminate new knowledge. Beyond this there are a number of other reasons: promotions and recruitment are often influenced by publication record, and grant success and university rankings are also linked to publication output.
So maybe don't stop publishing just yet. But watch this space to see what happens to publishing across Australian universities.
Thursday, February 4, 2016
Assessment of Impact and Engagement
We have really come full circle in a short amount of time. It wasn't all that long ago that Australia was in the midst of a research excellence and impact evaluation called the Research Quality Framework (RQF). This was to be Australia's first comprehensive evaluation of the quality and impact of its universities' research. With a change in government, though, came the cancellation of the RQF amid concerns that it was too complex and too burdensome for the university sector. As quickly as the RQF was cancelled, though, it was replaced with the Excellence in Research for Australia (ERA). This would go on to become the first comprehensive evaluation of the research quality of Australia's universities - note that impact was removed.
Now, thanks to the recommendations of the Watt review of research funding and policy, we find ourselves returning once again to an impact evaluation. The Watt review recommends we implement a 'companion piece' to ERA called the Assessment of Impact and Engagement (AIE). The AIE will be a mixed-methods evaluation combining quantitative and qualitative components moderated by an expert advisory group. What will the evaluation look like? Most likely it will be informed by metrics, along the lines of the ATSE Research Engagement for Australia proposal; it will include case studies, as per the UK's REF; and it will be moderated by expert review, just like the ATN/Go8 Excellence in Innovation for Australia.
Of particular importance to the evaluation will be how the terms 'impact' and 'engagement' are defined. Ask any researcher what they think the terms mean and you will get a different answer almost every time. This means there will be quite an education piece required to let us all know what the AIE is actually evaluating. And what is it evaluating? What will it tell us about the impact of university research? Most statements about Australian university research mention the same high-profile impacts over and again - Cochlear, Gardasil, Atlassian - fantastic impacts, but we already know about them; we don't need an evaluation to tell us about them again. An evaluation may uncover a goldmine of unknown impacts, but what tends to happen is that the high-profile impacts need very little, if any, time to be judged top of the pile, while the rest of the effort, resource and expense is consumed by the less impactful projects - the ones that never get mentioned in the media... so is it really worth it?
Sunday, November 1, 2015
Journal Quality Lists: ecological fallacy or convenient and cost-effective evaluation tool?
University managers are constantly seeking simple ways to measure and evaluate the research output of their university’s academics. While peer review of scholarly research papers is arguably the best way to determine the quality of any individual research output, it is also acknowledged that peer review is time consuming, expensive and subjective. Journal-level metrics, such as a journal quality list, present managers with a convenient, objective and inexpensive tool for determining the quality of scholarly articles. However, managers relying on journal-level metrics to evaluate articles may be committing the ecological fallacy. The ecological fallacy occurs when conclusions are drawn about individuals based only on analyses of group data - in this case, judging the quality of an individual article based only on the journal in which it is published.
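A small simulation can make the fallacy concrete. The Python sketch below uses invented, skewed citation distributions for two hypothetical journals (an illustrative assumption, not real citation data) to show that a journal's average says little about any individual article published in it.

# Illustrative simulation of the ecological fallacy in journal-level metrics.
# Citation counts within a journal are heavily skewed, so a group statistic
# like the journal's mean is a poor guide to the quality of any one article.
import random

random.seed(42)

# Assumed citation distributions for two hypothetical journals (exponential,
# i.e. a few highly cited articles and a long tail of rarely cited ones).
journal_a = [int(random.expovariate(1 / 20)) for _ in range(200)]  # "A*" journal
journal_b = [int(random.expovariate(1 / 5)) for _ in range(200)]   # "B" journal

mean_a = sum(journal_a) / len(journal_a)
mean_b = sum(journal_b) / len(journal_b)
print(f"Journal A mean citations: {mean_a:.1f}")
print(f"Journal B mean citations: {mean_b:.1f}")

# Despite the clear gap between the journal-level means, many individual
# articles cross over - the group average misjudges them.
print("Journal B articles cited above Journal A's mean:",
      sum(c > mean_a for c in journal_b))
print("Journal A articles cited below Journal B's mean:",
      sum(c < mean_b for c in journal_a))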
Using journal-level metrics to evaluate research quality is not a new phenomenon, with the earliest examples of journal quality lists found as far back as the late 1960s and early 1970s. Journal-level metrics often take the form of lists of scholarly journals ranked against some particular criteria. While there is no consensus on how a journal list should be compiled, many lists have been created using methodologies ranging from perceptual and peer-review-based rankings through to objective citation-based rankings. The use and misuse of journal rankings is well documented in the literature.
Within Australia, and internationally, the academic community is shifting away from the use of journal metrics to evaluate research. Australian academics were introduced to the ranked journal list as part of the national research evaluation exercise, Excellence in Research for Australia (ERA). The rankings were considered highly influential in determining a university’s ERA outcome, so many institutions began to provide incentives for staff to publish in ‘A*’ and ‘A’ ranked journals. The ranked journal list quickly became the most contentious issue of the ERA, and by 2011 the then Minister for Innovation, Industry, Science and Research, Kim Carr, announced that it would be discontinued because its ‘existence was focusing ill-informed undesirable behaviour in the management of research’. In 2010, Australia’s other major research funding agency, the NHMRC, released a statement saying that the Journal Impact Factor would no longer be accepted in applications for funding or used in the peer review of individual applications. The statement went on to say that the Journal Impact Factor is ‘not a sound basis upon which to judge the impact of individual papers’.
Internationally, the San Francisco Declaration on Research Assessment (DORA), originating from the December 2012 meeting of the American Society for Cell Biology, put forward a number of recommendations for funding agencies, universities and researchers regarding the use of metrics for research evaluation. Amongst its other recommendations, DORA aims to halt the use of journal-based metrics in the evaluation of individual researchers. As of August 2015 the declaration had over 12,500 individual and 588 institutional signatories.
While there are some compelling reasons to use journal quality lists to evaluate the research performance of academics, including convenience, objectivity and cost savings, there are also disadvantages. These include reduced academic freedom, promotion of outlet targeting, driving research in the direction of publisher preference, and disadvantaging specialist journals and specialist fields of research.
Whether we like them or not, journal quality lists have been part of research evaluation for the past 50 years and their legacy persists today. As the requirement for convenient and cost-effective research evaluation mechanisms increases, it is possible that journal quality lists will continue to play a part in research evaluation into the future. For examples of journal lists from around the world visit www.researchimpact.com.au/viewforum.php?f=20.