Sunday, November 1, 2015

Journal Quality Lists: ecological fallacy or convenient and cost-effective evaluation tool?

University managers are constantly seeking simple ways to measure and evaluate the research output of their university’s academics. While peer review of scholarly research papers is arguably the best way to determine the quality of any individual research output, it is also acknowledged that peer review is time consuming, expensive and subjective. Journal-level metrics, such as a journal quality list, present managers with a convenient, objective and inexpensive tool for assessing scholarly articles. However, managers relying on journal-level metrics to evaluate articles may be committing the ecological fallacy. The ecological fallacy occurs when conclusions about individuals are drawn only from analyses of group data: in this case, judging the quality of an individual article solely by the journal in which it is published.
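To see the fallacy in miniature, consider a small, entirely hypothetical example (the journals and citation counts below are invented purely for illustration):

```python
# Hypothetical citation counts for articles in two invented journals.
journal_a = [50, 2, 1, 1, 1]   # one highly cited paper inflates the average
journal_b = [8, 7, 7, 6, 7]    # consistently solid papers

mean_a = sum(journal_a) / len(journal_a)
mean_b = sum(journal_b) / len(journal_b)

print(mean_a, mean_b)  # 11.0 7.0

# A journal-level metric ranks journal A above journal B, yet four of A's
# five articles score below every single article in B. Judging an individual
# article by its journal's average is the ecological fallacy in action.
```

The group-level statistic says nothing reliable about any particular article within the group, which is precisely the problem with using journal rankings to evaluate individual papers.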

Using journal-level metrics to evaluate research quality is not a new phenomenon, with the earliest examples of journal quality lists dating back to the late 1960s and early 1970s. Journal-level metrics often take the form of lists of scholarly journals that have been ranked against particular criteria. While there is no consensus on how a journal list should be compiled, many lists have been created using methodologies ranging from perceptual, peer-review-based rankings through to objective, citation-based rankings.

The use and misuse of journal rankings is well documented in the literature. Within Australia, and internationally, the academic community is shifting away from the use of journal metrics to evaluate research. Australian academics were introduced to the ranked journal list as part of the national research evaluation exercise, Excellence in Research for Australia (ERA). The rankings were considered highly influential in determining a university’s ERA outcome, so many institutions began to provide incentives for staff to publish in ‘A*’ and ‘A’ ranked journals. The ranked journal list quickly became the most contentious issue of the ERA, and in 2011 the then Minister for Innovation, Industry, Science and Research, Kim Carr, announced that it would be discontinued because its ‘existence was focusing ill-informed undesirable behaviour in the management of research’. In 2010, Australia’s other major research funding agency, the NHMRC, released a statement saying that the Journal Impact Factor would no longer be accepted in applications for funding or used in the peer review of individual applications. The statement went on to say that the Journal Impact Factor is ‘not a sound basis upon which to judge the impact of individual papers’.

Internationally, the San Francisco Declaration on Research Assessment (DORA), originating from the December 2012 meeting of the American Society for Cell Biology, put forward a number of recommendations for funding agencies, universities and researchers regarding the use of metrics for research evaluation. Amongst its other recommendations, DORA aims to halt the use of journal-based metrics in the research evaluation of individual researchers. As of August 2015 the declaration had over 12,500 individual and 588 institutional signatories.

While there are some compelling reasons to use journal quality lists to evaluate the research performance of academics, including convenience, objectivity and cost savings, there are also disadvantages. These include reduced academic freedom, the promotion of outlet targeting, driving research in the direction of publisher preference, and the disadvantaging of specialist journals and specialist fields of research.


Whether we like them or not, journal quality lists have been part of research evaluation for the past 50 years, and their legacy persists today. As the demand for convenient and cost-effective research evaluation mechanisms increases, it is likely that journal quality lists will continue to play a part in research evaluation into the future. For examples of journal lists from around the world visit www.researchimpact.com.au/viewforum.php?f=20.

Thursday, August 6, 2015

Research Evaluation – an argument for a ‘census’ driven collection of publications

In this age of accountability, no one questions the idea that data relating to university research publications should be collected and reported on. Research publications are no longer only a mechanism for disseminating research findings; they are now also a measure of research performance.

It is hardly surprising, then, that discussions arising from a recent review by PhillipsKPA are not about whether we should collect research publication data but how we can collect it more efficiently. Australian universities currently report research publications data through the Higher Education Research Data Collection (HERDC) and Excellence in Research for Australia (ERA). One of the 27 recommendations from the PhillipsKPA Review of University Reporting Requirements is to streamline the collection of research data into a single collection. Combining the two collections will only be worthwhile if it improves the efficiency, integrity, transparency and utility of the data being collected. Any consultation document will hopefully clarify for the sector how a combined collection would achieve this.

While both mechanisms currently collect research publication and research income data, that is really where the similarities end. The type of data collected, the level of detail collected and, importantly, the purpose of each collection are quite different.

The purpose of the HERDC is to collect research income and publications data to inform the distribution of research block grants to universities based on their relative performance in each measure. The HERDC only reports on publication volume and does not consider the field of research or the quality of the research – at best it provides a proxy for the volume of research activity across Australian universities.
According to the ARC’s ERA documentation the objectives of ERA are much broader than the HERDC and are listed as, to:

·         establish an evaluation framework that gives government, industry, business and the wider community assurance of the excellence of research conducted in Australian higher education institutions;

·         provide a national stocktake of discipline level areas of research strength and areas where there is opportunity for development in Australian higher education institutions;

·         identify excellence across the full spectrum of research performance;

·         identify emerging research areas and opportunities for further development; and

·         allow for comparisons of research in Australia, nationally and internationally, for all discipline areas.

If we focus on the collection of research publications data, there is one main difference between the two. In the HERDC, data are collected for all publications that acknowledge the university on the publication itself, for example through an author ‘by-line’, regardless of whether the author is still employed at the university; I will refer to this as an ‘address’ based collection. In the ERA, data are collected for all publications authored by researchers employed by the university at a census date (usually 31 March of the year preceding the ERA collection), regardless of whether the university is acknowledged within the publication; I will refer to this as a ‘census’ based collection. The difference between an ‘address’ based collection and a ‘census’ based collection may not at first seem significant, but it is.
Consider the case of the HERDC: data are collected on publications only where the university has been acknowledged on the publication, regardless of whether the researcher or research group still works at the university. Once the data have been collected and reported to the Department of Education, the numbers are used to distribute block grant funding (approximately $1,700 per publication point). This effectively rewards universities for the volume of publications they can report that list the university in the byline. No consideration is given to the quality or the focus of the research in the publications, just the volume. If the researchers who produced the publications have since left the university, the university is still credited with their research activity.

A big advantage of the address based collection is that it is easy to determine which outputs are eligible for collection and to which university they belong. An address based collection could even be conducted by a third party (for example through a citation data provider like Scopus or Web of Science). The disadvantage is that the collection is retrospective: researchers may have left the university, yet their output still contributes to the block grant allocation. This is fine if you are rewarding past performance, but problematic if you are trying to profile the current research strengths of Australian universities or to fund for future research success.
Now consider the ERA, which also collects publications data. These are the publications produced by the current cohort of staff at the university, not just the ones with the university listed in the byline. When the publications are reported they are assigned to fields of research and subsequently given a quality score by a national evaluation panel. This allows the Department (and the public) to see where research excellence exists in Australian universities and where research strengths may be emerging. The main disadvantage of a census based collection is that it requires more administrative work, as publications are not readily identifiable by a university byline, and it cannot easily be performed by a third party. The advantages are that it represents the current research profile of the university and encourages universities to strategically recruit researchers who contribute to that profile.
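The difference between the two collection rules can be sketched in a few lines of code. The publication records, university names and census staff list below are all hypothetical, invented only to illustrate the two filters:

```python
# Hypothetical publication records: each carries the bylines printed on the
# paper. A separate census list records which papers were written by staff
# employed at 'Uni X' on the census date (e.g. 31 March).
pubs = [
    {"title": "P1", "bylines": ["Uni X", "Uni Y"]},  # author still at Uni X
    {"title": "P2", "bylines": ["Uni X"]},           # author has since left Uni X
    {"title": "P3", "bylines": ["Uni Y"]},           # author joined Uni X after publishing
]
census_staff_pubs = {"P1", "P3"}  # papers by researchers employed at the census date

# 'Address' based collection: everything carrying the university byline,
# even where the author has since left.
address_collection = [p["title"] for p in pubs if "Uni X" in p["bylines"]]

# 'Census' based collection: everything written by current staff,
# regardless of the byline printed on the paper.
census_collection = [p["title"] for p in pubs if p["title"] in census_staff_pubs]

print(address_collection)  # ['P1', 'P2']
print(census_collection)   # ['P1', 'P3']
```

The same three papers yield two different collections: the address rule credits the departed author's paper (P2), while the census rule credits the newly recruited author's paper (P3).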

In the case of a new “combined collection” for research publications data, it is not immediately clear whether it would be based on the ‘address’ or the ‘census’ approach. Each has its pros and cons, and each is useful for a different purpose. However, I would argue that the ‘census’ based approach is more appropriate in this case, for the following reasons:

1. It allows universities to demonstrate a current research profile based on researchers who are actually working at the university, rather than a retrospective profile where researchers may well have left the university since the evaluation;

2. It allows universities to respond strategically to changes in the research landscape and funding environment, for example by recruiting researchers to complement or strengthen their existing research profile;

3. It trusts universities with the responsibility of presenting their research in a meaningful way, based on their own knowledge of their researchers and their research, rather than leaving it to a third party or a generic business process relying on accurately recorded address data; and

4. It aligns with the national uptake of a universal researcher ID in Australia, such as ORCID.

Conversely, I would argue that using an ‘address’ based collection for research evaluation may result in what I call ‘phantom’ units of evaluation. A publication can have multiple authors, and therefore multiple universities listed in the byline; each author may also have multiple bylines. As a result, each author of each publication can potentially contribute to multiple university research evaluations.

A ‘phantom’ evaluation would occur where a university appears to meet the minimum volume of publication output (for ERA this is 50 publications to trigger an evaluation) based only on the fact that the university’s name appears at least once on 50 publications. While the byline appears on the publication, the author may not actually work at the university, for example because they have since left, or because they hold multiple bylines that include other university affiliations in addition to the one where they work.

An ‘address’ based collection would also potentially disadvantage universities that have strategically invested in recruiting new researchers. In this case, while the new university is paying the researchers’ salaries, those researchers’ publications would be contributing to another university’s research evaluation based on their previous bylines.

Friday, April 3, 2015

Collaborate, innovate, or be left behind

The Australian Technology Network (ATN) has released a new report titled ‘Innovate and Prosper – Ensuring Australia’s Future Competitiveness through University-Industry Collaboration’.

It is an interesting read, with recommendations on how to better incentivise collaboration between industry and universities. These include, amongst a range of others, improving the transfer of university researchers and graduates into industry and increasing funding for universities based on industry engagement. Developing an engagement measure is also touched on, and it is good to note that the ATN has had input from industry in this process through its Industry Advisory Board. If the university sector is going to develop a robust metric for engagement or research impact, it should be done with industry input. More information can be found here:

https://www.atn.edu.au/News-room/Latest-News/Collaborate-innvoate-or-be-left-behind/

Friday, February 13, 2015

ERA 2015 Research Evaluation Committee (REC) Members


The Australian Research Council (ARC) today announced the full membership of the Research Evaluation Committee (REC) for the Excellence in Research for Australia (ERA) 2015 initiative.
There are 149 members in total, spread across 8 discipline clusters. The majority of members are drawn from Australian universities (142), with a small number from international institutions (7). The discipline cluster with the most members is Humanities and Creative Arts (23), followed by Medical and Health Sciences (22); see Table 1.

Table 1. REC membership by discipline cluster

Cluster | REC members
Humanities and Creative Arts (HCA) | 23
Medical and Health Sciences (MHS) | 22
Education and Human Society (EHS) | 20
Physical, Chemical and Earth Sciences (PCE) | 18
Biological and Biotechnological Sciences (BB) | 17
Engineering and Environmental Sciences (EE) | 17
Economics and Commerce (EC) | 16
Mathematical, Information and Computing Sciences (MIC) | 16
Total | 149

The gender balance is uneven, with only 28 percent of members female and 72 percent male. Of the 8 panel chairs, 2 are female and 6 are male.

Table 2. REC membership by gender
Gender | REC members | %
Female | 42 | 28%
Male | 107 | 72%
Total | 149 | 100%

Table 3 shows the breakdown of membership by gender and discipline, with the most balanced ratio of female to male members in Education and Human Society (EHS) and Humanities and Creative Arts (HCA), and the least balanced in Mathematical, Information and Computing Sciences (MIC) and Physical, Chemical and Earth Sciences (PCE).

Table 3. REC membership by gender and discipline

Discipline | Female | Male | Total
EHS | 9 | 11 | 20
HCA | 8 | 15 | 23
MHS | 6 | 16 | 22
EC | 4 | 12 | 16
BB | 4 | 13 | 17
EE | 4 | 13 | 17
PCE | 4 | 14 | 18
MIC | 3 | 13 | 16
Total | 42 | 107 | 149

The eastern states accounted for the bulk of the total membership, with the highest numbers coming from NSW (53), Victoria (35) and Queensland (17).

Table 4. REC membership by State

State | REC members
NSW | 53
Vic | 35
Qld | 17
WA | 15
SA | 12
ACT | 8
Intl | 7
Multi | 1
Tas | 1
Total | 149


The Group of Eight universities had a major representation in the membership, with 70 members (47%). The University of Newcastle (9) and Macquarie University (7) were the best represented universities outside the Go8.
Table 5. REC membership by university

University | REC members
The University of Sydney | 11
The University of Western Australia | 11
The University of Melbourne | 11
The University of New South Wales | 10
Monash University | 9
The University of Newcastle | 9
Macquarie University | 7
The University of Queensland | 7
The University of Adelaide | 6
University of Wollongong | 5
Swinburne University of Technology | 5
The Australian National University | 5
University of Technology, Sydney | 4
Flinders University | 4
University of Western Sydney | 4
RMIT University | 4
Griffith University | 4
University of Canberra | 3
Curtin University of Technology | 3
James Cook University | 2
The University of New England | 2
Deakin University | 2
University of South Australia | 2
Royal Holloway, University of London | 2
Queensland University of Technology | 2
La Trobe University | 2
Forschungszentrum Jülich GmbH, Germany | 1
Edith Cowan University | 1
Bond University | 1
Victoria University | 1
University of Edinburgh | 1
The University of Waikato | 1
University of Tasmania | 1
Australian Catholic University | 1
Federation University Australia | 1
Tilburg, The Netherlands | 1
Imperial College London | 1
Southern Cross University | 1
The University of the Sunshine Coast | 1
Total | 149


The full list of members can be found on the ARC’s website.