Friday, February 13, 2015

ERA 2015 Research Evaluation Committee (REC) Members


The Australian Research Council (ARC) today announced the full membership of the Research Evaluation Committee (REC) for the Excellence in Research for Australia (ERA) 2015 initiative.
There are 149 members in total, spread across eight discipline clusters. The majority (142) are drawn from Australian universities, with a small number (7) from international institutions. The discipline cluster with the most members is Humanities and Creative Arts (23), followed by Medical and Health Sciences (22) (see Table 1).

Table 1. REC membership by discipline cluster

Cluster                                                  REC members
Humanities and Creative Arts (HCA)                       23
Medical and Health Sciences (MHS)                        22
Education and Human Society (EHS)                        20
Physical, Chemical and Earth Sciences (PCE)              18
Biological and Biotechnological Sciences (BB)            17
Engineering and Environmental Sciences (EE)              17
Economics and Commerce (EC)                              16
Mathematical, Information and Computing Sciences (MIC)   16
Total                                                    149

The gender balance is uneven: only 28 percent of the membership is female and 72 percent is male. Of the eight panel chairs, two are female and six are male.

Table 2. REC membership by gender

Gender   REC members   %
Female   42            28%
Male     107           72%
Total    149           100%

Table 3 shows the breakdown of membership by gender and discipline. The ratio of female to male members is best in Education and Human Society (EHS) and Humanities and Creative Arts (HCA), and worst in Mathematical, Information and Computing Sciences (MIC) and Physical, Chemical and Earth Sciences (PCE).

Table 3. REC membership by gender and discipline

Discipline   Female   Male   Total
EHS          9        11     20
HCA          8        15     23
MHS          6        16     22
EC           4        12     16
BB           4        13     17
EE           4        13     17
PCE          4        14     18
MIC          3        13     16
Total        42       107    149
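
The best and worst ratios claimed above can be checked with a quick calculation over the counts in Table 3 (a minimal sketch, with the data transcribed from the table):

```python
# Female/male counts per discipline cluster, transcribed from Table 3.
counts = {
    "EHS": (9, 11), "HCA": (8, 15), "MHS": (6, 16), "EC": (4, 12),
    "BB": (4, 13), "EE": (4, 13), "PCE": (4, 14), "MIC": (3, 13),
}

# Female share of each cluster's membership, sorted best to worst.
shares = sorted(
    ((cluster, f / (f + m)) for cluster, (f, m) in counts.items()),
    key=lambda pair: pair[1],
    reverse=True,
)

for cluster, share in shares:
    print(f"{cluster}: {share:.0%}")
```

Education and Human Society comes out on top at 45 percent female, and Mathematical, Information and Computing Sciences last at 19 percent.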

The eastern mainland states made up around 70 percent of the total membership, with the largest contingents coming from NSW (53), Victoria (35) and Queensland (17).

Table 4. REC membership by State

State   REC members
NSW     53
Vic     35
Qld     17
WA      15
SA      12
ACT     8
Intl    7
Multi   1
Tas     1
Total   149
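
The eastern-states share is easy to verify from the table (a minimal sketch; note I am counting only the three eastern mainland states, NSW, Victoria and Queensland):

```python
# REC members by state, transcribed from the table above.
members = {"NSW": 53, "Vic": 35, "Qld": 17, "WA": 15, "SA": 12,
           "ACT": 8, "Intl": 7, "Multi": 1, "Tas": 1}

# Share of total membership from the eastern mainland states.
eastern = ("NSW", "Vic", "Qld")
share = sum(members[s] for s in eastern) / sum(members.values())
print(f"Eastern mainland states: {share:.0%}")
```

Adding the ACT and Tasmania would push the eastern share to around 77 percent.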


The Group of Eight universities were strongly represented, accounting for 70 members (47%). Outside the Go8, The University of Newcastle (9) and Macquarie University (7) were the best-represented universities.
Table 5. REC membership by university

University                              REC members
The University of Sydney                11
The University of Western Australia     11
The University of Melbourne             11
The University of New South Wales       10
Monash University                       9
The University of Newcastle             9
Macquarie University                    7
The University of Queensland            7
The University of Adelaide              6
University of Wollongong                5
Swinburne University of Technology      5
The Australian National University      5
University of Technology, Sydney        4
Flinders University                     4
University of Western Sydney            4
RMIT University                         4
Griffith University                     4
University of Canberra                  3
Curtin University of Technology         3
James Cook University                   2
The University of New England           2
Deakin University                       2
University of South Australia           2
Royal Holloway, University of London    2
Queensland University of Technology     2
La Trobe University                     2
Forschungszentrum Jülich GmbH Germany   1
Edith Cowan University                  1
Bond University                         1
Victoria University                     1
University of Edinburgh                 1
The University of Waikato               1
University of Tasmania                  1
Australian Catholic University          1
Federation University Australia         1
Tilburg, The Netherlands                1
Imperial College London                 1
Southern Cross University               1
The University of the Sunshine Coast    1
Total                                   149


The full list of members can be found on the ARC's website.

Tuesday, November 4, 2014

How does ERA evaluation differ between peer review and citation metrics disciplines?


Evaluation in the Excellence in Research for Australia (ERA) is done by either peer review (for the humanities and social sciences) or citation metrics (for the medical, engineering and physical sciences). On average, the scores across the sector are lower for the 'peer review' disciplines than for the 'citation metrics' disciplines. This may mean that in Australia the research quality of the humanities and social sciences is genuinely lower than that of the medical sciences, engineering and physical sciences, or it may be that the two evaluation processes are not equivalent. Unfortunately, no discipline has been evaluated using both methods in an ERA round, so it is not clear whether a discipline would fare the same under both.

I think two simple experiments could be conducted within the ERA that could have very interesting results. Experiment 1 would determine whether the same submission, evaluated by two different methods (citation metrics and peer review), would score differently in an evaluation of research quality. Experiment 2 would determine whether a peer-review evaluation of research quality is altered by knowing the authors and the journals in which the articles are published.

Experiment 1.
Run a comparison of evaluations of the same submission in a field of research (FOR) using (a) citation metrics and (b) peer review.

a. The first group of evaluators would make a judgment based only on the metrics (e.g. citation impact, research income per faculty, and the standing of journals that articles were published in).

b. The second group of evaluators would look at the same submission but with no metrics, and make a judgment purely by reading the journal articles of the submitted authors.

Both groups would come up with an ERA 'score'; it would be interesting to see if there was any difference.

Experiment 2.
Take group (b) above – the peer review group – and split them into a further two groups:

c. This group would peer review the submission by reading the articles with complete knowledge of the authors of the articles and which journals the articles were published in.

d. The second group would have no information on the authors of the articles or the journals in which they were published. These evaluators would have to read the articles and make an evaluation of the quality of the research based on the articles alone.

It would be interesting to see how much knowledge of the authors' names, or of the journals in which the articles were published, swayed the evaluators' assessments of research quality.
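
The assignment step for Experiment 2 could be sketched as below. This is purely illustrative: the reviewer pool, arm sizes and scores are all hypothetical stand-ins, and the real evaluation would of course involve actual reading of the submissions rather than random scores.

```python
import random
import statistics

# Illustrative sketch of Experiment 2: randomly split a pool of peer
# reviewers into an unblinded arm (c) and a blinded arm (d), then
# compare the mean ERA-style scores (1-5) each arm assigns.
random.seed(42)
reviewers = [f"reviewer_{i}" for i in range(20)]
random.shuffle(reviewers)
unblinded, blinded = reviewers[:10], reviewers[10:]

def review(arm):
    # Stand-in for the actual evaluation: each reviewer returns a
    # score from 1 (well below world standard) to 5 (well above it).
    return [random.randint(1, 5) for _ in arm]

scores_c = review(unblinded)
scores_d = review(blinded)
print(f"unblinded mean: {statistics.mean(scores_c):.2f}")
print(f"blinded mean:   {statistics.mean(scores_d):.2f}")
```

A meaningful gap between the two means, across enough submissions, would suggest that author and journal identity is doing some of the work in peer review.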


Wednesday, August 27, 2014

Developing an ‘Impact and Engagement for Australia’ (IEA) metric

ATSE has floated its idea for a research engagement and impact evaluation based purely on metrics. The metrics suggested are Category 3 income and commercialisation income. ATSE suggests that this evaluation be run alongside the ERA to produce a combined quality and impact rating (quality 5-1 and impact A-D).

It is important for Australia to consider an impact evaluation, and examples such as the Excellence in Innovation for Australia (EIA) have shown that it is possible to evaluate research impact in Australia. The UK has also evaluated impact as part of its national research evaluation exercise. Some argue that the EIA and REF methodology of case studies and peer review is onerous and expensive. However, that is by no means a reason not to do them: these sorts of evaluations will never be easy, because the evaluation of research is not easy. Proper evaluation of research requires time and people with the right level of expertise to carry it out.

Here are some of the issues I think the ATSE ERA-linked evaluation might have:

  • It assumes that your research input, output and impact all occur in the same FoR code.
  • It assumes that impact has occurred concurrently with the research – most impact is not realised that quickly.
  • It won’t take into account the difference between pure research and applied research in the same FoR.
  • It is focused only on economic impact – which for the commercialisation income might actually be more a measure of the success of the company selling the widget than the quality of the university.
  • It also does not take into consideration the amount of gaming that occurs in the ERA.
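
For what it's worth, the combined rating itself is trivial to represent. A hypothetical sketch (this is my own toy encoding, not ATSE's actual scheme):

```python
# Hypothetical encoding of a combined quality-and-impact rating:
# ERA quality on a 5 (highest) to 1 scale, impact graded A (highest) to D.
QUALITY = {5, 4, 3, 2, 1}
IMPACT = {"A", "B", "C", "D"}

def combined_rating(quality, impact):
    """Return a combined label such as '4B' after validating both parts."""
    if quality not in QUALITY or impact not in IMPACT:
        raise ValueError("quality must be 1-5 and impact must be A-D")
    return f"{quality}{impact}"

print(combined_rating(4, "B"))  # -> 4B
```

The hard part is not the label but everything the bullet points above describe: attributing income and impact to the right FoR code, the right time period, and the right institution.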

The ATSE release can be read here: http://www.atse.org.au/atse/content/activity/innovation-content/developing-impact-engagement-australia-metric.aspx

Saturday, August 16, 2014

Kardashian Index

Do you know what your Kardashian Index is?

Your Kardashian Index is a measure of the discrepancy between your social media profile and your publication record based on the direct comparison of numbers of citations and Twitter followers.

Anything greater than 5 and you are considered a science Kardashian! My index is only 0.1, so I am far from a Kardashian; I think I am going to have to try to get more Twitter followers...
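
If I have read Hall's paper correctly, the expected follower count is an empirical fit of the form F(C) = 43.3 × C^0.588, and the index is just the ratio of actual to expected followers. A quick sketch (the example numbers are made up):

```python
# Kardashian Index (Hall, 2014): the ratio of a scientist's actual
# Twitter followers to the number 'expected' from their citation count.
def expected_followers(citations):
    # Empirical fit reported in Hall's paper: F(C) = 43.3 * C^0.588
    return 43.3 * citations ** 0.588

def kardashian_index(followers, citations):
    return followers / expected_followers(citations)

# e.g. a well-cited scientist with a modest Twitter following scores
# well below the 'science Kardashian' threshold of 5.
print(f"{kardashian_index(200, 5000):.2f}")
```

Conversely, a huge follower count on a thin citation record is what pushes the index over 5.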

Read Neil Hall's paper: The Kardashian index: a measure of discrepant social media profile for scientists http://genomebiology.com/2014/15/7/424