Tuesday, November 4, 2014

How does ERA evaluation differ between peer review and citation metrics disciplines


Evaluation in the Excellence in Research for Australia (ERA) is done either by peer review (for the humanities and social sciences) or by citation metrics (for the medical, engineering and physical sciences). On average, the scores across the sector are lower for the 'peer review' disciplines than for the 'citation metrics' disciplines. This may mean that in Australia the research quality of the humanities and social sciences is not as good as that of the medical sciences, engineering and physical sciences - or it may be that the two evaluation processes are not equivalent. Unfortunately, no discipline has been evaluated using both methods in an ERA round, so it is not clear whether a discipline would fare the same under both.

I think two simple experiments could be conducted within the ERA, and they could have very interesting results. Experiment 1 would determine whether the same submission, evaluated by two different methods (citation metrics and peer review), would score differently in an evaluation of research quality. Experiment 2 would determine whether an evaluation of research quality by peer review is altered by knowing the authors and the journals in which the articles are published.

Experiment 1.
Run a comparison of evaluations of the same submission in a field of research (FOR) using (a) citation metrics and (b) peer review.

a. The first group of evaluators would make a judgment based only on the metrics (e.g. citation impact, research income per faculty, and the standing of journals that articles were published in).

b. The second group of evaluators would look at the same submission but with no metrics and make a judgment purely by reading the journal articles of the submitted authors.

Both groups would come up with an ERA 'score' – it would be interesting to see if there was any difference.

Experiment 2.
Take group (b) above – the peer review group – and split them into a further two groups:

c. This group would peer review the submission by reading the articles with complete knowledge of the authors of the articles and which journals the articles were published in.

d. The second group would have no information on the authors of the articles or the journals in which they were published. These evaluators would have to read the articles and make an evaluation of the quality of the research based on the articles alone.

It would be interesting to see how much knowledge of the authors' names, or of the journals in which the articles were published, swayed the evaluators' assessment of research quality.


Wednesday, August 27, 2014

Developing an ‘Impact and Engagement for Australia’ (IEA) metric

ATSE has floated its idea for a research engagement and impact evaluation based purely on metrics. The metrics suggested are Category 3 income and commercialisation income. ATSE suggests that this evaluation be run alongside the ERA to produce a combined quality and impact rating (quality 5-1 and impact A-D).

It is important for Australia to consider an impact evaluation - and examples such as the Excellence in Innovation for Australia (EIA) have shown that it is possible to evaluate research impact in Australia. The UK has also evaluated impact as part of its national research evaluation exercise, the Research Excellence Framework (REF). Some argue that the EIA and REF methodology of case studies and peer review is onerous and expensive. However, this is by no means a reason not to do them - these sorts of evaluations will never be easy because the evaluation of research is not easy. Proper evaluation of research requires time and people with the right level of expertise to carry out the evaluation.

Here are some of the issues I think the ATSE ERA-linked evaluation might have:

  • It assumes that your research input, output and impact all occur in the same FoR code.
  • It assumes that impact has occurred concurrently with the research – most impact is not realised that quickly.
  • It won’t take into account the difference between pure research and applied research in the same FoR.
  • It is focused only on economic impact – which for the commercialisation income might actually be more a measure of the success of the company selling the widget than the quality of the university.
  • It also does not take into consideration the amount of gaming that occurs in the ERA.

The ATSE release can be read here: http://www.atse.org.au/atse/content/activity/innovation-content/developing-impact-engagement-australia-metric.aspx

Saturday, August 16, 2014

Kardashian Index

Do you know what your Kardashian Index is?

Your Kardashian Index is a measure of the discrepancy between your social media profile and your publication record based on the direct comparison of numbers of citations and Twitter followers.

Anything greater than '5' and you are considered a science Kardashian! My index is only 0.1, so I am far from a Kardashian - I think I am going to have to try to get more Twitter followers...

Read Neil Hall's paper: The Kardashian index: a measure of discrepant social media profile for scientists http://genomebiology.com/2014/15/7/424
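For anyone who wants to check their own score, here is a minimal sketch of the calculation as described in Hall's paper: the expected follower count is modelled as 43.3 × C^0.32 (with C the total citation count) and the K-index is actual followers divided by that expectation. The numbers in the example are made up for illustration.

```python
# Minimal sketch of the Kardashian Index from Hall (2014).
# Expected Twitter followers are modelled as F_exp = 43.3 * C^0.32,
# where C is total citations; K-index = actual followers / F_exp.
# The example inputs below are illustrative only.

def kardashian_index(twitter_followers: int, total_citations: int) -> float:
    expected_followers = 43.3 * total_citations ** 0.32
    return twitter_followers / expected_followers

# e.g. 200 followers and 5,000 citations gives a K-index of about 0.3 -
# well short of 'science Kardashian' territory (K > 5).
print(round(kardashian_index(200, 5000), 2))
```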

Research ethics and the Ebola epidemic

The recent outbreak of the Ebola virus has presented scientists with an opportunity to test potential vaccines on patients. While the epidemic is terrifying, this does not mean that research ethics should be ignored.

http://www.researchimpact.com.au/viewtopic.php?f=19&t=36

Sunday, August 10, 2014

Four reasons to stop caring so much about the h-index.


Interesting blog piece from Stacy Konkiel on the h-index and altmetrics called 'Four reasons to stop caring so much about the h-index'. I like that Stacy doesn't ask us to stop caring altogether about the h-index but just to stop caring 'so much' about it.

It seems confusing to me to compare the h-index directly with altmetrics. To me they are two different things: the h-index attempts to measure the productivity and citation impact of a person, whereas an Altmetric score attempts to measure the influence of an article. Why not compare the Altmetric score to the Relative Citation Impact (RCI), for example? The RCI is normalized for the discipline and the year the article was published, and gives a measure of whether the article has been cited above or below the expected average rate for its discipline. It is not immediately clear how to determine whether an Altmetrics score is relatively high or low for a discipline, so it would be useful if it incorporated some sort of benchmark.
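To make the benchmarking point concrete, here is a minimal sketch of an RCI-style calculation; the field and year benchmark figures are invented for illustration (in practice they would come from a citation database's world averages). Something like this benchmark is exactly what an Altmetrics score currently lacks.

```python
# Sketch of a Relative Citation Impact (RCI) style calculation.
# The benchmark values below are invented for illustration; real ones
# come from a citation database's expected citations per field and year.

FIELD_YEAR_BENCHMARKS = {
    ("Physical Sciences", 2010): 12.4,  # hypothetical expected citations
    ("History", 2010): 1.8,
}

def relative_citation_impact(citations: int, field: str, year: int) -> float:
    # RCI > 1 means the article is cited above the expected rate
    # for its discipline and publication year; RCI < 1 means below.
    return citations / FIELD_YEAR_BENCHMARKS[(field, year)]

print(round(relative_citation_impact(25, "Physical Sciences", 2010), 2))  # ~2.02
print(round(relative_citation_impact(4, "History", 2010), 2))             # ~2.22
```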


Monday, July 28, 2014

More Journal Ranking

Here are some more journal rankings to add to your list:

  • Impact per Publication (IPP)
  • SCImago Journal Rank (SJR)
  • Source Normalized Impact per Paper (SNIP)




Wednesday, July 23, 2014

Journal Ranking


If you would like to view some of the popular and publicly available journal 'ranking' lists head on over to the researchimpact.com.au forum to check out:

  • Eigenfactor.org
  • ERA Journal Ranking
  • Harzing - journal quality list
  • ABDC - journal quality list
Make a comment on these lists or perhaps add some of the ones that you know of or use.

Why was it better to be a researcher in Design Practice and Management than Forestry for ERA 2010?

Saturday, July 12, 2014

DECRA not considered a highly regarded form of accolade by the ARC's ERA 2015

The third round of the ARC's Excellence in Research for Australia (ERA 2015) is underway and, unfortunately, if you hold a Discovery Early Career Researcher Award (DECRA) it is not considered an 'esteem' measure. However, you will be considered 'esteemed' if you hold an NHMRC Early Career Fellowship.

The ERA includes a number of measures of 'esteem' and they are defined as measures that:
constitute recognition of the quality ... and indicate that a researcher is held in particularly high regard by peers in their field of research and/or by other well qualified parties. ...embody a measure of prestige and are recognised by experts within the discipline as a highly desired, highly regarded form of accolade or acknowledgement.
The ERA requests that institutions submit data on only five esteem measures (listed below), of which one is 'nationally competitive research fellowships'. Unfortunately, it looks like the DECRAs did not make it onto the list this time, so they are not considered a measure of esteem by the ERA.

The esteem measures eligible for ERA 2015 are the following:
  • editor of a prestigious work of reference;
  • fellowship of a learned academy and membership of the Australian Institute of Aboriginal and Torres Strait Islander Studies (AIATSIS);
  • recipient of a nationally competitive research fellowship; 
  • membership of a statutory committee; and 
  • recipient of an Australia Council grant or Australia Council fellowship.
And, only nationally competitive research fellowships in the following programs are eligible (where are the DECRAs?):
  • ARC Discovery—Australian Laureate Fellowships; 
  • ARC Discovery—Federation Fellowships; 
  • ARC Discovery—Future Fellowships; 
  • ARC Discovery—Indigenous Researchers’ Development; 
  • ARC Discovery—Projects (including Australian Professorial Fellowships, Queen Elizabeth II Fellowships, and Australian Postdoctoral Fellowships); 
  • ARC Linkage—International; 
  • ARC Linkage—Projects (including Australian Postdoctoral (Industry) Fellowships); 
  • NHMRC Practitioner Fellowships (Formerly Practitioner Fellowships Scheme); 
  • NHMRC Research Fellowships; 
  • NHMRC Australia Fellowship (Formerly Australia Fellowship Scheme);
  • NHMRC Career Development Fellowships (Formerly Career Development Awards); 
  • NHMRC Early Career Fellowships (Formerly Postdoctoral Training Fellowships); 
  • NHMRC Sir Macfarlane Burnet Fellowship;
  • NHMRC John Cade Fellowship in Mental Health Research; and 
  • NHMRC Translating Research Into Practice (TRIP) Fellowships.
To check out the ERA 2015 and related documents yourself you can visit: http://arc.gov.au/era/era_2015/2015_keydocs.htm

Saturday, July 5, 2014

Nobel winner declares boycott of top science journals



Nobel prize winner Randy Schekman says his lab will no longer send papers to Nature, Cell and Science as they distort the scientific process.


After careful consideration, I too will join Schekman's revolution and will be boycotting Nature, Science and Cell by not publishing anything with them - after I win a Nobel prize.

There is enormous pressure on academics to produce 'excellent' research and as Schekman points out 'the incentives offered by top journals distort science'. We have also seen increased instances of plagiarism and publication of falsified results from all over the world. Schekman is advocating a move away from the 'luxury' journals, like Nature and Science, towards online and open access journals.

This poses an interesting question about how to advise early career researchers (ECRs) on where and how to publish. There is still a strong incentive through grant reviews, performance reviews, promotions, and university rankings to publish in high impact factor journals like Nature, Science and Cell. If an ECR chooses to boycott these types of journals, will they jeopardize their career?

Schekman has obviously done very well in his career - and he has the traditional performance metrics to back that up. According to Scopus, Schekman has over 280 papers with over 18,000 citations (the stats in the Web of Science are even more impressive, with over 25,000 citations!). Over 40 of these papers are published in Nature, Science or Cell, and they account for about 40% of the citations. Schekman's Nature, Science and Cell papers are cited, on average, about four times more per paper than the rest of his output - impressive stats that would help win grants and promotions.

I think Schekman's idea is great, open access journals are a good thing, and it is heartening that a Nobel prize winner is joining the revolution. But Schekman is boycotting these journals as a researcher who has reached the top of his field. What advice would he have for early career researchers on where to publish when they are at the start of their careers?

Excellence in Research for Australia - ERA 2015

The Australian Research Council (ARC) this week released the submission documents for the Excellence in Research for Australia (ERA) 2015. The 2015 guidelines are very similar to the 2012 guidelines with only a few significant changes. 

The first change is that there is now a requirement to submit more data that are not directly related to the evaluation. For example, institutions are now required to submit data on staff gender, on which publications are available via open access, and on the time and cost associated with preparing the institution's ERA submission. The problem with requesting these sorts of data is that each new requirement costs an institution time and money to collect and report. This means an increased cost to the whole sector for information that will not actually be used in the evaluation process - it is collected just for information.

A second change is that institutions are now able to write to the ARC regarding staff on less than 0.4 FTE contracts and argue a case for their inclusion (and the inclusion of their publications). This is a good move as it reduces the negative impact on fractional staff - a cohort that includes more female academics than male - thus shifting towards a more equitable process (although still not ideal). I wrote about why I thought there was a gender equity issue here:


Institutions will be required to submit their ERA 2015 data in early 2015.

http://arc.gov.au/era/era_2015/era_2015.htm



Tuesday, March 11, 2014

Forum to discuss evaluation of research at Australian universities www.researchimpact.com.au


I have just launched a new forum called Research Impact for those who would like to discuss the evaluation of university research activity, quality and impact.

Feel free to drop in on the forum if you are interested in discussing topics like the Excellence in Research for Australia (ERA), the evaluation of research impact and the 'impact dividend', or the growing number of article retractions in the sciences.

www.researchimpact.com.au

Thursday, February 13, 2014

Does ERA 2015 present a gender equity issue for the higher education sector?

Academic staff employed on fractional contracts may be disadvantaged in the ERA by the requirement for employees on less than 0.4 FTE to have published under their new university's byline within the ERA reference period - a period that has already passed.

A recent article in The Australian highlights the 'churning' of academic staff that is occurring in the sector to coincide with the ERA census date (31 March). Researchers who move (or are poached) to a new institution by the census date have their entire publication track record counted towards their new employer's ERA submission, despite the fact that the work was likely supported by their previous employer. Because researchers take their performance with them for the ERA, there is a lot of 'horse trading' (or churning) between institutions around the census date as they try to maximize their ERA outcomes. This means that high performing researchers, or increasingly whole groups of researchers, become a valuable commodity and can no doubt leverage this newly discovered value to their advantage in negotiations with potential employers.

However, as is often the case with these sorts of initiatives, the ERA guidelines are not quite as simple as this. There is actually a guideline that hinges on whether the newly employed researcher's contract is above or below 0.4 FTE. If the contract is greater than 0.4 FTE (e.g. 1.0 FTE), the researcher and all of their publication activity are automatically eligible for the ERA submission. BUT, if the contract is for less than 0.4 FTE, there is an extra requirement: the researcher must show a "publication association" with their new institution, for example a by-line that lists the new institution as their affiliation. As the ERA is a retrospective evaluation, it is highly unlikely that a researcher employed close to the census date would have had time to publish something listing the new university as their affiliation. With a less than 0.4 FTE contract and no affiliation byline with their new employer, suddenly none of their performance counts for the ERA. In ERA terms this makes full-time researchers much more valuable than fractional researchers - and, therefore, universities will likely employ full-time researchers close to the ERA census date rather than look for fractional appointments.
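To spell the rule out, here is a minimal sketch of the eligibility logic as I read it; the function and threshold handling are my own simplification, not the ARC's wording.

```python
# My reading of the ERA staff eligibility rule described above (a
# simplification, not the ARC's wording): researchers at or above 0.4 FTE
# at the census date are automatically eligible with their full publication
# record; researchers below 0.4 FTE also need a publication byline with the
# new institution within the reference period.

def era_eligible(fte: float, has_byline_with_new_institution: bool) -> bool:
    if fte < 0.4:
        return has_byline_with_new_institution
    return True

# A full-time hire just before the census date brings their whole track record:
print(era_eligible(1.0, has_byline_with_new_institution=False))  # True
# A 0.2 FTE hire with no byline at the new university brings nothing:
print(era_eligible(0.2, has_byline_with_new_institution=False))  # False
```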

So why is this inequitable? The process makes full-time staff more attractive to a university around the census date and fractional staff less attractive. So while full-time researchers can personally and professionally benefit from the head-hunting, horse-trading and churning that goes on, fractional researchers may not. The national higher education staff data collection shows that female academics make up a higher proportion of fractional appointments in the sector than males (for research-only academics the ratio is almost 2:1). It is likely, then, that not only will fractional researchers suffer because of this, but that female academics will be affected in greater numbers than males.

Tuesday, February 11, 2014

How much is my publication worth to the university?

The Australian Government funds university research through the Australian Research Council (ARC) and National Health and Medical Research Council (NHMRC) as well as through the research block grant schemes. Research block grant funding, which is now managed by the Department of Education, currently consists of four performance based programs to support research and research training in Australian universities. The four programs are:

  • The Sustainable Research Excellence (SRE) initiative;
  • the Joint Research Engagement (JRE) program;
  • the Research Infrastructure Block Grant (RIBG) scheme; and
  • the Research Training Scheme (RTS).
Universities report research performance measures such as publications, research income, higher degree by research (HDR) load and HDR completions to the Department on an annual basis. These data are used in the calculations for distributing block grant funds to the sector. Publications factor into two of the funding schemes, contributing to the distribution of 10% of the JRE and 10% of the RTS. Because these data and the calculation methodology are publicly available, a simple calculation can place a dollar value on a research publication.


This table shows that in 2002 each publication point was worth $2,928. In fact, a book would have been worth a whopping $14,640, as books are weighted at 5:1 over other publication types. However, this value has rapidly decreased from $2,928 in 2002 to only $1,745 in 2014 ($8,725 for a book). So, if you are publishing research at a university and wondering how much your publication is worth in dollar terms, now you know: $1,745. BUT, this is for a sole-authored paper; co-author with someone outside your institution and the value halves to $872; co-author with two others and it drops to $582, and so on.

I was shown a paper the other day with the title: "Inclusive Search for Supersymmetry Using Razor Variables in pp Collisions at sqrt[s]=7 TeV"; this paper had 2,196 authors listed on it, one of whom was from an Australian university. This means that this highly collaborative research paper was worth a grand total of 79 cents!
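For what it is worth, here is a rough sketch of that arithmetic; the 2014 point value and the 5:1 book weighting are taken from the figures above, while the per-author apportionment is my simplification of the block grant methodology.

```python
# Rough sketch of the publication-point arithmetic described above.
# POINT_VALUE_2014 and the 5:1 book weighting come from the post; the
# per-author apportionment is a simplification of the block grant rules.

POINT_VALUE_2014 = 1745.0  # dollars per publication point in 2014
WEIGHTS = {"journal_article": 1, "conference_paper": 1, "book_chapter": 1, "book": 5}

def publication_value(pub_type: str, total_authors: int, authors_at_institution: int = 1) -> float:
    points = WEIGHTS[pub_type] * authors_at_institution / total_authors
    return POINT_VALUE_2014 * points

print(round(publication_value("journal_article", 1), 2))     # 1745.00 - sole-authored paper
print(round(publication_value("book", 1), 2))                # 8725.00 - sole-authored book
print(round(publication_value("journal_article", 3), 2))     # 581.67 - three authors
print(round(publication_value("journal_article", 2196), 2))  # 0.79 - the 2,196-author paper
```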

Saturday, February 8, 2014

Draft ERA 2015 Journal and Conference Lists

The ARC have released their draft ERA 2015 Journal and Conference lists.

Draft ERA 2015 Journal and Conference Lists

Public Consultation on the Draft ERA 2015 Submission Documents

The Australian Research Council (ARC) is currently seeking public feedback on the Draft ERA 2015 submission documents.

One of several changes being suggested in the new guidelines is the addition of output volume information to the published ratings (Outcome Reporting - see section 4.2.3). This means that for evaluated fields of research the ARC will publicly report not only the ERA rating (5-1) but also the volume of outputs aligned to that rating. In past rounds of ERA the evaluation has highlighted 'excellence wherever it occurs', meaning that a small university could score an ERA rating of '5' in the same field of research as a larger university and the two were seen as equivalent. This suggested change to outcome reporting will mean there is a grading or stratification within ratings - for example, an ERA rating of '5' with only 50 outputs will no longer be seen as equivalent to a rating of '5' in the same field with 500 outputs; it may, in fact, even be seen as inferior to a rating of '4' with 500 outputs.

Another change noted in the consultation documents (but not specifically highlighted, so you have to dig to find it) is a slight change to the 'reassignment exception'. The new "warning" in the business rules document states that "The reassignment exception must not be used to move more than 40% of the apportioned journal articles into a single UoE". This may be an attempt to put a stop to the 'gaming' that some people think is occurring in the sector. This new rule now means that, in a single field of research, at least 60% of the journal articles must be assigned a field of research as stated on the ARC's journal list.
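As a rough illustration of how the cap reads in practice (the function below is my own sketch, not the ARC's wording):

```python
# Sketch of the 40% reassignment cap described above: no more than 40% of
# the journal articles apportioned to a single Unit of Evaluation (UoE) may
# arrive there via the reassignment exception, so at least 60% must carry a
# field of research from the ARC journal list.

def reassignment_within_cap(total_apportioned: float, reassigned: float, cap: float = 0.4) -> bool:
    return reassigned <= cap * total_apportioned

print(reassignment_within_cap(total_apportioned=100, reassigned=35))  # True - within the cap
print(reassignment_within_cap(total_apportioned=100, reassigned=45))  # False - breaches the 40% cap
```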

The public consultation can be accessed by the following link: