Wednesday, August 27, 2014

Developing an ‘Impact and Engagement for Australia’ (IEA) metric

ATSE has floated its idea for a research engagement and impact evaluation based purely on metrics; the metrics suggested are category 3 income and commercialisation income. ATSE suggests that this evaluation be run alongside the ERA to produce a combined quality and impact rating (quality 5-1 and impact A-D).

It is important for Australia to consider an impact evaluation - and examples such as the Excellence in Innovation for Australia (EIA) have shown that it is possible to evaluate research impact in Australia. The UK has also evaluated impact as part of its national research evaluation exercise, the Research Excellence Framework (REF). Some argue that the EIA and REF methodology of case studies and peer review is onerous and expensive. However, this is by no means a reason not to do them - these sorts of evaluations will never be easy because the evaluation of research is not easy. Proper evaluation of research requires time and people with the right level of expertise to carry out the evaluation.

Here are some of the issues I think the ATSE ERA-linked evaluation might have:

  • It assumes that your research input, output and impact all occur in the same FoR code.
  • It assumes that impact has occurred concurrently with the research – most impact is not realised that quickly.
  • It won’t take into account the difference between pure research and applied research in the same FoR.
  • It is focused only on economic impact – which for the commercialisation income might actually be more a measure of the success of the company selling the widget than the quality of the university.
  • It also does not take into consideration the amount of gaming that occurs in the ERA.

The ATSE release can be read here: http://www.atse.org.au/atse/content/activity/innovation-content/developing-impact-engagement-australia-metric.aspx

Saturday, August 16, 2014

Kardashian Index

Do you know what your Kardashian Index is?

Your Kardashian Index is a measure of the discrepancy between your social media profile and your publication record, based on a direct comparison of your number of Twitter followers with your number of citations.
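
As I read Neil Hall's paper, the expected number of Twitter followers for a scientist with C citations is modelled as roughly F(C) = 43.3 x C^0.711, and the K-index is your actual follower count divided by that expected value. Here is a minimal sketch of the calculation, assuming those constants; the follower and citation numbers in the examples are invented for illustration:

    # Kardashian Index sketch (constants as reported in Hall 2014, Genome Biology 15:424).
    # The follower and citation numbers below are invented for illustration.

    def expected_followers(citations):
        """Twitter followers 'expected' for a scientist with this many citations."""
        return 43.3 * citations ** 0.711

    def kardashian_index(followers, citations):
        """K-index = actual followers / followers expected from the citation record."""
        return followers / expected_followers(citations)

    print(round(kardashian_index(followers=600, citations=1000), 2))   # ~0.1, far from a Kardashian
    print(round(kardashian_index(followers=40000, citations=100), 1))  # ~35, well above the threshold of 5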

Anything greater than '5' and you are considered a science Kardashian! My index is only 0.1 so I am far from a Kardashian - I think I am going to have to try and get more Twitter followers...

Read Neil Hall's paper: The Kardashian index: a measure of discrepant social media profile for scientists http://genomebiology.com/2014/15/7/424

Research ethics and the ebola epidemic

The recent outbreak of the Ebola virus has presented scientists with an opportunity to test potential vaccines on patients. While the epidemic is terrifying, this does not mean that research ethics should be ignored.

http://www.researchimpact.com.au/viewtopic.php?f=19&t=36

Sunday, August 10, 2014

Four reasons to stop caring so much about the h-index.

Interesting blog piece from Stacy Konkiel on the h-index and altmetrics called 'Four reasons to stop caring so much about the h-index'. I like that Stacy doesn't ask us to stop caring altogether about the h-index, but just to stop caring 'so much' about it.

It seems confusing to me to compare the h-index directly with altmetrics; to me they appear to be two different things. The h-index attempts to measure the productivity and citation impact of a person, whereas an Altmetric score attempts to measure the influence of an individual article. Why not compare the Altmetric score to the Relative Citation Impact (RCI), for example? The RCI is normalised for the discipline and year the article was published and gives a measure of whether the article has been cited above or below the expected average rate for the discipline. It is not immediately clear how to determine whether an Altmetric score is relatively high or low for a discipline, so it would be useful if it incorporated some sort of benchmark.
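
To make the comparison concrete, here is a rough sketch of how the two citation-based measures work - the h-index as the largest h such that h of your papers have at least h citations each, and the RCI as an article's citations divided by the expected (world average) citations for its discipline and year. The citation counts and the baseline below are invented for illustration:

    # Rough sketch of the two citation-based measures discussed above.
    # The citation counts and the discipline/year baseline are invented.

    def h_index(citations_per_paper):
        """Largest h such that h of the papers have at least h citations each."""
        ranked = sorted(citations_per_paper, reverse=True)
        return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

    def relative_citation_impact(article_citations, expected_citations):
        """RCI: an article's citations divided by the average citations for articles
        published in the same discipline and year (1.0 = the expected world average)."""
        return article_citations / expected_citations

    papers = [25, 18, 12, 9, 5, 2, 0]
    print(h_index(papers))                    # 5 - five papers with at least 5 citations each
    print(relative_citation_impact(12, 8.0))  # 1.5 - cited above the expected rate for the field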