Wednesday, June 25, 2025

A Global Top 100 Debut: Why Adelaide University’s QS Ranking Is a Milestone Worth Celebrating

Adelaide University, the soon-to-be-launched institution born from the merger of the University of South Australia and the University of Adelaide, has made its debut on the global stage, ranked 82 in the QS World University Rankings. This result is nothing short of remarkable and a strong vindication of the university’s ambitious goal of ranking among the world’s top 100 institutions.

When the merger was first proposed, many critics dismissed the idea, suggesting that combining the two institutions would weaken rather than strengthen South Australia’s university sector. Detractors argued that UniSA would dilute the quality of the more prestigious University of Adelaide. Even now, with the ink barely dry on the merger legislation, some commentators continue to talk down the achievement. A recent AFR piece labelled the result “lacklustre”, a surprising take given that the top 100 represents less than half a percent of the world’s roughly 26,000 universities.

Others question how the new university could be ranked at all before it officially opens its doors on January 1, 2026. The answer is simple: global ranking agencies evaluate institutional identity, not bricks and mortar. They assess research output, reputation, academic strength, and global engagement, all of which are already active, measurable, and very real in the merged institution.

Rather than nit-picking or diminishing the achievement, we should recognise this moment. A debut at 82 places Adelaide University in elite company and sends a clear message: the merger hasn’t weakened the institutions; it has elevated them.

The goal was to create a world-class university for South Australia. That goal is already being realised. Let’s celebrate that.

Tuesday, May 27, 2025

Want Growth? Build a University


The article titled "The Economic Impact of Universities: Evidence from Across the Globe" by Anna Valero and John Van Reenen, published in the Economics of Education Review, explores the relationship between the presence of universities and regional economic growth. Utilising a comprehensive dataset encompassing nearly 15,000 universities across approximately 1,500 regions in 78 countries, the study examines data from 1950 to 2010 to assess how the number of universities influences GDP per capita.


Key Findings:

  • Positive Correlation with Economic Growth: The study finds that an increase in the number of universities within a region is positively associated with higher future GDP per capita. Specifically, a 10% increase in universities per capita correlates with a 0.4% rise in future GDP per capita (a back-of-the-envelope illustration follows below).
  • Spillover Effects: The economic benefits of universities extend beyond their immediate regions, positively impacting neighbouring areas within the same country.
  • Mechanisms of Impact: The presence of universities contributes to economic growth not merely through direct expenditures but also by enhancing human capital and fostering innovation.
  • Influence on Democratic Attitudes: Regions with a historical presence of universities tend to exhibit stronger pro-democratic attitudes, suggesting a broader societal impact.

This research underscores the multifaceted role of universities in promoting economic development and societal progress, highlighting their significance beyond education.
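
To make the headline figure concrete: the reported relationship implies an elasticity of about 0.04 between universities per capita and future GDP per capita. Here is a minimal sketch of that arithmetic in Python; the elasticity comes straight from the paper's 10% → 0.4% result, while the baseline GDP figure is a hypothetical example, not a number from the paper.

    import math

    # Elasticity implied by the reported result: a 10% rise in universities
    # per capita ~ 0.4% higher future GDP per capita
    elasticity = 0.04  # d ln(GDP per capita) / d ln(universities per capita)

    baseline_gdp_pc = 50_000  # hypothetical regional GDP per capita ($)
    uni_increase = 0.10       # a 10% increase in universities per capita

    gdp_gain = baseline_gdp_pc * (math.exp(elasticity * math.log(1 + uni_increase)) - 1)
    print(f"Predicted gain: ${gdp_gain:,.0f} per person")  # ~ $191

Small per person, but multiplied across a region's population the aggregate effect is substantial.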

https://doi.org/10.1016/j.econedurev.2018.09.001



Tuesday, May 13, 2025

Beyond the Numbers: Reclaiming Academic Purpose from Performative Pressures

In today’s data-driven universities, research performance is often equated with metrics: citations, publications, grant income. But are we losing sight of what truly matters in academia?

Recent reflections from Elsevier’s Research Metrics Guidebook and a compelling paper by Visser et al. (Journal of Education Policy, 2024) point to a growing concern: the rise of performativity, that is, the pressure on academics to continuously prove their value through measurable outputs, often at the expense of deeper scholarly and educational contributions.

This performative culture distorts academic behaviour. Researchers may prioritise what is countable over what is meaningful. Critical activities such as teaching, mentoring, peer review, and community engagement can be undervalued simply because they are less visible.

However, metrics can still play a constructive role if used responsibly. The Elsevier guide promotes two simple but powerful rules: use more than one metric, and always pair metrics with expert judgment. This triangulation helps avoid simplistic rankings and ensures context-sensitive assessment.
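
As a rough illustration of what triangulation can look like in practice, consider the sketch below. The metrics, thresholds, and figures are all hypothetical; the point is that when two metrics disagree, the case goes to expert judgment rather than into a ranking.

    # Hypothetical per-researcher metrics; FWCI of 1.0 = world average
    researchers = [
        {"name": "A", "citations_per_paper": 14.2, "fwci": 0.9},
        {"name": "B", "citations_per_paper": 6.1, "fwci": 1.6},
        {"name": "C", "citations_per_paper": 11.0, "fwci": 1.3},
    ]

    for r in researchers:
        raw_high = r["citations_per_paper"] > 10  # flatters citation-heavy fields
        field_high = r["fwci"] > 1.0              # field-normalised view
        if raw_high != field_high:
            # The two metrics disagree: exactly the case that needs a human
            # reviewer's contextual judgment, not a league table.
            print(f"Researcher {r['name']}: metrics diverge, refer to expert review")

Researcher B, who looks weak on raw citations but strong once field norms are applied, is the kind of case a single-metric ranking quietly misjudges.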

A responsible metrics culture reframes the use of metrics but doesn't reject measurement. It acknowledges disciplinary diversity, career stages, and the rich variety of academic contributions. It supports, rather than distorts, academic integrity.

To shift the culture, institutions must lead: redesign evaluation processes, train staff in interpreting metrics critically, and celebrate contributions that metrics alone can’t capture.

Metrics should serve academic purpose, not replace it.

Wednesday, May 7, 2025

What a University Council Really Needs to Know About Research: Clarity Beyond the Metrics



The Strategic Role of Research

University research is a source of both pride and complexity. It underpins our global rankings, attracts funding, enables industry partnerships, and drives innovation and societal impact. Yet for many members of a university Council, research can feel like a "black box": full of acronyms, shifting benchmarks, and dense performance tables.

While governance bodies are not expected to be immersed in operational detail, they do need to understand the high-level performance, risks, and opportunities within research to fulfil their strategic oversight role. This short piece offers a clear view of what matters most.

The Problem: Too Many Numbers, Not Enough Insight

Research reporting to Council is often technical and fragmented. Data might include ERA results, HERDC income, grant success rates, citation metrics, rankings data, and updates on individual initiatives, but these rarely add up to a coherent picture.

Without synthesis or trend context, it becomes hard to tell: Are we improving? Where are we strong? What should we be concerned about?

What Council Really Needs to Know

To support good governance and strategic stewardship, Council needs clear, contextual answers to five key questions:

Is our research activity growing or shrinking?

Look at trends in external research income, research-active FTE, and publication volume. Growth indicates momentum; flatlining may indicate capacity or competitiveness risks.

Is our research quality competitive?

Use field-weighted citation impact (FWCI) or citations per paper compared to sector benchmarks. Context is key: where do we stand among peers or against international standards?
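
For context, FWCI is simply a paper's citations divided by the world-average citations for papers of the same field, year, and publication type, so 1.0 means world average. A minimal sketch follows, with hypothetical baselines; in practice these come from a bibliometric database such as Scopus/SciVal.

    # Hypothetical world-average citation counts by (field, year)
    expected_citations = {
        ("Chemistry", 2021): 9.4,
        ("History", 2021): 1.8,
    }

    papers = [
        {"field": "Chemistry", "year": 2021, "citations": 12},
        {"field": "History", "year": 2021, "citations": 3},
    ]

    def fwci(paper):
        baseline = expected_citations[(paper["field"], paper["year"])]
        return paper["citations"] / baseline

    # Institutional FWCI is the average of the per-paper ratios
    scores = [fwci(p) for p in papers]
    print([round(s, 2) for s in scores])        # [1.28, 1.67]
    print(round(sum(scores) / len(scores), 2))  # 1.47

The normalisation is the point: the history paper, with a quarter of the chemistry paper's citations, is the stronger result once field baselines are applied.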

Are we building capability for the future?

Consider the proportion of early-career researchers, pipeline of grant applications, or internal schemes for research development. Long-term health depends on today’s investments.

How aligned is our research to strategy?

Are we publishing and attracting grants in strategic priority areas? Are we working with industry or partners in mission-aligned fields?

Are we positioned for policy and funding changes?

Anticipate the impact of ERA's replacement, the shift toward impact and translation, and potential changes to funding schemes.

From Metrics to Meaning: How to Shift the Conversation

Rather than loading Council papers with every available KPI, consider a more strategic approach:

  • Use trends, not snapshots
  • Provide benchmarks or context, not just figures
  • Focus on signals, not noise: where performance is moving
  • Link metrics to mission: how does this support our university's strategy?

A compact dashboard with visual trends and commentary can be more effective than dense tables.
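
As one possible shape for such a dashboard, the sketch below reduces several years of entirely hypothetical figures to direction-of-travel signals; real inputs would come from HERDC returns and a citation database.

    import pandas as pd

    # Hypothetical multi-year figures for illustration only
    data = pd.DataFrame({
        "year": [2021, 2022, 2023, 2024],
        "research_income_m": [98.0, 104.5, 112.0, 118.3],  # external income, $m
        "fwci": [1.32, 1.35, 1.41, 1.38],                  # 1.0 = world average
    })

    # Trends, not snapshots: year-on-year movement
    data["income_growth_pct"] = (data["research_income_m"].pct_change() * 100).round(1)

    print(data.tail(1).to_string(index=False))

A one-page version of this, with a spark-line per measure and a sentence of commentary, tells Council more than ten pages of raw tables.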

Conclusion: Clarity Builds Confidence

Research is a long-term, high-stakes endeavour. Giving Council the right insights, without overwhelming detail, builds trust, improves decision-making, and strengthens advocacy beyond the university.

It’s not about oversimplifying. It’s about focusing on what truly matters.

Monday, May 5, 2025

When Published Research Gets Rejected: A Glimpse into Peer Review Flaws

In a fascinating and provocative experiment, researchers Peters and Ceci once resubmitted 12 already-published psychology articles to the same journals that had originally accepted them. The twist? They changed only the names and affiliations of the authors. What happened next exposed cracks in the foundations of academic peer review.

Only 3 of the 12 resubmissions were identified as duplicates. Of the 9 that underwent full peer review again, 8 were rejected, most for "serious methodological flaws." The same papers that had previously passed muster were now deemed unworthy of publication.

Psychologist John Bartko later reflected on these findings in a commentary titled "The Fate of Published Articles, Submitted Again". His takeaway? The peer-review process, while central to academic credibility, may be far less consistent and objective than many assume. Reviewer bias, institutional prestige, and systemic flaws can skew decisions and undermine trust in the system.

This experiment, now decades old, still resonates today. It reminds us that peer review is a human process, imperfect and in need of constant reflection and improvement.

Do you trust peer review? Or is it time to rethink how we judge good science?

When Opting Out Isn’t Enough: Utrecht University and the Rankings Paradox

In 2022, Utrecht University made headlines by stepping away from the Times Higher Education (THE) World University Rankings. The Dutch institution cited concerns that global rankings are overly reductionist, lacking in transparency, and misaligned with its values, particularly its focus on collaboration, open science, and long-term social impact.

But opting out didn’t end the conversation.

When the latest rankings were released without Utrecht’s name, the university faced a flood of questions. Students, staff, and partners wanted to know: Where would Utrecht have landed if it had taken part?

In response, the university issued a public statement explaining its absence and, somewhat paradoxically, pointed to past performance to give a sense of where it might have ranked. The message was clear: while Utrecht rejects the premise of global rankings, it still understands their pull and the need to reassure stakeholders about its standing.

The episode reveals a deeper tension: even for institutions that challenge the value of rankings, their influence is hard to ignore.

Thursday, March 7, 2019

ANZSRC Review, or what are our new FOR codes?


The ARC, Australian Bureau of Statistics (ABS), Statistics New Zealand (Stats NZ), and the New Zealand Ministry of Business, Innovation and Employment (MBIE) are undertaking a joint review of the Australian and New Zealand Standard Research Classification (ANZSRC). What is the ANZSRC, I hear you say? Well, it is the field of research codes, or FOR codes. It also covers socio-economic objectives (SEO) and type of activity (pure basic research, strategic basic research, applied research, and experimental development).

It is a good time to refresh the categorisation to bring it into line with current and future research activity. It is particularly pleasing to see the inclusion of a specific question around how Aboriginal research is categorised (currently it is all hidden at the '6-digit' FOR level).

The other interesting question is around interdisciplinary research and how the classification could be set up to support this better. If research is multidisciplinary then it is probably a simple matter of tagging it with more than one FOR code. If it is truly inter- or trans-disciplinary activity and of some scale then perhaps it should just have its own FOR code (maybe under a division of 'Interdisciplinary').
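
For the multidisciplinary case, tagging with more than one FOR code in practice means percentage apportionment, as in ERA reporting. A minimal sketch, using 4-digit codes from the 2008 ANZSRC; the publication and its weights are made up for illustration.

    # Hypothetical publication apportioned across two 4-digit FOR codes
    publication = {
        "title": "Machine learning for crop disease detection",
        "for_codes": {
            "0801": 0.6,  # Artificial Intelligence and Image Processing
            "0701": 0.4,  # Agriculture, Land and Farm Management
        },
    }

    # Apportionment across the assigned codes must sum to 100%
    assert abs(sum(publication["for_codes"].values()) - 1.0) < 1e-9

    for code, weight in publication["for_codes"].items():
        print(f"FOR {code}: {weight:.0%}")

The percentages handle the fruit salad below quite nicely; the smoothie is exactly what they cannot express.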

I sometimes think of research disciplines as fruits - a field of research might be like an apple, another is a pear, and another is a banana. When research is multidisciplinary it is like we have chopped up the fruit and tossed it together in a bowl to make a fruit salad - it works well together but the fruits are still separate. Interdisciplinary research might be more like a fruit smoothie - we've taken all of the fruits we need but blended them together and created something new and different. So, ANZSRC helps us classify the fruits and the fruit salad - but how will it classify the smoothie?

You can have a look at the review document yourself at the ARC website: https://www.arc.gov.au/anzsrc-review