EI 2018: The failure to measure engagement

December 6, 2017

A recent article in The Conversation took a critical look at the Australian Research Council’s (ARC) plans to measure the engagement and impact of Australian university research from next year.

 

Essentially, the Federal Government wants our universities to demonstrate that they are getting the best national bang for the federal buck, and researchers have to demonstrate the quality of their work in terms other than the number of papers published or citations in the literature.

 

At first glance, this is a perfectly reasonable expectation. But a closer look at what they want to measure and how they want to measure it reveals a commercial distortion of the true value of Australian research. This, in turn, sets a national research agenda devoid of social capital, environmental benefits or societal dividends. At its worst, this could favour areas of research that would be best abandoned while passing over research that is vital to the long-term welfare of our nation.

...

 

The ARC started the Engagement and Impact Assessment in December 2015, not long after PM Malcolm Turnbull’s innovation statement. It was a companion exercise to ERA (Excellence in Research for Australia) 2018, which aimed to identify and promote excellence across the full spectrum of research activity in Australia’s higher education institutions.

The Engagement and Impact Assessment Pilot Report was released on 1 November 2017 by Education Minister Simon Birmingham, who announced that the ARC would introduce an engagement and impact assessment in 2018 (EI 2018).

 

The main objectives of the E&I Assessment were to establish a set of metrics that could measure impact and engagement across the nation, and to examine how universities are translating their research into economic, social and other benefits, encouraging greater collaboration between universities, industry, business and other users of research.

 

It’s Australia’s first stab at the international trend of defining Knowledge Transfer in an academic setting: how the movement of research out of academia can produce changes in the world outside. Academic Knowledge Transfer broadly consists of two parts: engagement is the process by which knowledge is transferred, and impact is the changes and effects of that transfer of knowledge.

 

I am particularly interested in the engagement side of the Pilot Report and just how that will be defined and measured in EI 2018. So that’s where I started digging into the report.

 

The report defines engagement as:

 

“The interaction between researchers and research end-users outside of academia, for the mutually beneficial transfer of knowledge, technologies, methods or resources.”

 

So far, so good. This definition is broad enough to encompass a whole suite of possible interactions between researchers and the outside world. Everything from a business partnership around a research and development project, to face-to-face meetings with politicians and community groups on cultural, historic, social and environmental issues, to broadcasting research findings across Australia via media and social media to raise public understanding, would appear to fit within this definition of engagement.

 

But read on a little further and the report appears to be working to a much more restricted definition.

 

 

Figure 1 from the report shows diagrammatically how EI 2018 will go about rating both engagement and impact at any given university. For engagement, there are two parts to the assessment: a suite of indicators, or metrics, and a narrative. When taken in combination, the indicators and narrative produce an assessment on a three-tier rating scale: low, medium and high.

 

It’s when we look at the indicators that have been specified in the report that we start to see some problems emerging with that breadth of engagement reporting.

 

The following indicators will be included in EI 2018:

Cash support from end-users

Total HERDC[1] income per FTE[2] (specified schemes)

End-user sponsored grants: proportion of HERDC Category 1

Research commercialisation income (selected FoR[3] codes only)

 

[1] Higher Education Research Data Collection

 

[2] Full-Time Equivalents

 

[3] Field of Research

 

Stripping away some of the confusion around abbreviations, the failure of these indicators of engagement is that they only measure the flow of money. There are no indicators for societal benefits or non-commercial engagement activities. A research stream such as astrophysics, palaeontology or climate science will have a harder time demonstrating engagement via these indicators than research into nanotechnology, GMOs or fossil fuels.

 

Perhaps to provide some redress to these economics-driven measures of engagement, the report sets out a second part to the assessment: the narrative. Each university is invited to submit a 500-word description of its engagement activities, strategy and/or objectives. The narrative can include evidence from the four quantitative indicators as well as from a long list of optional indicators, including (this is not a complete listing):

 

co-authorship of research outputs with research end-users

patents granted

citations in patents

in-kind support from research end-users

outputs available via open access

public lectures, seminars, open days and school visits

established networks and relationships with research end-users

outreach activities (public lectures, policy engagements, media engagements, community events)

media coverage of exhibitions and new works

metrics that capture social media activity

 

So, while the narrative can reiterate the financial measures of engagement already given in the indicators, somewhere in those 500 words you can also include a variety of non-financial measures of engagement activity. Buried right at the end of the optional indicators list are measures concerning outreach, social media and community interactions. This suggests a low priority being given to public engagement with research.

 

This whole system is predicated on measuring successful engagement through economic and financial indicators at the expense of highlighting societal benefits and social capital generated through engagement with research. Let’s go through a hypothetical case comparison to show why this is not a good idea.

 

University A has the Coal Research Group, which attracts lots of research dollars from the mining sector and the fossil fuel industries. They have no problem using the four indicators to demonstrate lots of engagement, and the narrative is also easy to fill with examples of engagement with end-users, resulting in a ‘high’ rating for their engagement.

 

Over at University B, the Climate Science Research Group finds it very difficult to attract funding from end-users outside of academia - most of their research funding comes directly from government, and this does not constitute engagement. They struggle to demonstrate through the four indicators that they are doing anywhere near as much engagement as the Coal Research Group. While they can put lots of public outreach and media activity into their narrative, it is still going to look thin in comparison to the Coal Research Group’s, and the overall assessment would probably be ‘low’.

 

But which of these two research groups is actually doing the most engagement with the community and generating societal benefits from their research? The system as set up by EI 2018 will rate the Coal Research Group as doing more engagement (and generating greater impact) than the Climate Science Research Group, yet the social capital of the former is zero or even negative, while the social dividends of the Climate Science Group are going to be much greater.

 

I think this is the most worrying consequence of the way that EI 2018 has been established. It favours applied research over pure, blue-sky research because it focusses exclusively on economic measures of engagement. I doubt that, had EI 2018 been applied to John O’Sullivan’s research into radio telescope receiving systems, which led to the development of Wi-Fi, there would have been any recognition of the full impact of his work.

 

There is still a role for outreach, media and social media within EI 2018, particularly for research groups that are not involved in applied research, but these central tenets of engagement are devalued. In its failure to properly account for engagement in terms other than economic activity, EI 2018 is an inadequate measure of the great breadth of engagement conducted by Australia’s researchers.

 

 
