The UK’s umbrella funding body UK Research and Innovation (UKRI) is opening up data underlying up to 2000 grant proposals to explore whether using generative AI could ease the burden of grant peer review.
UKRI distributes more than £8 billion in research funding each year. While the number of research and innovation grants UKRI funds has halved over the past seven years, the number of applications has surged by more than 80%.
So the agency is considering ways to streamline its peer review process. In October, a research team led by Mike Thelwall, a data scientist at the University of Sheffield, UK, began looking into ways UKRI could make use of generative AI. The work is funded by the UK Metascience Unit, the first governmental unit dedicated to studying how research is done and how its processes can be improved.
Thelwall says his team will have access to the full-text versions of between 1000 and 2000 grant proposals submitted to UKRI, some funded and some rejected by the agency, which are usually kept confidential. Thelwall and colleagues plan to run these applications through large language models (LLMs) to see whether the tools can accurately predict the scores peer reviewers gave the proposals and the decisions they ultimately recommended.
While Thelwall and his team will know the scores each proposal received and whether they were funded or not, they won’t disclose this to the LLMs. ‘If large language models can do some kind of reasonable job at predicting the score that a grant proposal would get, then that might allow them to be used in some way to help speed up the grant review system or to support the work of reviewers,’ Thelwall says.
Thelwall was previously part of a team that also explored whether AI could be used to assist in refereeing articles submitted to the UK’s Research Excellence Framework, which assesses the quality of research taking place at UK universities.
When Thelwall and his colleagues released their data in December 2022, they concluded that AI systems needed more work before they could assist peer review: the systems generated scores identical to those of human peer reviewers 72% of the time. Thelwall, however, said at the time that this figure needed to reach 95% to be viable.
Mohammad Hosseini, who researches the ethical implications of AI at Northwestern University, US, says there are still ‘serious doubts’ around whether LLMs create novel ideas. ‘If AI cannot create really novel ideas, it also is unlikely to detect really creative ideas because it is being trained on existing data,’ he says. ‘In a manuscript, you are reporting something that has happened, whereas in a grant, you are sharing an idea that still has potential.’
Another problem for funders using LLMs is transparency, Hosseini notes: if they are not open about what criteria they are feeding into the AI, there will be a backlash from researchers. But if funders are open about the process, grant applicants may start to game the system, deliberately writing in ways that generate more favourable feedback from the AI.
While it’s unclear how UKRI may use generative AI, Thelwall suggests it could work well in tiebreaker situations. LLMs could also serve as an additional third or fourth reviewer, he suggests, or support a fast-track desk-reject option that reduces the amount of peer review conducted by human experts.
Thelwall cites the case of the la Caixa Foundation in Barcelona, which has been experimenting with AI-assisted grant peer review. Even so, around 90% of the submitted grant applications still end up going through full peer review, with three experts examining them, Thelwall says.
‘They save a little bit of full reviewer time,’ Thelwall notes, ‘which doesn’t sound like a lot, but that’s a lot of experts not having to spend their time evaluating proposals which have a very low chance of being funded.’