With the rise of publicly available Artificial Intelligence (AI) tools, such as ChatGPT, questions are emerging about the potential opportunities and challenges of using these tools in the academic research process.
This topic has been raised at many of the external meetings I have attended recently, with colleagues and funders urging caution when using these tools.
Funders have been explicitly reminding reviewers that they must not input content from confidential funding applications into generative AI tools, or use such tools to develop their reviews of funding proposals. It is also worth noting that anything you enter while developing your own grant applications can become part of the dataset the AI tool draws on, meaning that your valuable intellectual property and novel ideas may surface in responses to other users making similar queries. There have been instances where the same ideas have appeared in more than one grant proposal as a consequence!
To explore the use of AI in grant funding applications further, I have invited Dr Lotta Croft, Research Development Manager in Research & Innovation Services (RIS), to explore some of the potential benefits and drawbacks and provide some reflections from a RIS and funder perspective.
Kind regards,
Sarah
Mojitos, grants and ChatGPT
I like any kind of technology that makes things quicker or easier – such as using Google Maps to dodge traffic jams on my commute, asking Siri to translate “where is the nearest cocktail bar?” into the local language when I’m on holiday abroad, or relaxing to music curated for me by my Spotify DJ after a good day at work reviewing cutting-edge grant funding applications.
What do these actions have in common? They are all made possible by Artificial Intelligence (AI). Once just the stuff of science fiction, AI is now firmly integrated into our lives – at home and in the workplace.
I work in the University’s Research and Innovation Services, helping researchers to create high-quality grant funding applications for funders such as UKRI and the Royal Academy of Engineering. Huge advances in recent years have enabled AI developers to capitalise on the increased availability of data and computing power and to release new, often freely available software and tools. One example is ChatGPT, a ‘generative’ AI tool built on a ‘large language model’ (LLM) that can be used to create content for a wide range of purposes.
Within academia, this has led to researchers using generative AI tools to create content and fast-track the often time-consuming process of writing grant funding applications (Van Noorden & Perkel 2023; Parilla 2023). As someone who spends much of my time working on grant funding applications, I wanted to better understand the impact of AI on my work and the researchers I support, with the following questions in mind:
- What is a generative AI tool, how does it work and what are the potential benefits/drawbacks?
- What do research funders say about using generative AI to produce content for funding applications?
- What are the key things to consider when using generative AI tools to create funding application content?
I’ve focused on ChatGPT to explore these questions. Other questions that have arisen during my quest for deeper understanding are: Will human-generated text become highly valuable in the future because it will be so rare? Could the use of generative AI affect our ability to think for ourselves? Will our future be defined by misinformation? However, as this is a short blog, I’ll save going down some very big dystopian rabbit holes for another day!
How generative AI works
With ChatGPT, users enter their query into a simple interface, hit return and the AI generates a response. I asked ChatGPT how it can help someone write a grant funding application and it returned a range of options. These included: helping to generate ideas that are a good fit to funder requirements; drafting sections of a funding proposal; editing and proofreading text; gathering information, data and references to support the proposal; developing a budget; and reviewing and providing feedback on a draft proposal.
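For the technically curious, the same question can also be put to the model programmatically rather than through the chat box. Below is a minimal sketch using OpenAI’s Python client; the model name is purely illustrative and you would need your own API key – this is an example of the general pattern, not a recommendation of any particular setup.

```python
# Minimal sketch of querying ChatGPT programmatically via OpenAI's Python
# client. The model name is illustrative; an API key must be available in
# the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "user",
         "content": "How can you help someone write a grant funding application?"}
    ],
)

# Print the model's generated answer
print(response.choices[0].message.content)
```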
These benefits sound great if you are a time-poor researcher. However, can ChatGPT really do all these things? In addition, are there any potential drawbacks to consider? To answer these questions, we need to understand how it works.
I attended a lecture on the future of generative AI at the Turing Institute. The speaker was Professor Mike Wooldridge, a computer scientist based at Oxford University. He explained that generative AI tools, such as ChatGPT:
- Are like a chatbot
In a nutshell, they auto-predict what words should come next in a sentence in a similar way to a smartphone when we use it to send a text, but on a much larger scale (a toy sketch of this next-word prediction appears after this list). In technical terms, they are ‘large language models’ built using ‘machine learning’ processes. This makes them an efficient way to create content, but they can get answers wrong in plausible ways. How many times has your phone autocorrected to completely incorrect words? This happens because the model makes a best guess at what text should come next and is not able to critically evaluate information – a high-risk combination that raises the prospect of factual inaccuracies in responses.
- Use the world wide web as their source of information and create content quickly
Being able to access many information sources and collate/summarise these in a matter of seconds can potentially save a lot of time when it comes to researching a topic or collecting data. However, the model reflects the bias of its sources, which can include informal data sources such as Reddit, and it may preferentially draw on particular languages, cultures and norms to create content. The results may also omit information behind a paywall, such as academic papers that require a subscription to access.
- May absorb user queries, which then become part of their source data
This results in increasing volumes of information for the generative AI to draw on. However, if a user enters personal, sensitive or confidential information, such as someone’s name, address and/or details of their research idea or methods, this could appear in the results of another user's future query, which has implications for data security and protection and intellectual property. In short, putting your best research ideas into a large language model may make them available to other users.
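To make the ‘auto-predict’ idea from the first point concrete, here is a toy sketch in Python – emphatically not how ChatGPT is implemented, just the same principle in miniature. It counts which word tends to follow which in a scrap of text, then generates by repeatedly emitting the most likely successor; the training text and the starting word are invented for illustration.

```python
# Toy illustration (NOT ChatGPT's actual implementation) of next-word
# prediction: count which word follows which, then repeatedly emit the
# most frequent successor ("greedy decoding").
from collections import Counter, defaultdict

# Invented toy training text; real models learn from vast swathes of the web.
text = ("the grant proposal outlines the project the grant proposal "
        "describes the budget the grant application needs a summary")

# Count successors, e.g. successors["grant"] becomes
# Counter({"proposal": 2, "application": 1}).
successors = defaultdict(Counter)
words = text.split()
for current, nxt in zip(words, words[1:]):
    successors[current][nxt] += 1

# Generate by always picking the most frequent next word.
word, output = "the", ["the"]
for _ in range(6):
    if not successors[word]:
        break  # no known successor, stop generating
    word = successors[word].most_common(1)[0][0]
    output.append(word)

print(" ".join(output))
# -> "the grant proposal outlines the grant proposal"
```

A real LLM does the same kind of next-token prediction, but over sub-word tokens, long contexts and billions of learned parameters – which is why its guesses are usually far better, and also why they are still only guesses.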
Other points to note are:
- A generative AI tool may not have the full context or sufficient knowledge of a specialist subject to provide a sufficiently tailored answer. As Professor Chris Bowen (ADR, Faculty of Engineering & Design) put it to me in a recent conversation: “by reading the literature I know what has been done, what has been done badly and what has not been explored. Knowing about the variety of research approaches people take can also lead to new ideas.”
I tested the current free version of ChatGPT by asking it to create a list of all engineering research fellowships available to UK researchers. The results included some well-known opportunities that I would expect to see on the list, such as the UKRI Future Leaders Fellowships, but overall the list was biased towards US opportunities and notably included closed or discontinued calls. By fine-tuning my query I was able to get more accurate results, but I still needed to fact-check the response against my own knowledge of the funding landscape to ensure the resulting list was relevant and accurate.
Research funder perspectives on generative AI tools
At the end of 2023, the Research Funders Policy Group issued a joint statement on the use of generative AI tools in funding applications and assessment. Group members include:
- Association of Medical Research Charities (AMRC)
- British Heart Foundation (BHF)
- Cancer Research UK (CRUK)
- National Institute for Health and Care Research (NIHR)
- Royal Academy of Engineering
- Royal Society
- UK Research and Innovation (UKRI)
- Wellcome
The statement recognises the benefits of generative AI tools – for example, supporting the generation of computer code, assisting neurodivergent researchers and/or reducing potential language barriers. However, it also acknowledges the risks and advises that any AI-generated content should be acknowledged and used in accordance with any relevant ethical and legal standards.
The statement also advises applicants to check individual funding call guidance for any call-specific requirements. For example, this year’s Royal Academy of Engineering Research Fellowships applicant guidance states that applications should reflect the applicant’s own voice and ideas, and that it is not acceptable for an entire grant application to be written using generative AI tools. The point about applicant voice is an excellent one: in the examples of unedited AI-generated text I have seen, voice is exactly what is lost, yet it is often what helps an applicant and their application stand out from the crowd. Displaying personal flair and passion can be particularly important in grant funding opportunities such as Fellowships that are strongly focused on the individual and their unique selling points.
When it comes to reviewing grant applications, at the time of writing I understand that UKRI is planning to produce further guidance for its reviewers in due course. The Royal Academy of Engineering is asking academics who review Research Fellowship applications not to use generative AI tools to make judgments or write feedback on grant applications, and not to share grant application content with any generative AI tool; this approach is reflected in the Research Funders Policy Group’s joint statement.
Further afield, the European Commission’s Directorate-General for Research has set up a new unit to develop a policy on the use of AI tools in science and industry. Evaluating the impact of AI on proposal writing will be one of the new unit’s top priorities.
RIS Reflections
Current thinking and policies on using generative AI to create grant applications are moving rapidly, and this is an emerging topic for universities and funders alike. Based on our current understanding of the issue, Research and Innovation Services (RIS) suggests that academic staff who are thinking of using AI tools to create funding applications should consider the following points:
- Keep in mind the principles of research integrity – honesty, transparency and openness. These principles apply equally to AI-generated and non-AI-generated content.
- As mentioned above, when applying for research funding, the current best practice is to declare if AI tools have been used to create a funding application. This includes making sure that anyone involved in developing your funding application is made aware if AI tools have been used to generate content. It’s also worth checking the funding call guidance to see if it includes any call-specific advice around using AI tools.
- Make sure that you fully understand how your chosen AI tool works, along with its benefits and limitations. For example, some generative AI tools now include an option to prevent information included in a query from being absorbed into the tool’s dataset. Notwithstanding this, we would advise against entering any information into an AI tool that is personal, sensitive or confidential.
- Always fact-check AI-generated content and evaluate it for bias, credibility, and accuracy; and ensure any content used in a funding application reflects your own voice, knowledge, and experience. Consider that there is probably a trade-off to be made between spending time generating and then shaping AI-generated content versus writing the content yourself (perhaps at one of RIS’s twice-yearly Writing Retreats).
The last word from ChatGPT…
‘In summary, while ChatGPT can be a valuable tool for drafting and refining grant applications, it should be used as a supportive resource rather than a sole solution. Human expertise, domain knowledge, and exercising good judgement to critically evaluate information are essential components of a successful grant proposal’.