Widespread use of ChatGPT among researchers
Since 2022, OpenAI’s ChatGPT has attracted widespread attention for the advanced language model’s ability to generate coherent text on a wide range of subjects. Its rise has not been without controversy, sparking debates about academic integrity. Used correctly, however, ChatGPT can be a powerful tool and asset, according to Pablo Picazo Sanchez, PhD in Computer Science.
“Together with Lara Ortiz-Martin, PhD in Computer Science at Carlos III University in Madrid, I have conducted a study investigating the impact of ChatGPT on research. We analysed abstracts from over 45,000 papers across 300 journals. Our findings reveal that ChatGPT is used in around 10 per cent of the papers published across various publishers, highlighting its rapid adoption by researchers,” says Pablo Picazo Sanchez.
Thorsteinn Rögnvaldsson, Professor of Computer Science and Deputy Vice-Chancellor with specific responsibility for research and doctoral education at Halmstad University, believes that the tool will continue to gain users within academia.
“ChatGPT is not considered by most to contribute intellectually to the research. It is ‘just’ a tool that can formulate ‘nicely’ in words, or in figures, ideas that the authors have. This tool is available to be exploited in the research publishing rat race, and it will be exploited. I predict that over time, even more researchers will use it and eventually, everyone will use it in all parts of research and education, including reviewing and grading,” says Thorsteinn Rögnvaldsson.
Lack of transparency
As natural language processing (NLP) techniques become more common in scientific writing, many authors incorporate these tools without proper disclosure. This lack of transparency raises concerns about the credibility of scientific documents and emphasises the need for more transparent reporting practices.
“Despite efforts to identify text generated by large language models, LLMs, like ChatGPT, distinguishing between human-written and AI-generated content remains challenging. This can raise questions about authorship and credibility,” says Pablo Picazo Sanchez.
Many scientific articles are written in English. Pablo Picazo Sanchez explains that there is a perception that authors from English-speaking countries use ChatGPT less in their writing process than non-English speakers do. However, that does not seem to be the case. Instead, Pablo Picazo Sanchez emphasises the widespread use and benefits of AI tools in scientific communication.
“LLMs like ChatGPT have demonstrated potential in enhancing scientific writing. They can assist researchers in saving time and effort by generating coherent and well-structured texts. However, it’s important to remember that these models are tools, and the responsibility for the text ultimately lies with the authors,” says Pablo Picazo Sanchez.
“The findings are in no way surprising. One can compare with research assistants. I think there have been plenty of cases in the past when research assistants wrote text and did experiments or literature surveys that produced research results, without getting author credit. Possibly, they received an acknowledgement,” says Thorsteinn Rögnvaldsson.
“Authors must always maintain control over the content”
The study’s findings shed light on the rapid integration of AI tools into academic research. With the advancement of NLP technology and the growing presence of AI models like ChatGPT, it’s essential to not only embrace these tools but also anticipate their continued evolution.
Pablo Picazo Sanchez suggests that strategies must be developed to tackle the challenges associated with increased AI usage in research. This includes identifying co-authorship and combating the dissemination of fake news and fake academic papers.
“Authors must always maintain control over the content they publish and ensure that their work is original, accurate, and meets the standards of academic integrity. By doing so, we can harness AI tools’ potential while upholding rigorous scholarship values,” says Pablo Picazo Sanchez.
“I think any writing aid is ok as long as the final text can be labelled as original work by the authors. That is, you cannot paste large chunks of text directly and claim that you have written it without crediting ChatGPT or something similar. The authors must read it, process it, and adjust it to fit their style of expression. Or they should clearly state that ChatGPT wrote this text based on a prompt, and perhaps they should provide the prompts in the supplementary material to make clear their intellectual input in the writing process. This does not mean that I urge everyone to use ChatGPT or any similar tool. But I can see why this happens – there is a constant efficiency pressure that pushes people to produce more and more, faster and faster. It is just a fact,” says Thorsteinn Rögnvaldsson.
“Failure to do so can result in a loss of credibility and undermine the trustworthiness of the research,” says Pablo Picazo Sanchez.
Exploring misleading information
Continuing his research on LLMs, Pablo Picazo Sanchez will focus on the hallucinations (i.e., fictional or misleading content) produced by LLMs during automatic reference generation.
“Our goal is to understand how these models create and modify the text, and their potential impact on the accuracy of scientific references,” says Pablo Picazo Sanchez and continues, “We hope to shed light on the challenges and limitations of using LLMs for reference generation and to identify strategies for improving the quality and credibility of generated references.”
Text: Anna-Frida Agardson
Picture: iStock
Portrait picture: Private