Some of the world’s largest academic journal publishers have banned or restricted their authors’ use of the advanced chatbot ChatGPT. Because the bot draws on information from the internet to produce fluent, readable answers to questions, the publishers are worried that inaccurate or plagiarised content could find its way into academic papers.
Several researchers have already listed the chatbot as a co-author on academic papers, and some publishers have moved to ban the practice. The editor-in-chief of Science, one of the world’s leading scientific journals, has gone a step further, banning any use of text from the programme in submitted papers.
It is hardly surprising that academic publishers are worried about these chatbots. Our latest research, published in Finance Research Letters, showed that ChatGPT can be used to write finance papers good enough to be accepted by academic journals. Although the bot performed better in some areas than in others, adding our own expertise helped overcome its limitations in the eyes of journal reviewers.
We argue, however, that ChatGPT should be seen not as a threat but as a potentially valuable research tool: a low-cost, or even free, electronic assistant. Our reasoning was that if it is easy to get good outcomes from ChatGPT, perhaps there is something extra we can do to turn those good outcomes into great ones.
A research study typically has four components: the research idea, the literature review (an assessment of previous academic research on the same topic), the dataset, and suggestions for testing and examination. We first asked ChatGPT to generate all four components, giving it only a broad description of the subject and stating that the output should be suitable for publication in a good finance journal.
This was our first pass with ChatGPT. For version two, we pasted just under 200 abstracts (summaries) of relevant, existing research studies into the ChatGPT window.
We then asked the programme to incorporate these into the four components of the study. For version three, we added “domain expertise”: input from academic researchers. We read the programme’s answers and suggested improvements, combining our own knowledge with ChatGPT’s.
We then asked a panel of 32 reviewers each to examine one version of ChatGPT’s attempt at producing an academic study. The reviewers were asked to judge whether the output was sufficiently comprehensive, correct and novel to be publishable in a good academic finance journal.
The big lesson was that all of these studies were generally considered acceptable by the expert reviewers. This is quite astonishing: a chatbot was judged capable of generating plausible ideas for academic research. It raises fundamental questions about what creativity means and who owns creative ideas, questions to which there are as yet no clear answers.
Possibilities and limitations
The results also point to some of ChatGPT’s relative strengths and weaknesses. We found that the different components of the study were rated differently. The research idea and the dataset tended to be rated well, while the literature reviews and testing suggestions were rated lower, though still acceptably. ChatGPT appears particularly adept at taking a set of external texts and connecting them (the essence of generating a research idea), and at taking readily identifiable sections from one document and adapting them; the data summary, an easily recognisable “text chunk” in most research studies, is a good example of the latter.
A relative weakness of the platform became apparent when the task was more complex, when there were too many stages to the conceptual process. Literature reviews and testing suggestions tend to fall into this category. ChatGPT was generally good at some of these steps but not at all of them, and the reviewers seem to have picked up on this.
We were able to overcome these limitations in our most advanced version (version three), however, where we worked together with ChatGPT to reach acceptable outcomes. All sections of that advanced study were rated highly by the reviewers, which suggests the role of academic researchers is not dead yet.
Ethical implications
ChatGPT is a tool. In our study, we showed that, used with some care, it can produce a passable finance research study. Even without care, it produces plausible-looking work. This has some clear ethical implications. Research integrity is already a pressing problem in academia, and websites such as RetractionWatch carry a steady stream of research findings that are fabricated, plagiarised or simply wrong. Could ChatGPT make this problem worse?
The short answer is yes, it could. But the genie cannot be put back in the bottle, and the technology will only get better (and quickly). Exactly how we might acknowledge and police ChatGPT’s use in research is a bigger question for another day. But our findings are useful here too: the fact that the version of the study informed by researcher expertise was judged superior shows that high-quality research still needs the input of human researchers.
For now, we believe researchers should see ChatGPT as an aide rather than a threat. It could be particularly helpful for researchers working in developing economies, graduate students and early-career academics, who often lack the funds for traditional (human) research assistance. ChatGPT and similar programmes may be able to democratise the research process.
However, researchers should be aware that some publishers ban or restrict its use in writing journal articles. Views on this technology clearly differ widely, so it needs to be used with care.