AI in the scientific publication process: possibilities, risks, and ground rules

Artificial intelligence is everywhere now – publishing included.

Text: Leena Wahlfors · English translation: Marko Saajanaho

Artificial intelligence (AI) has quickly become part of all aspects of life, including the scientific publication process, and is here to stay. AI can be used to screen manuscripts, suggest peer reviewers, assist reviewers in their work, and even attempt to decide on our behalf what should be published.

The use of AI raises questions about fairness, transparency, and responsibility. In the following essay, I explain in which parts of the scientific publication process AI is used, how it is used, what its benefits are – and where the ethical limits and risks of its use lie.

The text is based on a literature review on the subject, as well as discussions and observations raised at JUFO Publication Forum panels in the autumn of 2025. The text follows the phases of the scientific publication process, starting from submission of the manuscript to the editorial office and ending with the decision on whether to publish it.

Good pre-screening is cooperation between AI and humans

Prior to the peer review, the editorial staff pre-screens the manuscript to check, for example, if the text structure and formatting are correct, if any signs of plagiarism or other dishonesty can be found, if the methodological description is sufficiently transparent, and if the citations and citation style are correct. It is also essential to determine whether the content of the text matches the publication’s field and focus. Recognising a relevant journal or conference is a critical phase of the research publication process.

Otherwise, even a high-quality article manuscript may be rejected, or it may find a smaller audience than it would have via a publication channel more relevant to its content. Many of the aforementioned phases can be carried out with AI tools. These tools save time, and it is reasonable to assume they improve quality as well.
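To make the pre-screening step concrete, the similarity check mentioned above can be sketched in a few lines. This is a minimal illustration of the idea behind plagiarism screening – comparing word n-gram overlap against known texts – not the method of any actual commercial tool, and the threshold value is an arbitrary assumption. As the panellists stressed, the tool only raises alerts; a human decides.

```python
def shingles(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """Lowercased word n-grams ("shingles") of a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(manuscript: str, source: str, n: int = 5) -> float:
    """Jaccard similarity of the two texts' n-gram sets (0.0-1.0)."""
    a, b = shingles(manuscript, n), shingles(source, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def screen(manuscript: str, corpus: dict[str, str], threshold: float = 0.2):
    """Return corpus items whose similarity exceeds the threshold,
    for a human editor to inspect -- the tool only raises alerts."""
    return [(name, score) for name, text in corpus.items()
            if (score := overlap_score(manuscript, text)) >= threshold]
```

Real similarity checkers work against databases of millions of documents and handle paraphrasing, but the editorial workflow is the same: the score is a signal, and the decision to proceed to peer review stays with a person.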

At the panel meetings, the panellists considered it acceptable for editorial staff to use AI to identify plagiarism, for example, but the decision to take the manuscript to the next phase, i.e., peer review, should always be made by a human being. Another problematic aspect was that editors require researchers to disclose whether and how AI has been used when creating the manuscript, yet rarely act with the same transparency themselves. As such, openness may soon become a competitive advantage for a publisher, as one panellist highlighted.

More openness and guidelines are required

The panellists observed that editorial offices' AI use guidelines for researchers were lacking and contradicted each other. They also raised a concern that applies to both peer reviewer and researcher roles: it is not always possible to report AI use, because AI may be embedded unknowingly in everyday digital tools. Researchers may also hesitate to inform editors of AI use for fear that the manuscript will be rejected on that basis, even if AI was only used to assist with language checking or citation placement, for example.

Similar observations were found in the research literature. The studies report that international publication ethics bodies such as COPE and WAME have published guidelines on AI use, but publishers' AI use guidelines for researchers conflict with each other while also being inadequate. Deficiencies in AI research ethics guidelines reflect the rapid development of AI technologies as well as a lack of knowledge.

International industry associations and major publishers will hopefully follow developments actively and update their requirements as AI tools and their use evolve. The research literature indicates that one common aspect of publisher guidelines is allowing the researcher to use AI to assist their work in, for example, language checking and polishing the structure, but responsibility for the content itself always remains with the person. AI must not act as the author. Another principle is that AI use must be disclosed transparently whenever it has been part of the research method or has produced content.

The attitude towards AI image generation and editing is especially wary. Some publishers prohibit it, whereas others allow it under strict conditions and with clear disclosure. Publishers also warn of AI bias (e.g., gender-based or geographical distortions), hallucinations (invented “facts” and citations), and intellectual property and data protection issues. A core principle issued to researchers and editors seems to be that if AI is used, the output must be checked, and AI use must be disclosed.

The panellists emphasised that this core principle requires honesty in a time when tools for recognising AI use remain highly imperfect.

Choosing peer reviewers: AI as an assistant but not a gatekeeper

According to one panellist, peer reviewing is in crisis. They say “something new must be invented because people no longer have time for peer reviewing. In five years, it may disappear completely.”

Studies on the subject also highlight how AI makes it easier to find a suitable pool of reviewers, a task that is becoming increasingly arduous. Various AI-based tools can read the manuscript via keywords and references and suggest experts, including those outside the editorial office’s networks. According to researchers in the field, this saves time, increases the number of potential reviewers, and improves the match between manuscript topics and reviewer expertise.
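The keyword-based matching described above can be sketched very simply: score each candidate reviewer by the similarity between word-count vectors of the manuscript abstract and the text of the reviewer's own publications. This is an illustrative toy under that assumption, not any vendor's actual algorithm; production systems add citation analysis, conflict-of-interest checks, and workload data – and, as the panellists stressed below, a human makes the final choice.

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Word-frequency vector of a text (a bag-of-words model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-frequency vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def suggest_reviewers(abstract: str, candidates: dict[str, str], top_k: int = 3):
    """Rank candidate reviewers (name -> text of their work) by topical
    similarity to the manuscript abstract; a human vets the shortlist."""
    v = vectorize(abstract)
    ranked = sorted(candidates.items(),
                    key=lambda item: cosine(v, vectorize(item[1])),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]
```

A sketch like this also makes the risks discussed below easy to see: if the candidate pool fed into the system is skewed towards certain countries or institutions, the ranking simply reproduces that skew.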

However, researchers warn of risks: nuanced and multidisciplinary subjects may escape the algorithm, and models may reinforce existing biases, such as favouring peer reviewer candidates from certain countries or institutions. Algorithms are also incapable of critical thinking – of considering, for example, whether top names in a field might be too busy to spend their time on review work. The system may even suggest entirely fictitious reviewer candidates if the data fed into it is of poor quality.

One panellist pointed out that making use of the opportunities provided by AI is more relevant to younger researchers serving as peer reviewers than their more experienced colleagues “who have plenty of expertise (in their field) even without AI. Juniors lack their networks.”

Peer review routine: what AI helps with and where it hits its limits

The peer reviewer’s job is, above all, to assess the manuscript’s novelty value, significance, relevance, and scientific accuracy. AI can help with routine tasks such as condensing and summarising texts, checking statistical reporting, identifying inconsistencies, and organising tables and images. Used properly, AI tools free up the reviewer’s time for what is essential: assessing whether the research is genuinely new, important, and well-grounded. Human expertise, judgement, and responsibility cannot be replaced, as the research literature stresses.

This was also stressed by the panellists. It is useful to be able to find information, check details, and reveal plagiarism, but entirely AI-generated peer reviews cannot be accepted, even though the panellists noted that fully AI-generated and AI-reviewed studies appear to be becoming more common. According to the panellists, “often there is no evidence, but a strong suspicion of its use.”

Transparency should also be required from peer reviewers regarding whether and how they have used AI. The panellists emphasised the importance of transparency between editors, peer reviewers, and researchers. One panellist summarised the idea by stating: “We don’t know, and that is why we must suspect everything… the era of suspicion.”

Publishing decisions and the post-publication phase

If reviewers disagree on the relevance of the manuscript and the care put into it, editors may use AI to help decide what corrections to request or whether to accept the work as is. Even after publication, AI can help with summaries and translations, making research more accessible. However, data protection and confidentiality should be kept in mind at all times. If editors or peer reviewers feed the text under review to AI, its content – and potentially significant novel discoveries – will spread uncontrollably, as many panellists pointed out.

The panellists had very little to say about observations or assumptions of AI use in publishing decisions. One panellist offered an example in which AI was used to summarise interviews conducted by people when selecting researchers for a science conference.

Looking to the future: uncertainty, possibilities, and limits

According to the research literature, AI increases the efficiency of work in the scientific publication process but also brings with it new kinds of potential errors. AI tools used to detect scientific dishonesty are not infallible, as researchers remind us: they may flag a text as dishonestly produced even though good scientific practice was followed in its creation, or they may miss a genuine problem.

AI tools may also cause bias. If researchers are screened or manuscripts are reviewed differently depending on country, institution, or language, the result, researchers note, is unfairness. Furthermore, so-called hallucinations and invented citations are a risk specific to generative tools – at least at their current stage of development.

The panels of all fields highlighted both the positive and negative uses of AI and the threats and possibilities it brings to science. The panellists found AI an effective tool and considered where the limits of its use lie. They asked whether existing doubts about the usefulness and necessity of science are worsened by AI. On the other hand, the panellists believed that even as AI becomes more advanced, there will remain things that cannot be handed over from a person to a machine.

The methods of science change, but as the panellists put it: “Everything is open. The future is uncertain.” “The threats are materialising… the same applies to the possibilities.” “We should find a good marriage of human and AI.”

Based on the above, I wonder whether the future development of the Publication Forum classification should take into account publication channels’ guidelines on AI use and their descriptions of how they themselves use AI in the scientific publication process.

Sources:
Ahn, S. (2024, September 1). The transformative impact of large language models on medical writing and publishing: Current applications, challenges and future directions. Korean Journal of Physiology and Pharmacology, 28(5), 393–401.

Alnaimat, F., AlSamhori, A. R. F., Hamdan, O., Seiil, B., & Qumar, A. B. (2025, June 2). Perspectives of Artificial Intelligence Use for In-House Ethics Checks of Journal Submissions. Journal of Korean Medical Science, 40(21), e170.

Checco, A., Bracciale, L., Loreti, P., et al. (2021). AI-assisted peer review. Humanities and Social Sciences Communications, 8, 25.

da Veiga, A. (2025). Ethical guidelines for the use of generative artificial intelligence and artificial intelligence-assisted tools in scholarly publishing: A thematic analysis. Science Editing, 12(1), 28–34.

Farber, S. (2024). Enhancing peer review efficiency: A mixed-methods analysis of artificial intelligence-assisted reviewer selection across academic disciplines. Learned Publishing, 37(4), e1638.

Ganjavi, C., Eppler, M. B., Pekcan, A., Biedermann, B., Abreu, A., Collins, G. S., Gill, I. S., & Cacciamani, G. E. (2024). Publishers’ and journals’ instructions to authors on use of generative artificial intelligence in academic and scientific publishing: Bibliometric analysis. BMJ, 384, e077192.

Hosseini, M., & Resnik, D. B. (2025). Guidance needed for using artificial intelligence to screen journal submissions for misconduct. Research Ethics, 21(1), 1–8.

Kousha, K., & Thelwall, M. (2023). Artificial intelligence to support publishing and peer review: A summary and review.

Liang, W., Zhang, Y., Cao, H., Wang, B., Ding, D. Y., Yang, X., Vodrahalli, K., He, S., Smith, D. S., Yin, Y., McFarland, D. A., & Zou, J. (2023). Can large language models provide useful feedback on research papers? A large-scale empirical analysis. arXiv.

Razack, H. I. A., Mathew, S. T., Saad, F. F. A., & Alqahtani, S. A. (2021). Artificial intelligence-assisted tools for redefining the communication landscape of the scholarly world. Science Editing, 8(2), 134–144.

Zaharie, M. A., & Osoian, C. L. (2016). Peer review motivation frames: A qualitative approach. European Management Journal, 34(1), 69–79.

Zhang, A., Gao, Y., Suraworachet, W., Nazaretsky, T., & Cukurova, M. (2025, April 15). Evaluating trust in AI, human, and co-produced feedback among undergraduate students. arXiv.

Zhuang, Z., Chen, J., Xu, H., Jiang, Y., & Lin, J. (2025). Large language models for automated scholarly paper review: A survey. arXiv.

Quick guide for editors and reviewers

  • Use in a transparent manner: disclose what tools were used for what.
  • Keep the human at the wheel: check all “alerts” manually.
  • Avoid bias: monitor whether the system treats manuscripts from different countries or institutions equally.
  • Protect the data: do not feed confidential material into open services.
  • Verify citations: check the existence and content of sources – do not solely trust an AI-generated list.
  • Remember authorship: AI is not the author; the person has responsibility.
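The “verify citations” rule above can be partly automated. The sketch below cross-checks that every in-text citation of the form “(Author, Year)” has a matching entry in the reference list, flagging possible hallucinated citations for manual review; the citation pattern is a simplifying assumption, and a real checker would also resolve DOIs and verify that the sources actually exist.

```python
import re

def find_intext_citations(body: str) -> set[tuple[str, str]]:
    """Extract (surname, year) pairs such as '(Farber, 2024)' from body text."""
    return set(re.findall(r"\(([A-Z][\w-]+),\s*(\d{4})\)", body))

def find_reference_entries(references: list[str]) -> set[tuple[str, str]]:
    """Extract (first-author surname, year) from entries like
    'Farber, S. (2024). ...'."""
    found = set()
    for entry in references:
        m = re.match(r"([A-Z][\w-]+),.*?\((\d{4})", entry)
        if m:
            found.add((m.group(1), m.group(2)))
    return found

def unmatched_citations(body: str, references: list[str]):
    """Citations with no corresponding reference entry -- candidates for
    being hallucinated; a human must make the final call."""
    return sorted(find_intext_citations(body) - find_reference_entries(references))
```

As throughout this essay, the output is a list of suspicions, not verdicts: a flagged citation may simply use a format the pattern does not cover, so the editor or reviewer still checks each case by hand.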
