AI Tools and Generative AI Use

Policy on AI Tools and Generative AI, based on guidance from COPE (the Committee on Publication Ethics)

LenteraBio recognizes the benefits of large language models (LLMs), such as ChatGPT, and other generative AI as productivity tools for authors during article preparation. These tools can assist in generating initial ideas, structuring content, summarizing, paraphrasing, and refining language. However, all language models have limitations and cannot replicate human creative and critical thinking; human intervention remains essential to ensure the accuracy and appropriateness of the content presented to readers. We therefore require authors to be mindful of the following considerations when using LLMs in their submissions:

  1. Objectivity: LLM-generated text may reproduce previously published content that carries biases, including racism, sexism, or other prejudices, and minority viewpoints may not be adequately represented. Because the generated text is decontextualized, such biases are harder to identify, and the use of LLMs risks perpetuating them.

  2. Accuracy: LLMs can produce false content, particularly when used beyond their domain or when addressing complex or ambiguous topics. They might generate linguistically plausible but scientifically implausible content, provide incorrect facts, and even generate nonexistent citations. Some LLMs may also lack access to recent data, resulting in an incomplete picture.

  3. Contextual understanding: LLMs struggle to apply human understanding to the context of a given text, especially when dealing with idiomatic expressions, sarcasm, humor, or metaphorical language. This can lead to errors or misinterpretations in the generated content.

  4. Training data: LLMs require a substantial amount of high-quality training data to achieve optimal performance. However, in certain domains or languages, such data may not be readily available, limiting the model's usefulness.

Guidance for Authors:

Authors preparing a manuscript for LenteraBio can use AI Tools to support them. However, these tools must never be used as a substitute for human critical thinking, expertise and evaluation. AI Tools should always be applied with human oversight and control. Ultimately, authors are responsible and accountable for the contents of their work. This includes accountability for:

1. Carefully reviewing and verifying the accuracy, comprehensiveness, and impartiality of all AI-generated output (including checking the sources, as AI-generated references can be incorrect or fabricated).

2. Editing and adapting all material thoroughly to ensure the manuscript represents the author’s authentic and original contribution and reflects their own analysis, interpretation, insights and ideas.

3. Ensuring the use of any tools or sources, AI-based or otherwise, is made clear and transparent to readers; for the use of AI Tools, we require a disclosure statement upon submission.

4. Ensuring the manuscript is developed in a way that safeguards data privacy, intellectual property and other rights, by checking the terms and conditions of any AI Tool that is used.

Please note that AI bots such as ChatGPT should not be listed as authors in your submission.

LenteraBio does not allow more than 25% of a manuscript's text to be AI-generated writing, and figures created with generative AI are not permitted in the manuscript.

Guidance for Editors and Reviewers:

Maintaining the integrity of the editorial and peer review process requires reviewers and editors to uphold strict standards of confidentiality, objectivity, and accuracy at every stage of manuscript evaluation. With the increasing use of artificial intelligence (AI) technologies, including generative AI tools such as ChatGPT and similar systems, it is essential to establish clear ethical boundaries.

For reviewers, any manuscript submitted for peer review is confidential and must not be uploaded, shared, or processed using generative AI tools in any form. Uploading manuscripts or review reports to such tools may compromise author confidentiality, infringe copyright, and breach data privacy regulations. Furthermore, scientific peer review demands critical thinking and expert judgment, which cannot be delegated to AI. Reviewers are therefore prohibited from using generative AI to write, structure, or assist in the preparation of review reports. Reviewers remain fully responsible and accountable for the content and conclusions of their evaluations.

For editors, all manuscripts and related editorial correspondence must be treated as confidential. Editors must not upload manuscripts, decision letters, or any editorial communications into generative AI tools, even for purposes such as language refinement. Editorial decisions, including the acceptance or rejection of a manuscript, must be based on the editor’s own professional evaluation and judgment, not on the output or suggestions generated by AI systems.

The use of AI technologies may be permitted for limited technical functions—such as plagiarism detection or formatting checks—provided that these tools do not access or process the scientific content of manuscripts directly, and that they are operated within secure, official systems that adhere to established data privacy and ethical standards.

If there is reasonable suspicion that a reviewer or author has misused generative AI without proper disclosure, editors should report the matter promptly to the publisher or the journal’s governing body for investigation.

While we acknowledge and support the responsible use of emerging technologies, it is important to emphasize that the scientific assessment of research and editorial decision-making remain fundamentally human responsibilities that must not be delegated to machines.

Further information

Please see the World Association of Medical Editors (WAME) recommendations on chatbots, ChatGPT, and scholarly manuscripts, and the Committee on Publication Ethics (COPE) position statement on Authorship and AI tools.

This policy may be subject to further evolution as we collaborate with our publishing partners to understand how emerging technologies can facilitate or hinder the research publication process. Please revisit this page for the latest information.