Generative AI Policies
For Authors
Use of Generative AI and AI-Assisted Technologies in Scientific Writing
This policy applies solely to the writing process and does not restrict the use of AI tools for data analysis or generating research insights.
Authors may use generative AI and AI-assisted technologies during manuscript preparation, but only to improve language clarity and readability. Such use must be conducted under direct human supervision and control. Authors are responsible for carefully reviewing, verifying, and editing all AI-generated content, as AI may produce outputs that appear authoritative but are inaccurate, incomplete, or biased. Authors remain fully accountable for the entire content of their work.
Authors must disclose the use of generative AI or AI-assisted tools in their manuscript. A statement acknowledging this use will appear in the published article. Full transparency supports trust among authors, readers, reviewers, editors, and contributors, and ensures compliance with the terms of use of the relevant tools.
AI tools or systems must not be listed as an author or co-author, nor cited as an author. Authorship entails responsibilities and actions—such as critical evaluation, approval of the final manuscript, and accountability for research integrity—that can only be fulfilled by humans. Each author must ensure the accuracy and integrity of the work, confirm that all listed authors meet authorship criteria, verify the originality of the submission, and ensure no third-party rights are violated. Authors are encouraged to review DIGIVENTURE’s Ethics in Publishing policy before submission.
Use of Generative AI and AI-Assisted Tools in Figures, Images, and Artwork
DIGIVENTURE does not permit the use of generative AI or AI-assisted tools to create or alter images in submitted manuscripts. This includes enhancing, obscuring, moving, removing, or introducing specific features within an image or figure. Adjustments to brightness, contrast, or colour balance are acceptable only if they do not obscure or eliminate information present in the original image. DIGIVENTURE may use image forensics tools or specialised software to detect potential image manipulation.
The only exception is when AI is an integral part of the research methodology (e.g., AI-assisted imaging techniques used to generate or interpret research data, as in biomedical imaging). In such cases, the use of AI must be clearly and reproducibly described in the Methods section. This includes a detailed explanation of how the AI tool was used, the name of the model or software, version and extension numbers, and the manufacturer. Authors must comply with the AI software’s usage policies and ensure proper content attribution. If requested, authors may be required to provide pre-AI-adjusted image versions or the original raw composite images used to create the submitted figures for editorial review.
The use of generative AI in creating graphical abstracts is not allowed. For cover art, generative AI may be permitted in limited cases, provided the author obtains prior approval from the journal editor and publisher, demonstrates that all necessary rights have been secured, and ensures accurate content attribution.

For Reviewers
Use of Generative AI and AI-Assisted Technologies in the Peer Review Process
Submitted manuscripts are confidential documents. Reviewers must not upload a manuscript or any portion of it into a generative AI tool, as this may violate the author’s confidentiality, intellectual property rights, and, where applicable, data privacy regulations—especially if the manuscript contains personally identifiable information.
This confidentiality extends to the peer review report, which may contain sensitive information about the manuscript or its authors. Reviewers must therefore not input their review reports into AI tools, even for language refinement or readability improvement.
Peer review is a cornerstone of the scientific process, and DIGIVENTURE upholds the highest standards of integrity. The critical thinking, original assessment, and ethical judgment required in peer review are uniquely human responsibilities. Generative AI or AI-assisted technologies must not be used by reviewers to evaluate a manuscript, as these tools risk producing incorrect, incomplete, or biased assessments. Reviewers are solely responsible and accountable for the content and integrity of their review reports.
DIGIVENTURE’s AI policy permits authors to use AI in the writing process only for language improvement, with proper disclosure as outlined in the Guide for Authors. Reviewers can find this disclosure in a dedicated section at the end of the manuscript, before the references.
DIGIVENTURE employs identity-protected, in-house or licensed AI-assisted technologies that comply with the DIGIVENTURE Responsible AI Principles. These tools are used during screening for completeness checks, plagiarism detection, and reviewer matching. They are designed to protect author confidentiality, undergo rigorous bias evaluation, and adhere to strict data privacy and security standards.
DIGIVENTURE supports the responsible development and use of AI-driven tools that assist reviewers and editors, provided they uphold the confidentiality and data privacy rights of all stakeholders.
For Editors
Use of Generative AI and AI-Assisted Technologies in the Editorial Process
Manuscripts and all related communications must be treated as confidential. Editors must not upload a submitted manuscript or any part of it into a generative AI tool, as this could breach author confidentiality, intellectual property rights, and data privacy—particularly if the manuscript contains personal or identifiable information.
This confidentiality also applies to all editorial correspondence, including decision letters and notifications, which may contain sensitive information. Editors must not use AI tools to process or improve such communications, even for language editing.
Editorial decision-making requires human judgment, ethical responsibility, and critical evaluation—functions that cannot be delegated to AI. Generative AI or AI-assisted technologies must not be used by editors to assess manuscripts or make editorial decisions, as this could lead to inaccurate, incomplete, or biased outcomes. Editors remain fully responsible for the editorial process, final decisions, and communication with authors.
DIGIVENTURE’s AI policy allows authors to use generative AI in writing only for language and readability improvements, provided they disclose this use in their manuscript. Editors can locate this disclosure in a dedicated section before the references. If an editor suspects a violation of DIGIVENTURE’s AI policy by an author or reviewer, they should report it to the publisher immediately.
DIGIVENTURE utilises identity-protected AI technologies that align with the DIGIVENTURE Responsible AI Principles. These tools support screening processes such as plagiarism checks, completeness verification, and reviewer identification. They are designed to protect confidentiality, are regularly audited for bias, and comply with data protection regulations.
DIGIVENTURE actively supports the ethical integration of AI technologies that assist editors and reviewers, ensuring they respect the privacy and rights of all participants in the scholarly publishing process.
Note:
Generative AI refers to artificial intelligence systems capable of producing various types of content, including text, images, audio, and synthetic data. Examples include ChatGPT, NovelAI, Jasper AI, Rytr AI, DALL-E, and similar tools.