In this era of rapid digital transformation, Generative Artificial Intelligence (Generative AI) technology has had a significant impact on many aspects of life, including academia and scholarly publishing. AI's ability to generate text, summarize literature, translate languages, and even assist in data analysis offers new opportunities for researchers and academic authors. However, alongside this potential for efficiency and innovation come challenges of ethics, transparency, and scientific integrity that we must address collectively.

This journal has observed an increase in manuscript submissions involving the use of generative AI. In response to this development, we feel it is crucial to communicate clear guidelines and expectations to all our authors, reviewers, and readers.

Core Principles for Generative AI Use

  1. Maximum Transparency: The use of generative AI in the article writing process must be disclosed fully and specifically in the methods or acknowledgments section. Authors should explain the role of the AI, the tools used, and which parts of the manuscript received AI assistance.

  2. Human Accountability Remains Paramount: Authors are fully responsible for all aspects of their work, including content generated with AI assistance. Claims and interpretations within the article must still originate from human authors who can be held accountable for them.

  3. Intellectual Originality: AI should not be used to generate ideas or content that replaces the critical thinking and original contributions of the authors. Ideally, AI's role should be limited to enhancing productivity and providing technical assistance.

  4. Verification and Validation: All facts, references, and data generated or suggested by AI must be independently verified by the authors. The risk of AI hallucination (the generation of factually incorrect or fabricated content) must be anticipated and minimized.

Practical Guidelines for Authors

  • Mandatory Disclosure: All manuscripts using generative AI must include a transparency statement explaining:

    • The specific AI tool(s) used (e.g., ChatGPT, Gemini, Claude)

    • The purpose of use (e.g., language enhancement, creating outlines, summarizing literature)

    • The sections of the manuscript that received AI assistance

    • The verification steps undertaken by the authors

  • Limitations of Use: AI must not be used to:

    • Generate false or manipulated research data

    • Write entire manuscript sections without substantial author oversight and contribution

    • Replace the critical literature review that authors must conduct

    • Formulate conclusions or interpretations without rigorous human supervision

Journal Policy

Starting with this volume, all submitted manuscripts must include a statement regarding the use of generative AI. Manuscripts without this statement will be returned to the authors before the review process begins. Reviewers are also expected to be vigilant for indications of inappropriate or undisclosed AI use.

We encourage open discussion about the use of this technology in research and scholarly publishing. The letters to the editor section is open for perspectives on the challenges and opportunities of generative AI in academia.

Closing Remarks

Generative AI technology, when used responsibly and transparently, can be a valuable tool in accelerating scientific progress. However, the integrity of science relies on intellectual honesty, methodological transparency, and human accountability. As a scientific community, our responsibility is to leverage these new tools without compromising the ethical standards that form the basis of public trust in science.

We invite all stakeholders to engage in this ongoing conversation and contribute to the development of best practices that balance innovation with scientific integrity.

The Editorial Team