Generative AI Policy
In light of the rapid development of artificial intelligence (AI) tools and their impact on academic activity, we support their use to enhance research efficiency, improve article quality, and streamline routine processes. To preserve academic integrity, however, authors, reviewers, and editors must adhere to clear ethical standards, taking into account the recommendations of the Committee on Publication Ethics (COPE) and the editorial principles of leading publishers.
Use of AI by authors
Authors are permitted to use AI tools at various stages of manuscript preparation. This may include:
- text editing (grammar, spelling, style);
- data analysis and the automated generation of graphs, tables, or diagrams;
- improving readability or translating text;
- fact-checking and scientific referencing.
However, authors must explicitly state in the manuscript if AI was used at any of these stages. It is important that AI technologies do not replace the scientific work of the authors and are not used to create research outcomes that have not been validated by experimental data or other scientific methods.
AI must not be used to fabricate data, falsify results, or generate scientific claims without appropriate verification. Those who use AI remain accountable for the accuracy of the results it produces and must ensure compliance with ethical standards.
Disclosure of AI use
Authors must disclose the use of AI tools in preparing their manuscript. This disclosure must be made in the Acknowledgements or Materials and Methods section, specifying which AI tools were used, their functions, and their impact on the results of the work.
It is important to understand that AI is not considered an author of the article. The use of AI does not absolve authors of responsibility for the content of the article, the correctness of its interpretations, or its scientific integrity.
Use of AI by reviewers
Reviewers are prohibited from submitting the manuscript content to external AI systems, as this violates confidentiality principles. However, reviewers may use AI tools to improve the structure and language of their reviews, provided that the manuscript content is not shared with the AI system. Any changes or recommendations made by reviewers must be based on their own analysis of the material.
Use of AI by the editorial team
The editorial team may use AI tools for technical checks of articles (e.g., plagiarism detection or grammar evaluation), automation of some editorial processes, and assistance in maintaining communication with authors and reviewers. However, the editorial team does not use AI to make editorial decisions regarding the publication of articles or to modify the content of manuscripts.
Ethical principles and academic integrity
This policy aims to ensure academic integrity in the use of AI technologies. Authors and reviewers are required to adhere to the following ethical principles:
- the use of AI should not undermine academic integrity or scientific ethics;
- all results obtained using AI must be verified and confirmed by the author or reviewer;
- AI technologies must not be used to manipulate data or create fraudulent results.
Responsibility for AI use
Authors, reviewers, and the editorial team are responsible for their use of AI technologies within the academic process. If the use of AI tools leads to errors or breaches of ethics, these must be disclosed and rectified immediately. The editorial team has the right to retract articles that violate ethical norms or to require further clarification from the authors.
Policy updates
This policy will be reviewed and updated in line with the development of AI technologies and changes in international editorial standards (e.g., those of COPE and leading publishers such as Elsevier). Updates will account for new practices in the use of AI in the academic process as technology advances.