Artificial Intelligence (AI) Policy

The Arab German Journal of Sharia and Law Sciences (AGJSLS) acknowledges the expanding presence of Artificial Intelligence (AI) tools—including large language models (LLMs) and other generative systems—in research, writing, peer review, and publishing.

AGJSLS is dedicated to maintaining research integrity and publication ethics in line with the principles of the Committee on Publication Ethics (COPE). The journal supports responsible innovation while emphasizing that all published work must remain the product of human intellectual contribution and ethical responsibility.

The following guidelines outline how AI tools may be appropriately used by authors, reviewers, and editors.


1. For Authors

1.1 Responsible Use of AI

  • Authors may employ AI tools to enhance the language, clarity, or structure of their writing. Such limited use does not require formal disclosure.

  • Authors remain fully accountable for the originality, accuracy, and integrity of their work, regardless of whether AI tools were used.

1.2 Generative AI and Disclosure

  • If AI tools are used to generate substantive material—such as text, data, images, or ideas—this use must be clearly disclosed in the manuscript.

  • The disclosure should state the tool’s name, version, and specific purpose (for example: “ChatGPT (OpenAI, version 4) was used to assist in summarizing the literature”).

  • AI tools cannot be listed as co-authors because they lack the ability to assume ethical or legal responsibility for the content.

1.3 Accuracy and Verification

  • Authors are responsible for verifying the accuracy of information, the authenticity of sources, and the absence of plagiarism or fabrication in any AI-generated text.

  • The use of AI to create or modify data, fabricate references, or misrepresent existing works is strictly prohibited.


2. For Reviewers

2.1 Ethical Review Practices

  • Reviewers may use AI tools to improve the readability or clarity of their review reports, but not to generate assessments or judgments about the manuscript’s content.

  • The evaluation of submissions must rely on the reviewer’s own expertise and critical judgment.

2.2 Confidentiality and Data Security

  • Reviewers must not upload any part of a manuscript or related confidential material into AI systems that collect or store data externally.

  • Breaches of confidentiality through AI platforms are considered serious ethical violations under COPE guidelines.


3. For Editors

3.1 Editorial Oversight

  • Editors may use AI tools for limited administrative or technical purposes (e.g., grammar correction, similarity checks, or reviewer suggestions).

  • However, AI systems must not be used to draft editorial communications, make publication decisions, or analyze unpublished manuscripts.

3.2 Responsibility

  • The Editor-in-Chief and the editorial team retain full responsibility for ensuring that AI use within the journal aligns with COPE’s Core Practices and with AGJSLS’s standards of scholarly integrity.


4. Misuse and Non-Disclosure

  • Any inappropriate or undisclosed use of AI tools during the submission or review process will be investigated under AGJSLS’s ethics procedures.

  • Depending on the outcome, possible actions include manuscript rejection, correction, or retraction, in accordance with COPE recommendations.