Akapress Journal of Scientific Publishing's Artificial Intelligence Use Policy

1. Introduction and purpose

1.1 Policy Statement

Akapress Journal of Scientific Publishing recognizes the transformative potential of AI technologies in academic research and scientific publishing.

This policy sets comprehensive guidelines for the ethical, transparent, and responsible use of AI tools by authors, reviewers, and editors, ensuring compliance with the standards of Q2 and Q3-ranked journals while maintaining the highest levels of academic integrity.

1.2 Scope and Application

This policy applies to all stakeholders involved in the journal's publishing process:

  • Authors submitting scientific papers for preprint and final peer review
  • Peer reviewers who evaluate submitted scientific papers
  • Editorial staff who manage the publishing process
  • Editorial Board members who provide oversight and guidance

1.3 Compliance with international standards

This policy has been developed in accordance with the guidelines of:

  • Committee on Publication Ethics (COPE)
  • International Committee of Medical Journal Editors (ICMJE)
  • Major publishers including Elsevier, Springer Nature, Taylor & Francis, and Wiley
  • Best practices from Scopus-indexed journals with Q2 and Q3 rankings

2. Definitions

2.1 Basic Terms

Artificial intelligence tools: Machine-based systems that can derive solutions to tasks with varying degrees of autonomy, including but not limited to:

  • Large Language Models (LLMs), such as ChatGPT, Claude, and Gemini
  • Text generation and editing tools
  • Artificial intelligence systems for data analysis and visualization
  • Image generation or processing tools

Generative AI: AI systems capable of generating various types of content including text, images, audio, or synthetic data.

Substantive use of artificial intelligence: Use of AI that does any of the following:

  • Produces evidence, analysis, or discussion that supports the conclusions of the scientific paper
  • Directly affects the content or results of the research
  • Generates large portions of text, data, or visuals for a scientific paper

Disclosure-requiring use: Any use of AI beyond basic spell checking, grammar correction, or standard reference management tools.

3. Author Guidelines

3.1 Permitted uses of artificial intelligence

Authors may use AI tools for the following purposes with mandatory disclosure:

3.1.1 Improving language and writing

  • Grammar and spell checking beyond standard software
  • Language improvement and readability enhancement
  • Translation assistance for bilingual (Arabic-English) articles
  • Style and formatting improvement

3.1.2 Research support activities

  • Literature search and classification assistance
  • Formatting and organizing references
  • Idea generation and brainstorming support
  • Exploration of research methodology

3.1.3 Data processing and analysis support

  • Statistical analysis assistance (with human verification)
  • Data visualization improvement
  • Identifying patterns in large data sets
  • Code development and debugging

3.2 Prohibited Uses of Artificial Intelligence

The following applications of AI are strictly prohibited:

3.2.1 Content Generation

  • Generating or fabricating primary research data
  • Creating fictitious or synthetic experimental results
  • Generating fake citations or references
  • Generating substantial manuscript text without significant human input

3.2.2 Image and visual content

  • Creating, modifying, or processing research images, figures, or photographs (except where the researcher cannot obtain images of rare objects, in which case such software may be used)
  • Generating fake microscopic, medical, or scientific images
  • Manipulating data visualizations to distort results
  • Creating visual summaries or illustrations

3.2.3 Critical analysis and interpretation

  • AI-generated scientific conclusions or interpretations
  • Automated peer review or evaluation of scientific papers
  • AI-generated responses to reviewer comments
  • Delegating critical thinking or decision-making to AI systems

3.3 Disclosure Requirements

3.3.1 Mandatory Disclosure Statement

Authors must include a dedicated section titled "Artificial Intelligence Disclosure" immediately before the References section, using the following format:

Artificial Intelligence Disclosure

In preparing this paper, the author(s) used [name of AI tool/service, version] for [specific purpose].

The AI-generated outputs were reviewed, edited, and verified by the authors, who bear full responsibility for the content of this publication.

3.3.2 Detailed disclosure requirements

The disclosure statement must specify:

  • Tool identification: the exact name and version of the AI system used
  • Purpose: the specific tasks for which AI was employed
  • Scope: the approximate percentage of, or sections in which, AI was used
  • Verification process: how the authors verified AI-generated content
  • Human supervision: a description of the author review and editing process

3.3.3 Bilingual Disclosure

For bilingual articles, disclosure statements must be submitted in both Arabic and English.

3.4 Authorship Criteria

3.4.1 Prohibition of AI Authoring

AI tools cannot be listed as authors or co-authors. Authorship requires:

  • Ability to take responsibility for the integrity of the research
  • Ability to respond to inquiries about the work
  • Legal capacity to enter into publishing agreements
  • Accountability for ethical compliance

3.4.2 Human Accountability

The authors remain fully responsible for:

  • Accuracy and correctness of all content
  • Research methodology and ethical compliance
  • Data integrity and authenticity
  • Appropriate citation and attribution
  • Compliance with journal policies

4. Reviewer Guidelines

4.1 Confidentiality Requirements

4.1.1 Protecting the scientific paper

Reviewers must not:

  • Upload the scientific paper, or any part of it, to any AI tool
  • Share scientific paper content with AI systems for analysis
  • Use AI tools to generate substantive parts of review reports
  • Compromise the confidentiality of scientific papers through AI processing

4.1.2 Intellectual Property Protection

  • Maintain strict confidentiality of all scientific paper contents
  • Protect the authors' intellectual property rights
  • Avoid data privacy breaches when using AI tools
  • Respect proprietary information and unpublished results

4.2 AI Disclosure Responsibilities

4.2.1 Identification indicators

Reviewers should alert editors to potential undisclosed use of AI when they discover:

  • Unusual writing patterns or inconsistent style
  • Implausible data patterns or statistical results
  • Generic or formulaic language inconsistent with domain standards
  • Factual errors typical of AI hallucination
  • Inconsistencies between the methods and results sections

4.2.2 Reporting procedures

When reviewers suspect undisclosed AI use, they should:

  • Document specific concerns with examples
  • Report findings to the managing editor
  • Provide a detailed explanation of the suspicious items
  • Avoid making direct accusations in review reports
  • Focus on academic merit while noting their concerns

4.3 Evaluation criteria

4.3.1 Evaluation of disclosed use of artificial intelligence

When the use of AI is properly disclosed, reviewers should:

  • Verify the suitability of the AI application for the stated purpose
  • Evaluate whether human verification appears sufficient
  • Assess the scientific validity of AI-assisted content
  • Check for potential bias or errors in AI-generated components
  • Confirm compliance with the journal's AI policy

4.3.2 Limited use of artificial intelligence by reviewers

Reviewers may use AI tools to:

  • Improve the language of their review reports (with disclosure)
  • Verify literature for review accuracy
  • Check the grammar and style of their review content

All uses of AI by reviewers must be disclosed to the editorial team.

5. Editors' Guidelines

5.1 Verification of Compliance

5.1.1 Initial screening process

Editors must:

  • Verify AI disclosure statements where required
  • Check compliance with disclosure format requirements
  • Ensure bilingual disclosure for Arabic-English articles
  • Verify the appropriateness of the disclosed use of AI
  • Flag incomplete or insufficient disclosures

5.1.2 AI Detection Tools

The journal uses:

  • Automated plagiarism detection systems
  • AI-generated content detection software
  • Image integrity verification tools
  • Statistical analysis validation systems
  • Reference verification mechanisms
  • AI tools designated by the journal's Editorial Board for editors' use within defined limits (clause 5.3.2); these tools are audited periodically for reliability and bias, particularly in the current period of rapid AI development

5.2 Violation Response Process

5.2.1 Minor Violations

Responding to insufficient disclosure or minor policy violations:

  • Notify the author with a specific policy citation
  • Offer an opportunity to revise and correct the disclosure
  • Provide educational guidance on the correct use of AI
  • Review the revised submission for compliance
  • Document the warning in written records

5.2.2 Major Violations

Responding to intentional deception or material misuse:

  • Immediate rejection of the scientific paper with a detailed explanation
  • Notification of the researcher's institution of the policy violation
  • Temporary submission ban (6-12 months depending on severity)
  • COPE consultation for complex cases
  • Retraction if the paper has already been published

5.2.3 Appeal Process

Authors may appeal violation determinations through:

  • A formal written appeal within 30 days
  • Independent Editorial Board review
  • Consultation of external experts if necessary
  • A final decision by the editor-in-chief
  • COPE dispute-resolution guidance

5.3 Policy Enforcement

5.3.1 Consistent Application

  • Apply policies uniformly across all submissions
  • Maintain detailed records of violations and responses
  • Review and update the policy regularly
  • Train staff to detect and respond to AI use
  • Monitor effectiveness and adjust procedures

5.3.2 Editors' use of artificial intelligence

Editors may use AI tools for:

  • Administrative tasks (scheduling, coordination)
  • Reviewer matching and assignment assistance
  • Language editing of editorial communications
  • Quality assurance and process improvement

All editorial uses of AI must be documented, and the confidentiality of scientific papers must be maintained, to ensure accountability when needed.

6. Ethical principles

6.1 Transparency

Mandatory openness ensures trust and verifiability:

  • Full disclosure of all uses of AI in accordance with clause 3.3
  • Clear communication of AI policy to all stakeholders
  • General availability of policy and updates
  • Transparent enforcement and violation reporting
  • Open dialogue on the role of artificial intelligence in scientific publishing

6.2 Integrity

Artificial intelligence supports, but does not replace, human expertise:

  • Human supervision is required for all AI applications.
  • The authors bear full responsibility for the accuracy of the content.
  • Scientific accuracy is maintained through human verification.
  • Critical thinking remains exclusively in the human domain.
  • Quality standards are maintained regardless of the use of AI.

6.3 Accountability

Clear assignment of responsibility for all content:

  • The authors are responsible for all content of the scientific paper.
  • Reviewers are responsible for the quality of the assessment.
  • Editors are responsible for enforcing the policy.
  • The journal is responsible for maintaining standards.
  • Clear consequences for policy violations

7. Implementation framework

7.1 Phased rollout (12-month timeline)

Phase 1: Foundation (Months 1-2)

  • Finalize and approve the policy
  • Training the editorial team on AI detection and response
  • Preparing the system for AI detection tools
  • Notifying authors of new policy requirements
  • Distribution of reviewer instructions

Success metrics: editorial team efficiency rating >4.5/5, policy clarity score >4.5/5

Phase 2: Implementation (Months 3-4)

  • Publish the policy on the journal's website
  • Update the author guidelines to include the AI policy
  • Deliver reviewer training sessions and materials
  • Begin enforcement with an educational focus
  • Collect anonymous stakeholder feedback detailing experiences using AI under this policy

Success metrics: Author compliance rate >95%, reviewer satisfaction >4.0/5

Phase 3: Monitoring (Months 5-8)

  • Compliance monitoring and violation tracking
  • Policy effectiveness evaluation
  • Feedback incorporation and minor adjustments
  • Quality assurance reviews
  • Stakeholder satisfaction surveys

Success metrics: Violation detection accuracy >90%, resolution time <30 days

Phase 4: Improvement (Months 9-12)

  • Refine the policy based on experience
  • Implement process improvements
  • Conduct annual review and planning
  • Document best practices
  • Plan for continuous improvement

Success metrics: policy effectiveness score >4.5/5, stakeholder satisfaction >4.0/5

7.2 Success indicators

  • High compliance rates (>95% appropriate disclosure)
  • Effective detection and response to violations
  • Stakeholder satisfaction with clarity of policy
  • Maintained quality standards for scientific papers
  • Improved journal reputation and indexing prospects

8. Consequences and Enforcement

8.1 Categories of Violation

8.1.1 Category 1: Minor Violations

  • Incomplete AI disclosure statements
  • Non-compliance with disclosure formatting
  • Unintentional policy misunderstanding
  • Minor procedural errors

Typical response: warning, review opportunity, educational guidance

8.1.2 Category 2: Moderate Violations

  • Failure to disclose material use of AI
  • Inappropriate application of AI without disclosure
  • Repeated minor violations
  • Reference confidentiality violations

Typical response: paper rejection, revision requirement, temporary restrictions

8.1.3 Category 3: Major Violations

  • Deliberate deception about the use of AI
  • AI-generated or fabricated research data
  • Systematic policy violations
  • Fraudulent AI application

Typical response: paper rejection, publication ban, notification of the researcher's institution

8.1.4 Category 4: Reviewer Violation

  • Liability for negligence: If a reviewer is found to have committed gross negligence in performing their duties, such as failing to detect clear plagiarism, overlooking data manipulation, or failing to report a material conflict of interest of which they were aware, the reviewer bears the resulting professional and moral responsibility.
  • Consequences of a breach of responsibility: If a scientific paper is published and a serious violation is later discovered that the reviewer should have detected and reported, this not only damages the journal's reputation but also constitutes a breach of trust in the reviewer.

Typical response: formal notification of the incident, suspension from future reviews, and removal of the reviewer's name from the journal's database of approved reviewers.

8.2 Appeals and Due Process

All authors and reviewers have the right to:

  • A clear explanation of violation determinations
  • A written appeal process within 30 days
  • Independent review by the Editorial Board
  • External expert consultation for complex cases
  • A fair hearing following COPE guidelines

9. Continuous improvement

9.1 Regular Policy Updates

  • Semi-annual review of policy effectiveness
  • Monitoring technological progress
  • Incorporating best practices from other journals
  • Stakeholder feedback integration
  • Maintaining compliance with COPE guidelines

9.2 Quality Control

  • Violation trend analysis
  • Compliance rate tracking
  • Policy effectiveness evaluation
  • Stakeholder satisfaction measurement
  • Continuous improvement implementation

10. Conclusion

This comprehensive AI policy positions ACAPRESS as a forward-thinking journal that embraces technological innovation while maintaining the highest standards of academic integrity.

By establishing clear guidelines, transparent processes, and strong enforcement mechanisms, the journal ensures compliance with the standards of Q2 and Q3 ranked journals while supporting the responsible adoption of AI in scientific publishing.

The policy framework balances innovation and integrity, provides clear guidance for all stakeholders, and lays the foundation for the sustainable integration of AI into academic publishing. Regular monitoring and continuous improvement ensures that the policy remains effective and relevant as AI technology continues to evolve.

This policy is a living document that is updated regularly to reflect technological developments, best practices, and stakeholder feedback.

All contributors to the journal are responsible for keeping up to date with the latest issue available on its website.

Effective date: October 23, 2025

Next review date: April 23, 2026

Contact: contact@acaprs.net