MJRD Statement on the Use of Generative AI

  1. Scope and Purpose

This statement defines the ethical use, disclosure, and oversight of Generative Artificial Intelligence (GenAI) in manuscript preparation, peer review, and editorial processes of the Mbeya Journal of Research and Development (MJRD). It is aligned with best practices in scholarly publishing and the principles of the Committee on Publication Ethics (COPE).

 

  2. Definition

Generative Artificial Intelligence (GenAI) refers to artificial intelligence systems capable of generating text, images, code, or other content in response to user prompts. Examples include ChatGPT, DALL·E, and similar tools.

 

  3. Principles and Standards

3.1 Ethical Use (COPE Principles)

MJRD requires all authors, reviewers, and editors to adhere to ethical publishing standards. GenAI tools may be used only as supportive instruments for limited tasks such as language refinement. They must not replace human intellectual contribution, analysis, interpretation, or decision-making.

 

3.2 Transparency

Any use of GenAI must be clearly disclosed in the manuscript. Authors must specify the tool used and the purpose for which it was applied.

 

3.3 Authorship and Accountability

GenAI tools do not meet authorship criteria and therefore cannot be listed as authors or co-authors. Authors remain fully responsible for the accuracy, integrity, and originality of all submitted content.

 

3.4 Peer Review Integrity

Reviewers are required to maintain confidentiality and uphold independent scholarly judgment. The use of GenAI tools in peer review is discouraged unless explicitly approved and disclosed.

 

  4. Acceptable Uses of Generative AI

- Language refinement (grammar, syntax, style)

- Data presentation support (verified and cited data only)

- Idea exploration (brainstorming only)

 

  5. Prohibited Uses of Generative AI

- Fabrication or falsification of data or references

- Fake or unverifiable citations

- Undisclosed AI-generated scholarly content

- Misrepresentation of AI output as human work

 

  6. Disclosure Requirements

Authors must disclose any use of GenAI clearly in the manuscript, stating the name of the tool and the purpose for which it was used.

Example:

"Generative AI tools such as ChatGPT were used for language editing and improving clarity. All research design, analysis, and interpretation were conducted by the authors."

 

  7. Editorial Oversight

Editors will evaluate AI disclosures and may request clarification or revision if misuse is suspected.

 

  8. Consequences of Misuse

Violations may result in rejection, retraction, or institutional notification in line with COPE guidelines.