AI Policy
Journal of Intercollegiate Sport – Policy on Use of Artificial Intelligence
Purpose
To preserve the integrity, originality, and scholarly value of works published in Journal of Intercollegiate Sport, this policy sets forth rules and expectations for the use of artificial intelligence tools by authors, reviewers, and editors. It clarifies what is allowed, what is not, and what must be disclosed, ensuring transparent, ethical, and rigorous academic publishing.
Definitions
- AI tools / Generative AI: Software or systems (e.g., large language models such as ChatGPT and similar tools) that can generate text, code, analyses, images, or predictions in response to prompts.
- Semantic analysis: Use of tools to analyze meaning, relationships, and patterns in text or other data (permitted as described below).
- Coding / programming: Writing or using computer code as part of data collection, data processing, statistical analysis, or methodological tools.
For Authors
- Authenticity of Research & Data Collection
- Authors must conduct real, original data collection or use datasets acquired with proper permissions. Fabrication, falsification, or simulation of data via AI tools is prohibited.
- AI may assist in data cleaning or analysis only if methods are transparently described (tool, version, parameters) and results are reliable.
- Writing & Manuscript Preparation
- AI tools may not be used to write substantial parts of the manuscript, such as abstracts, introduction, literature review, methods description, results, discussion, or conclusions. These must be the original scholarly work of the authors.
- Minor assistance is allowed: grammar correction, spelling, translation, formatting, readability. Authors are fully responsible for reviewing, editing, and verifying any content revised by such tools.
- Coding & Semantic Analysis
- Use of AI-based software for semantic analysis is permitted, provided the process is clearly documented. Authors must explain how the semantic analysis was done, what tools were used, how validity was assessed, and ensure the interpretations are fully their own.
- Any code used for analysis (AI-based or not) must be under the authors’ control, transparent, reproducible, and cited appropriately if external tools/packages are employed.
- Authorship & Attribution
- AI tools cannot be listed as authors or co-authors.
- All authors must take responsibility for the content, including any portion influenced by AI assistance.
- Disclosure Requirement
- If any AI tools were used (for grammar, translation, semantic analysis, data processing, etc.), the authors must disclose:
- The name(s) and version(s) of the tool(s).
- The purpose(s) for which each tool was used.
- The extent or proportion of the work influenced.
- Disclosure should appear in a distinct section (e.g. “AI Tools and Methods” or “Disclosure of AI Use”) or in the Methods / Acknowledgments section, and also in the cover letter at submission.
- Prohibited Practices
- Submitting manuscripts that are predominantly or entirely generated by AI in lieu of authors’ own original work.
- Using AI to generate false or fabricated citations, references, or data.
- Misrepresenting the contribution of AI tools (e.g. claiming human authorship of AI-generated text).
For Reviewers
- Confidentiality
- Reviewers shall not share any manuscript, or parts thereof, with AI tools that require uploading content to external platforms, unless the platform is explicitly authorized and secure, in order to protect the confidentiality of authors and their unpublished work.
- Prohibited Uses of AI
- Reviewers must not use AI tools to generate substantive portions of their review (evaluation of methodology, intellectual merit, critique). Critical reasoning and judgment must be their own.
- Reviewers shall not substitute AI output for their own evaluation by relying on AI to perform major analytic or evaluative functions.
- Permitted Uses
- Reviewers may use AI tools for language polishing their reports (e.g. grammar, style) after having written their own review content.
- Reviewers may consult AI-based tools to check facts or references, or to clarify technical content, but responsibility for accuracy remains with the reviewer.
- Disclosure
- If a reviewer uses AI tools, even for permitted assistance, those uses must be disclosed to the editorial office.
For Editors
- Editorial Decision & Evaluation
- Editors must not use AI tools to make decisions about manuscript acceptance or rejection, or about the intellectual merit of submissions. All substantive editorial judgments must be made by human editors.
- Editors also must not use AI to write decision letters or summary evaluations, or to produce content that substitutes for their own judgment.
- Enforcement of Policy
- Editors are responsible for ensuring that authors and reviewers comply with this policy.
- If suspicion arises that a submitted work violates the policy (e.g., undisclosed AI use or fabricated content), editors must investigate.
- Transparency
- Editors shall provide clear statements of this policy in the journal’s Instructions to Authors and Reviewer Guidelines.
- Any changes to the policy should be announced publicly.
Enforcement & Consequences
- Detection & Investigation: If AI misuse is suspected during or after review (e.g., via similarity-detection tools or reviewer/editor inspection), the editorial team may request clarifications from authors or reviewers.
- Sanctions: Depending on severity, consequences may include rejection of the manuscript, publication of a correction, retraction of a published paper, a ban on future submissions by the authors, or notification of institutional authorities.
- Reviewer / Editor Misconduct: Reviewers who breach this policy (e.g., by uploading manuscripts to external AI platforms without authorization, or by using AI beyond the permitted uses) may be removed from the reviewer pool. Editors who violate this policy may be asked to step aside or face other remedial actions, which may include removal from the editorial board, suspension of future submissions, or notification of institutional authorities.