IFKAD Principles and Policies for AI use

IFKAD adopts key principles and policies regarding the use of AI tools and technologies in writing and reviewing research contributions.
IFKAD acknowledges that AI has great potential to accelerate discovery, improve accessibility, and advance the frontiers of science and research. If used inappropriately, however, it can introduce risks, bias, and unfair behaviours and actions.
To harness the benefits of AI tools and technologies in an accountable, responsible, and transparent manner when preparing works for submission, IFKAD emphasizes the following key principles and policies of AI use for authors and reviewers. These are intended to ensure appropriate human oversight of the writing process and to guarantee that AI never replaces human judgment or accountability.

AI Policy for Authors

Authors may selectively use AI tools and technologies in their research, but AI cannot replace human judgment or accountability at any stage of the research process. Accordingly, in their use of AI, authors are expected to:

  1. Take full responsibility for the accuracy of all content in their research contributions and the integrity of their research process
  2. Follow a two-step process when reporting the use of AI in their research:
    • Disclosure: For each stage of the research process, authors must identify whether AI tools were used
    • Accountability: If AI tools were used, authors must explicitly confirm that they carefully reviewed, verified, and accepted any AI-involved output. Upon request, authors may also be asked to provide additional details describing how AI was used or verified in specific tasks.

More specifically, authors submitting a research contribution to IFKAD are required to:

  1. Declare any AI use for each part of the manuscript and stage of its development (e.g. conceptualization, research design, data preparation and analysis, presentation of results, writing and editing, list of references, etc.)
  2. Verify the accuracy, validity, and appropriateness of the content and any citations generated by language models and correct any errors or inconsistencies
  3. Be conscious of the potential for plagiarism, as the LLM may have reproduced substantial text from other sources. Check the original sources to be sure you are not plagiarizing someone else’s work
  4. Acknowledge the limitations of language models in the manuscript, including the potential for bias, errors, and gaps in knowledge.

IFKAD reserves the right to determine whether the use of an AI tool is permitted in a submitted manuscript and to reject submissions on that basis. Where undisclosed or unreported AI use is identified during the review process and before acceptance, IFKAD may take appropriate corrective action against the authors. IFKAD may also take appropriate post-acceptance and post-publication actions on published material found to contain fabricated or fraudulent AI-generated content.

AI Policy for Reviewers

Reviewers are expected to exercise their own independent judgment and expertise in evaluating research contributions submitted to IFKAD. The use of AI should never substitute for the reviewer’s personal assessment of the quality, contribution, or integrity of a submission. To preserve confidentiality and intellectual integrity:

  1. Reviewers must not upload any portion of a manuscript, including text, figures, or data, into any AI tool or platform
  2. Reviewers must not use AI tools to generate review reports
  3. Reviewers may use AI tools only to assist in editing or improving the clarity of their own written reviews

Future development

Please note that these principles and policies may evolve further as our understanding develops of how new and emerging AI technologies and tools can help or hinder the preparation of research contributions. IFKAD will monitor these developments to ensure that our conference and related publications remain committed to transparent, high-quality, and trusted academic content.