PRISMA-S, the search extension to PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses), was developed by librarians and methodologists to improve the reporting of literature search strategies in systematic reviews and other evidence syntheses. It was published in 2021 to address the need for transparent and comprehensive reporting of search methods, which are foundational to the quality and reproducibility of systematic reviews. The intersection between PRISMA-S and artificial intelligence (AI) is a growing area in knowledge synthesis as AI tools and software are evaluated and incorporated into literature reviews and expert searching.
Without standardized guidance, AI-assisted systematic reviews risk undermining trust in synthesized evidence. Two emerging frameworks address this gap: the PRISMA-trAIce Checklist and the Responsible AI in Evidence SynthEsis (RAISE) guidance. Both emphasize transparent documentation, human oversight, and rigorous evaluation to integrate AI responsibly.
RAISE Guidance
First drafted on September 11, 2024, and currently under revision (Version 2 updated June 3, 2025).
The Responsible AI in Evidence SynthEsis (RAISE) guidance is a collaborative effort led by the International Collaboration for Automation in Systematic Reviews, alongside major evidence synthesis organizations: Cochrane, Campbell Collaboration, Joanna Briggs Institute (JBI), the Collaboration for Environmental Evidence, and Wellcome. Available on the Open Science Framework (OSF), it provides practical recommendations for responsibly incorporating AI into evidence synthesis workflows.
PRISMA-trAIce Checklist
Published on December 10, 2025, in JMIR AI, the paper by Holst et al. introduces PRISMA-trAIce as a discipline-agnostic extension to the PRISMA 2020 reporting guideline, specifically tailored for systematic literature reviews (SLRs) that use AI as a methodological tool (not as the research subject). Unlike PRISMA-AI (which focuses on reviews of AI studies and remains in development), PRISMA-trAIce synthesizes items from established AI reporting standards such as CONSORT-AI, SPIRIT-AI, TRIPOD-AI, TRIPOD-LLM, DECIDE-AI, and GAMER, adapting them for SLR contexts.
The PRISMA-AI Steering Committee has begun creating an AI extension of the PRISMA guidelines for studies addressing AI-based interventions. The committee's efforts include registering the extension with ClinicalTrials.gov (NCT05382455) and with the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network as a guideline under development. Clinicians, researchers, statisticians, computer scientists, engineers, methodologists designing clinical trials, systematic reviewers, patients, journal editors, contributors to published AI extensions, and trialists with an interest in AI in health care are being recruited into a community collaboration that will create reporting guidelines for systematic reviews and meta-analyses that include AI. The PRISMA-AI team will use EQUATOR guidance to build consensus on the framework for these guidelines.
GAMER itself grew out of a systematic review and analysis of AI reporting guidelines in the medical field, which found that although many AI reporting standards already exist, they are limited in development methodology and scope. As generative AI (GAI) technology advances and its applications grow in complexity, current AI reporting guidelines in the medical domain are insufficient for the intersection of AI and medicine, so reporting guidelines specific to GAI are needed. The review's systematic analysis of existing standards supplied the evidentiary and theoretical foundation for GAMER, and its extraction of key items from major AI reporting standards established a preliminary item pool for the guideline's subsequent development and refinement.
The Chatbot Assessment Reporting Tool (CHART) is a reporting guideline developed to provide reporting recommendations for studies evaluating the performance of generative artificial intelligence (AI)-driven chatbots when summarizing clinical evidence and providing health advice, referred to as Chatbot Health Advice (CHA) studies. CHART was developed in several phases after performing a comprehensive systematic review to identify variation in the conduct, reporting and methodology in CHA studies. Findings from the review were used to develop a draft checklist that was revised through an international, multidisciplinary modified asynchronous Delphi consensus process of 531 stakeholders, three synchronous panel consensus meetings of 48 stakeholders, and subsequent pilot testing of the checklist. CHART includes 12 items and 39 subitems to promote transparent and comprehensive reporting of CHA studies.
Issues re: PRISMA-S and AI
What are the key issues regarding PRISMA-S and AI?
AI-powered literature searching: a growing gamut of AI-powered search tools, such as Elicit.com and Open Evidence;
Other AI tools using machine learning algorithms, natural language processing (NLP), and automated screening platforms (e.g., DistillerSR, Rayyan, ASReview) are increasingly used to automate aspects of search query development, with uneven results;
AI can suggest (some) relevant search terms and Boolean queries based on a research question;
Screening: tools using BERT-based models or active learning prioritize relevant studies, reducing manual screening time, and are increasingly common;
Deduplicating records: AI identifies and removes duplicate citations across databases; such features are built into many AI-powered search tools;
Extracting data: NLP extracts key information (e.g., study design, outcomes) from full-text articles, when and where available;
Searching grey literature: AI-powered tools scrape websites, repositories, or social media (e.g., X posts) for non-traditional sources.
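The deduplication step above can be sketched as a normalized-key match. This is a minimal sketch assuming simple dictionary records; the field names and normalization rules are illustrative and not taken from any particular tool.

```python
import re

def dedup_key(record):
    """Build a matching key: DOI when present, else normalized title + year."""
    doi = (record.get("doi") or "").strip().lower()
    if doi:
        return ("doi", doi)
    # Lowercase the title and strip punctuation and extra whitespace.
    title = re.sub(r"[^a-z0-9 ]", "", record.get("title", "").lower())
    title = re.sub(r"\s+", " ", title).strip()
    return ("title", title, record.get("year"))

def deduplicate(records):
    """Keep the first record seen for each key, mirroring cross-database dedup."""
    seen, unique = set(), []
    for rec in records:
        key = dedup_key(rec)
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"title": "AI in Health Care.", "year": 2024, "doi": "10.1000/xyz"},
    {"title": "AI in health care",  "year": 2024, "doi": "10.1000/XYZ "},  # same DOI, different case
    {"title": "Screening with NLP", "year": 2023, "doi": ""},
]
print(len(deduplicate(records)))  # 2 of 3 records survive
```

Real tools apply fuzzier matching (author initials, page ranges, journal abbreviations), but the principle of a documented, reproducible matching rule is what PRISMA-S asks reviewers to report.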
PRISMA-S Reporting Requirements for AI
PRISMA-S emphasizes transparency in reporting all aspects of the search process, including AI tool usage.
Relevant checklist items include:
Item 8 (Search methods): Describe AI tools used, specifying their role (e.g., “Elicit.com was used to find seed papers; ASReview, an active learning tool, was used to prioritize title/abstract screening”).
Item 9 (Search strategies): Report modifications to search strategies made by AI, such as automated term expansion or query optimization.
Item 13 (Software): Name the AI software or platform, including version and settings (e.g., “DistillerSR v2.35 with a logistic regression classifier”).
Item 15 (Peer review): Indicate if AI outputs (e.g., screened records) were validated by human reviewers or peer-reviewed for accuracy.
Item 16 (Documentation): Provide access to AI-generated outputs, such as training datasets or algorithm parameters, ideally in a public repository for reproducibility.
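One way to satisfy Items 13 and 16 together is to deposit a machine-readable record of the AI tool's settings alongside the review. The field names and values below are hypothetical placeholders; PRISMA-S specifies what to report, not a file format.

```python
import json

# Hypothetical documentation record for the AI components of a search
# (tool names and settings are illustrative, not a prescribed schema).
ai_documentation = {
    "tool": "DistillerSR",                    # Item 13: software name
    "version": "v2.35",                       # Item 13: version
    "classifier": "logistic regression",      # Item 13: settings
    "settings": {"recall_threshold": 0.90, "random_seed": 42},
    "role": "title/abstract screening prioritization",          # Item 8
    "validation": "10% of excluded records manually checked",   # Item 15
    "search_date": "2025-01-15",
}

# Serialize for deposit in a public repository (Item 16).
with open("ai_search_documentation.json", "w") as fh:
    json.dump(ai_documentation, fh, indent=2)
```

A dated, versioned file like this can be cited in the methods section and deposited with the search strategies themselves.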
Challenges in Reporting AI with PRISMA-S
There are several challenges at present in reporting AI usage with PRISMA-S:
Bias: AI search tools used to locate papers or to generate lists of free-text terms may introduce bias; researchers should note this in the limitations section of their manuscripts and describe how they managed bias-related problems;
Variable standards: AI tools vary widely (e.g., proprietary vs. open-source), and their “black box” nature makes it difficult to report exact processes transparently.
Validation: PRISMA-S encourages reporting how AI outputs were validated (e.g., human checks on AI-screened studies), since AI may introduce bias or miss relevant studies.
Reproducibility: AI tools rely on dynamic models or training data, which may change over time, complicating replication. PRISMA-S pushes for detailed documentation to mitigate this issue.
Ethics: Using AI without proper human oversight is not recommended; PRISMA-S indirectly addresses this by requiring clear reporting of methods. Searchers may also find it useful to report other ethical issues they encountered in using AI.
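The validation challenge above can be made concrete by measuring the AI screener's recall against a human-labeled sample. The record IDs and figures below are invented for illustration.

```python
def screening_recall(human_includes, ai_includes):
    """Fraction of human-judged relevant records that the AI also retained."""
    human, ai = set(human_includes), set(ai_includes)
    if not human:
        return 1.0  # nothing relevant to miss
    return len(human & ai) / len(human)

# Hypothetical record IDs: human reviewers judged five records relevant;
# the AI prioritization retained four of them (plus one the humans excluded).
human_relevant = ["r1", "r2", "r3", "r4", "r5"]
ai_retained = ["r1", "r2", "r3", "r5", "r9"]

recall = screening_recall(human_relevant, ai_retained)
print(f"recall = {recall:.2f}")  # recall = 0.80
```

Reporting the measured recall, the size of the labeled sample, and the threshold the team considered acceptable gives readers the validation detail PRISMA-S asks for.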
Practical example
A systematic review on AI in American healthcare might use an AI-powered search tool such as Open Evidence, plus Covidence with an NLP plugin to screen 10,000 records from MEDLINE and EMBASE.
In the methods section, researchers report (per PRISMA-S):
The bibliographic databases and AI-powered tools searched, with the dates of each search (Item 1).
AI tool’s role (e.g., “Covidence’s NLP module was used to rank abstracts by relevance, with a 90% recall threshold” – Item 8).
The full search strategy, including any AI-powered tools or tools used to aid in creating/modifying terms (Item 9).
Validation process (e.g., “10% of AI-excluded records were manually checked by two reviewers” – Item 15).
A PRISMA flow diagram showing AI-screened vs. human-screened records (Item 7).
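The 10% manual check in the example above can be drawn as a seeded random sample so that the audit itself is reproducible. The record IDs, sample fraction, and seed here are illustrative assumptions.

```python
import random

def audit_sample(excluded_ids, fraction=0.10, seed=2025):
    """Draw a reproducible sample of AI-excluded records for human double-checking."""
    rng = random.Random(seed)  # fixed seed so the same sample can be re-drawn later
    k = max(1, round(len(excluded_ids) * fraction))
    return sorted(rng.sample(excluded_ids, k))

# e.g., 9,000 records the AI classifier excluded at title/abstract stage
excluded = [f"rec{n:05d}" for n in range(9000)]
sample = audit_sample(excluded)
print(len(sample))  # 900 records go to the two human reviewers
```

Recording the seed and fraction in the methods section (and in any deposited documentation) lets others re-draw the exact audit sample, supporting the reproducibility aims of Items 15 and 16.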
Benefits of Using AI and PRISMA-S
Efficiency: AI reduces time and resource demands, allowing researchers to focus on analysis while PRISMA-S ensures rigor in reporting.
Transparency: PRISMA-S ensures AI’s role is clearly documented, enhancing trust in the review’s findings.
Scalability: AI enables handling massive datasets (e.g., big data from X or web searches), with PRISMA-S providing a framework to report these complex searches.
Reproducibility: Detailed PRISMA-S reporting of AI parameters supports future replication or updates to the review.
Commonly cited platforms include ASReview (open-source, active learning) and DistillerSR (proprietary, automated workflows). Some recent studies (e.g., indexed in PubMed) note that AI tool adoption is growing, but reporting often lacks PRISMA-S-level detail, especially for algorithm validation.
Recommendations to searchers
Choose AI Tools Wisely: aim to select tools validated for your field (e.g., ASReview for health sciences) and ensure they align with PRISMA-S.
Document Extensively: document AI tools, settings, training data, and validation steps to meet PRISMA-S requirements.
Use PRISMA-S Early: plan your search with the PRISMA-S checklist so AI-related details are captured from the start.
Validate AI Outputs: use AI with human oversight to minimize bias, reporting this process clearly.
Share Resources: deposit AI-generated search strategies or datasets in repositories like Zenodo, as encouraged by PRISMA-S.
Resources
PRISMA-S Checklist: Available at PRISMA website or via Rethlefsen et al. (2021) in Systematic Reviews.
Guidance: The Cochrane Handbook (2022 update) and EQUATOR Network provide tips on integrating AI into evidence synthesis while adhering to reporting standards.
Note: Please use your critical reading skills while reading entries. No warranties, implied or actual, are granted for any health or medical search or AI information obtained while using these pages. Check with your librarian for more contextual, accurate information.