Note: OpenAI leads the general AI space, but other AI companies are developing deep research tools and experimenting with AI-powered searching in support of research. Perhaps you have faculty or students asking you to present these tools to classes.
Elicit.com, developed by Ought, is an AI-powered research assistant that aims to "transform [our] interactions with academic literature". It's powered by Semantic Scholar and uses machine learning models such as GPT.
Elicit helps users find relevant papers, summarize findings, and extract key information. A key feature is its claim to prioritize and present the most relevant literature based on a user’s research questions and interests. The workflow is simple enough: users locate papers, extract data from PDFs, and generate concept lists while receiving detailed source information, including SCImago journal rankings, citation counts, and DOI links (Kung 2023). The Unpaywall plugin offers open access to PDFs, enhancing research accessibility (see the sketch below). Elicit.com uses deep research technology to automate and accelerate academic research. While Elicit is tailored to domains such as biomedicine and machine learning, it may produce around 10% inaccuracies that require user verification.
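To make the Unpaywall point concrete, here is a minimal sketch of an open-access lookup against the public Unpaywall REST API (v2). This is not Elicit's actual integration; the DOI and contact email below are placeholders.

```python
# Illustrative only: find an open-access PDF for a DOI via the public
# Unpaywall REST API. Not Elicit's plugin; DOI and email are placeholders.
import requests

doi = "10.1038/nature12373"   # placeholder example DOI
email = "you@example.edu"     # Unpaywall asks callers to identify themselves

resp = requests.get(f"https://api.unpaywall.org/v2/{doi}",
                    params={"email": email}, timeout=30)
resp.raise_for_status()
record = resp.json()

best = record.get("best_oa_location") or {}
print("Open access:", record.get("is_oa"))
print("PDF URL:", best.get("url_for_pdf"))
```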
Elicit.com regularly sends out updates about new features; with the most recently enabled features for full-text screening, it is moving toward becoming a competitor to Covidence.
"The Chrome extension uses existing institutional access to pull full texts directly into your review. Even without institutional access, it retrieves roughly 30% more papers than Elicit alone. For the rest, you can cleanly upload PDFs. What used to take weeks of chasing down PDFs across publisher sites now happens in the background in minutes while you do other work. Download our browser extension to get started."
Elicit.com launched its "Start a Systematic Review" feature in February 2025, aimed at helping researchers begin systematic reviews.
Elicit.com claims to automate key stages of systematic reviews, such as searching, screening, data extraction, and report draft generation, and to reduce the time needed for SRs by up to 80% without compromising accuracy.
Elicit applies semantic search across the 125+ million papers in Semantic Scholar, then suggests screening criteria, extracts quantitative and qualitative data (even from tables), and provides inline supporting quotes. The SR feature on Elicit.com is currently available only to Pro, Team, and Enterprise users. It aims to automate literature searching (for up to 500 papers), screening, and data extraction.
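For readers curious what searching across Semantic Scholar can look like in practice, below is a hedged sketch against the public Semantic Scholar Graph API, which exposes the same corpus Elicit builds on. This is not Elicit's internal pipeline, and the query string is an arbitrary example; plain keyword search via this API is much simpler than Elicit's proprietary semantic ranking.

```python
# Illustrative only: keyword search over the Semantic Scholar Graph API
# (the corpus behind Elicit). Elicit's own semantic ranking is proprietary.
import requests

resp = requests.get(
    "https://api.semanticscholar.org/graph/v1/paper/search",
    params={
        "query": "equine colic microbiome",            # example topic
        "fields": "title,year,citationCount,externalIds",
        "limit": 5,
    },
    timeout=30,
)
resp.raise_for_status()

for paper in resp.json().get("data", []):
    doi = (paper.get("externalIds") or {}).get("DOI", "n/a")
    print(f"{paper.get('year')}  {paper.get('citationCount', 0):>5} cites  "
          f"{paper.get('title')}  DOI: {doi}")
```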
Bottom line: for health sciences librarians, the new tool or feature in Elicit might support their work with health professionals. However, its underlying AI technologies raise concerns for those interested in scientific accuracy, transparency, and rigour in performing reviews. Note that the information provided to you on this page can change, so please check each tool's website for the most current information (or discuss with a librarian). I like to elucidate the distinction between searching for sources and searching for answers; LLMs provide the second while hiding the first.
Presentation by Elicit.com
Note: This presentation was selected by a librarian because of the presenter's understanding of the product. As this is a marketing video and tutorial, some of its claims should be tested and verified.
Many (if not all) of the AI-powered search tools, such as Elicit.com and Undermind.ai, use retrieval-augmented generation (RAG) and deep research techniques to deliver results. RAG combines the strengths of retrieval-based and generative AI models: the system first retrieves information from a large dataset or knowledge base, then uses that retrieved material to generate a response or output. Essentially, the RAG model augments the generation process with additional context pulled from relevant sources.
RAG enhances large language models (LLMs) by integrating them with document retrieval systems, which makes them less likely to hallucinate (though it does not eliminate the problem). Given a query, a document retriever fetches the most relevant documents. This is usually done by encoding the query and the documents into vectors, then finding the documents whose vectors (usually stored in a vector database) are most similar to the query's vector. The LLM then generates an output based on both the query and the context drawn from the retrieved documents.
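As a minimal illustration of the retrieve-then-generate loop just described, the toy sketch below encodes a query and a handful of invented documents, ranks the documents by cosine similarity, and assembles an augmented prompt. The bag-of-words "embedding" and the prompt template are deliberate simplifications for teaching purposes, not how Elicit or any production RAG system actually works (real systems use neural encoders, vector databases, and an LLM call at the end).

```python
# Toy sketch of retrieval-augmented generation (RAG).
# Assumption: a crude bag-of-words "embedding" stands in for a real
# neural encoder; the documents and query are invented examples.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Bag-of-words vector; production systems use neural encoders."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

documents = [
    "Equine microbiome composition shifts during colic episodes.",
    "Pseudoexfoliation syndrome is associated with hearing loss.",
    "Systematic reviews require explicit, reproducible search strategies.",
]

# Step 1 (retrieval): rank documents by similarity to the query vector.
query = "gut microbiome changes in horses with colic"
q_vec = embed(query)
ranked = sorted(documents, key=lambda d: cosine(q_vec, embed(d)), reverse=True)
top_k = ranked[:2]

# Step 2 (augmented generation): build a prompt that grounds the model
# in the retrieved context. In a real system, this goes to an LLM.
prompt = ("Answer using ONLY the context below.\n"
          "Context:\n" + "\n".join(f"- {d}" for d in top_k)
          + f"\nQuestion: {query}")
print(prompt)
```

Retrieval narrows what the model sees, which is why RAG reduces (but cannot fully prevent) hallucination: the model can still misread or over-generalize from the retrieved passages.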
Park (2025) examines seven tools, including Elicit.com, focusing mostly on search features but touching on other elements as well;
Bernard (2025), a case study comparing Elicit against a human-only umbrella review;
Lim et al. (2025) mention tools, including Elicit and Consensus, but only in the surgical context;
Dukic (2025) evaluates SciSpace vs. Elicit in assisting with reviews;
Seth (2025): clinicians conducted a three-way comparison of AI search engines (Elicit, Consensus, ChatGPT) versus manual search for literature retrieval, focusing on osteoarthritis;
Spillias (undated) tested GPT-4 Turbo and Elicit;
Williamson (2025), a column evaluating SciSpace, Semantic Scholar, Elicit, Google Scholar, Research Rabbit, PubMed, and CAB Abstracts. A veterinary medicine topic was chosen to test how successfully AI tools search for academic sources: the authors searched each of the seven tools for scholarly literature on colic AND horses AND microbiome;
Bolanos (2024), a primer that includes Elicit and other tools;
Meliante (2024): clinicians evaluated Scite and Elicit for searching articles on “Glaucoma, pseudoexfoliation and Hearing Loss”, comparing the results with a human-conducted, PRISMA-reported review.
Librarian criticism
Elicit.com has been marketed as a kind of turn-key tool for reviews, specifically systematic reviews, though it may be backing away from that strategy. Although it performs searching and screening faster than some other tools, such as Undermind.ai, it is unclear what happens inside the tool's black box of algorithms and LLMs. Generally speaking, Elicit.com performs synthesis of the literature satisfactorily, but it should not be used as the only tool in conducting an SR. Perhaps it can be used for its discussion points?
Elicit.com allows researchers to locate seed papers quickly and is easy to use, with no downloadable client required. However, caution is recommended: there is potential for scientific malpractice when researchers use tools such as Elicit.com without understanding their limitations.
In summary: Elicit is best used for early-stage literature analysis and a priori seed-paper finding, especially for scoping out a topic. It does not generate full reviews or papers per se, though its ability to extract and organize evidence across papers makes it useful for some topics. Speak to a librarian who is well versed in AI-powered searching before using this product.
"... This research evaluates the performance of platforms such as SciSpace, Elicit, ResearchRabbit, Scite.ai, Consensus, Claude.ai, ChatGPT, Google Gemini, Perplexity, and Microsoft Co-Pilot across the key stages of SLRs—planning, conducting, and reporting. While these tools significantly enhance workflow efficiency and accuracy, challenges remain, including variability in result quality, limited access to advanced features in free-tier versions, and the necessity for human oversight to validate outputs..."
Note: Please use your critical reading skills while reading entries. No warranties, implied or actual, are granted for any health or medical search or AI information obtained while using these pages. Check with your librarian for more contextual, accurate information.