
ChatGPT by OpenAI

From UBC Wiki

Background

  • OpenAI leads the general AI space, and other AI companies are also developing deep research tools and experimenting with AI-powered academic search in support of research. Perhaps you have faculty or students asking you to present these tools to classes.
  • Algorithms | Which companies are behind AI search tools? | Vector-based searching and embeddings
  • Note: Any discussion about AI geared towards librarians should start with a look at the ethical, legal, institutional and strategic concerns many librarians have about AI. Talk to your colleagues or a librarian about your concerns to make informed decisions.
  • Remember: This entry is intended to help librarians and other information professionals learn about AI. It is not, in itself, meant to be seen as promotion of AI. If anything, the goal is harms mitigation or harms reduction.

Introduction

ChatGPT is a generative AI chatbot released by OpenAI in 2022. In August 2025, OpenAI released GPT-5, the successor to GPT-4, which users had reported problems with. ChatGPT uses generative pre-trained transformers (GPTs) to generate text, speech, and images in response to prompts. The tool has played a major role in accelerating AI adoption and in driving investment and public attention in artificial intelligence (AI).

  • ChatGPT by OpenAI is widely considered the fastest-growing consumer software application in history, reaching 100 million users within two months of its launch in November 2022. By August 2025, the ChatGPT website ranked among the most-visited sites globally.
  • OpenAI, an artificial intelligence research and technology company, develops a range of AI systems including ChatGPT, which collectively receive roughly 2.5 billion user prompts each day. The organization was founded with the goal of advancing artificial general intelligence (AGI) while ensuring that its benefits are broadly shared and aligned with the interests of humanity.
  • ChatGPT by OpenAI has a range of capabilities, along with known issues: it can answer follow-up questions, write and debug computer programs, translate, and summarize text. Users interact with ChatGPT through text, audio, and image prompts. Since its launch, OpenAI has introduced new features, including plugins, web browsing capabilities, and image generation. The tool has generated extensive media hype and public debate about the future of knowledge work. See also Copyright in Canada.

ChatGPT 5.3 (2026 update)

Generative AI systems such as ChatGPT have shown measurable improvements in reliability, reasoning, and transparency. Newer models are better at following instructions, summarizing complex information, and identifying uncertainty in responses.

Two widely noted improvements to ChatGPT in 2026 include:

  • Reduced hallucinations and improved accuracy. Newer models such as GPT-5.3 Codex significantly reduced fabricated or incorrect answers, with internal evaluations reporting about a 26–27% drop in hallucinations compared with earlier versions.
  • Stronger reasoning and ability to handle complex tasks. Recent versions (e.g., GPT-5 and later updates) improved multi-step reasoning, allowing the system to follow complicated instructions, analyze long documents, and maintain context more reliably during extended conversations.

Advances in model training and evaluation have reduced hallucinations and improved the overall quality of generated text, code, and analysis. Many systems now incorporate safeguards such as citation suggestions, structured reasoning, and tools for verifying information against external sources. Despite these improvements, generative AI remains imperfect and can still produce incorrect or misleading content. As a result, researchers, students, and professionals are encouraged to treat AI-generated information as a starting point rather than a definitive source, and to verify important claims using authoritative scholarly and library-based resources.

Chat GPT Is Eating the World (blog)

For a critical perspective on ChatGPT and generative AI, see Chat GPT Is Eating the World.

  • A specialized blog that monitors the evolving legal landscape surrounding artificial intelligence, particularly the growing number of copyright disputes involving technologies such as ChatGPT, OpenAI models, and image generators like DALL-E. It focuses on the intersection of AI innovation and law, examining issues such as the use of copyrighted material in AI training data, fair use, authorship rights, and the broader ethical implications of AI-generated content. The site also maintains a comprehensive tracker of copyright litigation against AI companies (e.g., a "Master List of Lawsuits v. AI" with 74 global cases and 51 in the US as of late 2025).

Model: GPT-3.5 and GPT-4

ChatGPT was launched as a conversational AI agent in 2022 and was fine-tuned for dialogue using reinforcement learning from human feedback (RLHF). It offered text-based responses, excelling in general knowledge, writing, and basic reasoning, but was limited to text inputs and a 4,000-token context window (roughly 3,000–4,000 words). It was known for fast responses but was less nuanced than later models.

  • Availability: Free to all users, with no message limits initially.
  • Use Cases: General Q&A, writing assistance, basic code generation.
  • ChatGPT-4 is a more advanced tool than ChatGPT-3.5, offering improvements such as greater accuracy, creativity, nuanced understanding, and safety training. Both GPT-3.5 and GPT-4 use a transformer-based architecture, a neural network design built to handle sequential data.
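GPT-3.5 and GPT-4 are not open source, but the transformer's core operation, scaled dot-product self-attention over a token sequence, can be sketched in a few lines. This is a minimal illustrative sketch, not anything from OpenAI's models; the matrix sizes and random values below are arbitrary toy choices:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a token sequence.

    X:          (seq_len, d_model) matrix of token embeddings.
    Wq, Wk, Wv: learned projection matrices, shape (d_model, d_k).
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # pairwise token-to-token relevance
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V                   # weighted mix of value vectors

# Toy example: 4 tokens, 8-dimensional embeddings.
rng = np.random.default_rng(0)
seq_len, d_model, d_k = 4, 8, 8
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one updated vector per token
```

Because every token attends to every other token, attention lets the model use context from anywhere in the sequence; production models stack many such layers (with multiple heads and learned weights) rather than this single pass.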

GPT-5 (2025)

GPT-5 <https://openai.com/index/introducing-gpt-5/> is OpenAI's advanced reasoning model, released in August 2025. GPT-5 is available on the free version of the ChatGPT app, giving consumers access to it for health-related queries and potentially increasing its impact on patient education.

OpenAI reports that GPT-5 excels in healthcare-related topics, citing several key advancements: 1) enhanced reasoning and factuality; and 2) expert-level answers, especially for open-ended health-related questions. OpenAI has also reduced "hallucinations" (inaccurate outputs), making the model more reliable. GPT-5 outperformed GPT-4, GPT-4o, and GPT-3 on an evaluation framework developed with 250 physicians from 60 countries, which assessed safety, accuracy, and appropriateness.

What are ChatGPT’s Deep Research and agentic AI features?

"...generative AI tools (e.g. ChatGPT) source data from publicly available internet content... which raises legitimate concerns about ...integrity of AI in writing medical manuscripts [re:] plagiarism, fabricated or false information (hallucinations) and fabricated references..." — Cheng et al, 2025.
  • ChatGPT’s Deep Research, introduced by OpenAI in February 2025, is an AI-powered feature designed for complex, multi-step research tasks. Unlike standard ChatGPT responses, which are quick and based on its training data or limited web queries, Deep Research operates as an autonomous research agent. It uses a specialized version of OpenAI’s upcoming o3 model, optimized for web browsing and data analysis, to scour hundreds of online sources (text, images, PDFs) and produce detailed, cited reports in 5–30 minutes. It’s marketed as a tool that can rival a human research analyst, targeting professionals in fields like finance, science, policy, and academia, as well as consumers tackling complex decisions (e.g., choosing a laptop or researching market trends).
  • In July 2025, OpenAI released "ChatGPT agent", an AI agent that performs multi-step tasks. It is available to users on the Pro, Plus, and Team plans, with later availability through Enterprise and Education plans. See https://openai.com/index/introducing-chatgpt-agent/
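The "agent" idea above can be made concrete with a toy loop. Everything here is hypothetical: the "model" is a scripted stand-in and the "tools" are fakes. It only illustrates the plan-act-observe pattern that agentic features such as Deep Research are described as using, where the model repeatedly chooses an action, a tool executes it, and the observation is fed back until the model decides it can answer:

```python
def scripted_model(history):
    # Stand-in for the model's policy: search first, then read, then answer.
    steps = len([h for h in history if h[0] == "observation"])
    if steps == 0:
        return ("search", "fastest-growing consumer apps")
    if steps == 1:
        return ("read", "result-1")
    return ("answer", "ChatGPT reached 100M users in two months.")

def run_tool(action, arg):
    # Fake tools; a real agent would browse the web, open PDFs, run code, etc.
    tools = {
        "search": lambda q: f"3 results for '{q}'",
        "read": lambda doc: f"contents of {doc}",
    }
    return tools[action](arg)

def agent(model, max_steps=10):
    # Plan-act-observe loop: stop when the model emits an answer,
    # or after max_steps to avoid running forever.
    history = []
    for _ in range(max_steps):
        action, arg = model(history)
        if action == "answer":
            return arg, history
        history.append(("observation", run_tool(action, arg)))
    return None, history

answer, trace = agent(scripted_model)
print(answer)
```

The loop, not any single response, is what distinguishes an agent from a plain chatbot: intermediate observations accumulate in the history, so later decisions can depend on what earlier tool calls found.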

Librarian criticism

"...The idea that we should outsource academic authorship to LLMs rests on the assumption that writing is (only) a mechanical, predictable or reductive process which, with the right prompts, can be replicated with ease." — Masters, 2025.

ChatGPT, and indeed most generative AI, has been roundly criticized for its limitations and potential for unethical uses, particularly its tendency to "hallucinate" references while producing otherwise intelligible prose. In some cases, ChatGPT produces text that appears plausible and human-like, yet contains incorrect claims or invented references. For this reason, users are generally advised to verify any information generated by chatbots against reliable sources.

In August 2025, the release of GPT-5 showed improvements in early testing, including fewer hallucinations and fewer low-quality responses. However, such errors have not been eliminated and continue to require careful verification by users. Faculty, staff, and students at universities are encouraged to consult academic librarians when locating credible and authoritative sources to support their research and coursework. Scholars have also raised concerns that biases present in training data can surface in AI-generated responses, reinforcing misinformation or propagating misleading narratives. Claims produced by AI tools should be checked against trusted library resources and other authoritative information sources.

Within academic settings, the use of generative AI has raised additional concerns, including academic dishonesty, the spread of misinformation, and the potential creation of malicious or harmful code. The use of copyrighted material in training AI models has also generated criticism and ongoing legal disputes involving publishers, authors, and academic institutions. These issues have led some libraries, workplaces, and educational institutions to restrict the use of certain AI tools and have prompted broader calls for clearer regulation and oversight of AI.

Note: In summary, users of ChatGPT and similar tools should approach AI-generated information with caution and critical judgment. Consulting librarians, verifying claims through scholarly and library-based resources, and cross-checking information through reliable public sources remain essential practices for responsible research.


References

  • Article describes how systematic reviews, long the gold standard for evaluating scientific evidence in medicine and policy, are slow and labor intensive, often taking over a year to complete. AI tools could dramatically speed up these reviews by automating tasks like screening studies and summarizing findings, potentially making evidence synthesis faster and more up-to-date. However, experts warn that many AI systems lack transparency, reproducibility, and access to complete databases, risking poor or biased results. New guidance from major evidence-synthesis organizations emphasizes cautious, responsible use of AI to preserve trustworthiness while reaping efficiency gains.

Disclaimer

  • Note: Please use your critical reading skills while reading entries. No warranties, implied or actual, are granted for any health or medical search or AI information obtained while using these pages. Check with your librarian for more contextual, accurate information.