
Generative AI - What is it?

From UBC Wiki
Source: AI Graphic Preisler, 2024, pg.6.


What is Generative AI?

"...generative AI tools (e.g. ChatGPT) source data from publicly available internet content... which raises legitimate concerns about ...integrity of AI in writing medical manuscripts [re:] plagiarism, fabricated or false information (hallucinations) and fabricated references..." — Cheng et al., 2025.

Generative Artificial Intelligence (Generative AI or "GenAI") uses advanced algorithms to create ("generate") new content, typically delivered through a chatbot. GenAI outputs include text, images, videos, audio, music, and computer code, to name a few. GenAI produces content by learning patterns of syntax, semantics (meaning), logic, etc. from an existing corpus of data. For ChatGPT by OpenAI, the corpus includes open material from the web as well as copyrighted news stories, books, and articles used without permission, which is why many librarians oppose it.

GenAI developers have argued that such training is protected under fair use, while copyright holders have argued that it infringes their rights. Numerous lawsuits are underway globally over the use of copyrighted intellectual property in training data. AI, especially GenAI, will challenge copyright laws internationally, and librarians will want to uphold their principles in protecting intellectual property until the question is decided by the courts. My suspicion is that these cases will proceed similarly to the Google Books litigation; however, we will all want to monitor developments closely.

Model: GPT-5 - released 7 August 2025

GPT-5 <https://openai.com/index/introducing-gpt-5/> is OpenAI's advanced reasoning model, released in August 2025. GPT-5 is available in the free version of the ChatGPT app, giving consumers access for health-related queries and potentially increasing its impact on patient education.

OpenAI says GPT-5 excels in healthcare due to several key advancements: enhanced reasoning, improved factuality, and expert-level answers, especially for open-ended health-related questions. OpenAI has reduced "hallucinations" (inaccurate outputs), making the model more reliable. It performed better than GPT-4, GPT-4o, and GPT-3 on an evaluation framework developed with 250 physicians from 60 countries, assessing safety, accuracy, and appropriateness.

History and background

  • GenAI's roots are linked to artificial intelligence (AI) as a whole. AI systems were created to perform tasks that required human intelligence. In 1950, Alan Turing proposed the Turing Test, which measured whether a machine could "think" like a human. Turing's idea became a foundation for AI research, and helped to shape what became known as symbolic AI, or “Good Old-Fashioned AI” (GOFAI), where machines followed strict, rules-based logic to make recommendations and decisions. While early AI systems were novel, they lacked flexibility and learning ability.
  • For years, much AI could only do what it was explicitly told to do. It followed instructions, but wasn't able to learn from experience or improve over time. That began to change in the 1980s with the introduction of neural networks. Inspired by the human brain, neural networks use artificial “neurons” that pass data through layers to recognize patterns. One of the first models was the perceptron, which could complete basic tasks such as recognizing letters or shapes. Due to its limitations, interest in neural networks faded, but that changed with the development of the backpropagation algorithm, which helped AI systems learn from their mistakes. Nobel Prize winner and AI academic Geoffrey Hinton played a big role in improving these methods. Combined with better computer hardware, this allowed deep learning—neural networks with many layers—to become more powerful and capable of handling complex tasks like speech and image recognition.
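To make the perceptron idea concrete, here is a minimal sketch in Python. It is a toy illustration (not taken from any of the historical sources above): a single artificial neuron learning the logical AND function with the classic perceptron update rule.

```python
# A minimal perceptron, the early neural model described above.
# Toy sketch only: it learns logical AND from four labelled examples.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Perceptron learning rule: nudge weights toward misclassified inputs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - pred          # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
for (x1, x2), target in AND:
    pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
    print((x1, x2), "->", pred)   # matches the AND truth table
```

Because AND is linearly separable, the weights settle after a few passes. This is exactly the kind of simple pattern the early perceptron could handle, and problems it could not solve (such as XOR) stalled interest until backpropagation arrived.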

In the 21st century

  • GenAI started to become more of a reality in the early 2000s. Earlier AI could spot patterns, but couldn’t generate anything new. Early generative models, such as Hidden Markov Models (HMMs) and Gaussian Mixture Models (GMMs), were used for speech recognition and simple image generation, but had limitations in producing diverse or detailed content. A major turning point came in 2014 with Generative Adversarial Networks (GANs). These systems use two networks: one generates content, and the other checks how realistic it is. The method helped AI produce much more convincing and lifelike images, videos, and more.
  • In 2017, the introduction of transformer models (see Vaswani) changed the way AI handles language by using something called "self-attention" to process entire sentences at once, rather than word by word. This allowed AI to better understand context and relationships between words, making responses sound more natural. GPT-3 (Generative Pretrained Transformer 3), a powerful AI model that can take a simple prompt and generate detailed, relevant responses, is the best-known example. GPT-3, 4, and 5 (2025) show how far generative AI has come in recent years.
  • In 2025, GenAI models are more advanced. DALL-E can take a description such as “a surfing dog” and create a new image. The ability to combine language and images shows how genAI will push copyright limits, and the boundaries of creativity. GenAI is reshaping how we perceive art, human expression and creativity. Not everyone is happy about this turn of events.
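The learn-patterns-then-generate loop described above can be illustrated with a deliberately simple statistical model. The sketch below is a plain Markov chain, an even older and cruder technique than the HMMs, GANs, and transformers discussed here, but it shows the core idea: learn which words follow which in a corpus, then sample new sequences. The corpus and function names are invented for illustration.

```python
# Toy statistical text generator (pre-neural spirit): learn word-to-word
# transition patterns from a tiny corpus, then sample a new sequence.
# Modern LLMs are vastly more sophisticated, but the idea is the same.
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=8, seed=0):
    """Sample a new sequence by repeatedly picking an observed successor."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = chain.get(out[-1])
        if not options:   # dead end: this word was never followed by anything
            break
        out.append(rng.choice(options))
    return " ".join(out)

# Tiny invented corpus; real models train on billions of words.
corpus = ("generative models learn patterns from data and "
          "generative models produce new content from patterns")
chain = build_chain(corpus)
sample = generate(chain, "generative")
print(sample)   # a new word sequence stitched from learned transitions
```

The generated sentence is "new" in the sense that it may never appear verbatim in the corpus, yet every transition in it was learned from the corpus, which is the nub of the copyright debate discussed elsewhere on this page.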

How does it generate content?

  • GenAI focuses on creating text, images, music, and video, and learns by analyzing data in books, reports, images, and audio files. By finding patterns in data, AI can generate original content. At the core of most genAI is a deep learning model known as a neural network; neural networks are inspired by the way human brains work. Just as our brains use neurons to pass signals and process information, neural networks use artificial "neurons" that pass data through layers, and as data moves through the system, the AI learns more complex patterns and structures. When a neural network is trained to work with images, it begins by recognizing simple shapes and edges. As it processes more data and moves through layers, it picks up more detailed features such as facial structures or textures. This allows AI to generate outputs that seem highly realistic, whether it’s a sentence that sounds human or an image that looks like it was taken by a camera.
  • The development of GenAI is possible through improvements in computing power, hardware, and advanced GPUs developed by companies such as NVIDIA. These technologies allow systems to process data quickly and efficiently, which is essential for generating high-quality content. But more computational power means using more electricity, which impacts the climate and environment.
  • Key to genAI development is the transformer model introduced by Vaswani et al. in the paper "Attention Is All You Need". Transformers use self-attention to work out which parts of an input or prompt matter most in each context. This helps the AI understand the prompt and give a clearer, more concise response. Generative adversarial networks, or "GANs", were introduced in 2014; a GAN pairs two neural networks so that one generates new examples while the other learns to distinguish real content from fake.
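The self-attention step can be sketched in a few lines of NumPy. This is a simplified illustration, not production transformer code: it omits the learned query/key/value projection matrices, multiple attention heads, and masking, and the toy input vectors are invented.

```python
# Minimal self-attention sketch (after Vaswani et al.), in plain NumPy.
import numpy as np

def self_attention(X):
    """Each row of X is one token's vector; every token attends to all tokens."""
    d = X.shape[1]
    # Identity "projections" for simplicity; real models learn Wq, Wk, Wv.
    Q, K, V = X, X, X
    scores = Q @ K.T / np.sqrt(d)                  # token-to-token relevance
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over each row
    return weights @ V                             # context-aware mix per token

X = np.array([[1.0, 0.0],   # three toy "token" vectors
              [0.0, 1.0],
              [1.0, 1.0]])
out = self_attention(X)
print(out.shape)   # (3, 2): one context-aware vector per token
```

Each output row blends information from every input token, weighted by relevance, which is how transformers capture context and relationships across an entire sentence at once.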

Ethical issues

  • GenAI raises serious ethical concerns around intellectual property. Because genAI is trained on existing content, it is unclear whether its output is original and, if the courts deem it original, who owns it. If AI generates art that copies an artist, for example, how will the artist be compensated? Or the original artist’s estate? Today’s copyright laws are not yet able to address these scenarios, but recent court cases are instructive. Some critics argue that AI-generated content is not original, since genAI pulls ideas from existing works, and so should not receive the same legal protections as human-created content; the law is unclear. Advocates believe that giving legal rights to AI-generated work could encourage more innovation and investment.
  • There are big questions about the impact of generative AI on jobs and creativity. As AI gets better at producing high-quality content, there’s growing concern that it could replace human workers in creative fields. While AI can be a helpful tool for boosting creativity, it poses risks to professionals producing original work. These issues lead to a larger ethical question: How do we balance innovation with responsibility? If AI is going to reshape the future of work and creativity, we need to make sure that its benefits are shared fairly. That means thinking about who controls the technology, how it's used, and how we protect the people it might affect most.

Implications

Biased Outputs

GenAI is capable of producing biased outputs, stemming from a number of issues. Systemic biases present within institutions, culture, history, and/or society will affect the training data. Biases are then reflected as statistical and computational biases in the model. Inherent human biases exist as well, which influence the training data, the design of the model, and the use of its output.
Biases include how data was acquired (e.g., data scraped from a website is not representative of all humans, or even of other Internet users) and systemic and historical biases, such as a correlation between race and location. Design teams should have diverse members to reduce biases during decision-making processes. During the design phase, analyses should be conducted to identify sources of bias, along with plans to mitigate them. These should be continually evaluated to ensure the mitigation strategies work well. Any model should be monitored to ensure minimal bias. If necessary, a model should be retrained or decommissioned.
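One concrete analysis a bias audit of the kind described above might include is comparing a model's positive-outcome rates across groups. The sketch below is illustrative only: the data is synthetic, the functions are invented, and the threshold is loosely modeled on the "four-fifths rule" used in US employment-discrimination guidance.

```python
# Sketch of one simple bias-audit check: compare a model's positive-outcome
# rate across groups. Synthetic data; not a real audit standard.

def selection_rates(records):
    """records: list of (group, outcome) pairs, where outcome 1 = positive."""
    totals, positives = {}, {}
    for group, outcome in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of lowest to highest selection rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Made-up outcomes: group A favoured 8/10 times, group B only 5/10.
records = [("A", 1)] * 8 + [("A", 0)] * 2 + [("B", 1)] * 5 + [("B", 0)] * 5
rates = selection_rates(records)
print(rates)                    # {'A': 0.8, 'B': 0.5}
print(disparate_impact(rates))  # ~0.625, below the common "80% rule"
```

A single ratio like this cannot certify fairness, which is why the continual monitoring and independent review described above matter.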

Copyright

GenAI models are usually trained on large datasets that include content from websites, social media, Wikipedia, and discussion hubs such as Reddit. Because this training data includes copyrighted material (both text and images), the debate over copyright infringement and AI is ongoing. Should copyrighted material be allowed in training datasets for Generative AI? The question is working its way through the courts, but from the perspective of the copyright owners, such use violates international and national copyright legislation.

Job displacement

GenAI will contribute to unemployment and labour abuses. AI automates certain tasks or activities, displacing human workers. Consider marketing teams whose work includes making email templates, drafting campaign emails, and reporting on open and click rates: if AI creates that content, those building the email campaigns will be displaced. Automating customer service will mean layoffs for call-centre workers; genAI can also write code quickly and with minimal errors, displacing those skilled workers as well. How will genAI affect professions such as law and medicine? Libraries and librarians?

Truthfulness / Accuracy

GenAI often gives uncertain responses to questions, using phrases such as "may be" or "not sure about". Although genAI makes use of machine learning and aims for accuracy, it falls short. AI can provide incorrect answers to a range of health questions, and there are instances where it has failed miserably at solving certain mathematics problems. Even in games such as chess, where computers are known to play better than humans, AI chatbots will make irregular moves that do not make sense. There are real risks in spreading false information via chatbots.

Social engineering

GenAI can be misused to create voices and personas that replicate real human beings. Scammers use this to send phishing emails or make phone calls. For example, an attacker can claim to be calling from an organization's IT department and eventually convince a user to share a login ID and password, thereby gaining access to the user's system. Generative AI can thus enable social engineering attacks.

How is genAI used in education, medicine and the arts?

  • GenAI changes how people work and how information is shared in different fields. In education, it is used to design personalized worksheets, flashcards, diagrams, games and simulations. Tools help teachers create engaging lessons and allow students to learn in a way that suits their needs. Game-based learning uses both digital and physical games as a way to improve classroom participation and make learning more interactive and immersive.
  • GenAI plays a role in improving patient care and medical training. It can be used to generate medical diagrams that compare a patient’s current condition to expected outcomes. AI-generated videos and simulations are used to train medical professionals. One technique called diffusion modeling uses AI to denoise and enhance medical images. Researchers tested this method on datasets like chest X-rays, MRIs, and CT scans, and found it outperformed previous methods.
  • The automotive industry uses GenAI to create technical diagrams, generate concept car designs, and to solve engineering problems. These systems can suggest innovative design options by working within specific constraints, offering ideas that may not have been considered. AI-generated design is used to imagine future versions of iPhones, gaming consoles, and other upcoming technologies.
  • In construction, GenAI is used to produce digital blueprints and explore structural concepts. It can also generate safety materials, such as training videos and posters, to improve communication on job sites and help prevent accidents.
  • Many content creators now use AI-generated images, voiceovers, and music on platforms such as TikTok and Instagram. The rise of deepfakes—videos that appear real but are artificially created—has led to misinformation and ethical debates. These manipulated videos can spread false information quickly and convincingly.

How accessible is Generative AI?

GenAI is widely accessible: some platforms offer free tools, while others charge for advanced features. You can run many of these tools on basic devices, including phones.

Case studies in education, and the arts

GenAI is shaping modern industries and education systems by introducing tools that automate complex tasks. In pharmaceutical and manufacturing industries, GenAI is used to predict drug behaviours and improve product designs, reducing development time and costs. By automating content generation, data analysis, and customer service, companies are applying these tools to be competitive in evolving markets.

In education

In education, AI is more prevalent than ever, especially since the shift to online learning during COVID-19. GenAI is widely adopted by students for tasks ranging from information retrieval to content creation. A study involving 586 student users revealed that while AI tools are helpful, their usage correlates with several challenges, such as decreased academic integrity and over-reliance on AI-generated content. Factors such as perceived stress and educational risk were found to significantly influence students' perceptions and use of AI in learning. Despite the challenges, AI offers personalization and intellectual collaboration for some. Generative models enable tailored learning by adjusting content to meet individual needs and comprehension levels. This supports more effective learning outcomes and broadens access to educational resources. Concerns remain over AI's inconsistencies, privacy risks, and potential to encourage passive learning habits among students.

In the arts

In the arts, GenAI has inspired new modes of expression. From generating art and music to personalized marketing, it can enhance creativity and broaden content creation. Its use in fashion, advertising, and entertainment reshapes how professionals design, produce, and distribute content. As artists and brands adopt these tools, they unlock new ways to engage audiences and craft immersive, interactive experiences. While GenAI presents opportunities for growth and efficiency, it requires a careful approach to ethics, accuracy, and educational integrity. Future development must strike a balance between leveraging AI's capabilities and preserving human judgment and creativity to ensure responsible and sustainable use.

In diverse industries

GenAI is being adopted across diverse industries to enhance user experiences, streamline operations, and deliver services. Wayfair introduced Decorify, an AI-powered tool allowing customers to upload photos of their living spaces and receive photorealistic interior design suggestions based on their style preferences. The tool recommends products from Wayfair’s catalog, simplifying redecorating for users. Mass General Brigham (MGB), a major healthcare provider in Massachusetts, piloted the use of Large Language Models (LLMs) to support physicians in responding to patient queries. Initial testing revealed that 82% of AI-generated responses were safe to send without misinformation, and over half required no further editing, demonstrating promising results for medical communication.

Salesforce launched Einstein GPT in early 2023. This multi-use generative AI tool is customized for different business departments—marketing, sales, and customer service—providing capabilities like content creation, email generation, and knowledge article summarization. It is built in collaboration with OpenAI and integrated directly into Salesforce’s CRM ecosystem, making it accessible and efficient across business verticals.

Creative and brand marketing teams have found innovative applications. Coca-Cola launched its limited-edition AI-inspired drink Y3000, crafted by blending customer feedback with AI to design the flavor and visual concept of a beverage representing the year 3000. The campaign included an interactive “AI Cam” experience accessible through QR codes on the product packaging. Meanwhile, Adidas adopted AI for internal efficiency by implementing a conversational knowledge management system that allows engineers to query the company’s vast information base. This tool has helped offload administrative work and accelerated innovation in their large-scale AI projects.

Recommendations

Ethical challenges such as misinformation, bias, job displacement, and misuse of personal data in genAI are highly problematic for academics and librarians. Developers should follow ethical frameworks that guide how generative AI is built. Transparency, accountability, and public involvement are essential to ensure these systems are developed in a way that benefits everyone, not just multinationals. A starting point is defining the purpose and intent behind AI systems and asking: what problems are we trying to solve? Developers should have a clear understanding of what their AI system is meant to do, and how it fits within social and ethical values. Ethical frameworks for academic use are critical, if genAI is to be used at all.

Transparency
Transparency means being fully open about how an AI system works, so that academics and librarians can trust it. Developers should clearly explain how the model was trained, what data was used, and how it makes decisions. When AI outputs can affect people in critical areas like healthcare or law, tools should be developed to help users and regulators understand how these systems reach their conclusions. No one with integrity will use a system otherwise.
Accountability
Clear lines of responsibility should be in place for when AI systems produce harmful or misleading results. We need standards for accountability, and not just legal consequences: accountability also includes moral responsibility, ensuring the system doesn’t spread bias or harmful behaviours.
Data privacy and consent
Personal or sensitive data used to train AI models should not be collected without informed consent, and should be protected with methods like anonymization. Developers must ensure training data does not violate rights. Systems that generate content, especially in education, journalism, and public communication, should include authenticity tools such as watermarks or labels to indicate content was AI-generated.
Fairness and bias
AI models learn from real-world data, and that data is biased, so AI can reinforce harmful patterns that already exist in society. Developers should run fairness audits and collect feedback to identify and reduce biased outputs. Independent reviews and outside evaluations help make systems fairer and more equitable, though some critics question whether this is feasible.

Impact on professions and students

As generative AI improves, there are several concerns about the potential social impacts it may have:

Education

Teachers and students are using genAI in their learning, e.g., to generate lesson plans, exams, essay questions, etc. "ChatGPT passed graduate-level business and law exams and medical licensing assessments (Hammer, 2023), leading to suggestions to remove these assessments from curricula in exchange for those that require more critical thinking." GenAI is used for outlines, papers, and code; content generated by these models is not always accurate, though. Students should take care to check their sources of information for accuracy. Technology will help detect AI-submitted content to reduce abuse of generative AI; GPTZero, for example, was designed specifically to detect ChatGPT-generated content.

Some academic institutions have banned genAI in the past. Some AI advocates say that academic institutions should focus on teaching students how to use AI in conjunction with their own knowledge and creativity. 

Pharmaceuticals

Generative AI systems can be trained on sequences of amino acids or on molecular representations; AlphaFold, for example, is used for protein structure prediction and drug discovery. These models have become high-potential tools to transform the design, optimization, and synthesis of small molecules and macromolecules. At scale, they have the potential to accelerate the development process.

The stages of the process are as follows:

  • Stage 1: AI-assisted target selection and validation
  • Stage 2: Molecular design and chemical synthesis
  • Stage 3: Biological evaluation, clinical development, and post-marketing surveillance
  • Several successful preclinical and clinical molecules have already been identified by AI and deep generative models.

The Legal Profession

GenAI is having an impact on the legal profession. ChatGPT is capable of drafting advanced legal documents, including without-prejudice demand letters and pleadings. These drafts demonstrate ChatGPT's ability to build out content from simple facts. GenAI can identify legal strategies, generate a skeleton argument to support a case, anticipate potential defences, etc. However, GenAI lacks the ability to undertake legal research and analysis as a competent lawyer would. Legal databases like WestLaw and Lexis will deploy genAI as agents, but general-purpose genAI tools cannot undertake legal research and analysis to the same extent as a competent lawyer or librarian.

CHART Collaborative

The Chatbot Assessment Reporting Tool (CHART) is a reporting guideline developed to provide reporting recommendations for studies evaluating the performance of generative artificial intelligence (AI)-driven chatbots when summarizing clinical evidence and providing health advice, referred to as Chatbot Health Advice (CHA) studies. CHART was developed in several phases after performing a comprehensive systematic review to identify variation in the conduct, reporting and methodology in CHA studies. Findings from the review were used to develop a draft checklist that was revised through an international, multidisciplinary modified asynchronous Delphi consensus process of 531 stakeholders, three synchronous panel consensus meetings of 48 stakeholders, and subsequent pilot testing of the checklist. CHART includes 12 items and 39 subitems to promote transparent and comprehensive reporting of CHA studies.

  • CHART Collaborative et al. Reporting guideline for chatbot health advice studies: The CHART statement. Artif Intell Med. 2025 Aug 1:103222.

References

Disclaimer

  • Note: Please use your critical reading skills while reading entries. No warranties, implied or actual, are granted for any health or medical search or AI information obtained while using these pages. Check with your librarian for more contextual, accurate information.