Note: OpenAI leads the general AI space, but many AI companies are developing deep research tools and experimenting with AI-powered academic search in support of research. Perhaps you have faculty or students asking you to present these tools to classes.
How will AI tools affect our traditional bibliographic databases? Will we see GenAI being put into our search platforms? Can we stop it?
Also: This open textbook (or wiki channel) is intended to help librarians and other information professionals learn about AI. It is not, in itself, meant to be seen as promotion of AI.
Introduction
The conceptual history of artificial intelligence begins in the ancient world. The idea of creating intelligent robots and artificial beings first appeared in ancient Greek myth; Hephaestus, for example, blacksmith to the gods, had the power to animate metal creatures, essentially robots, imbued with divine knowledge. According to Homer, Hephaestus built automatons of metal to work for him and others.
Aristotle's development of syllogism and deductive reasoning is key in the quest to understand human intelligence. While the roots of AI concepts are long and deep over human history, the history of artificial intelligence as we think of it today spans less than a century. The following timeline is a look at some of the most important events in AI.
Note: Any discussion about AI geared towards librarians should start with a look at the ethical, legal, institutional, and strategic concerns many librarians have about AI. Talk with your colleagues and your librarian about these concerns so you can make informed decisions.
1949: In his book The Organization of Behavior: A Neuropsychological Theory, Donald Hebb proposes the theory that neural pathways are created from experiences and that connections between neurons become stronger the more frequently they're used. Hebbian learning continues to be an important model in AI.
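Hebb's idea, often summarized as "neurons that fire together wire together," can be sketched as a simple weight-update rule. The sketch below is illustrative only; the learning rate and activity values are assumptions for demonstration, not figures from Hebb's book:

```python
# Minimal sketch of Hebbian learning: a connection weight grows
# when the pre-synaptic input x and post-synaptic output y are
# active at the same time (delta_w = eta * x * y).

def hebbian_update(w, x, y, eta=0.1):
    """Strengthen weight w in proportion to coincident activity."""
    return w + eta * x * y

w = 0.0
# Repeatedly co-activate input and output: the weight keeps growing,
# mirroring Hebb's claim that frequently used pathways strengthen.
for _ in range(5):
    w = hebbian_update(w, x=1.0, y=1.0)
print(round(w, 2))  # 0.5
```

Note that if either neuron is inactive (x or y is 0), the weight is unchanged, which is the key property modern Hebbian-style learning models retain.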
1950: Alan Turing publishes "Computing Machinery and Intelligence," proposing what is now known as the Turing Test, a method for determining if a machine is intelligent. Harvard undergraduates Marvin Minsky and Dean Edmonds build SNARC, the first neural network computer. Claude Shannon publishes the paper "Programming a Computer for Playing Chess." Isaac Asimov publishes the "Three Laws of Robotics."
1952: Arthur Samuel develops a self-learning program to play checkers.
1954: The Georgetown-IBM machine translation experiment automatically translates 60 carefully selected Russian sentences into English.
1956: The phrase artificial intelligence is coined at the "Dartmouth Summer Research Project on Artificial Intelligence." Led by John McCarthy, the conference, which defined the scope and goals of AI, is widely considered to be the birth of artificial intelligence as we know it today. Allen Newell and Herbert Simon demonstrate Logic Theorist (LT), the first reasoning program.
1958: John McCarthy develops the AI programming language Lisp and publishes the paper "Programs with Common Sense." The paper proposes the hypothetical Advice Taker, a complete AI system with the ability to learn from experience as effectively as humans do.
1959: Allen Newell, Herbert Simon, and J.C. Shaw develop the General Problem Solver (GPS), a program designed to imitate human problem-solving. Herbert Gelernter develops the Geometry Theorem Prover program. Arthur Samuel coins the term machine learning while at IBM. John McCarthy and Marvin Minsky found the MIT Artificial Intelligence Project.
1963: John McCarthy starts the AI Lab at Stanford.
1966: The Automatic Language Processing Advisory Committee (ALPAC) report by the U.S. government details the lack of progress in machine translation research, a major Cold War initiative with the promise of automatic and instantaneous translation of Russian. The ALPAC report leads to the cancellation of all government-funded MT projects.
1969: The first successful expert systems, DENDRAL, a program for identifying organic chemical compounds, and MYCIN, designed to diagnose blood infections, are created at Stanford.
1972: The logic programming language PROLOG is created.
1973: The "Lighthill Report," detailing the disappointments in AI research, is released by the British government and leads to severe cuts in funding for artificial intelligence projects.
1974-1980: Slow progress in AI development leads to major DARPA cutbacks in academic grants. Combined with the earlier ALPAC report and the previous year's "Lighthill Report," artificial intelligence funding dries up, and research stalls. This period is known as the "First AI Winter."
1980: Digital Equipment Corporation develops R1 (also known as XCON), the first successful commercial expert system. Designed to configure orders for new computer systems, R1 kicks off an investment boom in expert systems that will last for much of the decade, effectively ending the first "AI Winter."
1982: Japan's Ministry of International Trade and Industry launches the ambitious Fifth Generation Computer Systems project. The goal of FGCS is to develop supercomputer-like performance and a platform for AI development.
1983: In response to Japan's FGCS, the U.S. government launches the Strategic Computing Initiative to provide DARPA-funded research in advanced computing and artificial intelligence.
1985: Companies are spending more than a billion dollars a year on expert systems, and an entire industry known as the Lisp machine market springs up to support them. Companies like Symbolics and Lisp Machines Inc. build specialized computers to run the AI programming language Lisp.
1987-1993: As computing technology improves, cheaper alternatives emerge and the Lisp machine market collapses in 1987, ushering in the "Second AI Winter." During this period, expert systems prove too expensive to maintain and update, eventually falling out of favor. Japan terminates the FGCS project in 1992, citing its failure to meet the ambitious goals outlined a decade earlier. DARPA ends the Strategic Computing Initiative in 1993 after spending nearly $1 billion and falling far short of expectations.
1991: U.S. forces deploy DART, an automated logistics planning and scheduling tool, during the Gulf War.
1997: IBM's Deep Blue beats world chess champion Garry Kasparov.
2005: STANLEY, a self-driving car, wins the DARPA Grand Challenge. The U.S. military begins investing in autonomous robots like Boston Dynamics' "Big Dog" and iRobot's "PackBot."
2008: Google makes breakthroughs in speech recognition and introduces the feature in its iPhone app.
2011: IBM's Watson trounces the competition on Jeopardy!
2012: Andrew Ng, founder of the Google Brain Deep Learning project, trains a neural network on 10 million YouTube videos using deep learning algorithms. The network learns to recognize a cat without being told what a cat is, ushering in a breakthrough era for neural networks and deep learning funding.
2014: Google makes the first self-driving car to pass a state driving test.
2015: DeepMind’s AlphaGo: AlphaGo defeats a professional human Go player, Fan Hui, using deep reinforcement learning and neural networks, showcasing AI’s ability to tackle complex strategic games.
Convolutional Neural Networks (CNNs) Surge: CNNs become dominant in computer vision tasks, powering advances in image recognition (e.g., ResNet architecture introduced by Microsoft).
OpenAI Founded: OpenAI is established by Elon Musk, Sam Altman, and others, focusing on advancing AI research with a mission to ensure safe and beneficial AI.
TensorFlow Released: Google open-sources TensorFlow, a powerful machine learning framework, democratizing AI development.
2016: AlphaGo vs. Lee Sedol: AlphaGo defeats world champion Lee Sedol in Go (4-1), a landmark for reinforcement learning and deep learning, highlighting AI’s ability to master intuitive decision-making.
Generative Adversarial Networks (GANs) Gain Traction: Introduced by Ian Goodfellow in 2014, GANs see widespread adoption for generating realistic images, videos, and audio.
AI in Assistants: Virtual assistants like Amazon’s Alexa and Google Assistant gain popularity, integrating natural language processing (NLP) into consumer products.
2017: Transformer Architecture Introduced: The paper “Attention is All You Need” by Vaswani et al. introduces the transformer, revolutionizing NLP with its attention mechanism, laying the groundwork for future models like BERT and GPT.
AlphaZero: DeepMind’s AlphaZero learns chess, Go, and shogi from scratch, surpassing human performance in hours, demonstrating the power of self-play in reinforcement learning.
AI Ethics Concerns Emerge: Discussions on AI bias, fairness, and safety grow, with organizations like AI Now Institute forming to address societal impacts.
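The attention mechanism at the heart of the transformer can be sketched in a few lines of NumPy. This is a simplified, single-head, unbatched version of the scaled dot-product attention described in "Attention is All You Need"; the token count and embedding size are toy values chosen for illustration, and real transformers add multiple heads, masking, and learned projections:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # Numerically stable softmax over each row of scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V               # each output is a weighted mix of the values

# Toy example: 3 tokens with 4-dimensional embeddings (sizes are illustrative).
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4)
```

The key design idea is that every token's output is a weighted average of all tokens' values, with the weights computed from content similarity rather than position, which is what lets transformers model long-range relationships in text.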
2018: BERT by Google: Bidirectional Encoder Representations from Transformers (BERT) sets new benchmarks in NLP tasks like question answering and sentiment analysis.
AI in Healthcare: AI systems begin assisting in medical diagnostics, such as detecting diabetic retinopathy and cancer from imaging data.
Autonomous Driving Advances: Companies like Waymo and Tesla push self-driving car technology, though full autonomy remains elusive.
Deepfakes Emerge: AI-generated fake videos raise concerns about misinformation, sparking research into detection methods.
2019: GPT-2 by OpenAI: OpenAI releases GPT-2, a large-scale language model capable of generating coherent text, raising debates about misuse (e.g., fake news).
AI in Gaming: AI systems like OpenAI’s Dota 2 bot (OpenAI Five) defeat professional teams, showcasing multi-agent reinforcement learning.
Quantum AI Exploration: Google claims “quantum supremacy” with its Sycamore processor, solving a task faster than classical computers, hinting at future AI-computing synergies.
AI Regulation Talks Begin: Governments and organizations start discussing AI governance, with the EU drafting early AI ethics guidelines.
2020: GPT-3 by OpenAI: GPT-3, with 175 billion parameters, demonstrates unprecedented language generation capabilities, powering applications like chatbots and code generation.
AI for COVID-19: AI models aid in drug discovery, vaccine development, and pandemic forecasting, highlighting AI’s role in global crises.
Diffusion Models Introduced: Early diffusion models for image generation (precursors to DALL·E 2 and Stable Diffusion) emerge in research.
Ethical AI Push: Companies face pressure to address biases in AI systems, with tools like Fairness Indicators released to audit models.
2021: DALL·E and CLIP by OpenAI: DALL·E generates images from text prompts, while CLIP connects images and text, advancing multimodal AI.
AlphaCode by DeepMind: AI begins competing in programming competitions, generating functional code for complex problems.
AI in Creative Arts: AI tools for music (e.g., OpenAI’s Jukebox) and art generation gain traction, sparking debates about creativity and authorship.
China’s AI Surge: China accelerates AI development, with companies like Baidu and Tencent rivaling Western counterparts in NLP and computer vision.
2022: Stable Diffusion Released: The open-source image generation model democratizes high-quality visual AI; tools such as Midjourney and DALL·E 2 also flourish.
ChatGPT by OpenAI: ChatGPT, built on GPT-3.5, becomes a global phenomenon for its conversational abilities, driving mass AI adoption.
AI in Scientific Discovery: DeepMind’s AlphaFold solves protein folding, a decades-old biological puzzle, accelerating drug discovery.
AI Regulation Intensifies: EU proposes AI Act, categorizing AI systems by risk levels, while U.S. releases AI Bill of Rights blueprint.
2023: GPT-4 by OpenAI: The multimodal model excels in text, image processing, and reasoning, powering applications like ChatGPT Plus.
LLaMA by Meta AI: Meta’s language models advance research, though they are restricted to non-commercial use.
AI in Education: Tools like Khan Academy’s Khanmigo integrate AI tutors, transforming personalized learning.
Misinformation and Safety Concerns: AI-generated content fuels misinformation, prompting calls for watermarking and detection tools.
2024: xAI releases Grok, designed to accelerate scientific discovery and provide helpful, truthful answers, competing with ChatGPT.
Multimodal AI: Models like Google’s Gemini and Anthropic’s Claude 3 integrate text, images, and other data, enabling richer interactions.
AI in Robotics: Advances in embodied AI lead to smarter robots for manufacturing, logistics, and home assistance (e.g., Tesla’s Optimus prototype).
Global AI Governance: The EU AI Act is finalized, while international summits address AI safety, with focus on mitigating existential risks.
US National Library of Medicine releases MTIX for automated indexing, using neural networks.
2025: xAI launches Grok 3 with features like voice mode and DeepSearch; SuperGrok offers higher usage quotas.
In November 2025, OpenAI released GPT-5.1, a more advanced iteration of their flagship large language model series, featuring enhanced conversational abilities, improved reasoning for complex tasks, and better integration for developers via APIs and tools like Codex-Max for code generation.
Note: Please use your critical reading skills while reading entries. No warranties, implied or actual, are granted for any health or medical search or AI information obtained while using these pages. Check with your librarian for more contextual, accurate information.