Course:CPSC522/AGI


Artificial General Intelligence

Artificial General Intelligence (AGI) is the field that aims to build general-purpose machines with human-level or super-human intelligence.

Principal Author: Wenyi Wang
Collaborators:

Abstract

Artificial General Intelligence (AGI) is the field that aims to build general-purpose machines with human-level or super-human intelligence. Although this was the original goal of AI, a new term became necessary because mainstream AI shifted its interest to narrower problems. This page contains a brief history of AI and AGI, the goals and methodologies of AGI research, and its difficulties and objections. At the end, it introduces three representative works of contemporary AGI research.

Builds on

Artificial General Intelligence is a subfield of Artificial Intelligence.

Related Pages

Artificial General Intelligence is related to control theory, information theory, cognitive science, epistemology, etc.

Content

Introduction

History of AI vs. AGI

The idea of intelligent, thinking machines can be traced back to Descartes, Pascal, Leibniz, and others in the 17th century. The modern study of AI started around the 1950s [1]. The ultimate goal of artificial intelligence was to study and develop machines with human-level intelligence. In this sense, AI and AGI were equivalent concepts at the beginning of modern AI research.

Early projects aiming at this goal, including the General Problem Solver [2] and the Fifth Generation Computer Systems [3], failed because of the difficulty of the problem. After the 1970s, mainstream AI research shifted its interest to narrower problems for many reasons. Some researchers believed it was too early to study general intelligence at that time, and to many people the concept of artificial general intelligence seemed non-scientific. Studying narrower problems allowed researchers to develop rigorous theories and experiments rather than rely heavily on intuition. However, there were opposing standpoints that criticized this shift of interest, arguing that developing solutions to narrow problems, in particular domain-specific and special-purpose problems, betrays the original goal of AI.

In the 21st century, AGI saw a revival both inside its own community and in the mainstream. An AGI society was established, with its own journal and conferences. Works designed to be robust and to handle complex scenarios, in contrast to "old-fashioned AI", have been published. In mainstream AI, impressive progress has raised hopes of human-level intelligence and has drawn attention back to AGI.

Goals and methodologies of AGI research

In short, the goal of AGI is to build computational machines with general intelligence. But what do we mean by general intelligence? This is a controversial topic. Two approaches to identifying general intelligence that are widely adopted by AI researchers are Turing-test-like examinations [4] and the control-theoretic approach. In contrast to the classical psychological definition of intelligence, both of them focus on behavior instead of the "thinking process".

Roughly speaking, Turing-test-like examinations ask AI agents to imitate humans in tasks that require human-level intelligence, and make a judgement based on whether the AI's behavior can be distinguished from a human's. This approach has the advantage that it is relatively easy to apply using intuition. However, there are criticisms of this approach: such tests are a sufficient but not a necessary condition, and they lack a theoretical foundation that could guide future development. The original design of the Turing test was published in Turing's Computing Machinery and Intelligence [4].

The control-theoretic approach attempts to measure intelligence by the utility achieved by an agent's actions. In contrast to narrow AI approaches, the control-theoretic definition of general intelligence tries to state the problem with as few constraints as possible, so that it covers a sufficiently general class of problems. However, this usually results in extremely hard problems, which makes it difficult to develop efficient solutions. A typical example of this approach is Hutter's Universal Artificial Intelligence [5].
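
To make the utility-based view concrete, the following is a minimal Python sketch in the spirit of Legg and Hutter's universal intelligence measure: an agent is scored by the reward it accumulates across a collection of environments, with simpler environments weighted more heavily. The Environment interface, the toy environments, and the complexity weights are illustrative assumptions, not part of the original formulation.

  import random

  # Score an agent by the reward it accumulates across a set of environments,
  # weighting simpler environments more heavily. The Environment interface and
  # the complexity weights are illustrative assumptions.

  class Environment:
      """A toy environment: maps the agent's action to an observation and a reward."""
      def __init__(self, complexity, dynamics):
          self.complexity = complexity   # stand-in for the environment's description length
          self.dynamics = dynamics       # function: action -> (observation, reward)

      def step(self, action):
          return self.dynamics(action)

  def expected_utility(agent, env, horizon=100):
      """Average reward the agent obtains over one episode of fixed length."""
      total, observation = 0.0, None
      for _ in range(horizon):
          action = agent(observation)
          observation, reward = env.step(action)
          total += reward
      return total / horizon

  def intelligence_score(agent, environments):
      """Weighted sum of utilities; the weight 2^-complexity favours simpler environments."""
      return sum(2 ** (-env.complexity) * expected_utility(agent, env)
                 for env in environments)

  # Example: a random agent evaluated on two toy one-step environments.
  envs = [
      Environment(1, lambda a: (None, 1.0 if a == 0 else 0.0)),  # rewards action 0
      Environment(3, lambda a: (None, 1.0 if a == 1 else 0.0)),  # rewards action 1
  ]
  random_agent = lambda observation: random.choice([0, 1])
  print(intelligence_score(random_agent, envs))   # around 0.31 on average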

Current approaches to AGI research include, but are not limited to, developing core mechanisms hypothesized to be essential for general intelligence, and building architectures that integrate different AI components that each solve a specific task. Methods can also be classified by their level of abstraction, including structure, behavior, capability, function, and principle.

Difficulties and Objections

Some people argue that general intelligence does not exist, because every being performs well only in specific environments. Theoretically, one can prove that for sufficiently general classes of problems there is no method that always outperforms the others. Even for human-level intelligence, which we know exists, it is arguable that it may not be Turing computable. These arguments support the hypothesis that AGI is theoretically impossible [6].

Even for people who believe AGI is possible, it remains a very hard problem. As discussed above, the criteria for general intelligence are vague; in some sense we are trying to design something without knowing exactly what it is. One way to formalize AGI as a well-defined problem is to define a set of candidate machines and a function that identifies intelligent machines within that candidate set. To make the problem solvable, the candidate set and the identification function need to have special properties. But to include general intelligence, the candidate set needs to be general enough, unless we already know the special properties of intelligence. Making an appropriate compromise requires knowing some principles of general-purpose systems, which at the current stage we understand very little about.
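
The following is a minimal Python sketch of this "candidate set plus identification function" view. The candidate machines, the benchmark tasks, and the scoring function are toy placeholders chosen only for illustration; the open problem described above is how to choose them so that the candidate set is rich enough to contain general intelligence while the search remains tractable.

  # A toy candidate set of machines and an identification function that scores
  # them on benchmark tasks. All three ingredients are illustrative placeholders.

  candidate_machines = [
      lambda task: 0,            # a machine that ignores its input
      lambda task: len(task),    # a machine that measures its input
      lambda task: sum(task),    # a machine that aggregates its input
  ]

  benchmark_tasks = [([1, 2, 3], 6), ([4, 5], 9)]   # (input, desired output) pairs

  def identification_function(machine):
      """Score a candidate by the number of benchmark tasks it solves."""
      return sum(1 for task, target in benchmark_tasks if machine(task) == target)

  best = max(candidate_machines, key=identification_function)
  print(identification_function(best))   # 2: the aggregating machine solves both tasks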

Representative Projects

This section gives a brief introduction to some representative projects of contemporary AGI research. All of them have some desirable properties, but none of them offers a generally recognized roadmap leading to generally intelligent machines.

UAI

The Universal Artificial Intelligence (UAI) project tries to build a comprehensive top-down theory of intelligence. It adopts a sequential decision-making framework and works with unknown prior probabilities. Unlike standard reinforcement learning, neither the Markov property nor full observability is assumed. Using Solomonoff's theory of induction, Hutter solves the optimal control problem in an information-theoretic setting, proposes a parameter-free theory of universal artificial intelligence, and argues that the resulting AIXI model is the most intelligent unbiased agent possible. He also proposes the AIXItl model, which is computable and more intelligent than any other time-t and length-l bounded agent, although AIXItl still requires exponential computation time [7].
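
At its core, AIXI selects actions by an expectimax computation over all programs consistent with the interaction history, weighted by their length. The equation below paraphrases Hutter's formulation [7] with lightly simplified notation: a, o, and r denote actions, observations, and rewards, k is the current cycle, m the horizon, U a universal machine, and l(q) the length of program q.

  a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
         \bigl[ r_k + \cdots + r_m \bigr]
         \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}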

OpenCog

The OpenCog project [8] takes an integrative approach, building an architecture that integrates many (claimed in the original literature to be all) aspects of intelligence. The authors hypothesize that "if this design is fully implemented and tested on a reasonable-sized distributed network, the result will be an AGI system with general intelligence at the human level and ultimately beyond." [9] The architecture is designed "wherein different components are specifically integrated in such a way as to compensate for each others scalability weaknesses." [10] They have implemented a specific framework and tested it on virtual agents in virtual worlds and on a Nao humanoid robot.

NARS

NARS [11] is a reasoning system with a logic, a memory structure, and a control mechanism. The system accepts tasks that insert knowledge or answer questions in real time. The work claims that an essential part of intelligence is the capability to work with insufficient knowledge and resources in dynamic environments. Based on this, the system is designed "to not maximize the system's performance, but to minimize its theoretical assumption and technical foundation", while still being able to learn from experience and remain open to unexpected tasks in real time.

They propose a logic system called Non-Axiomatic Logic. Here we introduce the minimal Non-Axiomatic Logic, which is later extended to higher-order and procedural logics with richer semantics and syntax; these extensions include time and can reason about events, operations, and goals. Every statement in the minimal Non-Axiomatic Logic consists of a subject term and a predicate term. "Intuitively, the statement says that the subject is a specialization of the predicate, and the predicate is a generalization of the subject." [12] What makes this logic different from traditional logics is that the truth value of a statement consists of two real numbers, frequency and confidence, which depend on the agent's memory. Inference rules including deduction, abduction, and induction are developed for this semantics.
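
As an illustration, the following Python sketch shows one way to represent such statements and a deduction-style rule over them. The truth-value function used here follows the multiplicative shape of NAL's deduction rule but is a simplification; the exact truth-value functions are given in [12].

  from dataclasses import dataclass

  # An inheritance statement "subject --> predicate" carries a truth value made
  # of two reals: frequency (the proportion of positive evidence) and confidence
  # (how stable that frequency is, given the amount of evidence). The deduction
  # rule below is a simplified illustration of NAL-style truth functions.

  @dataclass
  class Statement:
      subject: str
      predicate: str
      frequency: float    # in [0, 1]
      confidence: float   # in [0, 1)

  def deduction(s1: Statement, s2: Statement) -> Statement:
      """From (A --> B) and (B --> C), derive (A --> C) with a weakened truth value."""
      assert s1.predicate == s2.subject, "premises must share the middle term"
      f = s1.frequency * s2.frequency
      c = s1.confidence * s2.confidence * f
      return Statement(s1.subject, s2.predicate, f, c)

  robin_is_bird = Statement("robin", "bird", frequency=1.0, confidence=0.9)
  bird_is_animal = Statement("bird", "animal", frequency=1.0, confidence=0.9)
  print(deduction(robin_is_bird, bird_is_animal))
  # Statement(subject='robin', predicate='animal', frequency=1.0, confidence=0.81...)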

The memory of an agent is structured as a network. Each node in the network is either a statement with its truth value or a task. Nodes that are directly related to each other (i.e., nodes that share common terms) are linked. Each node, together with its neighborhood, forms a concept. "When the system is running, usually there are many tasks in its memory. The system assigns a priority-value to every concept, task, and belief. At each inference step, a concept is selected, and then a task and a belief are selected within the concept." When tasks are processed, new nodes and links may be generated; these are treated as new tasks inserted into the system. When memory is full, the system deletes nodes and links based on their priority-values.
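
The following Python sketch illustrates the priority-driven selection and forgetting described above. The fixed capacity, the hand-picked priority values, and the eviction rule are illustrative assumptions, not the actual NARS control mechanism.

  import heapq
  import itertools

  # A toy priority-driven memory: items (beliefs or tasks) carry a priority value,
  # processing selects the highest-priority item, and when the memory exceeds its
  # capacity the lowest-priority item is forgotten.

  class Memory:
      def __init__(self, capacity):
          self.capacity = capacity
          self.items = []                    # min-heap of (priority, tie_breaker, item)
          self.counter = itertools.count()   # tie-breaker for equal priorities

      def insert(self, item, priority):
          heapq.heappush(self.items, (priority, next(self.counter), item))
          if len(self.items) > self.capacity:
              heapq.heappop(self.items)      # memory full: forget the lowest priority

      def select(self):
          """Return (without removing) the highest-priority item for processing."""
          return max(self.items)[2] if self.items else None

  memory = Memory(capacity=3)
  for belief, priority in [("robin --> bird", 0.8), ("bird --> animal", 0.6),
                           ("swan --> bird", 0.4), ("tiger --> cat", 0.2)]:
      memory.insert(belief, priority)
  print(memory.select())       # "robin --> bird": the highest-priority belief
  print(len(memory.items))     # 3: the lowest-priority belief was forgotten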

Annotated Bibliography

  1. Buchanan, B. G. (2005). A (very) brief history of artificial intelligence. Ai Magazine, 26(4), 53.
  2. Newell, A., Shaw, J. C., & Simon, H. A. (1959, January). Report on a general problem solving program. In IFIP congress (Vol. 256, p. 64).
  3. Moto-Oka, T. (Ed.). (2012). Fifth generation computer systems. Elsevier.
  4. Turing, A. M. (2009). Computing machinery and intelligence. In Parsing the Turing Test (pp. 23-65). Springer, Dordrecht.
  5. Hutter, M. (2004). Universal artificial intelligence: Sequential decisions based on algorithmic probability. Springer Science & Business Media.
  6. Churchland, P. M., & Churchland, P. S. (1990). Could a machine think?. Scientific American, 262(1), 32-39.
  7. Hutter, M. (2007). Universal algorithmic intelligence: A mathematical top→ down approach. In Artificial general intelligence (pp. 227-290). Springer, Berlin, Heidelberg.
  8. Goertzel, B., Pennachin, C., & Geisweiller, N. (2014). The OpenCog Framework. In Engineering General Intelligence, Part 2 (pp. 3-29). Atlantis Press, Paris.
  9. Goertzel, B., Pennachin, C., & Geisweiller, N. (2014). The OpenCog Framework. In Engineering General Intelligence, Part 2 (pp. 3-29). Atlantis Press, Paris.
  10. Goertzel, B., Pennachin, C., & Geisweiller, N. (2014). The OpenCog Framework. In Engineering General Intelligence, Part 2 (pp. 3-29). Atlantis Press, Paris.
  11. Wang, P. (2013). Non-axiomatic logic: A model of intelligent reasoning. World Scientific.
  12. Wang, P. (2013). Non-axiomatic logic: A model of intelligent reasoning. World Scientific.


Some rights reserved
Permission is granted to copy, distribute and/or modify this document according to the terms in Creative Commons License, Attribution-NonCommercial-ShareAlike 3.0. The full text of this license may be found here: CC by-nc-sa 3.0