Course:CPSC522/Self Improving Machines
Self-Improving Machines
Self-improving machines are, literally, machines capable of improving themselves over time.
Principal Author: Wenyi Wang
Collaborators:
Abstract
Self-improving machines are, literally, machines capable of improving themselves over time. In this page, we focus on recursively self-improving (RSI) machines, which not only perform better over time but also get better at getting better. We discuss two papers: Steunebrink and Schmidhuber (2011) ^{[1]} and Yampolskiy (2015) ^{[2]}. The first paper describes a particular type of RSI machine, the Gödel machine. The second provides a higher-level review and analysis of RSI systems.
Builds on
The self-improving machine is an approach to Artificial Intelligence, proposed in particular for Artificial General Intelligence.
Related Pages
The self-improving machine is closely related to Reinforcement Learning and Meta-learning.
Content
Introduction
As early as 1950, Turing proposed that, to build a machine with human-level intelligence, instead of simulating the adult mind an alternative approach is to produce a machine with a child's level of intelligence and learning capability, and then educate it to achieve adult-level intelligence ^{[3]}. This kind of self-improving approach is particularly interesting to AGI researchers, since many of them believe that a general-purpose system would be so complex that it cannot be hard-coded, and therefore must be discovered by machines. The concept of self-improving machines covers a broad family of systems, including systems that simply optimize a fixed set of parameters, nonparametric algorithms, and various kinds of meta-learning methods. In this page, we are particularly interested in recursively self-improving (RSI) machines, which iteratively improve the system's ability to self-improve. With this property, the system has the potential to completely rewrite its original implementation and take a completely different approach ^{[2]}. This leads to the idea of an intelligence explosion.
Paper 1: A Family of Gödel Machine Implementations
The Gödel machine was originally proposed in Schmidhuber's (one of this paper's authors) earlier work ^{[4]}. It is a universal problem solver that may interact with a partially observable, unknown environment and optimizes a given utility. In this paper, the authors adopt the expected cumulative future reward as the utility. A Gödel machine rewrites arbitrary parts of itself whenever the rewrite is proved to be useful. The proof procedure requires evaluating the machine itself; a Gödel machine is therefore a self-referential machine. The Gödel machine has been proved to be globally optimal. In this paper, the authors refine components of the original Gödel machine definition that are inconvenient from an implementation perspective, and describe different approaches to attaining the self-reflectivity needed to implement a Gödel machine.
A Gödel machine consists of two parts: a solver and a searcher. The solver is a program that interacts with the environment. The searcher is a program that tries to find a provably improving rewrite of both the solver and the searcher. The main topic of this paper is how to construct a target theorem, stating that a given rewrite is an improvement, in such a way that the searcher is implementable.
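The solver/searcher split can be sketched as a toy loop. This is an illustrative sketch only, with made-up names (`run_goedel_machine_sketch`, `evaluate`): a real Gödel machine proves improvements rather than measuring them, but the control flow is analogous.

```python
import random

def run_goedel_machine_sketch(steps=50, seed=0):
    """Toy sketch of the solver/searcher split: the solver acts,
    while the searcher proposes rewrites and keeps only those that
    are demonstrably better on a fixed evaluation (a stand-in for
    the paper's proof search)."""
    rng = random.Random(seed)

    def solver(state):                  # the part that interacts with the environment
        return state + 1

    def evaluate(solver_fn):            # stand-in for a utility estimate
        return sum(solver_fn(s) for s in range(10))

    best, best_score = solver, None
    best_score = evaluate(best)
    for _ in range(steps):              # the searcher proposes candidate rewrites
        k = rng.randint(1, 3)
        candidate = lambda state, k=k: state + k
        score = evaluate(candidate)
        if score > best_score:          # keep only provably-better rewrites
            best, best_score = candidate, score
    return best_score

print(run_goedel_machine_sketch())
```

The key design point mirrored here is that the acceptance criterion is strict improvement: the machine never switches to a rewrite it cannot show to be better.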
The Utility Function and Target Theorem
Consider a utility function $u$ that measures the goodness of a given program for an environment state in the following form:

$$u(s, Env) = E_{\mu}\left[\sum_{\tau=t}^{T} r(\tau) \;\middle|\; s, Env\right]$$

where $\mu$ is an environment distribution that describes the probabilistic dynamics of the environment, $T$ is the machine's lifetime, $t$ is the current time, and $r(\tau)$ is the reward at time $\tau$, based on the program state and environment at that time. Note that the terms $\mu$, $T$ and $r$ are part of the utility function $u$, although any program that represents a Gödel machine encodes possibly different values of these terms. Another clarification: the times $t$ and $\tau$ do not refer to the interaction steps between the machine and the environment, but to the steps at which the machine performs internal computational operations. The goal of a Gödel machine is initially to optimize $u$ for the current program state and environment, with the true unknown environment distribution $\mu$ and the user-defined reward $r$. Since $\mu$, $T$ and $r$ are all encoded in the Gödel machine, and the Gödel machine may rewrite arbitrary parts of its code, the machine will not directly optimize the originally defined objective function, but rather the objective function encoded in the current version of the Gödel machine. Because of the resulting difficulty of implementation and conceptual understanding, in this work the authors extend the utility function with the concept of a continuation ^{[5]}.
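The expected cumulative reward utility can be approximated by Monte-Carlo sampling when a model of the environment distribution is available. The sketch below uses illustrative names (`estimate_utility`, `env_sampler`) and a toy coin-flip environment, not anything from the paper.

```python
import random

def estimate_utility(reward_fn, env_sampler, t_now, lifetime,
                     n_samples=1000, seed=1):
    """Monte-Carlo estimate of E_mu[ sum_{tau=t_now}^{T} r(tau) ]:
    draw environments from a model of mu, accumulate rewards from the
    current time to the lifetime, and average over samples."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        env = env_sampler(rng)                      # draw an environment from mu
        total += sum(reward_fn(tau, env) for tau in range(t_now, lifetime + 1))
    return total / n_samples

# toy model: the environment is a single scale factor, rewarded at every step
u = estimate_utility(
    reward_fn=lambda tau, env: env,
    env_sampler=lambda rng: rng.choice([0.0, 1.0]),  # mu: fair coin over two envs
    t_now=0, lifetime=9,
)
print(u)   # close to the exact expectation of 5.0
```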
Within a computational platform, a continuation states what is going to be executed next. An example is given in the paper: "For example, suppose the current continuation is {A(); if B() then C() else D()}; this continuation specifies that the next thing to be done is expanding A and executing its body, and then the conditional statement will be executed, which means that first B will be expanded and depending on its result, either C or D will be expanded. Note that before executing B, it is not clear yet whether C or D will be executed in the future; so it makes no sense to expand either of them before we know the result of B." With this definition of continuation, the utility function encoded in a program $p$ is rewritten as a function of a program state and a continuation:

$$u_p(s, c) = E_{\mu_p}\left[\sum_{\tau=t}^{T_p} r_p(\tau) \;\middle|\; s, c\right]$$

where $\mu_p$ and $r_p$ represent the environment distribution and reward function encoded in $p$, and $s$ and $c$ are the program–continuation pair encoded in $p$.
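The quoted continuation {A(); if B() then C() else D()} can be modelled as a zero-argument closure, "the rest of the computation": C or D is only reached once B's result is known. This is an illustrative sketch, not the paper's formalism.

```python
def make_continuation(a, b, c, d):
    """Package {A(); if B() then C() else D()} as a single callable:
    invoking it runs A, then B, and only then expands C or D."""
    def continuation():
        a()
        return c() if b() else d()
    return continuation

trace = []
k = make_continuation(
    a=lambda: trace.append("A"),
    b=lambda: True,                              # B decides the branch at run time
    c=lambda: trace.append("C") or "took C",
    d=lambda: trace.append("D") or "took D",
)
result = k()     # running the continuation executes A, then B, then C
print(trace, result)
```

Note that neither branch runs before the continuation itself is invoked, which is exactly the "do not expand C or D before knowing B" property in the quote.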
Based on the new utility function, the authors introduce a target theorem that explicitly includes a program rewrite function $w$. The utility of rewriting the program using $w$ at time $t$, with expected future program state $s$ just before time $t$, is formulated as $u_p(w(s), c_w)$, where $p$ is the evaluating program (e.g. the current program) and $c_w$ is the continuation after the rewrite. Denote "proceeding following program $s$" by the continuation $c_s$. A target theorem stating that a rewrite is an improvement is then constructed as

$$u_p(w(s), c_w) > u_p(s, c_s)$$

Intuitively, the target theorem being true means that the utility of applying the rewrite is greater than the utility of continuing with the same program.
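The acceptance test that the target theorem encodes can be sketched directly: accept a rewrite only when the utility of the rewritten program with its continuation exceeds the utility of continuing unchanged. All names below are illustrative stand-ins, not the paper's notation.

```python
def should_rewrite(utility, state, rewrite, cont_same, cont_rewritten):
    """Target-theorem-style check: apply the rewrite only if
    u(rewrite(state), cont_rewritten) > u(state, cont_same)."""
    return utility(rewrite(state), cont_rewritten) > utility(state, cont_same)

# toy utility: program state is a number, a continuation adds a fixed bonus
utility = lambda s, c: s + c

improving = should_rewrite(utility, state=3, rewrite=lambda s: s * 2,
                           cont_same=0, cont_rewritten=0)     # 6 > 3
worsening = should_rewrite(utility, state=3, rewrite=lambda s: s - 1,
                           cont_same=0, cont_rewritten=0)     # 2 > 3 fails
print(improving, worsening)
```

The strict inequality matters: a rewrite of equal utility is rejected, so the machine never churns between equally good versions of itself.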
Self-Referential Implementation
As mentioned above, a programming language with self-referential constructs is needed for a Gödel machine implementation. In the discussion and conclusion section of the paper, meta-circular evaluators ^{[6]} are presented as an existing technique for attaining self-reflectivity. A meta-circular evaluator is an interpreter that interprets its own source code. The authors propose that a double nesting of meta-circular evaluators can be used to realize a Gödel machine's self-evaluation and self-modification. Although in theory this technique is sufficient for the Gödel machine, a practically efficient alternative is to implement a Gödel machine directly in an assembly language.
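The flavor of an evaluator that treats programs as data can be shown with a tiny Lisp-like interpreter. This sketch is written in Python rather than in the language it interprets, so it is not strictly meta-circular (and far from the double nesting the paper proposes), but it illustrates the key ingredient: programs are ordinary data structures that the system can inspect and, in principle, rewrite.

```python
def mini_eval(expr, env):
    """Evaluate a tiny Lisp-like language represented as nested lists:
    strings are variable lookups, non-lists are literals, and lists are
    special forms ('if', 'lambda') or function applications."""
    if isinstance(expr, str):           # variable lookup
        return env[expr]
    if not isinstance(expr, list):      # self-evaluating literal
        return expr
    op, *args = expr
    if op == "if":
        cond, then_e, else_e = args     # only one branch is ever expanded
        return mini_eval(then_e if mini_eval(cond, env) else else_e, env)
    if op == "lambda":
        params, body = args             # build a closure over the current env
        return lambda *vals: mini_eval(body, {**env, **dict(zip(params, vals))})
    fn = mini_eval(op, env)             # application: evaluate operator and operands
    return fn(*[mini_eval(a, env) for a in args])

env = {"+": lambda a, b: a + b, "<": lambda a, b: a < b}
prog = ["if", ["<", 1, 2], [["lambda", ["x"], ["+", "x", 40]], 2], 0]
print(mini_eval(prog, env))   # → 42
```

Because `prog` is just a list, a program holding its own source in this form could analyze or modify it before re-evaluating, which is the capability the Gödel machine's self-modification relies on.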
Paper 2: From Seed AI to Technological Singularity via Recursively Self-Improving Software
Self-improving systems have been a dream since the early days of computer science and artificial intelligence. This paper provides definitions of and classifies types of self-improving software, with a particular focus on RSI systems. Since no working RSI software exists, the paper analyzes RSI systems conceptually. It may serve as a guideline for future RSI system development.
Definitions and Types of Self-Improvement
Self-improving software products allow for some optimization or customization of the product. Nonrecursive optimization processes ^{[7]}, proposed by Yudkowsky, are a common class of self-improving systems. Nonrecursive optimization refers to a situation in which one component of the system is optimized by another part. The law of diminishing returns is introduced as a common phenomenon in self-improving systems: a self-improving system quickly discovers the "low-hanging fruit", after which further improvements become less frequent and less significant. It is common for a nonrecursive optimization process to quickly run into diminishing returns. RSI systems are hypothesized to overcome this issue.
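The diminishing-returns behavior of a nonrecursive optimizer is easy to reproduce: under pure random search, a new best score is a statistical "record", and records arrive roughly logarithmically often. The simulation below (illustrative, with made-up names) logs the step at which each improvement occurs.

```python
import random

def hill_climb_improvements(steps=2000, seed=0):
    """Toy nonrecursive optimizer: random search over a fixed 3-parameter
    objective. Returns the step numbers at which a new best was found;
    gaps between improvements grow over time (diminishing returns)."""
    rng = random.Random(seed)
    best = float("-inf")
    improvement_steps = []
    for t in range(1, steps + 1):
        score = -sum(rng.uniform(-1, 1) ** 2 for _ in range(3))
        if score > best:
            best = score
            improvement_steps.append(t)
    return improvement_steps

steps = hill_climb_improvements()
print(len(steps), steps)   # few improvements total, most of them early
```

The optimizer itself never changes, only its parameters do, which is exactly why improvement dries up: this is the contrast RSI systems are hypothesized to escape.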
Recursively self-improving systems create new software iteratively, and the newly created software should be better at creating future software. Chalmers' proportionality thesis ^{[8]} hypothesizes that increases in the capability of designing future intelligent systems are proportional to increases in intelligence. Under this hypothesis, he shows that a process which iteratively uses the current system to generate a more intelligent one leads to superintelligence. One classification of RSI software is based on the number of possible improvements. The hope is that meaningful RSI systems will be able to make many improvements. Whether an infinite number of improvements is possible, and whether there is an upper bound on intelligence, remain open questions.
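The proportionality thesis can be turned into a one-line recurrence as a toy model (this is an illustration of the growth argument, not Chalmers' formal claim): if each generation's design ability scales with its intelligence, intelligence is multiplied by a fixed factor per generation.

```python
def intelligence_trajectory(gain, n_generations=10, start=1.0):
    """Toy proportional-growth model: each generation designs a successor
    whose intelligence is 'gain' times its own. gain > 1 compounds toward
    an explosion; gain < 1 stalls (diminishing returns)."""
    levels = [start]
    for _ in range(n_generations):
        levels.append(levels[-1] * gain)
    return levels

explosive = intelligence_trajectory(gain=1.5)   # geometric growth: 1.5**10 ≈ 57.7
stalling = intelligence_trajectory(gain=0.8)    # decay: 0.8**10 ≈ 0.11
print(explosive[-1], stalling[-1])
```

The knife-edge at gain = 1 is the substance of the open questions above: whether real self-improvement keeps the effective gain above 1 indefinitely, or whether it decays toward the diminishing-returns regime.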
Another way to classify RSI systems is by how the improvements are found. One approach is to consider a set of possible source codes and prove that one of them is an improvement. Another approach is to use the system's capability of designing and testing to produce the newer version of the system. There is an existing phenomenon that can be considered a hybrid RSI system: the collaboration of artificial intelligence systems and human scientists. Such combinations of human and artificial intelligence have achieved significant success in many domains.
Limits of RSI Systems
The first type of limit derives from physical restrictions, including the speed of light, the uncertainty principle, and other physical laws we may not yet know. Similar limits have been studied for human brains; however, we are far from reaching them. Another type of limit is based on computability: results on noncomputable problems and non-polynomial problems apply to RSI systems, as do limits developed for general artificial intelligence. As an example, Wiedermann classifies cognitive systems using the Arithmetic Hierarchy ^{[9]} ^{[10]}. He argues that human intelligence is upper-bounded by a fixed class of the hierarchy, and that a machine's intelligence is likewise bounded by the level of the hierarchy corresponding to its computational limits.
Besides the limits that apply to more general problems, several limits have been developed specifically for RSI systems; two are discussed for RSI systems in general. One attempt is to prove that an RSI process is impossible, by contradiction: assume a program $P$ is capable of solving problems with maximum difficulty $d$, and let $Q$ be a machine generated by $P$ that is capable of solving a problem of difficulty $d' > d$; this is a contradiction, since $P$ then solves a problem of difficulty greater than $d$ via $Q$. Another, more precise limit is due to Mahoney, who proves that for a family of goal-oriented RSI systems, the algorithmic complexity of an improving sequence of $n$ programs grows only as $O(\log n)$ ^{[11]}.
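One intuition for a logarithmic bound of this kind (a toy view, not Mahoney's actual proof) is that if the whole improving sequence is produced by one fixed generator program, then specifying the $n$-th version requires only the constant-size generator plus the index $n$, which takes about $\log_2 n$ bits:

```python
import math

def bits_to_index(n):
    """Bits needed to write down the index n: the only part of the
    description of the n-th program version that grows, if the
    generator of the sequence is fixed."""
    return math.ceil(math.log2(n + 1))

for n in [1, 10, 1000, 10 ** 6]:
    print(n, bits_to_index(n))   # grows logarithmically, not linearly
```

So even a million generations of "improvement" add only ~20 bits of description over the seed, which is the sense in which the complexity of the sequence is tightly limited.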
Other obstacles that might be faced when designing particular classes of RSI systems are also discussed: Löb's theorem implies that a formal system containing arithmetic is either inconsistent or unable to prove its own soundness ^{[12]}; "Errors which are not detrimental to system's performance are very hard to detect and may accumulate from generation to generation ... until a critical mass" leads to irreversible failures; a system with a lower level of intelligence may not be able to understand a higher intelligence; and a system may not be able to predict its own behavior without actually running it.
RSI Convergence Theory
The RSI convergence theory concerns the limiting behavior of RSI systems. One possibility is that all sufficiently intelligent RSI systems converge to the same software architecture. It is also possible that an upper bound on intelligence exists, so that all RSI systems eventually converge to the same level of intelligence. Alternatively, intelligence might be an unbounded property, in which case RSI systems need not converge at all. If RSI systems do converge to be equal in some sense, this may lead to the creation of a Singleton, i.e. a single system that controls everything ^{[13]}, since all the RSI systems may recognize one another as a single agent due to their equality. It is also arguable that humanity is on the early path of such convergence, at a slow speed.
Discussion
From this page's author's perspective, it is questionable how the initial utility function of a Gödel machine can be evaluated, since the model assumes unknown environment dynamics. If one uses the current estimate of the environment dynamics, the Gödel machine may modify the reward function such that the modified reward function is equivalent to the user-defined one in the sense of optimization under the current estimate of the environment dynamics, but not under the true environment dynamics. Then, later, when the estimate of the environment dynamics becomes correct, the utility function defined by that estimate and the modified reward function is no longer correct. It is also worth asking why no working implementation of a Gödel machine exists.
The second paper provides useful tools and scopes for analyzing Gödel machines. For example, does Löb's theorem tell us that a Gödel machine with a sound and consistent target-theorem proof searcher cannot include arithmetic? On the other hand, the Gödel machine provides a useful framework for answering questions or testing hypotheses mentioned in the second paper: with its precisely defined architecture, one may be able to analyze its convergence for particular problems.
Annotated Bibliography
 ↑ Steunebrink, B. R., & Schmidhuber, J. (2011, August). A family of Gödel machine implementations. In International Conference on Artificial General Intelligence (pp. 275-280). Springer, Berlin, Heidelberg.
 ↑ ^{2.0} ^{2.1} Yampolskiy, R. V. (2015). From seed AI to technological singularity via recursively self-improving software. arXiv preprint arXiv:1502.06512.
 ↑ Turing, A. M. (2009). Computing machinery and intelligence. In Parsing the Turing Test (pp. 23-65). Springer, Dordrecht.
 ↑ Schmidhuber, J. (2003). Gödel machines: Self-referential universal problem solvers making provably optimal self-improvements. arXiv preprint cs.LO/0309048.
 ↑ Queinnec, C. (2003). Lisp in Small Pieces. Cambridge University Press.
 ↑ Abelson, H., Sussman, G. J., & Sussman, J. (1996). Structure and Interpretation of Computer Programs (2nd ed.).
 ↑ Yudkowsky, E. (2013). Intelligence explosion microeconomics. Machine Intelligence Research Institute. Accessed online October 23, 2015.
 ↑ Chalmers, D. J. (2010). The singularity. Science Fiction and Philosophy: From Time Travel to Superintelligence, Second Edition, 171-224.
 ↑ Wiedermann, J. (2012). Is there something beyond AI? Frequently emerging, but seldom answered questions about artificial super-intelligence. Beyond AI: Artificial Dreams, 76.
 ↑ Wiedermann, J. (2012). A computability argument against superintelligence. Cognitive Computation, 4(3), 236-245.
 ↑ Mahoney, M. (2008). A Model for Recursively Self Improving Programs.
 ↑ Yudkowsky, E., & Herreshoff, M. (2013). Tiling agents for self-modifying AI, and the Löbian obstacle. Early draft, MIRI.
 ↑ Bostrom, N. (2006). What is a singleton? Linguistic and Philosophical Investigations, 5(2), 48-54.