Tags

  • Metadata: #topic
  • Part of:
  • Related: [[Recursive self improvement]]
  • Includes:
  • Additional:

Significance

Intuitive summaries

Definitions

Technical summaries

  • A Gödel machine is a hypothetical [[self-improving]] computer program that solves problems in an optimal way. It uses a [[recursive self-improvement]] protocol in which it rewrites its own code whenever it can prove the new code provides a better strategy. The machine was invented by [[Jürgen Schmidhuber]] (first proposed in 2003), but is named after [[Kurt Gödel]], whose mathematical theories inspired it.

Gödel machines are mathematically rigorous, general, fully [[self-referential]], [[self-improving]], [[optimal]]ly efficient problem solvers.

The Gödel machine is often discussed in the context of meta-learning, also known as "learning to learn." Applications include automating human design decisions, transferring knowledge between multiple related tasks, and potentially the design of more robust and general learning architectures. Though theoretically possible, no full implementation has been created.

The Gödel machine is often compared with Marcus Hutter's AIXI, another formal specification for an artificial general intelligence. Schmidhuber points out that the Gödel machine could start out by implementing AIXItl as its initial sub-program, and self-modify once it finds a proof that another algorithm for its search code would be better.

Inspired by Kurt Gödel's celebrated self-referential formulas (1931), a Gödel machine (also "Goedel machine", but not "Godel machine") rewrites any part of its own code as soon as it has found a proof that the rewrite is useful. The problem-dependent [[utility function]], the hardware, and the entire initial code are described by [[axioms]] encoded in an initial [[proof search]]er, which is itself part of the initial code. The searcher systematically and efficiently tests computable proof techniques (programs whose outputs are proofs) until it finds a provably useful, computable self-rewrite. Schmidhuber shows that such a self-rewrite is globally optimal - no local maxima! - since the code first had to prove that it is not useful to continue the proof search for alternative self-rewrites. Unlike previous non-self-referential methods based on hardwired proof searchers, this approach not only boasts an optimal order of [[complexity]] but can optimally reduce any slowdowns hidden by the O()-notation, provided the utility of such speed-ups is provable at all.
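The control loop described above can be sketched as toy Python. This is only an illustrative caricature, not the formal construction: the real machine runs an axiomatic proof searcher, whereas here the hypothetical `provable_utility` function stands in for "finding a proof of usefulness" by directly evaluating a candidate on a fixed task. All names (`provable_utility`, `godel_machine_step`) are assumptions for this sketch.

```python
def provable_utility(program):
    """Hypothetical stand-in for the axiomatic proof searcher.

    In a real Gödel machine, utility would be established by a formal
    proof derived from axioms describing the hardware, the environment,
    and the utility function. Here we 'prove' utility by evaluating the
    candidate program on a fixed toy task (input 10).
    """
    return program(10)

def godel_machine_step(current_program, candidate_rewrites):
    """Accept the first candidate rewrite with provably higher utility.

    A rewrite is applied only when its (toy) provable utility strictly
    exceeds that of the current code; otherwise the current code is
    kept. This mirrors the key property that the machine only switches
    on a proof of improvement.
    """
    current_u = provable_utility(current_program)
    for candidate in candidate_rewrites:
        if provable_utility(candidate) > current_u:
            return candidate  # "proof" of usefulness found: self-rewrite
        # no proof of usefulness for this candidate: continue the search
    return current_program

# Toy task: maximize the output for a given input n.
initial = lambda n: n            # initial solver
rewrites = [
    lambda n: n - 1,             # provably worse, rejected
    lambda n: 2 * n,             # provably better, accepted
]

best = godel_machine_step(initial, rewrites)
print(best(10))  # -> 20
```

Note the design choice this illustrates: the machine never applies a rewrite speculatively; absent a proof of improvement, it keeps its current code, which is why a self-rewrite, when it does happen, is globally rather than locally justified.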

Main resources

Landscapes

Contents

Deep dives

Brain storming

Additional resources

Explanation by AI

Landscapes by AI

Deep dives by AI

AI

Additional metadata

  • processed #processing #toprocess #important #short #long #casual #focus

  • Unfinished: #metadata #tags