GoodAI’s paper “ToyArchitecture: Unsupervised learning of interpretable models of the environment” focuses on one of our experimental AI architectures, ToyArch, which we were working on in 2019. Although we are no longer pursuing ToyArch, the work helped us get to where we are now. The paper was recently published in PLOS ONE. You can read the abstract and the full paper below.
Abstract
Research in Artificial Intelligence (AI) has focused mostly on two extremes: either on small improvements in narrow AI domains, or on universal theoretical frameworks which are often uncomputable or lack practical implementations. In this paper we attempt to combine a big-picture view with a particular theory and its implementation, presenting a novel, purposely simple, and interpretable hierarchical architecture. This architecture incorporates the unsupervised learning of a model of the environment, learning the influence of one’s own actions, model-based reinforcement learning, hierarchical planning, and symbolic/sub-symbolic integration in general. The learned model is stored in the form of hierarchical representations which are increasingly abstract, but can retain details when needed. We demonstrate the universality of the architecture by testing it on a series of diverse environments, ranging from audio/visual compression to discrete and continuous action spaces, to learning disentangled representations.
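To make the idea of increasingly abstract hierarchical representations more concrete, here is a minimal Python sketch. It is not the paper’s implementation: the class name HierarchyLayer, the use of online k-means as the unsupervised learner, and all parameters (n_clusters, lr) are illustrative assumptions. Each layer compresses its input into a discrete code and passes that code upward, so higher levels operate on coarser, more abstract descriptions of the observation.

```python
# Illustrative sketch only -- not GoodAI's ToyArchitecture implementation.
# Each layer learns cluster centroids without supervision (online k-means)
# and emits a one-hot "symbol" for the layer above.

import numpy as np

class HierarchyLayer:
    """One level of the hierarchy: maps an input vector to the nearest
    of `n_clusters` learned centroids and returns a one-hot code."""
    def __init__(self, input_dim, n_clusters, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.centroids = rng.normal(size=(n_clusters, input_dim))
        self.lr = lr

    def step(self, x):
        # Find the closest centroid: this index is the abstract "symbol".
        dists = np.linalg.norm(self.centroids - x, axis=1)
        winner = int(np.argmin(dists))
        # Nudge the winning centroid toward the input (unsupervised update).
        self.centroids[winner] += self.lr * (x - self.centroids[winner])
        # Pass a one-hot code upward; detail is discarded at each level.
        code = np.zeros(len(self.centroids))
        code[winner] = 1.0
        return code

# Stack two levels: each sees the output of the level below, and the
# number of clusters shrinks, so representations grow more abstract.
layers = [HierarchyLayer(input_dim=8, n_clusters=6),
          HierarchyLayer(input_dim=6, n_clusters=3)]

rng = np.random.default_rng(1)
for _ in range(1000):
    signal = rng.normal(size=8)      # stand-in for a raw observation
    for layer in layers:
        signal = layer.step(signal)  # abstraction flows upward
```

In the actual architecture the details are retained rather than discarded, so lower levels can reconstruct them when needed; this sketch only shows the bottom-up abstraction direction.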