Badger Architecture

The Badger architecture is the unifying framework for our research, defined by its key principle: modular life-long learning.

The modular aspect is expressed in the architecture through a network of identical agents. Life-long learning means that the network will be capable of adapting to a growing (open-ended) range of new and unseen tasks while reusing knowledge acquired in previous tasks. The algorithm run by individual Badger modules (a.k.a. experts) will be discovered through meta-learning.

We expect the design principles of the Badger architecture to be its key advantages. The modular approach should enable scaling beyond what is possible for a monolithic system, and the focus on life-long learning will allow for incremental, piece-wise learning, driving down the demand for training data.

State of Badger

Below you can find a taster of some of our latest work.

Inspired by the cumulative nature of culture in human society, we seek to replicate dynamics in which the computation of multiple experts effectively distributes information, with the goal of collectively solving a task. Badger experts should be able to communicate with each other in order to quickly adapt to new tasks while easily reusing past knowledge. When faced with novel but similar tasks, experts should also be able to adapt by rewiring their society; only in certain cases would the addition of new experts be needed. The ways in which experts can learn are diverse, but always local, an aspect that ensures the linear scalability of the system.

The collective learning process is driven by memetics, which is based on the horizontal transfer of knowledge between agents and which, we hypothesize, fosters collaborative behavior. In our simulations, both the environment and the agents run entirely on the GPU, allowing 100,000 agents to run at more than 30 simulation steps per second. In these videos we can see a simulation in which 5,000 agents (in white) try to survive by gathering food (green particles) and grouping together (colored lines show inter-agent links). Read more about our findings here.
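As a rough illustration of why such a simulation scales, a fully vectorized step might look as follows. This is a minimal sketch in PyTorch; the steering rule, tensor sizes, and constants are our own illustrative assumptions, not GoodAI's actual simulation code:

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
N_AGENTS, N_FOOD = 100_000, 1_000              # illustrative sizes only

pos = torch.rand(N_AGENTS, 2, device=device)   # agent positions in [0, 1]^2
vel = torch.zeros(N_AGENTS, 2, device=device)
food = torch.rand(N_FOOD, 2, device=device)    # food particle positions

def step(pos, vel):
    # One simulation step as a handful of batched tensor ops: there is
    # no Python loop over agents, which is what keeps large populations cheap.
    dists = torch.cdist(pos, food)             # (N_AGENTS, N_FOOD) distances
    nearest = food[dists.argmin(dim=1)]        # nearest food per agent
    vel = 0.9 * vel + 0.01 * (nearest - pos)   # steer toward the food
    pos = (pos + vel).clamp(0.0, 1.0)          # stay inside the arena
    return pos, vel

for _ in range(30):                            # run a few steps
    pos, vel = step(pos, vel)
```

Because each step is a few batched tensor operations, population size mostly trades off against GPU memory rather than wall-clock time.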

Principles of Badger

Badger is an architecture and a learning procedure where:

  • An agent is made up of many experts
  • All experts share the same communication policy (expert policy), but have different internal memory states
  • There are two levels of learning, an inner loop (with a communication stage) and an outer loop (see the sketch after the lists below)
  • Inner loop – The agent’s behavior and adaptation emerge as a result of experts communicating with each other. Experts send messages (of any complexity) to one another and update their internal memories/states based on observations/messages and their internal state from the previous time-step. The expert policy is fixed and does not change during the inner loop.
  • The inner loop loss need not even be a proper loss function. It can be any kind of structured feedback, so long as it eventually relates to outer loop performance.
  • Outer loop – An expert policy is discovered over generations of agents, ensuring that strategies that find solutions to problems in diverse environments can quickly emerge in the inner loop.
  • The agent’s objective is to adapt quickly to novel tasks
  • Open-ended inner loop learning needs to be enabled by a suitable design of the outer loop, for instance through support for agent self-reference and by using curiosity as an implicit goal-creation mechanism. An open-ended agent should be able to come up with novel and creative solutions to the problems it faces. The environment it operates in needs to be open-ended too – it must enable the creation of novel and unforeseen tasks that match the current skill level of the agent, to support its further improvement.

The architecture exhibits the following novel properties:

  • Roles of experts and connectivity among them assigned dynamically at inference time
  • Learned communication protocol with context-dependent messages of varied complexity
  • Generalizes to different numbers and types of inputs/outputs
  • Can be trained to handle variations in architecture during both training and testing
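To make the two learning loops concrete, here is a minimal sketch in PyTorch. The layer sizes, the mean-message communication channel, and the toy regression task are our own illustrative assumptions, not the actual Badger implementation:

```python
import torch
import torch.nn as nn

class ExpertPolicy(nn.Module):
    """One policy shared by every expert; only the hidden states differ."""
    def __init__(self, obs_dim, msg_dim, hid_dim):
        super().__init__()
        self.cell = nn.GRUCell(obs_dim + msg_dim, hid_dim)  # internal memory update
        self.msg_head = nn.Linear(hid_dim, msg_dim)         # outgoing message
        self.out_head = nn.Linear(hid_dim, 1)               # expert's output

    def forward(self, obs, msg_in, hidden):
        hidden = self.cell(torch.cat([obs, msg_in], dim=-1), hidden)
        return self.msg_head(hidden), self.out_head(hidden), hidden

def inner_loop(policy, obs_seq, n_experts, msg_dim, hid_dim):
    """Adaptation happens only via messages and hidden states;
    the shared policy weights stay fixed inside this loop."""
    hidden = torch.zeros(n_experts, hid_dim)
    msgs = torch.zeros(n_experts, msg_dim)
    out = torch.zeros(n_experts, 1)
    for obs in obs_seq:                                   # obs: (n_experts, obs_dim)
        # toy communication channel: every expert reads the mean message
        msg_in = msgs.mean(dim=0, keepdim=True).expand(n_experts, -1)
        msgs, out, hidden = policy(obs, msg_in, hidden)
    return out.mean()                                     # the agent's answer

# Outer loop: gradient descent over many tasks discovers the expert policy.
policy = ExpertPolicy(obs_dim=4, msg_dim=8, hid_dim=32)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for step in range(1000):
    obs_seq = [torch.randn(10, 4) for _ in range(5)]      # one toy episode
    target = obs_seq[0].mean()                            # toy regression target
    loss = (inner_loop(policy, obs_seq, 10, 8, 32) - target) ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The property the sketch preserves is the separation of the two loops: inner loop adaptation happens only through messages and hidden states while the shared weights stay frozen, and the outer loop is the only place where the expert policy itself changes.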

GoodAI Research Roadmap 2021/2022

In 2021/22 GoodAI will focus on four core research areas: learning to learn, lifelong (gradual) learning, open-endedness, and generalization / extrapolation of meta-learned algorithms.

Badger paper

For the motivation behind Badger, more details, preliminary experiments, and related literature, please see the full paper using the button below.

Explore

Badger workshops

GoodAI runs regular workshops with external collaborators in order to advance the Badger architecture. You can read summaries of past workshops and find information about upcoming ones below:

Past workshops 

If you would like to join one of these workshops in the future, please contact us.

Join our team

We are growing our team and are looking for people interested in collaborating on the Badger architecture to join us in our office in Prague or remotely. Please see our jobs page for open positions.

From our blog

Read the latest technical blogs from GoodAI.

Charlie Mnemonic – Update 5: Introducing Chain-of-Thought and Integrated Recall System

We’re excited to announce the fifth major update to Charlie Mnemonic, your open-source AI assistant with Long-Term Memory. This release brings groundbreaking features, including Chain-of-Thought reasoning and an integrated Recall system that allows you to effortlessly search and reference past…

Read more

Major Charlie Mnemonic update released!

We are announcing major updates for Charlie Mnemonic, your AI assistant with Long-Term Memory that’s getting smarter and more capable every day. We’ve been working hard to integrate new features and improve existing ones, and we are excited to share…

Read more

GoodAI LTM Benchmark v3 Released

A Standardization Release: The main purpose of the GoodAI LTM Benchmark has always been to serve as an objective measure for our progress in the development of agents capable of continual and life-long learning.

Read more