Unlocking the Potential of Large Language Model Agents
At GoodAI, we are dedicated to pushing the boundaries of artificial intelligence. Our current focus is the development of Large Language Model (LLM)-based agents with personalities that go beyond simple conversations and instead exhibit LLM-driven behaviors as they interact with humans, other agents, and their virtual environment. Our agents learn from feedback, store long-term memories, and express goal-oriented behaviors. We are building a cognitive architecture on top of an LLM, which serves as the reasoning engine, and adding long-term memory as the foundation for continual learning.
Since 2021 we have been applying our research to the development of AI People, our in-house video game where LLM agents come alive. In this open-ended sandbox simulation, agents interact with each other and their environment, forming relationships and displaying emotions.
Marek Rosa’s slides from the ML Prague ‘23 conference explain the technology behind the game.
How It Works
Our LLM agents are emulated personalities with goals and memories. As the designers, we describe their personalities in plain text, which serves as a blueprint for their behavior. We feed an agent’s observations and recent events into the LLM, which generates a response reflecting what the agent would do in that situation.
These responses are then translated into possible game actions, providing our agents with the autonomy and adaptability to navigate their surroundings. It’s important to note that our agents’ behaviors are not scripted; they are dynamically generated by the LLM, resulting in unpredictable, realistic, and amusing experiences.
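To make the loop above concrete, here is a minimal sketch of the observation-to-action cycle. The names (llm_complete, PERSONA, ALLOWED_ACTIONS, GameAction) are illustrative placeholders, not GoodAI’s actual API; the point is only to show how a plain-text personality, observations, and recent events can be turned into a prompt whose free-text response is parsed into one of a fixed set of game actions.

```python
from dataclasses import dataclass

# Hypothetical personality blueprint and action vocabulary for illustration only.
PERSONA = "Mia is a curious botanist who dislikes crowds and loves rainy weather."
ALLOWED_ACTIONS = {"GO_TO", "SAY", "PICK_UP", "USE", "WAIT"}

@dataclass
class GameAction:
    verb: str
    argument: str

def build_prompt(observations: list[str], recent_events: list[str]) -> str:
    # Combine the personality blueprint with the agent's recent context.
    return (
        "Personality:\n" + PERSONA + "\n\n"
        "Recent events:\n" + "\n".join(recent_events) + "\n\n"
        "Current observations:\n" + "\n".join(observations) + "\n\n"
        "Respond with one action in the form VERB: argument, where VERB is one of "
        + ", ".join(sorted(ALLOWED_ACTIONS)) + "."
    )

def parse_action(llm_output: str) -> GameAction:
    # Translate the free-text LLM response into an action the game engine understands.
    verb, _, argument = llm_output.partition(":")
    verb = verb.strip().upper()
    if verb not in ALLOWED_ACTIONS:
        return GameAction("WAIT", "")  # fall back to a safe no-op on malformed output
    return GameAction(verb, argument.strip())

def agent_step(observations, recent_events, llm_complete) -> GameAction:
    # llm_complete: str -> str, any LLM backend.
    return parse_action(llm_complete(build_prompt(observations, recent_events)))
```

Because the response is generated rather than scripted, the parser has to tolerate malformed output, which is why the sketch falls back to a harmless WAIT action.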
Cognitive Architecture generates goal-oriented behavior
Thanks to the cognitive architecture design, goal-oriented behavior is at the core of our LLM agents. When presented with a goal, they employ a planning and execution process to achieve it. If the goal can be accomplished using atomic game actions, we generate a plan that outlines how to reach the objective. For longer-term goals, we decompose the plan into simpler tasks that can be completed within shorter timeframes. This iterative approach continues until the task becomes solvable using atomic game actions. Feedback guides the agents: completed tasks lead to new goals, while unsuccessful plans prompt the agent to reassess and revise its strategy.
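The decomposition loop can be sketched as a short recursive procedure. This is a hedged illustration under assumed helpers (llm_plan, is_atomic, execute are hypothetical stand-ins for the game’s planner and action executor), not the actual implementation.

```python
def achieve(goal: str, llm_plan, is_atomic, execute, max_depth: int = 3) -> bool:
    """Recursively decompose a goal until each step maps to an atomic game action."""
    if is_atomic(goal):
        return execute(goal)           # atomic action: succeed or fail in the game world

    if max_depth == 0:
        return False                   # stop decomposing; report failure as feedback

    subtasks = llm_plan(goal)          # LLM proposes an ordered list of simpler tasks
    for task in subtasks:
        if not achieve(task, llm_plan, is_atomic, execute, max_depth - 1):
            # Unsuccessful plan: feed the failure back and ask the LLM to revise it.
            revised = llm_plan(f"{goal} (previous attempt failed at: {task})")
            return all(
                achieve(t, llm_plan, is_atomic, execute, max_depth - 1)
                for t in revised
            )
    return True
```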
Long-term Memory enables continual learning
Our LLM agents rely on their Long-Term Memory (LTM) to store and retrieve crucial memories. Conversations, thoughts, plans, actions, observations, skills, and behaviors are all stored within a vector database. The LTM supports pre-processing and post-processing of memories to improve retrieval. By weighing factors such as context, recency, importance, and relevance, our agents can access the appropriate memories to inform their actions. The LTM acts as a foundation for continual learning, enabling our LLM agents to grow and develop over time.
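One common way to combine relevance, recency, and importance into a retrieval score is sketched below. The weights, decay rate, and embed-and-score details are assumptions for illustration; they are not the game’s actual tuning or database layer.

```python
import math
import time
import numpy as np

class Memory:
    def __init__(self, text: str, embedding: np.ndarray, importance: float):
        self.text = text
        self.embedding = embedding    # vector stored in the database
        self.importance = importance  # e.g. 0..1, rated when the memory is written
        self.timestamp = time.time()

def score(memory: Memory, query_emb: np.ndarray, now: float,
          w_rel: float = 1.0, w_rec: float = 0.5, w_imp: float = 0.5,
          half_life: float = 3600.0) -> float:
    # Relevance: cosine similarity between the query and the stored memory.
    relevance = float(np.dot(memory.embedding, query_emb) /
                      (np.linalg.norm(memory.embedding) * np.linalg.norm(query_emb)))
    # Recency: exponential decay with an assumed one-hour half-life.
    recency = math.exp(-(now - memory.timestamp) / half_life)
    return w_rel * relevance + w_rec * recency + w_imp * memory.importance

def retrieve(memories: list[Memory], query_emb: np.ndarray, k: int = 5) -> list[Memory]:
    now = time.time()
    return sorted(memories, key=lambda m: score(m, query_emb, now), reverse=True)[:k]
```

The top-k memories returned this way are then inserted into the agent’s prompt alongside its current observations.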
Overcoming Challenges
While our LLM-driven universal agents hold tremendous potential, they also face certain challenges. The LLM’s responses can be volatile and unreliable, as slight changes in the prompt can lead to significant variations in the output. Occasionally, the LLM may generate irrelevant or nonsensical information, necessitating ongoing improvements. Additionally, the context window has limited capacity, which can impact our agents’ understanding of complex scenarios.
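One practical mitigation for the limited context window, sketched here as an assumption rather than our actual approach, is to keep only the highest-priority memories and events that fit within a fixed token budget before building the prompt. The count_tokens function is a hypothetical tokenizer hook.

```python
def fit_to_context(items: list[str], priority: list[float],
                   budget_tokens: int, count_tokens) -> list[str]:
    """Greedily keep the most important items that fit within the token budget."""
    ranked = sorted(zip(priority, items), reverse=True)
    kept, used = [], 0
    for _, item in ranked:
        cost = count_tokens(item)
        if used + cost <= budget_tokens:
            kept.append(item)
            used += cost
    # Preserve the original ordering so the prompt still reads chronologically.
    return [it for it in items if it in kept]
```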
Although our agents currently focus on language understanding, we recognize the need for multi-modal comprehension. We are actively working on addressing these challenges, as well as improving long-term memory, to enhance the performance of our LLM agents.
GoodAI’s Vision for the Future
Since our inception in 2014, we have been pursuing the goal of beneficial general artificial intelligence. In 2021, we embarked on the path of LLM-driven agents, applying our findings directly in the development of the AI People game. While video games are an ideal developmental environment, we believe that the possibilities presented by collaborative LLM agents go far beyond entertainment. Some of our current collaborative agent-based work includes AI Researcher, Multi-Agent Coder, Assistant, and Stoic Mentor.
We invite you to follow our journey and get in touch with us if you would like to become part of it.