A reinforcement learning codebase focusing on the emergence of cooperation and alignment in multi-agent AI systems.
- Discord: https://discord.gg/mQzrgwqmwy
- Short (5m) Talk: https://www.youtube.com/watch?v=bt6hV73VA8I
- Talk: https://foresight.org/summary/david-bloomin-metta-learning-love-is-all-you-need/
Metta AI is an open-source research project investigating the emergence of cooperation and alignment in multi-agent AI systems. By creating a model organism for complex multi-agent gridworld environments, the project aims to study the impact of social dynamics, such as kinship and mate selection, on learning and cooperative behaviors of AI agents.
Metta AI explores the hypothesis that social dynamics, akin to love in biological systems, play a crucial role in the development of cooperative AGI and AI alignment. The project introduces a novel reward-sharing mechanism mimicking familial bonds and mate selection, allowing researchers to observe the evolution of complex social behaviors and cooperation among AI agents. By investigating this concept in a controlled multi-agent setting, the project seeks to contribute to the broader discussion on the path towards safe and beneficial AGI.
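As a loose illustration of how such a mechanism can work, the sketch below blends each agent's raw reward with the rewards of its kin through a kinship matrix. The matrix encoding and mixing rule here are assumptions for illustration, not the project's exact formulation.

```python
import numpy as np

def share_rewards(raw_rewards: np.ndarray, kinship: np.ndarray) -> np.ndarray:
    """Blend each agent's reward with the rewards of its kin.

    raw_rewards: shape (n_agents,), per-agent environment reward.
    kinship: shape (n_agents, n_agents); kinship[i, j] in [0, 1] is how
             strongly agent i values agent j's reward (hypothetical encoding).
    """
    # Row-normalize so each agent's shared reward stays on the same scale.
    weights = kinship / kinship.sum(axis=1, keepdims=True)
    return weights @ raw_rewards

# Agents 0 and 1 are close kin; agent 2 is a stranger.
kinship = np.array([
    [1.0, 0.8, 0.0],
    [0.8, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])
print(share_rewards(np.array([1.0, 0.0, 0.5]), kinship))
```

Under a rule like this, agent 1 earns reward whenever its kin agent 0 does, which is what creates the incentive to cooperate.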
Metta is a simulation environment (game) designed to train AI agents capable of meta-learning general intelligence. The core idea is to create an environment where incremental intelligence is rewarded, fostering the development of generally intelligent agents.
- Agents and Environment: Agents are shaped by their environment, learning policies that enhance their fitness. To develop general intelligence, agents need an environment where increasing intelligence is continually rewarded.
- Competitive and Cooperative Dynamics: A game with multiple agents and some competition creates an evolving environment where challenges increase with agent intelligence. Purely competitive games often reach a Nash equilibrium, where locally optimal strategies are hard to deviate from. Adding cooperative dynamics introduces more behavioral possibilities and smooths the behavioral space.
- Kinship Structures: The game features a flexible kinship structure, simulating a range of relationships from close kin to strangers. Agents must learn to coordinate with close kin, negotiate with more distant kin, and compete with strangers. This diverse social environment encourages continuous learning and intelligence growth.
The game is designed to evolve with the agents, providing unlimited learning opportunities despite simple rules.
The current version of the game can be found here. It's a grid world with the following dynamics:
- Agents and Vision: Agents can see a limited number of squares around them.
- Resources: Agents harvest diamonds, convert them to energy at charger stations, and use energy to power the "heart altar" for rewards.
- Energy Management: All actions cost energy, so agents learn to manage their energy budgets efficiently (see the toy sketch after this list).
- Combat: Agents can attack others, temporarily freezing the target and stealing resources.
- Defense: Agents can toggle shields, which drain energy but absorb attacks.
- Cooperation: Agents can share energy or resources and use markers to communicate.
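To make the energy economy concrete, here is a toy sketch of the budget logic an agent faces; the action names and costs are invented for illustration and do not match the environment's actual values.

```python
# Hypothetical action costs; the real environment defines its own actions
# and cost schedule.
ACTION_COST = {"move": 1, "shield": 2, "use_altar": 5, "attack": 10}

def try_action(energy: int, action: str) -> tuple[int, bool]:
    """Deduct an action's energy cost; the action fails if unaffordable."""
    cost = ACTION_COST[action]
    if energy < cost:
        return energy, False  # too expensive: the action is a no-op
    return energy - cost, True

energy = 20
for action in ["move", "attack", "use_altar", "attack"]:
    energy, ok = try_action(energy, action)
    print(f"{action}: {'ok' if ok else 'failed'}, energy left = {energy}")
```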
The game offers numerous possibilities for exploration, including:
- Diverse Energy Profiles: Assigning different energy profiles to agents, essentially giving them different bodies and policies.
- Dynamic Energy Profiles: Allowing agents to change their energy profiles, reflecting different postures or emotions.
- Resource Types and Conversions: Introducing different resource types and conversion mechanisms.
- Environment Modification: Enabling agents to modify the game board by creating, destroying, or altering objects.
The game explores various kinship structures (a code sketch follows the list):
- Random Kinship Scores: Each pair of agents has a kinship score sampled from a distribution.
- Teams: Agents belong to teams with symmetric kinship among team members.
- Hives/Clans/Families: Structuring agents into larger kinship groups.
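A minimal sketch of how these variants might be encoded as pairwise kinship matrices follows; the encodings are assumptions for illustration rather than the project's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_kinship(n_agents: int) -> np.ndarray:
    """Random Kinship Scores: each pair's score is sampled from a distribution."""
    k = rng.uniform(0.0, 1.0, size=(n_agents, n_agents))
    k = (k + k.T) / 2          # make pairwise scores symmetric
    np.fill_diagonal(k, 1.0)   # each agent fully values itself
    return k

def team_kinship(team_ids: list[int]) -> np.ndarray:
    """Teams: symmetric kinship of 1 among teammates, 0 toward everyone else."""
    ids = np.array(team_ids)
    return (ids[:, None] == ids[None, :]).astype(float)

print(random_kinship(3))
print(team_kinship([0, 0, 1, 1]))  # two teams of two agents each
```

Hives, clans, and families can be expressed the same way, as block-structured matrices over larger groups.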
Future plans include incorporating mate-selection dynamics, where agents share future rewards at a cost, potentially leading to intelligence gains through a signaling arms race.
Metta aims to create a rich, evolving environment where AI agents can develop general intelligence through continuous learning and adaptation.
The project's modular design and open-source nature make it easy for researchers to adapt and extend the platform to investigate their own hypotheses in this domain. The highly performant, open-ended game rules provide a rich environment for studying these behaviors and their potential implications for AI alignment.
Some areas of research interest:
- Develop rich and diverse gridworld environments with complex dynamics, such as resource systems, agent diversity, procedural terrain generation, support for various environment types, population dynamics, and kinship schemes.
- Incorporate techniques like dense learning signals, surprise minimization, exploration strategies, and blending reinforcement and imitation learning.
- Investigate scalable training approaches, including distributed reinforcement learning and student-teacher architectures, to enable efficient training of large-scale multi-agent systems.
- Design and implement a comprehensive suite of intelligence evaluations for gridworld agents, covering navigation tasks, maze solving, in-context learning, cooperation, and competition scenarios.
- Develop tools and infrastructure for efficient management, tracking, and deployment of experiments, such as cloud cluster management, experiment tracking and visualization, and continuous integration and deployment pipelines.
This README provides only a brief overview of research explorations. Visit the research roadmap for more details.
Clone the repository and run the setup:
```bash
git clone https://github.com/Metta-AI/metta.git
cd metta
./metta.sh configure  # Interactive setup wizard
./metta.sh install    # Install configured components
```
For more information on setup options and managing components, run `./metta.sh --help` or see the setup documentation.
The repository contains command-line tools in the `tools/` directory. Most of these tools use Hydra for configuration management, which allows flexible parameter overrides and composition; a minimal sketch of a Hydra entry point appears after the parameter list below.
- Override parameters: `param=value` sets configuration values directly
- Compose configs: `+group=option` loads additional configuration files from `configs/group/option.yaml`
- Use config groups: Load user-specific settings with `+user=<name>` from `configs/user/<name>.yaml`
```bash
./tools/train.py run=my_experiment +hardware=macbook wandb=off +user=<name>
```
Parameters:

- `run=my_experiment` - Names your experiment and controls where checkpoints are saved under `train_dir/<run>`
- `+hardware=macbook` - Loads hardware-specific settings from `configs/hardware/macbook.yaml`
- `wandb=off` - Disables Weights & Biases logging
- `+user=<name>` - Loads your personal settings from `configs/user/<name>.yaml`
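For context on how these flags are consumed, the sketch below shows a minimal Hydra entry point; the `config_path`/`config_name` values and the config schema are placeholders, since the real scripts in `tools/` define their own.

```python
import hydra
from omegaconf import DictConfig, OmegaConf

@hydra.main(config_path="configs", config_name="train", version_base=None)
def main(cfg: DictConfig) -> None:
    # Overrides like `run=my_experiment` set cfg.run directly, while
    # `+hardware=macbook` merges configs/hardware/macbook.yaml into cfg.
    print(OmegaConf.to_yaml(cfg))

if __name__ == "__main__":
    main()
```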
To use WandB with your personal account:

- Get your WandB API key from wandb.ai (click your profile → API keys)
- Add it to your `~/.netrc` file:

  ```
  machine api.wandb.ai login user password YOUR_API_KEY_HERE
  ```

- Edit `configs/wandb/external_user.yaml` and replace `???` with your WandB username:

  ```yaml
  entity: ??? # Replace with your WandB username
  ```
Now you can run training with your personal WandB config:
```bash
./tools/train.py run=local.yourname.123 +hardware=macbook wandb=user
```
Mettascope allows you to run and view episodes in the environment you specify. It goes beyond spectator mode, letting you take over an agent and control it manually.
For more information, see `./mettascope/README.md`.
```bash
./tools/play.py run=<name> [options]
```
Arguments:

- `run=<name>` - Required. Experiment identifier
- `policy_uri=<path>` - Specifies the policy that agents follow when not manually controlled, given as a model checkpoint (`.pt` file).
  - For local files, supply the path: `./train_dir/<run_name>/checkpoints/<checkpoint_name>.pt`. These checkpoint files are created during training.
  - For wandb artifacts, prefix the URI with `wandb://`
- `+hardware=<config>` - Hardware configuration (see Training a Model)
```bash
./tools/renderer.py run=demo_obstacles \
  renderer_job.environment.uri="configs/env/mettagrid/maps/debug/simple_obstacles.map"
```
If you run training with WandB enabled, results for the eval suites will appear on your WandB run page. However, this does not apply to policies trained before April 8th.
If you want to run evaluation post-training to compare different policies, you can do the following:
To add your policy to the existing navigation evals DB:
```bash
./tools/sim.py \
  sim=navigation \
  run=navigation101 \
  policy_uri=wandb://run/YOUR_POLICY_URI \
  sim_job.stats_db_uri=wandb://stats/navigation_db \
  device=cpu
```
This will run your policy through the `configs/eval/navigation` eval suite and save the results to the `navigation_db` artifact on WandB.
Then, to see the results in the heatmap along with the other policies in the database, you can run:
```bash
./tools/dashboard.py +eval_db_uri=wandb://stats/navigation_db run=navigation_db ++dashboard.output_path=s3://softmax-public/policydash/navigation.html
```
To run the style checks and tests locally:
```bash
ruff format
ruff check
pyright metta  # optional, some stubs are missing
pytest
```
Running these commands mirrors our CI configuration and helps keep the codebase consistent.
Some sample map patterns in `scenes/dcss` were adapted from the open-source game Dungeon Crawl Stone Soup (DCSS), specifically from the file `simple.des`.
DCSS is licensed under the GNU General Public License v2.0.