
Add friction-evidence models for agent interaction GSoC proposal #2

Open
SumeetDUTTA wants to merge 9 commits into mesa:main from SumeetDUTTA:main

Conversation

@SumeetDUTTA

Adds three working models demonstrating Mesa interaction friction points:

  1. Market Auction — exposes request-reply messaging gaps

    • Workaround: shared inbox pattern
    • Evidence: 30 steps, 8 sales, revenue tracking
  2. Needs-Driven Wolf-Sheep — exposes behavior-selection friction

    • Workaround: priority logic embedded in step()
    • Evidence: action metrics show all four branches active
  3. Opinion Diffusion — exposes pub-sub communication gaps

    • Workaround: manual neighbor subscription wiring
    • Evidence: 40 steps, opinions converge as expected

Each model includes:

  • Reproducible run with fixed seed
  • Measured outputs and interpretation
  • Clear Mesa friction-point documentation
  • Proposed minimal API improvements
  • Smoke tests proving reproducibility

These evidence models form the core of the 175-hour agent interaction framework GSoC proposal.

…iled insights on background, interest in Mesa, learning goals, and future contributions.
… objectives in agent-based modeling and Mesa contributions.
…pinion Diffusion models: clarify goals, setup, and results; enhance reproducibility with code snippets; improve structure and readability.
… Opinion Diffusion models: implement unit tests for model initialization, step execution, and data collection.
Copilot AI review requested due to automatic review settings March 22, 2026 15:11
@coderabbitai

coderabbitai bot commented Mar 22, 2026

Important

Review skipped

Auto reviews are disabled on this repository. Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 03ce2281-98c9-4aa2-85dc-23ce1c8c4017

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Comment @coderabbitai help to get the list of available commands and usage tips.


Copilot AI left a comment


Pull request overview

This PR adds three “friction-evidence” Mesa models (market auction, needs-driven wolf-sheep, opinion diffusion) intended to demonstrate gaps in agent interaction patterns for a GSoC proposal, along with supporting writeups and smoke tests.

Changes:

  • Add three behavioral experiment models under models/behavioral_experiment/01_* through 03_*, each with code, README evidence, and smoke tests.
  • Update candidate materials (motivation.md, models/ANALYSIS.md) and improve top-level README.md formatting.
  • Add VS Code pytest configuration.

Reviewed changes

Copilot reviewed 16 out of 16 changed files in this pull request and generated 6 comments.

Summary per file:

  • motivation.md: Filled in motivation/fit for Mesa and learning goals.
  • README.md: Formatting adjustments (spacing/markdown consistency).
  • models/ANALYSIS.md: Notes about running/analyzing advanced Wolf-Sheep issues.
  • .vscode/settings.json: Configure pytest runner defaults for VS Code.
  • models/behavioral_experiment/01_market_auction/model.py: Implements a seller/buyer auction interaction model.
  • models/behavioral_experiment/01_market_auction/agents.py: Defines Buyer/Seller agents and a Bid message object.
  • models/behavioral_experiment/01_market_auction/test_smoke.py: Adds basic execution/log-structure smoke tests.
  • models/behavioral_experiment/01_market_auction/README.md: Documents friction point + reproducible run + outputs.
  • models/behavioral_experiment/02_needs_driven_wolf_sheep/model.py: Implements needs-driven sheep actions + metrics collection.
  • models/behavioral_experiment/02_needs_driven_wolf_sheep/agents.py: Sheep/Wolf/GrassPatch agent behaviors with internal drives.
  • models/behavioral_experiment/02_needs_driven_wolf_sheep/test_smoke.py: Adds initialization, datacollector, and metric smoke tests.
  • models/behavioral_experiment/02_needs_driven_wolf_sheep/README.md: Documents friction point + reproducible run + outputs.
  • models/behavioral_experiment/03_opinion_diffusion/model.py: Implements opinion diffusion over a random network.
  • models/behavioral_experiment/03_opinion_diffusion/agents.py: Defines OpinionAgent using mesa_signals observables.
  • models/behavioral_experiment/03_opinion_diffusion/test_smoke.py: Adds initialization, bounds, and message-receipt smoke tests.
  • models/behavioral_experiment/03_opinion_diffusion/README.md: Documents friction point + reproducible run + outputs.


Comment on lines +36 to +40
def __init__(
    self,
    scenario: NeedsDrivenWolfSheepScenario = NeedsDrivenWolfSheepScenario,
):
    super().__init__(scenario=scenario)
Copy link

Copilot AI Mar 22, 2026


The default value for scenario is the NeedsDrivenWolfSheepScenario class, not an instance. Calling NeedsDrivenWolfSheepModel() without passing a scenario will make scenario.height/width/... fail at runtime. Default this parameter to an instance (e.g., NeedsDrivenWolfSheepScenario()) or accept None and construct a default inside __init__.

Comment on lines +11 to +90
sys.path.insert(0, str(MESA_SOURCE_ROOT))


def _build_model(seed: int = 42):
    from model import OpinionDiffusionModel

    return OpinionDiffusionModel(
        num_agents=20,
        avg_degree=5,
        learning_rate=0.1,
        contrarian_probability=0.2,
        rng=seed,
    )


def test_model_initializes():
    m = _build_model(seed=42)
    assert m.round == 0
    assert m.time == 0
    assert len(m.agents) == 20
    assert len(m.history) == 1

    first = m.history[0]
    for key in ("round", "mean_opinion", "min_opinion", "max_opinion"):
        assert key in first


def test_model_runs_10_steps():
    m = _build_model(seed=42)
    for _ in range(10):
        m.step()

    assert m.round == 10
    assert m.time == 10
    assert len(m.history) == 11
    assert m.history[-1]["round"] == 10


def test_opinions_stay_in_unit_interval():
    from agents import OpinionAgent

    m = _build_model(seed=42)
    for _ in range(15):
        m.step()

    for agent in m.agents_by_type[OpinionAgent]:
        assert 0.0 <= agent.opinion <= 1.0


def test_agents_receive_neighbor_messages():
    from agents import OpinionAgent

    m = _build_model(seed=42)
    m.step()

    received_total = sum(
        agent.messages_received for agent in m.agents_by_type[OpinionAgent]
    )
    assert received_total > 0


def test_seeded_runs_advance_consistently():
    m1 = _build_model(seed=42)
    m2 = _build_model(seed=123)

    m1.step()
    m2.step()

    assert m1.round == m2.round == 1
    assert m1.time == m2.time == 1
    assert len(m1.history) == len(m2.history) == 2


if __name__ == "__main__":
    test_model_initializes()
    test_model_runs_10_steps()
    test_opinions_stay_in_unit_interval()
    test_agents_receive_neighbor_messages()
    test_seeded_runs_advance_consistently()
    print("All tests passed!")

Copilot AI Mar 22, 2026


This file mixes tabs and spaces for indentation (e.g., the sys.path.insert block and most function bodies are tab-indented). This can cause style/tooling issues and is inconsistent with standard Python formatting (PEP 8). Re-indent using spaces consistently.

Suggested change

sys.path.insert(0, str(MESA_SOURCE_ROOT))


def _build_model(seed: int = 42):
    from model import OpinionDiffusionModel

    return OpinionDiffusionModel(
        num_agents=20,
        avg_degree=5,
        learning_rate=0.1,
        contrarian_probability=0.2,
        rng=seed,
    )


def test_model_initializes():
    m = _build_model(seed=42)
    assert m.round == 0
    assert m.time == 0
    assert len(m.agents) == 20
    assert len(m.history) == 1

    first = m.history[0]
    for key in ("round", "mean_opinion", "min_opinion", "max_opinion"):
        assert key in first


def test_model_runs_10_steps():
    m = _build_model(seed=42)
    for _ in range(10):
        m.step()

    assert m.round == 10
    assert m.time == 10
    assert len(m.history) == 11
    assert m.history[-1]["round"] == 10


def test_opinions_stay_in_unit_interval():
    from agents import OpinionAgent

    m = _build_model(seed=42)
    for _ in range(15):
        m.step()

    for agent in m.agents_by_type[OpinionAgent]:
        assert 0.0 <= agent.opinion <= 1.0


def test_agents_receive_neighbor_messages():
    from agents import OpinionAgent

    m = _build_model(seed=42)
    m.step()

    received_total = sum(
        agent.messages_received for agent in m.agents_by_type[OpinionAgent]
    )
    assert received_total > 0


def test_seeded_runs_advance_consistently():
    m1 = _build_model(seed=42)
    m2 = _build_model(seed=123)

    m1.step()
    m2.step()

    assert m1.round == m2.round == 1
    assert m1.time == m2.time == 1
    assert len(m1.history) == len(m2.history) == 2


if __name__ == "__main__":
    test_model_initializes()
    test_model_runs_10_steps()
    test_opinions_stay_in_unit_interval()
    test_agents_receive_neighbor_messages()
    test_seeded_runs_advance_consistently()
    print("All tests passed!")

Comment on lines +11 to +126
sys.path.insert(0, str(MESA_SOURCE_ROOT))


def _small_scenario(seed: int = 42):
    from model import NeedsDrivenWolfSheepScenario

    return NeedsDrivenWolfSheepScenario(
        rng=seed,
        width=10,
        height=10,
        initial_sheep=30,
        initial_wolves=8,
        sheep_reproduce=0.04,
        wolf_reproduce=0.03,
        wolf_gain_from_food=16.0,
        sheep_gain_from_food=4.0,
        grass=True,
        grass_regrowth_time=12,
    )


def test_model_initializes():
    from model import NeedsDrivenWolfSheepModel

    m = NeedsDrivenWolfSheepModel(scenario=_small_scenario())
    assert m.time == 0
    assert m.running is True
    assert len(m.agents) > 0

    df = m.datacollector.get_model_vars_dataframe()
    assert len(df) == 1


def test_model_runs_10_steps_and_collects_data():
    from model import NeedsDrivenWolfSheepModel

    m = NeedsDrivenWolfSheepModel(scenario=_small_scenario())
    for _ in range(10):
        m.step()

    assert m.time == 10
    df = m.datacollector.get_model_vars_dataframe()
    assert len(df) == 11


def test_datacollector_has_expected_columns():
    from model import NeedsDrivenWolfSheepModel

    m = NeedsDrivenWolfSheepModel(scenario=_small_scenario())
    m.step()

    df = m.datacollector.get_model_vars_dataframe()
    expected = {
        "Wolves",
        "Sheep",
        "Grass",
        "AvgSheepFear",
        "MaxSheepFear",
        "AvgSheepFatigue",
        "AvgSheepHunger",
        "FleeCount",
        "ForageCount",
        "RestCount",
        "WanderCount",
    }
    assert expected.issubset(set(df.columns))


def test_action_counts_are_non_negative():
    from model import NeedsDrivenWolfSheepModel

    m = NeedsDrivenWolfSheepModel(scenario=_small_scenario())
    m.step()
    row = m.datacollector.get_model_vars_dataframe().iloc[-1]

    for key in ("FleeCount", "ForageCount", "RestCount", "WanderCount"):
        assert row[key] >= 0


def test_grass_metric_zero_when_grass_disabled():
    from model import NeedsDrivenWolfSheepModel, NeedsDrivenWolfSheepScenario

    scenario = NeedsDrivenWolfSheepScenario(
        rng=42,
        width=10,
        height=10,
        initial_sheep=20,
        initial_wolves=5,
        grass=False,
    )
    m = NeedsDrivenWolfSheepModel(scenario=scenario)
    m.step()

    row = m.datacollector.get_model_vars_dataframe().iloc[-1]
    assert row["Grass"] == 0


def test_seeded_run_advances_time_consistently():
    from model import NeedsDrivenWolfSheepModel

    m1 = NeedsDrivenWolfSheepModel(scenario=_small_scenario(seed=42))
    m2 = NeedsDrivenWolfSheepModel(scenario=_small_scenario(seed=123))

    m1.step()
    m2.step()
    assert m1.time == m2.time == 1


if __name__ == "__main__":
    test_model_initializes()
    test_model_runs_10_steps_and_collects_data()
    test_datacollector_has_expected_columns()
    test_action_counts_are_non_negative()
    test_grass_metric_zero_when_grass_disabled()
    test_seeded_run_advances_time_consistently()
    print("All tests passed!")

Copilot AI Mar 22, 2026


This file uses tab indentation in several places (e.g., the sys.path.insert block). Standard Python formatting expects spaces, and mixed indentation can break linters/formatters and lead to subtle issues. Please convert indentation to spaces consistently.

Suggested change

sys.path.insert(0, str(MESA_SOURCE_ROOT))


def _small_scenario(seed: int = 42):
    from model import NeedsDrivenWolfSheepScenario

    return NeedsDrivenWolfSheepScenario(
        rng=seed,
        width=10,
        height=10,
        initial_sheep=30,
        initial_wolves=8,
        sheep_reproduce=0.04,
        wolf_reproduce=0.03,
        wolf_gain_from_food=16.0,
        sheep_gain_from_food=4.0,
        grass=True,
        grass_regrowth_time=12,
    )


def test_model_initializes():
    from model import NeedsDrivenWolfSheepModel

    m = NeedsDrivenWolfSheepModel(scenario=_small_scenario())
    assert m.time == 0
    assert m.running is True
    assert len(m.agents) > 0

    df = m.datacollector.get_model_vars_dataframe()
    assert len(df) == 1


def test_model_runs_10_steps_and_collects_data():
    from model import NeedsDrivenWolfSheepModel

    m = NeedsDrivenWolfSheepModel(scenario=_small_scenario())
    for _ in range(10):
        m.step()

    assert m.time == 10
    df = m.datacollector.get_model_vars_dataframe()
    assert len(df) == 11


def test_datacollector_has_expected_columns():
    from model import NeedsDrivenWolfSheepModel

    m = NeedsDrivenWolfSheepModel(scenario=_small_scenario())
    m.step()

    df = m.datacollector.get_model_vars_dataframe()
    expected = {
        "Wolves",
        "Sheep",
        "Grass",
        "AvgSheepFear",
        "MaxSheepFear",
        "AvgSheepFatigue",
        "AvgSheepHunger",
        "FleeCount",
        "ForageCount",
        "RestCount",
        "WanderCount",
    }
    assert expected.issubset(set(df.columns))


def test_action_counts_are_non_negative():
    from model import NeedsDrivenWolfSheepModel

    m = NeedsDrivenWolfSheepModel(scenario=_small_scenario())
    m.step()
    row = m.datacollector.get_model_vars_dataframe().iloc[-1]

    for key in ("FleeCount", "ForageCount", "RestCount", "WanderCount"):
        assert row[key] >= 0


def test_grass_metric_zero_when_grass_disabled():
    from model import NeedsDrivenWolfSheepModel, NeedsDrivenWolfSheepScenario

    scenario = NeedsDrivenWolfSheepScenario(
        rng=42,
        width=10,
        height=10,
        initial_sheep=20,
        initial_wolves=5,
        grass=False,
    )
    m = NeedsDrivenWolfSheepModel(scenario=scenario)
    m.step()

    row = m.datacollector.get_model_vars_dataframe().iloc[-1]
    assert row["Grass"] == 0


def test_seeded_run_advances_time_consistently():
    from model import NeedsDrivenWolfSheepModel

    m1 = NeedsDrivenWolfSheepModel(scenario=_small_scenario(seed=42))
    m2 = NeedsDrivenWolfSheepModel(scenario=_small_scenario(seed=123))

    m1.step()
    m2.step()
    assert m1.time == m2.time == 1


if __name__ == "__main__":
    test_model_initializes()
    test_model_runs_10_steps_and_collects_data()
    test_datacollector_has_expected_columns()
    test_action_counts_are_non_negative()
    test_grass_metric_zero_when_grass_disabled()
    test_seeded_run_advances_time_consistently()
    print("All tests passed!")

Comment on lines +24 to +35
def step(self) -> None:
    self.round += 1

    # 1) Seller posts the current price.
    self.seller.post_price()

    # 2) Buyers submit their bids (via shared inbox workaround).
    self.agents_by_type[Buyer].shuffle_do("step")

    # 3) Seller processes the highest valid bid.
    self.seller.step()

Copilot AI Mar 22, 2026


step() overrides mesa.Model.step but never advances self.time (or calls super().step()). The accompanying smoke tests assert that m.time increments with each step(), so this is likely to fail and also makes per-round timestamps (e.g., in logs) misleading. Consider calling super().step() (if it owns time advancement) or explicitly incrementing self.time to keep time and round consistent.

}
)

def step(self) -> None:

Copilot AI Mar 22, 2026


step() doesn't advance the model clock (self.time) or call super().step(). The test suite asserts m.time == 10 after 10 calls to m.step(), so this will likely fail and also affects any time-based logging/behavior that relies on model.time. Align round and time by advancing time in step() (or delegating to the base class).

Suggested change

def step(self) -> None:
    # Advance the model clock to keep `self.time` aligned with `round`.
    self.time += 1

def step(self):
    self.agents_by_type[Sheep].shuffle_do("step")
    self.agents_by_type[Wolf].shuffle_do("step")

Copilot AI Mar 22, 2026


NeedsDrivenWolfSheepModel.step() doesn't advance self.time (and doesn't call super().step()). The smoke tests assert that m.time increases by 1 per call, so the current implementation likely breaks those assertions and can desynchronize DataCollector indexing if it keys off time.

Suggested change

self.agents_by_type[Wolf].shuffle_do("step")
# Advance model time so that it stays in sync with step calls and data collection.
self.time += 1

…mework analysis, adding detailed model descriptions and problems exposed