Add friction-evidence models for agent interaction GSoC proposal #2
SumeetDUTTA wants to merge 9 commits into mesa:main from …
Conversation
- …iled insights on background, interest in Mesa, learning goals, and future contributions.
- … objectives in agent-based modeling and Mesa contributions.
- …g bid handling and auction logic
- …pinion Diffusion models: clarify goals, setup, and results; enhance reproducibility with code snippets; improve structure and readability.
- … Opinion Diffusion models: implement unit tests for model initialization, step execution, and data collection.
Pull request overview
This PR adds three “friction-evidence” Mesa models (market auction, needs-driven wolf-sheep, opinion diffusion) intended to demonstrate gaps in agent interaction patterns for a GSoC proposal, along with supporting writeups and smoke tests.
Changes:
- Add three behavioral experiment models under `models/behavioral_experiment/01_*` through `03_*`, each with code, README evidence, and smoke tests.
- Update candidate materials (`motivation.md`, `models/ANALYSIS.md`) and improve top-level `README.md` formatting.
- Add VS Code pytest configuration.
Reviewed changes
Copilot reviewed 16 out of 16 changed files in this pull request and generated 6 comments.
Show a summary per file
| File | Description |
|---|---|
| motivation.md | Filled in motivation/fit for Mesa and learning goals. |
| README.md | Formatting adjustments (spacing/markdown consistency). |
| models/ANALYSIS.md | Notes about running/analyzing advanced Wolf-Sheep issues. |
| .vscode/settings.json | Configure pytest runner defaults for VS Code. |
| models/behavioral_experiment/01_market_auction/model.py | Implements a seller/buyer auction interaction model. |
| models/behavioral_experiment/01_market_auction/agents.py | Defines Buyer/Seller agents and a Bid message object. |
| models/behavioral_experiment/01_market_auction/test_smoke.py | Adds basic execution/log-structure smoke tests. |
| models/behavioral_experiment/01_market_auction/README.md | Documents friction point + reproducible run + outputs. |
| models/behavioral_experiment/02_needs_driven_wolf_sheep/model.py | Implements needs-driven sheep actions + metrics collection. |
| models/behavioral_experiment/02_needs_driven_wolf_sheep/agents.py | Sheep/Wolf/GrassPatch agent behaviors with internal drives. |
| models/behavioral_experiment/02_needs_driven_wolf_sheep/test_smoke.py | Adds initialization, datacollector, and metric smoke tests. |
| models/behavioral_experiment/02_needs_driven_wolf_sheep/README.md | Documents friction point + reproducible run + outputs. |
| models/behavioral_experiment/03_opinion_diffusion/model.py | Implements opinion diffusion over a random network. |
| models/behavioral_experiment/03_opinion_diffusion/agents.py | Defines OpinionAgent using mesa_signals observables. |
| models/behavioral_experiment/03_opinion_diffusion/test_smoke.py | Adds initialization, bounds, and message-receipt smoke tests. |
| models/behavioral_experiment/03_opinion_diffusion/README.md | Documents friction point + reproducible run + outputs. |
```python
def __init__(
    self,
    scenario: NeedsDrivenWolfSheepScenario = NeedsDrivenWolfSheepScenario,
):
    super().__init__(scenario=scenario)
```
The default value for scenario is the NeedsDrivenWolfSheepScenario class, not an instance. Calling NeedsDrivenWolfSheepModel() without passing a scenario will make scenario.height/width/... fail at runtime. Default this parameter to an instance (e.g., NeedsDrivenWolfSheepScenario()) or accept None and construct a default inside __init__.
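The `None`-default pattern the review suggests can be sketched in a minimal, self-contained form. The scenario fields below are illustrative stand-ins, not the PR's actual definitions:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class NeedsDrivenWolfSheepScenario:
    # Illustrative stand-in fields; the PR's scenario has more parameters.
    width: int = 10
    height: int = 10


class NeedsDrivenWolfSheepModel:
    def __init__(self, scenario: Optional[NeedsDrivenWolfSheepScenario] = None):
        # Build a fresh default instance per model instead of defaulting to
        # the class object, so attribute access like scenario.width works
        # and no scenario object is accidentally shared between models.
        self.scenario = scenario if scenario is not None else NeedsDrivenWolfSheepScenario()
        self.width = self.scenario.width
        self.height = self.scenario.height


model = NeedsDrivenWolfSheepModel()  # now safe with no arguments
```

Constructing the default inside `__init__` also sidesteps the classic mutable-default pitfall that an eagerly evaluated instance default would introduce.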
```python
sys.path.insert(0, str(MESA_SOURCE_ROOT))


def _build_model(seed: int = 42):
    from model import OpinionDiffusionModel

    return OpinionDiffusionModel(
        num_agents=20,
        avg_degree=5,
        learning_rate=0.1,
        contrarian_probability=0.2,
        rng=seed,
    )


def test_model_initializes():
    m = _build_model(seed=42)
    assert m.round == 0
    assert m.time == 0
    assert len(m.agents) == 20
    assert len(m.history) == 1

    first = m.history[0]
    for key in ("round", "mean_opinion", "min_opinion", "max_opinion"):
        assert key in first


def test_model_runs_10_steps():
    m = _build_model(seed=42)
    for _ in range(10):
        m.step()

    assert m.round == 10
    assert m.time == 10
    assert len(m.history) == 11
    assert m.history[-1]["round"] == 10


def test_opinions_stay_in_unit_interval():
    from agents import OpinionAgent

    m = _build_model(seed=42)
    for _ in range(15):
        m.step()

    for agent in m.agents_by_type[OpinionAgent]:
        assert 0.0 <= agent.opinion <= 1.0


def test_agents_receive_neighbor_messages():
    from agents import OpinionAgent

    m = _build_model(seed=42)
    m.step()

    received_total = sum(
        agent.messages_received for agent in m.agents_by_type[OpinionAgent]
    )
    assert received_total > 0


def test_seeded_runs_advance_consistently():
    m1 = _build_model(seed=42)
    m2 = _build_model(seed=123)

    m1.step()
    m2.step()

    assert m1.round == m2.round == 1
    assert m1.time == m2.time == 1
    assert len(m1.history) == len(m2.history) == 2


if __name__ == "__main__":
    test_model_initializes()
    test_model_runs_10_steps()
    test_opinions_stay_in_unit_interval()
    test_agents_receive_neighbor_messages()
    test_seeded_runs_advance_consistently()
    print("All tests passed!")
```
This file mixes tabs and spaces for indentation (e.g., the sys.path.insert block and most function bodies are tab-indented). This can cause style/tooling issues and is inconsistent with standard Python formatting (PEP 8). Re-indent using spaces consistently.
Suggested change: the same file re-indented with four spaces per level. The fix is whitespace-only, so the code content is otherwise identical to the block above.
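Mixed indentation like this can be caught mechanically before review with the standard-library `tabnanny` checker. A sketch (the file contents here are a contrived example, not the PR's test file):

```python
import subprocess
import sys
import tempfile

# A tiny file whose body mixes a tab-indented line with a space-indented one
# at the same block level -- exactly the pattern flagged in this review.
mixed = "def f():\n\tx = 1\n        y = 2\n"
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as fh:
    fh.write(mixed)
    path = fh.name

# `python -m tabnanny FILE` reports files whose indentation is ambiguous
# across tab sizes; no output means the file is clean.
result = subprocess.run(
    [sys.executable, "-m", "tabnanny", path], capture_output=True, text=True
)
flagged = bool((result.stdout + result.stderr).strip())
print("flagged" if flagged else "clean")
```

Running the same command over the PR's test files (or wiring it into CI) would make the "spaces only" convention self-enforcing.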
```python
sys.path.insert(0, str(MESA_SOURCE_ROOT))


def _small_scenario(seed: int = 42):
    from model import NeedsDrivenWolfSheepScenario

    return NeedsDrivenWolfSheepScenario(
        rng=seed,
        width=10,
        height=10,
        initial_sheep=30,
        initial_wolves=8,
        sheep_reproduce=0.04,
        wolf_reproduce=0.03,
        wolf_gain_from_food=16.0,
        sheep_gain_from_food=4.0,
        grass=True,
        grass_regrowth_time=12,
    )


def test_model_initializes():
    from model import NeedsDrivenWolfSheepModel

    m = NeedsDrivenWolfSheepModel(scenario=_small_scenario())
    assert m.time == 0
    assert m.running is True
    assert len(m.agents) > 0

    df = m.datacollector.get_model_vars_dataframe()
    assert len(df) == 1


def test_model_runs_10_steps_and_collects_data():
    from model import NeedsDrivenWolfSheepModel

    m = NeedsDrivenWolfSheepModel(scenario=_small_scenario())
    for _ in range(10):
        m.step()

    assert m.time == 10
    df = m.datacollector.get_model_vars_dataframe()
    assert len(df) == 11


def test_datacollector_has_expected_columns():
    from model import NeedsDrivenWolfSheepModel

    m = NeedsDrivenWolfSheepModel(scenario=_small_scenario())
    m.step()

    df = m.datacollector.get_model_vars_dataframe()
    expected = {
        "Wolves",
        "Sheep",
        "Grass",
        "AvgSheepFear",
        "MaxSheepFear",
        "AvgSheepFatigue",
        "AvgSheepHunger",
        "FleeCount",
        "ForageCount",
        "RestCount",
        "WanderCount",
    }
    assert expected.issubset(set(df.columns))


def test_action_counts_are_non_negative():
    from model import NeedsDrivenWolfSheepModel

    m = NeedsDrivenWolfSheepModel(scenario=_small_scenario())
    m.step()
    row = m.datacollector.get_model_vars_dataframe().iloc[-1]

    for key in ("FleeCount", "ForageCount", "RestCount", "WanderCount"):
        assert row[key] >= 0


def test_grass_metric_zero_when_grass_disabled():
    from model import NeedsDrivenWolfSheepModel, NeedsDrivenWolfSheepScenario

    scenario = NeedsDrivenWolfSheepScenario(
        rng=42,
        width=10,
        height=10,
        initial_sheep=20,
        initial_wolves=5,
        grass=False,
    )
    m = NeedsDrivenWolfSheepModel(scenario=scenario)
    m.step()

    row = m.datacollector.get_model_vars_dataframe().iloc[-1]
    assert row["Grass"] == 0


def test_seeded_run_advances_time_consistently():
    from model import NeedsDrivenWolfSheepModel

    m1 = NeedsDrivenWolfSheepModel(scenario=_small_scenario(seed=42))
    m2 = NeedsDrivenWolfSheepModel(scenario=_small_scenario(seed=123))

    m1.step()
    m2.step()
    assert m1.time == m2.time == 1


if __name__ == "__main__":
    test_model_initializes()
    test_model_runs_10_steps_and_collects_data()
    test_datacollector_has_expected_columns()
    test_action_counts_are_non_negative()
    test_grass_metric_zero_when_grass_disabled()
    test_seeded_run_advances_time_consistently()
    print("All tests passed!")
```
This file uses tab indentation in several places (e.g., the sys.path.insert block). Standard Python formatting expects spaces, and mixed indentation can break linters/formatters and lead to subtle issues. Please convert indentation to spaces consistently.
Suggested change: the same file re-indented with four spaces per level. The fix is whitespace-only, so the code content is otherwise identical to the block above.
```python
def step(self) -> None:
    self.round += 1

    # 1) Seller posts the current price.
    self.seller.post_price()

    # 2) Buyers submit their bids (via shared inbox workaround).
    self.agents_by_type[Buyer].shuffle_do("step")

    # 3) Seller processes the highest valid bid.
    self.seller.step()
```
step() overrides mesa.Model.step but never advances self.time (or calls super().step()). The accompanying smoke tests assert that m.time increments with each step(), so this is likely to fail and also makes per-round timestamps (e.g., in logs) misleading. Consider calling super().step() (if it owns time advancement) or explicitly incrementing self.time to keep time and round consistent.
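The round/time lockstep this comment asks for can be sketched with a minimal stand-in class (the real model subclasses `mesa.Model`, which may own time advancement via `super().step()`; this sketch just illustrates the invariant):

```python
class MarketAuctionModel:
    """Minimal stand-in for the PR's model, not the actual mesa subclass."""

    def __init__(self) -> None:
        self.round = 0
        self.time = 0  # mirrors the model clock the smoke tests assert on

    def step(self) -> None:
        self.round += 1
        # Keep the clock in lockstep with the round counter so per-round
        # log timestamps stay meaningful. In the real model this line may
        # be replaced by super().step() if the base class advances time.
        self.time += 1


m = MarketAuctionModel()
for _ in range(10):
    m.step()
```

With both counters advanced in one place, the smoke-test assertion `m.time == m.round` holds by construction.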
```python
    }
)

def step(self) -> None:
```
step() doesn't advance the model clock (self.time) or call super().step(). The test suite asserts m.time == 10 after 10 calls to m.step(), so this will likely fail and also affects any time-based logging/behavior that relies on model.time. Align round and time by advancing time in step() (or delegating to the base class).
Suggested change:
```python
def step(self) -> None:
    # Advance the model clock to keep `self.time` aligned with `round`.
    self.time += 1
```
```python
def step(self):
    self.agents_by_type[Sheep].shuffle_do("step")
    self.agents_by_type[Wolf].shuffle_do("step")
```
NeedsDrivenWolfSheepModel.step() doesn't advance self.time (and doesn't call super().step()). The smoke tests assert that m.time increases by 1 per call, so the current implementation likely breaks those assertions and can desynchronize DataCollector indexing if it keys off time.
Suggested change:
```python
self.agents_by_type[Wolf].shuffle_do("step")
# Advance model time so that it stays in sync with step calls and data collection.
self.time += 1
```
…mework analysis, adding detailed model descriptions and problems exposed

Adds three working models demonstrating Mesa interaction friction points:
- Market Auction — exposes request-reply messaging gaps
- Needs-Driven Wolf-Sheep — exposes behavior-selection friction
- Opinion Diffusion — exposes pub-sub communication gaps

Each model includes:

These evidence models form the core of the 175-hour agent interaction framework GSoC proposal.
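The request-reply gap the Market Auction model targets can be illustrated with a minimal "shared inbox" pattern like the one the auction code works around. All names here are hypothetical stand-ins, not the PR's actual classes:

```python
from dataclasses import dataclass, field


@dataclass
class Bid:
    buyer_id: int
    amount: float


@dataclass
class Seller:
    # Without a first-class request-reply channel, buyers append bids to a
    # plain shared list and the seller scans it manually each round.
    inbox: list = field(default_factory=list)

    def best_bid(self):
        # Manual selection of the winning bid; a framework-level reply
        # mechanism would let the seller answer each buyer directly.
        return max(self.inbox, key=lambda b: b.amount, default=None)


seller = Seller()
seller.inbox.append(Bid(buyer_id=1, amount=9.5))
seller.inbox.append(Bid(buyer_id=2, amount=11.0))
winner = seller.best_bid()
```

The friction is that nothing in this pattern routes the outcome back to the losing buyers; each model in the PR documents a variation of that gap.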