Fix scenario mining evaluation bug#311

Merged
benjaminrwilson merged 2 commits into argoverse:main from CainanD:main
May 18, 2025
Conversation

Contributor

@CainanD CainanD commented May 9, 2025

PR Summary

A simple fix that corrects the indices used for the scenario mining evaluation metrics, so that each score is printed under the correct metric and the EvalAI leaderboard is reported correctly.
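As a rough illustration of the bug class this PR fixes (the metric names and array layout below are hypothetical, not the actual argoverse evaluation code): when scores are stored positionally, swapped indices attach each score to the wrong metric label.

```python
# Placeholder metric names -- illustrative only, not the devkit's metrics.
METRIC_NAMES = ["metric_a", "metric_b"]

def summarize_buggy(scores: list[float]) -> dict[str, float]:
    # Bug: indices swapped, so metric_a is reported with metric_b's score
    # and vice versa.
    return {METRIC_NAMES[0]: scores[1], METRIC_NAMES[1]: scores[0]}

def summarize_fixed(scores: list[float]) -> dict[str, float]:
    # Fix: each metric name is paired with the score at its own index.
    return {METRIC_NAMES[0]: scores[0], METRIC_NAMES[1]: scores[1]}

scores = [0.62, 0.87]
```

With distinct scores like these, the buggy version reports 0.87 under `metric_a`, while the fixed version correctly reports 0.62.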

Testing

The previous test case reported the maximum score of 1.0 for both metrics, which is why the bug slipped through. The issue was caught after hand-verifying the scores using the output confusion matrices.

In order to ensure this PR works as intended, it is:

  • [✅] unit tested.
  • [✅] other or not applicable (additional detail/rationale required)
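The testing gap described above can be sketched in a few lines (illustrative only; `report`/`report_swapped` are hypothetical helpers, not the repo's test code): when every metric evaluates to the same value, a swapped-index report is indistinguishable from a correct one, so the old all-1.0 test case could not catch the bug.

```python
def report(scores: list[float], names: list[str]) -> dict[str, float]:
    # Correct pairing: each name gets the score at its own position.
    return dict(zip(names, scores))

def report_swapped(scores: list[float], names: list[str]) -> dict[str, float]:
    # Buggy pairing: scores attached to names in reversed order.
    return dict(zip(names, scores[::-1]))

names = ["metric_a", "metric_b"]

perfect = [1.0, 1.0]   # old test case: both metrics maxed out
# The swap is invisible -- both reports are identical.
assert report(perfect, names) == report_swapped(perfect, names)

distinct = [0.75, 0.5]  # a test case with distinct scores exposes the swap
assert report(distinct, names) != report_swapped(distinct, names)
```

This is why a regression test with distinct per-metric scores is the kind of case that would have caught the original bug.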

Compliance with Standards

This change does not appear to pass the linters, which is surprising for such a minor change. I assume the linters have been updated since the previous pull request.

As the author, I certify that this PR conforms to the following standards:

  • [✅] Code changes conform to PEP8 and docstrings conform to the Google Python style guide.
  • [✅] A well-written summary explains what was done and why it was done.
  • [✅] The PR is adequately tested and the testing details and links to external results are included.

@benjaminrwilson benjaminrwilson merged commit f1bedd7 into argoverse:main May 18, 2025
45 of 48 checks passed