Labels: bug (Something isn't working)
Description
Checklist
- I have searched for similar issues before opening this one.
- I am using the latest version of lmms-eval.
Bug Description
When running the model qwen2.5vl or qwen3vl, the following error occurs:
AttributeError: 'Qwen3_VL' object has no attribute 'fps'
It appears the model class never defines an `fps` attribute. Could you please clarify whether the `fps` parameter should be passed within a specific configuration object?
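For context, a minimal standalone sketch of the failing pattern and one possible defensive workaround (this is an illustration, not the project's actual code; the class and method names here only mirror the traceback, and the `fps` handling is an assumption):

```python
# Hypothetical sketch: if __init__ never assigns self.fps (e.g. the kwarg
# is not plumbed through from --model_args), any later bare `self.fps`
# access raises AttributeError, matching the traceback in this issue.

class Qwen3VLSketch:
    def __init__(self, fps=None):
        # Only set the attribute when a value is actually provided,
        # reproducing the condition under which the bug appears.
        if fps is not None:
            self.fps = fps

    def generate_until(self):
        # Defensive lookup: getattr with a default returns None when the
        # attribute was never set, instead of raising AttributeError.
        fps = getattr(self, "fps", None)
        if fps is not None:
            return f"sampling video at {fps} fps"
        return "sampling video with default frame count"


model = Qwen3VLSketch()        # fps never assigned
print(model.generate_until())  # no AttributeError raised
```

The equivalent one-line change at `qwen3_vl.py:64` would be replacing `if self.fps is not None:` with `if getattr(self, "fps", None) is not None:`, though initializing `self.fps = None` in `__init__` is likely the cleaner fix.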
Steps to Reproduce
python -m lmms_eval --model qwen3_vl --model_args pretrained=Qwen/Qwen3-VL-4B-Instruct --tasks longvideobench_val_v --batch_size 1 --limit 8
Error Message / Traceback
2026-03-03 05:05:14 | INFO | lmms_eval.__main__:cli_evaluate:480 - Verbosity set to INFO
2026-03-03 05:05:16 | INFO | lmms_eval.__main__:cli_evaluate_single:599 - Evaluation tracker args: {'token': 'hf_***REDACTED***'}
2026-03-03 05:05:16 | WARNING | lmms_eval.__main__:cli_evaluate_single:629 - --limit SHOULD ONLY BE USED FOR TESTING.REAL METRICS SHOULD NOT BE COMPUTED USING LIMIT.
2026-03-03 05:05:16 | INFO | lmms_eval.__main__:cli_evaluate_single:683 - Selected Tasks: ['longvideobench_val_v']
2026-03-03 05:05:16 | INFO | lmms_eval.evaluator:simple_evaluate:186 - Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234
Fetching 2 files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 15335.66it/s]
Download complete: : 0.00B [00:00, ?B/s] | 0/2 [00:00<?, ?it/s]
Loading weights: 100%|██████████████████████████████████████████████████████████████████████████████████| 713/713 [00:02<00:00, 279.48it/s, Materializing param=model.visual.pos_embed.weight]
2026-03-03 05:05:47 | INFO | lmms_eval.evaluator:evaluate:861 - Running on rank 0 (local rank 0)
2026-03-03 05:05:47 | INFO | lmms_eval.api.task:build_all_requests:430 - Building contexts for longvideobench_val_v on rank 0...
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:00<00:00, 2601.32it/s]
2026-03-03 05:05:47 | INFO | lmms_eval.evaluator:evaluate:955 - Running generate_until requests
Model Responding: 0%| | 0/8 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/data3/tangjialei/dongfengming/lmms-eval/lmms_eval/__main__.py", line 547, in cli_evaluate
results, samples = cli_evaluate_single(args)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data3/tangjialei/dongfengming/lmms-eval/lmms_eval/__main__.py", line 687, in cli_evaluate_single
results = evaluator.simple_evaluate(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data3/tangjialei/dongfengming/lmms-eval/lmms_eval/utils.py", line 724, in _wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/data3/tangjialei/dongfengming/lmms-eval/lmms_eval/evaluator.py", line 410, in simple_evaluate
results = evaluate(
^^^^^^^^^
File "/data3/tangjialei/dongfengming/lmms-eval/lmms_eval/utils.py", line 724, in _wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/data3/tangjialei/dongfengming/lmms-eval/lmms_eval/evaluator.py", line 979, in evaluate
resps = getattr(lm, reqtype)(cloned_reqs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data3/tangjialei/dongfengming/lmms-eval/lmms_eval/models/chat/qwen3_vl.py", line 64, in generate_until
if self.fps is not None:
^^^^^^^^
AttributeError: 'Qwen3_VL' object has no attribute 'fps'
2026-03-03 05:05:47 | ERROR | lmms_eval.__main__:cli_evaluate:569 - Error during evaluation: 'Qwen3_VL' object has no attribute 'fps'. Please set `--verbosity=DEBUG` to get more information.
Environment
Python: 3.11
Additional Context
No response