I tried to run MMMU-test on the internvl2 model, and I found that the metric in the mmmu_test YAML config is 'submission', but no submission file was generated.
A second question: there is no lmms_eval_specific_kwargs in mmmu_test.yaml, which leads to the following error:
> lmms_eval/tasks/mmmu/utils.py", line 145, in mmmu_doc_to_text
> question = construct_prompt(doc, lmms_eval_specific_kwargs["multiple_choice_prompt"], lmms_eval_specific_kwargs["open_ended_prompt"])
> TypeError: 'NoneType' object is not subscriptable
So, should I add my own lmms_eval_specific_kwargs to the YAML config?
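To be concrete, here is the kind of block I have in mind, with the two keys taken from the construct_prompt call in the traceback above. The prompt strings and the nesting under default are my guesses based on how other task configs such as mmmu_val.yaml appear to be structured, not official values:

```yaml
# Hypothetical sketch for mmmu_test.yaml -- keys inferred from the
# construct_prompt(...) call in lmms_eval/tasks/mmmu/utils.py.
# The prompt wording below is a placeholder, not the project's defaults.
lmms_eval_specific_kwargs:
  default:
    multiple_choice_prompt: "Answer with the option's letter from the given choices directly."
    open_ended_prompt: "Answer the question using a single word or phrase."
```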