
MMMU-test doesn't generate submission files #538

Open
zjuAJW opened this issue Feb 13, 2025 · 0 comments
zjuAJW commented Feb 13, 2025

python -m lmms_eval \
    --model=internvl2 \
    --model_args=pretrained=my_model_path,device_map=auto \
    --tasks=mmmu_test \
    --batch_size=1 \
    --log_samples \
    --log_samples_suffix=internvl2 \
    --output_path="./logs/" \
    --limit 100

I tried to run MMMU-test on an InternVL2 model, and I found that the metric in the mmmu_test YAML config is 'submission'. However, no submission file is generated.
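As a workaround I'm considering assembling the submission file myself from the --log_samples output. A rough sketch of what I mean; the log path and the "doc"/"filtered_resps" field names are my guesses about the log schema, not verified against the actual files:

```python
import json

# Hypothetical workaround: build an MMMU submission file from the
# --log_samples output. The path below and the "doc"/"filtered_resps"
# keys are assumptions about the log layout; adjust to the files your
# run actually wrote under ./logs/.
log_file = "./logs/my_run/mmmu_test_samples.json"  # hypothetical path

with open(log_file) as f:
    samples = json.load(f)

# The MMMU evaluation server expects a {question_id: answer} mapping.
submission = {}
for sample in samples:
    doc = sample["doc"]                   # assumed: the original dataset row
    answer = sample["filtered_resps"][0]  # assumed: the parsed model answer
    submission[doc["id"]] = answer

with open("mmmu_test_for_submission.json", "w") as f:
    json.dump(submission, f, indent=2)
```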

Another question: mmmu_test.yaml does not define lmms_eval_specific_kwargs, which leads to the following error:

> lmms_eval/tasks/mmmu/utils.py", line 145, in mmmu_doc_to_text
>     question = construct_prompt(doc, lmms_eval_specific_kwargs["multiple_choice_prompt"], lmms_eval_specific_kwargs["open_ended_prompt"])
> TypeError: 'NoneType' object is not subscriptable
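
A defensive fallback in mmmu_doc_to_text would at least avoid the crash when the YAML block is missing. A minimal sketch, not the repo's actual code; the empty-string defaults are my assumption:

```python
def mmmu_doc_to_text(doc, lmms_eval_specific_kwargs=None):
    # Fall back to an empty dict so a missing YAML block doesn't raise
    # TypeError: 'NoneType' object is not subscriptable.
    kwargs = lmms_eval_specific_kwargs or {}
    return construct_prompt(
        doc,
        kwargs.get("multiple_choice_prompt", ""),  # assumed default
        kwargs.get("open_ended_prompt", ""),       # assumed default
    )
```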

So, should I add my own lmms_eval_specific_kwargs to the YAML config?
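If so, I'd expect the block to look roughly like this. The key names come from the traceback above; the nesting under default and the prompt strings are copied from what other MMMU task configs (e.g. mmmu_val) appear to use, so treat them as assumptions:

```yaml
# Assumed structure, mirroring other MMMU task YAMLs in the repo.
lmms_eval_specific_kwargs:
  default:
    multiple_choice_prompt: "Answer with the option's letter from the given choices directly."
    open_ended_prompt: "Answer the question using a single word or phrase."
```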
