Query on Leaderboard Results and Reproducibility in OSWorld #380

@Yeeesir

Description

Dear OSWorld Maintainers,

I hope this message finds you well. I have a few inquiries regarding the reproducibility of the model scores presented on the OSWorld leaderboard.

  1. Official Verification of Leaderboard Scores:
    Are all the model performance scores on the leaderboard officially verified by the OSWorld team through direct reproduction?

  2. Reproduction Attempt Using Provided Script:
    I attempted to reproduce the scores for UI-TARS-250705 (41.8%) and doubao-1-5-thinking-vision-pro-250717 (40.0%) using the provided run_multienv_uitars15_v1.py script.
    However, I encountered bugs during execution, and even after resolving them I was still unable to match the scores reported on the leaderboard.

  3. Official Reproduction Methodology:
    Could you confirm whether the OSWorld team can reproduce these specific model scores?
    If so, are the official reproduction results obtained using the publicly released test scripts, or are there additional configurations or procedures involved?
    Your clarification on these points would be greatly appreciated; it would give the community more transparency and confidence in the reported benchmarks.

Thank you very much for your time and assistance.

Best regards,
