
Conversation

@arturnn (Contributor) commented Nov 2, 2023

This PR fixes multi-GPU inference, as reported in #177.
While trying to fix it, I found some more errors related to the handling of error-span metadata (with XCOMET models).

I tried out the fixes with the XCOMET-XL and wmt22-comet-da models, but they could use some more testing (the code could also be improved somewhat, but I haven't mastered the COMET codebase yet ;) )

@ricardorei ricardorei merged commit cb353f5 into Unbabel:master Jan 8, 2024
@ricardorei (Contributor)

Hi @arturnn, sorry for taking so long to get to this; I was out in December.

I just merged your code and published a new version (2.2.1). I tested your code with both Unbabel/wmt22-comet-da and Unbabel/XCOMET-XL. With both models everything seems to be working well.

I used the following files:
src.txt
ref.txt
mt.txt

The output and scores using 1 GPU or 2 GPUs are exactly the same, so everything seems correct:

comet-score -s src.txt -t mt.txt -r ref.txt --model Unbabel/XCOMET-XL
comet-score -s src.txt -t mt.txt -r ref.txt --model Unbabel/XCOMET-XL --gpus 2

Output:
/mnt/data/ricardorei/external/tmp/mt.txt score: 0.8700
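For reference, the same check can be run through COMET's Python API instead of the comet-score CLI. This is a minimal sketch, not code from the PR: it assumes the `unbabel-comet` package is installed, and the model name and file paths simply mirror the commands above. The `load_triplets` helper is a hypothetical name for building the list of dicts that `model.predict` expects.

```python
def load_triplets(src_path, mt_path, ref_path):
    """Zip parallel src/mt/ref text files into the list of
    {"src": ..., "mt": ..., "ref": ...} dicts that model.predict consumes."""
    with open(src_path, encoding="utf-8") as s, \
         open(mt_path, encoding="utf-8") as m, \
         open(ref_path, encoding="utf-8") as r:
        return [
            {"src": src.strip(), "mt": mt.strip(), "ref": ref.strip()}
            for src, mt, ref in zip(s, m, r)
        ]


def score_with_xcomet(src_path, mt_path, ref_path, gpus=2):
    """Sketch of the multi-GPU scoring path this PR fixes; assumes the
    unbabel-comet package (the checkpoint is downloaded on first use)."""
    from comet import download_model, load_from_checkpoint

    model = load_from_checkpoint(download_model("Unbabel/XCOMET-XL"))
    data = load_triplets(src_path, mt_path, ref_path)
    # gpus=2 exercises the multi-GPU inference path; the resulting
    # system_score should match a gpus=1 run.
    output = model.predict(data, batch_size=8, gpus=gpus)
    return output.system_score
```

Comparing `score_with_xcomet(..., gpus=1)` against `gpus=2` reproduces the CLI comparison above.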

