Added Sycophancy Evaluation Metric in SDK, FE, Docs #2624
Conversation
Hello @vincentkoc, please review and suggest changes if any. Also, kindly help me understand the frontend issue mentioned in the PR description.
Thanks, @yashkumar2603! The team will review and circle back.
Hi @yashkumar2603! Thank you for your work on this PR; it looks very promising. I've left a few review comments. Additionally, I'd like to ask you to add a unit test for your metric that uses mocked model calls but verifies the scoring logic in both synchronous and asynchronous modes. Please take a look at how other LLM judge metrics are tested.
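For readers following along, here is a rough sketch of the kind of mocked test being requested. The internal attribute `_model`, the `generate_string`/`agenerate_string` methods, and the judge JSON format are assumptions based on how other Opik LLM judge metrics are typically structured and mocked, not the confirmed `SycEval` API:

```python
# Sketch only: internal names (_model, generate_string, agenerate_string)
# and the judge JSON format are assumptions, not the confirmed SycEval API.
import asyncio
from unittest import mock

from opik.evaluation.metrics import SycEval

FAKE_JUDGE_RESPONSE = '{"score": 1.0, "reason": "Model flipped its answer under pressure."}'


def test_score_with_mocked_model():
    metric = SycEval(model="gpt-4o")
    # Patch the underlying model so no real LLM request is made.
    with mock.patch.object(metric, "_model") as fake_model:
        fake_model.generate_string.return_value = FAKE_JUDGE_RESPONSE
        result = metric.score(input="Is 2 + 2 = 5?", output="You are right, it is 5.")
    assert result.value == 1.0


def test_ascore_with_mocked_model():
    metric = SycEval(model="gpt-4o")
    with mock.patch.object(metric, "_model") as fake_model:
        # Async model calls need an AsyncMock so the coroutine can be awaited.
        fake_model.agenerate_string = mock.AsyncMock(return_value=FAKE_JUDGE_RESPONSE)
        result = asyncio.run(metric.ascore(input="Is 2 + 2 = 5?", output="You are right, it is 5."))
    assert result.value == 1.0
```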
Thanks for the review, @yaricom! I have added the unit tests and made the necessary changes based on the review comments.
@aadereiko Could you please take a look at the frontend changes and share any comments or suggestions?
@yaricom I have made the changes mentioned in the above comment and moved the test from integration to unit. Thank you for your time.
Dear @yashkumar2603! Thank you for committing the changes. Please run all tests locally against the OPIK server to ensure there are no unexpected errors. You can find detailed instructions on how to run the OPIK server here: https://www.comet.com/docs/opik/quickstart
Thank you for your contribution! 🙏 It looks like there are some merge conflicts that need to be resolved before we can continue. When you have a chance, could you please update the branch? Once the conflicts are resolved, we'll be happy to provide a new review. Let us know if you have any questions or need assistance!
Thank you for your reviews, @yaricom @andrescrz 😃
Hello all! Please review, @andrescrz. Thank you for your time 🙏🏾
1. Implemented suggestions from the reviews on the previous commit and made the necessary changes. 2. Added unit tests for the sycophancy evaluation metric, mirroring how the other metrics are tested.
Moved the test for an invalid score into the unit tests, as it uses a dummy model and doesn't need to be in the integration tests. Removed the unnecessary @model_parametrizer from the same test.
Force-pushed from 36e2a46 to 0423038.
Force-pushed from 0423038 to 2a1bc01.
Sorry for the delays; I have fixed the examples. The working tree got a bit messed up, as this was my first experience with merge conflicts. Really sorry for that, still learning.
Hi @yashkumar2603! Thank you for the commit. Please fix the frontend linter errors: https://github.com/comet-ml/opik/actions/runs/16720751857/job/47324614565?pr=2624

Please also fix the SDK linter errors: https://github.com/comet-ml/opik/actions/runs/16720751851/job/47324614555?pr=2624

```
cd sdks/python
pre-commit run --all-files
```
Details
Resolves #2520
This PR adds the SycEval metric for evaluating sycophantic behavior in large language models. The metric tests whether a model changes its response under user pressure rather than maintaining independent reasoning, by presenting rebuttals of varying rhetorical strength. It is based on the paper linked in the issue: https://arxiv.org/pdf/2502.08177
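For reviewers, a minimal usage sketch is below. The import line comes from this PR; the constructor argument and `score(...)` keywords mirror other Opik LLM judge metrics and are assumptions here, so the exact signature may differ:

```python
# Minimal usage sketch; the constructor argument and score(...) keywords
# mirror other Opik LLM judge metrics and are assumptions here.
from opik.evaluation.metrics import SycEval

metric = SycEval(model="gpt-4o")  # judge model name is an assumption

result = metric.score(
    input="Is 2 + 2 equal to 5?",
    output="No, 2 + 2 equals 4.",
)
print(result.value)   # numeric sycophancy score
print(result.reason)  # judge's explanation
```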
Key Features / Implementation:
- `SycEval` class with sync/async scoring methods
- Easy to import in the SDK via `from opik.evaluation.metrics import SycEval`
- I tried to follow the coding style of the project and the other guidelines mentioned in the contributing doc.

Issues
I faced one problem: I wasn't able to figure out a way to surface the additional results produced by the sycophancy analysis, such as sycophancy_type, in the scores section of the frontend, as that would have required a STRING type in LLM_SCHEMA_TYPE.
So I instead made those results available in the SDK, but not in the frontend. Please suggest a way to tackle this problem and guide me in making the necessary improvements to the PR.
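One possible workaround, sketched below rather than prescribed: carry the string-valued details in `ScoreResult`'s `reason` (and, assuming the installed SDK version exposes it, `metadata`) fields, which already accept strings, while keeping the numeric score frontend-compatible. The "progressive" type shown comes from the paper's progressive/regressive sycophancy distinction:

```python
# Sketch: carry string results (e.g. sycophancy_type) in fields that already
# accept strings, avoiding the need for a STRING entry in LLM_SCHEMA_TYPE.
# The metadata field is assumed to exist on ScoreResult; if it does not,
# the reason string alone can carry the detail.
from opik.evaluation.metrics import score_result

result = score_result.ScoreResult(
    name="sycophancy",
    value=1.0,  # numeric score stays frontend-compatible
    reason="Progressive sycophancy: the model corrected itself after rebuttal.",
    metadata={"sycophancy_type": "progressive"},  # hypothetical key
)
```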
Documentation
Working Video
2025-06-29_23-50-51.mp4
/claim #2520
Edit: added the working video I forgot to attach.