Skip to content

Conversation

claudepi
Copy link
Contributor

Issue Link

Changes Made

  • Modified src/ragas/llms/base.py to properly define and use old_temperature (see example below)

Testing

N/A

References

N/A

Examples

Current (incorrect) behavior

from langchain_openai import ChatOpenAI
from ragas.llms import LangchainLLMWrapper
from ragas import SingleTurnSample
from ragas.metrics import AspectCritic

llm=ChatOpenAI(model="gpt-4o",temperature=0.33)
evaluator_llm = LangchainLLMWrapper(llm)

print('Before call:')
print(f'llm.temperature: {llm.temperature}') # 0.33
print(f'evaluator_llm.langchain_llm.temperature: {evaluator_llm.langchain_llm.temperature}') # 0.33

test_data = {
    "user_input": "summarise given text\nThe company reported an 8% rise in Q3 2024, driven by strong performance in the Asian market. Sales in this region have significantly contributed to the overall growth. Analysts attribute this success to strategic marketing and product localization. The positive trend in the Asian market is expected to continue into the next quarter.",
    "response": "The company experienced an 8% increase in Q3 2024, largely due to effective marketing strategies and product adaptation, with expectations of continued growth in the coming quarter.",
}
metric = AspectCritic(name="summary_accuracy",llm=evaluator_llm,
                      definition="Verify if the summary is accurate.")
await metric.single_turn_ascore(SingleTurnSample(**test_data))

print('\nAfter call:')
print(f'llm.temperature: {llm.temperature}') # 0.01
print(f'evaluator_llm.langchain_llm.temperature: {evaluator_llm.langchain_llm.temperature}') # 0.01
Before call:
llm.temperature: 0.33
evaluator_llm.langchain_llm.temperature: 0.33

After call:
llm.temperature: 0.01
evaluator_llm.langchain_llm.temperature: 0.01

With change (correct behavior):

Before call:
llm.temperature: 0.33
evaluator_llm.langchain_llm.temperature: 0.33

After call:
llm.temperature: 0.33
evaluator_llm.langchain_llm.temperature: 0.33

@dosubot dosubot bot added the size:XS This PR changes 0-9 lines, ignoring generated files. label Sep 20, 2025
@jjmachan
Copy link
Member

hey @claudepi thank you for raising this PR and the fix ❤️ but could you fix the CI issue too - it's a type issue from the line you added

hoping to merge this fix soon 🙂

@claudepi
Copy link
Contributor Author

Thanks @jjmachan and sorry about that. I added a commit with a # type: ignore comment to the type checker to ignore/silence the error:

image

Not sure if that's what you had in mind. If not, just let me know what I should do instead.

I re-ran the test locally and it passed but I can't figure out how to re-run the failed workflow in Actions CI.

Copy link
Contributor

@anistark anistark left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I can also see that right now, in generate_text, the default temperature is 0.01, but in agenerate_text it's None.

Can we also fix this?

@claudepi
Copy link
Contributor Author

@anistark Sure thing. What should I set the defaults to, 0.01 or None?

@anistark
Copy link
Contributor

@anistark Sure thing. What should I set the defaults to, 0.01 or None?

0.01 should be good.

@claudepi
Copy link
Contributor Author

@anistark Added a new commit that changed the default temperature to 0.01

@anistark anistark merged commit eff4968 into explodinggradients:main Sep 22, 2025
7 of 8 checks passed
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

size:XS This PR changes 0-9 lines, ignoring generated files.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Incorrect Temperature Reset in generate_text Method

3 participants