
Nit: Add think Field to ChatOllama for Controlling Thought Process in Responses #8481

@rliu6915

Description


Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain.js documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain.js rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

The following code:

import { ChatOllama } from "@langchain/ollama";
import { HumanMessage } from "@langchain/core/messages";

test("test deepseek model with think=false", async () => {
  const ollama = new ChatOllama({
    model: "deepseek-r1:32b",
    think: false, // Ensure the "think" field is explicitly set to false
    maxRetries: 1,
  });

  const res = await ollama.invoke([
    new HumanMessage({
      content: "Explain the process of photosynthesis briefly.",
    }),
  ]);

  // Ensure the response is defined
  expect(res).toBeDefined();
  expect(res.content).toBeDefined();

  const responseContent = res.content;

  // Validate that the response does not include any <think>...</think> blocks;
  // the s flag lets . match newline characters inside the block
  expect(responseContent).not.toMatch(/<think>.*?<\/think>/is);

  // Ensure the response is concise and directly answers the question
  expect(responseContent).toMatch(/photosynthesis/i); // Check it includes the topic
  expect(responseContent.length).toBeGreaterThan(1);
});
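Until ChatOllama forwards a think option, the reasoning block can be removed from the response text manually. This is a minimal workaround sketch, not LangChain code; `stripThink` is a hypothetical helper name, and it assumes the model emits its reasoning inside a single `<think>...</think>` wrapper as deepseek-r1 does:

```javascript
// Workaround sketch: strip <think>...</think> blocks from model output.
// `stripThink` is a hypothetical helper, not part of LangChain or Ollama.
function stripThink(content) {
  // ChatOllama content can be a non-string (e.g. complex message parts);
  // only plain strings are handled here.
  if (typeof content !== "string") return content;
  // The s flag lets . match newlines inside the reasoning block;
  // g removes every block if more than one appears.
  return content.replace(/<think>.*?<\/think>/gis, "").trim();
}

// Example: a deepseek-r1 style response with an embedded reasoning block.
const raw =
  "<think>\nThe user wants a short answer.\n</think>\n" +
  "Photosynthesis converts light into chemical energy.";
console.log(stripThink(raw));
// → "Photosynthesis converts light into chemical energy."
```

This only hides the chain-of-thought client-side; the tokens are still generated and billed against context, which is why forwarding think: false to the server is the preferable fix.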

Error Message and Stack Trace (if applicable)

Problem:

[screenshot omitted]

Full stack trace:

[screenshot omitted]

Description

  • I’m trying to use the LangChain.js Ollama integration to stream responses from Ollama (deepseek-r1:32b) with the think: false option so the model’s chain-of-thought is hidden.
  • I expect the content field not to contain the chain-of-thought wrapped in <think>…</think> tags, matching what Ollama itself returns when think is false. The Ollama JavaScript library supports disabling thinking by setting think to false in the Ollama.chat() method; the reference is here.
  • Instead, when think is false the wrapper still appears.
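The bullet above refers to the Ollama JavaScript client and its underlying /api/chat endpoint, which take think as a top-level request field. The sketch below only builds the request body rather than sending it, since the call needs a locally running Ollama server; field names follow the Ollama API, and the commented-out fetch call is illustrative:

```javascript
// Request body for POST /api/chat on a local Ollama server.
// The top-level "think" flag is the option this issue asks ChatOllama
// to expose and forward.
const chatRequest = {
  model: "deepseek-r1:32b",
  messages: [
    { role: "user", content: "Explain the process of photosynthesis briefly." },
  ],
  think: false, // suppress the <think>...</think> reasoning block
  stream: false,
};

// Sending it would look roughly like (requires a running Ollama server):
// const res = await fetch("http://localhost:11434/api/chat", {
//   method: "POST",
//   body: JSON.stringify(chatRequest),
// });

console.log(JSON.stringify(chatRequest.think)); // → false
```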

System Info

"yarn info langchain"

langchain: 0.3.29
langchain-core: 0.3.62

platform: macOS 15.5 (Apple Silicon)
Node version: v20.9.0
yarn version: 1.22.22

Metadata

Assignees

No one assigned

    Labels

    help wanted (This would make a good PR)
