Issue with Custom LLM Integration in Theia IDE #16209
Replies: 2 comments 1 reply
-
Hi, how is your model provider configured, and which server are you using? I've noticed that different combinations of providers/models/server implementations can produce different behaviors, even when using the same underlying API / Theia provider. Based on your log, the chat seems to end "normally": the last received token includes a `finish_reason`:

```json
{
  ...
  "logprobs": null,
  "finish_reason": "tool_calls",
  "matched_stop": 151645
}
```

(So it's actually the LLM terminating the session, rather than the Theia provider?) However, you mention:

What does this look like exactly? Is the chat UI still showing a spinner or a cancel-request button?
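For readers unfamiliar with streamed chat completions, the check described above can be sketched as follows. This is a minimal illustration with assumed chunk shapes (not Theia's actual provider types): each streamed chunk carries a `finish_reason` that stays `null` until the model itself ends the turn.

```typescript
// Minimal sketch (assumed shapes, not Theia's actual types): decide whether
// a streamed chat-completion chunk terminates the turn, and how.
interface StreamChoice {
  finish_reason: 'stop' | 'length' | 'tool_calls' | null;
}
interface StreamChunk {
  choices: StreamChoice[];
}

// Returns the finish reason of the first choice, or null while still streaming.
function finishReasonOf(chunk: StreamChunk): string | null {
  return chunk.choices[0]?.finish_reason ?? null;
}

// The shape seen in the log: the model ended the turn to request tool calls,
// so the client is now expected to execute them and reply.
const lastChunk: StreamChunk = { choices: [{ finish_reason: 'tool_calls' }] };
console.log(finishReasonOf(lastChunk)); // 'tool_calls'
```

A `finish_reason` of `"tool_calls"` is not an error: it means the model paused deliberately and is waiting for the client to run the requested tools and send their results back.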
-
Hi, unfortunately every server may have a slightly different implementation, even for a given API or model (e.g. the OpenAI API). Without knowing which server implementation is used in the backend, I'm unable to reproduce the issue. As far as I can tell from the logs, all prompts seem to end with a natural stop (
-
Good day,
I am currently exploring your IDE and its AI mechanisms. At the moment I am working on connecting custom LLMs (via the OpenAI API). I have managed to achieve quite a lot: autocompletion works and the chat is functional, but I am facing an issue with function calls from the model.
The scenario looks something like this: I write a request like "@coder get the contents of this file #editorContext". The model sends some text and a function call is triggered (in this example, getFileContent() is called). After that, the process gets stuck in infinite loading.
I tried investigating the model logs and checking what requests Theia sends and what responses it receives. I came to the conclusion that at some point Theia stops sending requests to my model. Here is what the chat log looks like (I write my queries in Russian, but that doesn't change the essence): https://pastebin.com/8esS6xJ6
I hope the link works for you. The "model" field is not related to the LLM I am using; I am actually working with Qwen.
I am not familiar with the protocols used for interaction between the client and the LLM, so I cannot figure out the cause of this error. I would be very grateful for any advice on how to fix it. Thank you very much in advance!
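For context on the protocol in question: in the OpenAI-style chat API, a `tool_calls` response is only the first half of a round trip. The client must execute the requested tools and send a follow-up request containing one `role: "tool"` message per call; if that follow-up never goes out, the chat hangs exactly as described. A hedged sketch (names like `getFileContent` and `runTool` are illustrative, not Theia internals):

```typescript
// Sketch of the OpenAI-style tool-call round trip. If the follow-up request
// built here is never sent, the UI waits forever ("infinite loading").
type Message =
  | { role: 'user' | 'assistant' | 'system'; content: string }
  | { role: 'tool'; tool_call_id: string; content: string };

interface ToolCall {
  id: string;
  function: { name: string; arguments: string };
}

// Placeholder for the real tool dispatcher (hypothetical).
function runTool(call: ToolCall): string {
  return `result of ${call.function.name}`;
}

// After the model answers with finish_reason "tool_calls", the client must
// append one role:"tool" message per requested call and send the extended
// history back to the server. Only then does the model produce its final text.
function appendToolResults(history: Message[], calls: ToolCall[]): Message[] {
  const toolMessages: Message[] = calls.map(c => ({
    role: 'tool',
    tool_call_id: c.id, // must echo the id the model assigned to the call
    content: runTool(c),
  }));
  return [...history, ...toolMessages];
}

const history: Message[] = [
  { role: 'user', content: '@coder get the contents of this file' },
];
const calls: ToolCall[] = [
  { id: 'call_1', function: { name: 'getFileContent', arguments: '{}' } },
];
const nextRequestMessages = appendToolResults(history, calls);
```

Under this reading, the log ending at `finish_reason: "tool_calls"` with no further request would mean the client-side half of this loop (execute tool, send results) is where things stall.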