I am building a custom IDE that makes use of Theia AI. It would be helpful if we could easily integrate a tool like LangFuse to gain some insight into how our prompts are performing, but I haven't found a simple way to do this. The LangFuse docs indicate that we need to modify the code close to the prompt execution (LangFuse tracing). I would prefer not to modify the Theia AI code if possible.

Does Theia AI provide a means to integrate an observability tool?

Replies: 2 comments
-
Hi @jwzaden, yes, you can integrate LangFuse. I had a quick look at the docs; it seems LangFuse supports two styles of integration: wrapping the LLM provider's SDK, and manual tracing.

If you want to go the first route and, for example, want to wrap the OpenAI SDK, then you need to rebind the OpenAiLanguageModelManager to your own version. In it you would create your own instances of the OpenAiModel which use the wrapped SDK (see the first sketch below).

For manual tracing, you need to decide on which level you want to trace. If you want to trace on a very low level, you could use an approach similar to the first one, just instead of using the wrapped SDK you add the additional tracing calls yourself. Alternatively, you could go a level higher and integrate tracing in a custom version of the LanguageModelService. All requests and answers of LLMs pass through sendRequest, which is therefore in a good position for such tracing (see the second sketch below). In fact, our own AI history view collects its data there. However, this is one level above the raw SDK access, so it might not be specific enough for your requirements.

Edit: For completeness' sake, there might be some webpack black magic you could do to replace the OpenAI SDK module at bundle time with a pre-wrapped version.
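To make the first route more concrete, here is a rough, untested sketch of the rebinding. The `observeOpenAI` wrapper is from the LangFuse JS SDK; the Theia import path, the `OpenAiLanguageModelManagerImpl` base class, and the `createOpenAiClient` hook are assumptions on my side, so check the `@theia/ai-openai` sources for the real names:

```typescript
// Sketch of the "wrap the SDK" route: swap Theia's OpenAI manager for one
// that hands out a LangFuse-wrapped client. Import path, base class name,
// and the createOpenAiClient hook are ASSUMPTIONS — verify against the
// @theia/ai-openai sources.
import { ContainerModule, injectable } from '@theia/core/shared/inversify';
import OpenAI from 'openai';
import { observeOpenAI } from 'langfuse'; // LangFuse's OpenAI SDK wrapper

// Assumed exports; the actual module layout may differ.
import {
    OpenAiLanguageModelManager,
    OpenAiLanguageModelManagerImpl
} from '@theia/ai-openai/lib/node/openai-language-models-manager';

@injectable()
export class LangfuseOpenAiLanguageModelManager extends OpenAiLanguageModelManagerImpl {
    // Hypothetical hook: wherever the stock manager constructs the OpenAI
    // client for an OpenAiModel, return the LangFuse-wrapped client instead,
    // so every completion call is traced automatically.
    protected createOpenAiClient(apiKey: string): OpenAI {
        return observeOpenAI(new OpenAI({ apiKey }), {
            generationName: 'theia-ai' // label shown for generations in LangFuse
        });
    }
}

export default new ContainerModule((bind, unbind, isBound, rebind) => {
    // Replace Theia's manager with the tracing variant.
    rebind(OpenAiLanguageModelManager)
        .to(LangfuseOpenAiLanguageModelManager)
        .inSingletonScope();
});
```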
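And a similar sketch for the sendRequest route. The `Langfuse` client calls (`generation`, `end`, `flushAsync`) are real LangFuse JS SDK calls and pick up `LANGFUSE_PUBLIC_KEY` / `LANGFUSE_SECRET_KEY` / `LANGFUSE_BASEURL` from the environment; the `LanguageModelServiceImpl` base class and the exact request/response shapes are assumptions based on the `@theia/ai-core` interfaces:

```typescript
// Sketch of the higher-level route: decorate sendRequest in a custom
// LanguageModelService so every LLM round trip becomes a LangFuse
// generation. Base class name and request/response shapes are
// ASSUMPTIONS — the @theia/ai-core sources are the authority.
import { ContainerModule, injectable } from '@theia/core/shared/inversify';
import { Langfuse } from 'langfuse';

// Assumed exports of @theia/ai-core.
import {
    LanguageModel,
    LanguageModelRequest,
    LanguageModelResponse,
    LanguageModelService,
    LanguageModelServiceImpl
} from '@theia/ai-core';

@injectable()
export class TracingLanguageModelService extends LanguageModelServiceImpl {
    protected readonly langfuse = new Langfuse(); // configured via LANGFUSE_* env vars

    override async sendRequest(
        model: LanguageModel,
        request: LanguageModelRequest
    ): Promise<LanguageModelResponse> {
        // One generation per LLM request; LangFuse accepts arbitrary JSON
        // for input/output, so we pass through whatever Theia gives us.
        const generation = this.langfuse.generation({
            name: model.id,
            model: model.id,
            input: request.messages
        });
        try {
            const response = await super.sendRequest(model, request);
            generation.end({ output: response });
            return response;
        } catch (error) {
            generation.end({ level: 'ERROR', statusMessage: String(error) });
            throw error;
        } finally {
            void this.langfuse.flushAsync(); // send events promptly
        }
    }
}

export default new ContainerModule((bind, unbind, isBound, rebind) => {
    rebind(LanguageModelService).to(TracingLanguageModelService).inSingletonScope();
});
```

One caveat: at this level a streamed response passes through as a stream, so you may need to buffer it before handing it to LangFuse if you want the full output recorded.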
-
Thank you for the detailed response! This is quite helpful.