When attempting to use a local LLM service via Ollama, requests fail with TypeError: fetch failed.
This occurs because Node.js's built-in fetch (backed by undici) imposes default timeouts that are too restrictive for large language models, which can take significant time to begin responding.
Error:
Error: Error invoking remote method 'llm-chat': TypeError: fetch failed
Potential solution:

```js
import { Agent } from 'undici';

// Disable undici's default header/body timeouts for long-running LLM requests
const llmAgent = new Agent({
  headersTimeout: 0, // no timeout waiting for response headers
  bodyTimeout: 0     // no timeout waiting for the response body
});

// Applied to all fetch requests
const response = await fetch(url, {
  method: 'POST',
  headers,
  body: JSON.stringify(data),
  dispatcher: llmAgent // use the custom agent
});
```

This allows large models to take as long as they need to respond without hitting undici's default timeout limits, enabling reliable local LLM functionality with Ollama.
Hardware:
- M1 Mac, 8 GB RAM