Think of it like a control panel where you can:
- Store your API keys and settings for AI services
- Share these settings with other Obsidian plugins
- Avoid entering the same AI settings multiple times
The plugin itself doesn't do any AI processing; it just helps other plugins connect to AI services more easily.

Supported providers:
- Ollama
- OpenAI
- OpenAI compatible API
- OpenRouter
- Google Gemini
- LM Studio
- Groq
Features:
- Fully encapsulated API for working with AI providers (see the sketch after this list)
- Develop AI plugins faster without dealing directly with provider-specific APIs
- Easily extend support for additional AI providers in your plugin
- Available in 4 languages: English, Chinese, German, and Russian (more languages coming soon)
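To give a feel for the encapsulated API, here is a minimal sketch of an Obsidian command that asks whichever provider the user has configured. Obtaining the shared `aiProviders` instance and a `provider` is done through the SDK (see the docs linked below); the `getAIProviders` helper here is a hypothetical placeholder, and the `execute` call shape is taken from the Quick reference later in this README:

```ts
import { Notice, Plugin } from "obsidian";

export default class ExamplePlugin extends Plugin {
	async onload() {
		this.addCommand({
			id: "ai-providers-hello",
			name: "Ask the configured AI provider",
			callback: async () => {
				// Hypothetical placeholder: the real handshake is described
				// in the AI Providers SDK docs.
				const { aiProviders, provider } = await this.getAIProviders();
				const answer = await aiProviders.execute({
					provider,
					prompt: "Say hello in one sentence.",
				});
				new Notice(answer);
			},
		});
	}

	private async getAIProviders(): Promise<{ aiProviders: any; provider: any }> {
		throw new Error("Wire this up via the AI Providers SDK.");
	}
}
```

Because the provider specifics live behind `execute`, the same command works whether the user configured Ollama, OpenAI, or any other supported provider.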
This plugin is available in the Obsidian community plugin store: https://obsidian.md/plugins?id=ai-providers
You can install this plugin via BRAT: pfrankov/obsidian-ai-providers
- Install Ollama.
- Install Gemma 2 `ollama pull gemma2` or any preferred model from the library.
- Select `Ollama` in `Provider type`
- Click refresh button and select the model that suits your needs (e.g. `gemma2`). If no models show up, see the connectivity check below.
Additional: if you have issues with streaming completion with Ollama, try setting the environment variable `OLLAMA_ORIGINS` to `*`:
- For macOS, run `launchctl setenv OLLAMA_ORIGINS "*"`.
- For Linux and Windows, check the docs.
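If the refresh button still shows nothing, it can help to confirm that the Ollama server itself is reachable. Below is a minimal sketch, assuming Ollama runs on its default port 11434; it calls Ollama's `/api/tags` REST endpoint, which lists locally installed models:

```ts
// Verify the local Ollama server by listing installed models via /api/tags.
async function listOllamaModels(baseUrl = "http://localhost:11434"): Promise<string[]> {
	const res = await fetch(`${baseUrl}/api/tags`);
	if (!res.ok) {
		throw new Error(`Ollama responded with HTTP ${res.status}`);
	}
	const data = (await res.json()) as { models: { name: string }[] };
	return data.models.map((m) => m.name); // e.g. ["gemma2:latest"]
}

listOllamaModels().then((models) => console.log("Ollama models:", models));
```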
- Select `OpenAI` in `Provider type`
- Set `Provider URL` to `https://api.openai.com/v1`
- Retrieve and paste your `API key` from the API keys page
- Click refresh button and select the model that suits your needs (e.g. `gpt-4o`)
There are several options for running a local OpenAI-compatible server (a quick connectivity check follows this list):
- Open WebUI
- llama.cpp
- llama-cpp-python
- LocalAI
- Oobabooga Text generation web UI
- LM Studio
- ...maybe more
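Every endpoint in this section, as well as the OpenAI, OpenRouter, Gemini, LM Studio, and Groq URLs configured in this README, speaks the OpenAI-style REST API, so you can sanity-check a `Provider URL` and `API key` before entering them into the plugin by listing models with `GET {base}/models`. A minimal sketch; the base URLs and key below are placeholders:

```ts
// List models from any OpenAI-compatible endpoint to verify URL and key.
async function listModels(baseUrl: string, apiKey?: string): Promise<string[]> {
	const res = await fetch(`${baseUrl}/models`, {
		headers: apiKey ? { Authorization: `Bearer ${apiKey}` } : {},
	});
	if (!res.ok) {
		throw new Error(`HTTP ${res.status} from ${baseUrl}`);
	}
	// OpenAI-style responses wrap the list as { data: [{ id, ... }] }.
	const data = (await res.json()) as { data: { id: string }[] };
	return data.data.map((m) => m.id);
}

// A local server such as LM Studio usually needs no key...
listModels("http://localhost:1234/v1").then(console.log);
// ...while hosted providers require one.
// listModels("https://api.openai.com/v1", "sk-...").then(console.log);
```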
- Select `OpenRouter` in `Provider type`
- Set `Provider URL` to `https://openrouter.ai/api/v1`
- Retrieve and paste your `API key` from the API keys page
- Click refresh button and select the model that suits your needs (e.g. `anthropic/claude-3.7-sonnet`)
- Select `Google Gemini` in `Provider type`
- Set `Provider URL` to `https://generativelanguage.googleapis.com/v1beta/openai`
- Retrieve and paste your `API key` from the API keys page
- Click refresh button and select the model that suits your needs (e.g. `gemini-1.5-flash`)
- Select `LM Studio` in `Provider type`
- Set `Provider URL` to `http://localhost:1234/v1`
- Click refresh button and select the model that suits your needs (e.g. `gemma2`)
- Select `Groq` in `Provider type`
- Set `Provider URL` to `https://api.groq.com/openai/v1`
- Retrieve and paste your `API key` from the API keys page
- Click refresh button and select the model that suits your needs (e.g. `llama3-70b-8192`)
Docs: How to integrate AI Providers in your plugin.
Quick reference (details in SDK docs):
```ts
try {
	const finalText = await aiProviders.execute({
		provider,
		prompt: "Hello",
		onProgress: (chunk, full) => {/* stream UI update */},
		abortController
	});
	// use finalText
} catch (e) {
	// handle error / abort
}
```

Removed callbacks: onEnd / onError; promise resolve/reject covers them (only onProgress remains for streaming). The legacy chainable handler is also deprecated.
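In particular, cancellation flows through the standard `AbortController` passed to `execute`: aborting it rejects the promise, so user aborts and provider errors share one code path. A sketch under that assumption, with `aiProviders` and `provider` already in scope and hypothetical stand-ins for the UI wiring:

```ts
const abortController = new AbortController();

// Hypothetical stand-ins for real UI wiring.
const stopButton = document.createElement("button");
const renderPartial = (text: string) => console.log(text);

// Let the user cancel mid-stream.
stopButton.onclick = () => abortController.abort();

try {
	const finalText = await aiProviders.execute({
		provider,
		prompt: "Summarize my note",
		onProgress: (_chunk: string, full: string) => renderPartial(full),
		abortController,
	});
	renderPartial(finalText);
} catch (e) {
	// Provider errors and user aborts both land here as a rejected promise.
}
```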
Roadmap:
- Docs for devs
- Ollama context optimizations
- German translations
- Chinese translations
- Update to latest OpenAI version and embedding models
- Russian translations
- Groq Provider support
- Passing messages instead of one prompt
- Anthropic Provider support
- Shared embeddings to avoid re-embedding the same documents multiple times
- Spanish, Italian, French, Dutch, Portuguese, Japanese, Korean translations
- Encapsulated basic RAG search with optional BM25 search
My other plugins:
- Local GPT that assists with local AI for maximum privacy and offline access.
- Colored Tags that colorizes tags in distinguishable colors.