
Conversation

@jacoblee93
Contributor

Adds Ollama as an LLM. Ollama runs various open-source models locally (e.g. Llama 2 and Vicuna), automatically configuring and GPU-optimizing them.

@rlancemartin @hwchase17
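For context on what the integration wraps: Ollama exposes a local REST endpoint (`POST /api/generate`, on `http://localhost:11434` by default) that takes a model name and a prompt and streams back JSON lines. The sketch below only builds the request body; the endpoint path and field names match Ollama's public API, but the helper function itself is hypothetical, not part of this PR.

```python
import json

# Ollama's documented default local endpoint.
OLLAMA_BASE_URL = "http://localhost:11434"

def build_generate_payload(model: str, prompt: str, **options) -> dict:
    """Hypothetical helper: build the JSON body for POST /api/generate.

    Extra keyword arguments (e.g. temperature) are passed through as model
    options, mirroring how a wrapper would forward call parameters.
    """
    payload = {"model": model, "prompt": prompt}
    if options:
        payload["options"] = options
    return payload

payload = build_generate_payload("llama2", "Why is the sky blue?", temperature=0.8)
print(json.dumps(payload))
```

A wrapper class like the one in this PR would POST this payload to `OLLAMA_BASE_URL + "/api/generate"` and concatenate the streamed response chunks.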

@vercel

vercel bot commented Aug 6, 2023

langchain: ✅ Ready (preview deployed) — updated Aug 8, 2023 4:19am (UTC)

@rlancemartin
Contributor

Looks great; looking forward to testing later today / this evening.

class _OllamaCommon(BaseLanguageModel):
Collaborator

don't think we need to split out _OllamaCommon if there's only one child class, unless we're planning on adding a ChatOllama soon?

Contributor Author

Yeah, I figured it was a possibility; I can recombine them, though.

Contributor Author

Let's leave it — actually there's no point in recombining, and the potential for a chat model is there.

@rlancemartin
Contributor

LGTM! Just cleaned up the notebook. I see ~50 tokens/s (Mac M2 Max, 32 GB, with Llama-13b).

@rlancemartin
Contributor

ollama.mov


4 participants