
Getting started


Chances are, you already have a supported backend. Either KoboldCpp or an OpenAI API compatible LLM server (which means most of them) will* work. Select your preference in llm_config.yaml, then configure the backend in one of the backend_*.yaml files. Among the OpenAI-compatible options, backend_llama_cpp.yaml is probably the best fit if you're running locally, while backend_openai.yaml is for the actual OpenAI service.

*No guarantees.
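
For orientation, the selection might look roughly like the sketch below. The key names (BACKEND, URL) are assumptions for illustration, not verified against the shipped files, so treat the actual yaml files in the repo as the source of truth.

```yaml
# llm_config.yaml -- select which backend config to use
# (key name is an assumption; check the file in the repo)
BACKEND: kobold_cpp        # or e.g. llama_cpp / openai

# backend_kobold_cpp.yaml -- point LlamaTale at your running server
# (again illustrative; the shipped file defines the real keys)
URL: "http://localhost:5001"
```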

  • Python required: 3.8 at minimum (possibly 3.10, depending on your setup).
  • Download the repo, either with git clone git@github.com:neph1/LlamaTale.git or as a zip. The master branch should be stable.
  • Run pip install -r requirements.txt
  • Start your backend, KoboldCpp or an OpenAI-compatible server (port 5001 by default; change it in llm_config.yaml).
  • Start the demo with python -m stories.prancingllama.story (the full sequence is sketched after this list).
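
Putting the steps together, a local first run might look like the shell session below. The KoboldCpp invocation and the model path are placeholders; adapt them to however you normally launch your backend.

```sh
# 1. Get the code; master should be stable
git clone git@github.com:neph1/LlamaTale.git
cd LlamaTale

# 2. Install dependencies
pip install -r requirements.txt

# 3. In another terminal, start your backend on port 5001
#    (illustrative KoboldCpp launch; model path is a placeholder)
python koboldcpp.py --model /path/to/model.gguf --port 5001

# 4. Back in the LlamaTale directory, start the demo story
python -m stories.prancingllama.story
```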

