📦 Run Ollama large language models (LLMs) with GitHub Actions.
```yml
# .github/workflows/ollama.yml
on: push

jobs:
  ollama:
    runs-on: ubuntu-latest
    steps:
      - name: Run model
        uses: ai-action/ollama-action@v1
        id: model
        with:
          model: llama3.2
          prompt: Explain the basics of machine learning.

      - name: Print response
        run: echo "$response"
        env:
          response: ${{ steps.model.outputs.response }}
```
Run a prompt against a model:

```yml
- uses: ai-action/ollama-action@v1
  id: explanation
  with:
    model: tinyllama
    prompt: "What's a large language model?"

- run: echo "$response"
  env:
    response: ${{ steps.explanation.outputs.response }}
```
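The response can also be surfaced outside the step logs — for example, appended to the workflow run's summary page via GitHub's built-in `GITHUB_STEP_SUMMARY` environment file (a sketch reusing the `explanation` step id from the example above):

```yml
# Publish the model's answer on the run's summary page.
- name: Add response to job summary
  run: echo "$response" >> "$GITHUB_STEP_SUMMARY"
  env:
    response: ${{ steps.explanation.outputs.response }}
```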
See `action.yml`.
`model` (required): The language model to use.

```yml
- uses: ai-action/ollama-action@v1
  with:
    model: llama3.2
```
`prompt` (required): The input prompt to generate text from.

```yml
- uses: ai-action/ollama-action@v1
  with:
    prompt: Tell me a joke.
```
To set a multiline prompt:

```yml
- uses: ai-action/ollama-action@v1
  with:
    prompt: |
      Tell me
      a joke.
```
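A multiline prompt can also interpolate workflow context. For example, a sketch (the summarization prompt itself is illustrative) that asks the model about the commit that triggered a `push` run:

```yml
- uses: ai-action/ollama-action@v1
  with:
    model: llama3.2
    prompt: |
      Summarize the following commit message in one sentence:
      ${{ github.event.head_commit.message }}
```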
`version` (optional): The Ollama version to use. See all available versions.

```yml
- uses: ai-action/ollama-action@v1
  with:
    version: 0.9.5
```
`cache` (optional): Whether to cache the model. Defaults to `true`.

```yml
- uses: ai-action/ollama-action@v1
  with:
    cache: true
```
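Caching can be turned off when a fresh model download is preferred on every run — a sketch (the `model` and `prompt` values are illustrative):

```yml
- uses: ai-action/ollama-action@v1
  with:
    model: llama3.2
    prompt: Tell me a joke.
    cache: false
```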
`response` (output): The generated response message.

```yml
- uses: ai-action/ollama-action@v1
  id: answer
  with:
    model: llama3.2
    prompt: What's 1+1?

- run: echo "$response"
  env:
    response: ${{ steps.answer.outputs.response }}
```
> [!NOTE]
> The environment variable is wrapped in double quotes to preserve newlines in the response.
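The same quoting applies when redirecting the response elsewhere — for example, saving it to a file for later steps (a sketch reusing the `answer` step id from the example above):

```yml
# Write the full multiline response to a file; the quotes keep line breaks intact.
- run: printf '%s\n' "$response" > response.txt
  env:
    response: ${{ steps.answer.outputs.response }}
```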