Note that the local LLM functionality requires separate installations and will not work out of the box because it is hardware dependent. Be sure to check the [notebooks](notebooks/local_llm/) for more details.
The package is split into subpackages, so you can install only the parts you need.
### Base
1. Using AOAI requires Entra ID authentication. See https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/managed-identity for how to set this up for your AOAI deployment.
* Requires the correct role assigned to your user account and being signed into the Azure CLI.
1. (Optional) Set the `AZURE_OPENAI_ENDPOINT` environment variable.
1. Set up GitHub Models
1. Get a Personal Access Token from https://github.com/settings/tokens and set the `GITHUB_TOKEN` environment variable. The token does not need any permissions.
1. Check the [GitHub Marketplace](https://github.com/marketplace/models) to see which models are available.
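Once `GITHUB_TOKEN` is set, the steps above can be sketched as follows. This is a minimal standard-library example, not this package's API: the endpoint URL and model name are assumptions to verify against the GitHub Models documentation, and the live request is gated behind an illustrative opt-in variable so the script is safe to run without a token.

```python
import json
import os
import urllib.request

# Assumed endpoint and model name for GitHub Models -- check the
# GitHub Models docs for the current values before relying on them.
ENDPOINT = "https://models.inference.ai.azure.com/chat/completions"


def build_chat_payload(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion request body."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}


payload = build_chat_payload("gpt-4o-mini", "Say hello in one word.")
token = os.environ.get("GITHUB_TOKEN")

# RUN_LIVE_EXAMPLE is a hypothetical opt-in guard so this sketch only
# hits the network when you explicitly ask it to.
if token and os.environ.get("RUN_LIVE_EXAMPLE"):
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
        print(body["choices"][0]["message"]["content"])
```

The token needs no scopes, matching the step above; authorization is just the bearer header.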
### Local LLM
1. `pip install not_again_ai[llm,local_llm]`
1. Some HuggingFace transformers tokenizers are gated behind access requests. If you wish to use these, you will need to request access from HuggingFace on the model card.
* Then set the `HF_TOKEN` environment variable to your HuggingFace API token which can be found here: https://huggingface.co/settings/tokens
1. If you wish to use Ollama:
1. Follow the instructions at https://github.com/ollama/ollama to install Ollama for your system.
1. (Optional) [Add Ollama as a startup service (recommended)](https://github.com/ollama/ollama/blob/main/docs/linux.md#adding-ollama-as-a-startup-service-recommended)
1. (Optional) To make the Ollama service accessible on your local network from a Linux server, add the following to the `/etc/systemd/system/ollama.service` file which will make Ollama available at `http://<local_address>:11434`:
```
Environment="OLLAMA_HOST=0.0.0.0"
```
1. It is recommended to always have the latest version of Ollama. To update Ollama, check the [docs](https://github.com/ollama/ollama/blob/main/docs/). The command for Linux is: `curl -fsSL https://ollama.com/install.sh | sh`
1. HuggingFace transformers and other requirements are hardware dependent, so for providers other than Ollama this installs only some generic dependencies. Check the [notebooks](notebooks/local_llm/) for more details on what is available and how to install it.
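After completing the Ollama steps above, you can sanity-check the server with plain HTTP. This is a minimal standard-library sketch against Ollama's `/api/generate` endpoint, not this package's own client; the `llama3` model name is an assumption, so substitute whatever you pulled with `ollama pull`.

```python
import json
import urllib.error
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default listen address


def build_generate_payload(model: str, prompt: str) -> dict:
    """Request body for Ollama's /api/generate endpoint, streaming disabled."""
    return {"model": model, "prompt": prompt, "stream": False}


def generate(prompt: str, model: str = "llama3") -> str:
    # "llama3" is illustrative -- use any model you have pulled locally.
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=json.dumps(build_generate_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    try:
        print(generate("Why is the sky blue?"))
    except urllib.error.URLError:
        # Server not running (or not reachable at the default address).
        print("Ollama is not reachable; start it with `ollama serve`.")
```

If you exposed Ollama on your local network as described above, point `OLLAMA_URL` at `http://<local_address>:11434` instead.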
### Statistics
To install all dependencies (with all extra dependencies) into an isolated virtual environment:
```bash
$ poetry sync --all-extras
```
To [activate](https://python-poetry.org/docs/basic-usage#activating-the-virtual-environment) the
environments and run commands based on [`noxfile.py`](./noxfile.py) for unit testing, PEP 8 style
guide checking, type checking and documentation generation.
> Note: `nox` is installed into the virtual environment automatically by the `poetry sync`
> command above. Run `poetry shell` to activate the virtual environment.