NOTE: All server options are also available as environment variables. For example, `--model` can be set via the `MODEL` environment variable.

## Guides

### Code Completion

`llama-cpp-python` supports code completion via GitHub Copilot.

*NOTE*: Without GPU acceleration this is unlikely to be fast enough to be usable.

You'll first need to download one of the available code completion models in GGUF format:

- [replit-code-v1_5-GGUF](https://huggingface.co/abetlen/replit-code-v1_5-3b-GGUF)
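
For example, the model can be fetched with the `huggingface-cli` tool from the `huggingface_hub` package (the exact `.gguf` filename below is an assumption; check the model page for the available quantizations):

```bash
# Hypothetical quantization filename; list the files on the model page first.
huggingface-cli download abetlen/replit-code-v1_5-3b-GGUF \
  replit-code-v1_5-3b.Q4_0.gguf --local-dir ./models
```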
46+
Then you'll need to run the OpenAI-compatible web server with a substantially increased context size to accommodate GitHub Copilot requests:
48+
```bash
python3 -m llama_cpp.server --model <model_path> --n_ctx 16192
```
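
Once the server is running, a quick way to confirm it is reachable is to send a small request to its OpenAI-compatible completions endpoint (this assumes the server's default address of `localhost:8000`; adjust if you passed `--host` or `--port`):

```bash
# A JSON response containing generated text indicates the server is working.
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"prompt": "def fibonacci(n):", "max_tokens": 32}'
```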
52+
Then just update your settings in `.vscode/settings.json` to point to your code completion server:
54+
```json
{
  // ...
  "github.copilot.advanced": {
    "debug.testOverrideProxyUrl": "http://<host>:<port>",
    "debug.overrideProxyUrl": "http://<host>:<port>"
  }
  // ...
}
```
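
Replace `<host>` and `<port>` with the address of your running server (for the local setup above, `localhost` and `8000`); you may need to reload VS Code for the setting to take effect.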

### Function Calling

`llama-cpp-python` supports structured function calling based on a JSON schema.