@@ -7,8 +7,8 @@
 
 {{template "views/partials/navbar" .}}
 <div class="container mx-auto px-4 flex-grow">
-
 <div class="workers mt-12 text-center">
+
 <h2 class="text-3xl font-semibold text-gray-100 mb-8">
 <i class="fa-solid fa-circle-nodes"></i> Distributed inference with P2P
 <a href="https://localai.io/features/distribute/" target="_blank">
@@ -17,53 +17,20 @@ <h2 class="text-3xl font-semibold text-gray-100 mb-8">
 </h2>
 <h5 class="mb-4 text-justify">LocalAI uses P2P technologies to enable distribution of work between peers. It is possible to share an instance with Federation and/or split the weights of a model across peers (only available with llama.cpp models). You can now share computational resources between your devices or your friends!</h5>
 
-<!-- Tabs for Instructions -->
+<!-- Federation Box -->
 <div class="bg-gray-800 p-6 rounded-lg shadow-lg mb-12 text-left">
-<h3 class="text-2xl font-semibold text-gray-100 mb-6"><i class="fa-solid fa-book"></i> Start a new llama.cpp P2P worker</h3>
-<p class="mb-4">You can start llama.cpp workers to distribute weights between the workers and offload part of the computation. To start a new worker, you can use the CLI or Docker.</p>
-
-<!-- Tabs navigation -->
-<ul class="mb-5 flex list-none flex-row flex-wrap ps-0" role="tablist" data-twe-nav-ref>
-<li role="presentation" class="flex-auto text-center">
-<a href="#tabs-cli" class="tablink my-2 block border-0 bg-gray-800 px-7 pb-3.5 pt-4 text-xs font-medium uppercase leading-tight text-white hover:bg-gray-700 focus:bg-gray-700 data-[twe-nav-active]:border-yellow-500 data-[twe-nav-active]:text-yellow-500 data-[twe-nav-active]:bg-gray-700 active" data-twe-toggle="pill" data-twe-target="#tabs-cli" data-twe-nav-active role="tab" aria-controls="tabs-cli" aria-selected="true"><i class="fa-solid fa-terminal"></i> CLI</a>
-</li>
-<li role="presentation" class="flex-auto text-center">
-<a href="#tabs-docker" class="tablink my-2 block border-0 bg-gray-800 px-7 pb-3.5 pt-4 text-xs font-medium uppercase leading-tight text-white hover:bg-gray-700 focus:bg-gray-700 data-[twe-nav-active]:border-yellow-500 data-[twe-nav-active]:text-yellow-500 data-[twe-nav-active]:bg-gray-700" data-twe-toggle="pill" data-twe-target="#tabs-docker" role="tab" aria-controls="tabs-docker" aria-selected="false"><i class="fa-solid fa-box-open"></i> Container images</a>
-</li>
-</ul>
-
-<!-- Tabs content -->
-<div class="mb-6">
-<div class="tabcontent hidden opacity-100 transition-opacity duration-150 ease-linear data-[twe-tab-active]:block p-4" id="tabs-cli" role="tabpanel" aria-labelledby="tabs-cli" data-twe-tab-active>
-<p class="mb-2">To start a new worker, run the following command:</p>
-<code class="block bg-gray-700 text-yellow-300 p-4 rounded-lg break-words">
-export TOKEN="<span class="token">{{.P2PToken}}</span>"<br>
-local-ai worker p2p-llama-cpp-rpc
-</code>
 
-<p class="mt-2">For all the options available, please refer to the <a href="https://localai.io/features/distribute/#starting-workers" target="_blank" class="text-yellow-300 hover:text-yellow-400">documentation</a>.</p>
-</div>
-<div class="tabcontent hidden opacity-0 transition-opacity duration-150 ease-linear data-[twe-tab-active]:block p-4" id="tabs-docker" role="tabpanel" aria-labelledby="tabs-docker">
-<p class="mb-2">To start a new worker with docker, run the following command:</p>
-<code class="block bg-gray-700 text-yellow-300 p-4 rounded-lg break-words">
-docker run -ti --net host -e TOKEN="<span class="token">{{.P2PToken}}</span>" --name local-ai -p 8080:8080 localai/localai:latest-cpu worker p2p-llama-cpp-rpc
-</code>
+<p class="text-xl font-semibold text-gray-200"><i class="text-gray-200 fa-solid fa-circle-nodes"></i> Federated Nodes: <span hx-get="/p2p/ui/workers-federation-stats" hx-trigger="every 1s"></span></p>
+<p class="mb-4">You can start LocalAI in federated mode to share your instance, or start the federated server to balance requests between nodes of the federation.</p>
 
-<p class="mt-2">For all the options available and see what image to use, please refer to the <a href="https://localai.io/basics/container/" target="_blank" class="text-yellow-300 hover:text-yellow-400">Container images documentation</a> and <a href="https://localai.io/advanced/#cli-parameters" target="_blank" class="text-yellow-300 hover:text-yellow-400">CLI parameters documentation</a>.</p>
-</div>
+<div class="grid grid-cols-1 sm:grid-cols-2 md:grid-cols-3 gap-4 mb-12">
+<div hx-get="/p2p/ui/workers-federation" hx-trigger="every 1s"></div>
 </div>
-</div>
 
-<p class="text-xl font-semibold text-gray-200"><i class="text-gray-200 fa-solid fa-circle-nodes"></i> Workers (llama.cpp): <span hx-get="/p2p/ui/workers-stats" hx-trigger="every 1s"></span></p>
-<div class="grid grid-cols-1 sm:grid-cols-2 md:grid-cols-3 gap-4 mb-12">
-<div hx-get="/p2p/ui/workers" hx-trigger="every 1s"></div>
-</div>
-
-<hr class="border-gray-700 mb-12">
+<hr class="border-gray-700 mb-12">
 
-<div class="bg-gray-800 p-6 rounded-lg shadow-lg mb-12 text-left">
 <h3 class="text-2xl font-semibold text-gray-100 mb-6"><i class="fa-solid fa-book"></i> Start a federated instance</h3>
-<p class="mb-4">You can start LocalAI in federated mode to share your instance, or start the federated server to balance requests between nodes of the federation.</p>
+
 
 <!-- Tabs navigation -->
 <ul class="mb-5 flex list-none flex-row flex-wrap ps-0" role="tablist" data-twe-nav-ref>
@@ -77,14 +44,18 @@ <h3 class="text-2xl font-semibold text-gray-100 mb-6"><i class="fa-solid fa-book
 
 <!-- Tabs content -->
 <div class="mb-6">
+
 <div class="tabcontent hidden opacity-100 transition-opacity duration-150 ease-linear data-[twe-tab-active]:block p-4" id="tabs-federated-cli" role="tabpanel" aria-labelledby="tabs-federated-cli" data-twe-tab-active>
-<p class="mb-2">To start a new federated instance:</p>
+
+
+<p class="mb-2">To start a new instance to share:</p>
 <code class="block bg-gray-700 text-yellow-300 p-4 rounded-lg break-words">
+# Start a new instance to share with --federated and a TOKEN<br>
 export TOKEN="<span class="token">{{.P2PToken}}</span>"<br>
 local-ai run --federated --p2p
 </code>
 
-<p class="mt-2">Note: If you don't have a token do not specify it and use the generated one that you can find in this page afterwards.</p>
+<p class="mt-2">Note: if you don't have a token, do not specify one; use the generated token shown on this page.</p>
 
 <p class="mb-2">To start a new federated load balancer:</p>
 <code class="block bg-gray-700 text-yellow-300 p-4 rounded-lg break-words">
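For anyone trying the federation flow end to end: the balancer command itself falls in the context elided between these hunks, so the sketch below leans on the LocalAI distributed-inference docs for the `local-ai federated` subcommand and its default :8080 listener; treat both, and the placeholder model name, as assumptions rather than something this diff confirms.

# Minimal federation sketch: one node shares its instance, a balancer
# spreads incoming requests across the federation.
export TOKEN="<p2p token shown on this page>"   # same token on every node
local-ai run --federated --p2p                  # node sharing its instance
local-ai federated                              # balancer (assumed subcommand), usually on another machine
# Clients then talk to the balancer like any LocalAI endpoint:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "<model-name>", "messages": [{"role": "user", "content": "Hello"}]}'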
@@ -112,10 +83,52 @@ <h3 class="text-2xl font-semibold text-gray-100 mb-6"><i class="fa-solid fa-book
 </div>
 </div>
 
-<p class="text-xl font-semibold text-gray-200"><i class="text-gray-200 fa-solid fa-circle-nodes"></i> Federated Nodes: <span hx-get="/p2p/ui/workers-federation-stats" hx-trigger="every 1s"></span></p>
-<div class="grid grid-cols-1 sm:grid-cols-2 md:grid-cols-3 gap-4 mb-12">
-<div hx-get="/p2p/ui/workers-federation" hx-trigger="every 1s"></div>
+<!-- Llama.cpp Box -->
+
+<div class="bg-gray-800 p-6 rounded-lg shadow-lg mb-12 text-left">
+
+<p class="text-xl font-semibold text-gray-200"><i class="text-gray-200 fa-solid fa-circle-nodes"></i> Workers (llama.cpp): <span hx-get="/p2p/ui/workers-stats" hx-trigger="every 1s"></span></p>
+<p class="mb-4">You can start llama.cpp workers to split a model's weights across workers and offload part of the computation. To start a new worker, use the CLI or Docker.</p>
+
+<div class="grid grid-cols-1 sm:grid-cols-2 md:grid-cols-3 gap-4 mb-12">
+<div hx-get="/p2p/ui/workers" hx-trigger="every 1s"></div>
+</div>
+<hr class="border-gray-700 mb-12">
+
+<h3 class="text-2xl font-semibold text-gray-100 mb-6"><i class="fa-solid fa-book"></i> Start a new llama.cpp P2P worker</h3>
+
+<!-- Tabs navigation -->
+<ul class="mb-5 flex list-none flex-row flex-wrap ps-0" role="tablist" data-twe-nav-ref>
+<li role="presentation" class="flex-auto text-center">
+<a href="#tabs-cli" class="tablink my-2 block border-0 bg-gray-800 px-7 pb-3.5 pt-4 text-xs font-medium uppercase leading-tight text-white hover:bg-gray-700 focus:bg-gray-700 data-[twe-nav-active]:border-yellow-500 data-[twe-nav-active]:text-yellow-500 data-[twe-nav-active]:bg-gray-700 active" data-twe-toggle="pill" data-twe-target="#tabs-cli" data-twe-nav-active role="tab" aria-controls="tabs-cli" aria-selected="true"><i class="fa-solid fa-terminal"></i> CLI</a>
+</li>
+<li role="presentation" class="flex-auto text-center">
+<a href="#tabs-docker" class="tablink my-2 block border-0 bg-gray-800 px-7 pb-3.5 pt-4 text-xs font-medium uppercase leading-tight text-white hover:bg-gray-700 focus:bg-gray-700 data-[twe-nav-active]:border-yellow-500 data-[twe-nav-active]:text-yellow-500 data-[twe-nav-active]:bg-gray-700" data-twe-toggle="pill" data-twe-target="#tabs-docker" role="tab" aria-controls="tabs-docker" aria-selected="false"><i class="fa-solid fa-box-open"></i> Container images</a>
+</li>
+</ul>
+
+<!-- Tabs content -->
+<div class="mb-6">
+<div class="tabcontent hidden opacity-100 transition-opacity duration-150 ease-linear data-[twe-tab-active]:block p-4" id="tabs-cli" role="tabpanel" aria-labelledby="tabs-cli" data-twe-tab-active>
+<p class="mb-2">To start a new worker, run the following command:</p>
+<code class="block bg-gray-700 text-yellow-300 p-4 rounded-lg break-words">
+export TOKEN="<span class="token">{{.P2PToken}}</span>"<br>
+local-ai worker p2p-llama-cpp-rpc
+</code>
+
+<p class="mt-2">For all the options available, please refer to the <a href="https://localai.io/features/distribute/#starting-workers" target="_blank" class="text-yellow-300 hover:text-yellow-400">documentation</a>.</p>
+</div>
+<div class="tabcontent hidden opacity-0 transition-opacity duration-150 ease-linear data-[twe-tab-active]:block p-4" id="tabs-docker" role="tabpanel" aria-labelledby="tabs-docker">
+<p class="mb-2">To start a new worker with Docker, run the following command:</p>
+<code class="block bg-gray-700 text-yellow-300 p-4 rounded-lg break-words">
+docker run -ti --net host -e TOKEN="<span class="token">{{.P2PToken}}</span>" --name local-ai -p 8080:8080 localai/localai:latest-cpu worker p2p-llama-cpp-rpc
+</code>
+
+<p class="mt-2">For all available options and guidance on which image to use, please refer to the <a href="https://localai.io/basics/container/" target="_blank" class="text-yellow-300 hover:text-yellow-400">Container images documentation</a> and the <a href="https://localai.io/advanced/#cli-parameters" target="_blank" class="text-yellow-300 hover:text-yellow-400">CLI parameters documentation</a>.</p>
+</div>
+</div>
 </div>
+<!-- Llama.cpp Box END -->
 </div>
 </div>
 
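Similarly, for the worker box above, a minimal sketch of the full llama.cpp distributed setup. Pairing `local-ai run --p2p` on the main instance with workers that export the same token follows the LocalAI distributed-inference docs rather than this diff, and the token value is a placeholder.

# Worker sketch: start one or more RPC workers, then a main instance
# that discovers them via the shared token and splits model weights
# across them (llama.cpp models only).
export TOKEN="<p2p token shown on this page>"
local-ai worker p2p-llama-cpp-rpc   # on each worker machine (or via the Docker image above)
local-ai run --p2p                  # main instance, with the same TOKEN exported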