Summary: Please add/expose an option in the Settings section to skip the VRAM check that makes computation fall back from the (i)GPU to the CPU.
Context: I have an Acer laptop with an 8840HS CPU, a Radeon 780M iGPU, and 40 GB (8+32) of RAM. Unfortunately (for some stupid reason) the BIOS has no option to change the UMA buffer size for the iGPU, so I'm stuck with 512 MB of 'dedicated' VRAM even though there are also 20 GB of shared GPU memory, which should be enough for most of my tasks.
Unfortunately, when I try to load a model into iGPU VRAM, GPT4All says "GPU loading failed (out of vram?)". Presumably it only sees the 512 MB and switches to the CPU, even though there is plenty of free shared VRAM (20 GB) that performs the same as that 512 MB chunk.
Proposal: Could you please add/expose an option in the Settings section to skip the VRAM-to-RAM swap check, so that GPT4All can use shared VRAM?
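To make the ask concrete, here is a minimal sketch of the behaviour I'm requesting. All names here are hypothetical illustrations, not GPT4All's actual code: the point is just that an opt-in setting could let the load-time budget include shared UMA memory instead of only the dedicated slice.

```python
GiB = 1024 ** 3
MiB = 1024 ** 2

def can_load_on_gpu(model_bytes: int,
                    dedicated_vram: int,
                    shared_vram: int,
                    allow_shared: bool = False) -> bool:
    """Return True if the model fits in the GPU memory budget.

    With allow_shared=False, only dedicated VRAM counts -- the behaviour
    I suspect triggers the CPU fallback on UMA iGPUs like the 780M.
    """
    budget = dedicated_vram + (shared_vram if allow_shared else 0)
    return model_bytes <= budget

# My machine: 512 MiB dedicated, ~20 GiB shared; a ~4 GiB model.
print(can_load_on_gpu(4 * GiB, 512 * MiB, 20 * GiB))        # False -> CPU fallback
print(can_load_on_gpu(4 * GiB, 512 * MiB, 20 * GiB, True))  # True  -> loads on iGPU
```

With such a toggle enabled, the same model that currently fails the 512 MB check would be allowed onto the iGPU, at the user's own risk.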
If there is some kind of parameter/argument that GPT4All can be launched with to force the use of shared VRAM, that would be a great stop-gap solution for now!