Commit 037c38e

Try to improve inference speed on some machines.
1 parent 1e11d2d commit 037c38e

1 file changed, 2 insertions(+), 2 deletions(-)

comfy/model_management.py

Lines changed: 2 additions & 2 deletions
@@ -432,11 +432,11 @@ def load_models_gpu(models, memory_required=0, force_patch_weights=False, minimu
     global vram_state

     inference_memory = minimum_inference_memory()
-    extra_mem = max(inference_memory, memory_required)
+    extra_mem = max(inference_memory, memory_required) + 100 * 1024 * 1024
     if minimum_memory_required is None:
         minimum_memory_required = extra_mem
     else:
-        minimum_memory_required = max(inference_memory, minimum_memory_required)
+        minimum_memory_required = max(inference_memory, minimum_memory_required) + 100 * 1024 * 1024

     models = set(models)

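The change itself is small: both memory budgets are now padded with 100 * 1024 * 1024 bytes (100 MiB) of headroom, so models are loaded with a little spare VRAM instead of right at the estimated limit. The commit message only says this improves inference speed on some machines; a plausible reading is that filling the card exactly can push allocations into slow shared/system memory on some drivers. Below is a minimal standalone sketch of the patched logic; the function name compute_memory_budget and the EXTRA_RESERVED_MEM constant are illustrative assumptions, and only the max(...) + 100 MiB padding comes from the diff.

EXTRA_RESERVED_MEM = 100 * 1024 * 1024  # 100 MiB of extra headroom, in bytes

def compute_memory_budget(memory_required, inference_memory, minimum_memory_required=None):
    # Sketch of the budget logic in load_models_gpu after this commit
    # (standalone function and argument passing are assumptions).
    # Pad the larger of the two estimates so loading leaves spare VRAM.
    extra_mem = max(inference_memory, memory_required) + EXTRA_RESERVED_MEM
    if minimum_memory_required is None:
        minimum_memory_required = extra_mem
    else:
        minimum_memory_required = max(inference_memory, minimum_memory_required) + EXTRA_RESERVED_MEM
    return extra_mem, minimum_memory_required

# Example: a 6 GiB request with a 1 GiB inference floor now budgets
# 6 GiB + 100 MiB rather than exactly 6 GiB.
extra, minimum = compute_memory_budget(6 * 1024**3, 1 * 1024**3)
print(extra, minimum)  # 6547308544 6547308544

Note that when a caller passes minimum_memory_required explicitly, the padding is applied to it as well, so both thresholds grow by the same 100 MiB.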