
Conversation

deece commented May 26, 2024

This PR adds the sample from the documentation to the repo, adds a Gradio UI, and squashes a few bugs that were discovered along the way.

deece added 8 commits May 24, 2024 18:51
When load_4bit=True is passed to load_pretrained_model(), we get the
following error:
  File "LLaVA-NeXT/scripts/image/./gradio-ui.py", line 30, in load_model
    tokenizer, model, image_processor, max_length = load_pretrained_model(
                                                    ^^^^^^^^^^^^^^^^^^^^^^
  File "LLaVA-NeXT/llava/model/builder.py", line 175, in load_pretrained_model
    model = LlavaLlamaForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True, attn_implementation=attn_implementation, config=llava_cfg, **kwargs)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "LLaVA-NeXT/venv/lib/python3.12/site-packages/transformers/modeling_utils.py", line 2977, in from_pretrained
    raise ValueError(
ValueError: You can't pass `load_in_4bit`or `load_in_8bit` as a kwarg when passing `quantization_config` argument at the same time.

This commit fixes the error by dropping the `load_in_4bit` kwarg and relying on `quantization_config` alone.

Signed-off-by: Alastair D'Silva <[email protected]>
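
For reference, a minimal sketch of the pattern the fix leaves behind, assuming the standard `transformers` `BitsAndBytesConfig` API; the model path is illustrative and the exact kwargs in `builder.py` may differ:

```python
import torch
from transformers import BitsAndBytesConfig
from llava.model import LlavaLlamaForCausalLM

model_path = "lmms-lab/llama3-llava-next-8b"  # illustrative model path

kwargs = {
    "low_cpu_mem_usage": True,
    # All 4-bit settings live in quantization_config only; also setting
    # kwargs["load_in_4bit"] = True makes from_pretrained() raise the
    # ValueError quoted above.
    "quantization_config": BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.float16,
        bnb_4bit_use_double_quant=True,
        bnb_4bit_quant_type="nf4",
    ),
}

model = LlavaLlamaForCausalLM.from_pretrained(model_path, **kwargs)
```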
A second fix addresses the following error, hit when saving the quantized model:

Error: You are calling `save_pretrained` to a 4-bit converted model, but your `bitsandbytes` version doesn't support it. If you want to save 4-bit models, make sure to have `bitsandbytes>=0.41.3` installed.

Signed-off-by: Alastair D'Silva <[email protected]>
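
One hedged way to guard against this (the helper name is hypothetical; the actual commit may simply pin the dependency):

```python
from importlib.metadata import version
from packaging.version import Version

def save_4bit_checkpoint(model, out_dir: str) -> None:
    """Fail early with a clear message if the installed bitsandbytes is
    too old to serialize 4-bit weights, then save the model."""
    if Version(version("bitsandbytes")) < Version("0.41.3"):
        raise RuntimeError(
            "Saving 4-bit models needs bitsandbytes>=0.41.3; "
            "upgrade with: pip install -U bitsandbytes"
        )
    model.save_pretrained(out_dir)
```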
