
Error for inference #4

@alien-kai

Hi, thanks for your amazing work.

I have encountered some issues when running inference. Any guidance would be greatly appreciated.

Here are some errors:

First:
```
[2025-10-28 11:35:46,413] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect)
/home3/qslh82/miniconda3/envs/safeauto/lib/python3.10/site-packages/torchvision/transforms/_functional_video.py:6: UserWarning: The 'torchvision.transforms._functional_video' module is deprecated since 0.12 and will be removed in the future. Please use the 'torchvision.transforms.functional' module instead.
warnings.warn(
/home3/qslh82/miniconda3/envs/safeauto/lib/python3.10/site-packages/torchvision/transforms/_transforms_video.py:22: UserWarning: The 'torchvision.transforms._transforms_video' module is deprecated since 0.12 and will be removed in the future. Please use the 'torchvision.transforms' module instead.
warnings.warn(
/home3/qslh82/miniconda3/envs/safeauto/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:5: UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be removed in 0.17. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in torchvision.transforms.v2.functional.
warnings.warn(
Traceback (most recent call last):
  File "/home3/qslh82/miniconda3/envs/safeauto/lib/python3.10/site-packages/transformers/configuration_utils.py", line 672, in _get_config_dict
    resolved_config_file = cached_file(
  File "/home3/qslh82/miniconda3/envs/safeauto/lib/python3.10/site-packages/transformers/utils/hub.py", line 417, in cached_file
    resolved_file = hf_hub_download(
  File "/home3/qslh82/miniconda3/envs/safeauto/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py", line 101, in inner_f
    return f(*args, **kwargs)
  File "/home3/qslh82/miniconda3/envs/safeauto/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 106, in _inner_fn
    validate_repo_id(arg_value)
  File "/home3/qslh82/miniconda3/envs/safeauto/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 154, in validate_repo_id
    raise HFValidationError(
huggingface_hub.errors.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': './cache_dir/LanguageBind_Image'. Use `repo_type` argument if needed.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home3/qslh82/miniconda3/envs/safeauto/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home3/qslh82/miniconda3/envs/safeauto/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home3/qslh82/SafeAuto/llava/serve/eval_custom_predsig_bddx.py", line 222, in <module>
    main(args)
  File "/home3/qslh82/SafeAuto/llava/serve/eval_custom_predsig_bddx.py", line 63, in main
    tokenizer, model, processor, context_len = load_pretrained_model(args.model_path, args.model_base, model_name,
  File "/home3/qslh82/SafeAuto/llava/model/builder.py", line 128, in load_pretrained_model
    model = AutoModelForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True, **kwargs)
  File "/home3/qslh82/miniconda3/envs/safeauto/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 493, in from_pretrained
    return model_class.from_pretrained(
  File "/home3/qslh82/miniconda3/envs/safeauto/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2700, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "/home3/qslh82/SafeAuto/llava/model/language_model/llava_llama.py", line 46, in __init__
    self.model = LlavaLlamaModel(config)
  File "/home3/qslh82/SafeAuto/llava/model/language_model/llava_llama.py", line 38, in __init__
    super(LlavaLlamaModel, self).__init__(config)
  File "/home3/qslh82/SafeAuto/llava/model/llava_arch.py", line 33, in __init__
    self.image_tower = build_image_tower(config, delay_load=True)
  File "/home3/qslh82/SafeAuto/llava/model/multimodal_encoder/builder.py", line 14, in build_image_tower
    return LanguageBindImageTower(image_tower, args=image_tower_cfg, cache_dir='./cache_dir', **kwargs)
  File "/home3/qslh82/SafeAuto/llava/model/multimodal_encoder/languagebind/__init__.py", line 109, in __init__
    self.cfg_only = LanguageBindImageConfig.from_pretrained(self.image_tower_name, cache_dir=self.cache_dir)
  File "/home3/qslh82/miniconda3/envs/safeauto/lib/python3.10/site-packages/transformers/configuration_utils.py", line 590, in from_pretrained
    config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/home3/qslh82/miniconda3/envs/safeauto/lib/python3.10/site-packages/transformers/configuration_utils.py", line 617, in get_config_dict
    config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/home3/qslh82/miniconda3/envs/safeauto/lib/python3.10/site-packages/transformers/configuration_utils.py", line 693, in _get_config_dict
    raise EnvironmentError(
OSError: Can't load the configuration of './cache_dir/LanguageBind_Image'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure './cache_dir/LanguageBind_Image' is the correct path to a directory containing a config.json file
Finished inference. Output saved to: results/bddx_0.00-0.00_norag
```
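For context: from the traceback, `LanguageBindImageTower` passes the local path `./cache_dir/LanguageBind_Image` to `LanguageBindImageConfig.from_pretrained`, and since no such directory exists on my machine, the library falls back to treating the path as a Hub repo id, which fails validation. Below is a minimal sketch of the workaround I am considering: pre-populating the cache directory from the Hub. The repo id `LanguageBind/LanguageBind_Image` is my assumption about where the image tower lives; please correct me if the project expects a different source.

```python
# Sketch: pre-populate ./cache_dir/LanguageBind_Image so that
# LanguageBindImageConfig.from_pretrained() finds a local config.json
# instead of misinterpreting the path as a Hub repo id.
# Assumption: the image tower maps to the Hub repo
# "LanguageBind/LanguageBind_Image" (not confirmed by the SafeAuto repo).
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="LanguageBind/LanguageBind_Image",
    local_dir="./cache_dir/LanguageBind_Image",
)
```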

Second:

I cannot find PGM_PATH="./pgm/ckpts/pgm/bddx_weights.npy" in eval_bddx.sh. Do I need to train it myself, or could you provide a download link for this pretrained model?
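For reference, this is the quick check I run before launching eval_bddx.sh; it only confirms whether the weights file exists and, if so, what numpy reports about it (the path is the one quoted above; I have no information about the expected contents):

```python
# Quick existence/shape check for the PGM weights file quoted above.
# I do not know the expected contents; allow_pickle=True is used only
# because .npy checkpoints sometimes store pickled objects.
import os
import numpy as np

PGM_PATH = "./pgm/ckpts/pgm/bddx_weights.npy"

if os.path.exists(PGM_PATH):
    weights = np.load(PGM_PATH, allow_pickle=True)
    print("found:", PGM_PATH, "| shape:", weights.shape, "| dtype:", weights.dtype)
else:
    print("missing:", PGM_PATH)
```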
