[BFCL] Evaluation with Correct Precision Settings for Locally-Hosted Models #575

@HuanzhiMao

Description

According to their model cards on Hugging Face, the following models are intended to be evaluated in bfloat16 precision rather than float16. We should change the default precision setting in their model handlers accordingly (see the sketch after the list). Since V100 GPUs do not support bfloat16, these models cannot be evaluated on V100s.

  • deepseek-ai/deepseek-coder-6.7b-instruct
  • google/gemma-7b-it
  • meetkai/functionary-small-v2.2-FC
  • meetkai/functionary-medium-v2.2-FC
  • meetkai/functionary-small-v2.4-FC
  • meetkai/functionary-medium-v2.4-FC
  • NousResearch/Hermes-2-Pro-Llama-3-70B
  • NousResearch/Hermes-2-Pro-Mistral-7B
  • NousResearch/Hermes-2-Theta-Llama-3-8B
  • NousResearch/Hermes-2-Theta-Llama-3-70B
  • meta-llama/Meta-Llama-3-8B-Instruct
  • meta-llama/Meta-Llama-3-70B-Instruct
  • ibm-granite/granite-20b-functioncalling
  • THUDM/glm-4-9b-chat
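
For reference, a minimal sketch of the kind of change this implies, assuming a handler that loads the model via Hugging Face `transformers` (handlers that serve models through vLLM would instead pass `dtype="bfloat16"` to `vllm.LLM`). The model ID is just one example from the list above:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # example from the list above

# bfloat16 requires Ampere (e.g., A100) or newer hardware; V100s only
# support float16, so fail early with a clear error instead of crashing later.
assert torch.cuda.is_bf16_supported(), (
    "This model's card specifies bfloat16, which this GPU does not support."
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # previously torch.float16
    device_map="auto",
)
```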
