Commit b4edd01

zhangch-ssai_user and HuanzhiMao authored

Add Qwen handler and fix mean_latency calculation error for OSS models (#642)

Hello, there may be an anomaly in the mean-latency calculation for the OSS models; this commit attempts to fix it.

Co-authored-by: ai_user <[email protected]>
Co-authored-by: Huanzhi (Hans) Mao <[email protected]>

1 parent 7c0efb1 · commit b4edd01
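The commit message mentions a mean-latency fix for OSS models, though that part of the diff is not shown on this page. Purely as a hypothetical illustration of one common source of such an anomaly (the function name and data shapes below are assumptions, not the actual BFCL code): averaging per-batch latencies instead of per-request latencies skews the mean whenever batch sizes differ.

```python
def mean_latency(batch_latencies, batch_sizes):
    """Mean latency per request (hypothetical sketch, not the BFCL implementation).

    The classic bug would be sum(batch_latencies) / len(batch_latencies),
    which averages per batch rather than per request.
    """
    total_time = sum(batch_latencies)   # total wall-clock time spent
    total_requests = sum(batch_sizes)   # total number of requests served
    return total_time / total_requests


# Two batches: 2.0s serving 2 requests, 4.0s serving 4 requests.
print(mean_latency([2.0, 4.0], [2, 4]))  # 6.0s over 6 requests -> 1.0
```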

File tree: 5 files changed, +56 −0 lines changed

berkeley-function-call-leaderboard/CHANGELOG.md (5 additions, 0 deletions)

```diff
@@ -2,6 +2,11 @@
 
 All notable changes to the Berkeley Function Calling Leaderboard will be documented in this file.
 
+- [Oct 5, 2024] [#642](https://github.com/ShishirPatil/gorilla/pull/642): Add the following new models to the leaderboard:
+  - `Qwen/Qwen2.5-7B-Instruct`
+  - `Qwen/Qwen2.5-1.5B-Instruct`
+  - `Qwen/Qwen2-7B-Instruct`
+  - `Qwen/Qwen2-1.5B-Instruct`
 - [Oct 4, 2024] [#653](https://github.com/ShishirPatil/gorilla/pull/653): Add new model `Team-ACE/ToolACE-8B` to the leaderboard.
 - [Oct 4, 2024] [#671](https://github.com/ShishirPatil/gorilla/pull/671): Speed up locally-hosted model's inference process by parallelizing the inference requests.
 - [Sept 27, 2024] [#640](https://github.com/ShishirPatil/gorilla/pull/640): Add the following new models to the leaderboard:
```

berkeley-function-call-leaderboard/README.md (2 additions, 0 deletions)

```diff
@@ -186,6 +186,8 @@ Below is _a table of models we support_ to run our leaderboard evaluation agains
 |ibm-granite/granite-20b-functioncalling 💻| Function Calling|
 |yi-large-fc | Function Calling|
 |MadeAgents/Hammer-7b 💻| Function Calling|
+|Qwen/Qwen2.5-{1.5B,7B}-Instruct 💻| Prompt|
+|Qwen/Qwen2-{1.5B,7B}-Instruct 💻| Prompt|
 |Team-ACE/ToolACE-8B 💻| Function Calling|
 
 Here {MODEL} 💻 means the model needs to be hosted locally and called by vllm, {MODEL} means the models that are called API calls. For models with a trailing `-FC`, it means that the model supports function-calling feature. You can check out the table summarizing feature supports among different models [here](https://gorilla.cs.berkeley.edu/blogs/8_berkeley_function_calling_leaderboard.html#prompt).
```

berkeley-function-call-leaderboard/bfcl/eval_checker/model_metadata.py (24 additions, 0 deletions)

```diff
@@ -581,6 +581,30 @@
         "Microsoft",
         "MIT",
     ],
+    "Qwen/Qwen2-1.5B-Instruct": [
+        "Qwen2-1.5B-Instruct (Prompt)",
+        "https://huggingface.co/Qwen/Qwen2-1.5B-Instruct",
+        "Qwen",
+        "apache-2.0",
+    ],
+    "Qwen/Qwen2-7B-Instruct": [
+        "Qwen2-7B-Instruct (Prompt)",
+        "https://huggingface.co/Qwen/Qwen2-7B-Instruct",
+        "Qwen",
+        "apache-2.0",
+    ],
+    "Qwen/Qwen2.5-1.5B-Instruct": [
+        "Qwen2.5-1.5B-Instruct (Prompt)",
+        "https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct",
+        "Qwen",
+        "apache-2.0",
+    ],
+    "Qwen/Qwen2.5-7B-Instruct": [
+        "Qwen2.5-7B-Instruct (Prompt)",
+        "https://huggingface.co/Qwen/Qwen2.5-7B-Instruct",
+        "Qwen",
+        "apache-2.0",
+    ],
     "Team-ACE/ToolACE-8B": [
         "ToolACE-8B (FC)",
         "https://huggingface.co/Team-ACE/ToolACE-8B",
```
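Each metadata entry above appears to follow the positional pattern [display name, model URL, organization, license]. A minimal sketch of unpacking one such entry (the `MODEL_METADATA` dict name here is an assumption for illustration, not necessarily the variable name used in the file):

```python
# Assumed positional layout of each entry, inferred from the diff:
# [display name, model URL, organization, license]
MODEL_METADATA = {
    "Qwen/Qwen2.5-7B-Instruct": [
        "Qwen2.5-7B-Instruct (Prompt)",
        "https://huggingface.co/Qwen/Qwen2.5-7B-Instruct",
        "Qwen",
        "apache-2.0",
    ],
}

display_name, url, org, license_id = MODEL_METADATA["Qwen/Qwen2.5-7B-Instruct"]
print(display_name, license_id)  # Qwen2.5-7B-Instruct (Prompt) apache-2.0
```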

berkeley-function-call-leaderboard/bfcl/model_handler/handler_map.py (5 additions, 0 deletions)

```diff
@@ -9,6 +9,7 @@
 from bfcl.model_handler.oss_model.llama_fc import LlamaFCHandler
 from bfcl.model_handler.oss_model.phi import PhiHandler
 from bfcl.model_handler.oss_model.salesforce import SalesforceHandler
+from bfcl.model_handler.oss_model.qwen import QwenHandler
 from bfcl.model_handler.proprietary_model.claude import ClaudeHandler
 from bfcl.model_handler.proprietary_model.cohere import CohereHandler
 from bfcl.model_handler.proprietary_model.databricks import DatabricksHandler
@@ -108,6 +109,10 @@
     "ibm-granite/granite-20b-functioncalling": GraniteHandler,
     # "MadeAgents/Hammer-7b": HammerHandler, # TODO: Update handler once they have a multi-turn format
     "THUDM/glm-4-9b-chat": GLMHandler,
+    "Qwen/Qwen2-1.5B-Instruct": QwenHandler,
+    "Qwen/Qwen2-7B-Instruct": QwenHandler,
+    "Qwen/Qwen2.5-1.5B-Instruct": QwenHandler,
+    "Qwen/Qwen2.5-7B-Instruct": QwenHandler,
     "Team-ACE/ToolACE-8B": LlamaHandler,
 
     # Deprecated/outdated models, no longer on the leaderboard
```
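The handler map above routes each model name to its handler class; all four new Qwen models share the one `QwenHandler`. A minimal, self-contained sketch of that dispatch pattern (the class bodies and the `get_handler` helper are simplified assumptions, not the actual BFCL code):

```python
class OSSHandler:
    """Stand-in for bfcl.model_handler.oss_model.base_oss_handler.OSSHandler."""
    def __init__(self, model_name, temperature):
        self.model_name = model_name
        self.temperature = temperature


class QwenHandler(OSSHandler):
    """Stand-in for the new Qwen handler."""


# All four Qwen models map to the same handler class.
handler_map = {
    "Qwen/Qwen2-1.5B-Instruct": QwenHandler,
    "Qwen/Qwen2-7B-Instruct": QwenHandler,
    "Qwen/Qwen2.5-1.5B-Instruct": QwenHandler,
    "Qwen/Qwen2.5-7B-Instruct": QwenHandler,
}


def get_handler(model_name, temperature=0.0):
    # Look up the class for this model name, then instantiate it.
    return handler_map[model_name](model_name, temperature)


handler = get_handler("Qwen/Qwen2.5-7B-Instruct")
print(type(handler).__name__)  # QwenHandler
```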
berkeley-function-call-leaderboard/bfcl/model_handler/oss_model/qwen.py (new file, 20 additions, 0 deletions)

```diff
@@ -0,0 +1,20 @@
+from bfcl.model_handler.oss_model.base_oss_handler import OSSHandler
+
+
+class QwenHandler(OSSHandler):
+    def __init__(self, model_name, temperature) -> None:
+        super().__init__(model_name, temperature)
+
+    def _format_prompt(self, messages, function):
+        # Qwen is used in its prompting mode, not its tool-use mode.
+        """
+        "chat_template": "{%- if tools %}\n {{- '<|im_start|>system\\n' }}\n {%- if messages[0]['role'] == 'system' %}\n {{- messages[0]['content'] }}\n {%- else %}\n {{- 'You are Qwen, created by Alibaba Cloud. You are a helpful assistant.' }}\n {%- endif %}\n {{- \"\\n\\n# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n {%- for tool in tools %}\n {{- \"\\n\" }}\n {{- tool | tojson }}\n {%- endfor %}\n {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n {%- if messages[0]['role'] == 'system' %}\n {{- '<|im_start|>system\\n' + messages[0]['content'] + '<|im_end|>\\n' }}\n {%- else %}\n {{- '<|im_start|>system\\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\\n' }}\n {%- endif %}\n{%- endif %}\n{%- for message in messages %}\n {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) or (message.role == \"assistant\" and not message.tool_calls) %}\n {{- '<|im_start|>' + message.role + '\\n' + message.content + '<|im_end|>' + '\\n' }}\n {%- elif message.role == \"assistant\" %}\n {{- '<|im_start|>' + message.role }}\n {%- if message.content %}\n {{- '\\n' + message.content }}\n {%- endif %}\n {%- for tool_call in message.tool_calls %}\n {%- if tool_call.function is defined %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {{- '\\n<tool_call>\\n{\"name\": \"' }}\n {{- tool_call.name }}\n {{- '\", \"arguments\": ' }}\n {{- tool_call.arguments | tojson }}\n {{- '}\\n</tool_call>' }}\n {%- endfor %}\n {{- '<|im_end|>\\n' }}\n {%- elif message.role == \"tool\" %}\n {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != \"tool\") %}\n {{- '<|im_start|>user' }}\n {%- endif %}\n {{- '\\n<tool_response>\\n' }}\n {{- message.content }}\n {{- '\\n</tool_response>' }}\n {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n {{- '<|im_end|>\\n' }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n{%- endif %}\n",
+        """
+        formatted_prompt = ""
+
+        for message in messages:
+            formatted_prompt += f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>\n"
+
+        formatted_prompt += "<|im_start|>assistant\n"
+
+        return formatted_prompt
```
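The `_format_prompt` loop above concatenates messages in Qwen's ChatML-style layout and leaves the assistant turn open for generation. A standalone re-implementation of that loop, runnable without the BFCL package (the sample messages are illustrative):

```python
def format_prompt(messages):
    # Mirrors QwenHandler._format_prompt: each message becomes a
    # <|im_start|>role\ncontent<|im_end|> block, followed by an
    # open assistant turn for the model to complete.
    formatted_prompt = ""
    for message in messages:
        formatted_prompt += f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>\n"
    formatted_prompt += "<|im_start|>assistant\n"
    return formatted_prompt


messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the weather in Berkeley?"},
]
print(format_prompt(messages))
```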
