Merged
1 change: 1 addition & 0 deletions berkeley-function-call-leaderboard/CHANGELOG.md
@@ -2,6 +2,7 @@

All notable changes to the Berkeley Function Calling Leaderboard will be documented in this file.

- [Oct 5, 2024] [#633](https://github.com/ShishirPatil/gorilla/pull/633): Add new model `openbmb/MiniCPM3-4B` to the leaderboard.
- [Oct 5, 2024] [#642](https://github.com/ShishirPatil/gorilla/pull/642): Add the following new models to the leaderboard:
- `Qwen/Qwen2.5-7B-Instruct`
- `Qwen/Qwen2.5-1.5B-Instruct`
1 change: 1 addition & 0 deletions berkeley-function-call-leaderboard/README.md
@@ -189,6 +189,7 @@ Below is _a table of models we support_ to run our leaderboard evaluation against
|Qwen/Qwen2.5-{1.5B,7B}-Instruct 💻| Prompt|
|Qwen/Qwen2-{1.5B,7B}-Instruct 💻| Prompt|
|Team-ACE/ToolACE-8B 💻| Function Calling|
|openbmb/MiniCPM3-4B 💻| Function Calling|

Here, {MODEL} 💻 means the model needs to be hosted locally and served via vLLM, while {MODEL} (without the icon) means the model is accessed through API calls. A trailing `-FC` indicates that the model supports a native function-calling feature. You can check out the table summarizing feature support among different models [here](https://gorilla.cs.berkeley.edu/blogs/8_berkeley_function_calling_leaderboard.html#prompt).

@@ -611,6 +611,12 @@
"Huawei Noah & USTC",
"Apache-2.0",
],
"openbmb/MiniCPM3-4B": [
"MiniCPM3-4B (FC)",
"https://huggingface.co/openbmb/MiniCPM3-4B",
"openbmb",
"Apache-2.0",
],
}

INPUT_PRICE_PER_MILLION_TOKEN = {
@@ -10,6 +10,7 @@
from bfcl.model_handler.oss_model.phi import PhiHandler
from bfcl.model_handler.oss_model.salesforce import SalesforceHandler
from bfcl.model_handler.oss_model.qwen import QwenHandler
from bfcl.model_handler.oss_model.minicpm import MiniCPMHandler
from bfcl.model_handler.proprietary_model.claude import ClaudeHandler
from bfcl.model_handler.proprietary_model.cohere import CohereHandler
from bfcl.model_handler.proprietary_model.databricks import DatabricksHandler
@@ -114,6 +115,7 @@
"Qwen/Qwen2.5-1.5B-Instruct": QwenHandler,
"Qwen/Qwen2.5-7B-Instruct": QwenHandler,
"Team-ACE/ToolACE-8B": LlamaHandler,
"openbmb/MiniCPM3-4B": MiniCPMHandler,

# Deprecated/outdated models, no longer on the leaderboard
# "gorilla-openfunctions-v0": GorillaHandler,
@@ -0,0 +1,18 @@
from bfcl.model_handler.oss_model.base_oss_handler import OSSHandler

class MiniCPMHandler(OSSHandler):
def __init__(self, model_name, temperature) -> None:
super().__init__(model_name, temperature)

def _format_prompt(self, messages, function):
"""
"chat_template": "{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}"
"""
formatted_prompt = ""

for message in messages:
formatted_prompt += f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>\n"

    formatted_prompt += "<|im_start|>assistant\n"  # open the assistant turn for generation

return formatted_prompt
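
The `_format_prompt` method above applies MiniCPM3's ChatML-style template. A minimal standalone sketch of the same formatting logic, using a hypothetical example conversation (the message contents are assumptions, not part of the PR):

```python
def format_minicpm_prompt(messages):
    """Render a message list into MiniCPM3's ChatML-style prompt format,
    mirroring the logic in MiniCPMHandler._format_prompt."""
    prompt = ""
    for message in messages:
        # Each turn is wrapped in <|im_start|>{role} ... <|im_end|> markers.
        prompt += f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>\n"
    # Open an assistant turn so the model generates the next reply.
    prompt += "<|im_start|>assistant\n"
    return prompt


messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the weather in Berkeley?"},
]
print(format_minicpm_prompt(messages))
```

This produces the raw string the locally hosted (vLLM-served) model receives, ending with an open `<|im_start|>assistant` turn that prompts the model to generate its response.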