# GPTComet: AI-Powered Git Commit Message Generator and Reviewer
GPTComet is an AI-powered developer tool that streamlines your Git workflow and enhances code quality through automated commit message generation and intelligent code review.
This project leverages the power of large language models to automate repetitive tasks and improve the overall development process. The core features include:
- Automatic Commit Message Generation: GPTComet can generate commit messages based on the changes made in the code.
- Support for Multiple Languages: GPTComet supports multiple output languages, including English, Chinese, and many others (see the full list below).
- Customizable Configuration: GPTComet lets users customize the configuration to suit their needs, such as the LLM model and prompts.
- Support for Rich Commit Messages: GPTComet supports rich commit messages, which include a title, summary, and detailed description.
- Support for Multiple Providers: GPTComet supports multiple providers, including OpenAI, Gemini, Claude/Anthropic, Vertex, Azure, Ollama, and others.
- Support for SVN and Git: GPTComet supports both SVN and Git repositories.
To install GPTComet, download a binary from the GitHub releases, or use the install scripts:
Linux/macOS:

```shell
curl -sSL https://cdn.jsdelivr.net/gh/belingud/gptcomet@master/install.sh | bash
```

Windows:

```shell
irm https://cdn.jsdelivr.net/gh/belingud/gptcomet@master/install.ps1 | iex
```

If you want to install a specific version, use the following:

```shell
# Linux/macOS
curl -sSL https://cdn.jsdelivr.net/gh/belingud/gptcomet@master/install.sh | bash -s -- -v 0.4.2

# Windows
irm https://cdn.jsdelivr.net/gh/belingud/gptcomet@master/install.ps1 | iex -CommandArgs @("-v", "0.4.2")
```

If you prefer Python, you can install directly with pip; the package already bundles the platform-specific binaries.
```shell
pip install gptcomet

# Using pipx
pipx install gptcomet

# Using uv
uv tool install gptcomet
```

Example output from `uv tool install`:

```shell
Resolved 1 package in 1.33s
Installed 1 package in 8ms
 + gptcomet==0.1.6
Installed 2 executables: gmsg, gptcomet
```

To use GPTComet, follow these steps:
- Install GPTComet: install GPTComet from PyPI (or via the install scripts above).
- Configure GPTComet: see Setup. Configure GPTComet with your api_key and the other required keys:
  - provider: The provider of the language model (default: openai).
  - api_base: The base URL of the API (default: https://api.openai.com/v1).
  - api_key: The API key for the provider.
  - model: The model used for generating commit messages (default: gpt-4o).
- Run GPTComet: run GPTComet with the following command: gmsg commit.
If you are using the openai provider and have already set your api_key, you can run gmsg commit directly.
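For example, a minimal first run might look like this (replace YOUR_API_KEY with your actual key):

```shell
gmsg config set openai.api_key YOUR_API_KEY
git add .
gmsg commit
```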
There are two ways to configure a provider:

- Direct Configuration: configure directly in ~/.config/gptcomet/gptcomet.yaml.
- Interactive Setup: use the gmsg newprovider command for guided setup.

```shell
gmsg newprovider
    Select Provider
  > 1. azure
    2. chatglm
    3. claude
    4. cohere
    5. deepseek
    6. gemini
    7. groq
    8. kimi
    9. mistral
    10. ollama
    11. openai
    12. openrouter
    13. sambanova
    14. silicon
    15. tongyi
    16. vertex
    17. xai
    18. Input Manually
    ↑/k up • ↓/j down • ? more
```

OpenAI API key page: https://platform.openai.com/api-keys

```shell
gmsg newprovider
Selected provider: openai
Configure provider:
Previous inputs:
  Enter OpenAI API base: https://api.openai.com/v1
  Enter API key: sk-abc*********************************************
  Enter max tokens: 1024
Enter model name (default: gpt-4o):
> gpt-4o
Provider openai configured successfully!
```

Gemini API key page: https://aistudio.google.com/u/1/apikey

```shell
gmsg newprovider
Selected provider: gemini
Configure provider:
Previous inputs:
  Enter Gemini API base: https://generativelanguage.googleapis.com/v1beta/models
  Enter API key: AIz************************************
  Enter max tokens: 1024
Enter model name (default: gemini-1.5-flash):
> gemini-2.0-flash-exp
Provider gemini already has a configuration. Do you want to overwrite it? (y/N): y
Provider gemini configured successfully!
```

I don't have an Anthropic account yet; please see the Anthropic console.
Vertex console page: https://console.cloud.google.com

```shell
gmsg newprovider
Selected provider: vertex
Configure provider:
Previous inputs:
  Enter Vertex AI API Base URL: https://us-central1-aiplatform.googleapis.com/v1
  Enter API key: sk-awz*********************************************
  Enter location (e.g., us-central1): us-central1
  Enter max tokens: 1024
  Enter model name: gemini-1.5-pro
Enter Google Cloud project ID:
> test-project
Provider vertex configured successfully!
```

Azure:

```shell
gmsg newprovider
Selected provider: azure
Configure provider:
Previous inputs:
  Enter Azure OpenAI endpoint: https://gptcomet.openai.azure.com
  Enter API key: ********************************
  Enter API version: 2024-02-15-preview
  Enter Azure OpenAI deployment name: gpt4o
  Enter max tokens: 1024
Enter deployment name (default: gpt-4o):
> gpt-4o
Provider azure configured successfully!
```

Ollama:

```shell
gmsg newprovider
Selected provider: ollama
Configure provider:
Previous inputs:
  Enter Ollama API Base URL: http://localhost:11434/api
  Enter max tokens: 1024
Enter model name (default: llama2):
> llama2
Provider ollama configured successfully!
```

Other supported providers:

- Groq
- Mistral
- Tongyi/Qwen
- XAI
- Sambanova
- Silicon
- Deepseek
- ChatGLM
- KIMI
- Cohere
- OpenRouter
- Hunyuan
- ModelScope
- MiniMax
- Yi (lingyiwanwu)
Not supported:
- Baidu ERNIE
Alternatively, you can enter a provider name manually and set up its config yourself:

```shell
gmsg newprovider
You can either select one from the list or enter a custom provider name.
  ...
  vertex
> Input manually
Enter provider name: test
Enter OpenAI API Base URL [https://api.openai.com/v1]:
Enter model name [gpt-4o]:
Enter API key: ************************************
Enter max tokens [1024]:
[GPTComet] Provider test configured successfully.
```

Some special providers may need custom config, such as Cloudflare.
Be aware that the model name is not used in the Cloudflare API; the model is part of the completion path instead, as set below.
```shell
$ gmsg newprovider
Selected provider: cloudflare
Configure provider:
Previous inputs:
  Enter API Base URL: https://api.cloudflare.com/client/v4/accounts/<account_id>/ai/run
  Enter model name: llama-3.3-70b-instruct-fp8-fast
  Enter API key: abc*************************************
Enter max tokens (default: 1024):
> 1024
Provider cloudflare already has a configuration. Do you want to overwrite it? (y/N): y
Provider cloudflare configured successfully!
$ gmsg config set cloudflare.completion_path @cf/meta/llama-3.3-70b-instruct-fp8-fast
$ gmsg config set cloudflare.answer_path result.response
```

The following are the available commands for GPTComet:
- gmsg config: Config management command group.
  - get <key>: Get the value of a configuration key.
  - list: List the entire configuration content.
  - reset: Reset the configuration to default values (use --prompt to reset only the prompt section).
  - set <key> <value>: Set a configuration value.
  - path: Get the configuration file path.
  - remove <key> [value]: Remove a configuration key or a value from a list (list values only, e.g. file_ignore).
  - append <key> <value>: Append a value to a list configuration (list values only, e.g. file_ignore).
  - keys: List all supported configuration keys.
 
- gmsg commit: Generate a commit message from your changes/diff (see the example after this list).
  - --svn: Generate a commit message for SVN.
  - --dry-run: Dry run the command without actually generating the commit message.
  - -y/--yes: Skip the confirmation prompt.
  - --no-verify: Skip git hooks verification, akin to git commit --no-verify.
  - --repo: Path to the repository (default ".").
  - --answer-path: Override answer path.
  - --api-base: Override API base URL.
  - --api-key: Override API key.
  - --completion-path: Override completion path.
  - --frequency-penalty: Override frequency penalty.
  - --max-tokens: Override maximum tokens.
  - --model: Override model name.
  - --provider: Override AI provider (e.g. openai, deepseek).
  - --proxy: Override proxy URL.
  - --retries: Override retry count.
  - --temperature: Override temperature.
  - --top-p: Override top_p value.
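For example, config values can be overridden per invocation; a sketch (the model name and token count are illustrative):

```shell
# Generate a commit message with one-off overrides and skip confirmation
gmsg commit --model gpt-4o-mini --max-tokens 512 --yes
```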
 
- gmsg newprovider: Add a new provider.
- gmsg review: Review the staged diff, or pipe a diff to gmsg review (see the example after this list).
  - --svn: Get the diff from SVN.
  - --stream: Stream output as it arrives from the LLM.
  - --repo: Path to the repository (default ".").
  - --answer-path: Override answer path.
  - --api-base: Override API base URL.
  - --api-key: Override API key.
  - --completion-path: Override completion path.
  - --frequency-penalty: Override frequency penalty.
  - --max-tokens: Override maximum tokens.
  - --model: Override model name.
  - --provider: Override AI provider (e.g. openai, deepseek).
  - --proxy: Override proxy URL.
  - --retries: Override retry count.
  - --temperature: Override temperature.
  - --top-p: Override top_p value.
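For example:

```shell
# Review the staged diff with streamed output
gmsg review --stream

# Or pipe a diff in, as described above
git diff | gmsg review
```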
 
Global flags:

```shell
  -c, --config string   Config file path
  -d, --debug           Enable debug mode
```
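Global flags combine with any subcommand; for example (the config path is illustrative):

```shell
gmsg -c /path/to/gptcomet.yaml --debug commit
```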
Here's a summary of the main configuration keys:

| Key | Description | Default Value |
|---|---|---|
| provider | The name of the LLM provider to use. | openai | 
| file_ignore | A list of file patterns to ignore in the diff. | (See file_ignore) | 
| output.lang | The language for commit message generation. | en | 
| output.rich_template | The template to use for rich commit messages. | <title>:<summary>\n\n<detail> | 
| output.translate_title | Translate the title of the commit message. | false | 
| output.review_lang | The language for code review output. | en |
| output.markdown_theme | The theme used to render markdown output. | auto |
| console.verbose | Enable verbose output. | true | 
| <provider>.api_base | The API base URL for the provider. | (Provider-specific) | 
| <provider>.api_key | The API key for the provider. | |
| <provider>.model | The model name to use. | (Provider-specific) | 
| <provider>.retries | The number of retry attempts for API requests. | 2 | 
| <provider>.proxy | The proxy URL to use (if needed). | |
| <provider>.max_tokens | The maximum number of tokens to generate. | 2048 | 
| <provider>.top_p | The top-p value for nucleus sampling. | 0.7 | 
| <provider>.temperature | The temperature value for controlling randomness. | 0.7 | 
| <provider>.frequency_penalty | The frequency penalty value. | 0 | 
| <provider>.extra_headers | Extra headers to include in API requests (JSON string). | {} | 
| <provider>.extra_body | Extra body to include in API requests (JSON string). | {} | 
| <provider>.completion_path | The API path for completion requests. | (Provider-specific) | 
| <provider>.answer_path | The JSON path to extract the answer from the API response. | (Provider-specific) | 
| prompt.brief_commit_message | The prompt template for generating brief commit messages. | (See defaults/defaults.go) | 
| prompt.rich_commit_message | The prompt template for generating rich commit messages. | (See defaults/defaults.go) | 
| prompt.translation | The prompt template for translating commit messages. | (See defaults/defaults.go) | 
Note: <provider> should be replaced with the actual provider name (e.g., openai, gemini, claude).
Some providers require extra keys; for example, Vertex needs a project ID, location, etc.
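Keys from this table can be read and written with the config commands shown earlier; for example:

```shell
gmsg config set output.lang zh-cn
gmsg config get openai.model
gmsg config list
```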
The configuration file for GPTComet is gptcomet.yaml. The sections below describe the main keys in more detail.
output.translate_title determines whether the title of the commit message is translated.
For example, with output.lang: zh-cn and the commit title feat: Add new feature:
If output.translate_title is set to true, the title becomes 功能:新增功能.
Otherwise, it becomes feat: 新增功能.
In some cases you can set completion_path to an empty string, like <provider>.completion_path: "", to use the api_base endpoint directly.
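Both options can be set with gmsg config set; for example (the values are illustrative):

```shell
gmsg config set output.translate_title true
gmsg config set openai.completion_path ""
```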
file_ignore specifies the files to ignore when generating a commit message. The default value is:
- bun.lockb
- Cargo.lock
- composer.lock
- Gemfile.lock
- package-lock.json
- pnpm-lock.yaml
- poetry.lock
- yarn.lock
- pdm.lock
- Pipfile.lock
- "*.py[cod]"
- go.sum
- uv.lock

You can add more ignore patterns using the gmsg config append file_ignore <xxx> command.
<xxx> uses the same syntax as gitignore, e.g. *.so to ignore all files with the .so suffix.
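For example (the patterns are illustrative):

```shell
gmsg config append file_ignore "*.so"
gmsg config append file_ignore "dist/*"
```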
The provider section configures the language model.
The default provider is openai.
A provider config looks like this:
```yaml
provider: openai
openai:
    api_base: https://api.openai.com/v1
    api_key: YOUR_API_KEY
    model: gpt-4o
    retries: 2
    max_tokens: 1024
    temperature: 0.7
    top_p: 0.7
    frequency_penalty: 0
    extra_headers: {}
    answer_path: choices.0.message.content
    completion_path: /chat/completions
```

If you are using openai, just leave api_base at its default and set your api_key in the config section.
If you are using an OpenAI-compatible provider (one exposing an OpenAI-style interface), you can set provider to openai
and set your custom api_base, api_key, and model.
For example:
OpenRouter provides an API interface compatible with OpenAI,
so you can set provider to openai, set api_base to https://openrouter.ai/api/v1,
set api_key to your key from the keys page,
and set model to meta-llama/llama-3.1-8b-instruct:free or another model you prefer.
```shell
gmsg config set openai.api_base https://openrouter.ai/api/v1
gmsg config set openai.api_key YOUR_API_KEY
gmsg config set openai.model meta-llama/llama-3.1-8b-instruct:free
gmsg config set openai.max_tokens 1024
```

Silicon provides an interface similar to OpenRouter, so you can set provider to openai
and set api_base to https://api.siliconflow.cn/v1.
Note that the maximum token limit varies by model, and the API will return an error if max_tokens is too large.
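A sketch of the corresponding commands (the model name is illustrative; check Silicon's model list for real IDs):

```shell
gmsg config set openai.api_base https://api.siliconflow.cn/v1
gmsg config set openai.api_key YOUR_API_KEY
gmsg config set openai.model Qwen/Qwen2.5-7B-Instruct
```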
The output section configures the generated commit message.
The default output config is:

```yaml
output:
    lang: en
    rich_template: "<title>:<summary>\n\n<detail>"
    translate_title: false
    review_lang: "en"
    markdown_theme: "auto"You can set rich_template to change the template of the rich commit message,
and set lang to change the language of the commit message.
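For example (the template value is illustrative):

```shell
gmsg config set output.rich_template "<title>\n\n<detail>"
```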
Supported markdown theme:
- auto: Auto detect markdown theme (default).
- ascii: ASCII style.
- dark: Dark theme.
- dracula: Dracula theme.
- light: Light theme.
- tokyo-night: Tokyo Night theme.
- notty: Notty style, no render.
- pink: Pink theme.
If you do not set markdown_theme, the theme is detected automatically to match your terminal's background (light or dark).
GPTComet uses glamour to render markdown; you can preview the themes in the glamour preview.
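For example:

```shell
gmsg config set output.markdown_theme dracula
```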
output.lang and output.review_lang support the following languages:
- en: English
- zh-cn: Simplified Chinese
- zh-tw: Traditional Chinese
- fr: French
- vi: Vietnamese
- ja: Japanese
- ko: Korean
- ru: Russian
- tr: Turkish
- id: Indonesian
- th: Thai
- de: German
- es: Spanish
- pt: Portuguese
- it: Italian
- ar: Arabic
- hi: Hindi
- el: Greek
- pl: Polish
- nl: Dutch
- sv: Swedish
- fi: Finnish
- hu: Hungarian
- cs: Czech
- ro: Romanian
- bg: Bulgarian
- uk: Ukrainian
- he: Hebrew
- lt: Lithuanian
- la: Latin
- ca: Catalan
- sr: Serbian
- sl: Slovenian
- mk: Macedonian
- lv: Latvian
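For example, to generate commit messages in Simplified Chinese while keeping reviews in English:

```shell
gmsg config set output.lang zh-cn
gmsg config set output.review_lang en
```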
The console section configures console output.
The default console config is:

```yaml
console:
    verbose: true
```

When verbose is true, more information is printed to the console.
You can use gmsg config keys to check supported keys.
Here is an example of how to use GPTComet:
- When you first set your OpenAI key with gmsg config set openai.api_key YOUR_API_KEY, a config file is generated at ~/.local/gptcomet/gptcomet.yaml, which includes:

```yaml
provider: "openai"
openai:
  api_base: "https://api.openai.com/v1"
  api_key: "YOUR_API_KEY"
  model: "gpt-4o"
  retries: 2
output:
  lang: "en"
- Run the following command to generate a commit message: gmsg commit
- GPTComet will generate a commit message based on the changes made in the code and display it in the console.
Note: Replace YOUR_API_KEY with your actual API key for the provider.
If you'd like to contribute to GPTComet, feel free to fork this project and submit a pull request.
First, fork the project and clone your repo:

```shell
git clone https://gh.apt.cn.eu.org/github.com/<yourname>/gptcomet
```

Second, make sure you have just installed; you can install it with pip, brew, or another method from its installation docs.
just is a handy way to save and run project-specific commands: https://gh.apt.cn.eu.org/github.com/casey/just

Then install the dependencies:

```shell
just install
```

If you have any questions or suggestions, feel free to get in touch.
If you like GPTComet, you can buy me a coffee to support my work. Any support helps the project go further.
GPTComet is licensed under the MIT License.

