
fix(deps): update dependency huggingface-hub to ~=0.32.4 #1483


Merged
merged 1 commit into from
Jun 9, 2025

Conversation

renovate[bot]
Contributor

@renovate renovate bot commented Jun 8, 2025

This PR contains the following updates:

Package: huggingface-hub
Change: ~=0.28.0 -> ~=0.32.4

Release Notes

huggingface/huggingface_hub (huggingface-hub)

v0.32.4: [v0.32.4]: Bug fixes in tiny-agents, and fix input handling for question-answering task.

Compare Source

Full Changelog: huggingface/huggingface_hub@v0.32.3...v0.32.4

This release introduces bug fixes to tiny-agents and InferenceClient.question_answering.

v0.32.3: [v0.32.3]: Handle env variables in tiny-agents, better CLI exit and handling of MCP tool calls arguments

Compare Source

Full Changelog: huggingface/huggingface_hub@v0.32.2...v0.32.3

This release introduces some improvements and bug fixes to tiny-agents:

  • [tiny-agents] Handle env variables in tiny-agents (Python client) #3129
  • [Fix] tiny-agents cli exit issues #3125
  • Improve Handling of MCP Tool Call Arguments #3127

v0.32.2: [v0.32.2]: Add endpoint support in Tiny-Agent + fix snapshot_download on large repos

Compare Source

Full Changelog: huggingface/huggingface_hub@v0.32.1...v0.32.2

v0.32.1: [v0.32.1]: hot-fix: Fix tiny agents on Windows

Compare Source

Patch release to fix #3116

Full Changelog: huggingface/huggingface_hub@v0.32.0...v0.32.1

v0.32.0: [v0.32.0]: MCP Client, Tiny Agents CLI and more!

Compare Source

🤖 Powering LLMs with Tools: MCP Client & Tiny Agents CLI

✨ The huggingface_hub library now includes an MCP Client, designed to empower Large Language Models (LLMs) with the ability to interact with external Tools via the Model Context Protocol (MCP). This client extends the InferenceClient and provides a seamless way to connect LLMs to both local and remote tool servers!

pip install -U huggingface_hub[mcp]

In the following example, we use the Qwen/Qwen2.5-72B-Instruct model via the Nebius inference provider. We then add a remote MCP server, in this case, an SSE server which makes the Flux image generation tool available to the LLM:

import os

from huggingface_hub import ChatCompletionInputMessage, ChatCompletionStreamOutput, MCPClient

async def main():
    async with MCPClient(
        provider="nebius",
        model="Qwen/Qwen2.5-72B-Instruct",
        api_key=os.environ["HF_TOKEN"],
    ) as client:
        await client.add_mcp_server(type="sse", url="https://evalstate-flux1-schnell.hf.space/gradio_api/mcp/sse")
        messages = [
            {
                "role": "user",
                "content": "Generate a picture of a cat on the moon",
            }
        ]
        async for chunk in client.process_single_turn_with_tools(messages):
            # Log messages
            if isinstance(chunk, ChatCompletionStreamOutput):
                delta = chunk.choices[0].delta
                if delta.content:
                    print(delta.content, end="")
            # Or tool calls
            elif isinstance(chunk, ChatCompletionInputMessage):
                print(
                    f"\nCalled tool '{chunk.name}'. Result: '{chunk.content if len(chunk.content) < 1000 else chunk.content[:1000] + '...'}'"
                )

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())

For even simpler development, we now also offer a higher-level Agent class. These 'Tiny Agents' simplify creating conversational Agents by managing the chat loop and state, essentially acting as a user-friendly wrapper around MCPClient. It's designed to be a simple while loop built right on top of an MCPClient.
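For illustration only, here is a rough sketch of how such an Agent might be driven from Python, reusing the provider, model and MCP server from the example above; the exact constructor arguments and the load_tools()/run() streaming interface shown here are assumptions rather than a documented contract:

import asyncio
import os

from huggingface_hub import Agent

async def main():
    # Sketch only: arguments mirror the MCPClient example above; the `servers`
    # list format is an assumption for illustration purposes.
    async with Agent(
        provider="nebius",
        model="Qwen/Qwen2.5-72B-Instruct",
        api_key=os.environ["HF_TOKEN"],
        servers=[{"type": "sse", "url": "https://evalstate-flux1-schnell.hf.space/gradio_api/mcp/sse"}],
    ) as agent:
        await agent.load_tools()
        # The Agent owns the chat loop; here we stream the chunks of a single prompt.
        async for chunk in agent.run("Generate a picture of a cat on the moon"):
            print(chunk)

if __name__ == "__main__":
    asyncio.run(main())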

You can run these Agents directly from the command line:

> tiny-agents run --help

 Usage: tiny-agents run [OPTIONS] [PATH] COMMAND [ARGS]...                                                                                                                           

 Run the Agent in the CLI                                                                                                                                                            


╭─ Arguments ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│   path      [PATH]  Path to a local folder containing an agent.json file or a built-in agent stored in the 'tiny-agents/tiny-agents' Hugging Face dataset                         │
│                     (https://huggingface.co/datasets/tiny-agents/tiny-agents)                                                                                                     │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Options ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ --help          Show this message and exit.                                                                                                                                       │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

You can run these Agents using your own local configs or load them directly from the Hugging Face dataset tiny-agents.

This is an early version of the MCPClient, and community contributions are welcome 🤗

⚡ Inference Providers

Thanks to @diadorer, feature extraction (embeddings) inference is now supported with the Nebius provider!

We’re thrilled to introduce Nscale as an official inference provider! This expansion strengthens the Hub as the go-to entry point for running inference on open-weight models 🔥

We also fixed compatibility issues with structured outputs across providers by ensuring the InferenceClient follows the OpenAI API specification for structured outputs.
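As an illustration of that OpenAI-compatible behavior, here is a minimal sketch; the model name and JSON schema are placeholders, and the response_format payload is assumed to follow the OpenAI json_schema convention:

from huggingface_hub import InferenceClient

client = InferenceClient(provider="auto")

# Placeholder schema; the shape of `response_format` follows the OpenAI spec.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "answer",
        "schema": {
            "type": "object",
            "properties": {"capital": {"type": "string"}},
            "required": ["capital"],
        },
    },
}

completion = client.chat.completions.create(
    model="Qwen/Qwen2.5-72B-Instruct",  # placeholder model
    messages=[{"role": "user", "content": "What is the capital of France? Answer as JSON."}],
    response_format=response_format,
)
print(completion.choices[0].message.content)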

💾 Serialization

We've introduced a new @strict decorator for dataclasses, providing robust validation capabilities to ensure data integrity both at initialization and during assignment. Here is a basic example:

from dataclasses import dataclass
from huggingface_hub.dataclasses import strict, as_validated_field

# Custom validator to ensure a value is positive
@as_validated_field
def positive_int(value: int):
    if not value > 0:
        raise ValueError(f"Value must be positive, got {value}")

@strict
@dataclass
class Config:
    model_type: str
    hidden_size: int = positive_int(default=16)
    vocab_size: int = 32  # Default value

    # Methods named `validate_xxx` are treated as class-wise validators
    def validate_big_enough_vocab(self):
        if self.vocab_size < self.hidden_size:
            raise ValueError(f"vocab_size ({self.vocab_size}) must be greater than hidden_size ({self.hidden_size})")

config = Config(model_type="bert", hidden_size=24)   # Valid
config = Config(model_type="bert", hidden_size=-1)   # Raises StrictDataclassFieldValidationError

# `vocab_size` too small compared to `hidden_size`
config = Config(model_type="bert", hidden_size=32, vocab_size=16)   # Raises StrictDataclassClassValidationError

This feature also includes support for custom validators, class-wise validation logic, handling of additional keyword arguments, and automatic validation based on type hints. See the huggingface_hub documentation for details.

This release also brings support for DTensor in the _get_unique_id / get_torch_storage_size helpers, allowing transformers to seamlessly use save_pretrained with DTensor.

✨ HF API

When creating an Endpoint, the default for scale_to_zero_timeout is now None, meaning endpoints will no longer scale to zero by default unless explicitly configured.
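To keep the old scale-to-zero behavior you now have to opt in explicitly. A hedged sketch (the endpoint name and hardware values below are placeholders):

from huggingface_hub import create_inference_endpoint

# All values below are illustrative; the point is that `scale_to_zero_timeout`
# (in minutes) must now be passed explicitly if you still want scale-to-zero.
endpoint = create_inference_endpoint(
    "my-endpoint-name",
    repository="gpt2",
    framework="pytorch",
    task="text-generation",
    accelerator="cpu",
    vendor="aws",
    region="us-east-1",
    type="protected",
    instance_size="x2",
    instance_type="intel-icl",
    scale_to_zero_timeout=15,
)
endpoint.wait()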

We've also introduced experimental helpers to manage OAuth within FastAPI applications, bringing functionality previously used in Gradio to a wider range of frameworks for easier integration.

📚 Documentation

We now have much more detailed documentation for Inference! This includes more thorough explanations and examples clarifying that the InferenceClient can also be used effectively with local endpoints (llama.cpp, vLLM, MLX, etc.).
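For example, the client can point at any locally running OpenAI-compatible server; the URL and model name below are placeholders:

from huggingface_hub import InferenceClient

# Placeholder URL: a local llama.cpp, vLLM or TGI server exposing an OpenAI-compatible API.
client = InferenceClient(base_url="http://localhost:8080/v1")

completion = client.chat.completions.create(
    model="local-model",  # placeholder; many local servers ignore or loosely match this field
    messages=[{"role": "user", "content": "Hello!"}],
)
print(completion.choices[0].message.content)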

🛠️ Small fixes and maintenance

😌 QoL improvements
🐛 Bug and typo fixes
🏗️ internal

Community contributions

Significant community contributions

The following contributors have made significant changes to the library over the last release:

v0.31.4: [v0.31.4]: strict dataclasses, support DTensor saving & some bug fixes

Compare Source

This release includes some new features and bug fixes.

Full Changelog: huggingface/huggingface_hub@v0.31.2...v0.31.4

v0.31.3

Compare Source

v0.31.2: [v0.31.2] Hot-fix: make hf-xet optional again and bump the min version of the package

Compare Source

Patch release to make hf-xet optional. More context in #3079 and #3078.

Full Changelog: huggingface/huggingface_hub@v0.31.1...v0.31.2

v0.31.1

Compare Source

v0.31.0: [v0.31.0] LoRAs with Inference Providers, auto mode for provider selection, embeddings models and more

Compare Source

🧑‍🎨 Introducing LoRAs with fal.ai and Replicate providers

We're introducing blazingly fast LoRA inference powered by fal.ai and Replicate through Hugging Face Inference Providers! You can use any compatible LoRA available on the Hugging Face Hub and get generations at lightning fast speed ⚡

from huggingface_hub import InferenceClient

client = InferenceClient(provider="fal-ai") # or provider="replicate"

# output is a PIL.Image object
image = client.text_to_image(
    "a boy and a girl looking out of a window with a cat perched on the window sill. There is a bicycle parked in front of them and a plant with flowers to the right side of the image. The wall behind them is visible in the background.",
    model="openfree/flux-chatgpt-ghibli-lora",
)

⚙️ auto mode for provider selection

You can now automatically select a provider for a model using auto mode — it will pick the first available provider based on your preferred order set in https://hf.co/settings/inference-providers.

from huggingface_hub import InferenceClient

# will select the first provider available for the model, sorted by your order.
client = InferenceClient(provider="auto") 

completion = client.chat.completions.create(
    model="Qwen/Qwen3-235B-A22B",
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ],
)

print(completion.choices[0].message)

⚠️ Note: This is now the default value for the provider argument. Previously, the default was hf-inference, so this change may be a breaking one if you're not specifying the provider name when initializing InferenceClient or AsyncInferenceClient.
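If your code relied on the old default, pin it explicitly:

from huggingface_hub import InferenceClient

# Restore the previous behavior by naming the provider explicitly.
client = InferenceClient(provider="hf-inference")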

🧠 Embeddings support with Sambanova (feature-extraction)

We added support for feature extraction (embeddings) inference with sambanova provider.
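A minimal sketch; the embedding model below is a placeholder, and any embeddings model served by Sambanova works:

from huggingface_hub import InferenceClient

client = InferenceClient(provider="sambanova")

# Returns a numpy array of embeddings; the model name is illustrative.
embeddings = client.feature_extraction(
    "Hello world",
    model="intfloat/e5-mistral-7b-instruct",
)
print(embeddings.shape)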

⚡ Other Inference features

The HF Inference API provider is now fully integrated as an Inference Provider, which means it only supports a predefined list of deployed models, selected based on popularity.
Cold-starting arbitrary models from the Hub is no longer supported — if a model isn't already deployed, it won't be available via the HF Inference API.

Miscellaneous improvements and some bug fixes are also included.

✅ Of course, all of those inference changes are also available in AsyncInferenceClient, the async equivalent 🤗

🚀 Xet

Thanks to @bpronan's PR, Xet now supports uploading byte arrays:

from huggingface_hub import upload_file

file_content = b"my-file-content"
repo_id = "username/model-name"  # `hf-xet` should be installed and Xet should be enabled for this repo

upload_file(
    path_or_fileobj=file_content,
    path_in_repo="my-file.txt",  # destination path in the repo (illustrative value)
    repo_id=repo_id,
)

Additionally, we’ve added documentation for environment variables used by hf-xet to optimize file download/upload performance — including options for caching (HF_XET_CHUNK_CACHE_SIZE_BYTES), concurrency (HF_XET_NUM_CONCURRENT_RANGE_GETS), high-performance mode (HF_XET_HIGH_PERFORMANCE), and sequential writes (HF_XET_RECONSTRUCT_WRITE_SEQUENTIALLY).
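These variables are read from the environment before hf-xet is used; the values below are purely illustrative and should be tuned for your own hardware and network:

import os

os.environ["HF_XET_HIGH_PERFORMANCE"] = "1"                        # high-performance mode
os.environ["HF_XET_CHUNK_CACHE_SIZE_BYTES"] = str(20 * 1024**3)    # 20 GiB chunk cache
os.environ["HF_XET_NUM_CONCURRENT_RANGE_GETS"] = "16"              # concurrent range requests
os.environ["HF_XET_RECONSTRUCT_WRITE_SEQUENTIALLY"] = "1"          # sequential writes (e.g. for HDDs)

from huggingface_hub import hf_hub_download

# Placeholder repo/file; the download call itself is unchanged.
hf_hub_download(repo_id="username/model-name", filename="model.safetensors")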

Miscellaneous improvements are also included.

✨ HF API

We added HTTP download support for files larger than 50GB — enabling more reliable handling of large file downloads.

We also added dynamic batching to upload_large_folder, replacing the fixed 50-files-per-commit rule with an adaptive strategy that adjusts based on commit success and duration — improving performance and reducing the risk of hitting the commits rate limit on large repositories.

We added support for new arguments when creating or updating Hugging Face Inference Endpoints.

💔 Breaking changes

  • The default value of the provider argument in InferenceClient and AsyncInferenceClient is now "auto" instead of "hf-inference" (HF Inference API). This means provider selection will now follow your preferred order set in your inference provider settings.
    If your code relied on the previous default ("hf-inference"), you may need to update it explicitly to avoid unexpected behavior.
  • HF Inference API Routing Update: The inference URL path for feature-extraction and sentence-similarity tasks has changed from https://router.huggingface.co/hf-inference/pipeline/{task}/{model} to https://router.huggingface.co/hf-inference/models/{model}/pipeline/{task}.
  • [inference] Necessary breaking change: nest task-specific route inside of model route by @julien-c in #3044

🛠️ Small fixes and maintenance

😌 QoL improvements
🐛 Bug and typo fixes
🏗️ internal

Community contributions

The following contributors have made significant changes to the library over the last release:

v0.30.2: Fix text-generation task in InferenceClient

Compare Source

This patch release fixes some InferenceClient-related bugs.

Full Changelog: huggingface/huggingface_hub@v0.30.1...v0.30.2

v0.30.1: Fix "'sentence-transformers/all-MiniLM-L6-v2' doesn't support task 'feature-extraction'"

Compare Source

Patch release to fix https://github.com/huggingface/huggingface_hub/issues/2967.

Full Changelog: huggingface/huggingface_hub@v0.30.0...v0.30.1

v0.30.0: Xet is here! (+ many cool Inference-related things!)

Compare Source

🚀 Ready. Xet. Go!

This might just be our biggest update in the past two years! Xet is a groundbreaking new protocol for storing large objects in Git repositories, designed to replace Git LFS. Unlike LFS, which deduplicates at the file level, Xet operates at the chunk level—making it a game-changer for AI builders collaborating on massive models and datasets. Our Python integration is powered by xet-core, a Rust-based package that handles all the low-level details.

You can start using Xet today by installing the optional dependency:

pip install -U huggingface_hub[hf_xet]

With that, you can seamlessly download files from Xet-enabled repositories! And don’t worry—everything remains fully backward-compatible if you’re not ready to upgrade yet.
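Downloads look exactly the same as before; a quick sketch with a placeholder repository:

from huggingface_hub import snapshot_download

# Placeholder repo id; with `hf_xet` installed, Xet-enabled repos are fetched over
# the Xet protocol transparently, and other repos fall back to the regular path.
local_dir = snapshot_download(repo_id="username/xet-enabled-model")
print(local_dir)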

Blog post: Xet on the Hub
Docs: Storage backends → Xet

[!TIP]
Want to store your own files with Xet? We’re gradually rolling out support on the Hugging Face Hub, so hf_xet uploads may need to be enabled for your repo. Join the waitlist to get onboarded soon!

This is the result of collaborative work by @bpronan, @hanouticelina, @rajatarya, @jsulz, @assafvayner, @Wauplin, + many others on the infra/Hub side!

⚡ Enhanced InferenceClient

The InferenceClient has received significant updates and improvements in this release, making it more robust and easy to work with.

We’re thrilled to introduce Cerebras and Cohere as official inference providers! This expansion strengthens the Hub as the go-to entry point for running inference on open-weight models.

Novita is now our third provider to support the text-to-video task, after Fal.ai and Replicate:

from huggingface_hub import InferenceClient

client = InferenceClient(provider="novita")

video = client.text_to_video(
    "A young man walking on the street",
    model="Wan-AI/Wan2.1-T2V-14B",
)

It is now possible to centralize billing on your organization rather than on individual accounts! This helps companies manage their budgets and set limits at a team level. The organization must be subscribed to Enterprise Hub.

from huggingface_hub import InferenceClient
client = InferenceClient(provider="fal-ai", bill_to="openai")
image = client.text_to_image(
    "A majestic lion in a fantasy forest",
    model="black-forest-labs/FLUX.1-schnell",
)
image.save("lion.png")

Handling long-running inference tasks just got easier! To prevent request timeouts, we've introduced asynchronous calls for text-to-video inference. We expect more providers to adopt the same structure soon, ensuring better robustness and developer experience.

Miscellaneous improvements are also included.

✨ New Features and Improvements

This release also includes several other notable features and improvements.

It's now possible to pass a wildcard path to the upload command instead of using the --include=... option:

huggingface-cli upload my-cool-model *.safetensors

Deploying an Inference Endpoint from the Model Catalog just got 100x easier! Simply select which model to deploy and we handle the rest to guarantee the best hardware and settings for your dedicated endpoints.

from huggingface_hub import create_inference_endpoint_from_catalog

endpoint = create_inference_endpoint_from_catalog("unsloth/DeepSeek-R1-GGUF")
endpoint.wait()

endpoint.client.chat_completion(...)

The ModelHubMixin got two small updates:

  • authors can provide a paper URL that will be added to all model cards pushed by the library.
  • dataclasses are now supported for any init arg (previously only config was supported); see the sketch below.
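A rough sketch of both additions; the paper URL and the dataclass are placeholders, and the exact class-level keyword is assumed to be paper_url:

from dataclasses import dataclass
from typing import Optional

from huggingface_hub import ModelHubMixin

@dataclass
class TrainingConfig:
    hidden_size: int = 16
    lr: float = 3e-4

class MyModel(
    ModelHubMixin,
    # Assumed keyword; the paper link is added to model cards pushed by the library.
    paper_url="https://arxiv.org/abs/0000.00000",
):
    def __init__(self, config: Optional[TrainingConfig] = None, margin: float = 0.5):
        # Dataclass-typed init args (not just `config`) are now handled automatically.
        self.config = config or TrainingConfig()
        self.margin = margin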

You can now sort by name, size, last updated, and last used when using the delete-cache command:

huggingface-cli delete-cache --sort=size

Since late 2024, it has been possible to manage the LFS files stored in a repo from the UI (see docs). This release makes it possible to do the same programmatically. The goal is to enable users to free up storage space in their private repositories.

>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> lfs_files = api.list_lfs_files("username/my-cool-repo")

# Filter files to delete based on a combination of `filename`, `pushed_at`, `ref` or `size`.
# e.g. select only LFS files in the "checkpoints" folder
>>> lfs_files_to_delete = (lfs_file for lfs_file in lfs_files if lfs_file.filename.startswith("checkpoints/"))

# Permanently delete LFS files
>>> api.permanently_delete_lfs_files("username/my-cool-repo", lfs_files_to_delete)

[!WARNING]
This is a power-user tool to use carefully. Deleting LFS files from a repo is a non-revertible action.

💔 Breaking Changes

labels has been removed from the InferenceClient.zero_shot_classification and InferenceClient.zero_shot_image_classification tasks in favor of candidate_labels. A deprecation warning had been in place for this change.
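For reference, a minimal sketch of the updated call; the model name is a placeholder:

from huggingface_hub import InferenceClient

client = InferenceClient()

# `candidate_labels` replaces the removed `labels` argument.
result = client.zero_shot_classification(
    "I loved the new release of huggingface_hub!",
    candidate_labels=["positive", "negative", "neutral"],
    model="facebook/bart-large-mnli",  # placeholder model choice
)
print(result)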

🛠️ Small Fixes and Maintenance

🐛 Bug and Typo Fixes
🏗️ Internal

Thanks to the work previously introduced by the diffusers team, we've published a GitHub Action that runs code style tooling on demand on Pull Requests, making the life of contributors and reviewers easier.

Other minor updates:

Significant community contributions

The following contributors have made significant changes to the library over the last release:

v0.29.3: [v0.29.3]: Adding 2 new Inference Providers: Cerebras and Cohere 🔥

Compare Source

Added client-side support for the Cerebras and Cohere providers ahead of their official launch on the Hub.

Cerebras: https://github.com/huggingface/huggingface_hub/pull/2901
Cohere: https://github.com/huggingface/huggingface_hub/pull/2888

Full Changelog: huggingface/huggingface_hub@v0.29.2...v0.29.3

v0.29.2: [v0.29.2] Fix payload model name when model id is a URL & Restore sys.stdout in notebook_login() after error

Compare Source

This patch release includes two fixes.

Full Changelog: huggingface/huggingface_hub@v0.29.1...v0.29.2

v0.29.1: [v0.29.1] Fix revision URL encoding in upload_large_folder & Fix endpoint update state handling in InferenceEndpoint.wait()

Compare Source

This patch release includes two fixes:

  • Fix revision bug in _upload_large_folder.py #2879
  • bug fix in inference_endpoint wait function for proper waiting on update #2867

Full Changelog: huggingface/huggingface_hub@v0.29.0...v0.29.1

v0.29.0: [v0.29.0]: Introducing 4 new Inference Providers: Fireworks AI, Hyperbolic, Nebius AI Studio, and Novita 🔥

Compare Source

We're thrilled to announce the addition of four more outstanding serverless Inference Providers to the Hugging Face Hub: Fireworks AI, Hyperbolic, Nebius AI Studio, and Novita. These providers join our growing ecosystem, enhancing the breadth and capabilities of serverless inference directly on the Hub's model pages. This release adds official support for these four providers, making it super easy to use a wide variety of models with your preferred providers.

See our announcement blog for more details: https://huggingface.co/blog/new-inference-providers.


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Never, unless you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

Summary by Sourcery

Build:

  • Bump huggingface_hub from ~=0.28.0 to ~=0.32.4 in pyproject.toml

Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Contributor

sourcery-ai bot commented Jun 8, 2025

Reviewer's Guide

This PR updates the project's development dependency on huggingface_hub from version ~=0.28.0 to ~=0.32.4 by adjusting the version constraint in pyproject.toml.

File-Level Changes

Change: Bump dev dependency version of huggingface_hub
Details: Replace huggingface_hub~=0.28.0 with huggingface_hub~=0.32.4 in the dev dependencies
Files: pyproject.toml

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

Getting Help

@rhatdan
Member

rhatdan commented Jun 9, 2025

LGTM

@rhatdan rhatdan merged commit e4ea40a into main Jun 9, 2025
15 checks passed
@renovate renovate bot deleted the renovate/huggingface-hub-0.x branch June 9, 2025 04:14