📢 [v0.26.0]: Multi-tokens support, conversational VLMs and quality of life improvements
Multiple access tokens support
Managing fine-grained access tokens locally just became much easier and more efficient!
Fine-grained tokens let you create tokens with specific permissions, making them especially useful in production environments or when working with external organizations, where strict access control is essential.
To make managing these tokens easier, we've added a ✨ new set of CLI commands ✨ that allow you to handle them programmatically:
- Store multiple tokens on your machine by simply logging in with the `login()` command for each token: `huggingface-cli login`
- Switch between tokens and choose the one that will be used for all interactions with the Hub: `huggingface-cli auth switch`
- List available access tokens on your machine: `huggingface-cli auth list`
- Delete a specific token from your machine: `huggingface-cli logout [--token-name TOKEN_NAME]`
Note: nothing changes if you are using the `HF_TOKEN` environment variable, as it takes precedence over the token set via the CLI. More details in the documentation. 🤗
- Support multiple tokens locally by @hanouticelina in #2549
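As an illustrative sketch only (not the library's actual implementation), the precedence between the `HF_TOKEN` environment variable and CLI-managed tokens can be modeled like this; `resolve_token` and the token-store dict are hypothetical names:

```python
import os
from typing import Optional

def resolve_token(stored_tokens: dict, active_name: Optional[str]) -> Optional[str]:
    """Hypothetical sketch of the token resolution order: the HF_TOKEN
    environment variable always wins over tokens managed via the CLI."""
    env_token = os.environ.get("HF_TOKEN")
    if env_token:
        return env_token.strip()  # environment variable takes precedence
    if active_name is not None:
        return stored_tokens.get(active_name)  # token chosen via `auth switch`
    return None
```

This mirrors the documented behavior: even after `huggingface-cli auth switch`, setting `HF_TOKEN` overrides the selected token.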
⚡️ InferenceClient improvements
🖼️ Conversational VLMs support
Conversational vision-language models inference is now supported with `InferenceClient`'s chat completion!
```python
from huggingface_hub import InferenceClient

# works with a remote URL or a base64-encoded data URL
image_url = "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"

client = InferenceClient("meta-llama/Llama-3.2-11B-Vision-Instruct")
output = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    "image_url": {"url": image_url},
                },
                {
                    "type": "text",
                    "text": "Describe this image in one sentence.",
                },
            ],
        },
    ],
)
print(output.choices[0].message.content)
# A determined figure of Lady Liberty stands tall, holding a torch aloft, atop a pedestal on an island.
```
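Since the comment above notes that `image_url` also accepts a base64-encoded URL, here is a minimal sketch of building such a data URL from local image bytes; `to_data_url` is a hypothetical helper, not part of `huggingface_hub`:

```python
import base64

def to_data_url(image_bytes: bytes, mime_type: str = "image/jpeg") -> str:
    """Encode raw image bytes as a data URL usable in the `image_url` field."""
    encoded = base64.b64encode(image_bytes).decode("utf-8")
    return f"data:{mime_type};base64,{encoded}"

# Usage (assumed local file): image_url = to_data_url(open("statue.jpg", "rb").read())
```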
More complete support for inference parameters
You can now pass additional inference parameters to more task methods in the `InferenceClient`, including `image_classification`, `text_classification`, `image_segmentation`, `object_detection`, `document_question_answering`, and more!
For more details, visit the `InferenceClient` reference guide.
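As an illustrative sketch only (the real request building lives inside `InferenceClient`), task parameters are typically forwarded by dropping unset values and nesting the rest under a `parameters` key; `build_payload` is a hypothetical name:

```python
def build_payload(inputs, **parameters) -> dict:
    """Illustrative sketch: keep only the parameters the caller actually set,
    then nest them under a 'parameters' key next to the inputs."""
    provided = {k: v for k, v in parameters.items() if v is not None}
    payload = {"inputs": inputs}
    if provided:
        payload["parameters"] = provided
    return payload

print(build_payload("I love this!", top_k=2, function_to_apply=None))
# {'inputs': 'I love this!', 'parameters': {'top_k': 2}}
```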
Of course, all of those changes are also available in the `AsyncInferenceClient` async equivalent 🤗
- Support VLM in chat completion (+some specs updates) by @Wauplin in #2556
- [Inference Client] Add task parameters and a maintenance script of these parameters by @hanouticelina in #2561
- Document vision chat completion with Llama 3.2 11B V by @Wauplin in #2569
✨ HfApi
`update_repo_settings` can now be used to switch the visibility status of a repo. It is a drop-in replacement for `update_repo_visibility`, which is deprecated and will be removed in version v0.29.0.
```diff
- update_repo_visibility(repo_id, private=True)
+ update_repo_settings(repo_id, private=True)
```
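The deprecation pattern behind this change can be sketched as follows; these are simplified stand-ins for illustration, not the actual `HfApi` methods:

```python
import warnings
from typing import Optional

def update_repo_settings(repo_id: str, *, private: Optional[bool] = None) -> dict:
    """Simplified stand-in for the new method (here it just echoes its inputs)."""
    return {"repo_id": repo_id, "private": private}

def update_repo_visibility(repo_id: str, private: bool = False) -> dict:
    """Deprecated shim: warns, then delegates to update_repo_settings
    (removal planned for v0.29.0 in the real library)."""
    warnings.warn(
        "update_repo_visibility is deprecated; use update_repo_settings instead.",
        FutureWarning,
    )
    return update_repo_settings(repo_id, private=private)
```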
- Feature: switch visibility with update_repo_settings by @WizKnight in #2541
The Daily Papers API is now supported in `huggingface_hub`, enabling you to search for papers on the Hub and retrieve detailed paper information.
```python
>>> from huggingface_hub import HfApi
>>> api = HfApi()

# List all papers with "attention" in their title
>>> api.list_papers(query="attention")

# Get paper information for the "Attention Is All You Need" paper
>>> api.paper_info(id="1706.03762")
```
- Daily Papers API by @hlky in #2554
Documentation
Efforts from the Tamil-speaking community to translate guides and package references to Tamil! Check out the result here.
- Translated index.md and installation.md to Tamil by @Raghul-M in #2555
Breaking changes
A few breaking changes have been introduced:
The `cached_download()`, `url_to_filename()` and `filename_to_url()` methods are now completely removed. From now on, you will have to use `hf_hub_download()` to benefit from the new cache layout. The `legacy_cache_layout` argument from `hf_hub_download()` has been removed as well.
These breaking changes have been announced with a regular deprecation cycle.
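For readers migrating from `cached_download()`, the cache layout that `hf_hub_download()` relies on can be sketched like this — a simplified model of the documented `models--<org>--<name>/snapshots/<revision>/` structure, with `cached_file_path` as a hypothetical helper:

```python
from pathlib import Path

def cached_file_path(cache_dir: str, repo_id: str, revision: str, filename: str) -> Path:
    """Simplified sketch of where a downloaded file lands in the new cache layout:
    <cache_dir>/models--<org>--<name>/snapshots/<revision>/<filename>"""
    repo_folder = "models--" + repo_id.replace("/", "--")
    return Path(cache_dir) / repo_folder / "snapshots" / revision / filename
```

In the real cache, `<revision>` is a commit hash and files are deduplicated via a `blobs/` directory; this sketch only shows the snapshot path shape.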
Also, all templating-related utilities have been removed from `huggingface_hub`. Client-side templating is no longer necessary now that all conversational text-generation models in the Inference API are served with TGI.
- Prepare for release 0.26 by @hanouticelina in #2579
- Remove templating utility by @Wauplin in #2611
🛠️ Small fixes and maintenance
QoL improvements
- docs: move translations to `i18n` by @SauravMaheshkar in #2566
- Preserve card metadata format/ordering on load->save by @hlky in #2570
- Remove raw HTML from error message content and improve request ID capture by @hanouticelina in #2584
- [Inference Client] Factorize inference payload build by @hanouticelina in #2601
- Use proper logging in auth module by @hanouticelina in #2604
Fixes
- Use repo_type in HfApi.grant_access url by @albertvillanova in #2551
- Raise error if encountered in chat completion SSE stream by @Wauplin in #2558
- Add 500 HTTP Error to retry list by @farzadab in #2567
- Add missing documentation by @adiaholic in #2572
- Serialization: take into account meta tensor when splitting the `state_dict` by @SunMarc in #2591
- Fix snapshot download when `local_dir` is provided by @hanouticelina in #2592
- Fix PermissionError while creating '.no_exist/' directory in cache by @Wauplin in #2594
- Fix 2609 - Import packaging by default by @Wauplin in #2610
Internal
- Fix test by @Wauplin in #2582
- Make SafeTensorsInfo.parameters a Dict instead of List by @adiaholic in #2585
- Fix tests listing text generation models by @Wauplin in #2593
- Skip flaky Repository test by @Wauplin in #2595
- Support python 3.12 by @hanouticelina in #2605
Significant community contributions
The following contributors have made significant changes to the library over the last release:
- @SauravMaheshkar
  - docs: move translations to `i18n` (#2566)
- @WizKnight
  - Feature: switch visibility with update_repo_settings #2537 (#2541)
- @hlky
  - Preserve card metadata format/ordering on load->save (#2570)
  - Daily Papers API (#2554)
- @Raghul-M
  - Translated index.md and installation.md to Tamil (#2555)