Transformers v4.6 is our first release dedicated to computer vision!
1️⃣ CLIP from OpenAI, for image-text similarity or zero-shot image classification
2️⃣ ViT from Google AI
3️⃣ DeiT from Facebook AI
Try SOTA image classification with ViT and DeiT on the Model Hub, or start from the zero-shot CLIP sketch below!
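Here's a minimal sketch of zero-shot image classification with CLIP (the checkpoint, demo image, and candidate labels are illustrative choices, not the only options):

```python
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Illustrative checkpoint choice
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Any image works; this COCO photo is a common demo
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Score the image against free-form text labels
inputs = processor(text=["a photo of a cat", "a photo of a dog"],
                   images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)  # per-label probabilities
```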
🤗Datasets v1.6 brings you speed, features, and of course datasets:
- Now blazing fast: ~0.1 ms per query on a 100-billion-row dataset 🚀🤯
- Even faster for small datasets, which are now loaded in memory by default 🏎
- Easy dataset concatenation: row ↔️, column ↕️, from memory 🧠 or disk 💽 (see the sketch below)
- 800+ datasets available 📈, now with CUAD, OpenSLR, GEM 1.1, and more
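Concatenation is a one-liner; here's a minimal sketch with toy data (the column names are made up for illustration):

```python
from datasets import Dataset, concatenate_datasets

texts = Dataset.from_dict({"text": ["hello", "world"]})
more_texts = Dataset.from_dict({"text": ["foo", "bar"]})
labels = Dataset.from_dict({"label": [0, 1]})

# Row-wise: stack examples from datasets sharing the same columns
rows = concatenate_datasets([texts, more_texts])

# Column-wise (axis=1, new in v1.6): zip datasets of the same length
table = concatenate_datasets([texts, labels], axis=1)
```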
Our new Expert Acceleration Program offers direct premium support from the Hugging Face team to accelerate companies in their Transformers journey.
🔮 Which model should I fine-tune, and how?
🏎 How do I reduce latency by 10X?
⚙️ How do I optimize my production setup?
🧠 How do I leverage Transformers in SageMaker?
🔍 How do I mitigate bias in datasets and models?
Contact us to learn more!
The Accelerated Inference API is now available through our $9/mo Supporter plan!
It’s the easiest way to integrate and serve any of the 13,000+ models on the Hugging Face Hub, or your own private models, on our accelerated and scalable infrastructure via simple API calls.
Sign up to become a 🤗 Supporter today.
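A call to the API is a plain HTTP POST. A minimal sketch, assuming the sentiment model below as an example and YOUR_API_TOKEN as a placeholder for your real token:

```python
import requests

# Any Hub model ID works; this sentiment model is only an example
API_URL = "https://api-inference.huggingface.co/models/distilbert-base-uncased-finetuned-sst-2-english"
headers = {"Authorization": "Bearer YOUR_API_TOKEN"}  # placeholder token

response = requests.post(API_URL, headers=headers,
                         json={"inputs": "I love the new Supporter plan!"})
print(response.json())
```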
Create and deploy fine-tuned state-of-the-art models automagically with AutoNLP!
New this month:
📃 Summarization models
🗣 Speech Recognition (ASR) models
📈 Regression models
🇺🇳 New languages: Hindi, Japanese, Chinese and Dutch
Let us know which task or language you'd like us to add next!
Google's JAX/Flax can now serve as Transformers' backbone ML library.
JAX/Flax makes distributed training on TPU effortless and highly efficient!
Over 3,000 pretrained model checkpoints have been converted to JAX and can be fine-tuned on downstream Natural Language Understanding tasks (see the Flax sketch below).
👉 Google Colab
👉 Runtime evaluation
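As a minimal sketch, here's how one of the converted checkpoints loads as a Flax model (BERT is just one example among the 3,000+):

```python
from transformers import AutoTokenizer, FlaxBertModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = FlaxBertModel.from_pretrained("bert-base-uncased")  # loads the Flax weights

# Flax models consume NumPy arrays; JAX dispatches to CPU, GPU, or TPU
inputs = tokenizer("JAX/Flax is now a first-class backend!", return_tensors="np")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```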
Want to run your PyTorch training loop on multiple GPUs or TPUs without an abstract class you can't control or tweak easily? Try our new open source library, 🤗 Accelerate!
With just five lines of code to add, your script will run locally (for debugging) as well as on any distributed setup!
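Roughly, those five lines look like this in an existing loop (a sketch assuming your own model, optimizer, and dataloader, with a transformers-style model that returns a loss):

```python
from accelerate import Accelerator

accelerator = Accelerator()  # detects single-GPU, multi-GPU, or TPU setup

# Wrap your existing objects; everything is moved to the right device(s)
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for batch in dataloader:
    optimizer.zero_grad()
    loss = model(**batch).loss         # assumes the model returns a loss
    accelerator.backward(loss)         # replaces loss.backward()
    optimizer.step()
```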