News
📣 Inference API: Pricing Announcement 📣
We've just launched our Inference API beta, which lets you run fast inference on any of the 3,000+ models made available by the community.
It is an optimized and accelerated version of the open-access API that powers our free inference widgets, available on all of our model pages.
➡️ To subscribe, you will need to create or join an organization and head over to huggingface.co/pricing
If you need faster (GPU) inference, large volumes of requests, and/or a dedicated endpoint, let us know at api@huggingface.co
You can find documentation about the API here.
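If you want a feel for what a request looks like, here is a minimal sketch using Python's requests library; the model id is just an example, and YOUR_API_TOKEN stands in for the token from your account settings.

```python
import requests

# Example model id; swap in any model from the hub
API_URL = "https://api-inference.huggingface.co/models/distilbert-base-uncased-finetuned-sst-2-english"
headers = {"Authorization": "Bearer YOUR_API_TOKEN"}  # token from your Hugging Face account settings

def query(payload):
    """Send a JSON payload to the hosted Inference API and return the parsed response."""
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

print(query({"inputs": "Retrieval augmentation is a great idea!"}))
```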
New Release of Tokenizers
The new 0.9.0 version of the library brings a lot of improvements:
- More robust alignment tracking
- Better error messages
- Many bug fixes
But most importantly, this new release brings full support for the Unigram algorithm 🎉. You can convert your SentencePiece tokenizers to 🤗 Tokenizers and start using all the features you love. We also support training Unigram tokenizers from scratch. Did someone say Byte-level Unigram 🤫?
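Here is a minimal sketch of training a Unigram tokenizer from scratch; the corpus.txt file, vocabulary size, and special tokens are placeholders, and exact argument names may differ slightly between library versions.

```python
from tokenizers import Tokenizer
from tokenizers.models import Unigram
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import UnigramTrainer

# Build an empty Unigram tokenizer and train it on a local text file
tokenizer = Tokenizer(Unigram())
tokenizer.pre_tokenizer = Whitespace()
trainer = UnigramTrainer(vocab_size=8000, special_tokens=["<unk>", "<pad>"])
tokenizer.train(files=["corpus.txt"], trainer=trainer)

print(tokenizer.encode("Unigram tokenization, trained from scratch!").tokens)
```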
The library also now has proper documentation! Go check it out here!
New Model + Demo: Retrieval-Augmented Generation (RAG)
We've just released RAG, the first retrieval-augmented model in the library, in collaboration with Facebook AI. Retrieval augmentation is a new paradigm that empowers models to efficiently find new information in a text corpus like Wikipedia at inference time, rather than trying to fit all of this knowledge into a huge fixed parameter set. The method lets the model excel at a number of tasks, including question answering and question generation. You can try out our demo for both of these settings here or check out the model docs.
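For a quick smoke test in code, here is a minimal sketch using the facebook/rag-sequence-nq checkpoint; use_dummy_dataset loads a tiny retrieval index so you don't have to download the full Wikipedia dump, so don't expect great answers from it.

```python
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration

tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
# use_dummy_dataset=True loads a tiny index for quick experimentation
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True
)
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)

# Ask a question: documents are retrieved at inference time and used to generate the answer
inputs = tokenizer("who wrote the origin of species", return_tensors="pt")
generated = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```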
New Model: LXMERT
🤗 Transformers welcomes its first-ever end-to-end multimodal transformer and demo. LXMERT is the current state-of-the-art model for visual question answering (answering textual questions about a given image).
The above GIF demonstrates the capabilities of the version of the model pre-trained on the VQA dataset.
Check out our colab notebook to play with the model using your own questions and images.
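If you'd rather poke at the model directly, here is a rough sketch of the question-answering head; in the real demo the visual features come from a Faster R-CNN backbone, so the random tensors below only illustrate the expected shapes.

```python
import torch
from transformers import LxmertTokenizer, LxmertForQuestionAnswering

tokenizer = LxmertTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")
model = LxmertForQuestionAnswering.from_pretrained("unc-nlp/lxmert-vqa-uncased")

inputs = tokenizer("what color is the cat?", return_tensors="pt")
# Placeholder visual inputs: 36 region features (dim 2048) and their bounding boxes (dim 4).
# A real pipeline would extract these with a Faster R-CNN backbone.
visual_feats = torch.randn(1, 36, 2048)
visual_pos = torch.rand(1, 36, 4)

outputs = model(**inputs, visual_feats=visual_feats, visual_pos=visual_pos)
predicted_answer_id = outputs.question_answering_score.argmax(-1)
```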
New Model: Funnel-Transformer
The latest 🤗 Transformers release includes yet another type of model: the Funnel-Transformer (paper). This model combines the classic transformer architecture with a feature widely used in computer-vision CNNs: pooling. After a given block of layers, the hidden states are pooled and the sequence length is cut in half.
These pooling steps, each bypassed by a residual skip connection, allow Funnel-Transformers to be deeper than other transformers at a lower computational cost. The design speeds up inference without hindering performance on tasks that only require a summary of the sentence, such as classification, and with a decoder that upsamples the hidden states back to the original sequence length, the model also reaches state-of-the-art performance on token-level tasks such as predicting masked tokens.
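To see the pooling at work, here is a small sketch comparing the encoder-only checkpoint with the full model; the exact shapes depend on the checkpoint and input, but the encoder-only variant returns a shorter sequence while the full model upsamples back to the input length.

```python
from transformers import FunnelTokenizer, FunnelBaseModel, FunnelModel

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small")
inputs = tokenizer("Pooling shortens the sequence inside the encoder.", return_tensors="pt")

# Encoder-only checkpoint: the output sequence is shorter than the input because of pooling
encoder_only = FunnelBaseModel.from_pretrained("funnel-transformer/small-base")
print(encoder_only(**inputs).last_hidden_state.shape)

# Full model: a decoder upsamples the hidden states back to the input length
full_model = FunnelModel.from_pretrained("funnel-transformer/small")
print(full_model(**inputs).last_hidden_state.shape)
```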
New 🤗 Datasets + Windows Support
The new 1.1.0 release of 🤗 Datasets adds support for Windows as well as a number of cool datasets, thanks to help from our amazing contributors:
- HotpotQA
- A new, debiased subset of Winogrande
- OpenWebText - an open-source effort to reproduce OpenAI’s WebText dataset used by GPT-2, which is also needed to reproduce ELECTRA
Finally, we’ve added documentation for the Elasticsearch integration in 🤗 Datasets, which lets you easily add a fast full-text search engine to browse your datasets. You can find more details on how to use it in the documentation.
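As a rough sketch (assuming an Elasticsearch server is reachable at localhost:9200), indexing a column and querying it looks like this:

```python
from datasets import load_dataset

# Load a dataset and index one of its text columns with Elasticsearch
squad = load_dataset("squad", split="validation")
squad.add_elasticsearch_index("context", host="localhost", port="9200")

# Retrieve the examples whose "context" best matches the query
scores, examples = squad.get_nearest_examples("context", "machine learning", k=5)
print(examples["title"])
```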
Community
🚀 Model Hub Highlights 🚀
🗜️ Efficient Models: Model Compression
You may already be familiar with DistilBERT. Model compression is one way of accelerating models; common techniques include quantization, pruning, distillation, and module replacing. Check out this blog post by Madison May and this survey by Qiu et al. to learn more.
In the 🤗 model hub, we have the Distil* family, TinyBERT, MobileBERT, BERT-of-Theseus, and MiniLM (multilingual) all waiting for your fine-tuning!
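As a small illustration of one of these techniques, here is a sketch of post-training dynamic quantization with PyTorch on a distilled checkpoint; it is just one of the compression approaches mentioned above, not the recipe behind those models.

```python
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english"
)
# Post-training dynamic quantization: store the Linear-layer weights in int8
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
```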
🌍 Language Spotlight: Esperanto
Esperanto is the most widely spoken constructed international auxiliary language. Its goal is to be an easy and flexible language that would serve as a universal second language to foster world peace and international understanding.
In the 🤗 model hub, we have dozens of machine translation models and two models for natural language understanding.
A debate: Is Esperanto successful?
🔥Top Contributors 🔥
Every newsletter, we'll be highlighting some top contributors to the Hugging Face library!
This week's top contributors:
- Aleksandra Piktus - Led the integration of the RAG model.
- Pengcheng He - Added DeBERTa.
- Forrest Iandola - Added SqueezeBERT.
- Minghao Li - Added the LayoutLM model.
- Suraj Patil - Added the “Seq2Seq” Trainer.
- Stas Bekman - Multiple fixes everywhere in the library, especially in tests.
- Yih-Dar - Multiple enhancements and bug fixes to the Trainer.
- Muhammad Harris - Added a long-awaited notebook for TF T5 training.
Want to be featured? A great way to contribute is to check out these good first issues!
Tutorials
The Ultimate Guide to Encoder-Decoder Models
Transformer-based encoder-decoder models have become indispensable for seq2seq tasks such as summarization and translation. Recently, there has been a lot of research on different pre-training objectives for transformer-based encoder-decoder models, e.g. T5, Bart, Pegasus, ProphetNet, Marge, etc. However, the model architecture has stayed largely the same.
This week, we published an illustrative blog post explaining how transformer-based encoder-decoder models tackle sequence-to-sequence problems. We focus on the mathematical model defined by the architecture, show how the model can be used for inference, and establish the link between theory and practical usage in 🤗 Transformers.
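As a tiny companion to the post, here is a sketch of encoder-decoder inference with a Bart summarization checkpoint; the input text and generation settings are arbitrary.

```python
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

article = (
    "The encoder maps the input sequence to contextualized representations; "
    "the decoder then generates the output sequence auto-regressively, "
    "attending to those representations at every step."
)
inputs = tokenizer(article, return_tensors="pt", truncation=True)
# Beam search over the decoder to produce a short summary
summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=40)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```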
🚧 Simple considerations for simple people building fancy neural networks
Neural networks are finicky creatures; you should treat them with care and attention. In our latest blog post, Machine Learning Scientist Victor Sanh shares some tips for building deep learning models.
"In this post, I will try to highlight a few steps of my mental process when it comes to building and debugging neural networks. I will also point out things you can look at when you are not sure what your next step should be by listing the typical questions I ask myself."
Check out the full post here.
Events & Talks
Data Science fwdays'20 Online Conference
In case you missed it, CSO Thomas Wolf gave a talk on August 15th at the Data Science fwdays'20 online conference covering an introduction to transfer learning. You can see the full talk on the Data Science fwdays YouTube channel, along with many other talks given that day!
🤗 at Open Core Summit Digital 2020
Hugging Face CEO Clement Delangue will be speaking at Open Core Summit Digital 2020, November 4th-6th.
Open Core Summit illuminates the intersection of Open-Source and Entrepreneurship to accelerate the growth of our World’s most valuable and inclusive technology ecosystem.
Registration is free and open to all! Get tickets here.