CANINE-s (CANINE pre-trained with subword loss)
CANINE model pretrained on 104 languages using a masked language modeling (MLM) objective. It was introduced in the paper CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation and first released in this repository.
What’s special about CANINE is that it doesn’t require an explicit tokenizer (such as WordPiece or SentencePiece), unlike models such as BERT and RoBERTa. Instead, it operates directly at the character level: each character is turned into its Unicode code point.
This means that input processing is trivial and can typically be accomplished as:

```python
input_ids = [ord(char) for char in text]
```

The `ord()` function is built into Python and turns each character into its Unicode code point.
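As a self-contained sketch (no transformers dependency; the example string is arbitrary), this conversion works for any Unicode text and is trivially reversible:

```python
# Convert text to CANINE-style input IDs: one Unicode code point per character.
text = "héllo 世界"
input_ids = [ord(char) for char in text]

# One integer per character, regardless of script.
assert len(input_ids) == len(text)

# Round-trip: code points map back to the original string via chr().
assert "".join(chr(i) for i in input_ids) == text
```

Note that, in practice, `CanineTokenizer` also adds special tokens and padding on top of this raw conversion.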
Disclaimer: The team releasing CANINE did not write a model card for this model so this model card has been written by the Hugging Face team.
Model description
CANINE is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion, similar to BERT. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:
- Masked language modeling (MLM): one randomly masks part of the inputs, which the model needs to predict. This model (CANINE-s) is trained with a subword loss, meaning that the model needs to predict the identities of subword tokens, while taking characters as input. By reading characters yet predicting subword tokens, the hard token boundary constraint found in other models such as BERT is turned into a soft inductive bias in CANINE.
- Next sentence prediction (NSP): the model concatenates two sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not.
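To make the subword-loss idea concrete, here is a toy illustration (not the actual pretraining code; the segmentation, mask code point, and vocabulary are invented for this example): the model reads a sequence of characters, a span of characters corresponding to one subword is masked, and the prediction target is that subword's vocabulary id rather than the individual characters.

```python
# Toy illustration of the CANINE-s objective: characters in, subword targets out.
text = "unbelievable"
subwords = ["un", "believ", "able"]  # assumed subword segmentation
assert "".join(subwords) == text

MASK = 0xE000  # a private-use code point standing in for a mask character
target_idx = 1  # mask the span covered by the subword "believ"
start = len("".join(subwords[:target_idx]))
end = start + len(subwords[target_idx])

input_ids = [ord(c) for c in text]
masked_ids = input_ids[:start] + [MASK] * (end - start) + input_ids[end:]

# The training label is the subword's vocabulary id, not per-character ids.
vocab = {"un": 11, "believ": 42, "able": 7}  # hypothetical vocabulary
label = vocab[subwords[target_idx]]
```

Because the masked span need not align with any hard token boundary in the input, the subword structure acts only as a soft inductive bias, as described above.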
This way, the model learns an inner representation of multiple languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the CANINE model as inputs.
Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it’s mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2.
How to use
Here is how to use this model:
```python
from transformers import CanineTokenizer, CanineModel

model = CanineModel.from_pretrained('google/canine-s')
tokenizer = CanineTokenizer.from_pretrained('google/canine-s')

inputs = ["Life is like a box of chocolates.", "You never know what you gonna get."]
encoding = tokenizer(inputs, padding="longest", truncation=True, return_tensors="pt")

outputs = model(**encoding)  # forward pass
pooled_output = outputs.pooler_output
sequence_output = outputs.last_hidden_state
```
Training data
The CANINE model was pretrained on the multilingual Wikipedia data of mBERT, which includes 104 languages.
BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2103-06874,
  author        = {Jonathan H. Clark and
                   Dan Garrette and
                   Iulia Turc and
                   John Wieting},
  title         = {{CANINE:} Pre-training an Efficient Tokenization-Free Encoder for
                   Language Representation},
  journal       = {CoRR},
  volume        = {abs/2103.06874},
  year          = {2021},
  url           = {https://arxiv.org/abs/2103.06874},
  archivePrefix = {arXiv},
  eprint        = {2103.06874},
  timestamp     = {Tue, 16 Mar 2021 11:26:59 +0100},
  biburl        = {https://dblp.org/rec/journals/corr/abs-2103-06874.bib},
  bibsource     = {dblp computer science bibliography, https://dblp.org}
}
```