Releasing Hindi ELECTRA model
This is a first attempt at a Hindi language model trained with Google Research’s ELECTRA.
As of 2022, I recommend Google's MuRIL model, trained on English, Hindi, and other major Indian languages in both their native scripts and Latin transliteration: https://huggingface.co/google/muril-base-cased and https://huggingface.co/google/muril-large-cased
For causal language models, I would suggest https://huggingface.co/sberbank-ai/mGPT, though this is a large model.
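For quick reference, here is a minimal sketch of loading this model with the Transformers library (assuming the Hub id monsoon-nlp/hindi-bert and a reasonably recent transformers release):

    from transformers import AutoTokenizer, AutoModel

    # assumption: the model is published on the Hugging Face Hub as monsoon-nlp/hindi-bert
    tokenizer = AutoTokenizer.from_pretrained("monsoon-nlp/hindi-bert")
    model = AutoModel.from_pretrained("monsoon-nlp/hindi-bert")

    # encode a Hindi sentence and get the ELECTRA discriminator's hidden states
    inputs = tokenizer("नमस्ते दुनिया", return_tensors="pt")
    outputs = model(**inputs)
    print(outputs.last_hidden_state.shape)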
Tokenization and training CoLab
I originally used a modified version of ELECTRA for fine-tuning, but now use SimpleTransformers.
I was greatly influenced by this blog post: https://huggingface.co/blog/how-to-train
Example Notebooks
- This small model has comparable results to Multilingual BERT on BBC Hindi news classification and on Hindi movie reviews / sentiment analysis (using SimpleTransformers; see the sketch after this list)
- You can get higher accuracy using ktrain by adjusting the learning rate (also: changing model_type in config.json – this is an open issue with ktrain): https://colab.research.google.com/drive/1mSeeSfVSOT7e-dVhPlmSsQRvpn6xC05w?usp=sharing
- Question-answering on the MLQA dataset: https://colab.research.google.com/drive/1i6fidh2tItf_-IDkljMuaIGmEU6HT2Ar#scrollTo=IcFoAHgKCUiQ
- A larger model (Hindi-TPU-Electra) using the ELECTRA base size outperforms both models on Hindi movie reviews / sentiment analysis, but does not perform as well on the BBC news classification task.
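A rough sketch of the SimpleTransformers classification setup referenced above; the toy DataFrame, label count, and training args are illustrative assumptions, not the exact notebook settings:

    from simpletransformers.classification import ClassificationModel
    import pandas as pd

    # assumption: a two-column DataFrame of (text, labels) pairs; 1 = positive, 0 = negative
    train_df = pd.DataFrame([["यह फिल्म बहुत अच्छी थी", 1],
                             ["कहानी बेहद कमजोर है", 0]], columns=["text", "labels"])

    # model_type "electra" with this repo's weights; num_labels matches the task
    model = ClassificationModel("electra", "monsoon-nlp/hindi-bert",
                                num_labels=2,
                                args={"num_train_epochs": 3, "overwrite_output_dir": True},
                                use_cuda=False)
    model.train_model(train_df)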
Corpus
Download: https://drive.google.com/drive/folders/1SXzisKq33wuqrwbfp428xeu_hDxXVUUu?usp=sharing
The corpus is two files:
- Hindi CommonCrawl deduped by OSCAR https://traces1.inria.fr/oscar/
- latest Hindi Wikipedia ( https://dumps.wikimedia.org/hiwiki/ ), converted to txt with WikiExtractor (see the sketch after these notes)
Bonus notes:
- Adding English wiki text or parallel corpus could help with cross-lingual tasks and training
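A sketch of the Wikipedia step, assuming the pip-installable wikiextractor package; the dump filename and output paths are placeholders:

    # download the latest Hindi Wikipedia dump (filename is illustrative)
    wget https://dumps.wikimedia.org/hiwiki/latest/hiwiki-latest-pages-articles.xml.bz2
    pip install wikiextractor
    # extract article text; output lands in extracted/AA/wiki_* with <doc> wrappers
    python -m wikiextractor.WikiExtractor -o extracted --processes 4 hiwiki-latest-pages-articles.xml.bz2
    # concatenate the output and strip the <doc> tags to get one plain .txt corpus file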
Vocabulary
https://drive.google.com/file/d/1-6tXrii3tVxjkbrpSJE9MOG_HhbvP66V/view?usp=sharing
Bonus notes:
- Created with HuggingFace Tokenizers; you can increase vocabulary size and re-train; remember to change ELECTRA vocab_size
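A minimal re-training sketch using the HuggingFace Tokenizers library; the corpus filenames and the 30522 vocab size are assumptions, and whatever size you choose must match vocab_size in the ELECTRA config:

    from tokenizers import BertWordPieceTokenizer

    # train a WordPiece vocabulary on the corpus text files (paths are placeholders)
    tokenizer = BertWordPieceTokenizer(lowercase=False)
    tokenizer.train(files=["hi_wiki.txt", "hi_oscar.txt"], vocab_size=30522, min_frequency=2)

    # writes vocab.txt into the given directory; point ELECTRA's vocab file at it
    tokenizer.save_model(".")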
Training
Structure your files as below, with the data directory (data-dir) named “trainer” in this example:
    trainer
    - vocab.txt
    - pretrain_tfrecords
    -- (all .tfrecord... files)
    - models
    -- modelname
    --- checkpoint
    --- graph.pbtxt
    --- model.*
The CoLab notebook gives examples of GPU vs. TPU setup.
Default pretraining hyperparameters live in configure_pretraining.py.
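A command sketch based on the google-research/electra repo; the corpus directory, model name, and hparams values are assumptions matching the layout above:

    git clone https://github.com/google-research/electra
    cd electra
    # turn raw text files into the pretrain_tfrecords expected above
    python3 build_pretraining_dataset.py --corpus-dir ../corpus_txt \
        --vocab-file ../trainer/vocab.txt --output-dir ../trainer/pretrain_tfrecords \
        --max-seq-length 128 --num-processes 4
    # defaults come from configure_pretraining.py; override selected ones via --hparams
    python3 run_pretraining.py --data-dir ../trainer --model-name modelname \
        --hparams '{"model_size": "small", "vocab_size": 30522}'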
Conversion
Use this process to convert an in-progress or completed ELECTRA checkpoint to a Transformers-ready model:
    git clone https://github.com/huggingface/transformers
    python ./transformers/src/transformers/convert_electra_original_tf_checkpoint_to_pytorch.py \
        --tf_checkpoint_path=./models/checkpointdir \
        --config_file=config.json \
        --pytorch_dump_path=pytorch_model.bin \
        --discriminator_or_generator=discriminator
    python
    from transformers import TFElectraForPreTraining
    model = TFElectraForPreTraining.from_pretrained("./dir_with_pytorch", from_pt=True)
    model.save_pretrained("tf")
Once you have assembled one directory containing config.json, pytorch_model.bin, tf_model.h5, special_tokens_map.json, tokenizer_config.json, and vocab.txt at the same level, run:
    transformers-cli upload directory