monsoon-nlp/hindi-bert


Releasing Hindi ELECTRA model

This is a first attempt at a Hindi language model trained with Google Research’s ELECTRA.
As of 2022, I recommend Google's MuRIL model, trained on English, Hindi, and other major Indian languages in both their native scripts and Latin transliteration: https://huggingface.co/google/muril-base-cased and https://huggingface.co/google/muril-large-cased
For causal language modeling, I would suggest https://huggingface.co/sberbank-ai/mGPT, though this is a large model.
Tokenization and training CoLab
I originally used a modified ELECTRA for finetuning, but now use SimpleTransformers.
I was greatly influenced by this blog post: https://huggingface.co/blog/how-to-train
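
For quick experiments, the released checkpoint loads directly with the transformers Auto classes. A minimal sketch, assuming transformers and PyTorch are installed; the example sentence is arbitrary:

```python
# Minimal sketch: use the released checkpoint as a generic Hindi encoder.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("monsoon-nlp/hindi-bert")
model = AutoModel.from_pretrained("monsoon-nlp/hindi-bert")

inputs = tokenizer("नमस्ते दुनिया", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```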


Example Notebooks

This small model has comparable results to Multilingual BERT on BBC Hindi news classification and on Hindi movie reviews / sentiment analysis (using SimpleTransformers).
You can get higher accuracy using ktrain by adjusting the learning rate (and also by changing model_type in config.json; this is an open issue with ktrain): https://colab.research.google.com/drive/1mSeeSfVSOT7e-dVhPlmSsQRvpn6xC05w?usp=sharing
Question-answering on MLQA dataset: https://colab.research.google.com/drive/1i6fidh2tItf_-IDkljMuaIGmEU6HT2Ar#scrollTo=IcFoAHgKCUiQ
A larger model (Hindi-TPU-Electra) using the ELECTRA base size outperforms both models on Hindi movie reviews / sentiment analysis, but does not perform as well on the BBC news classification task.
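
A rough sketch of the SimpleTransformers setup behind these notebooks; the DataFrame contents and training args here are illustrative placeholders, not the notebooks' actual data or hyperparameters:

```python
# Sketch of fine-tuning with SimpleTransformers; data and args are illustrative.
import pandas as pd
from simpletransformers.classification import ClassificationModel

# SimpleTransformers expects a DataFrame with "text" and "labels" columns.
train_df = pd.DataFrame(
    [["पहला उदाहरण वाक्य", 0], ["दूसरा उदाहरण वाक्य", 1]],
    columns=["text", "labels"],
)

model = ClassificationModel(
    "electra",
    "monsoon-nlp/hindi-bert",
    num_labels=2,
    args={"num_train_epochs": 1, "overwrite_output_dir": True},
    use_cuda=False,  # set True on a GPU machine
)
model.train_model(train_df)
```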


Corpus

Download: https://drive.google.com/drive/folders/1SXzisKq33wuqrwbfp428xeu_hDxXVUUu?usp=sharing
The corpus is two files:

  • Hindi CommonCrawl deduped by OSCAR https://traces1.inria.fr/oscar/
  • the latest Hindi Wikipedia dump ( https://dumps.wikimedia.org/hiwiki/ ), converted to plain text with WikiExtractor

Bonus notes:

  • Adding English wiki text or a parallel corpus could help with cross-lingual tasks and training; a preparation sketch follows below
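
A minimal sketch of merging the two files into one training text, under the assumption that a cheap line-level dedup of the combined result is enough; the file names here are hypothetical:

```python
# Hypothetical file names: merge the two corpus files, dropping duplicate lines.
# Note: the seen-set lives in memory; a corpus this large may need a disk-based dedup.
seen = set()
with open("hi_combined.txt", "w", encoding="utf-8") as out:
    for path in ["hi_oscar_dedup.txt", "hi_wikipedia.txt"]:
        with open(path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if line and line not in seen:
                    seen.add(line)
                    out.write(line + "\n")
```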


Vocabulary

Download: https://drive.google.com/file/d/1-6tXrii3tVxjkbrpSJE9MOG_HhbvP66V/view?usp=sharing
Bonus notes:

  • Created with HuggingFace Tokenizers; you can increase the vocabulary size and re-train, but remember to change ELECTRA's vocab_size to match (see the sketch below)
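
A minimal sketch of re-creating the vocabulary with HuggingFace Tokenizers; the input file and vocab_size are illustrative (keeping strip_accents off matters for Hindi, since accent stripping would remove Devanagari combining vowel signs):

```python
# Train a WordPiece vocab with HuggingFace Tokenizers; path and size are illustrative.
from tokenizers import BertWordPieceTokenizer

tokenizer = BertWordPieceTokenizer(lowercase=False, strip_accents=False)
tokenizer.train(files=["hi_combined.txt"], vocab_size=30000)
tokenizer.save_model(".")  # writes vocab.txt; update ELECTRA's vocab_size to match
```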


Training

Structure your files as follows, with the data dir named "trainer" here:
```
trainer
- vocab.txt
- pretrain_tfrecords
-- (all .tfrecord... files)
- models
-- modelname
--- checkpoint
--- graph.pbtxt
--- model.*
```

The CoLab notebook gives examples of GPU vs. TPU setup and of running configure_pretraining.py.
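
The hyperparameters live in configure_pretraining.py in the google-research/electra repo, and run_pretraining.py accepts overrides through --hparams (a JSON dict or a path to a JSON file). A sketch of writing such an override file; the values are illustrative:

```python
# Illustrative hyperparameter overrides for google-research/electra pretraining.
import json

hparams = {
    "model_size": "small",  # "base" for a Hindi-TPU-Electra-style run
    "vocab_size": 30000,    # must match the trained vocab.txt
}
with open("hparams.json", "w") as f:
    json.dump(hparams, f)

# then: python3 run_pretraining.py --data-dir trainer \
#           --model-name modelname --hparams hparams.json
```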


Conversion

Use this process to convert an in-progress or completed ELECTRA checkpoint to a Transformers-ready model:
```
git clone https://github.com/huggingface/transformers
python ./transformers/src/transformers/convert_electra_original_tf_checkpoint_to_pytorch.py \
    --tf_checkpoint_path=./models/checkpointdir \
    --config_file=config.json \
    --pytorch_dump_path=pytorch_model.bin \
    --discriminator_or_generator=discriminator
# then open a Python shell to also export a TF 2.0 checkpoint:
python
```

```
from transformers import TFElectraForPreTraining
model = TFElectraForPreTraining.from_pretrained("./dir_with_pytorch", from_pt=True)
model.save_pretrained("tf")
```
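
A quick sanity check of the converted discriminator before uploading; a sketch assuming the tokenizer files already sit in the same directory:

```python
# Sketch: load the converted checkpoint and run a single forward pass.
import torch
from transformers import AutoTokenizer, ElectraForPreTraining

tokenizer = AutoTokenizer.from_pretrained("./dir_with_pytorch")
model = ElectraForPreTraining.from_pretrained("./dir_with_pytorch")

inputs = tokenizer("नमस्ते दुनिया", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # per-token replaced-vs-original scores
print(logits.shape)
```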

Once you have formed one directory with config.json, pytorch_model.bin, tf_model.h5, special_tokens_map.json, tokenizer_config.json, and vocab.txt at the same level, run:
```
transformers-cli upload directory
```

