## Model description
This model is a fine-tuned version of DistilBERT for classifying toxic comments.
## How to use
You can use the model with the following code:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, TextClassificationPipeline

model_path = "martin-ha/toxic-comment-model"

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)

# Wrap them in a text-classification pipeline and classify a sample comment.
pipeline = TextClassificationPipeline(model=model, tokenizer=tokenizer)
print(pipeline('This is a test text.'))
```
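The pipeline returns a list of dictionaries, one per input, each with a predicted `label` and a confidence `score`. It also accepts a list of strings, so several comments can be scored at once. A minimal sketch, reusing the `pipeline` object created above; the example comments are hypothetical, not from this model card:

```python
# Batch scoring with the pipeline defined above.
# The comments below are made-up examples for illustration.
comments = ["Have a great day!", "You are a terrible person."]
for text, result in zip(comments, pipeline(comments)):
    # Each result is a dict like {'label': ..., 'score': ...}.
    print(f"{text!r} -> {result['label']} ({result['score']:.3f})")
```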
## Limitations and Bias
This model is intended for classifying toxic online comments. One limitation is that it performs poorly on comments that mention certain identity subgroups, such as Muslim. The table below reports bias evaluation scores for each identity subgroup. The metrics (subgroup AUC, BPSN AUC, and BNSP AUC) come from the [Jigsaw Unintended Bias in Toxicity Classification](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification) competition and measure how well the model performs for a specific group; higher values are better. A sketch of how these metrics can be computed follows the table.
| subgroup | subgroup_size | subgroup_auc | bpsn_auc | bnsp_auc |
|---|---|---|---|---|
| muslim | 108 | 0.689 | 0.811 | 0.880 |
| jewish | 40 | 0.749 | 0.860 | 0.825 |
| homosexual_gay_or_lesbian | 56 | 0.795 | 0.706 | 0.972 |
| black | 84 | 0.866 | 0.758 | 0.975 |
| white | 112 | 0.876 | 0.784 | 0.970 |
| female | 306 | 0.898 | 0.887 | 0.948 |
| christian | 231 | 0.904 | 0.917 | 0.930 |
| male | 225 | 0.922 | 0.862 | 0.967 |
| psychiatric_or_mental_illness | 26 | 0.924 | 0.907 | 0.950 |
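For reference, these bias metrics can be reproduced with scikit-learn once you have toxicity labels, model scores, and subgroup-membership flags for an evaluation set. The sketch below follows the standard Jigsaw metric definitions; the function name `bias_aucs` and the input arrays are placeholders for your own data, not part of this model card:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bias_aucs(y_true, y_score, in_subgroup):
    """Jigsaw Unintended Bias AUC metrics for one identity subgroup.

    y_true      -- 1 if a comment is toxic, 0 otherwise
    y_score     -- model-predicted toxicity probability per comment
    in_subgroup -- 1 if a comment mentions the identity subgroup, 0 otherwise
    """
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    sub = np.asarray(in_subgroup, dtype=bool)
    bg = ~sub

    # Subgroup AUC: can the model separate toxic from non-toxic
    # within comments that mention this subgroup?
    subgroup_auc = roc_auc_score(y_true[sub], y_score[sub])

    # BPSN AUC (Background Positive, Subgroup Negative): low values mean
    # non-toxic subgroup comments are scored above toxic background
    # comments, i.e. the subgroup is over-flagged.
    bpsn_mask = (bg & (y_true == 1)) | (sub & (y_true == 0))
    bpsn_auc = roc_auc_score(y_true[bpsn_mask], y_score[bpsn_mask])

    # BNSP AUC (Background Negative, Subgroup Positive): low values mean
    # toxic subgroup comments are scored below non-toxic background
    # comments, i.e. toxicity in the subgroup is missed.
    bnsp_mask = (bg & (y_true == 0)) | (sub & (y_true == 1))
    bnsp_auc = roc_auc_score(y_true[bnsp_mask], y_score[bnsp_mask])

    return subgroup_auc, bpsn_auc, bnsp_auc
```

Each metric is an ordinary ROC AUC computed on a different slice of the data, which is why a low BPSN AUC for a subgroup such as `muslim` indicates the model tends to over-flag non-toxic comments mentioning that identity.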