
siebert/sentiment-roberta-large-english


SiEBERT – English-Language Sentiment Classification


Overview

This model (“SiEBERT”, whose prefix stands for “Sentiment in English”) is a fine-tuned checkpoint of RoBERTa-large (Liu et al. 2019). It enables reliable binary sentiment analysis for various types of English-language text. For each instance, it predicts either positive (1) or negative (0) sentiment. The model was fine-tuned and evaluated on 15 data sets from diverse text sources to enhance generalization across different types of texts (reviews, tweets, etc.). Consequently, it outperforms models trained on only one type of text (e.g., movie reviews from the popular SST-2 benchmark) when applied to new data, as shown below.


Predictions on a data set

If you want to predict sentiment for your own data, we provide an example script via Google Colab. You can upload your data to Google Drive and run the script for free on a Colab GPU; setup takes only a few minutes. We suggest that you manually label a subset of your data to evaluate performance for your use case. For performance benchmark values across various sentiment analysis contexts, please refer to our paper (Hartmann et al. 2022).
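
For local use outside of Colab, the sketch below shows one way to batch-score a file with the pipeline (introduced in the next section). The file name `my_data.csv` and its `text` column are hypothetical placeholders, not part of the provided Colab script:

```python
import csv

from transformers import pipeline

# Load the SiEBERT checkpoint once and reuse the pipeline for all rows.
sentiment_analysis = pipeline(
    "sentiment-analysis",
    model="siebert/sentiment-roberta-large-english",
)

# "my_data.csv" and its "text" column are placeholders; substitute the
# path and column name of your own data.
with open("my_data.csv", newline="", encoding="utf-8") as f:
    texts = [row["text"] for row in csv.DictReader(f)]

# truncation=True guards against texts longer than the model's 512-token limit.
for text, result in zip(texts, sentiment_analysis(texts, truncation=True)):
    print(result["label"], round(result["score"], 3), text[:60])
```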


Use in a Hugging Face pipeline

The easiest way to use the model for single predictions is Hugging Face’s sentiment analysis pipeline, which needs only a couple of lines of code, as the following example shows:
```python
from transformers import pipeline

sentiment_analysis = pipeline("sentiment-analysis", model="siebert/sentiment-roberta-large-english")
print(sentiment_analysis("I love this!"))
```
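
The pipeline returns a list with one dictionary per input, containing a `label` (`POSITIVE` or `NEGATIVE`) and a confidence `score`; the call above prints something like `[{'label': 'POSITIVE', 'score': 0.9988}]` (the exact score is illustrative and may vary across library versions).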


Use for further fine-tuning

The model can also be used as a starting point for further fine-tuning of RoBERTa on your specific data. Please refer to Hugging Face’s documentation for further details and example code.
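
For orientation, a minimal sketch of such fine-tuning with the transformers Trainer API is shown below; the two-example data set and the output directory name are illustrative placeholders, not the authors' training setup:

```python
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "siebert/sentiment-roberta-large-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Placeholder data: replace with your own labeled texts (0 = negative, 1 = positive).
texts = ["I love this!", "This is awful."]
labels = [1, 0]

class SentimentDataset(torch.utils.data.Dataset):
    """Wraps tokenized texts and labels in the format Trainer expects."""
    def __init__(self, texts, labels):
        self.encodings = tokenizer(texts, truncation=True, padding=True)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="siebert-finetuned", num_train_epochs=1),
    train_dataset=SentimentDataset(texts, labels),
)
trainer.train()
```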


Performance

To evaluate the performance of our general-purpose sentiment analysis model, we held out an evaluation set from each of the 15 data sets, which was not used for training. On average, our model outperforms a DistilBERT-based model that was fine-tuned solely on the popular SST-2 data set by more than 15 percentage points (93.2 vs. 78.1 percent; see table below). As a robustness check, we also evaluate the model in a leave-one-out manner (training on 14 data sets and evaluating on the one left out), which decreases model performance by only about 3 percentage points on average and underscores its generalizability. Model performance is given as evaluation set accuracy in percent.

| Dataset | DistilBERT SST-2 | This model |
| --- | --- | --- |
| McAuley and Leskovec (2013) (Reviews) | 84.7 | 98.0 |
| McAuley and Leskovec (2013) (Review Titles) | 65.5 | 87.0 |
| Yelp Academic Dataset | 84.8 | 96.5 |
| Maas et al. (2011) | 80.6 | 96.0 |
| Kaggle | 87.2 | 96.0 |
| Pang and Lee (2005) | 89.7 | 91.0 |
| Nakov et al. (2013) | 70.1 | 88.5 |
| Shamma (2009) | 76.0 | 87.0 |
| Blitzer et al. (2007) (Books) | 83.0 | 92.5 |
| Blitzer et al. (2007) (DVDs) | 84.5 | 92.5 |
| Blitzer et al. (2007) (Electronics) | 74.5 | 95.0 |
| Blitzer et al. (2007) (Kitchen devices) | 80.0 | 98.5 |
| Pang et al. (2002) | 73.5 | 95.5 |
| Speriosu et al. (2011) | 71.5 | 85.5 |
| Hartmann et al. (2019) | 65.5 | 98.0 |
| **Average** | **78.1** | **93.2** |
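
As a quick sanity check, the reported column averages can be recomputed directly from the table values:

```python
# Per-data-set accuracies copied from the table above (top to bottom).
distilbert_sst2 = [84.7, 65.5, 84.8, 80.6, 87.2, 89.7, 70.1, 76.0,
                   83.0, 84.5, 74.5, 80.0, 73.5, 71.5, 65.5]
this_model = [98.0, 87.0, 96.5, 96.0, 96.0, 91.0, 88.5, 87.0,
              92.5, 92.5, 95.0, 98.5, 95.5, 85.5, 98.0]

print(round(sum(distilbert_sst2) / len(distilbert_sst2), 1))  # 78.1
print(round(sum(this_model) / len(this_model), 1))            # 93.2
```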

