probability ratio
Recently Published Documents


TOTAL DOCUMENTS

349
(FIVE YEARS 33)

H-INDEX

24
(FIVE YEARS 1)

Resonance ◽  
2021 ◽  
Vol 26 (11) ◽  
pp. 1559-1565
Author(s):  
Harshada Vidwans ◽  
Rohini Kharate ◽  
Milind Watve

2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Dadasaheb G. Godase ◽  
Shashibhushan B. Mahadik

Abstract: A nonparametric sequential probability ratio test (SPRT) control chart for monitoring process dispersion, based on the sequential sign statistic, is proposed. Its statistical performance is evaluated by comparison with the existing sign-statistic-based dispersion charts in the literature. The proposed chart is found to outperform all of these charts uniformly in detecting shifts of any size over a wide range. Implementation of the chart is illustrated through an example.
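The abstract does not give the chart's formulas, but the general mechanism of an SPRT-type chart built on Bernoulli sign indicators can be sketched as follows. This is a minimal illustration, not the authors' chart: the indicator definition (whether |X − median| exceeds an in-control reference value), the probabilities p0 and p1, and the limits g and h are assumptions chosen only for the example.

```python
import numpy as np

def sprt_sign_chart(x, median0, ref0, p0=0.5, p1=0.7, g=-2.0, h=4.0):
    """Illustrative SPRT-type chart on sign indicators for dispersion.

    x        : 1-D array of individual observations
    median0  : in-control process median (assumed known)
    ref0     : in-control reference value c with P(|X - median0| > c) = p0
    p0, p1   : in-control / out-of-control probabilities of |X - median0| > c
    g, h     : lower (restart) and upper (signal) SPRT limits
    Returns the index of the first signal, or None if no signal occurs.
    """
    # Per-observation log-likelihood-ratio increments of a Bernoulli SPRT
    llr_one = np.log(p1 / p0)               # increment when the indicator is 1
    llr_zero = np.log((1 - p1) / (1 - p0))  # increment when the indicator is 0

    s = 0.0
    for t, xi in enumerate(x, start=1):
        indicator = abs(xi - median0) > ref0
        s += llr_one if indicator else llr_zero
        if s >= h:       # evidence of increased dispersion: signal
            return t
        if s <= g:       # evidence of in-control behaviour: restart the SPRT
            s = 0.0
    return None

# Example: in-control N(0, 1) followed by a dispersion increase to sd = 2
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 100), rng.normal(0, 2, 100)])
print(sprt_sign_chart(data, median0=0.0, ref0=0.6745))  # 0.6745 is the N(0,1) upper quartile
```

In-control observations drive the cumulative statistic toward the restart limit g, while an increase in dispersion inflates the proportion of large deviations from the median and pushes the statistic toward the signal limit h.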


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Ruixue Duan ◽  
Zhuofan Huang ◽  
Yangsen Zhang ◽  
Xiulei Liu ◽  
Yue Dang

Mobile social networks contain a large amount of information in the form of comments, and effective analysis of the sentiment in these comments can help improve recommendations in the mobile network. With the development of well-performing pretrained language models, deep-learning-based sentiment classification has seen new breakthroughs in the past decade. However, deep learning models suffer from poor interpretability, which makes it difficult to integrate sentiment knowledge into the model. This paper proposes a sentiment classification model based on a cascade of the BERT model and an adaptive sentiment dictionary. First, the pretrained BERT model is fine-tuned on the training corpus, and the probability of each sentiment class is obtained through the softmax layer. Next, to compare the probabilities of the two classes more effectively, a nonlinearity is introduced in the form of the positive-to-negative probability ratio; when this ratio falls below a threshold, a rule-based method built on the sentiment dictionary is applied instead. Cascading the pretrained model with the semantic rules of the sentiment dictionary exploits the advantages of both. The proposed model is trained on ChnSentiCorp data sets of different sizes. Experimental results show that the Dict-BERT model outperforms the BERT-only model, especially when the training set is relatively small, with an accuracy improvement of 0.8%.
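The cascade described in the abstract can be summarised in a short sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: the checkpoint name, the label ordering (index 1 = positive), the toy dictionary, the fallback rule, and the ratio threshold of 2.0 are all placeholders; the paper fine-tunes BERT on the training corpus and applies semantic rules over a full adaptive sentiment dictionary.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder checkpoint; the paper's fine-tuned weights are not reproduced here.
MODEL_NAME = "bert-base-chinese"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
# (In the paper the model is first fine-tuned on the training corpus; omitted here.)

POS_WORDS = {"好", "喜欢", "满意"}   # toy stand-in for the adaptive sentiment dictionary
NEG_WORDS = {"差", "讨厌", "失望"}

def dictionary_rule(text):
    """Fallback rule: classify by counting dictionary hits (illustrative only)."""
    pos = sum(w in text for w in POS_WORDS)
    neg = sum(w in text for w in NEG_WORDS)
    return 1 if pos >= neg else 0            # 1 = positive, 0 = negative (assumed ordering)

def classify(text, ratio_threshold=2.0):
    """Cascade: keep BERT's decision when its class-probability ratio is decisive,
    otherwise fall back to the sentiment-dictionary rule."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
    p_neg, p_pos = probs[0].item(), probs[1].item()
    ratio = max(p_pos / p_neg, p_neg / p_pos)  # positive-negative probability ratio
    if ratio >= ratio_threshold:               # BERT is confident: use its prediction
        return int(p_pos > p_neg)
    return dictionary_rule(text)               # otherwise apply the dictionary rules

print(classify("这家酒店服务很好，我很满意"))  # "The hotel service is great, I am very satisfied"
```

The design choice mirrors the abstract: the neural model handles the bulk of the comments, and the interpretable dictionary rules only take over on the borderline cases where the probability ratio indicates low confidence.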


2021 ◽  
Vol 101 ◽  
pp. 102505
Author(s):  
Sandipan Pramanik ◽  
Valen E. Johnson ◽  
Anirban Bhattacharya
