Mining the Local Dependency Itemset in a Products Network

2020 ◽  
Vol 11 (1) ◽  
pp. 1-31
Author(s):  
Li Ni ◽  
Wenjian Luo ◽  
Nannan Lu ◽  
Wenjie Zhu

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Mario Cantó-Cerdán ◽  
Pilar Cacho-Martínez ◽  
Francisco Lara-Lacárcel ◽  
Ángel García-Muñoz

To develop the Symptom Questionnaire for Visual Dysfunctions (SQVD) and to perform a psychometric analysis using the Rasch method, in order to obtain an instrument that detects the presence and frequency of visual symptoms related to any visual dysfunction. A pilot version of 33 items was administered to a sample of 125 patients from an optometric clinic. The Rasch model (using the Andrich Rating Scale Model) was applied to investigate category probability curves and Andrich thresholds, infit and outfit mean squares, local dependency using Yen's Q3 statistic, differential item functioning (DIF) for gender and presbyopia, person and item reliability, unidimensionality, targeting, and an ordinal-to-interval conversion table. Category probability curves suggested collapsing one response category. Rasch analysis reduced the questionnaire from 33 to 14 items. The final SQVD showed that the 14 items fit the model without local dependency and with no significant DIF for gender or presbyopia. Person reliability was satisfactory (0.81). The first contrast of the residuals had an eigenvalue of 1.908, indicating unidimensionality, and targeting was −1.59 logits. Overall, the SQVD is a well-structured tool whose data adequately fit the Rasch model, with adequate psychometric properties, making it a reliable and valid instrument for measuring visual symptoms.
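Yen's Q3, used above to screen for local dependency, is the pairwise correlation of person-by-item residuals after fitting the Rasch model. A minimal sketch in Python, assuming `observed` responses and model-`expected` scores are already available from a fitted model (the array names and the 0.2 screening cut-off are illustrative, not taken from the paper):

```python
# Hedged sketch: Yen's Q3 statistic for detecting local item dependency.
# Assumes `observed` (persons x items scores) and `expected`
# (model-predicted scores from a fitted Rasch model) are given.
import numpy as np

def yen_q3(observed: np.ndarray, expected: np.ndarray) -> np.ndarray:
    """Pairwise correlations of Rasch residuals (Yen's Q3)."""
    residuals = observed - expected             # person-by-item residual matrix
    q3 = np.corrcoef(residuals, rowvar=False)   # item-by-item correlation matrix
    np.fill_diagonal(q3, np.nan)                # self-correlations are uninformative
    return q3

# One common screening rule (an assumption here, one of several published
# cut-offs): flag item pairs whose Q3 exceeds the mean Q3 by more than ~0.2.
# q3 = yen_q3(observed, expected)
# flagged_pairs = np.argwhere(q3 > np.nanmean(q3) + 0.2)
```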



2021 ◽  
Author(s):  
Shuyi Ge ◽  
Oliver B. Linton


2021 ◽  
Author(s):  
Qiu-shi Zhu ◽  
Jie Zhang ◽  
Ming-hui Wu ◽  
Xin Fang ◽  
Li-Rong Dai


Author(s):  
Maosheng Guo ◽  
Yu Zhang ◽  
Ting Liu

Natural Language Inference (NLI) is an active research area, where numerous approaches based on recurrent neural networks (RNNs), convolutional neural networks (CNNs), and self-attention networks (SANs) have been proposed. Although they achieve impressive performance, previous recurrent approaches are hard to parallelize during training, convolutional models tend to require more parameters, and self-attention networks are not good at capturing the local dependency of texts. To address these problems, we introduce a Gaussian prior into the self-attention mechanism to better model the local structure of sentences. We then propose an efficient RNN/CNN-free architecture named Gaussian Transformer for NLI, which consists of encoding blocks that model both local and global dependency, high-order interaction blocks that collect the evidence of multi-step inference, and a lightweight comparison block that saves a large number of parameters. Experiments show that our model achieves new state-of-the-art performance on both the SNLI and MultiNLI benchmarks with significantly fewer parameters and considerably less training time. In addition, evaluation on the Hard NLI datasets demonstrates that our approach is less affected by undesirable annotation artifacts.
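The core idea, a Gaussian prior that concentrates attention on nearby tokens, can be sketched as an additive distance penalty on the attention logits. This is a minimal single-head illustration of the mechanism, not the paper's exact formulation; `sigma` and the bias-only form are assumptions:

```python
# Hedged sketch: self-attention with a Gaussian locality prior. Logits are
# penalized by squared token distance |i - j|^2, so nearby tokens receive
# more attention mass. Illustrative only; the paper's variant may differ.
import torch
import torch.nn.functional as F

def gaussian_self_attention(q, k, v, sigma: float = 2.0):
    """q, k, v: (batch, seq_len, d_model) tensors."""
    d = q.size(-1)
    logits = q @ k.transpose(-2, -1) / d ** 0.5        # scaled dot-product
    pos = torch.arange(q.size(1), dtype=q.dtype, device=q.device)
    dist2 = (pos[:, None] - pos[None, :]) ** 2         # pairwise distances
    logits = logits - dist2 / (2 * sigma ** 2)         # Gaussian locality bias
    return F.softmax(logits, dim=-1) @ v
```

A smaller `sigma` sharpens the locality prior toward near-neighbor attention, while a large `sigma` recovers (approximately) the standard global self-attention.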





2009 ◽  
Vol 6 (3) ◽  
pp. 272-277 ◽  
Author(s):  
C.N. de Santana ◽  
A.S. Fontes ◽  
M.A. dos S. Cidreira ◽  
R.B. Almeida ◽  
A.P. González ◽  
...  




2021 ◽  
pp. 115712
Author(s):  
Xiaofei Zhu ◽  
Ling Zhu ◽  
Jiafeng Guo ◽  
Shangsong Liang ◽  
Stefan Dietze


2019 ◽  
Vol 45 (3) ◽  
pp. 515-558
Author(s):  
Marina Fomicheva ◽  
Lucia Specia

Automatic Machine Translation (MT) evaluation is an active field of research, with a handful of new metrics devised every year. Evaluation metrics are generally benchmarked against manual assessments of translation quality, with performance measured in terms of overall correlation with human scores. Much work has been dedicated to improving evaluation metrics to achieve a higher correlation with human judgments. However, little insight has been provided regarding the weaknesses and strengths of existing approaches and their behavior in different settings. In this work we conduct a broad meta-evaluation study of a wide range of evaluation metrics, focusing on three major aspects. First, we analyze the performance of the metrics when faced with different levels of translation quality, proposing a local dependency measure as an alternative to the standard global correlation coefficient. We show that metric performance varies significantly across levels of MT quality: metrics perform poorly when faced with low-quality translations and are unable to capture nuanced quality distinctions. Interestingly, we show that evaluating low-quality translations is also more challenging for humans. Second, we show that metrics are more reliable when evaluating neural MT than traditional statistical MT systems. Finally, we show that differences in evaluation accuracy across metrics persist even when the gold-standard scores are based on different criteria.
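The abstract does not spell out the local dependency measure, but one plausible reading is a correlation computed within a sliding window over segments sorted by human quality, rather than a single global coefficient. A hedged sketch under that assumption (the window size and the choice of Pearson correlation are both illustrative):

```python
# Hedged sketch: a windowed "local" correlation between metric and human
# scores as a function of quality level. One plausible instantiation of a
# local dependency measure; the paper's exact definition may differ.
import numpy as np
from scipy.stats import pearsonr

def local_correlation(human, metric, window: int = 100):
    order = np.argsort(human)                        # sort segments by human score
    h, m = np.asarray(human)[order], np.asarray(metric)[order]
    corrs = []
    for start in range(0, len(h) - window + 1):
        r, _ = pearsonr(h[start:start + window], m[start:start + window])
        corrs.append(r)
    return np.array(corrs)   # correlation profile across quality levels
```

Plotting this profile against the window's mean human score would expose exactly the behavior the abstract describes: low correlations at the low-quality end despite a respectable global coefficient.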




