sentence level
Recently Published Documents


TOTAL DOCUMENTS: 967 (FIVE YEARS: 408)
H-INDEX: 37 (FIVE YEARS: 7)

2022 ◽ Vol 22 (1) ◽ pp. 1-30
Author(s):  
Ashima Yadav ◽  
Dinesh Kumar Vishwakarma

Towards the end of 2019, Wuhan experienced an outbreak of a novel coronavirus, which soon spread worldwide, resulting in a deadly pandemic that infected millions of people around the globe. Public health agencies followed many strategies to counter the fatal virus, yet it severely affected people's lives. In this paper, we study the sentiments of people from the five countries worst affected by the virus, namely the USA, Brazil, India, Russia, and South Africa. We propose a deep language-independent Multilevel Attention-based Conv-BiGRU network (MACBiG-Net), which includes an embedding layer, word-level encoded attention, and sentence-level encoded attention mechanisms to extract positive, negative, and neutral sentiments. The network captures subtle cues in a document by focusing on the local characteristics of the text along with past and future context information for sentiment classification. We further develop a COVID-19 Sentiment Dataset by crawling tweets from Twitter and applying topic modeling to extract the hidden thematic structure of the documents. The classification results demonstrate that the proposed model achieves an accuracy of 85%, which is higher than other well-known algorithms for sentiment classification. The findings show that the topics which evoked positive sentiments were related to frontline workers, entertainment, motivation, and spending quality time with family, whereas negative sentiments were related to socio-economic factors such as racial injustice, unemployment rates, fake news, and deaths. Finally, this study provides feedback to governments and health professionals for handling future outbreaks and highlights future research directions for scientists and researchers.
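To make the described architecture more concrete, below is a minimal PyTorch sketch of a multilevel-attention Conv-BiGRU classifier in the spirit of MACBiG-Net. The layer sizes, the convolution/BiGRU ordering, and the additive attention used for the word- and sentence-level pooling are illustrative assumptions, not the authors' exact design.

```python
# Minimal sketch, assuming a hierarchical document input (sentences of token ids).
import torch
import torch.nn as nn


class Attention(nn.Module):
    """Additive attention that pools a sequence of hidden states into one vector."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.context = nn.Linear(dim, 1, bias=False)

    def forward(self, h):                                  # h: (batch, seq, dim)
        scores = self.context(torch.tanh(self.proj(h)))    # (batch, seq, 1)
        weights = torch.softmax(scores, dim=1)
        return (weights * h).sum(dim=1)                    # (batch, dim)


class MACBiGSketch(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=64, num_classes=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.conv = nn.Conv1d(emb_dim, hidden, kernel_size=3, padding=1)
        self.word_gru = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
        self.word_attn = Attention(2 * hidden)             # word-level encoded attention
        self.sent_gru = nn.GRU(2 * hidden, hidden, batch_first=True, bidirectional=True)
        self.sent_attn = Attention(2 * hidden)             # sentence-level encoded attention
        self.classifier = nn.Linear(2 * hidden, num_classes)  # positive / negative / neutral

    def forward(self, docs):                               # docs: (batch, n_sents, n_words)
        b, s, w = docs.shape
        x = self.embedding(docs.view(b * s, w))
        x = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)  # local text features
        x, _ = self.word_gru(x)                            # past and future context per word
        sent_vecs = self.word_attn(x).view(b, s, -1)       # one vector per sentence
        x, _ = self.sent_gru(sent_vecs)
        doc_vec = self.sent_attn(x)                        # one vector per document
        return self.classifier(doc_vec)                    # sentiment logits
```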


Author(s):  
Xiaomian Kang ◽  
Yang Zhao ◽  
Jiajun Zhang ◽  
Chengqing Zong

Document-level neural machine translation (DocNMT) has yielded attractive improvements. In this article, we systematically analyze the discourse phenomena in Chinese-to-English translation and focus on the most prominent one, namely lexical translation consistency. To alleviate lexical inconsistency, we propose an effective approach that is aware of the words which need to be translated consistently and constrains the model to produce more consistent translations. Specifically, we first introduce a global context extractor to extract the document context and the consistency context, respectively. Then, the two types of global context are integrated into an encoder enhancer and a decoder enhancer to improve lexical translation consistency. We create a test set to evaluate lexical consistency automatically. Experiments demonstrate that our approach can significantly alleviate lexical translation inconsistency. In addition, our approach can also substantially improve translation quality compared to the sentence-level Transformer.
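As an illustration of how global context might be folded back into a sentence-level model, here is a hedged sketch of an "encoder enhancer" that gates a pooled document/consistency context vector into token-level encoder states. The gating formulation and dimensions are assumptions for illustration, not the paper's exact architecture.

```python
# Sketch of gating a global context vector into Transformer encoder states.
import torch
import torch.nn as nn


class EncoderEnhancer(nn.Module):
    def __init__(self, d_model=512):
        super().__init__()
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, enc_states, global_ctx):
        # enc_states: (batch, src_len, d_model)  sentence-level encoder outputs
        # global_ctx: (batch, d_model)            pooled document/consistency context
        ctx = global_ctx.unsqueeze(1).expand_as(enc_states)
        g = torch.sigmoid(self.gate(torch.cat([enc_states, ctx], dim=-1)))
        return g * enc_states + (1 - g) * ctx    # context-aware encoder states
```

A matching "decoder enhancer" could apply the same kind of gate to decoder states before the output projection.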


2022 ◽ Vol 12
Author(s):  
J. Tilak Ratnanather ◽  
Lydia C. Wang ◽  
Seung-Ho Bae ◽  
Erin R. O'Neill ◽  
Elad Sagi ◽  
...  

Objective: Speech tests assess the ability of people with hearing loss to comprehend speech with a hearing aid or cochlear implant. The tests are usually at the word or sentence level, yet few analyze errors at the phoneme level, so there is a need for an automated program to visualize in real time the accuracy of phonemes in these tests.
Method: The program reads in stimulus-response pairs and obtains their phonemic representations from an open-source digital pronouncing dictionary. The stimulus phonemes are aligned with the response phonemes via a modification of the Levenshtein Minimum Edit Distance algorithm. Alignment is achieved via dynamic programming with modified costs based on phonological features for insertions, deletions, and substitutions. The accuracy for each phoneme is based on the F1-score. Accuracy is visualized with respect to place and manner (consonants) or height (vowels). Confusion matrices for the phonemes are used in an information transfer analysis of ten phonological features. A histogram of the information transfer for the features over a frequency-like range is presented as a phonemegram.
Results: The program was applied to two datasets. One consisted of test data at the sentence and word levels. Stimulus-response sentence pairs from six volunteers with different degrees of hearing loss and modes of amplification were analyzed. Four volunteers listened to sentences from a mobile auditory training app while two listened to sentences from a clinical speech test. Stimulus-response word pairs from three lists were also analyzed. The other dataset consisted of published stimulus-response pairs from experiments in which 31 participants with cochlear implants listened to 400 Basic English Lexicon sentences via different talkers at four different SNR levels. In all cases, visualization was obtained in real time. Analysis of 12,400 actual and random pairs showed that the program was robust to the nature of the pairs.
Conclusion: It is possible to automate the alignment of phonemes extracted from stimulus-response pairs from speech tests in real time. The alignment then makes it possible to visualize the accuracy of responses via phonological features in two ways. Such visualization of phoneme alignment and accuracy could aid clinicians and scientists.
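A minimal sketch of the alignment step, assuming a toy phonological feature table: a Levenshtein-style dynamic program whose substitution cost is reduced when two phonemes share features, as described above. The ARPAbet symbols, feature sets, and cost values are illustrative placeholders, and only the total cost is returned (a standard backtrace would recover the actual alignment).

```python
# Feature-weighted edit distance between stimulus and response phoneme sequences.
FEATURES = {                      # toy feature sets for a few ARPAbet phonemes
    "P": {"stop", "bilabial"}, "B": {"stop", "bilabial", "voiced"},
    "T": {"stop", "alveolar"}, "S": {"fricative", "alveolar"},
}

def sub_cost(a, b):
    if a == b:
        return 0.0
    fa, fb = FEATURES.get(a, set()), FEATURES.get(b, set())
    overlap = len(fa & fb) / max(len(fa | fb), 1)
    return 1.0 - 0.5 * overlap    # more shared features -> cheaper substitution

def align_cost(stimulus, response, ins_del=1.0):
    n, m = len(stimulus), len(response)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * ins_del
    for j in range(1, m + 1):
        d[0][j] = j * ins_del
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(
                d[i - 1][j] + ins_del,                                    # deletion
                d[i][j - 1] + ins_del,                                    # insertion
                d[i - 1][j - 1] + sub_cost(stimulus[i - 1], response[j - 1]),
            )
    return d[n][m]

print(align_cost(["B", "T"], ["P", "S"]))   # cheap: both pairs share features
```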


2022 ◽ Vol 9 (1)
Author(s):  
Jin Wang ◽  
Marisa N. Lytle ◽  
Yael Weiss ◽  
Brianna L. Yamasaki ◽  
James R. Booth

This dataset examines language development with a longitudinal design and includes diffusion- and T1-weighted structural magnetic resonance imaging (MRI), task-based functional MRI (fMRI), and a battery of psycho-educational assessments and parental questionnaires. We collected data from 5.5-6.5-year-old children (ses-5) and followed them up when they were 7-8 years old (ses-7) and then again at 8.5-10 years old (ses-9). To increase the sample size at the older time points, another cohort of 7-8-year-old children (ses-7) was recruited and followed up when they were 8.5-10 years old (ses-9). In total, 322 children who completed at least one structural and functional scan were included. Children performed four fMRI tasks consisting of two word-level tasks examining phonological and semantic processing and two sentence-level tasks investigating semantic and syntactic processing. The MRI data are valuable for examining changes over time in interactive specialization due to the use of multiple imaging modalities and tasks in this longitudinal design. In addition, the extensive psycho-educational assessments and questionnaires provide opportunities to explore brain-behavior and brain-environment associations.
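As a small illustration of working with the longitudinal session structure described above (ses-5, ses-7, ses-9), the following pandas sketch tabulates which participants have data at which time point. The table contents and column names are hypothetical placeholders, not the dataset's actual layout.

```python
# Summarize longitudinal coverage per participant from a hypothetical sessions table.
import pandas as pd

sessions = pd.DataFrame({          # stand-in for a participants/sessions listing
    "participant_id": ["sub-001", "sub-001", "sub-001", "sub-002", "sub-002"],
    "session": ["ses-5", "ses-7", "ses-9", "ses-7", "ses-9"],
})

coverage = (
    sessions.assign(present=1)
    .pivot_table(index="participant_id", columns="session",
                 values="present", fill_value=0)
)
print(coverage)        # which children have data at which time point
print(coverage.sum())  # sample size available at each session
```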


Author(s):  
Shrinidhi Kanchi ◽  
Alain Pagani ◽  
Hamam Mokayed ◽  
Marcus Liwicki ◽  
Didier Stricker ◽  
...  

Document classification is one of the most critical steps in the document analysis pipeline. There are two types of approaches for document classification, known as image-based and multimodal approaches. Image-based document classification approaches rely solely on the inherent visual cues of the document images. In contrast, multimodal approaches co-learn visual and textual features, which has proved to be more effective. Nonetheless, these approaches require a huge amount of data. This paper presents a novel approach for document classification that works with a small amount of data and outperforms other approaches. The proposed approach incorporates a hierarchical attention network (HAN) for the textual stream and EfficientNet-B0 for the image stream. The hierarchical attention network in the textual stream uses dynamic word embeddings through fine-tuned BERT and incorporates both word-level and sentence-level features. While earlier approaches rely on training on a large corpus (RVL-CDIP), we show that our approach works with a small amount of data (Tobacco-3482). To this end, we trained the neural network on Tobacco-3482 from scratch and thereby outperform the state-of-the-art with an accuracy of 90.3%, a relative error reduction of 7.9%.
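A hedged sketch of the two-stream idea: image features from EfficientNet-B0 late-fused with a textual representation for joint classification. The text stream is stubbed here with a plain embedding plus BiGRU pooler rather than the BERT-based hierarchical attention network, and the concatenation fusion and feature sizes are assumptions for illustration.

```python
# Two-stream (image + text) document classifier, fused by concatenation.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0


class TwoStreamDocClassifier(nn.Module):
    def __init__(self, vocab_size, num_classes=10, text_dim=256):
        super().__init__()
        self.image_stream = efficientnet_b0()
        self.image_stream.classifier = nn.Identity()       # keep 1280-d image features
        self.embed = nn.Embedding(vocab_size, 128, padding_idx=0)
        self.text_gru = nn.GRU(128, text_dim // 2, batch_first=True, bidirectional=True)
        self.head = nn.Linear(1280 + text_dim, num_classes)

    def forward(self, page_image, token_ids):
        img_feat = self.image_stream(page_image)            # (batch, 1280)
        txt, _ = self.text_gru(self.embed(token_ids))
        txt_feat = txt.mean(dim=1)                           # (batch, text_dim)
        return self.head(torch.cat([img_feat, txt_feat], dim=-1))
```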


2022 ◽ Vol 12 (1) ◽ pp. 499
Author(s):  
Ying Zhou ◽  
Xiaokang Hu ◽  
Vera Chung

Paraphrase detection and generation are important natural language processing (NLP) tasks, yet the term paraphrase is broad enough to include many fine-grained relations. This leads to different tolerance levels of semantic divergence in the positive paraphrase class among publicly available paraphrase datasets. Such variation can affect the generalisability of paraphrase classification models and may also impact the predictability of paraphrase generation models. This paper presents a new system that constructs corpora of fine-grained paraphrase relations automatically using language inference models. The fine-grained sentence-level paraphrase relations are defined based on their word- and phrase-level counterparts. We demonstrate that the fine-grained labels from our proposed system make it possible to generate paraphrases at a desired semantic level. The new labels could also contribute to general sentence embedding techniques.
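One way to realize such labeling, sketched under assumptions: run an off-the-shelf natural language inference model in both directions over a sentence pair and map the entailment pattern to a fine-grained relation. The model choice (roberta-large-mnli) and the relation names below are illustrative, not the paper's actual system.

```python
# Label a sentence pair's paraphrase relation from bidirectional NLI predictions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

def entails(premise, hypothesis):
    inputs = tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return model.config.id2label[int(logits.argmax())] == "ENTAILMENT"

def paraphrase_relation(s1, s2):
    fwd, bwd = entails(s1, s2), entails(s2, s1)
    if fwd and bwd:
        return "bidirectional entailment (strict paraphrase)"
    if fwd or bwd:
        return "one-way entailment (looser paraphrase)"
    return "non-entailing (divergent pair)"

print(paraphrase_relation("A man is playing a guitar.",
                          "Someone is playing an instrument."))
```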


2022 ◽ Vol 59 (1) ◽ pp. 102734
Author(s):  
Min Pan ◽  
Junmei Wang ◽  
Jimmy X. Huang ◽  
Angela J. Huang ◽  
Qi Chen ◽  
...  

2021 ◽ Vol 14 (4) ◽ pp. 1-24
Author(s):  
Sushant Kafle ◽  
Becca Dingman ◽  
Matt Huenerfauth

There are style guidelines for authors who highlight important words in static text, e.g., bolded words in student textbooks, yet little research has investigated highlighting in dynamic texts, e.g., captions during educational videos for Deaf or Hard of Hearing (DHH) users. In our experimental study, DHH participants subjectively compared design parameters for caption highlighting, including: decoration (underlining vs. italicizing vs. boldfacing), granularity (sentence level vs. word level), and whether to highlight only the first occurrence of a repeating keyword. In partial contrast to recommendations in prior research, which had not been based on experimental studies with DHH users, we found that DHH participants preferred boldface, word-level highlighting in captions. Our empirical results provide guidance for the design of keyword highlighting during captioned videos for DHH users, especially in educational video genres.
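As a hedged illustration of applying the preferred design (boldface, word-level highlighting) when generating caption text, the snippet below wraps keywords in WebVTT-style <b> tags, with first-occurrence-only highlighting left as an option. The markup convention and keyword list are assumptions made for the example, not part of the study.

```python
# Apply word-level boldface highlighting to a caption cue's text.
def highlight_keywords(cue_text, keywords, first_occurrence_only=True):
    seen = set()
    words = []
    for word in cue_text.split():
        key = word.strip(".,!?").lower()
        if key in keywords and not (first_occurrence_only and key in seen):
            words.append(f"<b>{word}</b>")   # boldface, word-level highlight
            seen.add(key)
        else:
            words.append(word)
    return " ".join(words)

cue = "Photosynthesis converts light energy and photosynthesis sustains plants."
print(highlight_keywords(cue, {"photosynthesis"}))
# First occurrence is bolded; the repeated keyword is left plain.
```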

