Evaluation Dataset and System for Japanese Lexical Simplification

Author(s):  
Tomoyuki Kajiwara ◽  
Kazuhide Yamamoto
2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Michael Rutherford ◽  
Seong K. Mun ◽  
Betty Levine ◽  
William Bennett ◽  
Kirk Smith ◽  
...  

Abstract We developed a DICOM dataset that can be used to evaluate the performance of de-identification algorithms. DICOM objects (a total of 1,693 CT, MRI, PET, and digital X-ray images) were selected from datasets published in the Cancer Imaging Archive (TCIA). Synthetic Protected Health Information (PHI) was generated and inserted into selected DICOM Attributes to mimic typical clinical imaging exams. The DICOM Standard and TCIA curation audit logs guided the insertion of synthetic PHI into standard and non-standard DICOM data elements. A TCIA curation team tested the utility of the evaluation dataset. With this publication, the evaluation dataset (containing synthetic PHI) and de-identified evaluation dataset (the result of TCIA curation) are released on TCIA in advance of a competition, sponsored by the National Cancer Institute (NCI), for algorithmic de-identification of medical image datasets. The competition will use a much larger evaluation dataset constructed in the same manner. This paper describes the creation of the evaluation datasets and guidelines for their use.
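
The insert-then-curate workflow described above can be sketched with plain Python dicts standing in for DICOM attributes. The attribute names, values, and allow-list below are all hypothetical, and a real pipeline would use a DICOM library such as pydicom rather than dicts; this is only a minimal illustration of the idea.

```python
# Toy sketch: insert synthetic PHI into DICOM-like attributes, then
# de-identify with an allow-list. Names and values are hypothetical.

SYNTHETIC_PHI = {
    "PatientName": "DOE^JANE",        # synthetic, not a real patient
    "PatientID": "SYN-000123",
    "InstitutionName": "Example Hospital",
}

# Attributes a de-identification step is allowed to keep as-is
# (a real profile would be far more detailed).
SAFE_ATTRIBUTES = {"Modality", "StudyDescription"}

def insert_synthetic_phi(dicom_attrs):
    """Return a copy of the attribute dict with synthetic PHI inserted."""
    out = dict(dicom_attrs)
    out.update(SYNTHETIC_PHI)
    return out

def deidentify(dicom_attrs):
    """Blank every attribute not on the allow-list."""
    return {k: (v if k in SAFE_ATTRIBUTES else "")
            for k, v in dicom_attrs.items()}

exam = {"Modality": "CT", "StudyDescription": "CHEST W/O CONTRAST"}
with_phi = insert_synthetic_phi(exam)
cleaned = deidentify(with_phi)
```

An evaluation then amounts to comparing an algorithm's output against the known synthetic PHI that was inserted.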


Author(s):  
J A Hall ◽  
R J Harris ◽  
A Zaidi ◽  
S C Woodhall ◽  
G Dabrera ◽  
...  

Abstract Background Household transmission of SARS-CoV-2 is an important component of the community spread of the pandemic. Little is known about the factors associated with household transmission, at the level of the case, contact or household, or how these have varied over the course of the pandemic. Methods The Household Transmission Evaluation Dataset (HOSTED) is a passive surveillance system linking laboratory-confirmed COVID-19 cases to individuals living in the same household in England. We explored the risk of household transmission according to: age of case and contact, sex, region, deprivation, month and household composition between April and September 2020, building a multivariate model. Results In the period studied, on average, 5.5% of household contacts in England were diagnosed as cases. Household transmission was most common between adult cases and contacts of a similar age. There was some evidence of lower transmission rates to under-16s [adjusted odds ratio (aOR) 0.70, 95% confidence interval (CI) 0.66–0.74]. There were clear regional differences, with higher rates of household transmission in the north of England and the Midlands. Less deprived areas had a lower risk of household transmission. After controlling for region, there was no effect of deprivation, but houses of multiple occupancy had lower rates of household transmission [aOR 0.74 (0.66–0.83)]. Conclusions Children are less likely to acquire SARS-CoV-2 via household transmission, and consequently there was no difference in the risk of transmission in households with children. Households in which cases could isolate effectively, such as houses of multiple occupancy, had lower rates of household transmission. Policies to support the effective isolation of cases from their household contacts could lower the level of household transmission.
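
The adjusted odds ratios above come from a multivariable model; the crude (unadjusted) odds ratio they generalize can be computed directly from a 2×2 exposure-outcome table. The counts below are invented purely for illustration.

```python
# Crude odds ratio from a 2x2 table: OR = (a/b) / (c/d).
# Counts here are made up; real aORs additionally adjust for covariates.

def odds_ratio(exposed_cases, exposed_noncases,
               unexposed_cases, unexposed_noncases):
    return ((exposed_cases / exposed_noncases)
            / (unexposed_cases / unexposed_noncases))

# e.g. contacts under 16 (exposed) vs adults (unexposed),
# infected vs not infected
crude_or = odds_ratio(70, 930, 100, 900)
print(round(crude_or, 3))  # 0.677
```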


2020 ◽  
Vol 34 (05) ◽  
pp. 8592-8599
Author(s):  
Sheena Panthaplackel ◽  
Milos Gligoric ◽  
Raymond J. Mooney ◽  
Junyi Jessy Li

Comments are an integral part of software development; they are natural language descriptions associated with source code elements. Understanding explicit associations can be useful in improving code comprehensibility and maintaining the consistency between code and comments. As an initial step towards this larger goal, we address the task of associating entities in Javadoc comments with elements in Java source code. We propose an approach for automatically extracting supervised data using revision histories of open source projects and present a manually annotated evaluation dataset for this task. We develop a binary classifier and a sequence labeling model by crafting a rich feature set which encompasses various aspects of code, comments, and the relationships between them. Experiments show that our systems outperform several baselines learning from the proposed supervision.
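
A crude lexical baseline for the association task above might look like the following sketch. The paper's classifier and sequence labeler use a much richer feature set; the tokenizer, example names, and exact-match rule here are hypothetical simplifications.

```python
import re

def comment_tokens(javadoc):
    """Extract candidate entity mentions (identifier-like tokens)."""
    return set(re.findall(r"[A-Za-z_][A-Za-z0-9_]*", javadoc))

def associate(javadoc, code_elements):
    """Naive lexical baseline: a comment token is associated with a
    code element iff their lowercased names match exactly."""
    mentions = {t.lower() for t in comment_tokens(javadoc)}
    return [e for e in code_elements if e.lower() in mentions]

doc = "/** Returns the maxSize of the buffer, or -1 if unset. */"
elems = ["maxSize", "buffer", "resetCount"]
print(associate(doc, elems))  # ['maxSize', 'buffer']
```

Such exact-match baselines miss paraphrased mentions, which is one motivation for the learned models in the paper.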


2020 ◽  
Vol 2020 ◽  
pp. 1-19
Author(s):  
Wei Zhang ◽  
Zhihai Wang ◽  
Jidong Yuan ◽  
Shilei Hao

As a representation of discriminative features, the time series shapelet has recently received considerable research interest. However, most shapelet-based classification models evaluate the discriminative ability of the shapelet on the whole training dataset, neglecting characteristic information contained in each instance to be classified and the classwise feature frequency information. Hence, the computational complexity of feature extraction is high, and the interpretability is inadequate. To this end, the efficiency of shapelet discovery is improved through a lazy strategy fusing global and local similarities. In the prediction process, the strategy learns a specific evaluation dataset for each instance, and then the captured characteristics are directly used to progressively reduce the uncertainty of the predicted class label. Moreover, a shapelet coverage score is defined to calculate the discriminability of each time stamp for different classes. The experimental results show that the proposed method is competitive with the benchmark methods and provides insight into the discriminative features of each time series and each class in the data.
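
The shapelet primitive underlying such models can be sketched as follows: the distance from a shapelet to a time series is the minimum Euclidean distance over all same-length sliding windows. This is only the basic primitive; the lazy global/local fusion described above builds on top of it.

```python
# Basic shapelet-to-series distance: minimum Euclidean distance over
# all sliding windows with the same length as the shapelet.

def shapelet_distance(series, shapelet):
    m = len(shapelet)
    best = float("inf")
    for start in range(len(series) - m + 1):
        window = series[start:start + m]
        dist = sum((a - b) ** 2 for a, b in zip(window, shapelet)) ** 0.5
        best = min(best, dist)
    return best

ts = [0.0, 0.1, 1.0, 2.0, 1.0, 0.1]
print(shapelet_distance(ts, [1.0, 2.0, 1.0]))  # exact match -> 0.0
```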


Electronics ◽  
2020 ◽  
Vol 9 (10) ◽  
pp. 1706
Author(s):  
Soonshin Seo ◽  
Ji-Hwan Kim

One of the most important parts of a text-independent speaker verification system is speaker embedding generation. Previous studies demonstrated that shortcut connections-based multi-layer aggregation improves the representational power of a speaker embedding system. However, model parameters are relatively large in number, and unspecified variations increase in the multi-layer aggregation. Therefore, in this study, we propose a self-attentive multi-layer aggregation with feature recalibration and deep length normalization for a text-independent speaker verification system. To reduce the number of model parameters, we set the ResNet with the scaled channel width and layer depth as a baseline. To control the variability in the training, we apply a self-attention mechanism to perform multi-layer aggregation with dropout regularizations and batch normalizations. Subsequently, we apply a feature recalibration layer to the aggregated feature using fully-connected layers and nonlinear activation functions. Further, deep length normalization is used on a recalibrated feature in the training process. Experimental results using the VoxCeleb1 evaluation dataset showed that the performance of the proposed methods was comparable to that of state-of-the-art models (equal error rate of 4.95% and 2.86%, using the VoxCeleb1 and VoxCeleb2 training datasets, respectively).
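
The recalibration and length-normalization steps described above can be sketched very roughly as follows. Real layers operate per channel with learned weight matrices; here the two fully-connected layers are reduced to scalar weights `w1` and `w2`, which are hypothetical stand-ins, not the paper's architecture.

```python
import math

def recalibrate(feature, w1=1.0, w2=1.0):
    """SE-style gate: squeeze (mean), FC+ReLU, FC+sigmoid, rescale.
    Heavily simplified: scalar weights stand in for FC layers."""
    squeezed = sum(feature) / len(feature)        # global average pooling
    hidden = max(0.0, w1 * squeezed)              # FC + ReLU
    gate = 1.0 / (1.0 + math.exp(-w2 * hidden))   # FC + sigmoid
    return [gate * x for x in feature]            # rescale the feature

def length_normalize(feature):
    """Project the embedding onto the unit hypersphere (L2 norm = 1)."""
    norm = math.sqrt(sum(x * x for x in feature)) or 1.0
    return [x / norm for x in feature]

emb = length_normalize(recalibrate([3.0, 4.0]))
```

Note that a uniform gate cancels under length normalization; in the real model the gate is per-channel, so recalibration changes the direction of the embedding, not just its scale.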


2020 ◽  
Vol 16 (3) ◽  
pp. 110-127
Author(s):  
Raabia Mumtaz ◽  
Muhammad Abdul Qadir

This article describes CustNER: a system for named-entity recognition (NER) of person, location, and organization. Examining the incorrect annotations of existing NERs, four categories of false negatives were identified; among these, the unannotated NEs include nationalities, entities with a corresponding resource in DBpedia, and acronyms of other NEs. A rule-based system, CustNER, has been proposed that utilizes existing NERs and the DBpedia knowledge base. CustNER has been trained on the open knowledge extraction (OKE) challenge 2017 dataset and evaluated on the OKE and CoNLL03 (Conference on Computational Natural Language Learning) datasets. The OKE dataset has also been annotated with the three types. Evaluation results show that CustNER outperforms existing NERs, with an F score 12.4% better than Stanford NER and 3.1% better than Illinois NER. On another standard evaluation dataset for which the system was not trained, CoNLL03, CustNER gives results comparable to existing systems, with an F score 3.9% better than Stanford NER, though the Illinois NER F score is 1.3% better than CustNER's.
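
The rule-based idea can be sketched as a post-processing pass over a base NER's output: tokens the base system missed are recovered via lookup lists. The lists and the nationality-to-location rule below are illustrative stand-ins, not CustNER's actual rules.

```python
# Hypothetical sketch of rule-based NER augmentation: keep the base
# system's annotations and add entities it missed using lookup lists.

NATIONALITIES = {"french": "location", "japanese": "location"}
KNOWN_ENTITIES = {"dbpedia": "organization"}   # stand-in for a KB lookup

def augment(tokens, base_annotations):
    annotations = dict(base_annotations)
    for token in tokens:
        if token in annotations:
            continue                            # trust the base NER
        key = token.lower()
        if key in NATIONALITIES:
            annotations[token] = NATIONALITIES[key]
        elif key in KNOWN_ENTITIES:
            annotations[token] = KNOWN_ENTITIES[key]
    return annotations

tokens = ["The", "French", "delegation", "visited", "DBpedia"]
base = {"DBpedia": "organization"}
print(augment(tokens, base))
```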


2016 ◽  
Vol 2016 ◽  
pp. 1-19 ◽  
Author(s):  
Marius Miron ◽  
Julio J. Carabias-Orti ◽  
Juan J. Bosch ◽  
Emilia Gómez ◽  
Jordi Janer

This paper proposes a system for score-informed audio source separation for multichannel orchestral recordings. The orchestral music repertoire relies on the existence of scores. Thus, a reliable separation requires a good alignment of the score with the audio of the performance. To that end, automatic score alignment methods are reliable when allowing a tolerance window around the actual onset and offset. Moreover, several factors increase the difficulty of our task: high reverberance, large ensembles having rich polyphony, and a large variety of instruments recorded within a distant-microphone setup. To solve these problems, we design context-specific methods such as the refinement of score-following output in order to obtain a more precise alignment. Moreover, we extend a close-microphone separation framework to deal with the distant-microphone orchestral recordings. Then, we propose the first open evaluation dataset in this musical context, including annotations of the notes played by multiple instruments from an orchestral ensemble. The evaluation aims at analyzing the interactions of important parts of the separation framework on the quality of separation. Results show that we are able to align the original score with the audio of the performance and separate the sources corresponding to the instrument sections.
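
The tolerance-window criterion mentioned above can be sketched as a simple onset-accuracy measure: an aligned note onset counts as correct if it falls within a tolerance of the ground-truth onset. The tolerance value and onset times below are invented for illustration.

```python
# Fraction of aligned onsets within +/- tolerance seconds of the
# ground-truth onsets (a common score-alignment evaluation criterion).

def onset_accuracy(aligned_onsets, true_onsets, tolerance=0.1):
    hits = sum(1 for a, t in zip(aligned_onsets, true_onsets)
               if abs(a - t) <= tolerance)
    return hits / len(true_onsets)

acc = onset_accuracy([0.02, 1.05, 2.30], [0.00, 1.00, 2.00],
                     tolerance=0.1)
print(acc)  # 2 of 3 onsets fall inside the window
```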


Author(s):  
WASEEM AHMAD ◽  
AJIT NARAYANAN

Outlier detection has important applications in various data mining domains such as fraud detection, intrusion detection, customers' behavior and employees' performance analysis. Outliers are characterized by being significantly or "interestingly" different from the rest of the data. In this paper, a novel cluster-based outlier detection method is proposed using a humoral-mediated clustering algorithm (HAIS) based on concepts of antibody secretion in natural immune systems. The proposed method finds meaningful clusters as well as outliers simultaneously. This is an iterative approach where only clusters above threshold (larger sized clusters) are carried forward to the next cycle of cluster formation while removing small sized clusters. This paper also demonstrates through experimental results that the mere existence of outliers severely affects the clustering outcome, and removing those outliers can result in better clustering solutions. The feasibility of the method is demonstrated through simulated datasets, current datasets from the literature as well as a real-world doctors' performance evaluation dataset where the task is to identify potentially under-performing doctors. The results indicate that HAIS has capabilities of detecting single point as well as cluster-based outliers.
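
The iterative size-thresholding step described above can be sketched as follows. The clustering itself is stubbed out with precomputed assignments, so none of HAIS's immune-inspired antibody-secretion mechanics appear here; only the "small clusters are outliers" idea is shown.

```python
from collections import Counter

def split_outliers(assignments, min_cluster_size=2):
    """Return (kept_points, outlier_points): points in clusters below
    the size threshold are flagged as outliers and removed before the
    next clustering iteration."""
    sizes = Counter(assignments.values())
    kept, outliers = [], []
    for point, cluster in assignments.items():
        (kept if sizes[cluster] >= min_cluster_size
         else outliers).append(point)
    return kept, outliers

# Toy assignments from a hypothetical clustering pass
assignments = {"p1": 0, "p2": 0, "p3": 0, "p4": 1, "p5": 2, "p6": 2}
kept, outliers = split_outliers(assignments)
print(outliers)  # p4 sits alone in cluster 1
```

Iterating this step on the kept points is what lets the method refine clusters and outliers simultaneously.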

