common task
Recently Published Documents


TOTAL DOCUMENTS: 256 (five years: 105)
H-INDEX: 15 (five years: 4)

2022, Vol 1
Author(s): Mickael Tardy, Diana Mateus

In breast cancer screening, binary classification of mammograms is a common task aiming to determine whether a case is malignant or benign. A Computer-Aided Diagnosis (CADx) system based on a trainable classifier requires clean data and labels coming from a confirmed diagnosis. Unfortunately, such labels are not easy to obtain in clinical practice, since the histopathological reports of biopsy may not be available alongside mammograms, while normal cases may not have an explicit follow-up confirmation. Such ambiguities result either in reducing the number of samples eligible for training or in a label uncertainty that may decrease performance. In this work, we maximize the number of training samples by relying on multi-task learning. We design a deep-neural-network-based classifier that yields multiple outputs in one forward pass. The predicted classes include binary malignancy, cancer probability estimation, breast density, and image laterality. Since few samples have all classes available and confirmed, we propose to introduce the uncertainty related to the classes as a per-sample weight during training. Such weighting prevents updating the network's parameters when training on uncertain or missing labels. We evaluate our approach on the public INBreast dataset and a private dataset, showing statistically significant improvements compared to baseline and independent state-of-the-art approaches. Moreover, we use mammograms from the Susan G. Komen Tissue Bank for fine-tuning, further demonstrating the ability to improve performance in our multi-task learning setup from raw clinical data. We achieve a binary classification performance of AUC = 80.46 on our private dataset and AUC = 85.23 on the INBreast dataset.
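The per-sample weighting idea can be illustrated as a masked loss: a label that is missing or uncertain receives zero weight and therefore contributes no gradient. The function below is a minimal sketch of that mechanism, not the paper's implementation; the shapes, the normalisation, and the example weights are all illustrative assumptions.

```python
def weighted_multitask_loss(losses, weights):
    """Combine per-task, per-sample losses while masking uncertain labels.

    losses  -- list of per-task loss rows, one row per sample
    weights -- same shape; 0.0 for a missing/uncertain label, up to 1.0
               for a confirmed one, so uncertain labels add no gradient.
    (Hypothetical sketch, not the paper's exact formulation.)
    """
    num = sum(l * w for lr, wr in zip(losses, weights)
                    for l, w in zip(lr, wr))
    den = sum(w for wr in weights for w in wr)
    # Normalise by the total label weight so batches with many missing
    # labels are not systematically down-weighted.
    return num / max(den, 1e-8)

# Two samples, three tasks (e.g. malignancy, density, laterality);
# the second sample lacks a confirmed malignancy label.
losses  = [[0.5, 0.2, 0.1], [0.9, 0.3, 0.4]]
weights = [[1.0, 1.0, 1.0], [0.0, 1.0, 1.0]]
print(weighted_multitask_loss(losses, weights))  # ≈ 0.3
```

In a real training loop the same masking would be applied to the framework's loss tensor before backpropagation, so the parameter update simply never sees the unconfirmed labels.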


2021
Author(s): Yu Cheng Hsu, Tsougenis Efstratios, Kwok-Leung Tsui, Qingpeng Zhang

Abstract. Background: Counting repetitions of human exercise and physical rehabilitation movements is a common task in rehabilitation and exercise training. Existing vision-based repetition counting methods place little emphasis on concurrent motions in the same video. Methods: This work analyzed the spectrogram of pose estimation results to count repetitions. Besides the public datasets, this work also collected exercise videos from 11 adults to verify that the proposed method can handle concurrent motion and different view angles. Results: The presented method was validated on the University of Idaho Physical Rehabilitation Movements Data Set (UI-PRMD) and the MM-fit dataset. The overall mean absolute error (MAE) for MM-fit was 0.06 with an off-by-one accuracy (OBOA) of 0.94. For the UI-PRMD dataset, the MAE was 0.06 with an OBOA of 0.95. We also tested performance across a variety of camera locations and concurrent motions on 57 skeleton time-series videos, with an overall MAE of 0.07 and an OBOA of 0.91. Conclusion: The proposed method provides view-angle- and motion-agnostic concurrent motion counting, and can potentially be used in large-scale remote rehabilitation and exercise training with only one camera.
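The core of frequency-based repetition counting can be sketched in a few lines: find the dominant frequency of a joint-angle time series and multiply by the clip duration. This is a simplified stand-in for the paper's spectrogram analysis, using a naive DFT so it stays self-contained; the function name and signal are illustrative assumptions.

```python
import cmath
import math

def count_repetitions(signal, fps):
    """Estimate repetition count from a 1-D joint-angle time series by
    finding its dominant frequency (a simplified stand-in for the
    paper's spectrogram-based analysis)."""
    n = len(signal)
    mean = sum(signal) / n
    centred = [s - mean for s in signal]          # drop the DC component
    # Naive DFT magnitudes for bins 1..n//2 (O(n^2), fine for a sketch).
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2 + 1):
        coeff = sum(c * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i, c in enumerate(centred))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    dominant_hz = best_k * fps / n
    duration_s = n / fps
    return round(dominant_hz * duration_s)        # reps ≈ freq × duration

# Synthetic "exercise": a 0.5 Hz joint angle over 10 s at 30 fps -> 5 reps.
angle = [math.sin(2 * math.pi * 0.5 * i / 30) for i in range(300)]
print(count_repetitions(angle, fps=30))  # 5
```

A spectrogram, as used in the paper, extends this idea by computing such spectra over sliding windows, which is what makes concurrent and changing motions separable.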


2021, Vol 5 (ISS), pp. 1-18
Author(s): Futian Zhang, Sachi Mizobuchi, Wei Zhou, Edward Lank

One common task when controlling smart displays is the manipulation of menu items. Given current examples of smart displays that support distant bare-hand control, in this paper we explore menu item selection tasks with three different mappings of bare-hand movement to target selection. Through a series of experiments, we demonstrate that positional mapping is faster than the other mappings when the target is visible, but requires many clutches in large targeting spaces. Rate-based mapping is, in contrast, preferred by participants due to its perceived lower effort, despite being slightly harder to learn initially. Tradeoffs in the design of target selection on smart TV displays are discussed.
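The difference between the two mappings comes down to what the hand controls. A minimal sketch, with entirely hypothetical gains and units (the paper's actual transfer functions are not specified in this abstract):

```python
def positional_cursor(hand_x, gain=1.0):
    """Positional mapping: the cursor mirrors the hand's position
    directly, so reaching a visible target is a single movement,
    but the reachable range is limited by arm span (hence clutching)."""
    return gain * hand_x

def rate_cursor(cursor_x, hand_offset, dt, gain=200.0):
    """Rate-based mapping: the hand's offset from a neutral pose sets
    the cursor's *velocity*, so a small sustained offset covers an
    arbitrarily large targeting space without clutching."""
    return cursor_x + gain * hand_offset * dt

# Holding a 0.1-unit offset for 2 s of 60 Hz frames drifts the cursor
# 40 units, with no arm movement beyond the initial offset.
x = 0.0
for _ in range(120):
    x = rate_cursor(x, hand_offset=0.1, dt=1 / 60)
print(round(x, 6))  # 40.0
```

This makes the reported tradeoff concrete: positional mapping is a one-shot reach bounded by the arm's range, while rate control trades speed for unbounded range and lower sustained effort.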


2021, Vol 2021, pp. 1-8
Author(s): Youheng Bai, Yan Zhang, Kui Xiao, Yuanyuan Lou, Kai Sun

Concept prerequisite relation prediction is a common task in the field of knowledge discovery. Concept prerequisite relations can be used to rank learning resources and help learners plan their learning paths. As the largest Internet encyclopedia, Wikipedia is composed of many articles edited in multiple languages, and basic knowledge concepts in a variety of subjects can be found there. Although there are many knowledge concepts in each field, the prerequisite relations between them are not clear: when we browse pages in an area on Wikipedia, we do not know which page to start with. In this paper, we propose a BERT-based Wikipedia concept prerequisite relation prediction model. First, we create two types of concept pair features: one based on BERT sentence embeddings and the other based on the attributes of Wikipedia articles. Then, we use these two types of concept pair features to predict the prerequisite relation between two concepts. Experimental results show that our proposed method outperforms state-of-the-art methods on English and Chinese datasets.
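One plausible way to assemble such a pair representation is to concatenate both concepts' views; the sketch below is an assumption about the general pattern, not the paper's exact feature set, and the difference term is a common trick rather than something the abstract specifies.

```python
def pair_features(emb_a, emb_b, attrs_a, attrs_b):
    """Feature vector for a candidate prerequisite pair (A, B).

    emb_a/emb_b     -- sentence embeddings of the two concepts' articles
                       (in the paper these come from BERT; stubbed here)
    attrs_a/attrs_b -- per-article attribute features (e.g. link counts)
    The element-wise difference lets a downstream classifier see the
    *direction* of the pair, since prerequisite relations are asymmetric.
    """
    diff = [a - b for a, b in zip(emb_a, emb_b)]
    return emb_a + emb_b + diff + attrs_a + attrs_b

# Toy 4-dim "embeddings" and 2 attribute features per article.
f = pair_features([0.1, 0.2, 0.3, 0.4], [0.4, 0.3, 0.2, 0.1],
                  [12, 3], [7, 1])
print(len(f))  # 4 + 4 + 4 + 2 + 2 = 16
```

The resulting vector would then feed any binary classifier that outputs whether A is a prerequisite of B.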


2021, Vol 27 (3), pp. 102-106
Author(s): Mariya V. Patrikeeva

The author examines the reflection of the events of the Crimean War of 1853–1856 in the plots of Russian fables. The article analyses the system of characters and the plot dominants of the fables, demonstrating the possibilities of this genre in describing Russia's military clash with hostile states. The author points to a common task uniting the plots of fables by different poets (the obligatory emphasis on the confrontation between the sides of the military conflict), while also considering the content and stylistic features of each fable separately. Noting the specificity of the fable genre in the period of the Crimean campaign of 1853–1856, the author points out its connection with the traditions laid down in Russian literature by Ivan Krylov, reflected in plot, stylistics and numerous allusions.


2021
Author(s): Tjaša Heričko, Boštjan Šumak, Saša Brdnik

Web performance testing with tools such as Google Lighthouse is a common task in software practice and research. However, variability in time-based performance measurement results is quickly observed when using the tool, even if the website has not changed. This can occur due to variability in the network, the web server, and client devices. In this paper, we investigated how this challenge has been addressed in the existing literature. Furthermore, an experiment was conducted highlighting how unrepresentative measurements can result from single runs; thus, researchers and practitioners are advised to run performance tests multiple times and use an aggregation value. Based on the empirical results, 5 consecutive runs aggregated with the median reduce variability greatly and can be performed in a reasonable time. The study's findings alert readers to potential pitfalls when using single-run measurement results and serve as guidelines for future use of the tool.
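The recommended aggregation is straightforward to apply around any measurement harness. A minimal sketch, assuming a caller-supplied wrapper that invokes Lighthouse and parses one time-based metric (that wrapper is hypothetical; only the repeat-and-take-median part reflects the paper's recommendation):

```python
import statistics

def stable_metric(run_once, runs=5):
    """Repeat a noisy performance measurement and aggregate with the
    median, per the recommendation of 5 consecutive runs.

    run_once -- zero-argument callable returning one time-based metric
                (e.g. a wrapper that invokes Lighthouse and parses its
                report; the wrapper itself is hypothetical here).
    """
    return statistics.median(run_once() for _ in range(runs))

# Simulated flaky metric in ms: one outlier run barely moves the median,
# whereas it would dominate a single-run result or skew a mean.
readings = iter([1210, 1185, 1990, 1201, 1194])
print(stable_metric(lambda: next(readings)))  # 1201
```

Using the median rather than the mean is what makes the occasional network-induced outlier mostly harmless.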


2021
Author(s): Martin Denk, Klemens Rother, Kristin Paetzold, ...

The automated reverse engineering of wireframes is a common task in topology optimization, fast concept design, bionics, and point cloud reconstruction. This article deals with the skeleton-based reconstruction of sketches in 2D images. The result is a flexible, at least C₁-continuous shape description.
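A C₁-continuous curve through skeleton points can be built from chained spline segments. The Catmull-Rom segment below is a generic stand-in for whatever parametrisation the article actually uses; the control points are illustrative.

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate one Catmull-Rom segment at t in [0, 1]. Chaining such
    segments through consecutive skeleton pixels yields a curve with
    matching tangents at the joints, i.e. C1 continuity (a generic
    sketch, not the article's specific shape description)."""
    t2, t3 = t * t, t * t * t
    return tuple(
        0.5 * ((2 * b) + (-a + c) * t
               + (2 * a - 5 * b + 4 * c - d) * t2
               + (-a + 3 * b - 3 * c + d) * t3)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

# The segment interpolates its two middle control points exactly.
pts = [(0, 0), (1, 1), (2, 0), (3, 1)]
print(catmull_rom(*pts, 0.0))  # (1.0, 1.0)
print(catmull_rom(*pts, 1.0))  # (2.0, 0.0)
```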


2021
Author(s): Nicolò Rossi, Colautti Andrea, Lucilla Iacumin, Carla Piazza

Summary: Whole Genome Assembly (WGA) of bacterial genomes from short reads is quite a common task, as DNA sequencing has become cheaper with advances in its technology. The process of assembling a genome has no absolute gold standard (Del Angel et al., 2018) and requires performing a sequence of steps, each of which can involve combinations of many different tools. However, the quality of the final assembly is always strongly related to the quality of the input data. With this in mind we built WGA-LP, a package that connects state-of-the-art programs and novel scripts to check and improve the quality of both samples and resulting assemblies. WGA-LP, with its conservative decontamination approach, has been shown to be capable of creating high-quality assemblies even in the case of contaminated reads. Availability and implementation: WGA-LP is available on GitHub (https://github.com/redsnic/WGA-LP) and Docker Hub (https://hub.docker.com/r/redsnic/wgalp). The web app for node visualization is hosted by shinyapps.io (https://redsnic.shinyapps.io/ContigCoverageVisualizer/). Contact: Nicolò Rossi, [email protected] Supplementary information: Supplementary data are available at bioRxiv online.


2021, Vol 22 (1)
Author(s): Natalja Kurbatova, Rowan Swiers

Abstract. Background: Data integration to build a biomedical knowledge graph is a challenging task. There are multiple disease ontologies used in data sources and publications, each with its own hierarchy. A common task is to map between ontologies, find disease clusters and finally build a representation of the chosen disease area. There is a shortage of published resources and tools to facilitate interactive, efficient and flexible cross-referencing and analysis of the multiple disease ontologies commonly found in data sources and research. Results: Our results are represented as a knowledge graph solution that uses disease ontology cross-references and facilitates switching between ontology hierarchies for data integration and other tasks. Conclusions: Grakn core with the pre-installed "Disease ontologies for knowledge graphs" facilitates building biomedical knowledge graphs and provides an elegant solution to the multiple disease ontologies problem.


2021
Author(s): Florian Mock, Fleming Kretschmer, Anton Kriese, Sebastian Böcker, Manja Marz

Taxonomic classification, i.e., the identification and assignment of biological organisms to groups with the same origin and characteristics, is a common task in genetics. Nowadays, taxonomic classification is mainly based on genome similarity search against large genome databases. In this process, the classification quality depends heavily on the database, since representative relatives have to be known already. Many genomic sequences cannot be classified at all, or only with a high misclassification rate. Here we present BERTax, a program that uses a deep neural network to precisely classify the superkingdom, phylum, and genus of DNA sequences taxonomically without the need for a known representative relative in a database. For this, BERTax uses the natural language processing model BERT, trained to represent DNA. We show BERTax to be at least on par with state-of-the-art approaches when taxonomically similar species are part of the training data. In the case of an entirely novel organism, however, BERTax clearly outperforms any existing approach. Finally, we show that BERTax can also be combined with database approaches to further increase prediction quality. Since BERTax is not based on homologous entries in databases, it allows precise taxonomic classification of a broader range of genomic sequences, leading to a higher number of correctly classified sequences and thus increasing the overall information gain.
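Feeding DNA to a BERT-style model requires first turning the sequence into discrete "words". Overlapping k-mers are the usual choice for such models; the sketch below illustrates that general idea, and BERTax's actual input encoding may differ.

```python
def kmer_tokenize(seq, k=3):
    """Split a DNA sequence into overlapping k-mer 'words', the usual
    way DNA is presented to BERT-style language models (a sketch;
    BERTax's actual tokenisation may differ)."""
    seq = seq.upper()
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

print(kmer_tokenize("acgtac"))  # ['ACG', 'CGT', 'GTA', 'TAC']
```

Each k-mer then maps to a vocabulary index, exactly as word pieces do in textual BERT, which is what lets the same architecture learn contextual representations of genomic sequence.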

