Jaccard Index
Recently Published Documents


TOTAL DOCUMENTS

136
(FIVE YEARS 75)

H-INDEX

10
(FIVE YEARS 2)

2021 ◽  
Author(s):  
Georg Hahn ◽  
Sanghun Lee ◽  
Dmitry Prokopenko ◽  
Tanya Novak ◽  
Julian Hecker ◽  
...  

The GISAID database contains more than 100,000 SARS-CoV-2 genomes, including sequences of the recently discovered SARS-CoV-2 omicron variant and of prior SARS-CoV-2 strains collected from patients around the world since the beginning of the pandemic. We applied unsupervised cluster analysis to the SARS-CoV-2 genomes, assessing their similarity at a genome-wide level using the Jaccard index and principal component analysis. Our results show that the omicron variant sequences are most similar to sequences submitted early in the pandemic, around January 2020. Furthermore, the omicron variants in GISAID are spread across the entire range of the first principal component, suggesting that the strain has been in circulation for some time. This observation supports a long-term infection hypothesis for the origin of the omicron strain.
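As an illustration of this kind of analysis, the following minimal sketch (not the authors' code) computes pairwise Jaccard similarities between genomes represented as sets of mutations relative to a reference and projects them onto principal components; the mutation identifiers are hypothetical.

    import numpy as np
    from sklearn.decomposition import PCA

    def jaccard(a: set, b: set) -> float:
        """Jaccard index |A & B| / |A | B|; defined as 1.0 for two empty sets."""
        union = a | b
        return len(a & b) / len(union) if union else 1.0

    # Hypothetical input: one set of mutations (vs. a reference) per genome.
    genomes = [{"C241T", "A23403G"}, {"C241T", "G28881A"}, {"A23403G"}]

    # Pairwise Jaccard similarity matrix.
    n = len(genomes)
    S = np.array([[jaccard(genomes[i], genomes[j]) for j in range(n)]
                  for i in range(n)])

    # Project the similarity rows onto the leading principal components,
    # e.g. to inspect spread along the first component as in the paper.
    coords = PCA(n_components=2).fit_transform(S)
    print(coords)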


2021 ◽  
Vol 21 ◽  
pp. 316-323
Author(s):  
Roman Voitovych ◽  
Edyta Łukasik

This paper presents an approach to comparing and classifying books written in Polish by comparing their lexis fields. Books can be classified by features such as literature type, literary genre, style, and author. Using a preassembled dictionary and the Jaccard index, we confirmed a hypothesis concerning the similarity of related books. Further analysis with the PAM clustering algorithm revealed a lexical connection between books of the same type or author. The generally stable behaviour of similarities within any particular field, together with anomalous tendencies in other cases, suggests that recognition of other features is also possible. The method presented in this article allows conclusions to be drawn about the connection between any arbitrary books based solely on their vocabulary.
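A minimal sketch of the core idea, under the assumptions of whitespace tokenization and scikit-learn-extra's KMedoids standing in for PAM; the miniature "books" are hypothetical:

    import numpy as np
    from sklearn_extra.cluster import KMedoids  # pip install scikit-learn-extra

    def lexis(text: str) -> set:
        """Lexis field as the set of lower-cased word forms."""
        return set(text.lower().split())

    books = {  # hypothetical miniature corpora
        "novel_a": "wiosna las rzeka dom",
        "novel_b": "wiosna las dom droga",
        "poems_c": "serce noc gwiazda",
    }
    names = list(books)
    vocab = [lexis(books[k]) for k in names]

    # Jaccard distance = 1 - Jaccard index, on the books' vocabularies.
    n = len(vocab)
    D = np.array([[1 - len(vocab[i] & vocab[j]) / len(vocab[i] | vocab[j])
                   for j in range(n)] for i in range(n)])

    # PAM (k-medoids) on the precomputed distance matrix.
    labels = KMedoids(n_clusters=2, metric="precomputed",
                      method="pam", random_state=0).fit_predict(D)
    print(dict(zip(names, labels)))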


2021 ◽  
Vol 1 (1) ◽  
pp. 20-22
Author(s):  
Awadelrahman M. A. Ahmed ◽  
Leen A. M. Ali

This paper contributes to automating medical image segmentation by proposing generative adversarial network based models to segment both polyps and instruments in endoscopy images. A main contribution of this paper is providing explanations for the predictions using the layer-wise relevance propagation approach, showing which pixels in the input image are most relevant to the predictions. The models achieved Jaccard indices of 0.46 and 0.70 and accuracies of 0.84 and 0.96 on polyp segmentation and instrument segmentation, respectively.
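The two reported metrics can be computed per image from binary masks; a minimal sketch (evaluation only, not the proposed GAN models):

    import numpy as np

    def jaccard_index(pred: np.ndarray, target: np.ndarray) -> float:
        """Intersection over union of two binary masks of equal shape."""
        inter = np.logical_and(pred, target).sum()
        union = np.logical_or(pred, target).sum()
        return inter / union if union else 1.0

    def pixel_accuracy(pred: np.ndarray, target: np.ndarray) -> float:
        """Fraction of pixels labeled identically in both masks."""
        return float((pred == target).mean())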


2021 ◽  
Vol 1 (1) ◽  
pp. 35-37
Author(s):  
Saurab Rauniyar ◽  
Vabesh Kumar Jha ◽  
Ritika Kumari Jha ◽  
Debesh Jha ◽  
Ashish Rauniyar

Colorectal cancer is one of the major causes of cancer-related deaths globally. Although colonoscopy is considered the gold standard for the examination of colon polyps, it has a significant miss rate of around 22–28%. Deep learning algorithms such as convolutional neural networks can aid in detecting and describing abnormalities in the colon that clinicians might miss during endoscopic examinations. The "MedAI: Transparency in Medical Image Segmentation" competition provides an opportunity to develop accurate and automated polyp segmentation algorithms on a common dataset provided by the challenge organizers. We participated in the polyp segmentation task of the challenge and provide a solution based on the dual decoder attention network (DDANet), an encoder-decoder architecture with a dual decoder attention mechanism. Our experimental results on the organizers' dataset showed a Dice coefficient of 0.7967, a Jaccard index of 0.7220, a recall of 0.8214, a precision of 0.8359, and an accuracy of 0.9557. Our results on unseen datasets suggest that deep learning and computer vision based methods can effectively solve automated polyp segmentation tasks.
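All five reported scores follow from the per-pixel confusion counts; a minimal sketch (the counts in the usage line are illustrative, not the challenge data):

    def seg_metrics(tp: int, fp: int, fn: int, tn: int):
        """Dice, Jaccard, recall, precision, accuracy from confusion counts."""
        dice = 2 * tp / (2 * tp + fp + fn)
        jaccard = tp / (tp + fp + fn)
        recall = tp / (tp + fn)
        precision = tp / (tp + fp)
        accuracy = (tp + tn) / (tp + fp + fn + tn)
        return dice, jaccard, recall, precision, accuracy

    print(seg_metrics(tp=820, fp=160, fn=180, tn=8840))  # illustrative counts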


2021 ◽  
Author(s):  
Guohui Ruan ◽  
Jiaming Liu ◽  
Ziqi An ◽  
Kaiibin Wu ◽  
Chuanjun Tong ◽  
...  

Skull stripping is an initial and critical step in the mouse fMRI analysis pipeline. Manual labeling of the brain usually suffers from intra- and inter-rater variability and is highly time-consuming, so an automatic and efficient skull-stripping method is in high demand for mouse fMRI studies. In this study, we investigated a 3D U-Net based method for automatic brain extraction in mouse fMRI studies. Two U-Net models were trained separately on T2-weighted anatomical images and T2*-weighted functional images, and the trained models were tested on both internal and external datasets. The 3D U-Net models yielded higher accuracy in brain extraction from both T2-weighted images (Dice > 0.984, Jaccard index > 0.968, and Hausdorff distance < 7.7) and T2*-weighted images (Dice > 0.964, Jaccard index > 0.931, and Hausdorff distance < 3.3) than the two widely used mouse skull-stripping methods (RATS and SHERM). The resting-state fMRI results using automatic segmentation with the 3D U-Net models were identical to those obtained by manual segmentation for both seed-based and group independent component analyses. These results demonstrate that the 3D U-Net based method can replace manual brain extraction in mouse fMRI analysis.
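A minimal sketch of the third reported metric, the symmetric Hausdorff distance between two brain masks, using SciPy on the foreground voxel coordinates (both masks are assumed non-empty):

    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def hausdorff(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
        """Symmetric Hausdorff distance between two binary (3D) masks."""
        a = np.argwhere(mask_a)  # N x 3 foreground voxel coordinates
        b = np.argwhere(mask_b)
        return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])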


Mathematics ◽  
2021 ◽  
Vol 9 (19) ◽  
pp. 2471
Author(s):  
Miguel-Angel Gil-Rios ◽  
Igor V. Guryev ◽  
Ivan Cruz-Aceves ◽  
Juan Gabriel Avina-Cervantes ◽  
Martha Alicia Hernandez-Gonzalez ◽  
...  

The automatic detection of coronary stenosis is a very important task in computer-aided diagnosis systems in the cardiology area. The main contribution of this paper is the identification of a suitable subset of 20 features that allows stenosis cases in X-ray coronary images to be classified with high performance, outperforming different state-of-the-art classification techniques including deep learning strategies. The automatic feature selection stage was driven by the Univariate Marginal Distribution Algorithm and carried out by statistical comparison between five metaheuristics exploring a search space of O(2^49) computational complexity. Moreover, the proposed method is compared with six state-of-the-art classification methods, proving its effectiveness in terms of the Accuracy and Jaccard index evaluation metrics. All the experiments were performed using two X-ray image databases of coronary angiograms; the first contains 500 instances and the second 250 images. In the experimental results, the proposed method achieved Accuracy rates of 0.89 and 0.88 and Jaccard indices of 0.80 and 0.79, respectively. Finally, the average computational time of the proposed method to classify stenosis cases was ≈0.02 s, which makes it highly suitable for use in clinical practice.
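A minimal UMDA sketch for binary feature selection (assumptions: 49 candidate features, matching the O(2^49) search space, and a user-supplied fitness function scoring a feature subset; the example fitness is purely illustrative):

    import numpy as np

    def umda(fitness, n_feats=49, pop=100, elite=30, iters=50, seed=0):
        """Univariate Marginal Distribution Algorithm over bit strings."""
        rng = np.random.default_rng(seed)
        p = np.full(n_feats, 0.5)                # marginal selection probabilities
        for _ in range(iters):
            bits = rng.random((pop, n_feats)) < p          # sample population
            scores = np.array([fitness(b) for b in bits])
            best = bits[np.argsort(scores)[-elite:]]       # truncation selection
            p = best.mean(axis=0).clip(0.02, 0.98)         # re-estimate marginals
        return p > 0.5  # final feature mask

    # Illustrative fitness: prefer subsets of about 20 features.
    mask = umda(lambda b: -abs(int(b.sum()) - 20))
    print(mask.sum(), "features selected")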


2021 ◽  
Vol 7 (1) ◽  
pp. 2
Author(s):  
Mateo Gende ◽  
Joaquim de Moura ◽  
Jorge Novo ◽  
Pablo Charlón ◽  
Marcos Ortega

The Epiretinal Membrane (ERM) is an ocular disease that appears as a fibro-cellular layer of tissue over the retina, specifically over the Inner Limiting Membrane (ILM). It causes vision blurring and distortion, and its presence can be indicative of other ocular pathologies, such as diabetic macular edema. ERM diagnosis is usually performed by visually inspecting Optical Coherence Tomography (OCT) images, a manual process that is tiresome and prone to subjectivity. In this work, we present a methodology for the automatic segmentation and visualisation of the ERM in OCT volumes using deep learning. By employing a Densely Connected Convolutional Network, every pixel in the ILM can be classified as either healthy or pathological, producing a segmentation of the region susceptible to ERM appearance. This methodology also produces an intuitive colour-map representation of ERM presence over a visualisation of the eye fundus created from the OCT volume. In a series of representative experiments conducted to evaluate this methodology, it achieved a Dice score of 0.826±0.112 and a Jaccard index of 0.714±0.155. The results obtained demonstrate the competitive performance of the proposed methodology when compared with other works in the state of the art.
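The two overlap scores reported here are monotonically related per sample, which provides a useful sanity check on such results: a Jaccard index of 0.714 maps to a Dice score of about 0.833, close to the reported mean of 0.826 (the identity holds per volume, not for averaged scores, hence the small gap).

    \[
      \mathrm{Dice} = \frac{2\,|A \cap B|}{|A| + |B|} = \frac{2J}{1+J},
      \qquad
      J = \frac{|A \cap B|}{|A \cup B|} = \frac{\mathrm{Dice}}{2 - \mathrm{Dice}}.
    \]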


2021 ◽  
pp. 1-12
Author(s):  
Yanhan Zhang ◽  
Shengwei Tian ◽  
Long Yu ◽  
Yuan Ren ◽  
Zhongyu Gao ◽  
...  

In recent years, the incidence of skin diseases has increased significantly, and some malignant tumors caused by skin diseases pose serious hidden risks to people's health. To help experts perform lesion measurement and auxiliary diagnosis, automatic segmentation methods are much needed in clinical practice. Deep learning and contextual information extraction methods have been applied to many image segmentation tasks, but their performance is limited because their large numbers of parameters are insufficiently trained and sometimes fail to capture long-range dependencies. In addition, owing to the many interfering factors in skin disease images, the complex boundary, and the uncertain size and shape of the lesion, segmentation of skin disease images remains a challenging problem. To address these problems, we propose a long-distance contextual attention network (LCA-Net). By connecting a non-local module and a channel attention module (CAM) in parallel to form a non-local operation, long-range dependence is captured along both the spatial and channel dimensions to enhance the network's ability to extract features of skin diseases. Our method achieves an average Jaccard index of 0.771 on the ISIC2017 dataset, a 0.6% improvement over the ISIC2017 Challenge champion model, and an average Jaccard index of 0.8256 under 5-fold cross-validation on the ISIC2018 dataset. We also compared our approach with several advanced image segmentation methods; the experimental results show that it achieves competitive performance.
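A minimal PyTorch sketch of the general pattern described here, a non-local (spatial self-attention) branch and a channel-attention branch run in parallel and fused residually; this is a generic reconstruction, not the authors' LCA-Net code, and all layer sizes are assumptions:

    import torch
    import torch.nn as nn

    class ParallelAttention(nn.Module):
        """Non-local spatial attention and channel attention in parallel."""
        def __init__(self, c: int):
            super().__init__()
            self.theta = nn.Conv2d(c, c // 2, 1)  # query projection
            self.phi = nn.Conv2d(c, c // 2, 1)    # key projection
            self.g = nn.Conv2d(c, c // 2, 1)      # value projection
            self.out = nn.Conv2d(c // 2, c, 1)
            self.fc = nn.Sequential(nn.Linear(c, c // 4), nn.ReLU(),
                                    nn.Linear(c // 4, c), nn.Sigmoid())

        def forward(self, x):
            b, c, h, w = x.shape
            q = self.theta(x).flatten(2).transpose(1, 2)  # B x HW x C/2
            k = self.phi(x).flatten(2)                    # B x C/2 x HW
            v = self.g(x).flatten(2).transpose(1, 2)      # B x HW x C/2
            attn = torch.softmax(q @ k, dim=-1)           # spatial affinities
            nl = self.out((attn @ v).transpose(1, 2).reshape(b, c // 2, h, w))
            gate = self.fc(x.mean(dim=(2, 3))).view(b, c, 1, 1)  # channel gate
            return x + nl + gate * x  # residual fusion of both branches

    y = ParallelAttention(64)(torch.randn(1, 64, 32, 32))  # smoke test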


2021 ◽  
Vol 5 (3) ◽  
pp. 306
Author(s):  
Ridho Ananda ◽  
Agi Prasetiadi

One of the problems in the clustering process is that the objects under inquiry are multivariate measures containing geometrical information, which requires shape clustering. Because Procrustes is a technique for obtaining the similarity measure of two shapes, it can serve as a solution; this paper therefore uses Procrustes as the core of the clustering method. The algorithms proposed for the shape clustering process using Procrustes are hierarchical goodness-of-fit of Procrustes (HGoFP), k-means goodness-of-fit of Procrustes (KMGoFP), hierarchical ordinary Procrustes analysis (HOPA), and k-means ordinary Procrustes analysis (KMOPA). These algorithms were evaluated using the Rand index, Jaccard index, F-measure, and Purity. The data used was a line-drawing dataset consisting of 180 drawings classified into six clusters. The results show that the HGoFP, KMGoFP, HOPA, and KMOPA algorithms perform well on the Rand index, F-measure, and Purity, with 0.697 as a minimum value. On the Jaccard index, only the HGoFP, KMGoFP, and HOPA algorithms gave good clustering results, with 0.561 as a minimum value; KMGoFP had the worst Jaccard index, about 0.300. In terms of running time, the fastest algorithm was HGoFP, at 4.733. Based on these results, the proposed algorithms deserve consideration as new algorithms for clustering the objects in the line-drawing dataset, with HGoFP suggested in particular.
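Of the four evaluation measures, the Jaccard index here is the external (pair-counting) variant; a minimal sketch, where a counts pairs grouped together in both partitions, b pairs grouped together only by the clustering, and c pairs grouped together only by the ground truth:

    from itertools import combinations

    def clustering_jaccard(labels_pred, labels_true) -> float:
        """Pair-counting Jaccard index J = a / (a + b + c) between partitions."""
        a = b = c = 0
        for i, j in combinations(range(len(labels_true)), 2):
            same_pred = labels_pred[i] == labels_pred[j]
            same_true = labels_true[i] == labels_true[j]
            a += same_pred and same_true
            b += same_pred and not same_true
            c += same_true and not same_pred
        return a / (a + b + c)

    print(clustering_jaccard([0, 0, 1, 1], [0, 0, 1, 2]))  # illustrative labels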


PLoS ONE ◽  
2021 ◽  
Vol 16 (9) ◽  
pp. e0252777
Author(s):  
Dan Zhu ◽  
Haiyan Ding ◽  
M. Muz Zviman ◽  
Henry Halperin ◽  
Michael Schär ◽  
...  

Purpose: We aim to determine an advantageous approach to accelerating high-spatial-resolution 3D cardiac T2 relaxometry data by comparing the performance of different undersampling patterns and reconstruction methods over a range of acceleration rates.
Methods: Multi-volume 3D high-resolution cardiac images were acquired fully sampled and undersampled retrospectively using (1) optimal CAIPIRINHA and (2) variable-density random (VDR) sampling. Data were reconstructed using (1) multi-volume sensitivity encoding (SENSE), (2) joint-sparsity SENSE, and (3) model-based SENSE. Four metrics were calculated on 3 naïve swine and 8 normal human subjects over a whole left-ventricular region of interest: root-mean-square error (RMSE) of image signal intensity, RMSE of T2, bias of mean T2, and standard deviation (SD) of T2. Fully sampled data and volume-by-volume SENSE with standard equally spaced undersampling were used as references. The Jaccard index calculated from one swine with acute myocardial infarction (MI) was used to demonstrate preservation of the segmentation of edematous tissues with elevated T2.
Results: In naïve swine and normal human subjects, all methods had similar performance when the net reduction factor (Rnet) was <2.5. VDR sampling with model-based SENSE showed the lowest RMSEs (10.5%-14.2%) and SDs (+1.7-2.4 ms) of T2 when Rnet>2.5, while VDR sampling with joint-sparsity SENSE had the lowest bias of mean T2 (0.0-1.1 ms) when Rnet>3. The RMSEs of parametric T2 values (9.2%-24.6%) were larger than those of image signal intensities (5.2%-18.4%). In the swine with MI, VDR sampling with either joint-sparsity or model-based SENSE showed a consistently higher Jaccard index across all Rnet (0.71-0.50) than volume-by-volume SENSE (0.68-0.30).
Conclusions: Retrospective exploration of undersampling and reconstruction in 3D whole-heart T2 parametric mapping revealed that maps are more sensitive to undersampling than images, presenting a more stringent limit on Rnet. Combinations of VDR sampling patterns with model-based or joint-sparsity SENSE reconstructions were more robust for Rnet>3.
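A minimal sketch of two of the comparisons described above, normalized RMSE of a reconstructed T2 map against the fully sampled reference over an ROI, and the Jaccard index of edema segmentations obtained by thresholding T2 (the threshold value and the RMSE normalization are assumptions for illustration):

    import numpy as np

    def t2_rmse_percent(t2_recon, t2_ref, roi) -> float:
        """RMSE of T2 over the ROI, as a percentage of the mean reference T2."""
        err = t2_recon[roi] - t2_ref[roi]
        return 100.0 * np.sqrt(np.mean(err ** 2)) / np.mean(t2_ref[roi])

    def edema_jaccard(t2_recon, t2_ref, roi, thresh_ms=60.0) -> float:
        """Jaccard overlap of elevated-T2 segmentations (threshold illustrative)."""
        seg_a = (t2_recon > thresh_ms) & roi
        seg_b = (t2_ref > thresh_ms) & roi
        return np.logical_and(seg_a, seg_b).sum() / np.logical_or(seg_a, seg_b).sum()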

