Detecting Liveness in Fingerprint Scanners Using Wavelets: Results of the Test Dataset

Author(s):  
Stephanie Schuckers ◽  
Aditya Abhyankar


2019 ◽  
Vol 20 (5) ◽  
pp. 565-578 ◽  
Author(s):  
Lidong Wang ◽  
Ruijun Zhang

Ubiquitination is an important post-translational modification (PTM) that regulates protein function and is associated with cancer, cardiovascular disease and other conditions. Recent initiatives have focused on detecting potential ubiquitination sites by combining physicochemical laboratory approaches with computational methods. Identifying ubiquitination sites through laboratory tests is costly and time-consuming, and is especially susceptible to the transient and reversible nature of the ubiquitination process. Computational methods, by contrast, have been shown to be effective in extracting predictive rules from collections of biological sequences, and have become one of the critical research approaches for identifying ubiquitination sites; numerous state-of-the-art methods based on machine learning and statistical analysis have been developed for this task. The present study summarises the construction of benchmark datasets, together with the feature representation methods, feature selection approaches and classifiers used in several previous publications. To explore development trends in the identification of ubiquitination sites, an independent test dataset was constructed, and the predictions obtained from five prediction tools on it are reported here, together with some related discussion.
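Many of the surveyed predictors start from a fixed-length sequence window centred on the candidate lysine. The following is a minimal sketch of one common feature representation (one-hot encoding of such a window, with 'X' padding at the protein termini); the window size and padding scheme are illustrative assumptions, not taken from any specific tool.

```python
# One-hot encoding of a sequence window centred on a candidate
# ubiquitination site (a lysine, 'K').  Window size and padding
# are illustrative assumptions, not from a specific predictor.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def window(seq, pos, flank=10):
    """Return the (2*flank+1)-residue window around seq[pos], padded with 'X'."""
    padded = "X" * flank + seq + "X" * flank
    centre = pos + flank
    return padded[centre - flank : centre + flank + 1]

def one_hot(win):
    """Flatten the window into a binary vector; 'X' maps to all zeros."""
    vec = []
    for aa in win:
        vec.extend(1 if aa == a else 0 for a in AMINO_ACIDS)
    return vec

seq = "MKLVKDEKTLRAGK"                       # toy protein sequence
win = window(seq, seq.index("K"), flank=10)  # window around the first lysine
features = one_hot(win)                      # 21 positions x 20 amino acids
```

A classifier (SVM, random forest, neural network) would then be trained on such vectors labelled by experimentally verified sites.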


2020 ◽  
Author(s):  
Jinseok Lee

BACKGROUND The coronavirus disease (COVID-19) has spread explosively worldwide since the beginning of 2020. According to a multinational consensus statement from the Fleischner Society, computed tomography (CT) can be used as a relevant screening tool owing to its higher sensitivity for detecting early pneumonic changes. However, physicians are extremely busy fighting COVID-19 in this era of worldwide crisis. Thus, it is crucial to accelerate the development of an artificial intelligence (AI) diagnostic tool to support physicians. OBJECTIVE We aimed to rapidly develop an AI technique to diagnose COVID-19 pneumonia on CT and differentiate it from non-COVID pneumonia and non-pneumonia diseases. METHODS A simple 2D deep learning framework, named the fast-track COVID-19 classification network (FCONet), was developed to diagnose COVID-19 pneumonia from a single chest CT image. FCONet was developed by transfer learning, using one of four state-of-the-art pre-trained deep learning models (VGG16, ResNet50, InceptionV3, or Xception) as a backbone. For training and testing of FCONet, we collected 3,993 chest CT images of patients with COVID-19 pneumonia, other pneumonia, and non-pneumonia diseases from Wonkwang University Hospital, Chonnam National University Hospital, and the Italian Society of Medical and Interventional Radiology public database. These CT images were split into training and test sets at a ratio of 8:2. On the test dataset, the diagnostic performance in diagnosing COVID-19 pneumonia was compared among the four pre-trained FCONet models. In addition, we tested the FCONet models on an external test dataset extracted from the embedded low-quality chest CT images of COVID-19 pneumonia in recently published papers.
RESULTS Of the four pre-trained FCONet models, ResNet50 showed excellent diagnostic performance (sensitivity 99.58%, specificity 100%, and accuracy 99.87%) and outperformed the other three pre-trained models on the test dataset. On the additional external test dataset of low-quality CT images, the detection accuracy of the ResNet50 model was the highest (96.97%), followed by Xception, InceptionV3, and VGG16 (90.71%, 89.38%, and 87.12%, respectively). CONCLUSIONS FCONet, a simple 2D deep learning framework based on a single chest CT image, provides excellent diagnostic performance in detecting COVID-19 pneumonia. Based on our test dataset, the ResNet50-based FCONet might be the best model, as it outperformed the other FCONet models based on VGG16, Xception, and InceptionV3.
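The reported figures rest on two simple mechanical steps: an 8:2 split of the image list and sensitivity/specificity/accuracy computed from confusion-matrix counts. A small framework-free sketch of that evaluation arithmetic, using toy counts rather than the study's data:

```python
import random

def split_8_2(items, seed=0):
    """Shuffle a list deterministically and split it 8:2 into train/test."""
    items = list(items)
    random.Random(seed).shuffle(items)
    cut = len(items) * 8 // 10
    return items[:cut], items[cut:]

def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# 3,993 image IDs split 8:2, as in the abstract; the metric values
# below use toy counts, not the study's results.
train, test = split_8_2(range(3993))
sens, spec, acc = diagnostic_metrics(tp=95, fp=0, tn=100, fn=5)
```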


2020 ◽  
pp. bjophthalmol-2020-317825
Author(s):  
Yonghao Li ◽  
Weibo Feng ◽  
Xiujuan Zhao ◽  
Bingqian Liu ◽  
Yan Zhang ◽  
...  

Background/aims To apply deep learning technology to develop an artificial intelligence (AI) system that can identify vision-threatening conditions in high myopia patients based on optical coherence tomography (OCT) macular images. Methods In this cross-sectional, prospective study, a total of 5505 qualified OCT macular images obtained from 1048 high myopia patients admitted to Zhongshan Ophthalmic Centre (ZOC) from 2012 to 2017 were selected for the development of the AI system. The independent test dataset included 412 images obtained from 91 high myopia patients recruited at ZOC from January 2019 to May 2019. We adopted the InceptionResnetV2 architecture to train four independent convolutional neural network (CNN) models to identify the following four vision-threatening conditions in high myopia: retinoschisis, macular hole, retinal detachment and pathological myopic choroidal neovascularisation. Focal Loss was used to address class imbalance, and optimal operating thresholds were determined according to the Youden Index. Results In the independent test dataset, the areas under the receiver operating characteristic curves were high for all conditions (0.961 to 0.999). Our AI system achieved sensitivities equal to or even better than those of retina specialists as well as high specificities (greater than 90%). Moreover, our AI system provided a transparent and interpretable diagnosis with heatmaps. Conclusions We used OCT macular images for the development of CNN models to identify vision-threatening conditions in high myopia patients. Our models achieved reliable sensitivities and high specificities, comparable to those of retina specialists, and may be applied for large-scale high myopia screening and patient follow-up.
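The abstract notes that operating thresholds were chosen by the Youden Index, J = sensitivity + specificity - 1. A minimal library-free sketch of that selection over candidate thresholds, on toy scores rather than the study's data:

```python
def youden_threshold(scores, labels):
    """Pick the score threshold maximising J = sensitivity + specificity - 1.

    A case is predicted positive when score >= threshold; candidate
    thresholds are the observed scores themselves.
    """
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        j = tp / pos + tn / neg - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Toy, perfectly separable scores: the optimal cut sits at 0.7.
threshold, j = youden_threshold(
    scores=[0.1, 0.2, 0.3, 0.7, 0.8, 0.9],
    labels=[0, 0, 0, 1, 1, 1],
)
```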


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Sougata Sadhukhan ◽  
Holly Root-Gutteridge ◽  
Bilal Habib

Abstract Previous studies have proposed the use of acoustics-based surveys to monitor population size and estimate density. However, decreasing the bias in population estimations, such as by using Capture–Mark–Recapture, requires the identification of individuals using supervised classification methods, especially for sparsely populated species like the wolf, which may otherwise be counted repeatedly. The cryptic behaviour of the Indian wolf (Canis lupus pallipes) poses serious challenges to survey efforts, and thus there is no reliable estimate of its population despite its prominent role in the ecosystem. Like other wolves, Indian wolves produce howls that can be detected over distances of more than 6 km, making them ideal candidates for acoustic surveys. Here, we explore the use of a supervised classifier to identify unknown individuals. We trained a supervised Agglomerative Nesting hierarchical clustering (AGNES) model using 49 howls from five Indian wolves and achieved 98% individual identification accuracy. We tested our model’s predictive power using 20 novel howls from a further four individuals (test dataset), achieving 75% accuracy in classifying howls to individuals. The model can reduce bias in population estimations using Capture–Mark–Recapture and track individual wolves non-invasively by their howls. This has potential for studies of wolves’ territory use, pack composition and reproductive behaviour. Our method can potentially be adapted for other species with individually distinctive vocalisations, representing an advanced tool for individual-level monitoring.
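AGNES is plain bottom-up (agglomerative) clustering: start with every howl as its own cluster and repeatedly merge the closest pair. A compact average-linkage sketch on toy 2-D feature vectors; the actual acoustic features and linkage criterion used in the study may differ:

```python
import math

def agnes(points, k):
    """Average-linkage agglomerative clustering down to k clusters.

    Returns the clusters as sorted lists of point indices.
    """
    def dist(a, b):
        return math.dist(points[a], points[b])

    clusters = [[i] for i in range(len(points))]   # one cluster per point
    while len(clusters) > k:
        best = None                                 # (avg_distance, i, j)
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = sum(dist(a, b) for a in clusters[i] for b in clusters[j])
                d /= len(clusters[i]) * len(clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best                              # merge the closest pair
        clusters[i] = sorted(clusters[i] + clusters[j])
        del clusters[j]
    return sorted(clusters)

# Two tight groups of toy "howl feature" points.
groups = agnes([(0, 0), (0, 1), (10, 10), (10, 11)], k=2)
```

Assigning a novel howl to the cluster whose members it is closest to, on average, then mirrors the individual-identification step.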


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Mu Sook Lee ◽  
Yong Soo Kim ◽  
Minki Kim ◽  
Muhammad Usman ◽  
Shi Sub Byon ◽  
...  

Abstract We examined the feasibility of explainable computer-aided detection of cardiomegaly in routine clinical practice using segmentation-based methods. Overall, 793 retrospectively acquired posterior–anterior (PA) chest X-ray images (CXRs) of 793 patients were used to train deep learning (DL) models for lung and heart segmentation. The training dataset included PA CXRs from two public datasets and in-house PA CXRs. Two fully automated segmentation-based methods using state-of-the-art DL models for lung and heart segmentation were developed. The diagnostic performance was assessed and the reliability of the automatic cardiothoracic ratio (CTR) calculation was determined using the mean absolute error and paired t-test. The effects of thoracic pathological conditions on performance were assessed using subgroup analysis. One thousand PA CXRs of 1000 patients (480 men, 520 women; mean age 63 ± 23 years) were included. The CTR values derived from the DL models and diagnostic performance exhibited excellent agreement with reference standards for the whole test dataset. Performance of segmentation-based methods differed based on thoracic conditions. When tested using CXRs with lesions obscuring heart borders, the performance was lower than that for other thoracic pathological findings. Thus, segmentation-based methods using DL could detect cardiomegaly; however, the feasibility of computer-aided detection of cardiomegaly without human intervention was limited.
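Once the heart and lung masks exist, the cardiothoracic ratio is simple arithmetic: the widest horizontal extent of the heart mask over the widest horizontal extent of the thorax, with cardiomegaly conventionally flagged above 0.5. A mask-level sketch on a toy grid; the study's exact CTR definition and threshold may differ in detail:

```python
def mask_width(mask):
    """Widest horizontal extent, in pixels, of a binary mask (list of rows)."""
    cols = [c for row in mask for c, v in enumerate(row) if v]
    return max(cols) - min(cols) + 1 if cols else 0

def cardiothoracic_ratio(heart_mask, thorax_mask):
    """CTR: maximal cardiac width over maximal thoracic width."""
    return mask_width(heart_mask) / mask_width(thorax_mask)

# Toy 3x10 masks: heart spans columns 3-7, thorax spans columns 0-9.
heart = [[0] * 10,
         [0, 0, 0, 1, 1, 1, 1, 1, 0, 0],
         [0] * 10]
thorax = [[1] * 10, [1] * 10, [1] * 10]

ctr = cardiothoracic_ratio(heart, thorax)
is_cardiomegaly = ctr > 0.5     # conventional cut-off
```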


2021 ◽  
pp. 159101992110009
Author(s):  
Xinke Liu ◽  
Junqiang Feng ◽  
Zhenzhou Wu ◽  
Zhonghao Neo ◽  
Chengcheng Zhu ◽  
...  

Objective Accurate diagnosis and measurement of intracranial aneurysms are challenging. This study aimed to develop a 3D convolutional neural network (CNN) model to detect and segment intracranial aneurysms (IA) on 3D rotational DSA (3D-RA) images. Methods 3D-RA images were collected and annotated by 5 neuroradiologists. The annotated images were then divided into three datasets: training, validation, and test. A 3D Dense-UNet-like CNN (3D-Dense-UNet) segmentation algorithm was constructed and trained using the training dataset. Diagnostic performance in detecting aneurysms and segmentation accuracy were assessed for the final model on the test dataset using free-response receiver operating characteristic (FROC) analysis. Finally, the CNN-inferred maximum diameter was compared against expert measurements by Pearson’s correlation and Bland-Altman limits of agreement (LOA). Results A total of 451 patients with 3D-RA images were split into n = 347/41/63 training/validation/test datasets, respectively. For aneurysm detection, FROC analysis showed that the model attained a sensitivity of 0.710 at 0.159 false positives (FP)/case and 0.986 at 1.49 FP/case. The proposed method had good agreement with reference manual aneurysmal maximum diameter measurements (8.3 ± 4.3 mm vs. 7.8 ± 4.8 mm), with a correlation coefficient r = 0.77, a small bias of 0.24 mm, and LOA of -6.2 to 5.71 mm. Overall, 37.0% and 77% of diameter measurements were within ±1 mm and ±2.5 mm of expert measurements, respectively. Conclusions A 3D-Dense-UNet model can detect and segment aneurysms with relatively high accuracy using 3D-RA images. The automatically measured maximum diameter has potential clinical application value.
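The bias and limits of agreement quoted above follow the standard Bland-Altman recipe: the mean of the paired differences, then that mean plus or minus 1.96 standard deviations. A library-free sketch on toy measurements, not the study's data:

```python
import statistics

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two paired measurement series."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)            # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Toy paired diameters (mm): e.g. CNN-inferred vs expert measurements.
auto = [8.0, 6.5, 4.2, 9.1]
manual = [8.0, 5.5, 4.2, 8.1]
bias, (lower, upper) = bland_altman(auto, manual)
```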


2021 ◽  
Vol 5 (1) ◽  
Author(s):  
Kara-Louise Royle ◽  
David A. Cairns

Abstract Background The United Kingdom Myeloma Research Alliance (UK-MRA) Myeloma Risk Profile is a prognostic model for overall survival. It was trained and tested on clinical trial data, aiming to improve the stratification of transplant ineligible (TNE) patients with newly diagnosed multiple myeloma. Missing data is a common problem that affects the development and validation of prognostic models, where decisions on how to address missingness have implications for the choice of methodology. Methods Model building The training and test datasets were the TNE pathways from two large randomised multicentre, phase III clinical trials. Potential prognostic factors were identified by expert opinion. Missing data in the training dataset was imputed using multiple imputation by chained equations. Univariate analysis fitted Cox proportional hazards models in each imputed dataset, with the estimates combined by Rubin’s rules. Multivariable analysis applied penalised Cox regression models, with a fixed penalty term across the imputed datasets. The estimates from each imputed dataset and bootstrap standard errors were combined by Rubin’s rules to define the prognostic model. Model assessment Calibration was assessed by visualising the observed and predicted probabilities across the imputed datasets. Discrimination was assessed by combining the prognostic separation D-statistic from each imputed dataset by Rubin’s rules. Model validation The D-statistic was applied in a bootstrap internal validation process in the training dataset and an external validation process in the test dataset, where acceptable performance was pre-specified. Development of risk groups Risk groups were defined using the tertiles of the combined prognostic index, obtained by combining the prognostic index from each imputed dataset by Rubin’s rules. Results The training dataset included 1852 patients, 1268 (68.47%) with complete case data. Ten imputed datasets were generated.
Five hundred and twenty patients were included in the test dataset. The D-statistic for the prognostic model was 0.840 (95% CI 0.716–0.964) in the training dataset and 0.654 (95% CI 0.497–0.811) in the test dataset, and the corrected D-statistic was 0.801. Conclusion The decision to impute missing covariate data in the training dataset influenced the methods implemented to train and test the model. To extend the current literature and aid future researchers, we have presented a detailed example of one approach. Whilst our example is not without limitations, a benefit is that all of the patient information available in the training dataset was utilised to develop the model. Trial registration Both trials were registered: Myeloma IX, ISRCTN68454111, registered 21 September 2000; Myeloma XI, ISRCTN49407852, registered 24 June 2009.
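Rubin's rules, used throughout this abstract to pool results across the ten imputed datasets, average the per-imputation estimates and inflate the variance by the between-imputation spread. A minimal sketch of the pooling arithmetic with toy numbers:

```python
import statistics

def rubins_rules(estimates, variances):
    """Pool estimates and variances from m imputed datasets by Rubin's rules."""
    m = len(estimates)
    q_bar = statistics.mean(estimates)       # pooled point estimate
    w = statistics.mean(variances)           # within-imputation variance
    b = statistics.variance(estimates)       # between-imputation variance
    total_var = w + (1 + 1 / m) * b          # total variance of the estimate
    return q_bar, total_var

# Toy per-imputation estimates and their variances (m = 3).
pooled, var = rubins_rules([1.0, 2.0, 3.0], [0.5, 0.5, 0.5])
```

The same pooling applies whether the per-imputation quantity is a Cox coefficient or, as here, a D-statistic.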


Author(s):  
Nils F. Grauhan ◽  
Stefan M. Niehues ◽  
Robert A. Gaudin ◽  
Sarah Keller ◽  
Janis L. Vahldiek ◽  
...  

Abstract Objective To train a convolutional neural network (CNN) to detect the most common causes of shoulder pain on plain radiographs and to assess its potential value as an assistive device for physicians. Materials and methods We used a CNN of the ResNet-50 architecture, which was trained on 2700 shoulder radiographs from the clinical practice of multiple institutions. All radiographs were reviewed and labeled for six findings: proximal humeral fractures, joint dislocation, periarticular calcification, osteoarthritis, osteosynthesis, and joint endoprosthesis. The trained model was then evaluated on a separate test dataset, which was previously annotated by three independent expert radiologists. Both the training and the test datasets included radiographs of highly variable image quality to reflect the clinical situation and to foster robustness of the CNN. Performance of the model was evaluated using receiver operating characteristic (ROC) curves, the derived AUC, as well as sensitivity and specificity. Results The developed CNN demonstrated high accuracy, with an area under the curve (AUC) of 0.871 for detecting fractures, 0.896 for joint dislocation, 0.945 for osteoarthritis, and 0.800 for periarticular calcifications. It also detected osteosynthesis and endoprosthesis with near perfect accuracy (AUC 0.998 and 1.0, respectively). Sensitivity and specificity were 0.75 and 0.86 for fractures, 0.95 and 0.65 for joint dislocation, 0.90 and 0.86 for osteoarthritis, and 0.60 and 0.89 for calcification. Conclusion CNNs have the potential to serve as an assistive device by providing clinicians with a means to prioritize worklists or additional safety in situations of increased workload.
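The AUCs reported here have a direct probabilistic reading: the chance that a randomly chosen positive case scores higher than a randomly chosen negative one, with ties counted as half. A short library-free sketch of that pairwise computation on toy scores:

```python
def auc(scores, labels):
    """AUC as the fraction of positive/negative pairs ranked correctly.

    Equivalent to the Mann-Whitney U statistic normalised by the
    number of positive/negative pairs; ties count as half a win.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

perfect = auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])   # fully separated scores
with_tie = auc([0.8, 0.4, 0.4, 0.2], [1, 1, 0, 0])  # one tied pair
```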


2020 ◽  
Vol 22 (Supplement_2) ◽  
pp. ii148-ii148
Author(s):  
Yoshihiro Muragaki ◽  
Yutaka Matsui ◽  
Takashi Maruyama ◽  
Masayuki Nitta ◽  
Taiichi Saito ◽  
...  

Abstract INTRODUCTION It is useful to know the molecular subtype of lower-grade gliomas (LGG) when deciding on a treatment strategy. This study aims to diagnose this preoperatively. METHODS A deep learning model was developed to predict the 3-group molecular subtype using multimodal data including magnetic resonance imaging (MRI), positron emission tomography (PET), and computed tomography (CT). The performance was evaluated using leave-one-out cross-validation with a dataset containing information from 217 LGG patients. RESULTS The model performed best when the dataset contained MRI, PET, and CT data. The model could predict the molecular subtype with an accuracy of 96.6% for the training dataset and 68.7% for the test dataset. The model achieved test accuracies of 58.5%, 60.4%, and 59.4% when the dataset contained only MRI, MRI and PET, and MRI and CT data, respectively. The conventional method, which predicts mutations in the isocitrate dehydrogenase (IDH) gene and the codeletion of chromosome arms 1p and 19q (1p/19q) sequentially, had an overall accuracy of 65.9%. This is 2.8 percentage points lower than that of the proposed method, which predicts the 3-group molecular subtype directly. CONCLUSIONS AND FUTURE PERSPECTIVE A deep learning model was developed to diagnose the molecular subtype preoperatively based on multi-modality data and to predict the 3-group classification directly. Cross-validation showed that the proposed model had an overall accuracy of 68.7% for the test dataset. This is the first model to achieve more than double the accuracy expected by chance for a 3-group classification problem when predicting the LGG molecular subtype. We plan to apply heat maps and/or segmentation techniques to increase prediction accuracy.
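Leave-one-out cross-validation, as used to evaluate the model, trains on all patients but one and tests on the held-out patient, cycling through the whole dataset. A minimal sketch of that loop with a stand-in 1-nearest-neighbour classifier on toy feature vectors; the study's actual model is a deep network:

```python
import math

def predict_1nn(train_x, train_y, x):
    """Label of the nearest training point by Euclidean distance."""
    nearest = min(range(len(train_x)), key=lambda i: math.dist(train_x[i], x))
    return train_y[nearest]

def loocv_accuracy(xs, ys):
    """Leave-one-out accuracy: each sample is predicted from all the others."""
    correct = 0
    for i in range(len(xs)):
        rest_x = xs[:i] + xs[i + 1:]
        rest_y = ys[:i] + ys[i + 1:]
        correct += predict_1nn(rest_x, rest_y, xs[i]) == ys[i]
    return correct / len(xs)

# Toy two-group data standing in for multimodal imaging features;
# the subtype labels are purely illustrative.
xs = [(0, 0), (0, 1), (5, 5), (5, 6)]
ys = ["IDH-mutant", "IDH-mutant", "IDH-wildtype", "IDH-wildtype"]
acc = loocv_accuracy(xs, ys)
```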


Author(s):  
R. PANCHAL ◽  
B. VERMA

Early detection of breast abnormalities remains the primary prevention against breast cancer despite advances in breast cancer diagnosis and treatment. The presence of a mass in breast tissue is highly indicative of breast cancer. The research presented in this paper investigates the significance of different types of features using the proposed neural-network-based classification technique to classify mass-type breast abnormalities in digital mammograms as malignant or benign. Fourteen gray-level features, four BI-RADS features, a patient-age feature and a subtlety-value feature were explored using the proposed research methodology to attain maximum classification accuracy on the test dataset. The proposed technique attained a 91% classification rate on the test dataset and a 100% classification rate on the training dataset, using digital mammograms taken from the DDSM benchmark database.
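The 20-element input described here (14 gray-level features, four BI-RADS features, age and subtlety) mixes very different scales, so inputs to a neural network are commonly normalised first. A small sketch of assembling and min-max scaling such a vector; the value ranges are illustrative assumptions, not the DDSM specifics:

```python
def min_max_scale(value, lo, hi):
    """Map value from [lo, hi] to [0, 1], clipping out-of-range inputs."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def feature_vector(gray_levels, birads, age, subtlety):
    """20-element input vector: 14 gray-level + 4 BI-RADS + age + subtlety.

    The ranges (8-bit gray levels, 1-5 BI-RADS and subtlety codes,
    age 0-100) are assumptions for illustration.
    """
    assert len(gray_levels) == 14 and len(birads) == 4
    vec = [min_max_scale(g, 0, 255) for g in gray_levels]  # gray-level features
    vec += [min_max_scale(b, 1, 5) for b in birads]        # BI-RADS features
    vec.append(min_max_scale(age, 0, 100))                 # patient age
    vec.append(min_max_scale(subtlety, 1, 5))              # subtlety value
    return vec

v = feature_vector([128] * 14, [3, 4, 2, 5], age=50, subtlety=4)
```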

