AI-Blue-Carba: A Rapid and Improved Carbapenemase Producer Detection Assay Using Blue-Carba With Deep Learning

2020 ◽  
Vol 11 ◽  
Author(s):  
Ling Jia ◽  
Lu Han ◽  
He-Xin Cai ◽  
Ze-Hua Cui ◽  
Run-Shi Yang ◽  
...  

Rapid and accurate detection of carbapenemase-producing Gram-negative bacteria (CPGNB) is in immediate demand in the clinic. Here, we developed and validated a method for rapid detection of CPGNB using Blue-Carba combined with deep learning (designated AI-Blue-Carba). The optimum bacterial suspension concentration and detection wavelength were determined using a multimode plate reader and integrated with deep learning modeling. We examined 160 carbapenemase-producing and non-carbapenemase-producing bacteria using the Blue-Carba test, and a series of time and optical density values were obtained to build and validate the machine learning models. Subsequently, a simplified model was re-evaluated by reducing the dataset from 13 time points to 2 time points. The most suitable bacterial concentration was determined to be 1.5 optical density (OD), and the optimum detection wavelength for AI-Blue-Carba was set to 615 nm. Of the two models (LRM and LSTM), the LSTM model generated the higher ROC-AUC value. Moreover, training the simplified LSTM model on a short time window (0–15 min) did not impair its accuracy. Compared with the traditional Blue-Carba, the AI-Blue-Carba method has a sensitivity of 95.3% and a specificity of 95.7% at 15 min, making it a rapid and accurate method to detect CPGNB.
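For illustration, a minimal sketch of the kind of sequence model the abstract describes: an LSTM that takes a short series of optical-density readings and outputs a probability of carbapenemase production. The layer sizes, input shape, and example values below are assumptions for illustration, not the published AI-Blue-Carba configuration.

# Illustrative sketch only: a minimal LSTM binary classifier over short
# optical-density time series, assuming inputs shaped (batch, time, 1).
import torch
import torch.nn as nn

class ODSequenceClassifier(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                 # x: (batch, time_points, 1) OD values
        _, (h_n, _) = self.lstm(x)        # h_n: (1, batch, hidden_size)
        return self.head(h_n[-1])         # logit for carbapenemase-positive

model = ODSequenceClassifier()
# Two time points (e.g., 0 and 15 min) for one isolate, OD at 615 nm (placeholder values):
example = torch.tensor([[[0.42], [0.58]]])
prob = torch.sigmoid(model(example))      # predicted probability of CPGNB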

2019 ◽  
Vol 1 (1) ◽  
pp. 1
Author(s):  
Sifra Kristina Hartono ◽  
Tetiana Haniastuti ◽  
Heni Susilowati ◽  
Juni Handajani ◽  
Alma Linggar Jonarta

Pseudomonas aeruginosa (P. aeruginosa) is an opportunistic bacterium that can aggressively infect immunocompromised patients and thus cause a high mortality rate. In addition, P. aeruginosa in the oropharynx can be aspirated and cause ventilator-associated pneumonia. Royal jelly is a bee product that has been used for therapeutic purposes, including as an antibacterial agent. The adherence factors of P. aeruginosa are flagella, pili, and lectins. The aim of this study was to determine the effect of royal jelly on P. aeruginosa adhesion. A suspension of P. aeruginosa (ATCC® 27853) was incubated at 37 °C for 18 h. Treatment groups were exposed to royal jelly at several concentrations (2%, 4%, and 6%), while distilled water was used as a negative control. Bacterial adhesion was determined with a spectrophotometer at λ = 600 nm by measuring the optical density of the adhered bacterial suspension in tubes. One-way ANOVA showed significant differences (p < 0.05) in optical density values among groups, indicating that royal jelly affected bacterial adhesion. LSD results showed significant differences in optical density between the 2%, 4%, and 6% royal jelly groups and distilled water. The 6% royal jelly group had the lowest optical density of all groups. In conclusion, royal jelly can decrease the adhesion of P. aeruginosa, and 6% royal jelly decreases adhesion more than the other concentrations tested.
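As a rough illustration of the statistical comparison described above, the sketch below runs a one-way ANOVA and uncorrected pairwise comparisons on placeholder OD600 readings using scipy; the numbers and group sizes are invented, and the authors' actual software and LSD procedure are not reproduced here.

# Hedged sketch: comparing OD600 adhesion readings across groups with a
# one-way ANOVA and simple pairwise tests. All values are placeholders.
from scipy import stats

groups = {
    "control": [0.82, 0.80, 0.85, 0.83],
    "rj_2pct": [0.71, 0.69, 0.73, 0.70],
    "rj_4pct": [0.60, 0.62, 0.59, 0.61],
    "rj_6pct": [0.48, 0.50, 0.47, 0.49],
}

f_stat, p_value = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F={f_stat:.2f}, p={p_value:.4f}")

# Pairwise comparisons against the control (uncorrected, LSD-style):
for name, values in groups.items():
    if name != "control":
        t, p = stats.ttest_ind(groups["control"], values)
        print(f"control vs {name}: p={p:.4f}")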


2021 ◽  
Vol 11 (9) ◽  
pp. 3863
Author(s):  
Ali Emre Öztürk ◽  
Ergun Erçelebi

A large amount of training image data is required for solving image classification problems using deep learning (DL) networks. In this study, we aimed to train DL networks with synthetic images generated by a game engine and to determine their performance on real-image classification problems. The study presents the results of using corner detection and nearest three-point selection (CDNTS) layers to classify bird and rotary-wing unmanned aerial vehicle (RW-UAV) images, provides a comprehensive comparison of two different experimental setups, and emphasizes the significant improvements in the performance of deep learning-based networks due to the inclusion of a CDNTS layer. Experiment 1 corresponds to training commonly used deep learning-based networks on synthetic data and testing image classification on real data. Experiment 2 corresponds to training the CDNTS layer together with commonly used deep learning-based networks on synthetic data and testing image classification on real data. In experiment 1, the best area under the curve (AUC) value for the image classification test accuracy was measured as 72%. In experiment 2, using the CDNTS layer, the AUC value for the image classification test accuracy was measured as 88.9%. A total of 432 different training combinations were investigated in the experimental setups. The networks were trained with various DL architectures and four different optimizers, considering all combinations of the batch size, learning rate, and dropout hyperparameters. The test accuracy AUC values for the networks in experiment 1 ranged from 55% to 74%, whereas the test accuracy AUC values for the experiment 2 networks with a CDNTS layer ranged from 76% to 89.9%. The CDNTS layer was observed to have a considerable effect on the image classification accuracy of deep learning-based networks. AUC, F-score, and test accuracy measures were used to validate the success of the networks.
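As a loose illustration only, the sketch below detects corners in an image with OpenCV and keeps the three corners nearest the image centre. The actual CDNTS layer's selection rule and its integration with the networks are not specified in the abstract, so this is an assumption for intuition rather than the paper's method.

# Hypothetical corner-detection preprocessing sketch (not the paper's CDNTS layer).
import cv2
import numpy as np

def corner_triplet(gray_image):
    # Detect up to 50 corners, then keep the three closest to the image centre.
    corners = cv2.goodFeaturesToTrack(gray_image, maxCorners=50,
                                      qualityLevel=0.01, minDistance=5)
    if corners is None:
        return None
    pts = corners.reshape(-1, 2)
    centre = np.array(gray_image.shape[::-1], dtype=float) / 2.0
    nearest = pts[np.argsort(np.linalg.norm(pts - centre, axis=1))[:3]]
    return nearest        # (3, 2) array of (x, y) corner coordinates

img = cv2.imread("synthetic_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
if img is not None:
    print(corner_triplet(img))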


2021 ◽  
Vol 11 (6) ◽  
pp. 2723
Author(s):  
Fatih Uysal ◽  
Fırat Hardalaç ◽  
Ozan Peker ◽  
Tolga Tolunay ◽  
Nil Tokgöz

Fractures occur in the shoulder area, which has a wider range of motion than other joints in the body, for various reasons. To diagnose these fractures, data gathered from X-radiation (X-ray), magnetic resonance imaging (MRI), or computed tomography (CT) are used. This study aims to help physicians by classifying shoulder images taken from X-ray devices as fracture/non-fracture with artificial intelligence. For this purpose, the performance of 26 deep learning-based pre-trained models in the detection of shoulder fractures was evaluated on the musculoskeletal radiographs (MURA) dataset, and two ensemble learning models (EL1 and EL2) were developed. The pre-trained models used are ResNet, ResNeXt, DenseNet, VGG, Inception, MobileNet, and their spinal fully connected (Spinal FC) versions. For the EL1 and EL2 models, which were developed from the best-performing pre-trained models, the test accuracies were 0.8455 and 0.8472, Cohen's kappa values were 0.6907 and 0.6942, and the areas under the receiver operating characteristic (ROC) curve (AUC) for the fracture class were 0.8862 and 0.8695, respectively. As a result of 28 different classifications in total, the highest test accuracy and Cohen's kappa values were obtained with the EL2 model, and the highest AUC value was obtained with the EL1 model.
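One common way to combine pre-trained backbones into an ensemble is to average their class probabilities. The minimal PyTorch sketch below shows that idea with two assumed backbones and random placeholder input; it does not reproduce the authors' EL1/EL2 combination rules.

# Hedged sketch: probability-averaging ensemble over two pre-trained-style backbones.
import torch
import torch.nn as nn
from torchvision import models

def make_binary(backbone):
    # Replace the final layer with a two-class head: fracture vs non-fracture.
    backbone.fc = nn.Linear(backbone.fc.in_features, 2)
    return backbone

members = [make_binary(models.resnet34()),
           make_binary(models.resnext50_32x4d())]
for m in members:
    m.eval()

def ensemble_predict(x):
    with torch.no_grad():
        probs = [torch.softmax(m(x), dim=1) for m in members]
    return torch.stack(probs).mean(dim=0)   # averaged class probabilities

batch = torch.randn(4, 3, 224, 224)          # placeholder stand-in for X-ray images
print(ensemble_predict(batch).shape)         # torch.Size([4, 2])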


2021 ◽  
Vol 11 (11) ◽  
pp. 4753
Author(s):  
Gen Ye ◽  
Chen Du ◽  
Tong Lin ◽  
Yan Yan ◽  
Jack Jiang

(1) Background: Deep learning has become ubiquitous due to its impressive performance in domains as varied as computer vision, natural language and speech processing, and game playing. In this work, we investigated the performance of recent deep learning approaches on the laryngopharyngeal reflux (LPR) diagnosis task. (2) Methods: Our dataset is composed of 114 subjects, with 37 pH-positive cases and 77 control cases. In contrast to prior work based on either the reflux finding score (RFS) or pH monitoring, we take laryngoscope images directly as inputs to neural networks, as laryngoscopy is the most common and simple diagnostic method. The diagnosis task is formulated as a binary classification problem. We first tested a powerful backbone network that incorporates residual modules, an attention mechanism, and data augmentation. Furthermore, recent methods in transfer learning and few-shot learning were investigated. (3) Results: On our dataset, the best test classification accuracy is 73.4%, while the best AUC value is 76.2%. (4) Conclusions: This study demonstrates that deep learning techniques can be applied to classify LPR images automatically. Although the number of pH-positive images available for training is limited, a deep network can still learn discriminative features with the aid of these techniques.
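A typical transfer-learning setup of the kind the abstract mentions freezes a pre-trained backbone and trains only a new classification head. The sketch below shows this with an assumed ResNet-18 backbone; it is not the paper's actual architecture, augmentation scheme, or few-shot method.

# Hedged sketch: frozen pre-trained backbone with a new two-class head (LPR vs control).
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                                # freeze pre-trained features
backbone.fc = nn.Linear(backbone.fc.in_features, 2)        # trainable classification head
# During fine-tuning, only backbone.fc.parameters() would be passed to the optimizer.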


2022 ◽  
pp. 1-27
Author(s):  
Clifford Bohm ◽  
Douglas Kirkpatrick ◽  
Arend Hintze

Abstract Deep learning (primarily using backpropagation) and neuroevolution are the preeminent methods of optimizing artificial neural networks. However, they often create black boxes that are as hard to understand as the natural brains they seek to mimic. Previous work has identified an information-theoretic tool, referred to as R, which allows us to quantify and identify mental representations in artificial cognitive systems. The use of such measures has allowed us to make previous black boxes more transparent. Here we extend R not only to identify where complex computational systems store memory about their environment but also to differentiate between different time points in the past. We show how this extended measure can identify the location of memory related to past experiences in neural networks optimized by deep learning as well as by a genetic algorithm.


Author(s):  
Christian Herff ◽  
Dean J. Krusienski

Abstract Clinical data is often collected and processed as time series: a sequence of data indexed by successive time points. Such time series can range from signals sampled over short time intervals to represent continuous biophysical waveforms, such as the voltage measurements that constitute the electrocardiogram, to measurements sampled daily, weekly, or yearly, such as patient weight or blood triglyceride levels. When analyzing clinical data or designing biomedical systems for measurements, interventions, or diagnostic aids, it is important to represent the information contained within such time series in a more compact or meaningful form (e.g., after noise filtering), amenable to interpretation by a human or a computer. This process is known as feature extraction. This chapter discusses some fundamental techniques for extracting features from time series representing general forms of clinical data.
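As a small illustration of feature extraction from a uniformly sampled time series, the sketch below computes a few summary statistics and a dominant-frequency feature with NumPy; the specific features are examples chosen here, not the chapter's own list.

# Illustrative sketch: simple features from a uniformly sampled clinical signal.
import numpy as np

def extract_features(signal, fs):
    """signal: 1-D array of samples; fs: sampling rate in Hz."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))   # remove DC, take magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return {
        "mean": float(np.mean(signal)),
        "std": float(np.std(signal)),
        "min": float(np.min(signal)),
        "max": float(np.max(signal)),
        "dominant_freq_hz": float(freqs[np.argmax(spectrum)]),
    }

# Example: a noisy 1.2 Hz oscillation sampled at 100 Hz (a rough ECG-like rate)
t = np.arange(0, 10, 0.01)
waveform = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)
print(extract_features(waveform, fs=100))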


1970 ◽  
Vol 43 (2) ◽  
pp. 197-206
Author(s):  
MK Alam ◽  
MN Islam ◽  
MA Zaman

The neutron radiography (NR) technique has been adopted to study the homogeneity and water absorption behavior of building materials, namely double-layer silver gray tiles obtained from Concord Real Estate & Building Products, Unit II, Salna, Gazipur, Dhaka, Bangladesh. Measurements of the optical density differences between the film background and the radiographic images of the dry/wet samples were used in the present investigation. The optical density was measured using a digital optical densitometer (Model 07-424, S-23285, Victoreen Inc., USA). Large variation in the optical density values of the radiographic images was observed, showing that the rate of water absorption of the tiles increases with immersion time. By examining the radiographic images and subsequently analyzing the optical density, we observed that the distribution of elements in the tiles is inhomogeneous.
Key words: Homogeneity, Water absorption, Silver gray, Neutron radiography.
DOI: 10.3329/bjsir.v43i2.963
Bangladesh J. Sci. Ind. Res. 43(2), 197-206, 2008


Author(s):  
Leyla USLU

In this study, Porphyridium cruentum was cultured under laboratory conditions at 20±2 °C with a 16:8 (light:dark) photoperiod and continuous aeration, at different salinities (20‰, 30‰, and 40‰) and two light intensities (37 and 110 µmol photons m-2 s-1), and its growth was determined. Dry matter, optical density, and chlorophyll a were used to measure growth. The best growth was determined in the culture with a salinity of 30‰ at 110 µmol photons m-2 s-1 light intensity. In this group, the optical density (OD) was 1.504±0.003 and the dry matter amount was 1.327 g l-1. At 37 µmol photons m-2 s-1 light intensity, the optical density values were similar in the groups with 30‰ and 50‰ salinity, at 1.234±0.004 and 1.215±0.002, respectively. The amounts of dry matter were also similar, at 1.168 g l-1 and 1.159 g l-1, respectively. The lowest growth was observed in the culture at 37 µmol photons m-2 s-1 light intensity and 20‰ salinity; the optical density obtained on the last day in this group was 1.165±0.004 and the dry matter amount was determined as 0.986 g l-1. The highest chlorophyll a content was determined in the groups cultured at 37 µmol photons m-2 s-1 light intensity.


Medical image registration has important value in actual clinical applications. From traditional, time-consuming iterative similarity-optimization methods, to faster supervised deep learning, to today's unsupervised learning, the continuous refinement of registration strategies has made them more feasible in clinical applications. This survey focuses mainly on unsupervised learning methods and introduces the latest solutions for different registration relationships. Inter-modality registration is a more challenging topic, and the application of unsupervised learning to inter-modality registration is the focus of this article. In addition, this survey proposes ideas for future research methods to indicate directions for future work.
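Many unsupervised registration methods optimize an image-similarity term plus a smoothness penalty on the predicted displacement field. The sketch below shows one such loss in PyTorch, in the spirit of VoxelMorph-style methods; the registration network itself is omitted and all shapes are assumed for illustration.

# Hedged sketch: a typical unsupervised registration objective (similarity + smoothness).
import torch
import torch.nn.functional as F

def registration_loss(moved, fixed, flow, smooth_weight=0.01):
    similarity = F.mse_loss(moved, fixed)                       # intensity similarity term
    # Spatial gradients of the 2-D displacement field (batch, 2, H, W):
    dy = torch.abs(flow[:, :, 1:, :] - flow[:, :, :-1, :]).mean()
    dx = torch.abs(flow[:, :, :, 1:] - flow[:, :, :, :-1]).mean()
    return similarity + smooth_weight * (dx + dy)

fixed = torch.rand(1, 1, 64, 64)                                # placeholder fixed image
moved = torch.rand(1, 1, 64, 64)                                # warped moving image (from a network)
flow = torch.zeros(1, 2, 64, 64, requires_grad=True)            # predicted displacement field
print(registration_loss(moved, fixed, flow))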


2019 ◽  
Vol 35 (22) ◽  
pp. 4586-4595 ◽  
Author(s):  
Peng Ni ◽  
Neng Huang ◽  
Zhi Zhang ◽  
De-Peng Wang ◽  
Fan Liang ◽  
...  

Abstract
Motivation: Oxford Nanopore sequencing enables the direct detection of the methylation states of bases in DNA from reads without extra laboratory techniques. Novel computational methods are required to improve the accuracy and robustness of DNA methylation state prediction using Nanopore reads.
Results: In this study, we develop DeepSignal, a deep learning method to detect DNA methylation states from Nanopore sequencing reads. Testing on Nanopore reads of Homo sapiens (H. sapiens), Escherichia coli (E. coli), and pUC19 shows that DeepSignal achieves higher performance at both the read level and the genome level in detecting 6mA and 5mC methylation states compared to previous hidden Markov model (HMM) based methods. DeepSignal achieves similar performance across different DNA methylation bases, different DNA methylation motifs, and both singleton and mixed DNA CpGs. Moreover, DeepSignal requires much lower coverage than HMM and statistics-based methods: it achieves above 90% accuracy for detecting 5mC and 6mA using only 2× coverage of reads. Furthermore, for DNA CpG methylation state prediction, DeepSignal achieves 90% correlation with bisulfite sequencing using just 20× coverage of reads, which is much better than HMM-based methods. In particular, DeepSignal can predict the methylation states of 5% more DNA CpGs, which cannot be predicted by bisulfite sequencing. DeepSignal can be a robust and accurate method for detecting the methylation states of DNA bases.
Availability and implementation: DeepSignal is publicly available at https://github.com/bioinfomaticsCSU/deepsignal.
Supplementary information: Supplementary data are available at Bioinformatics online.
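As a hedged illustration of the read-level to genome-level summary the abstract refers to (and not DeepSignal's own code), the sketch below aggregates per-read methylation probabilities into per-site methylation frequencies.

# Illustrative sketch: aggregating per-read methylation calls to per-site frequencies.
from collections import defaultdict

def site_frequencies(read_calls, threshold=0.5):
    """read_calls: iterable of (chrom, position, probability_methylated) tuples."""
    counts = defaultdict(lambda: [0, 0])          # site -> [methylated reads, total reads]
    for chrom, pos, prob in read_calls:
        counts[(chrom, pos)][0] += int(prob >= threshold)
        counts[(chrom, pos)][1] += 1
    return {site: meth / total for site, (meth, total) in counts.items()}

# Placeholder calls for a single CpG site covered by three reads:
calls = [("chr1", 10468, 0.91), ("chr1", 10468, 0.12), ("chr1", 10468, 0.88)]
print(site_frequencies(calls))                    # {('chr1', 10468): 0.666...}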

