Reconstructing cell cycle and disease progression using deep learning

2016
Author(s):
Philipp Eulenberg
Niklas Köhler
Thomas Blasi
Andrew Filby
Anne E. Carpenter
...

Abstract: We show that deep convolutional neural networks combined with non-linear dimension reduction enable reconstructing biological processes based on raw image data. We demonstrate this by reconstructing the cell cycle of Jurkat cells and disease progression in diabetic retinopathy. In further analysis of Jurkat cells, we detect and separate a subpopulation of dead cells in an unsupervised manner and, in classifying discrete cell cycle stages, we reach a 6-fold reduction in error rate compared to a recent approach based on boosting on image features. In contrast to previous methods, deep-learning-based predictions are fast enough for on-the-fly analysis in an imaging flow cytometer.
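
As a concrete illustration of the pipeline this abstract describes, the sketch below uses a pretrained CNN as a fixed feature extractor and embeds the features with t-SNE as one common non-linear dimension reduction. The backbone, input shapes, and embedding choice are assumptions for illustration, not the authors' exact setup.

```python
import torch
import torchvision.models as models
from sklearn.manifold import TSNE

# Pretrained CNN used as a fixed feature extractor (placeholder choice).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the classifier, keep 512-d features
backbone.eval()

@torch.no_grad()
def extract_features(images):
    """images: (N, 3, 224, 224) float tensor of single-cell images."""
    return backbone(images).numpy()

# Toy batch standing in for cell images from an imaging flow cytometer.
images = torch.rand(64, 3, 224, 224)
features = extract_features(images)

# Non-linear dimension reduction: a continuous process such as the cell
# cycle should appear as a trajectory in the 2-D embedding.
embedding = TSNE(n_components=2, perplexity=15).fit_transform(features)
print(embedding.shape)  # (64, 2)
```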

2020
Vol 10 (1)
Author(s):
Jonathan Stubblefield
Mitchell Hervert
Jason L. Causey
Jake A. Qualls
Wei Dong
...

Abstract: One of the challenges in the urgent evaluation of patients with acute respiratory distress syndrome (ARDS) in the emergency room (ER) is distinguishing between cardiac and infectious etiologies for their pulmonary findings. We conducted a retrospective study using data collected from 171 ER patients. Classification of ER patients into cardiac and infectious causes was evaluated using clinical data and chest X-ray images. We show that a deep-learning model trained on an external image data set can be used to extract image features and improve the classification accuracy on a data set that does not contain enough image data to train a deep-learning model. An analysis of clinical feature importance was performed to identify the clinical features most important for ER patient classification. The current model is publicly available with an interface at the web link: http://nbttranslationalresearch.org/.
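
A minimal sketch of the transfer-learning idea described above: a CNN pretrained on an external image set supplies chest X-ray features, which are concatenated with clinical variables and passed to a classical classifier that can be trained on a small cohort. The backbone, feature dimensions, and classifier below are illustrative assumptions.

```python
import numpy as np
import torch
import torchvision.models as models
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Pretrained network used only as a feature extractor (assumed choice).
cnn = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
cnn.classifier = torch.nn.Identity()   # keep the 1024-d pooled features
cnn.eval()

@torch.no_grad()
def xray_features(batch):              # batch: (N, 3, 224, 224)
    return cnn(batch).numpy()

n = 171                                # cohort size reported in the study
imgs = torch.rand(n, 3, 224, 224)      # placeholder chest X-rays
clinical = np.random.rand(n, 20)       # placeholder clinical variables
y = np.random.randint(0, 2, n)         # cardiac (0) vs infectious (1) labels

# Feature-level combination of image and clinical data.
X = np.hstack([xray_features(imgs), clinical])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```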


Author(s):
Ozge Oztimur Karadag
Ozlem Erdas

In traditional image processing approaches, low-level image features are first extracted and then sent to a classifier or recognizer for further processing. While traditional image processing techniques employ this step-by-step approach, the majority of recent studies prefer layered architectures that both extract features and perform the classification or recognition task. These architectures are referred to as deep learning techniques, and they are applicable when a sufficient amount of labeled data is available and the minimum system requirements are met. Nevertheless, much of the time either the data is insufficient or system resources are inadequate. In this study, we examined how it is still possible to obtain an effective visual representation by combining low-level visual features with features from a simple deep learning model. The combined features achieved an accuracy of 0.80 on the image data set, whereas low-level features and deep learning features alone achieved 0.70 and 0.74, respectively.
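
The combination step can be sketched as follows, with a colour histogram standing in for the low-level descriptor and a deliberately small CNN standing in for the simple deep model; the study's actual features and network are not specified here.

```python
import torch
import torch.nn as nn

def low_level_features(img):
    """img: (3, H, W) float tensor in [0, 1] -> 48-bin colour histogram."""
    hist = [torch.histc(img[c], bins=16, min=0.0, max=1.0) for c in range(3)]
    return torch.cat(hist) / img[0].numel()

class SimpleCNN(nn.Module):            # a deliberately small deep model
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
    def forward(self, x):
        return self.body(x)

cnn = SimpleCNN().eval()
img = torch.rand(3, 64, 64)
with torch.no_grad():
    deep = cnn(img.unsqueeze(0)).squeeze(0)            # 32-d deep feature
combined = torch.cat([low_level_features(img), deep])  # 80-d joint feature
print(combined.shape)   # feed this vector to any classifier
```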


2021
Vol 29 (4)
Author(s):
Mohammed Enamul Hoque
Kuryati Kipli
Tengku Mohd Afendi Zulcaffle
Abdulrazak Yahya Saleh Al-Hababi
Dayang Azra Awang Mat
...

Retinal image analysis is crucially important for detecting various life-threatening cardiovascular and ophthalmic diseases, as the human retinal microvasculature exhibits marked abnormalities in response to these disorders. The high dimensionality and rapid accumulation of retinal images enlarge the data size, creating complexity in managing and understanding the retinal image data. Deep Learning (DL) has been introduced to deal with this big data challenge by developing intelligent tools. Convolutional Neural Networks (CNNs), a DL approach, are designed to extract hierarchical image features with greater abstraction. To assist ophthalmologists in eye screening and ophthalmic disease diagnosis, CNNs are being explored to create automatic systems for microvascular pattern analysis, feature extraction, and quantification of retinal images. Extraction of the true vessels of the retinal microvasculature is significant for further analysis, such as quantification of vessel diameter and bifurcation angle. This study proposes an approach for extracting true vessel segments, a key retinal image feature, using Faster R-CNN. Fundamental image processing principles were employed to pre-process the retinal image data. A combined database assembling image data from several publicly available databases was used to train, test, and evaluate the proposed method, which obtained 92.81% sensitivity and a 63.34% positive predictive value in extracting true vessel segments from the first tier of colour retinal images. With further evaluation and validation of its performance, this method is expected to be integrated into ophthalmic diagnostic tools.
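
A minimal sketch of fine-tuning a Faster R-CNN detector for vessel-segment boxes, assuming annotations in torchvision's detection format; the study's pre-processing and training schedule are not reproduced here.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Two classes: background (0) and true vessel segment (1).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_feats = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, num_classes=2)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()

# One toy training step on a placeholder fundus image with one annotated box.
images = [torch.rand(3, 512, 512)]
targets = [{"boxes": torch.tensor([[100., 120., 180., 200.]]),
            "labels": torch.tensor([1])}]
losses = model(images, targets)        # dict of RPN and ROI-head losses
sum(losses.values()).backward()
optimizer.step()
```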


2021
Vol 2021
pp. 1-12
Author(s):
Kai Zhang
Chengquan Hu
Hang Yu

To address the problems that high-resolution remote sensing images contain many features and that a single feature description yields low classification accuracy, a deep-learning-based land classification model for remote sensing images is proposed from the perspective of ecological resource utilization. First, the remote sensing images obtained by the Gaofen-1 satellite, comprising multispectral and panchromatic data, are preprocessed. Then, color, texture, shape, and local features are extracted from the image data, and a feature-level image fusion method is used to associate these features and realize the fusion of remote sensing image features. Finally, the fused image features are input into a trained deep belief network (DBN) for processing, and the land type is obtained via a Softmax classifier. Experimental analysis of the proposed model on the Keras and TensorFlow platforms shows that it can clearly classify all land types, and the overall accuracy, F1 value, and inference time of the classification results are 97.86%, 87.25%, and 128 ms, respectively, better than those of the comparison models.
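
The fusion-then-classify stage can be sketched as below: the four feature vectors are concatenated (feature-level fusion) and passed to a network ending in a Softmax. A small MLP stands in for the deep belief network, since DBNs are not part of mainstream frameworks, and all dimensions are invented.

```python
import torch
import torch.nn as nn

# Placeholder colour, texture, shape, and local feature vectors for 8 tiles.
color, texture, shape, local = (torch.rand(8, d) for d in (32, 64, 16, 128))
fused = torch.cat([color, texture, shape, local], dim=1)   # (8, 240)

classifier = nn.Sequential(            # stand-in for the trained DBN
    nn.Linear(240, 128), nn.ReLU(),
    nn.Linear(128, 6))                 # six hypothetical land-cover types
probs = torch.softmax(classifier(fused), dim=1)
print(probs.argmax(dim=1))             # predicted land type per sample
```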


2019
Vol 2019 (1)
pp. 360-368
Author(s):
Mekides Assefa Abebe
Jon Yngve Hardeberg

Various whiteboard image degradations greatly reduce the legibility of pen-stroke content as well as the overall quality of the images. Consequently, researchers have addressed the problem with various image enhancement techniques. Most state-of-the-art approaches apply common image processing techniques such as background-foreground segmentation, text extraction, contrast and color enhancement, and white balancing. However, such conventional enhancement methods are incapable of recovering severely degraded pen-stroke content and produce artifacts in the presence of complex pen-stroke illustrations. To surmount these problems, the authors propose a deep-learning-based solution. They contribute a new whiteboard image data set and adopt two deep convolutional neural network architectures for whiteboard image quality enhancement. Their evaluations of the trained models demonstrate superior performance over the conventional methods.
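
As a rough sketch of how such an enhancement network is trained, the toy model below maps a degraded whiteboard photo to a clean target with a pixel-wise L1 loss. The generic convolutional stack and the paired data are assumptions; the authors' two architectures are not reproduced here.

```python
import torch
import torch.nn as nn

# Generic image-to-image CNN (assumption, not the paper's architecture).
enhancer = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())

opt = torch.optim.Adam(enhancer.parameters(), lr=1e-4)

# Placeholder degraded/clean whiteboard image pair.
degraded = torch.rand(4, 3, 256, 256)
clean = torch.rand(4, 3, 256, 256)

loss = nn.functional.l1_loss(enhancer(degraded), clean)  # pixel-wise L1
opt.zero_grad(); loss.backward(); opt.step()
```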


2019
Vol 277
pp. 02024
Author(s):
Lincan Li
Tong Jia
Tianqi Meng
Yizhe Liu

In this paper, an accurate two-stage deep learning method is proposed to detect vulnerable plaques in ultrasonic cardiovascular images. First, a Fully Convolutional Network (FCN) named U-Net is used to segment the original Intravascular Optical Coherence Tomography (IVOCT) cardiovascular images. We experimented with different threshold values to find the best threshold for removing noise and background from the original images. Second, a modified Faster R-CNN is adopted to perform precise detection. The modified Faster R-CNN utilizes anchors at six scales (12², 16², 32², 64², 128², 256²) instead of the conventional one-scale or three-scale approaches. We first present three problems in cardiovascular vulnerable plaque diagnosis and then demonstrate how our method solves them. The proposed method applies deep convolutional neural networks to the whole diagnostic procedure. Test results show that the Recall rate, Precision rate, IoU (Intersection-over-Union) rate, and Total score are 0.94, 0.885, 0.913, and 0.913, respectively, higher than those of the first-place team in the CCCV2017 Cardiovascular OCT Vulnerable Plaque Detection Challenge. The AP of the designed Faster R-CNN is 83.4%, higher than that of conventional approaches using one-scale or three-scale anchors. These results demonstrate the superior performance of our proposed method and the power of deep learning approaches in diagnosing cardiovascular vulnerable plaques.
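
The six-scale anchor configuration can be expressed directly in torchvision, as sketched below with the anchor areas 12² through 256² given as side lengths on a single feature map; the backbone and the rest of the detector setup are generic assumptions, not the paper's exact model.

```python
import torch
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.anchor_utils import AnchorGenerator

# Single-feature-map backbone so all six anchor scales live on one level.
backbone = torchvision.models.mobilenet_v2(weights=None).features
backbone.out_channels = 1280

anchor_gen = AnchorGenerator(
    sizes=((12, 16, 32, 64, 128, 256),),     # the paper's six anchor scales
    aspect_ratios=((0.5, 1.0, 2.0),))
roi_pool = torchvision.ops.MultiScaleRoIAlign(
    featmap_names=["0"], output_size=7, sampling_ratio=2)

model = FasterRCNN(backbone, num_classes=2,  # plaque vs background
                   rpn_anchor_generator=anchor_gen, box_roi_pool=roi_pool)
model.eval()
with torch.no_grad():
    out = model([torch.rand(3, 512, 512)])   # list of detection dicts
print(out[0]["boxes"].shape)
```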


Sensors
2021
Vol 21 (3)
pp. 863
Author(s):
Vidas Raudonis
Agne Paulauskaite-Taraseviciene
Kristina Sutiene

Background: Cell detection and counting are of essential importance in evaluating the quality of early-stage embryos. Full automation of this process remains a challenging task due to differing cell sizes and shapes, incomplete cell boundaries, and partially or fully overlapping cells. Moreover, the algorithm to be developed should process a large amount of image data of varying quality in a reasonable amount of time. Methods: A multi-focus image fusion approach based on the deep learning U-Net architecture is proposed in this paper; it reduces the amount of data by up to 7 times without losing the spectral information required for embryo enhancement in the microscopic image. Results: The experiment includes visual and quantitative analysis, estimating image similarity metrics and processing times, compared against the results achieved by two well-known techniques: Inverse Laplacian Pyramid Transform and Enhanced Correlation Coefficient Maximization. Conclusion: The image fusion time is substantially improved across different image resolutions, while the high quality of the fused image is preserved.
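
The input/output contract of the fusion network can be sketched as follows: a stack of focal planes (seven here, matching the 7-fold data reduction) goes in and one fused image comes out. The toy encoder-decoder below is only a placeholder for the authors' U-Net.

```python
import torch
import torch.nn as nn

class TinyFusionNet(nn.Module):
    """Toy encoder-decoder standing in for the U-Net fusion model."""
    def __init__(self, planes=7):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(planes, 32, 3, padding=1),
                                 nn.ReLU(), nn.MaxPool2d(2))
        self.dec = nn.Sequential(nn.Upsample(scale_factor=2),
                                 nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())
    def forward(self, x):
        return self.dec(self.enc(x))

stack = torch.rand(1, 7, 256, 256)     # seven focal planes of one embryo
fused = TinyFusionNet()(stack)         # one fused image
print(fused.shape)                     # torch.Size([1, 1, 256, 256])
```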


Sensors
2021
Vol 21 (8)
pp. 2611
Author(s):
Andrew Shepley
Greg Falzon
Christopher Lawson
Paul Meek
Paul Kwan

Image data is one of the primary sources of ecological data used in biodiversity conservation and management worldwide. However, classifying and interpreting large numbers of images is time- and resource-intensive, particularly in the context of camera trapping. Deep learning models have been used for this task but are often not suited to specific applications due to their inability to generalise to new environments and their inconsistent performance. Models need to be developed for specific species cohorts and environments, but the technical skills required to achieve this are a key barrier to the accessibility of this technology to ecologists. Thus, there is a strong need to democratise access to deep learning technologies by providing an easy-to-use software application that allows non-technical users to train custom object detectors. U-Infuse addresses this issue by giving ecologists the ability to train customised models using publicly available images and/or their own images, without specific technical expertise. Auto-annotation and annotation-editing functionalities minimise the burden of manually annotating and pre-processing large numbers of images. U-Infuse is a free and open-source software solution that supports both multiclass and single-class training and object detection, allowing ecologists to access deep learning technologies usually available only to computer scientists, on their own device, customised for their application, without sharing intellectual property or sensitive data. It provides ecological practitioners with the ability to (i) easily achieve object detection within a user-friendly GUI, generating a species distribution report and other useful statistics, (ii) custom-train deep learning models using publicly available and custom training data, and (iii) achieve supervised auto-annotation of images for further training, with the benefit of editing annotations to ensure quality data sets. Broad adoption of U-Infuse by ecological practitioners will improve ecological image analysis and processing by allowing significantly more image data to be processed with minimal expenditure of time and resources, particularly for camera trap images. Ease of training and the use of transfer learning mean that domain-specific models can be trained rapidly and frequently updated without the need for computer science expertise or data sharing, protecting intellectual property and privacy.


Sensors
2021
Vol 21 (16)
pp. 5312
Author(s):
Yanni Zhang
Yiming Liu
Qiang Li
Jianzhong Wang
Miao Qi
...

Recently, deep-learning-based image deblurring and deraining have been well developed. However, most of these methods fail to distill the useful features. Moreover, exploiting detailed image features in a deep learning framework typically requires a large number of parameters, which inevitably burdens the network with high computational cost. To solve these problems, we propose a lightweight fusion distillation network (LFDN) for image deblurring and deraining. The proposed LFDN is designed as an encoder-decoder architecture. In the encoding stage, the image features are reduced to various small-scale spaces for multi-scale information extraction and fusion without much information loss. A feature distillation normalization block is then placed at the beginning of the decoding stage, enabling the network to continuously distill and screen valuable channel information from the feature maps. In addition, an attention mechanism carries out information fusion between the distillation modules and feature channels. By fusing different information in the proposed approach, our network achieves state-of-the-art image deblurring and deraining results with fewer parameters and outperforms existing methods in model complexity.
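
One common realisation of the channel-screening step described here is a squeeze-and-excitation style gate, sketched below; this is an assumption about the flavour of LFDN's distillation block, not its published design.

```python
import torch
import torch.nn as nn

class ChannelGate(nn.Module):
    """Re-weights feature-map channels so valuable ones are emphasised."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
    def forward(self, x):
        w = self.fc(x).view(x.size(0), -1, 1, 1)  # per-channel weights
        return x * w

feat = torch.rand(2, 64, 32, 32)
print(ChannelGate(64)(feat).shape)     # torch.Size([2, 64, 32, 32])
```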

