Live Cancer Cell Classification Based on Quantitative Phase Spatial Fluctuations and Deep Learning With a Small Training Set

2021 ◽  
Vol 9 ◽  
Author(s):  
Noa Rotman-Nativ ◽  
Natan T. Shaked

We present an analysis method that can automatically classify live cancer cells from cell lines based on a small data set of quantitative phase imaging data without cell staining. The method includes spatial image analysis to extract the cell phase spatial fluctuation map, derived from the quantitative phase map of the cell measured without cell labeling, and thus without prior knowledge of a biomarker. The spatial fluctuations are indicative of cell stiffness, and cancer cells change their stiffness as the disease progresses. In this paper, the quantitative phase spatial fluctuations are used as the basis for a deep-learning classifier for evaluating the cell metastatic potential. Performing the spatial fluctuation analysis on the quantitative phase profiles before inputting them to the neural network was shown to improve classification performance compared with inputting the quantitative phase profiles directly, as has been done previously. We classified between primary and metastatic cancer cells and obtained 92.5% accuracy despite using a small training set, demonstrating the method's potential for objective automatic clinical diagnosis of cancer cells in vitro.
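
As a minimal sketch of the fluctuation-map-plus-classifier pipeline described above: the code below assumes the spatial fluctuation map can be approximated by a local standard deviation of the quantitative phase map (window size is an assumption) and feeds it to a small, heavily regularized Keras CNN suited to a small training set. It is illustrative only, not the authors' exact processing chain.

```python
# Sketch: quantitative phase map -> spatial fluctuation map -> small CNN classifier.
# Assumptions: "fluctuation" = local standard deviation; 7-pixel window; binary
# output (primary vs. metastatic). None of these are taken from the paper itself.
import numpy as np
from scipy.ndimage import uniform_filter
import tensorflow as tf

def spatial_fluctuation_map(phase: np.ndarray, window: int = 7) -> np.ndarray:
    """Local standard deviation of the phase map (assumed fluctuation measure)."""
    mean = uniform_filter(phase, size=window)
    mean_sq = uniform_filter(phase ** 2, size=window)
    return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

def build_classifier(input_shape=(128, 128, 1)) -> tf.keras.Model:
    """Small CNN with dropout, chosen to limit over-fitting on a small data set."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # primary vs. metastatic
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```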

2019 ◽  
Vol 1 ◽  
pp. 1-1
Author(s):  
Tee-Ann Teo

Abstract. Deep learning is a kind of machine learning technology that utilizes deep neural networks to learn a promising model from a large training data set. The Convolutional Neural Network (CNN) has been successfully applied to image segmentation and classification with highly accurate results. A CNN applies multiple kernels (also called filters) to extract image features via image convolution, and it can determine multiscale features through multiple layers of convolution and pooling. The variety of training data plays an important role in obtaining a reliable CNN model. Benchmark training data for road mark extraction are mainly focused on close-range imagery, because close-range images are easier to obtain than airborne images; one example is the KITTI Vision Benchmark Suite. This study aims to transfer road mark training data from a mobile lidar system to aerial orthoimagery for use in Fully Convolutional Networks (FCN). Transforming the training data from a ground-based system to an airborne system may reduce the effort of producing a large training data set.

This study uses FCN technology and aerial orthoimagery to localize road marks on road regions. The road regions are first extracted from a 2-D large-scale vector map. The input aerial orthoimage has 10 cm spatial resolution, and the non-road regions are masked out before road mark localization. The training data are road-mark polygons, originally digitized from ground-based mobile lidar and prepared for road mark extraction using a mobile mapping system. This study reuses these training data and applies them to road mark extraction from aerial orthoimagery. The digitized training road marks are transformed to road polygons based on mapping coordinates. As the detail of ground-based lidar is much better than that of the airborne system, parking lots partially occluded in the aerial orthoimage can also be obtained from the ground-based system. The labels (also called annotations) for the FCN include road region, non-road region, and road mark. The size of a training batch is 500 pixels by 500 pixels (50 m by 50 m on the ground), and the total number of training batches is 75. After the FCN training stage, an independent aerial orthoimage (Figure 1a) is used to predict road marks. The FCN results provide initial regions for road marks (Figure 1b). Road marks usually show higher reflectance than road asphalt, so this study uses this characteristic to refine the road marks (Figure 1c) by a binary classification inside each initial road-mark region, as sketched below.

Comparing the automatically extracted road marks (Figure 1c) with manually digitized road marks (Figure 1d), most road marks can be extracted using the training set from the ground-based system. This study also selects an area of 600 m × 200 m for quantitative analysis. Among the 371 reference road marks, 332 can be extracted by the proposed scheme, for a completeness of 89%. The preliminary experiment demonstrates that most road marks can be successfully extracted by the proposed scheme; therefore, training data from a ground-based mapping system can be utilized with airborne orthoimagery of similar spatial resolution.
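
One plausible reading of the refinement step is a per-region brightness threshold: inside each road-mark region proposed by the FCN, keep only the brighter pixels, since paint is assumed to reflect more than asphalt. The sketch below uses Otsu thresholding for that binary classification; the abstract does not name the actual method, so this is an assumption.

```python
# Sketch of the reflectance-based refinement of FCN road-mark regions.
# Assumptions: grayscale orthoimage, Otsu threshold per connected region.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def refine_road_marks(orthoimage_gray: np.ndarray, fcn_mask: np.ndarray) -> np.ndarray:
    """Keep only the bright pixels inside each FCN-proposed road-mark region."""
    refined = np.zeros_like(fcn_mask, dtype=bool)
    for region in regionprops(label(fcn_mask)):
        rr, cc = region.coords[:, 0], region.coords[:, 1]
        values = orthoimage_gray[rr, cc]
        if values.size < 2 or values.min() == values.max():
            continue  # degenerate region: nothing to threshold
        t = threshold_otsu(values)
        keep = values > t  # brighter pixels assumed to be road-mark paint
        refined[rr[keep], cc[keep]] = True
    return refined
```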


2021 ◽  
Author(s):  
Avi Gamoran ◽  
Yonatan Kaplan ◽  
Ram Isaac Orr ◽  
Almog Simchon ◽  
Michael Gilead

This paper describes our approach to the CLPsych 2021 Shared Task, in which we aimed to predict suicide attempts based on Twitter feed data. We addressed this challenge by emphasizing reliance on prior domain knowledge. We engineered novel theory-driven features, and integrated prior knowledge with empirical evidence in a principled manner using Bayesian modeling. While this theory-guided approach increases bias and lowers accuracy on the training set, it was successful in preventing over-fitting. The models provided reasonable classification accuracy on unseen test data (0.68 ≤ AUC ≤ 0.84). Our approach may be particularly useful in prediction tasks trained on a relatively small data set.
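
One simple way to picture "integrating prior knowledge with empirical evidence in a principled manner" is a logistic regression whose coefficients receive informative Gaussian priors, estimated by MAP. The sketch below is purely illustrative: the feature set, prior means/scales, and synthetic data are assumptions, not the authors' actual model.

```python
# Sketch: MAP-estimated logistic regression with informative Gaussian priors,
# as a generic stand-in for Bayesian integration of prior domain knowledge.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def map_logistic(X, y, prior_mean, prior_sd):
    """MAP weights under independent N(prior_mean, prior_sd^2) coefficient priors."""
    prior_mean = np.asarray(prior_mean, dtype=float)
    prior_sd = np.asarray(prior_sd, dtype=float)

    def neg_log_posterior(w):
        logits = X @ w
        log_lik = np.sum(y * logits - np.logaddexp(0.0, logits))
        log_prior = -0.5 * np.sum(((w - prior_mean) / prior_sd) ** 2)
        return -(log_lik + log_prior)

    return minimize(neg_log_posterior, prior_mean.copy(), method="L-BFGS-B").x

# Toy usage with two hypothetical theory-driven features: the prior says the
# first is positively associated with risk, the second is near zero.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (expit(X @ np.array([1.0, 0.0]) + rng.normal(scale=0.5, size=200)) > 0.5).astype(float)
print("MAP weights:", map_logistic(X, y, prior_mean=[0.8, 0.0], prior_sd=[0.5, 0.2]))
```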


2018 ◽  
pp. 1-8 ◽  
Author(s):  
Okyaz Eminaga ◽  
Nurettin Eminaga ◽  
Axel Semjonow ◽  
Bernhard Breil

Purpose: The recognition of cystoscopic findings remains challenging for young colleagues and depends on the examiner's skills. Computer-aided diagnosis tools using feature extraction and deep learning show promise as instruments to perform diagnostic classification. Materials and Methods: Our study considered 479 patient cases that represented 44 urologic findings. Image color was linearly normalized and equalized by applying contrast-limited adaptive histogram equalization. Because these findings can be viewed via cystoscopy from every possible angle and side, we generated images rotated in 10-degree increments and flipped them vertically or horizontally, which resulted in 18,681 images. After image preprocessing, we developed deep convolutional neural network (CNN) models (ResNet50, VGG-19, VGG-16, InceptionV3, and Xception) and evaluated these models using F1 scores. Furthermore, we proposed two CNN concepts: 90%-previous-layer filter size and harmonic-series filter size. A training set (60%), a validation set (10%), and a test set (30%) were randomly generated from the study data set. All models were trained on the training set, validated on the validation set, and evaluated on the test set. Results: The Xception-based model achieved the highest F1 score (99.52%), followed by models based on ResNet50 (99.48%) and the harmonic-series concept (99.45%). All images with cancer lesions were correctly identified by these models. When the focus was on the images misclassified by the best-performing model, 7.86% of images showing bladder stones with an indwelling catheter and 1.43% of images showing bladder diverticulum were falsely classified. Conclusion: The results of this study show the potential of deep learning for the diagnostic classification of cystoscopic images. Future work will focus on integration of artificial intelligence–aided cystoscopy into clinical routines and possibly expansion to other clinical endoscopy applications.
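
The preprocessing and augmentation described above (linear color normalization, CLAHE, rotations in 10-degree increments, and flips) could be sketched as below with OpenCV. The CLAHE parameters, the choice to equalize only the luminance channel, and the interpolation defaults are assumptions, not the authors' exact settings.

```python
# Sketch: normalization + CLAHE preprocessing, then rotation/flip augmentation.
import cv2
import numpy as np

def preprocess(img_bgr: np.ndarray) -> np.ndarray:
    """Linear min-max normalization, then CLAHE on the L channel (assumed setup)."""
    img = cv2.normalize(img_bgr, None, 0, 255, cv2.NORM_MINMAX)
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab[:, :, 0] = clahe.apply(lab[:, :, 0])
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

def augment(img: np.ndarray):
    """Yield rotated copies in 10-degree steps plus vertical/horizontal flips."""
    h, w = img.shape[:2]
    for angle in range(0, 360, 10):
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        rotated = cv2.warpAffine(img, M, (w, h))
        yield rotated
        yield cv2.flip(rotated, 0)   # vertical flip
        yield cv2.flip(rotated, 1)   # horizontal flip
```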


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Mighten C. Yip ◽  
Mercedes M. Gonzalez ◽  
Christopher R. Valenta ◽  
Matthew J. M. Rowan ◽  
Craig R. Forest

Abstract. A common electrophysiology technique used in neuroscience is patch clamp: a method in which a glass pipette electrode facilitates single-cell electrical recordings from neurons. Typically, patch clamp is done manually: an electrophysiologist views a brain slice under a microscope, visually selects a neuron to patch, and moves the pipette into close proximity to the cell to break through and seal its membrane. While recent advances in the field of patch clamping have enabled partial automation, the task of detecting a healthy neuronal soma in acute brain tissue slices is still a critical step that is commonly done manually, often presenting challenges for novices in electrophysiology. To overcome this obstacle and progress towards full automation of patch clamp, we combined the differential interference microscopy optical technique with an object detection-based convolutional neural network (CNN) to detect healthy neurons in acute slice. Utilizing the YOLOv3 convolutional neural network architecture, we achieved a 98% reduction in training time, to 18 min, compared with previously published attempts. We also compared networks trained on unaltered and enhanced images, achieving up to 77% and 72% mean average precision, respectively. This novel, deep learning-based method accomplishes automated neuronal detection in brain slice at 18 frames per second with a small data set of 1138 annotated neurons, rapid training time, and high precision. Lastly, we verified the health of the identified neurons with a patch clamp experiment in which the average access resistance was 29.25 MΩ (n = 9). The addition of this technology during live-cell imaging for patch clamp experiments can not only improve manual patch clamping by reducing the neuroscience expertise required to select healthy cells, but also help achieve full automation of patch clamping by nominating cells without human assistance.
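
Running a trained YOLOv3 detector over microscope frames to propose neuron bounding boxes could look roughly like the sketch below, using OpenCV's DNN module. The cfg/weights file names, input size, and confidence threshold are placeholders; the authors' trained weights and pipeline are not reproduced here.

```python
# Sketch: YOLOv3 inference on a microscope frame to propose neuron boxes.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3-neurons.cfg", "yolov3-neurons.weights")  # hypothetical files
out_names = net.getUnconnectedOutLayersNames()

def detect_neurons(frame_bgr: np.ndarray, conf_thresh: float = 0.5):
    """Return (x, y, w, h) boxes for detections above the objectness threshold."""
    h, w = frame_bgr.shape[:2]
    blob = cv2.dnn.blobFromImage(frame_bgr, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    boxes = []
    for out in net.forward(out_names):
        for det in out:                      # det = [cx, cy, bw, bh, objectness, class scores...]
            if float(det[4]) < conf_thresh:
                continue
            cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
            boxes.append((int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)))
    return boxes
```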


2021 ◽  
Author(s):  
Shogo Arai ◽  
Zhuang Feng ◽  
Fuyuki Tokuda ◽  
Adam Purnomo ◽  
Kazuhiro Kosuge

This paper proposes a deep learning-based fast grasp detection method that uses a small dataset for robotic bin-picking. We consider the problem of grasping stacked mechanical parts on a planar workspace using a parallel gripper. In this paper, we use a deep neural network to solve the problem from a single depth image. To reduce the computation time, we propose an edge-based algorithm to generate potential grasps. A convolutional neural network (CNN) is then applied to evaluate the robustness of all potential grasps for bin-picking. Finally, the proposed method ranks the candidates and the object is grasped using the grasp with the highest score. In bin-picking experiments, we evaluate the proposed method with a 7-DOF manipulator using textureless mechanical parts with complex shapes. The grasping success rate is 97%, and the average computation time of CNN inference is less than 0.23 s on a laptop PC without a GPU. In addition, we confirm that the proposed method can be applied to unseen objects that are not included in the training dataset.
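
The generate-then-rank structure described above can be pictured as: edge pixels in the depth image seed candidate grasp centers, a CNN scores a depth patch around each candidate, and the highest-scoring grasp is executed. The sketch below is a coarse stand-in; the edge thresholds, patch size, and the scorer interface are assumptions.

```python
# Sketch: edge-seeded grasp candidates, scored by a (hypothetical) CNN wrapper.
import numpy as np
import cv2

def candidate_grasps(depth: np.ndarray, step: int = 10):
    """Sample candidate grasp centers along depth edges (stand-in for the edge-based generator)."""
    depth_u8 = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    edges = cv2.Canny(depth_u8, 50, 150)
    ys, xs = np.nonzero(edges)
    return list(zip(ys[::step], xs[::step]))

def best_grasp(depth: np.ndarray, score_patch, patch: int = 32):
    """score_patch: trained CNN wrapped as `depth patch -> robustness score` (assumed interface)."""
    scored = []
    half = patch // 2
    for y, x in candidate_grasps(depth):
        window = depth[max(0, y - half):y + half, max(0, x - half):x + half]
        if window.shape != (patch, patch):
            continue  # skip candidates too close to the image border
        scored.append((float(score_patch(window)), (y, x)))
    return max(scored)[1] if scored else None
```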


PLoS ONE ◽  
2021 ◽  
Vol 16 (2) ◽  
pp. e0246870
Author(s):  
Jaejin Hwang ◽  
Jinwon Lee ◽  
Kyung-Sun Lee

The objective of this study was to accurately predict grip strength using a deep learning-based method (multi-layer perceptron [MLP] regression). The maximal grip strength in varying postures (upper arm, forearm, and lower body) of 164 young adults (100 males and 64 females) was collected. The data set was divided into a training set (90% of the data) and a test set (10% of the data). Different combinations of variables, including demographic and anthropometric information of individual participants and postures, were tested and compared to find the most predictive model. MLP regression and three polynomial regressions (linear, quadratic, and cubic) were conducted and their performance was compared. Including all variables yielded better performance than the other combinations of variables. In general, MLP regression showed higher performance than the polynomial regressions; in particular, MLP regression considering all variables achieved the highest performance of grip strength prediction (RMSE = 69.01 N, R = 0.88, ICC = 0.92). This deep learning-based regression (MLP) would be useful to predict on-site and individual-specific grip strength in the workspace to reduce the risk of musculoskeletal disorders in the upper extremity.
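
A minimal sketch of the comparison described above, MLP regression versus polynomial regressions on a 90/10 split, is given below using scikit-learn. The hidden-layer sizes and the synthetic feature matrix are placeholders, not the study's data or tuned architecture.

```python
# Sketch: MLP vs. polynomial regression on a 90/10 train/test split.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(164, 8))   # stand-in for demographic, anthropometric, and posture variables
y = X @ rng.normal(size=8) * 50 + 300 + rng.normal(scale=40, size=164)  # stand-in grip strength (N)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)

models = {
    "MLP": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=5000, random_state=0)),
    "linear": make_pipeline(PolynomialFeatures(1), LinearRegression()),
    "quadratic": make_pipeline(PolynomialFeatures(2), LinearRegression()),
    "cubic": make_pipeline(PolynomialFeatures(3), LinearRegression()),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"{name}: RMSE = {rmse:.1f} N")
```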


2018 ◽  
Vol 7 (4.11) ◽  
pp. 198 ◽  
Author(s):  
Mohamad Hazim Johari ◽  
Hasliza Abu Hassan ◽  
Ahmad Ihsan Mohd Yassin ◽  
Nooritawati Md Tahir ◽  
Azlee Zabidi ◽  
...  

This project presents a method to detect diabetic retinopathy in fundus images using a deep learning neural network. The AlexNet Convolutional Neural Network (CNN) was used in the project to ease the learning process. The data set was retrieved from the MESSIDOR database and contains 1,200 fundus images. The images were filtered based on the needs of the project: 580 .tif images remained after filtering and were divided into two classes, exudate images and normal images. For training and testing, the 580 mixed exudate and normal fundus images were divided into two sets, a training set and a testing set. The results for the training and testing sets were summarized in a confusion matrix. The results show that the accuracy of the CNN on the training and testing sets was 99.3% and 88.3%, respectively.
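
The abstract does not name the framework used for AlexNet, so the sketch below uses torchvision's ImageNet-pretrained AlexNet with its classifier head replaced for the two-class (exudates vs. normal) problem; the optimizer choice and which layers are fine-tuned are assumptions.

```python
# Sketch: AlexNet transfer learning for exudates vs. normal fundus classification.
import torch
import torch.nn as nn
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 2)      # two classes: exudates vs. normal

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)  # train the head only (assumed)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of 224x224 RGB fundus images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```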


2021 ◽  
Vol 45 (4) ◽  
pp. 233-238
Author(s):  
Lazar Kats ◽  
Marilena Vered ◽  
Johnny Kharouba ◽  
Sigalit Blumer

Objective: To apply transfer deep learning on a small data set for automatic classification of X-ray modalities in dentistry. Study design: To solve the classification problem, convolutional neural networks based on the VGG16, NASNetLarge, and Xception architectures were used, pre-trained on an ImageNet subset. In this research, we used an in-house dataset created within the School of Dental Medicine, Tel Aviv University. The training dataset contained 496 anonymized digital panoramic and cephalometric X-ray images from orthodontic examinations acquired with a CS 8100 Digital Panoramic System (Carestream Dental LLC, Atlanta, USA). The models were trained on an NVIDIA GeForce GTX 1080 Ti GPU. The study was approved by the ethics committee of Tel Aviv University. Results: The test dataset contained 124 X-ray images from 2 different devices: the CS 8100 Digital Panoramic System and a Planmeca ProMax 2D (Planmeca, Helsinki, Finland). X-ray images in the test dataset were not pre-processed. The accuracy of all neural network architectures was 100%. Given this near-absolute accuracy, other statistical metrics were not informative. Conclusions: In this study, good results were obtained for the automatic classification of different modalities of X-ray images used in dentistry, and transfer deep learning is the most promising direction for developing this kind of application. Further studies on automatic classification of modalities, as well as sub-modalities, could greatly reduce the occasional difficulties arising in this field in the daily practice of the dentist and, eventually, improve the quality of diagnosis and treatment.
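
A minimal sketch of the transfer-learning setup described above is shown below for one of the named architectures: an ImageNet-pretrained Xception base with a new classification head. The frozen base, head size, and two-class output (e.g., panoramic vs. cephalometric) are assumptions about the exact configuration.

```python
# Sketch: Xception transfer learning for dental X-ray modality classification.
import tensorflow as tf

base = tf.keras.applications.Xception(weights="imagenet", include_top=False,
                                      input_shape=(299, 299, 3))
base.trainable = False                       # keep ImageNet features fixed on a small dataset

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(2, activation="softmax"),   # e.g., panoramic vs. cephalometric
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```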

