Automated analysis of foraminifera fossil records by image classification using a convolutional neural network

2020 ◽  
Vol 39 (2) ◽  
pp. 183-202 ◽  
Author(s):  
Ross Marchant ◽  
Martin Tetard ◽  
Adnya Pratiwi ◽  
Michael Adebayo ◽  
Thibault de Garidel-Thoron

Abstract. Manual identification of foraminiferal morphospecies or morphotypes under stereo microscopes is time-consuming for micropalaeontologists and not possible for non-specialists. Therefore, a long-term goal has been to automate this process to improve its efficiency and repeatability. Recent advances in computational hardware have seen deep convolutional neural networks emerge as the state-of-the-art technique for image-based automated classification. Here, we describe a method for classifying large foraminifera image sets using convolutional neural networks. Construction of the classifier is demonstrated on the publicly available Endless Forams image set with a best accuracy of approximately 90 %. A complete automatic analysis is performed for benthic species dated to the last deglacial period for a sediment core from the north-eastern Pacific and for planktonic species dated from the present until 180 000 years ago in a core from the western Pacific warm pool. The relative abundances from automatic counting based on more than 500 000 images compare favourably with manual counting, showing the same signal dynamics. Our workflow opens the way to automated palaeoceanographic reconstruction based on computer image analysis and is freely available for use.
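A minimal sketch of how such an image classifier can be built, assuming PyTorch/torchvision and a hypothetical endless_forams/train directory containing one sub-folder of specimen images per morphospecies (the architectures and training schedules used in the paper are not reproduced here):

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard preprocessing: resize specimen images to a fixed size and normalise.
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Hypothetical layout: one sub-folder per morphospecies label.
train_set = datasets.ImageFolder("endless_forams/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# ImageNet-pretrained backbone with a classification head sized to the label set.
model = models.resnet50(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one epoch shown for brevity
    optimiser.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimiser.step()
```

Fine-tuning a pre-trained backbone, as above, is one common route to the ~90 % accuracy regime on labelled sets of this size, though the paper's own architecture choices may differ.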

2019 ◽  
Author(s):  
Ross Marchant ◽  
Martin Tetard ◽  
Adnya Pratiwi ◽  
Thibault de Garidel-Thoron

Manual identification of foraminifera species or morphotypes under stereoscopic microscopes is time-consuming for the taxonomist, and a long-term goal has been to automate this process to improve efficiency and repeatability. Recent advances in computational hardware have seen deep convolutional neural networks emerge as the state-of-the-art technique for image-based automated classification. Here, we describe a method for classifying large down-core foraminifera image sets using convolutional neural networks. Construction of the classifier is demonstrated on the publicly available Endless Forams image set with a best accuracy of approximately 90%. A complete down-core analysis is performed for benthic species in the Holocene period for core MD02-2518 from the north-eastern Pacific, and the relative abundances compare favourably with manual counting, showing the same signal dynamics. Our workflow opens the way to automated palaeo-reconstruction based on computer image analysis and can be employed using our labelling and classification software, ParticleTrieur.
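The down-core abundance step is straightforward once per-image labels exist: count the predicted labels within each sample and normalise. A minimal sketch (the species names and predictions list are illustrative, not taken from the core data):

```python
from collections import Counter

def relative_abundances(predictions):
    """Convert per-image predicted labels for one core sample
    into relative abundances (fractions summing to 1)."""
    counts = Counter(predictions)
    total = sum(counts.values())
    return {species: n / total for species, n in counts.items()}

# Hypothetical classifier output for one sediment-core depth:
sample = ["G. ruber", "G. ruber", "N. dutertrei", "G. ruber", "P. obliquiloculata"]
print(relative_abundances(sample))
# {'G. ruber': 0.6, 'N. dutertrei': 0.2, 'P. obliquiloculata': 0.2}
```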


Electronics ◽  
2019 ◽  
Vol 8 (3) ◽  
pp. 256
Author(s):  
Francesco Ponzio ◽  
Gianvito Urgese ◽  
Elisa Ficarra ◽  
Santa Di Cataldo

Thanks to their capability to learn generalizable descriptors directly from images, deep Convolutional Neural Networks (CNNs) seem the ideal solution to most pattern recognition problems. On the other hand, to learn the image representation, CNNs need huge sets of annotated samples that are infeasible to obtain in many everyday scenarios. This is the case, for example, of Computer-Aided Diagnosis (CAD) systems for digital pathology, where additional challenges are posed by the high variability of the characteristics of cancerous tissue. In our experiments, state-of-the-art CNNs trained from scratch on histological images were less accurate and less robust to variability than a traditional machine learning framework, highlighting all the issues of fully training deep networks with limited data from real patients. To solve this problem, we designed and compared three transfer learning frameworks, leveraging CNNs pre-trained on non-medical images. This approach obtained very high accuracy while requiring far fewer computational resources for training. Our findings demonstrate that transfer learning is a solution to the automated classification of histological samples and solves the problem of designing accurate and computationally efficient CAD systems with limited training data.
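A minimal sketch of the generic transfer-learning pattern described here, assuming PyTorch/torchvision with an ImageNet-pretrained ResNet-50 as the non-medical source network (the paper compares three specific frameworks, which are not reproduced here):

```python
import torch.nn as nn
from torchvision import models

# Load a network pre-trained on ImageNet (non-medical images).
backbone = models.resnet50(weights="IMAGENET1K_V1")

# Freeze the pre-trained feature extractor so only the new head is trained,
# keeping computational cost low on a small histology dataset.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final layer with a head for the histological classes
# (num_classes is a placeholder for the task at hand).
num_classes = 3
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)

# Only backbone.fc now requires gradients, so the optimiser
# updates just the classification head.
```

Freezing the backbone means only the small head is optimised, which is what keeps training cheap when annotated patient data are scarce.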


2020 ◽  
Author(s):  
Denis Tamiev ◽  
Paige Furman ◽  
Nigel Reuel

Abstract. Quantification of the phenotypic heterogeneity present amongst bacterial cells can be a challenging task. Conventionally, classification and counting of bacterial sub-populations is achieved with manual microscopy, owing to the lack of alternative high-throughput, autonomous approaches. In this work, we apply classification-type convolutional neural networks (cCNN) to classify and enumerate bacterial cell sub-populations (B. subtilis clusters). We demonstrate that the accuracy of the cCNN developed in this study can be as high as 86% when trained on a relatively small dataset (81 images). We also developed a new image preprocessing algorithm, specific to fluorescence microscope images, which increases the amount of training data available to the neural network by a factor of 72. By summing the classified cells together, the algorithm provides a total cell count that is on par with manual counting but 10.2 times more consistent and 3.8 times faster. Finally, this work presents a complete solution framework for those wishing to learn and implement cCNNs in their synthetic biology work.
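The preprocessing algorithm itself is not specified in this summary, so the following is only a plausible sketch of how tiling plus the eight dihedral symmetries can multiply training data; a 3x3 tiling happens to give the factor of 72 quoted above (9 tiles x 8 symmetries), but the authors' actual algorithm may differ:

```python
import numpy as np

def expand_training_data(image, tile=64):
    """Split a fluorescence micrograph into tiles and apply the eight
    dihedral symmetries (4 rotations, each optionally mirrored) to each
    tile. A 3x3 tiling yields 9 * 8 = 72 training patches per image."""
    patches = []
    h, w = image.shape[:2]
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            t = image[y:y + tile, x:x + tile]
            for k in range(4):                # 0/90/180/270 degree rotations
                r = np.rot90(t, k)
                patches.append(r)
                patches.append(np.fliplr(r))  # mirrored copy
    return patches
```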


2020 ◽  
Vol 2020 (10) ◽  
pp. 28-1-28-7 ◽  
Author(s):  
Kazuki Endo ◽  
Masayuki Tanaka ◽  
Masatoshi Okutomi

Classification of degraded images is important in practice because real-world images are usually degraded by compression, noise, blurring, etc. Nevertheless, most research on image classification focuses only on clean images without any degradation. Some papers have already proposed deep convolutional neural networks composed of an image restoration network and a classification network to classify degraded images. This paper proposes an alternative approach in which a degraded image and an additional degradation parameter are used together for classification. The proposed classification network has two inputs: the degraded image and the degradation parameter. An estimation network for the degradation parameter is also incorporated for cases where the degradation parameters of the input images are unknown. The experimental results show that the proposed method outperforms a straightforward approach in which the classification network is trained with degraded images only.
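A minimal sketch of such a two-input classifier in PyTorch (layer sizes are arbitrary placeholders; the paper's actual network and its degradation-parameter estimation network are not reproduced):

```python
import torch
import torch.nn as nn

class DegradationAwareClassifier(nn.Module):
    """Classifier with two inputs: the degraded image and a scalar
    degradation parameter (e.g. noise level or blur strength)."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.param_branch = nn.Sequential(nn.Linear(1, 16), nn.ReLU())
        self.head = nn.Linear(64 + 16, num_classes)

    def forward(self, image, degradation_param):
        f = self.features(image)                    # image descriptor
        p = self.param_branch(degradation_param)    # embedded parameter
        return self.head(torch.cat([f, p], dim=1))  # joint classification

# Usage: logits = model(images, sigma.unsqueeze(1)), where sigma holds the
# (known or estimated) degradation level for each image in the batch.
```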


2019 ◽  
Vol 277 ◽  
pp. 02024 ◽  
Author(s):  
Lincan Li ◽  
Tong Jia ◽  
Tianqi Meng ◽  
Yizhe Liu

In this paper, an accurate two-stage deep learning method is proposed to detect vulnerable plaques in intravascular optical coherence tomography (IVOCT) images of the cardiovascular system. First, a fully convolutional network (FCN), U-Net, is used to segment the original IVOCT cardiovascular images. We experiment with different threshold values to find the best threshold for removing noise and background from the original images. Second, a modified Faster R-CNN is adopted for precise detection. The modified Faster R-CNN utilizes six anchor scales (12², 16², 32², 64², 128², 256²) instead of the conventional one-scale or three-scale approaches. We first present three problems in the diagnosis of cardiovascular vulnerable plaques and then demonstrate how our method solves them. The proposed method applies deep convolutional neural networks to the whole diagnostic procedure. Test results show that the recall, precision, IoU (intersection over union) and total score are 0.94, 0.885, 0.913 and 0.913 respectively, higher than those of the first-place team in the CCCV2017 Cardiovascular OCT Vulnerable Plaque Detection Challenge. The AP of the designed Faster R-CNN is 83.4%, higher than conventional approaches that use one-scale or three-scale anchors. These results demonstrate the superior performance of our proposed method and the power of deep learning approaches in diagnosing cardiovascular vulnerable plaques.
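A minimal sketch of configuring six anchor scales in a Faster R-CNN, using torchvision's documented custom-backbone pattern (the backbone and class count here are placeholders, not the paper's exact configuration):

```python
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.anchor_utils import AnchorGenerator
from torchvision.ops import MultiScaleRoIAlign

# Backbone returning a single feature map; FasterRCNN needs out_channels set.
backbone = torchvision.models.mobilenet_v2(weights="IMAGENET1K_V1").features
backbone.out_channels = 1280

# Six anchor scales instead of the usual three.
anchor_generator = AnchorGenerator(
    sizes=((12, 16, 32, 64, 128, 256),),
    aspect_ratios=((0.5, 1.0, 2.0),),
)

roi_pooler = MultiScaleRoIAlign(featmap_names=["0"], output_size=7,
                                sampling_ratio=2)

# num_classes=2 counts background as its own class,
# leaving one foreground class for vulnerable plaques.
model = FasterRCNN(backbone, num_classes=2,
                   rpn_anchor_generator=anchor_generator,
                   box_roi_pool=roi_pooler)
```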

