Combined Color Semantics and Deep Learning for the Automatic Detection of Dolphin Dorsal Fins

Electronics ◽  
2020 ◽  
Vol 9 (5) ◽  
pp. 758 ◽  
Author(s):  
Vito Renò ◽  
Gianvito Losapio ◽  
Flavio Forenza ◽  
Tiziano Politi ◽  
Ettore Stella ◽  
...  

Photo-identification is a widely used non-invasive technique in biological studies for determining whether a specimen has been seen multiple times, relying only on specific unique visual characteristics. This information is essential to infer knowledge about the spatial distribution, site fidelity, abundance or habitat use of a species. Today there is a large demand for algorithms that can help domain experts in the analysis of large image datasets. For this reason, the problem of identifying and cropping the relevant portion of an image is a non-negligible step in any photo-identification pipeline. This paper approaches the problem of automatically cropping cetacean images with a hybrid technique based on domain analysis and deep learning. Domain knowledge is applied to propose relevant regions that highlight the dorsal fins; then a binary classification of fin vs. no-fin is performed by a convolutional neural network. Results obtained on real images demonstrate the feasibility of the proposed approach for the automated processing of large datasets of Risso's dolphin photos, enabling its use in more complex, large-scale studies. Moreover, the results of this study suggest extending this methodology to biological investigations of other species.
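The color-semantics region-proposal stage can be illustrated with a minimal numpy sketch: pixels whose color is sea-like are discarded and a bounding box is drawn around what remains, to be passed to the fin/no-fin classifier. The thresholding heuristic and the function name are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def propose_fin_regions(image, sea_ratio=1.2):
    """Propose one bounding box around non-sea pixels.

    A pixel is treated as "sea" when its blue channel dominates the red
    channel by the factor `sea_ratio` (a hypothetical heuristic).
    Returns (row_min, col_min, row_max, col_max) or None.
    """
    r = image[..., 0].astype(float)
    b = image[..., 2].astype(float)
    foreground = b < sea_ratio * np.maximum(r, 1.0)  # gray fin, not blue sea
    ys, xs = np.nonzero(foreground)
    if ys.size == 0:
        return None
    return (int(ys.min()), int(xs.min()), int(ys.max()) + 1, int(xs.max()) + 1)

# Synthetic 100x100 "sea" image with a gray fin-like patch.
img = np.zeros((100, 100, 3), dtype=np.uint8)
img[..., 2] = 200          # blue sea background
img[30:60, 40:70] = 128    # gray patch: equal R, G, B
box = propose_fin_regions(img)
print(box)  # (30, 40, 60, 70)
```

In the paper's pipeline a crop like this would then be scored by the CNN; here only the proposal step is sketched.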

Proceedings ◽  
2019 ◽  
Vol 27 (1) ◽  
pp. 8 ◽  
Author(s):  
David Perpetuini ◽  
Antonio Maria Chiarelli ◽  
Vincenzo Vinciguerra ◽  
Piergiusto Vitulli ◽  
Sergio Rinella ◽  
...  

Photoplethysmography (PPG) is a non-invasive technique that employs near-infrared light to estimate periodic oscillations in blood volume within arteries caused by the pulse pressure wave. Importantly, combined electrocardiography (ECG) and PPG can be employed to quantify arterial stiffness. The capabilities of a custom-built multi-channel PPG-ECG device (7 PPG probes, 4 ECG derivations) to evaluate arterial ageing were assessed. The large number of channels allowed arterial stiffness to be estimated at multiple body locations, without supra-systolic cuff occlusion, providing a fast and accurate examination of cardiovascular status and potentially allowing large-scale clinical screening of cardiovascular risk.
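Combined ECG-PPG stiffness estimates typically rest on the pulse transit time (PTT) between the ECG R-peak and the arrival of the PPG pulse at a probe; dividing the heart-to-probe path length by the PTT gives a pulse wave velocity. A minimal single-beat sketch on synthetic signals (the 500 Hz sampling rate, the argmax peak detector and the 0.5 m path length are illustrative assumptions, not values from the study):

```python
import numpy as np

fs = 500.0  # assumed sampling rate in Hz

def pulse_transit_time(ecg, ppg, fs):
    """Estimate PTT as the delay between the ECG R-peak and the
    subsequent PPG pulse peak (simplified single-beat version)."""
    r_peak = int(np.argmax(ecg))
    ppg_peak = r_peak + int(np.argmax(ppg[r_peak:]))
    return (ppg_peak - r_peak) / fs

# Synthetic single beat: R-peak at 0.2 s, PPG pulse peaking 0.25 s later.
t = np.arange(0, 1, 1 / fs)
ecg = np.exp(-((t - 0.2) ** 2) / (2 * 0.005 ** 2))   # narrow R-wave
ppg = np.exp(-((t - 0.45) ** 2) / (2 * 0.05 ** 2))   # broad pulse wave

ptt = pulse_transit_time(ecg, ppg, fs)
pwv = 0.5 / ptt  # hypothetical 0.5 m path length -> pulse wave velocity in m/s
print(round(ptt, 3))  # 0.25
```

A real device would average PTT over many beats and use a foot-of-the-wave detector rather than the peak; the arithmetic is the same.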


BMC Genomics ◽  
2019 ◽  
Vol 20 (S11) ◽  
Author(s):  
Tianle Ma ◽  
Aidong Zhang

Abstract Background: Comprehensive molecular profiling of various cancers and other diseases has generated vast amounts of multi-omics data. Each type of -omics data corresponds to one feature space, such as gene expression, miRNA expression, DNA methylation, etc. Integrating multi-omics data can link different layers of molecular feature spaces and is crucial to elucidate molecular pathways underlying various diseases. Machine learning approaches to mining multi-omics data hold great promise in uncovering intricate relationships among molecular features. However, due to the “big p, small n” problem (i.e., small sample sizes with high-dimensional features), training a large-scale generalizable deep learning model with multi-omics data alone is very challenging. Results: We developed a method called Multi-view Factorization AutoEncoder (MAE) with network constraints that can seamlessly integrate multi-omics data and domain knowledge such as molecular interaction networks. Our method learns feature and patient embeddings simultaneously with deep representation learning. Both feature representations and patient representations are subject to certain constraints specified as regularization terms in the training objective. By incorporating domain knowledge into the training objective, we implicitly introduced a good inductive bias into the machine learning model, which helps improve model generalizability. We performed extensive experiments on the TCGA datasets and demonstrated the power of integrating multi-omics data and biological interaction networks using our proposed method for predicting target clinical variables. Conclusions: To alleviate the overfitting problem in deep learning on multi-omics data with the “big p, small n” problem, it is helpful to incorporate biological domain knowledge into the model as inductive biases.
It is very promising to design machine learning models that facilitate the seamless integration of large-scale multi-omics data and biomedical domain knowledge for uncovering intricate relationships among molecular features and clinical features.
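Network constraints of the kind described above are commonly expressed as a graph-Laplacian regularizer added to the training objective: it is small when features that interact in the molecular network receive similar embeddings. A toy numpy sketch of that penalty (the paper's exact regularization terms are not reproduced here):

```python
import numpy as np

def laplacian_penalty(embeddings, adjacency):
    """Graph-Laplacian regularizer tr(E^T L E), where L = D - A.

    The penalty grows when nodes connected in the interaction network
    are assigned distant embedding vectors.
    """
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    return np.trace(embeddings.T @ laplacian @ embeddings)

# Toy network: genes 0 and 1 interact, gene 2 is isolated.
A = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, 0]], dtype=float)

close = np.array([[1.0], [1.0], [5.0]])  # interacting genes agree
far   = np.array([[1.0], [4.0], [5.0]])  # interacting genes differ
print(laplacian_penalty(close, A), laplacian_penalty(far, A))  # 0.0 9.0
```

In training, a term like this would be weighted and added to the autoencoder's reconstruction loss, steering the embeddings toward the network's structure.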


Diagnostics ◽  
2021 ◽  
Vol 11 (11) ◽  
pp. 2032
Author(s):  
Ahmad Chaddad ◽  
Jiali Li ◽  
Qizong Lu ◽  
Yujie Li ◽  
Idowu Paul Okuwobi ◽  
...  

Radiomics combined with deep learning models has become popular in computer-aided diagnosis and has outperformed human experts on many clinical tasks. Specifically, radiomic models based on artificial intelligence (AI) use medical data (i.e., images, molecular data, clinical variables, etc.) to predict clinical outcomes such as autism spectrum disorder (ASD). In this review, we summarize and discuss the radiomic techniques used for ASD analysis. Currently, the limited radiomic work on ASD concerns variations in morphological features such as brain thickness, which differs from texture analysis. These techniques are based on imaging shape features that can be used with predictive models for predicting ASD. This review explores the progress of ASD-based radiomics, with a brief description of ASD and of the current non-invasive techniques used to classify ASD and healthy control (HC) subjects. New radiomic models using deep learning techniques are also described. To bring texture analysis together with deep CNNs, further investigations integrating additional validation steps across multiple MRI sites are suggested.


2020 ◽  
Author(s):  
Alexandra Razorenova ◽  
Nikolay Yavich ◽  
Mikhail Malovichko ◽  
Maxim Fedorov ◽  
Nikolay Koshev ◽  
...  

Abstract Electroencephalography (EEG) is a well-established non-invasive technique to measure brain activity, albeit with limited spatial resolution. Variations in electric conductivity between different tissues distort the electric fields generated by cortical sources, resulting in smeared potential measurements on the scalp. One needs to solve an ill-posed inverse problem to recover the original neural activity. In this article, we present a generic method of recovering the cortical potentials from the EEG measurement by introducing a new inverse-problem solver based on deep Convolutional Neural Networks (CNNs) in paired (U-Net) and unpaired (DualGAN) configurations. The solvers were trained on synthetic EEG-ECoG pairs that were generated using a head conductivity model computed with the Finite Element Method (FEM). These solvers are the first of their kind to provide robust translation of EEG data to the cortex surface using deep learning. Providing a fast and accurate interpretation of the tracked EEG signal, our approach promises a boost to the spatial resolution of future EEG devices.
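For contrast with such learned solvers, the classical approach to an ill-posed inverse problem like this is Tikhonov-regularized least squares on a precomputed lead-field matrix. A toy numpy sketch under assumed dimensions (64 electrodes and 32 cortical patches, deliberately overdetermined so the toy solve recovers the source; real EEG inversion is underdetermined and far harder):

```python
import numpy as np

def tikhonov_inverse(lead_field, scalp_eeg, alpha=1e-2):
    """Classical baseline x = argmin ||Ax - b||^2 + alpha * ||x||^2,
    the kind of solver the learned CNN approaches aim to improve on."""
    A = lead_field
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ scalp_eeg)

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 32))   # 64 electrodes, 32 cortical patches (toy)
x_true = np.zeros(32)
x_true[10] = 1.0                    # one active cortical patch
b = A @ x_true                      # noiseless scalp measurement

x_hat = tikhonov_inverse(A, b)
print(int(np.argmax(np.abs(x_hat))))  # index of the recovered active patch
```

The regularization weight alpha trades off data fit against solution norm; the CNN solvers in the paper replace this hand-tuned trade-off with a mapping learned from simulated EEG-ECoG pairs.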


2020 ◽  
Vol 10 (13) ◽  
pp. 4640 ◽  
Author(s):  
Javier Civit-Masot ◽  
Francisco Luna-Perejón ◽  
Manuel Domínguez Morales ◽  
Anton Civit

The spread of the SARS-CoV-2 virus has made the COVID-19 disease a worldwide pandemic. The most common tests to identify COVID-19 are invasive, time consuming and limited in resources. Imaging is a non-invasive technique to identify whether individuals show symptoms of disease in their lungs. However, diagnosis by this method must be made by a specialist doctor, which limits mass screening of the population. Image processing tools to support diagnosis reduce the load by ruling out negative cases. Advanced artificial intelligence techniques such as Deep Learning have shown high effectiveness in identifying patterns such as those found in diseased tissue. This study analyzes the effectiveness of a VGG16-based Deep Learning model for the identification of pneumonia and COVID-19 using torso radiographs. Results show a high sensitivity in the identification of COVID-19, around 100%, and a high degree of specificity, which indicates that the model can be used as a screening test. AUCs of the ROC curves are greater than 0.9 for all classes considered.
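The screening claim above rests on two confusion-matrix quantities: sensitivity (how many COVID-19 cases are caught) and specificity (how many non-cases are correctly cleared). A minimal sketch with hypothetical labels, not data from the study:

```python
def sensitivity_specificity(y_true, y_pred, positive="covid"):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical model outputs on five radiographs.
y_true = ["covid", "covid", "pneumonia", "normal", "normal"]
y_pred = ["covid", "covid", "pneumonia", "normal", "covid"]

sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, round(spec, 2))  # 1.0 0.67
```

A screening test prioritizes sensitivity (few missed positives), accepting some false positives that a specialist then reviews, which is exactly the workload-reduction argument made in the abstract.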


Diagnostics ◽  
2021 ◽  
Vol 11 (3) ◽  
pp. 528
Author(s):  
Said Boumaraf ◽  
Xiabi Liu ◽  
Yuchai Wan ◽  
Zhongshu Zheng ◽  
Chokri Ferkous ◽  
...  

Breast cancer is a serious threat to women. Many machine learning-based computer-aided diagnosis (CAD) methods have been proposed for the early diagnosis of breast cancer based on histopathological images. Even though many such classification methods achieve high accuracy, they often lack an explanation of the classification process. In this paper, we compare the performance of conventional machine learning (CML) against deep learning (DL)-based methods, and we provide a visual interpretation for the task of classifying breast cancer in histopathological images. For the CML-based methods, we extract a set of handcrafted features using three feature extractors and fuse them to obtain an image representation that acts as input to five classical classifiers. For the DL-based methods, we adopt a transfer learning approach with the well-known VGG-19 deep learning architecture, whose version pre-trained on the large-scale ImageNet dataset is block-wise fine-tuned on histopathological images. The evaluation of the proposed methods is carried out on the publicly available BreaKHis dataset for the magnification-dependent classification of benign and malignant breast cancer and their eight sub-classes; a further validation on KIMIA Path960, a magnification-free histopathological dataset with 20 image classes, is also performed. After presenting the classification results of the CML and DL methods, and to better explain the difference in classification performance, we visualize the learned features. For the DL-based method, we intuitively visualize the areas of interest of the best fine-tuned deep neural networks using attention maps, to explain the decision-making process and improve the clinical interpretability of the proposed models. The visual explanation can inherently improve the pathologist’s trust in automated DL methods as a credible and trustworthy support tool for breast cancer diagnosis.
The achieved results show that the DL methods outperform the CML approaches: the DL accuracies range from 94.05% to 98.13% for the binary classification and from 76.77% to 88.95% for the eight-class classification, while the CML accuracies range from 85.65% to 89.32% for the binary classification and from 63.55% to 69.69% for the eight-class classification.
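Block-wise fine-tuning means progressively unfreezing the deepest convolutional blocks while earlier blocks keep their ImageNet weights. A framework-agnostic sketch of that schedule (the block names and staging below are hypothetical, not the paper's exact protocol):

```python
# Hypothetical block names for a VGG-19-style network.
VGG19_BLOCKS = ["block1", "block2", "block3", "block4", "block5", "classifier"]

def blockwise_schedule(blocks, stage):
    """Return a trainability map: at stage k, the classifier plus the
    k deepest convolutional blocks are trainable; earlier blocks stay
    frozen with their pre-trained weights."""
    conv_blocks = blocks[:-1]
    trainable = set(conv_blocks[len(conv_blocks) - stage:]) | {blocks[-1]}
    return {b: (b in trainable) for b in blocks}

# At stage 2, only block4, block5 and the classifier receive gradient updates.
print(blockwise_schedule(VGG19_BLOCKS, 2))
```

In a real framework this map would be applied by toggling each block's parameters (e.g., a requires-gradient flag) before each fine-tuning stage.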


2021 ◽  
Author(s):  
Manuel Barberio ◽  
Toby Collins ◽  
Valentin Bencteux ◽  
Richard Nkusi ◽  
Eric Felli ◽  
...  

Abstract Nerves are difficult to recognize during surgery and inadvertent injuries may occur, bringing catastrophic consequences for the patient. Hyperspectral imaging (HSI) is a non-invasive technique combining photography with spectroscopy, allowing biological tissue property quantification. We show for the first time that HSI combined with deep learning allows nerves and other tissue types to be automatically recognized in-vivo at the pixel level. An animal model is used comprising eight anesthetized pigs with a neck midline incision, exposing several structures (nerve, artery, vein, muscle, fat, skin). State-of-the-art machine learning models have been trained to recognize these tissue types in HSI data. The best model is a Convolutional Neural Network (CNN), achieving an overall average sensitivity of 0.91 and specificity of 0.99, validated with leave-one-patient-out cross-validation. For the nerve, the CNN achieves an average sensitivity of 0.76 and specificity of 1.0. In conclusion, HSI combined with a CNN model is suitable for in vivo nerve recognition.
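The leave-one-patient-out cross-validation used above keeps every spectrum from a given animal in the same fold, so the model is always evaluated on a subject it has never seen. A minimal sketch of the splitting logic (the variable names are illustrative):

```python
def leave_one_patient_out(patient_ids):
    """Yield (held_out, train_idx, test_idx) splits in which each fold
    holds out all samples from a single patient, never mixing one
    patient across the train and test sides."""
    for held_out in sorted(set(patient_ids)):
        train = [i for i, p in enumerate(patient_ids) if p != held_out]
        test = [i for i, p in enumerate(patient_ids) if p == held_out]
        yield held_out, train, test

# Hypothetical per-sample patient labels for five HSI spectra.
pids = ["pig1", "pig1", "pig2", "pig3", "pig3"]
for held_out, train, test in leave_one_patient_out(pids):
    print(held_out, train, test)
```

Splitting at the patient level rather than the pixel level matters here: neighboring pixels from one animal are highly correlated, and a random pixel split would inflate the reported sensitivity and specificity.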


2016 ◽  
Author(s):  
Xiaoyong Pan ◽  
Hong-Bin Shen

Abstract Background: RNAs play key roles in cells through interactions with proteins known as RNA-binding proteins (RBPs), and their binding motifs enable crucial understanding of the post-transcriptional regulation of RNAs. How RBPs correctly recognize their target RNAs and why they bind specific positions is still far from clear. Machine learning-based algorithms are widely acknowledged to be capable of speeding up this process. Although many automatic tools have been developed to predict RNA-protein binding sites from the rapidly growing multi-resource data (e.g., sequence and structure), their domain-specific features and formats have posed significant computational challenges. One current difficulty is that the cross-source shared common knowledge is at a higher abstraction level beyond the observed data, resulting in a low efficiency of direct integration of observed data across domains. The other difficulty is how to interpret the prediction results. Existing approaches tend to terminate after outputting the potential discrete binding sites on the sequences, but how to assemble them into meaningful binding motifs is a topic worthy of further investigation. Results: In view of these challenges, we propose a deep learning-based framework (iDeep) that uses a novel hybrid of a convolutional neural network and a deep belief network to predict RBP interaction sites and motifs on RNAs. This new protocol is featured by transforming the original observed data into a high-level abstraction feature space using multiple layers of learning blocks, where the shared representations across different domains are integrated.
To validate our iDeep method, we performed experiments on 31 large-scale CLIP-seq datasets. Our results show that by integrating multiple sources of data, the average AUC can be improved by 8% compared to the best single-source-based predictor, and that through cross-domain knowledge integration at an abstraction level, iDeep outperforms the state-of-the-art predictors by 6%. Besides the overall enhanced prediction performance, the convolutional neural network module embedded in iDeep is also able to automatically capture interpretable binding motifs for RBPs. Large-scale experiments demonstrate that these mined binding motifs agree well with experimentally verified results, suggesting that iDeep is a promising approach for real-world applications. Conclusion: The iDeep framework not only achieves more promising performance than the state-of-the-art predictors, but also easily captures interpretable binding motifs. iDeep is available at http://www.csbio.sjtu.edu.cn/bioinf/iDeep
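Convolutional modules in sequence models of this kind conventionally consume one-hot encoded RNA, one channel per nucleotide; iDeep's exact input pipeline is not reproduced here, but the standard encoding looks like this:

```python
import numpy as np

def one_hot_rna(seq, alphabet="ACGU"):
    """Encode an RNA sequence as an L x 4 one-hot matrix, the usual
    input representation for a sequence CNN: convolution filters over
    these rows then act as learnable position weight matrices (motifs)."""
    index = {base: i for i, base in enumerate(alphabet)}
    mat = np.zeros((len(seq), len(alphabet)))
    for pos, base in enumerate(seq):
        mat[pos, index[base]] = 1.0
    return mat

m = one_hot_rna("ACGU")
print(m.astype(int).tolist())  # one row per base, a single 1 per row
```

This representation is also why the motifs are interpretable: a trained first-layer filter can be read off directly as a 4-row weight matrix over nucleotides.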


2021 ◽  
Author(s):  
Tingting Zhu ◽  
Lanxin Zhu ◽  
Yi Li ◽  
Xiaopeng Chen ◽  
Mingyang He ◽  
...  

We report a novel fusion of microfluidics and light-field microscopy to achieve high-speed 4D (space + time) imaging of moving C. elegans on a chip. Our approach combines automatic chip-based worm loading / compartmentalization / flushing / reloading with instantaneous deep-learning light-field imaging of the moving worms. Taken together, we realized in toto image-based screening of wild-type and uncoordinated-type worms at a volume rate of 33 Hz, with sustained observation of 1 minute per worm and an overall throughput of 42 worms per hour. Quickly yielding over 80,000 image volumes that visualize the dynamics of all the worms in four dimensions, our method allows us to quantitatively analyse their behaviours as well as their neural activities, and to correlate the phenotypes with neuron functions. The different types of worms can be readily identified as a result of the high-throughput activity mapping. Our approach shows great potential for various lab-on-a-chip biological studies, such as embryo sorting and cell growth assays.


2019 ◽  
pp. 13-22
Author(s):  
Julian Renet

The estimation of demographic parameters in wild populations is strengthened by individual identification. For amphibians, various techniques are used to either temporarily or permanently mark individuals for identification. Photo-identification of body patterns offers a non-invasive technique. However, the reliability of photo-recognition software is key to reliable estimation of the true demographic parameters. In the current study, we assessed the effectiveness of fully-automated and semi-automated software, Wild-ID and APHIS, using the cryptic salamander Hydromantes strinatii as our study species. We computed the False Rejection Rate (FRR) of the Top 1, Top 5 and Top 10 matches of chest and cloaca pictures. Finally, we assessed through simulation the bias induced by our FRR in the estimation of population size. Wild-ID FRRs ranged from 0.042 to 0.093 while APHIS’ ranged from 0.227 to 0.547. Wild-ID was equally efficient with pictures of the chest and of the cloaca, while APHIS was significantly more efficient with chest pictures than cloaca pictures. Cropping pictures did not significantly improve Wild-ID effectiveness. Our Wild-ID FRRs are among the lowest ever obtained from pictures of an amphibian with a complex chromatophore pattern. Simulation showed that the Top 10 FRR of the selected software, Wild-ID, induced a low bias of 2.7% on the estimation of population size. The effectiveness and plasticity of Wild-ID provide opportunities for reliably monitoring amphibian species with complex colour patterns.
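The Top-k FRR used above is the fraction of query photos whose true match does not appear among the software's k highest-ranked candidates. A minimal sketch with hypothetical rankings (the query names and candidate lists are invented for illustration):

```python
def false_rejection_rate(rankings, true_matches, k):
    """Top-k FRR: share of queries whose known true match is absent
    from the k best-ranked candidates returned by the matcher."""
    rejected = sum(
        1 for query, ranked in rankings.items()
        if true_matches[query] not in ranked[:k]
    )
    return rejected / len(rankings)

# Hypothetical ranked candidate lists for three query photos.
rankings = {"q1": ["a", "b", "c"], "q2": ["d", "e", "f"], "q3": ["g", "h", "i"]}
truth = {"q1": "a", "q2": "f", "q3": "z"}  # q3's match was never proposed

print(round(false_rejection_rate(rankings, truth, 1), 2))  # 0.67
```

Increasing k lowers the FRR (q2's match surfaces at Top 3) at the cost of more candidates for the observer to verify by eye, which is the trade-off behind reporting Top 1, Top 5 and Top 10.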

