Unsupervised Man Overboard Detection Using Thermal Imagery and Spatiotemporal Autoencoders

2021 ◽  
Author(s):  
Nikolaos Bakalos ◽  
Iason Katsamenis ◽  
Eleni Eirini Karolou ◽  
Nikolaos Doulamis

Man overboard incidents on a maritime vessel are serious accidents in which rapid detection of the event is crucial for the safe retrieval of the person. To this end, deep learning models have been tested as automatic detectors of such scenarios and proven efficient; however, a suitable capturing method is imperative for the learning framework to operate well. Thermal imaging is a suitable monitoring modality, as it is unaffected by illumination changes and can operate in rough conditions such as open-sea travel. We investigate the use of a convolutional autoencoder trained on thermal data as a mechanism for the automatic detection of man overboard scenarios. Moreover, we present a dataset that was created to emulate such events and was used for training and testing the algorithm.
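The abstract does not include an implementation; a minimal sketch of a convolutional autoencoder for thermal-frame reconstruction, assuming single-channel frames resized to 128x128 (layer sizes and the anomaly threshold are illustrative assumptions, not the authors' architecture), might look like the following:

```python
# Minimal convolutional autoencoder sketch for thermal frames (illustrative,
# not the authors' architecture). Assumes 128x128 single-channel input.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_thermal_cae(input_shape=(128, 128, 1)):
    inp = layers.Input(shape=input_shape)
    # Encoder: compress the thermal frame into a low-dimensional feature map
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(inp)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D(2)(x)
    # Decoder: reconstruct the frame from the compressed representation
    x = layers.Conv2DTranspose(64, 3, strides=2, activation="relu", padding="same")(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, activation="relu", padding="same")(x)
    out = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)
    model = Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

def reconstruction_error(model, frames):
    # Frames whose reconstruction error exceeds a threshold learned on
    # "normal" (no-overboard) footage would be flagged as potential incidents.
    recon = model.predict(frames, verbose=0)
    return tf.reduce_mean(tf.square(frames - recon), axis=[1, 2, 3])
```

In an unsupervised setup of this kind, the autoencoder is trained only on normal footage, and a frame is flagged when its reconstruction error rises above a threshold chosen on held-out normal data.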

2019 ◽  
Vol 46 (5) ◽  
pp. 2286-2297 ◽  
Author(s):  
Adam Mylonas ◽  
Paul J. Keall ◽  
Jeremy T. Booth ◽  
Chun‐Chien Shieh ◽  
Thomas Eade ◽  
...  

Author(s):  
Malusi Sibiya ◽  
Mbuyu Sumbwanyambe

Machine learning systems use different algorithms to detect diseases affecting plant leaves. Nevertheless, the choice of a suitable machine learning framework differs from study to study, depending on the features and complexity of the software packages. This paper presents a taxonomic inspection of the literature on deep learning frameworks for the detection of plant leaf diseases. The objective of this study is to identify the software frameworks that dominate the literature for modelling machine learning plant leaf disease detection systems.


Author(s):  
Jenq-Dar Tsay ◽  
Kevin Kao ◽  
Chun-Chieh Chao ◽  
Yu-Cheng Chang

Rainfall retrieval using geostationary satellites provides a critical means of monitoring extreme rainfall events. Using the relatively new Himawari 8 meteorological satellite, which has three times more channels than its predecessors, the deep learning framework of the “convolutional autoencoder” (CAE) was applied to the extraction of cloud and precipitation features. The CAE method was incorporated into the Convolutional Neural Network version of the PERSIANN precipitation retrieval that uses GOES satellites. By applying the CAE technique with the addition of residual blocks and other modifications to the deep learning architecture, the presented derivative of PERSIANN operated at the Central Weather Bureau of Taiwan (referred to as PERSIANN-CWB) adds four extra convolution layers to fully use Himawari 8’s infrared and water vapor channels while preventing the degradation of accuracy caused by the deeper network. PERSIANN-CWB was trained over Taiwan for its diverse weather systems and localized rainfall features, and the evaluation reveals an overall improvement over its CNN counterpart and superior performance over all other rainfall retrievals analyzed. A limitation of this model was found in the derivation of typhoon rainfall, an area requiring further research.
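The abstract describes adding residual blocks to a deeper convolutional autoencoder to avoid accuracy degradation; a minimal sketch of such a residual convolution block is shown below (purely illustrative; the actual PERSIANN-CWB layer counts and channel handling are not given here):

```python
# Illustrative residual convolution block of the kind the abstract describes
# (not the actual PERSIANN-CWB architecture).
from tensorflow.keras import layers

def residual_block(x, filters, kernel_size=3):
    """Two convolutions plus an identity shortcut, which helps prevent
    accuracy degradation as the network grows deeper."""
    shortcut = x
    y = layers.Conv2D(filters, kernel_size, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, kernel_size, padding="same")(y)
    # Match channel counts before adding the shortcut, if necessary
    if shortcut.shape[-1] != filters:
        shortcut = layers.Conv2D(filters, 1, padding="same")(shortcut)
    y = layers.Add()([y, shortcut])
    return layers.Activation("relu")(y)
```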


Author(s):  
Bahzad Taha Chicho ◽  
Amira Bibo Sallow

Python is one of the most widely adopted programming languages, having replaced a number of languages in the field. Python is popular with developers for a variety of reasons, one of which is its incredibly diverse collection of libraries. The most compelling reasons for adopting Keras stem from its guiding principles, particularly those related to usability. Aside from the ease of learning and model construction, Keras offers a wide variety of production deployment options and robust support for multiple GPUs and distributed training. A strong, easy-to-use, free, open-source Python library is the most important tool for developing and evaluating deep learning models. The aim of this paper is to provide the most current survey of Keras, a Python-based deep learning Application Programming Interface (API) that runs on top of the machine learning framework TensorFlow, across its different aspects. The library is used in conjunction with TensorFlow, PyTorch, CODEEPNEATM, and Pygame to integrate deep learning models into applied areas such as cardiovascular disease diagnostics, graph neural networks, health-issue identification, COVID-19 recognition, skin tumors, image detection, and so on. Furthermore, the paper reviews Keras's details, goals, challenges, significant outcomes, and the findings obtained using it.
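As a concrete illustration of the usability principles the survey highlights, a small Keras model can be defined, compiled, and trained in a few lines; the example below is generic (an MNIST digit classifier) and is not tied to any of the applications listed above:

```python
# Minimal Keras example illustrating the API's usability: define, compile,
# and train a small classifier in a few lines.
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(28, 28)),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# MNIST digits ship with Keras, so the example is self-contained
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
model.fit(x_train / 255.0, y_train, epochs=2, validation_split=0.1)
print(model.evaluate(x_test / 255.0, y_test))
```

The same model definition runs unchanged across Keras's deployment and multi-GPU options, which is the usability argument the paper makes.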


2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
Y Li ◽  
S Rao ◽  
A Hassaine ◽  
R Ramakrishnan ◽  
Y Zhu ◽  
...  

Abstract
Background: Forecasting incident heart failure is a critical demand for prevention. Recent research has suggested the superior performance of deep learning models on prediction tasks using electronic health records. However, even with relatively accurate predictive performance, the major impediments to the wider use of deep learning models for clinical decision making are the difficulty of assigning a level of confidence to model predictions and the interpretability of predictions.
Purpose: We aimed to develop a deep learning framework for more accurate incident heart failure prediction, with provision of measures of uncertainty and interpretability.
Methods: We used a longitudinal linked electronic health records dataset, the Clinical Practice Research Datalink, involving 788,880 patients, 8.3% of whom had an incident heart failure diagnosis. To embed an uncertainty estimation mechanism into the deep learning models, we developed a probabilistic framework based on a novel transformer deep learning model: deep Bayesian Gaussian processes (DBGP). We investigated the model's performance on incident heart failure prediction and uncertainty estimation and validated it using an external held-out dataset. Diagnoses, medications, and age at each encounter were included as predictors. By comparing the uncertainties, we investigated the possibility of separating correct predictions from wrong ones to avoid potential misclassification. Using model distillation, which mimics a well-trained complex model with simple models, we investigated the importance of associations between diagnoses, medications, and heart failure with an interpretable linear regression component learned from the DBGP.
Results: The DBGP achieved high performance, with an AUROC of 0.941 on external validation. More importantly, the uncertainty information could distinguish correct predictions from wrong ones, with a significant difference (p-values computed with 500 samples) between the distributions of uncertainties for negative predictions (3.21e-69 between true negatives and false negatives) and for positive predictions (3.39e-22 between true positives and false positives). Using the distilled model, we can specify the contribution of each diagnosis and medication to heart failure prediction. For instance, Losartan/Fosinopril, Bisoprolol, and left bundle-branch block showed strong associations with heart failure incidence, with coefficients of 0.11 (95% CI: 0.10, 0.12), 0.09 (0.08, 0.11), and 0.09 (0.07, 0.11), respectively; peritoneal adhesions, trochanteric bursitis, and galactorrhea showed strong negative associations, with coefficients of −0.07 (−0.09, −0.05), −0.07 (−0.09, −0.04), and −0.06 (−0.08, −0.04), respectively.
Conclusions: Our novel probabilistic deep learning framework adds a measure of uncertainty to each prediction and helps to mitigate misclassification. Model distillation provides an opportunity to interpret deep learning models and offers a data-driven perspective for risk factor analysis.
Funding Acknowledgement: Type of funding source: Public Institution(s). Main funding source(s): Oxford Martin School, University of Oxford; NIHR Oxford Biomedical Research Centre, University of Oxford
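The distillation step described above (mimicking the complex DBGP with an interpretable linear component) can be sketched generically as fitting a linear student to the teacher's predicted risks. The sketch below is a simplified illustration, not the authors' pipeline: the binary patient-by-code feature matrix, the placeholder teacher outputs, and the code names are all assumptions.

```python
# Simplified sketch of model distillation for interpretability (not the
# authors' DBGP pipeline). Assumes X is a binary patient-by-code matrix of
# diagnoses/medications and teacher_risk is the deep model's predicted
# probability of incident heart failure for each patient.
import numpy as np
from sklearn.linear_model import LinearRegression

def distill_to_linear(X, teacher_risk, code_names):
    """Fit an interpretable linear student on the teacher's soft predictions,
    then rank codes by their learned coefficients (associations)."""
    student = LinearRegression()
    student.fit(X, teacher_risk)
    ranked = sorted(zip(code_names, student.coef_), key=lambda c: c[1], reverse=True)
    return student, ranked

# Example with random stand-in data (real inputs would come from the EHR)
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 50))
teacher_risk = rng.random(1000)           # placeholder for DBGP outputs
names = [f"code_{i}" for i in range(50)]  # placeholder code identifiers
_, ranked = distill_to_linear(X, teacher_risk, names)
print(ranked[:5])  # codes most positively associated with predicted risk
```

The coefficients of such a student play the role of the per-diagnosis and per-medication contributions reported in the Results.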


2021 ◽  
Vol 327 ◽  
pp. 128921
Author(s):  
Juan C. Rodriguez Gamboa ◽  
Adenilton J. da Silva ◽  
Ismael C. S. Araujo ◽  
Eva Susana Albarracin E. ◽  
Cristhian M. Duran A.

2020 ◽  
Vol 34 (07) ◽  
pp. 11426-11433
Author(s):  
Xingyi Li ◽  
Zhongang Qi ◽  
Xiaoli Fern ◽  
Fuxin Li

Deep networks are often not scale-invariant; hence their performance can vary wildly if recognizable objects appear at an unseen scale occurring only at testing time. In this paper, we propose ScaleNet, which recursively predicts object scale in a deep learning framework. With an explicit objective of predicting the scale of objects in images, ScaleNet enables pretrained deep learning models to identify objects at scales that are not present in their training sets. By recursively calling ScaleNet, one can generalize to very large scale changes unseen in the training set. To demonstrate the robustness of the proposed framework, we conduct experiments with pretrained as well as fine-tuned classification and detection frameworks on the MNIST, CIFAR-10, and MS COCO datasets; the results reveal that our proposed framework significantly boosts the performance of deep networks.
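The recursive idea can be sketched loosely as follows: a scale predictor estimates the object scale of an image, the image is resized toward a canonical scale, and the step repeats until the predicted scale stabilizes before a pretrained classifier is applied. The `scale_model` and `classifier` callables below are assumptions, and this is not the authors' ScaleNet code.

```python
# Loose sketch of recursively applying a scale predictor before classification
# (not the authors' ScaleNet implementation). `scale_model` is assumed to map
# an image batch to a predicted scale factor relative to a canonical size.
import tensorflow as tf

def rescale_recursively(image, scale_model, classifier, max_steps=5, tol=0.05):
    """Repeatedly predict the object scale and resize the image toward
    scale 1.0, then classify once the predicted scale stabilizes."""
    for _ in range(max_steps):
        scale = float(tf.squeeze(scale_model(image[tf.newaxis, ...])))
        if abs(scale - 1.0) < tol:
            break
        h, w = int(image.shape[0]), int(image.shape[1])
        new_size = (max(1, int(h / scale)), max(1, int(w / scale)))
        image = tf.image.resize(image, new_size)
    return classifier(image[tf.newaxis, ...])
```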


2021 ◽  
Vol 15 (2) ◽  
pp. 1-21
Author(s):  
Yi Zhu ◽  
Lei Li ◽  
Xindong Wu

Deep learning seeks to achieve excellent performance for representation learning on image datasets. However, supervised deep learning models such as convolutional neural networks require a large number of labeled images, which is intractable in many applications, while unsupervised deep learning models such as the stacked denoising auto-encoder cannot employ label information. Meanwhile, the redundancy of image data degrades representation learning for the aforementioned models. To address these problems, we propose a semi-supervised deep learning framework called the stacked convolutional sparse auto-encoder, which can learn robust and sparse representations from image data with fewer labeled records. More specifically, the framework is constructed by stacking layers: in each layer, higher-level feature representations are generated from lower-layer features in a convolutional way, with kernels learned by a sparse auto-encoder. Meanwhile, to solve the data redundancy problem, the Reconstruction Independent Component Analysis algorithm is trained on patches to sphere the input data. Label information is encoded using a Softmax Regression model for semi-supervised learning. With this framework, higher-level representations are learned layer by layer from image data, which can boost the performance of subsequent base classifiers such as support vector machines. Extensive experiments demonstrate the superior classification performance of our framework compared to several state-of-the-art representation learning methods.
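A minimal sketch of one building block of such a pipeline, an auto-encoder with a sparsity penalty whose learned weights can then be applied convolutionally, is shown below. It is a simplification under stated assumptions (8x8 patches, an L1 activity penalty), and it omits the RICA-based sphering and Softmax Regression components of the authors' framework.

```python
# Simplified sketch of a sparse auto-encoder layer of the kind the framework
# stacks (not the authors' exact implementation).
from tensorflow.keras import layers, regularizers, Model

def sparse_autoencoder(patch_dim, code_dim):
    """Learn sparse codes for image patches; the encoder weights can then be
    reshaped into convolution kernels for the next layer's feature maps."""
    inp = layers.Input(shape=(patch_dim,))
    code = layers.Dense(code_dim, activation="sigmoid",
                        activity_regularizer=regularizers.l1(1e-4))(inp)
    recon = layers.Dense(patch_dim, activation="linear")(code)
    model = Model(inp, recon)
    model.compile(optimizer="adam", loss="mse")
    encoder = Model(inp, code)
    return model, encoder

# e.g. 8x8 grayscale patches (64 inputs) mapped to 100 sparse features
ae, enc = sparse_autoencoder(patch_dim=64, code_dim=100)
```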

