Evaluation of convolutional neural networks for the classification of falls from heterogeneous thermal vision sensors

2020 ◽  
Vol 16 (5) ◽  
pp. 155014772092048
Author(s):  
Miguel Ángel López-Medina ◽  
Macarena Espinilla ◽  
Chris Nugent ◽  
Javier Medina Quero

The automatic detection of falls within environments where sensors are deployed has attracted considerable research interest due to the prevalence and impact of falls, especially among the elderly. In this work, we analyze the capabilities of non-invasive thermal vision sensors to detect falls using several architectures of convolutional neural networks. First, we integrate two thermal vision sensors with different capabilities: (1) low resolution with a wide viewing angle and (2) high resolution with a central viewing angle. Second, we include a fuzzy representation of the thermal information. Third, we generate a large data set from a small set of images using ad hoc data augmentation, which increases the original data set size by generating new synthetic images. Fourth, we define three types of convolutional neural networks, each adapted to one of the thermal vision sensors, in order to evaluate the impact of the architecture on fall detection performance. The results show encouraging performance in single-occupancy contexts. In multiple occupancy, the low-resolution thermal vision sensor with a wide viewing angle obtains better performance and a shorter learning time than the high-resolution thermal vision sensor with a central viewing angle.
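As an illustration of the ad hoc augmentation step described above, the following is a minimal numpy sketch. The specific operations (flips, 90-degree rotations, additive Gaussian noise) and the 8×8 frame size are assumptions for illustration; the abstract does not specify which transforms the authors used.

```python
import numpy as np

def augment_thermal(frame, rng):
    """Generate synthetic variants of a low-resolution thermal frame
    via a horizontal flip, 90-degree rotations, and additive noise."""
    variants = [frame, np.fliplr(frame)]
    for k in (1, 2, 3):
        variants.append(np.rot90(frame, k))
    # Additive Gaussian noise mimics sensor jitter in low-cost thermopile arrays
    noisy = frame + rng.normal(0.0, 0.5, size=frame.shape)
    variants.append(np.clip(noisy, frame.min(), frame.max()))
    return variants

rng = np.random.default_rng(0)
frame = rng.uniform(20.0, 37.0, size=(8, 8))  # toy 8x8 thermal grid, degrees C
augmented = augment_thermal(frame, rng)
print(len(augmented))  # 6 variants from a single frame
```

Each source frame yields several label-preserving variants, which is how a few captured images can be expanded into a training set of useful size.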

Proceedings ◽  
2018 ◽  
Vol 2 (19) ◽  
pp. 1236 ◽  
Author(s):  
Javier Quero ◽  
Matthew Burns ◽  
Muhammad Razzaq ◽  
Chris Nugent ◽  
Macarena Espinilla

In this work, we detail a methodology based on Convolutional Neural Networks (CNNs) to detect falls from non-invasive thermal vision sensors. First, we include an agile data collection process to label images and create a dataset that describes several cases of single and multiple occupancy. These cases include standing inhabitants as well as the target situations with a fallen inhabitant. Second, we apply data augmentation techniques to increase the learning capability of the classification and reduce the configuration time. Third, we define three types of CNN to evaluate the impact that the number of layers and the kernel size have on the performance of the methodology. The results show encouraging performance in single-occupancy contexts, with up to 92% accuracy, but a 10% reduction in accuracy in multiple occupancy. The learning capability of the CNNs is notable given the complexity of the images obtained from the low-cost device, which exhibit strong noise as well as uncertain and blurred areas. The results highlight that the 3-layer CNN maintains stable performance as well as quick learning.
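The trade-off between layer count and kernel size that this methodology evaluates can be seen in how the feature-map size shrinks through a conv stack. The sketch below traces the standard output-size formula, floor((in − kernel + 2·padding)/stride) + 1, through three hypothetical variants; the specific kernel sizes, strides, and 32×32 input are assumptions for illustration, not the paper's configurations.

```python
def conv_out(size, kernel, stride=1, padding=0):
    """Spatial size after one conv layer: floor((size - kernel + 2*padding)/stride) + 1."""
    return (size - kernel + 2 * padding) // stride + 1

def feature_map_sizes(input_size, layers):
    """Trace the feature-map side length through a stack of (kernel, stride) conv layers."""
    sizes = [input_size]
    for kernel, stride in layers:
        sizes.append(conv_out(sizes[-1], kernel, stride))
    return sizes

# Three hypothetical variants on a 32x32 thermal image:
shallow = feature_map_sizes(32, [(5, 1), (5, 1)])          # 2 layers, 5x5 kernels
deep    = feature_map_sizes(32, [(3, 1), (3, 1), (3, 1)])  # 3 layers, 3x3 kernels
strided = feature_map_sizes(32, [(3, 2), (3, 2), (3, 2)])  # 3 layers, 3x3, stride 2
print(shallow, deep, strided)  # [32, 28, 24] [32, 30, 28, 26] [32, 15, 7, 3]
```

A deeper stack of small kernels keeps more spatial detail per layer than fewer large kernels, while striding shrinks the maps aggressively; the abstract's finding that a 3-layer design is stable sits inside exactly this design space.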


The objective of this research is provide to the specialists in skin cancer, a premature, rapid and non-invasive diagnosis of melanoma identification, using an image of the lesion, to apply to the treatment of a patient, the method used is the architecture contrast of Convolutional neural networks proposed by Laura Kocobinski of the University of Boston, against our architecture, which reduce the depth of the convolution filter of the last two convolutional layers to obtain maps of more significant characteristics. The performance of the model was reflected in the accuracy during the validation, considering the best result obtained, which is confirmed with the additional data set. The findings found with the application of this base architecture were improved accuracy from 0.79 to 0.83, with 30 epochs, compared to Kocobinski's AlexNet architecture, it was not possible to improve the accuracy of 0.90, however, the complexity of the network played an important role in the results we obtained, which was able to balance and obtain better results without increasing the epochs, the application of our research is very helpful for doctors, since it will allow them to quickly identify if an injury is melanoma or not and consequently treat it efficiently.


2019 ◽  
Vol 11 (18) ◽  
pp. 2176 ◽  
Author(s):  
Chen ◽  
Zhong ◽  
Tan

Detecting objects in aerial images is a challenging task due to the multiple orientations and relatively small size of the objects. Although many traditional detection models have demonstrated acceptable performance by using an image pyramid and multiple templates in a sliding-window manner, such techniques are inefficient and costly. Recently, convolutional neural networks (CNNs) have successfully been used for object detection and have demonstrated considerably superior performance to traditional detection methods; however, this success has not been extended to aerial images. To overcome such problems, we propose a detection model based on two CNNs. The first CNN proposes many object-like regions, generated from multi-scale, hierarchical feature maps together with orientation information. With this design, the positioning of small objects becomes more accurate, and the generated regions, carrying orientation information, are better suited to objects arranged at arbitrary orientations. The second CNN performs object recognition: it first extracts the features of each generated region and subsequently makes the final decision. The results of extensive experiments on the vehicle detection in aerial imagery (VEDAI) and overhead imagery research data set (OIRDS) datasets indicate that the proposed model performs well in terms of both detection accuracy and detection speed.
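Detection accuracy on benchmarks like VEDAI and OIRDS is conventionally scored by intersection-over-union (IoU) between proposed and ground-truth boxes. The sketch below shows the axis-aligned form of this metric as a point of reference; it is a simplification, since the model described above produces oriented regions, whose IoU requires rotated-polygon intersection.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.1429
```

A proposal is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5.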


2020 ◽  
Vol 12 (5) ◽  
pp. 765 ◽  
Author(s):  
Calimanut-Ionut Cira ◽  
Ramon Alcarria ◽  
Miguel-Ángel Manso-Callejo ◽  
Francisco Serradilla

Remote sensing imagery combined with deep learning strategies is often regarded as an ideal solution for interpreting scenes and monitoring infrastructures, with remarkable performance levels. The road network plays an important part in transportation, and one of the main current challenges is detecting and monitoring the changes that occur in order to update the existing cartography. This task is difficult due to the nature of the object (continuous and often with no clearly defined borders) and the nature of remotely sensed images (noise, obstructions). In this paper, we propose a novel framework based on convolutional neural networks (CNNs) to classify secondary roads in high-resolution aerial orthoimages divided into tiles of 256 × 256 pixels. We evaluate the framework's performance on unseen test data and compare the results with those obtained by other popular CNNs trained from scratch.
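The 256 × 256 tiling of an orthoimage can be sketched in a few lines of numpy. The discard-the-border convention below is an assumption for illustration (padding the border is an equally common alternative); the abstract only states the tile size.

```python
import numpy as np

def to_tiles(image, tile=256):
    """Split an H x W x C orthoimage into non-overlapping tile x tile patches,
    discarding any partial border strip (one simple convention; others pad instead)."""
    h, w = image.shape[:2]
    tiles = []
    for row in range(0, h - tile + 1, tile):
        for col in range(0, w - tile + 1, tile):
            tiles.append(image[row:row + tile, col:col + tile])
    return tiles

image = np.zeros((600, 520, 3), dtype=np.uint8)  # toy orthoimage
tiles = to_tiles(image)
print(len(tiles), tiles[0].shape)  # 4 (256, 256, 3)
```

Each tile then becomes one classification sample (road present or not), which turns the continuous, poorly bordered road object into a conventional image-classification problem.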


Electronics ◽  
2020 ◽  
Vol 9 (1) ◽  
pp. 190 ◽  
Author(s):  
Zhiwei Huang ◽  
Jinzhao Lin ◽  
Liming Xu ◽  
Huiqian Wang ◽  
Tong Bai ◽  
...  

The application of deep convolutional neural networks (CNNs) in medical image processing has attracted extensive attention and demonstrated remarkable progress. An increasing number of deep learning methods have been devoted to classifying ChestX-ray (CXR) images, and most of them are based on classic pretrained models trained on global chest X-ray images. In this paper, we diagnose chest X-ray images using our proposed Fusion High-Resolution Network (FHRNet). The FHRNet concatenates the global average pooling layers of the global and local feature extractors; it consists of three branch convolutional neural networks and is fine-tuned for thorax disease classification. Compared with other available methods, our experimental results show that the proposed model yields better disease classification performance on the ChestX-ray 14 dataset, according to the receiver operating characteristic curve and the area-under-the-curve score. An ablation study further confirmed the effectiveness of the global and local branch networks in improving the classification accuracy of thorax diseases.
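The fusion step described above, concatenating the global-average-pooled outputs of a global and a local feature extractor, can be sketched in numpy as follows. The channel counts (512 and 256) and constant-valued feature maps are assumptions for illustration; FHRNet's actual branch dimensions are not given in the abstract.

```python
import numpy as np

def global_average_pool(feature_map):
    """Collapse a C x H x W feature map to a C-dimensional vector
    by averaging over the spatial dimensions."""
    return feature_map.mean(axis=(1, 2))

# Hypothetical outputs of the global and local branch extractors
global_feat = np.ones((512, 7, 7))        # global-branch feature map
local_feat = np.full((256, 7, 7), 2.0)    # local-branch feature map
fused = np.concatenate([global_average_pool(global_feat),
                        global_average_pool(local_feat)])
print(fused.shape)  # (768,)
```

The concatenated vector then feeds the classification head, letting the final decision draw on both whole-image context and local detail.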


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Teja Kattenborn ◽  
Jana Eichel ◽  
Fabian Ewald Fassnacht

Recent technological advances in remote sensing sensors and platforms, such as high-resolution satellite imagers or unmanned aerial vehicles (UAV), facilitate the availability of fine-grained earth observation data. Such data reveal vegetation canopies in high spatial detail. Efficient methods are needed to fully harness this unprecedented source of information for vegetation mapping. Deep learning algorithms such as Convolutional Neural Networks (CNN) are currently paving new avenues in the field of image analysis and computer vision. Using multiple datasets, we test a CNN-based segmentation approach (U-net) in combination with training data derived directly from visual interpretation of UAV-based high-resolution RGB imagery for fine-grained mapping of vegetation species and communities. We demonstrate that this approach accurately segments and maps vegetation species and communities (at least 84% accuracy). The fact that we used only RGB imagery suggests that plant identification at very high spatial resolutions is driven by spatial patterns rather than spectral information. Accordingly, the presented approach is compatible with low-cost UAV systems that are easy to operate and thus applicable to a wide range of users.
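A segmentation accuracy figure like the 84% reported above is, in its simplest form, the fraction of pixels whose predicted class matches the reference map. A minimal numpy sketch, with a toy 2 × 3 label map as the assumed example (the paper's exact metric definition may differ, e.g. per-class averaging):

```python
import numpy as np

def pixel_accuracy(pred, ref):
    """Fraction of pixels whose predicted class label matches the reference map."""
    return np.mean(pred == ref)

# Toy class-label maps: 0 = background, 1 and 2 = two vegetation classes
pred = np.array([[1, 1, 0],
                 [2, 2, 0]])
ref  = np.array([[1, 1, 0],
                 [2, 0, 0]])
print(pixel_accuracy(pred, ref))  # 5 of 6 pixels agree ≈ 0.833
```

Here the reference map would come from the visual interpretation of the UAV imagery, and the prediction from the U-net output.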

