Assistive Framework for Automatic Detection of All the Zones in Retinopathy of Prematurity Using Deep Learning

Author(s): Ranjana Agrawal, Sucheta Kulkarni, Rahee Walambe, Ketan Kotecha
Sensors, 2020, Vol. 20 (6), pp. 1579
Author(s): Dongqi Wang, Qinghua Meng, Dongming Chen, Hupo Zhang, Lisheng Xu

Automatic detection of arrhythmia is of great significance for the early prevention and diagnosis of cardiovascular disease. Traditional feature engineering methods based on expert knowledge lack the ability to abstract and represent data from multiple dimensions and views, so traditional pattern-recognition approaches to arrhythmia detection cannot achieve satisfactory results. Recently, with the rise of deep learning, automatic feature extraction from ECG data with deep neural networks has been widely discussed. To exploit the complementary strengths of different schemes, this paper proposes an arrhythmia detection method based on a multi-resolution representation (MRR) of ECG signals. The method uses four different state-of-the-art deep neural networks as channel models to learn ECG vector representations. These deep-learning-based representations, together with hand-crafted ECG features, form the MRR, which is the input to the downstream classification strategy. Experimental results on a large multi-label ECG classification dataset confirm that the F1 score of the proposed method is 0.9238, which is 1.31%, 0.62%, 1.18% and 0.6% higher than that of the individual channel models. From an architectural perspective, the proposed method is highly scalable and can serve as a template for arrhythmia recognition.
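
The abstract does not include an implementation, but the core idea of the MRR, concatenating representations from several deep channel models with hand-crafted ECG features before a multi-label classifier, can be sketched as follows. This is a minimal PyTorch sketch under stated assumptions: the channel encoders, embedding dimensions, and the two-layer classification head are illustrative placeholders, not the authors' networks.

```python
import torch
import torch.nn as nn

class MRRClassifier(nn.Module):
    """Sketch of an MRR-style model: channel-model embeddings are concatenated
    with hand-crafted features and fed to a multi-label classification head."""

    def __init__(self, channel_models, embed_dims, handcrafted_dim, num_classes):
        super().__init__()
        # e.g. four deep ECG encoders (assumed; any nn.Module returning (batch, dim) works)
        self.channel_models = nn.ModuleList(channel_models)
        mrr_dim = sum(embed_dims) + handcrafted_dim
        self.head = nn.Sequential(               # illustrative downstream classifier
            nn.Linear(mrr_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),         # one logit per arrhythmia label
        )

    def forward(self, ecg_signal, handcrafted_features):
        # Each channel model maps the raw ECG to a fixed-length vector representation.
        reps = [m(ecg_signal) for m in self.channel_models]
        mrr = torch.cat(reps + [handcrafted_features], dim=1)  # the multi-resolution representation
        return torch.sigmoid(self.head(mrr))     # independent per-label probabilities (multi-label)
```

In this sketch the paper's four state-of-the-art channel models would simply be passed in as `channel_models`; the sigmoid output reflects the multi-label nature of the task.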


Author(s): Tanzila Saba, Shahzad Akbar, Hoshang Kolivand, Saeed Ali Bahaj

2021, Vol. 266, pp. 192-200
Author(s): Stephen P. Canton, Esmaeel Dadashzadeh, Linwah Yip, Raquel Forsythe, Robert Handzel

Drones, 2021, Vol. 5 (1), pp. 6
Author(s): Apostolos Papakonstantinou, Marios Batsaris, Spyros Spondylidis, Konstantinos Topouzelis

Marine litter (ML) accumulation in the coastal zone has been recognized as a major problem of our time, as it can dramatically affect the environment, marine ecosystems, and coastal communities. Existing monitoring methods fail to respond to the spatiotemporal changes and dynamics of ML concentrations. Recent work has shown that unmanned aerial systems (UAS), combined with computer vision methods, provide a feasible alternative for ML monitoring. In this context, we propose a citizen science UAS data acquisition and annotation protocol combined with deep learning techniques for the automatic detection and mapping of ML concentrations in the coastal zone. Five convolutional neural networks (CNNs) were trained to classify UAS image tiles into two classes: (a) litter and (b) no litter. Testing the CNNs' generalization ability on an unseen dataset, we found that the VGG19 CNN returned an overall accuracy of 77.6% and an F-score of 77.42%. ML density maps were created from the automated classification results and compared with those produced by manual screening, demonstrating the geographical transferability of our approach to new and unknown beaches. Although ML recognition remains a challenging task, this study provides evidence for the feasibility of using a citizen science UAS-based monitoring method in combination with deep learning techniques to quantify the ML load in the coastal zone using density maps.
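
As an illustration of the tile-classification step described above, the following is a minimal sketch, assuming PyTorch/torchvision, of fine-tuning a VGG19 backbone for the binary litter / no-litter decision on UAS image tiles. The tile size, the "tiles/litter" and "tiles/no_litter" directory layout, and the training settings are assumptions, not the authors' protocol.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Tiles are assumed to be organised as tiles/litter/... and tiles/no_litter/...
tfm = transforms.Compose([
    transforms.Resize((224, 224)),   # assumed tile size
    transforms.ToTensor(),
])
ds = datasets.ImageFolder("tiles", transform=tfm)
loader = torch.utils.data.DataLoader(ds, batch_size=32, shuffle=True)

# VGG19 pretrained on ImageNet; freeze the convolutional backbone (transfer learning).
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
for p in vgg.features.parameters():
    p.requires_grad = False
vgg.classifier[6] = nn.Linear(4096, 2)   # two classes: litter / no litter

opt = torch.optim.Adam(vgg.classifier.parameters(), lr=1e-4)  # train only the classifier head
loss_fn = nn.CrossEntropyLoss()

vgg.train()
for images, labels in loader:            # one illustrative training pass over the tiles
    opt.zero_grad()
    loss = loss_fn(vgg(images), labels)
    loss.backward()
    opt.step()
```

The per-tile predictions from such a classifier could then be aggregated per beach segment to produce density maps like those described in the abstract.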


2021, Vol. 164, pp. 111974
Author(s): Dimitris V. Politikos, Elias Fakiris, Athanasios Davvetas, Iraklis A. Klampanos, George Papatheodorou

Ophthalmology, 2021, Vol. 128 (7), pp. 1077-1078
Author(s): Darius M. Moshfeghi, Michael T. Trese

2021, Vol. 137, pp. 109582
Author(s): Suyon Chang, Hwiyoung Kim, Young Joo Suh, Dong Min Choi, Hyunghu Kim, ...

2021, Vol. 11 (13), pp. 6085
Author(s): Jesus Salido, Vanesa Lomas, Jesus Ruiz-Santaquiteria, Oscar Deniz

There is a great need to implement preventive mechanisms against shootings and terrorist acts in public spaces with a large influx of people. While surveillance cameras have become common, the need for 24/7 monitoring and real-time response requires automatic detection methods. This paper presents a study of three convolutional neural network (CNN) models applied to the automatic detection of handguns in video surveillance images. It investigates whether false positives can be reduced by including, in the training dataset, pose information associated with the way handguns are held. The results highlight the best average precision (96.36%) and recall (97.23%), obtained by RetinaNet fine-tuned with an unfrozen ResNet-50 backbone, and the best precision (96.23%) and F1 score (93.36%), obtained by YOLOv3 when trained on the dataset including pose information. The latter architecture was the only one that showed a consistent improvement, of around 2%, when pose information was expressly considered during training.
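
For illustration, the snippet below is a minimal sketch, assuming PyTorch/torchvision, of fine-tuning a RetinaNet detector with a ResNet-50 FPN backbone for a single "handgun" class, loosely mirroring the best-performing configuration reported above. The YOLOv3 variant and the pose-information augmentation are not reproduced here, and the dummy sample merely stands in for annotated surveillance frames.

```python
import torch
from torchvision.models.detection import retinanet_resnet50_fpn, RetinaNet_ResNet50_FPN_Weights
from torchvision.models.detection.retinanet import RetinaNetClassificationHead

num_classes = 2  # background + handgun
model = retinanet_resnet50_fpn(weights=RetinaNet_ResNet50_FPN_Weights.COCO_V1)

# Replace the classification head so it predicts our single object class;
# the ResNet-50 backbone is left unfrozen, as in the best configuration reported above.
in_channels = model.backbone.out_channels
num_anchors = model.head.classification_head.num_anchors
model.head.classification_head = RetinaNetClassificationHead(in_channels, num_anchors, num_classes)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

# One illustrative training step with a dummy sample (real training uses annotated CCTV frames).
images = [torch.rand(3, 480, 640)]
targets = [{"boxes": torch.tensor([[100.0, 120.0, 180.0, 200.0]]),
            "labels": torch.tensor([1])}]

model.train()
optimizer.zero_grad()
losses = model(images, targets)          # dict of classification and box-regression losses
sum(losses.values()).backward()
optimizer.step()
```

At inference time, `model.eval()` followed by `model(images)` would return boxes, labels, and scores per frame; a confidence threshold on those scores is where the pose-based reduction of false positives studied in the paper would matter most.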

