A COVID-19 pandemic AI-based system with deep learning forecasting and automatic statistical data acquisition: Development and Implementation Study (Preprint)

Author(s):  
Cheng-Sheng Yu ◽  
Shy-Shin Chang ◽  
Tzu-Hao Chang ◽  
Jenny L Wu ◽  
Yu-Jiun Lin ◽  
...  
2021 ◽  

This is a correction for the published manuscript: A COVID-19 Pandemic Artificial Intelligence-Based System With Deep Learning Forecasting and Automatic Statistical Data Acquisition: Development and Implementation Study (J Med Internet Res 2021;23(5):e27806). The authorship list has been corrected with an equal contribution footnote, and an "Acknowledgements" section has been added to the original published manuscript.


Drones ◽  
2021 ◽  
Vol 5 (1) ◽  
pp. 6
Author(s):  
Apostolos Papakonstantinou ◽  
Marios Batsaris ◽  
Spyros Spondylidis ◽  
Konstantinos Topouzelis

Marine litter (ML) accumulation in the coastal zone has been recognized as a major problem of our time, as it can dramatically affect the environment, marine ecosystems, and coastal communities. Existing monitoring methods fail to respond to the spatiotemporal changes and dynamics of ML concentrations. Recent works showed that unmanned aerial systems (UAS), along with computer vision methods, provide a feasible alternative for ML monitoring. In this context, we proposed a citizen science UAS data acquisition and annotation protocol combined with deep learning techniques for the automatic detection and mapping of ML concentrations in the coastal zone. Five convolutional neural networks (CNNs) were trained to classify UAS image tiles into two classes: (a) litter and (b) no litter. Testing the CNNs' generalization ability on an unseen dataset, we found that the VGG19 CNN returned an overall accuracy of 77.6% and an F-score of 77.42%. ML density maps were created using the automated classification results and compared with those produced by manual screening classification, demonstrating our approach's geographical transferability to new and unknown beaches. Although ML recognition is still a challenging task, this study provides evidence of the feasibility of using a citizen science UAS-based monitoring method in combination with deep learning techniques to quantify the ML load in the coastal zone using density maps.
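As a concrete illustration, here is a minimal sketch of the tile-classification step described above: a pretrained VGG19 backbone adapted to label UAS image tiles as litter or no litter via transfer learning. The directory layout, tile size, and training hyperparameters are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch: binary litter / no-litter tile classifier with a frozen
# VGG19 backbone. Paths, tile size, and hyperparameters are hypothetical.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG19

TILE_SIZE = (224, 224)  # assumed tile size matching the VGG19 input

train_ds = tf.keras.utils.image_dataset_from_directory(
    "tiles/train",  # hypothetical path with "litter" and "no_litter" subfolders
    image_size=TILE_SIZE, batch_size=32, label_mode="binary")

base = VGG19(weights="imagenet", include_top=False, input_shape=TILE_SIZE + (3,))
base.trainable = False  # transfer learning: keep the pretrained features fixed

model = models.Sequential([
    layers.Rescaling(1.0 / 255),          # simple normalization for the sketch
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # two classes: litter / no litter
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```

The per-tile litter probabilities can then be aggregated over a beach survey grid to produce the density maps the study describes.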


2021 ◽  
Author(s):  
Li Xinyun ◽  
Liu Huidan ◽  
Yin Hang ◽  
Cao Zilan ◽  
Chen Bangdi ◽  
...  

2020 ◽  
Vol 25 (6) ◽  
pp. 553-565 ◽  
Author(s):  
Boran Sekeroglu ◽  
Ilker Ozsahin

The detection of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which is responsible for coronavirus disease 2019 (COVID-19), using chest X-ray images has life-saving importance for both patients and doctors. This becomes even more vital in countries that are unable to purchase laboratory kits for testing. In this study, we aimed to present the use of deep learning for the high-accuracy detection of COVID-19 using chest X-ray images. Publicly available X-ray images (1583 healthy, 4292 pneumonia, and 225 confirmed COVID-19) were used in the experiments, which involved the training of deep learning and machine learning classifiers. Thirty-eight experiments were performed using convolutional neural networks, 10 experiments were performed using five machine learning models, and 14 experiments were performed using state-of-the-art pre-trained networks for transfer learning. Images and statistical data were considered separately in the experiments to evaluate the performance of the models, and eightfold cross-validation was used. A mean sensitivity of 93.84%, mean specificity of 99.18%, mean accuracy of 98.50%, and mean receiver operating characteristic area under the curve score of 96.51% were achieved. A convolutional neural network without pre-processing and with minimized layers is capable of detecting COVID-19 from a limited and imbalanced set of chest X-ray images.
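The following is a minimal sketch of the kind of minimized-layer CNN with eightfold cross-validation that the study describes; the architecture, image size, and file names are illustrative assumptions rather than the reported configuration.

```python
# Minimal sketch: shallow CNN evaluated with stratified 8-fold cross-validation.
# Array files, image size, and layer sizes are hypothetical.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import StratifiedKFold

def build_cnn(input_shape=(150, 150, 1)):
    # Minimized-layer CNN: no pre-processing beyond pixel rescaling
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Rescaling(1.0 / 255),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # COVID-19 vs. other
    ])

X = np.load("xrays.npy")   # hypothetical (n, 150, 150, 1) image array
y = np.load("labels.npy")  # hypothetical binary labels

scores = []
for train_idx, test_idx in StratifiedKFold(n_splits=8, shuffle=True).split(X, y):
    model = build_cnn()
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(X[train_idx], y[train_idx], epochs=5, verbose=0)
    scores.append(model.evaluate(X[test_idx], y[test_idx], verbose=0)[1])
print(f"mean 8-fold accuracy: {np.mean(scores):.4f}")
```

Stratified folds keep the class ratio of each split close to the full dataset's, which matters here because the COVID-19 class (225 images) is heavily outnumbered.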


Author(s):  
Shoumik Majumdar ◽  
Shubhangi Jain ◽  
Isidora Chara Tourni ◽  
Arsenii Mustafin ◽  
Diala Lteif ◽  
...  

Deep learning models perform remarkably well on the same task under the assumption that the data always come from the same distribution. However, this assumption is generally violated in practice, mainly due to differences in data acquisition techniques and a lack of information about the underlying source of new data. Domain Generalization targets the ability to generalize to test data from an unseen domain; while this problem is well studied for images, such studies are significantly lacking for spatiotemporal visual content – videos and GIFs. This is due to (1) the challenging nature of misaligned temporal features and the varying appearance/motion of actors and actions across domains, and (2) spatiotemporal datasets being laborious to collect and annotate for multiple domains. We collect and present the first synthetic video dataset of Animated GIFs for domain generalization, Ani-GIFs, which we use to study the domain gap between videos and GIFs, and between animated and real GIFs, for the task of action recognition. We provide a training and testing setting for Ani-GIFs, and extend two domain generalization baseline approaches, based on data augmentation and explainability, to the spatiotemporal domain to catalyze research in this direction.
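As a sketch of what extending a data-augmentation baseline to the spatiotemporal domain can look like, the snippet below applies the same random spatial transform to every frame of a clip so temporal coherence is preserved, then feeds clips to a toy 3D-convolutional action recognizer. The clip shape, class count, and backbone are assumptions for illustration, not the paper's models.

```python
# Minimal sketch: clip-consistent augmentation for video domain generalization.
import tensorflow as tf

def augment_clip(clip):
    """clip: (frames, H, W, 3) float tensor in [0, 1]; apply the same
    random flip and brightness jitter to all frames of the clip."""
    if tf.random.uniform(()) > 0.5:
        clip = tf.image.flip_left_right(clip)  # 4-D input: flips every frame
    clip = tf.image.adjust_brightness(clip, tf.random.uniform((), -0.2, 0.2))
    return tf.clip_by_value(clip, 0.0, 1.0)

# Toy 3D-CNN action recognizer over 16-frame clips (hypothetical sizes)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(16, 112, 112, 3)),
    tf.keras.layers.Conv3D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling3D(),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 action classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# train_ds: a tf.data.Dataset of (clip, label) pairs from the source domains
# train_ds = train_ds.map(lambda c, y: (augment_clip(c), y))
```

Applying the transform once per clip, rather than per frame, is what distinguishes video augmentation from naive image augmentation: per-frame randomness would break the motion cues the recognizer relies on.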


Processes ◽  
2021 ◽  
Vol 9 (11) ◽  
pp. 1895
Author(s):  
Donggyun Im ◽  
Sangkyu Lee ◽  
Homin Lee ◽  
Byungguan Yoon ◽  
Fayoung So ◽  
...  

Manufacturers are eager to replace human inspectors with automatic inspection systems to improve their competitive advantage through quality. However, some manufacturers have failed to apply the traditional vision system because of constraints in data acquisition and feature extraction. In this paper, we propose an inspection system based on deep learning for a tampon applicator producer that uses the applicator's structural characteristics for data acquisition and uses state-of-the-art models, YOLOv4 for object detection and YOLACT for instance segmentation, for feature extraction. During the on-site trial test, we encountered some False-Positive (FP) cases, i.e., a possible Type I error. We took a data-centric approach to the problem, applying two data pre-processing methods: Background Removal (BR) and Contrast Limited Adaptive Histogram Equalization (CLAHE). We analyzed the effect of these methods on the inspection using a self-created dataset. We found that CLAHE increased Recall by 0.1 at the image level, and both CLAHE and BR improved Precision by 0.04–0.06 at the bounding-box level. These results suggest that the data-centric approach can improve the detection rate. However, the data pre-processing techniques deteriorated the metrics used to measure overall performance, such as F1-score and Average Precision (AP), even though we empirically confirmed that the malfunction cases decreased. A detailed analysis of the results revealed cases in which inconsistent data annotation made the decisions ambiguous. Our research alerts AI practitioners that validating a model on metrics alone may lead to wrong conclusions.
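For reference, here is a minimal sketch of the two pre-processing methods named above, BR and CLAHE, implemented with OpenCV; the Otsu threshold, largest-contour mask, and CLAHE parameters are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch: Background Removal (BR) and CLAHE pre-processing with OpenCV.
import cv2
import numpy as np

def apply_clahe(bgr):
    """Contrast Limited Adaptive Histogram Equalization on the L channel."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

def remove_background(bgr):
    """Keep the largest foreground contour, zero out everything else."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    keep = np.zeros_like(mask)
    if contours:
        cv2.drawContours(keep, [max(contours, key=cv2.contourArea)], -1, 255, -1)
    return cv2.bitwise_and(bgr, bgr, mask=keep)

img = cv2.imread("applicator.png")  # hypothetical inspection image
preprocessed = apply_clahe(remove_background(img))
```

In a data-centric setup like the one described, images transformed this way would replace (or augment) the originals before being fed to the YOLOv4/YOLACT detectors, leaving the models themselves unchanged.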


Sensors ◽  
2019 ◽  
Vol 19 (7) ◽  
pp. 1716 ◽  
Author(s):  
Seungeun Chung ◽  
Jiyoun Lim ◽  
Kyoung Ju Noh ◽  
Gague Kim ◽  
Hyuntae Jeong

In this paper, we perform a systematic study of on-body sensor positioning and data acquisition details for Human Activity Recognition (HAR) systems. We build a testbed that consists of eight body-worn Inertial Measurement Unit (IMU) sensors and an Android mobile device for activity data collection. We develop a Long Short-Term Memory (LSTM) network framework to support training of a deep learning model on human activity data acquired in both real-world and controlled environments. From the experimental results, we identify that activity data with a sampling rate as low as 10 Hz from four sensors, on both wrists, the right ankle, and the waist, is sufficient for recognizing Activities of Daily Living (ADLs), including eating and driving. We adopt a two-level ensemble model to combine the class probabilities of multiple sensor modalities, and demonstrate that a classifier-level sensor fusion technique can improve the classification performance. By analyzing the accuracy of each sensor on different types of activity, we derive custom weights for multimodal sensor fusion that reflect the characteristics of individual activities.
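A minimal sketch of the classifier-level fusion idea follows: one LSTM classifier per sensor position, with per-sensor, per-activity weights applied to the predicted class probabilities before combining. The window size, channel count, sensor names, and the uniform initial weights are assumptions for illustration.

```python
# Minimal sketch: per-sensor LSTM classifiers fused at the probability level.
import numpy as np
import tensorflow as tf

N_CLASSES, WINDOW, CHANNELS = 6, 100, 6  # 10 Hz x 10 s windows, 6-axis IMU

def build_lstm():
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
    ])

sensors = ["left_wrist", "right_wrist", "right_ankle", "waist"]
classifiers = {s: build_lstm() for s in sensors}

# Per-sensor, per-activity fusion weights (rows: sensors, cols: activities);
# uniform here, but in practice derived from each sensor's per-activity accuracy.
W = np.full((len(sensors), N_CLASSES), 1.0 / len(sensors))

def fuse(segment_by_sensor):
    """segment_by_sensor: dict of (1, WINDOW, CHANNELS) arrays per sensor."""
    probs = np.stack([classifiers[s].predict(segment_by_sensor[s], verbose=0)[0]
                      for s in sensors])  # (n_sensors, N_CLASSES)
    fused = (W * probs).sum(axis=0)       # weighted sum of class probabilities
    return int(np.argmax(fused))
```

Because each sensor sees different activities well (e.g., a wrist sensor for eating, a waist sensor for driving), weighting its probabilities per activity rather than per sensor lets the ensemble exploit those complementary strengths.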


2017 ◽  
Vol 68 ◽  
pp. 32-42 ◽  
Author(s):  
Rodrigo F. Berriel ◽  
Franco Schmidt Rossi ◽  
Alberto F. de Souza ◽  
Thiago Oliveira-Santos
