Using Satellite Data on Remote Transportation of Air Pollutants for PM2.5 Prediction in Northern Taiwan

2021 ◽
Author(s):  
George Kibirige ◽  
Ming-Chuan Yang ◽  
Chao-Lin Liu ◽  
Meng Chang Chen

We propose RTP, a composite neural network model that captures knowledge from remote transportation pollution events (RTPEs) to improve local PM2.5 prediction. To the best of our knowledge, this is the first deep learning work to incorporate knowledge from remote pollutants for PM2.5 prediction. RTP consists of two neural network components: a pre-trained base model and an STRI model. The base model captures knowledge from local factors that influence PM2.5 concentrations, while STRI captures knowledge from RTPEs by learning the spatial-temporal characteristics of satellite-based aerosol optical depth (AOD) data and weather features from remote areas. In addition, given the size of the STRI model, to facilitate training and improve results we divide the full STRI model into two components: STRI_fe, which extracts spatial-temporal features from remote areas, and STRI_p, which predicts local PM2.5 concentrations using both remote and local features. The prediction results from STRI_p show that the prediction error is reduced when remote features are added to the model, demonstrating that the STRI model indeed captures knowledge from RTPEs.

To characterize the occurrence of RTPEs in northern Taiwan, we also developed an algorithm to classify PM2.5 concentrations attributable to RTPEs. We use the STRI model for prediction at two EPA stations located at the northern tip of Taiwan and apply the classification algorithm to the results. This yields improvements in accuracy when remote features are added to the model, which demonstrates the impact of RTPEs at the stations.
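Below is a minimal PyTorch sketch of the composite-network idea the abstract describes: a frozen, pre-trained base model over local features fused with a spatial-temporal branch over remote AOD grids. All module names (LocalBase, RemoteSTRI, RTPComposite), layer sizes, and the CNN-plus-LSTM choice for the remote branch are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LocalBase(nn.Module):
    """Base model: local meteorology/PM2.5 history -> hidden feature."""
    def __init__(self, n_local=16, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_local, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU())
    def forward(self, x_local):          # (batch, n_local)
        return self.net(x_local)         # (batch, hidden)

class RemoteSTRI(nn.Module):
    """STRI_fe-like branch: CNN per time step over AOD grids, LSTM over time."""
    def __init__(self, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.rnn = nn.LSTM(input_size=8 * 4 * 4, hidden_size=hidden,
                           batch_first=True)
    def forward(self, x_remote):         # (batch, time, H, W) AOD grids
        b, t, h, w = x_remote.shape
        f = self.cnn(x_remote.reshape(b * t, 1, h, w)).reshape(b, t, -1)
        _, (h_n, _) = self.rnn(f)
        return h_n[-1]                   # (batch, hidden)

class RTPComposite(nn.Module):
    """STRI_p-like head: predict local PM2.5 from local + remote features."""
    def __init__(self, base, remote, hidden=64):
        super().__init__()
        self.base, self.remote = base, remote
        for p in self.base.parameters():  # base model is pre-trained, frozen
            p.requires_grad = False
        self.head = nn.Linear(2 * hidden, 1)
    def forward(self, x_local, x_remote):
        return self.head(torch.cat([self.base(x_local),
                                    self.remote(x_remote)], dim=1))

model = RTPComposite(LocalBase(), RemoteSTRI())
y = model(torch.randn(2, 16), torch.randn(2, 6, 32, 32))  # 6-step AOD sequence
print(y.shape)  # torch.Size([2, 1])
```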


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 4953
Author(s):  
Sara Al-Emadi ◽  
Abdulla Al-Ali ◽  
Abdulaziz Al-Ali

Drones are becoming increasingly popular not only for recreational purposes but also in day-to-day applications in engineering, medicine, logistics, security, and others. Alongside their useful applications, alarming concerns about physical infrastructure security, safety, and privacy have arisen due to their potential use in malicious activities. To address this problem, we propose a novel solution that automates the drone detection and identification processes using a drone's acoustic features with different deep learning algorithms. However, the lack of acoustic drone datasets hinders the ability to implement an effective solution. In this paper, we aim to fill this gap by introducing a hybrid drone acoustic dataset composed of recorded drone audio clips and drone audio samples artificially generated using a state-of-the-art deep learning technique known as the Generative Adversarial Network (GAN). Furthermore, we examine the effectiveness of drone audio with different deep learning algorithms, namely the Convolutional Neural Network, the Recurrent Neural Network, and the Convolutional Recurrent Neural Network, in drone detection and identification. Moreover, we investigate the impact of our proposed hybrid dataset on drone detection. Our findings demonstrate the advantage of using deep learning techniques for drone detection and identification while confirming our hypothesis on the benefits of using Generative Adversarial Networks to generate realistic drone audio clips with the aim of enhancing the detection of new and unfamiliar drones.
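As a rough illustration of the detection pipeline described above, the following PyTorch sketch shows a convolutional recurrent neural network over log-mel spectrograms of audio clips; the layer sizes and the two-class (drone / no-drone) head are assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class DroneCRNN(nn.Module):
    """CRNN over log-mel spectrograms: conv layers learn spectral patterns,
    a GRU models their evolution in time, a linear head classifies."""
    def __init__(self, n_mels=64, n_classes=2, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.gru = nn.GRU(input_size=32 * (n_mels // 4), hidden_size=hidden,
                          batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)
    def forward(self, spec):                  # (batch, 1, n_mels, frames)
        f = self.conv(spec)                   # (batch, 32, n_mels/4, frames/4)
        f = f.permute(0, 3, 1, 2).flatten(2)  # (batch, frames/4, features)
        _, h = self.gru(f)
        return self.fc(h[-1])                 # drone / no-drone logits

logits = DroneCRNN()(torch.randn(4, 1, 64, 128))
print(logits.shape)  # torch.Size([4, 2])
```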


Diagnostics ◽  
2021 ◽  
Vol 11 (9) ◽  
pp. 1672
Author(s):  
Luya Lian ◽  
Tianer Zhu ◽  
Fudong Zhu ◽  
Haihua Zhu

Objectives: Deep learning methods have achieved impressive diagnostic performance in the field of radiology. The current study aimed to use deep learning methods to detect caries lesions, classify their radiographic extension on panoramic films, and compare the classification results with those of expert dentists. Methods: A total of 1160 dental panoramic films were evaluated by three expert dentists. All caries lesions in the films were marked with circles, whose combination was defined as the reference dataset. A training and validation dataset (1071 films) and a test dataset (89 films) were then established from the reference dataset. A convolutional neural network, nnU-Net, was applied to detect caries lesions, and DenseNet121 was applied to classify the lesions according to their depth (lesions in the outer, middle, or inner third of the dentin: D1/D2/D3). The performance of the trained nnU-Net and DenseNet121 models on the test dataset was compared with the results of six expert dentists in terms of the intersection over union (IoU), Dice coefficient, accuracy, precision, recall, negative predictive value (NPV), and F1-score metrics. Results: nnU-Net yielded caries lesion segmentation IoU and Dice coefficient values of 0.785 and 0.663, respectively, and its accuracy and recall were 0.986 and 0.821, respectively. The results of the expert dentists and the neural network did not differ in terms of accuracy, precision, recall, NPV, and F1-score. For caries depth classification, DenseNet121 showed an overall accuracy of 0.957 for D1 lesions, 0.832 for D2 lesions, and 0.863 for D3 lesions. The recall for D1/D2/D3 lesions was 0.765, 0.652, and 0.918, respectively. All metric values, including accuracy, precision, recall, NPV, and F1-score, were shown to be no different from those of the experienced dentists. Conclusion: In detecting and classifying caries lesions on dental panoramic radiographs, the performance of deep learning methods was similar to that of expert dentists. The impact of applying these well-trained neural networks to disease diagnosis and treatment decision making should be explored.
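For the depth-classification step, a plausible setup (not necessarily the authors' exact configuration) is fine-tuning torchvision's DenseNet121 with a three-way head for D1/D2/D3, as sketched below; the input size and the ImageNet weight initialization are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# DenseNet121 with the classifier head swapped for the three depth classes.
net = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
net.classifier = nn.Linear(net.classifier.in_features, 3)  # D1 / D2 / D3

# Cropped lesion patches (resized to 224x224, ImageNet-normalized) go in:
logits = net(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 3])
```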


2021 ◽  
Author(s):  
Marco Luca Sbodio ◽  
Natasha Mulligan ◽  
Stefanie Speichert ◽  
Vanessa Lopez ◽  
Joao Bettencourt-Silva

There is a growing trend of building deep learning patient representations from health records to obtain a comprehensive view of a patient's data for machine learning tasks. This paper proposes a reproducible approach to generate patient pathways from health records and to transform them into a machine-processable, image-like structure useful for deep learning tasks. Based on this approach, we generated over a million pathways from FAIR synthetic health records and used them to train a convolutional neural network. Our initial experiments show that the accuracy of the CNN on a prediction task is comparable to or better than that of other autoencoders trained on the same data, while requiring significantly fewer computational resources for training. We also assess the impact of the size of the training dataset on autoencoder performance. The source code for generating pathways from health records is provided as open source.
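A toy sketch of the pathway-to-image idea, under the assumption that each pathway is an ordered sequence of coded clinical events mapped to a binary time-by-vocabulary matrix (the paper's exact encoding may differ):

```python
import numpy as np

def pathway_to_image(events, vocab, max_len=32):
    """Encode an ordered patient pathway as a binary (time x vocabulary)
    matrix: row t flags the event occurring at step t."""
    img = np.zeros((max_len, len(vocab)), dtype=np.float32)
    for t, ev in enumerate(events[:max_len]):
        img[t, vocab[ev]] = 1.0
    return img

# Hypothetical event vocabulary and pathway, for illustration only.
vocab = {"encounter": 0, "diagnosis:diabetes": 1, "medication:metformin": 2}
img = pathway_to_image(["encounter", "diagnosis:diabetes",
                        "medication:metformin"], vocab)
print(img.shape)  # (32, 3) -- ready to feed a 2D CNN as a 1-channel image
```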


2019 ◽  
Vol 15 (4) ◽  
pp. 76-107
Author(s):  
Nagarathna Ravi ◽  
Vimala Rani P ◽  
Rajesh Alias Harinarayan R ◽  
Mercy Shalinie S ◽  
Karthick Seshadri ◽  
...  

Pure air is vital for sustaining human life, and air pollution has long-term effects on human health. There is an urgent need to protect people from its profound effects. In general, people are unaware of the levels of air pollutants to which they are exposed. Vehicle exhaust, the burning of various kinds of waste, and industrial gases are the top three sources of air pollution; of these, people are most frequently exposed to pollutants from motor vehicles. To aid in protecting people from vehicular air pollutants, this article proposes a framework that utilizes deep learning models. The framework uses a deep belief network to predict the levels of air pollutants along the paths people travel, and compares its predictions with those made by a feed-forward neural network and an extreme learning machine. In the case study undertaken, the deep belief network achieved a higher index of agreement and lower RMSE values.
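The two evaluation metrics the study reports are standard; a small NumPy sketch of RMSE and Willmott's index of agreement follows (the example pollutant values are made up for illustration):

```python
import numpy as np

def rmse(obs, pred):
    """Root-mean-square error between observations and predictions."""
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))

def index_of_agreement(obs, pred):
    """Willmott's index of agreement d in [0, 1]; 1 means perfect prediction."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    num = np.sum((pred - obs) ** 2)
    den = np.sum((np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return float(1.0 - num / den)

obs = np.array([30.0, 42.0, 55.0, 38.0])   # observed pollutant levels
pred = np.array([33.0, 40.0, 50.0, 41.0])  # model predictions
print(rmse(obs, pred), index_of_agreement(obs, pred))
```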


2019 ◽  
Vol 2019 ◽  
pp. 1-14
Author(s):  
Renzhou Gui ◽  
Tongjie Chen ◽  
Han Nie

A growing body of research has shown that machine learning can be used to diagnose and study major depressive disorder (MDD) in the brain. We propose a deep learning network with multiple branches and local residual feedback for four types of functional magnetic resonance imaging (fMRI) data recorded from depressed patients and healthy controls while listening to positive- and negative-emotion music. We use a large convolution kernel of the same size as the correlation matrix to match features across 264 regions of interest (ROIs). First, the four-dimensional fMRI data are used to generate a two-dimensional ROI-based correlation matrix for each subject's brain, which is then thresholded using a value selected according to the characteristics of complex networks and small-world networks. After that, the deep learning model in this paper is compared for classification with a support vector machine (SVM), logistic regression (LR), k-nearest neighbors (kNN), a common deep neural network (DNN), and a deep convolutional neural network (CNN). Finally, we calculate the matched ROIs from the intermediate results of our deep learning model, which may help related fields further explore the pathogenesis of depression.
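The correlation-matrix construction can be sketched in a few lines of NumPy, assuming ROI time series have already been extracted from the 4D fMRI volumes; the 0.3 threshold below is an illustrative placeholder, not the paper's selected value:

```python
import numpy as np

def roi_correlation_matrix(ts, threshold=0.3):
    """ts: (time, n_rois) ROI time series extracted from 4D fMRI.
    Returns the thresholded (n_rois, n_rois) functional-connectivity
    matrix used as the 2D input to the classifier."""
    corr = np.corrcoef(ts.T)              # Pearson correlation per ROI pair
    corr[np.abs(corr) < threshold] = 0.0  # sparsify, small-world style
    return corr

ts = np.random.randn(200, 264)            # 200 volumes, 264 ROIs
mat = roi_correlation_matrix(ts)
print(mat.shape)  # (264, 264)
```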


Sensors ◽  
2020 ◽  
Vol 20 (7) ◽  
pp. 2085 ◽  
Author(s):  
Rami M. Jomaa ◽  
Hassan Mathkour ◽  
Yakoub Bazi ◽  
Md Saiful Islam

Although fingerprint-based systems are among the most commonly used biometric systems, they suffer from a critical vulnerability to presentation attacks (PAs). Several approaches based on fingerprint biometrics have therefore been developed to increase robustness against PAs. We propose an alternative approach based on the combination of fingerprint and electrocardiogram (ECG) signals. An ECG signal has advantageous characteristics that hinder replication, so combining a fingerprint with an ECG signal is a potentially interesting solution to reduce the impact of PAs on biometric systems. We also propose a novel end-to-end deep learning-based fusion neural architecture between a fingerprint and an ECG signal to improve PA detection in fingerprint biometrics. Our model uses state-of-the-art EfficientNets for generating the fingerprint feature representation. For the ECG, we investigate three different architectures based on fully-connected layers (FC), a 1D-convolutional neural network (1D-CNN), and a 2D-convolutional neural network (2D-CNN). The 2D-CNN converts the ECG signals into an image and uses MobileNet-v2 inverted residual layers for feature generation. We evaluated the method on a multimodal dataset, namely a customized fusion of the LivDet 2015 fingerprint dataset and ECG data from real subjects. Experimental results reveal that this architecture yields a better average classification accuracy than a single fingerprint modality.
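A compact PyTorch sketch of the fusion idea follows; small stand-in convolutional branches replace the EfficientNet and MobileNet-v2 backbones the paper actually uses, so all sizes here are assumptions:

```python
import torch
import torch.nn as nn

class FusionPAD(nn.Module):
    """Late-fusion presentation-attack detection: a fingerprint CNN branch
    and a 1D-CNN ECG branch, concatenated into one classifier head."""
    def __init__(self, hidden=32):
        super().__init__()
        self.fp = nn.Sequential(nn.Conv2d(1, 8, 3, stride=2), nn.ReLU(),
                                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                nn.Linear(8, hidden))
        self.ecg = nn.Sequential(nn.Conv1d(1, 8, 7, stride=2), nn.ReLU(),
                                 nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                                 nn.Linear(8, hidden))
        self.head = nn.Linear(2 * hidden, 2)  # live vs. presentation attack
    def forward(self, fp_img, ecg_sig):
        return self.head(torch.cat([self.fp(fp_img),
                                    self.ecg(ecg_sig)], dim=1))

out = FusionPAD()(torch.randn(2, 1, 96, 96), torch.randn(2, 1, 500))
print(out.shape)  # torch.Size([2, 2])
```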


2021 ◽  
Vol 11 (1) ◽  
pp. 75
Author(s):  
Nibras Abo Alzahab ◽  
Luca Apollonio ◽  
Angelo Di Iorio ◽  
Muaaz Alshalak ◽  
Sabrina Iarlori ◽  
...  

Background: Brain-Computer Interfaces (BCIs) are becoming more reliable thanks to the advantages of Artificial Intelligence (AI). Recently, hybrid Deep Learning (hDL), which combines different DL algorithms, has gained momentum. In this work, we present a review of hDL-based BCIs starting from the seminal studies in 2015. Objectives: We reviewed 47 papers applying hDL to BCI systems, published between 2015 and 2020, extracting trends and highlighting aspects relevant to the topic. Methods: We queried four scientific search engines (Google Scholar, PubMed, IEEE Xplore, and Elsevier Science Direct) and extracted data items from each paper, such as the database used, kind of application, online/offline training, tasks used for the BCI, pre-processing methodology adopted, type of normalization used, which kinds of features were extracted, type of DL architecture used, number of layers implemented, and which optimization approach was used. All these items were then investigated one by one to uncover trends. Results: Our investigation reveals that Electroencephalography (EEG) has been the most used technique. Interestingly, despite the low Signal-to-Noise Ratio (SNR) of EEG data, which ordinarily makes pre-processing mandatory, pre-processing was used in only 21.28% of the cases, suggesting that hDL can overcome this intrinsic drawback of EEG data. Temporal features appear to be the most effective, with 93.94% accuracy, while spatial-temporal features are the most used, appearing in 33.33% of the cases investigated. The most used architecture is the Convolutional Neural Network-Recurrent Neural Network (CNN-RNN) hybrid, used in 47% of the cases. Moreover, half of the studies used a low number of layers to achieve a good compromise between network complexity and computational efficiency. Significance: To give useful information to the scientific community, we make our summary table of hDL-based BCI papers available and invite the community to contribute to it directly with their published work. We have indicated a list of open challenges, emphasizing the need to use neuroimaging techniques other than EEG, such as functional Near-Infrared Spectroscopy (fNIRS); to investigate more deeply the advantages and disadvantages of pre-processing and its relationship with the accuracy obtained; to implement new combinations of architectures, such as RNN-based and Deep Belief Network (DBN)-based ones; and to better explore the frequency and temporal-frequency features of the data at hand.
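To make the dominant CNN-RNN pattern concrete, here is a minimal PyTorch sketch of such a hybrid applied to multichannel EEG windows; channel counts, layer sizes, and the two-class head are illustrative assumptions:

```python
import torch
import torch.nn as nn

class EEGCNNRNN(nn.Module):
    """CNN-RNN hybrid: a 1D CNN extracts per-window features from
    multichannel EEG, an LSTM models their temporal dynamics."""
    def __init__(self, n_channels=32, n_classes=2, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv1d(n_channels, 32, 5, padding=2),
                                 nn.ReLU(), nn.MaxPool1d(4))
        self.rnn = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)
    def forward(self, eeg):                 # (batch, channels, samples)
        f = self.cnn(eeg).permute(0, 2, 1)  # (batch, samples/4, 32)
        _, (h, _) = self.rnn(f)
        return self.fc(h[-1])

print(EEGCNNRNN()(torch.randn(4, 32, 256)).shape)  # torch.Size([4, 2])
```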

