Predictive deep learning models for environmental properties: the direct calculation of octanol–water partition coefficients from molecular graphs

2019 ◽  
Vol 21 (16) ◽  
pp. 4555-4565 ◽  
Author(s):  
Zihao Wang ◽  
Yang Su ◽  
Weifeng Shen ◽  
Saimeng Jin ◽  
James H. Clark ◽  
...  

A deep learning approach coupling a Tree-LSTM network with a back-propagation neural network to predict the octanol–water partition coefficient.
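As a rough illustration of this coupling, the sketch below (an assumption, not the authors' implementation) encodes a molecular tree with a Child-Sum Tree-LSTM cell and feeds the resulting vector to a small back-propagation (feed-forward) head that regresses log P. Node feature size, hidden width, and the toy tree are illustrative choices.

```python
# Minimal sketch: Tree-LSTM encoder + feed-forward log P regressor (PyTorch).
import torch
import torch.nn as nn


class ChildSumTreeLSTMCell(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.iou = nn.Linear(in_dim + hid_dim, 3 * hid_dim)  # input/output/update gates
        self.f = nn.Linear(in_dim + hid_dim, hid_dim)         # one forget gate per child

    def forward(self, x, child_h, child_c):
        h_sum = child_h.sum(dim=0)                             # Child-Sum aggregation
        i, o, u = torch.chunk(self.iou(torch.cat([x, h_sum])), 3)
        i, o, u = torch.sigmoid(i), torch.sigmoid(o), torch.tanh(u)
        f = torch.sigmoid(self.f(torch.cat([x.expand(child_h.size(0), -1), child_h], dim=1)))
        c = i * u + (f * child_c).sum(dim=0)
        return o * torch.tanh(c), c


class LogPRegressor(nn.Module):
    def __init__(self, in_dim=16, hid_dim=64):
        super().__init__()
        self.cell = ChildSumTreeLSTMCell(in_dim, hid_dim)
        self.head = nn.Sequential(nn.Linear(hid_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def encode(self, node):
        # node = (feature_vector, [children]); recursively encode the molecular tree
        x, children = node
        if not children:
            zero = torch.zeros(1, self.cell.f.out_features)
            return self.cell(x, zero, zero)
        hs, cs = zip(*(self.encode(child) for child in children))
        return self.cell(x, torch.stack(hs), torch.stack(cs))

    def forward(self, tree):
        h, _ = self.encode(tree)
        return self.head(h)            # predicted octanol-water log P


# Toy usage on a two-leaf molecular tree with random node features.
leaf = (torch.randn(16), [])
tree = (torch.randn(16), [leaf, leaf])
print(LogPRegressor()(tree))
```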

2021 ◽  
Vol 9 ◽  
Author(s):  
Wen-Tsao Pan ◽  
Qiu-Yu Huang ◽  
Zi-Yin Yang ◽  
Fei-Yan Zhu ◽  
Yu-Ning Pang ◽  
...  

This paper examines the determinants of tourism stock returns in China from October 25, 2018, to October 21, 2020, a period that includes the COVID-19 era. We propose four deep learning prediction models based on the Back Propagation Neural Network (BPNN), each paired with a quantum-inspired optimizer: the Quantum Swarm Intelligence Algorithm (QSIA), the Quantum Step Fruit-Fly Optimization Algorithm (QSFOA), the Quantum Particle Swarm Optimization Algorithm (QPSO), and the Quantum Genetic Algorithm (QGA). First, a rough set approach is used to reduce the dimensionality of the indices. Second, the number of neurons in the hidden layers of the BPNN is optimized by QSIA, QSFOA, QPSO, and QGA, respectively. Finally, deep learning prediction models for the non-linear real stock returns are built with the best number of neurons found by each algorithm. The results indicate that the QSFOA-BPNN model has the highest prediction accuracy among all models, making it the most effective of the methods examined. This evidence is robust across different sub-periods.
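A minimal sketch of this pipeline is shown below, assuming scikit-learn's MLPRegressor as the back-propagation network and a plain random swarm search standing in for the quantum operators of QSFOA/QPSO/QGA; the synthetic data, layer-count range, and swarm size are all illustrative assumptions.

```python
# Sketch: BPNN whose hidden-layer widths are selected by a swarm-style search.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 12))                                # stand-in for reduced stock indices
y = X @ rng.normal(size=12) + 0.1 * rng.normal(size=400)      # stand-in for stock returns
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.25, random_state=0)

def fitness(hidden_sizes):
    """Validation MSE of a BPNN with the given hidden-layer widths."""
    net = MLPRegressor(hidden_layer_sizes=hidden_sizes, max_iter=2000, random_state=0)
    net.fit(X_tr, y_tr)
    return mean_squared_error(y_va, net.predict(X_va))

# Swarm of candidate architectures; each "particle" is a pair of hidden-layer widths.
swarm = [tuple(int(w) for w in rng.integers(4, 64, size=2)) for _ in range(10)]
best = min(swarm, key=fitness)
print("best hidden-layer sizes:", best, "validation MSE:", fitness(best))
```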


Author(s):  
Shikha Bhardwaj ◽  
Gitanjali Pandove ◽  
Pawan Kumar Dahiya

Background: To retrieve a particular image from a vast repository of images, an efficient system is required; such a system is commonly known as a content-based image retrieval (CBIR) system. Color is an important attribute of an image, and the proposed system uses a hybrid color descriptor for color feature extraction. Deep learning has gained prominence in the current era, so the performance of this fusion-based color descriptor is also analyzed in combination with deep learning classifiers.
Method: This paper describes a comparative experimental analysis of various color descriptors; the best two are chosen to form an efficient color-based hybrid system, denoted the combined color moment-color autocorrelogram (Co-CMCAC). To increase the retrieval accuracy of the hybrid system, a cascade forward back-propagation neural network (CFBPNN) is used, and its classification accuracy is compared with that of a Patternnet neural network.
Results: The hybrid color descriptor achieves retrieval accuracies of 95.4%, 88.2%, 84.4%, and 96.05% on the Corel-1K, Corel-5K, Corel-10K, and Oxford Flower benchmark datasets, respectively, outperforming many state-of-the-art related techniques.
Conclusion: This paper presents an experimental and analytical comparison of different color feature descriptors, namely the color moment (CM), color auto-correlogram (CAC), color histogram (CH), color coherence vector (CCV), and dominant color descriptor (DCD). The proposed hybrid color descriptor (Co-CMCAC) is used to extract color features, with a CFBPNN as the classifier, on four benchmark datasets: Corel-1K, Corel-5K, Corel-10K, and Oxford Flower.
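A hedged sketch of the fused descriptor follows: per-channel color moments (mean, standard deviation, skewness) concatenated with a distance-1 color autocorrelogram computed on a coarsely quantized image. The quantization level and the random stand-in image are assumptions; the resulting feature vector would then be classified, e.g. by a cascade-forward BPNN.

```python
# Sketch: combined color moment + color autocorrelogram feature vector (NumPy).
import numpy as np

def color_moments(img):
    """img: HxWx3 float array in [0, 1]; returns 9 moments (3 per channel)."""
    feats = []
    for c in range(3):
        ch = img[..., c].ravel()
        mean, std = ch.mean(), ch.std()
        skew = np.cbrt(((ch - mean) ** 3).mean())
        feats += [mean, std, skew]
    return np.array(feats)

def autocorrelogram(img, levels=4):
    """Per quantized color: probability that the distance-1 horizontal neighbour shares it."""
    q = np.clip((img * levels).astype(int), 0, levels - 1)
    idx = q[..., 0] * levels * levels + q[..., 1] * levels + q[..., 2]
    same = (idx[:, :-1] == idx[:, 1:]).astype(float)
    hist = np.zeros(levels ** 3)
    counts = np.zeros(levels ** 3)
    np.add.at(hist, idx[:, :-1].ravel(), same.ravel())
    np.add.at(counts, idx[:, :-1].ravel(), 1.0)
    return hist / np.maximum(counts, 1.0)

img = np.random.rand(64, 64, 3)                    # stand-in for a Corel or Oxford Flower image
feature_vector = np.concatenate([color_moments(img), autocorrelogram(img)])
print(feature_vector.shape)                        # 9 moments + 64-bin autocorrelogram
```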


2021 ◽  
Vol 11 (9) ◽  
pp. 3952
Author(s):  
Shimin Tang ◽  
Zhiqiang Chen

With the ubiquitous use of mobile imaging devices, collecting perishable disaster-scene data has become unprecedentedly easy. However, existing computing methods struggle to understand these images, which carry significant complexity and uncertainty. In this paper, the authors investigate the problem of disaster-scene understanding through a deep learning approach. Two image attributes are considered: hazard type and damage level. Three deep learning models are trained and their performance assessed. The best model for hazard-type prediction achieves an overall accuracy (OA) of 90.1%, and the best damage-level classification model achieves an explainable OA of 62.6%; both adopt a Faster R-CNN architecture with a ResNet50 network as the feature extractor. It is concluded that hazard types are more identifiable than damage levels in disaster-scene images. Further insights are revealed: damage-level recognition suffers more from inter- and intra-class variation, and treating damage levels as hazard-agnostic further contributes to the underlying uncertainty.
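The sketch below shows the general flavour of a ResNet50-based hazard-type classifier; it is a simplified stand-in for the Faster R-CNN + ResNet50 pipeline in the paper, and the number of hazard classes, the use of ImageNet weights, and the dummy batch are all assumptions.

```python
# Sketch: fine-tuning a ResNet50 backbone for hazard-type classification (PyTorch / torchvision).
import torch
import torch.nn as nn
from torchvision import models

NUM_HAZARD_CLASSES = 4                                   # hypothetical: e.g. earthquake, flood, fire, wind

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_HAZARD_CLASSES)

# One training step on a dummy batch of 224x224 RGB disaster-scene images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_HAZARD_CLASSES, (8,))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()
optimizer.step()
```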


2022 ◽  
Author(s):  
Maede Maftouni ◽  
Bo Shen ◽  
Andrew Chung Chee Law ◽  
Niloofar Ayoobi Yazdi ◽  
Zhenyu Kong

The global extent of COVID-19 mutations and the consequent depletion of hospital resources highlighted the necessity of effective computer-assisted medical diagnosis. COVID-19 detection mediated by deep learning models can help diagnose this highly contagious disease and lower infectivity and mortality rates. Computed tomography (CT) is the preferred imaging modality for building automatic COVID-19 screening and diagnosis models. It is well-known that the training set size significantly impacts the performance and generalization of deep learning models. However, accessing a large dataset of CT scan images from an emerging disease like COVID-19 is challenging. Therefore, data efficiency becomes a significant factor in choosing a learning model. To this end, we present a multi-task learning approach, namely, a mask-guided attention (MGA) classifier, to improve the generalization and data efficiency of COVID-19 classification on lung CT scan images.

The novelty of this method is compensating for the scarcity of data by employing more supervision with lesion masks, increasing the sensitivity of the model to COVID-19 manifestations, and helping both generalization and classification performance. Our proposed model achieves better overall performance than the single-task baseline and state-of-the-art models, as measured by various popular metrics. In our experiment with different percentages of data from our curated dataset, the classification performance gain from this multi-task learning approach is more significant for the smaller training sizes. Furthermore, experimental results demonstrate that our method enhances the focus on the lesions, as witnessed by both attention and attribution maps, resulting in a more interpretable model.
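A minimal sketch of the mask-guided attention idea, under stated assumptions (toy encoder sizes, a single mask head, and an illustrative loss weighting rather than the authors' exact design): one encoder, a small head that predicts a lesion mask, and a classifier whose features are spatially re-weighted by that mask, trained with a joint classification + segmentation loss.

```python
# Sketch: multi-task mask-guided attention (MGA) classifier for CT slices (PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MGAClassifier(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.mask_head = nn.Conv2d(32, 1, 1)             # lesion-mask logits
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, ct_slice):
        feats = self.encoder(ct_slice)                    # B x 32 x H x W
        mask_logits = self.mask_head(feats)               # B x 1 x H x W
        attn = torch.sigmoid(mask_logits)                 # predicted mask acts as spatial attention
        pooled = F.adaptive_avg_pool2d(feats * attn, 1).flatten(1)
        return self.classifier(pooled), mask_logits


# Joint (multi-task) loss: classification + mask supervision from lesion annotations.
model = MGAClassifier()
x = torch.randn(4, 1, 128, 128)                           # dummy CT slices
y = torch.randint(0, 2, (4,))                             # COVID / non-COVID labels
m = torch.randint(0, 2, (4, 1, 128, 128)).float()         # dummy lesion masks
logits, mask_logits = model(x)
loss = F.cross_entropy(logits, y) + 0.5 * F.binary_cross_entropy_with_logits(mask_logits, m)
loss.backward()
```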


Author(s):  
V. Punitha ◽  
C. Mala

The recent technological transformation in application deployment, together with the enriched availability of applications, induces attackers to shift their targets to the services provided by the application layer. Application layer DoS or DDoS attacks are launched only after a connection to the server has been established, which makes them stealthier than network or transport layer attacks. Existing defence mechanisms are ineffective at detecting application layer DoS or DDoS attacks. Hence, this chapter proposes a novel deep learning classification model that uses an autoencoder to detect application layer DDoS attacks by measuring deviations in the incoming network traffic. The experimental results show that the proposed deep autoencoder model detects application layer attacks in HTTP traffic more proficiently than existing machine learning models.
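A minimal sketch of this detection scheme, under stated assumptions (feature dimensionality, layer sizes, and a 3-sigma threshold that are illustrative, not the chapter's exact design): an autoencoder is trained to reconstruct benign application-layer traffic features, and requests whose reconstruction error deviates strongly from normal are flagged as attacks.

```python
# Sketch: autoencoder-based application-layer DDoS detection via reconstruction error (PyTorch).
import torch
import torch.nn as nn

FEATURES = 20                                             # hypothetical per-request traffic features

autoencoder = nn.Sequential(
    nn.Linear(FEATURES, 8), nn.ReLU(),                    # encoder (bottleneck of 8 units)
    nn.Linear(8, FEATURES),                               # decoder
)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

benign = torch.randn(1024, FEATURES)                      # stand-in for normal HTTP traffic
for _ in range(200):                                      # train to reconstruct normal traffic only
    opt.zero_grad()
    loss = nn.functional.mse_loss(autoencoder(benign), benign)
    loss.backward()
    opt.step()

# At detection time, a large reconstruction error marks a deviation from normal traffic.
errors = ((autoencoder(benign) - benign) ** 2).mean(dim=1)
threshold = errors.mean() + 3 * errors.std()              # simple 3-sigma rule
incoming = torch.randn(5, FEATURES) * 4                   # suspicious-looking requests
is_attack = ((autoencoder(incoming) - incoming) ** 2).mean(dim=1) > threshold
print(is_attack)
```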


2018 ◽  
Vol 7 (3.34) ◽  
pp. 237
Author(s):  
R Aswini Priyanka ◽  
C Ashwitha ◽  
R Arun Chakravarthi ◽  
R Prakash

Face recognition has become an important research topic in the scientific world. A face identification system is an application capable of verifying a human face from live video or digital images; one common method is to compare a person's particular facial attributes with the images in a database. It is widely used in biometrics and security systems. Face identification used to be a challenging problem because of variations in viewpoint and facial expression, but since deep learning neural networks entered the technology stack it has become much easier to detect and recognize faces, and efficiency has increased dramatically. In this paper, the ORL database, which contains ten images of each of forty people, is used to evaluate our methodology. We use a Back Propagation Neural Network (BPNN) within a deep learning model to recognize faces and increase the efficiency of the model compared with previously existing face recognition models.
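A hedged sketch of this setup follows: ORL-style grayscale face images (40 subjects, 10 images each) are flattened and classified with a back-propagation network, here scikit-learn's MLPClassifier. The random arrays stand in for the actual ORL images, and the layer sizes are assumptions.

```python
# Sketch: BPNN face recognition on ORL-style data (scikit-learn).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
faces = rng.random((40 * 10, 92 * 112))                   # ORL images are 92x112 pixels, flattened
labels = np.repeat(np.arange(40), 10)                     # subject identity per image

X_tr, X_te, y_tr, y_te = train_test_split(faces, labels, test_size=0.2, stratify=labels)
bpnn = MLPClassifier(hidden_layer_sizes=(256, 64), max_iter=300)
bpnn.fit(X_tr, y_tr)
print("recognition accuracy:", bpnn.score(X_te, y_te))
```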


2020 ◽  
Vol 12 (10) ◽  
pp. 1581 ◽  
Author(s):  
Daniel Perez ◽  
Kazi Islam ◽  
Victoria Hill ◽  
Richard Zimmerman ◽  
Blake Schaeffer ◽  
...  

Seagrass critically affects coastal ecosystems, both economically and ecologically. However, reliable seagrass distribution information is lacking in nearly all parts of the world because of the excessive costs associated with its assessment. In this paper, we develop two deep learning models for automatic seagrass distribution quantification based on 8-band satellite imagery. Specifically, we implement a deep capsule network (DCN) and a deep convolutional neural network (CNN) to assess seagrass distribution through regression. The DCN model first determines whether seagrass is present in the image through classification and, if so, quantifies the seagrass through regression. During training, the regression and classification modules are jointly optimized to achieve end-to-end learning. The CNN model is trained strictly for regression on seagrass and non-seagrass patches. In addition, we propose a transfer learning approach that transfers knowledge from deep models trained at one location to perform seagrass quantification at a different location. We evaluate the proposed methods on three WorldView-2 satellite images taken from coastal areas in Florida. Experimental results show that the proposed DCN and CNN models performed similarly and achieved much better results than a linear regression model and a support vector machine. We also demonstrate that using transfer learning techniques for seagrass quantification significantly improved the results compared with directly applying the deep models to new locations.
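The sketch below illustrates the CNN-regression and transfer-learning ingredients in simplified form, under stated assumptions: a small CNN maps 8-band WorldView-2 patches to a seagrass fraction, and only the regression head is re-trained for a new location. Patch size, layer sizes, and the frozen/fine-tuned split are illustrative, not the authors' exact architecture.

```python
# Sketch: CNN regression of seagrass fraction with head-only transfer learning (PyTorch).
import torch
import torch.nn as nn

class SeagrassCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, patch):                             # patch: B x 8 x H x W (8 spectral bands)
        return self.head(self.backbone(patch))            # predicted seagrass fraction in [0, 1]

model = SeagrassCNN()
# ... train on patches from the source location (omitted) ...

# Transfer to a new location: freeze the backbone, fine-tune only the regression head.
for p in model.backbone.parameters():
    p.requires_grad = False
opt = torch.optim.Adam(model.head.parameters(), lr=1e-3)
new_patches = torch.randn(16, 8, 32, 32)                  # dummy patches from the new site
new_fraction = torch.rand(16, 1)                          # dummy ground-truth fractions
loss = nn.functional.mse_loss(model(new_patches), new_fraction)
loss.backward()
opt.step()
```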

