Predicting adsorption ability of adsorbents at arbitrary sites for pollutants using deep transfer learning

2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Zhilong Wang ◽  
Haikuo Zhang ◽  
Jiahao Ren ◽  
Xirong Lin ◽  
Tianli Han ◽  
...  

Abstract
Accurately evaluating the adsorption ability of adsorbents for heavy metal ions (HMIs) and organic pollutants in water is critical for the design and preparation of emerging, highly efficient adsorbents. However, predicting the adsorption capability of an adsorbent at arbitrary sites is challenging, because no measurement technology currently exists for identifying active sites and their corresponding activities. Here, we present an efficient artificial intelligence (AI) approach to predict the adsorption ability of adsorbents at arbitrary sites, using three HMIs (Pb(II), Hg(II), and Cd(II)) adsorbed on the surface of representative two-dimensional graphitic-C3N4 as a case study. We apply a deep neural network and transfer learning to predict the adsorption capabilities of the three HMIs at arbitrary sites, obtaining the ordering Cd(II) > Hg(II) > Pb(II) with root-mean-squared errors below 0.1 eV. The proposed AI method matches the prediction accuracy of ab initio DFT calculations, but is millions of times faster than DFT at predicting adsorption abilities at arbitrary sites and requires only one-tenth of the data needed to train from scratch. We further verify the adsorption capacity of g-C3N4 towards HMIs experimentally and obtain results consistent with the AI prediction. This indicates that the presented approach can evaluate the adsorption ability of adsorbents efficiently, and can be extended to other disciplines and industries concerned with the adsorption of harmful elements in aqueous solution.
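The data-efficiency claim above can be illustrated with a toy warm-start experiment: pretrain a surrogate model on an abundant source task, then fine-tune it on a ten-times-smaller target task. Everything below (the linear model, the synthetic site features, the learning rates) is an illustrative stand-in, not the paper's actual deep neural network or DFT data.

```python
# Toy transfer-learning sketch: warm-starting from source-task weights
# beats training from scratch when target data is scarce. All values
# here are synthetic stand-ins for the paper's DFT-trained model.
import math
import random

random.seed(0)

def make_data(n, true_w, noise):
    """Synthetic (site features -> adsorption energy) pairs."""
    data = []
    for _ in range(n):
        x = [random.uniform(-1.0, 1.0) for _ in range(3)]
        y = sum(w * xi for w, xi in zip(true_w, x)) + random.gauss(0.0, noise)
        data.append((x, y))
    return data

def sgd(data, w, lr=0.01, epochs=3):
    """Plain stochastic gradient descent on squared error."""
    for _ in range(epochs):
        for x, y in data:
            err = sum(wi * xi for wi, xi in zip(w, x)) - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

def rmse(data, w):
    sq = [(sum(wi * xi for wi, xi in zip(w, x)) - y) ** 2 for x, y in data]
    return math.sqrt(sum(sq) / len(sq))

# Source task: plenty of labeled sites for a related adsorbate.
source = make_data(500, [1.0, -0.5, 0.3], noise=0.05)
w_pre = sgd(source, [0.0, 0.0, 0.0], epochs=50)

# Target task: one-tenth the data, slightly different physics.
target = make_data(50, [1.1, -0.4, 0.35], noise=0.05)
rmse_scratch = rmse(target, sgd(target, [0.0, 0.0, 0.0]))  # cold start
rmse_transfer = rmse(target, sgd(target, list(w_pre)))     # warm start

print(f"scratch: {rmse_scratch:.3f}  transfer: {rmse_transfer:.3f}")
```

With the same small training budget, the warm-started model lands much closer to the target relation than the cold-started one, which is the mechanism behind the one-tenth-data saving the abstract reports.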

2020 ◽  
Vol 13 (1) ◽  
pp. 23
Author(s):  
Wei Zhao ◽  
William Yamada ◽  
Tianxin Li ◽  
Matthew Digman ◽  
Troy Runge

In recent years, precision agriculture has been researched as a promising means to increase crop production with fewer inputs and meet the growing demand for agricultural products. Computer vision-based crop detection with unmanned aerial vehicle (UAV)-acquired images is a critical tool for precision agriculture. However, object detection using deep learning algorithms relies on a significant amount of manually prelabeled training data as ground truth. Field object detection, such as bale detection, is especially difficult because of (1) long-period image acquisition under different illumination conditions and seasons; (2) limited existing prelabeled data; and (3) few pretrained models and little prior research to use as references. This work increases bale detection accuracy from limited data collection and labeling by building an innovative algorithm pipeline. First, an object detection model is trained using 243 images captured under good illumination conditions in fall from croplands. Next, domain adaptation (DA), a kind of transfer learning, is applied to synthesize training data under diverse environmental conditions with automatic labels. Finally, the object detection model is optimized with the synthesized datasets. The case study shows that the proposed method improves bale detection performance, raising the recall, mean average precision (mAP), and F measure (F1 score) from averages of 0.59, 0.7, and 0.7 (object detection alone) to averages of 0.93, 0.94, and 0.89 (object detection + DA), respectively. This approach could easily be scaled to many other crop field objects and will contribute significantly to precision agriculture.
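The "automatic labels" step can be sketched very simply: because re-lighting an image does not move the objects in it, each labeled fall image can be re-rendered under new illumination while its bounding boxes are reused unchanged. The tiny 2-D integer "image" and the brightness transform below are illustrative stand-ins for the paper's domain adaptation model.

```python
# Hedged sketch of label-free training-data synthesis: re-light a labeled
# image and carry its bounding boxes over as automatic labels.
def adjust_brightness(image, delta):
    """Shift every pixel by delta, clamped to the 8-bit range [0, 255]."""
    return [[max(0, min(255, p + delta)) for p in row] for row in image]

def synthesize(sample, deltas):
    """One labeled image -> several re-lit copies that reuse its boxes."""
    image, boxes = sample
    return [(adjust_brightness(image, d), boxes) for d in deltas]

# A labeled source image: pixels plus one (x, y, w, h) bale bounding box.
fall_image = [[120, 130, 125], [200, 210, 205], [90, 95, 100]]
fall_boxes = [(0, 1, 2, 1)]

synthetic = synthesize((fall_image, fall_boxes), deltas=[-60, 40, 120])

for image, boxes in synthetic:
    assert boxes == fall_boxes  # labels transfer automatically
    assert all(0 <= p <= 255 for row in image for p in row)
print(len(synthetic), "synthetic training samples")
```

One manually labeled image yields several training samples spanning darker and brighter conditions, with zero additional labeling effort; the actual pipeline applies far richer appearance changes than a brightness shift, but the label-reuse logic is the same.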


2020 ◽  
Vol 152 ◽  
pp. S146-S147
Author(s):  
J. Perez-Alija ◽  
P. Gallego ◽  
M. Lizondo ◽  
J. Nuria ◽  
A. Latorre-Musoll ◽  
...  

2021 ◽  
Vol 10 (9) ◽  
pp. 25394-25398
Author(s):  
Chitra Desai

Deep learning models have demonstrated improved efficacy in image classification since the ImageNet Large Scale Visual Recognition Challenge began in 2010. Image classification in computer vision has been further advanced by the advent of transfer learning. Training a model on a huge dataset demands substantial computational resources and adds considerable cost to learning. Transfer learning reduces this cost and helps avoid reinventing the wheel. Several pretrained models, such as VGG16, VGG19, ResNet50, InceptionV3, and EfficientNet, are widely used. This paper demonstrates image classification using the pretrained deep neural network model VGG16, which is trained on images from the ImageNet dataset. After obtaining the convolutional base model, a new deep neural network model built on a fully connected network is added on top of it for image classification. This classifier uses features extracted from the convolutional base model.
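The frozen-base / trainable-head pattern described above can be shown without any deep learning framework. Below, a fixed feature map stands in for the VGG16 convolutional base (in practice one would load e.g. `tensorflow.keras.applications.VGG16` with `include_top=False` and freeze it); only the small classifier head is trained. The dataset and feature map are synthetic stand-ins for illustration.

```python
# Minimal sketch of transfer learning's structure: a frozen feature
# extractor ("conv base") plus a small trainable classifier head.
import math
import random

random.seed(1)

def frozen_base(x):
    """Stand-in for the pretrained conv base: fixed, never updated."""
    return [x[0], x[1], x[0] * x[1], 1.0]  # features plus a bias term

def train_head(data, lr=0.5, epochs=100):
    """Logistic-regression head trained on the frozen features only."""
    w = [0.0] * 4
    for _ in range(epochs):
        for x, label in data:
            f = frozen_base(x)
            z = sum(wi * fi for wi, fi in zip(w, f))
            p = 1.0 / (1.0 + math.exp(-z))
            w = [wi + lr * (label - p) * fi for wi, fi in zip(w, f)]
    return w

def predict(w, x):
    f = frozen_base(x)
    return 1 if sum(wi * fi for wi, fi in zip(w, f)) > 0 else 0

# Tiny synthetic 2-class dataset (class = sign of x0 + x1).
points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(40)]
data = [(x, 1 if x[0] + x[1] > 0 else 0) for x in points]

w = train_head(data)
accuracy = sum(predict(w, x) == y for x, y in data) / len(data)
print(f"head accuracy on training data: {accuracy:.2f}")
```

Only the four head weights are updated; the base never changes, which is exactly why this approach is cheap: the expensive representation was paid for once, during pretraining.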


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Xiaoping Guo

Traditional text-annotation-based video retrieval labels videos manually with text, which is inefficient, highly subjective, and generally cannot describe the meaning of a video accurately. Traditional content-based video retrieval uses convolutional neural networks to extract the underlying feature information of images to build indexes, and achieves similarity retrieval over video feature vectors according to certain similarity measure algorithms. In this paper, by studying the characteristics of sports videos, we propose a histogram difference method based on transfer learning and a four-step method based on block matching for mutation (abrupt-cut) detection and fade detection of video shots, respectively. Through adaptive thresholding, regions with large frame-difference changes are marked as candidate shot regions, and the shot boundaries are then determined by the mutation detection algorithm. Combined with the characteristics of sports video, this paper also proposes a key frame extraction method based on clustering and optical flow analysis, and compares it experimentally with the traditional clustering method. The algorithm effectively removes redundant frames, and the extracted key frames are more representative.
Through extensive experiments, the keyword fuzzy-finding algorithm based on an improved deep neural network and ontology semantic expansion proposed in this paper shows desirable retrieval performance, making this method feasible for video underlying-feature extraction, annotation, and keyword finding. An outstanding feature of the algorithm is that it can quickly and effectively retrieve the desired video from a large number of Internet video resources, reducing the false detection rate and miss rate while improving fidelity, which basically meets people’s daily needs.

