Domain knowledge integration into deep learning for typhoon intensity classification

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Maiki Higa ◽  
Shinya Tanahara ◽  
Yoshitaka Adachi ◽  
Natsumi Ishiki ◽  
Shin Nakama ◽  
...  

Abstract: In this report, we propose a deep learning technique for high-accuracy estimation of the intensity class of a typhoon from a single satellite image by incorporating meteorological domain knowledge. Using the Visual Geometry Group model VGG-16 on images preprocessed with fisheye distortion, which enhances a typhoon's eye, eyewall, and cloud distribution, we achieved much higher classification accuracy than a previous study, even with sequential-split validation. By comparing t-distributed stochastic neighbor embedding (t-SNE) plots of the VGG feature maps with the original satellite images, we also verified that the fisheye preprocessing facilitated cluster formation, suggesting that our model successfully extracted image features related to the typhoon intensity class. Moreover, gradient-weighted class activation mapping (Grad-CAM) was applied to highlight the eye and the cloud distribution surrounding it, which are important regions for intensity classification; the results suggest that our model qualitatively gained a viewpoint similar to that of domain experts. A series of analyses revealed that a purely data-driven deep learning approach has limitations, and that the integration of domain knowledge could bring new breakthroughs.
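The fisheye idea above can be illustrated with a minimal radial remapping that magnifies the image center, where a typhoon's eye and eyewall sit. This is a hedged sketch: the function name and the power-law mapping are our own illustration, not the authors' exact transform.

```python
import numpy as np

def fisheye_center_zoom(img, strength=2.0):
    """Radially remap an image so the central region (e.g. a typhoon's eye)
    occupies more output pixels. Illustrative stand-in for the paper's
    fisheye preprocessing; the exact mapping used there is not specified here."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    dy, dx = ys - cy, xs - cx
    r = np.sqrt(dy**2 + dx**2)
    r_max = np.sqrt(cy**2 + cx**2)
    # An output pixel at radius r samples from a smaller source radius
    # r_src = r_max * (r / r_max)**strength, so the center is magnified.
    r_src = r_max * (r / r_max) ** strength
    scale = np.where(r > 0, r_src / np.maximum(r, 1e-9), 0.0)
    src_y = np.clip(cy + dy * scale, 0, h - 1).astype(int)
    src_x = np.clip(cx + dx * scale, 0, w - 1).astype(int)
    return img[src_y, src_x]
```

With `strength=1.0` the mapping is the identity; larger values spread central content over a wider output area while leaving the image borders fixed.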

Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5312
Author(s):  
Yanni Zhang ◽  
Yiming Liu ◽  
Qiang Li ◽  
Jianzhong Wang ◽  
Miao Qi ◽  
...  

Recently, deep learning-based image deblurring and deraining have been well developed. However, most of these methods fail to distill useful features. Moreover, exploiting detailed image features in a deep learning framework usually requires a large number of parameters, which inevitably imposes a heavy computational burden on the network. To solve these problems, we propose a lightweight fusion distillation network (LFDN) for image deblurring and deraining. The proposed LFDN is designed as an encoder-decoder architecture. In the encoding stage, the image features are reduced to various small-scale spaces for multi-scale information extraction and fusion without much information loss. A feature distillation normalization block is then placed at the beginning of the decoding stage, enabling the network to continuously distill and screen valuable channel information from the feature maps. In addition, an attention mechanism carries out an information fusion strategy between the distillation modules and feature channels. By fusing these different kinds of information, our network achieves state-of-the-art image deblurring and deraining results with fewer parameters and outperforms existing methods in model complexity.
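The channel-wise screening described above can be sketched as a squeeze-and-gate operation: global-average-pool each channel, compute per-channel gates, and reweight the feature map. The function and parameter shapes below are our own minimal illustration, not the LFDN's actual block design.

```python
import numpy as np

def channel_attention(feat, w, b):
    """Toy channel-attention reweighting in the spirit of distillation/attention
    fusion. feat: (C, H, W) feature map; w: (C, C) and b: (C,) are learned
    parameters in a real network (here supplied by the caller)."""
    squeeze = feat.mean(axis=(1, 2))            # global average pool -> (C,)
    logits = w @ squeeze + b                    # per-channel excitation
    gates = 1.0 / (1.0 + np.exp(-logits))       # sigmoid gates in (0, 1)
    return feat * gates[:, None, None]          # reweight each channel
```

Channels whose gate is near 0 are effectively screened out, which is the "distill and screen valuable channel information" behavior in miniature.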


2019 ◽  
Vol 11 (11) ◽  
pp. 1382 ◽  
Author(s):  
Daifeng Peng ◽  
Yongjun Zhang ◽  
Haiyan Guan

Change detection (CD) is essential to accurately understanding land surface changes from available Earth observation data. Owing to its great advantages in deep feature representation and nonlinear problem modeling, deep learning is becoming increasingly popular for solving CD tasks in the remote-sensing community. However, most existing deep learning-based CD methods are implemented by either generating difference images from deep features or learning change relations between pixel patches, which leads to error accumulation because many intermediate processing steps are needed to obtain the final change maps. To address these issues, a novel end-to-end CD method is proposed based on an effective encoder-decoder architecture for semantic segmentation, UNet++, where change maps can be learned from scratch using available annotated datasets. First, co-registered image pairs are concatenated as the input to the improved UNet++ network, where both global and fine-grained information can be utilized to generate feature maps with high spatial accuracy. Then, a fusion strategy over multiple side outputs is adopted to combine change maps from different semantic levels, thereby generating a final change map with high accuracy. The effectiveness and reliability of the proposed CD method are verified on very-high-resolution (VHR) satellite image datasets. Extensive experimental results show that our approach outperforms other state-of-the-art CD methods.
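The side-output fusion step can be sketched as averaging change-probability maps from several semantic levels and thresholding the result. This is a minimal illustration under the assumption of uniform averaging; the paper's fusion strategy may weight levels differently.

```python
import numpy as np

def fuse_side_outputs(side_maps, threshold=0.5):
    """Fuse per-level change-probability maps (each (H, W), values in [0, 1])
    into one binary change map by averaging and thresholding."""
    fused = np.mean(np.stack(side_maps, axis=0), axis=0)
    return (fused > threshold).astype(np.uint8)
```

A pixel is marked as changed only when the levels agree strongly enough on average, which suppresses spurious detections made at a single semantic level.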


2021 ◽  
Vol 13 (11) ◽  
pp. 2116
Author(s):  
Chia-Yu Hsu ◽  
Wenwen Li ◽  
Sizhe Wang

This paper introduces a new GeoAI solution to support automated mapping of craters across the surface of Mars. Traditional crater detection algorithms work only in a semi-automated or multi-stage manner, and most were developed to handle a specific dataset in a small subarea of the Martian surface, hindering their transferability to global crater detection. As an alternative, we propose a GeoAI solution based on deep learning to tackle this problem effectively. Three innovative features are integrated into our object detection pipeline: (1) a feature pyramid network is leveraged to generate feature maps with rich semantics across multiple object scales; (2) prior geospatial knowledge based on the Hough transform is integrated to enable more accurate localization of potential craters; and (3) a scale-aware classifier is adopted to increase the prediction accuracy for both large and small crater instances. The results show that the proposed strategies bring a significant increase in crater detection performance over the popular Faster R-CNN model. The integration of geospatial domain knowledge into data-driven analytics moves GeoAI research up to the next level, enabling knowledge-driven GeoAI. This research can be applied to a wide variety of object detection and image analysis tasks.
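The Hough-transform prior for circular features can be illustrated with a single-radius circle Hough vote: every edge point votes for all centers that could have produced it, and the accumulator peak marks the circle center. A real crater detector sweeps over a range of radii; this sketch fixes one radius for clarity.

```python
import numpy as np

def hough_circle_centers(edge_points, radius, shape):
    """Accumulate Hough votes for circle centers at a known radius.
    edge_points: iterable of (y, x); shape: accumulator (H, W)."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    for y, x in edge_points:
        # Each edge point lies on a circle of the given radius around its center,
        # so it votes along a circle of candidate centers.
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc

# Synthetic rim: 40 edge points on a circle of radius 5 centered at (20, 20)
ts = np.linspace(0, 2 * np.pi, 40, endpoint=False)
pts = [(20 + 5 * np.sin(t), 20 + 5 * np.cos(t)) for t in ts]
acc = hough_circle_centers(pts, radius=5, shape=(40, 40))
peak = np.unravel_index(np.argmax(acc), acc.shape)
```

In the pipeline above, such accumulator peaks would serve as geospatial priors on where crater centers are likely to be, guiding the learned detector's localization.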


2016 ◽  
Author(s):  
Xiaoyong Pan ◽  
Hong-Bin Shen

Abstract:

Background: RNAs play key roles in cells through their interactions with proteins known as RNA-binding proteins (RBPs), and the binding motifs of RBPs enable crucial understanding of the post-transcriptional regulation of RNAs. How RBPs correctly recognize their target RNAs and why they bind specific positions is still far from clear. Machine learning-based algorithms are widely acknowledged to be capable of speeding up this process. Although many automatic tools have been developed to predict RNA-protein binding sites from the rapidly growing multi-resource data (e.g., sequence and structure), their domain-specific features and formats have posed significant computational challenges. One current difficulty is that the common knowledge shared across sources lies at a higher abstraction level beyond the observed data, resulting in a low efficiency of directly integrating observed data across domains. The other difficulty is how to interpret the prediction results: existing approaches tend to terminate after outputting potential discrete binding sites on the sequences, but how to assemble these into meaningful binding motifs is a topic worthy of further investigation.

Results: In view of these challenges, we propose a deep learning-based framework (iDeep) that uses a novel hybrid of a convolutional neural network and a deep belief network to predict RBP interaction sites and motifs on RNAs. This new protocol is featured by transforming the original observed data into a high-level abstraction feature space using multiple layers of learning blocks, where the shared representations across different domains are integrated.
To validate our iDeep method, we performed experiments on 31 large-scale CLIP-seq datasets. Our results show that by integrating multiple sources of data, the average AUC can be improved by 8% compared with the best single-source-based predictor, and through cross-domain knowledge integration at an abstraction level, iDeep outperforms the state-of-the-art predictors by 6%. Besides the overall enhanced prediction performance, the convolutional neural network module embedded in iDeep is also able to automatically capture interpretable binding motifs for RBPs. Large-scale experiments demonstrate that these mined binding motifs agree well with experimentally verified results, suggesting that iDeep is a promising approach for real-world applications.

Conclusion: The iDeep framework not only achieves better performance than the state-of-the-art predictors, but also easily captures interpretable binding motifs. iDeep is available at http://www.csbio.sjtu.edu.cn/bioinf/iDeep
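The convolutional module's motif capture can be pictured with the standard sequence-CNN setup: one-hot encode the RNA sequence and slide a position weight matrix (one convolutional filter) along it. The helper names below are our own illustration, not iDeep's API.

```python
import numpy as np

BASES = "ACGU"

def one_hot(seq):
    """Encode an RNA sequence as a (4, L) one-hot matrix, the standard
    input representation for a sequence CNN."""
    m = np.zeros((4, len(seq)))
    for i, ch in enumerate(seq):
        m[BASES.index(ch), i] = 1.0
    return m

def motif_scan(onehot, pwm):
    """Slide a (4, k) position weight matrix over the sequence and return
    per-position match scores -- what a single convolutional filter computes."""
    k = pwm.shape[1]
    L = onehot.shape[1]
    return np.array([np.sum(onehot[:, i:i + k] * pwm) for i in range(L - k + 1)])
```

High-scoring positions indicate candidate binding sites; a learned filter's weights, read column by column, are the mined motif.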


Author(s):  
Zongsheng Zheng ◽  
Chenyu Hu ◽  
Zhaorong Liu ◽  
Jianbo Hao ◽  
Qian Hou ◽  
...  

Abstract: The tropical cyclone, also known as a typhoon, is one of the most destructive weather phenomena. Its intense cyclonic eddy circulations often cause serious damage to coastal areas. Accurate classification or prediction of typhoon intensity is crucial to disaster warning and mitigation management. However, extracting typhoon intensity-related features is a challenging task, as it requires significant pre-processing and human intervention for analysis, and recognition rates are poor due to various physical factors such as tropical disturbance. In this study, we built Typhoon-CNNs, an automatic classifier for typhoon intensity based on a convolutional neural network (CNN). The Typhoon-CNNs framework utilized a cyclical convolution strategy supplemented with dropout zero-set, which extracted sensitive features of the existing spiral cloud band (SCB) more effectively and reduced over-fitting. To further optimize the performance of Typhoon-CNNs, we also proposed an improved activation function (T-ReLU) and loss function (CE-FMCE). The improved Typhoon-CNNs was trained and validated on more than 10,000 multi-sensor satellite cloud images from the National Institute of Informatics. The classification accuracy reached 88.74%. Compared with other deep learning methods, the accuracy of our improved Typhoon-CNNs was 7.43% higher than ResNet50, 10.27% higher than InceptionV3, and 14.71% higher than VGG16. Finally, by visualizing the hierarchic feature maps derived from Typhoon-CNNs, we can easily identify sensitive characteristics such as typhoon eyes, dense-shadowing cloud areas, and SCBs, which facilitates classifying and forecasting typhoon intensity.
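One plausible reading of the "cyclical convolution strategy" is a convolution with wrap-around (circular) padding, so that features of a spiral cloud band are not cut off at the image boundary. The sketch below implements circular padding explicitly; the paper's exact formulation may differ, and T-ReLU/CE-FMCE are not reproduced here since their definitions are not given in this abstract.

```python
import numpy as np

def cyclic_conv2d(feat, kernel):
    """2-D cross-correlation with wrap-around (circular) padding.
    feat: (H, W) map; kernel: (kh, kw) with odd sizes. Output keeps (H, W)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(feat, ((ph, ph), (pw, pw)), mode="wrap")  # circular padding
    H, W = feat.shape
    out = np.zeros((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out
```

With `mode="wrap"`, a feature near one border contributes to outputs near the opposite border, so a structure that circles the frame is treated continuously.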


2021 ◽  
Vol 64 (3) ◽  
pp. 919-927
Author(s):  
Dujin Wang ◽  
Yizhong Wang ◽  
Ming Li ◽  
Xinting Yang ◽  
Jianwei Wu ◽  
...  

Highlights:
- The proposed method detected thrips and whitefly more accurately than previous methods.
- The proposed method demonstrated good robustness to illumination reflections and different pest densities.
- Small pest detection was improved by adding large-scale feature maps and more residual units to a shallow network.
- Machine vision and deep learning created an end-to-end model to detect small pests on sticky traps in field conditions.

Abstract: Pest detection is the basis of precise control in vegetable greenhouses. To improve the detection accuracy and robustness for two common small greenhouse pests (whitefly and thrips), this study proposes a novel small object detection approach based on the YOLOv4 model. Yellow sticky trap (YST) images at the original resolution (2560 × 1920 pixels) were collected using pest monitoring equipment in a greenhouse. The images were then cropped and labeled to create sub-images (416 × 416 pixels) to construct an experimental dataset. The labeled images used in this study (900 training, 100 validation, and 200 test images) are available for comparative studies. To enhance the model's ability to detect small pests, the feature map at the 8-fold downsampling layer in the backbone network was merged with the feature map at the 4-fold downsampling layer to generate a new layer and output a feature map with a size of 104 × 104 pixels. Furthermore, the residual units in the first two residual blocks were enlarged by four times to extract more shallow image features and location information of the target pests and to withstand image degradation in the field. The experimental results showed that the mean average precision (mAP) for detecting whitefly and thrips with the proposed approach was improved by 8.2% and 3.4% compared with the YOLOv3 and YOLOv4 models, respectively. The detection performance decreased slightly as pest density increased in the YST images, but the mAP was still 92.7% on the high-density dataset, indicating that the proposed model is robust over a range of pest densities. Compared with previous similar studies, the proposed method has better potential for monitoring whitefly and thrips using YSTs in field conditions.

Keywords: Deep learning, Greenhouse pest management, Image processing, Pest detection, Small object, YOLOv4.
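The feature-map merge described above can be sketched in terms of tensor shapes: upsample the 8-fold-downsampled map (52 × 52 for a 416 × 416 input) by 2× and concatenate it with the 4-fold map (104 × 104) along the channel axis. The channel counts below are illustrative, not taken from the paper.

```python
import numpy as np

def merge_for_small_objects(feat_8x, feat_4x):
    """Nearest-neighbor upsample the deeper (C1, 52, 52) map by 2x and
    concatenate it with the shallower (C2, 104, 104) map channel-wise,
    producing a (C1 + C2, 104, 104) layer for small-object detection."""
    up = feat_8x.repeat(2, axis=1).repeat(2, axis=2)  # 2x nearest-neighbor upsample
    return np.concatenate([up, feat_4x], axis=0)

merged = merge_for_small_objects(np.zeros((256, 52, 52)), np.ones((128, 104, 104)))
```

The resulting 104 × 104 grid has four times as many cells as the standard 52 × 52 head, so each small pest occupies proportionally more prediction cells.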


2021 ◽  
pp. 1-11
Author(s):  
Yaning Liu ◽  
Lin Han ◽  
Hexiang Wang ◽  
Bo Yin

Papillary thyroid carcinoma (PTC) is a common thyroid carcinoma. Since many benign thyroid nodules have a papillary structure, they can easily be confused with PTC morphologically. Pathologists therefore must spend considerable time on the differential diagnosis of PTC, relying on personal diagnostic experience, and such diagnosis is undoubtedly subjective and difficult to keep consistent among observers. To address this issue, we applied deep learning to the differential diagnosis of PTC and propose a histological image classification method for PTC based on an Inception Residual convolutional neural network (IRCNN) and a support vector machine (SVM). First, to expand the dataset and solve the problem of histological image color inconsistency, a pre-processing module was constructed that includes color transfer and mirror transforms. Then, to alleviate overfitting of the deep learning model, we optimized the convolutional neural network by combining the Inception Network and the Residual Network to extract image features. Finally, the SVM was trained on image features extracted by the IRCNN to perform the classification task. Experimental results show the effectiveness of the proposed method in the classification of PTC histological images.
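The color-transfer step can be sketched with a simple Reinhard-style statistic match: shift and scale each channel of a source histology image so its mean and standard deviation match a reference image. This is a stand-in for illustration; the paper's actual color-transfer method may operate in a different color space.

```python
import numpy as np

def color_transfer(src, ref):
    """Match each channel's mean and standard deviation of src to ref.
    src, ref: float arrays of shape (H, W, 3)."""
    out = np.empty_like(src, dtype=float)
    for c in range(3):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std()
        r_mu, r_sd = ref[..., c].mean(), ref[..., c].std()
        # Standardize the source channel, then rescale to the reference stats.
        out[..., c] = (src[..., c] - s_mu) / (s_sd + 1e-8) * r_sd + r_mu
    return out
```

Combined with mirror transforms (e.g. `np.flip` along the width axis), this both normalizes stain appearance and augments the training set.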


Author(s):  
Qiang Yu ◽  
Feiqiang Liu ◽  
Long Xiao ◽  
Zitao Liu ◽  
Xiaomin Yang

Deep-learning (DL)-based methods are of growing importance in the field of single image super-resolution (SISR). However, the practical application of these DL-based models remains a problem due to their heavy computation and storage requirements. The powerful feature maps of the hidden layers in convolutional neural networks (CNNs) help the model learn useful information, but there is redundancy among feature maps that can be further exploited. To address these issues, this paper proposes a lightweight efficient feature generating network (EFGN) for SISR, built from efficient feature generating blocks (EFGBs). Specifically, the EFGB applies plain operations to the original features to produce more feature maps with only a slight increase in parameters. With the help of these extra feature maps, the network can extract more useful information from low resolution (LR) images to reconstruct the desired high resolution (HR) images. Experiments conducted on the benchmark datasets demonstrate that the proposed EFGN outperforms other deep-learning based methods in most cases while possessing relatively low model complexity. Additionally, running time measurements indicate the feasibility of real-time monitoring.
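The "plain operations" idea of exploiting feature redundancy can be sketched as deriving extra maps from existing ones with nearly-free transforms. The specific operations below (a spatial shift and a sign flip) are our own hypothetical examples; in the EFGB the cheap operations are part of the learned network.

```python
import numpy as np

def generate_cheap_features(feat):
    """Expand a (C, H, W) feature tensor to (3C, H, W) by appending two
    parameter-free variants of the existing maps, mimicking cheap feature
    generation from redundant features."""
    shifted = np.roll(feat, shift=1, axis=2)  # cheap horizontal shift
    flipped = -feat                           # cheap sign flip
    return np.concatenate([feat, shifted, flipped], axis=0)
```

Because the extra maps cost no parameters here (and only a small learned operator in the real block), the network gains representational width without a matching growth in model size.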

