Industrial Smoke Target Segmentation Based on Fully Convolutional Networks with Multiscale Convolution and Dynamic Weight Loss Function

2020 ◽  
Vol 32 (12) ◽  
pp. 1898-1909
Author(s):  
Hui Liu ◽  
Junpeng Zhang ◽  
Qingrong Li
Author(s):  
Q. Zhang ◽  
Y. Zhang ◽  
P. Yang ◽  
Y. Meng ◽  
S. Zhuo ◽  
...  

Abstract. Extracting land cover information from satellite imagery is of great importance for automated monitoring in various remote sensing applications. Deep convolutional neural networks make this task more feasible, but they are limited by small datasets of annotated images. In this paper, we present a fully convolutional network architecture, FPN-VGG, that combines Feature Pyramid Networks and VGG. To accomplish the task of land cover classification, we create a land cover dataset of pixel-wise annotated images and employ a transfer learning step and a variant dice loss function to improve the performance of FPN-VGG. The results indicate that FPN-VGG is more competent for land cover classification than other state-of-the-art fully convolutional networks. Transfer learning and the dice loss function are beneficial for improving performance on the small and unbalanced dataset. Our best model on the dataset achieves an overall accuracy of 82.9%, an average F1 score of 66.0% and an average IoU of 52.7%.
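The abstract does not spell out the variant dice loss, so the sketch below only illustrates the standard soft-dice formulation such variants build on; the smoothing term and the per-class averaging are common conventions assumed here, not necessarily the paper's exact definition:

```python
# Minimal soft-dice sketch in plain Python (illustrative, not the
# paper's variant): dice = 2|A∩B| / (|A| + |B|), smoothed to avoid
# division by zero, and the loss is 1 - dice.

def soft_dice_loss(pred, target, smooth=1.0):
    """pred: flat list of per-pixel probabilities for one class;
    target: flat list of {0, 1} labels. Returns 1 - dice."""
    intersection = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    dice = (2.0 * intersection + smooth) / (denom + smooth)
    return 1.0 - dice

def multiclass_dice_loss(preds, targets, smooth=1.0):
    """Average the per-class dice losses so rare classes in an
    unbalanced dataset contribute as much as common ones."""
    losses = [soft_dice_loss(p, t, smooth) for p, t in zip(preds, targets)]
    return sum(losses) / len(losses)

print(soft_dice_loss([1.0, 0.0, 1.0], [1, 0, 1]))  # → 0.0
```

Because each class's overlap is normalized by its own size before averaging, dice-style losses tend to handle the class imbalance the abstract mentions better than plain pixel-wise cross-entropy.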


2021 ◽  
Vol 20 (1) ◽  
Author(s):  
Xue Bai ◽  
Jie Zhang ◽  
Binbing Wang ◽  
Shengye Wang ◽  
Yida Xiang ◽  
...  

Abstract Background Neural-network methods have been widely used for the prediction of dose distributions in radiotherapy. However, the prediction accuracy of existing methods may be degraded by the problem of dose imbalance. In this work, a new loss function is proposed to alleviate the dose imbalance and achieve more accurate prediction results. The U-Net architecture was employed to build a prediction model. Our study involved a total of 110 patients with left-breast cancer, who had previously been treated with volumetric-modulated arc radiotherapy. The patient dataset was divided into training and test subsets of 100 and 10 cases, respectively. We proposed a novel ‘sharp loss’ function, with a parameter γ used to adjust the loss properties. The mean square error (MSE) loss and the sharp loss with different γ values were tested and compared using the Wilcoxon signed-rank test. Results The sharp loss achieved superior dose prediction results compared with the MSE loss, with the best performance obtained when the parameter γ was set to 100. Specifically, the mean absolute difference values for the planning target volume were 318.87 ± 30.23 for the MSE loss versus 144.15 ± 16.27 for the sharp loss with γ = 100 (p < 0.05). The corresponding values for the ipsilateral lung, the heart, the contralateral lung, and the spinal cord were 278.99 ± 51.68 versus 198.75 ± 61.38 (p < 0.05), 216.99 ± 44.13 versus 144.86 ± 43.98 (p < 0.05), 125.96 ± 66.76 versus 111.86 ± 47.19 (p > 0.05), and 194.30 ± 14.51 versus 168.58 ± 25.97 (p < 0.05), respectively. Conclusions The sharp loss function could significantly improve the accuracy of radiotherapy dose prediction.
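The abstract does not give the sharp loss formula, so the following is only a hedged illustration of the underlying idea of countering dose imbalance with a γ-controlled weighting; the name `weighted_dose_loss` and its weighting scheme are assumptions for the sketch, not the paper's definition:

```python
# Illustrative only: one way a parameter gamma could counter dose
# imbalance is by up-weighting the squared error on high-dose voxels,
# so errors inside the planning target volume dominate the loss.

def weighted_dose_loss(pred, target, gamma=100.0, d_max=1.0):
    """Mean squared error with each voxel weighted by
    1 + gamma * (target_dose / d_max). With gamma = 0 this
    reduces to plain MSE."""
    total = 0.0
    for p, t in zip(pred, target):
        weight = 1.0 + gamma * (t / d_max)
        total += weight * (p - t) ** 2
    return total / len(pred)
```

A larger γ shifts the optimizer's attention toward the sparse high-dose regions, which is the imbalance problem the abstract describes; the paper's actual loss may shape this trade-off differently.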


IEEE Access ◽  
2020 ◽  
pp. 1-1
Author(s):  
Jeremy M. Webb ◽  
Duane D. Meixner ◽  
Shaheeda A. Adusei ◽  
Eric C. Polley ◽  
Mostafa Fatemi ◽  
...  

Algorithms ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 144
Author(s):  
Yuexing Han ◽  
Xiaolong Li ◽  
Bing Wang ◽  
Lu Wang

Image segmentation plays an important role in the field of image processing, helping to understand images and recognize objects. However, most existing methods are often unable to effectively explore the spatial information in 3D image segmentation, and they neglect information from the contours and boundaries of the observed objects. In addition, shape boundaries can help to locate the positions of the observed objects, but most existing loss functions neglect boundary information. To overcome these shortcomings, this paper presents a new cascaded 2.5D fully convolutional networks (FCNs) learning framework for segmenting 3D medical images. A new boundary loss that incorporates distance, area, and boundary information is also proposed so that the cascaded FCNs can learn more boundary and contour features from the 3D medical images. Moreover, an effective post-processing method is developed to further improve the segmentation accuracy. We verified the proposed method on the LITS and 3DIRCADb datasets, which include the liver and tumors. The experimental results show that the proposed method outperforms existing methods, with a Dice per case score of 74.5% for tumor segmentation, indicating its effectiveness.
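As a hedged sketch of what a combined area-plus-boundary loss can look like (the paper's exact weighting, distance transform, and 2.5D handling are not given here; `alpha`, `beta`, and the simple 4-neighbour contour extraction are illustrative assumptions):

```python
# Combined loss sketch: an area term (dice) plus a boundary term
# (symmetric mean nearest-neighbour distance between contour pixels
# of the predicted and ground-truth binary masks).

def boundary_pixels(mask):
    """Coordinates of foreground pixels with at least one background
    (or out-of-image) 4-neighbour: a simple contour extraction."""
    h, w = len(mask), len(mask[0])
    pts = []
    for i in range(h):
        for j in range(w):
            if mask[i][j] != 1:
                continue
            nbrs = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
            if any(not (0 <= a < h and 0 <= b < w) or mask[a][b] == 0
                   for a, b in nbrs):
                pts.append((i, j))
    return pts

def mean_boundary_distance(pred, target):
    """Symmetric mean distance between the two contours."""
    bp, bt = boundary_pixels(pred), boundary_pixels(target)
    if not bp or not bt:
        return 0.0
    def one_way(src, dst):
        return sum(min(((x - u) ** 2 + (y - v) ** 2) ** 0.5 for u, v in dst)
                   for x, y in src) / len(src)
    return 0.5 * (one_way(bp, bt) + one_way(bt, bp))

def dice_loss(pred, target):
    inter = sum(p * t for rp, rt in zip(pred, target)
                for p, t in zip(rp, rt))
    total = sum(map(sum, pred)) + sum(map(sum, target))
    return 1.0 - 2.0 * inter / total if total else 0.0

def combined_loss(pred, target, alpha=1.0, beta=0.1):
    """Area term plus weighted boundary term; alpha and beta are
    illustrative, not the paper's weights."""
    return (alpha * dice_loss(pred, target)
            + beta * mean_boundary_distance(pred, target))
```

The boundary term penalizes contours that are displaced even when the area overlap is already high, which is the extra supervision signal the abstract argues plain overlap losses lack.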


2021 ◽  
Vol 10 (8) ◽  
pp. 523
Author(s):  
Nicholus Mboga ◽  
Stefano D’Aronco ◽  
Tais Grippa ◽  
Charlotte Pelletier ◽  
Stefanos Georganos ◽  
...  

Multitemporal environmental and urban studies are essential to guide policy making and ultimately improve human wellbeing in the Global South. Land-cover products derived from historical aerial orthomosaics acquired decades ago can provide important evidence to inform long-term studies. To reduce the manual labelling effort by human experts and to scale to large, meaningful regions, we investigate in this study how domain adaptation techniques and deep learning can help to efficiently map land cover in Central Africa. We propose and evaluate a methodology based on unsupervised adaptation to reduce the cost of generating reference data for several cities and across different dates. We present the first application of domain adaptation based on fully convolutional networks to the semantic segmentation of a dataset of historical panchromatic orthomosaics, generating land cover for two focus cities, Goma-Gisenyi and Bukavu. Our experimental evaluation shows that the domain adaptation methods can reach an overall accuracy between 60% and 70% across different regions. Adding a small amount of labelled data from the target domain yields further performance gains.
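The semi-supervised variant mentioned at the end of the abstract (a little labelled target-domain data on top of the source data) can be sketched as a mixed objective; the mixing weight `lam` is an assumption for illustration, and the unsupervised adaptation itself is not reproduced here:

```python
# Hedged sketch: training loss that mixes the large labelled source
# domain with a small amount of labelled target-domain data.

def mixed_domain_loss(source_losses, target_losses, lam=1.0):
    """Average per-sample source loss plus lam times the average loss
    on the few labelled target-domain samples (if any)."""
    src = sum(source_losses) / len(source_losses)
    tgt = (sum(target_losses) / len(target_losses)
           if target_losses else 0.0)
    return src + lam * tgt
```

Even a handful of target-domain samples gives the optimizer a direct signal about the target distribution, which is consistent with the performance gains the abstract reports.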


2021 ◽  
Vol 13 (14) ◽  
pp. 2794
Author(s):  
Shuhao Ran ◽  
Xianjun Gao ◽  
Yuanwei Yang ◽  
Shaohua Li ◽  
Guangbin Zhang ◽  
...  

Deep learning approaches have been widely used in automatic building extraction tasks and have made great progress in recent years. However, missed and false detections caused by spectral confusion remain a great challenge. Existing fully convolutional networks (FCNs) cannot effectively distinguish whether feature differences arise within a single building or between a building and its adjacent non-building objects. To overcome these limitations, a building multi-feature fusion refined network (BMFR-Net) is presented in this paper to extract buildings accurately and completely. BMFR-Net is based on an encoding and decoding structure and mainly consists of two parts: the continuous atrous convolution pyramid (CACP) module and the multiscale output fusion constraint (MOFC) structure. The CACP module is positioned at the end of the contracting path, where it minimizes the loss of effective information in multiscale feature extraction and fusion by using parallel continuous small-scale atrous convolutions. To improve the ability to aggregate semantic information from the context, the MOFC structure produces a predictive output at each stage of the expanding path and integrates the results into the network. Furthermore, a multilevel joint weighted loss function effectively updates parameters in layers far from the output layer, enhancing the network's capacity to learn low-level abstract features. The experimental results demonstrate that the proposed BMFR-Net outperforms five other state-of-the-art approaches in both visual interpretation and quantitative evaluation.
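A multilevel joint weighted loss of the kind the abstract describes is usually a deep-supervision scheme: each decoder stage emits a prediction, each prediction is scored against the label, and the per-stage losses are combined with fixed weights. The stage weights below are illustrative assumptions, not the paper's values:

```python
import math

# Hedged deep-supervision sketch: weighted sum of per-stage losses,
# so gradients reach layers far from the final output.

def bce(pred, target, eps=1e-7):
    """Mean binary cross-entropy over flat probability lists."""
    return -sum(t * math.log(max(p, eps))
                + (1 - t) * math.log(max(1.0 - p, eps))
                for p, t in zip(pred, target)) / len(pred)

def multilevel_loss(stage_preds, target, weights=(0.2, 0.3, 0.5)):
    """Weighted sum of per-stage losses; later (finer) stages get
    larger weights so the final prediction dominates while earlier
    stages still receive a direct supervision signal."""
    assert len(stage_preds) == len(weights)
    return sum(w * bce(p, target) for w, p in zip(weights, stage_preds))
```

Supervising every stage is what lets the loss "update parameters far from the output layer": each intermediate output contributes its own gradient path instead of relying on backpropagation through the whole decoder.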

