Assessing the Impact of the Loss Function, Architecture and Image Type for Deep Learning-Based Wildfire Segmentation

2021 · Vol 11 (15) · pp. 7046
Author(s): Jorge Francisco Ciprián-Sánchez, Gilberto Ochoa-Ruiz, Lucile Rossi, Frédéric Morandini

Wildfires are among the most destructive natural disasters worldwide, all the more so given the effects of climate change at various societal and environmental levels. A significant amount of multi-disciplinary research, deploying a wide variety of technologies, has been devoted to this issue. Computer vision in particular has played a fundamental role: it can extract and combine information from several imaging modalities for fire detection, characterization, and wildfire spread forecasting. In recent years, Deep Learning (DL)-based fire segmentation has shown very promising results. However, it remains unclear whether the architecture of a model, its loss function, or the image type employed (visible, infrared, or fused) has the greatest impact on fire segmentation results. In the present work, we evaluate different combinations of state-of-the-art (SOTA) DL architectures, loss functions, and image types to identify the parameters most relevant to improving segmentation results. We benchmark them to identify the top-performing combinations and compare them to traditional fire segmentation techniques. Finally, we evaluate whether adding attention modules to the best-performing architecture can further improve segmentation. To the best of our knowledge, this is the first work to evaluate the impact of the architecture, loss function, and image type on the performance of DL-based wildfire segmentation models.
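The abstract does not name the loss functions compared; as a hedged illustration, a soft Dice loss, one of the usual candidates in fire segmentation benchmarks, can be sketched in PyTorch as follows (function name and shapes are assumptions):

```python
import torch

def soft_dice_loss(logits: torch.Tensor, masks: torch.Tensor,
                   eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice loss for binary fire segmentation (illustrative sketch).

    logits: raw network outputs, shape (N, 1, H, W)
    masks:  binary ground-truth fire masks, same shape
    """
    probs = torch.sigmoid(logits)
    intersection = (probs * masks).sum(dim=(1, 2, 3))
    denom = probs.sum(dim=(1, 2, 3)) + masks.sum(dim=(1, 2, 3))
    dice = (2.0 * intersection + eps) / (denom + eps)
    return 1.0 - dice.mean()
```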

2021 · Vol 13 (9) · pp. 1779
Author(s): Xiaoyan Yin, Zhiqun Hu, Jiafeng Zheng, Boyong Li, Yuanyuan Zuo

Radar beam blockage is an important error source affecting the quality of weather radar data. An echo-filling network (EFnet) based on a deep learning algorithm is proposed to correct the echo intensity in blocked areas of the Nanjing S-band new-generation weather radar (CINRAD/SA). The training dataset is built from labels, namely the echo intensity at the 0.5° elevation in the unblocked area, and from input features, namely the intensities in a cube spanning multiple elevations and gates above the location of each bottom label. Two loss functions are used to compile the network: the common mean square error (MSE) and a self-defined loss function that increases the weight of strong echoes. Because the radar beam broadens with distance and height, the 0.5° elevation scan is divided into six 25 km range bands, and a separate model is trained for each. The models are evaluated by three indicators: explained variance (EVar), mean absolute error (MAE), and correlation coefficient (CC). Two cases are presented to compare the echo-filling models trained with the different loss functions. The results suggest that EFnet can effectively correct echo reflectivity and improve data quality in blocked areas, with better results for strong echoes when the self-defined loss function is used.
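The self-defined loss is not spelled out in the abstract; a minimal Keras-compatible sketch of the idea, assuming a weight that grows with the true reflectivity (the +1 per 10 dBZ scheme below is an assumption, not the paper's formula), might read:

```python
import tensorflow as tf

def strong_echo_weighted_mse(y_true, y_pred):
    """Sketch of a weighted MSE: instances with stronger true reflectivity
    (in dBZ) receive proportionally larger weights, so errors on strong
    echoes dominate the loss. The weighting scheme is assumed."""
    weight = 1.0 + tf.nn.relu(y_true) / 10.0  # e.g. +1 per 10 dBZ above zero
    return tf.reduce_mean(weight * tf.square(y_true - y_pred))

# model.compile(optimizer="adam", loss=strong_echo_weighted_mse)
```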


2021 · pp. 1-29
Author(s): Yanhong Chen

ABSTRACT: In this paper, we study the optimal reinsurance contracts that minimize a convex combination of the Conditional Value-at-Risk (CVaR) of the insurer's loss and of the reinsurer's loss over the class of ceded loss functions such that the retained loss function is increasing and the ceded loss function satisfies the Vajda condition. The optimal solutions are obtained for a general class of reinsurance premium principles that satisfy the properties of risk loading and convex-order preservation. Our results show that the optimal ceded loss functions take the form of five interconnected segments for general reinsurance premium principles, which simplifies to four interconnected segments when further properties are imposed on the premium principle. Finally, we derive the optimal parameters for the expected value premium principle and present a numerical study analyzing the impact of the weighting factor on the optimal reinsurance.
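For reference, a standard way to write the objective described here uses the Rockafellar-Uryasev representation of CVaR; the notation below (ceded loss function f, premium π, weighting factor λ) is an assumption based on the usual formulation, not the paper's own:

```latex
\mathrm{CVaR}_{\alpha}(X) = \min_{t \in \mathbb{R}}
  \Big\{\, t + \tfrac{1}{1-\alpha}\, \mathbb{E}\big[(X - t)_{+}\big] \Big\},
\qquad
\min_{f}\; \lambda\, \mathrm{CVaR}_{\alpha}\!\big(X - f(X) + \pi(f(X))\big)
  + (1-\lambda)\, \mathrm{CVaR}_{\beta}\!\big(f(X) - \pi(f(X))\big).
```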


Diagnostics · 2021 · Vol 11 (6) · pp. 967
Author(s): Amirreza Mahbod, Gerald Schaefer, Christine Löw, Georg Dorffner, Rupert Ecker, ...

Nuclei instance segmentation is a key step in the computer-mediated analysis of histological fluorescence-stained (FS) images. Many computer-assisted approaches have been proposed for this task, and among them, supervised deep learning (DL) methods deliver the best performance. An important factor that can affect DL-based nuclei instance segmentation performance on FS images is the image bit depth used, but to our knowledge, no study has so far investigated this impact. In this work, we release a fully annotated FS histological image dataset of nuclei at different image magnifications and from five different mouse organs. Moreover, using different pre-processing techniques and one of the state-of-the-art DL-based methods, we investigate the impact of image bit depth (i.e., 8-bit vs. 16-bit) on nuclei instance segmentation performance. The results obtained on our dataset and on another publicly available dataset show very competitive performance for models trained with 8-bit and 16-bit images, suggesting that processing 8-bit images is sufficient for nuclei instance segmentation of FS images in most cases. The dataset, including the raw image patches as well as the corresponding segmentation masks, is publicly available in the published GitHub repository.
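The abstract does not detail the conversion step; one plausible pre-processing choice for the 16-bit-to-8-bit comparison is a per-image min-max rescaling, sketched below (the function name and the normalisation choice are assumptions):

```python
import numpy as np

def to_8bit(img16: np.ndarray) -> np.ndarray:
    """Min-max rescale a 16-bit fluorescence image to 8 bits.

    One plausible conversion; the paper compares several pre-processing
    techniques that are not detailed in the abstract.
    """
    img = img16.astype(np.float32)
    lo, hi = img.min(), img.max()
    scaled = (img - lo) / max(hi - lo, 1e-8) * 255.0
    return scaled.astype(np.uint8)
```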


Entropy · 2022 · Vol 24 (1) · pp. 128
Author(s): Zhenwei Guan, Feng Min, Wei He, Wenhua Fang, Tao Lu

Forest fire detection from videos or images is vital to forest firefighting. Most deep learning-based approaches rely on converging an image loss that ignores the differing content of fire scenes; in fact, images with more complex content have higher entropy. From this perspective, we propose a novel feature-entropy-guided neural network for forest fire detection that balances the content complexity of different training samples: when calculating the classification loss, a larger weight is given to the features of samples with higher-entropy content. In addition, we propose a color attention neural network, which mainly consists of several repeated multi-blocks of color-attention modules (MCM); each MCM adequately extracts the color features of fire. Experimental results show that our proposed method outperforms state-of-the-art methods.
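A hedged sketch of the entropy-guided weighting, assuming the weight is derived from the Shannon entropy of each image's grey-level histogram (the paper's exact weighting is not given in the abstract):

```python
import torch
import torch.nn.functional as F

def entropy_weighted_ce(logits, labels, images):
    """Per-sample cross-entropy scaled by the Shannon entropy of each
    input image's intensity histogram, so content-rich scenes get larger
    weights. Illustrative sketch only; images assumed in [0, 1]."""
    ce = F.cross_entropy(logits, labels, reduction="none")   # (N,)
    weights = []
    for img in images:                                       # img: (C, H, W)
        hist = torch.histc(img, bins=256, min=0.0, max=1.0)
        p = hist / hist.sum()
        p = p[p > 0]
        weights.append(-(p * torch.log2(p)).sum())           # Shannon entropy
    w = torch.stack(weights)
    return (w / w.mean() * ce).mean()
```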


Author(s): Andrew Cropper, Sebastijan Dumančić

A major challenge in inductive logic programming (ILP) is learning large programs. We argue that a key limitation of existing systems is that they use entailment to guide the hypothesis search. This approach is limited because entailment is a binary decision: a hypothesis either entails an example or it does not, with no intermediate position. To address this limitation, we go beyond entailment and use 'example-dependent' loss functions to guide the search, so that a hypothesis can partially cover an example. We implement our idea in Brute, a new ILP system that uses best-first search, guided by an example-dependent loss function, to incrementally build programs. Our experiments on three diverse program synthesis domains (robot planning, string transformations, and ASCII art) show that Brute can substantially outperform existing ILP systems, both in predictive accuracy and in learning time, and can learn programs 20 times larger than state-of-the-art systems can.
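To make "partial coverage" concrete for the string-transformation domain, one could score a hypothesis by the distance between its output and the expected output; the difflib-based distance below is an illustrative stand-in, not the loss Brute actually uses:

```python
from difflib import SequenceMatcher

def example_loss(predicted: str, expected: str) -> float:
    """Example-dependent loss: instead of the binary entails / does-not-entail
    signal, score a hypothesis by how close its output is to the target."""
    return 1.0 - SequenceMatcher(None, predicted, expected).ratio()

# A hypothesis producing "hallo" for target "hello" gets a small non-zero
# loss rather than a flat failure, so best-first search can prefer it over
# a hypothesis producing "xyz".
```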


Mathematics · 2020 · Vol 8 (8) · pp. 1316
Author(s): Luisa F. Sánchez-Peralta, Artzai Picón, Juan Antonio Antequera-Barroso, Juan Francisco Ortega-Morán, Francisco M. Sánchez-Margallo, ...

Colorectal cancer is one of the leading causes of cancer death worldwide, but early diagnosis greatly improves survival rates. The success of deep learning has also benefited this clinical field. When training a deep learning model, it is optimized according to the selected loss function. In this work, we consider two networks (U-Net and LinkNet) and two backbones (VGG-16 and DenseNet121). We analyze the influence of seven loss functions and use principal component analysis (PCA) to determine whether the PCA-based decomposition allows defining the coefficients of a non-redundant primal loss function that can outperform both the individual loss functions and various linear combinations of them. The eigenloss is defined as a linear combination of the individual losses, using the elements of the eigenvector as coefficients. Empirical results show that the proposed eigenloss improves on the general performance of the individual loss functions and outperforms other linear combinations when LinkNet is used, showing potential for application to polyp segmentation problems.
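A minimal sketch of the eigenloss idea, assuming the coefficients come from the loadings of the first principal component of the per-sample loss values (sign handling and normalisation below are assumptions):

```python
import numpy as np

def eigenloss_coefficients(loss_matrix: np.ndarray) -> np.ndarray:
    """Given a matrix of per-sample values of the individual losses
    (rows = samples, columns = the seven losses), return normalised
    weights taken from the leading eigenvector of the loss covariance."""
    centered = loss_matrix - loss_matrix.mean(axis=0)
    cov = np.cov(centered, rowvar=False)          # covariance of the losses
    eigvals, eigvecs = np.linalg.eigh(cov)
    leading = eigvecs[:, np.argmax(eigvals)]      # first principal component
    return np.abs(leading) / np.abs(leading).sum()

# eigenloss(y, y_hat) = sum_i coeff[i] * loss_i(y, y_hat)
```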


Sensors · 2020 · Vol 20 (3) · pp. 723
Author(s): Divish Rengasamy, Mina Jafari, Benjamin Rothwell, Xin Chen, Grazziela P. Figueredo

Deep learning has been applied to the prognostics and health management of automotive and aerospace systems with promising results. The literature in this area reveals that most contributions focus on the model's architecture, while contributions improving other aspects of deep learning, such as custom loss functions for prognostics and health management, are scarce. There is therefore an opportunity to improve the effectiveness of deep learning for system prognostics and diagnostics without modifying the models' architecture. To address this gap, we investigate two different dynamically weighted loss functions, a newly proposed weighting mechanism and a focal loss function, for prognostics and diagnostics tasks. A dynamically weighted loss function modifies the learning process by augmenting the loss with a weight corresponding to the learning error of each data instance; the objective is to force the model to focus on the instances with larger learning errors in order to improve performance. The two loss functions are evaluated using four popular deep learning architectures, namely a deep feedforward neural network, a one-dimensional convolutional neural network, a bidirectional gated recurrent unit, and a bidirectional long short-term memory network, on the commercial modular aero-propulsion system simulation data from NASA and the air pressure system failure data for Scania trucks. Experimental results show that dynamically weighted loss functions achieve significant improvements in remaining useful life prediction and fault detection rate over non-weighted loss functions.
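A focal-style regression sketch of dynamic weighting, assuming each instance's weight grows with its own normalised error (this is an illustration of the general mechanism, not the paper's exact formulation):

```python
import torch

def dynamically_weighted_mse(pred: torch.Tensor, target: torch.Tensor,
                             gamma: float = 2.0) -> torch.Tensor:
    """Each instance's squared error is scaled by a weight that grows with
    that instance's current error, so training focuses on the instances
    the model predicts worst. Weights are detached so gradients flow only
    through the error term."""
    err = (pred - target) ** 2
    weight = (err / (err.max() + 1e-8)) ** gamma
    return (weight.detach() * err).mean()
```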


Author(s): Ratnesh Kumar, Edwin Weill, Farzin Aghdasi, Parthasarathy Sriram

Abstract: In this paper we tackle the problem of vehicle re-identification in a camera network utilizing triplet embeddings. Re-identification is the problem of matching appearances of objects across different cameras. With the proliferation of surveillance cameras enabling smart and safer cities, there is an ever-increasing need to re-identify vehicles across cameras. Typical challenges arising in smart-city scenarios include variations in viewpoint, illumination, and self-occlusion. Most successful approaches to re-identification involve (deep) learning an embedding space such that vehicles of the same identity are projected closer to one another than vehicles of different identities. Popular loss functions for learning such an embedding include the contrastive and triplet losses. In this paper we provide an extensive evaluation of the triplet loss applied to vehicle re-identification and demonstrate that recently proposed sampling approaches for mining informative data points outperform most existing state-of-the-art approaches for vehicle re-identification. Compared to most existing approaches, ours is simpler and more straightforward to train, utilizing only identity-level annotations, and uses one of the smallest published embedding dimensions for efficient inference. Furthermore, we introduce a formal evaluation of a triplet sampling variant (batch sample) into the re-identification literature. In addition to the conference version [24], this submission adds extensive experiments on newly released datasets, cross-domain evaluations, and ablation studies.
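For context, the batch-hard triplet loss (Hermans et al.) is sketched below as one representative of the sampling-based triplet objectives evaluated here; the batch-sample variant replaces the hard max/min with a softmax-weighted draw and is omitted:

```python
import torch

def batch_hard_triplet_loss(emb: torch.Tensor, ids: torch.Tensor,
                            margin: float = 0.3) -> torch.Tensor:
    """Batch-hard triplet loss sketch.
    emb: embeddings, shape (N, D); ids: identity labels, shape (N,)."""
    dist = torch.cdist(emb, emb)                    # pairwise distances
    same = ids.unsqueeze(0) == ids.unsqueeze(1)     # (N, N) identity mask
    eye = torch.eye(len(ids), dtype=torch.bool, device=emb.device)
    pos = (dist * (same & ~eye)).max(dim=1).values  # hardest positive per anchor
    neg = dist.masked_fill(same, float("inf")).min(dim=1).values  # hardest negative
    return torch.relu(pos - neg + margin).mean()
```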


2020 · Vol 163 · pp. 01001
Author(s): Georgy Ayzel, Liubov Kurochkina, Eduard Kazakov, Sergei Zhuravlev

Streamflow prediction is a vital public service that helps establish flash-flood early warning systems and assess the impact of projected climate change on water management. However, the availability of streamflow observations limits state-of-the-art streamflow prediction techniques to basins where hydrometric gauging stations exist. Since most river basins in the world are ungauged, developing specialized techniques for reliable streamflow prediction in ungauged basins (PUB) is of crucial importance. In recent years, the emerging field of deep learning has provided a myriad of new models that can breathe new life into stagnating PUB methods. In this study, we benchmark the streamflow prediction efficiency of Long Short-Term Memory (LSTM) networks against the standard technique of GR4J hydrological model parameter regionalization (HMREG) at 200 basins in Northwest Russia. Results show that the LSTM-based regional hydrological model significantly outperforms the HMREG scheme in terms of median Nash-Sutcliffe efficiency (NSE): 0.73 versus 0.61. Moreover, the LSTM's median NSE is comparable to that of basin-scale calibration of GR4J (0.75). This study therefore underlines the high potential of deep learning for PUB by demonstrating new state-of-the-art performance in this field.
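The NSE used for this comparison is a standard metric: 1 is a perfect fit, and 0 means the model is no better than always predicting the mean observed streamflow. It can be computed as:

```python
import numpy as np

def nse(obs: np.ndarray, sim: np.ndarray) -> float:
    """Nash-Sutcliffe efficiency of simulated vs. observed streamflow."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```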


2021
Author(s): Sayan Nag

Self-supervised learning and pre-training strategies have developed rapidly over the last few years, especially for Convolutional Neural Networks (CNNs), and such methods have recently been applied to Graph Neural Networks (GNNs) as well. In this paper, we use a graph-based self-supervised learning strategy with different loss functions (Barlow Twins, HSIC, VICReg) that have previously shown promising results when applied with CNNs. We also propose a hybrid loss function, VICRegHSIC, that combines the advantages of VICReg and HSIC. The performance of these methods is compared on two datasets, MUTAG and PROTEINS. Moreover, the impact of different batch sizes, projector dimensions, and data augmentation strategies is also explored. The results are preliminary, and we will continue exploring other datasets.
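As a reference point for one of the three objectives, the Barlow Twins loss pushes the cross-correlation matrix of two embedding views towards the identity; a standard sketch (variable names assumed, inputs being the projector outputs for two augmentations of the same graphs):

```python
import torch

def barlow_twins_loss(z1: torch.Tensor, z2: torch.Tensor,
                      lam: float = 5e-3) -> torch.Tensor:
    """Barlow Twins objective. z1, z2: projector outputs, shape (N, D)."""
    n, _ = z1.shape
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-8)   # standardise per dimension
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-8)
    c = (z1.T @ z2) / n                            # (D, D) cross-correlation
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lam * off_diag
```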

