Convolutional Neural Network for Pansharpening with Spatial Structure Enhancement Operator

2021 ◽  
Vol 13 (20) ◽  
pp. 4062
Author(s):  
Weiwei Huang ◽  
Yan Zhang ◽  
Jianwei Zhang ◽  
Yuhui Zheng

Pansharpening aims to fuse the abundant spectral information of multispectral (MS) images with the spatial details of panchromatic (PAN) images, yielding a high-spatial-resolution MS (HRMS) image. Traditional methods focus only on linear models, ignoring the fact that the degradation process is a nonlinear inverse problem. Because convolutional neural networks (CNNs) are effective at overcoming the shortcomings of traditional linear models, they have been adapted for pansharpening in the past few years. However, most existing CNN-based methods cannot take full advantage of the structural information of images. To address this problem, a new pansharpening method combining a spatial structure enhancement operator with a CNN architecture is proposed in this study. The proposed method uses the Sobel operator as an edge-detection operator to extract abundant high-frequency information from the input PAN and MS images, hence obtaining rich spatial features. Moreover, we utilize the CNN to acquire the spatial feature maps, preserving the information in both the spatial and spectral domains. Simulated and real-data experiments demonstrated that our method performs excellently in both quantitative and visual evaluation.
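A minimal sketch of the spatial-structure extraction step described above: the Sobel operator acting as a high-pass filter that pulls edge (high-frequency) detail out of a single-band image. Function names are illustrative, not taken from the paper's code, and the CNN stage is not reproduced.

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def conv2d_valid(img, kernel):
    """2-D 'valid'-mode correlation for small kernels on a list-of-lists image."""
    h, w = len(img), len(img[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            s = sum(img[i + u][j + v] * kernel[u][v]
                    for u in range(kh) for v in range(kw))
            row.append(s)
        out.append(row)
    return out

def sobel_magnitude(img):
    """Gradient-magnitude map: the high-frequency structure fed to the CNN."""
    gx = conv2d_valid(img, SOBEL_X)
    gy = conv2d_valid(img, SOBEL_Y)
    return [[(gx[i][j] ** 2 + gy[i][j] ** 2) ** 0.5
             for j in range(len(gx[0]))] for i in range(len(gx))]

# A flat region gives zero response; a vertical edge gives a strong one.
flat = [[5] * 5 for _ in range(5)]
edge = [[0, 0, 0, 9, 9] for _ in range(5)]
m_flat = sobel_magnitude(flat)
m_edge = sobel_magnitude(edge)
```

The edge map, not the raw intensities, is what carries the "abundant high-frequency information" the method injects into the fusion network.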

2020 ◽  
Vol 34 (07) ◽  
pp. 10567-10574
Author(s):  
Qingchao Chen ◽  
Yang Liu

Unsupervised domain adaptation (UDA) aims to learn and transfer generalized features from a labelled source domain to a target domain without any annotations. Existing methods align only high-level representations, without exploiting the complex multi-class structure and local spatial structure. This is problematic because 1) the model is prone to negative transfer when features from different classes are misaligned, and 2) missing the local spatial structure poses a major obstacle to fine-grained feature alignment. In this paper, we integrate the valuable information conveyed in the classifier prediction and local feature maps into the global feature representation and then perform a single mini-max game to make it domain invariant. In this way, the domain-invariant feature not only describes the holistic representation of the original image but also preserves the mode structure and fine-grained spatial structural information. The feature integration is achieved by estimating and maximizing the mutual information (MI) among the global feature, local feature, and classifier prediction simultaneously. As MI is hard to measure directly in high-dimensional spaces, we adopt a new objective function that implicitly maximizes the MI via an effective sampling strategy and a discriminator design. Our STructure-Aware Feature Fusion (STAFF) network achieves state-of-the-art performance on various UDA datasets.
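A toy sketch of the sampling idea behind such MI objectives: the Donsker-Varadhan lower bound I(X;Y) >= E_joint[T] - log E_marginal[e^T], estimated by scoring paired samples (joint) against shuffled pairs (product of marginals) with a critic T. In STAFF the critic is a trained discriminator on deep features; here it is a hand-picked bilinear score on scalar toy data, so the bound is loose but still a valid lower bound.

```python
import math
import random

def dv_mi_lower_bound(xs, ys, critic):
    """Donsker-Varadhan MI lower bound from paired vs. shuffled samples."""
    n = len(xs)
    joint = sum(critic(x, y) for x, y in zip(xs, ys)) / n
    ys_shuf = ys[:]
    random.shuffle(ys_shuf)                       # break the pairing -> marginals
    marg = sum(math.exp(critic(x, y)) for x, y in zip(xs, ys_shuf)) / n
    return joint - math.log(marg)

random.seed(0)
xs = [random.gauss(0, 1) for _ in range(4000)]
ys = xs[:]                                        # fully dependent: MI is large
estimate = dv_mi_lower_bound(xs, ys, lambda x, y: 0.25 * x * y)
```

With dependent pairs the estimate is positive; training the critic (as the paper's discriminator does) tightens the bound.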


2020 ◽  
Vol 13 (4) ◽  
pp. 2109-2124 ◽  
Author(s):  
Jorge Baño-Medina ◽  
Rodrigo Manzanas ◽  
José Manuel Gutiérrez

Abstract. Deep learning techniques (in particular convolutional neural networks, CNNs) have recently emerged as a promising approach for statistical downscaling due to their ability to learn spatial features from huge spatiotemporal datasets. However, existing studies are based on complex models, applied to particular case studies and using simple validation frameworks, which makes a proper assessment of the (possible) added value offered by these techniques difficult. As a result, these models are usually seen as black boxes, generating distrust among the climate community, particularly in climate change applications. In this paper we undertake a comprehensive assessment of deep learning techniques for continental-scale statistical downscaling, building on the VALUE validation framework. In particular, different CNN models of increasing complexity are applied to downscale temperature and precipitation over Europe, comparing them with a few standard benchmark methods from VALUE (linear and generalized linear models) which have been traditionally used for this purpose. Besides analyzing the adequacy of different components and topologies, we also focus on their extrapolation capability, a critical point for their potential application in climate change studies. To do this, we use a warm test period as a surrogate for possible future climate conditions. Our results show that, while the added value of CNNs is mostly limited to the reproduction of extremes for temperature, these techniques do outperform the classic ones in the case of precipitation for most aspects considered. This overall good performance, together with the fact that they can be suitably applied to large regions (e.g., continents) without worrying about the spatial features being considered as predictors, can foster the use of statistical approaches in international initiatives such as Coordinated Regional Climate Downscaling Experiment (CORDEX).
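A minimal sketch of the kind of linear benchmark the CNNs are compared against: ordinary least squares mapping two large-scale predictors to a local predictand, fitted via the normal equations. Variable names and data are illustrative; VALUE's actual benchmarks are richer (e.g. generalized linear models for precipitation occurrence).

```python
def solve3(a, b):
    """Gauss-Jordan elimination for a 3x3 system a x = b."""
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [mr - f * mc for mr, mc in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

def fit_ols2(x1, x2, y):
    """Fit y = b0 + b1*x1 + b2*x2 via the normal equations X'X b = X'y."""
    n = len(y)
    cols = [[1.0] * n, x1, x2]
    xtx = [[sum(u * v for u, v in zip(c1, c2)) for c2 in cols] for c1 in cols]
    xty = [sum(u * yi for u, yi in zip(c, y)) for c in cols]
    return solve3(xtx, xty)

# Noiseless synthetic check: the coefficients should be recovered exactly.
x1 = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [1.0, 0.0, 2.0, 1.0, 3.0, 0.5]
y = [2.0 + 0.5 * a - 0.3 * b for a, b in zip(x1, x2)]
b0, b1, b2 = fit_ols2(x1, x2, y)
```

The study's point is precisely whether CNN-learned spatial features beat such hand-specified linear predictor sets, especially under extrapolation.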


2014 ◽  
Vol 2014 ◽  
pp. 1-11 ◽  
Author(s):  
Huibing Hao ◽  
Chun Su

A novel reliability assessment method is proposed for degradation products with two dependent performance characteristics (PCs), in contrast to existing work that utilizes only one-dimensional degradation data. In this model, the dependence between the two PCs is described by the Frank copula function, and each PC is governed by a nonlinear diffusion process with random effects, where the random effects capture unit-to-unit differences. Since the model is complicated and analytically intractable, the Markov chain Monte Carlo (MCMC) method is used to estimate the unknown parameters. A numerical example about an LED lamp is given to demonstrate the usefulness and validity of the proposed model and method. Numerical results show that the nonlinear diffusion model with random effects is very useful, as checked by the goodness of fit on the real data, and that ignoring the dependence between PCs may lead to different reliability conclusions.
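A sketch of the dependence layer only: the Frank copula C_theta(u, v) = -(1/theta) ln(1 + (e^(-theta u) - 1)(e^(-theta v) - 1) / (e^(-theta) - 1)), applied here to the two PCs' marginal reliabilities. The diffusion marginals and the MCMC estimation are not reproduced; the reliabilities and theta below are made-up numbers.

```python
import math

def frank_copula(u, v, theta):
    """Frank copula C_theta(u, v); theta = 0 is the independence limit."""
    if theta == 0:
        return u * v
    num = (math.exp(-theta * u) - 1) * (math.exp(-theta * v) - 1)
    den = math.exp(-theta) - 1
    return -math.log(1 + num / den) / theta

def joint_reliability(r1, r2, theta):
    """P(both PCs survive), coupling the two marginal reliabilities."""
    return frank_copula(r1, r2, theta)

# With positive dependence (theta > 0) the joint reliability exceeds the
# independence product r1 * r2 -- which is why ignoring the dependence
# between PCs changes the reliability conclusion.
r_dep = joint_reliability(0.9, 0.9, 5.0)
r_ind = 0.9 * 0.9
```

The copula's boundary property C(u, 1) = u guarantees the marginals are preserved regardless of theta.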


Author(s):  
Fiorella Pia Salvatore ◽  
Alessia Spada ◽  
Francesca Fortunato ◽  
Demetris Vrontis ◽  
Mariantonietta Fiore

The purpose of this paper is to investigate the determinants influencing the costs of cardiovascular disease in the regional health service in Italy’s Apulia region from 2014 to 2016. Data for patients with acute myocardial infarction (AMI), heart failure (HF), and atrial fibrillation (AF) were collected from the hospital discharge registry. Generalized linear models (GLM) and generalized linear mixed models (GLMM) were used to identify the role of random effects in improving the model performance. The study was based on socio-demographic variables and disease-specific variables (diagnosis-related group, hospitalization type, hospital stay, surgery, and economic burden of the hospital discharge form). Firstly, both models indicated an increase in health costs in 2016 and lower spending values for women (p < 0.001). The GLMM indicates a significant increase in health expenditure with increasing age (p < 0.001). Day-hospital has the lowest cost, surgery increases the cost, and AMI is the most expensive pathology, contrary to AF (p < 0.001). Secondly, AIC and BIC assume the lowest values for the GLMM, indicating the random effects’ relevance in improving the model performance. This study is the first to consider real data to estimate the economic burden of CVD from the regional health service’s perspective. It appears significant for its ability to provide a large set of estimates of the economic burden of CVD, providing information to managers for health management and planning.
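A sketch of the model-selection step only: AIC and BIC trade log-likelihood against parameter count, and the model with the lower value wins. The log-likelihoods and parameter counts below are made-up; in the study the GLMM's random effects earn it the lowest AIC and BIC.

```python
import math

def aic(loglik, k):
    """Akaike information criterion: 2k - 2 ln L."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian information criterion: k ln n - 2 ln L."""
    return k * math.log(n) - 2 * loglik

# Hypothetical fits: the GLMM spends one extra parameter (the random-effect
# variance) but raises the log-likelihood enough to be preferred.
n = 5000
glm = {"loglik": -12000.0, "k": 8}
glmm = {"loglik": -11950.0, "k": 9}
best = min(("GLM", glm), ("GLMM", glmm),
           key=lambda m: aic(m[1]["loglik"], m[1]["k"]))[0]
```

BIC penalizes the extra parameter more heavily than AIC once ln n > 2, so agreement between the two criteria (as reported in the paper) is meaningful evidence for the random effects.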


2018 ◽  
Vol 7 (3.15) ◽  
pp. 36 ◽  
Author(s):  
Sarah Nadirah Mohd Johari ◽  
Fairuz Husna Muhamad Farid ◽  
Nur Afifah Enara Binti Nasrudin ◽  
Nur Sarah Liyana Bistamam ◽  
Nur Syamira Syamimi Muhammad Shuhaili

Predicting financial market changes is an important issue in time series analysis, receiving increasing attention due to financial crises. The autoregressive integrated moving average (ARIMA) model has been one of the most widely used linear models in time series forecasting, but it cannot easily capture nonlinear patterns. The generalized autoregressive conditional heteroscedasticity (GARCH) model improves on ARIMA by modeling volatility as a function of past forecast errors and past volatility. Support vector machines (SVM) and artificial neural networks (ANN) have been successfully applied to nonlinear regression estimation problems. This study proposes a hybrid methodology that exploits the unique strengths of the GARCH + SVM and GARCH + ANN models in forecasting a stock index. Real stock-price data from the FTSE Bursa Malaysia KLCI were used to examine the forecasting accuracy of the proposed models. The results show that the proposed hybrid models achieve the best forecasting performance compared to the other models.
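A sketch of the GARCH(1,1) recursion that supplies the volatility input to such hybrids: sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}. In the hybrid setup these conditional variances (alongside ARIMA residual features) would feed an SVM or ANN; the parameter values here are illustrative, not estimated from KLCI data.

```python
def garch11_filter(returns, omega, alpha, beta):
    """Run the GARCH(1,1) conditional-variance recursion over a return series."""
    sigma2 = omega / (1 - alpha - beta)       # start at the unconditional variance
    path = [sigma2]
    for r in returns:
        sigma2 = omega + alpha * r * r + beta * sigma2
        path.append(sigma2)
    return path

# Quiet market, then one large shock: variance jumps, then decays
# geometrically at rate alpha + beta -- the volatility clustering that
# plain ARIMA cannot represent.
rets = [0.0] * 10 + [5.0] + [0.0] * 10
path = garch11_filter(rets, omega=0.1, alpha=0.1, beta=0.8)
```

Positivity of omega, alpha, beta and alpha + beta < 1 keep the filtered variance positive and mean-reverting.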


Forests ◽  
2021 ◽  
Vol 12 (9) ◽  
pp. 1162
Author(s):  
Olga Cholewińska ◽  
Andrzej Keczyński ◽  
Barbara Kusińska ◽  
Bogdan Jaroszewicz

Large trees are keystone structures for the functioning and maintenance of the biological diversity of wooded landscapes. Thus, we need a better understanding of the interactions between large trees and other trees, and of their effects on the diversity and spatial structure of the surrounding trees. We studied these interactions in the core of the Białowieża Primeval Forest—Europe’s best-preserved temperate forest ecosystem, characterized by a high abundance of ancient trees. We measured the diameter and bark thickness of the monumental trees of Acer platanoides L., Carpinus betulus L., Picea abies (L.) H. Karst, Quercus robur L., and Tilia cordata Mill., as well as the diameter and distance to the monumental tree of the five nearest neighbor trees. The effects of the monumental tree on the arrangement of the surrounding trees were studied with the help of linear models. We revealed that the species identity of a large tree had, in the case of C. betulus and T. cordata, a significant impact on the diversity of adjacent tree groupings, their distance to the central tree, and the frequency of the neighboring trees. The distance between the neighbor and the large tree increased with the increasing diameter of the central tree. Our findings reinforce the call for the protection of large old trees, regardless of their species and of where they grow, from both a geographical and an ecosystem point of view.


2016 ◽  
Vol 2016 ◽  
pp. 1-8 ◽  
Author(s):  
Lorentz Jäntschi ◽  
Donatella Bálint ◽  
Sorana D. Bolboacă

Multiple linear regression analysis is widely used to link an outcome with predictors for a better understanding of the behaviour of the outcome of interest. Usually, under the assumption that the errors follow a normal distribution, the coefficients of the model are estimated by minimizing the sum of squared deviations. A new approach based on maximum likelihood estimation is proposed for finding the coefficients of linear models with two predictors without any restrictive assumptions on the distribution of the errors. The algorithm was developed, implemented, and tested as a proof of concept using fourteen sets of compounds, investigating the link between activity/property (as outcome) and structural feature information incorporated in molecular descriptors (as predictors). The results on real data demonstrated that, in all investigated cases, the power of the error is significantly different from the conventional value of two when the Gauss-Laplace distribution is used to relax the restrictive assumption of normally distributed errors. Therefore, the Gauss-Laplace distribution of the errors could not be rejected, while the hypothesis that the power of the error under the Gauss-Laplace distribution is normally distributed also failed to be rejected.
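A sketch of the relaxed error model: the Gauss-Laplace (generalized Gaussian) density p(x) = k / (2 sigma Gamma(1/k)) exp(-|x/sigma|^k), where k = 2 recovers the normal and k = 1 the Laplace distribution. For fixed power k the MLE of sigma is closed-form, so the power itself can be chosen by a profile-likelihood grid search. The grid and the toy data below are illustrative, not the paper's fourteen compound sets.

```python
import math
import random

def profile_loglik(xs, k):
    """Log-likelihood of a zero-mean generalized Gaussian, profiled over sigma."""
    n = len(xs)
    sigma_k = (k / n) * sum(abs(x) ** k for x in xs)   # sigma-hat ** k
    sigma = sigma_k ** (1 / k)
    # sum(|x/sigma|^k) equals n/k at the sigma-MLE, hence the final -n/k term
    return n * (math.log(k) - math.log(2) - math.log(sigma)
                - math.lgamma(1 / k)) - n / k

def fit_power(xs, grid):
    """Grid-search MLE of the error power k."""
    return max(grid, key=lambda k: profile_loglik(xs, k))

# Laplace-distributed errors: the fitted power should sit near 1, not at
# the conventional value of 2 assumed by least squares.
random.seed(1)
xs = [random.expovariate(1.0) * random.choice([-1, 1]) for _ in range(3000)]
grid = [0.5 + 0.05 * i for i in range(41)]             # powers 0.5 .. 2.5
k_hat = fit_power(xs, grid)
```

A fitted power far from 2 is exactly the situation in which the least-squares coefficients stop being maximum-likelihood estimates.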


Author(s):  
Giuseppe Buccheri ◽  
Fulvio Corsi

Abstract Despite their effectiveness, linear models for realized variance neglect measurement errors on integrated variance and exhibit several forms of misspecification due to the inherent nonlinear dynamics of volatility. We propose new extensions of the popular approximate long-memory heterogeneous autoregressive (HAR) model apt to disentangle these effects and quantify their separate impact on volatility forecasts. By combining the asymptotic theory of the realized variance estimator with the Kalman filter and by introducing time-varying HAR parameters, we build new models that account for: (i) measurement errors (HARK), (ii) nonlinear dependencies (SHAR) and (iii) both measurement errors and nonlinearities (SHARK). The proposed models are simply estimated through standard maximum likelihood methods and are shown, both on simulated and real data, to provide better out-of-sample forecasts compared to standard HAR specifications and other competing approaches.
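A sketch of the HAR regressor construction the extensions build on: each day's realized variance is explained by yesterday's RV and its weekly (5-day) and monthly (22-day) averages. Only the lag-feature construction is shown; the Kalman-filter and time-varying-parameter machinery of HARK/SHAR/SHARK is beyond a few lines.

```python
def har_features(rv):
    """Rows of (RV_daily, RV_weekly, RV_monthly) lag features, for t >= 22."""
    rows = []
    for t in range(22, len(rv)):
        daily = rv[t - 1]
        weekly = sum(rv[t - 5:t]) / 5
        monthly = sum(rv[t - 22:t]) / 22
        rows.append((daily, weekly, monthly))
    return rows

rv = [float(i % 7 + 1) for i in range(60)]   # toy realized-variance series
feats = har_features(rv)
```

Regressing RV_t on these three averages gives the standard HAR baseline; the paper's point is that treating the measured RV as error-free and the coefficients as constant are both misspecifications worth modeling separately.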


2019 ◽  
Vol 1 (2) ◽  
pp. 164-183 ◽  
Author(s):  
Dimitris Bertsimas ◽  
Jack Dunn ◽  
Nishanth Mundru

Motivated by personalized decision making, given observational data [Formula: see text] involving features [Formula: see text], assigned treatments or prescriptions [Formula: see text], and outcomes [Formula: see text], we propose a tree-based algorithm called optimal prescriptive tree (OPT) that uses either constant or linear models in the leaves of the tree to predict the counterfactuals and assign optimal treatments to new samples. We propose an objective function that balances optimality and accuracy. OPTs are interpretable and highly scalable, accommodate multiple treatments, and provide high-quality prescriptions. We report results involving synthetic and real data that show that OPTs either outperform or are comparable with several state-of-the-art methods. Given their combination of interpretability, scalability, generalizability, and performance, OPTs are an attractive alternative for personalized decision making in a variety of areas, such as online advertising and personalized medicine.
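A toy sketch of the prescriptive idea only: partition the feature space, estimate each treatment's mean outcome inside a leaf from the observational data, and prescribe the treatment with the best estimated counterfactual. Real OPTs jointly optimize the splits and balance prescription optimality against predictive accuracy; the depth-1 "tree", data, and split below are hand-made.

```python
def leaf_prescription(samples):
    """samples: list of (treatment, outcome); prescribe the max-mean treatment."""
    by_treatment = {}
    for t, y in samples:
        by_treatment.setdefault(t, []).append(y)
    means = {t: sum(ys) / len(ys) for t, ys in by_treatment.items()}
    return max(means, key=means.get)

def prescribe(x, split, left_leaf, right_leaf):
    """Depth-1 tree: route on one feature, prescribe per leaf."""
    leaf = left_leaf if x < split else right_leaf
    return leaf_prescription(leaf)

# Treatment "B" helps when x < 0.5; treatment "A" helps otherwise.
left = [("A", 1.0), ("A", 1.2), ("B", 2.0), ("B", 2.2)]
right = [("A", 3.0), ("A", 3.1), ("B", 1.0), ("B", 0.9)]
```

The interpretability claim in the abstract comes from exactly this structure: the prescription rule is a readable tree rather than a black-box policy.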

