Regularization Techniques
Recently Published Documents

TOTAL DOCUMENTS: 256 (five years: 76)
H-INDEX: 25 (five years: 4)

Complexity ◽ 2022 ◽ Vol 2022 ◽ pp. 1-10
Author(s): Sara Muhammadullah, Amena Urooj, Faridoon Khan, Mohammed N Alshahrani, Mohammed Alqawba, ...

In order to reduce the dimensionality of the parameter space and enhance out-of-sample forecasting performance, this research compares regularization techniques with Autometrics in time-series modeling. We mainly focus on comparing weighted lag adaptive LASSO (WLAdaLASSO) with Autometrics, but as benchmarks we also estimate other popular regularization methods: LASSO, AdaLASSO, SCAD, and MCP. For analytical comparison, we implement a Monte Carlo simulation and assess the performance of these techniques in terms of out-of-sample Root Mean Square Error (RMSE), gauge, and potency. The comparison is carried out with varying autocorrelation coefficients and sample sizes. The simulation experiment indicates that WLAdaLASSO outperforms both Autometrics and the other regularization approaches in covariate selection and forecasting, especially when there is a stronger linear dependency between predictors. In contrast, the computational efficiency of Autometrics decreases with a strong linear dependency between predictors. However, under a large sample and weak linear dependency between predictors, the Autometrics potency → 1 and gauge → α (the nominal significance level). In contrast, LASSO, AdaLASSO, SCAD, and MCP select more covariates and have higher RMSE than Autometrics and WLAdaLASSO. To compare the considered techniques on real data, we built a general unrestricted model (GUM) for covariate selection and out-of-sample forecasting of the trade balance of Pakistan. We train the model on observations from 1985–2015 and reserve 2016–2020 as test data for the out-of-sample forecast.
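The adaptive-LASSO idea underlying WLAdaLASSO can be sketched in plain NumPy: a pilot OLS fit supplies per-coefficient weights, and the weighted L1 problem is solved by rescaling the columns and running an ordinary ISTA solver. The data-generating process, weights, and tuning constant below are illustrative assumptions, not the paper's simulation design.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
# hypothetical DGP: only predictors 0 and 3 matter
y = 1.5 * X[:, 0] - 2.0 * X[:, 3] + rng.normal(scale=0.5, size=n)

def lasso_ista(X, y, alpha, n_iter=5000):
    """Minimize (1/2n)||y - Xb||^2 + alpha*||b||_1 by proximal gradient (ISTA)."""
    n = len(y)
    L = np.linalg.norm(X, 2) ** 2 / n            # Lipschitz constant of the smooth part
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        b -= X.T @ (X @ b - y) / (n * L)         # gradient step on the data term
        b = np.sign(b) * np.maximum(np.abs(b) - alpha / L, 0.0)  # soft-thresholding
    return b

# adaptive LASSO: pilot OLS gives weights w_j = 1/|b_ols_j|;
# rescaling column j by 1/w_j turns the weighted problem into a plain LASSO
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
w = 1.0 / np.abs(b_ols)
b_scaled = lasso_ista(X / w, y, alpha=0.1)
beta = b_scaled / w                              # map back to the original scale
selected = np.flatnonzero(np.abs(beta) > 1e-6)
```

Because the pilot weights are large for irrelevant predictors, their columns are shrunk hard and dropped, which is the oracle-selection property the abstract's comparison relies on.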


2022 ◽ Vol 14 (2) ◽ pp. 288
Author(s): Yangyang Wang, Zhiming He, Xu Zhan, Yuanhua Fu, Liming Zhou

Three-dimensional (3D) synthetic aperture radar (SAR) imaging provides complete 3D spatial information and has been used in environmental monitoring in recent years. Compared with matched filtering (MF) algorithms, regularization techniques can improve image quality. However, due to their substantial computational cost, existing observation-matrix-based sparse imaging algorithms are difficult to apply to large-scene and 3D reconstructions. Therefore, in this paper, novel 3D sparse reconstruction algorithms with generalized Lq-regularization are proposed. First, we combine majorization–minimization (MM) and L1 regularization (MM-L1) to improve SAR image quality. Next, we combine MM and L1/2 regularization (MM-L1/2) to achieve high-quality 3D images. Then, we present an algorithm combining MM and L0 regularization (MM-L0) to obtain 3D images. Finally, we present a generalized MM-Lq algorithm (GMM-Lq) for sparse SAR imaging problems with arbitrary q (0 ≤ q ≤ 1). The proposed algorithms improve the quality of 3D SAR images compared with existing regularization techniques while effectively reducing the amount of computation. Additionally, the reconstructed complex image retains phase information, so the reconstructed SAR image remains suitable for interferometry applications. Simulation and experimental results verify the effectiveness of the algorithms.
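The family of Lq-regularized reconstructions can be illustrated with a generic iterative-thresholding loop: each MM/proximal step is a gradient step on the data term followed by a q-dependent thresholding rule (soft for q = 1, hard for q = 0; closed forms also exist for q = 1/2 and 2/3). The toy sensing matrix and sparse scene below are assumptions for illustration, not the paper's SAR observation geometry.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 60, 128
A = rng.normal(size=(m, n)) / np.sqrt(m)     # toy observation matrix
x_true = np.zeros(n)
x_true[[5, 40, 90]] = [2.0, -1.5, 1.0]       # sparse "scene" (three scatterers)
y = A @ x_true

def threshold(v, t, q):
    """Thresholding rule for the Lq penalty: q=1 -> soft, q=0 -> hard."""
    if q == 1:
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    if q == 0:
        return np.where(np.abs(v) > t, v, 0.0)
    raise NotImplementedError("closed forms also exist for q=1/2 and q=2/3")

def iterative_thresholding(A, y, lam, q, n_iter=1000):
    L = np.linalg.norm(A, 2) ** 2            # step-size bound for the data term
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = threshold(x + A.T @ (y - A @ x) / L, lam / L, q)
    return x

x_rec = iterative_thresholding(A, y, lam=0.05, q=1)
```

With far fewer measurements than unknowns (60 vs. 128), the thresholded iteration still recovers the three-scatterer support, which is the point of sparsity-driven SAR imaging.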


2022 ◽ pp. 105760
Author(s): Erick Meira, Fernando Luiz Cyrino Oliveira, Lilian M. de Menezes

2021
Author(s): Doron Avramov, Guy Kaplanski, Avanidhar Subrahmanyam

Regression regularization techniques show that deviations of accounting fundamentals from their preceding moving averages forecast drifts in equity market prices. Deviations-based predictability survives a comprehensive set of prominent anomalies. The profitability is concentrated in the long leg and survives value weighting and the exclusion of microcaps. We provide evidence that the predictability arises because investors anchor to recent means of fundamentals. A factor based on our fundamentals-based index yields economically significant intercepts after controlling for a comprehensive set of other factors, including those based on profit margins and earnings drift. This paper was accepted by Gustavo Manso, finance.
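As a stylized illustration of the mechanism (not the paper's dataset or estimator), one can regress next-period drift on a fundamental's deviation from its trailing moving average using a ridge-penalized regression. The random-walk fundamental and the synthetic loading of 0.3 below are assumptions made purely for the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
T, window = 300, 12
# hypothetical fundamental series for one firm (random walk, for illustration)
fund = np.cumsum(rng.normal(size=T))
ma = np.convolve(fund, np.ones(window) / window, mode="valid")  # trailing mean
dev = fund[window - 1:] - ma          # deviation from the recent moving average
# synthetic next-period drift that loads on the deviation with coefficient 0.3
ret = 0.3 * dev[:-1] + rng.normal(scale=0.5, size=len(dev) - 1)

X = dev[:-1].reshape(-1, 1)
lam = 1.0                             # ridge (L2) regularization strength
beta = np.linalg.solve(X.T @ X + lam * np.eye(1), X.T @ ret)
```

With many firms and fundamentals, the same ridge (or LASSO) machinery selects which deviation signals carry predictive content, which is the role regularization plays in the abstract.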


GPS Solutions ◽ 2021 ◽ Vol 26 (1)
Author(s): Zohreh Adavi, Robert Weber, Marcus Franz Glaner

Abstract: Water vapor is one of the most variable components of the earth's atmosphere and plays a significant role in the formation of clouds, rain and snow, air pollution, and acid rain. Therefore, increasing the accuracy of estimated water vapor can lead to more accurate predictions of severe weather, upcoming storms, and natural hazards. In recent years, GNSS has turned out to be a valuable tool for remotely sensing the atmosphere. In this context, GNSS tomography has evolved into an extremely promising technique for reconstructing the spatiotemporal structure of the troposphere. However, deploying dual-frequency (DF) receivers with a spatial resolution of a few tens of kilometers, as required for GNSS tomography, is not economically feasible. Therefore, in this research, the feasibility of using single-frequency (SF) observations in GNSS tomography has been investigated as an alternative approach. The algebraic reconstruction technique (ART) and the total variation (TV) method are examined to reconstruct a regularized solution. The accuracy of the water vapor distribution model reconstructed using low-cost receivers is verified against radiosonde measurements in the area of the EPOSA (Echtzeit Positionierung Austria) GNSS network, which is mostly located in the eastern part of Austria, for the period DoY 232–245, 2019. The results indicate that, irrespective of the investigated ART and TV techniques, the quality of the reconstructed wet refractivity field is comparable for the SF and DF schemes. However, in the SF scheme the mean absolute error (MAE) with respect to the radiosonde measurements for ART + NWM and ART + TV can reach up to 10 ppm during noontime. Despite that, all statistical results demonstrate a degradation of the retrieved wet refractivity field of only 10–40% when applying the SF scheme in the presence of the initial guess.
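The ART solver at the core of such tomographic reconstructions is the Kaczmarz iteration: each "ray" contributes one linear equation (the slant integral of refractivity through the voxels it crosses), and the solution is updated by projecting onto one equation at a time. The toy geometry below (random ray weights, random refractivity field) is an assumption for illustration; real GNSS tomography builds the matrix from actual ray paths and typically adds an initial guess and TV regularization.

```python
import numpy as np

rng = np.random.default_rng(3)
n_rays, n_vox = 200, 50
A = rng.uniform(0.0, 1.0, size=(n_rays, n_vox))  # toy ray-length weights per voxel
x_true = rng.uniform(0.0, 10.0, size=n_vox)      # "wet refractivity" per voxel
b = A @ x_true                                   # slant integrals along each ray

def art(A, b, n_sweeps=200, relax=1.0):
    """ART (Kaczmarz): cyclically project onto each ray's equation.

    With noisy data a relaxation factor relax < 1 is commonly used."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(len(b)):
            a = A[i]
            x += relax * (b[i] - a @ x) / (a @ a) * a
    return x

x_rec = art(A, b)
```

For a consistent, overdetermined system like this toy one, the sweeps converge to the exact field; the under-determined real problem is where the initial guess and TV constraint discussed in the abstract become essential.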


2021 ◽ Vol 22 (1)
Author(s): Johannes Linder, Georg Seelig

Abstract
Background: Optimization of DNA and protein sequences based on machine learning models is becoming a powerful tool for molecular design. Activation maximization offers a simple design strategy for differentiable models: one-hot coded sequences are first approximated by a continuous representation, which is then iteratively optimized with respect to the predictor oracle by gradient ascent. While elegant, the current version of the method suffers from vanishing gradients and may cause predictor pathologies leading to poor convergence.
Results: Here, we introduce Fast SeqProp, an improved activation maximization method that combines straight-through approximation with normalization across the parameters of the input sequence distribution. Fast SeqProp overcomes bottlenecks in earlier methods arising from input parameters becoming skewed during optimization. Compared to prior methods, Fast SeqProp results in up to 100-fold faster convergence while also finding improved fitness optima for many applications. We demonstrate Fast SeqProp's capabilities by designing DNA and protein sequences for six deep learning predictors, including a protein structure predictor.
Conclusions: Fast SeqProp offers a reliable and efficient method for general-purpose sequence optimization through a differentiable fitness predictor. As demonstrated on a variety of deep learning models, the method is widely applicable and can incorporate various regularization techniques to maintain confidence in the sequence designs. As a design tool, Fast SeqProp may aid in the development of novel molecules, drug therapies and vaccines.
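The straight-through idea can be sketched in a few lines of NumPy for a toy linear "fitness predictor": the forward pass discretizes the softmax into a one-hot sequence, while the backward pass differentiates as if the softmax probabilities had been used. The linear fitness matrix W and all sizes are made up for illustration, and Fast SeqProp's logit normalization is omitted here.

```python
import numpy as np

rng = np.random.default_rng(4)
L_seq, K = 8, 4                        # sequence length, alphabet size (e.g. A,C,G,T)
W = rng.normal(size=(L_seq, K))        # toy linear predictor: f(s) = sum(W * s)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

logits = np.zeros((L_seq, K))
lr = 0.5
for _ in range(200):
    p = softmax(logits)
    onehot = np.eye(K)[p.argmax(axis=1)]   # discrete forward pass (straight-through)
    g = W                                  # df/d(onehot); constant for this linear f
    # backward: pretend the softmax p was used, i.e. push g through the softmax Jacobian
    gp = p * (g - (g * p).sum(axis=1, keepdims=True))
    logits += lr * gp                      # gradient ascent on predicted fitness

best = logits.argmax(axis=1)               # final designed sequence
fitness = W[np.arange(L_seq), best].sum()
```

For a nonlinear predictor, g would be evaluated at the discrete one-hot input, which is exactly where the straight-through estimator differs from plain softmax relaxation.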


2021 ◽ Vol 19 (2) ◽ pp. 9-15
Author(s): Arjun Singh Saud, Subarna Shakya

Stock price forecasting is a field of interest for many stock investors seeking to earn more profit from stock trading. Nowadays, machine learning researchers are also involved in this research field so that fast, accurate and automatic stock price forecasting can be achieved. This research paper evaluated a GRU network's performance with weight decay regularization techniques for predicting the prices of stocks listed on NEPSE. The three weight decay regularization techniques analyzed in this research work were (1) L1 regularization, (2) L2 regularization and (3) L1_L2 regularization. Six randomly selected stocks from NEPSE were used in the experiments. From the experimental results, we observed that L2 regularization outperformed the L1 and L1_L2 regularization techniques for all six stocks. The average MSE obtained with L2 regularization was 4.12% to 33.52% lower than the average MSE obtained with L1 regularization, and 10.92% to 37.1% lower than the average MSE obtained with L1_L2 regularization. Thus, we concluded that L2 regularization is the best choice among the weight decay regularization techniques for stock price prediction.
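The three penalties compared above differ only in the term added to the training loss: λ‖w‖₁ (L1), λ‖w‖₂² (L2), or their sum (L1_L2). A minimal NumPy sketch on a linear model shows the mechanics; the synthetic data and λ values are assumptions, not the paper's GRU setup.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 100, 5
X = rng.normal(size=(n, p))
w_true = np.array([1.0, -2.0, 0.0, 0.0, 3.0])
y = X @ w_true + rng.normal(scale=0.1, size=n)

def fit(X, y, l1=0.0, l2=0.0, lr=0.05, epochs=2000):
    """Gradient descent on MSE plus optional L1/L2 weight-decay terms."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)        # gradient of the data loss
        grad += 2.0 * l2 * w + l1 * np.sign(w)   # penalty (sub)gradients
        w -= lr * grad
    return w

w_plain = fit(X, y)
w_l1 = fit(X, y, l1=0.01)
w_l2 = fit(X, y, l2=0.01)
```

Both penalized fits have smaller weight norms than the unpenalized one; L1 pushes small weights toward exact zero while L2 shrinks all weights proportionally, which is why their forecasting behavior can differ across stocks.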


Sensors ◽ 2021 ◽ Vol 21 (16) ◽ pp. 5456
Author(s): Hamid Mukhtar, Saeed Mian Qaisar, Atef Zaguia

Alcoholism is attributed to regular or excessive drinking of alcohol and leads to disturbances of the neuronal system in the human brain. This results in certain malfunctioning of neurons that can be detected by an electroencephalogram (EEG) using several electrodes placed at appropriate positions on the scalp. It is of great interest to be able to classify an EEG activity as that of a normal person or an alcoholic person using data from the minimum possible number of electrodes (or channels). Due to the complex nature of EEG signals, accurate classification of alcoholism using only a small dataset is a challenging task. Artificial neural networks, specifically convolutional neural networks (CNNs), provide efficient and accurate results in various pattern-based classification problems. In this work, we apply a CNN to raw EEG data and demonstrate how we achieved 98% average accuracy by optimizing a baseline CNN model and outperforming its results across a range of performance evaluation metrics on the University of California at Irvine Machine Learning (UCI-ML) EEG dataset. This article explains the stepwise improvement of the baseline model using dropout, batch normalization, and kernel regularization techniques and provides a comparison of the two models that can be beneficial for aspiring practitioners aiming to develop similar classification models with CNNs. A performance comparison with other approaches using the same dataset is also provided.
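Two of the techniques named above, dropout and batch normalization, are easy to state precisely as array operations; kernel regularization is simply an L2 penalty on the layer weights added to the loss, as in the previous abstract. The shapes and rates below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def dropout(x, rate, train=True):
    """Inverted dropout: scale kept units at train time so inference is a no-op."""
    if not train or rate == 0.0:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature over the batch, then rescale and shift."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

x = rng.normal(size=(256, 8))   # a batch of 256 examples, 8 features
h = batch_norm(x)
d = dropout(h, rate=0.5)
```

Dropout randomly silences units so the network cannot co-adapt, while batch normalization keeps activations well-scaled; together with the weight penalty they form the stepwise regularization recipe the article describes.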


2021 ◽ Vol 2021 (1)
Author(s): Mohammed D. Kassim, Thabet Abdeljawad, Saeed M. Ali, Mohammed S. Abdo

Abstract: In this research paper, we study the stability of solutions of some nonlinear initial value fractional differential problems. These equations are studied within the generalized fractional derivative of various orders. In order to show that the solutions decay to zero as a power function, we establish sufficient conditions on the nonlinear terms. To this end, several versions of inequalities are combined and generalized via the so-called Bihari inequality. Moreover, we employ some properties of the generalized fractional derivative and appropriate regularization techniques. Finally, the paper includes examples affirming the validity of the results.
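For reference, the comparison tool invoked here, the Bihari (Bihari–LaSalle) inequality, can be stated in its standard form; the paper's generalized versions extend this statement.

```latex
% Bihari–LaSalle inequality (standard statement)
Let $u, f \colon [0,\infty) \to [0,\infty)$ be continuous, let $w$ be continuous,
nondecreasing and positive on $(0,\infty)$, and let $a \ge 0$. If
\[
  u(t) \le a + \int_0^t f(s)\, w\bigl(u(s)\bigr)\, ds ,
\]
then
\[
  u(t) \le W^{-1}\!\Bigl( W(a) + \int_0^t f(s)\, ds \Bigr),
  \qquad W(r) = \int_{r_0}^{r} \frac{d\tau}{w(\tau)}, \quad r_0 > 0,
\]
for all $t$ such that $W(a) + \int_0^t f(s)\, ds$ lies in the domain of $W^{-1}$.
```

With $w(u) = u$ this reduces to the Grönwall inequality; choosing $w$ to match the nonlinearity is what yields the power-function decay rates studied in the paper.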

