Minimization Methods
Recently Published Documents


TOTAL DOCUMENTS: 200 (five years: 33)

H-INDEX: 22 (five years: 2)

2021 ◽  
Author(s):  
Adrian Brown

Abstract This paper discusses the mathematical aspects of band fitting and introduces the Asymmetric Gaussian curve and its tangent space for the first time. First, we derive an equation for an Asymmetric Gaussian shape. We then derive a rule for the resolution of two Gaussian-shaped bands and use the Asymmetric Gaussian equation to derive a Master Equation for fitting two overlapping bands. We identify regions of the fitting space where the Asymmetric Gaussian fit is optimal, suboptimal, and not optimal. We then demonstrate the use of the Asymmetric Gaussian curve to fit four overlapping Gaussian bands and show how this is relevant to the olivine-family spectral complex at 1 μm, developing a modified model of that complex based on previous work by Runciman and Burns. The limitations of the asymmetric band-fitting method and a critical assessment of three commonly used numerical minimization methods are also provided.
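The paper's exact Asymmetric Gaussian equation is not reproduced in this listing. As a hedged illustration of asymmetric band fitting, the sketch below fits one common asymmetric-Gaussian parameterization (a Gaussian whose width is modulated smoothly across the centre) to a synthetic band near 1 μm with SciPy's curve_fit; the function asym_gaussian and its tanh-modulated width are assumptions for illustration, not the authors' curve.

import numpy as np
from scipy.optimize import curve_fit

def asym_gaussian(x, a, mu, sigma, k):
    """One common asymmetric-Gaussian form (an assumption, not the
    paper's equation): the width varies smoothly across the centre,
    s(x) = sigma * (1 + k * tanh((x - mu) / sigma)), with |k| < 1."""
    s = sigma * (1.0 + k * np.tanh((x - mu) / sigma))
    return a * np.exp(-0.5 * ((x - mu) / s) ** 2)

# Synthetic noisy band near 1 micron (wavelength in micrometres).
rng = np.random.default_rng(1)
x = np.linspace(0.7, 1.3, 300)
y = asym_gaussian(x, 1.0, 1.0, 0.08, 0.3) + 0.01 * rng.normal(size=x.size)

# Least-squares fit; curve_fit minimizes the sum of squared residuals.
popt, _ = curve_fit(asym_gaussian, x, y, p0=[1.0, 1.0, 0.1, 0.0])
print("fitted amplitude, centre, width, asymmetry:", popt)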


2021 ◽  
Vol 12 ◽  
Author(s):  
Shuguang Han ◽  
Ning Wang ◽  
Yuxin Guo ◽  
Furong Tang ◽  
Lei Xu ◽  
...  

Inspired by L1-norm minimization methods such as basis pursuit, compressed sensing, and Lasso feature selection, sparse representation has emerged in recent years as a novel and powerful data-processing method. Researchers have not only extended the sparse representation of signals to image representation, but also applied the sparsity of vectors to matrices. Moreover, sparse representation has been applied to pattern recognition with good results. Because of its advantages, such as insensitivity to noise, strong robustness, low sensitivity to the choice of features, and resistance to overfitting, the application of sparse representation in bioinformatics deserves further study. This article reviews the development of sparse representation and explains its applications in bioinformatics, namely the use of low-rank representation matrices to identify and study cancer molecules, low-rank sparse representations to analyze and process gene expression profiles, and an introduction to the related cancers and gene expression profile databases.
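As a concrete illustration of the L1-norm minimization this review builds on, the sketch below runs Lasso feature selection with scikit-learn on synthetic, gene-expression-like data; the data, dimensions, and alpha value are hypothetical placeholders, not drawn from any database discussed in the article.

import numpy as np
from sklearn.linear_model import Lasso

# Hypothetical gene-expression-style data: many features, few samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))               # 60 samples, 500 "genes"
true_coef = np.zeros(500)
true_coef[:5] = [3.0, -2.0, 1.5, 1.0, -1.0]  # only 5 informative features
y = X @ true_coef + 0.1 * rng.normal(size=60)

# The L1 penalty drives most coefficients exactly to zero,
# yielding a sparse representation that doubles as feature selection.
model = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(model.coef_)
print(f"{selected.size} features selected:", selected[:10])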


2021 ◽  
Vol 7 (8) ◽  
pp. 78532-78543
Author(s):  
Bruno Henrique Marques Margotto ◽  
Bruno Muniz de Freitas Miotto ◽  
Carlos Eduardo Polatschek Kopperschmidt ◽  
...  

2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Maolin Liang ◽  
Lifang Dai

Recent works on the multilinear system A x^{m-1} = b, where A is an order-m, dimension-n tensor and b is an n-dimensional vector, have been motivated by applications in data mining, numerical PDEs, tensor complementarity problems, and so on. In this paper, we propose an alternating minimization method for solving this system and present several randomized versions of the algorithm to improve its performance. The numerical experiments provided show that our methods are feasible for an arbitrary tensor A and outperform some existing methods on the same problems.
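The alternating minimization algorithm itself is not given in this abstract. The NumPy sketch below only sets up the problem, evaluating A x^{m-1} by repeated mode contractions, and pairs it with one simple successive-linearization iteration (solve the linear system (A x_k^{m-2}) x_{k+1} = b at each step); that iteration is an illustrative assumption standing in for the paper's method, and its convergence is not guaranteed for a general tensor.

import numpy as np

def contract(A, x, k):
    """Contract the last k modes of tensor A with vector x."""
    for _ in range(k):
        A = np.tensordot(A, x, axes=1)  # sums over the last axis of A
    return A

def solve_multilinear(A, b, x0=None, tol=1e-10, max_iter=200):
    """Illustrative successive linearization for A x^{m-1} = b:
    at each step solve M(x) y = b with M(x) = A x^{m-2}, an n-by-n
    matrix. Not the paper's algorithm; convergence is not guaranteed."""
    m, n = A.ndim, b.shape[0]
    x = np.ones(n) if x0 is None else x0.copy()
    for _ in range(max_iter):
        M = contract(A, x, m - 2)          # requires M to be nonsingular
        y = np.linalg.solve(M, b)
        if np.linalg.norm(contract(A, y, m - 1) - b) < tol:
            return y
        x = y
    return x

# Tiny demo: order-3 tensor with n = 4 and a consistent right-hand side.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4, 4))
x_true = rng.normal(size=4)
b = contract(A, x_true, 2)
x = solve_multilinear(A, b)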


2021 ◽  
Vol 02 (01) ◽  
Author(s):  
Nazri Mohd Nawi ◽  
Eneng Tita Tosida ◽  
Hamiza Hasbi ◽  
Norhamreeza Abdul Hamid ◽  
...  

The back propagation (BP) neural network is known for its popularity and its capability in prediction and classification. BP commonly uses gradient descent (GD), one of the most widely used error-minimization methods, to train the network. Despite its popularity, BP still has limitations: learning can be very slow, and training easily gets stuck in local minima. Many techniques have been introduced to improve BP performance. This research implements a second-order method together with gradient descent in order to improve performance. The efficiency of both methods is verified and compared by means of simulations on classifying drug-addict repetition. The results show that the second-order methods are more reliable and significantly improve the learning performance of BP.
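As a baseline illustration of the gradient-descent training that the paper improves on, here is a minimal NumPy backpropagation loop for a one-hidden-layer network; the synthetic data are a hypothetical stand-in for the drug-addict repetition dataset, and the second-order variant the paper adds is not shown.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary-classification data (hypothetical stand-in
# for the drug-addict repetition dataset).
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# One hidden layer with sigmoid activations.
W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(2000):
    # Forward pass.
    H = sigmoid(X @ W1)
    out = sigmoid(H @ W2)
    # Backward pass: gradients of the mean squared error.
    d_out = (out - y) * out * (1 - out)
    d_H = (d_out @ W2.T) * H * (1 - H)
    # Plain gradient-descent updates -- the baseline BP training rule.
    W2 -= lr * (H.T @ d_out) / len(X)
    W1 -= lr * (X.T @ d_H) / len(X)

print("training accuracy:", ((out > 0.5) == y).mean())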


2021 ◽  
Vol 286 ◽  
pp. 112268
Author(s):  
Natália Valmorbida Moraes ◽  
Fernando Henrique Lermen ◽  
Márcia Elisa Soares Echeveste

2021 ◽  
pp. 385-399
Author(s):  
Wilson Guasti Junior ◽  
Isaac P. Santos

Abstract In this work we explore the use of deep learning models based on deep feedforward neural networks to solve ordinary and partial differential equations. The methodology is illustrated by solving a variety of initial and boundary value problems. The numerical results, obtained with different feedforward network structures, activation functions, and minimization methods, are compared to each other and to the exact solutions. The neural networks were implemented in Python using the TensorFlow library.
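The abstract names Python and TensorFlow but not the network or problem details. The minimal sketch below, under assumed settings (the ODE y' = -y with y(0) = 1, a trial solution that enforces the initial condition by construction, and the Adam minimizer), shows the general approach of minimizing the differential-equation residual with a small feedforward network.

import tensorflow as tf

# Assumed toy problem: solve y' = -y, y(0) = 1 on [0, 2].
# Trial solution y(t) = 1 + t * N(t) satisfies y(0) = 1 by construction.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="tanh"),
    tf.keras.layers.Dense(1),
])
opt = tf.keras.optimizers.Adam(1e-2)
t = tf.reshape(tf.linspace(0.0, 2.0, 101), (-1, 1))

for step in range(2000):
    with tf.GradientTape() as outer:
        with tf.GradientTape() as inner:
            inner.watch(t)
            y = 1.0 + t * model(t)
        dy_dt = inner.gradient(y, t)                 # dy/dt by autodiff
        loss = tf.reduce_mean(tf.square(dy_dt + y))  # residual of y' + y = 0
    grads = outer.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(grads, model.trainable_variables))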

