MM algorithm
Recently Published Documents

TOTAL DOCUMENTS: 45 (five years: 15)
H-INDEX: 7 (five years: 1)

2021 ◽  
Author(s):  
Kensuke Tanioka ◽  
Yuki Furotani ◽  
Satoru Hiwa

Background: Low-rank approximation is a very useful approach for interpreting the features of a correlation matrix; however, a low-rank approximation may yield an estimate far from zero even when the corresponding original value was close to zero, and such results lead to misinterpretation. Methods: To overcome this problem, we propose a new approach to estimating a sparse low-rank correlation matrix based on threshold values combined with cross-validation. In the proposed approach, the MM algorithm is used to estimate the sparse low-rank correlation matrix, and a grid search is performed to select the threshold values that govern the sparse estimation. Results: Through numerical simulation, we found that the false positive rate (FPR) and average relative error of the proposed method were superior to those of the tandem approach. In an application to microarray gene expression data, the FPRs of the proposed approach with d = 2, 3, and 5 were 0.128, 0.139, and 0.197, respectively, while the FPR of the tandem approach was 0.285. Conclusions: We propose a novel approach to estimating a sparse low-rank correlation matrix. Its advantage is that it provides results that are easy to interpret and avoids misunderstandings. We demonstrated its superiority through both numerical simulations and real examples.
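As a rough illustration of the ideas in this abstract, the sketch below forms a rank-d approximation of a correlation matrix and then hard-thresholds small entries to induce sparsity. This is a simple tandem-style illustration with made-up threshold and matrix values, not the authors' MM-based estimator or their cross-validated threshold selection:

```python
import numpy as np

def sparse_low_rank_corr(R, d, threshold):
    """Rank-d approximation of a correlation matrix followed by
    hard thresholding of small entries (illustrative only)."""
    vals, vecs = np.linalg.eigh(R)
    idx = np.argsort(vals)[::-1][:d]              # d largest eigenpairs
    L = vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0, None))
    approx = L @ L.T                              # low-rank approximation
    sparse = np.where(np.abs(approx) >= threshold, approx, 0.0)
    np.fill_diagonal(sparse, 1.0)                 # keep unit diagonal
    return sparse

# toy correlation matrix: one strongly correlated pair, one weak link
R = np.array([[1.0, 0.8, 0.1],
              [0.8, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
S = sparse_low_rank_corr(R, d=2, threshold=0.3)
```

The weak 0.1 entries are zeroed while the strong correlation survives, which is exactly the kind of easy-to-read structure the sparse estimate is meant to provide.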


2021 ◽  
Vol 2078 (1) ◽  
pp. 012012
Author(s):  
Song Yao ◽  
Lipeng Cui ◽  
Sining Ma

Abstract In recent years, the sparse model has been a research hotspot in the field of artificial intelligence. The Lasso model ignores the group structure among variables and can only select scattered individual variables, while Group Lasso can only select whole groups of variables. To address this problem, the Sparse Group Log Ridge model is proposed, which can select both groups of variables and variables within a group. The MM algorithm combined with the block coordinate descent algorithm can then be used to solve the model. Finally, the advantages of the model in terms of variable selection and prediction are shown through experiments.
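The within-group and between-group selection described here is typically achieved inside block coordinate descent by a combined proximal step: element-wise soft thresholding for within-group sparsity, then group-wise shrinkage for group sparsity. The sketch below shows that generic mechanism (as in sparse group lasso) with made-up penalty values, not the paper's exact Log Ridge penalty:

```python
import numpy as np

def sparse_group_prox(beta, lam1, lam2):
    """Proximal step for an l1 + group-l2 penalty on one block:
    soft-threshold each coefficient, then shrink (or kill) the group."""
    # element-wise soft threshold -> sparsity within the group
    z = np.sign(beta) * np.maximum(np.abs(beta) - lam1, 0.0)
    # group shrinkage -> the whole group is zeroed if its norm is small
    norm = np.linalg.norm(z)
    if norm <= lam2:
        return np.zeros_like(z)
    return (1.0 - lam2 / norm) * z

weak_group   = sparse_group_prox(np.array([0.2, -0.1, 0.15]), lam1=0.05, lam2=0.5)
strong_group = sparse_group_prox(np.array([2.0, -1.5, 0.02]), lam1=0.05, lam2=0.5)
```

The weak group is removed entirely, while the strong group survives with its near-zero third coefficient set to zero: both kinds of selection from one update.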


Photonics ◽  
2021 ◽  
Vol 8 (7) ◽  
pp. 236
Author(s):  
Ignacio O. Romero ◽  
Yile Fang ◽  
Michael Lun ◽  
Changqing Li

X-ray fluorescence computed tomography (XFCT) is a molecular imaging technique that can be used to sense different elements or nanoparticle (NP) agents inside deep samples or tissues. However, XFCT has not been a popular molecular imaging tool because it has limited molecular sensitivity and spatial resolution. We present a benchtop XFCT imaging system in which a superfine pencil-beam X-ray source and a ring of X-ray spectrometers were simulated using GATE (Geant4 Application for Tomographic Emission) Monte Carlo software. An accelerated majorization minimization (MM) algorithm with an L1 regularization scheme was used to reconstruct the XFCT image of molybdenum (Mo) NP targets. Good target localization was achieved, with a Dice coefficient of 88.737%. The reconstructed signal of the targets was found to be proportional to the target concentrations when the number of detectors, their placement, and the number of angular projections are optimized. The MM algorithm's performance was compared with the maximum likelihood expectation maximization (ML-EM) and filtered back projection (FBP) algorithms. Our results indicate that the MM algorithm is superior to the ML-EM and FBP algorithms. We found that the MM algorithm was able to reconstruct XFCT targets as small as 0.25 mm in diameter. We also found that measurements with three angular projections and a 20-detector ring are enough to reconstruct the XFCT images.
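Of the benchmark algorithms named in this abstract, ML-EM has a particularly compact form: each iteration multiplies the current image by the back-projected ratio of measured to predicted counts. A minimal sketch on a toy two-voxel "phantom" (the system matrix and counts are made up for illustration):

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """ML-EM reconstruction: A is the system matrix (detectors x voxels),
    y the measured counts. Returns the reconstructed voxel values."""
    x = np.ones(A.shape[1])              # strictly positive initial image
    sens = A.T @ np.ones(A.shape[0])     # sensitivity: back-projected ones
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)   # measured / predicted counts
        x = x / sens * (A.T @ ratio)           # multiplicative EM update
    return x

# toy setup: 3 "detectors" viewing 2 voxels, noise-free data
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 0.5])
y = A @ x_true
x_rec = mlem(A, y)
```

With consistent noise-free data the iteration recovers the phantom; the MM reconstruction in the paper replaces this update with a majorizer tailored to the L1-regularized objective.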


2021 ◽  
Vol 10 (4) ◽  
pp. 62
Author(s):  
Ouindllassida Jean-Etienne Ouédraogo ◽  
Edoh Katchekpele ◽  
Simplice Dossou-Gbété

The aim of this paper is to propose a new approach for fitting a three-parameter Weibull distribution to data from an independent and identically distributed sampling scheme. This approach uses a likelihood function based on the n − 1 largest order statistics. The information lost by dropping the first order statistic is then recovered via an MM algorithm, which is used to estimate the model's parameters. To examine the properties of the proposed estimators, their bias and mean squared error were calculated through Monte Carlo simulations. Subsequently, the performance of these estimators was compared with that of two competing methods.
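The Monte Carlo evaluation of bias and mean squared error described here can be sketched as follows. The sketch uses scipy's generic three-parameter Weibull MLE in place of the authors' MM-based estimator, and the parameter values and replication counts are made-up assumptions:

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
true = np.array([2.0, 1.0, 3.0])   # shape, location, scale (illustrative)

# repeatedly simulate i.i.d. samples, fit the three parameters,
# and summarize the estimator by its bias and MSE
estimates = []
for _ in range(100):
    sample = weibull_min.rvs(true[0], loc=true[1], scale=true[2],
                             size=200, random_state=rng)
    estimates.append(weibull_min.fit(sample))   # returns (shape, loc, scale)
estimates = np.array(estimates)
bias = estimates.mean(axis=0) - true
mse = ((estimates - true) ** 2).mean(axis=0)
```

Swapping in a different estimator (such as the paper's MM procedure) only changes the fitting line; the bias/MSE bookkeeping stays the same.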


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-16
Author(s):  
Guocai Rong ◽  
Luwei Tang ◽  
Wenting Luo ◽  
Qing Li ◽  
Lifeng Deng

The case-cohort design is a biased sampling method. Owing to its cost-effectiveness and theoretical significance, this design has extensive application value in many large cohort studies. Case-cohort data comprise a subcohort sampled randomly from the entire cohort together with all the failed subjects outside the subcohort. In this paper, adjustment for distorted covariates is considered for case-cohort data under Cox's model. Following existing adjustment methods for distorted covariates in linear and nonlinear models, we propose estimating the distorting functions by nonparametrically regressing the distorted covariates on the distorting factors; the estimators for the parameters are then obtained using the adjusted covariates. Consistency and asymptotic normality of the estimators are proved. To calculate the maximum likelihood estimates of the regression coefficients in Cox's model, a minorization-maximization (MM) algorithm is developed. Simulation studies compare the estimates obtained with undistorted, distorted, and adjusted covariates to illustrate the proposed methods.
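The adjustment step, nonparametrically regressing the distorted covariate on the distorting factor and dividing the fitted distortion out, can be sketched as below. This assumes the usual multiplicative distortion model with a distorting function of mean one; the Nadaraya-Watson smoother, bandwidth, and simulated distortion are illustrative choices, not the paper's exact setup:

```python
import numpy as np

def nw_smooth(u, y, h):
    """Nadaraya-Watson regression of y on u with a Gaussian kernel,
    evaluated at the sample points themselves."""
    w = np.exp(-0.5 * ((u[:, None] - u[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

rng = np.random.default_rng(1)
n = 500
x = rng.normal(2.0, 0.5, n)        # true covariate (unobserved in practice)
u = rng.uniform(0.0, 1.0, n)       # distorting factor
psi = 0.5 + u                      # distorting function with E[psi(U)] = 1
x_obs = psi * x                    # the distorted covariate we observe

# estimate the distorting function by smoothing x_obs against u
# (E[psi(U)] = 1 makes the mean of x_obs a valid estimate of E[X]),
# then divide it out to obtain the adjusted covariate
psi_hat = nw_smooth(u, x_obs, h=0.05) / x_obs.mean()
x_adj = x_obs / psi_hat
```

The adjusted covariate `x_adj` tracks the unobserved `x` far more closely than the raw distorted observation, which is what makes the downstream Cox regression on adjusted covariates credible.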


Author(s):  
Tarek Abdallah ◽  
Gustavo Vulcano

Problem definition: A major task in retail operations is to optimize the assortments exhibited to consumers. To this end, retailers need to understand customers’ preferences for different products. Academic/practical relevance: This is particularly challenging when only sales and product-availability data are recorded, and not all products are displayed in all periods. Similarly, in revenue management contexts, firms (airlines, hotels, etc.) need to understand customers’ preferences for different options in order to optimize the menu of products to offer. Methodology: In this paper, we study the estimation of preferences under a multinomial logit model of demand when customers arrive over time in accordance with a nonhomogeneous Poisson process. This model has recently attracted considerable attention in both academia and industry. We formulate the problem as a maximum-likelihood estimation problem, which turns out to be nonconvex. Results: Our contribution is twofold: From a theoretical perspective, we characterize conditions under which the maximum-likelihood estimates are unique and the model is identifiable. From a practical perspective, we propose a minorization-maximization (MM) algorithm to ease the optimization of the likelihood function. Through an extensive numerical study, we show that our algorithm leads to better estimates in noticeably shorter computational time than state-of-the-art benchmarks. Managerial implications: The theoretical results provide a solid foundation for the use of the model in terms of the quality of the derived estimates. At the same time, the fast MM algorithm allows the implementation of the model and the estimation procedure at large scale, compatible with real industrial applications.
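The flavor of MM iteration used for choice-model likelihoods can be illustrated on the closely related Luce/MNL choice setting: given (offer set, chosen item) records, the classical MM update divides each item's win count by a sum of inverse offer-set weights. This is a sketch of that well-known MM scheme (in the spirit of Hunter's MM for Luce models), not the paper's estimator with nonhomogeneous Poisson arrivals:

```python
import numpy as np

def mm_luce(choice_sets, chosen, n_items, n_iter=200):
    """MM iteration for ML estimation of Luce/MNL preference weights
    from (offer set, chosen item) data."""
    w = np.ones(n_items)
    wins = np.bincount(chosen, minlength=n_items).astype(float)
    for _ in range(n_iter):
        denom = np.zeros(n_items)
        for S in choice_sets:
            # each offer set contributes 1 / (total weight offered)
            denom[list(S)] += 1.0 / w[list(S)].sum()
        w = wins / np.maximum(denom, 1e-12)   # closed-form MM update
        w /= w.sum()                          # weights are scale-free
    return w

# item 0 is usually chosen when offered; item 2 is never chosen
choice_sets = [(0, 1), (0, 1), (0, 2), (1, 2), (0, 1, 2)]
chosen      = [0, 0, 0, 1, 0]
w = mm_luce(choice_sets, chosen, n_items=3)
```

Each update has a closed form and monotonically increases the likelihood, which is the property that makes MM attractive for the nonconvex estimation problem above. Note the never-chosen item gets weight zero, a small instance of the identifiability issues the paper characterizes.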


Sensors ◽  
2020 ◽  
Vol 20 (16) ◽  
pp. 4457
Author(s):  
Shuangshuang Li ◽  
Haixin Sun ◽  
Hamada Esmaiel

Underwater acoustic localization is a useful technique in many military and civilian applications. Among range-based underwater acoustic localization methods, the time difference of arrival (TDOA) approach has received much attention because it is easy to implement and relatively less affected by the underwater environment. This paper proposes a TDOA-based localization algorithm for an underwater acoustic sensor network using the maximum-likelihood (ML) ratio criterion. To reduce the complexity of the proposed localization method, we construct an auxiliary function and use the majorization-minimization (MM) algorithm to solve it. The resulting algorithm, called T-MM, applies the MM algorithm to the TDOA acoustic-localization technique. As the MM iterations are sensitive to the initial points, a gradient-based algorithm is used to set the initial points of the T-MM scheme. The proposed T-MM localization scheme is evaluated against the squared position error bound (SPEB), whose expression we derive via the equivalent Fisher information matrix (EFIM). The simulation results show that the proposed T-MM algorithm outperforms state-of-the-art localization algorithms in terms of accuracy and computational complexity, even under strong underwater noise.


2020 ◽  
Vol 10 (5) ◽  
pp. 1755 ◽  
Author(s):  
Yupei Zhang ◽  
Yue Yun ◽  
Huan Dai ◽  
Jiaqi Cui ◽  
Xuequn Shang

Student grade prediction (SGP) is an important educational problem for designing personalized strategies of teaching and learning. Many studies adopt the technique of matrix factorization (MF). However, their methods often focus on the grade records regardless of side information, such as backgrounds and relationships. To this end, in this paper, we propose a new MF method, called graph regularized robust matrix factorization (GRMF), based on the recent robust MF version. GRMF integrates two side graphs built on the side data of students and courses into the objective of robust low-rank MF. As a result, the learned features of students and courses can grasp more priors from educational situations to achieve higher grade prediction results. The resulting objective problem can be effectively optimized by the Majorization Minimization (MM) algorithm. In addition, GRMF not only yields features specific to the education domain but also handles missing, noisy, and corrupted data. To verify our method, we test GRMF on two public data sets for rating prediction and image recovery. Finally, we apply GRMF to educational data from our university, comprising 1325 students and 832 courses. The extensive experimental results clearly show that GRMF is robust to various data problems and learns more effective features than other methods. Moreover, GRMF delivers higher prediction accuracy than other methods on our educational data set. This technique can facilitate personalized teaching and learning in higher education.
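The core objective, a low-rank fit of the grade matrix plus graph-Laplacian penalties that pull connected students (or courses) toward similar features, can be sketched in simplified form. The sketch below uses a squared-error loss, a single student-side graph, and plain gradient descent; the paper's robust loss, second graph, and MM solver are replaced by these simpler stand-ins:

```python
import numpy as np

def grmf(M, L_u, rank=2, alpha=0.1, lr=0.01, n_iter=2000, seed=0):
    """Gradient descent on ||M - U V^T||_F^2 + alpha * tr(U^T L_u U),
    where L_u is a graph Laplacian on the row (student) side."""
    rng = np.random.default_rng(seed)
    n, m = M.shape
    U = 0.1 * rng.standard_normal((n, rank))
    V = 0.1 * rng.standard_normal((m, rank))
    for _ in range(n_iter):
        E = U @ V.T - M                      # reconstruction error
        gU = E @ V + alpha * (L_u @ U)       # data term + graph penalty
        gV = E.T @ U
        U -= lr * gU
        V -= lr * gV
    return U, V

# tiny example: students 0 and 1 are linked in the side graph
A = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [0., 0., 0.]])
L_u = np.diag(A.sum(axis=1)) - A             # graph Laplacian
M = np.array([[5., 4.],
              [5., 4.],
              [1., 2.]])                     # grades (students x courses)
U, V = grmf(M, L_u)
err = np.linalg.norm(U @ V.T - M)
```

The Laplacian term `tr(U^T L_u U)` equals the sum of squared feature differences over graph edges, which is how the side information shapes the learned student features.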

