l2 norm
Recently Published Documents

TOTAL DOCUMENTS: 346 (FIVE YEARS: 161)
H-INDEX: 18 (FIVE YEARS: 5)

2022 · Vol 22 (3) · pp. 1-24
Author(s): Yizhang Jiang, Xiaoqing Gu, Lei Hua, Kang Li, Yuwen Tao, ...

Artificial intelligence (AI)-based fog/edge computing has become a promising paradigm for infectious disease applications. Various AI algorithms are embedded in cooperative fog/edge devices to construct medical Internet of Things environments, infectious disease forecast systems, smart health, and so on. However, these systems are usually built in isolation, an approach known as single-task learning. They do not consider the correlations among multiple, different tasks, so common information in the model parameters or data characteristics is lost. In this study, each data center in fog/edge computing is treated as a task in a multi-task learning framework. In such a learning framework, a multi-task weighted Takagi-Sugeno-Kang (TSK) fuzzy system, called MW-TSKFS, is developed to forecast the trend of Coronavirus disease 2019 (COVID-19). MW-TSKFS provides a multi-task learning strategy for both the antecedent and consequent parameters of its fuzzy rules. First, a multi-task weighted fuzzy c-means clustering algorithm is developed for antecedent parameter learning, which extracts the public information shared among all tasks and the private information of each task. By sharing the public cluster centroids and the public membership matrix, both the commonality across tasks and the individuality of each task can be further exploited. For consequent parameter learning of MW-TSKFS, a multi-task collaborative learning mechanism is developed based on the ε-insensitive criterion and an L2-norm penalty term, which enhances the generalization and forecasting ability of the proposed fuzzy system. Experimental results on real COVID-19 time series show that the trend forecasting model based on the multi-task weighted TSK fuzzy system has high application value.
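As a rough sketch in our own notation (not the authors' exact formulation), the consequent learning for a task t can be read as ε-insensitive regression with an L2 penalty:

    \min_{p_t} \sum_i \max\bigl(0, |y_i - f_{p_t}(x_i)| - \varepsilon\bigr) + \lambda \|p_t\|_2^2,

where f_{p_t}(·) is the TSK output for task t; the multi-task coupling described above would additionally tie each p_t to a component shared across tasks.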


2022 · pp. 1-29
Author(s): Wanting Lu, Heping Wang

We study the approximation of multivariate functions from a separable Hilbert space in the randomized setting, with the error measured in the weighted L2 norm. We consider algorithms that use standard information Λstd, consisting of function values, or general linear information Λall, consisting of arbitrary linear functionals. We investigate the equivalences of various notions of algebraic and exponential tractability in the randomized setting for Λstd and Λall under the normalized and absolute error criteria. For the normalized error criterion, we show that the power of Λstd is the same as that of Λall for all notions of exponential tractability and some notions of algebraic tractability, without any condition. For the absolute error criterion, we show that the power of Λstd is the same as that of Λall for all notions of algebraic and exponential tractability, without any condition. Specifically, we solve Open Problems 98, 101, and 102, and almost solve Open Problem 100, as posed by E. Novak and H. Woźniakowski in the book Tractability of Multivariate Problems, Volume III: Standard Information for Operators, EMS Tracts in Mathematics, Zürich, 2012.
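For orientation, the error measure here is the standard weighted L2 norm: for a weight function \rho \ge 0 on the domain D,

    \|f\|_{L_{2,\rho}} = \Bigl( \int_D |f(x)|^2 \, \rho(x) \, dx \Bigr)^{1/2}.

This is the generic definition consistent with the abstract, not a formula quoted from the paper.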


2021 · Vol 12 (1) · pp. 62
Author(s): Gang Xu, Xiang Li, Xingyu Zhang, Guangxin Xing, Feng Pan

Loop closure detection is a key challenge in visual simultaneous localization and mapping (SLAM) systems, and it has attracted significant research interest in recent years. It entails correctly determining whether a scene has previously been visited by a mobile robot, thereby establishing consistent maps of motion. Many loop closure detection methods have been proposed, but most are based on handcrafted features and show weak robustness to illumination variations. In this paper, we investigate a Siamese Convolutional Neural Network (SCNN) for the task of loop closure detection in RGB-D SLAM. First, we use a pre-trained SCNN model to extract features as image descriptors; then, the L2-norm distance is adopted as the similarity metric between descriptors. In terms of learning features for matching, there are two key issues for discussion: (1) how to define an appropriate loss as supervision (the cross-entropy loss, the contrastive loss, or a combination of the two); and (2) how to combine the appearance information in RGB images with the position information in depth images (early fusion, mid-level fusion, or late fusion). We compare the proposed method against different baselines in experiments carried out on two public datasets (New College and NYU), and it outperforms the state of the art.
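A minimal sketch of the matching step described above, assuming descriptors have already been extracted by the pre-trained SCNN (here mocked with random vectors; all names are ours):

import numpy as np

def l2_distance(desc_a: np.ndarray, desc_b: np.ndarray) -> float:
    # L2 (Euclidean) distance used as the similarity metric between descriptors.
    return float(np.linalg.norm(desc_a - desc_b))

def contrastive_loss(dist: float, same_place: bool, margin: float = 1.0) -> float:
    # Contrastive loss on a descriptor distance: pulls matching pairs together,
    # pushes non-matching pairs at least `margin` apart.
    if same_place:
        return 0.5 * dist ** 2
    return 0.5 * max(0.0, margin - dist) ** 2

rng = np.random.default_rng(0)
desc_query = rng.normal(size=256)      # stand-in for an SCNN descriptor
desc_candidate = rng.normal(size=256)  # stand-in for a map-image descriptor
dist = l2_distance(desc_query, desc_candidate)
print(dist, contrastive_loss(dist, same_place=False))

A loop closure would then be declared when the distance to a previously visited scene falls below a tuned threshold.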


2021 · Vol 2021 · pp. 1-16
Author(s): Mahmoud M. Khattab, Akram M. Zeki, Ali A. Alwan, Belgacem Bouallegue, Safaa S. Matter, ...

The primary goal of multiframe super-resolution image reconstruction, which is used in various fields, is to produce an image with a higher resolution by integrating information extracted from a set of corresponding low-resolution images. However, super-resolution image reconstruction approaches are typically affected by annoying restoration artifacts, including blurring, noise, and the staircasing effect; accordingly, it is always difficult to balance smoothness against edge preservation. In this paper, we aim to enhance the efficiency of multiframe super-resolution image reconstruction, improving the pictorial information for human interpretation and enhancing automatic machine perception. To this end, we propose new approaches that first estimate the initial high-resolution image by preprocessing the reference low-resolution image with median, mean, Lucy-Richardson, and Wiener filters. This preprocessing stage overcomes the degradation present in the reference low-resolution image and yields a suitable initial high-resolution image for the reconstruction phase. Then, the L2 norm is employed in the data-fidelity term to minimize the residual between the predicted high-resolution image and the observed low-resolution images. Finally, a bilateral total variation prior is utilized to regularize the minimization and drive the generated high-resolution (HR) image to a stable state. Experimental results on synthetic data indicate that the proposed approaches are more efficient, both visually and quantitatively, than existing approaches.
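In generic multiframe super-resolution notation (ours, not necessarily the authors' exact model), the reconstruction phase solves

    \hat{X} = \arg\min_X \sum_k \bigl\| D_k H_k F_k X - Y_k \bigr\|_2^2 + \lambda \, \mathrm{BTV}(X),

where Y_k are the observed low-resolution frames, F_k, H_k, and D_k model warping, blur, and decimation, the first term is the L2 data fidelity, and BTV(X) is the bilateral total variation prior that stabilizes the estimate.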


Nanomaterials · 2021 · Vol 11 (12) · pp. 3386
Author(s): Amine Chiboub, Yassir Arezki, Alain Vissiere, Charyar Mehdi-Souzani, Nabil Anwer, ...

Optical aspherical lenses with high surface quality are increasingly demanded in several applications in medicine, synchrotron facilities, vision, etc. To reach the requested surface quality, the most advanced manufacturing processes are used in a closed chain with high-precision measurement machines. The measured data are analysed with least squares (LS, or L2-norm) or minimum zone (MZ) fitting (also called Chebyshev or L∞-norm fitting) algorithms to extract the form error. Fitting the data according to the L∞-norm is more accurate, and more challenging, than the L2-norm, since it directly minimizes the peak-to-valley (PV) error. In parallel, reference softgauges are used to assess the performance of the implemented MZ fitting algorithms, according to the F1 algorithm measurement standard, to guarantee their traceability, accuracy, and robustness. Reference softgauges usually incorporate multiple parameters related to manufacturing processes, measurement errors, point distributions, etc., so as to be as close as possible to real measured data. In this paper, a robust approach based on a non-vertex solution is mathematically formulated and implemented for generating reference softgauges for complex shapes. Afterwards, two implemented MZ fitting algorithms (HTR and EPF) were successfully tested on a number of generated reference pairs. Their performance was evaluated through two metrics: degree of difficulty and performance measure.
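In generic notation, the contrast between the two fitting criteria is

    LS: \min_\theta \sum_i d_i(\theta)^2, \qquad MZ: \min_\theta \Bigl( \max_i d_i(\theta) - \min_i d_i(\theta) \Bigr),

where d_i(\theta) is the signed orthogonal deviation of measured point i from the surface with parameters \theta; the MZ (L∞) objective minimizes the zone width, i.e. the peak-to-valley error, directly.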


Entropy · 2021 · Vol 23 (12) · pp. 1659
Author(s): Yinnian He

In this work, a finite element (FE) method is discussed for the 3D steady Navier–Stokes equations using the finite element pair X_h × M_h, which satisfies the discrete inf-sup condition in a 3D domain Ω. The method approximates the finite element solution (u_h, p_h) of the 3D steady Navier–Stokes equations by the finite element solution pairs (u_h^n, p_h^n), based on the same space pair X_h × M_h, of the 3D steady linearized Navier–Stokes equations obtained via the Stokes, Newton, and Oseen iterative methods. We present the weak formulations of the FE method for solving the 3D steady Stokes, Newton, and Oseen iterative equations, establish the existence and uniqueness of the FE solution (u_h^n, p_h^n) of these iterative equations, and deduce the convergence with respect to (σ, h) of the FE solution (u_h^n, p_h^n) to the exact solution (u, p) of the 3D steady Navier–Stokes equations in the H1−L2 norm. Finally, we also give the convergence order with respect to (σ, h) of the FE velocity u_h^n to the exact velocity u in the L2 norm.
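Read in standard notation (consistent with the abstract, though the paper's exact definition may differ), convergence in the H1−L2 norm means control of

    \|u - u_h^n\|_{H^1(\Omega)} + \|p - p_h^n\|_{L^2(\Omega)},

i.e., the velocity error measured in H^1 and the pressure error measured in L^2.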


2021 · Vol 58 (2) · pp. 119-131
Author(s): Christos P. Kitsos

The aim of this paper is to investigate and discuss the common points shared, in their lines of development, by Sampling Theory and the Design of Experiments. In fact, Sampling Theory adopts the main optimality criterion of the Optimal Design of Experiments, the minimization of variance, i.e. D-optimality. There is also an approach in the Design of Experiments based on c-optimality, as far as ratio estimates are concerned, and A-optimality is involved in a proposed sampling technique. It is pointed out that the L2 norm is mainly applied as a distance measure.
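For reference, writing M(\xi) for the information matrix of a design \xi, the criteria mentioned have their standard textbook forms (not formulas quoted from the paper): D-optimality maximizes \det M(\xi), A-optimality minimizes \mathrm{tr}\, M(\xi)^{-1}, and c-optimality minimizes c^{\top} M(\xi)^{-1} c, the variance of the estimator of the linear combination c^{\top}\beta.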


2021 · Vol 2021 · pp. 1-16
Author(s): Zhongchen Guo, Xuexiang Yu, Chao Hu, Zhihao Yu, Chuang Jiang

Precise point positioning (PPP) is used in many fields, but pseudorange multipath delay is an important error that restricts its accuracy. Pseudorange multipath delay can be considered a combination of effective information and observation noise, and it can be modeled once the observation noise is removed. In this work, an elastic net (EN) regularization denoising method is proposed and compared with an L2-norm regularization denoising method. Then, a quadratic polynomial (QP) model plus an autoregressive (AR) model (QP + AR) is used to model the denoised pseudorange multipath delays. Finally, the modeled values are applied as corrections to the observations to verify the improvement in BDS-3 single-frequency PPP accuracy. Three single-frequency PPP schemes are designed to verify the effectiveness of the denoising method and the QP + AR model. The experimental results show that the accuracy improvement for B3I and B2a is more obvious than that for B1I and B1C when the modeled values are applied to the pseudorange observations. The improvement for B3I and B2a in the east (E) and up (U) directions can reach 10.6%∼34.4% and 5.9%∼65.7%, while the improvement in the north (N) direction is mostly less than 10.0%. The accuracy of B1I and B1C in the E and U directions can be improved by 0%∼30.7% and 0.4%∼28.6%, respectively, while the accuracy in the N direction improves only slightly or even decreases. With EN regularization denoising and QP + AR model correction, single-frequency PPP performs better on B3I and B2a, while L2-norm regularization denoising with QP + AR correction performs better on B1I and B1C. The accuracy improvement for B2a and B3I is more obvious than that for B1I and B1C. The convergence time of each frequency is slightly shorter after multipath correction. Overall, the proposed pseudorange multipath delay processing strategy is beneficial for improving the single-frequency PPP of BDS-3 satellites.
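As a generic sketch (notation ours), the elastic net denoising step can be read as

    \min_\beta \; \|y - A\beta\|_2^2 + \lambda_1 \|\beta\|_1 + \lambda_2 \|\beta\|_2^2,

with y the raw multipath series, A a chosen basis or dictionary, and \beta the coefficients; setting \lambda_1 = 0 recovers the L2-norm (Tikhonov/ridge) variant used for comparison.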


2021
Author(s): Hiroki Kuroda, Daichi Kitahara

This paper presents a convex recovery method for block-sparse signals whose block partitions are unknown a priori. We first introduce a nonconvex penalty function, where the block partition is adapted for the signal of interest by minimizing the mixed l2/l1 norm over all possible block partitions. Then, by exploiting a variational representation of the l2 norm, we derive the proposed penalty function as a suitable convex relaxation of the nonconvex one. For a block-sparse recovery model designed with the proposed penalty, we develop an iterative algorithm which is guaranteed to converge to a globally optimal solution. Numerical experiments demonstrate the effectiveness of the proposed method.
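In standard form (the paper's exact construction may differ), the two ingredients named above are the mixed l2/l1 norm over a block partition B and the variational representation of the l2 norm:

    \|x\|_{2,1} = \sum_{b \in B} \|x_b\|_2, \qquad \|v\|_2 = \min_{t > 0} \frac{1}{2} \Bigl( \frac{\|v\|_2^2}{t} + t \Bigr),

the latter identity being what allows the nonconvex penalty to be relaxed into a convex one.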


Geophysics · 2021 · pp. 1-62
Author(s): Thomas André Larsen Greiner, Jan Erik Lie, Odd Kolbjørnsen, Andreas Kjelsrud Evensen, Espen Harris Nilsen, ...

In 3D marine seismic acquisition, the seismic wavefield is not sampled uniformly in the spatial directions. This leads to a seismic wavefield consisting of irregularly and sparsely populated traces, with large gaps between consecutive sail-lines, especially in the near offsets. The problem of reconstructing the complete seismic wavefield from a subsampled and incomplete wavefield is formulated as an underdetermined inverse problem. We investigate unsupervised deep learning based on a convolutional neural network (CNN) for multidimensional wavefield reconstruction of irregularly populated traces defined on a regular grid. The proposed network is based on an encoder-decoder architecture with an overcomplete latent representation and appropriate regularization penalties to stabilize the solution. We propose a combination of penalties consisting of an L2-norm penalty on the network parameters and first- and second-order total-variation (TV) penalties on the model. We demonstrate the performance of the proposed method on broad-band synthetic data and on field data represented by constant-offset gathers from a source-over-cable data set from the Barents Sea. In the field-data example, we compare the results to a full production flow from a contractor company, based on a 5D Fourier interpolation approach. In this example, our approach displays improved reconstruction of the wavefield, with less noise in the sparse near offsets than the industry approach, which leads to improved structural definition of the near offsets in the migrated sections.
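A deep-prior-style reading of the setup, in our own notation (consistent with the abstract, not the authors' exact objective): with d the acquired traces on the regular grid, M the sampling mask, z a fixed network input, and f_\theta the encoder-decoder CNN,

    \min_\theta \; \bigl\| M \odot \bigl( f_\theta(z) - d \bigr) \bigr\|_2^2 + \lambda_1 \|\theta\|_2^2 + \lambda_2 \, \mathrm{TV}_1\bigl(f_\theta(z)\bigr) + \lambda_3 \, \mathrm{TV}_2\bigl(f_\theta(z)\bigr),

where TV_1 and TV_2 denote the first- and second-order total-variation penalties on the model; the reconstructed wavefield is f_\theta(z) at the minimizer.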

