Kernel Regression Imputation in Manifolds via Bi-Linear Modeling: The Dynamic-MRI Case

Author(s):  
Konstantinos Slavakis ◽  
Gaurav Shetty ◽  
Loris Cannelli ◽  
Gesualdo Scutari ◽  
Ukash Nakarmi ◽  
...  

This paper introduces a non-parametric kernel-based modeling framework for imputation by regression on data that are assumed to lie close to an unknown-to-the-user smooth manifold in a Euclidean space. The proposed framework, coined kernel regression imputation in manifolds (KRIM), needs no training data to operate. Aiming at computationally efficient solutions, KRIM utilizes a small number of "landmark" data points to extract geometric information from the measured data via parsimonious affine combinations ("linear patches"), which mimic the concept of tangent spaces to smooth manifolds and take place in functional approximation spaces, namely reproducing kernel Hilbert spaces (RKHSs). Multiple complex RKHSs are combined in a data-driven way to surmount the obstacle of pinpointing the "optimal" parameters of a single kernel through cross-validation. The extracted geometric information is incorporated into the design via a novel bi-linear data-approximation model, and the imputation-by-regression task takes the form of an inverse problem which is solved by an iterative algorithm with guaranteed convergence to a stationary point of the non-convex loss function. To showcase the modular character and wide applicability of KRIM, this paper highlights the application of KRIM to dynamic magnetic resonance imaging (dMRI), where reconstruction of high-resolution images from severely under-sampled dMRI data is desired. Extensive numerical tests on synthetic and real dMRI data demonstrate the superior performance of KRIM over state-of-the-art approaches under several metrics and with a small computational footprint.
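
To make the bi-linear idea concrete, here is a minimal numpy sketch: data columns are approximated by combinations of a few landmark columns, the combination coefficients are factored bi-linearly, and the fit uses only the observed entries. The landmark picker, the plain gradient loop, and all sizes below are illustrative assumptions; KRIM's actual algorithm works with RKHS features and carries convergence guarantees that this toy loop does not.

```python
import numpy as np

rng = np.random.default_rng(0)

def bilinear_impute(Y, M, n_landmarks=15, rank=5, n_iters=5000, step=2e-4, lam=1e-2):
    """Fit Y ~ L @ B @ A on the observed entries (M == 1), where the columns
    of L are landmark data points and (B, A) is the bi-linear factorization."""
    d, N = Y.shape
    L = Y[:, rng.choice(N, n_landmarks, replace=False)]  # naive landmark pick
    B = 0.1 * rng.standard_normal((n_landmarks, rank))
    A = 0.1 * rng.standard_normal((rank, N))
    for _ in range(n_iters):
        R = M * (L @ B @ A - Y)                 # residual on observed entries only
        B -= step * (L.T @ R @ A.T + lam * B)   # gradient step in B
        A -= step * (B.T @ L.T @ R + lam * A)   # gradient step in A
    return L @ B @ A                            # dense (imputed) estimate

# Toy run: a rank-4 matrix with roughly 60% of its entries observed.
Y_true = rng.standard_normal((30, 4)) @ rng.standard_normal((4, 100))
M = (rng.uniform(size=Y_true.shape) < 0.6).astype(float)
Y_hat = bilinear_impute(Y_true * M, M)
err = np.linalg.norm((1 - M) * (Y_hat - Y_true)) / np.linalg.norm((1 - M) * Y_true)
print(f"relative error on held-out entries: {err:.3f}")
```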


2021 ◽  
Author(s):  
Konstantinos Slavakis ◽  
Gaurav Shetty ◽  
Loris Cannelli ◽  
Gesualdo Scutari ◽  
Ukash Nakarmi ◽  
...  

This paper introduces a non-parametric approximation framework for imputation-by-regression on data with missing entries. The proposed framework, coined kernel regression imputation in manifolds (KRIM), is built on the hypothesis that features, generated by the measured data, lie close to an unknown-to-the-user smooth manifold. The feature space, where the smooth manifold is embedded, takes the form of a reproducing kernel Hilbert space (RKHS). Aiming at concise data descriptions, KRIM identifies a small number of "landmark points" to define approximating "linear patches" in the feature space which mimic tangent spaces to smooth manifolds. This geometric information is infused into the design through a novel bi-linear model that allows for multiple approximating RKHSs. To effect imputation-by-regression, a bi-linear inverse problem is solved by an iterative algorithm with guaranteed convergence to a stationary point of a non-convex loss function. To showcase KRIM's modularity, the application of KRIM to dynamic magnetic resonance imaging (dMRI) is detailed, where reconstruction of images from severely under-sampled dMRI data is desired. Extensive numerical tests on synthetic and real dMRI data demonstrate the superior performance of KRIM over state-of-the-art approaches under several metrics and with a small computational footprint.
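
The multi-RKHS ingredient can be sketched separately: build several candidate kernel matrices on the landmarks and take a simplex-weighted combination instead of cross-validating a single bandwidth. The Gaussian form, the bandwidths, and the uniform default weights below are assumptions for illustration; in KRIM the combination is learned jointly with the bi-linear factors.

```python
import numpy as np

def kernel_bank(X, L, sigmas=(0.5, 1.0, 2.0)):
    """One Gaussian kernel matrix per bandwidth between the columns of the
    data X (d, N) and the landmarks L (d, m)."""
    d2 = ((X[:, :, None] - L[:, None, :]) ** 2).sum(axis=0)
    return np.stack([np.exp(-d2 / (2.0 * s * s)) for s in sigmas])

def combine_kernels(Ks, w=None):
    """Convex combination of the bank; uniform weights by default."""
    w = np.full(len(Ks), 1.0 / len(Ks)) if w is None else np.asarray(w, float)
    w = np.clip(w, 0.0, None)
    w = w / w.sum()                         # keep the weights on the simplex
    return np.tensordot(w, Ks, axes=1)      # (N, m) combined kernel matrix

rng = np.random.default_rng(1)
X, L = rng.standard_normal((8, 40)), rng.standard_normal((8, 6))
print(combine_kernels(kernel_bank(X, L)).shape)   # (40, 6)
```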


Sensors ◽  
2020 ◽  
Vol 20 (20) ◽  
pp. 5966
Author(s):  
Ke Wang ◽  
Gong Zhang

The challenge of small data has emerged in synthetic aperture radar automatic target recognition (SAR-ATR). Most SAR-ATR methods are data-driven and require large amounts of training data that are expensive to collect. To address this challenge, we propose a recognition model that incorporates meta-learning and amortized variational inference (AVI). Specifically, the model consists of global parameters and task-specific parameters. The global parameters, trained by meta-learning, construct a common feature extractor shared across all recognition tasks. The task-specific parameters, modeled by probability distributions, can adapt to new tasks with a small amount of training data. To reduce computation and storage costs, the task-specific parameters are inferred by AVI implemented with set-to-set functions. Extensive experiments were conducted on a real SAR dataset to evaluate the effectiveness of the model. Compared with the latest SAR-ATR methods, the proposed model shows superior performance, especially on recognition tasks with limited data.
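
A sketch of the set-to-set amortized inference step, in PyTorch: a permutation-invariant pooling of the support set emits a Gaussian over each class's classifier weights, which are then sampled by reparameterization. The class-mean pooling and the single linear statistics network below are illustrative assumptions rather than the paper's exact architecture, and the sketch presumes every class appears in the support set.

```python
import torch
import torch.nn as nn

class AmortizedTaskHead(nn.Module):
    """Set-to-set amortized inference (sketch): pool support features per class,
    predict a Gaussian over that class's weight vector, and sample it."""
    def __init__(self, feat_dim, n_classes):
        super().__init__()
        self.to_stats = nn.Linear(feat_dim, 2 * feat_dim)  # per-class mean/logvar
        self.n_classes = n_classes

    def forward(self, support_feats, support_labels):
        stats = []
        for c in range(self.n_classes):
            proto = support_feats[support_labels == c].mean(dim=0)  # set pooling
            stats.append(self.to_stats(proto))
        mu, logvar = torch.stack(stats).chunk(2, dim=-1)
        eps = torch.randn_like(mu)
        return mu + eps * (0.5 * logvar).exp()   # reparameterized weight sample

feats = torch.randn(10, 64)                  # support features (shared extractor)
labels = torch.arange(5).repeat(2)           # every class present twice
w = AmortizedTaskHead(64, 5)(feats, labels)  # (5, 64) task-specific weights
logits = torch.randn(3, 64) @ w.t()          # classify 3 query features
print(logits.shape)                          # torch.Size([3, 5])
```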


2021 ◽  
Vol 13 (4) ◽  
pp. 631
Author(s):  
Kyle D. Woodward ◽  
Narcisa G. Pricope ◽  
Forrest R. Stevens ◽  
Andrea E. Gaughan ◽  
Nicholas E. Kolarik ◽  
...  

Remote sensing analyses focused on non-timber forest product (NTFP) collection and grazing are current research priorities of land systems science. However, mapping these particular land use patterns in rural heterogeneous landscapes is challenging because their potential signatures on the landscape cannot be positively identified without fine-scale land use data for validation. Using field-mapped resource areas and household survey data from participatory mapping research, we combined various Landsat-derived indices with ancillary data associated with human habitation to model the intensity of grazing and NTFP collection activities at 100-m spatial resolution. The study area is situated centrally within a transboundary southern African landscape that encompasses community-based organization (CBO) areas across three countries. We conducted four iterations of pixel-based random forest models, modifying the variable set to determine which covariates are most informative, and used the best-fit predictions to summarize and compare resource use intensity by resource type and across communities. Pixels within georeferenced, field-mapped resource areas were used as training data. All models had overall accuracies above 60%, but those using proxies for human habitation were more robust, with overall accuracies above 90%. The contribution of Landsat data as utilized in our modeling framework was negligible, and further research must be conducted to extract greater value from Landsat or other optical remote sensing platforms to map these land use patterns at moderate resolution. We conclude that similar population proxy covariates should be included in future studies attempting to characterize communal resource use when spectral signatures alone do not adequately capture resource use intensity. This study provides insights into modeling resource use activity when leveraging both remotely sensed data and proxies for human habitation in heterogeneous, spectrally mixed rural land areas.
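
As a rough illustration of the modeling setup, the snippet below fits a pixel-based random forest on a covariate stack that mixes a spectral index with human-habitation proxies. All arrays are synthetic stand-ins, and the covariate names are assumptions, not the study's actual variables.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1000                                         # pixels with field-mapped labels
ndvi = rng.uniform(0.0, 1.0, n)                  # Landsat-derived index (toy)
dist_to_settlement = rng.uniform(0.0, 10.0, n)   # km; human-habitation proxy
pop_density = rng.uniform(0.0, 50.0, n)          # persons/km^2; another proxy
X = np.column_stack([ndvi, dist_to_settlement, pop_density])
y = (pop_density / 10 - dist_to_settlement / 5 + rng.normal(0, 1, n) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X, y)                                     # train on field-mapped pixels
print(dict(zip(["ndvi", "dist_km", "pop_density"],
               rf.feature_importances_.round(3))))
```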


Author(s):  
Flávio Craveiro ◽  
João Meneses de Matos ◽  
Helena Bártolo ◽  
Paulo Bártolo

Traditionally, the construction sector has been conservative, risk-averse, and reluctant to adopt new technologies and ideas, yet it faces great pressure to develop more innovative and efficient solutions. In recent years, significant advances in technology, together with the demand for more sustainable urban environments, have created numerous opportunities for innovation in automation. This paper proposes a new system based on extrusion technologies that addresses limitations of current approaches and enables more efficient building construction with organic forms and geometries, grounded in sustainable eco-design principles. The approach is implemented through deposition-control software. Current modeling techniques capture only geometric information and cannot satisfy the requirements of modeling components made of multiple heterogeneous materials, yet there is great interest in tailoring structures so that functional properties can vary with location. The proposed functionally graded material (FGM) deposition system allows a smooth variation of material properties, enabling buildings that perform better under thermal, acoustic, and structural conditions.
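
One way to picture the graded-material control is a function mapping build position to a mixing ratio that the deposition software would feed to the extruder. The smoothstep profile and the dimensions below are hypothetical, chosen only to show a continuous property gradient rather than any system described in the paper.

```python
import numpy as np

def mix_ratio(z, z_min=0.0, z_max=3.0):
    """Fraction of material B deposited at height z (0 = all A, 1 = all B),
    graded smoothly so properties vary continuously across the wall."""
    t = np.clip((z - z_min) / (z_max - z_min), 0.0, 1.0)
    return 3 * t**2 - 2 * t**3        # smoothstep: zero slope at both ends

for z in np.linspace(0.0, 3.0, 7):    # sample the grading every half metre
    r = mix_ratio(z)
    print(f"z = {z:.1f} m -> {100 * (1 - r):3.0f}% A / {100 * r:3.0f}% B")
```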


2020 ◽  
Vol 2020 ◽  
pp. 1-7 ◽  
Author(s):  
Aboubakar Nasser Samatin Njikam ◽  
Huan Zhao

This paper introduces an extremely lightweight (roughly two hundred thousand parameters) and computationally efficient CNN architecture, named CharTeC-Net (Character-based Text Classification Network), for character-based text classification problems. This new architecture is composed of four building blocks for feature extraction. Each of these building blocks, except the last one, uses 1 × 1 pointwise convolutional layers to add more nonlinearity to the network and to increase the dimensions within each building block. In addition, shortcut connections are used in each building block to facilitate the flow of gradients over the network, but more importantly to ensure that the original signal present in the training data is shared across each building block. Experiments on eight standard large-scale text classification and sentiment analysis datasets demonstrate CharTeC-Net's superior performance over baseline methods and show competitive accuracy compared with state-of-the-art methods, even though CharTeC-Net has only between 181,427 and 225,323 parameters and occupies less than 1 megabyte.
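
A minimal PyTorch sketch of one such building block, assuming character embeddings arranged as (batch, channels, length): two 1 × 1 pointwise convolutions supply the nonlinearity and channel widening, and a projected shortcut carries the original signal through. CharTeC-Net's exact layer counts and widths differ; this only illustrates the pattern.

```python
import torch
import torch.nn as nn

class CharBlock(nn.Module):
    """One building block in the CharTeC-Net style (sketch): 1x1 pointwise
    convolutions plus a shortcut connection for gradient and signal flow."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=1),   # pointwise widen
            nn.ReLU(),
            nn.Conv1d(out_ch, out_ch, kernel_size=1),  # pointwise refine
        )
        self.skip = nn.Conv1d(in_ch, out_ch, kernel_size=1)  # match channels

    def forward(self, x):                  # x: (batch, channels, seq_len)
        return torch.relu(self.body(x) + self.skip(x))

x = torch.randn(2, 16, 128)                # toy batch of embedded character sequences
print(CharBlock(16, 32)(x).shape)          # torch.Size([2, 32, 128])
```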


2020 ◽  
Vol 34 (07) ◽  
pp. 10542-10550 ◽  
Author(s):  
Jingjing Chen ◽  
Liangming Pan ◽  
Zhipeng Wei ◽  
Xiang Wang ◽  
Chong-Wah Ngo ◽  
...  

Recognizing ingredients for a given dish image is at the core of automatic dietary assessment, attracting increasing attention from both industry and academia. Nevertheless, the task is challenging due to the difficulty of collecting and labeling sufficient training data. On one hand, there are hundreds of thousands of food ingredients in the world, ranging from common to rare, and collecting training samples for all ingredient categories is difficult. On the other hand, since ingredient appearance exhibits huge visual variance during food preparation, robust recognition requires collecting training samples under different cooking and cutting methods. Since obtaining sufficient fully annotated training data is not easy, a more practical way of scaling up recognition is to develop models capable of recognizing unseen ingredients. Therefore, in this paper, we target the problem of ingredient recognition with zero training samples. More specifically, we introduce a multi-relational GCN (graph convolutional network) that integrates ingredient hierarchy, attributes, and co-occurrence for zero-shot ingredient recognition. Extensive experiments on both Chinese and Japanese food datasets demonstrate the superior performance of the multi-relational GCN and shed light on zero-shot ingredient recognition.
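
A sketch of the multi-relational aggregation, in PyTorch: each relation (hierarchy, attributes, co-occurrence) gets its own weight matrix, and the per-relation neighbourhood messages are summed. The identity adjacency matrices and layer sizes below are placeholders; the paper's full model is richer than this single layer.

```python
import torch
import torch.nn as nn

class MultiRelGCNLayer(nn.Module):
    """One multi-relational GCN layer (sketch): relation-specific linear maps
    applied to neighbour features, aggregated over all relation graphs."""
    def __init__(self, in_dim, out_dim, n_relations=3):
        super().__init__()
        self.rel = nn.ModuleList(
            nn.Linear(in_dim, out_dim, bias=False) for _ in range(n_relations))
        self.self_loop = nn.Linear(in_dim, out_dim)

    def forward(self, h, adjs):           # h: (N, in_dim); adjs: normalized (N, N)
        out = self.self_loop(h)
        for A, W in zip(adjs, self.rel):
            out = out + A @ W(h)          # message passing along one relation
        return torch.relu(out)

h = torch.randn(7, 32)                    # 7 ingredient nodes, 32-dim features
adjs = [torch.eye(7) for _ in range(3)]   # stand-in relation graphs
print(MultiRelGCNLayer(32, 16)(h, adjs).shape)   # torch.Size([7, 16])
```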


2016 ◽  
Author(s):  
Olivier Poirion ◽  
Xun Zhu ◽  
Travers Ching ◽  
Lana X. Garmire

Despite its popularity, characterization of subpopulations with transcript abundance is subject to a significant amount of noise. We propose to use effective and expressed nucleotide variations (eeSNVs) from scRNA-seq as alternative features for tumor subpopulation identification. We developed a linear modeling framework, SSrGE, to link eeSNVs with gene expression. In all the datasets tested, eeSNVs achieve better accuracies than gene expression for identifying subpopulations. Previously validated cancer-relevant genes are also highly ranked, confirming the significance of the method. Moreover, SSrGE is capable of analyzing coupled DNA-seq and RNA-seq data from the same single cells, demonstrating its value in integrating multi-omics single-cell techniques. In summary, SNV features from scRNA-seq data have merits for both subpopulation identification and linking genotype to phenotype. The method SSrGE is available at https://github.com/lanagarmire/SSrGE.
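
The linkage step can be imitated with an off-the-shelf sparse linear model: regress each gene's expression on the binary eeSNV matrix and rank eeSNVs by their aggregate coefficients. The toy data, the Lasso penalty, and the ranking rule below are stand-ins; the actual SSrGE procedure is the one in the linked repository.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_cells, n_snvs, n_genes = 200, 50, 30
S = rng.integers(0, 2, size=(n_cells, n_snvs)).astype(float)  # eeSNV calls (toy)
W_true = np.zeros((n_snvs, n_genes))
W_true[:5] = rng.normal(0, 1, (5, n_genes))      # only 5 eeSNVs truly matter
E = S @ W_true + rng.normal(0, 0.1, (n_cells, n_genes))       # expression (toy)

coefs = np.zeros((n_snvs, n_genes))
for g in range(n_genes):                         # one sparse model per gene
    coefs[:, g] = Lasso(alpha=0.05).fit(S, E[:, g]).coef_
snv_scores = np.abs(coefs).sum(axis=1)           # aggregate influence per eeSNV
print("top eeSNVs:", np.argsort(snv_scores)[::-1][:5])
```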


Author(s):  
Muhammad Zulqarnain ◽  
Rozaida Ghazali ◽  
Muhammad Ghulam Ghouse ◽  
Muhammad Faheem Mushtaq

Text classification has become a serious problem for large organizations managing vast amounts of online data, and it has been extensively applied in Natural Language Processing (NLP) tasks. Text classification helps users manage and exploit meaningful information by sorting it into categories for further use. To classify texts well, our research develops a deep learning approach that obtains better text classification performance than other RNN approaches. The main problems in text classification are improving accuracy and coping with the sparsity and context sensitivity of data semantics, which often hinder classification performance. To overcome these weaknesses, we propose a unified structure to investigate the effects of word embeddings and the Gated Recurrent Unit (GRU) for text classification on two benchmark datasets (Google snippets and TREC). The GRU is a well-known type of recurrent neural network (RNN) capable of processing sequential data through its recurrent architecture; experimentally, semantically related words tend to lie near each other in embedding spaces. First, words in posts are converted into vectors via a word-embedding technique. Then, the word sequences in sentences are fed to the GRU to extract the contextual semantics between words. The experimental results show that the proposed GRU model can effectively learn word usage in the context of texts given training data; the quantity and quality of the training data significantly affect performance. We compared the performance of the proposed approach with traditional recurrent approaches (RNN, MV-RNN, and LSTM); the proposed approach obtains better results on the two benchmark datasets in terms of accuracy and error rate.
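
The pipeline the abstract walks through (embedding, then a GRU, then a classifier over the final hidden state) reduces to a few lines of PyTorch. The vocabulary size, dimensions, and class count below are placeholders, not the settings used in the paper.

```python
import torch
import torch.nn as nn

class GRUClassifier(nn.Module):
    """Embedding -> GRU -> linear classifier over the last hidden state."""
    def __init__(self, vocab_size=10000, emb_dim=100, hidden=128, n_classes=6):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, token_ids):              # (batch, seq_len) integer ids
        _, h_last = self.gru(self.emb(token_ids))
        return self.out(h_last.squeeze(0))     # (batch, n_classes) logits

tokens = torch.randint(0, 10000, (4, 20))      # a toy batch of 4 short texts
print(GRUClassifier()(tokens).shape)           # torch.Size([4, 6])
```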

