HiDEx

Author(s):  
Cyrus Shaoul ◽  
Chris Westbury

HAL (Hyperspace Analog to Language) is a high-dimensional model of semantic space that uses the global co-occurrence frequency of words in a large corpus of text as the basis for a representation of semantic memory. In the original HAL model, many parameters were set without any a priori rationale. In this chapter we describe a new computer application called the High Dimensional Explorer (HiDEx) that makes it possible to systematically alter the values of the model’s parameters and thereby to examine their effect on the co-occurrence matrix that instantiates the model. New parameter sets give us measures of semantic density that improve the model’s ability to predict behavioral measures. Implications for such models are discussed.
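As an illustration of the core idea, a HAL-style co-occurrence table can be built with a weighted sliding window over a token stream. The sketch below is a minimal toy version; the window size and the linear distance weighting are illustrative choices, not HiDEx's defaults:

```python
from collections import defaultdict

def cooccurrence(tokens, window=2):
    """Build a weighted co-occurrence table in the spirit of HAL:
    words closer together in the window get a higher weight
    (here: window - distance + 1, a linear ramp)."""
    counts = defaultdict(float)
    for i, target in enumerate(tokens):
        for d in range(1, window + 1):
            j = i + d
            if j >= len(tokens):
                break
            counts[(target, tokens[j])] += window - d + 1
    return counts

toks = "the quick brown fox jumps over the lazy dog".split()
m = cooccurrence(toks, window=2)
```

Sweeping parameters such as the window size (and the weighting scheme) over a grid is, in miniature, the kind of systematic exploration the abstract describes.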

Mathematics ◽  
2021 ◽  
Vol 9 (3) ◽  
pp. 222
Author(s):  
Juan C. Laria ◽  
M. Carmen Aguilera-Morillo ◽  
Enrique Álvarez ◽  
Rosa E. Lillo ◽  
Sara López-Taruella ◽  
...  

Over the last decade, regularized regression methods have offered alternatives for performing multi-marker analysis and feature selection in a whole genome context. The process of defining a list of genes that will characterize an expression profile remains unclear. It currently relies upon advanced statistics and can use an agnostic point of view or include some a priori knowledge, but overfitting remains a problem. This paper introduces a methodology to deal with the variable selection and model estimation problems in the high-dimensional set-up, which can be particularly useful in the whole genome context. Results are validated using simulated data and a real dataset from a triple-negative breast cancer study.
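For readers unfamiliar with regularized selection, the lasso is the canonical example of a method that simultaneously estimates coefficients and zeroes out uninformative markers. The sketch below is a minimal coordinate-descent lasso on simulated data; it is a generic illustration of the technique, not the methodology proposed in this paper:

```python
import numpy as np

def lasso_cd(X, y, lam, n_sweeps=200):
    """Minimal lasso via cyclic coordinate descent:
    minimizes 0.5 * ||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(p):
            # partial residual with feature j removed
            r = y - X @ b + X[:, j] * b[j]
            rho = X[:, j] @ r
            # soft-thresholding update zeroes out weak features
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
true_b = np.array([3.0, -2.0] + [0.0] * 8)  # only 2 informative features
y = X @ true_b + 0.01 * rng.normal(size=100)
b_hat = lasso_cd(X, y, lam=5.0)
```

With an appropriate penalty, the fitted vector recovers the two informative features and shrinks the rest to exactly zero, which is the feature-selection behaviour the abstract alludes to.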


2021 ◽  
Author(s):  
Masafumi Hirose ◽  
Hatsuki Fujinami

Spaceborne-radar precipitation products at high altitudes require close attention to geographically inherent retrieval uncertainties. The lowest levels free from surface clutter are ~1 km higher in rugged mountainous areas than over flatlands. The clutter-removal filter masks precipitation echoes at altitudes below 3 km above the surface at the swath edge over narrow valleys in the Himalayas. In this study, precipitation profiles at levels with clutter interference were estimated using an a priori precipitation profile dataset based on near-nadir observations. The corrected precipitation dataset was generated from the Tropical Rainfall Measuring Mission Precipitation Radar (TRMM PR) product at a spatial resolution of 0.01° around the Trambau Glacier terminus in the Nepal Himalayas, where ground observation sites were installed in 2016. The occurrence frequency of precipitation was considerably lower than that of the in situ observations because of limitations in sensor sensitivity. The occurrence frequency of light precipitation is increased by the Dual-frequency Precipitation Radar (DPR) onboard the Global Precipitation Measurement (GPM) Core Observatory, and the low-level precipitation profile correction mitigates the underestimation bias by ~10%. In this presentation, the detectability of fine-scale precipitation climatology and the local characteristics of its diurnal variation at high altitudes are discussed based on the combination of the TRMM PR and GPM DPR products.


2018 ◽  
Vol 52 (2) ◽  
pp. 631-657 ◽  
Author(s):  
Peng Chen

In this work we analyze the dimension-independent convergence property of an abstract sparse quadrature scheme for the numerical integration of functions of high-dimensional parameters with Gaussian measure. Under certain assumptions on the exactness and boundedness of the univariate quadrature rules, as well as regularity assumptions on the parametric functions with respect to the parameters, we prove that the convergence of the sparse quadrature error is independent of the number of parameter dimensions. Moreover, we propose both an a priori and an a posteriori scheme for the construction of a practical sparse quadrature rule and perform numerical experiments to demonstrate their dimension-independent convergence rates.
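As a concrete instance of a univariate rule that is exact for Gaussian measure, an n-point Gauss-Hermite quadrature integrates polynomials up to degree 2n-1 exactly against the standard normal density; rules of this kind are the building blocks that such sparse (Smolyak-type) constructions combine across dimensions. A minimal one-dimensional sketch:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def gauss_hermite_expectation(f, n):
    """E[f(X)] for X ~ N(0, 1) via an n-point probabilists'
    Gauss-Hermite rule (weight function exp(-x^2 / 2))."""
    x, w = hermegauss(n)
    return (w @ f(x)) / np.sqrt(2 * np.pi)

# A 3-point rule is exact for polynomials up to degree 5,
# so it reproduces the Gaussian fourth moment E[X^4] = 3 exactly.
m4 = gauss_hermite_expectation(lambda x: x**4, 3)

# For analytic non-polynomial integrands, convergence is very fast:
# E[exp(X)] = exp(1/2) for X ~ N(0, 1).
ex = gauss_hermite_expectation(np.exp, 20)
```

Tensorizing such rules naively costs n^d points in d dimensions; the point of the sparse construction analyzed in the paper is to avoid that exponential growth.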


2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
Alexander Dementjev ◽  
Burkhard Hensel ◽  
Klaus Kabitzsch ◽  
Bernd Kauschinger ◽  
Steffen Schroeder

Machine tools are important parts of highly complex industrial manufacturing, and end-product quality depends directly on their accuracy; yet these machines are prone to deformation caused by their own heat, which must be compensated in order to assure accurate production. An adequate model of the high-dimensional thermal deformation process must therefore be created and its parameters estimated. Unfortunately, such parameters are often unknown and cannot be calculated a priori. Parameter identification through dedicated experiments is not an option for these models because of the high engineering effort and machine time required, and installing additional sensors to measure the parameters directly is uneconomical. Instead, effective calibration of thermal models can be achieved by combining real and virtual measurements on a machine tool during its real operation, without installing additional sensors. In this paper, a new approach for thermal model calibration is presented. The expected results are very promising, and the approach can be recommended as an effective solution for this class of problems.
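As a toy illustration of calibrating a model parameter from paired measurements, consider a hypothetical first-order deformation law d = a·(T − T0) fitted by least squares; the model, data, and parameter here are made up and far simpler than a real machine-tool thermal model:

```python
import numpy as np

# Hypothetical linear thermal-deformation model: d(T) = a * (T - T0).
# Estimating a from paired (temperature, deformation) observations by
# least squares mimics, in miniature, calibrating a thermal model
# from measurements gathered during operation.
T0 = 20.0
T = np.array([22.0, 30.0, 45.0, 60.0])  # temperature readings (degrees C)
d = np.array([0.4, 2.1, 5.0, 8.1])      # measured deformation (micrometres)
a_hat = np.linalg.lstsq(np.c_[T - T0], d, rcond=None)[0][0]
```

Real calibration involves many coupled parameters and transient effects, but the principle of fitting unknown coefficients to observed deformation data is the same.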


2006 ◽  
Vol 55 (4) ◽  
pp. 534-552 ◽  
Author(s):  
Michael N. Jones ◽  
Walter Kintsch ◽  
Douglas J.K. Mewhort

2020 ◽  
Author(s):  
Kevin C. VanHorn ◽  
Murat Can Çobanoğlu

Dimensionality reduction (DR) is often integral when analyzing high-dimensional data across scientific, economic, and social networking applications. For data with a high order of complexity, nonlinear approaches are often needed to identify and represent the most important components. We propose a novel DR approach that can incorporate a known underlying hierarchy. Specifically, we extend the widely used t-Distributed Stochastic Neighbor Embedding technique (t-SNE) to include hierarchical information and demonstrate its use with known or unknown class labels. We term this approach “H-tSNE.” Such a strategy can aid in discovering and understanding underlying patterns of a dataset that is heavily influenced by parent-child relationships. We suggest that, without integrating information known a priori, DR cannot function as effectively. In this regard, we argue for a DR approach that enables the user to incorporate known, relevant relationships even if their representation is weakly expressed in the dataset.

Availability: github.com/Cobanoglu-Lab/h-tSNE
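One simple way to bias a distance-based DR method toward a known hierarchy is to shrink pairwise distances between points that share a parent label before embedding. The sketch below illustrates that idea only in spirit; the `parents` array and `alpha` knob are hypothetical, not the H-tSNE API:

```python
import numpy as np

def hierarchy_adjusted_distances(X, parents, alpha=0.5):
    """Shrink pairwise Euclidean distances between points sharing a
    parent label -- a crude way to bias a distance-based embedding
    (e.g. t-SNE on a precomputed metric) toward a known hierarchy."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    same_parent = parents[:, None] == parents[None, :]
    return np.where(same_parent, alpha * D, D)

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 3.0]])
parents = np.array([0, 0, 1])  # first two points share a parent
D = hierarchy_adjusted_distances(X, parents, alpha=0.5)
```

Feeding such an adjusted metric into any neighbor-embedding method pulls siblings together in the low-dimensional map, which is the qualitative effect the abstract describes.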


2017 ◽  
Author(s):  
Morteza Dehghani ◽  
Reihane Boghrati ◽  
Kingson Man ◽  
Joseph Hoover ◽  
Sarah Gimbel ◽  
...  

Drawing from a common lexicon of semantic units, humans fashion narratives whose meaning transcends that of their individual utterances. However, while brain regions that represent lower-level semantic units, such as words and sentences, have been identified, questions remain about the neural representation of narrative comprehension, which involves inferring cumulative meaning. To address these questions, we exposed English, Mandarin and Farsi native speakers to native language translations of the same stories during fMRI scanning. Using a new technique in natural language processing, we calculated the distributed representations of these stories (capturing the meaning of the stories in high-dimensional semantic space), and demonstrate that using these representations we can identify the specific story a participant was reading from the neural data. Notably, this was possible even when the distributed representations were calculated using stories in a different language than the participant was reading. Relying on over 44 billion classifications, our results reveal that identification relied on a collection of brain regions most prominently located in the default mode network. These results demonstrate that neuro-semantic encoding of narratives happens at levels higher than individual semantic units and that this encoding is systematic across both individuals and languages.
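The identification step can be pictured as nearest-neighbor matching in a shared semantic space: a vector decoded from neural data is compared against each story's distributed representation. The toy sketch below uses cosine similarity with made-up vectors; it illustrates the matching scheme generically, not the study's actual pipeline:

```python
import numpy as np

def identify_story(neural_vec, story_embeddings):
    """Return the index of the story whose distributed representation
    has the highest cosine similarity with a decoded neural vector."""
    E = story_embeddings / np.linalg.norm(story_embeddings, axis=1, keepdims=True)
    v = neural_vec / np.linalg.norm(neural_vec)
    return int(np.argmax(E @ v))

# Hypothetical 3-dimensional story embeddings (real ones are high-dimensional).
stories = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])
guess = identify_story(np.array([0.9, 0.1, 0.2]), stories)
```

Because cosine similarity depends only on direction in the semantic space, the same comparison works even when the story embeddings were computed from translations in a different language, as the abstract reports.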


2021 ◽  
Author(s):  
Guillem Barroso Gassiot

Latest developments in high-strength Magnetic Resonance Imaging (MRI) scanners, with in-built high resolution, have dramatically enhanced the ability of clinicians to diagnose tumours and rare illnesses. However, their high-strength transient magnetic fields induce unwanted eddy currents in shielding components, which result in high-frequency vibrations, noise, imaging artefacts and, ultimately, heat dissipation and boiling off of the helium used to super-cool the magnets. Optimum MRI scanner design requires the capturing of complex electro-magneto-mechanical interactions with high-fidelity computational tools. Moreover, manufacturing new MRI scanners still represents a computational challenge to industry due to the large variability in material parameters and geometrical configurations that need to be tested during the early design phase. This process can be highly optimised through the employment of user-friendly computational metamodels constructed on the basis of Reduced Order Modelling (ROM) techniques, where high-dimensional parametric offline solutions are obtained, stored and assimilated in order to be efficiently queried in real time.

This thesis presents a novel a priori Proper Generalised Decomposition (PGD) computational framework for the analysis of the electro-magneto-mechanical interactions in the context of MRI scanner design, addressing the urgent need for the development of new cost-effective methods whereby previously performed computations can be assimilated as training solutions of a surrogate digital twin model to allow for real-time simulations. The PGD methodology is derived for coupled electro-magneto-mechanical problems in an axisymmetric Lagrangian setting, including the possibility to vary several material and geometrical parameters (as part of the high-dimensional offline solution) that are relevant for the industrial partner of the project, Siemens Healthineers.
A regularised-adaptive strategy and a staggered PGD approach are proposed in order to enhance the accuracy and robustness of the PGD algorithm while preserving its a priori nature. The Lagrangian adaptation of the governing equations will allow for a comparison between staggered and monolithic solvers, where the staggered approach will be shown to enhance the robustness and accuracy of the PGD technique. Moreover, geometric changes in the computational domain will be accounted for in the PGD solution by using a PGD-projection technique that will enable the computation of a separable expression even for geometrical variations, thus preserving the efficiency of the online PGD stage. A set of numerical problems will be presented in order to validate the PGD formulation, which will be benchmarked against the full order (reference) model. Moreover, a comparison between two families of ROM methods, the a priori PGD and the a posteriori Proper Orthogonal Decomposition (POD), will also be performed in order to assess and compare different ROM strategies.
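For orientation, a PGD surrogate approximates the parametric field as a finite sum of separable modes, one factor per spatial coordinate or parameter; a generic form (not the thesis's specific discretization) is

```latex
u(x,\mu_1,\dots,\mu_d) \;\approx\; \sum_{m=1}^{M} F_m(x)\,\prod_{k=1}^{d} G_m^{k}(\mu_k),
```

where the modes $F_m$ and $G_m^{k}$ are computed greedily during the offline stage without sampling a set of full-order solution snapshots, which is what makes the approach a priori (in contrast to snapshot-based a posteriori methods such as POD).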


2006 ◽  
Author(s):  
Allison B. Kaufman ◽  
Curt Burgess ◽  
Arunava Chakravartty ◽  
Brenda McCowan ◽  
Catherine H. Decker
