Computer Model Emulation with High-Dimensional Functional Output in Large-Scale Observing System Uncertainty Experiments

Technometrics ◽  
2021 ◽  
pp. 1-36
Author(s):  
Pulong Ma ◽  
Anirban Mondal ◽  
Bledar A. Konomi ◽  
Jonathan Hobbs ◽  
Joon Jin Song ◽  
...


2014 ◽  
Vol 575 ◽  
pp. 201-205
Author(s):  
Bin Liu ◽  
Chun Lin Ji

We present an automated computation system for large-scale design of metamaterials (MTMs). A computer model emulation (CME) technique is used to generate a forward mapping from an MTM particle's geometric dimensions to the corresponding electromagnetic (EM) response. The design problem then translates into a reverse engineering process that seeks optimal values of the geometric dimensions for the MTM particles. The core of the CME process is a statistical functional regression module based on a Gaussian Process mixture (GPM) model, and the reverse engineering process is implemented with a Bayesian optimization technique. Experimental results demonstrate that the proposed approach facilitates rapid design of MTMs.
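
The abstract describes a two-stage pipeline: a statistical emulator that maps particle geometry to EM response, followed by Bayesian optimization to invert that mapping. A minimal sketch of such a pipeline is given below, assuming a single scikit-learn Gaussian process in place of the paper's Gaussian Process mixture, a toy em_response function standing in for the EM solver, and a simple expected-improvement-style acquisition; the function names, target value, and settings are illustrative, not the authors' implementation.

```python
# Minimal sketch: GP emulator (geometry -> EM response) plus one
# Bayesian-optimization-style inverse-design step. All names are hypothetical.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def em_response(x):
    # Toy stand-in for the expensive EM simulation:
    # maps a 2-D vector of geometric dimensions to a scalar response.
    return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1]) + 0.1 * x[:, 0]

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(30, 2))            # sampled particle geometries
y = em_response(X)                             # simulated EM responses

# Forward emulator (single GP here, rather than the paper's GP mixture).
gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X, y)

target = 0.8                                   # desired EM response (illustrative)

def acquisition(cand, best):
    # Expected-improvement-style score on the plug-in misfit -|mu - target|.
    mu, sd = gp.predict(cand, return_std=True)
    fit = -np.abs(mu - target)
    z = (fit - best) / np.maximum(sd, 1e-9)
    return (fit - best) * norm.cdf(z) + sd * norm.pdf(z)

# One reverse-engineering step: pick the next geometry to simulate.
cand = rng.uniform(0, 1, size=(2000, 2))
best = np.max(-np.abs(gp.predict(X) - target))
x_next = cand[np.argmax(acquisition(cand, best))]
print("next geometry to simulate:", x_next)
```

In a full design loop, the selected geometry would be simulated, appended to the training set, and the emulator refit until the predicted response is sufficiently close to the target.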


2009 ◽  
Vol 35 (7) ◽  
pp. 859-866
Author(s):  
Ming LIU ◽  
Xiao-Long WANG ◽  
Yuan-Chao LIU

2021 ◽  
Vol 11 (2) ◽  
pp. 472
Author(s):  
Hyeongmin Cho ◽  
Sangkyun Lee

Machine learning has proven effective in various application areas, such as object and speech recognition on mobile systems. Since a critical key to the success of machine learning is the availability of large training data, many datasets are being disclosed and published online. From a data consumer's or manager's point of view, measuring data quality is an important first step in the learning process: we need to determine which datasets to use, update, and maintain. However, not many practical ways to measure data quality are available today, especially for large-scale high-dimensional data such as images and videos. This paper proposes two data quality measures that compute class separability and in-class variability, two important aspects of data quality, for a given dataset. Classical data quality measures tend to focus only on class separability; we suggest that in-class variability is another important data quality factor. We provide efficient algorithms to compute our quality measures, based on random projections and bootstrapping, with statistical benefits on large-scale high-dimensional data. In experiments, we show that our measures are compatible with classical measures on small-scale data and can be computed much more efficiently on large-scale high-dimensional datasets.
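
As a rough illustration of how such measures could be computed cheaply, the sketch below combines random projections with bootstrapping, reporting a Fisher-style between/within scatter ratio as a separability proxy and the mean within-class variance as an in-class variability proxy. Both proxies, the quality_measures function, and its parameters are stand-ins assumed for illustration, not the measures defined in the paper.

```python
# Minimal sketch: data-quality proxies via random projection + bootstrapping.
import numpy as np
from sklearn.random_projection import GaussianRandomProjection

def quality_measures(X, y, n_components=32, n_boot=10, seed=0):
    rng = np.random.default_rng(seed)
    sep_scores, var_scores = [], []
    for b in range(n_boot):
        # Bootstrap resample, then project to a random low-dimensional space.
        idx = rng.integers(0, len(X), size=len(X))
        Xb, yb = X[idx], y[idx]
        Z = GaussianRandomProjection(n_components, random_state=b).fit_transform(Xb)
        mu = Z.mean(axis=0)
        between, within, spread = 0.0, 0.0, []
        for c in np.unique(yb):
            Zc = Z[yb == c]
            mc = Zc.mean(axis=0)
            between += len(Zc) * np.sum((mc - mu) ** 2)   # between-class scatter
            within += np.sum((Zc - mc) ** 2)              # within-class scatter
            spread.append(Zc.var(axis=0).mean())          # in-class variability proxy
        sep_scores.append(between / max(within, 1e-12))
        var_scores.append(float(np.mean(spread)))
    return float(np.mean(sep_scores)), float(np.mean(var_scores))

# Toy usage on synthetic high-dimensional data with two classes.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 500)), rng.normal(1.5, 1, (200, 500))])
y = np.r_[np.zeros(200), np.ones(200)]
print(quality_measures(X, y))
```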


Algorithms ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 146
Author(s):  
Aleksei Vakhnin ◽  
Evgenii Sopov

Modern real-valued optimization problems are complex and high-dimensional, and they are known as "large-scale global optimization (LSGO)" problems. Classic evolutionary algorithms (EAs) perform poorly on this class of problems because of the curse of dimensionality. Cooperative Coevolution (CC) is a high-performing framework for decomposing large-scale problems into smaller, easier subproblems by grouping objective variables. The efficiency of CC strongly depends on the group size and the grouping approach. In this study, an improved CC (iCC) approach for solving LSGO problems is proposed and investigated. iCC changes the number of variables in subcomponents dynamically during the optimization process, and the SHADE algorithm is used as the subcomponent optimizer. We have investigated the performance of iCC-SHADE and CC-SHADE on fifteen problems from the LSGO CEC'13 benchmark set provided by the IEEE Congress on Evolutionary Computation. The results of numerical experiments show that iCC-SHADE outperforms, on average, CC-SHADE with a fixed number of subcomponents. We have also compared iCC-SHADE with several state-of-the-art LSGO metaheuristics; the experimental results show that the proposed algorithm is competitive with other efficient metaheuristics.
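
To make the cooperative-coevolution loop concrete, the sketch below decomposes a toy 100-dimensional problem into randomly grouped subcomponents whose size grows over cycles and optimizes each group against a shared context vector. SciPy's differential evolution stands in for SHADE, and the group-size schedule is purely illustrative rather than the paper's iCC adaptation rule.

```python
# Minimal sketch of a cooperative-coevolution loop with a growing group size.
import numpy as np
from scipy.optimize import differential_evolution

def sphere(x):                                # toy LSGO objective
    return float(np.sum(x ** 2))

dim, bounds = 100, (-5.0, 5.0)
rng = np.random.default_rng(0)
context = rng.uniform(*bounds, size=dim)      # context vector (current best solution)

for cycle, group_size in enumerate([10, 20, 25, 50]):    # illustrative schedule
    order = rng.permutation(dim)                           # random variable grouping
    groups = np.array_split(order, dim // group_size)
    for g in groups:
        def subproblem(z, g=g):
            trial = context.copy()
            trial[g] = z                                   # optimize only this group
            return sphere(trial)
        res = differential_evolution(subproblem, [bounds] * len(g),
                                     maxiter=30, tol=1e-8, seed=cycle, polish=False)
        if res.fun < sphere(context):
            context[g] = res.x                             # accept improved block
    print(f"cycle {cycle}: group size {group_size}, f = {sphere(context):.4f}")
```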


2012 ◽  
Vol 2012 ◽  
pp. 1-8 ◽  
Author(s):  
Catalina Alvarado-Rojas ◽  
Michel Le Van Quyen

Little is known about the long-term dynamics of widely interacting cortical and subcortical networks during the wake-sleep cycle. Using large-scale intracranial recordings of epileptic patients during seizure-free periods, we investigated local and long-range synchronization between multiple brain regions over several days. For such high-dimensional data, summary information is required for understanding and modelling the underlying dynamics. Here, we suggest that a compact yet useful representation is given by a state space based on the first principal components. Using this representation, we report, with remarkable similarity across patients with different electrode placements, that the seemingly complex patterns of brain synchrony during the wake-sleep cycle can be represented by a small number of characteristic dynamic modes. In this space, transitions between behavioral states occur through specific trajectories from one mode to another. These findings suggest that, at a coarse level of temporal resolution, the different brain states are correlated with several dominant synchrony patterns that are successively activated across wake-sleep states.
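
A minimal sketch of the state-space construction is shown below, assuming windowed pairwise correlation as a stand-in for the paper's synchrony measures and synthetic signals in place of intracranial recordings; the window length and number of components are arbitrary choices for illustration.

```python
# Minimal sketch: windowed synchrony features projected onto the first PCs.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_channels, fs, minutes = 16, 100, 30
x = rng.normal(size=(n_channels, fs * 60 * minutes))     # toy multichannel recordings

win = fs * 10                                            # 10-second windows
iu = np.triu_indices(n_channels, k=1)                    # unique channel pairs
features = []
for start in range(0, x.shape[1] - win, win):
    seg = x[:, start:start + win]
    corr = np.corrcoef(seg)                              # pairwise synchrony proxy
    features.append(corr[iu])                            # upper-triangle as a vector
features = np.asarray(features)                          # (windows, channel pairs)

# Low-dimensional "state space" spanned by the first principal components.
pca = PCA(n_components=3)
states = pca.fit_transform(features)
print("explained variance ratios:", pca.explained_variance_ratio_)
print("state-space trajectory shape:", states.shape)
```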


Author(s):  
Alexander Miropolsky ◽  
Anath Fischer

The inspection of machined objects is one of the most important quality control tasks in the manufacturing industry. Contemporary scanning technologies have provided the impetus for the development of computational inspection methods, in which the computer model of the manufactured object is reconstructed from the scan data and then verified against its digital design model. Scan data, however, are typically very large scale (i.e., many points), unorganized, noisy, and incomplete, which makes reconstruction problematic. To overcome these problems, reconstruction methods may exploit diverse feature data, that is, diverse information about the properties of the scanned object. Based on this concept, the paper proposes a new method for denoising and reduction of scan data by an extended geometric filter. The proposed method is applied directly to the scanned points and is automatic, fast, and straightforward to implement. The paper demonstrates the integration of the proposed method into the framework of the computational inspection process.
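
The sketch below illustrates the general idea of filtering scan points directly, assuming a simple k-nearest-neighbor averaging filter as a stand-in for the authors' extended geometric filter; the denoise_points function and its parameters are hypothetical.

```python
# Minimal sketch: neighborhood-based denoising applied directly to scan points.
import numpy as np
from scipy.spatial import cKDTree

def denoise_points(points, k=16, strength=0.5):
    """Move each point part of the way toward its local neighborhood mean."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)        # neighbor 0 is the point itself
    local_mean = points[idx[:, 1:]].mean(axis=1)
    return (1 - strength) * points + strength * local_mean

# Toy usage: a noisy sampling of the plane z = 0.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(5000, 2))
z = 0.02 * rng.normal(size=5000)                # measurement noise
scan = np.column_stack([xy, z])
clean = denoise_points(scan)
print("noise std before/after:", scan[:, 2].std(), clean[:, 2].std())
```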


2015 ◽  
Vol 2015 ◽  
pp. 1-12 ◽  
Author(s):  
Sai Kiranmayee Samudrala ◽  
Jaroslaw Zola ◽  
Srinivas Aluru ◽  
Baskar Ganapathysubramanian

Dimensionality reduction refers to a set of mathematical techniques used to reduce the complexity of the original high-dimensional data while preserving its selected properties. Improvements in simulation strategies and experimental data collection methods are resulting in a deluge of heterogeneous and high-dimensional data, which often makes dimensionality reduction the only viable way to gain qualitative and quantitative understanding of the data. However, existing dimensionality reduction software often does not scale to the datasets arising in real-life applications, which may consist of thousands of points with millions of dimensions. In this paper, we propose a parallel framework for dimensionality reduction of large-scale data. We identify the key components underlying spectral dimensionality reduction techniques and propose their efficient parallel implementation. We show that the resulting framework can be used to process datasets consisting of millions of points when executed on a 16,000-core cluster, which is beyond the reach of currently available methods. To further demonstrate the applicability of our framework, we perform dimensionality reduction of 75,000 images representing morphology evolution during the manufacturing of organic solar cells, in order to identify how processing parameters affect morphology evolution.
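
The serial core of the spectral techniques that such a framework parallelizes (neighborhood-graph construction followed by a sparse eigensolve) can be sketched with scikit-learn as below; the parallel, cluster-scale implementation that is the paper's contribution is not reproduced here, and the data are synthetic.

```python
# Minimal sketch of the serial core of spectral dimensionality reduction.
import numpy as np
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 300))               # toy high-dimensional point cloud

# Key steps: k-NN graph construction, graph Laplacian assembly, eigensolve.
embedding = SpectralEmbedding(n_components=2, n_neighbors=15)
Y = embedding.fit_transform(X)
print("reduced data shape:", Y.shape)          # (2000, 2)
```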

