On-the-fly model reduction for large-scale structural topology optimization using principal components analysis

2020 ◽  
Vol 62 (1) ◽  
pp. 209-230 ◽  
Author(s):  
Manyu Xiao ◽  
Dongcheng Lu ◽  
Piotr Breitkopf ◽  
Balaji Raghavan ◽  
Subhrajit Dutta ◽  
...


Author(s):  
Keith E. Stanovich ◽  
Richard F. West ◽  
Maggie E. Toplak

Chapter 12 describes a large-scale study of the short-form version of the CART. The short form is composed of 11 of the 20 subtests and can be completed in less than two hours by most subjects. The short-form CART includes both the Probabilistic and Statistical Reasoning and the Scientific Reasoning subtests, as both are at the core of most definitions of rational thinking. All four subtests that directly tap the avoidance of miserly processing are included in the short form, as are all four subtests assessing contaminated mindware. The Probabilistic Numeracy subtest is included because it is statistically quite potent for the amount of time it takes. Chapter 12 reports the results of a study of short-form performance involving 372 subjects. Reliabilities of all the subtests are reported, as well as correlations with cognitive ability and the Actively Open-Minded Thinking scale. Correlations among all the subtests are reported, as is a principal components analysis of the subtests.


Author(s):  
Keith E. Stanovich ◽  
Richard F. West ◽  
Maggie E. Toplak

This chapter describes a large-scale study of the full-form version of the CART involving 747 subjects. Reliabilities of all the subtests are reported, as well as correlations with measures of cognitive ability and the four thinking disposition scales of the CART. Correlations among all the subtests are reported, as is a principal components analysis of the subtests. Comparisons between the full-form CART and the short-form CART are presented, along with comparisons with forms of the test even briefer than the short form.


Author(s):  
Tao Jiang ◽  
Mehran Chirehdast

Abstract
In this paper, structural topology optimization is extended to systems design. The locations and patterns of connections in a structural system consisting of multiple components strongly affect its performance. The topology of connections is defined, and a new classification for structural optimization is introduced that includes the topology optimization problem for connections. A mathematical programming problem addressing this design problem is formulated, and a convex approximation method using analytical gradients is used to solve it; this solution method is readily applicable to large-scale problems. The design problem presented and solved here has a wide range of applications in all areas of structural design. The examples provided are spot-weld and adhesive bond joints, and numerous other potential applications are suggested.


2013 ◽  
Vol 7 (1) ◽  
pp. 19-24
Author(s):  
Kevin Blighe

Elaborate downstream methods are required to analyze large microarray datasets. When the end goal is to look for relationships between (or patterns within) different subgroups, or even individual samples, large datasets must first be filtered using statistical thresholds to reduce their overall volume. In anthropological microarray studies, for example, such ‘dimension reduction’ techniques are essential to elucidate any links between polymorphisms and phenotypes for given populations. A subset of such a large dataset can first be taken to represent the whole, much as polling results taken during elections are used to infer the opinions of the population at large. But what is the best and easiest method of capturing a subset of the variation in a dataset that can represent the overall portrait of variation? In this article, principal components analysis (PCA) is discussed in detail, including its history, the mathematics behind the process, and the ways in which it can be applied to modern large-scale biological datasets. New methods of analysis using PCA are also suggested, with tentative results outlined.
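As a concrete illustration of the dimension-reduction idea discussed in this abstract, a minimal PCA can be written from scratch with NumPy. This is a generic sketch, not code from the article; the function name `pca` and the toy data are illustrative assumptions.

```python
import numpy as np

def pca(X, n_components):
    """Principal components analysis via eigendecomposition of the
    feature covariance matrix. X has shape (n_samples, n_features)."""
    # Center each feature (column) at zero mean
    Xc = X - X.mean(axis=0)
    # Covariance matrix of the features
    cov = np.cov(Xc, rowvar=False)
    # eigh is the right choice for a symmetric matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Sort components by explained variance, descending
    order = np.argsort(eigvals)[::-1]
    components = eigvecs[:, order[:n_components]]
    explained = eigvals[order[:n_components]]
    # Project the centered data onto the leading components
    scores = Xc @ components
    return scores, components, explained

# Toy example: three features that are all driven by one latent variable,
# so a single principal component captures nearly all the variation.
rng = np.random.default_rng(0)
t = rng.normal(size=(300, 1))
noise = 0.01 * rng.normal(size=(300, 2))
X = np.hstack([t, 2.0 * t + noise[:, :1], -t + noise[:, 1:]])
scores, components, explained = pca(X, 2)
```

In this toy dataset the first component alone represents the "overall portrait of variation" the abstract asks about, which is exactly why a handful of leading components can stand in for thousands of correlated microarray probes.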


2020 ◽  
Vol 10 (4) ◽  
pp. 1481 ◽  
Author(s):  
Abdulkhaliq A. Jaafer ◽  
Mustafa Al-Bazoon ◽  
Abbas O. Dawood

In this study, the binary bat algorithm (BBA) is implemented for structural topology optimization. The problem is to find the stiffest structure using a given amount of material, subject to constraints, with the bit-array representation method. A new filtering algorithm is proposed so that the BBA finds designs with no separated objects, no checkerboard patterns, less unusable material, and higher structural performance. A violation penalty function for topology optimization is also proposed to accelerate convergence toward the optimal design. The main advantage of the BBA lies in its ability to handle a large number of design variables in comparison with other well-known metaheuristic algorithms. Based on the numerical results of four benchmark problems in structural topology optimization for minimum compliance, the following conclusions are drawn: (1) the BBA with the proposed filtering algorithm and penalty function is effective in solving large-scale topology optimization problems (fine finite element meshes); (2) the proposed algorithm produces solid-void designs without gray areas, making them practical solutions applicable in manufacturing.
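The paper's actual filtering algorithm is not reproduced in this abstract, but one of its stated goals, discarding solid material that is disconnected from the main structure in a bit-array design, can be sketched as follows. This is a hypothetical toy implementation; `keep_largest_component` and the 4-connectivity choice are assumptions, not the authors' method.

```python
import numpy as np
from collections import deque

def keep_largest_component(design):
    """Keep only the largest 4-connected group of solid (1) elements in a
    bit-array design; all other solid elements become void (0). A toy
    stand-in for the connectivity goal of a topology-design filter."""
    design = np.asarray(design, dtype=int)
    rows, cols = design.shape
    labels = np.zeros_like(design)
    sizes = {}
    current = 0
    for i in range(rows):
        for j in range(cols):
            if design[i, j] == 1 and labels[i, j] == 0:
                # Breadth-first flood fill of one connected component
                current += 1
                labels[i, j] = current
                queue, size = deque([(i, j)]), 0
                while queue:
                    r, c = queue.popleft()
                    size += 1
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if (0 <= rr < rows and 0 <= cc < cols
                                and design[rr, cc] == 1
                                and labels[rr, cc] == 0):
                            labels[rr, cc] = current
                            queue.append((rr, cc))
                sizes[current] = size
    if not sizes:
        return design
    largest = max(sizes, key=sizes.get)
    return (labels == largest).astype(int)

# A 3x3 solid block plus one isolated element: the filter keeps the block.
design = np.zeros((6, 6), dtype=int)
design[1:4, 1:4] = 1
design[5, 5] = 1
filtered = keep_largest_component(design)
```

In practice a library routine such as `scipy.ndimage.label` would do the component labelling; the explicit flood fill above just makes the idea self-contained.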


Author(s):  
Ruey Leng Loo ◽  
Queenie Chan ◽  
Henrik Antti ◽  
Jia V Li ◽  
H Ashrafian ◽  
...  

Abstract
Motivation: Large-scale population omics data can provide insight into associations between gene–environment interactions and disease. However, existing dimension reduction modelling techniques are often inefficient for extracting detailed information from these complex datasets.
Results: Here, we present an interactive software pipeline for exploratory analyses of population-based nuclear magnetic resonance spectral data using a COmbined Multi-block Principal components Analysis with Statistical Spectroscopy (COMPASS) within the R-library hastaLaVista framework. Principal component analysis models are generated for a sequential series of spectral regions (blocks) to provide more granular detail defining sub-populations within the dataset. Molecular identification of key differentiating signals is subsequently achieved by implementing Statistical TOtal Correlation SpectroscopY on the full spectral data to define feature patterns. Finally, the distributions of cross-correlation of the reference patterns across the spectral dataset are used to provide population statistics for identifying underlying features arising from drug intake, latent diseases and diet. The COMPASS method thus provides an efficient semi-automated approach for screening population datasets.
Availability and implementation: Source code is available at https://github.com/cheminfo/COMPASS.
Supplementary information: Supplementary data are available at Bioinformatics online.
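The COMPASS pipeline itself is written in R within the hastaLaVista framework, but the core multi-block idea, an independent PCA fitted to each sequential spectral region, can be sketched in Python. This is an illustrative approximation under stated assumptions, not the published implementation; `blockwise_pca_scores` and the equal-width block split are assumptions.

```python
import numpy as np

def blockwise_pca_scores(spectra, n_blocks, n_components=2):
    """Fit an independent PCA to each of n_blocks sequential spectral
    regions and return the per-block score matrices. spectra has shape
    (n_samples, n_points)."""
    n_samples, n_points = spectra.shape
    # Equal-width block boundaries along the spectral axis
    edges = np.linspace(0, n_points, n_blocks + 1).astype(int)
    block_scores = []
    for b in range(n_blocks):
        block = spectra[:, edges[b]:edges[b + 1]]
        centered = block - block.mean(axis=0)
        # SVD of the centered block: the PCA scores are U * S
        U, S, _ = np.linalg.svd(centered, full_matrices=False)
        block_scores.append(U[:, :n_components] * S[:n_components])
    return block_scores

# 20 simulated spectra of 100 points, split into 5 blocks
rng = np.random.default_rng(1)
spectra = rng.normal(size=(20, 100))
scores = blockwise_pca_scores(spectra, n_blocks=5, n_components=2)
```

Scoring each region separately is what gives the "more granular detail" the abstract describes: a sub-population that differs in only one narrow spectral region stands out in that block's scores even when it is invisible in a single PCA of the full spectrum.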

