A new nonlinear polyconvex orthotropic material model for the robust simulation of technical fabrics in civil engineering applications at large strains – Validation with a large-scale experiment

Bauingenieur
2019
Vol 94 (12)
pp. 488-497
Author(s):
Mehran Motevalli
Jörg Uhlemann
Natalie Stranghöner
Daniel Balzani

Abstract A polyconvex orthotropic material model is proposed for the simulation of tensile membrane structures. The notion of anisotropic metric tensors is employed in the formulation of the polyconvex orthotropic term, which allows the interaction of the warp and fill yarns to be described. The model is fitted to the stress-strain paths of uni- and biaxial tensile tests of a woven fabric, and the results are compared with a linear elastic model. The lateral contraction in the uniaxial loading case is taken into account in order to also capture the strong crosswise interactions. An increased number of load cycles is applied in the experiments to reach a saturated elastic state of the material. A new method is proposed which, in principle, enables the identification of unique (linear) stiffness parameters by first identifying the (nonlinear) model parameters. In total, the proposed nonlinear model contains only four material parameters to be identified for the individual membrane material. Moreover, a new large-scale experimental setup is presented which allows the response of the proposed model to be validated for real-life engineering applications. The numerical robustness of the model is tested in an advanced simulation of a large roof structure under realistic boundary conditions.
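The abstract does not reproduce the functional form of the energy. As a rough orientation only, polyconvex orthotropic terms of this kind are often built from an anisotropic metric tensor assembled from the warp and fill directions a_1, a_2; the following is a generic sketch of such a construction (Schröder–Neff type), not the specific energy or parameter set used in the paper:

$$ G = c_1\, a_1 \otimes a_1 + c_2\, a_2 \otimes a_2 + c_3\, \mathbf{1}, \qquad \psi_{\text{aniso}}(\boldsymbol{C}) = \alpha_1 \big( \operatorname{tr}[\boldsymbol{C}\,G] \big)^{\alpha_2} $$

With $G$ positive semi-definite, $\operatorname{tr}[\boldsymbol{C}G] = \lVert \boldsymbol{F} G^{1/2} \rVert^2$ is convex in the deformation gradient $\boldsymbol{F}$, so for $\alpha_1 > 0$ and $\alpha_2 \ge 1$ the term is polyconvex; mixing both yarn directions in one metric tensor is what introduces the warp-fill interaction.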

2011
Vol 90-93
pp. 176-181
Author(s):
Chang Lu Chen
Sheng Jun Shao
Lin Ma

The Duncan-Chang nonlinear model is modified and applied to the calculation of structural loess. Based on structural studies and conventional triaxial tests, this paper analyzes the mechanical properties of intact loess and the relationship between the stress-ratio structural parameter and strain, and then gives expressions for the generalized shear strain and the stress-ratio structural parameter to facilitate engineering application. On this basis, the stress-strain curves of intact loess are corrected using the stress-ratio structural parameter; after this revision, the curves change from softening or weakly softening to hardening. The results show that the corrected stress-strain curves of intact loess can be computed with the Duncan-Chang nonlinear model and that the resulting model parameters are reasonable and effective. The method thus provides a new way of applying the widely used Duncan-Chang nonlinear model to intact structural loess.
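For orientation, the classical Duncan-Chang hyperbolic relation that the modification starts from can be stated as follows (standard textbook form; the paper's stress-ratio correction terms are not reproduced here):

$$ \sigma_1 - \sigma_3 = \frac{\varepsilon_1}{a + b\,\varepsilon_1}, \qquad E_t = \left[ 1 - \frac{R_f\,(1-\sin\varphi)\,(\sigma_1-\sigma_3)}{2c\cos\varphi + 2\sigma_3\sin\varphi} \right]^2 K\, p_a \left( \frac{\sigma_3}{p_a} \right)^n $$

where $a$, $b$ are the hyperbola constants, $E_t$ the tangent modulus, $R_f$ the failure ratio, $K$ and $n$ the modulus number and exponent, $c$ and $\varphi$ the shear strength parameters, and $p_a$ the atmospheric (reference) pressure.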


2017
Vol 21
pp. 369-393
Author(s):
Nelson Antunes
Vladas Pipiras
Patrice Abry
Darryl Veitch

Poisson cluster processes are special point processes that find use in modeling Internet traffic, neural spike trains, computer failure times and other real-life phenomena. The focus of this work is on the various moments and cumulants of Poisson cluster processes, and specifically on their behavior at small and large scales. Under suitable assumptions motivated by the multiscale behavior of Internet traffic, it is shown that all these various quantities satisfy scale free (scaling) relations at both small and large scales. Only some of these relations turn out to carry information about salient model parameters of interest, and consequently can be used in the inference of the scaling behavior of Poisson cluster processes. At large scales, the derived results complement those available in the literature on the distributional convergence of normalized Poisson cluster processes, and also bring forward a more practical interpretation of the so-called slow and fast growth regimes. Finally, the results are applied to a real data trace from Internet traffic.
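The scaling relations themselves are derived analytically in the paper; as a purely illustrative companion, the following Python sketch (arbitrary parameters, a Neyman-Scott-style construction, not the paper's model) simulates a Poisson cluster process on the line and reports the empirical variance of window counts at several scales:

import numpy as np

rng = np.random.default_rng(0)

def poisson_cluster_process(rate, mean_cluster_size, cluster_spread, t_max):
    """Simulate a Neyman-Scott-type Poisson cluster process on [0, t_max].

    Cluster centres arrive as a homogeneous Poisson process with intensity
    `rate`; each centre spawns a Poisson(mean_cluster_size) number of points,
    displaced from the centre by Exponential(cluster_spread) offsets.
    """
    n_centres = rng.poisson(rate * t_max)
    centres = rng.uniform(0.0, t_max, size=n_centres)
    points = []
    for c in centres:
        n_pts = rng.poisson(mean_cluster_size)
        points.append(c + rng.exponential(cluster_spread, size=n_pts))
    points = np.concatenate(points) if points else np.array([])
    return np.sort(points[points <= t_max])

def count_variance(points, t_max, window):
    """Empirical variance of the window counts N(kT, (k+1)T] at scale T = window."""
    edges = np.arange(0.0, t_max + window, window)
    counts, _ = np.histogram(points, bins=edges)
    return counts.var()

pts = poisson_cluster_process(rate=2.0, mean_cluster_size=5.0,
                              cluster_spread=1.0, t_max=50_000.0)
for T in (0.1, 1.0, 10.0, 100.0):
    print(f"T = {T:6.1f}   Var[N(0,T]] = {count_variance(pts, 50_000.0, T):.3f}")

Plotting these variances against T on log-log axes is one simple empirical way to see the small-scale and large-scale scaling regimes discussed in the paper.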


2016
Vol 27 (7)
pp. 2231-2246
Author(s):
Maria Francesca Marino
Nikos Tzavidis
Marco Alfò

Quantile regression provides a detailed and robust picture of the distribution of a response variable, conditional on a set of observed covariates. Recently, it has been extended to the analysis of longitudinal continuous outcomes using either time-constant or time-varying random parameters. However, in real-life data, we frequently observe both temporal shocks in the overall trend and individual-specific heterogeneity in model parameters. A benchmark dataset on HIV progression gives a clear example. Here, the evolution of the CD4 log counts exhibits both sudden temporal changes in the overall trend and heterogeneity in the effect of the time since seroconversion on the response dynamics. To accommodate such situations, we propose a quantile regression model where time-varying and time-constant random coefficients are jointly considered. Since observed data may be incomplete due to early drop-out, we also extend the proposed model from a pattern-mixture perspective. We assess the performance of the proposed models via a large-scale simulation study and the analysis of the CD4 count data.
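Schematically (illustrative notation only; the abstract does not spell out the exact specification or the distributional assumptions on the random terms), a conditional quantile with both time-constant and time-varying random coefficients can be written as

$$ Q_{y_{it}}(\tau \mid \mathbf{x}_{it}, b_i, u_t) = \mathbf{x}_{it}^{\top}\boldsymbol{\beta}_{\tau} + \mathbf{z}_{it}^{\top} b_i + \mathbf{w}_{it}^{\top} u_t , $$

where $b_i$ captures individual-specific, time-constant heterogeneity and $u_t$ captures temporal shocks common to all individuals at time $t$; estimation for a given quantile level $\tau$ typically treats the asymmetric Laplace density as a working likelihood.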


2021
Vol 347
pp. 00036
Author(s):
Johan Bester
Philip Venter
Martin van Eldik

The use of computational fluid dynamics in continuous-operation industries has become more prominent in recent times. Proposed system improvements through geometric changes or control strategies can be evaluated within a relatively short timeframe. Applications of discrete element methods (DEM) to real-life simulations, however, require validated material calibration methods. In this paper, the V-model methodology was followed in combination with direct and bulk calibration approaches to determine material model parameters for simulating real-life occurrences. For the bulk calibration approach, a test rig with a containment hopper, deflection plate and settling zone was used. Screened material drains from the hopper, interacts with the deflection plate, and then settles at the material angle of repose. A high-speed camera captured the material's interaction with the rig, and this footage was used during simulation validation. The direct measuring approach was used to determine particle size, shape and density, while confirming the friction and restitution coefficients determined in the bulk calibration method. The test was repeated and validated for various geometrical changes. Three categories of validation were established: particle speed assessment, particle trajectory assessment and particle-plate interaction assessment. In conclusion, the combination of direct and bulk calibration approaches proved effective in calibrating the required material model parameters.


2021
Vol 55 (1)
pp. 1-2
Author(s):  
Bhaskar Mitra

Neural networks with deep architectures have demonstrated significant performance improvements in computer vision, speech recognition, and natural language processing. The challenges in information retrieval (IR), however, are different from these other application areas. A common form of IR involves ranking of documents---or short passages---in response to keyword-based queries. Effective IR systems must deal with the query-document vocabulary mismatch problem by modeling relationships between different query and document terms and how they indicate relevance. Models should also consider lexical matches when the query contains rare terms---such as a person's name or a product model number---not seen during training, and avoid retrieving semantically related but irrelevant results. In many real-life IR tasks, the retrieval involves extremely large collections---such as the document index of a commercial Web search engine---containing billions of documents. Efficient IR methods should take advantage of specialized IR data structures, such as the inverted index, to retrieve efficiently from large collections. Given an information need, the IR system also mediates how much exposure an information artifact receives by deciding whether it should be displayed, and where it should be positioned, among other results. Exposure-aware IR systems may optimize for additional objectives, besides relevance, such as parity of exposure for retrieved items and content publishers. In this thesis, we present novel neural architectures and methods motivated by the specific needs and challenges of IR tasks. We ground our contributions in a detailed survey of the growing body of neural IR literature [Mitra and Craswell, 2018]. Our key contribution towards improving the effectiveness of deep ranking models is the development of the Duet principle [Mitra et al., 2017], which emphasizes the importance of incorporating evidence based on both patterns of exact term matches and similarities between learned latent representations of query and document. To efficiently retrieve from large collections, we develop a framework for incorporating query term independence [Mitra et al., 2019] into any arbitrary deep model, which enables large-scale precomputation and the use of an inverted index for fast retrieval. In the context of stochastic ranking, we further develop optimization strategies for exposure-based objectives [Diaz et al., 2020]. Finally, this dissertation also summarizes our contributions towards benchmarking neural IR models in the presence of large training datasets [Craswell et al., 2019] and explores the application of neural methods to other IR tasks, such as query auto-completion.
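As a toy illustration of the query term independence idea (hypothetical helper names; this is not code from the thesis), the query-document score is assumed to decompose into a sum of per-term contributions, so each contribution can be precomputed offline with the learned model and served from an inverted index at query time:

from collections import defaultdict

# Offline: precompute a per-term partial score for every (term, doc) pair with
# some scoring function f(term, doc), and store it as an inverted index.
def build_term_index(docs, term_score):
    index = defaultdict(list)  # term -> [(doc_id, partial_score), ...]
    for doc_id, text in docs.items():
        for term in set(text.split()):
            index[term].append((doc_id, term_score(term, text)))
    return index

# Online: under query term independence the document score is just the sum of
# the precomputed partial scores of the query terms, as in classical IR.
def score_query(index, query, top_k=10):
    scores = defaultdict(float)
    for term in query.split():
        for doc_id, partial in index.get(term, []):
            scores[doc_id] += partial
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]

# Toy usage with a trivial term-frequency stand-in for the learned scorer.
docs = {"d1": "neural ranking models", "d2": "inverted index for retrieval"}
index = build_term_index(docs, term_score=lambda t, d: float(d.split().count(t)))
print(score_query(index, "neural retrieval"))

In the thesis the per-term scorer is a deep model; the decomposition is what allows its outputs to be precomputed and retrieved through standard inverted-index machinery.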


Author(s):  
Krzysztof Jurczuk
Marcin Czajkowski
Marek Kretowski

Abstract This paper concerns the evolutionary induction of decision trees (DT) for large-scale data. Such a global approach is one of the alternatives to top-down inducers: it searches for the tree structure and the tests simultaneously and thus, in many situations, improves the prediction accuracy and size of the resulting classifiers. However, this population-based, iterative approach can be too computationally demanding to apply directly to big data mining. The paper demonstrates that this barrier can be overcome by smart distributed/parallel processing. Moreover, we ask whether the global approach can truly compete with greedy systems for large-scale data. For this purpose, we propose a novel multi-GPU approach. It incorporates the knowledge of global DT induction and evolutionary algorithm parallelization, together with efficient utilization of GPU memory and compute resources. The search for the tree structure and the tests is performed on a CPU, while the fitness calculations are delegated to GPUs. A data-parallel decomposition strategy and the CUDA framework are applied. Experimental validation is performed on both artificial and real-life datasets. In both cases, the obtained acceleration is very satisfactory: the solution is able to process even billions of instances in a few hours on a single workstation equipped with 4 GPUs. The impact of the data characteristics (size and dimension) on the convergence and speedup of the evolutionary search is also shown. When the number of GPUs grows, nearly linear scalability is observed, which suggests that the data-size boundaries for evolutionary DT mining are fading.
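To make the decomposition concrete, here is a minimal, purely illustrative Python sketch (hypothetical names, sequential CPU loop standing in for the GPUs) of data-parallel fitness evaluation: the training set is split into one chunk per device, each chunk yields a partial error count, and the partial results are reduced into a single fitness value.

import numpy as np

# Minimal sketch (not the authors' code) of the data-parallel fitness
# decomposition: one chunk per GPU, a partial error count per chunk,
# and a reduction into an accuracy-style fitness on the CPU.
def data_parallel_fitness(tree_predict, X, y, n_gpus=4):
    x_chunks = np.array_split(X, n_gpus)
    y_chunks = np.array_split(y, n_gpus)
    partial_errors = []
    for xc, yc in zip(x_chunks, y_chunks):      # in the paper: one GPU per chunk
        partial_errors.append(int(np.sum(tree_predict(xc) != yc)))
    return 1.0 - sum(partial_errors) / len(y)

# Toy usage with a trivial "decision stump" in place of an evolved tree.
X = np.random.default_rng(1).normal(size=(10_000, 5))
y = (X[:, 0] > 0).astype(int)
stump = lambda data: (data[:, 0] > 0.1).astype(int)
print(f"fitness = {data_parallel_fitness(stump, X, y):.3f}")

Because each chunk is evaluated independently, adding devices shrinks the per-device workload, which is the source of the near-linear scalability reported in the abstract.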


Author(s):  
Gianluca Bardaro
Alessio Antonini
Enrico Motta

Abstract Over the last two decades, several deployments of robots for in-house assistance of older adults have been trialled. However, these solutions are mostly prototypes and remain unused in real-life scenarios. In this work, we review the historical and current landscape of the field to understand why robots have yet to succeed as personal assistants in daily life. Our analysis focuses on two complementary aspects: the capabilities of the physical platform and the logic of the deployment. The former analysis shows regularities in hardware configurations and functionalities, leading to the definition of a set of six application-level capabilities (exploration, identification, remote control, communication, manipulation, and digital situatedness). The latter focuses on the impact of robots on the daily life of users and categorises the deployment of robots for healthcare interventions using three types of services: support, mitigation, and response. Our investigation reveals that the value of healthcare interventions is limited by a stagnation of functionalities and a disconnection between the robotic platform and the design of the intervention. To address this issue, we propose a novel co-design toolkit, which uses an ecological framework for robot interventions in the healthcare domain. Our approach connects robot capabilities with known geriatric factors to create a holistic view encompassing both the physical platform and the logic of the deployment. As a case-study-based validation, we discuss the use of the toolkit in the pre-design of the robotic platform for a pilot intervention, part of the large-scale pilot of the EU H2020 GATEKEEPER project.


2021
Vol 5 (1)
pp. 14
Author(s):
Christos Makris
Georgios Pispirigos

Nowadays, due to the extensive use of information networks in a broad range of fields, e.g., bio-informatics, sociology, digital marketing, computer science, etc., graph theory applications have attracted significant scientific interest. Due to its apparent abstraction, community detection has become one of the most thoroughly studied graph partitioning problems. However, the existing algorithms principally propose iterative solutions of high polynomial order that repetitively require exhaustive analysis. These methods can undoubtedly be considered excessively resource-demanding, unscalable, and inapplicable to big data graphs, such as today's social networks. In this article, a novel, near-linear, and highly scalable community prediction methodology is introduced. Specifically, using a distributed, stacking-based model, which is built on plain network topology characteristics of bootstrap-sampled subgraphs, the underlying community hierarchy of any given social network is efficiently extracted regardless of its size and density. The effectiveness of the proposed methodology has been thoroughly examined on numerous real-life social networks and has proven superior to various similar approaches in terms of performance, stability, and accuracy.
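As a rough illustration of the ingredients (bootstrap-sampled subgraphs, plain topology features, a stacking classifier), the following Python sketch uses hypothetical feature choices and the karate-club reference communities as edge labels; it is not the authors' pipeline.

import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression

# Plain topology features for an edge (u, v): shared neighbours, neighbourhood
# overlap, and the two endpoint degrees.
def edge_features(G, u, v):
    cn = len(list(nx.common_neighbors(G, u, v)))
    jac = next(nx.jaccard_coefficient(G, [(u, v)]))[2]
    return [cn, jac, G.degree(u), G.degree(v)]

G = nx.karate_club_graph()
club = nx.get_node_attributes(G, "club")          # reference communities
rng = np.random.default_rng(0)

X, y = [], []
for _ in range(20):                               # bootstrap-sampled subgraphs
    nodes = rng.choice(G.number_of_nodes(), size=25, replace=False).tolist()
    S = G.subgraph(nodes)
    for u, v in S.edges():
        X.append(edge_features(S, u, v))
        y.append(int(club[u] == club[v]))         # intra-community edge?

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression())
stack.fit(X, y)

pred = stack.predict([edge_features(G, u, v) for u, v in G.edges()])
print(f"{int(pred.sum())}/{G.number_of_edges()} edges predicted intra-community")

Predicted intra-community edges can then be used to assemble communities on the full graph, which is where the near-linear cost argument of the paper comes in: only local topology features are computed, never a global iterative optimisation.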


Author(s):  
Clemens M. Lechner
Nivedita Bhaktha
Katharina Groskurth
Matthias Bluemke

Abstract Measures of cognitive or socio-emotional skills from large-scale assessment surveys (LSAS) are often based on advanced statistical models and scoring techniques unfamiliar to applied researchers. Consequently, applied researchers working with data from LSAS may be uncertain about the assumptions and computational details of these statistical models and scoring techniques and about how to best incorporate the resulting skill measures in secondary analyses. The present paper is intended as a primer for applied researchers. After a brief introduction to the key properties of skill assessments, we give an overview of the three principal methods with which secondary analysts can incorporate skill measures from LSAS in their analyses: (1) as test scores (i.e., point estimates of individual ability), (2) through structural equation modeling (SEM), and (3) in the form of plausible values (PVs). We discuss the advantages and disadvantages of each method based on three criteria: fallibility (i.e., control for measurement error and unbiasedness), usability (i.e., ease of use in secondary analyses), and immutability (i.e., consistency of test scores, PVs, or measurement model parameters across different analyses and analysts). We show that although none of the methods is optimal under all criteria, methods that result in a single point estimate of each respondent's ability (i.e., all types of "test scores") are rarely optimal for research purposes. Instead, approaches that avoid or correct for measurement error, especially PV methodology, stand out as the method of choice. We conclude with practical recommendations for secondary analysts and data-producing organizations.
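In the PV approach, secondary analysts run their analysis once per plausible value and pool the results with Rubin's combining rules; a minimal Python sketch of that pooling step (generic, not tied to any particular LSAS) is shown below.

import numpy as np

def pool_plausible_values(estimates, variances):
    """Combine one (estimate, sampling variance) pair per PV analysis
    using Rubin's (1987) rules for multiply imputed data."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    q_bar = estimates.mean()                 # pooled point estimate
    w = variances.mean()                     # within-imputation variance
    b = estimates.var(ddof=1)                # between-imputation variance
    t = w + (1.0 + 1.0 / m) * b              # total variance
    return q_bar, np.sqrt(t)

# Toy usage: a regression coefficient and its squared standard error from
# analyses run separately on, say, M = 5 plausible values of the skill measure.
betas = [0.42, 0.45, 0.40, 0.44, 0.43]
ses_squared = [0.02**2] * 5
beta_pooled, se_pooled = pool_plausible_values(betas, ses_squared)
print(f"pooled beta = {beta_pooled:.3f}, pooled SE = {se_pooled:.4f}")

The between-imputation term is what propagates the measurement uncertainty of the skill measure into the secondary analysis, which is why PVs control for measurement error in a way that single test scores cannot.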


Signals
2021
Vol 2 (3)
pp. 434-455
Author(s):
Sujan Kumar Roy
Kuldip K. Paliwal

Inaccurate estimates of the linear prediction coefficients (LPC) and noise variance introduce bias in the Kalman filter (KF) gain and degrade speech enhancement performance. Existing methods propose a tuning of the biased Kalman gain, particularly in stationary noise conditions. This paper introduces a tuning of the KF gain for speech enhancement in real-life noise conditions. First, we estimate the noise in each noisy speech frame using a speech presence probability (SPP) method to compute the noise variance. Then, we construct a whitening filter (with its coefficients computed from the estimated noise) to pre-whiten each noisy speech frame prior to computing the speech LPC parameters. We then construct the KF with the estimated parameters, where the robustness metric offsets the bias in the KF gain during speech absence of the noisy speech and the sensitivity metric does so during speech presence, to achieve better noise reduction. The noise variance and the speech model parameters are adopted as a speech activity detector. The reduced-bias Kalman gain enables the KF to minimize the noise effect significantly, yielding the enhanced speech. Objective and subjective scores on the NOIZEUS corpus demonstrate that the enhanced speech produced by the proposed method exhibits higher quality and intelligibility than some benchmark methods.
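A minimal Python sketch of the pre-whitening step (illustrative only, with autocorrelation-method LPC and synthetic coloured noise standing in for the SPP-based noise estimate; not the authors' implementation):

import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def lpc_autocorr(x, order):
    """LPC coefficients a_1..a_p via the autocorrelation method (Toeplitz solve)."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    return solve_toeplitz(r[:order], r[1:order + 1])

def prewhiten(noisy_frame, noise_estimate, order=10):
    """Build A(z) = 1 - sum_k a_k z^{-k} from the noise LPC and filter the frame."""
    a = lpc_autocorr(noise_estimate, order)
    return lfilter(np.concatenate(([1.0], -a)), [1.0], noisy_frame)

# Toy usage: AR(1) "noise" plays the role of the estimated noise signal.
rng = np.random.default_rng(0)
noise = lfilter([1.0], [1.0, -0.8], rng.normal(size=4096))
frame = noise[:320]                               # e.g. a 20 ms frame at 16 kHz
whitened = prewhiten(frame, noise, order=10)
print(f"frame var {frame.var():.3f} -> whitened var {whitened.var():.3f}")

The speech LPC parameters and the KF gain tuning described in the abstract are then computed on such pre-whitened frames.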

