Data‐Driven simulation of inelastic materials using structured data sets and tangential transition rules

PAMM ◽  
2021 ◽  
Vol 20 (1) ◽  
Author(s):  
Kerem Ciftci ◽  
Klaus Hackl


Algorithms ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 154
Author(s):  
Marcus Walldén ◽  
Masao Okita ◽  
Fumihiko Ino ◽  
Dimitris Drikakis ◽  
Ioannis Kokkinakis

Increasing processing capabilities and input/output constraints of supercomputers have increased the use of co-processing approaches, i.e., visualizing and analyzing data sets of simulations on the fly. We present a method that evaluates the importance of different regions of simulation data and a data-driven approach that uses the proposed method to accelerate in-transit co-processing of large-scale simulations. We use the importance metrics to simultaneously employ multiple compression methods on different data regions. Our approach strives to adaptively compress data on the fly and uses load balancing to counteract memory imbalances. We demonstrate the method’s efficiency through a fluid mechanics application, a Richtmyer–Meshkov instability simulation, showing how to accelerate the in-transit co-processing of simulations. The results show that the proposed method can expeditiously identify regions of interest, even when using multiple metrics. Our approach achieved a speedup of 1.29× in a lossless scenario. The data decompression time was sped up by 2× compared to using a single compression method uniformly.
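The abstract does not specify the importance metrics or the compressor mix; as a minimal sketch of the general idea — scoring each region and picking a compression setting per region — one can assume variance as a stand-in importance metric and two zlib levels as the two compressors:

```python
import zlib

def region_importance(region):
    # Stand-in importance metric (an assumption): variance of the values.
    mean = sum(region) / len(region)
    return sum((v - mean) ** 2 for v in region) / len(region)

def compress_regions(regions, threshold=1.0):
    """Pick a compressor per region from its importance: important regions
    get fast, light compression (cheap to decompress during co-processing),
    while unimportant regions get aggressive compression to save memory."""
    result = []
    for region in regions:
        payload = b"".join(int(v).to_bytes(4, "little", signed=True)
                           for v in region)
        level = 1 if region_importance(region) >= threshold else 9
        result.append((level, zlib.compress(payload, level)))
    return result

# A turbulent (high-variance) region and a quiescent (flat) one:
packed = compress_regions([[1, 2, 3, 100], [5, 5, 5, 5]])
```

Here lossless zlib stands in for the paper's compressors; in the lossy scenarios the abstract mentions, the low-importance branch could instead reduce precision rather than only raise the compression level.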


2020 ◽  
Vol 45 (s1) ◽  
pp. 535-559
Author(s):  
Christian Pentzold ◽  
Lena Fölsche

Our article examines how journalistic reports and online comments have made sense of computational politics. It treats the discourse around data-driven campaigns as its object of analysis and codifies four main perspectives that have structured the debates about the use of large data sets and data analytics in elections. We study American, British, and German sources on the 2016 United States presidential election, the 2017 United Kingdom general election, and the 2017 German federal election. There, groups of speakers maneuvered between enthusiastic, skeptical, agnostic, or admonitory stances and so could not be clearly mapped onto these four discursive positions. Alongside these inconsistent accounts, public sensemaking was marked by an atmosphere of speculation about the substance and effects of computational politics. We conclude that this equivocality helped journalists and commentators to sideline prior reporting on the issue in order to repeatedly rediscover the practices they had already covered.


2015 ◽  
Vol 639 ◽  
pp. 21-30 ◽  
Author(s):  
Stephan Purr ◽  
Josef Meinhardt ◽  
Arnulf Lipp ◽  
Axel Werner ◽  
Martin Ostermair ◽  
...  

Data-driven quality evaluation in the stamping process of car body parts is quite promising because dependencies in the process have not yet been sufficiently researched. However, applying data mining methods to the process in stamping plants would require a large number of sample data sets. Today, acquiring these data represents a major challenge, because the necessary data are inadequately measured, recorded, or stored. Thus, the preconditions for sample data acquisition must first be created before any correlations can be investigated. In addition, the process conditions change over time due to wear mechanisms; the results therefore do not remain valid, and constant data acquisition is required. In this publication, the current situation in stamping plants regarding process robustness will first be discussed and the need for data-driven methods will be shown. Subsequently, the state of technology regarding the collection of sample data sets for quality analysis in producing car body parts will be reviewed. Finally, an overview will be provided of how this data collection was implemented at BMW and what potential can be expected.


Author(s):  
Heiko Paulheim ◽  
Christian Bizer

Linked Data on the Web is either created from structured data sources (such as relational databases), from semi-structured sources (such as Wikipedia), or from unstructured sources (such as text). In the latter two cases, the generated Linked Data will likely be noisy and incomplete. In this paper, we present two algorithms that exploit statistical distributions of properties and types to enhance the quality of incomplete and noisy Linked Data sets: SDType adds missing type statements, and SDValidate identifies faulty statements. Neither algorithm uses external knowledge, i.e., they operate only on the data itself. We evaluate the algorithms on the DBpedia and NELL knowledge bases, showing that they are both accurate and scalable. Both algorithms have been used for building the DBpedia 3.9 release: with SDType, 3.4 million missing type statements were added, while with SDValidate, 13,000 erroneous RDF statements were removed from the knowledge base.
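SDType's exact weighting scheme is defined in the paper; its underlying idea — a weighted vote over per-property type distributions, learned from the data itself with no external knowledge — can be sketched roughly as follows, where the properties, weights, and probabilities are made up for illustration:

```python
from collections import defaultdict

# Hypothetical per-property type distributions, of the kind SDType would
# estimate from the knowledge base: P(resource has type t | it has property p).
prop_type_dist = {
    "dbo:birthPlace": {"dbo:Person": 0.98, "dbo:Organisation": 0.02},
    "dbo:almaMater":  {"dbo:Person": 0.99, "dbo:Organisation": 0.01},
}
# Per-property weights (also an assumption), e.g. reflecting how
# discriminative each property is for typing.
prop_weight = {"dbo:birthPlace": 0.9, "dbo:almaMater": 0.8}

def sdtype_infer(props, threshold=0.5):
    """Weighted vote over the type distributions of a resource's
    properties; keep types whose normalized score clears the threshold."""
    scores = defaultdict(float)
    total = sum(prop_weight[p] for p in props)
    for p in props:
        for t, prob in prop_type_dist[p].items():
            scores[t] += prop_weight[p] * prob
    return {t: s / total for t, s in scores.items() if s / total >= threshold}

# An untyped resource with dbo:birthPlace and dbo:almaMater statements
# is voted to be a dbo:Person:
inferred = sdtype_infer(["dbo:birthPlace", "dbo:almaMater"])
```

The threshold plays the same role as SDType's confidence cutoff: low-scoring candidate types (here dbo:Organisation) are simply not asserted.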


2011 ◽  
Vol 83 (6) ◽  
pp. 2075-2082 ◽  
Author(s):  
Caroline J. Sands ◽  
Muireann Coen ◽  
Timothy M. D. Ebbels ◽  
Elaine Holmes ◽  
John C. Lindon ◽  
...  

2018 ◽  
Vol 2 (10) ◽  
pp. 735-742 ◽  
Author(s):  
Martin Gerlach ◽  
Beatrice Farb ◽  
William Revelle ◽  
Luís A. Nunes Amaral

Author(s):  
Patrick Gelß ◽  
Stefan Klus ◽  
Jens Eisert ◽  
Christof Schütte

A key task in the field of modeling and analyzing nonlinear dynamical systems is the recovery of unknown governing equations from measurement data only. There is a wide range of application areas for this important instance of system identification, ranging from industrial engineering and acoustic signal processing to stock market models. In order to find appropriate representations of underlying dynamical systems, various data-driven methods have been proposed by different communities. However, if the given data sets are high-dimensional, then these methods typically suffer from the curse of dimensionality. To significantly reduce the computational costs and storage consumption, we propose the method multidimensional approximation of nonlinear dynamical systems (MANDy) which combines data-driven methods with tensor network decompositions. The efficiency of the introduced approach will be illustrated with the aid of several high-dimensional nonlinear dynamical systems.
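MANDy itself couples this kind of regression with tensor-network decompositions to escape the curse of dimensionality; as a scalar toy version of the underlying identification step — not the paper's method — one can recover the coefficient of an assumed model dx/dt = a·x from measurement data by least squares:

```python
def recover_coefficient(xs, dxs):
    # Least-squares fit of dx/dt = a * x:  a = sum(x * dx) / sum(x * x).
    return sum(x * d for x, d in zip(xs, dxs)) / sum(x * x for x in xs)

# Synthetic measurements of the "unknown" system dx/dt = -0.5 * x:
xs = [1.0, 2.0, 3.0, 4.0]
dxs = [-0.5 * x for x in xs]
a = recover_coefficient(xs, dxs)  # recovers a = -0.5
```

In the high-dimensional setting, the candidate library of basis functions grows combinatorially, which is exactly where the tensor decompositions in MANDy reduce computational cost and storage.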


2019 ◽  
Vol 8 (12) ◽  
pp. 584 ◽  
Author(s):  
Bernd Resch ◽  
Michael Szell

Due to the widespread use of disruptive digital technologies like mobile phones, cities have transitioned from data-scarce to data-rich environments. As a result, the field of geoinformatics is being reshaped and challenged to develop adequate data-driven methods. At the same time, the term "smart city" is increasingly being applied in urban planning, reflecting the aims of different stakeholders to create value out of the new data sets. However, many smart city research initiatives are promoting techno-positivistic approaches which do not sufficiently account for citizens’ needs. In this paper, we review the state of quantitative urban studies under this new perspective, and critically discuss the development of smart city programs. We conclude with a call for a new anti-disciplinary, human-centric urban data science, and a well-reflected use of technology and data collection in smart city planning. Finally, we introduce the papers of this special issue, which focus on providing a more human-centric view on data-driven urban studies, spanning topics from cycling and wellbeing to mobility and land use.

