Efficient Experimental and Data-Centered Workflow for Microstructure-Based Fatigue Data

Author(s):  
A. R. Durmaz ◽  
N. Hadzic ◽  
T. Straub ◽  
C. Eberl ◽  
P. Gumbsch

Abstract
Background: Early fatigue mechanisms for various materials are yet to be unveiled for the (very) high-cycle fatigue (VHCF) regime. This can be ascribed to a lack of available data capturing initial fatigue damage evolution, which continues to hamper data scientists and computational modeling experts attempting to derive microstructural dependencies from small-sample-size data and incomplete feature representations.
Objective: The aim of this work is to address this lack of data and to drive the digital transformation of materials such that future virtual component design can be rendered more reliable and more efficient. Achieving this relies on fatigue models that comprehensively capture all relevant dependencies.
Methods: To this end, this work proposes a combined experimental and data post-processing workflow to efficiently establish multimodal fatigue crack initiation and propagation data sets. It revolves around fatigue testing of mesoscale specimens to increase damage detection sensitivity, data fusion through multimodal registration to address data heterogeneity, and image-based data-driven damage localization.
Results: A highly automated workflow is established that links large distortion-corrected microstructure data with damage localization and evolution kinetics. The workflow enables cycling up to the VHCF regime in comparatively short time spans while maintaining unprecedented time resolution of damage evolution. The resulting data sets capture the interaction of damage with microstructural features and hold the potential to unravel a mechanistic understanding.
Conclusions: The proposed workflow lays the foundation for future data mining and data-driven modeling of microstructural fatigue by providing statistically meaningful data sets extendable to a wide range of materials.
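
As a minimal illustration of the data-fusion step (not the authors' pipeline): one simple form of multimodal registration is translation-only alignment via phase cross-correlation, sketched here assuming scikit-image and two roughly pre-scaled 2D images.

```python
# Illustrative sketch, assuming translation-only misalignment between two
# imaging modalities of the same specimen region (not the authors' method).
import numpy as np
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

def register_modalities(microstructure: np.ndarray, damage_img: np.ndarray) -> np.ndarray:
    """Estimate the translation between two modalities and align the second."""
    offset, error, _ = phase_cross_correlation(
        microstructure, damage_img, upsample_factor=10
    )
    # Shift the damage image into the microstructure reference frame.
    return nd_shift(damage_img, shift=offset)
```

In practice, distortion correction as described above would require a richer (e.g., affine or non-rigid) transformation model; this fragment only conveys the registration idea.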

Author(s):  
Patrick Gelß ◽  
Stefan Klus ◽  
Jens Eisert ◽  
Christof Schütte

A key task in the field of modeling and analyzing nonlinear dynamical systems is the recovery of unknown governing equations from measurement data alone. There is a wide range of application areas for this important instance of system identification, ranging from industrial engineering and acoustic signal processing to stock market models. In order to find appropriate representations of underlying dynamical systems, various data-driven methods have been proposed by different communities. However, if the given data sets are high-dimensional, these methods typically suffer from the curse of dimensionality. To significantly reduce the computational costs and storage consumption, we propose multidimensional approximation of nonlinear dynamical systems (MANDy), a method which combines data-driven methods with tensor network decompositions. The efficiency of the introduced approach is illustrated with the aid of several high-dimensional nonlinear dynamical systems.
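
MANDy's tensor-network machinery is beyond a short snippet, but the underlying library-regression idea it scales up can be sketched in a few lines. The dense formulation below (a hypothetical identify helper over a fixed monomial library) is purely illustrative:

```python
# Minimal sketch of library-based system identification: recover coefficients
# Xi such that dX ≈ Theta(X) @ Xi. MANDy replaces the dense library matrix
# Theta(X) with a tensor-train representation; this version is illustrative.
import numpy as np

def identify(X: np.ndarray, dX: np.ndarray) -> np.ndarray:
    """X: (m, d) state snapshots; dX: (m, d) time derivatives."""
    # Candidate library: constant, linear, and elementwise quadratic terms.
    theta = np.hstack([np.ones((X.shape[0], 1)), X, X ** 2])
    xi, *_ = np.linalg.lstsq(theta, dX, rcond=None)
    return xi
```

For high-dimensional systems the number of library terms grows combinatorially, which is exactly the cost the tensor decomposition in MANDy is designed to avoid.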


2019 ◽  
Vol 37 (15_suppl) ◽  
pp. e14223-e14223
Author(s):  
Qimin Quan

Background: Cytokine release syndrome (CRS), a systemic inflammatory response observed with monoclonal antibody drugs and adoptive T cell treatments, has become a major issue for CAR-T therapy. CRS can range from a mild reaction requiring minimally invasive supportive care to a severe systemic response resulting in patient death. Monitoring this response during these therapeutic treatments is non-trivial due to the wide range of biomarker concentrations, small sample volumes, and long assay times. Current analytical methods are unable to address these needs, limiting the precision of CAR-T therapy and the effective management of its side effects.
Methods: Emerging studies in this area have focused on establishing a panel of predictive biomarkers to manage dosing and early interventions; among them, IFNγ, IL6, TNFα, and MIP1 have shown predictive power in pediatric patients. Nevertheless, a significant improvement (100×) in detection sensitivity over currently available methods is required to predict the CRS response. In addition, CRS-associated biomarkers such as CRP and ferritin vary from 10 ng/mL to 10 mg/mL, while other predictive biomarkers (e.g., IL6, IFNγ) vary from 1 pg/mL to 100 ng/mL. At present, no analytical tool known to us can provide this large dynamic range (>9 logs), with the requisite lower limit of detection, in a rapid single test to predict and differentiate low-, medium-, or high-grade responses.
Results: We present the NanoMosaic platform, a technology with the requisite sensitivity and breadth of dynamic range to enable precision detection based on precise quantitation of CRS-relevant biomarkers. NanoMosaic is enabled by single-molecule nanoneedle sensors that are densely integrated on a silicon chip and manufactured with a CMOS-compatible process. Absolute quantitation is achieved by imaging the spectrum of nanoneedles, corresponding directly to the number of molecules.
Conclusions: Direct comparison of different protein biomarkers whose concentrations vary by orders of magnitude becomes possible in one platform with small sample volumes. We envision that NanoMosaic technology will not only drive biomarker discovery but also enable precise dosing management for CAR-T therapy.
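
A back-of-the-envelope check of the quoted span, using only the concentration bounds stated above:

```python
# Sanity check of the quoted dynamic range, from 1 pg/mL (lower end of
# predictive markers such as IL6) to 10 mg/mL (upper end of CRS-associated
# markers such as CRP); both expressed in g/mL.
import math

low = 1e-12   # 1 pg/mL
high = 1e-2   # 10 mg/mL
print(math.log10(high / low))  # 10.0 orders of magnitude, i.e. ">9 logs"
```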


2019 ◽  
pp. 40-46 ◽  
Author(s):  
V.V. Savchenko ◽  
A.V. Savchenko

We consider the task of automated quality control of sound recordings containing voice samples of individuals. It is shown that the most acute problem in this task is the small sample size. In order to overcome this problem, we propose a novel method of acoustic measurements based on the relative stability of the pitch frequency within a voice sample of short duration. An example of its practical implementation using an inter-periodic accumulation of a speech signal is considered. An experimental study with specially developed software provides statistical estimates of the effectiveness of the proposed method in noisy environments. It is shown that this method rejects an audio recording as unsuitable for voice biometric identification with a probability of 0.95 or more for a signal-to-noise ratio below 15 dB. The obtained results are intended for use in the development of new, and the modification of existing, systems for the collection and automated quality control of biometric personal data. The article is intended for a wide range of specialists in the field of acoustic measurements and digital processing of speech signals, as well as for practitioners who organize the work of authorized organizations in preparing samples of biometric personal data for registration.
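
As a rough illustration of pitch-stability screening (assumed details throughout: the frame length, tolerance, and autocorrelation estimator below are illustrative choices, not the authors' exact method):

```python
# Illustrative sketch: accept a short voice sample only if its pitch (F0)
# track is relatively stable across frames. All thresholds are assumptions.
import numpy as np

def f0_autocorr(frame: np.ndarray, fs: int, fmin=70.0, fmax=400.0) -> float:
    """Crude autocorrelation-based pitch estimate for one frame."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

def pitch_is_stable(signal: np.ndarray, fs: int, frame_len=0.04, tol=0.05) -> bool:
    """Reject the recording if F0 varies too much across 40 ms frames."""
    n = int(frame_len * fs)
    frames = [signal[i:i + n] for i in range(0, len(signal) - n, n)]
    f0 = np.array([f0_autocorr(f, fs) for f in frames])
    return np.std(f0) / np.mean(f0) < tol
```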


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Eleanor F. Miller ◽  
Andrea Manica

Abstract
Background: Today an unprecedented amount of genetic sequence data is stored in publicly available repositories. For decades now, mitochondrial DNA (mtDNA) has been the workhorse of genetic studies, and as a result there is a large volume of mtDNA data available in these repositories for a wide range of species. Indeed, whilst whole genome sequencing is an exciting prospect for the future, for most non-model organisms classical markers such as mtDNA remain widely used. By compiling existing data from multiple original studies, it is possible to build powerful new datasets capable of exploring many questions in ecology, evolution and conservation biology. One key question that these data can help inform is what happened in a species' demographic past. However, compiling data in this manner is not trivial; there are many complexities associated with data extraction, data quality and data handling.
Results: Here we present the mtDNAcombine package, a collection of tools developed to manage some of the major decisions associated with handling multi-study sequence data, with a particular focus on preparing sequence data for Bayesian skyline plot demographic reconstructions.
Conclusions: There is now more genetic information available than ever before, and large meta-datasets offer great opportunities to explore new and exciting avenues of research. However, compiling multi-study datasets remains a technically challenging prospect. The mtDNAcombine package provides a pipeline to streamline the process of downloading, curating, and analysing sequence data, guiding the process of compiling data sets from the online database GenBank.
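
mtDNAcombine itself is an R package; purely as an illustration of the kind of GenBank retrieval step such a pipeline automates, here is a Python/Biopython sketch (the accession numbers are placeholders):

```python
# Illustrative GenBank download step, of the kind the mtDNAcombine pipeline
# streamlines. Accession IDs are hypothetical placeholders.
from Bio import Entrez, SeqIO

Entrez.email = "you@example.org"  # NCBI requires a contact address
accessions = ["AB000001", "AB000002"]  # placeholder accession numbers

handle = Entrez.efetch(db="nucleotide", id=",".join(accessions),
                       rettype="gb", retmode="text")
for record in SeqIO.parse(handle, "genbank"):
    print(record.id, record.description, len(record.seq))
handle.close()
```

The hard part the package addresses comes after this step: curating heterogeneous multi-study records into alignments suitable for Bayesian skyline analyses.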


Algorithms ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 154
Author(s):  
Marcus Walldén ◽  
Masao Okita ◽  
Fumihiko Ino ◽  
Dimitris Drikakis ◽  
Ioannis Kokkinakis

Increasing processing capabilities and input/output constraints of supercomputers have increased the use of co-processing approaches, i.e., visualizing and analyzing data sets of simulations on the fly. We present a method that evaluates the importance of different regions of simulation data, and a data-driven approach that uses this method to accelerate in-transit co-processing of large-scale simulations. We use the importance metrics to simultaneously employ multiple compression methods on different data regions. Our approach strives to adaptively compress data on the fly and uses load balancing to counteract memory imbalances. We demonstrate the method's efficiency through a fluid mechanics application, a Richtmyer–Meshkov instability simulation. The results show that the proposed method can expeditiously identify regions of interest, even when using multiple metrics. Our approach achieved a speedup of 1.29× in a lossless scenario, and the data decompression time was sped up by 2× compared to using a single compression method uniformly.
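
As a rough illustration of the idea (with an assumed variability-based importance metric and zlib levels standing in for the multiple compression methods; neither is claimed to match the paper's setup):

```python
# Illustrative sketch: rank data regions by an importance proxy and choose a
# compression setting per region. Metric and codec choices are assumptions.
import zlib
import numpy as np

def importance(region: np.ndarray) -> float:
    """Simple proxy metric: local variability within the region."""
    return float(np.std(region))

def compress_regions(regions: list, threshold: float) -> list:
    out = []
    for r in regions:
        # Important regions get fast, light compression so they are quick to
        # decompress for analysis; the rest get stronger, slower compression.
        level = 1 if importance(r) >= threshold else 9
        out.append(zlib.compress(r.tobytes(), level))
    return out
```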


2021 ◽  
pp. 204141962199349
Author(s):  
Jordan J Pannell ◽  
George Panoutsos ◽  
Sam B Cooke ◽  
Dan J Pope ◽  
Sam E Rigby

Accurate quantification of the blast load arising from detonation of a high explosive has applications in transport security, infrastructure assessment and defence. In order to design efficient and safe protective systems in such aggressive environments, it is of critical importance to understand the magnitude and distribution of loading on a structural component located close to an explosive charge. In particular, peak specific impulse is the primary parameter that governs structural deformation under short-duration loading. Within this so-called extreme near-field region, existing semi-empirical methods are known to be inaccurate, and high-fidelity numerical schemes are generally hampered by a lack of available experimental validation data. As such, the blast protection community is not currently equipped with a satisfactory fast-running tool for load prediction in the near-field. In this article, a validated computational model is used to develop a suite of numerical near-field blast load distributions, which are shown to follow a similar normalised shape. This forms the basis of the data-driven predictive model developed herein: a Gaussian function is fit to the normalised loading distributions, and a power law is used to calculate the magnitude of the curve according to established scaling laws. The predictive method is rigorously assessed against the existing numerical dataset, and is validated against new test models and available experimental data. High levels of agreement are demonstrated throughout, with typical variations of <5% between experiment/model and prediction. The new approach presented in this article allows the analyst to rapidly compute the distribution of specific impulse across the loaded face of a wide range of target sizes and near-field scaled distances, and provides a benchmark for data-driven modelling approaches to capture blast loading phenomena in more complex scenarios.
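
The structure of the surrogate model lends itself to a compact sketch. Below is a minimal illustration, with placeholder parameter names (sigma, a, b) standing in for fitted coefficients; the exact functional forms and fitted values are in the article and not reproduced here:

```python
# Minimal sketch of the surrogate model structure described above: a Gaussian
# shape for the normalised impulse distribution, scaled by a power law in
# scaled distance Z. Parameters sigma, a, b are placeholders to be fitted.
import numpy as np
from scipy.optimize import curve_fit

def normalised_shape(theta, sigma):
    """Normalised specific impulse as a function of angle of incidence theta."""
    return np.exp(-theta**2 / (2.0 * sigma**2))

def peak_magnitude(Z, a, b):
    """Peak specific impulse magnitude as a power law in scaled distance Z."""
    return a * Z**b

# Example fitting step against numerically generated data (theta_data, i_norm):
# (sigma_opt,), _ = curve_fit(normalised_shape, theta_data, i_norm)
```

The appeal of this two-part structure is speed: once fitted, evaluating the surrogate is trivial compared with running a high-fidelity simulation.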


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1109
Author(s):  
Varnakavi. Naresh ◽  
Nohyun Lee

A biosensor is an integrated receptor-transducer device which can convert a biological response into an electrical signal. The design and development of biosensors have taken center stage for researchers in the recent decade owing to the wide range of biosensor applications, such as health care and disease diagnosis, environmental monitoring, water and food quality monitoring, and drug delivery. The main challenges involved in biosensor progress are (i) efficiently capturing biorecognition signals and transforming them into electrochemical, electrical, optical, gravimetric, or acoustic signals (the transduction process); (ii) enhancing transducer performance, i.e., increasing sensitivity, shortening response times, improving reproducibility, and lowering detection limits, even down to individual molecules; and (iii) miniaturizing the biosensing devices using micro- and nano-fabrication technologies. These challenges can be met through the integration of sensing technology with nanomaterials, which range from zero- to three-dimensional and possess high surface-to-volume ratios, good conductivities, shock-bearing abilities, and color tunability. Nanomaterials (NMs) employed in the fabrication of nanobiosensors include nanoparticles (NPs) (high stability and high carrier capacity), nanowires (NWs) and nanorods (NRs) (capable of high detection sensitivity), carbon nanotubes (CNTs) (large surface area, high electrical and thermal conductivity), and quantum dots (QDs) (color tunability). Furthermore, these nanomaterials can themselves act as transduction elements. This review summarizes the evolution of biosensors, the types of biosensors based on their receptors and transducers, and modern approaches employed in biosensors using nanomaterials such as NPs (e.g., noble metal NPs and metal oxide NPs), NWs, NRs, CNTs, QDs, and dendrimers, and their recent advancement in biosensing technology with the expansion of nanotechnology.


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Yance Feng ◽  
Lei M. Li

Abstract
Background: Normalization of RNA-seq data aims at identifying biological expression differentiation between samples by removing the effects of unwanted confounding factors. Explicitly or implicitly, the justification of normalization requires a set of housekeeping genes. However, the existence of housekeeping genes common to a very large collection of samples, especially under a wide range of conditions, is questionable.
Results: We propose to carry out pairwise normalization with respect to multiple references selected from representative samples. The pairwise intermediates are then integrated based on a linear model that adjusts for the reference effects. Motivated by the notion of housekeeping genes and their statistical counterparts, we adopt robust least trimmed squares regression in the pairwise normalization. The proposed method (MUREN) is compared with other existing tools on several standard data sets. Our criterion for goodness of normalization emphasizes preserving possible asymmetric differentiation, whose biological significance is exemplified by a single-cell data set of the cell cycle. MUREN is implemented as an R package. The code, under license GPL-3, is available on the github platform: github.com/hippo-yf/MUREN and on the conda platform: anaconda.org/hippo-yf/r-muren.
Conclusions: MUREN performs RNA-seq normalization using a two-step statistical regression induced from a general principle. We propose that the densities of pairwise differentiations be used to evaluate the goodness of normalization. MUREN adjusts the mode of differentiation toward zero while preserving the skewness due to biological asymmetric differentiation. Moreover, by robustly integrating pre-normalized counts with respect to multiple references, MUREN is immune to individual outlier samples.
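
MUREN itself is the R package linked above; the following Python fragment is only a rough sketch of the least-trimmed-squares idea behind a single pairwise normalization step, with simplified details (a single shift parameter on the log scale; the lts_shift helper is hypothetical):

```python
# Illustrative sketch (not the MUREN implementation): estimate a log-scale
# normalisation shift between one sample and one reference by iteratively
# refitting on the least-deviating genes, in the spirit of least trimmed
# squares, so that stable "housekeeping-like" genes dominate the fit.
import numpy as np

def lts_shift(sample: np.ndarray, reference: np.ndarray, keep: float = 0.5) -> float:
    m = (sample > 0) & (reference > 0)
    d = np.log2(sample[m]) - np.log2(reference[m])
    shift = np.median(d)
    for _ in range(10):
        resid = np.abs(d - shift)
        trimmed = d[resid <= np.quantile(resid, keep)]
        shift = trimmed.mean()
    return shift

# Normalised counts: sample / 2**shift. Repeating this against multiple
# references and combining the shifts robustly mimics the integration step.
```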

