Data-Driven Discovery of Material States in Composites under Fatigue Loads

2021 ◽  
Author(s):  
Muthu Ram Elenchezhian ◽  
Vamsee Vadlamudi ◽  
Rassel Raihan ◽  
Kenneth Reifsnider

Our community has widespread knowledge of the damage tolerance and durability of composites, developed over the past few decades through various experimental and computational efforts. Several methods have been used to understand damage behavior and thereby predict material states such as residual strength (damage tolerance) and life (durability) of these material systems. Electrochemical Impedance Spectroscopy (EIS) and Broadband Dielectric Spectroscopy (BbDS) are such methods, which have been proven to identify damage states in composites. Our previous work using the BbDS method has shown that it can serve as a precursor to identify damage levels, indicating the onset of the end of life of the material. As a change in a material state variable is triggered by damage development, the rate of change of these states indicates the rate of damage interaction and can effectively predict impending failure. The Data-Driven Discovery of Models (D3M) program [1] aims to develop model discovery systems, enabling users with domain knowledge but no data science background to create empirical models of real, complex processes. These D3M methods have been developed extensively over the years in various applications, and their use for real-time prediction of complex parameters such as material states in composites needs to be grounded in physics and domain knowledge to be trusted. In this research work, we propose the use of data-driven methods combined with BbDS and progressive damage analysis to identify and hence predict material states in composites subjected to fatigue loads.
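As an illustration of the rate-of-change idea described above, here is a minimal sketch, not the authors' implementation, that tracks a single BbDS-derived state variable (a hypothetical normalized permittivity) over fatigue cycles and flags an accelerating trend as a possible precursor to failure; the threshold and the synthetic data are purely illustrative.

```python
import numpy as np

def failure_precursor(cycles, state_var, accel_threshold=5.0):
    """Flag impending failure when the material state variable accelerates.

    cycles          : 1-D array of fatigue cycle counts
    state_var       : BbDS-derived state variable at each cycle
                      (e.g. a normalized permittivity change; hypothetical)
    accel_threshold : factor by which the current rate must exceed the
                      early-life rate to raise a flag (illustrative value)
    """
    rate = np.gradient(state_var, cycles)                     # rate of change of the state
    baseline_rate = np.mean(np.abs(rate[: len(rate) // 4]))   # early-life reference rate
    flag = np.abs(rate) > accel_threshold * baseline_rate
    return rate, flag

# Synthetic example: slow dielectric drift followed by rapid damage growth
cycles = np.linspace(0, 1e5, 200)
state = 1.0 + 1e-6 * cycles + 0.5 * np.exp((cycles - 1e5) / 5e3)
rate, flag = failure_precursor(cycles, state)
print("first flagged cycle:", cycles[flag][0] if flag.any() else None)
```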

2021 ◽  
Vol 11 (7) ◽  
pp. 3110
Author(s):  
Karina Gibert ◽  
Xavier Angerri

In this paper, the results of the project INSESS-COVID19 are presented, as part of a special call intended to help address the COVID-19 crisis in Catalonia. The technological infrastructure and methodology developed in this project allow the quick screening of a territory for a fast and reliable diagnosis in the face of an unexpected situation, by providing relevant decisional information to support informed decision-making and strategy and policy design. One of the challenges of the project was to extract valuable information from direct participatory processes in which specific target profiles of citizens are consulted, and to distribute participation across the whole territory. Having many variables with a moderate number of citizens involved (in this case about 1000) implies the risk of violating statistical secrecy when multivariate relationships are analyzed, thus putting at risk the anonymity of the participants as well as their safety when vulnerable populations are involved, as is the case in INSESS-COVID19. In this paper, the entire data-driven methodology developed in the project is presented, and the handling of small population subgroups to preserve statistical secrecy is described. The methodology is reusable with any other underlying questionnaire, as the data science and reporting parts are fully automated.
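The abstract does not detail the project's secrecy-preserving procedure; as a rough illustration of the underlying idea, the following sketch suppresses cells of a cross-tabulation that fall below a minimum respondent count (a common threshold rule; the threshold, function name, and questionnaire columns are hypothetical).

```python
import pandas as pd

def safe_crosstab(df, row_var, col_var, min_count=5):
    """Cross-tabulate two categorical variables, suppressing small cells.

    Cells with fewer than `min_count` respondents are masked so that
    multivariate relationships cannot expose individual participants.
    (min_count=5 is an illustrative threshold, not the project's value.)
    """
    table = pd.crosstab(df[row_var], df[col_var])
    return table.where(table >= min_count)   # small cells become NaN (suppressed)

# Hypothetical questionnaire data
df = pd.DataFrame({
    "age_group": ["18-29", "30-44", "30-44", "45-64", "65+", "65+"] * 50,
    "vulnerability": ["low", "high", "low", "high", "high", "low"] * 50,
})
print(safe_crosstab(df, "age_group", "vulnerability"))
```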


2021 ◽  
pp. 095001702097730
Author(s):  
Netta Avnoon

Drawing on theories from the sociology of work and the sociology of culture, this article argues that members of nascent technical occupations construct their professional identity and claim status through an omnivorous approach to skills acquisition. Based on a discursive analysis of 56 semi-structured in-depth interviews with data scientists, data science professors and managers in Israel, it was found that data scientists mobilise the following five resources to construct their identity: (1) ability to bridge the gap between scientist’s and engineer’s identities; (2) multiplicity of theories; (3) intensive self-learning; (4) bridging technical and social skills; and (5) acquiring domain knowledge easily. These resources diverge from former generalist-specialist identity tensions described in the literature as they attribute a higher status to the generalist-omnivore and a lower one to the specialist-snob.


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Sanjana Mondal ◽  
Kaushik Samaddar

Purpose The paper aims to explore the various dimensions of the human factor relevant for integrating data-driven supply chain quality management practices (DDSCQMPs) with organizational performance. Keeping the transition from "Industry 4.0" to "Industry 5.0" in mind, the paper reinforces the role of the human factor and critically discusses the issues and challenges in the present organizational setup. Design/methodology/approach Following the grounded theory approach, the study arranged in-depth interviews and focus group sessions with industry experts from various service-oriented firms in India. Dimensions of the human factor identified there were grouped through a morphological analysis (MA), and interlinkages between them were explored through a cross-consistency matrix. Findings This research work identified 20 critical dimensions of the human factor and grouped them under five categories, namely cohesive force, motivating force, regulating force, supporting force and functional force, that drive quality performance in the supply chain domain. Originality/value In line with the requirements of the present "Industry 4.0" and the forthcoming "Industry 5.0", where the need to integrate the human factor with smart systems takes priority, the paper makes a novel attempt at presenting the critical human factors and categorizing them under important driving forces. The research also contributes to linking DDSCQMPs with organizational performance. The proposed framework can guide future researchers in expanding the theoretical constructs through further cross-cultural studies across industries.


2021 ◽  
Author(s):  
Karen Triep ◽  
Alexander Benedikt Leichtle ◽  
Martin Meister ◽  
Georg Martin Fiedler ◽  
Olga Endrich

BACKGROUND The criteria for the diagnosis of kidney disease outlined in the Kidney Disease: Improving Global Outcomes (KDIGO) guidelines are based on a patient's current, historical and baseline data. The diagnosis of acute (AKI), chronic (CKD) and acute-on-chronic kidney disease requires past measurements of creatinine, back-calculation, and the interpretation of several laboratory values over a certain period. Diagnosis may be hindered by an unclear definition of the individual creatinine baseline and by rough reference ranges set without adjustment for age, ethnicity, comorbidities and treatment. Assigning the correct diagnosis and sufficient staging improves coding, data quality, reimbursement, the choice of therapeutic approach and the patient's outcome. OBJECTIVE With the help of a complex rule engine, a data-driven approach to assigning the diagnoses of acute, chronic and acute-on-chronic kidney disease is applied. METHODS Real-time and retrospective data from the hospital's Clinical Data Warehouse for in- and outpatient cases treated between 2014 and 2019 are used. Delta serum creatinine, baseline values and admission and discharge data are analyzed. A KDIGO-based structured query language (SQL) algorithm assigns specific diagnosis (ICD) codes to inpatient stays. To measure the effect on diagnosis, text mining of discharge documentation is conducted. RESULTS We show that this approach yields an increased number of diagnoses as well as higher precision in documentation and coding (the unspecific diagnosis ICD N19*, coded as a percentage of generated N19 diagnoses, fell from 17.8% in 2016 to 3.3% in 2019). CONCLUSIONS Our data-driven method supports the process and reliability of diagnosis and staging and improves the quality of documentation and data. Measuring patients' outcomes will be the next step of the project.
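The abstract does not reproduce the rule engine's logic; the sketch below shows, in greatly simplified form, the kind of KDIGO-based creatinine rule involved. The creatinine thresholds follow the published KDIGO AKI definition, but the function, data layout, and omission of urine output and eGFR staging are assumptions made for illustration.

```python
import pandas as pd

def classify_kidney_state(creatinine, baseline, has_ckd_history):
    """Very simplified KDIGO-style classification for one inpatient stay.

    creatinine      : pandas Series of serum creatinine (mg/dL), DatetimeIndex
    baseline        : individual baseline creatinine (mg/dL), e.g. pre-admission value
    has_ckd_history : True if chronic kidney disease is already documented
    Returns "AKI", "acute-on-chronic", "CKD", or "no KDIGO criterion met".
    This sketch ignores urine output, eGFR staging and many edge cases.
    """
    creatinine = creatinine.sort_index()
    # KDIGO AKI: rise >= 0.3 mg/dL within 48 h, or >= 1.5 x baseline
    rise_48h = (creatinine - creatinine.rolling("48h").min()).max() >= 0.3
    rel_rise = (creatinine.max() / baseline) >= 1.5
    aki = rise_48h or rel_rise

    if aki and has_ckd_history:
        return "acute-on-chronic"
    if aki:
        return "AKI"
    if has_ckd_history:
        return "CKD"
    return "no KDIGO criterion met"

# Example: a rise from 1.0 to 1.5 mg/dL meets both the absolute and relative criteria
labs = pd.Series([1.0, 1.2, 1.5],
                 index=pd.to_datetime(["2019-03-01", "2019-03-02", "2019-03-03"]))
print(classify_kidney_state(labs, baseline=1.0, has_ckd_history=False))  # -> "AKI"
```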


2021 ◽  
pp. 026638212110619
Author(s):  
Sharon Richardson

During the past two decades, there have been a number of breakthroughs in the fields of data science and artificial intelligence, made possible by advanced machine learning algorithms trained through access to massive volumes of data. However, their adoption and use in real-world applications remains a challenge. This paper posits that a key limitation in making AI applicable has been a failure to modernise the theoretical frameworks needed to evaluate and adopt outcomes. Such a need was anticipated with the arrival of the digital computer in the 1950s but has remained unrealised. This paper reviews how the field of data science emerged and led to rapid breakthroughs in algorithms underpinning research into artificial intelligence. It then discusses the contextual framework now needed to advance the use of AI in real-world decisions that impact human lives and livelihoods.


2018 ◽  
Vol 11 (2) ◽  
pp. 139-158 ◽  
Author(s):  
Thomas G. Cech ◽  
Trent J. Spaulding ◽  
Joseph A. Cazier

Purpose The purpose of this paper is to lay out the data competence maturity model (DCMM) and discuss how the application of the model can serve as a foundation for a measured and deliberate use of data in secondary education. Design/methodology/approach Although the model is new, its implications and its application are derived from key findings and best practices from the software development, data analytics and secondary education performance literature. These principles can guide educators to better manage student and operational outcomes. This work builds the DCMM and applies it to secondary education. Findings The conceptual model reveals significant opportunities to improve data-driven decision making in schools and local education agencies (LEAs). Moving past the first and second stages of the data competence maturity model should allow educators to better incorporate data into the regular decision-making process. Practical implications Moving up the DCMM to better integrate data into the decision-making process has the potential to produce profound improvements for schools and LEAs. Data science is about making better decisions. Understanding the path the DCMM lays out for helping an organization move to a more mature data-driven decision-making process will help improve both student and operational outcomes. Originality/value This paper brings a new concept, the DCMM, to the educational literature and discusses how its principles can be applied to improve decision making by integrating data into the decision-making process and helping the organization mature within this framework.


2019 ◽  
Vol 8 (2S11) ◽  
pp. 3491-3495

The term Data Engineering has not gained as much popularity as terminologies like Data Science or Data Analytics, mainly because the importance of this concept is normally observed or experienced only while working with, handling, or exploring data as a Data Scientist or Data Analyst. Though the author is neither of these two, as an academician with an urge to learn while working with Python, the topic of data engineering and one of its major sub-concepts, data wrangling, drew attention, and this paper is a small step toward explaining the experience of handling data using the wrangling concept in Python. Data wrangling, earlier referred to as data munging (when done by hand or manually), is the method of transforming and mapping data from one available data format into another format with the idea of making it more appropriate and valuable for a variety of related purposes such as analytics. Data wrangling is the modern name used for data pre-processing rather than munging. The Python library used for the research work shown here is called Pandas. Though the major research area is 'Application of Data Analytics on Academic Data using Python', this paper focuses on a small preliminary topic of the mentioned research work, namely data wrangling using Python (the Pandas library).
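The abstract does not name the dataset or the exact steps; as an illustrative sketch of the kind of Pandas wrangling described, the snippet below cleans, fills, and reshapes a small set of hypothetical academic records (the column names and values are invented).

```python
import pandas as pd

# Hypothetical academic records with the usual messiness: inconsistent names,
# missing marks, and wide-format subject columns.
raw = pd.DataFrame({
    "student": ["  Asha R ", "Bikram D", "asha r", None],
    "maths":   [78, 65, None, 90],
    "physics": [82, None, 74, 88],
})

# Typical wrangling steps: drop unusable rows, normalize text fields,
# fill missing marks, and reshape from wide to long format for analysis.
clean = (
    raw.dropna(subset=["student"])
       .assign(student=lambda d: d["student"].str.strip().str.title())
       .fillna({"maths": 0, "physics": 0})
       .melt(id_vars="student", var_name="subject", value_name="marks")
)
print(clean)
```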


2021 ◽  
Vol 7 (4) ◽  
pp. 208
Author(s):  
Mor Peleg ◽  
Amnon Reichman ◽  
Sivan Shachar ◽  
Tamir Gadot ◽  
Meytal Avgil Tsadok ◽  
...  

Triggered by the COVID-19 crisis, Israel's Ministry of Health (MoH) held a virtual datathon based on deidentified governmental data. Organized by a multidisciplinary committee, the Datathon invited Israel's research community to offer insights to help solve COVID-19 policy challenges, and was designed to develop operationalizable data-driven models addressing them. Specific relevant challenges were defined, and diverse, reliable, up-to-date, deidentified governmental datasets were extracted and tested. Secure remote-access research environments were established. Registration was open to all citizens. Around a third of the applicants were accepted, and they were teamed to balance areas of expertise and to represent all sectors of the community. Anonymous surveys of participants and mentors were distributed to assess usefulness, points for improvement, and retention for future datathons. The Datathon included 18 multidisciplinary teams, mentored by 20 data scientists, 6 epidemiologists, 5 presentation mentors, and 12 judges. The insights developed by the three winning teams are currently being considered by the MoH as potential data science methods relevant for national policies. Based on participants' feedback, the process for future data-driven regulatory responses to health crises was improved. Participants expressed increased trust in the MoH and readiness to work with the government on these or future projects.


Author(s):  
Yunpeng Li ◽  
Utpal Roy ◽  
Y. Tina Lee ◽  
Sudarsan Rachuri

Rule-based expert systems such as CLIPS (C Language Integrated Production System) are (1) based on inductive (if-then) rules to elicit domain knowledge and (2) designed to reason new knowledge from existing knowledge and given inputs. Recently, data mining techniques have been advocated for discovering knowledge from massive historical or real-time sensor data. Combining top-down expert-driven rule models with bottom-up data-driven prediction models facilitates enriching and improving the predefined knowledge in an expert system with data-driven insights. However, combining is possible only if there is a common and formal representation of these models so that they can be exchanged, reused, and orchestrated among different authoring tools. This paper investigates the open standard PMML (Predictive Model Markup Language) for integrating rule-based expert systems with data analytics tools, so that a decision maker has access to powerful tools for dealing with both reasoning-intensive and data-intensive tasks. We present a process planning use case in the manufacturing domain, originally implemented as a CLIPS-based expert system. Different paradigms for interpreting expert system facts and rules as PMML models (and vice versa), as well as challenges in representing and composing these models, have been explored and are discussed in detail.
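The paper's actual mapping is not reproduced in this abstract; as a rough sketch of the general idea, the snippet below expresses a hypothetical CLIPS-style if-then rule as a minimal PMML RuleSet fragment. The element names follow the standard PMML RuleSetModel schema, but the rule, field names, and scores are invented for illustration.

```python
import xml.etree.ElementTree as ET

# Hypothetical CLIPS-style rule:
#   (defrule select-milling (material aluminum) => (assert (operation milling)))
# rendered as a minimal PMML RuleSet document.
pmml = ET.Element("PMML", version="4.3", xmlns="http://www.dmg.org/PMML-4_3")
ET.SubElement(pmml, "Header")
dd = ET.SubElement(pmml, "DataDictionary", numberOfFields="2")
ET.SubElement(dd, "DataField", name="material", optype="categorical", dataType="string")
ET.SubElement(dd, "DataField", name="operation", optype="categorical", dataType="string")

model = ET.SubElement(pmml, "RuleSetModel", modelName="process_planning",
                      functionName="classification", algorithmName="RuleSet")
ms = ET.SubElement(model, "MiningSchema")
ET.SubElement(ms, "MiningField", name="material")
ET.SubElement(ms, "MiningField", name="operation", usageType="target")

ruleset = ET.SubElement(model, "RuleSet", defaultScore="manual_review")
ET.SubElement(ruleset, "RuleSelectionMethod", criterion="firstHit")
rule = ET.SubElement(ruleset, "SimpleRule", id="r1", score="milling")
ET.SubElement(rule, "SimplePredicate", field="material", operator="equal", value="aluminum")

print(ET.tostring(pmml, encoding="unicode"))
```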

