Data Literacy und Strategien der datengetriebenen Wertschöpfung

2021 ◽  
Vol 62 (2) ◽  
pp. 87-98
Author(s):  
Tobias Knuth

Data have become ubiquitous in the 21st century. Companies can achieve a competitive advantage if they manage to utilise business data successfully, and a data strategy can help a company become data-driven. In this article, three pillars are presented: data literacy as a central competence, data science as a specialisation, and the chief data officer as a C-level executive. In the 21st century, digitalisation is producing ever larger volumes of data. Companies can use business data to make well-founded, better decisions faster. This raises the question of how, following the digital transformation, a company can evolve from a digitalised into a data-driven enterprise. The skilled handling of data, so-called data literacy, is regarded as one of the fundamental competences of the modern knowledge society. The data strategy presented in this article rests on three pillars: data literacy as a crucial competence of all employees, data science as a specialisation for complex questions, and the role of the chief data officer as a strategic executive who coordinates and establishes data-driven processes. The successful implementation of a data strategy can create a measurable competitive advantage.

Management ◽  
2018 ◽  
Author(s):  
David E. Caughlin

People analytics refers to the systematic and scientific process of applying quantitative or qualitative data analysis methods to derive insights that shape and inform employee-related business decisions and performance. More specifically, people analytics can be described as the process of collecting, analyzing, and reporting people-related data for the purpose of improving decision making, achieving strategic objectives, and sustaining a competitive advantage. In other words, people analytics involves scientific organizational research. As a field, people analytics has emerged as an extension of traditional human resources (HR) metrics and reporting, such as annualized turnover rate and cost per hire. Further, as an interdisciplinary field, people analytics integrates decades of science and practice in the areas of industrial and organizational psychology, organizational behavior, human resource management, statistics, mathematics, information systems and management, data science, finance, law, and ethics. Other related terms include HR analytics, human capital analytics, workforce analytics, and talent analytics. A major emphasis of people analytics is on making more effective data-driven HR decisions to realize organizational strategic objectives and to achieve a competitive advantage. Although the concepts and practices underlying people analytics are not entirely new, recent advances in computing power and information systems have brought people analytics to a wider audience via more affordable information systems and enterprise resource planning platforms, and more powerful mathematical and statistical programs. Moreover, the growing scholarly and business interest in people analytics is based, in part, on the emergence of HR as a strategic business partner in organizations and the growing emphasis placed on making data-driven decisions in HR. Finally, people analytics can be differentiated into three levels of increasing complexity and organizational value: (a) descriptive analytics describe what has already happened in the organization, (b) predictive analytics predict what will or could happen in the organization, and (c) prescriptive analytics take data-analytic findings and use them to improve decisions and to take specific actions.
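To make the descriptive layer concrete, here is a minimal sketch (in Python, with illustrative figures rather than anything from this entry) of the two traditional HR metrics named above; the function names and inputs are assumptions for the example.

```python
# A minimal sketch (not from the article) of two traditional HR metrics that
# people analytics builds on; the inputs and numbers are illustrative only.

def annualized_turnover_rate(separations: int, avg_headcount: float) -> float:
    """Separations over a 12-month period divided by average headcount."""
    return separations / avg_headcount

def cost_per_hire(internal_costs: float, external_costs: float, hires: int) -> float:
    """Total recruiting spend divided by number of hires (a common formula)."""
    return (internal_costs + external_costs) / hires

if __name__ == "__main__":
    # Descriptive analytics: report what has already happened.
    print(f"Turnover: {annualized_turnover_rate(42, 350):.1%}")           # 12.0%
    print(f"Cost per hire: ${cost_per_hire(120_000, 80_000, 50):,.0f}")   # $4,000
```

Predictive and prescriptive analytics would build on such descriptive figures, for example by forecasting next year's turnover and recommending retention actions.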


Beverages ◽  
2021 ◽  
Vol 7 (1) ◽  
pp. 3
Author(s):  
Zeqing Dong ◽  
Travis Atkison ◽  
Bernard Chen

Although wine has been produced for several thousand years, the ancient beverage has remained popular and even more affordable in modern times. Among all wine-making regions, Bordeaux, France is probably one of the most prestigious wine areas in history. Since hundreds of wines are produced in Bordeaux each year, humans are not likely to be able to examine all wines across multiple vintages to define the characteristics of outstanding 21st century Bordeaux wines. Wineinformatics is a newly proposed data science research area with wine as its application domain, processing large amounts of wine data by computer. The goal of this paper is to build a high-quality computational model on wine reviews processed by the full power of the Computational Wine Wheel to understand 21st century Bordeaux wines. On top of the 985 binary attributes generated from the Computational Wine Wheel in our previous research, we add attributes derived from a CATEGORY and SUBCATEGORY, for an additional 14 and 34 continuous attributes respectively, to be included in the All Bordeaux (14,349 wines) and the 1855 Bordeaux (1359 wines) datasets. We believe successfully merging the original binary attributes and the new continuous attributes can provide more insights for Naïve Bayes and Support Vector Machine (SVM) classifiers to build the model for wine grade category prediction. The experimental results suggest that, for the All Bordeaux dataset, with the additional 14 attributes retrieved from the CATEGORY, the Naïve Bayes classification algorithm was able to outperform the existing research results by increasing accuracy by 2.15%, precision by 8.72%, and the F-score by 1.48%. For the 1855 Bordeaux dataset, with the additional attributes retrieved from the CATEGORY and SUBCATEGORY, the SVM classification algorithm was able to outperform the existing research results by increasing accuracy by 5%, precision by 2.85%, recall by 5.56%, and the F-score by 4.07%. The improvements demonstrated in this research show that attributes retrieved from the CATEGORY and SUBCATEGORY have the power to provide more information to classifiers for superior model generation. The model built in this research can better distinguish outstanding and classic 21st century Bordeaux wines. This paper provides new directions in Wineinformatics for technical research in data science, such as regression, multi-target prediction, and classification, and for domain-specific research, including wine region terroir analysis, wine quality prediction, and weather impact examination.
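A hedged sketch of the modeling setup this abstract describes, merging binary Computational Wine Wheel attributes with continuous CATEGORY-style counts for Naïve Bayes and SVM classification; the data here is synthetic and the feature semantics are assumptions, not the authors' dataset or code.

```python
# Sketch of mixed binary + continuous wine-review features feeding two
# classifiers, as described above. All data is randomly generated.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_wines = 500                                   # stand-in for the 14,349-wine dataset
X_binary = rng.integers(0, 2, (n_wines, 985))   # Computational Wine Wheel flags
X_continuous = rng.poisson(3.0, (n_wines, 14))  # e.g., CATEGORY keyword counts
X = np.hstack([X_binary, X_continuous])         # merged feature matrix
y = rng.integers(0, 2, n_wines)                 # wine grade category label

nb = BernoulliNB()  # note: binarizes the continuous columns unless handled upstream
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

for name, model in [("Naive Bayes", nb), ("SVM", svm)]:
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```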


2021 ◽  
Vol 11 (7) ◽  
pp. 3110
Author(s):  
Karina Gibert ◽  
Xavier Angerri

In this paper, the results of the INSESS-COVID19 project are presented, part of a special call intended to help with the COVID19 crisis in Catalonia. The technological infrastructure and methodology developed in this project allow the quick screening of a territory for a fast and reliable diagnosis in the face of an unexpected situation, by providing relevant decisional information to support informed decision-making and strategy and policy design. One of the challenges of the project was to extract valuable information from direct participatory processes in which specific target profiles of citizens are consulted, and to distribute participation across the whole territory. Having many variables and a moderate number of citizens involved (in this case about 1000) implies the risk of violating statistical secrecy when multivariate relationships are analyzed, putting at risk the anonymity of the participants as well as their safety when vulnerable populations are involved, as is the case in INSESS-COVID19. In this paper, the entire data-driven methodology developed in the project is presented, and the handling of small population subgroups to preserve statistical secrecy is described. The methodology is reusable with any other underlying questionnaire, as the data science and reporting parts are fully automated.
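As an illustration of the kind of small-subgroup handling described, here is a minimal sketch of threshold-based cell suppression before a cross-tabulation is reported; the threshold k, column names, and data are assumptions for the example, not the project's actual rules.

```python
# Suppress cross-tab cells with fewer than k respondents before reporting,
# so that rare combinations cannot identify individual participants.
import pandas as pd

K = 5  # minimum cell size allowed in a published table (assumed value)

def safe_crosstab(df: pd.DataFrame, row: str, col: str, k: int = K) -> pd.DataFrame:
    """Cross-tabulate two questionnaire variables, masking cells with < k respondents."""
    table = pd.crosstab(df[row], df[col])
    return table.mask(table < k)  # suppressed cells become NaN in the report

if __name__ == "__main__":
    answers = pd.DataFrame({
        "region":     ["A", "A", "A", "B", "B", "B", "B", "B"],
        "vulnerable": ["yes", "no", "no", "yes", "yes", "yes", "yes", "yes"],
    })
    print(safe_crosstab(answers, "region", "vulnerable"))
```

In practice suppressed cells would be merged into broader categories rather than simply dropped, so that the territory-wide report stays complete.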


2021 ◽  
Author(s):  
MUTHU RAM ELENCHEZHIAN ◽  
VAMSEE VADLAMUDI ◽  
RASSEL RAIHAN ◽  
KENNETH REIFSNIDER

Our community has built widespread knowledge of the damage tolerance and durability of composites over the past few decades through various experimental and computational efforts. Several methods have been used to understand the damage behavior and thereby predict material states such as residual strength (damage tolerance) and life (durability) of these material systems. Electrochemical Impedance Spectroscopy (EIS) and Broadband Dielectric Spectroscopy (BbDS) are two such methods, which have been proven to identify damage states in composites. Our previous work using the BbDS method has been shown to serve as a precursor for identifying damage levels, indicating the beginning of the end of life of the material. As a change in the material state variable is triggered by damage development, the rate of change of these states indicates the rate of damage interaction and can effectively predict impending failure. The Data-Driven Discovery of Models (D3M) [1] aims to develop model discovery systems, enabling users with domain knowledge but no data science background to create empirical models of real, complex processes. These D3M methods have been developed extensively over the years across various applications, and their use for real-time prediction of complex parameters such as material states in composites needs to be grounded in physics and domain knowledge to be trusted. In this research work, we propose the use of data-driven methods combined with BbDS and progressive damage analysis to identify and hence predict material states in composites subjected to fatigue loads.
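A hedged illustration of the core idea above, that the rate of change of a dielectric state variable signals accelerating damage; the synthetic curve, variable name, and alarm threshold are assumptions for the sketch, not the authors' D3M pipeline.

```python
# Track a BbDS-derived state variable over normalized fatigue life and flag
# impending failure when its rate of change accelerates. Data is synthetic.
import numpy as np

cycles = np.linspace(0.0, 1.0, 200)                  # normalized fatigue life
permittivity = 1.0 + 0.2 * cycles + 0.8 * cycles**8  # slow drift, then rapid rise

rate = np.gradient(permittivity, cycles)             # d(state)/d(life fraction)
threshold = 3.0 * np.median(np.abs(rate))            # assumed alarm criterion

alarm = np.argmax(rate > threshold)                  # first index exceeding it
print(f"Impending-failure alarm at ~{cycles[alarm]:.0%} of life")
```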


2021 ◽  
Author(s):  
Karen Triep ◽  
Alexander Benedikt Leichtle ◽  
Martin Meister ◽  
Georg Martin Fiedler ◽  
Olga Endrich

BACKGROUND The criteria for the diagnosis of kidney disease outlined in the Kidney Disease: Improving Global Outcomes (KDIGO) guidelines are based on a patient’s current, historical and baseline data. The diagnosis of acute (AKI), chronic (CKD) and acute-on-chronic kidney disease requires past creatinine measurements, back-calculation, and the interpretation of several laboratory values over a certain period. Diagnosis may be hindered by an unclear definition of the individual creatinine baseline and by rough normal ranges set without adjustment for age, ethnicity, comorbidities and treatment. Assigning the correct diagnosis and sufficient staging improves coding, data quality, reimbursement, the choice of therapeutic approach and the patient’s outcome.
OBJECTIVE With the help of a complex rule engine, a data-driven approach to assigning the diagnoses of acute, chronic and acute-on-chronic kidney disease is applied.
METHODS Real-time and retrospective data from the hospital’s Clinical Data Warehouse of inpatient and outpatient cases treated between 2014 and 2019 are used. Delta serum creatinine, baseline values, and admission and discharge data are analyzed. A KDIGO-based Structured Query Language (SQL) algorithm applies specific diagnosis (ICD) codes to inpatient stays. To measure the effect on diagnosis, text mining of discharge documentation is conducted.
RESULTS We show that this approach yields an increased number of diagnoses as well as higher precision in documentation and coding (the unspecific diagnosis code ICD N19* as a percentage of generated N19 diagnoses: 17.8% in 2016 vs. 3.3% in 2019).
CONCLUSIONS Our data-driven method supports the process and reliability of diagnosis and staging and improves the quality of documentation and data. Measuring patients’ outcomes will be the next step of the project.
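For concreteness, a minimal sketch of the published KDIGO AKI criteria that such a rule engine encodes (a serum creatinine rise of at least 0.3 mg/dL within 48 hours, or to at least 1.5 times the patient's baseline within 7 days); this is not the authors' SQL algorithm, and the timestamps, units, and baseline handling are simplifying assumptions.

```python
# Delta-serum-creatinine check against the KDIGO AKI thresholds.
from datetime import datetime, timedelta

def meets_kdigo_aki(measurements: list[tuple[datetime, float]], baseline: float) -> bool:
    """measurements: (time, serum creatinine in mg/dL), sorted by time."""
    for i, (t_i, cr_i) in enumerate(measurements):
        for t_j, cr_j in measurements[i + 1:]:
            if cr_j - cr_i >= 0.3 and t_j - t_i <= timedelta(hours=48):
                return True                  # absolute rise within 48 h
        if cr_i >= 1.5 * baseline:           # relative rise vs. baseline
            return True                      # (7-day window assumed checked upstream)
    return False

t0 = datetime(2019, 5, 1, 8, 0)
series = [(t0, 0.9), (t0 + timedelta(hours=30), 1.3)]
print(meets_kdigo_aki(series, baseline=0.9))  # True: +0.4 mg/dL within 48 h
```

The hard part the abstract highlights, defining the individual creatinine baseline from historical data, is taken as given here.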


2021 ◽  
pp. 026638212110619
Author(s):  
Sharon Richardson

During the past two decades, there have been a number of breakthroughs in the fields of data science and artificial intelligence, made possible by advanced machine learning algorithms trained through access to massive volumes of data. However, their adoption and use in real-world applications remains a challenge. This paper posits that a key limitation in making AI applicable has been a failure to modernise the theoretical frameworks needed to evaluate and adopt outcomes. Such a need was anticipated with the arrival of the digital computer in the 1950s but has remained unrealised. This paper reviews how the field of data science emerged and led to rapid breakthroughs in algorithms underpinning research into artificial intelligence. It then discusses the contextual framework now needed to advance the use of AI in real-world decisions that impact human lives and livelihoods.


2018 ◽  
Vol 11 (2) ◽  
pp. 139-158 ◽  
Author(s):  
Thomas G. Cech ◽  
Trent J. Spaulding ◽  
Joseph A. Cazier

Purpose The purpose of this paper is to lay out the data competence maturity model (DCMM) and discuss how the application of the model can serve as a foundation for a measured and deliberate use of data in secondary education.
Design/methodology/approach Although the model is new, its implications and application are derived from key findings and best practices in the software development, data analytics and secondary education performance literature. These principles can guide educators to better manage student and operational outcomes. This work builds the DCMM and applies it to secondary education.
Findings The conceptual model reveals significant opportunities to improve data-driven decision making in schools and local education agencies (LEAs). Moving past the first and second stages of the DCMM should allow educators to better incorporate data into the regular decision-making process.
Practical implications Moving up the DCMM to better integrate data into the decision-making process has the potential to produce profound improvements for schools and LEAs. Data science is about making better decisions. Understanding the path the DCMM lays out for moving an organization to a more mature data-driven decision-making process will help improve both student and operational outcomes.
Originality/value This paper brings a new concept, the DCMM, to the educational literature and discusses how its principles can be applied to improve decision making and help organizations mature within this framework.


2021 ◽  
Vol 7 (4) ◽  
pp. 208
Author(s):  
Mor Peleg ◽  
Amnon Reichman ◽  
Sivan Shachar ◽  
Tamir Gadot ◽  
Meytal Avgil Tsadok ◽  
...  

Triggered by the COVID-19 crisis, Israel’s Ministry of Health (MoH) held a virtual datathon based on deidentified governmental data. Organized by a multidisciplinary committee, Israel’s research community was invited to offer insights to help solve COVID-19 policy challenges. The Datathon was designed to develop operationalizable data-driven models to address COVID-19 health policy challenges. Specific relevant challenges were defined, and diverse, reliable, up-to-date, deidentified governmental datasets were extracted and tested. Secure remote-access research environments were established. Registration was open to all citizens. Around a third of the applicants were accepted, and they were teamed to balance areas of expertise and to represent all sectors of the community. Anonymous surveys were distributed to participants and mentors to assess usefulness, points for improvement, and practices to retain for future datathons. The Datathon included 18 multidisciplinary teams, mentored by 20 data scientists, 6 epidemiologists, 5 presentation mentors, and 12 judges. The insights developed by the three winning teams are currently being considered by the MoH as potential data science methods relevant for national policies. Based on participants’ feedback, the process for future data-driven regulatory responses to health crises was improved. Participants expressed increased trust in the MoH and readiness to work with the government on these or future projects.

