2S4-6 Application of big data analytics to ergonomic research field and its ethical aspects in scientific research

2014 ◽  
Vol 50 (Supplement) ◽  
pp. S86-S87
Author(s):  
Takeshi Ebara

2018 ◽  
Vol 29 (2) ◽  
pp. 767-783 ◽  
Author(s):  
Maciel Manoel Queiroz ◽  
Renato Telles

Purpose: The purpose of this paper is to assess the current state of big data analytics (BDA) at different organisational and supply chain management (SCM) levels in Brazilian firms. Specifically, the paper focuses on understanding BDA awareness in Brazilian firms and proposes a framework for analysing firms' maturity in implementing BDA projects in logistics/SCM.
Design/methodology/approach: A questionnaire survey of 1,000 firms across SCM levels was conducted. Of the 272 questionnaires received, 155 were considered valid, representing a 15.5 per cent response rate.
Findings: The study identified Brazilian firms' knowledge of BDA, the difficulties and barriers to adopting BDA projects, and the relationship between supply chain levels and BDA knowledge. A framework for the adoption of BDA projects in SCM was proposed.
Research limitations/implications: The sampling restricts the generalisability of the results, even within the Brazilian context, so the study offers limited external validity. Future studies should deepen this research field and focus on the impact of big data on supply chains or networks in emerging regions such as Latin America.
Practical implications: This paper provides insights for practitioners developing activities that involve big data and SCM, and offers functional and consistent guidance through the BDA-SCM triangle framework as an additional tool for implementing BDA projects in the SCM context.
Originality/value: This study is the first to analyse BDA at different organisational and SCM levels in emerging countries, offering instrumentalisation for BDA-SCM projects.


2020 ◽  
Vol 1 ◽  
pp. 66-70
Author(s):  
Nikoleta Leventi ◽  
Alexandrina Vodenitcharova ◽  
Kristina Popova

A clinical trial, according to the WHO, “is any research study that prospectively assigns human participants or groups of humans to one or more health-related interventions to evaluate the effects on health outcomes. Interventions include but are not restricted to drugs, cells and other biological products, surgical procedures, radiological procedures, devices, behavioural treatments, process-of-care changes, preventive care, etc”. Applying innovative information technologies such as artificial intelligence and big data analytics to clinical trial processes is a new challenge. Such systems are useful tools that promise to enhance healthcare management and to optimize clinical outcomes and economic effectiveness. However, their use raises ethical and social issues. In this direction, in June 2018 the European Commission set up the High-Level Expert Group on AI, which offers guidance on a comprehensive framework for trustworthy AI. Trustworthy AI consists of three components, which should be met throughout the entire life cycle of the system: (1) it should be lawful, (2) it should be ethical, and (3) it should be robust. In this article we used the focus group methodology to obtain information from experts about the ethical issues raised when innovative information technologies like artificial intelligence and big data analytics are used in clinical trials. Feedback from the experts was also gathered on using the proposed guidelines for trustworthy AI as an evaluation tool for the particular case of clinical trials.


Webology ◽  
2021 ◽  
Vol 18 (1) ◽  
pp. 104-120
Author(s):  
S. Josephine Isabella ◽  
Sujatha Srinivasan ◽  
G. Suseendran

In the big data era, learning from imbalanced data has become a continuously growing research field at the intersection of data mining and machine learning. In recent years, big data and big data analytics have gained prominence because many real-time applications depend on data exploration. Machine learning offers a promising way to address the difficulties that arise when learning from imbalanced data. Many real-world applications must make predictions on highly imbalanced datasets, in which the values of interest of the target variable occur only rarely; such imbalance is typical of the events that matter most to users, for example stock-movement prediction, fraud detection, and network security. The growing availability of data from networked systems, such as security monitoring, internet transactions, financial operations, and CCTV or other surveillance devices, creates both the opportunity and the need to study critically how the limited knowledge extractable from imbalanced data can support decision-making processes. Data imbalance therefore remains a challenge for the research field. Data-level and algorithm-level methods are being refined constantly, which has led to new hybrid frameworks for classification under imbalance. Classifying imbalanced data is a challenging task in the field of big data analytics. This study concentrates on this problem, which arises in most real-world applications because the data distribution is skewed. We analyse the data imbalance and propose a solution: the proposed framework uses a hybrid ensemble classifier based on binary cross-entropy as the loss function together with a gradient boosting algorithm.
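
The framework itself is only sketched at a high level in this abstract, but the core idea, a gradient-boosted ensemble trained with a binary cross-entropy loss on a skewed binary target, can be illustrated. Binary cross-entropy over $N$ examples with labels $y_i \in \{0, 1\}$ and predicted positive-class probabilities $\hat{p}_i$ is

$\mathcal{L} = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_i \log \hat{p}_i + (1 - y_i) \log(1 - \hat{p}_i) \right]$

The sketch below is an illustrative assumption, not the authors' implementation: it uses scikit-learn's GradientBoostingClassifier, whose "log_loss" objective is this binary cross-entropy, with balanced sample weights as one common way to counter class skew; the synthetic dataset and its 95:5 class ratio are likewise assumptions made for the example.

```python
# Illustrative sketch only: a gradient-boosted classifier minimising
# binary cross-entropy (scikit-learn's "log_loss") on an imbalanced
# target, with balanced sample weights to counter the skew. The data,
# class ratio, and hyperparameters are assumptions for this example,
# not the framework proposed in the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.utils.class_weight import compute_sample_weight

# Synthetic 95:5 imbalanced data standing in for fraud/intrusion records.
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# "log_loss" is scikit-learn's name for binary cross-entropy.
model = GradientBoostingClassifier(loss="log_loss", n_estimators=200,
                                   random_state=0)

# Re-weight training samples so the rare class contributes to the loss
# in proportion to the majority class.
weights = compute_sample_weight(class_weight="balanced", y=y_train)
model.fit(X_train, y_train, sample_weight=weights)

# Under imbalance, per-class precision/recall is more informative
# than overall accuracy.
print(classification_report(y_test, model.predict(X_test), digits=3))
```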


Author(s):  
Fulvio Mazzocchi

The topic of Big Data is extensively discussed today, and not only on technical grounds. This is partly because Big Data are frequently presented as enabling an epistemological paradigm shift in scientific research, one that would supersede the traditional hypothesis-driven method. In this piece, I critically scrutinize two key claims usually associated with this approach: that data speak for themselves, deflating the role of theories and models, and that correlation takes primacy over causation. My intention is both to acknowledge the value of Big Data analytics as an innovative heuristic and to provide a balanced account of what can, and cannot, be expected from it.


Symmetry ◽  
2021 ◽  
Vol 13 (8) ◽  
pp. 1440
Author(s):  
Caihua Liu ◽  
Guochao Peng ◽  
Yongxin Kong ◽  
Shuyang Li ◽  
Si Chen

Recent years have seen a growing call for the use of big data analytics techniques to support the realisation of symmetries and simulations in digital twins and smart factories, in which data quality plays an important role in determining the quality of big data analytics products. Although the effect of data quality on big data analytics has received attention in the smart factory research field, to date no systematic review of the topic is available to capture the present state of the art and reveal the trends and gaps in this area. This paper therefore presents a systematic literature review of research articles, published up to 2020, on data quality affecting big data analytics in smart factories. We examined 31 empirical studies from our selection of papers to identify the research themes in this field. The analysis of these studies links data quality issues in big data analytics with the data quality dimensions and the methods used to address these issues in the smart factory context. The findings of this systematic review also provide implications for practitioners addressing data quality issues, helping them make better use of big data analytics products to support digital symmetry in the smart factory context.


2019 ◽  
Vol 54 (5) ◽  
pp. 20
Author(s):  
Dheeraj Kumar Pradhan

2020 ◽  
Vol 49 (5) ◽  
pp. 11-17
Author(s):  
Thomas Wrona ◽  
Pauline Reinecke

Big Data & Analytics (BDA) ist zu einer kaum hinterfragten Institution für Effizienz und Wettbewerbsvorteil von Unternehmen geworden. Zu viele prominente Beispiele, wie der Erfolg von Google oder Amazon, scheinen die Bedeutung zu bestätigen, die Daten und Algorithmen zur Erlangung von langfristigen Wettbewerbsvorteilen zukommt. Sowohl die Praxis als auch die Wissenschaft scheinen geradezu euphorisch auf den „Datenzug“ aufzuspringen. Wenn Risiken thematisiert werden, dann handelt es sich meist um ethische Fragen. Dabei wird häufig übersehen, dass die diskutierten Vorteile sich primär aus einer operativen Effizienzperspektive ergeben. Strategische Wirkungen werden allenfalls in Bezug auf Geschäftsmodellinnovationen diskutiert, deren tatsächlicher Innovationsgrad noch zu beurteilen ist. Im Folgenden soll gezeigt werden, dass durch BDA zwar Wettbewerbsvorteile erzeugt werden können, dass aber hiermit auch große strategische Risiken verbunden sind, die derzeit kaum beachtet werden.


2019 ◽  
Vol 7 (2) ◽  
pp. 273-277
Author(s):  
Ajay Kumar Bharti ◽  
Neha Verma ◽  
Deepak Kumar Verma

2017 ◽  
Vol 49 (004) ◽  
pp. 825-830
Author(s):  
A. Ahmed ◽  
R. U. Amin ◽  
M. R. Anjum ◽  
I. Ullah ◽  
I. S. Bajwa
