Development of a Durability Test Mode Using a Representative Driving Pattern Extraction Technique with Machine Learning Modeling

2021
Author(s): Hye Jeong Beun

2021 · Vol 3 (2) · pp. 392-413
Author(s): Stefan Studer, Thanh Binh Bui, Christian Drescher, Alexander Hanuschkin, Ludwig Winkler, ...

Machine learning is an established and frequently used technique in industry and academia, but a standard process model to improve the success and efficiency of machine learning applications is still missing. Project organizations and machine learning practitioners face manifold challenges and risks when developing machine learning applications and need guidance to meet business expectations. This paper therefore proposes a process model for the development of machine learning applications, covering six phases from defining the scope to maintaining the deployed machine learning application. Business and data understanding are executed simultaneously in the first phase, as both have considerable impact on the feasibility of the project. The next phases comprise data preparation, modeling, evaluation, and deployment. Special focus is placed on the last phase, as a model running in changing real-time environments requires close monitoring and maintenance to reduce the risk of performance degradation over time. For each task of the process, this work proposes a quality assurance methodology suitable to address challenges in machine learning development, which are identified in the form of risks. The methodology is drawn from practical experience and the scientific literature, and has proven to be general and stable. The process model expands on CRISP-DM, a data mining process model that enjoys strong industry support but fails to address machine learning-specific tasks. The presented work proposes an industry- and application-neutral process model tailored for machine learning applications, with a focus on technical tasks for quality assurance.
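The paper's emphasis on the final phase, monitoring a deployed model for performance degradation, can be made concrete with a small sketch. The following is our illustration under stated assumptions, not code from the paper: a rolling-window check that alerts once live accuracy falls a chosen margin below the accuracy recorded at evaluation time. The class name, window size, and threshold are all illustrative.

```python
# Minimal sketch of post-deployment performance monitoring
# (illustrative; not the paper's implementation).
from collections import deque

class PerformanceMonitor:
    def __init__(self, baseline_accuracy: float,
                 window_size: int = 500, max_drop: float = 0.05):
        self.baseline = baseline_accuracy        # accuracy from the evaluation phase
        self.window = deque(maxlen=window_size)  # rolling record of recent hits/misses
        self.max_drop = max_drop                 # tolerated absolute accuracy drop

    def record(self, prediction, label) -> None:
        """Log whether the latest labeled prediction was correct."""
        self.window.append(prediction == label)

    def degraded(self) -> bool:
        """True once rolling accuracy trails the baseline by more than max_drop."""
        if len(self.window) < self.window.maxlen:
            return False  # too little evidence to judge yet
        live_accuracy = sum(self.window) / len(self.window)
        return (self.baseline - live_accuracy) > self.max_drop

# Usage in production would look roughly like:
#   monitor = PerformanceMonitor(baseline_accuracy=0.92)
#   monitor.record(model.predict(x), delayed_ground_truth)
#   if monitor.degraded(): trigger retraining or alert the ML team
```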


2019 · Vol 6 (1) · pp. 205395171881956
Author(s): Anja Bechmann, Geoffrey C Bowker

Artificial Intelligence (AI) in the form of different machine learning models is applied to Big Data as a way to turn data into valuable knowledge. The rhetoric is that the ensuing predictions work well, with a high degree of autonomy and automation. We argue that we need to analyze the process of applying machine learning in depth and highlight at what point human knowledge production takes place in seemingly autonomous work. This article reintroduces classification theory as an important framework for understanding such seemingly invisible knowledge production in the machine learning development and design processes. We suggest a framework for studying such classification closely tied to different steps in the work process and exemplify the framework with two experiments with machine learning applied to Facebook data from one of our labs. By doing so we demonstrate ways in which classification and potential discrimination take place in even seemingly unsupervised and autonomous models. Moving away from concepts of non-supervision and autonomy enables us to understand the underlying classificatory dispositifs in the work process; this form of analysis constitutes a first step towards governance of artificial intelligence.
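To make the article's point tangible, here is a minimal sketch (our own, not the authors' lab code) of an "unsupervised" clustering pipeline in which every commented step is a human classification decision of the kind the framework is meant to surface; the data and parameter choices are fabricated for illustration.

```python
# Illustrative sketch: human classification decisions hidden in an
# "unsupervised" pipeline (fabricated data; not the authors' experiments).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Human choice 1: which features represent a person at all
# (here: two fabricated behavioral measurements per user).
X = rng.normal(size=(200, 2))

# Human choice 2: how features are weighted against each other.
X_scaled = StandardScaler().fit_transform(X)

# Human choice 3: how many categories the users are forced into.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_scaled)

# Human choice 4: what the clusters are later named and how they are
# acted upon, the step where potential discrimination enters.
print(np.bincount(kmeans.labels_))  # cluster sizes
```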


2019 · Vol 26 (4) · pp. 2141-2168
Author(s): Jessica Morley, Luciano Floridi, Libby Kinsey, Anat Elhalal

The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Samuel in Science, 132(3429):741–742, 1960. 10.1126/science.132.3429.741; Wiener in Cybernetics: or control and communication in the animal and the machine, MIT Press, New York, 1961). However, in recent years symbolic AI has been complemented and sometimes replaced by (Deep) Neural Networks and Machine Learning (ML) techniques. This has vastly increased AI's potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles (the 'what' of AI ethics: beneficence, non-maleficence, autonomy, justice and explicability) rather than on practices (the 'how'). Awareness of the potential issues is increasing at a fast rate, but the AI community's ability to take action to mitigate the associated risks is still in its infancy. Our intention in presenting this research is to contribute to closing the gap between principles and practices by constructing a typology that may help practically minded developers apply ethics at each stage of the Machine Learning development pipeline, and to signal to researchers where further work is needed. The focus is exclusively on Machine Learning, but it is hoped that the results of this research may be easily applicable to other branches of AI. The article outlines the research method for creating this typology and the initial findings, and provides a summary of future research needs.
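As an indication of what such a typology might look like in a developer's hands, the hypothetical sketch below maps pipeline stages to applicable ethics checks. The stage names and checks are our illustrative assumptions, not the authors' actual typology.

```python
# Hypothetical shape of a stage-indexed ethics typology
# (illustrative entries, not the article's actual content).
PIPELINE_ETHICS_TYPOLOGY = {
    "business_understanding": [
        "beneficence: document intended benefits and affected groups"],
    "data_collection": [
        "justice: audit sampling for under-represented groups",
        "autonomy: verify the consent basis for the data"],
    "model_training": [
        "non-maleficence: test for disparate error rates across groups"],
    "deployment": [
        "explicability: ship user-facing explanations or model cards"],
    "monitoring": [
        "non-maleficence: watch for drift that harms subgroups"],
}

def checks_for(stage: str) -> list[str]:
    """Look up the ethics checks that apply at a given pipeline stage."""
    return PIPELINE_ETHICS_TYPOLOGY.get(stage, [])

print(checks_for("data_collection"))
```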


2020
Author(s): Simon Claus Stock, Jürgen Becker, Daniel Grimm, Tim Hotfilter, Gabriela Molinar, ...

2015 · Vol 26 (5) · pp. 493-498
Author(s): Yonghee Lee, Dongjo Oh, Uisik Jeon, Jonghyun Lee
