A Systematic Literature Review of Cutting Tool Wear Monitoring in Turning by Using Artificial Intelligence Techniques

Machines ◽  
2021 ◽  
Vol 9 (12) ◽  
pp. 351
Author(s):  
Lorenzo Colantonio ◽  
Lucas Equeter ◽  
Pierre Dehombreux ◽  
François Ducobu

In turning operations, the wear of cutting tools is inevitable. As workpieces produced with worn tools may fail to meet specifications, the machining industries focus on replacement policies that mitigate the risk of losses due to scrap. Several strategies, from empirical laws to more advanced statistical models, have been proposed in the literature. More recently, many monitoring systems based on Artificial Intelligence (AI) techniques have been developed. Given the breadth of AI approaches, gaining a holistic view of the state of the art on this subject is difficult, in part due to a lack of recent comprehensive reviews. This literature review therefore presents 20 years of literature on this subject, obtained following a Systematic Literature Review (SLR) methodology. This SLR aims to answer the following research question: “How is AI used to monitor or predict the condition of tools in stable turning conditions?” To answer this question, the Scopus database was consulted to gather relevant publications published between 1 January 2000 and 1 January 2021. The systematic approach yielded 8426 articles, of which 102 met the inclusion and exclusion criteria, which restrict the scope to stable turning operations and online prediction. A bibliometric analysis performed on these articles highlights the growing interest in this subject in recent years. A more in-depth analysis of the articles is also presented, focusing mainly on six AI techniques that are highly represented in the literature: Artificial Neural Network (ANN), fuzzy logic, Support Vector Machine (SVM), Self-Organizing Map (SOM), Hidden Markov Model (HMM), and Convolutional Neural Network (CNN). For each technique, the trends in the inputs, pre-processing techniques, and outputs of the AI are presented.
The trends highlight the early and continuing importance of ANN and the emerging interest in CNN for tool condition monitoring. The lack of a common benchmark database for evaluating model performance prevents clear comparisons between techniques.
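As a minimal illustration of how an ANN maps sensor features to a tool-condition estimate, the sketch below runs the forward pass of a tiny 2-2-1 network in pure Python. The weights, features, and their scaling are hand-set for illustration only, not trained on machining data:

```python
import math

def mlp_wear_score(features, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a tiny 2-2-1 neural network: returns a wear
    probability in (0, 1) from two normalized sensor features."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, features)) + b)
              for ws, b in zip(w_hidden, b_hidden)]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)) + b_out)

# Hand-set illustrative weights (NOT trained): both features push the
# score up, so high cutting force + high vibration -> high wear score.
W_H = [[4.0, 4.0], [4.0, 4.0]]
B_H = [-4.0, -4.0]
W_O = [4.0, 4.0]
B_O = -4.0

fresh = mlp_wear_score([0.1, 0.1], W_H, B_H, W_O, B_O)  # low force/vibration
worn  = mlp_wear_score([0.9, 0.9], W_H, B_H, W_O, B_O)  # high force/vibration
```

In the surveyed systems, such a network would be trained on labeled cutting-force, vibration, or acoustic-emission features rather than carrying hand-set weights.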

2020 ◽  
Vol 3 (1) ◽  
pp. 43-53
Author(s):  
Fahrur Rozi

Research on IoT-based intelligent service systems is currently a growing trend. IoT produces a variety of data from sensors and smartphones, and this data becomes more useful and actionable once it is analyzed. Predictive analytics with IoT is the branch of data analysis that aims to forecast outcomes, and its application has produced innovative solutions across many fields using diverse predictive methods and techniques. This study uses a Systematic Literature Review (SLR) to understand the research trends, methods, and architectures used in predictive analytics with IoT. The first step was to define the research questions (RQs); a search was then performed over literature published between 2015 and 2019 in popular journal databases, namely IEEE Xplore, Scopus, and ACM. From a review of thirty (30) selected articles, several research fields emerge as trends: transportation, agriculture, health, industry, smart home, and environment, with agriculture the most studied. Predictive analytics with IoT uses varied methods depending on the characteristics of the data. The five most widely used methods are Bayesian Network (BN), Artificial Neural Network (ANN), Recurrent Neural Networks (RNN), Neural Network (NN), and Support Vector Machines (SVM). Some studies also propose architectures for predictive analytics with IoT.
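A minimal sketch of predictive analytics on IoT sensor data: an ordinary least-squares trend fit that forecasts the next reading. The hourly soil-temperature readings are hypothetical, standing in for data from any IoT node:

```python
def fit_trend(ys):
    """Ordinary least-squares fit of y = a + b*t to equally spaced
    readings (t = 0, 1, 2, ...); returns (a, b)."""
    n = len(ys)
    ts = range(n)
    mean_t = sum(ts) / n
    mean_y = sum(ys) / n
    b = (sum((t - mean_t) * (y - mean_y) for t, y in zip(ts, ys))
         / sum((t - mean_t) ** 2 for t in ts))
    a = mean_y - b * mean_t
    return a, b

def predict_next(ys):
    """Extrapolate the fitted trend one step past the last reading."""
    a, b = fit_trend(ys)
    return a + b * len(ys)

# Hypothetical hourly soil-temperature readings from an IoT node.
readings = [20.0, 20.5, 21.0, 21.5]
forecast = predict_next(readings)  # trend is +0.5 per hour -> 22.0
```

The surveyed studies replace this linear baseline with BN, ANN, RNN, or SVM models, but the pipeline shape (collect readings, fit, forecast) is the same.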


2021 ◽  
Vol 5 (1) ◽  
pp. 55-62
Author(s):  
Dwi Suchisty ◽  
Widodo ◽  
Bambang Prasetya Adhi

A document or piece of writing invariably contains important information. Document summarization makes that information easier to find by shortening the text, removing unimportant words or sentences. Summarization is now widely automated using methods developed from neural network models. This study aims to determine how far neural network methods for document summarization have advanced, by analyzing the literature using the systematic literature review technique. Literature was collected by searching several digital libraries with search strings constructed from the research questions, limited to publications between 2014 and 2018. Of the 1266 papers retrieved, 39 were suitable for analysis. These 39 papers describe 28 neural network methods used for document summarization. The most frequently used method is the Recurrent Neural Network (RNN), and the best-performing method found for summarization is the Deep Neural Network (DNN), with an accuracy of 62%.
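For contrast with the neural methods this review surveys, a classical non-neural baseline for extractive summarization can be sketched in a few lines: score each sentence by the corpus frequency of its words and keep the top-k. The scoring scheme and example text are illustrative only:

```python
import re
from collections import Counter

def extractive_summary(text, k=1):
    """Score each sentence by the summed corpus frequency of its words
    (normalized by sentence length) and return the top-k sentences in
    their original order."""
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]
    freq = Counter(re.findall(r'\w+', text.lower()))
    def score(sentence):
        tokens = re.findall(r'\w+', sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)
    top = sorted(sentences, key=score, reverse=True)[:k]
    return [s for s in sentences if s in top]

doc = ("Neural networks summarize documents. "
       "Recurrent neural networks model word order. "
       "The weather was pleasant today.")
summary = extractive_summary(doc, k=1)
```

Neural abstractive methods such as the RNN and DNN models surveyed here generate new sentences instead of selecting existing ones, which is what this frequency baseline cannot do.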


Electronics ◽  
2022 ◽  
Vol 11 (2) ◽  
pp. 198
Author(s):  
Mujaheed Abdullahi ◽  
Yahia Baashar ◽  
Hitham Alhussian ◽  
Ayed Alwadain ◽  
Norshakirah Aziz ◽  
...  

In recent years, technology has advanced to the fourth industrial revolution (Industry 4.0), in which the Internet of Things (IoT), fog computing, computer security, and cyberattacks have evolved exponentially on a large scale. The rapid development of IoT devices and networks in various forms generates enormous amounts of data, which in turn demands careful authentication and security. Artificial intelligence (AI) is considered one of the most promising methods for addressing cybersecurity threats and providing security. In this study, we present a systematic literature review (SLR) that categorizes, maps, and surveys the existing literature on AI methods used to detect cybersecurity attacks in the IoT environment. The scope of this SLR includes an in-depth investigation of the trending AI techniques in cybersecurity and state-of-the-art solutions. A systematic search was performed on various electronic databases (SCOPUS, Science Direct, IEEE Xplore, Web of Science, ACM, and MDPI). Out of the identified records, 80 studies published between 2016 and 2021 were selected, surveyed, and carefully assessed. This review has explored the deep learning (DL) and machine learning (ML) techniques used in IoT security and their effectiveness in detecting attacks. Several studies have proposed smart intrusion detection systems (IDS) with intelligent architectural frameworks using AI to overcome the existing security and privacy challenges. We find that support vector machines (SVM) and random forest (RF) are among the most used methods, due to their high detection accuracy and, possibly, their memory efficiency. In addition, other methods, such as extreme gradient boosting (XGBoost), neural networks (NN), and recurrent neural networks (RNN), also perform well. This analysis also provides an insight into the AI roadmap to detect threats based on attack categories. Finally, we present recommendations for potential future investigations.
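As a minimal sketch of the kind of classifier such an IDS might use, the following nearest-centroid model (a simple stand-in for the SVM/RF methods the review highlights) labels a flow record as normal or attack. The two features and all the values are made up; real IDS datasets have dozens of features:

```python
def centroid(rows):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def classify(sample, centroids):
    """Label a feature vector with the class of the nearest centroid
    (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

# Hypothetical 2-feature flow records: [packets/sec, distinct dest ports],
# both scaled to [0, 1].
train = {
    "normal": [[0.1, 0.1], [0.2, 0.1], [0.1, 0.2]],
    "attack": [[0.9, 0.8], [0.8, 0.9], [0.9, 0.9]],
}
centroids = {label: centroid(rows) for label, rows in train.items()}
verdict = classify([0.85, 0.9], centroids)  # near the attack centroid
```

An SVM or RF replaces the centroid rule with a learned decision boundary, but the deployment shape (featurize traffic, classify, raise an alert) is the same.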


10.2196/18599 ◽  
2020 ◽  
Vol 8 (7) ◽  
pp. e18599 ◽  
Author(s):  
Avishek Choudhury ◽  
Onur Asan

Background Artificial intelligence (AI) provides opportunities to identify the health risks of patients and thus influence patient safety outcomes. Objective The purpose of this systematic literature review was to identify and analyze quantitative studies utilizing or integrating AI to address and report clinical-level patient safety outcomes. Methods We restricted our search to the PubMed, PubMed Central, and Web of Science databases to retrieve research articles published in English between January 2009 and August 2019. We focused on quantitative studies that reported positive, negative, or intermediate changes in patient safety outcomes using AI apps, specifically those based on machine-learning algorithms and natural language processing. Quantitative studies reporting only AI performance but not its influence on patient safety outcomes were excluded from further review. Results We identified 53 eligible studies, which were summarized concerning their patient safety subcategories, the most frequently used AI, and reported performance metrics. Recognized safety subcategories were clinical alarms (n=9; mainly based on decision tree models), clinical reports (n=21; based on support vector machine models), and drug safety (n=23; mainly based on decision tree models). Analysis of these 53 studies also identified two essential findings: (1) the lack of a standardized benchmark and (2) heterogeneity in AI reporting. Conclusions This systematic review indicates that AI-enabled decision support systems, when implemented correctly, can aid in enhancing patient safety by improving error detection, patient stratification, and drug management. Future work is still needed for robust validation of these systems in prospective and real-world clinical environments to understand how well AI can predict safety outcomes in health care settings.
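A decision tree of the kind these studies report for clinical alarms can be sketched as a hand-built rule cascade. The vital-sign thresholds below are purely illustrative, not clinically validated, and a real model would be learned from labeled patient data:

```python
def alarm_decision(heart_rate, spo2, age):
    """A hand-built, three-level decision tree deciding whether to raise
    a clinical alarm. All thresholds are illustrative only."""
    if spo2 < 90:
        return "alarm"        # low oxygen saturation: alarm regardless of other vitals
    if heart_rate > 130:
        # sustained tachycardia: escalate for older patients, else flag for review
        return "alarm" if age >= 65 else "review"
    return "no_alarm"
```

Learned decision trees have the same if/else structure, which is one reason the reviewed alarm and drug-safety studies favor them: the path to each decision can be read off and audited.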


2021 ◽  
Vol 23 (Supplement_6) ◽  
pp. vi139-vi139
Author(s):  
Jan Lost ◽  
Tej Verma ◽  
Niklas Tillmanns ◽  
W R Brim ◽  
Harry Subramanian ◽  
...  

Abstract PURPOSE Identifying molecular subtypes in gliomas has prognostic and therapeutic value, traditionally after invasive neurosurgical tumor resection or biopsy. Recent advances using artificial intelligence (AI) show promise in using pre-therapy imaging for predicting molecular subtype. We performed a systematic review of recent literature on AI methods used to predict molecular subtypes of gliomas. METHODS A literature review conforming to PRISMA guidelines was performed for publications prior to February 2021 using 4 databases: Ovid Embase, Ovid MEDLINE, Cochrane trials (CENTRAL), and Web of Science core collection. Keywords included: artificial intelligence, machine learning, deep learning, radiomics, magnetic resonance imaging, glioma, and glioblastoma. Non-machine-learning and non-human studies were excluded. Screening was performed using Covidence software. Bias analysis was done using TRIPOD guidelines. RESULTS 11,727 abstracts were retrieved. After applying initial screening exclusion criteria, 1,135 full text reviews were performed, with 82 papers remaining for data extraction. 57% used retrospective single-center hospital data, 31.6% used TCIA and BRATS, and 11.4% analyzed multicenter hospital data. An average of 146 patients (range 34-462 patients) were included. Algorithms predicting IDH status comprised 51.8% of studies, MGMT 18.1%, and 1p19q 6.0%. Machine learning methods were used in 71.4%, deep learning in 27.4%, and 1.2% directly compared both methods. The most common machine learning algorithm was the support vector machine (43.3%), and the most common deep learning algorithm was the convolutional neural network (68.4%). Mean prediction accuracy was 76.6%. CONCLUSION Machine learning is the predominant method for image-based prediction of glioma molecular subtypes. Major limitations include limited datasets (60.2% with under 150 patients) and thus limited generalizability of findings.
We recommend using larger annotated datasets for AI network training and testing in order to create more robust algorithms that generalize better to real-world clinical data and provide tools that can be translated to clinical practice.
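The mean-accuracy figure reported across studies can be computed mechanically. The sketch below averages per-fold accuracy on hypothetical binary IDH-status predictions (1 = mutant); the data is invented for illustration:

```python
def accuracy(predicted, actual):
    """Fraction of predictions that match the ground-truth labels."""
    assert len(predicted) == len(actual)
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

def mean_accuracy(folds):
    """Average accuracy over (predicted, actual) pairs, e.g. cross-validation
    folds or the per-study accuracies pooled by a review."""
    return sum(accuracy(p, a) for p, a in folds) / len(folds)

# Hypothetical IDH-status predictions (1 = mutant) from two test folds.
folds = [
    ([1, 0, 1, 1], [1, 0, 0, 1]),   # 3/4 correct
    ([0, 0, 1, 0], [0, 1, 1, 0]),   # 3/4 correct
]
result = mean_accuracy(folds)       # 0.75
```

A pooled mean like this hides per-study variance, which is one reason the review stresses dataset size and generalizability over headline accuracy.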

