IN-SITU CALIBRATED DIGITAL PROCESS TWIN MODELS FOR RESOURCE EFFICIENT MANUFACTURING

Author(s):  
David Adeniji  
Julius Schoop

Abstract The chief objective of manufacturing process improvement efforts is to significantly minimize process resources such as time, cost, waste, and consumed energy while improving product quality and process productivity. This paper presents a novel physics-informed optimization approach based on artificial intelligence (AI) to generate digital process twins (DPTs). The utility of the DPT approach is demonstrated for the case of finish machining of aerospace components made from gamma titanium aluminide alloy (γ-TiAl). This particular component has been plagued with persistent quality defects, including surface and sub-surface cracks, which adversely affect resource efficiency. Previous process improvement efforts have been restricted to anecdotal post-mortem investigation and empirical modeling, which fail to address the fundamental issue of how and when cracks occur during cutting. In this work, the integration of in-situ process characterization with modular physics-based models is presented, and machine learning algorithms are used to create a DPT capable of reducing environmental and energy impacts while significantly increasing yield and profitability. Based on the preliminary results presented here, improvements of over 84% in overall embodied energy efficiency, 93% in process queuing time, 2% in scrap cost, and 93% in queuing cost have been realized for γ-TiAl machining using our novel approach.


2021  
Vol 21 (1)  
Author(s):  
Sima Ranjbari  
Toktam Khatibi  
Ahmad Vosough Dizaji  
Hesamoddin Sajadi  
Mehdi Totonchi  
...  

Abstract Background Intrauterine insemination (IUI) outcome prediction is a challenging issue that assisted reproductive technology (ART) practitioners deal with. Predicting the success or failure of IUI based on the couples' features can assist physicians in deciding whether to suggest IUI to a couple and whether to continue their treatment. Many previous studies have focused on predicting in vitro fertilization (IVF) and intracytoplasmic sperm injection (ICSI) outcomes using machine learning algorithms, but, to the best of our knowledge, only a few studies have focused on predicting the outcome of IUI. The main aim of this study is to propose an automatic classification and feature scoring method to predict IUI outcome and rank the most significant features. Methods For this purpose, a novel approach combining complex network-based feature engineering and stacked ensemble (CNFE-SE) is proposed. Three complex networks are extracted considering the patients' data similarities, and the feature engineering step is performed on these complex networks. The original feature set and/or the engineered features are fed to the proposed stacked ensemble to classify and predict the IUI outcome per couple per IUI treatment cycle. Our study is a retrospective study of five years of data on couples undergoing IUI. Data were collected from the Reproductive Biomedicine Research Center, Royan Institute, describing 11,255 IUI treatment cycles for 8,360 couples. Our dataset includes the couples' demographic characteristics, historical data about the patients' diseases, the clinical diagnosis, the treatment plans and prescribed drugs during the cycles, semen quality, laboratory tests and the clinical pregnancy outcome. 
Results Experimental results show that the proposed method outperforms the compared methods with an area under the receiver operating characteristic curve (AUC) of 0.84 ± 0.01, sensitivity of 0.79 ± 0.01, specificity of 0.91 ± 0.01, and accuracy of 0.85 ± 0.01 for the prediction of IUI outcome. Conclusions The most important predictors of IUI outcome are semen parameters (sperm motility and concentration) as well as female body mass index (BMI).
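The complex-network feature engineering step described above can be illustrated with a minimal sketch: patients are linked when their feature vectors are sufficiently similar, and simple graph statistics are appended to the original features before classification. The cosine similarity measure, the threshold, and the normalized-degree feature here are illustrative assumptions, not the paper's exact choices.

```python
import math

def cosine(u, v):
    """Cosine similarity between two patient feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def similarity_graph(X, threshold=0.9):
    """Link patients i, j whenever their similarity crosses the threshold."""
    n = len(X)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if cosine(X[i], X[j]) >= threshold:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def engineer_features(X, threshold=0.9):
    """Append a graph-derived feature (normalized node degree) to each row."""
    adj = similarity_graph(X, threshold)
    n = len(X)
    return [list(row) + [len(adj[i]) / max(n - 1, 1)] for i, row in enumerate(X)]
```

The augmented rows would then be fed to the stacked ensemble in place of (or alongside) the raw features.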


Sensors  
2021  
Vol 21 (6)  
pp. 1955
Author(s):  
Md Jubaer Hossain Pantho  
Pankaj Bhowmik  
Christophe Bobda

The astounding development of optical sensing and imaging technology, coupled with impressive improvements in machine learning algorithms, has increased our ability to understand and extract information from scenic events. In most cases, convolutional neural networks (CNNs) are adopted to infer knowledge due to their remarkable success in automation, surveillance, and many other application domains. However, the overwhelming computational demand of convolution operations has somewhat limited their use in remote sensing edge devices. On these platforms, real-time processing remains challenging due to tight constraints on resources and power, and the transfer and processing of non-relevant image pixels act as a bottleneck on the entire system. It is possible to overcome this bottleneck by exploiting the high bandwidth available at the sensor interface and designing a CNN inference architecture near the sensor. This paper presents an attention-based pixel processing architecture to facilitate CNN inference near the image sensor. We propose an efficient computation method that reduces dynamic power by decreasing the overall computation of the convolution operations. The proposed method reduces redundancies through a hierarchical optimization approach: it exploits the spatio-temporal redundancies found in the incoming feature maps and performs computations only on selected regions based on their relevance score. The proposed design addresses problems related to mapping computations onto an array of processing elements (PEs) and introduces a suitable network structure for communication. The PEs are highly optimized to provide low latency and power for CNN applications. While designing the model, we exploit concepts from biological vision systems to reduce computation and energy. 
We prototype the model on a Virtex UltraScale+ FPGA and implement it as an application-specific integrated circuit (ASIC) using the TSMC 90 nm technology library. The results suggest that the proposed architecture significantly reduces dynamic power consumption and achieves speedups surpassing the computational capabilities of existing embedded processors.
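The relevance-driven computation skipping can be sketched in software: incoming frames are split into tiles, each tile receives a relevance score from its temporal change, and convolutions are performed only on tiles whose score crosses a threshold. The tile size, score definition, and threshold below are illustrative assumptions, not the paper's hardware design.

```python
def tile_relevance(prev, curr, tile=2):
    """Per-tile mean absolute difference between consecutive frames,
    used as a relevance score for spatio-temporal redundancy."""
    h, w = len(curr), len(curr[0])
    scores = {}
    for ty in range(0, h, tile):
        for tx in range(0, w, tile):
            diff = 0.0
            for y in range(ty, min(ty + tile, h)):
                for x in range(tx, min(tx + tile, w)):
                    diff += abs(curr[y][x] - prev[y][x])
            scores[(ty, tx)] = diff / (tile * tile)
    return scores

def select_tiles(scores, threshold):
    """Only tiles at or above the threshold are convolved; the rest
    would reuse the previous frame's outputs."""
    return sorted(k for k, v in scores.items() if v >= threshold)
```

With static backgrounds, most tiles score near zero and their convolution work is skipped, which is where the dynamic-power savings come from.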


2021  
Vol 13 (3)  
pp. 63
Author(s):  
Maghsoud Morshedi  
Josef Noll

Video conferencing services based on the web real-time communication (WebRTC) protocol are growing in popularity among Internet users as multi-platform solutions enabling interactive communication from anywhere, especially during this pandemic era. Meanwhile, Internet service providers (ISPs) have deployed fiber links and customer premises equipment that operate according to the recent 802.11ac/ax standards and promise users the ability to establish uninterrupted video conferencing calls with ultra-high-definition video and audio quality. However, the best-effort nature of 802.11 networks and the high variability of wireless medium conditions prevent users from experiencing uninterrupted, high-quality video conferencing. This paper presents a novel approach to estimate the perceived quality of service (PQoS) of video conferencing using only 802.11-specific network performance parameters collected from Wi-Fi access points (APs) on customer premises. This study produced datasets comprising 802.11-specific network performance parameters collected from off-the-shelf Wi-Fi APs operating on the 802.11g/n/ac/ax standards in both the 2.4 and 5 GHz frequency bands to train machine learning algorithms. In this way, we achieved classification accuracies of 92–98% in estimating the level of PQoS of video conferencing services on various Wi-Fi networks. To troubleshoot wireless issues efficiently, we further analyzed the machine learning model to correlate features in the model with the root cause of quality degradation. Thus, ISPs can utilize the approach presented in this study to provide predictable and measurable wireless quality by implementing a non-intrusive quality monitoring approach in the form of edge computing that preserves customers' privacy while reducing the operational costs of monitoring and data analytics.
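The estimation step can be illustrated with a toy classifier over 802.11-specific features. The feature tuple (retry rate, RSSI, PHY rate) and the nearest-neighbor choice are assumptions made for this sketch; the paper's actual feature set and models are richer.

```python
def knn_predict(train, labels, sample, k=3):
    """Classify a PQoS level by majority vote among the k nearest
    training samples; each sample is a feature tuple such as
    (retry_rate, rssi_dbm, phy_rate_mbps)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, sample)), y)
        for x, y in zip(train, labels)
    )
    top = [y for _, y in dists[:k]]
    return max(set(top), key=top.count)
```

In deployment, such a model would run on the AP or a nearby edge node, so raw traffic never leaves the customer premises.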


Author(s):  
H. Ali Razavi  
Steven Danyluk  
Thomas R. Kurfess

This paper explores the limitations of a previously reported indentation model that correlated the depth of plastic deformation with the normal component of the grinding force. The indentation model for grinding is studied using force-controlled grinding of gamma titanium aluminide (TiAl-γ). Reciprocating surface grinding is carried out for a range of normal forces of 15–90 N, cutting depths of 20–40 μm and removal rates of 1–9 mm³/s using diamond, cubic boron nitride (CBN) and aluminum oxide (Al2O3) abrasives. The experimental data show that the indentation model for grinding is a valid approximation when the normal component of the grinding force exceeds an abrasive-dependent threshold value.
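For context, a minimal indentation-mechanics scaling (an illustrative textbook relation, not the paper's exact model) connects the plastic deformation depth to the normal force: for a hardness-controlled contact the real contact area scales as $A \approx F_n / H$, and the plastically deformed depth scales with the contact dimension, giving

```latex
h_p \;\propto\; \sqrt{A} \;\approx\; \sqrt{\frac{F_n}{H}}
```

where $h_p$ is the plastic deformation depth, $F_n$ the normal grinding force component, and $H$ the workpiece hardness. A relation of this form only holds once $F_n$ is large enough for the abrasive contact to behave like an indenter, consistent with the abrasive-dependent threshold reported above.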


1998  
Vol 552  
Author(s):  
R. Raban  
L. L.  
T. M.

ABSTRACT Plates of three gamma titanium aluminide alloys have been investment cast under a wide variety of casting conditions designed to influence cooling rates. These alloys include Ti-48Al-2Cr-2Nb, Ti-47Al-2Cr-2Nb+0.5at%B and Ti-45Al-2Cr-2Nb+0.9at%B. Cooling rates have been estimated using thermal data from casting experiments along with the UES ProCAST simulation package. Variations in cooling rate significantly influenced the microstructure and tensile properties of all three alloys.


2021  
Author(s):  
İsmail Can Dikmen  
Teoman Karadağ

Abstract Today, the storage of electrical energy is one of the most important technical challenges. The increasing number of high-capacity, high-power applications, especially electric vehicles and grid energy storage, points to the fact that we will be faced with a large number of batteries that will need to be recycled and separated in the near future. An alternative to the currently used methods for separating these batteries according to their chemistry is discussed in this study. This method can be applied even on integrated circuits due to its ease of implementation and low operational cost. In this respect, it is also possible to use it in multi-chemistry battery management systems to detect the chemistry of the connected battery. To implement the method, the battery is connected to two different loads alternately. In this way, current and voltage values are measured for two different loads without allowing the battery to relax. The obtained data are pre-processed with a separation function developed based on statistical significance. Among machine learning algorithms, artificial neural network and decision tree algorithms are trained with the processed data and used to determine battery chemistry with 100% accuracy. The efficiency and ease of implementation of the decision tree algorithm in such a categorization method are presented comparatively.
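The two-load measurement admits a simple worked model: under a linear battery model V = EMF − I·R, the two (voltage, current) pairs taken under alternating loads determine both the open-circuit voltage and the internal resistance, which can then feed a classifier. A minimal sketch follows; the linear model and the per-cell voltage thresholds are illustrative assumptions, not values from the paper.

```python
def battery_params(v1, i1, v2, i2):
    """Solve V = emf - I*R from two load points measured back-to-back,
    before the battery relaxes."""
    r = (v1 - v2) / (i2 - i1)   # internal resistance (ohms)
    emf = v1 + i1 * r           # open-circuit voltage estimate (volts)
    return emf, r

def classify_chemistry(emf_per_cell):
    """Toy decision rules on estimated open-circuit voltage per cell
    (thresholds assumed for illustration only)."""
    if emf_per_cell >= 3.4:
        return "Li-ion"
    if emf_per_cell >= 2.9:
        return "LiFePO4"
    if emf_per_cell >= 1.8:
        return "lead-acid"
    return "NiMH"
```

The study's actual pipeline replaces the hand-set thresholds with a statistically derived separation function and trained neural-network/decision-tree models, but the two-load measurement feeding it is the same.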


1994  
Vol 364  
Author(s):  
J. Kameda  
C. R. Gold  
E. S. Lee  
T. E. Bloomer  
M. Yamaguchi

Abstract Small punch (SP) tests on single-grained titanium aluminide (Ti-48 at.%Al) specimens with 12° and 80° lamellar orientations with respect to the tensile stress axis were conducted at 1123 K in air. Brittle cracks readily extended through the thickness in the 80° lamellar structure. In an SP specimen with the 12° lamellar structure, load-interrupted at a strain of 0.43%, surface cracks with depths of 15–25 μm were formed along lamellar boundaries. Local oxidation behavior on partly sputtered surfaces of the load-interrupted 12° lamellar specimen was examined using a scanning Auger microprobe (SAM). Oxygen-enriched regions were observed near cracks and some lamellar layers. The mechanisms of high-temperature oxygen-induced cracking are discussed in terms of the local oxidation near cracks and lamellar boundaries.


Information  
2018  
Vol 9 (9)  
pp. 233  
Author(s):  
Zuleika Nascimento  
Djamel Sadok

Network traffic classification aims to identify categories of traffic or the applications behind network packets or flows. It is an area that continues to gain attention from researchers due to the necessity of understanding the composition of network traffic, which changes over time, to ensure network Quality of Service (QoS). Among the different methods of network traffic classification, the payload-based one (deep packet inspection, DPI) is the most accurate, but it presents some drawbacks, such as the inability to classify encrypted data, concerns regarding users' privacy, high computational costs, and ambiguity when multiple signatures might match. For that reason, machine learning methods have been proposed to overcome these issues. This work proposes a Multi-Objective Divide and Conquer (MODC) model for network traffic classification that combines supervised and unsupervised machine learning algorithms into a hybrid model based on the divide-and-conquer strategy. Additionally, it is a flexible model, since it allows network administrators to choose among a set of parameters (Pareto-optimal solutions), derived by a multi-objective optimization process, by prioritizing flow or byte accuracy. Our method achieved 94.14% average flow accuracy on the analyzed dataset, outperforming the six DPI-based tools investigated, including two commercial ones, as well as other machine learning-based methods.
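The parameter-selection step can be illustrated: each candidate configuration yields a (flow accuracy, byte accuracy) pair, and administrators pick from the non-dominated set. A minimal sketch of extracting that Pareto front follows; the sample accuracy numbers are made up for illustration.

```python
def pareto_front(solutions):
    """Return the solutions not dominated in (flow_acc, byte_acc):
    a solution is dominated if another is at least as good on both
    objectives and strictly better on at least one."""
    front = []
    for s in solutions:
        dominated = any(
            o != s and o[0] >= s[0] and o[1] >= s[1]
            for o in solutions
        )
        if not dominated:
            front.append(s)
    return front
```

An administrator prioritizing flow accuracy would pick the front member with the highest first coordinate; one prioritizing byte accuracy would pick by the second.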

