Feature-based modeling for industrial processes in the context of digital twins: A case study of HVOF process

2022 ◽  
Vol 51 ◽  
pp. 101486
Author(s):  
Jiangzhuo Ren ◽  
Tianyu Zhou ◽  
Yiming Rong ◽  
Yongsheng Ma ◽  
Rafiq Ahmad
2021 ◽  
Author(s):  
Matheus Antonio Nogueira de Andrade ◽  
Herman Augusto Lepikson ◽  
Carlos Alberto Tosta Machado

Abstract Introduction: Digital twins are becoming a powerful tool to enhance industrial processes worldwide. This paper proposes a model for creating digital twins of industrial processes, using a steam distillation process for essential oil extraction as a case study. Case Description: A grey-box modeling approach is suggested, combining a machine-learning-based model with physical modeling to improve the process. Real-time simulation and a hybrid control strategy are used, linking reinforcement learning with proportional-integral-derivative (PID) control and focusing on yield increase and optimization. Computer vision and artificial intelligence enhancements were also suggested. Discussion and Evaluation: Digital twins, in combination with artificial intelligence, can greatly support companies facing decision-making challenges. Furthermore, several benefits that artificial intelligence can bring to the process were highlighted, and computer vision approaches were discussed. Conclusions: A creation method is elaborated to support other applications of digital twins in industrial processes in the future. Before it can be applied to different processes, its generalization capabilities must be proved.
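The hybrid control strategy in this abstract (a conventional PID loop plus a reinforcement-learning layer that retunes it) can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the plant dynamics, gains, and the trivially stubbed "RL" policy are all assumptions.

```python
# Minimal sketch of a hybrid PID + RL control idea: a PID loop regulates a
# distillation temperature, while a (here, crudely stubbed) RL-style policy
# periodically nudges the setpoint in search of higher yield.

class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def rl_adjust_setpoint(current_setpoint, observed_yield, best_yield):
    """Stub for an RL policy: raise the setpoint while yield improves,
    back off otherwise (a hill-climbing stand-in for a learned policy)."""
    return current_setpoint + (1.0 if observed_yield >= best_yield else -1.0)


# Toy simulation loop: first-order plant heated by the PID output.
pid = PID(kp=2.0, ki=0.1, kd=0.5, setpoint=100.0)
temperature = 25.0
for _ in range(200):
    power = pid.update(temperature, dt=1.0)
    temperature += 0.05 * power - 0.02 * (temperature - 25.0)  # toy dynamics

print(round(temperature, 1))  # settles near the 100 °C setpoint

# One (stubbed) RL retuning step, as the hybrid strategy would do periodically:
new_setpoint = rl_adjust_setpoint(pid.setpoint, observed_yield=0.82, best_yield=0.80)
```

The split of responsibilities matters: the fast PID loop handles disturbance rejection, while the slow learning layer only touches the setpoint, so a poorly trained policy cannot destabilize the inner loop.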


Holzforschung ◽  
2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Ricardo Jorge Oliveira ◽  
Bruna Santos ◽  
Maria J. Mota ◽  
Susana R. Pereira ◽  
Pedro C. Branco ◽  
...  

Abstract Lignocellulosic biomass represents a suitable feedstock for the production of biofuels and bioproducts. Its chemical composition depends on many aspects (e.g. plant source, pre-processing) and it impacts the productivity of industrial bioprocesses. Numerous methodologies can be applied for biomass characterisation, with acid hydrolysis being a particularly relevant step. This study intended to assess the most suitable procedures for acid hydrolysis, taking Eucalyptus globulus bark as a case study. For that purpose, the variation of temperature (90–120 °C) was evaluated over time (0–5 h) by monitoring monosaccharide and oligosaccharide contents and degradation. For glucose, the optimal conditions were 100 °C for 2.5 h, reaching a content of 48.6 wt.%. For xylose, the highest content (15.2 wt.%) was achieved at 90 °C for 2 h, or 120 °C for 0.5 h. Maximum concentrations of mannose and galactose (1.0 and 1.7 wt.%, respectively) were achieved at 90 and 100 °C (2–3.5 h) or at 120 °C (0.5–1 h). These results revealed that different hydrolysis conditions should be applied for different sugars. Using this approach, total sugar quantification in eucalyptus bark was increased by 4.3%, which would represent a 5% increase in the ethanol volume produced, considering a hypothetical bioethanol production yield. This reflects the importance of feedstock characterisation in determining the economic viability of industrial processes.
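The link between better sugar quantification and estimated ethanol volume can be made concrete with a back-of-envelope calculation. The stoichiometric maximum (0.511 g ethanol per g glucose) is standard, but the process yield assumed below is illustrative, not the study's figure.

```python
# Back-of-envelope sketch: how quantified fermentable sugar feeds an
# ethanol-volume estimate under a hypothetical production yield.

ETHANOL_PER_GLUCOSE = 0.511   # g ethanol / g glucose, theoretical maximum
ETHANOL_DENSITY = 0.789       # g/mL at 20 degC
PROCESS_YIELD = 0.90          # fraction of theoretical yield (assumption)

def ethanol_volume_ml(sugar_g):
    """Estimated ethanol volume (mL) from a given mass of fermentable sugar."""
    ethanol_g = sugar_g * ETHANOL_PER_GLUCOSE * PROCESS_YIELD
    return ethanol_g / ETHANOL_DENSITY

base = ethanol_volume_ml(100.0)       # per 100 g sugar quantified
improved = ethanol_volume_ml(104.3)   # 4.3% more sugar quantified
print(f"relative gain: {(improved / base - 1) * 100:.1f}%")  # prints "relative gain: 4.3%"
```

Under this simple linear model the ethanol gain tracks the sugar gain exactly; the abstract's 5% figure presumably reflects the specific yield assumptions of its hypothetical scenario.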


2021 ◽  
Vol 8 ◽  
Author(s):  
J.A. Douthwaite ◽  
B. Lesage ◽  
M. Gleirscher ◽  
R. Calinescu ◽  
J. M. Aitken ◽  
...  

Digital twins offer a unique opportunity to design, test, deploy, monitor, and control real-world robotic processes. In this paper we present a novel, modular digital twinning framework developed for the investigation of safety within collaborative robotic manufacturing processes. The modular architecture supports scalable representations of user-defined cyber-physical environments, and tools for safety analysis and control. This versatile research tool facilitates the creation of mixed environments of Digital Models, Digital Shadows, and Digital Twins, whilst standardising communication and physical system representation across different hardware platforms. The framework is demonstrated through an industrial case study focused on the safety assurance of a collaborative robotic manufacturing process. We describe the creation of a digital twin scenario, consisting of individual digital twins of entities in the manufacturing case study, and the application of a synthesised safety controller from our wider work. We show how the framework is able to provide adequate evidence to virtually assess safety claims made against the safety controller using a supporting validation module and testing strategy. The implementation, evidence, and safety investigation are presented and discussed, raising exciting possibilities for the use of digital twins in robotic safety assurance.
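The Model/Shadow/Twin distinction mentioned in this abstract is usually drawn by the direction of data flow between the physical and digital sides. A hypothetical sketch (class and method names are illustrative, not the framework's API) might look like this:

```python
# Sketch of the Digital Model / Shadow / Twin taxonomy: a model has no live
# link, a shadow receives one-way updates from the physical asset, and a
# twin can additionally write commands back.

class DigitalModel:
    """Static representation: state is set manually, no live data flow."""
    def __init__(self, name, state=None):
        self.name = name
        self.state = state or {}

class DigitalShadow(DigitalModel):
    """One-way flow: the physical asset pushes state updates in."""
    def sync_from_physical(self, sensor_readings):
        self.state.update(sensor_readings)

class DigitalTwin(DigitalShadow):
    """Two-way flow: the digital side can also command the physical asset."""
    def __init__(self, name, actuator, state=None):
        super().__init__(name, state)
        self.actuator = actuator    # callable forwarding commands to hardware

    def push_to_physical(self, command):
        return self.actuator(command)

# A mixed environment, as in the paper's scenarios: a twinned cobot that
# both mirrors sensor state and issues a (logged) stop command.
command_log = []
robot = DigitalTwin("cobot", actuator=command_log.append)
robot.sync_from_physical({"joint_1_deg": 42.0})
robot.push_to_physical({"stop": True})
```

Standardising on such a common interface is what lets one framework mix all three representation levels across heterogeneous hardware.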


Author(s):  
Y. Song ◽  
Y. Ai ◽  
H. Zhu

On urban coasts, the coastline directly reflects human activities. It is of crucial importance for understanding urban growth, resource development, and the ecological environment. Due to the complexity and uncertainty of this type of coast, it is difficult to detect the accurate coastline position and determine the subtypes of the coastline. In this paper, we present a multiscale feature-based subtype coastline determination (MFBSCD) method to extract the coastline and determine its subtypes. Within this method, an uncertainty-considering coastline detection (UCCD) method is proposed to separate water and land for a more accurate coastline position. The MFBSCD method integrates scale-invariant features of the coastline in geometry and spatial structure to determine the coastline at subtype scale, and allows subtypes to be cross-verified during processing to ensure the accuracy of the final results. It was applied to Landsat Thematic Mapper (TM) and Operational Land Imager (OLI) images of Tianjin, China, and the accuracy of the extracted coastlines was assessed against a manually delineated coastline. The mean ME (misclassification error) and mean LM (line matching) are 0.0012 and 24.54 m, respectively. The method provides an inexpensive and automated means of coastline mapping at subtype scale in coastal city sectors with intense human interference, which can be significant for coastal resource management and the evaluation of urban development.
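The ME accuracy figure reported above has a simple definition: the fraction of pixels assigned to the wrong water/land class relative to the reference. A minimal sketch (the tiny masks are made up for demonstration):

```python
# Illustrative computation of the misclassification error (ME) used to score
# an extracted water/land mask against a manually delineated reference.

def misclassification_error(extracted, reference):
    """ME = wrongly classified pixels / total pixels (0 = perfect match)."""
    total = wrong = 0
    for row_e, row_r in zip(extracted, reference):
        for e, r in zip(row_e, row_r):
            total += 1
            wrong += (e != r)
    return wrong / total

reference = [[1, 1, 0, 0],
             [1, 1, 0, 0]]   # 1 = water, 0 = land
extracted = [[1, 1, 0, 0],
             [1, 0, 0, 0]]   # one pixel flipped

print(misclassification_error(extracted, reference))  # 1 wrong of 8 -> 0.125
```

An area metric like ME is complemented in the study by the line-based LM distance, since two masks can agree on almost every pixel yet still place the one-dimensional coastline metres apart.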


Author(s):  
L. Barazzetti ◽  
R. Brumana ◽  
D. Oreni ◽  
M. Previtali ◽  
F. Roncoroni

This paper presents a photogrammetric methodology for true-orthophoto generation with images acquired from UAV platforms. The method is an automated multistep workflow made up of three main parts: (i) image orientation through feature-based matching and collinearity equations / bundle block adjustment, (ii) dense matching with correlation techniques able to manage multiple images, and (iii) true-orthophoto mapping for 3D model texturing. It allows automated data processing of sparse blocks of convergent images in order to obtain a final true-orthophoto in which problems such as self-occlusions, ghost effects, and multiple texture assignments are taken into consideration.

The different algorithms are illustrated and discussed along with a real case study concerning the UAV flight over the Basilica di Santa Maria di Collemaggio in L'Aquila (Italy). The final result is a rigorous true-orthophoto used to inspect the roof of the Basilica, which was seriously damaged by the earthquake in 2009.
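The self-occlusion handling that separates a true-orthophoto from a plain orthophoto is, at its core, a visibility (z-buffer) test: for each ortho cell, keep only the surface point nearest the camera so hidden geometry does not "ghost" into the mosaic. The sketch below is a generic illustration of that test, not the paper's algorithm; all geometry and values are invented.

```python
# Minimal z-buffer visibility test for true-orthophoto mapping: when several
# surface points project to the same ortho cell, only the one closest to the
# camera should contribute its texture.

def zbuffer_visible(points):
    """points: iterable of (ortho_cell, depth_to_camera, color).
    Returns {cell: color}, keeping per cell the point nearest the camera."""
    best = {}   # cell -> (depth, color)
    for cell, depth, color in points:
        if cell not in best or depth < best[cell][0]:
            best[cell] = (depth, color)
    return {cell: color for cell, (depth, color) in best.items()}

# A rooftop point and the ground point it occludes project to the same cell;
# the rooftop (nearer the camera) must win.
samples = [
    ((10, 10), 48.0, "roof_pixel"),
    ((10, 10), 55.0, "ground_pixel"),
    ((11, 10), 54.0, "ground_pixel"),
]
print(zbuffer_visible(samples))
# {(10, 10): 'roof_pixel', (11, 10): 'ground_pixel'}
```

In a full pipeline the depths come from the dense surface model produced in step (ii), and cells left empty by occlusion are filled from other images of the block, which is why multiple texture assignments must also be arbitrated.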


2018 ◽  
Vol 49 (3) ◽  
pp. 610-623 ◽  
Author(s):  
Colin Wilson ◽  
Gillian Gallagher

The lexicon of a natural language does not contain all of the phonological structures that are grammatical. This presents a fundamental challenge to the learner, who must distinguish linguistically significant restrictions from accidental gaps (Fischer-Jørgensen 1952, Halle 1962, Chomsky and Halle 1965, Pierrehumbert 1994, Frisch and Zawaydeh 2001, Iverson and Salmons 2005, Gorman 2013, Hayes and White 2013). The severity of the challenge depends on the size of the lexicon (Pierrehumbert 2001), the number of sounds and their frequency distribution (Sigurd 1968, Tambovtsev and Martindale 2007), and the complexity of the generalizations that learners must entertain (Pierrehumbert 1994, Hayes and Wilson 2008, Kager and Pater 2012, Jardine and Heinz 2016). In this squib, we consider the problem that accidental gaps pose for learning phonotactic grammars stated on a single, surface level of representation. While the monostratal approach to phonology has considerable theoretical and computational appeal (Ellison 1993, Bird and Ellison 1994, Scobbie, Coleman, and Bird 1996, Burzio 2002), little previous research has investigated how purely surface-based phonotactic grammars can be learned from natural lexicons (but cf. Hayes and Wilson 2008, Hayes and White 2013). The empirical basis of our study is the sound pattern of South Bolivian Quechua, with particular focus on the allophonic distribution of high and mid vowels. We show that, in characterizing the vowel distribution, a surface-based analysis must resort to generalizations of greater complexity than are needed in traditional accounts that derive outputs from underlying forms. This exacerbates the learning problem, because complex constraints are more likely to be surface-true by chance (i.e., the structures they prohibit are more likely to be accidentally absent from the lexicon). A comprehensive quantitative analysis of the Quechua lexicon and phonotactic system establishes that many accidental gaps of the relevant complexity level do indeed exist. We propose that, to overcome this problem, surface-based phonotactic models should have two related properties: they should use distinctive features to state constraints at multiple levels of granularity, and they should select constraints of appropriate granularity by statistical comparison of observed and expected frequency distributions. The central idea is that actual gaps typically belong to statistically robust feature-based classes, whereas accidental gaps are more likely to be featurally isolated and to contain independently rare sounds. A maximum-entropy learning model that incorporates these two properties is shown to be effective at distinguishing systematic and accidental gaps in a whole-language phonotactic analysis of Quechua, outperforming minimally different models that lack features or perform nonstatistical induction.
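The observed-versus-expected comparison at the heart of this proposal can be illustrated on a toy scale. In the sketch below, a candidate constraint is a sequence of segment classes, its expected count is computed from unigram frequencies under independence, and a class-level constraint is seen to carry more statistical weight than a single-pair gap; the mini-lexicon and classes are invented and not drawn from the Quechua data.

```python
# Toy sketch of observed vs expected counts for candidate phonotactic
# constraints: a constraint is credited only when the structures it bans are
# absent despite a non-trivial expected count under segment independence.

from math import prod

lexicon = ["sima", "misu", "tapa", "puka", "kuma", "masi"]
bigrams = [w[i:i + 2] for w in lexicon for i in range(len(w) - 1)]
segments = [s for w in lexicon for s in w]
unigram = {s: segments.count(s) / len(segments) for s in set(segments)}

def observed(classes):
    """Attested bigrams matching a sequence of segment classes."""
    return sum(1 for b in bigrams if all(s in c for s, c in zip(b, classes)))

def expected(classes):
    """Expected matches if segments combined independently of one another."""
    return prod(sum(unigram[s] for s in c) for c in classes) * len(bigrams)

stops = {"p", "t", "k"}
# Feature-based constraint *[stop][i] vs the single-pair gap *ti:
wide = (observed([stops, {"i"}]), expected([stops, {"i"}]))
narrow = (observed([{"t"}, {"i"}]), expected([{"t"}, {"i"}]))
print(wide, narrow)
```

Even in this toy lexicon, both candidates have zero observed tokens, but the feature-level constraint's expected count is several times larger than the single-pair one, so its absence is far less likely to be accidental; this is the granularity-selection intuition the squib develops, which its maximum-entropy model implements with proper statistics over a whole lexicon.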


Author(s):  
Diane Ngo ◽  
David A. Guerra-Zubiaga ◽  
Germánico González-Badillo ◽  
Reza Vatankhah Barenji

Cloud manufacturing (CMfg) is a new manufacturing paradigm designed to enable manufacturing enterprises to share their resources and capabilities. Prior to any real-life change in the system, it is important in CMfg to anticipate and optimize the response of the system through simulation. A Digital Twin (DT) is a simulation method for this paradigm that differs from existing simulation methods in two ways: it is a virtual copy of the system containing all of its components, and it can connect to the controller in real time. The goal of this work is to develop a DT for an educational manufacturing cell, a FESTO Reconfigurable Mechatronics System (RMS). The cell has four stations that use pallets to transport the product on the conveyor belt and assemble a part of the product. Siemens Tecnomatix Process Simulate was used to create the DT of the system. The system is modeled in a CAD program and then imported into Tecnomatix Process Simulate, where it is programmed to replicate the processes.


2020 ◽  
Vol 113 ◽  
pp. 94-105
Author(s):  
Marie Platenius-Mohr ◽  
Somayeh Malakuti ◽  
Sten Grüner ◽  
Johannes Schmitt ◽  
Thomas Goldschmidt
