Response Surface Mesh with the Outer Input Method

Author(s):  
Francisco Daniel Filip Duarte

Abstract Artificial intelligence in general, and optimization tasks applied to the design of very efficient structures in particular, rely on response surfaces to forecast the output of functions; these are a vital part of such methodologies. Yet they have important limitations: greater precision requires larger data sets, so training or updating larger response surfaces becomes computationally expensive or unfeasible. This has been an important bottleneck limiting more promising results, leaving many optimization and AI tasks with low performance. To solve this challenge, a new methodology created to segment response surfaces is hereby presented. Unlike other similar methodologies, this algorithm, named the outer input method, has a very simple and robust operation, generating a mesh of near-isopopulated partitions of inputs that share similitude. Its great advantage is that it can be applied to any data set with any type of distribution, such as random, Cartesian, or clustered, for domains with any number of coordinates, significantly simplifying any metamodel with a mesh ensemble. This study demonstrates how one of the best-known and most precise metamodels, Kriging, which otherwise carries expensive computation costs, can be significantly simplified with a response surface mesh, increasing training speed up to 567 times while using quad-core parallel processing. Since individual mesh elements can be trained in parallel or updated independently, the already fast operation gains further speed.
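
The abstract does not reproduce the algorithm itself; the sketch below illustrates only the general idea it describes: partition the inputs into near-isopopulated elements and train one small Kriging model per element. The recursive median split, the scikit-learn GaussianProcessRegressor surrogate, and all names and sizes are illustrative assumptions, not the authors' outer input method.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def median_split_mesh(X, idx=None, max_element_size=200):
    """Recursively split indices along the widest coordinate at the median
    until every element holds at most max_element_size points, yielding a
    near-isopopulated mesh. (Illustrative stand-in, not the paper's method.)"""
    if idx is None:
        idx = np.arange(len(X))
    if len(idx) <= max_element_size:
        return [idx]
    dim = np.argmax(X[idx].max(axis=0) - X[idx].min(axis=0))  # widest axis
    order = idx[np.argsort(X[idx, dim])]
    mid = len(order) // 2
    return (median_split_mesh(X, order[:mid], max_element_size) +
            median_split_mesh(X, order[mid:], max_element_size))

class MeshKriging:
    """Ensemble of small Kriging models, one per mesh element."""
    def fit(self, X, y, max_element_size=200):
        self.elements = median_split_mesh(X, max_element_size=max_element_size)
        self.centroids = np.array([X[e].mean(axis=0) for e in self.elements])
        # Each fit is cubic in the element size m, with m much smaller than n.
        self.models = [GaussianProcessRegressor().fit(X[e], y[e])
                       for e in self.elements]
        return self

    def predict(self, Xq):
        # Route each query point to the model of the nearest element centroid.
        nearest = np.argmin(((Xq[:, None] - self.centroids[None]) ** 2).sum(-1),
                            axis=1)
        return np.array([self.models[k].predict(x[None])[0]
                         for k, x in zip(nearest, Xq)])

# Works for any input distribution (random, Cartesian, clustered) and dimension.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 3))
y = np.sin(X).sum(axis=1)
model = MeshKriging().fit(X, y, max_element_size=250)
print(model.predict(X[:5]))
```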

2021 ◽  
Author(s):  
Francisco Daniel Filip Duarte

Abstract Artificial intelligence in general, and optimization tasks applied to the design of aerospace, space, and automotive structures in particular, rely on response surfaces to forecast the output of functions; these are a vital part of such methodologies. Yet they have important limitations: greater precision requires larger data sets, so training or updating larger response surfaces becomes computationally expensive, sometimes unfeasible. This has been a bottleneck limiting more promising results, leaving many AI-related tasks with low efficiency. To solve this challenge, a new methodology created to segment response surfaces is hereby presented. Unlike other similar methodologies, the novel algorithm presented here, named the outer input method, has a very simple and robust operation. With only one operational parameter, the maximum element size, it efficiently generates a near-isopopulated mesh for any data set with any type of distribution, such as random, Cartesian, or clustered, for domains with any number of coordinates. It is thus possible to simplify a response surface by generating an ensemble of response surfaces, here denominated a response surface mesh. This study demonstrates how a metamodel denominated Kriging, trained with a large data set, can be simplified with a response surface mesh, significantly reducing its often expensive computation costs: the experiments presented here achieved a speed increase of up to 180 times while using a dual-core parallel processing computer. This methodology can be applied to any metamodel, and mesh elements can easily be parallelized and updated individually, so the already faster training operation gains further speed.
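
The reported speed-up comes from replacing one large Kriging fit with many small, independent fits that can run in parallel. A minimal sketch of that parallelization, under the same illustrative assumptions as the sketch above, using a standard-library process pool with two workers to mirror the dual-core setup mentioned:

```python
from concurrent.futures import ProcessPoolExecutor

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def fit_element(args):
    """Train one small Kriging model on one mesh element."""
    X_e, y_e = args
    return GaussianProcessRegressor().fit(X_e, y_e)

def fit_mesh_parallel(X, y, elements, workers=2):
    """elements: list of index arrays, one per mesh element. Each element
    trains independently, so elements map cleanly onto worker processes,
    and a single element can later be refit alone when its data changes."""
    jobs = [(X[e], y[e]) for e in elements]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fit_element, jobs))

if __name__ == "__main__":  # guard required by process pools on spawn platforms
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(2000, 3))
    y = np.sin(X).sum(axis=1)
    # Stand-in mesh: eight near-equally populated random partitions.
    elements = np.array_split(rng.permutation(len(X)), 8)
    models = fit_mesh_parallel(X, y, elements)
```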


Electronics ◽  
2022 ◽  
Vol 11 (2) ◽  
pp. 245
Author(s):  
Konstantinos G. Liakos ◽  
Georgios K. Georgakilas ◽  
Fotis C. Plessas ◽  
Paris Kitsos

A significant problem in the field of hardware security is hardware trojans (HTs). HTs can be inserted into a circuit at any phase of the production chain, and an infected circuit can be degraded, destroyed, or made to leak encrypted data. Nowadays, efforts are being made to address HTs through machine learning (ML) techniques, mainly for the gate-level netlist (GLN) phase, but there are some restrictions. Specifically, the number and variety of normal and infected circuits available through free public libraries, such as Trust-HUB, are based on the few benchmark samples that have been created from large circuits. It is therefore difficult, based on these data, to develop robust ML-based models against HTs. In this paper, we propose a new deep learning (DL) tool named Generative Artificial Intelligence Netlists SynthesIS (GAINESIS). GAINESIS is based on the Wasserstein Conditional Generative Adversarial Network (WCGAN) algorithm and on area–power analysis features from the GLN phase, and it synthesizes new normal and infected circuit samples for this phase. Based on our GAINESIS tool, we synthesized new data sets of different sizes and developed and compared seven ML classifiers. The results demonstrate that our newly generated data sets significantly enhance the performance of ML classifiers compared with the initial Trust-HUB data set.
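
GAINESIS itself is not reproduced in the abstract; the sketch below shows only the core technique it builds on, a Wasserstein conditional GAN over tabular feature vectors (such as area–power features) conditioned on a normal/infected label. Layer sizes, the feature dimension, and the weight-clipping variant of the Lipschitz constraint are assumptions for illustration:

```python
import torch
import torch.nn as nn

FEATS, LATENT, CLASSES = 16, 32, 2  # assumed sizes; 2 = normal/infected

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT + CLASSES, 64), nn.ReLU(),
            nn.Linear(64, FEATS))
    def forward(self, z, y):                  # y: one-hot class condition
        return self.net(torch.cat([z, y], dim=1))

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEATS + CLASSES, 64), nn.ReLU(),
            nn.Linear(64, 1))                 # unbounded score, not a probability
    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=1))

def train_step(G, D, opt_g, opt_d, real_x, real_y, clip=0.01):
    z = torch.randn(len(real_x), LATENT)
    # Critic: widen the score gap between real and generated samples.
    opt_d.zero_grad()
    loss_d = D(G(z, real_y).detach(), real_y).mean() - D(real_x, real_y).mean()
    loss_d.backward(); opt_d.step()
    for p in D.parameters():                  # weight clipping enforces the
        p.data.clamp_(-clip, clip)            # Lipschitz constraint (original WGAN)
    # Generator: raise the critic's score of generated samples.
    opt_g.zero_grad()
    loss_g = -D(G(z, real_y), real_y).mean()
    loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

In practice the two networks would be trained with RMSprop, alternating several critic steps per generator step, as in the original WGAN recipe.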


2020 ◽  
pp. 20200375
Author(s):  
Min-Suk Heo ◽  
Jo-Eun Kim ◽  
Jae-Joon Hwang ◽  
Sang-Sun Han ◽  
Jin-Soo Kim ◽  
...  

Artificial intelligence, which has been actively applied in a broad range of industries in recent years, is an active area of interest for many researchers. Dentistry is no exception to this trend, and the applications of artificial intelligence are particularly promising in the field of oral and maxillofacial (OMF) radiology. Recent research on artificial intelligence in OMF radiology has mainly used convolutional neural networks, which can perform image classification, detection, segmentation, registration, generation, and refinement. Artificial intelligence systems in this field have been developed for the purposes of radiographic diagnosis, image analysis, forensic dentistry, and image quality improvement. Tremendous amounts of data are needed to achieve good results, and the involvement of OMF radiologists is essential for making accurate and consistent data sets, which is a time-consuming task. For artificial intelligence to be widely used in actual clinical practice in the future, many problems remain to be solved, such as building up large, finely labeled open data sets, understanding the judgment criteria of artificial intelligence, and countering DICOM hacking threats that use artificial intelligence. If solutions to these problems emerge as artificial intelligence develops, it is expected to play an important role in the development of automatic diagnosis systems, the establishment of treatment plans, and the fabrication of treatment tools. OMF radiologists, as professionals who thoroughly understand the characteristics of radiographic images, will play a very important role in the development of artificial intelligence applications in this field.
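
As a concrete anchor for the CNN-based image classification the review describes, here is a minimal sketch of such a network; the architecture, input size, and class count are illustrative assumptions, not taken from any cited study:

```python
import torch
import torch.nn as nn

class RadiographCNN(nn.Module):
    """Toy CNN classifier for single-channel radiographs (assumed 128x128)."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 32 * 32, n_classes))  # 128 -> 64 -> 32
    def forward(self, x):
        return self.head(self.features(x))

logits = RadiographCNN()(torch.randn(4, 1, 128, 128))  # batch of 4 images
print(logits.shape)  # torch.Size([4, 2])
```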


Author(s):  
Christopher MacDonald ◽  
Michael Yang ◽  
Shawn Learn ◽  
Ron Hugo ◽  
Simon Park

Abstract There are several challenges associated with existing rupture detection systems, such as their inability to detect accurately during transient conditions (such as pump dynamics), their delayed responses, and the difficulty of transferring models to different pipeline configurations. To address these challenges, we employ multiple Artificial Intelligence (AI) classifiers that rely on pattern recognition instead of traditional operator-set thresholds. AI techniques, consisting of two-dimensional (2D) Convolutional Neural Networks (CNN) and Adaptive Neuro-Fuzzy Inference Systems (ANFIS), are used to mimic processes performed by operators during a rupture event. This includes both visualization (using CNN) and rule-based decision making (using ANFIS). The system provides a level of reasoning to an operator through the use of the rule-based AI system. Pump station sensor data is non-dimensionalized prior to AI processing, enabling application to pipeline configurations outside of the training data set. AI algorithms undergo testing and training using two data sets: laboratory-collected data that mimics transient pump-station operations and real operator data that includes Real Time Transient Model (RTTM) simulated ruptures. The use of non-dimensional sensor data enables the system to detect ruptures from pipeline data not used in the training process.
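
The abstract does not specify the non-dimensionalization used; the sketch below illustrates the general idea of scaling raw sensor channels by station reference values so that the classifier sees dimensionless signals, which is what allows transfer across pipeline configurations. The reference-value scheme is an assumption:

```python
import numpy as np

def nondimensionalize(pressure, flow, p_ref, q_ref):
    """Scale raw sensor channels by station reference values so the resulting
    signals are dimensionless and comparable across pipelines. Here p_ref and
    q_ref are assumed steady-state operating values, not the paper's scheme."""
    return pressure / p_ref, flow / q_ref

# Example: stack channels into a (channels x time) window for a 2D CNN input.
t = np.linspace(0, 10, 500)
pressure = 5.0e6 + 1.0e5 * np.sin(t)   # Pa, with a transient oscillation
flow = 0.8 + 0.01 * np.cos(t)          # m^3/s
p_nd, q_nd = nondimensionalize(pressure, flow, p_ref=5.0e6, q_ref=0.8)
window = np.stack([p_nd, q_nd])        # shape (2, 500)
```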


Author(s):  
Christopher Macdonald ◽  
Jaehyun Yang ◽  
Shawn Learn ◽  
Simon S. Park ◽  
Ronald J. Hugo

Abstract There are several challenges associated with existing pipeline rupture detection systems, including an inability to detect accurately during transient conditions (such as changes in pump operating points), an inability to transfer easily from one pipeline configuration to another, and relatively slow response times. To address these challenges, we employ multiple Artificial Intelligence (AI) classifiers that rely on pattern recognition instead of traditional operator-set thresholds. AI techniques, consisting of two-dimensional (2D) Convolutional Neural Networks (CNN) and Adaptive Neuro-Fuzzy Inference Systems (ANFIS), are used to mimic processes performed by operators during a rupture event. This includes both visualization (using CNN) and rule-based decision making (using ANFIS). The system provides a level of reasoning to an operator through the use of rule-based AI. Pump station sensor data is non-dimensionalized prior to AI processing, enabling application to pipeline configurations outside of the training data set, independent of geometry, length, and medium. AI algorithms undergo testing and training using two data sets: laboratory-collected flow loop data that mimics transient pump-station operations and real operator data that includes simulated ruptures generated with the Real Time Transient Model (RTTM). The results of the multiple AI classifiers are fused together to provide higher reliability, especially when detecting ruptures from pipeline data not used in the training process.
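
The fusion rule itself is not given in the abstract; as a stand-in for the general idea of combining the CNN and ANFIS outputs into one more reliable decision, the sketch below uses a simple weighted average of per-classifier rupture probabilities:

```python
import numpy as np

def fuse_rupture_scores(scores, weights=None):
    """Fuse per-classifier rupture probabilities (e.g., CNN and ANFIS outputs)
    into one decision score via a weighted average. The weighting scheme is
    an assumption, not the paper's fusion rule."""
    scores = np.asarray(scores, dtype=float)
    if weights is None:
        weights = np.ones_like(scores) / len(scores)  # default: equal weights
    return float(np.dot(weights, scores))

fused = fuse_rupture_scores([0.91, 0.78], weights=[0.6, 0.4])
alarm = fused > 0.7   # decision threshold is illustrative only
```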


Author(s):  
Paolo Massimo Buscema ◽  
William J Tastle

Data sets collected independently using the same variables can be compared using a new artificial neural network called the Artificial neural network What If Theory (AWIT). Given a data set that is deemed the standard reference for some object, e.g., a flower, industry, disease, or galaxy, other data sets can be compared against it to identify their proximity to the standard. Thus, data that might not lend themselves well to traditional methods of analysis could yield new perspectives or views, and potentially new perceptions of novel and innovative solutions. This method comes out of the field of artificial intelligence, particularly artificial neural networks, and utilizes both machine learning and pattern recognition to deliver an innovative analysis.
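
AWIT's internals are not described in this abstract, so the sketch below substitutes a plain density-based proximity score for the general idea of measuring how close a candidate data set lies to a standard reference; it is explicitly a generic stand-in, not the AWIT network:

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def proximity_to_reference(reference, candidate, bandwidth=0.5):
    """Fit a density model to the standard reference data set and score a
    candidate set by its mean log-likelihood under that model. A generic
    proximity stand-in, not AWIT itself."""
    kde = KernelDensity(bandwidth=bandwidth).fit(reference)
    return kde.score_samples(candidate).mean()

rng = np.random.default_rng(1)
reference = rng.normal(0.0, 1.0, size=(500, 4))   # the "standard" object
near = rng.normal(0.1, 1.0, size=(200, 4))
far = rng.normal(3.0, 1.0, size=(200, 4))
# The nearer set scores a higher likelihood under the reference density.
print(proximity_to_reference(reference, near) >
      proximity_to_reference(reference, far))     # True
```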


Geophysics ◽  
2002 ◽  
Vol 67 (6) ◽  
pp. 1692-1700 ◽  
Author(s):  
Torleif Dahlin ◽  
Christian Bernstone ◽  
Mong Hong Loke

A contaminated site at Lernacken in southern Sweden, formerly used for sludge disposal, was investigated using a 3-D resistivity imaging technique. The data were acquired with a roll-along technique for 3-D surveying that allows the use of standard multielectrode equipment designed for engineering and environmental applications. The technique allows the measurement of large, true 3-D resistivity data sets, and data were measured using two perpendicular electrode-orientation directions with only one layout of the cables. The data were plotted as two sets of pseudo depth slices corresponding to the two electrode orientations, which resulted in markedly different plots. The complete data set was inverted to form a resistivity-depth model of the ground using a 3-D least-squares smoothness-constrained inversion technique. The results were compared to other geophysical and background data, and good agreement was found. The results show that the 3-D roll-along technique in combination with 3-D inversion can be highly useful for engineering and environmental applications. However, multichannel measurement equipment is necessary to speed up the data acquisition process for routine application.
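
The 3-D inversion code used in the study is not reproduced here; the sketch below illustrates the underlying principle of smoothness-constrained least-squares inversion on a toy linear problem, trading data fit against model roughness through a first-difference operator and a damping weight:

```python
import numpy as np

def smoothness_constrained_lsq(G, d, lam=1.0):
    """Solve min ||G m - d||^2 + lam * ||L m||^2, where L is a first-difference
    roughness operator; lam trades data fit against model smoothness."""
    n = G.shape[1]
    L = np.diff(np.eye(n), axis=0)          # (n-1, n) first-difference matrix
    A = G.T @ G + lam * (L.T @ L)
    return np.linalg.solve(A, G.T @ d)      # normal-equations solution

# Toy linear forward problem with a smooth true model and noisy data.
rng = np.random.default_rng(2)
G = rng.normal(size=(80, 40))
m_true = np.sin(np.linspace(0, np.pi, 40))
d = G @ m_true + 0.05 * rng.normal(size=80)
m_est = smoothness_constrained_lsq(G, d, lam=5.0)
```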


2020 ◽  
Vol 30 (Supplement_5) ◽  
Author(s):  
R Haneef ◽  
S Fuentes ◽  
R Hrzic ◽  
S Fosse-Edorh ◽  
S Kab ◽  
...  

Abstract Background The use of artificial intelligence is increasing to estimate and predict health outcomes from large data sets. The main objectives were to develop two algorithms using machine learning techniques to identify new cases of diabetes (case study I) and to classify type 1 and type 2 diabetes (case study II) in France. Methods We selected the training data set from a cohort study linked with the French national health database (SNDS). Two final data sets were used, one for each objective. A supervised machine learning method comprising the following eight steps was developed: selection of the data set, case definition, coding and standardization of variables, splitting of the data into training and test data sets, variable selection, training, validation, and selection of the model. We planned to apply the trained models to the SNDS to estimate the incidence of diabetes and the prevalence of type 1/2 diabetes. Results For case study I, 23 of 3468 SNDS variables were selected, and for case study II, 14 of 3481, using the ReliefExp algorithm to seek an optimal balance of explained variance. We trained four models using different classification algorithms on the training data set. The Linear Discriminant Analysis model performed best in both case studies. The models were assessed on the test data sets and achieved a specificity of 67% and a sensitivity of 62% in case study I, and a specificity of 97% and a sensitivity of 100% in case study II. The case study II model was applied to the SNDS and estimated the prevalence of type 1 diabetes in France in 2016 at 0.3% and of type 2 at 4.4%. The case study I model was not applied to the SNDS. Conclusions The case study II model to estimate the prevalence of type 1/2 diabetes performs well and will be used in routine surveillance. The case study I model to identify new cases of diabetes showed poor performance, owing to missing information on determinants of diabetes, and will need to be improved for further research.
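
The SNDS variables are not available here, so the sketch below reproduces only the shape of the modelling step the abstract describes: train a Linear Discriminant Analysis classifier, then evaluate sensitivity and specificity on a held-out test set. The synthetic data and the 14-variable width (matching case study II) are stand-ins:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 14))   # 14 selected variables, as in case study II
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LinearDiscriminantAnalysis().fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
sensitivity = tp / (tp + fn)   # true-positive rate
specificity = tn / (tn + fp)   # true-negative rate
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```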


Author(s):  
Asaduzzaman Nur Shuvo ◽  
Apurba Adhikary ◽  
Md. Bipul Hossain ◽  
Sultana Jahan Soheli

Data sets in large applications are often too large to fit entirely in a computer's internal memory. The resulting input/output (I/O) communication between fast internal memory and slower external memory (such as disks) can be a major performance bottleneck. Sorting such huge data sets therefore requires external sorting. This paper is concerned with a new in-place external sorting algorithm. Our proposed algorithm uses the concepts of Quick-Sort and the Divide-and-Conquer approach, resulting in a faster sorting algorithm that avoids any additional disk space. In addition, we show that the average time complexity can be reduced compared to existing external sorting approaches.
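
The paper's in-place Quick-Sort-based algorithm is not given in the abstract; for orientation, the sketch below shows the classic run-and-merge external sort that such algorithms improve on: sort memory-sized runs, spill them to disk, then stream a k-way merge. It assumes newline-terminated text records and is not the authors' method:

```python
import heapq
import os
import tempfile

def external_sort(in_path, out_path, max_lines_in_memory=100_000):
    """Classic external sort: sort memory-sized runs (here with Python's
    built-in sort; the paper instead uses a Quick-Sort-based in-place scheme),
    write each run to disk, then k-way merge the runs with a heap.
    Assumes every record is a newline-terminated line."""
    runs = []
    with open(in_path) as f:
        while True:
            chunk = [line for _, line in zip(range(max_lines_in_memory), f)]
            if not chunk:
                break
            chunk.sort()                                   # in-memory run sort
            tmp = tempfile.NamedTemporaryFile("w", delete=False, suffix=".run")
            tmp.writelines(chunk)
            tmp.close()
            runs.append(tmp.name)
    files = [open(r) for r in runs]
    with open(out_path, "w") as out:
        out.writelines(heapq.merge(*files))                # streaming k-way merge
    for fh, r in zip(files, runs):
        fh.close()
        os.remove(r)
```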

