Convolutional Neural Networks for the Localization of Plastic Velocity Gradient Tensor in Polycrystalline Microstructures

Author(s):  
David Montes de Oca Zapiain ◽  
Apaar Shanker ◽  
Surya Kalidindi

Abstract Recent work has demonstrated the potential of convolutional neural networks (CNNs) in producing low-computational cost surrogate models for the localization of mechanical fields in two-phase microstructures. The extension of the same CNNs to polycrystalline microstructures is hindered by the lack of an efficient formalism for the representation of the crystal lattice orientation in the input channels of the CNNs. In this paper, we demonstrate the benefits of using generalized spherical harmonics (GSH) for addressing this challenge. A CNN model was successfully trained to predict the local plastic velocity gradient fields in polycrystalline microstructures subjected to a macroscopically imposed loading condition. Specifically, it is demonstrated that the proposed approach significantly improves the accuracy of the CNN models when compared with the direct use of Bunge-Euler angles to represent the crystal orientations in the input channels. Since the proposed approach implicitly satisfies the expected crystal symmetries in the specification of the input microstructure to the CNN, it opens new research directions for the adoption of CNNs in addressing a broad range of polycrystalline microstructure design and optimization problems.
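As a toy illustration of the idea of harmonic orientation channels (not the authors' implementation), the sketch below maps a voxel's Bunge-Euler angles to a feature vector. A full GSH treatment requires Wigner D-functions symmetrized for the crystal point group; here, ordinary spherical harmonics of the crystal c-axis direction, computed with `scipy.special.sph_harm`, serve as a simplified stand-in, and the function name and the `l_max` truncation are assumptions for illustration only.

```python
import numpy as np
from scipy.special import sph_harm

def orientation_channels(euler_angles, l_max=2):
    """Map Bunge-Euler angles (phi1, Phi, phi2) to harmonic input channels.

    Simplified stand-in: encodes only the crystal c-axis direction with
    ordinary spherical harmonics Y_lm; a full treatment would use
    generalized spherical harmonics (Wigner D-functions) symmetrized
    for the crystal symmetry group.
    """
    phi1, Phi, phi2 = euler_angles
    # Azimuthal and polar angles of the c-axis in the sample frame.
    azimuth, polar = phi1, Phi
    feats = []
    for l in range(l_max + 1):
        for m in range(-l, l + 1):
            y = sph_harm(m, l, azimuth, polar)  # sph_harm(m, n, azimuth, polar)
            feats.extend([y.real, y.imag])      # real channels for the CNN
    return np.array(feats)

# One channel vector per voxel; stacking these over a 3D microstructure
# gives the multi-channel input tensor for the CNN.
channels = orientation_channels((0.3, 1.1, 0.5))
```

With `l_max=2` this yields 2 × (1 + 3 + 5) = 18 real-valued channels per voxel.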

Author(s):  
Nicola Demo ◽  
Giulio Ortali ◽  
Gianluca Gustin ◽  
Gianluigi Rozza ◽  
Gianpiero Lavini

Abstract This contribution describes the implementation of a data-driven shape optimization pipeline in a naval architecture application. We adopt reduced order models in order to improve the efficiency of the overall optimization, keeping a modular and equation-free nature to target the industrial demand. We applied the above-mentioned pipeline to a realistic cruise ship in order to reduce the total drag. We begin by defining the design space, generated by deforming an initial shape in a parametric way using free form deformation. The performance of each new hull is evaluated by simulating the flow via a finite volume discretization of a two-phase (water and air) fluid. Since the fluid dynamics model can be very expensive, especially when dealing with complex industrial geometries, we also propose a dynamic mode decomposition enhancement to reduce the computational cost of a single numerical simulation. Real-time computation is finally achieved by means of a proper orthogonal decomposition with Gaussian process regression technique. Thanks to this quick approximation, a genetic optimization algorithm becomes feasible and converges towards the optimal shape.
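The POD-with-GPR surrogate at the core of this pipeline can be sketched in a few lines: compress a snapshot matrix with a truncated SVD, then regress each modal coefficient against the shape parameters. The snapshot data below is a synthetic stand-in for the finite-volume solutions, and the basis rank `r = 5` is an arbitrary choice; this is a minimal sketch of the technique, not the authors' code.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

# Toy snapshot matrix: one column of "flow field" values per sampled
# hull-shape parameter vector (stand-in for finite-volume solutions).
params = rng.uniform(-1, 1, size=(20, 3))            # 20 designs, 3 FFD parameters
snapshots = np.array([np.sin(np.linspace(0, np.pi, 200) * (1 + p @ [1.0, 0.5, 0.2]))
                      for p in params]).T            # 200 dofs x 20 snapshots

# Proper orthogonal decomposition: truncated SVD of the snapshot matrix.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 5
basis = U[:, :r]                                     # reduced POD basis
coeffs = basis.T @ snapshots                         # modal coefficients per design

# One Gaussian-process regressor per POD coefficient -> real-time surrogate.
gprs = [GaussianProcessRegressor().fit(params, coeffs[i]) for i in range(r)]

def predict_field(p):
    """Approximate the full field for a new shape parameter vector p."""
    a = np.array([g.predict(p[None, :])[0] for g in gprs])
    return basis @ a                                 # reconstructed field

approx = predict_field(params[0])
```

Because each surrogate evaluation is just an SVD-basis expansion plus a handful of GPR predictions, a genetic algorithm can afford the many objective evaluations it needs.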


2020 ◽  
Vol 10 (2) ◽  
pp. 483 ◽  
Author(s):  
Eko Ihsanto ◽  
Kalamullah Ramli ◽  
Dodi Sudiana ◽  
Teddy Surya Gunawan

Many algorithms have been developed for automated electrocardiogram (ECG) classification. Due to the non-stationary nature of the ECG signal, it is rather challenging to use traditional handcrafted methods, such as time-based feature extraction and classification, to pave the way for machine learning implementation. This paper proposes a novel method, i.e., an ensemble of depthwise separable convolutional (DSC) neural networks for the classification of cardiac arrhythmia ECG beats. Using our proposed method, the four stages of ECG classification, i.e., QRS detection, preprocessing, feature extraction, and classification, were reduced to two steps only, i.e., QRS detection and classification. No preprocessing method was required, while feature extraction was combined with classification. Moreover, to reduce the computational cost while maintaining accuracy, several techniques were implemented, including the All Convolutional Network (ACN), Batch Normalization (BN), and ensemble convolutional neural networks. The performance of the proposed ensemble CNNs was evaluated using the MIT-BIH arrhythmia database. In the training phase, around 22% of the 110,057 beats extracted from 48 records were utilized. Using only these 22% labeled training data, our proposed algorithm was able to classify the remaining 78% of the database into 16 classes. Furthermore, the sensitivity (Sn), specificity (Sp), positive predictivity (Pp), and accuracy (Acc) are 99.03%, 99.94%, 99.03%, and 99.88%, respectively. The proposed algorithm required around 180 μs, which is suitable for real-time application. These results showed that our proposed method outperformed other state-of-the-art methods.
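The computational saving behind depthwise separable convolution comes from splitting a full convolution into a per-channel (depthwise) filter followed by a 1×1 (pointwise) channel mix. A minimal NumPy sketch on a 1-D multi-channel signal, standing in for an ECG beat segment (the signal length, lead count, and kernel sizes are illustrative assumptions, not values from the paper):

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """Depthwise separable convolution on a 1-D multi-channel signal.

    x          : (length, channels)  e.g. an ECG beat segment
    dw_kernels : (k, channels)       one filter per input channel (depthwise)
    pw_weights : (channels, out_ch)  1x1 pointwise channel mixing
    """
    k, c = dw_kernels.shape
    # Depthwise stage: each channel convolved with its own kernel only.
    dw = np.stack([np.convolve(x[:, i], dw_kernels[:, i], mode="valid")
                   for i in range(c)], axis=1)       # (length - k + 1, channels)
    # Pointwise stage: 1x1 conv mixes channels; this split is what cuts
    # the multiply count versus a full k x c x out_ch convolution.
    return dw @ pw_weights                           # (length - k + 1, out_ch)

rng = np.random.default_rng(1)
beat = rng.standard_normal((360, 2))   # hypothetical beat segment, 2 leads
out = depthwise_separable_conv(beat,
                               rng.standard_normal((5, 2)),   # depthwise kernels
                               rng.standard_normal((2, 8)))   # pointwise weights
```

For a kernel of length k, c input channels, and m output channels, the split costs roughly k·c + c·m multiplies per position instead of k·c·m, which is where the reported efficiency comes from.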


2017 ◽  
Vol 89 (4) ◽  
pp. 609-619 ◽  
Author(s):  
Witold Artur Klimczyk ◽  
Zdobyslaw Jan Goraj

Purpose This paper aims to address the issue of designing an aerodynamically robust empennage. Aircraft design optimization, often narrowed to the analysis of cruise conditions, does not take into account other flight phases (manoeuvres), which, especially in the unmanned air vehicle sector, can be a significant part of the whole flight. The empennage is the part of the aircraft with a crucial function in manoeuvres, so it is important to consider robustness to achieve the highest performance. Design/methodology/approach A methodology for robust wing design is presented. Surrogate modelling using kriging is used to reduce the cost of optimization based on high-fidelity aerodynamic calculations. An analysis of varying flight conditions, namely the angle of attack, is made to assess the robustness of a design for a particular mission. Two cases are compared: global optimization of 11 parameters and optimization divided into two consecutive sub-optimizations. Findings Surrogate modelling proves its usefulness for cutting computational time. Splitting the problem into sub-optimizations finds a better design at lower computational cost. Practical implications It is demonstrated how surrogate modelling can be used for the analysis of robustness, and why it is important to consider it. The intuitive split of wing design into airfoil and planform sub-optimizations brings promising savings in optimization cost. Originality/value The methodology presented in this paper can be used in various optimization problems, especially those involving expensive computations and requiring top-quality design.
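The two-stage split can be illustrated with a toy objective: freeze the planform, optimize the airfoil parameters, then optimize the planform around the stage-one result. The quadratic `drag` function below is a hypothetical stand-in for the expensive aerodynamic model, and the 6 + 5 parameter split is only one way to reach the paper's 11 design variables; none of this is the authors' actual setup.

```python
import numpy as np
from scipy.optimize import minimize

def drag(airfoil, planform):
    """Toy stand-in for the expensive aerodynamic objective (hypothetical)."""
    a, p = np.asarray(airfoil), np.asarray(planform)
    # Weak coupling term mimics airfoil/planform interaction.
    return np.sum((a - 0.3) ** 2) + np.sum((p - 0.7) ** 2) + 0.1 * a[0] * p[0]

x0_airfoil, x0_planform = np.zeros(6), np.zeros(5)   # 6 + 5 = 11 parameters

# Stage 1: optimize airfoil parameters with the planform frozen.
res1 = minimize(lambda a: drag(a, x0_planform), x0_airfoil)

# Stage 2: optimize planform parameters around the stage-1 airfoil.
res2 = minimize(lambda p: drag(res1.x, p), x0_planform)

total = drag(res1.x, res2.x)
```

Each stage searches a lower-dimensional space, so a kriging surrogate needs far fewer expensive samples per stage than a single 11-dimensional global search would.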


Author(s):  
Mohammed Abdulla Salim Al Husaini ◽  
Mohamed Hadi Habaebi ◽  
Teddy Surya Gunawan ◽  
Md Rafiqul Islam ◽  
Elfatih A. A. Elsheikh ◽  
...  

Abstract Breast cancer is one of the most significant causes of death for women around the world. Breast thermography supported by deep convolutional neural networks is expected to contribute significantly to early detection and facilitate treatment at an early stage. The goal of this study is to investigate the behavior of different recent deep learning methods for identifying breast disorders. To evaluate our proposal, we built classifiers based on deep convolutional neural networks modelling Inception V3, Inception V4, and a modified version of the latter called Inception MV4. MV4 was introduced to maintain the computational cost across all layers by making the resultant number of features and the number of pixel positions equal. The DMR database was used with these deep learning models to classify thermal images of healthy and sick patients. Epochs ranging from 3 to 30 were used in conjunction with learning rates of 1 × 10⁻³, 1 × 10⁻⁴, and 1 × 10⁻⁵, a mini-batch size of 10, and different optimization methods. The training results showed that Inception V4 and MV4 with color images, a learning rate of 1 × 10⁻⁴, and the SGDM optimization method reached very high accuracy, verified through several experimental repetitions. With grayscale images, Inception V3 outperforms V4 and MV4 by a considerable accuracy margin for any optimization method. In fact, the Inception V3 (grayscale) performance is almost comparable to the Inception V4 and MV4 (color) performance, but only after 20–30 epochs. Inception MV4 achieved a 7% faster classification response time compared to V4. The use of the MV4 model is found to save energy and streamline arithmetic operations on the graphics processor. The results also indicate that increasing the number of layers may not necessarily be useful in improving the performance.
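The stated MV4 design rule, keeping the product of feature count and pixel positions equal across layers, can be illustrated with a small channel-schedule calculation. The function name, the initial 32-channel/224-pixel configuration, and the 2×2-pooling assumption are hypothetical; this only shows the arithmetic implied by the rule, not the actual MV4 architecture.

```python
def mv4_channel_schedule(c0, hw0, n_stages):
    """Channel counts that hold channels * height * width constant
    across stages, assuming 2x2 pooling halves each spatial dimension."""
    budget = c0 * hw0 * hw0            # the constant features-x-positions product
    sched = []
    c, hw = c0, hw0
    for _ in range(n_stages):
        sched.append((c, hw))
        hw //= 2                       # 2x2 pooling: half the height and width
        c = budget // (hw * hw)        # restore the feature/position product
    return sched

schedule = mv4_channel_schedule(c0=32, hw0=224, n_stages=4)
# [(32, 224), (128, 112), (512, 56), (2048, 28)]
```

Holding this product constant keeps the per-layer activation memory and, roughly, the arithmetic load uniform, which is consistent with the reported savings on the graphics processor.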


Geophysics ◽  
2021 ◽  
pp. 1-77
Author(s):  
Hanchen Wang ◽  
Tariq Alkhalifah

The large volume of time-lapse data often requires significant event detection and source location effort, especially in areas like shale gas exploration regions where a large number of micro-seismic events are often recorded. In many cases, the real-time monitoring and locating of these events are essential to production decisions. Conventional methods face considerable drawbacks. For example, traveltime-based methods require traveltime picking of often noisy data, while migration and waveform inversion methods require expensive wavefield solutions and event detection. Both tasks require some human intervention, and this becomes a big problem when too many sources need to be located, which is common in micro-seismic monitoring. Machine learning has recently been used to identify micro-seismic events or locate their sources once they are identified and picked. We propose to use a novel artificial neural network framework to directly map seismic data, without any event picking or detection, to their potential source locations. We train two convolutional neural networks on labeled synthetic acoustic data containing simulated micro-seismic events to fulfill such requirements. One convolutional neural network, which has a global average pooling layer to reduce the computational cost while maintaining high-performance levels, aims to classify the number of events in the data. The other network predicts the source locations and other source features such as the source peak frequencies and amplitudes. To reduce the size of the input data to the network, we correlate the recorded traces with a central reference trace to allow the network to focus on the curvature of the input data near the zero-lag region. We train the networks to handle single-event, multi-event, and no-event segments extracted from the data. Tests on a simple vertically varying model and a more realistic Otway field model demonstrate the approach's versatility and potential.
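The input-reduction step, correlating every trace with a central reference trace and keeping only lags near zero, can be sketched directly with `numpy.correlate`. The receiver count, trace length, and `max_lag` window below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def correlate_with_reference(traces, ref_index=None, max_lag=50):
    """Correlate each recorded trace with a central reference trace and
    keep only lags near zero, shrinking the network input while
    preserving the moveout curvature near the zero-lag region."""
    n_rec, n_t = traces.shape
    if ref_index is None:
        ref_index = n_rec // 2          # central receiver as reference
    ref = traces[ref_index]
    out = np.empty((n_rec, 2 * max_lag + 1))
    for i in range(n_rec):
        full = np.correlate(traces[i], ref, mode="full")  # length 2*n_t - 1
        mid = n_t - 1                                     # zero-lag index
        out[i] = full[mid - max_lag: mid + max_lag + 1]   # crop around zero lag
    return out

rng = np.random.default_rng(2)
data = rng.standard_normal((64, 1000))    # 64 receivers x 1000 time samples
small = correlate_with_reference(data)    # 64 x 101 network input
```

The CNN then sees a 64 × 101 panel instead of the full 64 × 1000 record, with the event moveout encoded in how the correlation peak shifts across receivers.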


2021 ◽  
Vol 58 (1) ◽  
pp. 5614-5624
Author(s):  
Asadi Srinivasulu ◽  
Umesh Neelakantan ◽  
Tarkeshwar Barua

Lung cancer is one of the most significant causes of malignancy-related death because of its aggressive nature and delayed detection at advanced stages. Early detection of the disease would help save a huge number of lives across the globe every year. Lung cancer detection at an early stage has become important, and also easier, with image processing and deep learning techniques. Lung cancer symptoms include a persistent cough, chest pain that worsens with deep breathing, hoarseness, unexplained loss of appetite and weight, coughing up blood or rust-colored sputum, shortness of breath, and bronchitis, pneumonia, or other infections that keep recurring. Computed tomography (CT) scan images of lung patients are used to detect and classify lung nodules and to determine the malignancy level of each nodule. A comparative analysis of the proposed Extended Convolutional Neural Network (ECNN) on parameters such as accuracy, time complexity, and performance shows that it reduces computational cost, works with a small amount of training data, and is superior to the existing system.


2022 ◽  
pp. 1-10
Author(s):  
Daniel Trevino-Sanchez ◽  
Vicente Alarcon-Aquino

The need to detect and classify objects correctly is a constant challenge; being able to recognize them at different scales and in different scenarios, sometimes cropped or badly lit, is not an easy task. Convolutional neural networks (CNN) have become a widely applied technique since they are completely trainable and suitable for extracting features. However, the growing number of convolutional neural network applications constantly pushes for accuracy improvements. Initially, those improvements involved the use of large datasets, augmentation techniques, and complex algorithms, methods that may have a high computational cost. Nevertheless, feature extraction is known to be the heart of the problem. As a result, other approaches combine different technologies to extract better features and improve accuracy without the need for more powerful hardware resources. In this paper, we propose a hybrid pooling method that incorporates multiresolution analysis within the CNN layers to reduce the feature map size without losing details. To prevent relevant information from being lost during the downsampling process, an existing pooling method is combined with a wavelet transform technique, keeping those details "alive" and enriching other stages of the CNN. Achieving better-quality features improves CNN accuracy. To validate this study, ten pooling methods, including the proposed model, are tested using four benchmark datasets. The results are compared with four of the evaluated methods, which are also considered the state of the art.
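The idea of pairing a standard pooling operator with wavelet approximation coefficients can be sketched with a Haar transform, whose level-1 approximation (LL) coefficients are scaled 2×2 block averages. This is a minimal stand-in for the paper's hybrid pooling, with the function name, 2×2 window, and channel layout chosen for illustration; the actual method may use a different wavelet and combination rule.

```python
import numpy as np

def hybrid_pool(fmap):
    """Hybrid 2x2 pooling: stack max-pooled values with Haar wavelet
    approximation (LL) coefficients, so detail discarded by max pooling
    survives in a second channel."""
    h, w = fmap.shape
    # Group the map into non-overlapping 2x2 blocks (trim odd edges).
    blocks = fmap[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    max_pool = blocks.max(axis=(1, 3))          # standard max pooling
    haar_ll = blocks.sum(axis=(1, 3)) / 2.0     # Haar LL: scaled block averages
    return np.stack([max_pool, haar_ll])        # 2 output channels

rng = np.random.default_rng(3)
feat = rng.standard_normal((8, 8))              # one feature map
pooled = hybrid_pool(feat)                      # shape (2, 4, 4)
```

Both channels halve the spatial resolution as ordinary pooling would, so the hybrid output drops into the same downstream layers with only a channel-count change.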

