Machine Learning Enabled Adaptive Optimization of a Transonic Compressor Rotor With Precompression

2019 ◽  
Vol 141 (5) ◽  
Author(s):  
Michael Joly ◽  
Soumalya Sarkar ◽  
Dhagash Mehta

In aerodynamic design, accurate and robust surrogate models are important to accelerate computationally expensive computational fluid dynamics (CFD)-based optimization. In this paper, a machine learning framework is presented to speed up the design optimization of a highly loaded transonic compressor rotor. The approach is threefold: (1) dynamic selection and self-tuning among several surrogate models; (2) classification to anticipate failure of the performance evaluation; and (3) adaptive selection of new candidates to perform CFD evaluation for updating the surrogate, which facilitates design space exploration and reduces surrogate uncertainty. The framework is demonstrated with a multipoint optimization of the transonic NASA rotor 37, yielding increased compressor efficiency in less than 48 h on 100 central processing unit cores. The optimized rotor geometry features precompression that relocates and attenuates the shock, without the stability penalty or undesired reacceleration usually observed in the literature.
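To make the three ingredients above concrete, the loop below is a minimal sketch of adaptive surrogate-based optimization, not the authors' framework: a synthetic function that "fails" outside a region stands in for the CFD solver, scikit-learn Gaussian-process and random-forest models stand in for the surrogate pool, and uncertainty-based infill stands in for the adaptive candidate selection.

```python
# Minimal sketch (illustrative only) of: (1) dynamic surrogate selection,
# (2) failure classification, (3) adaptive infill. The function run_cfd()
# is a synthetic stand-in for an expensive CFD evaluation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def run_cfd(x):
    """Stand-in for a CFD evaluation; returns None on solver 'failure'."""
    if np.linalg.norm(x) > 1.3:        # pretend the solver diverges out here
        return None
    return float(np.sum(x ** 2) + 0.05 * rng.normal())

# Initial design of experiments
X = rng.uniform(-1.5, 1.5, size=(30, 2))
y = np.array([run_cfd(x) for x in X], dtype=object)
ok = np.array([v is not None for v in y])

for _ in range(10):
    # (2) classifier anticipates evaluations likely to fail
    feasible_clf = RandomForestClassifier(n_estimators=100).fit(X, ok)

    Xs = X[ok]
    ys = np.array([v for v in y[ok]], dtype=float)

    # (1) dynamic selection: keep whichever surrogate cross-validates best
    surrogates = {"gp": GaussianProcessRegressor(normalize_y=True),
                  "rf": RandomForestRegressor(n_estimators=200)}
    scores = {k: cross_val_score(m, Xs, ys, cv=3).mean()
              for k, m in surrogates.items()}
    best = surrogates[max(scores, key=scores.get)].fit(Xs, ys)

    # (3) adaptive infill: among random candidates predicted feasible, run the
    # CFD stand-in where the Gaussian-process predictive uncertainty is largest
    gp = surrogates["gp"].fit(Xs, ys)
    pool = rng.uniform(-1.5, 1.5, size=(500, 2))
    pool = pool[feasible_clf.predict(pool)]
    _, std = gp.predict(pool, return_std=True)
    x_new = pool[np.argmax(std)]

    y_new = run_cfd(x_new)
    X = np.vstack([X, x_new])
    y = np.append(y, y_new)
    ok = np.append(ok, y_new is not None)

print(f"best surrogate this round: {max(scores, key=scores.get)}, "
      f"evaluations so far: {len(X)}")
```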


Author(s):  
Suguru N. Kudoh

A neurorobot is a model system for biological information processing with vital components and an artificial peripheral system. As the central processing unit of the neurorobot, a dissociated culture system possesses a simple and functional network compared to a whole brain; thus, it is suitable for exploring the spatiotemporal dynamics of the electrical activity of a neuronal circuit. The behavior of the neurorobot is determined by the response pattern of neuronal electrical activity evoked by current stimulation from the outside world. "Certain premise rules" should be embedded in the relationship between the spatiotemporal activity of neurons and the intended behavior. Two strategies for embedding premise rules are proposed. The first is "shaping," by which a neuronal circuit is trained to deliver a desired output; the shaping strategy presumes that meaningful behavior requires manipulation of the living neuronal network. The second strategy is "coordinating." A living neuronal circuit is regarded as the central processing unit of the neurorobot, and instinctive behavior is provided as premise control rules, which are embedded into the relationship between the living neuronal network and the robot. The direction of the self-tuning process of the neurons is not always suitable for the desired behavior of the neurorobot, so the interface between neurons and robot should be designed so that the direction of the self-tuning process of the neuronal network corresponds with the desired behavior of the robot. Details of these strategies and concrete designs of the neuron-robot interface are introduced and discussed in this chapter.
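As a purely illustrative caricature of the "coordinating" strategy, the sketch below maps an evoked activity pattern to a robot command through a fixed rule table; the stimulate_and_record interface, the firing-rate features, and the rule table are hypothetical placeholders, not the chapter's actual design.

```python
# Illustrative sketch of a neuron-to-robot mapping loop (hypothetical interface).
import numpy as np

RULES = {              # premise rules: response class -> robot behavior
    "pattern_A": "turn_left",
    "pattern_B": "turn_right",
    "other":     "move_forward",
}

def stimulate_and_record(sensor_event):
    """Stand-in for delivering a current stimulus via an electrode array and
    recording the evoked activity of the cultured network (simulated here)."""
    # Returns a (channels x time-bins) matrix of spike counts.
    return np.random.poisson(lam=2.0 + sensor_event, size=(8, 20))

def classify_response(spikes):
    """Reduce the spatiotemporal pattern to simple features and classify it."""
    rate_left = spikes[:4].mean()      # firing rate on one electrode group
    rate_right = spikes[4:].mean()     # firing rate on the other group
    if rate_left > 1.2 * rate_right:
        return "pattern_A"
    if rate_right > 1.2 * rate_left:
        return "pattern_B"
    return "other"

def control_step(sensor_event):
    spikes = stimulate_and_record(sensor_event)
    return RULES[classify_response(spikes)]

print(control_step(sensor_event=1.0))   # e.g. "move_forward"
```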


Atmosphere ◽  
2020 ◽  
Vol 11 (8) ◽  
pp. 870 ◽  
Author(s):  
Chih-Chiang Wei ◽  
Tzu-Hao Chou

Situated in the main tracks of typhoons in the Northwestern Pacific Ocean, Taiwan frequently suffers disasters from heavy rainfall during typhoons. Accurate and timely typhoon rainfall prediction is therefore an imperative topic that must be addressed. The purpose of this study was to develop a Hadoop Spark distributed framework based on big-data technology to accelerate the computation of typhoon rainfall prediction models. This study used deep neural networks (DNNs) and multiple linear regressions (MLRs) in machine learning to establish rainfall prediction models and evaluate rainfall prediction accuracy. The big-data technology employed was the Hadoop Spark distributed cluster-computing framework, which consists of the Hadoop Distributed File System, the MapReduce framework, and Spark; Spark serves as a new-generation technology that improves the efficiency of distributed computing. The research area was Northern Taiwan, with four surface observation stations as the experimental sites, and this study collected 271 typhoon events from 1961 to 2017. The following results were obtained: (1) in the machine-learning computation, prediction errors increased with prediction duration in both the DNN and MLR models; and (2) the Hadoop Spark framework was faster than the standalone systems (a single i7 central processing unit (CPU) and a single E3 CPU). When complex computation is required in a model (e.g., DNN model parameter calibration), the big-data-based Hadoop Spark framework can be used to establish highly efficient computation environments. In summary, this study successfully combined the big-data Hadoop Spark framework with machine learning to develop rainfall prediction models with effectively improved computing efficiency. Therefore, the proposed system can solve problems regarding real-time typhoon rainfall prediction with high timeliness and accuracy.
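As a rough illustration of how such a rainfall regression model could be trained on a Spark cluster, the sketch below uses PySpark's MLlib linear regression; the column names and HDFS path are placeholders and do not reflect the study's actual data schema.

```python
# Minimal PySpark sketch: training a multiple linear regression (MLR) model
# on a Spark cluster. Column names and the file path are placeholders.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression
from pyspark.ml.evaluation import RegressionEvaluator

spark = SparkSession.builder.appName("typhoon-rainfall-mlr").getOrCreate()

df = spark.read.csv("hdfs:///typhoon/obs.csv", header=True, inferSchema=True)

assembler = VectorAssembler(
    inputCols=["pressure", "wind_speed", "hours_ahead"],  # placeholder predictors
    outputCol="features")
data = assembler.transform(df).select("features", "rainfall")

train, test = data.randomSplit([0.8, 0.2], seed=42)

mlr = LinearRegression(featuresCol="features", labelCol="rainfall")
model = mlr.fit(train)

rmse = RegressionEvaluator(labelCol="rainfall",
                           predictionCol="prediction",
                           metricName="rmse").evaluate(model.transform(test))
print(f"test RMSE: {rmse:.2f} mm")

spark.stop()
```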


Author(s):  
Manoj Kollam ◽  
Ajay Joshi

An earthquake is a devastating natural hazard capable of wiping out thousands of lives and causing economic loss to a geographical region. Seismic stations continuously gather data without the necessity of the occurrence of an event, and the gathered data are processed by the model to forecast the occurrence of earthquakes. This paper presents a model to forecast earthquakes using parallel processing. Machine learning is rapidly taking over a variety of aspects of our daily lives. Even though machine learning methods can be used for analyzing data, in the scenario of event forecasts such as earthquakes their performance is limited as the data grow day by day, so using ML alone is not a perfect solution. To increase model performance and accuracy, a new ML model is designed using parallel processing. The drawbacks of ML on a central processing unit (CPU) can be overcome by a graphics processing unit (GPU) implementation, since parallelism is naturally provided by the framework for developing GPU-based computational algorithms known as the Compute Unified Device Architecture (CUDA). The implementation of a hybrid support vector machine (H-SVM) algorithm using parallel processing through CUDA is used to forecast earthquakes. Our experiments show that the GPU-based implementation achieved typical speedup values in the range of 3-70 times compared to a conventional CPU. Results of different experiments are discussed along with their consequences.
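The CPU-versus-GPU comparison can be sketched as below, with scikit-learn as the CPU baseline and RAPIDS cuML standing in for a GPU SVM; the synthetic data and the use of cuML (rather than the paper's H-SVM implementation) are assumptions made only for illustration.

```python
# Illustrative CPU-vs-GPU SVM timing comparison (not the paper's H-SVM code).
# Assumes scikit-learn for the CPU baseline and RAPIDS cuML for the GPU run.
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC as CpuSVC
from cuml.svm import SVC as GpuSVC   # requires a CUDA-capable GPU and RAPIDS

X, y = make_classification(n_samples=20_000, n_features=40, random_state=0)
X = X.astype(np.float32)
y = y.astype(np.float32)

t0 = time.perf_counter()
CpuSVC(kernel="rbf").fit(X, y)
cpu_s = time.perf_counter() - t0

t0 = time.perf_counter()
GpuSVC(kernel="rbf").fit(X, y)
gpu_s = time.perf_counter() - t0

print(f"CPU fit: {cpu_s:.1f} s, GPU fit: {gpu_s:.1f} s, "
      f"speedup: {cpu_s / gpu_s:.1f}x")
```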


Electronics ◽  
2020 ◽  
Vol 9 (7) ◽  
pp. 1069
Author(s):  
Minseon Kang ◽  
Yongseok Lee ◽  
Moonju Park

Recently, the application of machine learning on embedded systems has drawn interest in both the research community and industry, because embedded systems located at the edge can produce a faster response and reduce network load. However, software implementation of neural networks on Central Processing Units (CPUs) is considered infeasible in embedded systems due to the limited power supply. To accelerate AI processing, the many-core Graphics Processing Unit (GPU) has been the preferred device over the CPU; however, its energy efficiency is still not considered good enough for embedded systems. Among other approaches to machine learning on embedded systems, neuromorphic processing chips are expected to consume less power and to overcome the memory bottleneck. In this work, we implemented a pedestrian image detection system on an embedded device using a commercially available neuromorphic chip, NM500, which is based on NeuroMem technology. The NM500 processing time and power consumption were measured as the number of chips was increased from one to seven, and they were compared to those of a multicore CPU system and a GPU-accelerated embedded system. The results show that NM500 is more efficient, in terms of the energy required to process data for both learning and classification, than the GPU-accelerated system or the multicore CPU system. Additionally, limits and possible improvements of the current NM500 are identified based on the experimental results.
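To make the learning/classification workflow concrete, the sketch below is a simplified software caricature of the radial-basis-function (influence field) behavior that NeuroMem-style neurons implement; it is not the NM500 API, and all parameters are illustrative.

```python
# Behavioral caricature of RCE/RBF-style learning as performed by
# NeuroMem-like neurons (not the NM500 API; parameters are illustrative).
import numpy as np

class InfluenceFieldClassifier:
    def __init__(self, max_field=1000, min_field=10):
        self.neurons = []            # each: (prototype vector, category, field)
        self.max_field = max_field
        self.min_field = min_field

    def learn(self, vector, category):
        # Shrink the field of any neuron of a different category that wrongly
        # "fires" on this example, then commit a new neuron if none was correct.
        fired_correctly = False
        for i, (proto, cat, field) in enumerate(self.neurons):
            dist = np.abs(vector - proto).sum()          # L1 distance
            if dist < field:
                if cat == category:
                    fired_correctly = True
                else:
                    self.neurons[i] = (proto, cat, max(dist, self.min_field))
        if not fired_correctly:
            self.neurons.append((vector.copy(), category, self.max_field))

    def classify(self, vector):
        firing = [(np.abs(vector - p).sum(), c)
                  for p, c, f in self.neurons if np.abs(vector - p).sum() < f]
        if not firing:
            return None                                   # "unknown"
        return min(firing)[1]                             # nearest firing neuron

clf = InfluenceFieldClassifier()
clf.learn(np.array([10, 10, 10], dtype=float), "pedestrian")
clf.learn(np.array([200, 200, 200], dtype=float), "background")
print(clf.classify(np.array([12, 9, 11], dtype=float)))   # -> "pedestrian"
```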


2014 ◽  
Vol 668-669 ◽  
pp. 592-597
Author(s):  
Ling Wang ◽  
Xin Chen

The flight control computer is the core unit of the flight control system and the central part of an Unmanned Aerial Vehicle (UAV) system; it plays a vital role in the stability and security of the entire system and therefore requires fast and reliable system startup. The bootloader is the first section of software code that runs after power-on; it is mainly responsible for initializing hardware devices and completes system startup by loading an application or an operating system kernel. Based on the MPC8280 processor hardware platform of the central processing unit of a distributed flight control computer, this paper designs and implements a boot scheme that does not rely on an operating system. On top of this boot scheme, an optimized boot scheme aimed at increasing the efficiency of software development for the VxWorks operating system is provided.


Author(s):  
Prerana Shenoy S. P. ◽  
Sai Vishnu Soudri ◽  
Ramakanth Kumar P. ◽  
Sahana Bailuguttu

Observability is the ability to monitor the state of a system, which involves monitoring standard metrics such as central processing unit (CPU) utilization, memory usage, and network bandwidth. The better we understand the state of the system, the better we can improve its performance by recognizing unwanted behavior and improving its stability and reliability. To achieve this, it is essential to build an automated monitoring system that is easy to use and efficient. To that end, we have built a Kubernetes operator that automates the deployment and monitoring of applications and notifies of unwanted behavior in real time. It also enables the visualization of the metrics generated by the application and allows these visualization dashboards to be standardized for each type of application. Thus, it improves the system's productivity and vastly saves time and resources in deploying monitored applications, upgrading Kubernetes resources for each application deployed, and migrating applications.
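As a hedged sketch of the monitoring side only, the snippet below polls pod CPU usage through the Kubernetes metrics API with the official Python client and flags pods above a threshold; the namespace, threshold, and alert() stub are placeholders, and the operator described above is not necessarily implemented this way.

```python
# Minimal sketch: poll pod CPU usage via the Kubernetes metrics API and flag
# pods above a threshold. Namespace, threshold, and alert() are placeholders.
import time
from kubernetes import client, config

CPU_LIMIT_MILLI = 500          # alert above 500m CPU (placeholder threshold)
NAMESPACE = "default"

def alert(pod, millicores):
    print(f"[ALERT] {pod} is using {millicores}m CPU")

def cpu_millicores(usage):
    # metrics-server reports e.g. "120000000n" (nanocores), "250m", or "1" (cores)
    if usage.endswith("n"):
        return int(usage[:-1]) // 1_000_000
    if usage.endswith("m"):
        return int(usage[:-1])
    return int(float(usage) * 1000)

def watch_cpu():
    config.load_kube_config()                  # or load_incluster_config()
    api = client.CustomObjectsApi()
    while True:
        metrics = api.list_namespaced_custom_object(
            "metrics.k8s.io", "v1beta1", NAMESPACE, "pods")
        for pod in metrics["items"]:
            total = sum(cpu_millicores(c["usage"]["cpu"])
                        for c in pod["containers"])
            if total > CPU_LIMIT_MILLI:
                alert(pod["metadata"]["name"], total)
        time.sleep(30)

if __name__ == "__main__":
    watch_cpu()
```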


Author(s):  
Shweta Sharma ◽  
Rama Krishna ◽  
Rakesh Kumar

With the latest developments in technology, the use of smartphones to fulfill day-to-day requirements has increased. Android-based smartphones occupy the largest market share among mobile operating systems. Hackers continuously keep an eye on Android-based smartphones by creating malicious apps housed with ransomware functionality for monetary purposes; they lock the screen and/or encrypt the documents of the victim's Android-based smartphone after performing a ransomware attack. Thus, in this paper, a framework has been proposed in which we (1) utilize novel features of Android ransomware, (2) reduce the dimensionality of the features, (3) employ an ensemble learning model to detect Android ransomware, and (4) perform a comparative analysis of the computational time required by machine learning models to detect Android ransomware. The proposed framework can efficiently detect both locker and crypto ransomware. The experimental results reveal that the proposed framework detects Android ransomware with an accuracy of 99.67% using a Random Forest ensemble model. After reducing the dimensionality of the features with the principal component analysis technique, the Logistic Regression model took the least time to execute: 41 milliseconds on the Graphics Processing Unit (GPU) and 50 milliseconds on the Central Processing Unit (CPU).
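The shape of that pipeline (PCA for dimensionality reduction, then Random Forest and Logistic Regression with wall-clock timing) can be sketched as below; synthetic features stand in for the paper's Android ransomware dataset, so the numbers it prints are not the reported results.

```python
# Sketch of the pipeline shape only: PCA dimensionality reduction followed by
# Random Forest / Logistic Regression, with timing. Synthetic data stands in
# for the paper's Android ransomware feature set.
import time
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=5000, n_features=200, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

pca = PCA(n_components=30).fit(X_tr)          # reduce feature dimensionality
X_tr_p, X_te_p = pca.transform(X_tr), pca.transform(X_te)

for name, model in [("Random Forest", RandomForestClassifier(n_estimators=100)),
                    ("Logistic Regression", LogisticRegression(max_iter=1000))]:
    t0 = time.perf_counter()
    model.fit(X_tr_p, y_tr)
    pred = model.predict(X_te_p)
    elapsed_ms = (time.perf_counter() - t0) * 1000
    print(f"{name}: accuracy={accuracy_score(y_te, pred):.4f}, "
          f"time={elapsed_ms:.0f} ms")
```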


2019 ◽  
Vol 15 (10) ◽  
pp. 155014771988355 ◽  
Author(s):  
Nematullo Rahmatov ◽  
Anand Paul ◽  
Faisal Saeed ◽  
Won-Hwa Hong ◽  
HyunCheol Seo ◽  
...  

The aim of this article is to automate quality control once a product, essentially a central processing unit (CPU) system, is manufactured. Creating a model that helps in quality control and increases the efficiency and speed of production by rejecting abnormal products automatically is vital. A widely used technology for this purpose is industrial image processing, based on special cameras or imaging systems installed within the production line. In this article, we propose a highly efficient model to automate CPU system production lines in industry: images of the production lines are scanned, any abnormalities in their assembly are pointed out by the model, and information about them is transferred to the system administrator via a cyber-physical cloud system network. A machine learning–based approach is used for proper classification. This model not only detects abnormalities but also helps in configuring the angles from which images of the production line are taken, and our methods show 92% accuracy.
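As an illustration of the classification step only, the sketch below flattens grayscale images and trains a normal/abnormal classifier; the synthetic images, the 64x64 size, and the choice of a support vector classifier are assumptions, not the article's model.

```python
# Sketch of the normal/abnormal image classification step. Synthetic grayscale
# "images" stand in for the scanned production-line photos; the 64x64 size and
# the SVM classifier are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def synthetic_image(abnormal):
    img = rng.normal(0.5, 0.1, size=(64, 64))
    if abnormal:                       # simulate a misplaced component
        img[20:30, 20:30] += 0.8
    return img

images = np.array([synthetic_image(i % 2 == 1) for i in range(400)])
labels = np.array([i % 2 for i in range(400)])        # 0 = normal, 1 = abnormal

X = images.reshape(len(images), -1)                   # flatten to feature vectors
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25,
                                           random_state=0)

clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```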

