Estimating Deterioration Level of Aircraft Engines

Author(s):  
Hai Qiu ◽  
Neil Eklund ◽  
Weizhong Yan ◽  
Piero Bonissone ◽  
Feng Xue ◽  
...  

This paper describes an approach to estimate the deterioration level of aircraft engines using engine monitoring data and a physics-based engine model. The estimation process is carried out by a neural network, which is trained on data generated using a physics-based engine model complemented with an empirically derived engine deterioration model. The deterioration model allows manipulation of several engine health parameters, such as module efficiency and flow capacity, to simulate engine deterioration. Simulated sensor outputs are used to build independent transfer functions relating the sensor values to a deterioration level. A calibration model corrects the sensor readings to a reference condition so that the effect of varying operating conditions is minimized. The proposed approach can be used to assess engine deterioration level in real time. The proposed deterioration estimation approach is validated using real-world engine data.
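The idea of per-sensor transfer functions fused into one deterioration estimate can be illustrated with a minimal sketch. The sensor names, the linear deterioration model, and the quadratic fit below are assumptions for illustration only, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
deterioration = np.linspace(0.0, 1.0, 200)   # 0 = new engine, 1 = fully deteriorated

# Simulated sensor outputs: each (calibrated) sensor drifts with deterioration plus noise
sensors = {
    "egt_margin": 1.0 - 0.8 * deterioration + 0.02 * rng.standard_normal(200),
    "fuel_flow":  1.0 + 0.5 * deterioration + 0.02 * rng.standard_normal(200),
}

# Fit an independent quadratic transfer function per sensor: reading -> deterioration
transfer = {
    name: np.polynomial.Polynomial.fit(values, deterioration, deg=2)
    for name, values in sensors.items()
}

def estimate(readings):
    """Fuse the independent per-sensor deterioration estimates by averaging."""
    return float(np.mean([transfer[name](value) for name, value in readings.items()]))

# Readings consistent with roughly 50% deterioration under the assumed model
est = estimate({"egt_margin": 0.6, "fuel_flow": 1.25})
```

A real system would replace the polynomial fits with the paper's trained neural network, but the structure of the estimation step is the same.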

2001 ◽  
Vol 38-40 ◽  
pp. 859-865 ◽  
Author(s):  
Manuel A. Sánchez-Montañés ◽  
Peter König ◽  
Paul F.M.J. Verschure

Author(s):  
Akash Kumar, Dr. Amita Goel, Prof. Vasudha Bahl and Prof. Nidhi Sengar

Object detection is a core task in the field of computer vision. An object detection model recognizes real-world objects present either in a captured image or in real-time video, where an object can belong to any of a number of classes, such as humans, animals, or everyday items. This project implements an object detection algorithm called You Only Look Once (YOLOv3). The YOLO architecture is extremely fast compared to all previous methods. The YOLOv3 model runs a single neural network over the given image, divides the image into predetermined regions, and predicts bounding boxes weighted by the predicted class probabilities. After non-max suppression, it returns the recognized objects together with their bounding boxes. YOLO trains on and directly performs object detection over full images.
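The non-max suppression step mentioned above can be sketched in a few lines: keep the highest-scoring box, drop boxes that overlap it beyond an IoU threshold, and repeat. The `[x1, y1, x2, y2]` box format and the threshold value are illustrative assumptions:

```python
import numpy as np

def iou(box, boxes):
    """Intersection-over-union of one box against an array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-max suppression: returns indices of kept boxes."""
    order = np.argsort(scores)[::-1]           # highest score first
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) <= iou_thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
kept = nms(boxes, scores)   # the two overlapping boxes collapse into one
```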


2019 ◽  
Vol 40 (Supplement_1) ◽  
Author(s):  
J D Kasprzak ◽  
M Kierepka ◽  
J Z Peruga ◽  
D Dudek ◽  
B Machura ◽  
...  

Abstract
Background: Three-dimensional (3D) echocardiographic data acquired from the transesophageal (TEE) window are commonly used in planning and during percutaneous structural cardiac interventions (PSCI).
Purpose: We hypothesized that an innovative, interactive mixed-reality display can be integrated with the procedural PSCI workflow to improve perception and interpretation of 3D data representing cardiac anatomy.
Methods: 3D TEE datasets were acquired before, during and after the completion of PSCI in 8 patients (occluders: 2 atrial appendage, 2 patent foramen ovale and 3 atrial septal implantations, and percutaneous mitral commissurotomy). 30 Cartesian DICOM files were used to test the feasibility of mixed reality with a commercially available head-mounted device (overlaying a hologram of 3D TEE data onto the real-world view) as a display for the interventional or imaging operator. Dedicated software was used for file conversion, 3D rendering of data to the display device (in 1 case, real-time Wi-Fi streaming from the echocardiograph) and spatial manipulation of the hologram during PSCI. A custom viewer was used to perform volume rendering and adjustment (cropping, transparency and shading control).
Results: Pre- and intraprocedural 3D TEE was performed in all 8 patients (5 women, age 40–83). Thirty selected 3D TEE datasets were successfully transferred and displayed in the mixed-reality head-mounted device as a holographic image overlaying the real-world view. The analysis was performed both before and during the procedure and compared with the flat-screen 2D display of the echocardiograph. In one case, real-time data transfer was successfully implemented during mitral balloon commissurotomy. The quality of visualization was judged good, without loss of diagnostic content, in all (100%) datasets. Both target structures and additional anatomical details were clearly presented, including fenestrations of an atrial septal defect, a prominent Eustachian valve and earlier cardiac implants. Volume-rendered views were manipulated touchlessly and displayed with a selection of intensity windows, transfer functions and filters. Detail display was judged comparable to current 2D volume rendering on commercial workstations, and the touchless user interface was comfortable for optimization of views during PSCI.
Conclusions: A mixed-reality display using a commercially available head-mounted device can be successfully integrated with the preparation and execution of PSCI. The benefits of this solution include touchless image control and an unobstructed real-world view facilitating intraprocedural use, thus showing superiority over virtual or enhanced reality solutions. Expected progress includes integration of color-flow data and optimization of the real-time streaming option.
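The volume adjustments the viewer applies, cropping a region of interest and an intensity window, can be sketched on a plain Cartesian voxel grid. The array shape and window bounds below are assumptions for illustration and are not tied to any vendor's DICOM format:

```python
import numpy as np

# Synthetic 3D voxel volume standing in for a Cartesian echo dataset
volume = np.random.default_rng(1).integers(0, 255, size=(64, 64, 64)).astype(float)

def crop(vol, lo, hi):
    """Keep only the sub-volume between corner indices lo and hi."""
    return vol[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]

def window(vol, low, high):
    """Map intensities in [low, high] to [0, 1], clipping values outside the window."""
    return np.clip((vol - low) / (high - low), 0.0, 1.0)

# Crop a central region of interest, then apply an intensity window
roi = window(crop(volume, (16, 16, 16), (48, 48, 48)), low=50.0, high=200.0)
```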


Electronics ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 342
Author(s):  
Fabio Martinelli ◽  
Fiammetta Marulli ◽  
Francesco Mercaldo ◽  
Antonella Santone

The proliferation of infotainment systems in today's vehicles has provided an inexpensive and easy-to-deploy platform for gathering information about the vehicle under analysis. To provide an architecture that increases safety and security in the automotive context, in this paper we propose a fully connected neural network architecture that uses position-based features to detect, in real time: (i) the driver, (ii) the driving style and (iii) the path. Experimental analysis performed on real-world data shows that the proposed method obtains encouraging results.
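A minimal sketch of the kind of fully connected network described above: position-based features in, a softmax over candidate drivers out. The feature set, layer sizes and (random) weights here are invented for illustration; the paper's trained network and its exact inputs differ:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Assumed position-based features: latitude, longitude, speed, heading
W1, b1 = rng.standard_normal((4, 16)), np.zeros(16)
W2, b2 = rng.standard_normal((16, 3)), np.zeros(3)   # 3 candidate drivers

def predict(features):
    """Forward pass of a two-layer fully connected classifier."""
    hidden = relu(features @ W1 + b1)
    return softmax(hidden @ W2 + b2)

probs = predict(np.array([45.07, 7.69, 13.5, 0.2]))   # one feature vector
driver = int(np.argmax(probs))                         # most likely driver index
```

The same architecture, with different output heads, would serve the driving-style and path detection tasks.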


Micromachines ◽  
2018 ◽  
Vol 9 (10) ◽  
pp. 495 ◽  
Author(s):  
Sungho Kim ◽  
Jungho Kim ◽  
Jinyong Lee ◽  
Junmo Ahn

Remote measurements of thermal radiation are very important for analyzing the solar effect in various environments. This paper presents a novel real-time remote temperature estimation method that applies a deep learning-based regression method to midwave infrared hyperspectral images. Conventional remote temperature estimation using only one channel or a few channels cannot provide a reliable temperature in dynamic weather environments because of the unknown atmospheric transmissivities. This paper addresses the problem of real-time remote temperature measurement with high accuracy using the proposed surface temperature deep convolutional neural network (ST-DCNN) and a hyperspectral thermal camera (TELOPS HYPER-CAM MWE). The 27-layer ST-DCNN regressor can learn and predict the underlying temperatures from 75 spectral channels. Midwave infrared hyperspectral image data of a remote object were acquired three times a day (10:00, 13:00, 15:00) for 7 months to capture the dynamic weather variations. The experimental results validate the feasibility of the novel remote temperature estimation method in real-world dynamic environments. In addition, the thermal stealth properties of two types of paint were demonstrated by the proposed ST-DCNN as a real-world application.
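The shape of the underlying regression task, predicting one surface temperature from 75 spectral radiance channels, can be shown with a deliberately simplified stand-in. Instead of the paper's 27-layer CNN, this sketch fits an ordinary least-squares model on simulated data; the linear radiance model and noise level are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels = 500, 75

true_temp = rng.uniform(270.0, 330.0, n_samples)     # surface temperature in kelvin
gains = rng.uniform(0.5, 1.5, n_channels)            # assumed per-channel gain
# Simulated radiance: each channel scales with temperature plus sensor noise
radiance = np.outer(true_temp, gains) + rng.standard_normal((n_samples, n_channels))

# Least-squares fit: 75 channels + bias column -> temperature
X = np.hstack([radiance, np.ones((n_samples, 1))])
coef, *_ = np.linalg.lstsq(X, true_temp, rcond=None)

pred = X @ coef
rmse = float(np.sqrt(np.mean((pred - true_temp) ** 2)))
```

Combining many channels averages out the per-channel noise, which is the same leverage the ST-DCNN exploits, with the network additionally learning the nonlinear atmospheric effects a linear model cannot.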


Author(s):  
Shaimaa Abbas Fahdel Al-Abaidy

Recently, the artificial neural network (ANN) has become a powerful technology for solving various real-world, real-time problems in image processing areas such as medicine and research. This paper presents the artificial neural network in combination with image processing. The main aim of this paper is to study the ANN in integration with digital image processing and encryption technologies. In this sense, we discuss the basics of image processing, digital image processing, artificial neural networks and encryption. With the help of the proposed block diagram, we discuss the ANN with and without encryption. Several algorithms are available when an ANN is used with digital image processing. Here we briefly discuss a few of them, together with the iteration equations used when training the ANN.
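The simplest example of the iteration equations used when training an ANN is the perceptron-style delta rule, w ← w + η·(target − output)·input, shown here training a single neuron to act as a 2-input OR gate. The learning rate and epoch count are arbitrary illustrative choices:

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0, 1, 1, 1], float)                 # OR truth table

w = np.zeros(2)                                   # weights
b = 0.0                                           # bias
lr = 0.1                                          # learning rate (eta)

for _ in range(50):                               # training iterations
    for xi, target in zip(X, y):
        out = 1.0 if xi @ w + b > 0 else 0.0      # step activation
        err = target - out
        w += lr * err * xi                        # w <- w + lr * error * input
        b += lr * err                             # b <- b + lr * error

preds = [1.0 if xi @ w + b > 0 else 0.0 for xi in X]
```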


2016 ◽  
Vol 2016 (3) ◽  
pp. 136-154 ◽  
Author(s):  
Laurent Simon ◽  
Wenduan Xu ◽  
Ross Anderson

Abstract
We present a new side-channel attack against soft keyboards that support gesture typing on Android smartphones. An application without any special permissions can observe the number and timing of the screen hardware interrupts and system-wide software interrupts generated during user input, and analyze this information to make inferences about the text being entered by the user. System-wide information is usually considered less sensitive than app-specific information, but we provide concrete evidence that this assumption may be mistaken. Our attack applies to all Android versions, including Android M, where the SELinux policy is tightened.
We present a novel application of a recurrent neural network as our classifier to infer text. We evaluate our attack against the "Google Keyboard" on Nexus 5 phones and use a real-world chat corpus in all our experiments. Our evaluation considers two scenarios. First, we demonstrate that we can correctly detect a set of pre-defined "sentences of interest" (with at least 6 words) with 70% recall and 60% precision. Second, we identify the authors of a set of anonymous messages posted on a messaging board. We find that even if the messages contain the same number of words, we correctly re-identify the author more than 97% of the time for a set of up to 35 sentences.
Our study demonstrates a new way in which system-wide resources can be a threat to user privacy. We investigate the effect of rate limiting as a countermeasure, but find that determining a proper rate is error-prone and fails in subtle cases. We conclude that real-time interrupt information should be made inaccessible, perhaps via a tighter SELinux policy in the next Android version.
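The raw signal behind the attack is the system-wide interrupt counters an unprivileged app can read in the `/proc/interrupts` text format on Linux-based systems such as Android. This sketch parses two snapshots in that format and reports per-source deltas; the snapshot contents and device names are fabricated for illustration:

```python
SNAP_BEFORE = """\
           CPU0
  30:      1200   GIC  touchscreen
  47:      5000   GIC  mmc0
"""

SNAP_AFTER = """\
           CPU0
  30:      1253   GIC  touchscreen
  47:      5001   GIC  mmc0
"""

def parse(snapshot):
    """Map interrupt source name -> CPU0 count from /proc/interrupts-style text."""
    counts = {}
    for line in snapshot.splitlines():
        parts = line.split()
        if parts and parts[0].endswith(":"):      # skip the CPU header row
            counts[parts[-1]] = int(parts[1])
    return counts

def deltas(before, after):
    """Per-source interrupt count change between two snapshots."""
    b, a = parse(before), parse(after)
    return {name: a[name] - b[name] for name in b}

d = deltas(SNAP_BEFORE, SNAP_AFTER)   # a touchscreen spike reveals user input
```

A burst of touchscreen interrupts between snapshots is exactly the timing signal the paper's recurrent classifier consumes.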


2018 ◽  
Author(s):  
Zhi Zhang

Despite being a core topic for several decades, object detection is still receiving increasing attention due to its irreplaceable importance in a wide variety of applications. Object detectors based on deep neural networks have shown significantly improved accuracy in recent years. However, these models are still far from being effectively deployable in the real world. In this dissertation, we focus on object detection models that tackle real-world problems which were out of reach only a few years ago. We also aim to make object detectors run on the go, meaning detectors are no longer required to run on workstations and cloud services, which are unfriendly to latency. To achieve these goals, we address the problem in two phases, application and deployment, and have conducted thorough research in both areas. Our contributions involve inter-frame information fusion, model knowledge distillation, advanced model flow control for progressive inference, and hardware-oriented model design and optimization. More specifically, we propose a novel cross-frame verification scheme for a spatiotemporally fused object detection model for sequential images and videos, operating in a propose-and-reject fashion. To compress models on a learning basis and resolve domain-specific training data shortages, we improve the learning algorithm to handle insufficient labeled data by searching for optimal guidance paths from pre-trained models. To further reduce model inference cost, we design a progressive neural network that runs at flexible cost, enabled by an RNN-style decision controller at runtime. We recognize the awkward model deployment problem, especially for object detection models that require excessive customized layers. In response, we propose end-to-end neural networks that use pure neural network components to substitute for traditional post-processing operations.
We also apply operator decomposition along with graph-level and on-device optimization toward real-time object detection on low-power edge devices. All these works have achieved state-of-the-art performance and have been converted into successful applications.
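The progressive-inference idea above, running cheap stages first and exiting early once a result is confident enough, can be sketched as a cascade with a simple threshold controller. The stage costs, confidences and threshold are invented for illustration; the dissertation's controller is RNN-based rather than a fixed threshold:

```python
def progressive_infer(stages, threshold=0.9):
    """Run stages in order; stop at the first sufficiently confident answer."""
    total_cost = 0.0
    for run_stage in stages:
        label, confidence, cost = run_stage()
        total_cost += cost
        if confidence >= threshold:               # controller decides to exit early
            return label, total_cost
    return label, total_cost                      # fell through to the last stage

# Toy stages returning (predicted label, confidence, compute cost)
stages = [
    lambda: ("car", 0.60, 1.0),                   # cheap model, unsure
    lambda: ("car", 0.95, 5.0),                   # mid-size model, confident -> exit
    lambda: ("car", 0.99, 20.0),                  # full model, never reached here
]

label, cost = progressive_infer(stages)
```

Easy inputs exit after the cheap stage; only hard inputs pay for the full model, which is what makes the inference cost flexible at runtime.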

