Windings Fault Detection and Prognosis in Electro-Mechanical Flight Control Actuators Operating in Active-Active Configuration

Author(s): Andrea De Martin, Giovanni Jacazio, George Vachtsevanos

One of the most significant research trends in the aeronautic industry over the last decades has been the effort to move towards the design and production of the “more electric aircraft”. Within this framework, the application of electrical technology to flight control systems has seen a progressive, although slow, increase: starting with the introduction of fly-by-wire and proceeding with the partial replacement of the traditional hydraulic/electro-hydraulic actuators with purely electro-mechanical ones. This evolution has provided more flexible solutions, reduced installation issues and enhanced aircraft control capability. Electro-Mechanical Actuators (EMAs) are, however, far from being a mature technology and still suffer from several safety issues, which can be only partially mitigated by increasing the complexity of their design and hence their production costs. The development of a robust Prognostics and Health Management (PHM) system could provide a way to prevent the occurrence of critical failures without resorting to complex device designs. This paper deals with the first part of the study of a comprehensive PHM system for EMAs employed as primary flight control actuators; the peculiarities of the application are presented and discussed, and a novel approach based on short pre-flight/post-flight health monitoring tests is proposed. A turn-to-turn short in the electric motor windings is identified as the most common electrical degradation, and a particle-filtering framework for anomaly detection and prognosis featuring a self-tuning non-linear model is proposed. The features, the anomaly-detection scheme and the prognostic algorithm are then evaluated through state-of-the-art performance metrics, and their results are discussed.
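As a rough illustration of the approach described above, the sketch below implements a generic SIR particle filter that tracks a scalar winding-damage state from a noisy health feature and extrapolates it to a failure threshold. The measurement model, noise levels, threshold and particle count are illustrative assumptions, not the self-tuning non-linear model used in the paper.

# Minimal, generic particle-filtering sketch for tracking a scalar degradation
# state (e.g. an equivalent "fraction of shorted turns") from a noisy health
# feature. All model parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_PARTICLES = 500
PROCESS_STD = 0.002      # assumed growth noise on the damage state per test
MEAS_STD = 0.05          # assumed noise on the observed health feature
FAIL_THRESHOLD = 0.30    # assumed damage level that defines failure


def measurement_model(damage):
    """Hypothetical mapping from damage state to the monitored feature."""
    return 1.0 + 2.5 * damage  # e.g. a normalized current-imbalance index


def pf_update(particles, weights, observation):
    """One SIR particle-filter step: propagate, weight, resample."""
    # Propagate: monotone degradation growth with process noise.
    particles = particles + np.abs(rng.normal(0.0, PROCESS_STD, particles.size))
    # Weight by the likelihood of the new observation.
    residual = observation - measurement_model(particles)
    weights = weights * np.exp(-0.5 * (residual / MEAS_STD) ** 2) + 1e-300
    weights /= weights.sum()
    # Multinomial resampling to avoid weight degeneracy.
    idx = rng.choice(N_PARTICLES, size=N_PARTICLES, p=weights)
    return particles[idx], np.full(N_PARTICLES, 1.0 / N_PARTICLES)


def rul_estimate(particles):
    """Crude RUL: tests remaining until each particle crosses the threshold,
    assuming the mean growth rate of the propagation model."""
    mean_growth = PROCESS_STD * np.sqrt(2.0 / np.pi)   # mean of |N(0, sigma)|
    return np.maximum(FAIL_THRESHOLD - particles, 0.0) / mean_growth


# Synthetic sequence of pre-flight tests with a slowly growing winding fault.
true_damage = np.cumsum(np.full(60, 0.003))
observations = measurement_model(true_damage) + rng.normal(0.0, MEAS_STD, 60)

particles = np.abs(rng.normal(0.0, 0.01, N_PARTICLES))
weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)
for k, z in enumerate(observations):
    particles, weights = pf_update(particles, weights, z)
    if (k + 1) % 20 == 0:
        print(f"test {k + 1:2d}: damage ~ {particles.mean():.3f}, "
              f"median RUL ~ {np.median(rul_estimate(particles)):.0f} tests")

Each filter update in this sketch stands in for one of the short pre-flight/post-flight tests mentioned above; the self-tuning of the degradation model described in the paper is not attempted here.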

Actuators, 2021, Vol 10 (9), pp. 215
Author(s): Antonio Carlo Bertolino, Andrea De Martin, Giovanni Jacazio, Massimo Sorli

Electro-hydraulic servo-actuators (EHSAs) are currently considered the state-of-the-art solution for the control of the primary flight control systems of civil and military aircraft. Given the expected service life of a commercial aircraft and the fact that electro-hydraulic technology is employed in the vast majority of currently in-service aircraft and is planned to be used on future platforms as well, the development of an effective Prognostic and Health Management (PHM) system could provide significant advantages to fleet operators and aircraft maintenance, such as a reduction of unplanned flight disruptions and increased aircraft availability. The occurrence of excessive internal leakage within the EHSA is one of the most common causes of return from the field of flight control actuators, making this failure mode a priority in the definition of any dedicated PHM routine. This paper presents a case study on the design of a prognostic system for this degradation mode, in the context of a wider effort toward the definition of a prognostic framework suitable for working on in-flight data. The study is performed by means of a high-fidelity simulation model supported by experimental activities. The results of both the simulation and the experimental work are used to select a suitable feature, which is then implemented within a prognostic framework based on particle filtering. The algorithm is first discussed theoretically and then tested against several degradation patterns. Performance is evaluated through state-of-the-art metrics, showing promising results and providing the basis for future applications on real in-flight data.
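The paper's framework is built on particle filtering; as a lighter illustration of the underlying idea of extrapolating a leakage-related health feature to a failure threshold, the hypothetical sketch below fits an exponential growth model to a synthetic per-flight feature history and reports a Remaining Useful Life (RUL) estimate with bootstrap uncertainty. The growth model, threshold and data are assumptions, not the paper's model or measurements.

# Hypothetical sketch: extrapolating a leakage-related health feature to a
# failure threshold to obtain a rough RUL estimate with bootstrap uncertainty.
import numpy as np

rng = np.random.default_rng(1)

FAIL_THRESHOLD = 8.0   # assumed maximum tolerable value of the leakage feature

# Synthetic feature history: one value per flight, exponential-like growth plus noise.
flights = np.arange(120)
feature = 1.5 * np.exp(0.012 * flights) + rng.normal(0.0, 0.15, flights.size)


def predict_rul(t, y, threshold, n_boot=300):
    """Fit log(y) = a + b*t and extrapolate to the threshold crossing."""
    ruls = []
    for _ in range(n_boot):
        idx = rng.integers(0, t.size, t.size)        # bootstrap resample
        b, a = np.polyfit(t[idx], np.log(np.maximum(y[idx], 1e-6)), 1)
        if b <= 0:                                   # non-growing fit: skip it
            continue
        t_fail = (np.log(threshold) - a) / b
        ruls.append(max(t_fail - t[-1], 0.0))
    return np.percentile(ruls, [5, 50, 95])


lo, med, hi = predict_rul(flights, feature, FAIL_THRESHOLD)
print(f"RUL estimate: {med:.0f} flights (90% interval: {lo:.0f} to {hi:.0f})")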


2020
Author(s): Bo Zhang, Hongyu Zhang, Pablo Moscato

Complex software-intensive systems, especially distributed systems, generate logs for troubleshooting. The logs are text messages recording system events, which can help engineers determine the system's runtime status. This paper proposes a novel approach named ADR (Anomaly Detection by workflow Relations) that employs the matrix nullspace to mine numerical relations from log data. The mined relations can be used for both offline and online anomaly detection and facilitate fault diagnosis. We have evaluated ADR on log data collected from two distributed systems, HDFS (Hadoop Distributed File System) and BGL (the IBM Blue Gene/L supercomputer). ADR successfully mined 87 and 669 numerical relations from the respective logs and used them to detect anomalies with high precision and recall. For online anomaly detection, ADR employs PSO (Particle Swarm Optimization) to find the optimal sliding-window size and achieves fast anomaly detection. The experimental results confirm that ADR is effective for both offline and online anomaly detection.
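The sketch below illustrates the general "invariant relations from the nullspace" idea that ADR builds on, using invented event names and counts: rows of the matrix are per-session event-count vectors, any vector in the nullspace of the normal-data matrix encodes a linear relation that normal workflows satisfy, and a session violating such a relation is flagged as anomalous. It is not ADR's actual mining procedure or its PSO-based windowing.

# Sketch: mine linear relations satisfied by normal event-count vectors via the
# matrix nullspace, then flag sessions that violate them. Data are invented.
import numpy as np

# Event-count matrix for normal sessions; columns = [open, write, close, error].
# Every normal session satisfies open - close = 0 and error = 0.
normal = np.array([
    [3, 7, 3, 0],
    [1, 2, 1, 0],
    [5, 9, 5, 0],
    [2, 4, 2, 0],
], dtype=float)

# Nullspace via SVD: right singular vectors whose singular values are (near) zero.
_, s, vt = np.linalg.svd(normal)
rank = int(np.sum(s > 1e-8))
relations = vt[rank:]                      # each row is a mined linear relation
print("mined relations (one per row):\n", np.round(relations, 3))

# A new session violating the relation (an 'open' without a matching 'close').
suspect = np.array([4, 8, 3, 0], dtype=float)
violation = np.abs(relations @ suspect)
print("anomalous:", bool(np.any(violation > 1e-6)))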


Author(s): Rakesh Kumar, Gaurav Dhiman, Neeraj Kumar, Rajesh Kumar Chandrawat, Varun Joshi, ...

Abstract: This article offers a comparative study of optimizing and modelling production costs by means of composite triangular and trapezoidal fuzzy linear programming problems (FLPP). It also outlines five different scenarios of instability and develops realistic models to minimize production costs. Herein, a first attempt is made to examine the credibility of the optimized cost via two different composite FLP models, and the results are compared with their extension, i.e., the trapezoidal FLP model. To validate the models against real data, production cost data from the Rail Coach Factory (RCF), Kapurthala, have been used. The lower, static, and upper bounds have been computed for each situation, and systems of optimized FLP are then constructed. The credibility of each composite-triangular and trapezoidal FLP model has been obtained for all situations, and, using this membership grade, the minimum and the greatest of the minimum costs have been illustrated. The performance of each composite-triangular FLP model was compared to the trapezoidal FLP models, and the effects of the trapezoidal formulation on the composite fuzzy LP models are investigated.
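As a toy illustration of the general idea of bounding an optimized cost with fuzzy coefficients (not the paper's composite formulation, nor the RCF data), the sketch below solves a small cost-minimization LP at the lower, modal and upper levels of triangular fuzzy unit costs and then builds a triangular membership function for the optimized cost. All numbers are invented.

# Toy fuzzy-LP illustration: crisp LPs at the three levels of triangular fuzzy
# unit costs give bounds for a triangular membership function of the optimum.
import numpy as np
from scipy.optimize import linprog

# Two products; minimize production cost subject to demand and capacity limits.
# Triangular fuzzy unit costs per product: (lower, modal, upper).
fuzzy_costs = np.array([[4.0, 5.0, 6.5],
                        [7.0, 8.0, 9.0]])

A_ub = [[-1.0, 0.0],   # x1 >= 30  (demand for product 1)
        [0.0, -1.0],   # x2 >= 20  (demand for product 2)
        [1.0, 1.0]]    # x1 + x2 <= 80  (plant capacity)
b_ub = [-30.0, -20.0, 80.0]

optimal = []
for level in range(3):                     # lower, modal and upper cost levels
    res = linprog(fuzzy_costs[:, level], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * 2)
    optimal.append(res.fun)
z_low, z_mid, z_up = optimal
print(f"optimized cost: lower={z_low:.0f}, modal={z_mid:.0f}, upper={z_up:.0f}")


def membership(z):
    """Triangular membership grade of a candidate optimized cost z."""
    if z <= z_low or z >= z_up:
        return 0.0
    return (z - z_low) / (z_mid - z_low) if z < z_mid else (z_up - z) / (z_up - z_mid)


print("membership grade of a cost of 290:", round(membership(290.0), 2))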


2010, Vol 6 (4), pp. 341-354
Author(s): Hui-Huang Hsu, Chien-Chen Chen

This research aimed at building an intelligent system that can detect abnormal behavior of the elderly at home. Active RFID tags can be deployed at home to help collect daily movement data of an elderly person who carries an RFID reader. When the reader detects the signals from the tags, RSSI values that represent signal strength are obtained. The RSSI values are inversely related to the distance between the tags and the reader, and they are recorded following the movement of the user. The movement patterns, not the exact locations, of the user are the major concern. With the movement data (RSSI values), a clustering technique is then used to build a personalized model of normal behavior. After the model is built, any incoming datum falling outside the model can be viewed as abnormal and an alarm can be raised by the system. In this paper, we present the system architecture for RFID data collection and preprocessing, clustering for anomaly detection, and experimental results. The results show that this novel approach is promising.
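A minimal sketch of the clustering idea described above, under invented tag counts, thresholds and data: RSSI vectors recorded during normal movement are clustered, and a new reading is flagged when it lies far from every cluster centre. The original system's feature windows and clustering details may differ.

# Sketch: cluster normal RSSI vectors, flag readings far from every centroid.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Training data: RSSI vectors from 4 tags, gathered around three habitual spots.
habitual_spots = np.array([[-40, -70, -80, -75],
                           [-75, -45, -72, -80],
                           [-80, -78, -42, -70]], dtype=float)
normal = np.vstack([spot + rng.normal(0, 3, (200, 4)) for spot in habitual_spots])

model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(normal)

# Threshold: 99th percentile of distance-to-nearest-centroid on the normal data.
train_dist = np.min(model.transform(normal), axis=1)
threshold = np.percentile(train_dist, 99)


def is_abnormal(rssi_vector):
    """Flag a reading whose nearest-centroid distance exceeds the threshold."""
    dist = np.min(model.transform(np.atleast_2d(rssi_vector)), axis=1)[0]
    return dist > threshold


print(is_abnormal([-42, -69, -79, -76]))   # close to a habitual spot -> False
print(is_abnormal([-90, -90, -30, -30]))   # unseen pattern           -> True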


2016, Vol 145 (5), pp. 925-941
Author(s): G. Murphy, C. D. Pilcher, S. M. Keating, R. Kassanjee, S. N. Facente, ...

Summary: In 2011 the Incidence Assay Critical Path Working Group reviewed the current state of HIV incidence assays and helped to determine a critical path to the introduction of an HIV incidence assay. At that time the Consortium for the Evaluation and Performance of HIV Incidence Assays (CEPHIA) was formed to spur progress and raise standards among assay developers, scientists and laboratories involved in HIV incidence measurement, and to structure and conduct a direct, independent, comparative evaluation of the performance of 10 existing HIV incidence assays, to be considered singly and in combinations as recent infection test algorithms. In this paper we report on a new framework for HIV incidence assay evaluation that has emerged from this effort over the past 5 years, which includes a preliminary target product profile for an incidence assay, a consensus around key performance metrics along with analytical tools, and the deployment of a standardized approach for incidence assay evaluation. The specimen panels for this evaluation have been collected in large volumes, characterized using a novel approach for infection dating rules, and assembled into panels designed to assess the impact of important sources of measurement error with incidence assays, such as viral subtype, elite host control of viraemia and antiretroviral treatment. We present the specific rationale for several of these innovations, and discuss important resources for assay developers and researchers that have recently become available. Finally, we summarize the key remaining steps on the path to the development and implementation of reliable assays for monitoring HIV incidence at a population level.


2021, Vol 71 (2), pp. 111-123
Author(s): Sveinung Nesheim, Kjell Arne Malo, Nathalie Labonnote

Abstract: As long-spanning timber floor elements attempt to achieve a meaningful market share, proving serviceability continues to be a demanding task, as international consensus remains unsettled. Initiatives to improve vibration performance are achievable, but a lack of confidence in the market is resulting in increased margins for both manufacturers and contractors. State-of-the-art concrete alternatives are offered at less than half the price, and even though timber floors offer reduced completion costs and low carbon emissions, the market remains reserved. Cost reductions for timber floor elements to competitive levels must therefore be pursued through the product details and the stages of manufacturing. As new wood products are introduced to the market, the solution space grows to levels that demand computerized optimization models, which in turn require accurate expenditure predictions. To meet this challenge, a method called item-driven activity-based consumption (IDABC) has been developed and is presented in this study. The method establishes an accurate relationship between product specifications and the overall resource consumption linked to finished manufactured products. In addition to production time, the method's outcomes include cost distributions, including labor costs, and carbon emissions for both accrued materials and production-line activities. A novel approach to resource estimation linked to assembly friendliness is also presented. IDABC has been applied to a timber component and assembly line operated by a major manufacturer in Norway and demonstrates good agreement with empirical data.
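The hypothetical sketch below illustrates the kind of item-driven, activity-based roll-up that a method like IDABC performs: each item in a floor element triggers activities, and activity rates convert quantities into production time, labour cost and embodied carbon. All activity names, rates and quantities are invented; the actual IDABC model is considerably richer.

# Hypothetical activity-based roll-up: items trigger activities, activity rates
# convert quantities into time, labour cost and CO2. Numbers are invented.
from dataclasses import dataclass


@dataclass
class Activity:
    minutes_per_unit: float
    labour_rate_per_min: float   # labour cost per minute of the activity
    kg_co2_per_unit: float       # emissions per processed unit


ACTIVITIES = {
    "cut":      Activity(0.8, 1.2, 0.05),
    "glue":     Activity(1.5, 1.2, 0.10),
    "screw":    Activity(0.3, 1.2, 0.01),
    "handling": Activity(0.5, 0.9, 0.02),
}

# Bill of items for one floor element: (activity, number of units it triggers).
floor_element = [("cut", 24), ("glue", 12), ("screw", 96), ("handling", 18)]


def roll_up(items):
    """Aggregate production time, labour cost and CO2 for one product."""
    minutes = cost = co2 = 0.0
    for name, qty in items:
        act = ACTIVITIES[name]
        minutes += act.minutes_per_unit * qty
        cost += act.minutes_per_unit * qty * act.labour_rate_per_min
        co2 += act.kg_co2_per_unit * qty
    return minutes, cost, co2


t, c, e = roll_up(floor_element)
print(f"time {t:.0f} min, labour cost {c:.0f}, emissions {e:.1f} kg CO2e")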


2021
Author(s): Andrea De Martin, Giovanni Jacazio, Massimo Sorli, Giuseppe Vitrani

Abstract: Stability Control Augmentation Systems (SCAS) are widely adopted to enhance the flight stability of rotary-wing aircraft operating in difficult aerodynamic conditions, such as low-altitude missions, stationary flight near vertical walls, or in the presence of heavy gusts. Such systems are based upon small electro-hydraulic servosystems controlled in position through a dedicated servovalve. The SCAS operates with limited authority over the main control linkage that translates the pilot input into the movement of the main flight control actuator. Since these systems are critical for the operability of the helicopter, the definition of a Prognostics and Health Management (PHM) framework for SCAS would provide significant advantages, such as better risk mitigation, improved availability, and a reduction in the occurrence of unpredicted failures, which still represent one of the best-known downsides of helicopters. This paper provides the results of a preliminary analysis of the effects of the inception and progression of several degradation types within a simulated SCAS. Signals usually available within such devices are combined with measurements provided by additional sensors to assess the feasibility of a PHM system with and without dedicated sensors. The resulting feature-selection process shows that, although the dedicated measurements are required to design a complete PHM system, it appears nonetheless possible to obtain valuable information on the health status of the SCAS without resorting to additional sensors.
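As a rough illustration of the kind of feature screening described above (not the paper's actual analysis), the sketch below ranks candidate health features by their monotonic correlation with an injected degradation severity, contrasting features computable from signals normally available in the control loop with one that would require a dedicated sensor. The feature names, signal models and data are assumptions.

# Sketch: rank candidate features by Spearman correlation with an injected
# degradation severity. Feature names and synthetic data are assumptions.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
severity = np.linspace(0, 1, 50)            # simulated degradation levels

features = {
    # computable from signals normally available in the loop
    "position_error_rms":    0.9 * severity + rng.normal(0, 0.15, 50),
    "servovalve_current":    0.4 * severity + rng.normal(0, 0.25, 50),
    # would require a dedicated sensor
    "internal_leakage_flow": 1.0 * severity + rng.normal(0, 0.05, 50),
}

for name, values in features.items():
    rho, _ = spearmanr(values, severity)
    print(f"{name:22s} Spearman rho vs. severity = {rho:.2f}")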


2021, pp. 1-15
Author(s): Savaridassan Pankajashan, G. Maragatham, T. Kirthiga Devi

Anomaly-based detection is about recognizing the uncommon: catching unusual activity and identifying the behavior behind it. It has a wide scope of critical applications, from banking security to the natural sciences, medical systems and marketing. Anomaly detection performed with Machine Learning (ML) techniques is, in essence, a form of artificial intelligence. With the ever-expanding volume and new sorts of data, for example sensor data from an enormous number of IoT devices and network flow data from cloud computing, there is a growing interest in handling more of these decisions automatically by means of AI and ML applications; in anomaly detection, the detection itself is often the primary objective of the application. In this paper, ML techniques, namely SVM and Isolation Forest classifiers, are compared against a Deep Learning (DL) technique, the proposed DA-LSTM (Deep Auto-Encoder LSTM) model, for the preprocessing of log data and anomaly-based detection, with the aim of improving detection performance. An enhanced LSTM (long short-term memory) model, whose parameters are optimized using a genetic algorithm (GA), is used to better recognize anomalies in log data filtered by a Deep Auto-Encoder (DA). Deep neural network models are used to convert unstructured log information into training-ready features suitable for log classification and anomaly detection. The models are assessed using two benchmark datasets: the OpenStack logs and the CIDDS-001 intrusion detection OpenStack server dataset. The results show that the DA-LSTM model performs better than other well-known ML techniques. We further investigate the performance of the ML and DL models through well-known indicators, specifically F-measure, Accuracy, Recall, and Precision. The experiments show that the Isolation Forest and Support Vector Machine classifiers reach roughly 81% and 79% accuracy on the CIDDS-001 OpenStack server dataset, while the proposed DA-LSTM classifier achieves around 99.1% accuracy, a marked improvement over the familiar ML algorithms. Furthermore, the DA-LSTM results on the OpenStack log datasets show better anomaly detection than other well-known machine learning models.
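The sketch below shows only the classical ML baselines mentioned above (Isolation Forest and one-class SVM) applied to feature vectors of the kind extracted from log data; the DA-LSTM model itself is not reproduced. The data and parameters are synthetic placeholders, not the CIDDS-001 or OpenStack features.

# Sketch: train one-class anomaly detectors on normal feature vectors, then
# score a mixed test set. Data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(4)

# Placeholder "training-ready" features: rows = log windows, cols = features.
normal = rng.normal(0.0, 1.0, (500, 8))
attack = rng.normal(4.0, 1.0, (25, 8))           # shifted distribution
test = np.vstack([rng.normal(0.0, 1.0, (100, 8)), attack])
labels = np.r_[np.ones(100), -np.ones(25)]       # +1 normal, -1 anomaly

for name, model in [("IsolationForest", IsolationForest(random_state=0)),
                    ("OneClassSVM", OneClassSVM(nu=0.05, gamma="scale"))]:
    model.fit(normal)                            # train on normal data only
    pred = model.predict(test)                   # +1 = inlier, -1 = outlier
    acc = np.mean(pred == labels)
    print(f"{name:15s} accuracy on the toy split: {acc:.2f}")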

