Real-Time Trajectory Generation and Reachability Determination in Autorotative Flare

2020 ◽  
Vol 65 (3) ◽  
pp. 1-17
Author(s):  
Brian F. Eberle ◽  
Jonathan D. Rogers

Autorotation maneuvers inherently offer little margin for error in execution and induce high pilot workload, particularly as the aircraft nears the ground in an autorotative flare. Control augmentation systems can reduce pilot workload while simultaneously improving the likelihood of a successful landing by offering the pilot appropriate cues. This paper presents an initial investigation of a real-time trajectory generation scheme for autorotative flare based on time-to-contact theory. The algorithm exhibits deterministic runtime performance and provides a speed trajectory that can be tracked by a pilot or inner-loop controller to bring the vehicle to a desired landing point at the time of touchdown. A low-order model of the helicopter dynamics in autorotation is used to evaluate the dynamic feasibility of the generated trajectories. By generating and evaluating trajectories to an array of candidate landing points, the set of reachable landing points in front of the aircraft is determined. Simulation results are presented in which the trajectory generator is coupled with a previously derived autorotation controller. Example cases and trade studies are conducted in a six degree-of-freedom simulation environment to demonstrate overall performance as well as robustness of the algorithm to variations in target landing point, helicopter gross weight, and winds. The robustness of the reachability determination portion of the algorithm is likewise evaluated through trade studies examining off-nominal flare entry conditions and the effects of winds.
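The time-to-contact idea behind the trajectory generator can be illustrated with a minimal sketch. The constant tau-dot strategy below is a common formulation from time-to-contact theory, not the paper's exact algorithm; the coupling constant `k`, touchdown time, and distances are illustrative assumptions.

```python
import numpy as np

def tau_speed_profile(d0, t_td, k=0.6, n=50):
    """Reference ground-speed profile from constant tau-dot guidance.

    With the time-to-contact held to tau(t) = k * (t_td - t), the
    distance-to-go closes as d(t) = d0 * (1 - t/t_td)**(1/k), reaching
    zero exactly at the touchdown time t_td.
    """
    t = np.linspace(0.0, t_td, n, endpoint=False)
    d = d0 * (1.0 - t / t_td) ** (1.0 / k)   # distance-to-go
    tau = k * (t_td - t)                     # prescribed time-to-contact
    v = d / tau                              # reference speed to track
    return t, d, v
```

Because the speed reference is a closed-form function of time-to-go, the generator's runtime is deterministic, which matches the abstract's emphasis on real-time use.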

1994 ◽  
Vol 116 (4) ◽  
pp. 687-701 ◽  
Author(s):  
H. H. Cheng

The real-time implementation of path planning, trajectory generation, and servo control for manipulation of the prototype UPSarm is presented in this paper. The prototype UPSarm, primarily designed to study the feasibility of loading packages inside a trailer, is a ten degree-of-freedom hybrid serial-and-parallel-driven redundant robot manipulator. The direct, forward, inverse, and indirect kinematic solutions of the UPSarm are derived in three coordinate spaces (actuator space, effective joint space, and world Cartesian coordinate space) for real-time path planning, trajectory generation, and control. Manipulation of the UPSarm is based upon a general-purpose path planner and trajectory generator. Provided with appropriate kinematics modules and sufficient computational power, this path planner and trajectory generator can be used for real-time motion control of hybrid serial-and-parallel-driven electromechanical devices with any number of degrees of freedom. A VMEbus-based distributed computing system has been implemented for real-time motion control of the UPSarm. A PID-based feedforward servo control scheme is used in the servo controller. Motion examples of the UPSarm, programmed in our robot language, demonstrate the practical manipulation of hybrid serial-and-parallel-driven redundant kinematic chains.
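The PID-based feedforward servo structure mentioned above can be sketched as a per-joint loop that adds a velocity feedforward term to the usual PID feedback. The gains, the integrator-plant test, and the class name are illustrative assumptions, not the UPSarm's actual servo parameters.

```python
class PIDFeedforward:
    """Minimal PID + velocity-feedforward joint servo loop (a sketch of
    the control structure, with placeholder gains)."""

    def __init__(self, kp, ki, kd, kff, dt):
        self.kp, self.ki, self.kd, self.kff, self.dt = kp, ki, kd, kff, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, q_ref, qd_ref, q_meas):
        """Return the control effort for one sample period."""
        err = q_ref - q_meas
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        # PID feedback plus feedforward on the planned joint velocity,
        # so tracking does not rely on feedback error alone.
        return (self.kp * err + self.ki * self.integral
                + self.kd * deriv + self.kff * qd_ref)
```

The feedforward term lets the trajectory generator's planned joint velocities drive the actuators directly, with the PID terms correcting residual error.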


Author(s):  
Supriya Raheja

Background: Extending CPU schedulers with fuzzy logic has proven beneficial because of its unique capability of handling imprecise information. However, other generalized forms of fuzzy sets can be used to further improve the scheduler's performance. Objectives: This paper introduces a novel approach to designing an intuitionistic fuzzy inference system for a CPU scheduler. Methods: The proposed inference system is implemented with a priority scheduler. The proposed scheduler can dynamically handle the impreciseness of both priority and estimated execution time, and it makes the system adaptive through continuous feedback. It is also capable of scheduling tasks according to dynamically generated priorities. To demonstrate the performance of the proposed scheduler, a simulation environment was implemented and its performance was compared with three baseline schedulers (a conventional priority scheduler, a fuzzy-based priority scheduler, and a vague-set-based priority scheduler). Results: The proposed scheduler was also compared with the shortest-job-first CPU scheduler, which is known to be an optimal solution among schedulers. Conclusion: Simulation results demonstrate the effectiveness and efficiency of the intuitionistic fuzzy based priority scheduler. Moreover, it provides near-optimal results, comparable to those of shortest job first.
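The intuitionistic fuzzy idea can be made concrete with a toy scoring function: each task attribute gets a membership degree mu, a non-membership degree nu, and a hesitation margin pi = 1 - mu - nu, and the defuzzified score averages (mu - nu) over the attributes. The membership functions and the fixed hesitation below are illustrative placeholders, not the paper's actual inference system.

```python
def ifs_priority(priority, burst, p_max, b_max):
    """Toy intuitionistic-fuzzy score for a task: higher is scheduled first.

    `priority` is a numeric priority (lower = more urgent) and `burst` an
    estimated execution time; both membership functions are linear
    placeholders with a fixed hesitation margin.
    """
    def ifs(x, x_max, hesitation=0.1):
        # Membership shrinks linearly with x; nu takes up the remainder
        # after reserving the hesitation margin pi = 0.1.
        mu = max(0.0, min(1.0, 1.0 - x / x_max)) * (1.0 - hesitation)
        nu = 1.0 - hesitation - mu
        return mu, nu

    mu_p, nu_p = ifs(priority, p_max)  # lower numeric priority = more urgent
    mu_b, nu_b = ifs(burst, b_max)     # shorter estimated burst = preferred
    return 0.5 * ((mu_p - nu_p) + (mu_b - nu_b))
```

The hesitation margin is what distinguishes intuitionistic fuzzy sets from ordinary fuzzy sets, where nu is forced to equal 1 - mu.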


2021 ◽  
Vol 11 (1) ◽  
pp. 410
Author(s):  
Yu-Hsien Lin ◽  
Yu-Ting Lin ◽  
Yen-Jun Chiu

On the basis of a full-appendage DARPA SUBOFF model (DTRC model 5470), a scaled (λ = 0.535) semi-autonomous submarine free-running model (SFRM) was designed for testing its manoeuvrability and stability in confined waters. Prior to the experimental tests of the SFRM, a six-degree-of-freedom (6-DOF) manoeuvre model with an autopilot system was developed using logic operations in MATLAB. The SFRM's attitude and its trim polygon were presented by accounting for changes in mass and trimming moment. Through a series of manoeuvring tests in empty tanks, the performance of the SFRM was characterized at three sailing speeds. In addition, a PD controller was established based on the simulation results of these manoeuvring tests. The optimal control gains for each manoeuvring test were calculated using the PID tuner in MATLAB. Two sets of control gains derived from the optimal characteristic parameters were compared in order to select the most appropriate PD controller, combined with the line-of-sight (LOS) guidance algorithm, for the SFRM in the autopilot simulation. Eventually, the simulated trajectories and course angles of the SFRM were illustrated in the post-processor based on Cinema 4D modelling.
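Line-of-sight guidance, as used in the autopilot above, is commonly formulated as steering toward a point on the current path segment a fixed lookahead distance beyond the vehicle's cross-track projection. The sketch below is that standard lookahead-based LOS law; the lookahead distance and coordinate conventions are assumptions, not the SFRM's tuned values.

```python
import math

def los_heading(x, y, wp_prev, wp_next, lookahead=5.0):
    """Desired course angle (radians) from lookahead-based LOS guidance.

    The vehicle at (x, y) tracks the segment from wp_prev to wp_next:
    compute the cross-track error e and steer by atan2(-e, lookahead)
    relative to the path direction.
    """
    dx, dy = wp_next[0] - wp_prev[0], wp_next[1] - wp_prev[1]
    path_angle = math.atan2(dy, dx)
    # Cross-track error: positive when offset to the left of the path.
    e = (-(x - wp_prev[0]) * math.sin(path_angle)
         + (y - wp_prev[1]) * math.cos(path_angle))
    return path_angle + math.atan2(-e, lookahead)
```

A smaller lookahead gives more aggressive convergence to the path; tuning it against the PD heading loop is the kind of trade the autopilot simulation explores.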


2020 ◽  
Vol 53 (2) ◽  
pp. 9276-9281
Author(s):  
Bahareh Sabetghadam ◽  
Rita Cunha ◽  
António Pascoal

Robotics ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 12
Author(s):  
Yixiang Lim ◽  
Nichakorn Pongsarkornsathien ◽  
Alessandro Gardi ◽  
Roberto Sabatini ◽  
Trevor Kistan ◽  
...  

Advances in unmanned aircraft systems (UAS) have paved the way for progressively higher levels of intelligence and autonomy, supporting new modes of operation, such as the one-to-many (OTM) concept, where a single human operator is responsible for monitoring and coordinating the tasks of multiple unmanned aerial vehicles (UAVs). This paper presents the development and evaluation of cognitive human-machine interfaces and interactions (CHMI2) supporting adaptive automation in OTM applications. A CHMI2 system comprises a network of neurophysiological sensors and machine-learning based models for inferring user cognitive states, as well as the adaptation engine containing a set of transition logics for control/display functions and discrete autonomy levels. Models of the user's cognitive states are trained on past performance and neurophysiological data during an offline calibration phase, and subsequently used in the online adaptation phase for real-time inference of these cognitive states. To investigate adaptive automation in OTM applications, a scenario involving bushfire detection was developed where a single human operator is responsible for tasking multiple UAV platforms to search for and localize bushfires over a wide area. We present the architecture and design of the UAS simulation environment that was developed, together with various human-machine interface (HMI) formats and functions, to evaluate the CHMI2 system's feasibility through human-in-the-loop (HITL) experiments. The CHMI2 module was subsequently integrated into the simulation environment, providing the sensing, inference, and adaptation capabilities needed to realise adaptive automation. HITL experiments were performed to verify the CHMI2 module's functionalities in the offline calibration and online adaptation phases. In particular, results from the online adaptation phase showed that the system was able to support real-time inference and human-machine interface and interaction (HMI2) adaptation. However, the accuracy of the inferred workload was variable across the different participants (with a root mean squared error (RMSE) ranging from 0.2 to 0.6), partly due to the reduced number of neurophysiological features available as real-time inputs and also due to the limited training stages in the offline calibration phase. To improve the performance of the system, future work will investigate the use of alternative machine learning techniques, additional neurophysiological input features, and a more extensive training stage.
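The adaptation engine's "transition logics for discrete autonomy levels" can be sketched as a simple hysteresis rule on inferred workload: raise the automation level when workload is high, lower it when workload is low. The thresholds and the 0-3 level range are illustrative assumptions, not the CHMI2 system's actual logic.

```python
def adapt_autonomy(level, workload, lo=0.3, hi=0.7):
    """Toy transition logic for discrete autonomy levels 0-3.

    `workload` is a normalized inferred-workload estimate in [0, 1];
    levels step up under high workload and down under low workload,
    clamped to the valid range. The dead band [lo, hi] prevents
    oscillating between levels on small workload fluctuations.
    """
    if workload > hi:
        return min(level + 1, 3)
    if workload < lo:
        return max(level - 1, 0)
    return level
```

In an online loop, the workload estimate would come from the neurophysiological inference models, and the returned level would select the active control/display configuration.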


Author(s):  
Chun-ying Huang ◽  
Yun-chen Cheng ◽  
Guan-zhang Huang ◽  
Ching-ling Fan ◽  
Cheng-hsin Hsu

Real-time screen-sharing provides users with ubiquitous access to remote applications, such as computer games, movie players, and desktop applications (apps), anywhere and anytime. In this article, we study the performance of different screen-sharing technologies, which can be classified into native and clientless ones. The native ones dictate that users install special-purpose software, while the clientless ones directly run in web browsers. In particular, we conduct extensive experiments in three steps. First, we identify a suite of the most representative native and clientless screen-sharing technologies. Second, we propose a systematic measurement methodology for comparing screen-sharing technologies under diverse and dynamic network conditions using different performance metrics. Last, we conduct extensive experiments and perform in-depth analysis to quantify the performance gap between clientless and native screen-sharing technologies. We found that our WebRTC-based implementation achieves the best overall performance. More precisely, it consumes a maximum of 3 Mbps bandwidth while reaching a high decoding ratio and delivering good video quality. Moreover, it leads to a steadily high decoding ratio and video quality under dynamic network conditions. By presenting the very first rigorous comparisons of the native and clientless screen-sharing technologies, this article will stimulate more exciting studies on the emerging clientless screen-sharing technologies.
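The two headline metrics above, bandwidth consumption and decoding ratio, can be computed from a per-frame session trace. The trace format below (one `(bytes_received, decoded_ok)` tuple per frame) is an assumed representation for illustration, not a real WebRTC API or the paper's measurement harness.

```python
def summarize_session(events, duration_s):
    """Aggregate a screen-sharing trace into mean bitrate and decoding ratio.

    `events` is a list of (bytes_received, decoded_ok) tuples, one per
    transmitted frame; `duration_s` is the session length in seconds.
    Returns (bitrate_mbps, decoding_ratio).
    """
    total_bytes = sum(b for b, _ in events)
    decoded = sum(1 for _, ok in events if ok)
    bitrate_mbps = total_bytes * 8 / duration_s / 1e6  # bits per second -> Mbps
    ratio = decoded / len(events) if events else 0.0
    return bitrate_mbps, ratio
```

Computing these per network condition (bandwidth cap, loss, latency) is what allows the kind of native-versus-clientless comparison the article describes.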


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Saaveethya Sivakumar ◽  
Alpha Agape Gopalai ◽  
King Hann Lim ◽  
Darwin Gouwanda ◽  
Sunita Chauhan

This paper presents a wavelet neural network (WNN) based method to reduce reliance on wearable kinematic sensors in gait analysis. Wearable kinematic sensors hinder real-time outdoor gait monitoring applications due to drawbacks caused by multiple sensor placements and sensor offset errors. The proposed WNN method uses vertical Ground Reaction Forces (vGRFs) measured from foot kinetic sensors as inputs to estimate ankle, knee, and hip joint angles. Salient vGRF inputs are extracted from primary gait event intervals. These selected gait inputs facilitate future integration with smart insoles for real-time outdoor gait studies. The proposed concept potentially reduces the number of body-mounted kinematic sensors used in gait analysis applications, leading to simplified sensor placement and control circuitry without deteriorating the overall performance.
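A wavelet neural network differs from a plain multilayer perceptron in that its hidden units apply a translated and dilated mother wavelet rather than a sigmoid. The sketch below uses the Mexican-hat wavelet with random placeholder weights; the paper's trained mapping from vGRF features to joint angles, its layer sizes, and its training procedure are not reproduced here.

```python
import numpy as np

class WaveletNet:
    """Minimal wavelet neural network: one hidden layer of Mexican-hat
    wavelet units psi((w.x - t) / s) followed by a linear readout."""

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_hidden, n_in))   # input weights
        self.t = rng.normal(size=n_hidden)           # wavelet translations
        self.s = np.ones(n_hidden)                   # wavelet dilations
        self.V = rng.normal(size=(n_out, n_hidden))  # linear readout

    @staticmethod
    def _mexican_hat(u):
        # Second derivative of a Gaussian: localized in both time and
        # frequency, which is the motivation for wavelet activations.
        return (1.0 - u**2) * np.exp(-0.5 * u**2)

    def forward(self, x):
        """Map an input feature vector (e.g. salient vGRF values) to
        output estimates (e.g. ankle/knee/hip joint angles)."""
        u = (self.W @ x - self.t) / self.s
        return self.V @ self._mexican_hat(u)
```

In the paper's setting, the translations, dilations, and weights would all be fit to recorded gait data during training.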

