Intelligent Spacecraft Visual GNC Architecture With the State-Of-the-Art AI Components for On-Orbit Manipulation

2021 ◽  
Vol 8 ◽  
Author(s):  
Zhou Hao ◽  
R. B. Ashith Shyam ◽  
Arunkumar Rathinam ◽  
Yang Gao

Conventional spacecraft Guidance, Navigation, and Control (GNC) architectures have been designed to receive and execute commands from ground control, with minimal automation and autonomy onboard the spacecraft. In contrast, Artificial Intelligence (AI)-based systems can allow real-time decision-making by considering system information that is difficult to model and incorporate in the conventional decision-making process involving ground control or human operators. With growing interest in on-orbit services involving manipulation, conventional GNC faces numerous challenges in adapting to a wide range of possible scenarios, such as removing unknown debris, which could potentially be addressed using emerging AI-enabled robotic technologies. However, a complete paradigm shift may take years of effort. As an intermediate solution, we introduce a novel visual GNC system with two state-of-the-art AI modules that replace the corresponding functions in the conventional GNC system for on-orbit manipulation. The AI components are as follows: (i) a Deep Learning (DL)-based pose estimation algorithm that can estimate a target's pose from two-dimensional images using a pre-trained neural network, without requiring any prior information on the dynamics or state of the target; (ii) a technique that models space robot manipulator trajectories probabilistically and reproduces them in previously unseen situations, avoiding complex on-board trajectory optimization and minimizing the attitude disturbance induced on the spacecraft by the motion of the robot arm. The architecture uses a centralized camera network as the main sensor, and the trajectory learning module of the 7-degrees-of-freedom robotic arm is integrated into the GNC system. The intelligent visual GNC system is demonstrated by simulation of a conceptual mission, AISAT, in which a micro-satellite carries out on-orbit manipulation around a non-cooperative CubeSat. The discrete-time simulation shows how the GNC system works, with the control and trajectory planning generated in Matlab/Simulink. The rendering engine Eevee renders the whole simulation to provide graphical realism for the DL pose estimation. Finally, the testbeds developed to evaluate and demonstrate the GNC system are also introduced. The novel intelligent GNC system can be a stepping stone toward future fully autonomous orbital robot systems.
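As an illustration of the first AI component, the following is a minimal sketch (not the paper's actual network or training setup) of DL-based pose estimation from a 2D image: a generic pretrained CNN backbone with a small regression head that outputs a unit quaternion and a translation for the target. The backbone choice, dimensions, and head design are assumptions.

```python
# A minimal sketch (not the paper's network) of DL-based pose regression from a
# single 2D image: a CNN backbone plus a 7-dimensional head (quaternion + translation).
import torch
import torch.nn as nn
import torchvision.models as models

class PoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet18(weights=None)          # swap in pretrained weights as needed
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 7)  # 4 quaternion + 3 translation

    def forward(self, image):
        out = self.backbone(image)
        quat = nn.functional.normalize(out[:, :4], dim=1)      # enforce a unit quaternion
        trans = out[:, 4:]
        return quat, trans

# Inference on a rendered or camera image batch, e.g. 224x224 RGB.
net = PoseNet().eval()
with torch.no_grad():
    q, t = net(torch.zeros(1, 3, 224, 224))
print(q.shape, t.shape)
```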

2019 ◽  
Vol 2 (2) ◽  
pp. 22-34
Author(s):  
Tabassom Sedighi

The Bayesian network (BN) method is a data-driven method that has been used successfully to assist problem-solving in a wide range of disciplines, including policy making, information technology, engineering, medicine, and, more recently, biology and ecology. BNs are particularly useful for diverse problems of varying size and complexity where uncertainties are inherent in the system. BNs engage directly with subjective data in a transparent way and have become a state-of-the-art technology for supporting decision-making under uncertainty.
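As an illustration of how a BN supports decision-making under uncertainty, the following is a minimal sketch using the pgmpy library (an assumed choice, not mentioned in the abstract); the variables and probability tables are purely illustrative.

```python
# A minimal sketch of a discrete Bayesian network for decision support under
# uncertainty, using the (assumed) pgmpy library; all numbers are illustrative.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("Rainfall", "FloodRisk"), ("LandUse", "FloodRisk")])
cpd_rain = TabularCPD("Rainfall", 2, [[0.7], [0.3]])    # P(low)=0.7, P(high)=0.3
cpd_land = TabularCPD("LandUse", 2, [[0.6], [0.4]])     # P(rural)=0.6, P(urban)=0.4
cpd_risk = TabularCPD(
    "FloodRisk", 2,
    [[0.95, 0.7, 0.6, 0.2],     # P(risk=low  | Rainfall, LandUse)
     [0.05, 0.3, 0.4, 0.8]],    # P(risk=high | Rainfall, LandUse)
    evidence=["Rainfall", "LandUse"], evidence_card=[2, 2],
)
model.add_cpds(cpd_rain, cpd_land, cpd_risk)
assert model.check_model()

# Query the posterior flood risk given observed heavy rainfall.
posterior = VariableElimination(model).query(["FloodRisk"], evidence={"Rainfall": 1})
print(posterior)
```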


Author(s):  
Phani K. Nagarjuna ◽  
Athamaram H. Soni

Abstract The problem of inverse kinematics in robotics is a nonlinear mapping from given Cartesian coordinates to the desired joint coordinates of the robot arm. It is found that an appropriately designed neural network can be trained to learn the nonlinearity of the Inverse Kinematic Equation (IKE). We present an approach for solving the Forward Kinematic Equation (FKE) and the IKE by means of a Multi-Layer Back-Propagation Neural Network (Rumelhart et al., 1986). The neural network approach is applied to a Two-Degrees-of-Freedom (DOF) robot manipulator, and the results are compared with those obtained using the analytical solution. The results obtained from the simulation indicate fairly accurate learning of the FKE and IKE by the Multi-Layer Back-Propagation Neural Network.
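For context, a minimal sketch of the idea (not the authors' original network): train a multi-layer perceptron by back-propagation to learn the IKE of a planar 2-DOF arm, using the analytical FKE to generate training data. The link lengths, network size, and scikit-learn implementation are assumptions; the joint range is restricted to avoid ambiguity between elbow-up and elbow-down solutions.

```python
# A minimal sketch of learning 2-DOF inverse kinematics with a back-propagation MLP;
# link lengths and network sizes are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

L1, L2 = 1.0, 0.8                       # assumed link lengths

def forward_kinematics(theta):
    t1, t2 = theta[:, 0], theta[:, 1]
    x = L1 * np.cos(t1) + L2 * np.cos(t1 + t2)
    y = L1 * np.sin(t1) + L2 * np.sin(t1 + t2)
    return np.column_stack([x, y])

# Sample joint angles, compute end-effector positions with the FKE,
# then train the network to invert the mapping (position -> joint angles).
rng = np.random.default_rng(0)
theta = rng.uniform([0.0, 0.0], [np.pi / 2, np.pi / 2], size=(5000, 2))
xy = forward_kinematics(theta)

ik_net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
ik_net.fit(xy, theta)

# Check the learned IKE against the analytical forward model.
theta_pred = ik_net.predict(xy[:5])
print(np.abs(forward_kinematics(theta_pred) - xy[:5]).max())
```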


2018 ◽  
Author(s):  
Tanmay Nath ◽  
Alexander Mathis ◽  
An Chi Chen ◽  
Amir Patel ◽  
Matthias Bethge ◽  
...  

Noninvasive behavioral tracking of animals during experiments is crucial to many scientific pursuits. Extracting the poses of animals without using markers is often essential for measuring behavioral effects in biomechanics, genetics, ethology, and neuroscience. Yet extracting detailed poses without markers in dynamically changing backgrounds has been challenging. We recently introduced an open-source toolbox called DeepLabCut that builds on a state-of-the-art human pose estimation algorithm to allow a user to train a deep neural network with limited training data to precisely track user-defined features, with accuracy matching human labeling. Here we provide an updated toolbox, self-contained within a Python package, that includes new features such as graphical user interfaces and active-learning-based network refinement. Lastly, we provide a step-by-step guide for using DeepLabCut.
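For reference, a minimal sketch of the typical DeepLabCut workflow described in the guide; the project name, paths, and videos are placeholders, and exact function arguments may differ between package versions.

```python
# A minimal sketch of a typical DeepLabCut workflow; paths are placeholders
# and arguments may vary between versions of the package.
import deeplabcut

config_path = deeplabcut.create_new_project(
    "reaching-task", "researcher", ["/data/videos/mouse1.avi"], copy_videos=True
)

deeplabcut.extract_frames(config_path)            # pick frames to annotate
deeplabcut.label_frames(config_path)              # GUI for labeling body parts
deeplabcut.create_training_dataset(config_path)
deeplabcut.train_network(config_path)
deeplabcut.evaluate_network(config_path)

# Apply the trained network to new videos and get per-frame pose estimates.
deeplabcut.analyze_videos(config_path, ["/data/videos/mouse2.avi"])
```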


2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Kevin Yu ◽  
Thomas Wegele ◽  
Daniel Ostler ◽  
Dirk Wilhelm ◽  
Hubertus Feußner

Abstract Telemedicine has become a valuable asset in emergency response, assisting paramedics in decision-making and first-contact treatment. Paramedics in unfamiliar environments or time-critical situations often encounter complications for which they require external advice. Modern ambulance vehicles are equipped with microphones, cameras, and vital-sign sensors, which allow experts to remotely join the local team. However, the visual channels are rarely used, since the statically installed cameras only allow broad views of the patient; they allow neither a close-up view nor a dynamic viewpoint controlled by the remote expert. In this paper, we present EyeRobot, a concept that enables dynamic viewpoints for telepresence through intuitive control by the user's head motion. In particular, EyeRobot utilizes the 6-degrees-of-freedom pose estimation capabilities of modern head-mounted displays and applies the estimated pose in real time to a robot arm. A stereo camera installed on the end-effector of the robot arm serves as the eyes of the remote expert at the local site. We put forward an implementation of EyeRobot and present the results of our pilot study, which indicate that its control is intuitive.
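A minimal sketch of the EyeRobot control loop as described: stream the HMD's 6-DOF pose and command the robot arm's end-effector to follow it, so the mounted stereo camera mimics the expert's head motion. The HMD and robot interfaces below are hypothetical placeholders, not the authors' implementation.

```python
# A minimal sketch of head-motion-driven viewpoint control; `read_hmd_pose` and
# `RobotArm` are hypothetical stand-ins for the actual HMD SDK and robot driver.
import time
import numpy as np

def read_hmd_pose():
    """Hypothetical HMD query: returns (position[3], quaternion[4])."""
    return np.zeros(3), np.array([0.0, 0.0, 0.0, 1.0])

class RobotArm:
    """Hypothetical robot driver exposing a Cartesian pose command."""
    def move_to(self, position, orientation):
        pass

arm = RobotArm()
offset = np.array([0.5, 0.0, 0.3])        # assumed fixed offset between HMD and robot base frames

for _ in range(600):                      # ~10 s of streaming at an assumed 60 Hz
    head_pos, head_quat = read_hmd_pose()
    # Re-express the head pose in the robot base frame and command the arm, so the
    # stereo camera on the end-effector mimics the expert's head motion.
    arm.move_to(head_pos + offset, head_quat)
    time.sleep(1 / 60)
```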


Author(s):  
Thu Zar ◽  
Theingi A ◽  
Su Yin Win

Robots and manipulators are used to serve machine tools in automated production systems. The robot arm was designed with two degrees of freedom and accurately accomplishes simple tasks. This paper presents experimental results on circular and straight-line movements of a two-link manipulator in a vertical plane. The lengths of the robot arm links were designed from the desired workspace boundary conditions. Pololu 70:1 gear motors were selected for the two revolute joints of the manipulator after analysing the dynamics of the two links. To control the robot, the system performs inverse kinematic calculations and communicates the proper angles serially to a microcontroller that drives the motors, with the capability of modifying position, speed, and acceleration. Testing and validation of the robot arm were carried out, and the results show that it works properly.
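For illustration, a minimal sketch of the closed-form inverse kinematics for a planar two-link arm of the kind described, which converts a desired end-effector position into the two joint angles sent to the microcontroller; the link lengths are assumptions, not the paper's values.

```python
# A minimal sketch of closed-form inverse kinematics for a planar two-link arm;
# link lengths are illustrative.
import math

def two_link_ik(x, y, l1=0.20, l2=0.15, elbow_up=True):
    """Return (theta1, theta2) in radians for target (x, y), or None if unreachable."""
    d2 = x * x + y * y
    cos_t2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(cos_t2) > 1.0:
        return None                                   # target outside the workspace
    theta2 = math.acos(cos_t2)
    if elbow_up:
        theta2 = -theta2
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

# Example: a point on a straight-line trajectory in the vertical plane.
print(two_link_ik(0.25, 0.10))
```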


Sensors ◽  
2020 ◽  
Vol 20 (15) ◽  
pp. 4114
Author(s):  
Shao-Kang Huang ◽  
Chen-Chien Hsu ◽  
Wei-Yen Wang ◽  
Cheng-Hung Lin

Accurate estimation of 3D object pose is highly desirable in a wide range of applications, such as robotics and augmented reality. Although significant advances have been made in pose estimation, there is room for further improvement. Recent pose estimation systems utilize an iterative refinement process to revise the predicted pose to obtain a better final output. However, such a refinement process takes only geometric features into account during the iterations. Motivated by this approach, this paper designs a novel iterative refinement process that deals with both color and geometric features for object pose refinement. Experiments show that the proposed method is able to reach 94.74% and 93.2% on the ADD(-S) metric with only two iterations, outperforming state-of-the-art methods on the LINEMOD and YCB-Video datasets, respectively.
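For reference, a minimal sketch of the ADD metric used above: the mean distance between the object's 3D model points transformed by the predicted pose and by the ground-truth pose (the symmetric variant ADD-S uses nearest-point distances instead). The point cloud and numbers below are illustrative.

```python
# A minimal sketch of the ADD pose-accuracy metric; poses are 3x3 rotations plus
# translation vectors, and the model point cloud is illustrative.
import numpy as np

def add_metric(model_points, R_pred, t_pred, R_gt, t_gt):
    pred = model_points @ R_pred.T + t_pred
    gt = model_points @ R_gt.T + t_gt
    return np.linalg.norm(pred - gt, axis=1).mean()

# A pose is typically counted as correct if ADD is below 10% of the object diameter.
points = np.random.rand(500, 3) * 0.1                 # illustrative object model (metres)
R = np.eye(3)
print(add_metric(points, R, np.zeros(3), R, np.array([0.002, 0.0, 0.0])))
```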


1994 ◽  
Vol 18 (3) ◽  
pp. 191-205 ◽  
Author(s):  
A. Hemami

As part of a study toward automating the loading process of a mechanical loader, in excavation or reclaiming operations from a muck pile, the forces involved in the scooping action must be analyzed. In the present approach, the cutting edge of the bucket is regarded as the tool point of a robot manipulator. The loader itself is considered a robot arm, so the relevant knowledge and state of the art in robotics can be utilized to automate its operation. There are five forces at each instant during scooping that must be provided to the bucket by the actuators moving it. These forces are: the weight of the material to be moved; the force for pushing, pressing, and compacting the material; the friction forces; the digging or cutting force; and the dynamic or inertia forces of the motion. Analyzing these forces and formulating their variation during scooping requires a great deal of theoretical and experimental research. This work is confined to the first force, the weight of the loaded material. After a general description of all these forces, approximate expressions are derived for the calculation of this force in terms of the parameters of motion and the dimensions of a loader bucket.


2014 ◽  
Vol 668-669 ◽  
pp. 347-351 ◽  
Author(s):  
Lang Liu ◽  
Niu Wang ◽  
Chu Zhong Yu ◽  
Da Tao Wang

Robot manipulator position and posture control is a popular topic in the field of uncalibrated visual servoing. This paper presents a Kalman-filter-based five-degrees-of-freedom uncalibrated visual positioning method for a robot manipulator. In the case where the parameters of the fixed binocular cameras and of the manipulator are unknown, point and angle image features in the camera image space are first selected to describe the relative pose between the manipulator end-effector and the goal. Then, a Kalman-filter online estimation algorithm is applied to calculate the image Jacobian matrix, which maps image space to Cartesian task space, and a vision controller designed in the image plane realizes five-degrees-of-freedom uncalibrated visual positioning control of the manipulator. Finally, a Simulink model of five-degrees-of-freedom uncalibrated visual positioning for a six-degrees-of-freedom robot manipulator is established in the Matlab environment, and the simulation results show that the Kalman-filter online estimation method makes the robot manipulator converge rapidly to the desired position and posture with high accuracy.
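A minimal numpy sketch of the core idea, Kalman-filter online estimation of the image Jacobian: the state is the vectorised Jacobian, and each measurement is the observed change in image features for a known joint increment. The dimensions, noise levels, and simulated "true" Jacobian are illustrative, not the paper's setup.

```python
# A minimal sketch of Kalman-filter online image-Jacobian estimation for
# uncalibrated visual servoing; all dimensions and noise levels are illustrative.
import numpy as np

m, n = 4, 5                       # image-feature dimension, number of joints (5 DOF)
x = np.zeros(m * n)               # state: vec(J), column-major
P = np.eye(m * n) * 10.0          # state covariance
Q = np.eye(m * n) * 1e-4          # process noise (Jacobian drift)
R = np.eye(m) * 1e-6              # measurement noise covariance

def kf_update(x, P, dq, ds):
    """One Kalman step: predict (random walk) then correct with ds ~ J dq."""
    P_pred = P + Q
    H = np.kron(dq.reshape(1, -1), np.eye(m))      # H @ vec(J) equals J @ dq
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (ds - H @ x)
    P_new = (np.eye(m * n) - K @ H) @ P_pred
    return x_new, P_new

# Simulated servoing loop with an unknown "true" Jacobian.
rng = np.random.default_rng(1)
J_true = rng.normal(size=(m, n))
for _ in range(200):
    dq = rng.normal(scale=0.05, size=n)            # small known joint increment
    ds = J_true @ dq + rng.normal(scale=1e-3, size=m)
    x, P = kf_update(x, P, dq, ds)

J_est = x.reshape(m, n, order="F")                 # unpack vec(J)
print(np.abs(J_est - J_true).max())
```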


2014 ◽  
Vol 6 (1) ◽  
pp. 66-75
Author(s):  
Herizon Herizon ◽  
Ade Diana

Robots are among the technologies being actively developed at this time. Robot manipulators are widely used in industry, especially robotic arms with a certain number of degrees of freedom. The main problem for the robot arm is accurately determining the position of the object to be moved. This study applies forward kinematics equation modeling to the movement of a robot manipulator, in particular a 3-degrees-of-freedom (DOF) robot arm equipped with a gripper that clamps and moves the object. The method used in this study is experimental and proceeds in phases: design of the hardware and software, then interconnection of hardware and software in the robot's motion system. The joints are actuated by servo motors. The manipulator control system adjusts the angular position of each joint using the CodeVisionAVR programming language; commands are sent in parallel to the motor driver to produce pulses that move the motors. The forward kinematics equations are modeled using trigonometric equations. Applying forward kinematics modeling to the robot arm's movement provides information about the angle and coordinates of each joint. Testing of the hardware controlled by the software shows that the movement error of each joint varies between 0.06% and 2.567%.
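For illustration, a minimal sketch of trigonometric forward kinematics for a planar 3-DOF arm of the kind described: given three joint angles, it returns the coordinates of each joint and of the gripper. The link lengths are assumptions.

```python
# A minimal sketch of trigonometric forward kinematics for a planar 3-DOF arm;
# link lengths are illustrative.
import math

def forward_kinematics_3dof(theta1, theta2, theta3, l1=0.10, l2=0.10, l3=0.06):
    """Return the (x, y) coordinates of joint 2, joint 3, and the end-effector."""
    x1 = l1 * math.cos(theta1)
    y1 = l1 * math.sin(theta1)
    x2 = x1 + l2 * math.cos(theta1 + theta2)
    y2 = y1 + l2 * math.sin(theta1 + theta2)
    x3 = x2 + l3 * math.cos(theta1 + theta2 + theta3)
    y3 = y2 + l3 * math.sin(theta1 + theta2 + theta3)
    return (x1, y1), (x2, y2), (x3, y3)

# Example: report joint and gripper coordinates for a given set of servo angles.
print(forward_kinematics_3dof(math.radians(30), math.radians(45), math.radians(-20)))
```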


2021 ◽  
Vol 1 ◽  
pp. 87
Author(s):  
Konstantinos C. Apostolakis ◽  
Nikolaos Dimitriou ◽  
George Margetis ◽  
Stavroula Ntoa ◽  
Dimitrios Tzovaras ◽  
...  

Background: Augmented reality (AR) and artificial intelligence (AI) are highly disruptive technologies that have revolutionised practices in a wide range of domains. Their potential has not gone unnoticed in the security sector, with several law enforcement agencies (LEAs) employing AI applications in their daily operations for forensics and surveillance. In this paper, we present the DARLENE ecosystem, which aims to bridge existing gaps in applying AR and AI technologies for rapid tactical decision-making in situ with minimal error margin, thus enhancing LEAs' efficiency and Situational Awareness (SA). Methods: DARLENE incorporates novel AI techniques for computer vision tasks such as activity recognition and pose estimation, while also building an AR framework for visualising the inference results via dynamic content adaptation according to each individual officer's stress level and current context. The concept has been validated with end-users through co-creation workshops, while the decision-making mechanism for enhancing LEAs' SA has been assessed with experts. Regarding the computer vision components, preliminary tests of the instance segmentation method for detecting humans and objects have been conducted on a subset of videos from the RWF-2000 dataset for violence detection; these videos have also been used to test a human pose estimation method that has so far exhibited impressive results and will constitute the basis of further developments in DARLENE. Results: Evaluation results highlight that target users are positive towards adoption of the proposed solution in field operations and that the SA decision-making mechanism produces highly acceptable outcomes. Evaluation of the computer vision components yielded promising results and identified opportunities for improvement. Conclusions: This work provides the context of the DARLENE ecosystem, presents the DARLENE architecture, analyses its individual technologies, and demonstrates preliminary results, which are positive both in terms of technological achievements and user acceptance of the proposed solution.

