THE SYSTEM OF VIDEO-DATA PROCESSING FOR THE AUTONOMOUS CONTROL OF MOBILE ROBOT

2014 ◽  
pp. 82-85
Author(s):  
Denis Vershok ◽  
Rauf Sadykhov ◽  
Andrei Selikhanovich ◽  
Klaus Schilling ◽  
Hubert Roth

This paper describes a video-data processing system based on monocular vision for the autonomous control of a mobile robot. The system detects obstacles in the robot's environment, modeled as a set of straight-line segments. It consists of three basic stages and uses original algorithms that ensure the required precision and real-time operation. The first stage uses a fast edge detection algorithm based on the two-dimensional Walsh transform. The second stage applies a modified Hough transform to detect straight-line segments. The third stage, segment tracking, uses Kalman filtering to track segments in a monocular sequence of images.
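For illustration, a minimal Python sketch of the same three-stage structure, with OpenCV's Canny detector and probabilistic Hough transform standing in for the paper's Walsh-based edge detector and modified Hough transform, and one constant-velocity Kalman filter per segment; all parameter values are assumptions.

```python
# Sketch of the three-stage pipeline: edge detection -> line-segment
# detection -> segment tracking. Canny and HoughLinesP are stand-ins for the
# paper's Walsh-transform edge detector and modified Hough transform.
import cv2
import numpy as np

def detect_segments(frame_gray):
    # Stage 1: edge map (substitute for the 2-D Walsh-transform detector).
    edges = cv2.Canny(frame_gray, 50, 150)
    # Stage 2: straight-line segments via the probabilistic Hough transform.
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=50, minLineLength=30, maxLineGap=5)
    return [] if segments is None else segments[:, 0, :]  # rows of (x1, y1, x2, y2)

def make_segment_tracker(x1, y1, x2, y2):
    # Stage 3: one constant-velocity Kalman filter per segment; the state is
    # the two endpoints plus their velocities (8 states, 4 measurements).
    kf = cv2.KalmanFilter(8, 4)
    F = np.eye(8, dtype=np.float32)
    F[:4, 4:] = np.eye(4, dtype=np.float32)        # endpoints advance by velocity
    kf.transitionMatrix = F
    kf.measurementMatrix = np.hstack(
        [np.eye(4, dtype=np.float32), np.zeros((4, 4), dtype=np.float32)])
    kf.processNoiseCov = 1e-2 * np.eye(8, dtype=np.float32)
    kf.measurementNoiseCov = 1e-1 * np.eye(4, dtype=np.float32)
    kf.statePost = np.array([[x1], [y1], [x2], [y2],
                             [0], [0], [0], [0]], dtype=np.float32)
    return kf
```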

2011 ◽  
Vol 20 (12) ◽  
pp. 1685-1693 ◽  
Author(s):  
GYO TAEK JIN ◽  
SEOJUNG PARK

It is known that every nontrivial knot has at least two quadrisecants. Given a knot, we mark each intersection point of each of its quadrisecants. Replacing each subarc between two nearby marked points with a straight line segment joining them, we obtain a polygonal closed curve which we will call the quadrisecant approximation of the given knot. We show that for any hexagonal trefoil knot, there are only three quadrisecants, and the resulting quadrisecant approximation has the same knot type.
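A minimal sketch of the construction, assuming the marked points are already known: given a parametrization of the knot and the parameter values of the marked quadrisecant intersection points, the quadrisecant approximation is the closed polygon through those points in order along the knot. The trefoil parametrization and the twelve marked parameter values below are illustrative placeholders, not the hexagonal trefoil or its quadrisecants from the paper.

```python
# Building the polygonal closed curve from marked points on a parametrized knot.
import numpy as np

def trefoil(t):
    # A standard smooth trefoil parametrization, t in [0, 1).
    s = 2 * np.pi * t
    return np.array([np.sin(s) + 2 * np.sin(2 * s),
                     np.cos(s) - 2 * np.cos(2 * s),
                     -np.sin(3 * s)])

def quadrisecant_approximation(knot, marked_params):
    # Vertices in the order the marked points occur along the knot; the
    # closing edge joins the last vertex back to the first.
    ts = np.sort(np.asarray(marked_params) % 1.0)
    return np.array([knot(t) for t in ts])

# Three quadrisecants give 4 * 3 = 12 marked points (placeholder values).
vertices = quadrisecant_approximation(
    trefoil, [0.03, 0.11, 0.19, 0.27, 0.36, 0.44,
              0.53, 0.61, 0.69, 0.78, 0.86, 0.94])
```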


Author(s):  
Jun Zhang ◽  
Shuhua Wang ◽  
Zhengling Yang

This paper describes a computer-vision method for measuring the phone slot on an assembly line. After suppressing noise with a fast median filter, the authors locate and extract the target area. They propose an edge detection algorithm based on an improved Canny operator and a line-fitting method based on RHT-LSM, then discard lines whose slope exceeds a given threshold, which yields the corresponding corner-point coordinates. They then examine the semi-circular data at the slot ends and find the points whose tangent slopes are largest; these points lie at the leftmost and rightmost extremes. This gives the length and width of the slot in the image coordinate system, and the camera's intrinsic and extrinsic parameters are obtained through camera calibration. Practice shows that the system is feasible and has high practical value.
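A minimal sketch of the measurement pipeline under stated assumptions: OpenCV's median blur, Canny detector, and probabilistic Hough transform stand in for the fast median method, improved Canny operator, and RHT-LSM line fitting; all thresholds and the pinhole conversion are illustrative.

```python
# Noise suppression -> edge detection -> line fitting with slope filtering,
# then a pinhole conversion from pixels to millimetres using calibration data.
import cv2
import numpy as np

SLOPE_THRESHOLD = 0.2  # assumed: discard lines steeper than this

def measure_slot_edges(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    denoised = cv2.medianBlur(gray, 5)              # fast median filtering
    edges = cv2.Canny(denoised, 50, 150)            # stand-in for improved Canny
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=5)
    kept = []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0, :]:
            slope = abs((y2 - y1) / (x2 - x1 + 1e-9))
            if slope <= SLOPE_THRESHOLD:            # keep near-horizontal edges
                kept.append((x1, y1, x2, y2))
    return kept

def pixels_to_millimetres(length_px, camera_matrix, distance_mm):
    # Convert an image-plane length to millimetres using the focal length
    # from calibration (assumes the slot plane is fronto-parallel).
    fx = camera_matrix[0, 0]
    return length_px * distance_mm / fx
```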


Author(s):  
Anil Kumar ◽  
Hailin Ren ◽  
Pinhas Ben-Tzvi

This paper presents a monocular vision-based, unsupervised floor detection algorithm for semi-autonomous control of a Hybrid Mechanism Mobile Robot (HMMR). The paper primarily focuses on combining monocular vision cues with inertial sensing and ultrasonic ranging for on-line obstacle identification and path planning in the event of limited wireless connectivity. A novel, unsupervised vision algorithm was developed for floor detection and identification of traversable areas, in order to avoid obstacles within the semi-autonomous control architecture. The floor detection algorithms were validated and experimentally tested in an indoor environment under various lighting conditions.
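A minimal sketch of how monocular floor cues and ultrasonic ranging could be fused for a traversability check; the colour-similarity floor mask is a placeholder, not the unsupervised algorithm developed in the paper, and both thresholds are assumptions.

```python
# Fuse a vision-based floor mask with an ultrasonic range reading to decide
# whether the region directly ahead of the robot is traversable.
import cv2
import numpy as np

ULTRASONIC_STOP_DISTANCE_M = 0.4   # assumed safety margin
MIN_FLOOR_FRACTION = 0.6           # assumed fraction of floor pixels ahead

def floor_mask(image_bgr):
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    h, w = hsv.shape[:2]
    # Sample a patch just in front of the robot as the floor reference.
    patch = hsv[int(0.9 * h):, int(0.4 * w):int(0.6 * w)].reshape(-1, 3)
    mean, std = patch.mean(0), patch.std(0) + 1e-6
    dist = np.linalg.norm((hsv - mean) / std, axis=2)
    return (dist < 3.0).astype(np.uint8)            # 1 = looks like floor

def traversable(image_bgr, ultrasonic_range_m):
    if ultrasonic_range_m < ULTRASONIC_STOP_DISTANCE_M:
        return False                                 # obstacle confirmed by sonar
    mask = floor_mask(image_bgr)
    h, w = mask.shape
    ahead = mask[int(0.5 * h):, int(0.3 * w):int(0.7 * w)]
    return ahead.mean() >= MIN_FLOOR_FRACTION
```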


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1081
Author(s):  
Tamon Miyake ◽  
Shintaro Yamamoto ◽  
Satoshi Hosono ◽  
Satoshi Funabashi ◽  
Zhengxue Cheng ◽  
...  

Gait phase detection, which detects foot-contact and foot-off states during walking, is important for various applications, such as synchronous robotic assistance and health monitoring. Gait phase detection systems have been proposed with various wearable devices, sensing inertial, electromyography, or force myography information. In this paper, we present a novel gait phase detection system with static standing-based calibration using muscle deformation information. The gait phase detection algorithm can be calibrated within a short time using muscle deformation data by standing in several postures; it is not necessary to collect data while walking for calibration. A logistic regression algorithm is used as the machine learning algorithm, and the probability output is adjusted based on the angular velocity of the sensor. An experiment is performed with 10 subjects, and the detection accuracy of foot-contact and foot-off states is evaluated using video data for each subject. The median accuracy is approximately 90% during walking based on calibration for 60 s, which shows the feasibility of the static standing-based calibration method using muscle deformation information for foot-contact and foot-off state detection.
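A minimal sketch of the calibration-and-detection idea: logistic regression fitted on muscle-deformation samples collected while standing in several postures, with the foot-contact probability adjusted by the gyroscope reading at run time. The feature layout and the adjustment rule are assumptions, not the paper's exact formulation.

```python
# Static standing-based calibration followed by gyro-adjusted detection.
import numpy as np
from sklearn.linear_model import LogisticRegression

def calibrate(deformation_samples, labels):
    # deformation_samples: (n_samples, n_channels) muscle-deformation values
    # recorded during static standing; labels: 1 = foot contact, 0 = foot off.
    model = LogisticRegression(max_iter=1000)
    model.fit(deformation_samples, labels)
    return model

def detect_foot_contact(model, deformation, angular_velocity, gain=0.05):
    # Base probability from the calibrated model.
    p = model.predict_proba(deformation.reshape(1, -1))[0, 1]
    # Illustrative adjustment: fast shank rotation (swing) lowers the
    # foot-contact probability.
    p_adj = np.clip(p - gain * abs(angular_velocity), 0.0, 1.0)
    return p_adj >= 0.5, p_adj
```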


1999 ◽  
Vol 17 (1) ◽  
pp. 51-60 ◽  
Author(s):  
Jun Tang ◽  
Keigo Watanabe ◽  
Katsutoshi Kuribayashi ◽  
Yamato Shiraishi

Algorithms ◽  
2021 ◽  
Vol 14 (2) ◽  
pp. 56
Author(s):  
Gokarna Sharma ◽  
Ramachandran Vaidyanathan ◽  
Jerry L. Trahan

We consider the distributed setting of N autonomous mobile robots that operate in Look-Compute-Move (LCM) cycles and use colored lights (the robots with lights model). We assume obstructed visibility where a robot cannot see another robot if a third robot is positioned between them on the straight line segment connecting them. In this paper, we consider the problem of positioning N autonomous robots on a plane so that every robot is visible to all others (this is called the Complete Visibility problem). This problem is fundamental, as it provides a basis to solve many other problems under obstructed visibility. In this paper, we provide the first, asymptotically optimal, O(1) time, O(1) color algorithm for Complete Visibility in the asynchronous setting. This significantly improves on an O(N)-time translation of the existing O(1) time, O(1) color semi-synchronous algorithm to the asynchronous setting. The proposed algorithm is collision-free, i.e., robots do not share positions, and their paths do not cross. We also introduce a new technique for moving robots in an asynchronous setting that may be of independent interest, called Beacon-Directed Curve Positioning.
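A minimal sketch of the obstructed-visibility rule assumed by the model: robot a sees robot b unless some third robot lies strictly inside the straight-line segment joining them. The floating-point tolerance is an assumption for real-valued positions.

```python
# Obstructed-visibility check for point robots on the plane.
import numpy as np

EPS = 1e-9

def blocks(a, b, c):
    # True if point c lies strictly between a and b on segment ab.
    a, b, c = map(np.asarray, (a, b, c))
    cross = np.cross(b - a, c - a)
    if abs(float(np.linalg.norm(cross))) > EPS:
        return False                      # not collinear with a and b
    t = np.dot(c - a, b - a) / np.dot(b - a, b - a)
    return EPS < t < 1 - EPS              # strictly interior to the segment

def visible(a, b, others):
    # Robot at a sees robot at b iff no other robot blocks the segment ab.
    return not any(blocks(a, b, c) for c in others)
```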


1981 ◽  
Vol 71 (4) ◽  
pp. 1351-1360
Author(s):  
Tom Goforth ◽  
Eugene Herrin

An automatic seismic signal detection algorithm based on the Walsh transform has been developed for short-period data sampled at 20 samples/sec. Since the amplitude of a Walsh function is either +1 or −1, the Walsh transform can be computed with a series of shifts and fixed-point additions. The savings in computation time make it possible to compute the Walsh transform and to perform prewhitening and band-pass filtering in the Walsh domain with a microcomputer for use in real-time signal detection. The algorithm was initially programmed in FORTRAN on a Raytheon Data Systems 500 minicomputer. Tests utilizing seismic data recorded in Dallas, Albuquerque, and Norway indicate that the algorithm has a detection capability comparable to that of a human analyst. The detection algorithm has also been programmed in machine language on a Z80 microprocessor-based computer; run time on the microcomputer is approximately 1/10 of real time. The detection capability of the Z80 version of the algorithm is not degraded relative to the FORTRAN version.
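A minimal sketch of an in-place fast Walsh-Hadamard transform, illustrating why only additions and subtractions are needed; this is the textbook butterfly form, not the authors' FORTRAN or Z80 implementation.

```python
# Fast Walsh-Hadamard transform: because Walsh functions take only the values
# +1 and -1, each butterfly stage needs only additions and subtractions (no
# multiplications), which suits fixed-point microprocessors. Length must be a
# power of two; normalisation is omitted.
def fwht(x):
    x = list(x)
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b   # add/subtract only
        h *= 2
    return x

# Example: transform of an 8-sample window.
coeffs = fwht([1, 0, 1, 0, 0, 1, 1, 0])
```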


2017 ◽  
Vol 7 (1.1) ◽  
pp. 213
Author(s):  
Sheela Rani ◽  
Vuyyuru Tejaswi ◽  
Bonthu Rohitha ◽  
Bhimavarapu Akhil

Face recognition has turned out to be one of the most important and interesting areas of research. A face recognition system is a computer application capable of identifying or verifying a human face in a digital image, in video frames, and so on. One approach is to match selected facial features against the images in a database. It is commonly used in security systems and can be combined with other biometrics, such as fingerprint or iris recognition. An image is a composition of edges: the curved portions where the brightness of the image changes sharply are known as edges. A similar idea is used in face detection, where the intensity of facial colours is used as a consistent cue. Face recognition involves comparing an image with a database of stored faces in order to identify the individual in the given input picture. The entire procedure covers three phases: face detection, feature extraction, and recognition; different strategies are required according to the specified requirements.
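As an illustration of the first phase (face detection), a minimal sketch using OpenCV's bundled Haar-cascade model; the image path is a placeholder, and feature extraction and recognition would follow on the detected regions.

```python
# Face detection with OpenCV's pre-trained frontal-face Haar cascade.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(image_path):
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Returns one (x, y, w, h) rectangle per detected face.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

faces = detect_faces("input.jpg")  # placeholder path
```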

