Technical vision system to ensure the movement of a group of multicopters

Author(s):  
Yurii Bobkov ◽  
Pavlo Pishchela

The important task of controlling a group of multicopters that perform coordinated actions while flying at short distances from each other cannot be solved with a standard on-board autopilot using GPS or GLONASS signals, which give large errors. The problem can be solved with additional equipment that determines the distances between the multicopters and their relative positions. To this end, it is proposed to mark each multicopter with an image label in the form of a standard geometric figure or a geometric body of a given color and size, and to use a technical vision system with image recognition algorithms. The structure of the technical vision system for the multicopter was developed, and algorithms were proposed for image processing and for calculating the change in coordinates of the neighboring multicopter, which are transmitted to the control system to introduce the necessary motion correction. In this work, the reference object is identified in the scene image by its color. This method is very efficient compared to other methods because it requires only one pass over the pixels, which gives a significant speed advantage when processing video stream frames. Based on the analysis, the RGB color model with a color depth of 24 bits was chosen. Since the lighting can change during flight, the color is specified by limits on the R, G, and B components. To determine the distance between multicopters, a simple but effective method is used: the area of the recognized object (the label on the neighboring multicopter) is measured and compared with its actual value. Since the reference object is artificial, its area can be specified with high accuracy. The offset of the object's center from the center of the frame is used to calculate the other two coordinates.
First, the specific camera instance is calibrated both for a known value of the object's area and for its displacement along the axes relative to the center of the frame. A model of the technical vision system was created in the Simulink environment of the Matlab system to test the proposed algorithms. Based on the Simulink simulation results, code in the C programming language can be generated for a subsequent real-time implementation of the system. A series of studies of the model was conducted using a Logitech C210 webcam with a 0.3-megapixel sensor (640x480 resolution). The experiment showed that the maximum relative error in determining the coordinates of the multicopter did not exceed 6.8%.
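The color-threshold segmentation and area-based range estimate described above can be sketched as follows. This is a minimal Python illustration: the calibration constants, threshold limits, and function names are assumptions for the sake of the sketch, not values from the paper.

```python
import numpy as np

# Hypothetical calibration constants: they would come from calibrating a
# specific camera against a label of known physical size, as the paper describes.
REF_AREA_PX = 4000.0    # label area in pixels at the reference distance
REF_DIST_M = 2.0        # reference distance in metres
PX_TO_M_AT_REF = 0.001  # lateral metres per pixel at the reference distance

def segment_label(frame, lo, hi):
    """Binary mask of pixels whose (R, G, B) values fall inside the calibrated
    limits. A single pass over the pixels, as in the color-threshold method."""
    return np.all((frame >= lo) & (frame <= hi), axis=-1)

def estimate_offsets(mask):
    """Range from the apparent label area (area scales as 1/distance^2), and
    the two lateral coordinates from the centroid offset to the frame centre."""
    ys, xs = np.nonzero(mask)
    area = xs.size
    if area == 0:
        return None  # label not visible in this frame
    dist = REF_DIST_M * np.sqrt(REF_AREA_PX / area)
    cy, cx = ys.mean(), xs.mean()
    h, w = mask.shape
    # Pixel-to-metre scale grows linearly with distance under a pinhole model.
    scale = PX_TO_M_AT_REF * dist / REF_DIST_M
    dx = (cx - w / 2) * scale
    dy = (cy - h / 2) * scale
    return dist, dx, dy
```

A red label would be segmented by passing, e.g., `lo = (150, 0, 0)` and `hi = (255, 80, 80)`; widening these limits accounts for lighting changes during flight, as the abstract notes.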

2021 ◽  
Vol 9 (7) ◽  
pp. 740
Author(s):  
Alexander Konoplin ◽  
Vladimir Filaretov ◽  
Alexander Yurmanov

A novel method for supervisory control of multilink manipulators mounted on underwater vehicles is considered. The method is designed to significantly increase the level of automation of manipulation operations by building motion trajectories for the manipulator's working tool along the surfaces of work objects on the basis of target indications given by the operator. This is achieved as follows: the operator aims the camera (whose optical axis has a changeable spatial orientation) mounted on the vehicle at the work object and uses it to set one or more working points on the selected object. The geometric shape of the object in the work area is determined using point clouds obtained from the technical vision system. Depending on the manipulation task, the spatial motion trajectories and the orientation of the manipulator's working tool are set automatically using the spatial coordinates of the points lying on the work object's surfaces. The method was implemented in the C++ programming language. A graphical interface has also been created that allows rapid testing of how accurately the planned trajectories are overlaid on the mathematically described surface of a work object. Supervisory control of an underwater manipulator was successfully simulated in the V-REP environment.
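The trajectory-building step, deriving a tool pose from the point cloud around an operator-selected point, can be sketched as a local plane fit. This is a simplified illustration, not the authors' C++ implementation; the function name, neighbourhood radius, and use of the plane normal as the tool-approach direction are assumptions.

```python
import numpy as np

def surface_frame(cloud, target, radius):
    """Fit a local plane to the cloud points near `target` and return the
    target projected onto that plane together with the unit surface normal,
    which can serve as the tool-approach direction."""
    d = np.linalg.norm(cloud - target, axis=1)
    near = cloud[d < radius]
    centroid = near.mean(axis=0)
    # The singular vector with the smallest singular value of the centred
    # neighbourhood is the least-squares plane normal.
    _, _, vt = np.linalg.svd(near - centroid)
    normal = vt[-1]
    # Project the operator-selected point onto the fitted plane.
    point = target - np.dot(target - centroid, normal) * normal
    return point, normal
```

Repeating this for a sequence of working points yields surface-following waypoints with a tool orientation at each one.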


Author(s):  
A. A. Nedbaylov

The calculations required in project activities for engineering students are commonly performed in electronic spreadsheets. Practice has shown that such calculations can prove quite difficult for students of other fields. One of the causes of this situation (and, in part, of the problems observed in Java and C programming courses) lies in the lack of a streamlined structure for laying out both the source data and the results. A solution can be found in a shared approach to structuring information in spreadsheet and software environments, called "the Book Method", which takes into account the engineering-psychology issues of user-friendly work with electronic information. The method can be applied at different levels in academic institutions and in teacher training courses.


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 391
Author(s):  
Luca Bigazzi ◽  
Stefano Gherardini ◽  
Giacomo Innocenti ◽  
Michele Basso

In this paper, solutions for precise maneuvering of autonomous small (e.g., 350-class) Unmanned Aerial Vehicles (UAVs) are designed and implemented through smart modifications of inexpensive mass-market technologies. The considered class of vehicles has a small payload, so only a limited number of sensors and computing devices can be installed on board. To make the prototype capable of moving autonomously along a fixed trajectory, a "cyber-pilot", able to replace the human operator on demand, has been implemented on an embedded control board. The cyber-pilot overrides the commands through a custom hardware signal mixer. The drone localizes itself in the environment without ground assistance by using a camera, possibly mounted on a 3-Degrees-Of-Freedom (DOF) gimbal suspension. A computer vision system processes the video stream, picking out land markers with known absolute position and orientation. This information is fused with accelerations from a 6-DOF Inertial Measurement Unit (IMU) to form a "virtual sensor" that provides refined estimates of the pose, absolute position, speed, and angular velocities of the drone. Given the importance of this sensor, several fusion strategies have been investigated. The resulting data are finally fed to a control algorithm featuring a number of uncoupled digital PID controllers that drive the displacement from the desired trajectory to zero.
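The fusion of slow, absolute camera fixes with fast IMU integration, and the uncoupled PID loops, can be sketched in single-axis form. The filter below is the simplest complementary scheme and the gains are illustrative assumptions; the paper investigates several, more elaborate fusion strategies.

```python
class ComplementaryFilter:
    """One-axis sketch of the 'virtual sensor': IMU acceleration is integrated
    at high rate, and each absolute camera fix pulls the position estimate
    back by a fraction `alpha`, bounding the integration drift."""
    def __init__(self, alpha=0.1):
        self.alpha = alpha
        self.pos = 0.0
        self.vel = 0.0

    def predict(self, accel, dt):
        """High-rate IMU step: integrate acceleration into velocity/position."""
        self.vel += accel * dt
        self.pos += self.vel * dt

    def correct(self, cam_pos):
        """Low-rate vision step: blend in the absolute marker-based fix."""
        self.pos += self.alpha * (cam_pos - self.pos)

class PID:
    """One uncoupled digital PID controller per controlled axis."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In use, each trajectory axis gets its own `PID` instance fed with the displacement reported by the fused estimate, matching the uncoupled structure described above.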


Author(s):  
Д.А. Смирнов ◽  
В.Г. Бондарев ◽  
А.В. Николенко

The article provides a brief analysis of both domestic and foreign inter-aircraft navigation systems. The analysis revealed the shortcomings of existing inter-aircraft navigation systems, and an up-to-date approach to improving navigation accuracy through the use of a technical vision system is presented. To determine the position of the leading aircraft, a technical vision system is proposed as the measuring complex; it is able to solve a wide range of tasks at various flight stages, including formation flight. The technical vision system is to be installed on the trailing aircraft in order to measure all the parameters necessary for automatic flight control of the aircraft. Images of the leading aircraft are processed to determine the coordinates of three identical points on the photosensitive matrices. Optically contrasting elements of the aircraft structure are selected as these points, for example, the wingtips, the tail unit, etc. To simplify the image processing procedure, semiconductor light sources in the infrared range can be used (for example, with a wavelength of λ = 1.54 µm), which allows operation even in difficult weather conditions. This approach can be used to automate formation flight of more than two aircraft; it is only necessary to equip all the trailing aircraft of the group with a technical vision system.
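Recovering the 3-D position of one such contrast point from its coordinates on two photosensitive matrices reduces to stereo triangulation under a pinhole model. The sketch below uses illustrative parameter names and units, not the article's; with three points of known geometry on the leading aircraft, the full relative pose follows from three such measurements.

```python
def point_from_stereo(xl, yl, xr, baseline, focal):
    """Recover the 3-D position of one optically contrasting point (e.g. a
    wingtip IR source) from its image coordinates on two photosensitive
    matrices a known `baseline` apart. `xl`, `yl` are the left-matrix pixel
    coordinates, `xr` the right-matrix x coordinate; `focal` is in pixels."""
    disparity = xl - xr                 # pixels; shrinks with distance
    z = focal * baseline / disparity    # range along the optical axis
    x = xl * z / focal                  # lateral offset
    y = yl * z / focal                  # vertical offset
    return x, y, z
```

The 1/disparity dependence also shows why accuracy degrades with range, one motivation for the IR beacons that keep the point coordinates sharp.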


Author(s):  
P. P. Kazakevich ◽  
A. N. Yurin ◽  
G. А. Prokopovich

The most rational method of assessing fruit quality is the optical method using PPE, which offers accuracy and stability of measurement, as well as remote operation and high productivity. The paper presents a classification of fruit-quality recognition systems and substantiates the design and technological scheme of a vision system for sorting fruit, consisting of an optical module with structured illumination and a video camera, an electronic control unit with an interface, and actuators for the sorter and the fruit conveyor. In the course of the study, a single-stream type of fruit flow in PPE with forced rotation was substantiated; a structural and technological scheme of the technical vision system (STZ) with a feeding conveyor, an optical module, and a control unit was developed; and an algorithm for the STZ software was developed, based on a fruit color segmentation algorithm, a tracking algorithm, and deep-learning artificial neural networks, which provide recognition of the size and color of fruits, as well as of damage from mechanical stress, pests, and diseases. The developed STZ has been integrated into the LSP-4 processing line for sorting and packing apples, which successfully passed preliminary and production tests at OJSC Ostromechevo. During the preliminary tests of the LSP-4 line, it was found to provide fruit recognition with a probability of at least 95%, with a throughput of 2.5 t/h.
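The sorter's final decision stage, mapping the recognized size and detected damage of a fruit to an output lane, might look like the following toy rule. The thresholds and grade names are illustrative assumptions, not the parameters of the LSP-4 line.

```python
def grade_fruit(diameter_mm, defect_fraction):
    """Toy grading rule in the spirit of the sorter's decision stage: the
    measured diameter and the share of the surface flagged as damaged
    (mechanical stress, pests, disease) select a sorter lane."""
    if defect_fraction > 0.05:   # any visible damage beyond 5% of surface
        return "reject"
    if diameter_mm >= 70:
        return "premium"
    if diameter_mm >= 55:
        return "standard"
    return "small"
```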


Author(s):  
Matthew L. Dering ◽  
Conrad S. Tucker

The authors of this work present a computer vision approach that discovers and classifies objects in a video stream, working towards an automated system for managing End of Life (EOL) waste streams. Currently, the sorting stage of EOL waste management is an extremely manual and tedious process that increases the cost of EOL options and reduces their attractiveness as a profitable enterprise solution. A wide range of EOL methodologies has been proposed in the engineering design community, focused on determining the optimal EOL strategies of reuse, recycling, remanufacturing, and resynthesis. However, many of these methodologies assume a product/component disassembly cost based on human labor, which increases the cost of EOL waste management. For example, recent EOL options such as resynthesis rely heavily on optimally sorting and combining components in novel ways to form new products. This process, however, requires considerable manual labor, which may make the option less attractive for products with highly complex interactions and components. To mitigate these challenges, the authors propose a computer vision system that takes live video streams of incoming EOL waste and (i) automatically identifies and classifies products/components of interest and (ii) predicts the EOL process that will be needed for each classified product/component. A case study involving an EOL waste-stream video demonstrates the predictive accuracy of the proposed methodology in identifying and classifying EOL objects.
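The two-stage pipeline, classifying an object in a frame and then predicting the EOL process for that class, can be sketched as below. The classifier here is a stand-in stub for the paper's trained vision model, and the class-to-process table is an illustrative assumption.

```python
# Illustrative mapping from a predicted object class to an EOL strategy;
# the real mapping would come from the EOL methodology being applied.
EOL_PROCESS = {
    "pcb": "resynthesis",
    "plastic_casing": "recycle",
    "electric_motor": "remanufacture",
}

def route_component(classify, frame):
    """Stage (i): run the detector on one video frame to get a class label.
    Stage (ii): look up the EOL process for that class; anything the table
    does not cover falls back to the manual sorting line."""
    label = classify(frame)
    return label, EOL_PROCESS.get(label, "manual_sort")
```

In a deployment, `classify` would be the trained video-stream classifier; the fallback lane preserves the existing manual process for unrecognized items.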


2016 ◽  
Vol 2016 (12) ◽  
pp. 1-8
Author(s):  
Peter Reichel ◽  
Jens Döge ◽  
Nico Peter ◽  
Christoph Hoppe ◽  
Andreas Reichel ◽  
...  

2004 ◽  
Author(s):  
V. D. Gorbach ◽  
I. V. Souzdalev ◽  
E. V. Shapovalov ◽  
A. I. Klochko ◽  
F. N. Kiselevskiy

2013 ◽  
Vol 694-697 ◽  
pp. 1925-1930
Author(s):  
Xin Jie Wang ◽  
Zhi Lin Yang ◽  
Jie Liu

Robot localization is a key technology for a quadruped robot with hand-fused feet. A localization method based on a binocular vision system is studied for such a robot. After an image is obtained by a single camera, the object is segmented using a feature extraction method based on color. Image processing such as filtering (de-noising) and morphological opening is then performed. The object is identified and its centroid coordinates in the image are obtained. Localization of the robot relative to an environmental reference (the object's coordinate frame) is achieved. Experiments show the effectiveness and the accuracy (within 4 cm) of the method.
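The centroid step of the pipeline above can be sketched as follows, assuming the color segmentation, filtering, and opening have already produced a binary mask of the object (a minimal sketch; the function name is illustrative).

```python
import numpy as np

def centroid(mask):
    """Centroid (row, col) of the segmented object's binary mask, computed
    as the mean of the foreground pixel coordinates; None if the mask is
    empty (object not found in this frame)."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return ys.mean(), xs.mean()
```

Matching this centroid across the two binocular views then yields the object's position in the environment reference frame.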


Author(s):  
Sergey I. Babaev ◽  
Alexey I. Baranchikov ◽  
Natalya N. Grinchenko ◽  
Aleksandr N. Kolesenkov ◽  
Alexander A. Loginov
