Development and research of the algorithm search of the singular points position in the aircraft vision systems

Author(s):  
A. S. Tuzhilkin

The paper deals with the feature detection problem in aircraft machine vision systems. We developed a fast detection algorithm that determines feature locations in image sequences depicting fast-moving objects, as registered by aircraft machine vision systems, and analysed existing feature detection algorithms. The input frame sequence undergoes a geometric transformation, namely rotation. We simulated how the algorithm processes a sequence of frames depicting the target, and compared the efficiency and speed of its feature detection with those of the well-known SIFT algorithm. We demonstrate that the developed algorithm ensures faster feature detection and is more resistant to geometric transformations of the target image than SIFT.
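The abstract does not disclose the detector's internals, so as a neutral illustration of the intensity-based corner detection that SIFT-class feature detectors build on, here is a minimal Harris corner response in NumPy (all names and parameters are illustrative, not the authors' code):

```python
import numpy as np

def harris_response(img, k=0.05):
    """Compute the Harris corner response for a grayscale float image."""
    # Image gradients via central differences.
    Iy, Ix = np.gradient(img.astype(float))

    def box(a, r=1):
        """Smooth with a (2r+1) x (2r+1) box filter (wrap-around borders)."""
        out = np.zeros_like(a)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += np.roll(np.roll(a, dy, 0), dx, 1)
        return out / (2 * r + 1) ** 2

    # Smoothed structure-tensor components.
    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    # Harris response: det(M) - k * trace(M)^2.
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# Synthetic frame: a bright square on a dark background.
frame = np.zeros((32, 32))
frame[8:24, 8:24] = 1.0
R = harris_response(frame)
corner = np.unravel_index(np.argmax(R), R.shape)  # strongest response
```

The strongest response lands at one of the square's four corners, where the structure tensor has two large eigenvalues; edges score negative because only one gradient direction dominates.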

2021 ◽  
Vol 40 (1) ◽  
pp. 773-786
Author(s):  
Shuai Liu ◽  
Ying Xu ◽  
Lingming Guo ◽  
Meng Shao ◽  
Guodong Yue ◽  
...  

Tens of thousands of work-related injuries and deaths are reported in the construction industry each year, and a high percentage of them occur because construction workers are not wearing safety equipment. To address this safety issue, it is necessary to automatically identify people and detect the safety characteristics of personnel at the same time in prefabricated buildings. Therefore, this paper proposes a depth feature detection algorithm based on the Extended-YOLOv3 model. On the basis of the YOLOv3 network, a safety feature recognition network and a feature transmission network are added so that safety features are detected while personnel are identified. Firstly, a safety feature recognition network is added in parallel to the YOLOv3 network to analyse the wearing characteristics of construction workers. Secondly, an S-SPP module is added to the object detection and feature recognition networks to broaden the features of the deep network and help it extract more useful features from the high-resolution input image. Finally, a dedicated feature transmission network is designed to transfer features between the construction worker detection network and the safety feature recognition network, so that each network can obtain feature information from the other. Compared with the YOLOv3 algorithm, the Extended-YOLOv3 of this paper adds safety feature recognition and feature transmission functions as well as the S-SPP module. The experimental results show that the Extended-YOLOv3 algorithm outperforms YOLOv3 by 1.3% in the AP index.
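The paper's exact S-SPP design is not reproduced here, but the standard spatial pyramid pooling idea it extends (pooling a feature map over several grid scales so the network aggregates features at multiple receptive sizes into one fixed-length vector) can be sketched as follows; shapes and pyramid levels are illustrative assumptions:

```python
import numpy as np

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Max-pool a (C, H, W) feature map over a pyramid of n x n grids and
    concatenate the per-cell, per-channel maxima into one fixed-length vector."""
    c, h, w = feature_map.shape
    pooled = []
    for n in levels:
        # Split the map into an n x n grid and take the max of each cell.
        ys = np.linspace(0, h, n + 1, dtype=int)
        xs = np.linspace(0, w, n + 1, dtype=int)
        for i in range(n):
            for j in range(n):
                cell = feature_map[:, ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
                pooled.append(cell.max(axis=(1, 2)))
    return np.concatenate(pooled)

fmap = np.random.rand(8, 13, 13)      # e.g. one YOLO-scale feature map
vec = spatial_pyramid_pool(fmap)      # length 8 * (1 + 4 + 16) = 168
```

The output length depends only on the channel count and the pyramid levels, not on H and W, which is what lets such a module sit between arbitrary-resolution feature maps and fixed-size heads.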


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Liyun Liu

In this paper, machine-vision-based moving object detection technology for line dancing is studied to improve object detection. For this purpose, an improved frame-difference background modelling technique is combined with the target detection algorithm. The moving target is extracted and morphological post-processing is carried out to make the detection more accurate. On this basis, during the tracking stage the tracking target is determined on the time axis, its position is found in each frame, and the most similar target is found in each frame of the video sequence. An association relationship is established to determine a moving object template or feature. Using certain measurement criteria, the mean-shift algorithm searches for the optimal candidate target in each image frame and performs the corresponding matching to realise moving object tracking. Experimental analysis shows that this method can detect the moving targets of line dancing in various areas, is not affected by position or distance, and consistently achieves accurate detection.
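The mean-shift search described above amounts to repeatedly moving a window to the centroid of a weight map (for example, a histogram back-projection of the target template). This NumPy version is a minimal illustration of that iteration, not the paper's implementation:

```python
import numpy as np

def mean_shift(weights, window, max_iter=20):
    """Shift a (y, x, h, w) window toward the centroid of a weight map
    (e.g. a histogram back-projection) until it stops moving."""
    y, x, h, w = window
    for _ in range(max_iter):
        patch = weights[y:y + h, x:x + w]
        total = patch.sum()
        if total == 0:
            break
        ys, xs = np.mgrid[0:h, 0:w]
        # Centroid of the weights inside the current window.
        cy = (ys * patch).sum() / total
        cx = (xs * patch).sum() / total
        ny = int(round(y + cy - (h - 1) / 2))
        nx = int(round(x + cx - (w - 1) / 2))
        if (ny, nx) == (y, x):
            break  # converged: window is centred on the local mode
        y, x = ny, nx
    return y, x, h, w

# Synthetic back-projection: a blob of high weights away from the start window.
bp = np.zeros((60, 60))
bp[30:40, 35:45] = 1.0
track = mean_shift(bp, (24, 28, 14, 14))
```

Starting from a window that only partially overlaps the blob, the iteration walks the window until its centre coincides with the blob's centroid.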


2017 ◽  
Vol 2 (1) ◽  
pp. 80-87
Author(s):  
Puyda V. ◽  
Stoian A.

Detecting objects in a video stream is a typical problem in modern computer vision systems used in many areas. Object detection can be done both on static images and on frames of a video stream. Essentially, object detection means finding color and intensity non-uniformities which can be treated as physical objects. Besides that, operations that find the coordinates, size and other characteristics of these non-uniformities can be executed and used to solve other computer vision problems such as object identification. In this paper, we study three algorithms which can be used to detect objects of different nature and are based on different approaches: detection of color non-uniformities, frame difference and feature detection. As the input data, we use a video stream obtained from a video camera or from an mp4 video file. Simulation and testing of the algorithms were done on a universal computer based on open-source hardware, built on the Broadcom BCM2711 quad-core Cortex-A72 (ARM v8) 64-bit SoC running at 1.5 GHz. The software was created in Visual Studio 2019 using OpenCV 4 on Windows 10, and on a universal computer running Linux (Raspbian Buster OS) for the open-source hardware. In the paper, the methods under consideration are compared. The results can be used in research and development of modern computer vision systems for different purposes. Keywords: object detection, feature points, keypoints, ORB detector, computer vision, motion detection, HSV color model
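Of the three approaches studied, frame difference is the simplest to illustrate: subtract consecutive grayscale frames, threshold the absolute difference, and treat the surviving pixels as motion. A minimal NumPy sketch with synthetic frames and an illustrative threshold:

```python
import numpy as np

def frame_difference(prev, curr, threshold=25):
    """Return a binary motion mask from two consecutive grayscale frames."""
    # Promote to a signed type so the subtraction cannot wrap around.
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

def bounding_box(mask):
    """Coordinates (y0, x0, y1, x1) of the moving region, or None if static."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return ys.min(), xs.min(), ys.max(), xs.max()

# Synthetic frames: a bright 8x8 object moves 5 pixels to the right.
f0 = np.zeros((48, 48), dtype=np.uint8)
f1 = np.zeros((48, 48), dtype=np.uint8)
f0[20:28, 10:18] = 200
f1[20:28, 15:23] = 200
mask = frame_difference(f0, f1)
box = bounding_box(mask)
```

Note that the overlap region of the two object positions cancels out, so the mask marks only the leading and trailing bands of the motion; the bounding box still covers the whole moved extent.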


Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Hiranya Jayakody ◽  
Paul Petrie ◽  
Hugo Jan de Boer ◽  
Mark Whitty

Abstract
Background: Stomata analysis using microscope imagery provides important insight into plant physiology, health and the surrounding environmental conditions. Plant scientists are now able to conduct automated high-throughput analysis of stomata in microscope data; however, existing detection methods are sensitive to the appearance of stomata in the training images, thereby limiting general applicability. In addition, existing methods only generate bounding boxes around detected stomata, requiring users to implement additional image processing steps to study stomata morphology. In this paper, we develop a fully automated, robust stomata detection algorithm which can also identify individual stomata boundaries regardless of the plant species, sample collection method, imaging technique and magnification level.
Results: The proposed solution consists of three stages. First, the input image is pre-processed to remove any colour-space biases arising from different sample collection and imaging techniques. Then, a Mask R-CNN is applied to estimate individual stomata boundaries, with the feature pyramid network embedded in the Mask R-CNN utilised to identify stomata at different scales. Finally, a statistical filter is implemented at the Mask R-CNN output to reduce the number of false positives generated by the network. The algorithm was tested using 16 datasets from 12 sources, containing over 60,000 stomata. For the first time in this domain, the proposed solution was tested against 7 microscope datasets never seen by the algorithm to show its generalisability. Results indicated that the proposed approach can detect stomata with a precision, recall and F-score of 95.10%, 83.34% and 88.61%, respectively. A separate test comparing estimated stomata boundary values with manually measured data showed that the proposed method has an IoU score of 0.70, a 7% improvement over the bounding-box approach.
Conclusions: The proposed method shows robust performance across multiple microscope image datasets of different quality and scale. This generalised stomata detection algorithm allows plant scientists to conduct stomata analysis whilst eliminating the need to re-label and re-train for each new dataset. The open-source code shared with this project can be directly deployed in Google Colab or any other Tensorflow environment.
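The IoU score reported against the bounding-box approach is the standard intersection-over-union metric; for reference, a minimal box-IoU implementation (coordinate convention is an assumption):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x0, y0, x1, y1)."""
    # Intersection rectangle, clamped to zero when the boxes do not overlap.
    ix0, iy0 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix1, iy1 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

score = iou((0, 0, 10, 10), (5, 0, 15, 10))  # two half-overlapping squares
```

For masks rather than boxes the same ratio is computed over pixel sets, which is why pixel-accurate stomata boundaries can raise the IoU over box-only detections.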


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2052
Author(s):  
Xinghai Yang ◽  
Fengjiao Wang ◽  
Zhiquan Bai ◽  
Feifei Xun ◽  
Yulin Zhang ◽  
...  

In this paper, a deep learning-based traffic state discrimination method is proposed to detect traffic congestion at urban intersections. The detection algorithm consists of two parts: global speed detection and a traffic state discrimination algorithm. Firstly, the road intersection is selected as the region of interest (ROI) in the input image of the You Only Look Once (YOLO) v3 object detection algorithm for vehicle target detection. The Lucas-Kanade (LK) optical flow method is then employed to calculate the vehicle speed: the position information obtained by YOLOv3 is used as the input of the LK optical flow algorithm, which forms an optical flow vector to complete the vehicle speed detection. The corresponding intersection state can then be obtained from the vehicle speed and the discrimination algorithm. Experimental results show that the detection algorithm can measure the vehicle speed and the traffic state discrimination method can judge the traffic state accurately, with strong anti-interference ability, meeting practical application requirements.
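Once the LK optical flow yields a frame-to-frame displacement for a YOLO detection centre, the speed estimate reduces to scaling pixel displacement by frame rate and ground resolution. A sketch of that final step (the calibration values are illustrative assumptions, not from the paper):

```python
import math

def vehicle_speed(p0, p1, fps, metres_per_pixel):
    """Estimate speed in km/h from one tracked point in consecutive frames,
    e.g. a YOLO detection centre displaced by the LK optical flow vector."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    pixels = math.hypot(dx, dy)                     # displacement per frame
    metres_per_second = pixels * metres_per_pixel * fps
    return metres_per_second * 3.6                  # m/s -> km/h

# A detection centre moving 8 px between frames at 25 fps, 0.05 m per pixel.
v = vehicle_speed((100, 40), (108, 40), fps=25, metres_per_pixel=0.05)
```

The metres-per-pixel factor comes from camera calibration or a known reference length in the ROI; averaging over several frames would smooth out optical flow noise before the state discrimination step.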


2020 ◽  
Vol 22 (1) ◽  
pp. 124-153
Author(s):  
Saba Rabab ◽  
Pieter Badenhorst ◽  
Yi-Ping Phoebe Chen ◽  
Hans D. Daetwyler

2013 ◽  
Vol 347-350 ◽  
pp. 3505-3509 ◽  
Author(s):  
Jin Huang ◽  
Wei Dong Jin ◽  
Na Qin

In order to reduce the difficulty of adjusting parameters of the codebook model and the computational complexity of probability distribution of the Gaussian mixture model in intelligent visual surveillance, a moving object detection algorithm based on a three-dimensional Gaussian mixture codebook model using the XYZ color model is proposed. In this algorithm, a codebook model based on the XYZ color model is built, and then a Gaussian model is established for each of the X, Y and Z components in the codewords, so that the three-dimensional Gaussian mixture characteristic of the codebook model is obtained. The experimental results show that the proposed algorithm attains higher real-time capability: its average frame rate is about 16.7 frames per second, compared with about 8.3 for the iGMM (improved Gaussian mixture model) algorithm, about 6.1 for the BM (Bayes model) algorithm, about 12.5 for the GCBM (Gaussian-based codebook model) algorithm, and about 8.5 for the CBM (codebook model) algorithm in the comparative experiments. Furthermore, the proposed algorithm achieves better detection quality.
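The paper's three-dimensional Gaussian mixture codebook is not reproduced here; a single-Gaussian-per-channel running background model illustrates the shared principle of classifying a pixel as foreground when it deviates from a per-channel Gaussian background statistic (all parameters are illustrative assumptions):

```python
import numpy as np

class GaussianBackground:
    """Per-pixel, per-channel running Gaussian background model: a pixel is
    foreground when any channel deviates from its mean by more than k sigma."""

    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full(first_frame.shape, 20.0)  # initial variance guess
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        frame = frame.astype(float)
        d2 = (frame - self.mean) ** 2
        fg = np.any(d2 > (self.k ** 2) * self.var, axis=-1)
        # Update the model only where the pixel still matches the background.
        bg = ~fg[..., None]
        self.mean = np.where(bg, (1 - self.alpha) * self.mean + self.alpha * frame, self.mean)
        self.var = np.where(bg, (1 - self.alpha) * self.var + self.alpha * d2, self.var)
        return fg

# Train on a static scene, then present a frame with a bright object.
frames = [np.full((24, 24, 3), 100, dtype=np.uint8) for _ in range(10)]
model = GaussianBackground(frames[0])
for f in frames:
    model.apply(f)
test_frame = frames[0].copy()
test_frame[5:10, 5:10] = 220          # a moving object appears
mask = model.apply(test_frame)
```

A codebook model replaces the single Gaussian per channel with several codewords per pixel, which is what lets it absorb multi-modal backgrounds such as waving foliage while keeping per-frame cost low.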


2008 ◽  
Vol 51 (3) ◽  
pp. 1089-1097 ◽  
Author(s):  
H. Zhang ◽  
B. Chen ◽  
L. Zhang

2006 ◽  
Vol 44 (3) ◽  
pp. 181-187 ◽  
Author(s):  
Ta-Te LIN ◽  
Chung-Fang CHIEN ◽  
Wen-Chi LIAO ◽  
Kuo-Chi CHUNG ◽  
Jen-Min CHANG
