Detection and Recognition
Recently Published Documents


TOTAL DOCUMENTS: 2115 (FIVE YEARS: 826)
H-INDEX: 50 (FIVE YEARS: 11)

Training the special abilities required for skiing should start with control of body posture, since posture control highlights the characteristics of the sport and gives athletes the motor ability they need during high-speed skiing. This paper establishes a system that automatically recognizes skiing postures and thereby helps athletes master them. First, skiing images are collected by distributed cameras. Second, skeleton features are extracted to learn a classification model, which is used to recognize and adjust skiing postures. Lastly, the analytical results of posture recognition are returned to athletes through the Internet of Bodies. The framework can effectively recognize skiing postures and provide athletes with training advice.
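The abstract names the pipeline but publishes no implementation; below is a minimal sketch of the skeleton-feature classification step, assuming joint keypoints are already produced by an off-the-shelf pose estimator and using an SVM as a stand-in for the unspecified classification model.

```python
# Sketch only: the paper does not publish code. Keypoints are assumed to come
# from an external pose estimator; the SVM is a placeholder classifier.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def skeleton_features(keypoints: np.ndarray) -> np.ndarray:
    """Turn (J, 2) joint coordinates into a translation/scale-invariant vector."""
    center = keypoints.mean(axis=0)             # remove body position
    scale = np.linalg.norm(keypoints - center)  # remove body size
    return ((keypoints - center) / scale).ravel()

def train_posture_classifier(keypoint_sets, labels):
    """keypoint_sets: iterable of (J, 2) arrays; labels: posture classes."""
    feats = np.stack([skeleton_features(k) for k in keypoint_sets])
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    model.fit(feats, labels)
    return model
```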


Electronics ◽  
2022 ◽  
Vol 11 (2) ◽  
pp. 278
Author(s):  
Cătălina Lucia Cocianu ◽  
Cristian Răzvan Uscatu

Many technological applications of our time rely on images captured by multiple cameras. Such applications include the detection and recognition of objects in captured images, the tracking of objects and analysis of their motion, and the detection of changes in appearance. The alignment of images captured at different times and/or from different angles is a key processing step in these applications. One of the most challenging tasks is to develop fast algorithms that accurately align images perturbed by various types of transformations. The paper reports a new method for registering images under geometric perturbations that include rotations, translations, and non-uniform scaling. The input images can be monochrome or color, and they are preprocessed by a noise-insensitive edge detector to obtain binarized versions. Isotropic scaling transformations are used to compute multi-scale representations of the binarized inputs. The algorithm is of memetic type and exploits the fact that computation carried out on reduced representations usually produces promising initial solutions very fast. The proposed method combines bio-inspired and evolutionary computation techniques with clustered search and implements a procedure specially tailored to address the premature-convergence issue across the scaled representations. A long series of tests on perturbed images was performed, evidencing the efficiency of our memetic multi-scale approach. In addition, a comparative analysis showed that the proposed algorithm outperforms some well-known registration procedures in terms of both accuracy and runtime.
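As a rough illustration of the preprocessing stage only (not the authors' code), the sketch below binarizes an input with an edge detector, taking Canny as an assumed noise-insensitive detector since the abstract does not name one, and builds the multi-scale representations by repeated isotropic downscaling.

```python
# Hedged sketch of the described preprocessing: edge detection, binarization,
# and isotropic multi-scale representations. Canny is an assumption.
import cv2

def multiscale_binarized(image_path: str, levels: int = 3, factor: float = 0.5):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 50, 150)  # binarized edge map (0 or 255)
    pyramid = [edges]
    for _ in range(levels - 1):
        h, w = pyramid[-1].shape
        small = cv2.resize(pyramid[-1], (int(w * factor), int(h * factor)),
                           interpolation=cv2.INTER_AREA)
        pyramid.append(((small > 0) * 255).astype(edges.dtype))  # re-binarize
    return pyramid  # coarse levels seed the search with cheap initial solutions
```

The coarsest level is where a multi-scale search would start; its best transformations are rescaled and refined on the finer levels.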


Symmetry ◽  
2022 ◽  
Vol 14 (1) ◽  
pp. 148
Author(s):  
Nikita Andriyanov ◽  
Ilshat Khasanshin ◽  
Daniil Utkin ◽  
Timur Gataullin ◽  
Stefan Ignar ◽  
...  

Despite the great capabilities of modern neural network architectures for object detection and recognition, the output of such models is the local (pixel) coordinates of objects' bounding boxes in the image together with their predicted classes. However, several practical tasks require more complete information about the object. In particular, for robotic apple picking, it is necessary to know exactly where and how far to move the grabber. To determine the real position of an apple relative to the source of image registration, it is proposed to use an Intel RealSense depth camera and aggregate information from its depth and brightness channels. Apple detection is carried out with the YOLOv3 architecture; then, based on the distance to the object and its localization in the image, the relative distances are calculated for all coordinates. To determine the coordinates of apples, a transition to a symmetric coordinate system is made by means of simple linear transformations. Estimating the position in a symmetric coordinate system yields not only the magnitude of the shift but also the location of the object relative to the camera. The proposed approach provides position estimates with high accuracy: the approximate root mean square error is 7–12 mm, depending on the range and axis, while precision is 100% and recall is 90%.
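The depth-to-position step described above reduces to pinhole deprojection plus a shift of origin to the image center; here is a hedged sketch with placeholder intrinsics (in practice fx, fy, cx, cy come from the camera's calibration, e.g. via the RealSense SDK).

```python
# Sketch only: the intrinsic values below are illustrative placeholders.
def deproject(u: float, v: float, depth_m: float,
              fx: float, fy: float, cx: float, cy: float):
    """Map pixel (u, v) with depth in meters to camera-frame (X, Y, Z)."""
    x = (u - cx) / fx * depth_m  # horizontal offset from the optical axis
    y = (v - cy) / fy * depth_m  # vertical offset (sign convention varies)
    return x, y, depth_m

# Shifting the origin to (cx, cy) is exactly the simple linear transformation
# to a symmetric coordinate system: the signs of x and y tell the grabber
# which way to move, and their magnitudes how far.
box_center = (412.0, 256.0)  # hypothetical YOLOv3 detection center
print(deproject(*box_center, 0.85, 615.0, 615.0, 320.0, 240.0))
```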


2022 ◽  
Author(s):  
HanCong Feng

The analysis of intercepted multi-function radar (MFR) signals has gained considerable attention in the field of cognitive electronic reconnaissance. With the rapid development of MFR, the switching between different work modes is becoming more flexible, increasing the agility of pulse parameters. Most existing approaches for recognizing MFR behaviors depend heavily on prior information, which can hardly be obtained in a non-cooperative way. This study develops a novel hierarchical contrastive self-supervised method for segmenting and clustering MFR pulse sequences. First, a convolutional neural network (CNN) with a limited receptive field is trained contrastively to distinguish pulse descriptor words (PDWs) in their original order from samples created by random permutation, detecting the boundary between radar words and performing segmentation. Afterward, the K-means++ algorithm with cosine distance is applied to cluster the segmented PDWs according to the output vectors of the CNN's last layer, extracting radar words. This segmenting-and-clustering process is then repeated on the extracted radar word sequence, the radar phrase sequence, and so on, completing the automatic extraction of MFR behavior states in the MFR hierarchical model. Simulation results show that, without using any labeled data, the proposed method can effectively mine distinguishable patterns in the sequentially arriving PDWs and recognize MFR behavior states under corrupted, overlapped pulse parameters.
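As a sketch of the clustering stage alone (the contrastive CNN is omitted), K-means++ with cosine distance can be approximated in scikit-learn by L2-normalizing the CNN output vectors first, since on the unit sphere Euclidean distance is a monotone function of cosine distance.

```python
# Sketch: cosine-distance K-means++ via L2 normalization. The embeddings are
# assumed to be the CNN last-layer vectors described in the abstract.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

def cluster_radar_words(embeddings: np.ndarray, n_words: int) -> np.ndarray:
    """embeddings: (n_segments, d) array; returns a radar-word label per segment."""
    unit = normalize(embeddings)  # ||x|| = 1, so Euclidean k-means ~ cosine
    km = KMeans(n_clusters=n_words, init="k-means++", n_init=10, random_state=0)
    return km.fit_predict(unit)
```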


Aerospace ◽  
2022 ◽  
Vol 9 (1) ◽  
pp. 31
Author(s):  
Farhad Samadzadegan ◽  
Farzaneh Dadrass Javan ◽  
Farnaz Ashtari Mahini ◽  
Mehrnaz Gholamshahi

Drones are becoming increasingly popular not only for recreational purposes but also in a variety of applications in engineering, disaster management, logistics, airport security, and other domains. Alongside these useful applications, the potential use of drones in malicious activities has raised alarming concerns about physical infrastructure security, safety, and surveillance at airports. In recent years, there have been many reports of the unauthorized use of various types of drones at airports and the disruption of airline operations. To address this problem, this study proposes a novel deep learning-based method for the efficient detection and recognition of two types of drones as well as birds. Evaluation of the proposed approach on the prepared image dataset demonstrates better efficiency than existing detection systems in the literature. Furthermore, drones are often confused with birds because of their physical and behavioral similarity; the proposed method not only detects the presence or absence of drones in an area but also recognizes and distinguishes between the two drone types, and distinguishes them from birds. The dataset used to train the network consists of 10,000 visible-spectrum images containing two types of drones (multirotors and helicopters) as well as birds. The proposed deep learning method can directly detect and recognize the two drone types and distinguish them from birds with an accuracy of 83%, mAP of 84%, and IoU of 81%. The average recall, average accuracy, and average F1-score across the three classes were 84%, 83%, and 83%, respectively.
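For reference, the IoU figure reported above measures bounding-box overlap; a minimal implementation for axis-aligned boxes in (x1, y1, x2, y2) format is shown below.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((10, 10, 50, 50), (30, 30, 70, 70)))  # ~0.143
```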


Author(s):  
Zengfang Shi ◽  
Meizhou Liu

Existing target detection and recognition technology suffers from blurred features of moving vehicles, which leads to poor detection performance. A moving-car detection and recognition technology based on artificial intelligence is therefore designed. A point operation is adopted to enhance the high-frequency information of the image, increase the image contrast, and delineate the tracking target in the video image. Motion vector similarity is used to predict the moving target's area in the next frame. The texture features of the moving car are extracted with artificial intelligence, the central moment is calculated from the gray-histogram distribution curve, and an edge feature extraction algorithm is used to set the detection and recognition mode. Experimental results show that, under complex conditions, the proposed technology detected three more moving vehicles than the two comparison technologies, suggesting that moving-vehicle detection and recognition integrated with artificial intelligence has broad application prospects.
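The abstract gives no formulas, so the sketch below uses stand-ins for two of the named steps: linear contrast stretching as the point operation, and the standard k-th central moment computed from the gray-level histogram.

```python
# Hedged sketch: the paper's exact point operation and moment definition are
# not given in the abstract; these are common textbook choices.
import numpy as np

def contrast_stretch(img: np.ndarray) -> np.ndarray:
    """Point operation: stretch the gray-level range to the full [0, 255]."""
    lo, hi = int(img.min()), int(img.max())
    return ((img.astype(np.float64) - lo) * (255.0 / max(hi - lo, 1))).astype(np.uint8)

def histogram_central_moment(img: np.ndarray, k: int = 2) -> float:
    """k-th central moment of the gray-level distribution of a uint8 image."""
    p = np.bincount(img.ravel(), minlength=256).astype(float)
    p /= p.sum()  # normalized histogram
    levels = np.arange(256)
    mean = (levels * p).sum()
    return (((levels - mean) ** k) * p).sum()
```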


Agriculture ◽  
2022 ◽  
Vol 12 (1) ◽  
pp. 73
Author(s):  
Kaidong Lei ◽  
Chao Zong ◽  
Ting Yang ◽  
Shanshan Peng ◽  
Pengfei Zhu ◽  
...  

In large-scale sow production, real-time detection and recognition of sows is a key step toward applying precision livestock farming techniques. In the pig house, the overlap of railings, floors, and sows usually challenges the accuracy of sow target detection. In this paper, a non-contact machine vision method was used to perceive sow targets in complex scenarios, detecting the number and positions of sows in the pen. Two multi-target sow detection and recognition models based on the deep learning algorithms Mask-RCNN and UNet-Attention were developed, and the model parameters were tuned. A field experiment was carried out, and the dataset obtained from the experiment was used for algorithm training and validation. The Mask-RCNN model showed a higher recognition rate than the UNet-Attention model, with a final recognition rate of 96.8% and complete object detection outlines. In the process of image segmentation, the area distribution of sows in the pens was analyzed, along with the position of each sow's head in the pen and the pixel area of the sow segmentation. The feeding, drinking, and lying behaviors of the sows were identified on the basis of image recognition. The results showed that the average daily lying, standing, feeding, and drinking times of sows were 12.67 h (MSE 1.08), 11.33 h (MSE 1.08), 3.25 h (MSE 0.27), and 0.391 h (MSE 0.10), respectively. The proposed method solves the problem of sow target perception in complex scenes and is a powerful tool for the recognition of sows.
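A minimal inference sketch with torchvision's reference Mask-RCNN follows; the pretrained weights are for COCO, so reproducing the paper's 96.8% recognition rate would require fine-tuning on the authors' sow dataset, which is not reproduced here.

```python
# Sketch: off-the-shelf Mask-RCNN inference. Fine-tuning on sow images is
# assumed but not shown; COCO weights alone will not recognize sows reliably.
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect(image: torch.Tensor, score_thr: float = 0.7):
    """image: (3, H, W) float tensor scaled to [0, 1]."""
    with torch.no_grad():
        out = model([image])[0]
    keep = out["scores"] > score_thr
    return out["masks"][keep], out["boxes"][keep]  # per-instance masks, boxes
```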


Author(s):  
Shilpa Sharma ◽  
Linesh Raja ◽  
Vaibhav Bhatnagar ◽  
Divya Sharma ◽  
Swami Nisha Bhagirath ◽  
...  
