Feature Matching
Recently Published Documents


TOTAL DOCUMENTS

1471
(FIVE YEARS 598)

H-INDEX

36
(FIVE YEARS 10)

2022 ◽  
Vol 31 (2) ◽  
pp. 1-32
Author(s):  
Luca Ardito ◽  
Andrea Bottino ◽  
Riccardo Coppola ◽  
Fabrizio Lamberti ◽  
Francesco Manigrasso ◽  
...  

In automated Visual GUI Testing (VGT) for Android devices, the available tools often suffer from low robustness to mobile fragmentation, leading to incorrect results when running the same tests on different devices. To mitigate these issues, we evaluate two feature matching-based approaches for widget detection in VGT scripts, which use, respectively, the complete full-screen snapshot of the application (Fullscreen) and the cropped images of its widgets (Cropped) as visual locators to match on emulated devices. Our analysis includes validating the portability of different feature-based visual locators over various apps and devices and evaluating their robustness in terms of cross-device portability and correctly executed interactions. We assessed our results through a comparison with two state-of-the-art tools, EyeAutomate and Sikuli. Despite a modest increase in computational burden, our Fullscreen approach outperformed state-of-the-art tools in terms of correctly identified locators across a wide range of devices and led to a 30% increase in passing tests. Our work shows that the dependability of VGT tools can be improved by bridging the testing and computer vision communities. This connection enables the design of algorithms targeted at domain-specific needs, and thus inherently more usable and robust.
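The core of any feature matching-based locator, whether Fullscreen or Cropped, is pairing descriptors extracted from the locator image with descriptors extracted from the device screenshot. As a hedged illustration (not the paper's actual pipeline, which the abstract does not detail), the following sketch shows nearest-neighbour descriptor matching with Lowe's ratio test in plain NumPy; the array shapes and the `ratio` threshold are assumptions:

```python
import numpy as np

def match_descriptors(query, train, ratio=0.75):
    """Nearest-neighbour descriptor matching with Lowe's ratio test.

    query, train: (N, D) and (M, D) arrays of feature descriptors
    (e.g. extracted from a cropped widget and a full-screen snapshot).
    Returns a list of (query_idx, train_idx) pairs that pass the test.
    """
    matches = []
    for i, d in enumerate(query):
        # Euclidean distance from this descriptor to every train descriptor.
        dists = np.linalg.norm(train - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Keep the match only if the best candidate is clearly better
        # than the runner-up (rejects ambiguous, repetitive patterns).
        if dists[best] < ratio * dists[second]:
            matches.append((i, best))
    return matches
```

Matching a cropped widget's descriptors against the full screenshot's descriptors corresponds to the Cropped locator strategy; the surviving pairs can then be used to estimate the widget's on-screen position.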


Electronics ◽  
2022 ◽  
Vol 11 (2) ◽  
pp. 223
Author(s):  
Zihao Wang ◽  
Sen Yang ◽  
Mengji Shi ◽  
Kaiyu Qin

In this study, a multi-level scale stabilizer for visual odometry (MLSS-VO), combined with a self-supervised feature matching method, is proposed to address the scale uncertainty and scale drift encountered in monocular visual odometry. First, a feature matching model based on a Siamese neural network is built on the architecture of an instance-level recognition model. Combined with the traditional approach to feature point extraction, feature baselines on different levels are extracted and then treated as references for estimating the motion scale of the camera. On this basis, the size of the target in the tracking task is taken as the top-level feature baseline, while the motion matrix parameters obtained by the original feature-point visual odometry are used to solve the real motion scale of the current frame. The multi-level feature baselines are solved to update the motion scale while reducing scale drift. Finally, the spatial target localization algorithm and the MLSS-VO are combined into a framework for target tracking on a mobile platform. According to the experimental results, the root mean square error (RMSE) of localization is less than 3.87 cm, and the RMSE of target tracking is less than 4.97 cm, which demonstrates that the MLSS-VO method is effective in resolving scale uncertainty and restricting scale drift in target tracking scenes, ensuring the spatial positioning and tracking of the target.
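The basic idea of scale recovery from feature baselines can be sketched briefly: each baseline pairs a known physical length (such as the tracked target's size) with the same length measured in the up-to-scale monocular reconstruction, and the ratio gives the metric scale. This is a minimal illustration of that principle, not the MLSS-VO algorithm itself; the use of a median across levels is an assumption for robustness:

```python
import numpy as np

def recover_scale(real_baselines, estimated_baselines):
    """Per-frame metric scale from several feature baselines.

    real_baselines: known physical lengths of reference features (metres).
    estimated_baselines: the same lengths measured in the monocular
    reconstruction's arbitrary units.
    The median ratio gives a scale estimate resistant to outlier baselines.
    """
    ratios = np.asarray(real_baselines, float) / np.asarray(estimated_baselines, float)
    return float(np.median(ratios))

def rescale_translation(t, scale):
    """Apply the recovered scale to an up-to-scale translation vector."""
    return scale * np.asarray(t, dtype=float)
```

Re-estimating the scale every frame from the current baselines, rather than propagating one initial scale, is what limits the accumulation of scale drift.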


2022 ◽  
Vol 2022 ◽  
pp. 1-11
Author(s):  
Yu Wang

In this paper, we use machine learning algorithms to conduct in-depth research and analysis on the construction of human-computer interaction systems, and we propose a simple and effective method for extracting salient features based on contextual information. The method retains the dynamic and static information of gestures intact, which results in a richer and more robust feature representation. Second, this paper proposes a dynamic programming algorithm based on feature matching, which uses the consistency and accuracy of feature matching to measure the similarity of two frames and then applies dynamic programming to find the optimal matching distance between two gesture sequences. The algorithm ensures the continuity and accuracy of the gesture description and makes full use of the spatiotemporal location information of the features. The features and limitations of common motion-target detection methods in gesture detection, and of common machine learning tracking methods in gesture tracking, are first analyzed. The kernel correlation filter method is then improved by designing a confidence model and introducing a scale filter, and comparison experiments are conducted on a self-built gesture dataset to verify the effectiveness of the improved method. During training and validation of the model on the corpus, the complementary feature extraction methods are studied through ablation, and the results are compared with three baseline methods. Gaussian mixture models (GMMs), however, are not suitable when the temporal structure needs to be modeled. The support vector machine, widely used in classification tasks, can transform the original input set into a high-dimensional feature space by means of a kernel function.
After the experiments, the speech emotion recognition method proposed in this paper outperforms the baseline methods, proving the effectiveness of complementary feature extraction and the superiority of the deep learning model. Speech is used as the input of the system, emotion recognition is performed on the input speech, and the recognized emotion is successfully applied to a human-computer dialogue system in combination with an online speech recognition method. This demonstrates the application value of speech emotion recognition in human-computer dialogue systems.
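The "optimal matching distance between two gesture sequences" found by dynamic programming is, in essence, dynamic time warping: an alignment that may skip or repeat frames so that similar poses are compared even when the two gestures are performed at different speeds. A minimal sketch, assuming per-frame feature vectors and a Euclidean frame distance (both assumptions, since the abstract does not specify them):

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Optimal matching distance between two feature sequences via
    dynamic programming (dynamic time warping).

    seq_a, seq_b: (N, D) and (M, D) arrays of per-frame feature vectors.
    """
    n, m = len(seq_a), len(seq_b)
    # cost[i, j]: minimal cumulative distance aligning seq_a[:i] with seq_b[:j].
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # stretch seq_b
                                 cost[i, j - 1],      # stretch seq_a
                                 cost[i - 1, j - 1])  # match the frames
    return float(cost[n, m])
```

A small warping distance between a captured sequence and a template then indicates that the two gestures belong to the same class, regardless of execution speed.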


Astrodynamics ◽  
2022 ◽  
Vol 6 (1) ◽  
pp. 69-79
Author(s):  
Anran Wang ◽  
Li Wang ◽  
Yinuo Zhang ◽  
Baocheng Hua ◽  
Tao Li ◽  
...  

Tianwen-1 (TW-1) is the first Chinese interplanetary mission to have accomplished orbiting, landing, and patrolling in a single exploration of Mars. After the safe landing, it is essential to reconstruct the descent trajectory and determine the landing site of the lander. For this purpose, we processed descent images from the TW-1 optical obstacle-avoidance sensor (OOAS) and the digital orthophoto map (DOM) of the landing area using our proposed hybrid-matching method, in which the landing process is divided into two parts. In the first, crater matching is used to obtain the geometric transformations between the OOAS images and the DOM to calculate the position of the lander. In the second, feature matching is applied to compute the position of the lander. We calculated the landing site of TW-1 to be 109.9259° E, 25.0659° N with a positional accuracy of 1.56 m and reconstructed the landing trajectory with a horizontal root mean squared error of 1.79 m. These results will facilitate analysis of the obstacle-avoidance system and help optimize the control strategy in follow-up planetary-exploration missions.
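The geometric transformation between a descent image and the base map can be recovered from matched landmark pairs, such as crater centres. As a hedged sketch (the abstract does not state which transform model the authors use), the following estimates a least-squares 2D similarity transform (scale, rotation, translation) from point correspondences via the SVD-based Umeyama/Kabsch method:

```python
import numpy as np

def fit_similarity_2d(src, dst):
    """Least-squares 2D similarity transform mapping src points onto dst,
    e.g. crater centres in a descent image onto their counterparts in a
    base map.

    src, dst: (N, 2) arrays of matched point coordinates, N >= 2.
    Returns (s, R, t) such that dst ≈ s * src @ R.T + t.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    # Cross-covariance of the centred point sets; its SVD yields the
    # optimal rotation (Kabsch/Umeyama).
    H = src_c.T @ dst_c
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    D = np.diag([1.0, d])
    R = Vt.T @ D @ U.T
    s = np.trace(np.diag(S) @ D) / (src_c ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Applying the recovered transform to the camera's principal point then gives the lander's position in map coordinates at the moment the image was taken.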


2022 ◽  
Author(s):  
Chris Gnam ◽  
Timothy B. Chase ◽  
Karthik Dantu ◽  
John L. Crassidis

2022 ◽  
Vol 107 ◽  
pp. 104539
Author(s):  
María Flores ◽  
David Valiente ◽  
Arturo Gil ◽  
Oscar Reinoso ◽  
Luis Payá

Author(s):  
Can Zhang ◽  
Xiuxi Ma ◽  
Mengbi Wang ◽  
Jun Huang
This paper provides a new approach to human identification based on the Neighborhood Rough Set (NRS) algorithm, with a biometric application to ear recognition. The traditional rough set model can only evaluate categorical features. The neighborhood model evaluates both numerical and categorical features by assigning different thresholds to different classes of features. Feature vectors are obtained from ear images, and the ear matching process is performed; matching is, in effect, the process of ear identification, in which the extracted features are matched against the classes of ear images enrolled in the database. An NRS algorithm is developed in this work for feature matching. A set of 20 persons, each with six images, is used for the experimental analysis. The experimental results illustrate the high accuracy of the NRS approach compared to other existing techniques.
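The neighborhood idea can be illustrated compactly: samples within a distance threshold δ of a query form its neighborhood, and that neighborhood determines the decision, which is how the model handles numerical features the classical rough set cannot. The sketch below is a toy neighborhood classifier in this spirit, not the authors' full NRS algorithm; the threshold value and the majority-vote decision rule are assumptions:

```python
import numpy as np

def neighborhood_classify(features, labels, query, delta=0.3):
    """Toy neighbourhood classifier in the spirit of neighbourhood rough
    sets: enrolled samples within distance `delta` of the query form its
    neighbourhood, and the majority label of that neighbourhood decides.

    features: (N, D) array of enrolled feature vectors (normalised).
    labels:   length-N sequence of class identities.
    Falls back to the single nearest neighbour if the neighbourhood is empty.
    """
    dists = np.linalg.norm(np.asarray(features, float) - np.asarray(query, float), axis=1)
    inside = np.flatnonzero(dists <= delta)
    if inside.size == 0:
        inside = np.array([np.argmin(dists)])
    votes = {}
    for i in inside:
        votes[labels[i]] = votes.get(labels[i], 0) + 1
    return max(votes, key=votes.get)
```

Choosing δ per feature class, as the abstract describes, is what lets the neighborhood model treat numerical and categorical attributes uniformly.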

