Gun model recognition using geometric features of contour image

Author(s): Zhisheng Zhou, Jun Han, Jiaxin Chen, Yuming Dong

2020, pp. 1-12
Author(s): Hu Jingchao, Haiying Zhang

The difficulty in classroom student state recognition lies in making feature judgments based on students' facial expressions and movement states. At present, some intelligent models are not accurate in classroom student state recognition. To improve recognition performance, this study builds a two-level state detection framework based on deep learning and an HMM feature recognition algorithm, and extends it into a multi-level detection model through a reasonable state classification method. In addition, the study selects continuous HMMs and deep learning to reflect the dynamic generation characteristics of fatigue, and designs randomized human fatigue recognition experiments to collect and preprocess EEG data, facial video data, and subjective evaluation data of classroom students. The study then discretizes the feature indicators and builds a student state recognition model. Finally, the performance of the proposed algorithm is analyzed through experiments. The results show that the proposed algorithm has certain advantages over traditional algorithms in recognizing classroom student state features.
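The abstract mentions discretizing feature indicators and decoding them with an HMM. A minimal sketch of that decoding step is Viterbi inference over a discrete HMM; the 2-state model below (alert vs. fatigued, with a hypothetical eye-openness observation) is an illustrative assumption, not the paper's actual parameters.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most-likely hidden state sequence for a discrete HMM.

    obs : sequence of observation indices (discretized feature indicators)
    pi  : (S,) initial state probabilities
    A   : (S, S) transition matrix, A[i, j] = P(next state j | state i)
    B   : (S, O) emission matrix, B[i, k] = P(observation k | state i)
    """
    S, T = len(pi), len(obs)
    delta = np.zeros((T, S))            # best log-probability ending in each state
    psi = np.zeros((T, S), dtype=int)   # back-pointers
    logA, logB = np.log(A), np.log(B)
    delta[0] = np.log(pi) + logB[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + logA       # (from-state, to-state)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + logB[:, obs[t]]
    # Backtrack from the best final state.
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Hypothetical 2-state model: state 0 = alert, 1 = fatigued;
# observation 0 = eyes open, 1 = eyes closing.
pi = np.array([0.8, 0.2])
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.9, 0.1], [0.3, 0.7]])
states = viterbi([0, 0, 1, 1, 1], pi, A, B)   # -> [0, 0, 1, 1, 1]
```

A continuous HMM, as the study uses, would replace the discrete emission matrix `B` with per-state densities over the raw features, but the decoding recursion is the same.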


2014, Vol 134 (2), pp. 233-241
Author(s): Yukiko Shinozuka, Takuya Minagawa, Hideo Saito

2018, Vol 11 (6), pp. 304
Author(s): Javier Pinzon-Arenas, Robinson Jimenez-Moreno, Ruben Hernandez-Beleno

2020, Vol 5 (2), pp. 504
Author(s): Matthias Omotayo Oladele, Temilola Morufat Adepoju, Olaide Abiodun Olatoke, Oluwaseun Adewale Ojo

Yorùbá is one of the three main languages spoken in Nigeria. It is a tonal language that carries accents on its vowels. The Yorùbá alphabet has twenty-five (25) letters, one of which is a digraph (GB). Because handwritten Yorùbá documents are difficult to type, there is a need for a handwriting recognition system that can convert handwritten text to digital format. This study discusses an offline Yorùbá handwritten word recognition system (OYHWR) that recognizes Yorùbá uppercase letters. Handwritten characters and words were obtained from different writers using the Paint application and M708 graphics tablets; the characters were used for training and the words for testing. The images were pre-processed, and their geometric features were extracted using zoning and gradient-based feature extraction. Geometric features are the different line types that form a particular character, such as vertical, horizontal, and diagonal lines. The geometric features used are the number of horizontal lines, the number of vertical lines, the number of right-diagonal lines, the number of left-diagonal lines, the total length of all horizontal lines, the total length of all vertical lines, the total length of all right-slanting lines, the total length of all left-slanting lines, and the area of the skeleton. Each character was divided into 9 zones, and gradient feature extraction was used to extract the horizontal and vertical components and the geometric features in each zone. The words were fed into a support vector machine classifier, and performance was evaluated by recognition accuracy. Since the support vector machine is a two-class classifier, a multiclass SVM variant, the least squares support vector machine (LSSVM), was used for word recognition with a one-vs-one strategy and an RBF kernel. The recognition accuracies obtained on the tested words were 66.7%, 83.3%, 85.7%, 87.5%, and 100%. The low recognition rate for some of the words may result from similarity in the extracted features.
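The zoning step described above can be sketched as follows: split a binary character image into a 3x3 grid and compute per-zone descriptors. The specific features here (foreground area plus horizontal/vertical gradient energy) are illustrative stand-ins, since the paper's exact line-counting rules are not reproduced in the abstract.

```python
import numpy as np

def zone_features(img, grid=3):
    """Zoning sketch: split a binary character image into grid x grid zones
    and compute, per zone, the foreground area plus horizontal and vertical
    gradient energy as stand-ins for the directional line features.
    """
    img = np.asarray(img, dtype=float)
    gy, gx = np.gradient(img)            # per-axis derivatives: rows, then columns
    h, w = img.shape
    feats = []
    for r in range(grid):
        for c in range(grid):
            rs = slice(r * h // grid, (r + 1) * h // grid)
            cs = slice(c * w // grid, (c + 1) * w // grid)
            feats.extend([
                img[rs, cs].sum(),          # foreground area in the zone
                np.abs(gx[rs, cs]).sum(),   # horizontal gradient energy
                np.abs(gy[rs, cs]).sum(),   # vertical gradient energy
            ])
    return np.array(feats)

# Toy 9x9 "character": a vertical stroke down the middle column.
img = np.zeros((9, 9))
img[:, 4] = 1
vec = zone_features(img)        # 9 zones x 3 features = 27 values
```

Concatenating such per-zone vectors gives the fixed-length feature vector that a classifier like LSSVM expects, regardless of character shape.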


2021, Vol 18 (2), pp. 172988142199958
Author(s): Shundao Xie, Hong-Zhou Tan

In recent years, two-dimensional (2D) barcodes have been applied ever more widely and are used as landmarks from which robots detect and read information. However, it is hard to obtain a sharp 2D barcode image from a moving robot, and the common solution is to deblur the blurry image before decoding. Image deblurring is an ill-posed problem: ringing artifacts commonly appear in the deblurred image, which increases decoding time while improving decoding accuracy only marginally. In this article, a novel approach is proposed that uses blur-invariant shape and geometric features to make a blur-readable (BR) 2D barcode, which can be decoded directly even when seriously blurred. The finder patterns of the BR code consist of two concentric rings and five disjoint disks, whose centroids form two triangles. The outer edges of the concentric rings act as blur-invariant shapes, so the BR code can be located quickly even in a blurred image. The inner angles of the triangles are blur-invariant geometric features, which can be used to store the BR code's format information. Under severe defocus blur, the BR code not only reduces decoding time by skipping the deblurring step but also improves decoding accuracy. With the defocus blur modeled by a circular-disk point-spread function, simulation results verify the performance of the blur-invariant shapes and of the BR code under blurred-image conditions.
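The key property behind the BR code's centroid-based features can be demonstrated numerically: convolving an image with a centered, symmetric PSF does not move its intensity centroid. The disk shape and 5x5 PSF below are illustrative assumptions, not the paper's actual finder-pattern geometry.

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid (row, col) of a nonnegative image."""
    img = np.asarray(img, dtype=float)
    rows, cols = np.indices(img.shape)
    return (rows * img).sum() / img.sum(), (cols * img).sum() / img.sum()

def conv2_full(img, ker):
    """Plain full 2D convolution (small inputs only; illustrative)."""
    H, W = img.shape
    h, w = ker.shape
    out = np.zeros((H + h - 1, W + w - 1))
    for i in range(h):
        for j in range(w):
            out[i:i + H, j:j + W] += ker[i, j] * img
    return out

# A small bright disk on a dark background.
yy, xx = np.indices((21, 21))
disk = ((yy - 8) ** 2 + (xx - 12) ** 2 <= 9).astype(float)

# Symmetric 5x5 "defocus" kernel approximating a circular-disk PSF.
ky, kx = np.indices((5, 5))
psf = ((ky - 2) ** 2 + (kx - 2) ** 2 <= 4).astype(float)
psf /= psf.sum()

blurred = conv2_full(disk, psf)
r0, c0 = centroid(disk)
r1, c1 = centroid(blurred)
# Full convolution pads by the kernel radius (2 here), so the centroid moves
# by exactly that fixed offset and is otherwise unchanged by the blur.
```

This follows from the moment property of convolution: the centroid of `f * g` is the sum of the centroids of `f` and `g`, and a symmetric PSF contributes only its own center. That is why centroid-derived quantities, such as the inner angles of the triangles formed by the finder-pattern centroids, survive defocus blur.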


Agriculture, 2020, Vol 11 (1), pp. 6
Author(s): Ewa Ropelewska

The aim of this study was to evaluate the usefulness of the texture and geometric parameters of the endocarp (pit) for distinguishing different cultivars of sweet cherries using image analysis. Textures from images converted to color channels, together with the geometric parameters of the endocarps (pits) of the sweet cherry cultivars ‘Kordia’, ‘Lapins’, and ‘Büttner’s Red’, were calculated. For the set combining selected textures from all color channels, the accuracy reached 100% when comparing ‘Kordia’ vs. ‘Lapins’ and ‘Kordia’ vs. ‘Büttner’s Red’ for all classifiers. The pits of ‘Kordia’ and ‘Lapins’, as well as ‘Kordia’ and ‘Büttner’s Red’, were also 100% correctly discriminated by models built separately for the RGB, Lab, and XYZ color spaces, for the G, L, and Y color channels, and for models combining selected textural and geometric features. For discriminating ‘Lapins’ and ‘Büttner’s Red’ pits, slightly lower accuracies were determined: up to 93% for models built on textures selected from all color channels, 91% for the RGB color space, 92% for the Lab and XYZ color spaces, 84% for the G and L color channels, 83% for the Y channel, 94% for geometric features, and 96% for combined textural and geometric features.
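Extracting texture descriptors per color channel, as this study does, can be sketched with simple first-order statistics. The mean/variance/entropy set below is a deliberately reduced illustration; the study computes a much larger texture set across several color spaces (RGB, Lab, XYZ).

```python
import numpy as np

def channel_textures(rgb):
    """Per-channel first-order texture statistics (mean, variance, entropy).

    Illustrative only: a real pipeline would add second-order textures
    (e.g. co-occurrence features) and additional color-space conversions.
    """
    feats = {}
    for name, ch in zip("RGB", np.moveaxis(np.asarray(rgb, float), -1, 0)):
        counts, _ = np.histogram(ch, bins=32, range=(0, 256))
        p = counts / counts.sum()
        p = p[p > 0]                     # drop empty bins before taking logs
        feats[name] = {
            "mean": ch.mean(),
            "variance": ch.var(),
            "entropy": float(-(p * np.log2(p)).sum()),
        }
    return feats

# Toy 4x4 image: uniform gray, so variance and entropy are both zero.
img = np.full((4, 4, 3), 128, dtype=np.uint8)
stats = channel_textures(img)
```

Stacking such per-channel vectors (and their Lab/XYZ counterparts) yields the combined feature sets whose discrimination accuracies are compared in the abstract.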

