Online approach to measuring relative location of spatial geometric features of long rotating parts

Measurement ◽  
2021 ◽  
pp. 110317
Author(s):  
Yunlong Liu ◽  
Honggen Zhou ◽  
Donghao Zhao ◽  
Xiaoyan Guan ◽  
Guochao Li ◽  
...  
2018 ◽  
Vol 11 (6) ◽  
pp. 304
Author(s):  
Javier Pinzon-Arenas ◽  
Robinson Jimenez-Moreno ◽  
Ruben Hernandez-Beleno

Author(s):  
Seok Lee ◽  
Juyong Park ◽  
Dongkyung Nam

In this article, the authors present an image processing method to reduce three-dimensional (3D) crosstalk in eye-tracking-based 3D displays. Specifically, they consider 3D pixel crosstalk and offset crosstalk and apply a different approach to each based on its characteristics. For 3D pixel crosstalk, which depends on the viewer's relative location, they propose an output-pixel-value weighting scheme based on the viewer's eye position; for offset crosstalk, they subtract the luminance of the crosstalk components according to the display crosstalk level measured in advance. Through simulations and experiments on 3D display prototypes, the authors evaluate the effectiveness of the proposed method.
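The offset-crosstalk step described in the abstract can be sketched as a simple pre-subtraction. This is a minimal illustration assuming a linear crosstalk model (observed view = intended view + c × other view, with the level c measured from the display in advance); the function name and toy values are hypothetical, not from the article.

```python
def compensate_offset_crosstalk(left, right, c):
    """Pre-subtract the measured crosstalk component from each view.

    left, right: pixel luminances of the two views (normalized 0.0-1.0)
    c: measured display crosstalk level (fraction of the other view leaking in)
    Values are clamped at 0 since luminance cannot go negative.
    """
    comp_left = [max(0.0, l - c * r) for l, r in zip(left, right)]
    comp_right = [max(0.0, r - c * l) for l, r in zip(left, right)]
    return comp_left, comp_right

# Toy example: three pixels per view, 5% measured crosstalk.
left = [0.8, 0.2, 0.5]
right = [0.1, 0.9, 0.5]
comp_left, comp_right = compensate_offset_crosstalk(left, right, 0.05)
```

After display crosstalk adds back c × (other view), the viewer sees approximately the intended luminance, at the cost of a slightly reduced dynamic range near black.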


2020 ◽  
Vol 5 (2) ◽  
pp. 504
Author(s):  
Matthias Omotayo Oladele ◽  
Temilola Morufat Adepoju ◽  
Olaide Abiodun Olatoke ◽  
Oluwaseun Adewale Ojo

Yorùbá is one of the three main languages spoken in Nigeria. It is a tonal language that carries accents on its vowels. The Yorùbá alphabet has twenty-five (25) letters, one of which is a digraph (GB). Because handwritten Yorùbá documents are difficult to type, there is a need for a handwriting recognition system that can convert handwritten text to digital format. This study discusses an offline Yorùbá handwritten word recognition system (OYHWR) that recognizes Yorùbá uppercase letters. Handwritten characters and words were obtained from different writers using the Paint application and M708 graphics tablets. The characters were used for training and the words for testing. The images were pre-processed, and their geometric features were extracted using zoning and gradient-based feature extraction. Geometric features are the different line types that form a particular character, such as vertical, horizontal, and diagonal lines. The geometric features used were the number of horizontal lines, number of vertical lines, number of right-diagonal lines, number of left-diagonal lines, total length of all horizontal lines, total length of all vertical lines, total length of all right-slanting lines, total length of all left-slanting lines, and the area of the skeleton. Each character was divided into 9 zones, and gradient feature extraction was used to extract the horizontal and vertical components and the geometric features in each zone. The words were fed into a support vector machine (SVM) classifier, and performance was evaluated by recognition accuracy. Since an SVM is a two-class classifier, a multiclass variant, the least-squares support vector machine (LSSVM), was used for word recognition with the one-vs-one strategy and an RBF kernel. The recognition accuracies obtained for the tested words were 66.7%, 83.3%, 85.7%, 87.5%, and 100%. The low recognition rate for some of the words may result from similarity in the extracted features.
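The zoning step described above can be sketched in a few lines. This is an illustrative example only, assuming a binary character image stored as a list of rows: the image is split into a 3×3 grid (9 zones) and a simple per-zone feature (foreground-pixel density) is computed. The actual study extracts line-type counts, line lengths, and gradient components per zone; the function name here is hypothetical.

```python
def zone_densities(img, grid=3):
    """Split a binary image into grid x grid zones and return the
    foreground-pixel density of each zone, scanned row-major.

    img: list of rows, each a list of 0/1 pixel values.
    Assumes height and width are divisible by `grid` for simplicity.
    """
    h, w = len(img), len(img[0])
    zh, zw = h // grid, w // grid  # zone height and width
    feats = []
    for zi in range(grid):
        for zj in range(grid):
            zone_rows = img[zi * zh:(zi + 1) * zh]
            on = sum(sum(row[zj * zw:(zj + 1) * zw]) for row in zone_rows)
            feats.append(on / (zh * zw))
    return feats

# Toy 6x6 "character": left half foreground, right half background.
img = [[1, 1, 1, 0, 0, 0] for _ in range(6)]
feats = zone_densities(img)
```

Each zone then contributes its own slice of the feature vector, so the classifier sees where in the character the strokes lie, not just how many there are.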


Author(s):  
Ruth Garrett Millikan

There are non-uniceptual same-tracking mechanisms, mechanisms that same-track not in order to implement storage of information about their targets, but merely as an aid to the identification of further things. Examples are the various mechanisms of perceptual constancy, self-relative location trackers, object-constancy mechanisms, and same-trackers for real categories. There are also several kinds of unicepts, and hence of unitrackers: procedural, substantive, and attributive. What begins as a non-uniceptual same-tracker might or might not be redeployed to serve also as a procedural unitracker, or a procedural unitracker might be redeployed to serve also as a substance unitracker or an attribute unitracker. This is possible because the difference between affordances, substances, and attributes is not a basic ontological distinction but is relative to cognitive use.


2021 ◽  
Vol 18 (2) ◽  
pp. 172988142199958
Author(s):  
Shundao Xie ◽  
Hong-Zhou Tan

In recent years, two-dimensional (2D) barcodes have seen increasingly wide application and have been used as landmarks from which robots detect and read information. However, it is hard to obtain a sharp 2D barcode image from a moving robot, and the common solution is to deblur the blurry image before decoding the barcode. Image deblurring is an ill-posed problem, and ringing artifacts commonly appear in the deblurred image, which increases decoding time and limits the improvement in decoding accuracy. In this article, a novel approach is proposed that uses blur-invariant shapes and geometric features to make a blur-readable (BR) 2D barcode, which can be decoded directly even when seriously blurred. The finder patterns of the BR code consist of two concentric rings and five disjoint disks, whose centroids form two triangles. The outer edges of the concentric rings serve as blur-invariant shapes, which enable the BR code to be located quickly even in a blurred image. The inner angles of the triangles are blur-invariant geometric features, which can be used to store the format information of the BR code. Under severe defocus blur, the BR code can not only reduce decoding time by skipping the deblurring process but also improve decoding accuracy. With defocus blur modeled by a circular-disk point-spread function, simulation results verify the performance of the blur-invariant shapes and of the BR code on blurred images.
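The blur-invariant geometric feature above rests on a simple fact: a symmetric defocus blur does not move a symmetric pattern's centroid, so the inner angles of a triangle formed by three centroids survive blurring. A minimal sketch of computing those angles, with hypothetical function names and toy centroid coordinates (not the article's actual finder-pattern layout):

```python
import math

def inner_angles(p, q, r):
    """Return the three inner angles (degrees) of triangle p-q-r.

    Each point is an (x, y) centroid; centroids of symmetric finder
    patterns are preserved under circular-disk defocus blur, so these
    angles are blur-invariant.
    """
    def angle_at(a, b, c):
        # Angle at vertex a between edges a->b and a->c.
        v1 = (b[0] - a[0], b[1] - a[1])
        v2 = (c[0] - a[0], c[1] - a[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

    return (angle_at(p, q, r), angle_at(q, r, p), angle_at(r, p, q))

# Toy centroids forming a 3-4-5 right triangle.
angles = inner_angles((0.0, 0.0), (4.0, 0.0), (0.0, 3.0))
```

Because the angles are ratios of relative positions, they are also unchanged by translation, rotation, and uniform scaling of the captured pattern, which is what lets them encode format information robustly.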

