Research on Key Feature Extraction and Position Accurate Tracking Based on Computer Vision Image

2019 ◽  
Vol 1168 ◽  
pp. 042004
Author(s):  
Yan Sun
2005 ◽  
Vol 33 (1) ◽  
pp. 2-17 ◽  
Author(s):  
D. Colbry ◽  
D. Cherba ◽  
J. Luchini

Commercial databases containing images of tire tread patterns are currently used by product designers, forensic specialists and product application personnel to identify whether a given tread pattern matches an existing tire. Currently, this pattern matching process is almost entirely manual, requiring visual searches of extensive libraries of tire tread patterns. Our work explores a first step toward automating this pattern matching process by building on feature analysis techniques from computer vision and image processing to develop a new method for extracting and classifying features from tire tread patterns and automatically locating candidate matches from a database of existing tread pattern images. Our method begins with a selection of tire tread images obtained from multiple sources (including manufacturers' literature, Web site images, and Tire Guides, Inc.), which are preprocessed and normalized using Two-Dimensional Fast Fourier Transforms (2D-FFT). The results of this preprocessing are feature-rich images that are further analyzed using feature extraction algorithms drawn from research in computer vision. A new feature extraction algorithm is developed based on the geometry of the 2D-FFT images of the tire. The resulting FFT-based analysis allows independent classification of the tire images along two dimensions, specifically by separating “rib” and “lug” features of the tread pattern. Dimensionality of (0,0) indicates a smooth-treaded tire with no pattern; dimensionality of (1,0) and (0,1) indicates purely rib and purely lug tires, respectively; and dimensionality of (1,1) indicates an all-season pattern. This analysis technique allows a candidate tire to be classified according to the features of its tread pattern, and other tires with similar features and tread pattern classifications can be automatically retrieved from the database.
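The rib/lug dimensionality described above can be illustrated with a short sketch. It assumes the flattened tread image is oriented with the tire circumference running vertically, so that rib grooves concentrate spectral energy along one frequency axis and lug blocks along the other; the band width and threshold are illustrative choices, not the geometric analysis developed in the paper.

# A minimal sketch of the 2D-FFT rib/lug dimensionality idea, assuming a
# grayscale tread image with the circumference running vertically; the band
# width and threshold are illustrative assumptions.
import numpy as np
import cv2

def tread_dimensionality(path, band=3, threshold=0.15):
    """Return a (rib, lug) pair in {0,1} x {0,1} for a tread image."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE).astype(np.float32)
    img -= img.mean()                                  # remove the DC offset

    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    cy, cx = spectrum.shape[0] // 2, spectrum.shape[1] // 2
    spectrum[cy, cx] = 0.0                             # suppress any residual DC term
    total = spectrum.sum() + 1e-9

    # Vertical grooves (ribs) concentrate energy along the horizontal frequency
    # axis; lateral lug blocks concentrate energy along the vertical axis.
    rib_energy = spectrum[cy - band:cy + band + 1, :].sum() / total
    lug_energy = spectrum[:, cx - band:cx + band + 1].sum() / total

    return int(rib_energy > threshold), int(lug_energy > threshold)

# (0,0) smooth, (1,0) rib, (0,1) lug, (1,1) all-season
print(tread_dimensionality("tread.png"))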


2019 ◽  
Vol 9 (7) ◽  
pp. 1385 ◽  
Author(s):  
Luca Donati ◽  
Eleonora Iotti ◽  
Giulio Mordonini ◽  
Andrea Prati

Visual classification of commercial products is a branch of the wider fields of object detection and feature extraction in computer vision and, in particular, it is an important step in the creative workflow of fashion industries. Automatically classifying garment features makes both designers and data experts aware of their overall production, which is fundamental in order to organize marketing campaigns, avoid duplicates, categorize apparel products for e-commerce purposes, and so on. There are many different techniques for visual classification, ranging from standard image processing to machine learning approaches: this work, carried out in collaboration with Adidas AG™ by using and testing the aforementioned approaches, describes a real-world study aimed at automatically recognizing and classifying logos, stripes, colors, and other features of clothing, solely from final rendering images of the products. Specifically, both deep learning and image processing techniques, such as template matching, were used. The result is a novel system for image recognition and feature extraction that has a high classification accuracy and is reliable and robust enough to be used by a company like Adidas. This paper shows the main problems and proposed solutions in the development of this system, and the experimental results on the Adidas AG™ dataset.
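As one concrete instance of the image processing side mentioned above, the following hedged sketch shows single-scale template matching of a logo against a product render with OpenCV; the file names and score threshold are assumptions for illustration, not details of the Adidas pipeline.

# A minimal sketch of template matching for logo detection; paths and the
# score threshold are illustrative assumptions, not the production pipeline.
import cv2

def find_logo(render_path, template_path, score_threshold=0.8):
    """Return the best match location if the logo template appears in the render."""
    render = cv2.imread(render_path, cv2.IMREAD_GRAYSCALE)
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)

    # Normalized cross-correlation is fairly robust to uniform brightness changes.
    result = cv2.matchTemplate(render, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)

    if max_val >= score_threshold:
        h, w = template.shape
        return {"score": float(max_val), "box": (max_loc[0], max_loc[1], w, h)}
    return None

print(find_logo("product_render.png", "logo_template.png"))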


2011 ◽  
Vol 217-218 ◽  
pp. 27-32
Author(s):  
Guo Feng Qin ◽  
Yu Sun ◽  
Qi Yan Li

Vehicle detection plays an important role in modern intelligent traffic management, and pattern recognition is an active topic in computer vision. This article introduces an image-based Automobile Automatic Recognition System. It begins with the structure of the system, then discusses the detailed implementation methods. The system uses a camera to capture traffic images and, after image pretreatment and segmentation, performs feature extraction, template matching and pattern recognition to identify different vehicle models and gather vehicular traffic statistics. Finally, the implementation of the system is introduced, and the recognition algorithms are verified in this application case.
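A rough sketch of such an image-based pipeline (pretreatment, segmentation, and a simple size-based detection step) is given below; the background-subtraction approach, video file name, and area threshold are assumptions for illustration rather than the authors' exact algorithms, which also include template matching against vehicle models.

# A hedged sketch of a camera-based vehicle detection pipeline: pretreatment,
# foreground segmentation, and size-based blob detection. File name and
# thresholds are illustrative assumptions.
import cv2

cap = cv2.VideoCapture("traffic.avi")          # hypothetical traffic video
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=32)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))

detections = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Pretreatment: grayscale conversion and smoothing to reduce sensor noise.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)

    # Segmentation: foreground mask from background subtraction, then cleanup.
    mask = subtractor.apply(blurred)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=2)

    # Feature extraction: connected regions large enough to be vehicles.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        if cv2.contourArea(contour) > 2000:    # illustrative minimum vehicle size
            detections += 1                    # naive count; a real system tracks blobs

cap.release()
print("vehicle-sized blob detections:", detections)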


JURNAL TIKA ◽  
2021 ◽  
Vol 6 (02) ◽  
pp. 140-146
Author(s):  
Dedy Armiady

The system can use either an IP Camera or CCTV; an IP Camera requires a UTP cable for data communication, while CCTV requires a coaxial cable. Face recognition is carried out through Face Detection, Feature Extraction and Face Recognition stages, after which the result is matched against the profile data stored in the Database. To detect faces, OpenCV is embedded into the system. OpenCV is a library used to process images and video so that the information they contain can be extracted. OpenCV runs in various programming languages, such as C, C++, Java and Python, and is supported on platforms including Windows, Linux, Mac OS, iOS and Android. Each user of the Face Recognition attendance system must first be registered one by one; the system then performs Training on a video of each registered user and builds a Source Base of photos stored on the server computer, which later serves as the reference for detecting and matching faces from various camera angles. The database used is MySQL, which stores face data, schedule data, User data and attendance information. The CCTV connection uses RTSP, a network protocol designed for multimedia and data communication needs that can control media streams from a server. This protocol is used to establish and control media sessions between two endpoints. Most RTSP servers use the Real-time Transport Protocol (RTP), complemented by the Real-time Control Protocol (RTCP), for media stream delivery. An IP Camera, meanwhile, is an Internet Protocol-based camera: a type of digital video camera that receives control data and sends image data over an IP network. It is typically used for surveillance but, unlike analog Closed-circuit Television (CCTV) cameras, it requires no local recording device, only a local area network. Most IP cameras are webcams, but the terms IP camera or Netcam usually apply only to cameras that can be accessed directly over a network connection and can be used for surveillance.
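The detection stage described above can be sketched with OpenCV as follows; the RTSP URL is a placeholder, and the recognition step and MySQL attendance lookup are indicated only by a comment rather than implemented.

# A minimal sketch of reading an RTSP stream and detecting faces with an
# OpenCV Haar cascade; the URL is a placeholder, and recognition against the
# registered users' photos is only noted in a comment.
import cv2

RTSP_URL = "rtsp://user:password@192.168.1.10:554/stream1"   # placeholder camera URL

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
cap = cv2.VideoCapture(RTSP_URL)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        face_crop = gray[y:y + h, x:x + w]
        # Feature extraction, matching against the registered users' Source Base
        # photos, and the MySQL attendance lookup would happen here.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("attendance", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()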


Author(s):  
Kostas Karpouzis ◽  
Athanasios Drosopoulos ◽  
Spiros Ioannou ◽  
Amaryllis Raouzaiou ◽  
Nicolas Tsapatsoulis ◽  
...  

Emotionally-aware Man-Machine Interaction (MMI) systems are presently at the forefront of interest of the computer vision and artificial intelligence communities, since they give less technology-aware people the opportunity to use computers more efficiently, overcoming fears and preconceptions. Most emotion-related facial and body gestures are considered to be universal, in the sense that they are recognized across different cultures; therefore, the introduction of an “emotional dictionary” that includes descriptions and perceived meanings of facial expressions and body gestures, so as to help infer the likely emotional state of a specific user, can enhance the affective nature of MMI applications (Picard, 2000).

