Towards Automating Artifact Analysis: A Study Showing Potential Applications of Computer Vision and Morphometrics to Artifact Typology

Author(s):  
Michael J. Lenardi ◽  
Daria E. Merwin
2015 ◽  
Vol 7 (3) ◽  
pp. 321-345 ◽  
Author(s):  
Hemad Zareiforoush ◽  
Saeid Minaei ◽  
Mohammad Reza Alizadeh ◽  
Ahmad Banakar

2014 ◽  
Vol 474 ◽  
pp. 179-185
Author(s):  
Rastislav Ďuriš

The wide potential applications of humanoid robots require that the robots can move in a general environment, overcome various obstacles, detect predefined objects and control their motion according to these parameters. The goal of this paper is to address the problem of applying computer vision to the motion control of a humanoid robot. We focus on the use of computer vision and image processing techniques with which the robot can detect and recognize a predefined color object in a captured image. An algorithm for the detection and localization of objects is described. The results obtained from image processing are then used in an algorithm for controlling the robot's movement.
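
A minimal sketch of the kind of color-based detection and localization described above is given below, using OpenCV in Python. The HSV thresholds, camera index and the idea of steering from the centroid offset are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

# Assumed HSV range for the predefined color (here: a red object);
# in practice the thresholds would be calibrated for the robot's camera.
LOWER_HSV = np.array([0, 120, 70])
UPPER_HSV = np.array([10, 255, 255])

def detect_color_object(frame):
    """Return the pixel centroid and bounding box of the largest region
    matching the predefined color, or None if no such region is found."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
    return (cx, cy), (x, y, w, h)

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)              # assumed on-board camera index
    ok, frame = cap.read()
    if ok:
        result = detect_color_object(frame)
        if result is not None:
            (cx, cy), _ = result
            # The offset of the centroid from the image centre could then
            # feed a motion-control loop (turn towards / approach the object).
            print("object centre:", cx, cy)
    cap.release()
```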


2017 ◽  
Author(s):  
Martin Leroux ◽  
Sofiane Achiche ◽  
Maxime Raison

Over the last decade, eye tracking systems have been developed and used in many fields, mostly to identify targets on a screen, i.e. a plane. For novel applications such as the control of robotic devices by the user's vision, there is great interest in developing methods based on eye tracking that identify target points in free three-dimensional environments. The objective of this paper is to characterise the accuracy of an eye tracking and computer vision combination that was recently designed to overcome many limitations of eye tracking in 3D space. We propose a characterization protocol to assess how the accuracy of the system behaves over the workspace of a robotic manipulator assistant. Applying this protocol to 33 subjects, we estimated the behavior of the system error relative to the target position on a cylindrical workspace and to the acquisition time. Over our workspace, targets are located on average at 0.84 m, and our method shows an accuracy 12.65 times better than the direct calculation of the 3D point of gaze. With the current accuracy, many potential applications become possible, such as visually controlled robotic assistants in the field of rehabilitation and adaptation engineering.
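
A hedged sketch of how such an accuracy characterization could be computed is shown below: given known 3D target positions and the 3D points estimated by the combined system, the Euclidean error is aggregated over acquisitions and compared with a baseline point-of-gaze estimate. The array shapes, the sample coordinates and the use of a mean-error ratio are illustrative assumptions, not the authors' protocol.

```python
import numpy as np

def characterization_errors(targets, estimates):
    """Euclidean error (in metres) between known target positions and the
    3D points returned by the system; one row per acquisition, shape (n, 3)."""
    return np.linalg.norm(np.asarray(estimates) - np.asarray(targets), axis=1)

# Illustrative numbers only: three targets on a cylindrical workspace at
# roughly 0.84 m, with hypothetical combined-system estimates and
# hypothetical raw 3D point-of-gaze estimates for comparison.
targets = np.array([[0.30, 0.10, 0.78], [0.00, 0.20, 0.84], [-0.25, 0.05, 0.80]])
system_estimates = np.array([[0.31, 0.11, 0.79], [0.01, 0.19, 0.85], [-0.24, 0.06, 0.81]])
raw_pog_estimates = np.array([[0.42, 0.02, 0.95], [0.12, 0.31, 0.70], [-0.38, 0.18, 0.92]])

err_system = characterization_errors(targets, system_estimates)
err_raw = characterization_errors(targets, raw_pog_estimates)

print("mean system error [m]:", err_system.mean())
print("mean raw point-of-gaze error [m]:", err_raw.mean())
print("improvement factor:", err_raw.mean() / err_system.mean())
```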


2020 ◽  
pp. 21-34
Author(s):  
Carlos Ismael Orozco ◽  
Eduardo Xamena ◽  
María Elena Buemi ◽  
Julio Jacobo Berlles

Action recognition in videos is currently a topic of interest in the area of computer vision, due to potential applications such as multimedia indexing and surveillance in public spaces, among others. In this paper we (1) implement a CNN–LSTM architecture: first, a pre-trained VGG16 convolutional neural network extracts the features of the input video, and then an LSTM classifies the video into a particular class; (2) study how the number of LSTM units affects the performance of the system, using the KTH, UCF-11 and HMDB-51 datasets for the training and test phases; and (3) evaluate the performance of our system using accuracy as the evaluation metric, obtaining 93%, 91% and 47% accuracy on these datasets, respectively.
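
A minimal PyTorch sketch of this kind of CNN–LSTM pipeline is given below: a frozen, pre-trained VGG16 produces per-frame features, and an LSTM followed by a linear layer classifies the clip. The hidden size, frame count and number of classes (6, as in KTH) are illustrative choices, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

class CNNLSTM(nn.Module):
    """Per-frame VGG16 features followed by an LSTM video classifier."""
    def __init__(self, num_classes=6, lstm_units=256):
        super().__init__()
        # Pre-trained VGG16 (requires torchvision >= 0.13 for the weights enum).
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        self.backbone = vgg.features                     # convolutional layers
        self.avgpool = vgg.avgpool
        self.fc_features = nn.Sequential(*list(vgg.classifier.children())[:-1])  # 4096-d
        for p in self.backbone.parameters():              # keep the CNN frozen
            p.requires_grad = False
        self.lstm = nn.LSTM(input_size=4096, hidden_size=lstm_units, batch_first=True)
        self.classifier = nn.Linear(lstm_units, num_classes)

    def forward(self, video):
        # video: (batch, frames, 3, 224, 224)
        b, t = video.shape[:2]
        x = video.reshape(b * t, *video.shape[2:])
        x = self.avgpool(self.backbone(x)).flatten(1)
        x = self.fc_features(x)                           # (b*t, 4096) per-frame features
        x = x.reshape(b, t, -1)
        _, (h_n, _) = self.lstm(x)
        return self.classifier(h_n[-1])                   # logits from last hidden state

if __name__ == "__main__":
    model = CNNLSTM(num_classes=6, lstm_units=256).eval()
    clip = torch.randn(1, 16, 3, 224, 224)                # one clip of 16 RGB frames
    with torch.no_grad():
        print(model(clip).shape)                          # torch.Size([1, 6])
```

Varying `lstm_units` in this sketch corresponds to the study in point (2), and classification accuracy on held-out clips corresponds to the evaluation in point (3).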


Author(s):  
Yan Yan ◽  
Yu-Jin Zhang

Over the past few years, face recognition has attracted much interest and has become a popular area of research in computer vision and pattern recognition. The problem attracts researchers from different disciplines such as image processing, pattern recognition, neural networks, computer vision, and computer graphics (Zhao, Chellappa, Rosenfeld & Phillips, 2003). Face recognition is a typical computer vision problem. The goal of computer vision is to understand the images of scenes, locate and identify objects, and determine their structures, spatial arrangements and relationships with other objects (Shah, 2002). The main task of face recognition is to locate faces and identify the people in the scene. Face recognition is also a challenging pattern recognition problem: the number of training samples for each face class is usually so small that it is hard to learn the distribution of each class, and the within-class difference may sometimes be larger than the between-class difference due to variations in illumination, pose, expression, age, etc. The availability of feasible technologies gives face recognition many potential applications, such as face ID, access control, security, surveillance, smart cards, law enforcement, face databases, multimedia management and human-computer interaction (Li & Jain, 2005). Traditional still image-based face recognition has achieved great success in constrained environments. However, once the conditions (including illumination, pose, expression and age) change too much, the performance declines dramatically. The recent FRVT2002 (Face Recognition Vendor Test 2002) (Phillips, Grother, Micheals, Blackburn, Tabassi & Bone, 2003) shows that the recognition performance on face images captured in an outdoor environment and on different days is still not satisfactory. Current still image-based face recognition algorithms are still far from the capability of the human perception system (Zhao, Chellappa, Rosenfeld & Phillips, 2003). On the other hand, psychology and physiology studies have shown that motion can help people recognize faces better (Knight & Johnston, 1997; O'Toole, Roark & Abdi, 2002). Torres (2004) pointed out that traditional still image-based face recognition confronts great challenges and difficulties, and that there are two potential ways forward: video-based face recognition technology and multi-modal identification technology. During the past several years, many research efforts have therefore concentrated on video-based face recognition. Compared with still image-based face recognition, true video-based face recognition algorithms that use both spatial and temporal information emerged only a few years ago (Zhao, Chellappa, Rosenfeld & Phillips, 2003). This article gives an overview of most existing methods in the field of video-based face recognition and analyses their respective pros and cons. First, a general statement of face recognition is given. Then, most existing methods for video-based face recognition are briefly reviewed. Finally, some future trends and conclusions are presented.


2020 ◽  
Vol 60 (2) ◽  
pp. 131-139
Author(s):  
Paramjit Kaur ◽  
Kewal Krishan ◽  
Suresh K. Sharma ◽  
Tanuj Kanchan

The face is an important part of the human body, distinguishing individuals in large groups of people. Thus, because of its universality and uniqueness, face recognition has become the most widely used and accepted biometric method. The domain of face recognition has gained the attention of many scientists, and hence it has become a standard benchmark in the area of human recognition; it has been among the most deeply studied areas in computer vision for more than four decades. It has a wide array of applications, including security monitoring, automated surveillance systems, victim and missing-person identification, and so on. This review presents the broad range of methods used for face recognition and discusses their advantages and disadvantages. Initially, we present the basics of face-recognition technology, its standard workflow, background and problems, and the potential applications. Then, face-recognition methods with their advantages and limitations are discussed. The concluding section presents the possibilities and future implications for further advancing the field.


Author(s):  
M. PARISA BEHAM ◽  
S. MOHAMED MANSOOR ROOMI

Face recognition has become more significant and relevant in recent years owing to its potential applications. Since faces are highly dynamic and pose many issues and challenges, researchers in the domains of pattern recognition, computer vision and artificial intelligence have proposed many solutions to reduce such difficulties and to improve robustness and recognition accuracy. As many approaches have been proposed, efforts have also been made to provide extensive surveys of the methods developed over the years. The objective of this paper is to survey the face recognition papers that appeared in the literature over the past decade, under severe conditions that were not discussed in previous surveys, and to categorize them into meaningful approaches, viz. appearance-based, feature-based and soft computing-based. A comparative study of the merits and demerits of these approaches is presented.


Author(s):  
BO YANG ◽  
JAN FLUSSER ◽  
TOMÁŠ SUK

Steerability is a useful and important property of "kernel" functions: it enables certain complicated operations involving orientation manipulation on images to be executed with high efficiency. We therefore focus our attention on the steerability of Hermite polynomials and of their versions modulated by the Gaussian function with different powers, defined here as the Hermite kernel. Certain special cases of such a kernel, namely Hermite polynomials, Hermite functions and Gaussian derivatives, are discussed in detail; these cases demonstrate that the Hermite kernel is a powerful and effective tool for image processing. Furthermore, the steerability of the Hermite kernel is proved with the help of a property of Hermite polynomials that gives the rule for the product of two Hermite polynomials after coordinate rotation. Consequently, the Hermite kernel of any order inherits steerability. Moreover, explicit interpolation functions and basis functions can be obtained directly. We provide some examples to verify the steerability of the Hermite kernel. Experimental results show the effectiveness of steerability and its potential applications in the fields of image processing and computer vision.
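
As a standard illustration of what steerability means (the classic first-order Gaussian-derivative example of Freeman and Adelson, shown here as background rather than the Hermite-kernel construction of this paper): a rotated copy of a steerable kernel is a fixed linear combination of a small set of basis kernels, with interpolation coefficients that depend only on the rotation angle.

```latex
% Steerability: a rotated kernel as a finite linear combination of basis kernels
f^{\theta}(x, y) \;=\; \sum_{j=1}^{M} k_j(\theta)\, f^{\theta_j}(x, y).

% Classic example: the first derivative of the 2D Gaussian
% G(x,y) = e^{-(x^2 + y^2)} is steerable with only two basis filters:
G_1^{\theta}(x, y) \;=\; \cos\theta \; G_1^{0^{\circ}}(x, y) \;+\; \sin\theta \; G_1^{90^{\circ}}(x, y),
\qquad
G_1^{0^{\circ}} = \frac{\partial G}{\partial x}, \quad
G_1^{90^{\circ}} = \frac{\partial G}{\partial y}.
```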

