Dynamic Projection Mapping with a Single IR Camera

2017, Vol. 2017, pp. 1-10
Author(s): Naoki Hashimoto, Ryo Koizumi, Daisuke Kobayashi

We propose a dynamic projection mapping system that combines machine learning with high-speed edge-based object tracking using a single IR camera. The machine-learning techniques are used for precise 3D initial posture estimation from 2D IR images as a detection step. After detection, we apply an edge-based tracking process for real-time image projection. In this paper, we implement the proposed system and demonstrate dynamic projection mapping in practice. In addition, we evaluate its performance by comparing it with a Kinect-based tracking system.
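A minimal sketch of the detect-then-track idea described above, not the authors' implementation: the initial region stands in for the output of a learned posture estimator, the video source is a hypothetical IR stream, and tracking is reduced to matching an edge template with OpenCV.

```python
import cv2

cap = cv2.VideoCapture("ir_stream.avi")      # hypothetical IR video source
ok, frame = cap.read()
assert ok, "could not read the IR stream"
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detection step (hypothetical ML model): an initial 2D region of the target.
x, y, w, h = 100, 80, 60, 60                 # stand-in for a learned pose estimate
template_edges = cv2.Canny(gray[y:y + h, x:x + w], 50, 150)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    edges = cv2.Canny(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 50, 150)
    # Tracking step: locate the edge template in the current frame's edge map.
    res = cv2.matchTemplate(edges, template_edges, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(res)
    # (x, y) would drive the projector's per-frame image warp in a real system.
```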

2012, Vol. 4 (2), pp. 32-59
Author(s): K. K. Chaturvedi, V. B. Singh

Bug severity is the degree of impact that a defect has on the development or operation of a component or system, and it can be classified into different levels based on that impact. Identifying the severity level can help the bug triager allocate a bug to the appropriate fixer. Various researchers have applied text mining techniques to predicting the severity of bugs, detecting duplicate bug reports, and assigning bugs to a suitable fixer. In this paper, an attempt has been made to compare the performance of different machine learning techniques, namely Support Vector Machine (SVM), the probability-based Naïve Bayes (NB), the decision-tree-based J48 (a Java implementation of C4.5), the rule-based Repeated Incremental Pruning to Produce Error Reduction (RIPPER), and Random Forests (RF), in predicting the severity level (1 to 5) of a reported bug by analyzing the summary or short description of the bug report. The bug report data have been taken from NASA's PITS (Projects and Issue Tracking System) datasets as closed-source projects and from components of the Eclipse, Mozilla, and GNOME datasets as open-source projects. The analysis has been carried out in the RapidMiner and STATISTICA data mining tools. The authors measured the performance of the different machine learning techniques by considering (i) the accuracy and F-measure values for all severity levels and (ii) the number of best cases at different threshold levels of accuracy and F-measure.
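A minimal sketch of this kind of text-based severity prediction, not the authors' RapidMiner/STATISTICA setup: it assumes a hypothetical CSV with "summary" and "severity" columns and uses scikit-learn stand-ins for three of the compared learners.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

bugs = pd.read_csv("bug_reports.csv")          # hypothetical dataset
X, y = bugs["summary"], bugs["severity"]       # severity levels 1-5

for name, clf in [("SVM", LinearSVC()),
                  ("Naive Bayes", MultinomialNB()),
                  ("Random Forest", RandomForestClassifier())]:
    # Bag-of-words features from the short description, then the learner.
    pipe = make_pipeline(TfidfVectorizer(stop_words="english"), clf)
    scores = cross_val_score(pipe, X, y, cv=5, scoring="f1_macro")
    print(f"{name}: mean F1 = {scores.mean():.3f}")
```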


2017
Author(s): Udit Arora, Sohit Verma, Sarthak Sahni, Tushar Sharma

Several ball tracking algorithms have been reported in the literature. However, most of them use high-quality video and multiple cameras, and the emphasis has been on coordinating the cameras or visualizing the tracking results. This paper aims to develop a system for assisting the umpire in the sport of cricket in making decisions such as the detection of no-balls, wide balls, leg before wicket, and bouncers, with the help of a single smartphone camera. It involves the implementation of computer vision algorithms for object detection and motion tracking, as well as the integration of machine learning algorithms to optimize the results. Techniques such as Histogram of Oriented Gradients (HOG) and Support Vector Machine (SVM) are used for object classification and recognition. Frame subtraction, minimum enclosing circle, and contour detection algorithms are optimized and used for the detection of a cricket ball. These algorithms are applied using the open-source Python library OpenCV. Machine learning techniques, namely linear and quadratic regression, are used to track and predict the motion of the ball. The system also uses the open-source Python library VPython for the visual representation of the results. The paper describes the design and structure of the approach undertaken in the system for analyzing and visualizing off-air, low-quality cricket videos.
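A hedged sketch of the detection-plus-trajectory idea: frame differencing, contour extraction, minimum enclosing circle, and a quadratic fit with NumPy. The filename, thresholds, and size gate are illustrative assumptions, not the paper's values.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("delivery.mp4")   # low-quality single-camera clip
ok, prev = cap.read()
assert ok, "could not read the video"
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
centres = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)                  # frame subtraction
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)           # assume largest moving blob is the ball
        (x, y), r = cv2.minEnclosingCircle(c)
        if 2 < r < 30:                                   # crude size gate on the circle radius
            centres.append((x, y))
    prev_gray = gray

# Quadratic regression on the tracked centres approximates the ball's path.
if len(centres) >= 3:
    xs, ys = np.array(centres).T
    coeffs = np.polyfit(xs, ys, 2)
    predict_y = np.poly1d(coeffs)        # e.g. predict_y(x_at_stumps)
```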


Author(s): Cesar A. Sanchez-Martinez, Paulo Lopez-Meyer, Esdras Juarez-Hernandez, Aaron Desiga-Orenday, Andres Viveros-Wacher

Symmetry, 2017, Vol. 9 (9), pp. 197
Author(s): Kamran Siddique, Zahid Akhtar, Haeng-gon Lee, Woongsup Kim, Yangwoo Kim

2021
Author(s): Adrian Bulzacki

Gestures are a fundamental part of human communication and are becoming a key component of human-computer interaction. Traditionally, to teach computers to recognize specific gestures, researchers have used a sensor, usually a camera, to collect large gesture datasets, which are then classified and structured using machine learning techniques. Yet finding a way to confidently differentiate between several gesture classes has proven difficult for those working in the gesture recognition field. To capture the samples of movement necessary to train gesture recognition systems, the first step is to provide research participants with appropriate instructions. As collecting gesture data is the crucial first step in creating a robust gesture dataset, this dissertation examines the modalities of instruction used in gesture recognition research to determine whether appropriate directives are conveyed to research participants. These experiments result in the creation of a new dataset, PJVA-20, comprising 50 samples of 20 gesture classes collected from 6 participants. After collecting the gesture samples of the PJVA-20 dataset, this dissertation establishes the benchmark recognition system PJVA, chiefly composed of AMFE, polynomial motion approximation, and Principal Component Analysis (PCA), to contribute novel gesture recognition algorithms that achieve high speed and accuracy. This also involves reviewing studies in the gesture recognition literature to determine which machine learning algorithms offer reliability, speed, and accuracy for complex gesture recognition problems, as well as testing the PJVA approach against work by other researchers in the computer vision and machine learning fields. In particular, the MSRC-12 research provides a benchmark point of comparison for work in this field. To test the quality of samples in PJVA-20 against MSRC-12, a new method is established for extracting motion feature vectors through a novel gesture recognition approach, AMFE. This is tested by applying PJVA to extract and label gesture data from both the MSRC-12 and PJVA-20 datasets.
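The dissertation's AMFE and PJVA components are not reproduced here; the sketch below only illustrates the generic combination of polynomial motion approximation and PCA on synthetic skeleton trajectories. The array shapes, polynomial degree, and component count are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Synthetic data: 50 gesture samples, 20 joints, 30 frames of (x, y, z) positions.
gestures = rng.normal(size=(50, 20, 30, 3))
t = np.linspace(0.0, 1.0, 30)

def motion_features(sample, degree=3):
    """Fit a low-degree polynomial to each joint/axis trajectory; use the coefficients as features."""
    feats = []
    for joint in sample:                      # (30, 3) trajectory per joint
        for axis in joint.T:                  # x, y, z series over time
            feats.extend(np.polyfit(t, axis, degree))
    return np.array(feats)

X = np.stack([motion_features(s) for s in gestures])    # (50, 20 * 3 * (degree + 1))
X_reduced = PCA(n_components=10).fit_transform(X)       # compact descriptors for a classifier
print(X_reduced.shape)                                  # (50, 10)
```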


Author(s): Gracia Nirmala Rani D., J. Shanthi, S. Rajaram

Digital ICs have grown in importance and popularity because of parameters such as small feature size, high speed, low cost, low power consumption, and manageable temperature. Various techniques and methodologies have been developed to improve these parameters, using different optimization algorithms and data structures based on the dimensions of the IC. These existing algorithms show clear advantages in optimizing chip area, the maximum temperature of the chip, and wire length. Although these traditional algorithms have advantages, they also have a few shortcomings, such as long execution time, difficult integration, and high computational complexity, owing to the need to handle large amounts of data. Machine learning techniques produce strong results in fields where big data must be handled to optimize the scaling parameters of IC design. The objective of this chapter is to present a detailed approach to applying machine learning techniques based on Bayes' theorem to create an automation tool for VLSI 3D IC design steps.
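As a loose illustration of the Bayes'-theorem idea, not the chapter's tool: the sketch below trains a Gaussian Naive Bayes model on synthetic placement features (wirelength, peak temperature, area) to flag acceptable 3D IC floorplans. All feature names, distributions, and labels are assumptions.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
wirelength = rng.normal(100.0, 15.0, n)       # arbitrary units
peak_temp = rng.normal(85.0, 10.0, n)         # degrees Celsius
area = rng.normal(50.0, 5.0, n)               # mm^2
X = np.column_stack([wirelength, peak_temp, area])
# Synthetic ground truth: a floorplan is "acceptable" if it is cool and compact enough.
y = ((peak_temp < 90.0) & (area < 55.0)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GaussianNB().fit(X_tr, y_tr)          # Bayes' theorem with Gaussian likelihoods
print("held-out accuracy:", model.score(X_te, y_te))
```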

