Metric for the Fusion of Synthetic and Real Imagery from Multimodal Sensors

2014 ◽  
Vol 7 (4) ◽  
pp. 355-362
Author(s):  
Rami Nahas ◽  
S. P. Kozaitis
Author(s):  
Tousif Ahmed ◽  
Mohsin Y. Ahmed ◽  
Md Mahbubur Rahman ◽  
Ebrahim Nemati ◽  
Bashima Islam ◽  
...  

Author(s):  
Byron J. Pierce ◽  
George A. Geri

There is some question as to whether non-collimated (i.e., real) imagery viewed at one meter or less provides sufficiently realistic visual cues to support out-the-window flight simulator training. As a first step toward answering this question, we have obtained perceived size and velocity estimates using both simple stimuli in a controlled laboratory setting and full simulator imagery in an apparatus consisting of optically combined collimated and real-image displays. In the size study it was found that real imagery appeared 15-30% smaller than collimated imagery. In the velocity studies, the laboratory data showed that the perceived velocity of real imagery was less than that of collimated imagery. No perceived velocity effects were found with the simulator imagery. Results support the position that for training tasks requiring accurate perception of spatial and temporal aspects of the simulated visual environment, misperceptions of size, but not velocity, need to be considered when real-image displays are used.


Proceedings ◽  
2018 ◽  
Vol 2 (19) ◽  
pp. 1262 ◽  
Author(s):  
Muhammad Razzaq ◽  
Ian Cleland ◽  
Chris Nugent ◽  
Sungyoung Lee

Activity recognition (AR) is a subtask of pervasive computing and context-aware systems that identifies the physical state of a human in real time. Such systems add a new dimension to widely deployed applications by fusing activities recognized from raw sensory data generated by both obtrusive and unobtrusive digital technologies. In recent years, AR technologies have grown exponentially, and much of the literature focuses on applying machine learning algorithms to obtrusive, single-modality sensor devices. The University of Jaén Ambient Intelligence (UJAmI) Smart Lab in Spain, however, initiated the 1st UCAmI Cup challenge, sharing multimodal sensory data of the aforementioned varieties in order to recognize human activities in a smart environment. This paper presents fusion at both the feature level and the decision level for multimodal sensors, preprocessing the data and predicting activities within the context of the training and test datasets. The approach achieves 94% accuracy on the training data but only 47% on the test data. The study therefore also evaluates the confusion matrix and attributes the discrepancy to factors such as the imbalanced class distribution between the training and test datasets. Additionally, it highlights challenges associated with the datasets whose resolution could improve further analysis.
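The abstract above fuses sensor data at both the feature level and the decision level. A minimal sketch of the two strategies, using hypothetical sensor features and predictions (not the paper's actual pipeline or dataset schema):

```python
import numpy as np

# Hypothetical per-sensor feature vectors for one time window.
acc_features = np.array([0.8, 0.1])   # e.g. accelerometer statistics
prox_features = np.array([1.0])       # e.g. proximity sensor state

# Feature-level fusion: concatenate the modality features into one
# vector and feed it to a single classifier.
fused_features = np.concatenate([acc_features, prox_features])

# Decision-level fusion: each modality classifies independently, and a
# simple majority vote combines the per-sensor predictions.
def majority_vote(predictions):
    labels, counts = np.unique(predictions, return_counts=True)
    return labels[np.argmax(counts)]

per_sensor_predictions = ["walking", "walking", "standing"]
activity = majority_vote(per_sensor_predictions)
print(fused_features.shape)  # (3,)
print(activity)              # walking
```

Feature-level fusion lets one model exploit cross-modality correlations, while decision-level fusion keeps each sensor's classifier independent, which is often more robust when one modality is missing or noisy.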


Symmetry ◽  
2019 ◽  
Vol 11 (2) ◽  
pp. 227
Author(s):  
Eckart Michaelsen ◽  
Stéphane Vujasinovic

Representative input data are a necessary requirement for the assessment of machine-vision systems. For symmetry-seeing machines in particular, such imagery should provide symmetries as well as asymmetric clutter. Moreover, there must be reliable ground truth with the data. It should be possible to estimate the recognition performance and the computational efforts by providing different grades of difficulty and complexity. Recent competitions used real imagery labeled by human subjects with appropriate ground truth. The paper at hand proposes to use synthetic data instead. Such data contain symmetry, clutter, and nothing else. This is preferable because interference with other perceptive capabilities, such as object recognition, or prior knowledge, can be avoided. The data are given sparsely, i.e., as sets of primitive objects. However, images can be generated from them, so that the same data can also be fed into machines requiring dense input, such as multilayered perceptrons. Sparse representations are preferred, because the authors' own system requires such data, and in this way, any influence of the primitive extraction method is excluded. The presented format allows hierarchies of symmetries. This is important because hierarchy constitutes a natural and dominant part of symmetry-seeing. The paper reports some experiments using the authors' Gestalt algebra system as the symmetry-seeing machine. Additionally included is a comparative test run with the state-of-the-art symmetry-seeing deep learning convolutional perceptron of the PSU. The computational efforts and recognition performance are assessed.
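The abstract describes synthetic data given sparsely as sets of primitive objects, containing symmetric pairs, clutter, and ground truth. A minimal sketch of such a generator, under assumed conventions (primitives as (x, y, orientation) tuples and a mirror axis at x = 0; the paper's actual format is not reproduced here):

```python
import random

def make_symmetric_set(n_pairs, n_clutter, seed=0):
    """Generate mirror-symmetric primitive pairs plus random clutter."""
    rng = random.Random(seed)
    primitives, ground_truth = [], []
    for _ in range(n_pairs):
        # A primitive and its mirror image across the axis x = 0.
        x = rng.uniform(0.1, 1.0)
        y = rng.uniform(-1.0, 1.0)
        a = rng.uniform(0.0, 180.0)
        left, right = (-x, y, 180.0 - a), (x, y, a)
        primitives += [left, right]
        ground_truth.append((left, right))  # the symmetric pair
    for _ in range(n_clutter):
        # Asymmetric clutter: uniformly random primitives.
        primitives.append((rng.uniform(-1.0, 1.0),
                           rng.uniform(-1.0, 1.0),
                           rng.uniform(0.0, 180.0)))
    rng.shuffle(primitives)  # hide the pairing order from the machine
    return primitives, ground_truth

prims, gt = make_symmetric_set(n_pairs=4, n_clutter=8)
print(len(prims))  # 16
```

Because the data are purely synthetic, the ground-truth pairing is exact, and difficulty can be graded by varying the clutter-to-pair ratio.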


Sensors ◽  
2012 ◽  
Vol 12 (9) ◽  
pp. 12588-12605 ◽  
Author(s):  
Manhyung Han ◽  
La The Vinh ◽  
Young-Koo Lee ◽  
Sungyoung Lee

2017 ◽  
Vol 33 (1) ◽  
pp. 4-4
Author(s):  
Martin S. Banks

Author(s):  
Neal Winter ◽  
Joanne B. Culpepper ◽  
Noel J. Richards ◽  
Christopher S. Madden ◽  
Vivienne Wheaton
