multimodal system
Recently Published Documents

TOTAL DOCUMENTS: 107 (five years: 14)
H-INDEX: 10 (five years: 1)

Author(s): Satvik Garg, Pradyumn Pundit, Himanshu Jindal, Hemraj Saini, Somya Garg
2021, pp. 107522
Author(s): Unais Sait, Gokul Lal K.V., Sanjana Shivakumar, Tarun Kumar, Rahul Bhaumik, ...
2021, pp. 263497952110070
Author(s): Rohit Ashok Khot, Deepti Aggarwal, Jung-Ying (Lois) Yi, Daniel Prohasky

COVID-19 has brought significant changes to our lives and eating practices: many of us are required not only to stay at home but also to eat at home. This is particularly challenging for lone-person households, as eating alone can be boring and demotivating, and can lead to unhealthy behaviors such as mindless snacking or skipping meals. To remedy such situations, we present Guardian of the Snacks, a tangible multimodal system that encourages mindful snacking by offering playful companionship. The system can be customized to take the shape of different animals and adapted to different snacking scenarios. Like a good companion, it encourages eating but moderates overeating through auditory feedback and playful nudging. In this article, we reflect on our design process and contribute ideas for the future development of technology-driven mindful snacking.


2020, Vol 9 (6), pp. 2411-2418
Author(s): Muthana H. Hamd, Rabab A. Rasool

This paper presents three novel aspects in developing biometric, face-recognition software for human identification applications. First, the computation cost is greatly reduced by eliminating the feature extraction phase and considering only the face features detected from phase congruency. Second, a new technique named mean-based training (MBT) is applied to overcome the matching delay caused by the long feature vector. The last novel aspect is utilizing the one-to-one mapping relationship to fuse the edge-to-angle unimodal classification results into a multimodal system using the logical-OR rule. Despite dataset difficulties such as Unconstrained Facial Images (UFI), which includes varying illumination, expressions, occlusions, and poses, the multimodal system greatly improved the accuracy rate and achieved promising recognition results: the fused decision is classified correctly at 84%, 92%, and 72% with only one training vector under MBT, in contrast to 80%, 62%, and 68% with five training vectors under normal matching. These rates are measured with the Euclidean, Manhattan, and Cosine distance measures, respectively.
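The pipeline the abstract names can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names are hypothetical, and the paper's actual features come from phase congruency on face images, whereas here any numeric feature vectors stand in. It shows mean-based training (one template per class, the mean of that class's training vectors), nearest-template matching under the three distance measures, and logical-OR fusion of unimodal decisions:

```python
import numpy as np

def mean_based_templates(X, y):
    """Mean-based training (MBT): store one template per class,
    the mean of that class's feature vectors."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def euclidean(a, b):
    return np.linalg.norm(a - b)

def manhattan(a, b):
    return np.abs(a - b).sum()

def cosine_dist(a, b):
    # 1 - cosine similarity; assumes non-zero vectors
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def classify(x, templates, dist):
    """Nearest-template classification under the given distance measure."""
    return min(templates, key=lambda c: dist(x, templates[c]))

def or_fuse(decisions, claimed_label):
    """Logical-OR decision fusion: accept if any unimodal
    classifier agrees with the claimed identity."""
    return any(d == claimed_label for d in decisions)
```

Because MBT compares a probe against one mean vector per class instead of every stored training vector, matching cost drops roughly by the number of training samples per class, which is the delay reduction the abstract refers to.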


Author(s): Stelios Gavras, Spiros Baxevanakis, Dimitris Kikidis, Efthymios Kyrodimos, Themis Exarchos