Instant Learning Sound Sensor: Flexible Real-World Event Recognition System for Ubiquitous Computing

Author(s):  
Yuya Negishi ◽  
Nobuo Kawaguchi
2009 ◽  
pp. 2627-2643
Author(s):  
Rainer Malaka

Designing user interfaces for ubiquitous computing applications is a challenging task. In this chapter we discuss how to build intelligent interfaces. The foundations are usability criteria that are valid for all computer products. A number of established methods for the design process can help to meet these goals. In particular, participatory and iterative, so-called human-centered, approaches are important for interfaces in ubiquitous computing. The question of how to make interfaces more intelligent is not trivial, and there are multiple approaches to enhancing either the intelligence of the system or that of the user. Novel interface approaches follow the idea of embodied interaction and place particular emphasis on the situated use of a system and the mental models humans develop in their real-world environment.


Author(s):  
Patrik Spieß ◽  
Jens Müller

This chapter describes example use cases for ubiquitous computing technology in a corporate environment that have been evaluated as prototypes under realistic conditions. The main example reduces risk in the handling of hazardous substances by detecting potentially dangerous storage situations and raising alarms when certain rules are violated. We specify the requirements, implementation decisions, and lessons learned from the evaluation. It is shown that ubiquitous computing on a shop floor or in a warehouse or retail environment can drastically improve real-world business processes, making them safer and more efficient.
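The abstract does not detail the rule engine, but the described pattern (sensed storage situations checked against safety rules, with alarms on violation) can be sketched as follows. The substance classes, incompatibility pairs, and item tuples are illustrative assumptions, not taken from the chapter:

```python
# Hypothetical sketch of the rule check described above: tagged items report a
# substance class and storage zone, and an alarm is raised whenever two
# incompatible substance classes end up in the same zone.
INCOMPATIBLE = {
    frozenset({"oxidizer", "flammable"}),  # assumed example rule
    frozenset({"acid", "cyanide"}),        # assumed example rule
}

def check_storage(items):
    """items: list of (item_id, substance_class, zone). Returns alarms found."""
    alarms = []
    by_zone = {}  # zone -> list of (item_id, substance_class) already seen
    for item_id, cls, zone in items:
        for other_id, other_cls in by_zone.get(zone, []):
            if frozenset({cls, other_cls}) in INCOMPATIBLE:
                alarms.append((zone, item_id, other_id))
        by_zone.setdefault(zone, []).append((item_id, cls))
    return alarms

print(check_storage([("drum-1", "oxidizer", "A"),
                     ("drum-2", "flammable", "A"),
                     ("drum-3", "acid", "B")]))
# → [('A', 'drum-2', 'drum-1')]
```

In a deployed system the rule set would be far richer (quantities, temperatures, time limits), but the core loop — match sensed state against declarative rules, emit alarms — stays the same.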


Author(s):  
Gwo-Jen Hwang ◽  
Ting-Ting Wu ◽  
Yen-Jung Chen

The prosperous development of wireless communication and sensor technologies has attracted the attention of researchers from both the computer science and education fields. Various investigations have applied these new technologies for educational purposes, so that more active and adaptive learning activities can be conducted in the real world. Ubiquitous learning (u-learning) has become a popular trend in education all over the world, and hence it is worth reviewing the potential issues concerning the use of ubiquitous computing technologies in education, which could be helpful to researchers interested in the investigation of mobile and ubiquitous learning.


Author(s):  
Yu-Gang Jiang ◽  
Qi Dai ◽  
Yingbin Zheng ◽  
Xiangyang Xue ◽  
Jie Liu ◽  
...  

Sensors ◽  
2019 ◽  
Vol 19 (8) ◽  
pp. 1863 ◽  
Author(s):  
Samadiani ◽  
Huang ◽  
Cai ◽  
Luo ◽  
Chi ◽  
...  

Facial Expression Recognition (FER) can be applied to a wide range of research areas, such as the diagnosis of mental diseases and the detection of human social/physiological interaction. With emerging advances in hardware and sensors, FER systems have been developed to support real-world application scenes instead of laboratory environments. Although laboratory-controlled FER systems achieve very high accuracy, around 97%, the transfer of this technology from the laboratory to real-world applications faces a great barrier of very low accuracy, approximately 50%. In this survey, we comprehensively discuss three significant challenges in unconstrained real-world environments: illumination variation, head pose, and subject dependence, which may not be resolved by analysing images/videos alone. We focus on sensors that may provide extra information and help FER systems detect emotion in both static images and video sequences. We introduce three categories of sensors that may help improve the accuracy and reliability of an expression recognition system by tackling the challenges mentioned above in pure image/video processing. The first group is detailed-face sensors, which detect small dynamic changes in a facial component; eye-trackers, for example, may help distinguish background noise from facial features. The second is non-visual sensors, such as audio, depth, and EEG sensors, which provide extra information in addition to the visual dimension and improve recognition reliability, for example under illumination variation and position shifts. The last is target-focused sensors, such as infrared thermal sensors, which can help FER systems filter out irrelevant visual content and may resist illumination variation. We also discuss methods for fusing the different inputs obtained from multimodal sensors in an emotion recognition system.
We comparatively review the most prominent multimodal emotional expression recognition approaches and point out their advantages and limitations. We briefly introduce the benchmark data sets related to FER systems for each category of sensors and extend our survey to open challenges and issues. We also design a framework for an expression recognition system that uses multimodal sensor data (provided by the three categories of sensors) to supply complete information about emotions and assist pure face image/video analysis. We theoretically analyse the feasibility and achievability of our new expression recognition system, especially for use in the wild, and point out future directions for designing an efficient emotional expression recognition system.
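One common way to combine multimodal sensor inputs of the kind the survey discusses is decision-level (late) fusion: each modality's classifier emits class probabilities, and a reliability-weighted average produces the fused decision. The modality names, emotion labels, and weights below are illustrative assumptions, not the paper's method:

```python
# Minimal late-fusion sketch: average per-modality class probabilities,
# weighted by an assumed per-modality reliability, and pick the top class.
def fuse(predictions, weights):
    """predictions: {modality: {emotion: prob}}; weights: {modality: reliability}.
    Returns the emotion with the highest fused score."""
    total = sum(weights[m] for m in predictions)  # normalise the weights
    fused = {}
    for m, probs in predictions.items():
        for emotion, p in probs.items():
            fused[emotion] = fused.get(emotion, 0.0) + weights[m] * p / total
    return max(fused, key=fused.get)

preds = {
    "video": {"happy": 0.40, "neutral": 0.60},  # visual cue degraded, e.g. by lighting
    "audio": {"happy": 0.80, "neutral": 0.20},  # voice channel still informative
}
print(fuse(preds, {"video": 0.3, "audio": 0.7}))
# → happy
```

Down-weighting the degraded video channel lets the audio channel dominate, which is the intuition behind using non-visual sensors to compensate for illumination variation.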

