Real time message composition through head movements on portable Android devices

Author(s): Laura Montanini, Enea Cippitelli, Ennio Gambi, Susanna Spinsante

2015 ◽ Vol 3 ◽ pp. 829-836
Author(s): Ilja T. Feldstein, Alexander Güntner, Klaus Bengler

2015 ◽ Vol 9 (2) ◽ pp. 141-151
Author(s): Laura Montanini, Enea Cippitelli, Ennio Gambi, Susanna Spinsante

2005 ◽ Vol 93 (4) ◽ pp. 2294-2301
Author(s): Per Magne Knutsen, Dori Derdikman, Ehud Ahissar

Due to recent advances that enable real-time electrophysiological recordings in brains of awake behaving rodents, effective methods for analyzing the large amount of behavioral data thus generated, at millisecond resolution, are required. We describe a semiautomated, efficient method for accurate tracking of head and mystacial vibrissae (whisker) movements in freely moving rodents using high-speed video. By tracking the entire length of individual whiskers, we show how both location and shape of whiskers are relevant when describing the kinematics of whisker movements and whisker interactions with objects during a whisker-dependent task and exploratory behavior.
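The abstract notes that both the location and the shape of a whisker matter for describing its kinematics. As a minimal illustrative sketch (not the authors' published algorithm), whisker shape is often summarized by fitting a low-order curve to the tracked points along the shaft and reading off curvature; the function name and the quadratic shape model below are assumptions for illustration only:

```python
import numpy as np

def whisker_curvature(xs, ys):
    """Fit a quadratic to tracked whisker points (base to tip) and
    return the fit coefficients plus the curvature at the base.
    A quadratic is the simplest model that captures whisker bending."""
    a, b, c = np.polyfit(xs, ys, 2)          # y ~ a*x^2 + b*x + c
    x0 = xs[0]                               # evaluate curvature at the base
    yp = 2 * a * x0 + b                      # first derivative y'(x0)
    ypp = 2 * a                              # second derivative y''(x0)
    kappa = abs(ypp) / (1 + yp**2) ** 1.5    # curvature formula for y(x)
    return (a, b, c), kappa

# toy example: a gently bent whisker with known curvature 0.1 at the base
xs = np.linspace(0.0, 10.0, 25)
ys = 0.05 * xs**2
_, kappa = whisker_curvature(xs, ys)
```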


1996 ◽ Vol 64 (2) ◽ pp. 209-218
Author(s): Gert Stange, Roland Hengstenberg

2013 ◽ Vol 24 (4) ◽ pp. 371-378
Author(s): Jeremy F. Magland, Anna Rose Childress

Machine vision systems play a significant role in visual monitoring. With the help of stereo vision and machine learning, a robot can mimic a human-like visual system and behaviour towards its environment. In this paper, we present a stereo-vision-based 3-DOF robot for monitoring places remotely through a cloud server and internet devices. The robot reproduces human-like head movements (yaw, pitch, roll), produces 3D stereoscopic video, and streams it in real time. The video stream is delivered to the user through any generic internet device with VR box support, such as a smartphone, giving the user a first-person, real-time 3D experience, while the user's head motion is transferred back to the robot, also in real time. The robot can also track moving objects and faces as targets using deep neural networks, which enables it to act as a standalone monitoring robot. The user can choose specific subjects to monitor in a space. Stereo vision lets the system estimate the depth of detected objects; these distances are used to track objects of interest and are sent to the cloud. A full working prototype demonstrates the capabilities of a monitoring system based on stereo vision, robotics, and machine learning.
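The depth estimation the abstract relies on follows the standard pinhole stereo relation Z = f·B/d, where f is the focal length in pixels, B the camera baseline, and d the disparity. The sketch below is a minimal illustration of that relation, not code from the paper; the function name and example numbers are assumptions:

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Pinhole stereo model: depth Z = f * B / d.
    focal_px     - focal length in pixels
    baseline_m   - distance between the two cameras, in metres
    disparity_px - horizontal pixel shift of the same point between views"""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# e.g. a 700 px focal length, 6 cm baseline, and 30 px disparity
z = stereo_depth(700, 0.06, 30)   # depth in metres
```

Note that depth resolution degrades with distance: the same one-pixel disparity error corresponds to a much larger depth error for far objects than for near ones.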


1996 ◽ Vol 85 (2) ◽ pp. 287-292
Author(s): Martin J. Ryan, Robert K. Erickson, David N. Levin, Charles A. Pelizzari, R. Loch Macdonald, ...

The accuracy of a novel frameless stereotactic system was determined during 10 surgeries performed to resect brain tumors. An array of three charge-coupled device cameras tracked the locations of infrared light-emitting diodes on a hand-held stylus and on a reference frame attached to the patient's skull with a single bone screw. Patient-image registration was achieved retrospectively by digitizing randomly chosen scalp points with the system and fitting them to a scalp surface model derived from magnetic resonance (MR) images. The reference frame enabled continual correction for patient head movements, so registration was maintained even when the patient's head was not immobilized in a surgical clamp. The location of the stylus was displayed in real time on cross-sectional and three-dimensional MR images of the head; this information was used to predict the locations of small intracranial lesions. The average distance (and standard deviation) between the actual position of the mass and its stereotactically predicted location was 4.8 ± 3.5 mm. The authors conclude that frameless stereotaxy can accurately localize intracranial masses without fiducial markers during presurgical imaging and without immobilizing the patient's head during surgery.
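Patient-image registration of the kind described (fitting digitized points to an MR-derived model) is commonly built on least-squares rigid alignment. A minimal paired-point sketch using the Kabsch/SVD method is shown below; the paper's actual surface-based fitting to randomly chosen scalp points is more involved, and every name here is illustrative:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch method): find R, t that
    minimize ||R @ src_i + t - dst_i||^2 over paired (N, 3) point sets."""
    src_c = src - src.mean(axis=0)           # centre both point clouds
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# sanity check: recover a known 30-degree rotation plus a translation
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -2.0, 1.0])
pts = np.random.default_rng(0).normal(size=(20, 3))
moved = pts @ R_true.T + t_true
R_est, t_est = rigid_register(pts, moved)
rot_err = float(np.abs(R_est - R_true).max())
```

Residual fit error after such an alignment (here, the millimetre-scale target registration error the authors report) is the usual figure of merit for a stereotactic system.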


2009 ◽ Vol 364 (1535) ◽ pp. 3485-3495
Author(s): Steven M. Boker, Jeffrey F. Cohn, Barry-John Theobald, Iain Matthews, Timothy R. Brick, ...

When people speak with one another, they tend to adapt their head movements and facial expressions in response to each other's head movements and facial expressions. We present an experiment in which confederates' head movements and facial expressions were motion-tracked during videoconference conversations, an avatar face was reconstructed in real time, and naive participants spoke with the avatar face. No naive participant guessed that the computer-generated face was not video. Confederates' facial expressions, vocal inflections, and head movements were attenuated at 1 min intervals in a fully crossed experimental design. Attenuated head movements led to increased head nods and lateral head turns, and attenuated facial expressions led to increased head nodding in both naive participants and confederates. Together, these results are consistent with the hypothesis that the dynamics of head movements in dyadic conversation include a shared equilibrium. Although both conversational partners were blind to the manipulation, when the apparent head movement of one conversant was attenuated, both partners responded by increasing the velocity of their head movements.


1979 ◽ Vol 44 ◽ pp. 41-47
Author(s): Donald A. Landman

This paper describes some recent results of our quiescent prominence spectrometry program at the Mees Solar Observatory on Haleakala. The observations were made with the 25 cm coronagraph/coudé spectrograph system using a silicon vidicon detector. This detector consists of 500 contiguous channels covering approximately 6 Å or 80 Å, depending on the grating used. The instrument is interfaced to the Observatory’s PDP 11/45 computer system, and has the important advantages of wide spectral response, linearity, and signal averaging with real-time display. Its principal drawback is the relatively small target size. For the present work, the aperture was about 3″ × 5″. Absolute intensity calibrations were made by measuring quiet regions near sun center.
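From the figures in the abstract, the average spectral coverage per detector channel follows directly: 500 channels spanning 6 Å gives 0.012 Å per channel, while spanning 80 Å gives 0.16 Å per channel. A trivial sketch of that arithmetic (the function name is assumed, and the per-channel figure is an average, not a measured resolution):

```python
def dispersion_per_channel(span_angstrom, n_channels=500):
    """Average spectral coverage (angstroms) of one vidicon channel."""
    return span_angstrom / n_channels

high_dispersion = dispersion_per_channel(6.0)    # narrow-span grating
low_dispersion = dispersion_per_channel(80.0)    # wide-span grating
```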

