head orientation
Recently Published Documents

TOTAL DOCUMENTS: 453 (FIVE YEARS: 101)
H-INDEX: 42 (FIVE YEARS: 4)

Author(s):  
Afizan Azman ◽  
Mohd. Fikri Azli Abdullah ◽  
Sumendra Yogarayan ◽  
Siti Fatimah Abdul Razak ◽  
Hartini Azman ◽  
...  

Cognitive distraction is one of several contributory factors in road accidents, and a number of cognitive distraction detection methods have been developed. One of the most popular is based on physiological measurement: head orientation, gaze rotation, blinking, and pupil diameter are among the physiological parameters commonly measured for driver cognitive distraction. In this paper, lips and eyebrows are studied. These facial-expression features are readily apparent and easily measured when a person is cognitively distracted, and several types of lip and eyebrow movement can be captured as indicators of cognitive distraction. Correlation and classification techniques are used for performance measurement and comparison. A real-time driving experiment was set up, and faceAPI was installed in the car to capture the driver's facial expression. Linear regression, support vector machine (SVM), static Bayesian network (SBN), and logistic regression (LR) are used in this study, and a dynamic Bayesian network (DBN) with different confidence levels is also used to classify whether a driver is distracted. Results showed that lips and eyebrows are strongly correlated and play a significant role in improving cognitive distraction detection.
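The classification step described above can be illustrated with a minimal sketch: training an SVM (one of the classifier families the abstract names) to separate distracted from attentive samples. The feature values below are invented placeholders for lip and eyebrow movement magnitudes, not the study's faceAPI measurements.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
# Columns: lip movement magnitude, eyebrow movement magnitude (arbitrary units).
# Distracted drivers are assumed here to show larger movements; these clusters
# are synthetic, chosen only to make the two classes separable.
distracted = rng.normal(loc=[0.8, 0.7], scale=0.2, size=(n, 2))
attentive = rng.normal(loc=[0.3, 0.2], scale=0.2, size=(n, 2))
X = np.vstack([distracted, attentive])
y = np.array([1] * n + [0] * n)  # 1 = distracted, 0 = attentive

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

In practice the features would come from per-frame facial tracking rather than random draws, and model comparison (SVM vs. SBN vs. LR) would use a common cross-validation protocol.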


Author(s):  
Miranda Huang ◽  
Abby Jones ◽  
Afsoon Sabet ◽  
Jillian Masters ◽  
Natalie Dearing ◽  
...  

Tick-borne diseases are on the rise globally; however, information is lacking about tick questing behavior. In this laboratory study, we explored tick preferences for stem type (plastic grass, wooden, and metal), questing height, and head orientation. Using 60 Amblyomma americanum adults over three 72-hour replicates, we determined that 21.7% of ticks quested at any given time and that ticks exhibited a strong preference to quest with their heads oriented downwards, irrespective of stem type. Individual ticks tended to quest on only one stem in this study, and on at most three. Nonetheless, ticks appeared to prefer questing on wooden and plastic grass stems over metal stems. We did not find an effect of time of day on tick questing rates. An increased understanding of tick questing behavior can improve vector control efforts.


Indoor Air ◽  
2021 ◽  
Author(s):  
Jingcui Xu ◽  
Cunteng Wang ◽  
Sau Chung Fu ◽  
Christopher Y. H. Chao

2021 ◽  
Author(s):  
Jonathan Liebers ◽  
Patrick Horn ◽  
Christian Burschik ◽  
Uwe Gruenefeld ◽  
Stefan Schneegass

Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 8148
Author(s):  
Sana Sabah Al-azzawi ◽  
Siavash Khaksar ◽  
Emad Khdhair Hadi ◽  
Himanshu Agrawal ◽  
Iain Murray

Cerebral palsy (CP) is a common cause of limited motor ability, arising before birth or during infancy and early childhood. Poor head control is one of the most important problems in children with level IV or level V CP and can affect many aspects of their lives. The current visual assessment method for measuring head control ability and cervical range of motion (CROM) lacks accuracy and reliability. In this paper, a HeadUp system based on a low-cost, 9-axis inertial measurement unit (IMU) is proposed to capture and evaluate head control ability in children with CP. The proposed system wirelessly measures CROM in the frontal, sagittal, and transverse planes during ordinary life activities. It is designed to provide real-time, bidirectional communication with an Euler-based sensor fusion algorithm (SFA) that estimates head orientation and tracks head control ability. Experimental results for the proposed SFA show high accuracy in noise reduction together with fast system response. The system was clinically tested on five typically developing children and five children with CP (age range: 2–5 years). The proposed HeadUp system can be implemented as a head control trainer that motivates the child with CP, in an entertaining way, to keep their head up.
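Estimating an Euler angle from a 9-axis IMU typically means fusing integrated gyroscope rates with the tilt implied by the accelerometer's gravity reading. The complementary filter below is a generic, minimal sketch of that idea, not the HeadUp system's actual SFA; the gain, time step, and sample data are assumptions.

```python
import math

def complementary_pitch(gyro_rates, accel_samples, dt=0.01, alpha=0.98):
    """Fuse gyro pitch rate (rad/s) with accelerometer tilt (ax, az) into a pitch estimate (rad).

    The gyro term tracks fast motion; the accelerometer term slowly corrects drift.
    """
    pitch = 0.0
    for rate, (ax, az) in zip(gyro_rates, accel_samples):
        accel_pitch = math.atan2(ax, az)  # tilt inferred from the gravity vector
        # High-pass the integrated gyro, low-pass the accelerometer estimate.
        pitch = alpha * (pitch + rate * dt) + (1.0 - alpha) * accel_pitch
    return pitch

# Synthetic scenario: a stationary head tilted ~10 degrees, so the gyro reads
# zero while the accelerometer sees the tilt. The estimate converges to ~10 deg.
tilt = math.radians(10.0)
gyro = [0.0] * 500
accel = [(math.sin(tilt), math.cos(tilt))] * 500
est = complementary_pitch(gyro, accel)
print(f"estimated pitch: {math.degrees(est):.1f} deg")
```

A full 9-axis fusion would add roll and yaw (the latter stabilized by the magnetometer), but the drift-correction structure is the same per axis.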


2021 ◽  
Vol 12 ◽  
Author(s):  
Bastian I. Hougaard ◽  
Hendrik Knoche ◽  
Jim Jensen ◽  
Lars Evald

Purpose: Virtual reality (VR) and eye tracking may provide detailed insights into spatial cognition. We hypothesized that VR and eye tracking can be used to assess sub-types of spatial neglect in stroke patients that are not readily available from conventional assessments.

Method: Eighteen stroke patients with spatial neglect and 16 age- and gender-matched healthy subjects wearing VR headsets were asked to look around freely in a symmetric 3D museum scene containing three pictures. Asymmetry of performance was analyzed to reveal group-level differences and possible neglect sub-types on an individual level.

Results: Four out of six VR and eye tracking measures revealed significant differences between patients and controls in this free-viewing task. Gaze asymmetry between pictures (including fixation time and count) and head orientation were most sensitive to spatial neglect behavior in the group-level analysis. Gaze asymmetry and head orientation each identified 10 out of 18 patients (56%), compared to 12 out of 18 (67%) for the best conventional test. Two neglect patients without deviant performance on conventional measures were captured by the VR and eye tracking measures. On the individual level, five stroke patients showed deviant gaze asymmetry within pictures, and six patients showed deviant eye orientation in either direction that was not captured by the group-level analysis.

Conclusion: This study is a first step toward using VR in combination with eye tracking measures for individual differential neglect sub-type diagnostics. This may pave the way for more sensitive and elaborate sub-type diagnostics of spatial neglect, whose sub-types may respond differently to various treatment approaches.
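A gaze-asymmetry measure of the kind described can be sketched as a simple normalized index over fixation times on left- versus right-side stimuli. The formula and sample values below are illustrative assumptions, not the study's exact metric.

```python
def asymmetry_index(left_fix_s: float, right_fix_s: float) -> float:
    """Return an index in [-1, 1] over fixation times (seconds).

    Negative values mean less fixation on the left side, as expected
    in left-sided spatial neglect; 0 means symmetric viewing.
    """
    total = left_fix_s + right_fix_s
    if total == 0:
        return 0.0
    return (left_fix_s - right_fix_s) / total

# Hypothetical patient spending far less time on the left picture:
print(asymmetry_index(4.0, 16.0))   # -0.6: gaze biased away from the left
print(asymmetry_index(10.0, 10.0))  # 0.0: symmetric viewing
```

An individual would then be flagged as deviant when the index falls outside a cutoff derived from the healthy controls' distribution.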


2021 ◽  
Author(s):  
Changliang Guo ◽  
Garrett J. Blair ◽  
Megha Sehgal ◽  
Federico N. Sangiuliano Jimka ◽  
Arash Bellafard ◽  
...  

We present a large field of view (FOV) open-source miniature microscope (MiniLFOV) designed to extend the capabilities of the UCLA Miniscope platform to large-scale, single cell resolution neural imaging in freely behaving large rodents and head-fixed mice. This system is capable of multiple imaging configurations, including deep brain imaging using implanted optical probes and cortical imaging through cranial windows. The MiniLFOV interfaces with existing open-source UCLA Miniscope DAQ hardware and software, can achieve single cell resolution imaging across a 3.6 × 2.7 mm field of view at 23 frames per second, has an electrically adjustable working distance of up to 3.5 mm ± 150 µm using an onboard electrowetting lens, incorporates an absolute head-orientation sensor, and weighs under 14 grams. The MiniLFOV provides a 30-fold larger FOV and 20-fold better sensitivity than Miniscope V3, and a 12-fold larger FOV with 2-fold better sensitivity than Miniscope V4. Power and data transmission are handled through a single, flexible coaxial cable down to 0.3 mm in diameter, facilitating naturalistic behavior. We validated the MiniLFOV in freely behaving rats by simultaneously imaging >1000 GCaMP7s-expressing neurons in the CA1 layer of the hippocampus, and in head-fixed mice by simultaneously imaging ~2000 neurons in the dorsal cortex through a 4 × 4 mm cranial window. For freely behaving experiments, the MiniLFOV supports optional wire-free operation using a 3.5 g wire-free data acquisition expansion board, which enables close to 1 hour of wire-free recording with a 400 mAh (7.5 g) on-board single-cell lithium-polymer battery and extends wire-free imaging techniques to larger animal models. We expect this new open-source implementation of the UCLA Miniscope platform will enable researchers to address novel hypotheses concerning brain function in freely behaving animals.


2021 ◽  
Vol 69 (4) ◽  
pp. 87-94
Author(s):  
Radu-Daniel BOLCAȘ ◽  
Diana DRANGA

Facial expression recognition (FER) is a field in which many researchers have tried to create models able to recognize emotions from a face. With applications such as human–machine interfaces, safety, and medicine, the field has continued to develop as processing power has increased. This paper contains a broad description of the psychological aspects of FER and describes the datasets and algorithms that make the neural networks possible. A literature review is then performed on recent studies in facial emotion recognition, detailing the methods and algorithms used to improve the capabilities of systems using machine learning. Each interesting aspect of the studies is discussed to highlight the novelty and the related concepts and strategies that allow the recognition to attain good accuracy. In addition, challenges related to machine learning are discussed, such as overfitting, its possible causes, and solutions, along with challenges related to the dataset, such as expression-unrelated discrepancies in head orientation, illumination, and dataset class bias. These aspects are discussed in detail, so that this review of the difficulties that come with using deep neural networks can serve as a guideline for advancing the domain. Finally, these challenges offer insight into possible future directions for developing better FER systems.


Mathematics ◽  
2021 ◽  
Vol 9 (22) ◽  
pp. 2889
Author(s):  
Vassilis G. Kaburlasos ◽  
Chris Lytridis ◽  
Eleni Vrochidou ◽  
Christos Bazinas ◽  
George A. Papakostas ◽  
...  

Social robots keep proliferating. A critical challenge remains their sensible interaction with humans, especially in real-world applications. Hence, computing with real-world semantics is instrumental. Recently, the Lattice Computing (LC) paradigm has been proposed, with a capacity to compute with semantics represented by a partial order in a mathematical lattice data domain. In this context, this work proposes a parametric LC classifier, namely a Granule-based-Classifier (GbC), applicable in a mathematical lattice (T,⊑) of tree data structures, each of which represents a human face. A tree data structure here emerges from 68 facial landmarks (points) computed in a data preprocessing step by the OpenFace software. The proposed (tree) representation retains human anonymity during data processing. Extensive computational experiments on three different pattern recognition problems, namely (1) head orientation, (2) facial expressions, and (3) human face recognition, demonstrate the capacities of the GbC, including good classification results and a common human face representation across the different pattern recognition problems, as well as data-induced granular rules in (T,⊑) that allow for (a) explainable decision-making, (b) tunable generalization, enabled also by formal logic/reasoning techniques, and (c) an inherent capacity for modular data fusion extensions. The potential of the proposed techniques is discussed.
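The lattice operations underlying Lattice Computing can be shown on a much simpler data domain than the paper's tree lattice (T,⊑): closed intervals ordered by inclusion. This toy sketch is an invented illustration of join, meet, and the partial order, not the GbC itself.

```python
# Intervals are (lo, hi) tuples; the partial order is set inclusion.

def join(a, b):
    """Least upper bound: the smallest interval containing both arguments."""
    return (min(a[0], b[0]), max(a[1], b[1]))

def meet(a, b):
    """Greatest lower bound: the interval intersection (None if empty)."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

def leq(a, b):
    """Partial order a ⊑ b: a is contained in b."""
    return b[0] <= a[0] and a[1] <= b[1]

a, b = (1.0, 3.0), (2.0, 5.0)
print(join(a, b))          # (1.0, 5.0)
print(meet(a, b))          # (2.0, 3.0)
print(leq(meet(a, b), a))  # True: the meet lies below both arguments
```

In an LC classifier, a "granule" generalizes training data by joining nearby elements, and classification tests whether a new element is below (⊑) a learned granule, which is what makes the decision rules directly inspectable.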


Author(s):  
Johannes M. Arend ◽  
Tim Lübeck ◽  
Christoph Pörschmann

High-quality rendering of spatial sound fields in real-time is becoming increasingly important with the steadily growing interest in virtual and augmented reality technologies. Typically, a spherical microphone array (SMA) is used to capture a spatial sound field. The captured sound field can be reproduced over headphones in real-time using binaural rendering, virtually placing a single listener in the sound field. Common methods for binaural rendering first spatially encode the sound field by transforming it to the spherical harmonics domain and then decode the sound field binaurally by combining it with head-related transfer functions (HRTFs). However, these rendering methods are computationally demanding, especially for high-order SMAs, and require implementing quite sophisticated real-time signal processing. This paper presents a computationally more efficient method for real-time binaural rendering of SMA signals by linear filtering. The proposed method allows representing any common rendering chain as a set of precomputed finite impulse response filters, which are then applied to the SMA signals in real-time using fast convolution to produce the binaural signals. Results of the technical evaluation show that the presented approach is equivalent to conventional rendering methods while being computationally less demanding and easier to implement using any real-time convolution system. However, the lower computational complexity goes along with lower flexibility. On the one hand, encoding and decoding are no longer decoupled, and on the other hand, sound field transformations in the SH domain can no longer be performed. Consequently, in the proposed method, a filter set must be precomputed and stored for each possible head orientation of the listener, leading to higher memory requirements than the conventional methods. As such, the approach is particularly well suited for efficient real-time binaural rendering of SMA signals in a fixed setup where usually a limited range of head orientations is sufficient, such as live concert streaming or VR teleconferencing.
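The core filtering idea can be sketched in a few lines: for one fixed head orientation, each microphone signal is convolved with its precomputed left- and right-ear FIR filter and the results are summed into the binaural output. The filters and signals below are random placeholders, not real SMA captures or HRTF-derived filters, and the array size is an arbitrary assumption.

```python
import numpy as np
from scipy.signal import fftconvolve  # fast (FFT-based) convolution

rng = np.random.default_rng(0)
n_mics, sig_len, fir_len = 32, 4800, 256
sma_signals = rng.standard_normal((n_mics, sig_len))
# One (left, right) FIR filter per microphone, precomputed offline for a
# single head orientation; a real system would store one such set per
# supported orientation.
fir_left = rng.standard_normal((n_mics, fir_len))
fir_right = rng.standard_normal((n_mics, fir_len))

left = sum(fftconvolve(sma_signals[m], fir_left[m]) for m in range(n_mics))
right = sum(fftconvolve(sma_signals[m], fir_right[m]) for m in range(n_mics))
print(left.shape, right.shape)  # each (sig_len + fir_len - 1,)
```

A real-time implementation would use partitioned (block) convolution on streaming audio rather than one offline pass, but the per-orientation filter-set structure is the same.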

