3D head model
Recently Published Documents


TOTAL DOCUMENTS: 35 (FIVE YEARS: 2)

H-INDEX: 4 (FIVE YEARS: 0)

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
L. Zastko ◽  
L. Makinistian ◽  
A. Tvarožná ◽  
F. L. Ferreyra ◽  
I. Belyaev

Abstract Whether the use of mobile phones (MP) represents a health hazard is still under debate. As part of the attempts to resolve this uncertainty, the electromagnetic fields that MP emit and receive have been extensively characterized. While the radiofrequencies (RF) have been studied exhaustively, the static magnetic fields (SMF) have received much less attention, despite a wealth of evidence demonstrating their biological effects. We acquired 2D maps of the SMF at several distances from the screens of 5 MP (models from 2013 to 2018) using a tri-axis magnetometer. We built a mathematical model to fit our measurements, extrapolated them down to the phones' screens, and calculated the SMF on the skin of a 3D head model, showing that exposure is in the µT to mT range. Our literature survey points to the need for further research not only on the biological effects of SMF and their gradients, but also on their combination with extremely low frequency (ELF) and RF fields. The study of combined fields (SMF, ELF, and RF) as similar as possible to those that occur in reality should provide a more sensible assessment of potential risks.
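
As a rough illustration of the fitting-and-extrapolation step, here is a minimal sketch that fits a power-law decay to field magnitude versus distance in log-log space and evaluates it near the screen. The measurement values, the model form, and the 1 mm evaluation distance are all hypothetical, not the authors' published data or model.

```python
# A minimal, hypothetical sketch: fit |B|(d) = b0 * d^(-n) to static
# magnetic field magnitudes measured at several distances from a phone
# screen, then evaluate the fit close to the screen. All numbers are
# invented for illustration, not the paper's data.
import numpy as np

distances = np.array([0.005, 0.010, 0.020, 0.040])      # metres
field_mag = np.array([2.0e-4, 6.0e-5, 1.5e-5, 4.0e-6])  # tesla

# Linear fit in log-log space: log|B| = log(b0) - n * log(d)
slope, intercept = np.polyfit(np.log(distances), np.log(field_mag), 1)
n, b0 = -slope, np.exp(intercept)

# Evaluate near the screen (1 mm standoff, hypothetical), since a pure
# power law diverges at d = 0.
d_near = 1e-3
print(f"fitted exponent n = {n:.2f}")
print(f"estimated |B| at {d_near * 1e3:.0f} mm: {b0 * d_near ** -n:.2e} T")
```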


2021 ◽  
Author(s):  
Natalie Schaworonkow ◽  
Vadim V Nikulin

Analyzing non-invasive recordings of electroencephalography (EEG) and magnetoencephalography (MEG) directly in sensor space, using the signal from individual sensors, is a convenient and standard way of working with this type of data. However, volume conduction introduces considerable challenges for sensor-space analysis. While the general idea of signal mixing due to volume conduction in EEG/MEG is recognized, its implications have not yet been clearly exemplified. Here, we illustrate how different types of activity overlap at the level of individual sensors. We show spatial mixing in the context of alpha rhythms, which are known to have generators in different areas of the brain. Using simulations with a realistic 3D head model and lead field, together with analysis of a large resting-state EEG dataset, we show that electrode signals can be differentially affected by spatial mixing, as quantified by a sensor complexity measure. While prominent occipital alpha rhythms result in less heterogeneous spatial mixing at posterior electrodes, central electrodes show a diverse mixture of rhythms. This makes individual contributions, such as the sensorimotor mu rhythm and temporal alpha rhythms, hard to disentangle from the dominant occipital alpha. Additionally, we show how strong occipital rhythms can contribute the majority of activity at frontal channels, potentially compromising analyses conducted solely in sensor space. We also outline specific consequences of signal mixing for frequently used assessments of power, power ratios, and connectivity profiles in basic research and neurofeedback applications. With this work, we hope to illustrate the effects of volume conduction in a concrete way, so that the practical illustrations provided may help EEG researchers evaluate whether sensor space is an appropriate choice for their topic of investigation.
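
The toy simulation below illustrates the core mixing effect: sensor signals are weighted sums of source time courses through a lead field, so a strong occipital alpha source can dominate a central sensor. The 2×2 lead field, source amplitudes, and noise level are invented for illustration; a real lead field would come from a forward model on a 3D head geometry.

```python
# Toy sketch of volume conduction: each sensor records a weighted sum of
# source activity, with weights given by a (here invented) lead field.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / 250)                         # 10 s at 250 Hz

occipital_alpha = 3.0 * np.sin(2 * np.pi * 10 * t)    # strong 10 Hz source
sensorimotor_mu = 1.0 * np.sin(2 * np.pi * 11 * t)    # weaker 11 Hz source
sources = np.vstack([occipital_alpha, sensorimotor_mu])

# Hypothetical lead field: rows = sensors (posterior, central), columns =
# sources (occipital alpha, mu). Off-diagonal weights model spatial
# mixing: the central sensor still picks up occipital alpha.
leadfield = np.array([[1.0, 0.1],
                      [0.6, 0.5]])
sensors = leadfield @ sources + 0.1 * rng.standard_normal(sources.shape)

# Share of the central sensor's variance contributed by the occipital
# source alone: with these weights it dominates, so the mu rhythm is
# hard to disentangle in sensor space.
occipital_at_central = leadfield[1, 0] * occipital_alpha
print(f"occipital share: {np.var(occipital_at_central) / np.var(sensors[1]):.2f}")
```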


2017 ◽  
Vol 65 (5) ◽  
pp. 733-739
Author(s):  
I. Lüsi ◽  
G. Anbarjafari

Abstract Real-time mimicking of human facial movement on a 3D head model is a challenge that has attracted the attention of many researchers. In this work we propose a new method for enhancing the capture of lip shape. We present an automatic lip-movement tracking method which employs a cosine function to interpolate between extracted lip features, making the detection more accurate. To test the proposed method, we study mimicking a speaker's lip movements on a 3D head model. A Microsoft Kinect II is used to capture video; both RGB and depth information are used to locate the speaker's mouth, after which a cosine function is fitted to track changes in the features extracted from the lips.
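
Here is a minimal sketch of cosine interpolation between two lip-feature values, the kind of smoothing the abstract describes; the feature values and the number of intermediate steps are hypothetical.

```python
# Cosine interpolation between two lip-feature values: smoother than
# linear interpolation because the slope is zero at both endpoints.
import numpy as np

def cosine_interp(a: float, b: float, t: float) -> float:
    """Interpolate from a to b for t in [0, 1] along a half cosine."""
    w = (1.0 - np.cos(np.pi * t)) / 2.0
    return a * (1.0 - w) + b * w

# Hypothetical mouth-opening feature at two consecutive key frames,
# upsampled to intermediate steps for smoother animation of the model.
open_prev, open_next = 0.20, 0.65
for step in range(6):
    t = step / 5
    print(f"t={t:.1f}  feature={cosine_interp(open_prev, open_next, t):.3f}")
```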


2017 ◽  
Vol 96 (3) ◽  
pp. 3317-3331 ◽  
Author(s):  
Elisa Ricci ◽  
Ernestina Cianca ◽  
Tommaso Rossi ◽  
Marina Diomedi ◽  
Parth Deshpande

Author(s):  
Ahmad Zamzuri Mohamad Ali ◽  
Wee Hoe Tan

A 3D talking-head mobile app presents the head of a computer-generated, three-dimensional animated character that can talk or hold a conversation with human users. It is commonly used for language learning or entertainment, so its quality is determined by the accuracy and authenticity of lip synchronization and facial expressions. A typical 3D talking-head mobile app is built from six key components: an animated 3D head model, voice-over scripts, background audio, background graphics, navigational buttons, and instructional captions and subtitles. When the app is meant for educational purposes, integrating these components requires proficiency in creating an animated 3D talking head, authoring a mobile app, and understanding pedagogical principles for mobile-assisted language learning. Mastery of these areas is essential to keep abreast of advances in mobile technologies and future research directions.
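
Purely to make the six-component structure concrete, here is a hypothetical configuration sketch grouping those components into one data structure; the field names and asset file names are invented, not an API from the paper.

```python
# Hypothetical grouping of the six components of a 3D talking-head app.
from dataclasses import dataclass

@dataclass
class TalkingHeadApp:
    head_model: str            # animated 3D head model asset
    voice_over_scripts: list   # narration scripts to lip-sync
    background_audio: str      # music or ambient track
    background_graphics: str   # scene backdrop asset
    navigation_buttons: list   # app navigation controls
    captions_subtitles: bool   # instructional captions on/off

app = TalkingHeadApp(
    head_model="tutor_head.glb",
    voice_over_scripts=["lesson01.txt"],
    background_audio="ambient.ogg",
    background_graphics="classroom.png",
    navigation_buttons=["play", "pause", "repeat"],
    captions_subtitles=True,
)
```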


2014 ◽  
Vol 654 ◽  
pp. 287-290
Author(s):  
Lu Feng ◽  
Quan Fu ◽  
Xiang Long ◽  
Zhuang Zhi Wu

This paper presents a novel and efficient keypoint recognition framework for 3D head models based on geometry images. Building on conformal mapping and a diffusion scale space, our method uses SIFT to extract and describe keypoints of a 3D head model. We use this framework to identify keypoints of the human head. Experiments show the robustness and efficiency of our method.
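
The final stage of such a pipeline, SIFT keypoint extraction on a geometry image, can be sketched with OpenCV as below; producing the geometry image via conformal mapping is assumed to happen upstream, and the input file name is hypothetical.

```python
# Sketch: run SIFT on a geometry image (a 2D parameterization of the 3D
# head surface). The geometry image itself is assumed precomputed.
import cv2

img = cv2.imread("head_geometry_image.png", cv2.IMREAD_GRAYSCALE)
assert img is not None, "geometry image not found"

sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

# Each keypoint's (x, y) in the geometry image maps back through the
# parameterization to a point on the 3D head surface.
print(f"detected {len(keypoints)} candidate keypoints")
```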

