The Sounds and Moves of ibtiẓāl in 20th-Century Iran

2016 ◽  
Vol 48 (1) ◽  
pp. 151-155 ◽  
Author(s):  
Ida Meftahi

The scene opens with the camera zooming in on a small raised stage where a group of muṭribs (minstrel performers) are enacting a rūḥawzī piece. At stage left, a young man is singing a love song that describes the physical features of his beloved, Chihilgis. He is accompanied by an ensemble that plays rhythmic music (in 6/8 meter) on traditional Iranian instruments—the tunbak, the tār, and the kamānchih. Standing next to the singer is Chihilgis, performed by a cross-dressed performer (zanpūsh) who sports a long wig and moves flirtatiously to the song, making coquettish gestures with the eyes, lips, and shoulders. Chihilgis then joins the dance center stage with the two other main characters: the protagonist, enacted by the black-faced performer Mubarak, who has a tambourine (dāyirih) in hand; and Haji, Chihilgis’ old father, who sports a white cotton beard. With variations based on the characters, the dance consists of typical muṭribī moves, including exaggerated wrist and hip rotations, facial gestures such as blinking, and sliding head movements. This musical segment is followed by a witty, humorous dialogue, with sexual undertones, between Mubarak and Haji.

Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7206
Author(s):  
Jinhyuk Kim ◽  
Jaekwang Cha ◽  
Shiho Kim

The typical configuration of virtual reality (VR) devices consists of a head-mounted display (HMD) and handheld controllers. As such, these units have limited utility in tasks that require hands-free operation, such as surgical procedures or assembly work in cyberspace. We propose a user interface for a VR headset based on the wearer’s facial gestures, enabling hands-free interaction similar to a touch interface. By sensing and recognizing the expressions associated with intentional in situ movements of the user’s facial muscles, we define a set of commands that combine predefined facial gestures with head movements. This is achieved by utilizing six pairs of infrared (IR) photocouplers positioned at the foam interface of an HMD. We demonstrate the usability, user experience, and performance of the proposed command set in an experimental VR game played without any additional controllers. We obtained more than 99% recognition accuracy for each facial gesture across the three stages of experimental tests. The proposed input interface is a cost-effective and efficient solution that facilitates hands-free operation of a VR headset using IR photocouplers built into the foam interface, giving the HMD a hands-free user interface similar to the touch-screen experience of a smartphone.
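The abstract does not spell out the decoding step, but a minimal sketch of how six-channel IR readings might be thresholded into gestures and fused with head yaw could look like the following. The channel layout, gesture names, thresholds, and command mapping here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Hypothetical labels for six IR photocoupler channels in the HMD foam
# interface (assumed layout; the paper's actual channel mapping may differ).
CHANNELS = ["brow_L", "brow_R", "cheek_L", "cheek_R", "temple_L", "temple_R"]

def classify_gesture(baseline, sample, threshold=0.15):
    """Label a facial gesture from six IR reflectance readings.

    baseline: per-channel readings captured with a neutral face.
    sample:   current readings.
    A channel counts as "active" when its relative deviation from the
    neutral baseline exceeds the threshold.
    """
    base = np.asarray(baseline, dtype=float)
    delta = (np.asarray(sample, dtype=float) - base) / base
    active = {ch for ch, d in zip(CHANNELS, delta) if abs(d) > threshold}

    if {"brow_L", "brow_R"} <= active:
        return "eyebrows_raised"
    if {"cheek_L", "cheek_R"} <= active:
        return "smile"
    if active and active <= {"cheek_L", "temple_L"}:
        return "wink_left"
    if active and active <= {"cheek_R", "temple_R"}:
        return "wink_right"
    return "neutral"

def command(gesture, head_yaw_deg):
    """Fuse a recognized gesture with head yaw into a composite command,
    mirroring the abstract's idea of combining facial gestures with head
    movements (the +/-10 degree dead zone is an arbitrary choice)."""
    if gesture == "neutral":
        return None
    if head_yaw_deg < -10:
        direction = "left"
    elif head_yaw_deg > 10:
        direction = "right"
    else:
        direction = "center"
    return f"{gesture}+{direction}"

# Example: both brow channels deviate from baseline while the head is
# turned left, yielding "eyebrows_raised+left".
baseline = [1.00, 1.00, 1.00, 1.00, 1.00, 1.00]
sample = [1.30, 1.28, 1.02, 0.99, 1.01, 1.00]
print(command(classify_gesture(baseline, sample), head_yaw_deg=-15.0))
```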


Author(s):  
Ana Maria Alves

Throughout the 20th century, the literature of “engagement” took center stage in the hands of Sartre, emphasizing the responsibility of the writer. Our purpose is to reflect on the way texts from the end of the last century have somehow transformed this modality of “engagement.” In addition, we will try to ascertain and present what the new methods of literary intervention consist of.


2021 ◽  
Author(s):  
Emmanuele Tidoni ◽  
Henning Holle ◽  
Michele Scandola ◽  
Igor Schindler ◽  
Loron E. Hill ◽  
...  

Interpreting the behaviour of autonomous machines will be a daily activity for future generations. Yet, surprisingly little is currently known about how people ascribe intentions to human-like and non-human-like agents or objects. In a series of six experiments, we compared people’s ability to extract non-mentalistic (i.e., where an agent is looking) and mentalistic (i.e., what an agent is looking at; what an agent is going to do) information from identical gaze and head movements performed by humans, human-like robots, and a non-human-like object. Results showed that people are faster to infer the mental content of human agents than of robotic agents. Furthermore, the form of the non-human entity may differentially engage mentalizing processes depending on how human-like its appearance is. These results are not easily explained by non-mentalizing strategies (e.g., spatial accounts), as we observed no clear differences in control conditions across the three agents. Overall, the results suggest that human-like robotic actions may be processed differently from both humans’ and objects’ behaviour. We discuss the extent to which these findings inform our understanding of the role an agent’s or object’s physical features play in triggering mentalizing abilities, and their relevance for human–robot interaction.


Author(s):  
W. Engel ◽  
M. Kordesch ◽  
A. M. Bradshaw ◽  
E. Zeitler

Photoelectron microscopy is as old as electron microscopy itself. Electrons liberated from the object surface by photons are utilized to form an image that is a map of the object's emissivity. This physical property is a function of many parameters, some depending on the physical features of the object and others on the conditions of the instrument rendering the image.

The electron-optical situation is tricky, since the lateral resolution increases with the electric field strength at the object's surface. This, in turn, leads to small distances between the electrodes, restricting the photon flux that should be high for the sake of resolution.

The electron-optical development came to fruition in the sixties. Figure 1a shows a typical photoelectron image of a polycrystalline tantalum sample irradiated by the UV light of a high-pressure mercury lamp.
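As a rough aside, the trade-off can be made concrete with the standard cathode-lens estimate (a textbook scaling, not a result quoted from this paper): for photoelectrons emitted with an initial energy spread ε into an accelerating field of strength E at the sample surface, the attainable lateral resolution is on the order of d ≈ ε/(eE). For a fixed accelerating voltage, E can therefore only be raised by shrinking the electrode gap, which is exactly what leaves too little room for the illuminating photon flux.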


1999 ◽  
Vol 58 (3) ◽  
pp. 170-179 ◽  
Author(s):  
Barbara S. Muller ◽  
Pierre Bovet

Twelve blindfolded subjects localized two different pure tones, randomly played by eight sound sources in the horizontal plane. Subjects either could or could not use the information supplied by their pinnae (external ears) and by their head movements. We found that both the pinnae and head movements had a marked influence on auditory localization performance with this type of sound. The effects of pinnae and head movements seemed to be additive; the absence of either factor produced the same loss of localization accuracy and even much the same error pattern. Head movement analysis showed that subjects turned their faces towards the emitting sound source, except for sources located exactly in front or exactly in the rear, which were identified by turning the head to both sides. Head movement amplitude increased smoothly as the sound source moved from the anterior to the posterior quadrant.

