Gravity Influences How We Expect a Cursor to Move

Perception ◽  
2021 ◽  
pp. 030100662110652
Author(s):  
Eli Brenner ◽  
Milan Houben ◽  
Ties Schukking ◽  
Emily M. Crowe

We expect a cursor to move upwards when we push our computer mouse away. Do we expect it to move upwards on the screen, upwards with respect to our body, or upwards with respect to gravity? To find out, we asked participants to perform a simple task that involved guiding a cursor with a mouse. Participants who were sitting upright took longer to reach targets with the cursor if the screen was tilted, so directions on the screen are not the only ones that matter. Tilted participants’ performance was indistinguishable from that of upright participants when the screen was tilted slightly in the same direction. Thus, the screen's orientation with respect to both the body and gravity is relevant. Considering published estimates of the ocular counter-roll induced by head tilt, it is possible that participants actually expect the cursor to move in a certain direction on their retina.

1995 ◽  
Vol 5 (1) ◽  
pp. 1-17
Author(s):  
Tamara L. Chelette ◽  
Eric J. Martin ◽  
William B. Albery

The effect of head tilt on the perception of self-orientation while in a greater than one G environment was studied in nine subjects using the Armstrong Laboratory Dynamic Environment Simulator. After a 12-s stabilization period at a constant head tilt and G level, subjects reported their perception of the horizon by placing their right hand in a position they believed to be horizontal. Head tilt conditions ranged from -30° to +45° pitch over each of three head yaw positions. G levels ranged from one to four and were in the longitudinal axis of the body (Gz). Hand position was recorded in both the pitch and roll body axes. A function of head tilt did improve the fit of a multiple regression model to the collected data in both the pitch and roll axes (P < .05). The best fit was accomplished with a nonlinear function of G and head pitch. When the head remained level but the environment tilted with respect to the G vector (at angles similar to those perceived during head tilt), subjects accurately reported the environmental tilt. Head tilt under G can result in vestibular-based illusory perception of environmental tilt. Actual environmental tilt is accurately perceived due to added channels of haptic information.


2014 ◽  
Vol 41 (10) ◽  
pp. 869-877 ◽  
Author(s):  
Gabriel B. Dadi ◽  
Timothy R.B. Taylor ◽  
Paul M. Goodrum ◽  
William F. Maloney

Engineering information delivery can be a source of inefficient communication of design, leading to construction rework and lower worker morale. Due to errors, omissions, and misinterpretations, there remains a great opportunity to improve the traditional documentation of engineering information that craft professionals use to complete their work. Historically, physical three-dimensional (3D) models built by hand provided 3D physical representations of the project to assist in sequencing, visualization, and planning of critical construction activities. This practice has greatly diminished since the adoption of 3D computer-aided design (CAD) and building information modeling technologies. Recently, additive manufacturing (a.k.a. 3D printing) technologies have allowed for three-dimensional printing of 3D CAD models. A cognitive experiment was established to scientifically measure the effectiveness of 2D drawings, a 3D computer model, and a 3D printed model in delivering engineering information to an end-user. The 3D printed model outperformed the 2D drawings and 3D computer interface in productivity measures. This paper’s primary contribution to the body of knowledge is identification of how different mediums of engineering information influence the performance of a simple task execution.


Sensors ◽  
2018 ◽  
Vol 18 (9) ◽  
pp. 2832 ◽  
Author(s):  
Juyoung Lee ◽  
Sang Chul Ahn ◽  
Jae-In Hwang

People are interested in traveling through an infinite virtual environment, but no standard navigation method exists yet in Virtual Reality (VR). The Walking-In-Place (WIP) technique is a navigation method that simulates movement to enable immersive travel with less simulator sickness in VR. However, attaching sensors to the body is troublesome. A previously introduced method that performed WIP using an Inertial Measurement Unit (IMU) helped address this problem, as it does not require placing additional sensors on the body, and that study demonstrated acceptable WIP performance through evaluation. However, the method has limitations, including a high false step-recognition rate when the user performs other body motions within the tracking area. Previous works also did not evaluate WIP step-recognition accuracy. In this paper, we propose a novel WIP method using the position and orientation tracking provided by most PC-based VR HMDs. Our method likewise requires no additional sensors on the body and is more stable than the IMU-based method for non-WIP motions. We evaluated our method with nine subjects and found that WIP step accuracy was 99.32% regardless of head tilt, and that the error rate was 0% for the squat, a motion prone to error. We distinguish jog-in-place as “intentional motion” and all other motions as “unintentional motion”; this shows that our method correctly recognizes only jog-in-place. We also apply a saw-tooth virtual-velocity function to our method. Natural navigation is possible when this virtual-velocity approach is applied to WIP. Our method is useful for various applications that require jogging.
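The saw-tooth virtual-velocity idea can be sketched as follows. The function shape, step period, and peak speed below are assumptions for illustration, not the paper's actual parameters.

```python
def sawtooth_velocity(t, step_period=0.6, v_max=2.0):
    """Illustrative saw-tooth virtual-velocity profile for Walking-In-Place.

    Velocity ramps linearly from 0 to v_max over each step period,
    then resets, approximating the pulsed forward motion of real gait.
    t: time in seconds since jogging began.
    """
    phase = (t % step_period) / step_period  # position within the current step, 0..1
    return v_max * phase
```

Sampling this function at the HMD's frame rate and integrating it would move the viewpoint forward in pulses synchronized with the recognized jog-in-place steps.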


2000 ◽  
Vol 83 (3) ◽  
pp. 1522-1535 ◽  
Author(s):  
Karin Jaggi-Schwarz ◽  
Hubert Misslisch ◽  
Bernhard J. M. Hess

We examined the three-dimensional (3-D) spatial orientation of postrotatory eye velocity after horizontal off-vertical axis rotations by varying the final body orientation with respect to gravity. Three rhesus monkeys were oriented in one of two positions before the onset of rotation: pitched 24° nose-up or 90° nose-up (supine) relative to the earth-horizontal plane and rotated at ±60°/s around the body-longitudinal axis. After 10 turns, the animals were stopped in 1 of 12 final positions separated by 30°. An empirical analysis of the postrotatory responses showed that the resultant response plane remained space-invariant, i.e., accurately represented the actual head tilt plane at rotation stop. The alignment of the response vector with the spatial vertical was less complete. A complementary analysis, based on a 3-D model that implemented the spatial transformation and dynamic interaction of otolith and lateral semicircular canal signals, confirmed the empirical description of the spatial response. In addition, it allowed an estimation of the low-pass filter time constants in central otolith and semicircular canal pathways as well as the weighting ratio between direct and inertially transformed canal signals in the output. Our results support the hypothesis that the central vestibular system represents head velocity in gravity-centered coordinates by sensory integration of otolith and semicircular canal signals.


Author(s):  
Douglas Allchin

Amid the mantra-like rhetoric of the value of “hands-on” learning, the growth of computer “alternatives” to dissection in biology education is a striking anomaly. Instead of touching and experiencing real organisms, students now encounter life as virtual images. Hands-on, perhaps, but on a keyboard instead. Or on a computer mouse, not the living kind. This deep irony might prompt some to hastily redesign such alternatives. Or to find and adopt others. However, one could—far more deeply and profitably—view this as an occasion to reflect on the aims in teaching biology. What do computer programs and models teach? By not sacrificing any animal, one ostensibly expresses respect for life. Nothing seems more important—or moral—for a biology student to learn. Yet using this standard—respect for life—many alternatives to dissection seem deeply flawed. First, most alternatives share a fundamental destructive strategy of taking organisms apart. Each organ is removed and discarded in turn. That might seem to be the very nature of dissection. Yet some contend that “the best dissection is the one that makes the fewest cuts.” Here, the aim is discovery, not destruction. One tries to separate and clarify anatomical structures: trace pathways, find boundaries, encounter connections—quite impossible if things are precut and disappear as preformed units in a single mouse click. The “search and destroy” strategy, once common, is now justly condemned. Such dissections were never well justified. They reflect poor educational goals and fundamentally foster disrespect toward animals. Indeed, dissections may be opportunities to monitor and thus guide student attitudes. Search-and-destroy alternatives to dissection merely echo antiquated approaches. Better no dissections at all than such ill-conceived alternatives. Second, prepackaged images or take-apart models are not much better. They reduce the body to parts. No more than pieces in a mechanical clock. 
They neatly parcel the body into discrete units. However, a real body is messy. It is held together with all sorts of connective tissue.


Author(s):  
Ivette M. Morazzani ◽  
Dennis W. Hong

This paper presents work addressing the issue of standing up after falling down for a novel three-legged mobile robot, STriDER (Self-excited Tripedal Dynamic Experimental Robot). The robot is inherently stable when all three feet are on the ground due to its tripod stance, but it can still fall down if it trips while taking a step or if unexpected external forces act on it. The unique structure of STriDER makes the simple task of standing up challenging for a number of reasons: the robot's height and long limbs require high torque at the actuators due to large moment arms; the joint configuration and limb length limit the workspace where the feet can be placed on the ground for support; the compact design of the joints limits joint actuation motor output torque; and three limbs provide no extra support or stability during the process of standing up. This paper examines four standing-up strategies unique to STriDER: three-foot, two-foot, and one-foot pushup, and spiral pushup. In all of these strategies, the robot places its feet or foot at desired positions and then pushes against the ground, thus lifting the body upward. The four pushup methods were analyzed and evaluated considering constraints such as static stability, friction at the feet, kinematic configuration, and joint motor torque limits, thus determining suggested design and operation parameters. The motor torque trends as the robot stands up using the pushup methods were investigated, and the results from the analysis were validated through experiments.


1997 ◽  
Vol 9 (2) ◽  
pp. 171-190 ◽  
Author(s):  
Michael C. Mozer ◽  
Peter W. Halligan ◽  
John C. Marshall

For more than a century, it has been known that damage to the right hemisphere of the brain can cause patients to be unaware of the contralesional side of space. This condition, known as unilateral neglect, represents a collection of clinically related spatial disorders characterized by the failure in free vision to respond, explore, or orient to stimuli predominantly located on the side of space opposite the damaged hemisphere. Recent studies using the simple task of line bisection, a conventional diagnostic test, have proven surprisingly revealing with respect to the spatial and attentional impairments involved in neglect. In line bisection, the patient is asked to mark the midpoint of a thin horizontal line on a sheet of paper. Neglect patients generally transect far to the right of the center. Extensive studies of line bisection have been conducted, manipulating, among other factors, line length, orientation, and position. We have simulated the pattern of results using an existing computational model of visual perception and selective attention called MORSEL (Mozer, 1991). MORSEL has already been used to model data in a related disorder, neglect dyslexia (Mozer & Behrmann, 1990). In this earlier work, MORSEL was “lesioned” in accordance with the damage we suppose to have occurred in the brains of neglect patients. The same model and lesion can simulate the detailed pattern of performance on line bisection, including the following observations: (1) no consistent across-subject bias is found in normals; (2) transection displacements are proportional to line length in neglect patients; (3) variability of displacements is proportional to line length, in both normals and patients; (4) position of the lines with respect to the body or the page on which they are drawn has little effect; and (5) for lines drawn at different orientations, displacements are proportional to the cosine of the orientation angle. 
MORSEL fails to account for one observation: across patients, the variability of displacements for a particular line length is roughly proportional to mean displacement. Nonetheless, the overall fit of the model is sufficiently good that we believe MORSEL can be used as a diagnostic tool to characterize the specific nature of a patient's deficit, with potential applications in therapy.
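Observations (2) and (5) above combine into a single proportionality: predicted transection displacement scales with line length times the cosine of the line's orientation. The slope k below is a hypothetical patient-specific constant, not a value from the paper.

```python
import math

def predicted_displacement(line_length_mm, orientation_deg, k=0.08):
    """Sketch of the length and cosine regularities in neglect line bisection.

    Rightward transection displacement (mm) grows linearly with line
    length and falls off as the cosine of the line's orientation,
    vanishing for a vertical line (90 degrees).
    """
    return k * line_length_mm * math.cos(math.radians(orientation_deg))
```

For a fixed k, a 200 mm horizontal line yields twice the displacement of a 100 mm one, and a vertical line yields essentially none, mirroring the reported pattern.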


2021 ◽  
Vol 14 ◽  
Author(s):  
Ksander N. De Winkel ◽  
Ellen Edel ◽  
Riender Happee ◽  
Heinrich H. Bülthoff

Percepts of verticality are thought to be constructed as a weighted average of multisensory inputs, but the observed weights differ considerably between studies. In the present study, we evaluate whether this can be explained by differences in how visual, somatosensory and proprioceptive cues contribute to representations of the Head In Space (HIS) and Body In Space (BIS). Participants (10) were standing on a force plate on top of a motion platform while wearing a visualization device that allowed us to artificially tilt their visual surroundings. They were presented with (in)congruent combinations of visual, platform, and head tilt, and performed Rod & Frame Test (RFT) and Subjective Postural Vertical (SPV) tasks. We also recorded postural responses to evaluate the relation between perception and balance. The perception data show that body tilt, head tilt, and visual tilt affect the HIS and BIS in both experimental tasks. For the RFT task, visual tilt induced considerable biases (≈ 10° for 36° visual tilt) in the direction of the vertical expressed in the visual scene; for the SPV task, participants also adjusted platform tilt to correct for illusory body tilt induced by the visual stimuli, but effects were much smaller (≈ 0.25°). Likewise, postural data from the SPV task indicate participants slightly shifted their weight to counteract visual tilt (0.3° for 36° visual tilt). The data reveal a striking dissociation of visual effects between the two tasks. We find that the data can be explained well using a model where percepts of the HIS and BIS are constructed from direct signals from head and body sensors, respectively, and indirect signals based on body and head signals but corrected for perceived neck tilt. These findings show that perception of the HIS and BIS derive from the same sensory signals, but with profoundly different weighting factors. 
We conclude that observations of different weightings between studies likely result from querying of distinct latent constructs referenced to the body or head in space.
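The weighted-average cue combination described above can be sketched minimally as follows. The function name and the weights are illustrative assumptions, not values fitted by the study.

```python
def fuse_tilt_cues(estimates, weights):
    """Weighted-average multisensory combination for perceived verticality.

    The percept is the weight-normalized sum of the individual sensory
    tilt estimates (in degrees), e.g. visual, vestibular, somatosensory.
    """
    total = sum(weights)
    return sum(e * w for e, w in zip(estimates, weights)) / total

# With the visual scene tilted 36 deg and the other cues signaling upright,
# a modest visual weight produces a bias on the order of the ~10 deg
# reported for the RFT task.
perceived = fuse_tilt_cues([36.0, 0.0, 0.0], [0.28, 0.36, 0.36])
```

Under this scheme, the task-dependent dissociation the authors report corresponds to querying constructs (HIS vs. BIS) that apply different weights to the same underlying signals.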


Author(s):  
Mikhail G. Grif ◽  
R. Elakkiya ◽  
Alexey L. Prikhodko ◽  
Maxim А. Bakaev ◽  
...  

In the paper, we consider recognition of sign languages (SL) with a particular focus on Russian and Indian SLs. The proposed recognition system includes five components: configuration, orientation, localization, movement and non-manual markers. The analysis uses methods of recognition of individual gestures and continuous sign speech for Indian and Russian sign languages (RSL). To recognize individual gestures, the RSL Dataset was developed, which includes more than 35,000 files for over 1000 signs. Each sign was performed with 5 repetitions by at least 5 deaf native speakers of the Russian Sign Language from Siberia. To isolate epenthesis for continuous RSL, 312 sentences with 5 repetitions were selected and recorded on video. Five types of movements were distinguished, namely, "No gesture", "There is a gesture", "Initial movement", "Transitional movement", "Final movement". The markup of sentences for highlighting epenthesis was carried out on the Supervisely platform. A recurrent network architecture (LSTM) was built and implemented using the TensorFlow Keras machine learning library. The accuracy of correct recognition of epenthesis was 95%. Work on a similar dataset for the recognition of both individual gestures and continuous Indian sign language (ISL) is continuing. To recognize hand gestures, the MediaPipe Holistic library module was used. It contains a group of trained neural network algorithms that allow obtaining the coordinates of the key points of the body, palms and face of a person in the image. An accuracy of 85% was achieved on the verification data. In the future, it is necessary to significantly increase the amount of labeled data. To recognize non-manual components, a number of rules have been developed for certain facial movements. These rules include positions for the eyes, eyelids, mouth, tongue, and head tilt.
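A recurrent classifier of the kind described above can be sketched with TensorFlow Keras: LSTM layers over per-frame keypoint features, ending in a 5-way softmax for the movement classes. Layer sizes, sequence length, and feature count are assumptions for illustration, not the paper's actual architecture.

```python
from tensorflow import keras

NUM_CLASSES = 5      # the five movement types listed above
SEQ_LEN = 30         # frames per clip (assumed)
NUM_FEATURES = 258   # e.g. flattened pose + hand keypoint coordinates (assumed)

# Stacked LSTM over the frame sequence, softmax over movement classes.
model = keras.Sequential([
    keras.layers.Input(shape=(SEQ_LEN, NUM_FEATURES)),
    keras.layers.LSTM(64, return_sequences=True),
    keras.layers.LSTM(32),
    keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

In such a pipeline, keypoint coordinates extracted per frame (e.g. by MediaPipe Holistic) would be stacked into fixed-length sequences and fed to `model.fit` with integer movement-class labels.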


2007 ◽  
Vol 38 (1) ◽  
pp. 161-179 ◽  
Author(s):  
JOHN GERRING

A widespread turn towards mechanism-centred explanations can be viewed across the social sciences in recent decades. This article clarifies what it might mean in practical terms to adopt a mechanismic view of causation. This simple task of definition turns out to be considerably more difficult than it might at first appear. The body of the article elucidates a series of tensions and conflicts within this ambient concept, looking closely at how influential authors have employed this ubiquitous term. It is discovered that ‘mechanism’ has at least nine distinct meanings as the term is used within contemporary social science: (1) the pathway or process by which an effect is produced; (2) an unobservable causal factor; (3) an easy-to-observe causal factor; (4) a context-dependent (bounded) explanation; (5) a universal (or at least highly general) explanation; (6) an explanation that presumes highly contingent phenomena; (7) an explanation built on phenomena that exhibit lawlike regularities; (8) a distinct technique of analysis (based on qualitative, case study, or process-tracing evidence); or (9) a micro-level explanation for a causal phenomenon. Some of these meanings may be combined into coherent definitions; others are obviously contradictory. It is argued, however, that only the first meaning is consistent with all contemporary usages and with contemporary practices within the social sciences; this is therefore proposed as a minimal (core) definition of the concept. The other meanings are regarded as arguments surrounding the core concept.

