A common reference frame for describing rotation of the distal femur

2009 ◽  
Vol 91-B (5) ◽  
pp. 683-690 ◽  
Author(s):  
J. Victor ◽  
D. Van Doninck ◽  
L. Labey ◽  
F. Van Glabbeek ◽  
P. Parizel ◽  
...  
2006 ◽  
Vol 96 (1) ◽  
pp. 352-362 ◽  
Author(s):  
Sabine M. Beurze ◽  
Stan Van Pelt ◽  
W. Pieter Medendorp

At some stage in the process of a sensorimotor transformation for a reaching movement, information about the current position of the hand and information about the location of the target must be encoded in the same frame of reference to compute the hand-to-target difference vector. Two main hypotheses have been proposed regarding this reference frame: an eye-centered and a body-centered frame. Here we evaluated these hypotheses using the pointing errors that subjects made when planning and executing arm movements to memorized targets starting from various initial hand positions while keeping gaze fixed in various directions. One group of subjects (n = 10) was tested without visual information about hand position during movement planning (unseen-hand condition); another group (n = 8) was tested with hand and target position simultaneously visible before movement onset (seen-hand condition). We found that both initial hand position and gaze fixation direction had a significant effect on the magnitude and direction of the pointing error. Errors were significantly smaller in the seen-hand condition. For both conditions, though, a reference frame analysis showed that the errors arose at an eye- or hand-centered stage or both, but not at a body-centered stage. As a common reference frame is required to specify a movement vector, these results suggest that an eye-centered mechanism is involved in integrating target and hand position in programming reaching movements. We discuss how simple gain elements modulating the eye-centered target and hand-position signals can account for these results.
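The eye-centered account sketched above can be made concrete with a toy computation. The following is a minimal sketch (hypothetical function names and gain values, not the authors' model): target and hand positions are re-expressed relative to gaze, gain elements scale the two eye-centered signals, and their difference yields the movement vector. With unit gains the gaze terms cancel exactly; gains other than one produce gaze-dependent pointing errors of the kind analysed here.

```python
# Toy illustration (not the paper's model): computing a reaching movement
# vector at an eye-centered stage, with simple gain elements modulating the
# eye-centered target and hand-position signals.

def to_eye_centered(pos, gaze):
    """Re-express a body-centered position relative to gaze direction."""
    return tuple(p - g for p, g in zip(pos, gaze))

def movement_vector(target, hand, gaze, target_gain=1.0, hand_gain=1.0):
    """Hand-to-target difference vector computed in eye-centered coordinates.
    Gains different from 1.0 mimic systematic, gaze-dependent pointing errors."""
    t_eye = to_eye_centered(target, gaze)
    h_eye = to_eye_centered(hand, gaze)
    return tuple(target_gain * t - hand_gain * h for t, h in zip(t_eye, h_eye))

# With unit gains the gaze terms cancel and the vector is simply target - hand:
print(movement_vector((30.0, 10.0), (10.0, 0.0), gaze=(15.0, 5.0)))  # -> (20.0, 10.0)

# With a target gain below 1.0, the planned vector undershoots and the error
# depends on gaze direction -- an eye-centered error signature.
print(movement_vector((30.0, 10.0), (10.0, 0.0), gaze=(15.0, 5.0), target_gain=0.9))
```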


2016 ◽  
Vol 6 (1) ◽  
Author(s):  
P. Häkli ◽  
M. Lidberg ◽  
L. Jivall ◽  
T. Nørbech ◽  
O. Tangen ◽  
...  

Abstract: The NKG 2008 GPS campaign was carried out from September 28 to October 4, 2008. Its purpose was to establish a common reference frame in the Nordic-Baltic-Arctic region, and to improve and update the transformations from the latest global ITRF reference frame to the national ETRS89 realizations of the Nordic/Baltic countries. Postglacial rebound in the Fennoscandian area causes intraplate deformations of up to about 10 mm/yr relative to the Eurasian tectonic plate, which need to be taken into account in order to reach centimetre-level accuracy in the transformations. We discuss some possible alternatives and present the most applicable transformation strategy. The selected transformation utilizes the de facto transformation recommended by EUREF but includes additional intraplate corrections and a new common Nordic-Baltic reference frame to serve the requirements of the Nordic/Baltic countries. To correct for the intraplate deformations in the Nordic-Baltic area we have used the common Nordic deformation model NKG RF03vel. The new common reference frame, NKG ETRF00, was aligned to ETRF2000 at epoch 2000.0 in order to be close to the national ETRS89 realizations and to coincide with the land-uplift epoch of the national height systems. We present here the realization of NKG ETRF00 and the transformation formulae, together with the parameters to transform from global ITRF coordinates to Nordic/Baltic realizations of ETRS89.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Wolf-Dieter Vogl ◽  
Hrvoje Bogunović ◽  
Sebastian M. Waldstein ◽  
Sophie Riedl ◽  
Ursula Schmidt-Erfurth

Abstract: Age-related macular degeneration (AMD) is the predominant cause of vision loss in the elderly, with a major impact on ageing societies and healthcare systems. A major challenge in AMD management is the difficulty of determining the disease stage, the highly variable progression speed and the risk of conversion to advanced AMD, where irreversible functional loss occurs. In this study we developed an optical coherence tomography (OCT) imaging-based spatio-temporal reference frame to characterize the morphologic progression of intermediate AMD and to identify distinctive patterns of conversion to the advanced stages, macular neovascularization (MNV) and macular atrophy (MA). We included 10,040 OCT volumes of 518 eyes with intermediate AMD, acquired according to a standardized protocol at monthly intervals over two years. Two independent masked retina specialists determined the time of conversion to MNV or MA. All scans were aligned to a common reference frame by intra-patient and inter-patient registration. Automated segmentations of the retinal layers and the choroid were computed, and en-face maps were transformed into the common reference frame. Population maps were constructed in the subgroups converting to MNV (n = 135) and MA (n = 50) and in non-progressors (n = 333). Topographically resolved maps of changes were computed and tested for statistically significant differences. The development over time was analysed by a joint model accounting for longitudinal and right-censoring aspects. Significantly enhanced thinning of the outer nuclear layer (ONL) and the retinal pigment epithelium (RPE)–photoreceptor inner segment/outer segment (PR-IS/OS) layers within the central 3 mm, and a faster thinning speed preceding conversion, was documented for MA progressors. Converters to MNV presented an accelerated thinning of the choroid and appearance changes in the choroid prior to MNV onset.
The large-scale automated image analysis allowed us to distinctly assess the progression of morphologic changes in intermediate AMD based on conventional OCT imaging. Distinct topographic and temporal patterns allow eyes at risk of progression to be identified prospectively, thereby greatly improving early detection and prevention and supporting the development of novel therapeutic strategies.
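The topographically resolved group comparison can be illustrated with a toy per-location test. This is a hypothetical sketch of the general idea (registered thickness maps compared location by location with a two-sample statistic), not the study's actual statistics, which rely on a joint longitudinal model:

```python
# Rough sketch of a per-location group comparison on registered maps
# (hypothetical data layout; the study's registration and statistics are
# far richer than this).

def welch_t(a, b):
    """Welch's t-statistic for two independent samples."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / ((va / na + vb / nb) ** 0.5)

# Each "map" is a tiny grid of layer thicknesses (um) in the common frame;
# one list per eye, registered so that index i means the same retinal location.
progressors    = [[80, 78], [79, 77], [81, 76]]
nonprogressors = [[90, 88], [91, 87], [89, 90]]

t_map = [welch_t([eye[i] for eye in progressors],
                 [eye[i] for eye in nonprogressors])
         for i in range(2)]
print(t_map)  # negative t: thinner layer in progressors at that location
```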


Nature ◽  
10.1038/32648 ◽  
1998 ◽  
Vol 392 (6673) ◽  
pp. 278-282 ◽  
Author(s):  
D. R. W. Wylie ◽  
W. F. Bischof ◽  
B. J. Frost

2005 ◽  
Vol 94 (4) ◽  
pp. 2331-2352 ◽  
Author(s):  
O'Dhaniel A. Mullette-Gillman ◽  
Yale E. Cohen ◽  
Jennifer M. Groh

The integration of visual and auditory events is thought to require a joint representation of visual and auditory space in a common reference frame. We investigated the coding of visual and auditory space in the lateral and medial intraparietal areas (LIP, MIP) as a candidate for such a representation. We recorded the activity of 275 neurons in LIP and MIP of two monkeys while they performed saccades to a row of visual and auditory targets from three different eye positions. We found 45% of these neurons to be modulated by the locations of visual targets, 19% by auditory targets, and 9% by both visual and auditory targets. The reference frames of both visual and auditory receptive fields ranged along a continuum between eye- and head-centered: ∼10% of auditory and 33% of visual neurons had receptive fields more consistent with an eye- than a head-centered frame of reference, while 23% and 18%, respectively, were more consistent with a head- than an eye-centered frame, leaving a large fraction of both visual and auditory response patterns inconsistent with either reference frame. The results were similar to the reference frame we have previously found for auditory stimuli in the inferior colliculus and core auditory cortex. The correspondence between the visual and auditory receptive fields of individual neurons was weak. Nevertheless, the visual and auditory responses were sufficiently well correlated that a simple one-layer network constructed to calculate target location from the activity of the neurons in our sample performed successfully for auditory targets even though its weights were fit based only on the visual responses.
We interpret these results as suggesting that although the representations of space in areas LIP and MIP are not easily described within the conventional conceptual framework of reference frames, they nevertheless process visual and auditory spatial information in a similar fashion.
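The one-layer readout described above can be illustrated with a toy example. Everything below is hypothetical (one-dimensional target locations and a population collapsed to a single simplified linear response), not the authors' network: a linear decoder is fit on simulated "visual" responses only and then applied unchanged to similar, but not identical, "auditory" responses:

```python
# Toy illustration (not the paper's decoder): fit a linear readout of target
# location from "visual" responses, then apply the same weights to "auditory"
# responses. Transfer works only because the two response sets share similar
# tuning, mirroring the weak-but-sufficient correlation reported above.

def fit_linear(xs, ys):
    """Ordinary least-squares slope and intercept for y ~ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

locations = [-20.0, -10.0, 0.0, 10.0, 20.0]   # target azimuths (deg)
visual    = [2.0 * l + 5.0 for l in locations]  # simulated visual responses
auditory  = [1.9 * l + 5.5 for l in locations]  # similar, not identical, tuning

a, b = fit_linear(visual, locations)           # weights fit on VISUAL only
decoded = [a * r + b for r in auditory]        # applied to AUDITORY responses
print(decoded)  # decoded locations approximate the true ones
```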

