Conveying trunk orientation information through a wearable tactile interface

2020 ◽  
Vol 88 ◽  
pp. 103176
Author(s):  
Roberta Etzi ◽  
Alberto Gallace ◽  
Gemma Massetti ◽  
Marco D'Agostino ◽  
Viola Cinquetti ◽  
...


Author(s):
Jonathan Ogle ◽  
Daniel Powell ◽  
Eric Amerling ◽  
Detlef Matthias Smilgies ◽  
Luisa Whittaker-Brooks

Thin film materials have become increasingly complex in morphological and structural design. When characterizing the structure of these films, a crucial question is the role that crystallite orientation plays in giving rise to unique electronic properties. It is therefore important to have a comparative tool for understanding differences in crystallite orientation within a thin film, as well as the ability to compare structural orientation between different thin films. Herein, we present a new method, dubbed the mosaicity factor (MF), to quantify crystallite orientation in thin films from grazing incidence wide-angle X-ray scattering (GIWAXS) patterns. This method overcomes limitations of previous approaches, such as sensitivity to noise, while adding the ability to compare orientation distributions along different axes and to quantify multiple crystallite orientations observed within the same Miller index. Following the presentation of MF, we discuss case studies that demonstrate its efficacy and range of application. These studies show how the MF approach yields quantitative orientation information for various materials assembled on a substrate.
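The abstract does not reproduce the MF formula, so the exact definition is left to the paper. As a rough illustration of the general workflow it describes, the sketch below (Python/NumPy, with hypothetical function names and bin widths) extracts an azimuthal intensity profile around a Bragg ring from pixel-wise GIWAXS data and reduces it to a single orientation-spread number; the stand-in metric here is a circular variance, not the authors' mosaicity factor.

```python
import numpy as np

def azimuthal_profile(intensity, q, chi, q_peak, dq=0.05):
    """Average intensity over a thin ring |q - q_peak| < dq,
    binned by azimuthal angle chi (degrees, 0-180). Inputs are
    flat arrays of equal length, one entry per detector pixel."""
    mask = np.abs(q - q_peak) < dq
    bins = np.arange(0, 181, 5)                      # 5-degree chi bins
    idx = np.digitize(chi[mask], bins)
    ring = intensity[mask]
    profile = np.array([ring[idx == i].mean() if np.any(idx == i) else 0.0
                        for i in range(1, len(bins))])
    return bins[:-1] + 2.5, profile                  # bin centers, profile

def orientation_spread(chi_deg, profile):
    """Intensity-weighted circular variance of chi: 0 for a perfectly
    oriented film, 1 for an isotropic one. An illustrative stand-in
    for an orientation metric, NOT the paper's mosaicity factor."""
    w = profile / profile.sum()
    chi2 = np.deg2rad(2.0 * chi_deg)  # double angle: chi is periodic mod 180
    r = np.hypot((w * np.cos(chi2)).sum(), (w * np.sin(chi2)).sum())
    return 1.0 - r
```

A sharply peaked azimuthal profile (well-oriented crystallites) yields a value near 0; an isotropic ring yields a value near 1.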


1998 ◽  
Vol 1 (3) ◽  
pp. 173-187
Author(s):  
Wayne J. Albert ◽  
Joan M. Stevenson ◽  
Geneviève A. Dumas ◽  
Roger W. Wheeler

The objectives of this study were to: 1) develop a dynamic 2D link-segment model for lifting using the constraints of four sensors from an electromagnetic motion analysis system; 2) evaluate the magnitude of shoulder movement in the sagittal plane during lifting; and 3) investigate the effect of shoulder translation on trunk accelerations and lumbar moments by comparing the developed model with two separate 2D dynamic link-segment models. Six women and six men lifted loads of 2 kg, 7 kg, 12 kg and 2 kg, 12 kg, 22 kg, respectively, under stoop, squat and freestyle conditions. Trunk orientation and position, as well as shoulder position, were monitored during all lifts using the Polhemus FASTRAK™. Results indicated that the average range of shoulder motion was 0.05 ± 0.02 m in the horizontal direction and 0.03 ± 0.02 m in the vertical direction. Shoulder position relative to T1 was located 0.07 ± 0.02 m anteriorly and 0.02 ± 0.04 m superiorly (0.06 and 0.00 m for males and 0.08 and 0.04 m for females, respectively). To estimate the effect of shoulder motion on trunk accelerations and L5/S1 moments, three two-dimensional dynamic link-segment models were developed within the constraints of the electromagnetic tracking system and compared. Trunk segment endpoints were defined as L5/S1 and either T1 or the shoulder, depending on model type. Average differences in trunk acceleration between models were greater than 40 deg/s² in 70.4% of cases, but these differences did not translate into significantly different moment calculations between models. Average peak dynamic L5/S1 moment differences between models were smaller than 4 Nm for all lifting conditions and were not statistically significant (p > 0.05). Model type did not have a statistically significant effect on peak L5/S1 moments. Therefore, despite substantial shoulder joint translations, peak L5/S1 moments were not significantly affected.
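To make the acceleration-versus-moment comparison concrete, here is a minimal sketch of the kind of single-segment planar L5/S1 moment calculation such link-segment models perform. All parameter values are illustrative assumptions, not taken from the study, and linear-acceleration and load-inertia terms are omitted for brevity.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def l5s1_moment(theta, alpha, m_trunk=35.0, d_com=0.25,
                i_trunk=1.6, m_load=12.0, d_load=0.55):
    """Planar L5/S1 extension moment (Nm) for one rigid trunk
    segment pinned at L5/S1.

    theta : trunk angle above the horizontal (rad)
    alpha : trunk angular acceleration (rad/s^2)

    Gravity terms use the horizontal lever arms of the trunk
    centre of mass (d_com) and the hand-held load (d_load).
    All default parameter values are illustrative."""
    gravity = (m_trunk * d_com + m_load * d_load) * G * np.cos(theta)
    return i_trunk * alpha + gravity
```

Note that with a trunk moment of inertia on the order of 1.6 kg·m², a 40 deg/s² difference in angular acceleration changes the moment by only about 1.1 Nm, which is consistent with the abstract's finding that acceleration differences above 40 deg/s² produced moment differences under 4 Nm.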


2008 ◽  
Vol 5 (1-4) ◽  
pp. 434-441 ◽  
Author(s):  
J. G. Lee ◽  
S. J. Yang ◽  
Y. H. Cho ◽  
S. K. Yoo ◽  
J. W. Park

2020 ◽  
Author(s):  
Harriet M J Smith ◽  
Sally Andrews ◽  
Thom Baguley ◽  
Melissa Fay Colloff ◽  
Josh P Davis ◽  
...  

Unfamiliar simultaneous face matching is error prone. Reducing incorrect identification decisions would benefit forensic and security contexts. The absence of view-independent information in static images likely contributes to the difficulty of unfamiliar face matching. We tested whether a novel interactive viewing procedure, which provides the user with 3D structural information as they rotate a facial image to different orientations, would improve face matching accuracy. We tested the performance of ‘typical’ (Experiment 1) and ‘superior’ (Experiment 2) face recognisers, and compared performance with high-quality (Experiment 3) and pixelated (Experiment 4) Facebook profile images. In each trial, participants responded whether two images featured the same person, with one of the images being either a static face, a video providing orientation information, or an interactive image. Taken together, the results show that fluid orientation information and interactivity prompt shifts in criterion and support matching performance. Because typical and superior face recognisers both benefited from the structural information provided by the novel viewing procedures, our results point to qualitatively similar reliance on pictorial encoding in these groups. This also suggests that interactive viewing tools can be valuable in assisting face matching in high-performing practitioner groups.
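The "shifts in criterion" reported here are equal-variance signal detection theory measures. As a brief illustration (not the authors' analysis code), sensitivity d′ and criterion c can be computed from hit and false-alarm rates on match/mismatch trials:

```python
from statistics import NormalDist

def sdt_measures(hit_rate, false_alarm_rate):
    """Equal-variance signal detection measures for a matching task.
    A 'hit' is a 'same' response on a match trial; a 'false alarm'
    is a 'same' response on a mismatch trial.
    Returns (d_prime, criterion)."""
    z = NormalDist().inv_cdf  # probit transform
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion
```

A positive criterion indicates a conservative bias (fewer "same" responses overall), so a criterion shift with unchanged d′ means a viewing procedure altered response bias rather than discriminability.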


2020 ◽  
Vol 20 (5) ◽  
pp. 60-67
Author(s):  
Dilara Gumusbas ◽  
Tulay Yildirim

The offline signature is one of the most frequently used biometric traits in daily life, and skilled forgeries pose a great challenge for offline signature verification. To differentiate forgeries, a variety of hand-crafted feature extraction methods has been investigated. Recently, however, these methods have been set aside in favour of automatic feature extraction methods such as Convolutional Neural Networks (CNNs). Although CNN-based algorithms often achieve satisfying results, they require either many training samples or pre-trained network weights. The Capsule Network has been proposed to model with less data by exploiting convolutional layers for automatic feature extraction while representing features as vectors, rather than the scalar activations of CNNs, in order to retain orientation information. Since signature samples per user are limited and feature orientations in signature samples are highly informative, this paper first evaluates the capability of the Capsule Network for signature identification on three benchmark databases. The Capsule Network achieves 97 and 96%, 94 and 89%, and 95 and 91% accuracy on the CEDAR, GPDS-100 and MCYT databases for 64×64 and 32×32 resolutions (lower than commonly used), respectively. The second aim of the paper is to assess the capability of the Capsule Network for the verification task, where it achieves average accuracies of 91, 86, and 89% on the CEDAR, GPDS-100 and MCYT databases for 64×64 resolution, respectively. Through this evaluation, the capability of the Capsule Network is demonstrated for offline verification and identification tasks.
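For context, the vector feature representations mentioned above come from the capsule formulation of Sabour, Frosst and Hinton, in which a capsule output's direction encodes pose (including orientation) and its length encodes the probability that the feature is present. A minimal NumPy sketch of the standard squash nonlinearity, not the paper's specific architecture:

```python
import numpy as np

def squash(s, eps=1e-9):
    """Capsule 'squash' nonlinearity:
        v = (|s|^2 / (1 + |s|^2)) * (s / |s|).
    Shrinks short vectors toward zero length and long vectors toward
    unit length while preserving direction, so a vector's length can
    act as a presence probability and its direction as pose."""
    s = np.asarray(s, dtype=float)
    norm2 = np.sum(s * s, axis=-1, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

def capsule_presence(v):
    """Class score of a capsule output = length of its vector."""
    return np.linalg.norm(v, axis=-1)
```

Because direction is preserved, the orientation information carried by a capsule survives the nonlinearity; only the magnitude is renormalized.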


1997 ◽  
Vol 114 (2) ◽  
pp. 384-389 ◽  
Author(s):  
J. Massion ◽  
K. Popov ◽  
J.-C. Fabre ◽  
P. Rage ◽  
V. Gurfinkel

2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Justin L. Gardner ◽  
Elisha P. Merriam

Selectivity for many basic properties of visual stimuli, such as orientation, is thought to be organized at the scale of cortical columns, making it difficult or impossible to measure directly with noninvasive human neuroscience techniques. However, computational analyses of neuroimaging data have shown that selectivity for orientation can be recovered by considering the pattern of response across a region of cortex. This suggests that computational analyses can reveal representations encoded at a finer spatial scale than is implied by the spatial resolution limits of measurement techniques, opening up the possibility of studying a much wider range of neural phenomena that are otherwise inaccessible through noninvasive measurement. However, as we review in this article, a large body of evidence suggests an alternative hypothesis to this superresolution account: that orientation information is available at the spatial scale of cortical maps and thus easily measurable at the spatial resolution of standard techniques. In fact, a population model shows that this orientation information need not even come from single-unit selectivity for orientation tuning, but instead can result from population selectivity for spatial frequency. Thus, a categorical error of interpretation can result whereby orientation selectivity is confused with spatial frequency selectivity. This is similarly problematic for the interpretation of results from numerous studies of more complex representations and cognitive functions that have built upon the computational techniques used to reveal stimulus orientation. We suggest in this review that these interpretational ambiguities can be avoided by treating computational analyses as models of the neural processes that give rise to measurement. Building upon the modeling tradition in vision science, considering whether population models meet a set of core criteria is important for creating the foundation for a cumulative and replicable approach to making valid inferences from human neuroscience measurements. Expected final online publication date for the Annual Review of Vision Science, Volume 7 is September 2021. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
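As a concrete illustration of the kind of population (encoding) model discussed here, the sketch below builds a forward model with raised half-cosine orientation channels, a common basis choice in the fMRI decoding literature (for example, in inverted encoding models). The basis shape, channel count, and exponent are illustrative assumptions, not the authors' specific model.

```python
import numpy as np

def channel_basis(theta_deg, n_channels=6, power=5):
    """Raised half-cosine orientation channels tiling 0-180 deg.
    Returns an (n_stimuli, n_channels) design matrix. The channel
    count and exponent here are illustrative choices."""
    theta_deg = np.atleast_1d(np.asarray(theta_deg, dtype=float))
    centers = np.arange(n_channels) * 180.0 / n_channels
    # work in double-angle space so orientation wraps at 180 deg
    diff = np.deg2rad(2.0 * (theta_deg[:, None] - centers[None, :]))
    return np.clip(np.cos(diff), 0.0, None) ** power

def fit_channel_weights(voxel_responses, theta_deg):
    """Forward model B = C @ W: least-squares estimate of the
    channel-by-voxel weight matrix W from training data, where
    B is (n_stimuli, n_voxels) and C is the channel design matrix."""
    C = channel_basis(theta_deg)
    W, *_ = np.linalg.lstsq(C, voxel_responses, rcond=None)
    return W
```

Inverting such a fit on held-out voxel patterns yields "decoded" channel responses, and the review's caution applies directly: apparent orientation selectivity in these reconstructions can arise from coarse-scale map structure (such as spatial-frequency-linked biases) rather than columnar orientation tuning.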

