hand image
Recently Published Documents

TOTAL DOCUMENTS: 80 (five years: 20)
H-INDEX: 12 (five years: 1)

2021 ◽  
Author(s):  
Alessio Canepa ◽  
Edoardo Ragusa ◽  
Christian Gianoglio ◽  
Paolo Gastaldo ◽  
Rodolfo Zunino

Pain Medicine ◽  
2021 ◽  
Author(s):  
Ebonie K Rio ◽  
Tasha R Stanton ◽  
Benedict M Wand ◽  
James R Debenham ◽  
Jill Cook ◽  
...  

Abstract
Objective: To determine whether impairment in motor imagery processes is present in Achilles tendinopathy (AT), as demonstrated by a reduced ability to quickly and accurately identify the laterality (left/right judgement) of a pictured limb. Additionally, this study aimed to use a novel data-pooling approach, combining data collected at three different sites via meta-analytical techniques that allow exploration of heterogeneity.
Design: Multi-site case-control study.
Methods: Three independent studies with similar protocols were conducted by separate research groups. Each study site evaluated left/right judgement performance for images of feet and hands using Recognise© software and compared performance between people with AT and healthy controls. Results from each study site were independently collated, then combined in a meta-analysis.
Results: In total, 126 participants (40 unilateral and 22 bilateral AT cases, 61 controls) were included. There were no differences between AT cases and controls in hand image accuracy or reaction time. Contrary to the hypothesis, there were also no differences between those with AT and controls in foot image reaction time; however, findings for foot accuracy conflicted across four separate analyses. There were no differences between the affected and unaffected sides in people with unilateral AT.
Conclusions: Impairments in motor imagery performance for hands were not found in this study, and results for foot accuracy were inconsistent. This contrasts with studies of persistent pain of the limbs and face and of knee osteoarthritis, and suggests that differences in pathoetiology or patient demographics may uniquely influence proprioceptive representation.
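The multi-site pooling described above can be sketched with a standard inverse-variance (fixed-effect) combination of per-site effect sizes, plus Cochran's Q for heterogeneity. The effect sizes and variances below are hypothetical placeholders, and this generic approach is a stand-in only, not the study's actual meta-analytical technique.

```python
import math

def pool_fixed_effect(effects, variances):
    """Inverse-variance (fixed-effect) pooling of per-site effect sizes."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
    return pooled, se

def cochran_q(effects, variances, pooled):
    """Cochran's Q statistic: weighted squared deviations from the pooled effect."""
    return sum((e - pooled) ** 2 / v for e, v in zip(effects, variances))

# Hypothetical per-site mean differences (AT minus controls) and variances.
effects = [0.05, -0.02, 0.01]
variances = [0.004, 0.006, 0.005]

pooled, se = pool_fixed_effect(effects, variances)
q = cochran_q(effects, variances, pooled)
```

A large Q relative to its degrees of freedom (sites minus one) would signal the between-site heterogeneity the study set out to explore, in which case a random-effects model is usually preferred.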


Author(s):  
Anam Malik

This paper presents the development of an application GUI for an ANN-based hand-geometry recognition system, covering the initial stages of image acquisition, image pre-processing, feature extraction, and ANN recognition, implemented in MATLAB. The application is tested on a database for accuracy and performance, and analytical comparisons are made on the basis of the test results. The method combines moment invariants with an Artificial Neural Network (ANN) in a four-step process: (1) separate the hand image from its background; (2) normalize and digitize the image; (3) extract statistical features such as finger lengths and widths, palm diameter, perimeter measurements, and maxima/minima points; and (4) perform recognition. Verification was successful, with the ANN trained over seven layers at 150,000 iterations each; the MLP network proved highly efficient. The ANN is trained and tested on a total of 150 palm images from the CASIA Multi-Spectral Palmprint Image Database, split into two datasets: Dataset 1 contains 90 left-palm images (15 subjects, six images each) and Dataset 2 contains 60 right-palm images (10 subjects, six images each).
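The recognition stage of the pipeline above can be sketched as follows. This is a scikit-learn stand-in for the paper's MATLAB ANN: the hand-geometry feature vectors are synthetic, and the network size and training settings are illustrative assumptions, not the paper's seven-layer, 150,000-iteration configuration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_subjects, samples_per_subject, n_features = 10, 6, 9

# Each subject gets a characteristic geometry vector (finger lengths/widths,
# palm measurements, in mm) plus per-capture sensor noise.
prototypes = rng.uniform(40, 90, size=(n_subjects, n_features))
X = np.repeat(prototypes, samples_per_subject, axis=0)
X = X + rng.normal(0.0, 1.0, X.shape)
y = np.repeat(np.arange(n_subjects), samples_per_subject)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, stratify=y, random_state=0)

# Standardize the measurements, then classify with a small MLP.
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```

Because the geometric features here are well separated between subjects, even a small network identifies them reliably; real palm data would of course be far noisier.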


Author(s):  
Samer Kais Jameel ◽  
Jafar Majidpour

Transforming between different image types (thermal infrared (TIR), visible spectrum, and near-infrared (NIR)) poses several challenging problems. A camera of one type may lack the capabilities of other, more commonly used camera types that produce different kinds of images. Depending on camera features, different applications arise from observing a scene under specific conditions (darkness, fog, night, day, and artificial light), so we need to move from one modality to another to understand the scene better. This paper proposes a fully automatic model (GVTI-AE) that performs these transformations into vivid, realistic images using an AutoEncoder-based method, requiring neither pre- nor post-processing nor any user input. Experiments with the GVTI-AE model show that perceptually realistic results are produced on widely available datasets (Tecnocampus Hand Image Database, Carl dataset, and IRIS Thermal/Visible Face Database).
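A minimal sketch of the autoencoder idea behind GVTI-AE: an encoder compresses the input into a bottleneck code and a decoder reconstructs an image from it. The paper translates between image modalities; here, as an illustration only, a small dense autoencoder (scikit-learn's MLPRegressor with a bottleneck hidden layer) merely learns to reconstruct synthetic low-rank "images" — the architecture, data, and sizes are all assumptions, not the paper's model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

# Synthetic, compressible data: 200 flattened 8x8 "images" generated from a
# 4-dimensional latent factor, squashed into [0, 1] with a sigmoid.
latent = rng.normal(size=(200, 4))
mixing = rng.normal(size=(4, 64))
X = 1.0 / (1.0 + np.exp(-latent @ mixing))

# A 64 -> 32 -> 8 -> 32 -> 64 dense autoencoder: the 8-unit middle layer
# is the bottleneck code.
autoencoder = MLPRegressor(
    hidden_layer_sizes=(32, 8, 32),
    activation="relu",
    max_iter=3000,
    random_state=0,
)
autoencoder.fit(X, X)  # target equals input: learn to reconstruct
reconstruction = autoencoder.predict(X)
mse = float(np.mean((X - reconstruction) ** 2))
```

For modality translation (e.g. visible to thermal), the same encoder-decoder shape is trained with the source modality as input and the paired target modality as output instead of the input itself.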


Author(s):  
Mohammad Abbadi ◽  
Afaf Tareef ◽  
Afnan Sarayreh

The human hand has been considered a promising component of biometric identification and authentication systems for many decades. In this paper, a hand-side recognition framework is proposed based on deep learning and hashing-based biometric authentication. The proposed approach operates in three phases: (a) hand-image segmentation and enhancement by morphological filtering, automatic thresholding, and active contour deformation; (b) hand-side recognition based on a deep Convolutional Neural Network (CNN); and (c) biometric authentication based on a hashing method. The proposed framework is evaluated on a very large hand dataset of 11,076 hand images, including left/right and dorsal/palm images for 190 persons. The experimental results show the efficiency of the proposed framework, with average accuracies of 96.24% for dorsal-palm recognition and 98.26% for left-right recognition, using a completely automated computer program.
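Phase (c), hashing-based authentication, might look like the following sketch. The quantization step and the feature layout are illustrative assumptions (biometric readings are noisy, so raw floating-point measurements would never hash identically across captures); the paper's actual hashing method may differ.

```python
import hashlib

def quantize(features, step=5.0):
    """Snap each measurement to a coarse grid to absorb capture noise."""
    return tuple(round(f / step) for f in features)

def enroll(features):
    """Store only a SHA-256 digest of the quantized features, not the features."""
    payload = ",".join(map(str, quantize(features))).encode()
    return hashlib.sha256(payload).hexdigest()

def authenticate(features, stored_digest):
    """Recompute the digest from a fresh capture and compare."""
    return enroll(features) == stored_digest

template = enroll([72.4, 68.1, 80.9, 55.2])            # enrolment capture
ok = authenticate([71.8, 67.8, 81.3, 55.0], template)  # noisy re-capture
fail = authenticate([60.0, 90.0, 40.0, 70.0], template)  # different hand
```

Storing a digest rather than the biometric itself is the design appeal of hashing here: a leaked template reveals nothing directly about the user's hand geometry.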


Author(s):  
Takao Fukui ◽  
Aya Murayama ◽  
Asako Miura

Although the hand is an important organ in interpersonal interactions, focusing on this body part explicitly is less common in daily life than focusing on the face. We investigated (i) whether a person's recognition of their own hand differs from their recognition of another person's hand (i.e., self hand vs. other's hand) and (ii) whether a close social relationship affects hand recognition (i.e., a partner's hand vs. an unknown person's hand). To this end, we ran an experiment in which participants took part in one of two discrimination tasks: (i) a self-other discrimination task or (ii) a partner/unknown opposite-sex person discrimination task. In these tasks, participants were presented with a hand image and asked to select one of two responses, self (partner) or other (unknown person), as quickly and accurately as possible. We manipulated hand ownership (self (partner) / other (unknown person)), hand image laterality (right/left), and the visual perspective of the hand image (upright/upside-down). A main effect of hand ownership was found in both tasks (i.e., self vs. other and partner vs. unknown person), indicating longer reaction times for self and partner images. The results suggest that close social relationships modulate hand recognition; namely, "self-expansion" to a romantic partner may occur in explicit visual hand recognition.


2020 ◽  
Vol 1628 ◽  
pp. 012002
Author(s):  
D V Arslanova ◽  
A E Grishin ◽  
A I Denisov ◽  
D Z Galimullin ◽  
N V Denisova ◽  
...  

2020 ◽  
Vol 15 (12) ◽  
pp. 1975-1988
Author(s):  
Luisa F. Sánchez-Peralta ◽  
Artzai Picón ◽  
Francisco M. Sánchez-Margallo ◽  
J. Blas Pagador

Abstract
Purpose: Data augmentation is a common technique for overcoming the lack of large annotated databases, a usual situation when applying deep learning to medical imaging problems. Nevertheless, there is no consensus on which transformations to apply in a particular field. This work aims to identify the effect of different transformations on polyp segmentation using deep learning.
Methods: A set of transformations and ranges was selected, considering image-based (width and height shift, rotation, shear, zooming, horizontal and vertical flip, and elastic deformation), pixel-based (changes in brightness and contrast), and application-based (specular lights and blurry frames) transformations. A model was trained under the same conditions without data augmentation (baseline) and for each transformation and range, using CVC-EndoSceneStill and Kvasir-SEG independently. Statistical analysis was performed to compare baseline performance against the results for each range of each transformation on the same test set for each dataset.
Results: This basic method identifies the most adequate transformations for each dataset. For CVC-EndoSceneStill, changes in brightness and contrast significantly improve model performance. By contrast, Kvasir-SEG benefits to a greater extent from the image-based transformations, especially rotation and shear. Augmentation with synthetic specular lights also improves performance.
Conclusion: Despite being infrequently used, pixel-based transformations show great potential to improve polyp segmentation in CVC-EndoSceneStill, whereas image-based transformations are more suitable for Kvasir-SEG. Application-based transformations behave similarly in both datasets. The polyp area, brightness, and contrast of the dataset have an influence on these differences.
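The two main transformation families compared above can be sketched with plain NumPy on a toy grayscale frame: image-based transforms move pixels (flip, rotation), while pixel-based transforms change pixel values (brightness shift, contrast scaling about the mean). The functions and parameter values are illustrative, not the study's exact transformation ranges.

```python
import numpy as np

# Image-based transformations: pixel positions change, values do not.
def horizontal_flip(img):
    return img[:, ::-1]

def rotate90(img):
    return np.rot90(img)

# Pixel-based transformations: pixel values change, positions do not.
def brightness(img, delta):
    """Shift all intensities by delta, clipped to the valid [0, 1] range."""
    return np.clip(img + delta, 0.0, 1.0)

def contrast(img, factor):
    """Scale deviations from the mean intensity by factor, then clip."""
    mean = img.mean()
    return np.clip(mean + factor * (img - mean), 0.0, 1.0)

rng = np.random.default_rng(7)
frame = rng.random((32, 32))  # toy grayscale frame in [0, 1]

flipped = horizontal_flip(frame)
rotated = rotate90(frame)
brighter = brightness(frame, 0.2)
punchier = contrast(frame, 1.5)
```

In an augmentation pipeline, one such transform with a randomly drawn parameter is applied to each training frame (and, for segmentation, any geometric transform is applied identically to the mask).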

