Just Noticeable Difference
Recently Published Documents

TOTAL DOCUMENTS: 285 (FIVE YEARS: 90)
H-INDEX: 24 (FIVE YEARS: 3)

2022, Vol 151 (1), pp. 80-94
Author(s): Fernando del Solar Dorrego, Michelle C. Vigeant

2022, Vol 70 (1), pp. 62-86
Author(s): Boban Bondžulić, Boban Pavlović, Nenad Stojanović, Vladimir Petrović

Introduction/purpose: The paper presents research on the performance of a picture-wise just noticeable difference (JND) prediction model and its application to the quality assessment of JPEG-compressed images. Methods: The performance of the JND model was analyzed indirectly, using publicly available results from subject-rated image datasets in which images are separated into two classes (above and below the threshold of visible differences). Five image datasets were used in the performance analysis of the JND prediction model and the image quality assessment: four from the visible wavelength range and one intended for remote sensing and surveillance, with images from the infrared part of the electromagnetic spectrum. Results: The paper shows that, using a picture-wise JND model, subjective image quality scores can be estimated with better accuracy, leading to significant performance improvements over the traditional peak signal-to-noise ratio (PSNR). The gain achieved by introducing the picture-wise JND model into the objective assessment depends on the chosen dataset and on the results of the initial, simple-to-compute PSNR measure, and a gain was obtained on all five datasets. The mean linear correlation coefficient (across the five datasets) between subjective and objective quality estimates increased from 74% (traditional PSNR) to 90% (picture-wise JND PSNR). Conclusion: Further improvement of the JND-based objective measure can be obtained by improving the picture-wise JND prediction model.
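
A minimal sketch of the kind of computation described above: PSNR of a compressed image against its reference, and the linear correlation between objective and subjective scores. The abstract does not give the exact picture-wise JND formulation, so the `jnd_adjusted_psnr` offset below (subtracting the picture's own JND-threshold PSNR) is only a hypothetical illustration, and the arrays are toy values.

```python
# Sketch only: PSNR, a hypothetical picture-wise JND adjustment, and the
# objective-vs-subjective correlation used to report the 74% -> 90% gain.
import numpy as np
from scipy.stats import pearsonr

def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def jnd_adjusted_psnr(psnr_value: float, jnd_threshold_psnr: float) -> float:
    """Hypothetical picture-wise adjustment: quality expressed relative to the
    PSNR at which distortions in this particular picture become just noticeable."""
    return psnr_value - jnd_threshold_psnr

# Toy objective estimates and subjective (e.g., mean opinion) scores.
objective = np.array([2.1, 4.5, 7.8, 10.2])
subjective = np.array([1.8, 2.9, 3.7, 4.4])
r, _ = pearsonr(objective, subjective)
print(f"linear correlation coefficient: {r:.2f}")
```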


2021, Vol 11 (1)
Author(s): Hubert Kim, Alan T. Asbeck

Joint torque feedback is a new and promising means of kinesthetic feedback imposed by a wearable device. The torque feedback provides the wearer with temporal and spatial information during a motion task. Nevertheless, little research has been conducted on quantifying the psychophysical parameters of how well humans can perceive external torques under various joint conditions. This study investigates the just noticeable difference (JND) of the elbow joint for applied torques. The paper focuses on the ability of two primary joint proprioceptors, the Golgi tendon organ (GTO) and the muscle spindle (MS), to detect elbow torques, since touch and pressure receptors were masked. We studied 14 subjects while the arm was isometrically contracted (static condition) and while it was moving at a constant speed (dynamic condition). In total, 10 joint conditions were investigated, varying the direction of the arm's movement, the preload direction, and the torque direction. The JND torques under static conditions ranged from 0.097 Nm with no preload to 0.197 Nm with a preload of 1.28 Nm. The maximum dynamic JND torques were 0.799 Nm and 0.428 Nm when the arm was flexing and extending at 213 degrees per second, respectively.
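
A short worked example using the static-condition values quoted above. The Weber-fraction framing is an added illustration, not a quantity the abstract itself reports.

```python
# Worked example from the static-condition numbers in the abstract; the Weber
# fraction is an added illustration, not a figure reported by the authors.
preload_torque = 1.28       # Nm, preload in the static condition
jnd_with_preload = 0.197    # Nm, JND torque under that preload
jnd_no_preload = 0.097      # Nm, JND torque with no preload

weber_fraction = jnd_with_preload / preload_torque
print(f"Weber fraction at 1.28 Nm preload: {weber_fraction:.2f}")   # ~0.15
print(f"Threshold with no preload: {jnd_no_preload} Nm")
```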


2021
Author(s): Hyojin Kim, Viktorija Ratkute, Bastian Epp

Hearing thresholds can be used to quantify one's hearing ability. In various masking conditions, hearing thresholds vary depending on the available auditory cues. With comodulated masking noise and an interaural phase disparity (IPD), target detection is facilitated, lowering detection thresholds. This perceptual phenomenon is quantified as masking release: comodulation masking release (CMR) and binaural masking level difference (BMLD). As these measures only reflect the lower limit of hearing, the relevance of masking release at supra-threshold levels is still unclear. Here, we used both psychoacoustic and electrophysiological measures to investigate the effect of masking release at supra-threshold levels. We investigated whether the difference in the amount of masking release would affect listening at supra-threshold levels, using the intensity just-noticeable difference (JND) to quantify the increase in the salience of the tone. As a physiological correlate of the JND, we measured late auditory evoked potentials (LAEPs) with electroencephalography (EEG). The results showed that intensity JNDs were equal at the same tone intensity regardless of the masking release condition. For the LAEP measures, the slope of the P2 amplitude as a function of level was inversely correlated with the intensity JND. In addition, P2 amplitudes were higher in dichotic conditions than in diotic conditions. Estimates of the salience of the target tone from both experiments suggest that, at supra-threshold levels, the salience of the masked tone may only benefit from BMLD.
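
A minimal sketch of the analysis the abstract describes: fit a line to P2 amplitude as a function of tone level, then correlate the fitted slopes with intensity JNDs. The arrays below are placeholders, not data from the study, and the exact fitting procedure used by the authors is not given in the abstract.

```python
# Sketch: slope of P2 amplitude vs. level, then its correlation with intensity
# JND across conditions/listeners. All numbers are placeholders.
import numpy as np
from scipy.stats import pearsonr

levels_db = np.array([60.0, 65.0, 70.0, 75.0])        # tone levels (dB)
p2_amplitudes_uv = np.array([1.2, 1.9, 2.4, 3.1])      # mean P2 amplitude per level (µV)

slope, intercept = np.polyfit(levels_db, p2_amplitudes_uv, 1)
print(f"P2 growth slope: {slope:.3f} µV/dB")

# Across conditions/listeners: steeper P2 growth should accompany smaller JNDs
# (the inverse correlation reported above).
p2_slopes = np.array([0.10, 0.13, 0.08, 0.12])
intensity_jnds = np.array([2.1, 1.6, 2.6, 1.8])        # dB
r, _ = pearsonr(p2_slopes, intensity_jnds)
print(f"slope-vs-JND correlation: {r:.2f}")            # expected to be negative
```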


2021, Vol 33 (5), pp. 1104-1116
Author(s): Yoshihiro Tanaka, Shogo Shiraki, Kazuki Katayama, Kouta Minamizawa, Domenico Prattichizzo, ...

Tactile sensations are crucial for achieving precise operations. A haptic connection between a human operator and a robot has the potential to promote smooth human-robot collaboration (HRC). In this study, we assemble a bilaterally shared haptic system for grasping operations, analogous to a human using both hands, and evaluate it with a bottle cap-opening task. A robot arm controls its grasping force according to tactile information from the human, who opens the cap with a finger-attached acceleration sensor. The grasping force of the robot arm is then fed back to the human through a wearable squeezing display. Three experiments are conducted: (1) measurement of the just noticeable difference of the tactile display; (2) a collaborative task with different bottles, with and without tactile feedback, including psychological evaluations using a questionnaire; and (3) a collaborative task under an explicit strategy. The results show that the tactile feedback gave operators confidence that the cooperative robot was adjusting its action, and that it improved the stability of the task under the explicit strategy. These results indicate the effectiveness of the tactile feedback and the need for operators to adopt an explicit strategy, providing insight into the design of HRC systems with bilaterally shared haptic perception.


Displays, 2021, pp. 102096
Author(s): Yafen Xing, Haibing Yin, Yang Zhou, Yong Chen, Chenggang Yan

2021, Vol 15
Author(s): Jiaqiu Sun, Ziqing Wang, Xing Tian

How different sensory modalities interact to shape perception is a fundamental question in cognitive neuroscience. Previous studies of audiovisual interaction have focused on abstract levels such as categorical representation (e.g., the McGurk effect). It is unclear whether cross-modal modulation can extend to low-level perceptual attributes. This study used moving manual gestures to test whether and how loudness perception can be modulated by visual-motion information. Specifically, we implemented a novel paradigm in which participants compared the loudness of two consecutive sounds whose intensities differed by around the just noticeable difference (JND), with manual gestures presented concurrently with the second sound. In two behavioral experiments and two EEG experiments, we tested the hypothesis that the visual-motor information in gestures would modulate loudness perception. Behavioral results showed that the gestural information biased loudness judgments. More importantly, the EEG results demonstrated that early auditory responses around 100 ms after sound onset (N100) were modulated by the gestures. These consistent results across four behavioral and EEG experiments suggest that visual-motor processing can integrate with auditory processing at an early perceptual stage to shape a low-level perceptual attribute such as loudness, at least under challenging listening conditions.
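
A minimal sketch of how an N100-like response could be quantified from epoched EEG and compared between gesture conditions. The array shapes, sampling rate, electrode choice, and time window are assumptions for illustration, not details taken from the abstract.

```python
# Sketch: quantify an N100-like response (mean amplitude around 100 ms after
# sound onset) from single-channel epoched EEG and compare two conditions.
import numpy as np

fs = 500                                     # Hz, assumed sampling rate
t = np.arange(-0.1, 0.4, 1 / fs)             # epoch time axis (s), 0 = sound onset

def n100_amplitude(epochs: np.ndarray, t: np.ndarray) -> float:
    """Mean amplitude in an 80-120 ms window after baseline correction.
    epochs: trials x samples, single channel (e.g., a fronto-central electrode)."""
    baseline = epochs[:, t < 0].mean(axis=1, keepdims=True)
    corrected = epochs - baseline
    window = (t >= 0.08) & (t <= 0.12)
    return corrected[:, window].mean()

rng = np.random.default_rng(0)
epochs_gesture = rng.normal(size=(100, t.size))       # placeholder data
epochs_no_gesture = rng.normal(size=(100, t.size))    # placeholder data

diff = n100_amplitude(epochs_gesture, t) - n100_amplitude(epochs_no_gesture, t)
print(f"N100 amplitude difference (gesture - no gesture): {diff:.3f} (a.u.)")
```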


2021, Vol 11 (1)
Author(s): Nadia Paraskevoudi, Iria SanMiguel

The ability to distinguish self-generated stimuli from those caused by external sources is critical for all behaving organisms. Although many studies point to a sensory attenuation of self-generated stimuli, recent evidence suggests that motor actions can result in either attenuated or enhanced perceptual processing depending on the environmental context (i.e., stimulus intensity). The present study employed 2-AFC sound detection and loudness discrimination tasks to test whether sound source (self- or externally-generated) and stimulus intensity (supra- or near-threshold) interactively modulate detection ability and loudness perception. Self-generation did not affect detection or discrimination sensitivity (i.e., detection thresholds and the Just Noticeable Difference, respectively). However, in the discrimination task, we observed a significant interaction between self-generation and intensity on perceptual bias (i.e., the Point of Subjective Equality). Supra-threshold self-generated sounds were perceived as softer than externally-generated ones, while at near-threshold intensities self-generated sounds were perceived as louder than externally-generated ones. Our findings provide empirical support for recent theories on how predictions and signal intensity modulate perceptual processing, pointing to interactive effects of intensity and self-generation that seem to be driven by a biased estimate of perceived loudness rather than by changes in detection and discrimination sensitivity.
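
A minimal sketch of how the PSE and JND are typically read off a psychometric function fitted to 2-AFC loudness-discrimination responses, assuming a cumulative-Gaussian form. The toy data and the choice of fit are assumptions, not the study's analysis pipeline.

```python
# Sketch: fit a cumulative-Gaussian psychometric function to the proportion of
# "comparison louder" responses; PSE = mu, JND = distance from the 50% to the
# 75% point. Toy data and the Gaussian form are assumptions.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, mu, sigma):
    return norm.cdf(x, loc=mu, scale=sigma)

intensity_diff_db = np.array([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0])  # comparison - standard
p_louder = np.array([0.05, 0.15, 0.30, 0.55, 0.80, 0.92, 0.98])        # toy proportions

(mu, sigma), _ = curve_fit(psychometric, intensity_diff_db, p_louder, p0=[0.0, 1.0])

pse = mu                                  # Point of Subjective Equality
jnd = norm.ppf(0.75) * sigma              # 50% -> 75% distance for a cumulative Gaussian
print(f"PSE = {pse:.2f} dB, JND = {jnd:.2f} dB")
```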

