An Image Strategy Based on Saliency Detection Using Luminance Contrast for Artificial Vision with Retinal Prosthesis

2021 · pp. 273-281
Author(s): Jing Wang, Jianyun Liu, Yuting Zhang, Haiyi Zhu, Yanling Han, ...

2020 · Vol 104 (12) · pp. 1730-1734
Author(s): Sandra R Montezuma, Susan Y Sun, Arup Roy, Avi Caspi, Jessy D Dorn, ...

Aim: To demonstrate the potential clinically meaningful benefits of a thermal camera integrated with the Argus II, an artificial vision therapy system, for assisting Argus II users in localising and discriminating heat-emitting objects.

Methods: Seven blind patients implanted with the Argus II retinal prosthesis participated in the study. Two tasks were investigated: (1) localising up to three heat-emitting objects by indicating their locations and (2) discriminating a specific heated object out of three presented on a table. The heat-emitting objects placed on the table included a toaster, a flat iron, an electric kettle, a heating pad and a mug of hot water. Subjects completed the two tasks using both the unmodified Argus II system with a visible-light camera and the thermal-camera-integrated Argus II.

Results: Subjects localised heated objects displayed on a table more accurately (p=0.011) and discriminated a specific type of object more reliably (p=0.005) with the thermal camera integrated with the Argus II than with the unmodified Argus II camera.

Conclusions: The thermal camera integrated with the artificial vision therapy system helps users locate and differentiate heat-emitting objects more precisely than a visible-light sensor does. The integration of the thermal camera with the Argus II may have significant benefits in patients' daily lives.
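The publication does not include the image-processing pipeline of the thermal integration, but the core idea, reducing a thermal frame to the coarse stimulation pattern of the implant, can be sketched. The 6 x 10 grid below matches the Argus II's 60-electrode array; the frame size, temperature threshold and block-averaging scheme are illustrative assumptions, not the actual system.

```python
# Minimal sketch (not the Argus II software): downsample a thermal frame
# (degrees C) to a 6x10 "electrode" grid by measuring, per block, the
# fraction of pixels hotter than an assumed threshold.
import numpy as np

def thermal_to_electrode_grid(frame, grid_shape=(6, 10), hot_threshold_c=40.0):
    """Map a thermal frame to per-electrode activation levels in [0, 1]."""
    rows, cols = grid_shape
    h, w = frame.shape
    grid = np.zeros(grid_shape)
    for r in range(rows):
        for c in range(cols):
            block = frame[r * h // rows:(r + 1) * h // rows,
                          c * w // cols:(c + 1) * w // cols]
            grid[r, c] = np.mean(block > hot_threshold_c)  # fraction of hot pixels
    return grid

# Example: a synthetic 120x160 frame with one heat-emitting object.
frame = np.full((120, 160), 22.0)   # room-temperature background
frame[40:80, 100:140] = 70.0        # hot region, e.g. a mug of hot water
print(thermal_to_electrode_grid(frame).round(2))
```

Against a uniform background, such a hot-versus-ambient map is far less cluttered than a visible-light image, which is consistent with the localisation advantage reported above.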


2019
Author(s): Noelle R. B. Stiles, Vivek R. Patel, James D. Weiland

In the sighted, auditory and visual perception typically interact strongly and influence each other significantly. Blindness acquired in adulthood alters these multisensory pathways. It has been shown that during blindness the senses functionally reorganize, enabling visual cortex to be recruited for auditory processing. It is not yet known whether this reorganization is permanent, or whether auditory-visual interactions can be re-established in cases of partial visual recovery.

Retinal prostheses restore visual perception to the late blind and provide an opportunity to determine whether these auditory-visual connections and interactions remain viable after years of plasticity and neglect. We tested Argus II retinal prosthesis patients (N = 7) for an auditory-visual illusion, the ventriloquist effect, in which the perceived location of an auditory stimulus is modified by the presence of a visual stimulus. Prosthetically restored visual perception significantly modified patients' auditory perceptions, comparable to results with sighted control participants (N = 10). Furthermore, the auditory-visual interaction strength in retinal prosthesis patients exhibited a significant partial anti-correlation with patient age, as well as a significant partial correlation with duration of prosthesis use.

These results indicate that auditory-visual interactions can be restored after decades of blindness, and that auditory-visual processing pathways and regions can be re-engaged. They also demonstrate the resilience of multimodal interactions to plasticity during blindness: this plasticity can either be partially reversed or at least does not prevent auditory-visual interactions. Finally, this study provides hope for the restoration of sensory perception, complete with multisensory integration, even after years of visual deprivation.

Significance: Retinal prostheses restore visual perception to the blind by means of an implanted retinal stimulator wirelessly connected to a camera mounted on glasses. Individuals with prosthetic vision can locate and identify simple objects and identify the direction of visual motion. A key question is whether this prosthetic vision interacts with the other senses, such as audition, in the same way that natural vision does. We found that artificial vision, like natural vision, can alter auditory localization. This suggests that the brain processes prosthetic vision similarly to natural vision despite altered visual processing in the retina. In addition, it implies that reorganization of the senses during blindness may be reversible, allowing for the rehabilitation of crossmodal interactions after visual restoration.
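The age and duration effects reported above rest on partial correlation, which measures the association between two variables after removing the linear effect of a third. A minimal sketch of the standard first-order formula, using synthetic data rather than the study's measurements:

```python
# First-order partial correlation: correlate x and y while controlling for z.
import numpy as np

def partial_corr(x, y, z):
    """r_{xy.z} = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2))"""
    r_xy = np.corrcoef(x, y)[0, 1]
    r_xz = np.corrcoef(x, z)[0, 1]
    r_yz = np.corrcoef(y, z)[0, 1]
    return (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Hypothetical data for N = 7 patients (not the study's values).
rng = np.random.default_rng(0)
age = rng.uniform(50, 80, 7)          # years
use = rng.uniform(1, 5, 7)            # years of prosthesis use
strength = -0.02 * age + 0.30 * use + rng.normal(0, 0.1, 7)
print(partial_corr(strength, age, use))   # anti-correlation with age, given use
print(partial_corr(strength, use, age))   # correlation with use, given age
```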


2020
Author(s): Jacob Thomas Thorn, Enrico Migliorini, Diego Ghezzi

Objective: Retinal prostheses hold the potential to restore artificial vision in blind patients suffering from outer retinal dystrophies. The optimal number, density and coverage of the electrodes that a retinal prosthesis should have to provide adequate artificial vision in daily activities is still an open question and an important design parameter for developing better implants.

Approach: To address this question, we investigated the interaction between the visual angle, the pixel number and the pixel density without being limited by a small electrode count, as in previous reports. We implemented prosthetic vision in a virtual reality environment in order to simulate the real-life experience of using a retinal prosthesis. We designed four tasks simulating object recognition, word reading, perception of a descending step and crossing a street.

Main results: In all the tasks, the visual angle played the most significant role in improving participants' performance.

Significance: The design of new retinal prostheses should take into account the relevance of the restored visual angle to provide a helpful and valuable visual aid to profoundly or totally blind patients.
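The study's virtual-reality implementation is not reproduced here, but simulated prosthetic vision is conventionally rendered by sampling the scene on a phosphene grid and drawing each sample as a Gaussian blob. In the sketch below (grid size, output resolution and blob width are illustrative assumptions), changing `n` at a fixed field of view varies pixel number and density together, while rescaling the sampled region at a fixed `n` varies the simulated visual angle, which is the trade-off the study manipulates.

```python
# Minimal simulated-phosphene renderer: sample a grayscale image on an
# n x n grid, then draw each sample as a Gaussian phosphene.
import numpy as np

def simulate_phosphenes(image, n=20, out_size=200, sigma_px=4.0):
    h, w = image.shape
    # Mean brightness per grid cell (trim edges so the image tiles evenly).
    samples = image[:h - h % n, :w - w % n] \
        .reshape(n, h // n, n, w // n).mean(axis=(1, 3))
    ys, xs = np.mgrid[0:out_size, 0:out_size]
    out = np.zeros((out_size, out_size))
    step = out_size / n
    for i in range(n):
        for j in range(n):
            cy, cx = (i + 0.5) * step, (j + 0.5) * step
            out += samples[i, j] * np.exp(-((ys - cy)**2 + (xs - cx)**2)
                                          / (2 * sigma_px**2))
    return out / (out.max() + 1e-8)

# Example: a bright square rendered at 20x20 phosphenes.
img = np.zeros((200, 200))
img[60:140, 60:140] = 1.0
phosphene_view = simulate_phosphenes(img)
```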


Author(s): Han Liu, Bo Li, Tao Zheng, Jiaxu Yao

Author(s): M. N. Favorskaya, L. C. Jain

Introduction: Saliency detection is a fundamental task of computer vision. Its ultimate aim is to localize the objects of interest that grab human visual attention with respect to the rest of the image. A great variety of saliency models based on different approaches has been developed since the 1990s. In recent years, saliency detection has become one of the most actively studied topics in the theory of convolutional neural networks (CNNs). Many original solutions using CNNs have been proposed for salient object detection and even for event detection.

Purpose: A detailed survey of saliency detection methods in the deep learning era makes it possible to understand the current capabilities of the CNN approach for visual analysis conducted through human eye tracking and digital image processing.

Results: The survey reflects the recent advances in saliency detection using CNNs. Models available in the literature, such as static and dynamic 2D CNNs for salient object detection and 3D CNNs for salient event detection, are discussed in chronological order. Notably, automatic salient event detection in long videos has become possible using recently introduced 3D CNNs combined with 2D CNNs for salient audio detection. The article also presents a short description of public image and video datasets with annotated salient objects or events, as well as the metrics most often used to evaluate results.

Practical relevance: This survey contributes to the study of rapidly developing deep learning methods for saliency detection in images and videos.
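As a concrete point of reference for the pre-deep-learning models the survey builds on (and for the luminance-contrast strategy of the title article above), a classical saliency map can be computed as center-surround luminance contrast, approximated here by a difference of Gaussian blurs. The scales are illustrative choices, not values from any of the surveyed papers.

```python
# Classical center-surround saliency from luminance contrast:
# |fine-scale blur - coarse-scale blur|, normalized to [0, 1].
import numpy as np
from scipy.ndimage import gaussian_filter

def luminance_contrast_saliency(gray, center_sigma=2.0, surround_sigma=16.0):
    center = gaussian_filter(gray.astype(float), center_sigma)
    surround = gaussian_filter(gray.astype(float), surround_sigma)
    sal = np.abs(center - surround)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)

# Example: a bright object on a dark background is marked salient.
img = np.zeros((100, 100))
img[40:60, 40:60] = 1.0
saliency_map = luminance_contrast_saliency(img)
```

CNN-based models discussed in the survey replace this hand-crafted contrast feature with learned hierarchical features, but the evaluation setting, predicting human fixations or salient-object masks, is the same.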

