visual clues
Recently Published Documents


TOTAL DOCUMENTS: 68 (FIVE YEARS: 16)

H-INDEX: 13 (FIVE YEARS: 1)

2021 ◽  
Vol 2070 (1) ◽  
pp. 012204
Author(s):  
Aravind P Madhu ◽  
C Akhil Balu ◽  
Akshay Krishnan ◽  
Adithya Aravind ◽  
Jibin Noble ◽  
...  

Abstract Stereoscopic, or multi-view, display systems can give the human brain significant visual clues for understanding three-dimensional (3D) objects, and they are therefore regarded as better alternatives to traditional two-dimensional (2D) displays. A device that can render 3D images for viewers without the use of specific headgear or glasses is known as an auto-stereoscopic display. Manipulation of light rays via light engines is also used to create 3D images in 3D space. In this research, we introduce a new auto-stereoscopic swept-volume display (SVD) system based on light-emitting diode (LED) arrays. The system is made up of a display device and a graphics control sub-system. The display device is a 2D revolving panel of LEDs that generates 3D images using “persistence of vision”.
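The core idea of a swept-volume display can be sketched in a few lines: as the LED panel rotates, each voxel of the 3D scene is shown by lighting one LED at the instant the panel sweeps through that voxel's azimuth. The sketch below is illustrative only, not the authors' firmware; the panel resolutions, display radius and height, and the function name `voxel_to_led` are all assumptions for the example.

```python
import math

# Assumed parameters of a hypothetical swept-volume display: a flat 2D
# panel of LEDs spins about a vertical axis, and each Cartesian voxel is
# rendered by one LED during one angular slice of the revolution.
NUM_SLICES = 360          # angular resolution per revolution
NUM_COLS = 32             # LEDs along the panel's radius
NUM_ROWS = 64             # LEDs along the panel's height
MAX_RADIUS = 1.0          # display radius (arbitrary units)
MAX_HEIGHT = 1.0          # display height (arbitrary units)

def voxel_to_led(x, y, z):
    """Map a Cartesian voxel (x, y, z) to (slice index, column, row).

    Returns None when the voxel lies outside the swept display volume.
    """
    r = math.hypot(x, y)
    if r > MAX_RADIUS or not (0.0 <= z <= MAX_HEIGHT):
        return None  # outside the display volume
    theta = math.atan2(y, x) % (2 * math.pi)
    slice_idx = int(theta / (2 * math.pi) * NUM_SLICES) % NUM_SLICES
    col = min(int(r / MAX_RADIUS * NUM_COLS), NUM_COLS - 1)
    row = min(int(z / MAX_HEIGHT * NUM_ROWS), NUM_ROWS - 1)
    return slice_idx, col, row
```

The graphics control sub-system would precompute this mapping for every voxel of a frame and stream each angular slice to the panel in sync with its rotation, relying on persistence of vision to fuse the slices into one 3D image.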


2021 ◽  
Vol 5 (2) ◽  
Author(s):  
Wendy J. Sadler

Science shows are used widely across the UK as a medium for communicating science, yet there is little literature about the long-term impact they may have. This longitudinal study looks at the short-term and long-term impact of the science show Music to Your Ears, which was initially performed throughout the UK on behalf of the Institute of Physics in 2002, and which has since been offered at schools and events through the enterprise Science Made Simple. The impact was measured using the immediate reaction to the show, the number (and type) of demonstrations (demos) recalled over the long term, and the applied use of any memories from the show. Quantitative and qualitative data were gathered using questionnaires immediately after the show and focus groups held two and a half years later. To enrich the data and minimize bias, interviews with professional science presenters were also included in the data analysis. Data from the questionnaires were used to develop a framework of five demonstration categories to describe their essence, or main purpose. The categories used in this study were: curiosity (C), human (H), analogy (A), mechanics (M) and phenomena (P). It was found that even after two and a half years, almost 25 per cent of demos from the show could be recalled without prompting. When prompted with verbal and visual clues, over 50 per cent of the demos from the show could be recalled by the group tested. In addition, around 9 per cent of the demos were recalled and related to an alternative context to the show, suggesting that some cognitive processing may have happened with the most memorable elements of the show. The ‘curiosity’ type of demo was found to be the most memorable in both the short term and long term.


2021 ◽  
Vol 11 (9) ◽  
pp. 67-74
Author(s):  
Mantasha Dilkash ◽  
Susmita Banerjee ◽  
Gaurav Dubey

Purpose: Patients with low vision have difficulties following social distancing guidelines in the fight against the spread of the COVID-19 outbreak. This study examines the challenges that COVID-19 and social distancing pose for patients with low vision, with the objective of identifying the contribution of COVID-19 to the challenges faced by patients with low vision who visited the hospital. 35 low vision patients participated in the study. Method: A self-administered, cross-sectional survey in English was distributed using Google Forms, through various professional bodies, to patients with low vision visiting the hospital; the questionnaire was also presented to patients via telephone. The survey examined how social distancing measures have increased the challenges faced by patients with low vision. The questionnaire contained closed-ended questions. Result: Questionnaires were distributed to patients with low vision attending the hospital, and 35 responses were obtained. Among the participants, 22 were male and 19 were female. Patients who participated in the study were between 10 and 70 years of age. Social distancing increases the challenges faced by people with low vision, who had restricted movement due to problems in maintaining social distancing and travelling outside. The challenges depend on the level of sight loss: blind people rely on canes and a human guide, while an individual with low vision can use visual clues, identifying the shape and size of an object, together with their other senses.
Conclusion: The coronavirus pandemic is a worldwide challenge that has spread across all sectors and industries. There is no doubt that COVID-19 contributes to the challenges faced by patients with low vision, as do the social distancing measures, which are among the best ways of reducing the further spread of the disease across the globe. Key words: COVID-19, Challenges, Optometrist, Low Vision, Social Distancing.


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5178
Author(s):  
Sangbong Yoo ◽  
Seongmin Jeong ◽  
Seokyeon Kim ◽  
Yun Jang

Gaze movement and visual stimuli have been utilized to analyze human visual attention intuitively. Gaze behavior studies mainly present statistical analyses of eye movements and human visual attention. During these analyses, the eye movement data and the saliency map are presented to analysts as separate views or merged views. However, analysts become frustrated when they must memorize all of the separate views, or when the eye movements obscure the saliency map in the merged views. It is therefore not easy to analyze how visual stimuli affect gaze movements, since existing techniques focus excessively on the eye movement data. In this paper, we propose a novel visualization technique for analyzing gaze behavior using saliency features as visual clues that express the visual attention of an observer. The visual clues representing visual attention are analyzed to reveal which saliency features are prominent for the visual stimulus analysis. We visualize the gaze data together with the saliency features to interpret the visual attention, and we analyze gaze behavior with the proposed visualization to evaluate whether embedding saliency features within the visualization helps in understanding the visual attention of an observer.
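The basic ingredient such a technique combines can be sketched simply: compute a saliency map from the stimulus and sample it at the gaze fixation points. The sketch below is not the authors' pipeline; the crude box-blur center-surround saliency and the function names are assumptions chosen to keep the example self-contained.

```python
import numpy as np

def box_blur(img, k):
    """Crude box blur with half-window k, using edge padding."""
    padded = np.pad(img, k, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out += padded[k + dy : k + dy + img.shape[0],
                          k + dx : k + dx + img.shape[1]]
    return out / (2 * k + 1) ** 2

def saliency_map(img):
    """Toy center-surround saliency: |fine blur - coarse blur| of intensity."""
    return np.abs(box_blur(img, 1) - box_blur(img, 4))

def saliency_at_fixations(img, fixations):
    """Sample the (normalized) saliency map at (row, col) fixation points."""
    sal = saliency_map(img)
    sal = sal / sal.max() if sal.max() > 0 else sal
    return [sal[r, c] for r, c in fixations]
```

Fixations landing on high-contrast regions receive high saliency values, while fixations on flat regions receive values near zero; plotting fixations colored by these values is one simple way to embed saliency features into a gaze visualization.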


Forests ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 794
Author(s):  
Shaneka S. Lawson ◽  
Aziz Ebrahimi ◽  
James R. McKenna

Chestnut blight, a disease that has spread rampantly among American (Castanea dentata (Marsh.) Borkh.) and European chestnut (C. sativa Mill.) trees, results from infection by the fungal pathogen Cryphonectria parasitica (Murrill) M.E. Barr (C. parasitica). This fungus was introduced in the early 1900s and has almost functionally eliminated chestnut trees from the North American landscape. In 2017, we collected chestnut blight samples from two sites (Site B, (Fulton Co., IN) and Site C (Marshall Co., IN)). At the Fulton County planting, Site B, cankers had formed, healed over, and the trees were healthy. However, at the second site in Marshall County, (Site C), cankers continued to propagate until all of the chestnut trees had died back to the ground. Research evidence worldwide has indicated that these visual clues likely result from the presence of a hypovirus. Upon closer inspection and the subsequent isolation and reproduction of spores, no hypovirus has been identified from either site. Here, we present a curious coincidence where one site has completely succumbed to the disease, while the other has been able to spring back to health.


Author(s):  
Barry Chametzky

An online learning environment is a rather lonely, isolated place. Because of this seemingly dismal venue, learners suffer in invisible ways such as attrition and disempowerment. While great educational things can and do happen online, it is vital to remember that because of the reduced visual clues, a number of things that need to be accomplished if learners are to succeed in this environment. In order to understand more clearly what is required in an online environment for learners to be successful, under the umbrella of communication, this author will discuss a number of ways to help course members break down feelings of isolation, increase meaningfulness, and increase empowerment.


2020 ◽  
Vol 6 (3) ◽  
pp. 571-574
Author(s):  
Anna Schaufler ◽  
Alfredo Illanes ◽  
Ivan Maldonado ◽  
Axel Boese ◽  
Roland Croner ◽  
...  

Abstract In robot-assisted procedures, the surgeon controls the surgical instruments from a remote console while visually monitoring the procedure through the endoscope. No haptic feedback is available to the surgeon, which impedes the assessment of diseased tissue and the detection of hidden structures beneath the tissue, such as vessels. Only visual clues are available to the surgeon to control the force applied to the tissue by the instruments, which poses a risk of iatrogenic injuries. Additional information on the haptic interactions between the employed instruments and the treated tissue, provided to the surgeon during robotic surgery, could compensate for this deficit. Acoustic emissions (AE) from the instrument/tissue interactions, transmitted by the instrument, are a potential source of this information. AE can be recorded by audio sensors that do not have to be integrated into the instruments but can be modularly attached to the outside of the instrument's shaft or enclosure. The location of the sensor on a robotic system is essential for the applicability of the concept in real situations: while the signal strength of the acoustic emissions decreases with distance from the point of interaction, an installation close to the patient would require sterilization measures. The aim of this work is to investigate whether it is feasible to install the audio sensor in non-sterile areas far away from the patient and still receive useful AE signals. To determine whether signals can be recorded at different potential mounting locations, instrument/tissue interactions with different textures were simulated in an experimental setup. The results showed that meaningful and valuable AE can be recorded in the non-sterile area of a robotic surgical system despite the expected signal losses.
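One simple way to quantify "useful signal at a mounting location", which this abstract leaves implicit, is to compare the level of an AE burst against the idle noise floor of the same recording position. The sketch below is illustrative, not the authors' analysis; the function names and the RMS-based SNR metric are assumptions for the example.

```python
import numpy as np

def rms(x):
    """Root-mean-square level of an audio segment."""
    return float(np.sqrt(np.mean(np.square(x))))

def snr_db(burst_segment, noise_segment):
    """SNR in dB between an AE burst and an idle (noise-only) segment.

    A candidate sensor location is "useful" when its SNR stays clearly
    above 0 dB despite the attenuation along the instrument shaft.
    """
    return 20.0 * np.log10(rms(burst_segment) / rms(noise_segment))
```

Computing this per mounting location, for recordings of the same simulated instrument/tissue interaction, gives a direct comparison of how much signal survives at each candidate position.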


Robotics ◽  
2020 ◽  
Vol 9 (3) ◽  
pp. 56 ◽  
Author(s):  
Alexandre Alapetite ◽  
Zhongyu Wang ◽  
John Paulin Hansen ◽  
Marcin Zajączkowski ◽  
Mikołaj Patalan

Positioning is an essential aspect of robot navigation, and visual odometry is an important technique for continuously updating a robot's internal estimate of its position, especially indoors without GPS (Global Positioning System). Visual odometry uses one or more cameras to find visual clues and estimate robot movements in 3D in relative terms. Recent progress has been made, especially with fully integrated systems such as the RealSense T265 from Intel, which is the focus of this article. We compare three visual odometry systems against one another (and one wheel odometry, as a known baseline) on a ground robot. We do so in eight scenarios, varying the speed, the number of visual features, and the presence of humans walking in the field of view. We continuously measure the position error in translation and rotation thanks to a ground-truth positioning system. Our results show that all odometry systems are challenged, but in different ways. The RealSense T265 and the ZED Mini have comparable performance, better than our baseline ORB-SLAM2 (mono-lens without inertial measurement unit (IMU)) but not excellent. In conclusion, a single odometry system might still not be sufficient, so using multiple instances and sensor fusion approaches are necessary while waiting for additional research and further improved products.
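The error metrics named above have a standard form for a ground robot: translation error is the Euclidean distance between the estimated and ground-truth positions, and rotation error is the smallest absolute difference between the estimated and ground-truth headings. The sketch below is a minimal illustration under that assumption, not the paper's evaluation code.

```python
import math

def translation_error(est_xy, truth_xy):
    """Euclidean distance between an odometry estimate and ground truth."""
    return math.dist(est_xy, truth_xy)

def rotation_error(est_yaw, truth_yaw):
    """Smallest absolute angle between two headings, in radians.

    Wrapping through 2*pi ensures e.g. 359 deg vs 1 deg counts as a
    2 deg error rather than 358 deg.
    """
    diff = (est_yaw - truth_yaw) % (2 * math.pi)
    return min(diff, 2 * math.pi - diff)
```

Evaluating these two quantities at every time step of a run, and aggregating per scenario, yields the kind of continuous translation/rotation error comparison the study reports.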


Author(s):  
Alexandre Alapetite ◽  
Zhongyu Wang ◽  
John Paulin Hansen ◽  
Marcin Zajączkowski ◽  
Mikolaj Patalan

Positioning is an essential aspect of robot navigation, and visual odometry is an important technique for continuously updating a robot's internal estimate of its position, especially indoors without GPS. Visual odometry uses one or more cameras to find visual clues and estimate robot movements in 3D in relative terms. Recent progress has been made, especially with fully integrated systems such as the RealSense T265 from Intel, which is the focus of this article. We compare three visual odometry systems and one wheel odometry against one another on a ground robot. We do so in 8 scenarios, varying the speed, the number of visual features, and the presence of humans walking in the field of view. We continuously measure the position error in translation and rotation thanks to a ground-truth positioning system. Our results show that all odometry systems are challenged, but in different ways. On average, ORB-SLAM2 has the poorest results, while the RealSense T265 and the ZED Mini have comparable performance. In conclusion, a single odometry system might still not be sufficient, so using multiple instances and sensor fusion approaches are necessary while waiting for additional research and further improved products.


2020 ◽  
pp. 57-71
Author(s):  
Frankie Y. Bailey
Keyword(s):  
