Active Vision
Recently Published Documents


TOTAL DOCUMENTS: 766 (five years: 92)

H-INDEX: 35 (five years: 4)

2021 ◽  
Vol 2 (4) ◽  
pp. 214-222
Author(s):  
Raju Kaiti ◽  
Asik Pradhan ◽  
Monica Chaudhry ◽  
...  

AIM: To study the clinical profile of amblyopia and the outcomes of occlusion therapy among amblyopes. METHODS: This was a hospital-based longitudinal study. Data were collected from April 2015 to April 2016 in the Ophthalmology Department of Dhulikhel Hospital. Presenting visual acuity, chief complaint at presentation, age at presentation, refractive status, binocularity, and fixation pattern were assessed in all children with amblyopia. Improvement in visual acuity after occlusion therapy, the most commonly used treatment modality for amblyopia, was also noted in all subjects. RESULTS: Among the 1092 children examined during the study period, 60 (5.49%) were amblyopic; 35 (58.3%) were female and 25 (41.7%) male. The mean age at presentation was 8.87±3.29 years. Meridional amblyopia was the most prevalent subtype, seen in 43.3% (n=26) of the children, followed by anisohypermetropic amblyopia (20%, n=12). The most common refractive error was astigmatism (58.3% of cases), followed by hypermetropia (22.5%) and myopia (7.5%). Compliance with spectacle wear combined with occlusion therapy and active vision therapy was 73.3% (n=44). Visual acuity of the amblyopic eyes improved significantly after 3 months of the different treatment strategies (P=0.002). CONCLUSION: The prevalence of amblyopia and its associated visual impairment remains a public health issue in developing countries such as Nepal. Lack of awareness and the absence of community or preschool vision screening lead to late presentation and significant visual impairment. The burden can be reduced with screening camps, timely referrals, and proper interventions.


Author(s):  
Neng Pan ◽  
Ruibin Zhang ◽  
Tiankai Yang ◽  
Can Cui ◽  
Chao Xu ◽  
...  

2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Jing Zhang ◽  
Zhaochun Li

Intelligent farming machines are becoming a new trend in modern agriculture. Intelligence and automation allow planting to become data-driven, leading to more timely and cost-effective production and management of farms and improving the quality and output of farm products. This paper proposes a type of intelligent tea picking machine based on active computer vision and Internet of Things (IoT) techniques. The machine possesses an active vision system for positioning new tips and can automatically perform the tea picking operation in the natural environment. The active vision system, built around a crossed light path of projector and camera, is designed according to the actual characteristics of the picking surface: new tips are recognized by a color factor, and their height information is acquired by fringe projection profilometry. Furthermore, the machine carries wireless communication equipment that transmits the real-time status of the tea picking process to an intermediary platform and on to the Internet for large-scale data analysis. Data such as the color factor and the quantity of new tips collected through the IoT can be used for quality and production evaluations. This work can promote the automation and intelligence of tea pickers and agricultural machinery.
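The abstract does not specify which color factor the machine uses for tip recognition; as a minimal sketch of the idea, one common choice is the excess-green index ExG = 2G - R - B (the threshold value here is an assumption):

```python
import numpy as np

def excess_green_mask(rgb, threshold=20):
    """Segment tender green tips by an excess-green color factor.

    The specific color factor in the paper is not given; the excess-green
    index ExG = 2G - R - B is assumed here as a common stand-in.
    rgb: HxWx3 uint8 image array. Returns a boolean mask of candidate pixels.
    """
    rgb = rgb.astype(np.int32)          # avoid uint8 overflow in 2G - R - B
    exg = 2 * rgb[..., 1] - rgb[..., 0] - rgb[..., 2]
    return exg > threshold
```

Pixels passing the mask would then be grouped into blobs and paired with the height map from fringe projection profilometry to decide which tips to pick.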


Author(s):  
Juan Antonio Rojas-Quintero ◽  
Juan Antonio Rojas-Estrada ◽  
Eric Alejandro Rodriguez-Sanchez ◽  
Jose Alberto Vizcarra-Corral

2021 ◽  
Author(s):  
Vincenzo Suriani ◽  
Sara Kaszuba ◽  
Sandeep R. Sabbella ◽  
Francesco Riccio ◽  
Daniele Nardi

2021 ◽  
Vol 8 ◽  
Author(s):  
Sabhari Natarajan ◽  
Galen Brown ◽  
Berk Calli

In this work, we present several heuristic-based and data-driven active vision strategies for viewpoint optimization of an arm-mounted depth camera to aid robotic grasping. These strategies aim to efficiently collect data that boosts the performance of an underlying grasp synthesis algorithm. We created an open-source benchmarking platform in simulation (https://github.com/galenbr/2021ActiveVision) and provide an extensive study assessing the performance of the proposed methods and comparing them against various baseline strategies. We also provide an experimental study with a real-world two-finger parallel-jaw gripper setup, utilizing an existing grasp planning benchmark from the literature. With these analyses we quantitatively demonstrate the versatility of heuristic methods that prioritize certain types of exploration, and qualitatively show their robustness both to novel objects and to the transition from simulation to the real world. We identify scenarios in which our methods underperform as well as objectively difficult scenarios, and discuss which avenues for future research show promise.
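As a rough illustration of what one heuristic viewpoint-optimization step can look like (a toy scoring rule of our own, not one of the benchmarked strategies), a greedy next-best-view picker might score each candidate camera pose by how many still-unexplored points fall inside its view cone:

```python
import numpy as np

def next_best_view(candidates, unknown_points, fov_cos=0.9):
    """Greedy next-best-view heuristic (illustrative only).

    candidates: list of (position, viewing direction) pairs.
    unknown_points: Nx3 array of not-yet-observed surface points.
    Score = number of unknown points inside the view cone defined by
    fov_cos; the camera moves to the highest-scoring viewpoint.
    """
    best, best_score = None, -1
    for pos, direction in candidates:
        d = direction / np.linalg.norm(direction)
        rel = unknown_points - pos
        rel = rel / np.linalg.norm(rel, axis=1, keepdims=True)
        score = int(np.sum(rel @ d > fov_cos))  # points within the cone
        if score > best_score:
            best, best_score = (pos, d), score
    return best, best_score
```

In a real pipeline the score would be recomputed after each camera move as newly observed points are removed from the unknown set.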


2021 ◽  
Author(s):  
Mouze Qiu ◽  
Jin Zhang ◽  
Xiaonan Xiong ◽  
Kai Zheng ◽  
Ze Yang ◽  
...  

Abstract A rotational vision system (RVS) is a common type of active vision system with only rotational degrees of freedom, typically provided by a turntable, a pan-tilt-zoom (PTZ) unit, or an eye-in-hand (EIH) configuration on an articulated robot arm. The ideal assumption that the rotation axes are perfectly aligned with the coordinate axes of the local camera is usually violated by assembly deviations and limited manufacturing accuracy. To solve this problem, we propose a generalized deviation model for a specified rotation axis that relates the rotational motion of the platform to the exterior orientation (EO) of the camera. Based on this model, we derive estimation algorithms, with constrained global optimization, that either minimize the global reprojection error (for rotating platforms with accurate angle measurements) or fit a circle in space (for platforms without them). Experiments on a servo pan-tilt turntable validate the accuracy and efficiency of the above models and calibration technique.
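The circle-fitting route for platforms without angle measurements can be sketched as follows (a simplified stand-in for the paper's constrained optimization; function and variable names are ours). Camera centers traced during rotation lie on a circle in space, so the fitted plane normal approximates the rotation axis:

```python
import numpy as np

def estimate_rotation_axis(centers):
    """Estimate a rotation axis by fitting a circle in space to camera
    centers observed at several platform poses (simplified sketch).

    centers: Nx3 array of camera positions.
    Returns (unit axis vector, circle center in 3D).
    """
    mean = centers.mean(axis=0)
    # Plane of the circle: the singular vector with the smallest
    # singular value of the centered point cloud is the plane normal.
    _, _, vt = np.linalg.svd(centers - mean)
    axis = vt[2]
    # Project points into the plane and solve the linear (Kasa-style)
    # least-squares circle fit for the in-plane center.
    u, v = vt[0], vt[1]
    p = np.stack([(centers - mean) @ u, (centers - mean) @ v], axis=1)
    A = np.column_stack([2 * p, np.ones(len(p))])
    b = (p ** 2).sum(axis=1)
    cx, cy, _ = np.linalg.lstsq(A, b, rcond=None)[0]
    center = mean + cx * u + cy * v
    return axis, center
```

With noisy measurements the result would serve as an initial guess for a nonlinear refinement such as the paper's constrained global optimization.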


Author(s):  
Stephen Grossberg

This chapter explains fundamental differences between seeing and recognition, notably how and why our brains use conscious seeing to control actions like looking and reaching, while we learn both view-, size-, and position-specific object recognition categories, and view-, size-, and position-invariant object recognition categories, as our eyes search a scene during active vision. The dorsal Where cortical stream and the ventral What cortical stream interact to regulate invariant category learning by solving the View-to-Object Binding problem, whereby inferotemporal, or IT, cortex associates only views of a single object with its learned invariant category. Feature-category resonances between V2/V4 and IT support category recognition. Symptoms of visual agnosia emerge when IT is lesioned. V2 and V4 interact to enable amodal completion of partially occluded objects behind their occluders, without requiring that all occluders look transparent. V4 represents the unoccluded surfaces of opaque objects and triggers a surface-shroud resonance with posterior parietal cortex, or PPC, that renders surfaces consciously visible and enables them to control actions. Clinical symptoms of visual neglect emerge when PPC is lesioned. A unified explanation is given of data about visual crowding, situational awareness, change blindness, motion-induced blindness, visual search, perceptual stability, and target swapping. Although visual boundaries and surfaces obey computationally complementary laws, feedback between boundaries and surfaces ensures their consistency and initiates figure-ground separation, while commanding our eyes to foveate sequences of salient features on object surfaces, thereby triggering invariant category learning. What-to-Where stream interactions enable Where's Waldo searches for desired objects in cluttered scenes.


2021 ◽  
Vol 8 ◽  
Author(s):  
Noel Cortés-Pérez ◽  
Luz Abril Torres-Méndez

A mirror-based active system capable of changing the viewing direction of a pre-existing fixed camera is presented. The aim of this research work is to extend the perceptual tracking capabilities of an underwater robot without altering its structure. The ability to control the viewing direction allows the robot to explore its entire surroundings without any actual displacement, which is useful for more effective motion planning and for different navigation strategies, such as object tracking and/or obstacle evasion, which are of great importance for natural preservation in environments as complex and fragile as coral reefs. Active vision systems based on mirrors have been used mainly on terrestrial platforms to capture the motion of fast projectiles with high-speed cameras of considerable size and weight, but they had not been used on underwater platforms. In this sense, our approach incorporates a lightweight design adapted to an underwater robot using affordable and easy-to-access technology (i.e., 3D printing). Our active system consists of two arranged mirrors: one remains static in front of the robot's camera, while the orientation of the second is controlled by two servomotors. Object tracking is performed using only the pixels contained in the homography of a defined area on the active mirror. The HSV color space is used to reduce the effects of lighting changes. Since the color and geometry of the tracked object are known in advance, a window filter is applied over the H channel for color blob detection; noise is then filtered out and the object's centroid estimated. If the object is lost, a Kalman filter predicts its position. Finally, with this information, an image PD controller computes the servomotor articular values.
We have carried out experiments in real environments, testing our active vision system in an object-tracking application in which an artificial object is manually displaced around the periphery of the robot and the mirror system automatically reconfigures to keep the object in the camera's view, with satisfactory real-time results for detecting objects of low complexity under poor lighting conditions.
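The final control step, mapping the tracked centroid's pixel error to servomotor commands, can be sketched as an image PD controller (the gains and the pan/tilt mapping here are illustrative assumptions, not the authors' values):

```python
class ImagePDController:
    """Illustrative image-based PD controller: converts the tracked
    blob's pixel error from the image center into pan/tilt servo
    increments. Gains kp and kd are assumed placeholder values.
    """

    def __init__(self, kp=0.05, kd=0.01):
        self.kp, self.kd = kp, kd
        self.prev = (0.0, 0.0)  # previous error, for the derivative term

    def step(self, centroid, image_size):
        w, h = image_size
        ex = centroid[0] - w / 2  # horizontal pixel error -> pan
        ey = centroid[1] - h / 2  # vertical pixel error -> tilt
        dpan = self.kp * ex + self.kd * (ex - self.prev[0])
        dtilt = self.kp * ey + self.kd * (ey - self.prev[1])
        self.prev = (ex, ey)
        return dpan, dtilt
```

Each camera frame, the estimated (or Kalman-predicted) centroid is fed to `step`, and the returned increments are added to the two servomotor articular values.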

