A Ground Truth Vision System for Robotic Soccer

Author(s):  
António J. R. Neves ◽  
Fred Gomes ◽  
Paulo Dias ◽  
Alina Trifan
Aerospace ◽  
2020 ◽  
Vol 7 (3) ◽  
pp. 31 ◽  
Author(s):  
Dario Modenini ◽  
Anton Bahu ◽  
Giacomo Curzi ◽  
Andrea Togni

To enable a reliable verification of attitude determination and control systems for nanosatellites, the environment of low Earth orbits with almost disturbance-free rotational dynamics must be simulated. This work describes the design solutions adopted for developing a dynamic nanosatellite attitude simulator testbed at the University of Bologna. The facility integrates several subsystems, including: (i) an air-bearing three-degree-of-freedom platform with an automatic balancing system, (ii) a Helmholtz cage for geomagnetic field simulation, (iii) a Sun simulator, and (iv) a metrology vision system for ground-truth attitude generation. Apart from the commercial off-the-shelf Helmholtz cage, the other subsystems required substantial development efforts. The main purpose of this manuscript is to offer some cost-effective solutions for their in-house development, and to show through experimental verification that adequate performance can be achieved. The proposed approach may thus be preferred to the procurement of turn-key solutions when required by budget constraints. The main outcomes of the commissioning phase of the facility are: a residual disturbance torque affecting the air-bearing platform of less than 5 × 10⁻⁵ N·m, a vision-system attitude determination rms accuracy of 10 arcmin, and a divergence of the Sun simulator light beam of less than 0.5° in a 35 cm diameter area.
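The quoted 10 arcmin rms accuracy is an angular separation between estimated and ground-truth attitudes. As a minimal sketch (the scalar-first quaternion convention and the sample values are assumptions, not from the paper), the per-sample error in arcminutes can be computed as:

```python
import numpy as np

def attitude_error_arcmin(q_est, q_true):
    """Angular separation between unit quaternions (scalar-first), in arcmin.
    Over a batch of samples, the rms of these values gives the rms accuracy."""
    dots = np.abs(np.sum(np.asarray(q_est) * np.asarray(q_true), axis=-1))
    return np.degrees(2.0 * np.arccos(np.clip(dots, -1.0, 1.0))) * 60.0

# Identity attitude vs. an estimate off by 10 arcmin about the z-axis
theta = np.radians(10.0 / 60.0)
q_true = np.array([1.0, 0.0, 0.0, 0.0])
q_est = np.array([np.cos(theta / 2.0), 0.0, 0.0, np.sin(theta / 2.0)])
err = attitude_error_arcmin(q_est, q_true)
```

The absolute value of the dot product makes the metric insensitive to the q/−q sign ambiguity of unit quaternions.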


Author(s):  
Gianmichele Grittani ◽  
Gilberto Gallinelli ◽  
José Ramírez

2011 ◽  
Vol 103 ◽  
pp. 717-724
Author(s):  
Hossain Shahera ◽  
Serikawa Seiichi

Texture surface analysis is very important for machine vision systems. We explore Gray Level Co-occurrence Matrix (GLCM)-based second-order statistical features to understand image texture surfaces. We first computed several features on our ground-truth dataset to understand its nature, and later applied them to a building dataset. Based on our experimental results, we conclude that these image features can be useful for texture analysis and related fields.
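A minimal NumPy sketch of the technique named above (the offset, quantisation level, and test patterns are illustrative assumptions): the co-occurrence matrix counts gray-level pairs at a fixed pixel offset, and second-order statistics such as contrast, homogeneity, and energy are then read off it.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalised gray-level co-occurrence matrix for one offset (dx, dy >= 0)."""
    a = img[:img.shape[0] - dy, :img.shape[1] - dx]   # reference pixels
    b = img[dy:, dx:]                                 # neighbour pixels
    P = np.zeros((levels, levels))
    np.add.at(P, (a.ravel(), b.ravel()), 1.0)
    return P / P.sum()

def glcm_features(P):
    """A few classic second-order (Haralick-style) statistics."""
    i, j = np.indices(P.shape)
    return {"contrast": np.sum((i - j) ** 2 * P),
            "homogeneity": np.sum(P / (1.0 + (i - j) ** 2)),
            "energy": np.sum(P ** 2)}

flat = glcm_features(glcm(np.zeros((8, 8), dtype=int), levels=2))
checker = glcm_features(glcm(np.indices((8, 8)).sum(axis=0) % 2, levels=2))
# A flat patch has zero contrast; a checkerboard maximises it at a 1-pixel offset.
```

In practice one averages such features over several offsets and directions so the descriptor is less sensitive to texture orientation.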


Author(s):  
Eirik Hexeberg Henriksen ◽  
Ingrid Schjølberg ◽  
Tor Berge Gjersvik

A vision-based underwater localization system using fiducial markers is proposed. The system is implemented as a new control mode in a shared control system for ROVs and tested experimentally in a pool. A high-accuracy underwater motion capture system in the pool is used as ground truth for performance assessment of the proposed system. The main objective has been to assess the feasibility of using a vision system and fiducial markers for localization of an underwater vehicle performing automated manipulation at subsea facilities. This research shows that it is feasible to perform autonomous manipulation using the proposed system. The assessment of performance in a set of experiments shows that all degrees of freedom have an approximately Gaussian error distribution with zero mean. The largest position errors are experienced in the Y-direction. This appears to be caused by the inherent coupling between yaw and lateral position in the camera projection of the fiducial marker, together with errors in the yaw direction. The main contribution of this paper is an experimental performance assessment, in a marine environment, of a localization method commonly used for terrestrial robots.
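The yaw/lateral coupling mentioned above can be sketched with a plain pinhole model (focal length, marker size, and range here are illustrative assumptions, not values from the paper): a large share of the image motion caused by a small yaw of the marker is indistinguishable from a small lateral shift, so the pose solver trades one error for the other.

```python
import numpy as np

# Assumed camera/marker geometry: 800 px focal length, a 20 cm square
# fiducial marker viewed head-on from 1 m.
f, Z, s = 800.0, 1.0, 0.1
corners = np.array([[-s, -s], [s, -s], [s, s], [-s, s]])

def project(yaw=0.0, ty=0.0):
    """Pinhole projection of the marker corners after yawing the marker
    about its vertical axis and/or shifting it laterally by ty metres."""
    x, y = corners[:, 0], corners[:, 1]
    Xc = x * np.cos(yaw) + ty
    Zc = Z - x * np.sin(yaw)
    return np.stack([f * Xc / Zc, f * y / Zc], axis=1)

base = project()
d_yaw = (project(yaw=np.radians(2.0)) - base).ravel()  # image motion from 2 deg yaw
d_lat = (project(ty=1e-3) - base).ravel()              # image motion from 1 mm shift

# Fraction of the yaw-induced image motion that looks like a lateral shift,
# and the spurious lateral offset (metres) it would imply.
coupling = d_yaw @ d_lat / (np.linalg.norm(d_yaw) * np.linalg.norm(d_lat))
t_apparent = (d_yaw @ d_lat) / (d_lat @ d_lat) * 1e-3
```

With these numbers roughly 70% of the yaw-induced corner motion projects onto the pure-translation direction, which is consistent with yaw errors bleeding into the lateral (Y) position estimate.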


2018 ◽  
Vol 75 (4) ◽  
pp. 1393-1404 ◽  
Author(s):  
Melanie J Underwood ◽  
Shale Rosen ◽  
Arill Engås ◽  
Terje Jørgensen ◽  
Anders Fernö

Abstract In-trawl camera systems promise to improve the resolution of trawl sampling used to ground-truth the interpretation of acoustic survey data. In this study, the residence time of fish in front of the Deep Vision camera system, used to identify, measure and count fish inside the trawl, was analysed to determine the reliability of spatial distribution recorded by the system. Although Atlantic herring (Clupea harengus), haddock (Melanogrammus aeglefinus), and most Atlantic cod (Gadus morhua) moved quickly back through the aft part of the pelagic trawl, saithe (Pollachius virens) spent up to 4 min in front of the system. The residence time increased for saithe and cod when other individuals were present, and cod swimming in the low water flow close to the trawl netting spent longer there than cod at the centre of the trawl. Surprisingly, residence time was not related to the size of the fish, which may be explained by the collective behaviour of shoaling fish. Our findings suggest that while in-trawl images can be used to identify, measure and count most species, when sampling fast-swimming species such as saithe the position inferred from when they were imaged may not reflect the actual spatial distribution prior to capture.


Birds flying in flocks rely on vision as one of the most critical senses enabling them to respond to their neighbours' motion. Inspired by this, a novel approach to developing a vision system as the primary sensor for relative positioning in flight formation in a Leader-Follower scenario is introduced in this paper. To run the system in real time and on board unmanned aerial vehicles (UAVs) with up to 1.5 kilograms of payload capacity, a few computing platforms were reviewed and evaluated. The study shows that the NVIDIA Jetson TX1 is the platform best suited to this project. In addition, several different techniques and approaches for developing the algorithm are discussed. As per the system requirements and the conducted study, the algorithm developed for this vision system is based on a tracking and online machine learning approach. Flight tests were performed to check the accuracy and reliability of the system, and the results indicate a minimum accuracy of 83% for the vision system against ground-truth data.
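The abstract does not spell out how a tracked leader is converted into a relative position, so the following is only a generic pinhole-camera sketch (focal length, principal point, and leader wingspan are invented for illustration): range follows from the leader's apparent size, and the lateral/vertical offsets from the bounding-box centre.

```python
import numpy as np

# Assumed camera intrinsics and leader geometry (illustrative only)
F_PX = 900.0               # focal length in pixels
CX, CY = 640.0, 360.0      # principal point of a 1280x720 image
LEADER_SPAN_M = 1.2        # known wingspan of the leader UAV

def relative_position(bbox):
    """Estimate the leader's position in the follower's camera frame from a
    tracked bounding box (x_min, y_min, x_max, y_max) in pixels."""
    x0, y0, x1, y1 = bbox
    w = x1 - x0
    depth = F_PX * LEADER_SPAN_M / w            # range from apparent size
    u, v = (x0 + x1) / 2.0, (y0 + y1) / 2.0     # bounding-box centre
    lateral = (u - CX) * depth / F_PX
    vertical = (v - CY) * depth / F_PX
    return np.array([lateral, vertical, depth])

# A centred leader whose 1.2 m span subtends 100 px sits 900*1.2/100 = 10.8 m ahead
pos = relative_position((590.0, 335.0, 690.0, 385.0))
```

The size-from-scale range estimate is the weakest link in such schemes, since a one-pixel error in box width changes the depth by roughly depth/w.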


2021 ◽  
Vol 18 (6) ◽  
pp. 172988142110593
Author(s):  
Ivan Kholodilin ◽  
Yuan Li ◽  
Qinglin Wang ◽  
Paul David Bourke

Recent advancements in deep learning require a large amount of annotated training data covering varied environmental conditions. Developing and testing algorithms for the navigation of mobile robots can therefore be expensive and time-consuming. Motivated by these problems, this article presents a photorealistic simulator for the computer vision community working with omnidirectional vision systems. Built using Unity, the simulator integrates sensors, mobile robots, and elements of the indoor environment, and allows one to generate synthetic photorealistic data sets with automatic ground-truth annotations. With the aid of the proposed simulator, two practical applications are studied, namely extrinsic calibration of the vision system and three-dimensional reconstruction of the indoor environment. The proposed calibration and reconstruction techniques are simple, robust, and accurate, and are evaluated experimentally with data generated by the simulator. The proposed simulator and supporting materials are available online: http://www.ilabit.org .
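One payoff of automatic ground truth is that an estimated calibration can be scored directly against the simulator's known poses. A small sketch of such an evaluation (the 4x4 extrinsic-matrix convention and the example numbers are assumptions, not from the article):

```python
import numpy as np

def extrinsic_error(T_est, T_gt):
    """Rotation error (degrees) and translation error (metres) between an
    estimated 4x4 extrinsic matrix and the simulator's ground-truth one."""
    dT = np.linalg.inv(T_gt) @ T_est
    R, t = dT[:3, :3], dT[:3, 3]
    ang = np.degrees(np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)))
    return ang, float(np.linalg.norm(t))

# Ground truth at the origin; a hypothetical estimate off by 5 deg about Z
# and 10 cm along X.
T_gt = np.eye(4)
t5 = np.radians(5.0)
T_est = np.eye(4)
T_est[:3, :3] = np.array([[np.cos(t5), -np.sin(t5), 0.0],
                          [np.sin(t5),  np.cos(t5), 0.0],
                          [0.0,         0.0,        1.0]])
T_est[0, 3] = 0.10
ang, dist = extrinsic_error(T_est, T_gt)
```

Expressing the error in the ground-truth frame (via the relative transform dT) keeps rotation and translation errors cleanly separated.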


2015 ◽  
Vol 12 (04) ◽  
pp. 1550019
Author(s):  
Liyuan Li ◽  
Qianli Xu ◽  
Gang S. Wang ◽  
Xinguo Yu ◽  
Yeow Kee Tan ◽  
...  

Computational systems for human–robot interaction (HRI) could benefit from visual perceptions of social cues that are commonly employed in human–human interactions. However, existing systems focus on one or two cues for attention or intention estimation. This research investigates how social robots may exploit a wide spectrum of visual cues for multiparty interactions. It is proposed that the vision system for social cue perception should be supported by two dimensions of functionality, namely, vision functionality and cognitive functionality. A vision-based system is proposed for a robot receptionist to embrace both functionalities for multiparty interactions. The module of vision functionality consists of a suite of methods that computationally recognize potential visual cues related to social behavior understanding. The performance of these models is validated against a ground-truth annotation dataset. The module of cognitive functionality consists of two computational models that (1) quantify users’ attention saliency and engagement intentions, and (2) facilitate engagement-aware behaviors for the robot to adjust its direction of attention and manage the conversational floor. The performance of the robot’s engagement-aware behaviors is evaluated in a multiparty dialog scenario. The results show that the robot’s engagement-aware behavior based on visual perceptions significantly improves the effectiveness of communication and positively affects user experience.
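The paper's actual engagement model is not reproduced here; the following is only a hypothetical stand-in (cue names and weights invented) showing the general shape of fusing several normalised visual-cue scores into one engagement estimate that the robot can rank users by.

```python
# Hypothetical cue weights -- a stand-in, not the paper's published model.
WEIGHTS = {"gaze_at_robot": 0.4, "facing_robot": 0.3, "proximity": 0.2, "speaking": 0.1}

def engagement_score(cues):
    """Fuse normalised cue scores (each in [0, 1]) into a single engagement
    estimate; missing cues count as 0."""
    return sum(w * min(max(cues.get(k, 0.0), 0.0), 1.0) for k, w in WEIGHTS.items())

# The robot would grant the conversational floor to the higher-scoring user.
user_a = engagement_score({"gaze_at_robot": 1.0, "facing_robot": 0.8, "proximity": 0.5})
user_b = engagement_score({"gaze_at_robot": 0.1, "proximity": 0.9})
```

A weighted linear fusion is the simplest choice; learned or probabilistic combinations of the same cues are common alternatives.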


Methodology ◽  
2019 ◽  
Vol 15 (Supplement 1) ◽  
pp. 43-60 ◽  
Author(s):  
Florian Scharf ◽  
Steffen Nestler

Abstract. It is challenging to apply exploratory factor analysis (EFA) to event-related potential (ERP) data because such data are characterized by substantial temporal overlap (i.e., large cross-loadings) between the factors, and because researchers are typically interested in the results of subsequent analyses (e.g., experimental condition effects on the level of the factor scores). In this context, relatively small deviations in the estimated factor solution from the unknown ground truth may result in substantially biased estimates of condition effects (rotation bias). Thus, in order to apply EFA to ERP data, researchers need rotation methods that are able both to recover perfect simple structure where it exists and to tolerate substantial cross-loadings between the factors where appropriate. We had two aims in the present paper. First, to extend previous research, we wanted to better understand the behavior of the rotation bias for typical ERP data. To this end, we compared the performance of a variety of factor rotation methods under conditions of varying amounts of temporal overlap between the factors. Second, we wanted to investigate whether the recently proposed component loss rotation is better able to decrease the bias than traditional simple structure rotation. The results showed that no single rotation method was generally superior across all conditions. Component loss rotation showed the best all-round performance across the investigated conditions. We conclude that component loss rotation is a suitable alternative to simple structure rotation. We discuss this result in the light of recently proposed sparse factor analysis approaches.
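The rotation-bias mechanism can be made concrete with a small linear-algebra sketch (loading values, overlap pattern, and the 5° mis-rotation are illustrative assumptions, not the paper's simulation design): if the estimated loadings are an orthogonal rotation of the truth, a condition effect that exists on one factor leaks onto the other.

```python
import numpy as np

# Hypothetical loadings for two temporally overlapping ERP factors
L = np.zeros((20, 2))
L[:12, 0] = 0.9                 # factor 1 loads on time points 0-11
L[8:, 1] = 0.7                  # factor 2 overlaps on time points 8-19
c_true = np.array([1.0, 0.0])   # the condition affects only factor 1

# Suppose the rotation method mis-rotates the solution by just 5 degrees.
t = np.radians(5.0)
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])

# Noise-free condition effect in the observed (time-point) space, then
# re-estimated under the mis-rotated loadings L @ R:
d = c_true @ L.T
c_est = d @ np.linalg.pinv((L @ R).T)
```

For an orthogonal R this gives c_est = c_true @ R exactly, so about 9% of the true effect appears on the second factor even though nothing happened there, which is the rotation bias the paper studies.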

