Automatic Calibration of Soccer Scenes Using Feature Detection

Author(s):  
Patrik Goorts ◽  
Steven Maesen ◽  
Yunjun Liu ◽  
Maarten Dumont ◽  
Philippe Bekaert ◽  
...  
2021 ◽  
Vol 13 (14) ◽  
pp. 2795
Author(s):  
Gonzalo Simarro ◽  
Daniel Calvete ◽  
Paola Souto

Following the path set out by the “Argus” project, video monitoring stations have become a very popular low-cost tool to continuously monitor beaches around the world. For these stations to offer quantitative results, the cameras must be calibrated. Cameras are typically calibrated when installed and, at best, extrinsic calibrations are performed from time to time. However, intra-day variations of camera calibration parameters due to thermal factors, or other kinds of uncontrolled movements, have been shown to introduce significant errors when transforming pixels to real-world coordinates. Starting from well-known feature detection and matching algorithms from computer vision, this paper presents a methodology to automatically calibrate cameras, on the intra-day time scale, from a small number of manually calibrated images. For the three cameras analyzed here, the proposed methodology allows for automatic calibration of >90% of the images under favorable conditions (images with many fixed features) and ∼40% for the worst-conditioned camera (almost featureless images). The results can be improved by increasing the number of manually calibrated images. Further, the procedure provides the user with two values that allow for the assessment of the expected quality of each automatic calibration. The proposed methodology, here applied to Argus-like stations, is also applicable, e.g., to CoastSnap sites, where each image corresponds to a different camera.
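The core idea, matching fixed features between a manually calibrated reference image and each new frame to correct small uncontrolled camera movements, can be sketched in a few lines. The sketch below is illustrative only: the paper's pipeline estimates full calibrations from matched features, not the mere mean translation shown here, and all names and coordinates are hypothetical.

```python
def estimate_drift(ref_pts, new_pts):
    """Mean 2-D translation between matched feature positions.

    ref_pts, new_pts: lists of (x, y) pixel coordinates of the same
    fixed features in a manually calibrated reference image and in a
    new image from the same camera.
    """
    n = len(ref_pts)
    dx = sum(nx - rx for (rx, _), (nx, _) in zip(ref_pts, new_pts)) / n
    dy = sum(ny - ry for (_, ry), (_, ny) in zip(ref_pts, new_pts)) / n
    return dx, dy

ref = [(100.0, 200.0), (400.0, 250.0), (320.0, 600.0)]
new = [(x + 3.2, y - 1.5) for x, y in ref]   # simulated intra-day drift
print(estimate_drift(ref, new))              # ≈ (3.2, -1.5)
```

In practice the matched points would come from a feature detector and matcher (e.g., ORB with a brute-force matcher in OpenCV) rather than being supplied by hand.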


2007 ◽  
Author(s):  
Jan Theeuwes ◽  
Erik van der Burg ◽  
Artem V. Belopolsky

2017 ◽  
Vol 2 (1) ◽  
pp. 80-87
Author(s):  
Puyda V. ◽  
Stoian A.

Detecting objects in a video stream is a typical problem in modern computer vision systems, which are used in many areas. Object detection can be done both on static images and on frames of a video stream. Essentially, object detection means finding color and intensity non-uniformities that can be treated as physical objects. Besides that, operations such as finding the coordinates, size and other characteristics of these non-uniformities can be executed and used to solve other computer vision problems such as object identification. In this paper, we study three algorithms that can be used to detect objects of different nature and are based on different approaches: detection of color non-uniformities, frame difference and feature detection. As input data, we use a video stream obtained from a video camera or from an MP4 video file. Simulations and testing of the algorithms were done on a general-purpose computer based on open-source hardware, built on the Broadcom BCM2711 quad-core Cortex-A72 (ARM v8) 64-bit SoC running at 1.5 GHz. The software was created in Visual Studio 2019 using OpenCV 4 on Windows 10, and on a computer running Linux (Raspbian Buster OS) for the open-source hardware. In the paper, the methods under consideration are compared. The results of the paper can be used in research and development of modern computer vision systems for different purposes. Keywords: object detection, feature points, keypoints, ORB detector, computer vision, motion detection, HSV color model
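Of the three approaches compared, frame differencing is the simplest to illustrate: a pixel belongs to a moving object when its intensity changes noticeably between consecutive frames. A minimal pure-Python sketch (on real frames, OpenCV's `cv2.absdiff` plus a threshold does the same work; the tiny grayscale frames here are made up):

```python
def frame_difference(prev, curr, threshold=25):
    """Binary motion mask from two grayscale frames (lists of rows).

    A pixel is marked 1 when its absolute intensity change between
    consecutive frames exceeds `threshold`.
    """
    return [
        [1 if abs(c - p) > threshold else 0 for p, c in zip(prow, crow)]
        for prow, crow in zip(prev, curr)
    ]

prev = [[10, 10, 10], [10, 10, 10]]
curr = [[10, 90, 10], [10, 95, 12]]   # a bright object entered the scene
print(frame_difference(prev, curr))   # [[0, 1, 0], [0, 1, 0]]
```

A real pipeline would follow this with morphological cleanup and connected-component analysis to get object coordinates and sizes.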


Author(s):  
Suresha M. ◽  
Sandeep

Local features are of great importance in computer vision, where feature detection and feature matching are two important tasks. This paper concentrates on the problem of recognizing birds using local features. The investigation evaluates the local features SURF, FAST and Harris on blurred images and images with illumination changes. The FAST and Harris corner algorithms gave lower accuracy for blurred images. The SURF algorithm gives the best results for blurred images because it identifies the strongest local features and has low time complexity; the experimental demonstration shows that SURF is robust to blur, while FAST is suitable for images with illumination changes.
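FAST's appeal is its cheap segment test: a pixel is a corner when a long contiguous arc of a ring around it is uniformly brighter, or uniformly darker, than the centre. The sketch below is a deliberate simplification (the real detector uses a 16-pixel Bresenham circle of radius 3, a machine-learned test order, and non-maximum suppression; the 12-pixel ring, parameters, and toy image here are illustrative only):

```python
def fast_corner(img, x, y, t=20, n=8):
    """Simplified FAST-style segment test on a grayscale image (rows of ints).

    (x, y) is reported as a corner when at least `n` contiguous pixels
    on a ring around it are all brighter, or all darker, than the
    centre pixel by more than `t`.
    """
    ring = [(0, -2), (1, -2), (2, -1), (2, 0), (2, 1), (1, 2),
            (0, 2), (-1, 2), (-2, 1), (-2, 0), (-2, -1), (-1, -2)]
    c = img[y][x]
    marks = []
    for dx, dy in ring:
        p = img[y + dy][x + dx]
        marks.append(1 if p > c + t else -1 if p < c - t else 0)
    best = run = 0
    prev = 0
    for m in marks + marks:          # doubled to handle wrap-around arcs
        run = run + 1 if m != 0 and m == prev else (1 if m != 0 else 0)
        best = max(best, run)
        prev = m
    return best >= n

# A bright block whose top-left corner sits at (4, 4)
img = [[200 if px >= 4 and py >= 4 else 10 for px in range(9)]
       for py in range(9)]
print(fast_corner(img, 4, 4))   # True: block corner
print(fast_corner(img, 2, 2))   # False: flat background
```

The test touches only a handful of pixels per candidate, which is why FAST is so much cheaper than Harris (which needs gradient products over a window) or SURF (which needs box-filtered Hessian responses).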


2020 ◽  
Author(s):  
Matthew Philip Kaesler ◽  
John C Dunn ◽  
Keith Ransom ◽  
Carolyn Semmler

The debate regarding the best way to test and measure eyewitness memory has dominated the eyewitness literature for more than thirty years. We argue that to resolve this debate requires the development and application of appropriate measurement models. In this study we develop models of simultaneous and sequential lineup presentations and use these to compare the procedures in terms of discriminability and response bias. We tested a key prediction of the diagnostic feature detection hypothesis that discriminability should be greater for simultaneous than sequential lineups. We fit the models to the corpus of studies originally described by Palmer and Brewer (2012, Law and Human Behavior, 36(3), 247-255) and to data from a new experiment. The results of both investigations showed that discriminability did not differ between the two procedures, while responses were more conservative for sequential presentation compared to simultaneous presentation. We conclude that the two procedures do not differ in the efficiency with which they allow eyewitness memory to be expressed. We discuss the implications of this for the diagnostic feature detection hypothesis and other sequential lineup procedures used in current jurisdictions.


2019 ◽  
Vol 31 (6) ◽  
pp. 844-850 ◽  
Author(s):  
Kevin T. Huang ◽  
Michael A. Silva ◽  
Alfred P. See ◽  
Kyle C. Wu ◽  
Troy Gallerani ◽  
...  

OBJECTIVE

Recent advances in computer vision have revolutionized many aspects of society but have yet to find significant penetrance in neurosurgery. One proposed use for this technology is to aid in the identification of implanted spinal hardware. In revision operations, knowing the manufacturer and model of previously implanted fusion systems upfront can facilitate a faster and safer procedure, but this information is frequently unavailable or incomplete. The authors present one approach for the automated, high-accuracy classification of anterior cervical hardware fusion systems using computer vision.

METHODS

Patient records were searched for those who underwent anterior-posterior (AP) cervical radiography following anterior cervical discectomy and fusion (ACDF) at the authors’ institution over a 10-year period (2008–2018). These images were then cropped and windowed to include just the cervical plating system. Images were then labeled with the appropriate manufacturer and system according to the operative record. A computer vision classifier was then constructed using the bag-of-visual-words technique and KAZE feature detection. Accuracy and validity were tested using an 80%/20% training/testing pseudorandom split over 100 iterations.

RESULTS

A total of 321 images were isolated containing 9 different ACDF systems from 5 different companies. The correct system was identified as the top choice in 91.5% ± 3.8% of the cases and as one of the top 2 or top 3 choices in 97.1% ± 2.0% and 98.4% ± 1.3% of the cases, respectively. Performance persisted despite the inclusion of variable sizes of hardware (i.e., 1-level, 2-level, and 3-level plates). Stratification by the size of hardware did not improve performance.

CONCLUSIONS

A computer vision algorithm was trained to classify at least 9 different types of anterior cervical fusion systems using relatively sparse data sets and was demonstrated to perform with high accuracy. This represents one of many potential clinical applications of machine learning and computer vision in neurosurgical practice.
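The bag-of-visual-words step reduces a variable number of local descriptors per radiograph to a fixed-length histogram that a standard classifier can consume. A minimal sketch of that quantisation step (the paper pairs it with KAZE descriptors and a learned vocabulary; the 2-D toy vectors and names below are hypothetical):

```python
def bovw_histogram(descriptors, vocabulary):
    """Bag-of-visual-words: quantise each local descriptor to its
    nearest 'visual word' and count occurrences.

    descriptors: list of feature vectors extracted from one image.
    vocabulary:  list of cluster centres, typically from k-means
                 over descriptors of the training set.
    Returns a normalised word-frequency histogram for the image.
    """
    counts = [0] * len(vocabulary)
    for d in descriptors:
        nearest = min(range(len(vocabulary)),
                      key=lambda i: sum((a - b) ** 2
                                        for a, b in zip(d, vocabulary[i])))
        counts[nearest] += 1
    total = len(descriptors)
    return [c / total for c in counts]

vocab = [[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]]   # 3 toy visual words
desc = [[0.1, 0.0], [0.9, 1.1], [1.0, 0.9], [0.1, 0.9]]
print(bovw_histogram(desc, vocab))  # [0.25, 0.5, 0.25]
```

Classifying a new image then reduces to comparing its histogram against those of labeled training images, which is why the approach works with relatively sparse data sets.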


2004 ◽  
Vol 4 (5-6) ◽  
pp. 383-388
Author(s):  
D.M. Rogers

Water is a fundamental necessity of life. Yet water supply and distribution networks the world over are old and lacking in adequate maintenance. Consequently they often leak as much water as they deliver and provide an unacceptable quality of service to the customer. In certain parts of the world, water is available only for a few hours of the day. The solution is to build a mathematical model to simulate the operation of the real network in all of its key elements and apply it to optimise its operation. To be of value, the results of the model must be compared with field data. This process is known as calibration and is an essential element in the construction of an accurate model. This paper outlines the optimum approach to building and calibrating a mathematical model and how it can be applied to automatic calibration systems.
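Calibration in this sense means tuning model parameters until the simulated results reproduce the field measurements. A deliberately tiny sketch of that loop, with a made-up one-parameter head-loss model (real network models calibrate many roughness and demand parameters simultaneously, against many measurement points):

```python
def model_pressure(demand, roughness):
    """Toy head-loss model: pressure drops with demand and pipe roughness."""
    return 60.0 - roughness * demand ** 2

def calibrate(field_data, candidates):
    """Pick the roughness value whose simulated pressures best match
    field measurements, by least-squares error over a candidate sweep.

    field_data: list of (demand, measured_pressure) pairs.
    """
    def sse(r):
        return sum((model_pressure(d, r) - p) ** 2 for d, p in field_data)
    return min(candidates, key=sse)

field = [(1.0, 59.8), (2.0, 59.2), (3.0, 58.2)]  # (demand, measured pressure)
print(calibrate(field, [0.1, 0.2, 0.3]))         # 0.2
```

An automatic calibration system repeats exactly this compare-and-adjust cycle, usually with a proper optimiser in place of the brute-force sweep.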


Author(s):  
Joel Z. Leibo ◽  
Tomaso Poggio

This chapter provides an overview of biological perceptual systems and their underlying computational principles, focusing on the sensory sheets of the retina and cochlea and exploring how complex feature detection emerges by combining simple feature detectors in a hierarchical fashion. We also explore how the microcircuits of the neocortex implement such schemes, pointing out similarities to progress in the field of machine vision driven by deep learning algorithms. We see signs that engineered systems are catching up with the brain. For example, vision-based pedestrian detection systems are now accurate enough to be installed as safety devices in (for now) human-driven vehicles, and the speech recognition systems embedded in smartphones have become increasingly impressive. While such systems are not entirely biologically based, we note that computational neuroscience, as described in this chapter, makes up a considerable portion of their intellectual pedigree.

