geometric verification
Recently Published Documents

TOTAL DOCUMENTS: 39 (five years: 11)
H-INDEX: 7 (five years: 1)

BJR|Open, 2021, Vol 3 (1), pp. 20210015
Author(s): Kate Shrewsbury-Gee, Daniel Kelly, Mike Kirby

Objectives: This paper uses clinical audit to determine the extent and dosimetric impact of additional imaging for patients undergoing ocular proton beam therapy who have no clips visible in the collimated beam. Methods: An audit was conducted on 399 patients treated at The National Centre for Eye Proton Therapy between 3 July 2017 and 14 June 2019. The mean total number of image pairs over the course of treatment was compared between patients with and without clips visible in the collimated beam. Results: Among 364 evaluable patients, 333 had clips visible in the collimated beam and 31 did not. Patients with no clips visible required a statistically significant five additional image pairs compared with those with clips visible (mean 14.6 vs 9.6 image pairs, respectively; p = 2.74 × 10⁻⁶). This equated to an additional 1.5 mGy absorbed dose, representing an increase in secondary cancer induction risk from 0.0004% to 0.0007%. Conclusions: The small increase in concomitant dose and set-up time for patients with no clips visible in the collimated beam is not clinically significant. Advances in knowledge: This novel work bases clinical audit on real on-treatment geometric verification data and imaging frequencies, rather than on protocols, for ocular proton beam therapy; no such audit is present in the literature. The simple, straightforward methodology is equally applicable to clinical audits of photon techniques (especially those under the Ionising Radiation (Medical Exposure) Regulations).
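The dose figures in the abstract follow from simple arithmetic; a minimal sketch, assuming a per-image-pair absorbed dose inferred from the reported numbers (1.5 mGy across 5 extra pairs, i.e. 0.3 mGy per pair — an assumption, not a value stated directly in the abstract):

```python
# Audit arithmetic from the reported means; the per-pair dose is
# inferred (1.5 mGy / 5 extra pairs), not taken from the paper itself.
mean_pairs_no_clips = 14.6
mean_pairs_with_clips = 9.6

extra_pairs = mean_pairs_no_clips - mean_pairs_with_clips
dose_per_pair_mGy = 1.5 / 5.0            # assumed: 0.3 mGy per image pair
extra_dose_mGy = extra_pairs * dose_per_pair_mGy

print(round(extra_pairs, 1))             # 5.0
print(round(extra_dose_mGy, 2))          # 1.5
```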


2020, pp. 027836492094859
Author(s): Yulun Tian, Kasra Khosoussi, Jonathan P How

This paper presents resource-aware algorithms for distributed inter-robot loop-closure detection for applications such as collaborative simultaneous localization and mapping (CSLAM) and distributed image retrieval. In real-world scenarios, this process is resource-intensive, as it involves exchanging many observations and geometrically verifying a large number of potential matches. This poses severe challenges for small, low-cost robots with operational and resource constraints that limit, e.g., energy consumption, communication bandwidth, and computation capacity. This paper proposes a framework in which robots first exchange compact queries to identify a set of potential loop closures. We then seek to select a subset of potential inter-robot loop closures for geometric verification that maximizes a monotone submodular performance metric without exceeding budgets on computation (number of geometric verifications) and communication (amount of data exchanged for geometric verification). We demonstrate that this problem is, in general, NP-hard, and present efficient approximation algorithms with provable a priori performance guarantees. The proposed framework is extensively evaluated on real and synthetic datasets. A natural convex relaxation scheme is also presented to certify the near-optimal performance of the proposed framework a posteriori.
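Budgeted maximization of a monotone submodular objective is commonly approximated by a cost-benefit greedy rule. The sketch below is illustrative only, not the authors' algorithm: candidates, their costs, and a toy coverage objective are all invented for the example.

```python
# Toy sketch: greedily select candidate loop closures under separate
# computation and communication budgets, maximizing a submodular
# coverage objective (marginal gain per unit cost). Illustrative only.
def greedy_select(candidates, comp_budget, comm_budget):
    """candidates: dict id -> (regions_covered, comp_cost, comm_cost)."""
    selected, covered = [], set()
    comp_used = comm_used = 0.0
    while True:
        best, best_ratio = None, 0.0
        for cid, (regions, comp, comm) in candidates.items():
            if cid in selected:
                continue
            if comp_used + comp > comp_budget or comm_used + comm > comm_budget:
                continue
            gain = len(set(regions) - covered)   # marginal coverage gain
            cost = comp + comm
            ratio = gain / cost if cost else float("inf")
            if ratio > best_ratio:
                best, best_ratio = cid, ratio
        if best is None:                          # nothing useful fits
            break
        regions, comp, comm = candidates[best]
        selected.append(best)
        covered |= set(regions)
        comp_used += comp
        comm_used += comm
    return selected, covered

cands = {"a": ({1, 2}, 1, 1), "b": ({2, 3}, 1, 1), "c": ({4}, 3, 3)}
sel, cov = greedy_select(cands, comp_budget=2, comm_budget=2)
print(sel, cov)   # ['a', 'b'] {1, 2, 3}
```

For monotone submodular objectives, such greedy rules carry the well-known constant-factor approximation guarantees the abstract alludes to; the paper's own algorithms and certificates are more involved.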


2020, Vol 2020 (10), pp. 313-1-313-7
Author(s): Raffaele Imbriaco, Egor Bondarev, Peter H.N. de With

Visual place recognition using query and database images from different sources remains a challenging task in computer vision. Our method exploits global descriptors for efficient image matching and local descriptors for geometric verification. We present a novel, multi-scale aggregation method for local convolutional descriptors, using memory vector construction for efficient aggregation. The method makes it possible to find a preliminary set of candidate image matches and to remove visually similar but erroneous candidates. We deploy the multi-scale aggregation for visual place recognition on three large-scale datasets. We obtain a Recall@10 above 94% on the Pittsburgh dataset, outperforming other popular convolutional descriptors used in image retrieval and place recognition. Additionally, we provide a comparison of these descriptors on a more challenging dataset containing query and database images obtained from different sources, achieving over 77% Recall@10.
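The geometric-verification step such pipelines rely on is typically RANSAC over putative keypoint matches. A toy sketch, assuming a translation-only motion model and invented correspondences (real systems fit a homography or essential matrix instead):

```python
# Toy geometric verification: RANSAC over putative keypoint matches
# with a translation-only model. Data and model are illustrative only.
import random

def ransac_inliers(matches, iters=100, tol=2.0, seed=0):
    """matches: list of ((x1, y1), (x2, y2)) putative correspondences."""
    rng = random.Random(seed)
    best = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.choice(matches)
        dx, dy = x2 - x1, y2 - y1            # hypothesized translation
        inliers = [m for m in matches
                   if abs(m[1][0] - m[0][0] - dx) <= tol
                   and abs(m[1][1] - m[0][1] - dy) <= tol]
        if len(inliers) > len(best):
            best = inliers
    return best

# Three matches consistent with a (10, 5) shift, one gross outlier.
matches = [((0, 0), (10, 5)), ((1, 2), (11, 7)),
           ((3, 1), (13, 6)), ((0, 0), (50, 50))]
print(len(ransac_inliers(matches)))   # 3
```

A candidate match surviving with a high inlier count is accepted; visually similar but geometrically inconsistent candidates, like the outlier above, are rejected.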


2020, Vol 17 (5), pp. 900-920
Author(s): Emanuele Guardiani, Anna Morabito
