Matching pixel and object image sizes during selection based on motion blur

Author(s):  
Aleksander K. Tsytsulin ◽  
Aleksey I. Bobrovsky ◽  
Aleksey V. Morozov


2021 ◽  
Vol 14 (3) ◽  
pp. 1-17
Author(s):  
Elena Villaespesa ◽  
Seth Crider

Computer vision algorithms are increasingly being applied to museum collections to identify patterns, colors, and subjects by generating tags for each object image. Multiple off-the-shelf systems offer an accessible and rapid way to undertake this process. Based on the highlights of the Metropolitan Museum of Art's collection, this article examines the similarities and differences between the tags generated by three well-known computer vision systems (Google Cloud Vision, Amazon Rekognition, and IBM Watson). The results provide insights into the characteristics of these taxonomies in terms of the volume of tags generated for each object, their diversity, typology, and accuracy. Consequently, this article discusses the need for museums to define their own subject tagging strategy and criteria for selecting computer vision tools, based on the type of collection and the tags needed to complement their metadata.
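The kind of cross-system comparison described above, i.e. tag volume per object and agreement between taxonomies, can be sketched with simple set operations. The tag lists below are hypothetical stand-ins, not actual output from any of the three APIs:

```python
def jaccard(a, b):
    """Jaccard similarity between two tag sets (0 = disjoint, 1 = identical)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical tags returned by three systems for one object image
tags = {
    "google": ["painting", "art", "portrait", "frame"],
    "amazon": ["art", "painting", "person", "canvas"],
    "ibm":    ["portrait", "person", "art"],
}

# Volume of tags per system for this object
volume = {system: len(t) for system, t in tags.items()}

# Pairwise agreement between the three taxonomies
pairs = [("google", "amazon"), ("google", "ibm"), ("amazon", "ibm")]
overlap = {p: jaccard(tags[p[0]], tags[p[1]]) for p in pairs}
```

Aggregating such per-object volumes and overlaps across a whole collection is one way to quantify the diversity and agreement the article analyzes.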


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Barmak Honarvar Shakibaei Asli ◽  
Yifan Zhao ◽  
John Ahmet Erkoyuncu

Abstract: High-quality medical ultrasound imaging is degraded by motion blur, while medical image analysis requires motionless, accurate data acquired by sonographers. The main idea of this paper is to establish motion blur invariants in both the frequency and moment domains to estimate the motion parameters of ultrasound images. We propose a discrete model of the point spread function of motion blur convolution, based on the Dirac delta function, to simplify the analysis of motion invariants in the frequency and moment domains. This model paves the way for estimating the motion angle and length in terms of the proposed invariant features. The performance of the proposed schemes is compared with other state-of-the-art image deblurring methods. The experimental study is performed using fetal phantom images, clinical fetal ultrasound images, and breast scans. Moreover, to validate the accuracy of the proposed experimental framework, we apply two image quality assessment methods, no-reference and full-reference, to show the robustness of the proposed algorithms compared with well-known approaches.
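The frequency-domain idea behind such blur-parameter estimation can be sketched in one dimension: a linear motion blur of length L acts as a box filter, whose spectrum is sinc-like with zeros spaced n/L bins apart, so locating the first spectral zero recovers L. This is a minimal noiseless NumPy sketch of that classical observation, not the paper's discrete Dirac-delta model or its moment-domain invariants:

```python
import numpy as np

def motion_psf_1d(n, length):
    """1-D horizontal motion-blur PSF: a normalized box of `length` samples."""
    psf = np.zeros(n)
    psf[:length] = 1.0 / length
    return psf

def estimate_blur_length(psf):
    """Estimate blur length from the first spectral zero: L ~ n / k0."""
    mag = np.abs(np.fft.fft(psf))
    # first index (excluding DC) where the magnitude essentially vanishes
    k0 = int(np.argmax(mag[1:] < 1e-8)) + 1
    return round(len(psf) / k0)

n, true_length = 256, 16
psf = motion_psf_1d(n, true_length)
est = estimate_blur_length(psf)  # recovers 16 in this noiseless setting
```

With noisy real images the zeros become minima rather than exact zeros, which is one motivation for the more robust invariant features the paper develops.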


Author(s):  
Denys Rozumnyi ◽  
Jan Kotera ◽  
Filip Šroubek ◽  
Jiří Matas

Abstract: Objects moving at high speed along complex trajectories often appear in videos, especially videos of sports. Such objects travel a considerable distance during the exposure time of a single frame, and therefore their position in the frame is not well defined. They appear as semi-transparent streaks due to motion blur and cannot be reliably tracked by general trackers. We propose a novel approach called Tracking by Deblatting, based on the observation that motion blur is directly related to the intra-frame trajectory of an object. Blur is estimated by solving two intertwined inverse problems, blind deblurring and image matting, which we call deblatting. By postprocessing, non-causal Tracking by Deblatting estimates continuous, complete, and accurate object trajectories for the whole sequence. Tracked objects are precisely localized with higher temporal resolution than by conventional trackers. Energy minimization by dynamic programming is used to detect abrupt changes of motion, called bounces. High-order polynomials are then fitted to smooth trajectory segments between bounces. The output is a continuous trajectory function that assigns a location to every real-valued time stamp from zero to the number of frames. The proposed algorithm was evaluated on a newly created dataset of videos from a high-speed camera using a novel Trajectory-IoU metric that generalizes the traditional Intersection over Union and measures the accuracy of the intra-frame trajectory. The proposed method outperforms the baselines in both recall and trajectory accuracy. Additionally, we show that precise physical quantities, such as object radius, gravity, and sub-frame velocity, can be computed from the trajectory function. Velocity estimates are compared against high-speed camera and radar measurements. The results show the high performance of the proposed method in terms of Trajectory-IoU, recall, and velocity estimation.
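The output representation described above, polynomial segments joined at bounces and queried at real-valued time stamps, can be sketched as follows. The breakpoints and coefficients here are invented for illustration; in the paper they are produced by deblatting and dynamic programming:

```python
import numpy as np
from numpy.polynomial import polynomial as P

class PiecewisePolyTrajectory:
    """Continuous trajectory: per-segment (x(t), y(t)) polynomials between bounces."""

    def __init__(self, breakpoints, segments):
        # breakpoints: sorted times [t0, ..., tK]; segments[i] = (px, py),
        # coefficient arrays (low order first) valid on [t_i, t_{i+1}]
        self.breakpoints = breakpoints
        self.segments = segments

    def _segment(self, t):
        i = np.searchsorted(self.breakpoints, t, side="right") - 1
        return self.segments[min(max(i, 0), len(self.segments) - 1)]

    def position(self, t):
        px, py = self._segment(t)
        return P.polyval(t, px), P.polyval(t, py)

    def velocity(self, t):
        # sub-frame velocity is the derivative of the trajectory function
        px, py = self._segment(t)
        return P.polyval(t, P.polyder(px)), P.polyval(t, P.polyder(py))

# Two made-up segments with a bounce at t = 1.0: x moves at constant speed,
# y follows parabolas whose vertical velocity flips sign at the bounce.
traj = PiecewisePolyTrajectory(
    breakpoints=[0.0, 1.0, 2.0],
    segments=[
        (np.array([0.0, 3.0]), np.array([0.0, 2.0, -2.0])),   # t in [0, 1)
        (np.array([0.0, 3.0]), np.array([-4.0, 6.0, -2.0])),  # t in [1, 2]
    ],
)
```

Because `position` accepts any real-valued time stamp, quantities such as sub-frame velocity come directly from the fitted polynomials rather than from frame-rate-limited detections.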


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Henriette Frikke-Schmidt ◽  
Peter Arvan ◽  
Randy J. Seeley ◽  
Corentin Cras-Méneur

Abstract: While numerous techniques can be used to measure and analyze insulin secretion in isolated islets in culture, assessments of insulin secretion in vivo are typically indirect and only semiquantitative. The CpepSfGFP reporter mouse line allows in vivo imaging of insulin secretion from individual islets after glucose stimulation in live, anesthetized mice. Imaging the whole pancreas at high resolution in live mice to track the response of each individual islet over time poses numerous technical challenges, and previous reports were limited in scope and non-quantitative. Building on this previous model, through the development of an improved methodology addressing anesthesia, temperature control, and motion blur, we were able to longitudinally track and quantify insulin content throughout a glucose challenge in up to two hundred individual islets simultaneously. Through this approach we demonstrate quantitatively for the first time that while isolated islets respond homogeneously to glucose in culture, their profiles differ significantly in vivo. Independent of size or location, some islets respond sharply to glucose stimulation while others barely secrete at all. This platform therefore provides a powerful approach to study the impact of disease, diet, surgery, or pharmacological treatments on insulin secretion in the intact pancreas in vivo.


2021 ◽  
Vol 13 (12) ◽  
pp. 2351
Author(s):  
Alessandro Torresani ◽  
Fabio Menna ◽  
Roberto Battisti ◽  
Fabio Remondino

Mobile and handheld mapping systems are now widely used as fast and cost-effective data acquisition systems for 3D reconstruction. While most research and commercial systems are based on active sensors, solutions employing only cameras and photogrammetry are attracting more and more interest due to their significantly lower cost, size, and power consumption. In this work, we propose an ARM-based, low-cost, and lightweight stereo vision mobile mapping system based on a Visual Simultaneous Localization And Mapping (V-SLAM) algorithm. The prototype system, named GuPho (Guided Photogrammetric System), also integrates an in-house guidance system that enables optimized image acquisition, robust management of the cameras, and feedback on positioning and acquisition speed. The presented results show the effectiveness of the developed prototype in mapping large scenarios, preventing motion blur, controlling camera exposure robustly, and achieving accurate 3D results.
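The motion-blur-prevention side of such acquisition-speed feedback can be illustrated with a back-of-the-envelope calculation: for linear camera motion, blur in pixels is roughly speed × exposure time / ground sample distance, so a pixel-level blur budget implies a maximum acquisition speed. The formula and numbers below are a generic sketch, not GuPho's actual guidance logic or parameters:

```python
def blur_pixels(speed_m_s, exposure_s, gsd_m_per_px):
    """Approximate motion blur in pixels for linear motion during exposure."""
    return speed_m_s * exposure_s / gsd_m_per_px

def max_speed(blur_budget_px, exposure_s, gsd_m_per_px):
    """Maximum speed (m/s) that keeps blur within `blur_budget_px` pixels."""
    return blur_budget_px * gsd_m_per_px / exposure_s

# Illustrative values: 1/500 s exposure, 2 mm ground sample distance,
# and a 1-pixel blur budget
exposure, gsd = 1 / 500, 0.002
v_max = max_speed(1.0, exposure, gsd)  # operator should stay below this speed
```

A guidance system can invert the same relation on the fly, warning the operator whenever the current speed estimate from V-SLAM exceeds the blur budget for the active exposure setting.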

