target size
Recently Published Documents


TOTAL DOCUMENTS: 401 (FIVE YEARS: 52)

H-INDEX: 35 (FIVE YEARS: 4)

2021, Vol 12
Author(s): Rebecca E. Rhodes, Hannah P. Cowley, Jay G. Huang, William Gray-Roncal, Brock A. Wester, et al.

Aerial images are frequently used in geospatial analysis to inform responses to crises and disasters, but they can pose unique challenges for visual search when they have low resolution, degraded color information, and small object sizes. Aerial image analysis is often performed by humans, but machine learning approaches are being developed to complement manual analysis. To date, however, relatively little work has explored how humans perform visual search on these tasks, and understanding this could ultimately help enable human-machine teaming. We designed a set of studies to understand what features of an aerial image make visual search difficult for humans and what strategies humans use when performing these tasks. Across two experiments, we tested human performance on a counting task with a series of aerial images and examined the influence of features such as target size, location, color, clarity, and number of targets on accuracy and search strategies. Both experiments presented trials consisting of an aerial satellite image; participants were asked to find all instances of a search template in the image. Target size was consistently a significant predictor of performance, influencing not only the accuracy of selections but also the order in which participants selected target instances in the trial. Experiment 2 demonstrated that the clarity of the target instance and the match between the color of the search template and the color of the target instance also predicted accuracy. Furthermore, color also predicted the order of selecting instances in the trial. These experiments not only establish a benchmark of typical human performance on visual search of aerial images but also identify several features that can influence the task's difficulty for humans. These results have implications for understanding human visual search on real-world tasks and for identifying when humans may benefit from automated approaches.
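As a rough illustration of the kind of predictor analysis described above, a minimal sketch of a logistic regression relating per-target selection accuracy to target size and template-to-instance color match might look like the following; the variable names, coefficients, and synthetic data are all assumptions for illustration, not the authors' actual model or dataset.

```python
# Illustrative sketch only: logistic regression of per-target selection accuracy
# on target size and color match, using synthetic data (not the study's dataset).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
target_size_px = rng.uniform(5, 60, n)    # hypothetical target sizes in pixels
color_match = rng.uniform(0, 1, n)        # hypothetical template/instance color similarity

# Assume (for illustration) that accuracy improves with size and color match.
logit = -2.0 + 0.08 * target_size_px + 1.5 * color_match
found = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([target_size_px, color_match]))
result = sm.Logit(found, X).fit(disp=False)
print(result.summary(xname=["const", "target_size_px", "color_match"]))
```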


2021, Vol 97, pp. 103502
Author(s): Kiana Kia, Jaejin Hwang, In-Sop Kim, Hakim Ishak, Jeong Ho Kim

2021, Vol 11 (21), pp. 9846
Author(s): Mungyeong Choe, Jaehyun Park, Hyun K. Kim

Although new virtual reality (VR) devices and their content are being released at a rapid pace, there are still too few studies to inform interface/interaction standards for them. This study investigated whether specific interaction factors influence task performance and the degree of VR sickness when performing pointing tasks in immersive virtual reality. A smartphone-based VR device was used, and twenty-five targets were placed in a 5 × 5 layout on a VR experimental area that extended over a range similar to the human viewing angle. Task completion time (TCT) was significantly affected by the target selection method (p < 0.001) and target size (p < 0.001), whereas the error rate (ER) differed significantly by target selection method (p < 0.001) but not by target size (p = 0.057). Target location was a factor affecting TCT (p < 0.001) but did not affect the ER (p = 0.876). VR sickness was more severe when the target size was smaller. Gaze selection was found to be more efficient when accuracy was demanded, whereas manual selection was more efficient for quick selection. Moreover, applying the experimental data to Fitts' Law showed that movement time was less affected by the device when the gaze-selection method was used. Virtual reality provides a three-dimensional visual environment, yet a one-dimensional formula can sufficiently predict movement time. The results of this study are expected to serve as a reference for preparing interface/interaction design standards for virtual reality.
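For reference, Fitts' Law predicts movement time from the ratio of target distance to target width. A minimal sketch of the Shannon formulation is given below; the coefficients a and b are placeholders, not the values fitted to this study's data.

```python
# Fitts' Law (Shannon formulation): MT = a + b * log2(D / W + 1).
# The coefficients a and b are placeholders, not values fitted in this study.
import math

def movement_time(distance: float, width: float, a: float = 0.2, b: float = 0.15) -> float:
    """Predicted movement time (s) for a target of size `width` at `distance`."""
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# Smaller targets carry a higher index of difficulty and thus a longer predicted time.
for width_deg in (2.0, 5.0, 10.0):
    print(f"W = {width_deg:>4} deg -> MT = {movement_time(30.0, width_deg):.3f} s")
```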


2021
Author(s): Amaro Tuninetti, Andrea Megela Simmons, James A. Simmons

Big brown bats emit wideband frequency-modulated (FM) ultrasonic pulses for echolocation. They perceive target range from echo delay and target size from echo amplitude. Their sounds contain two prominent downward-sweeping harmonics (FM1, ~55-22 kHz; FM2, ~100-55 kHz), which are affected differently by propagation out to the target and back to the bat: FM2 is attenuated more than FM1. Bats anchor target ranging asymmetrically on the low frequencies in FM1, while FM2 contributes only if FM1 is also present. These experiments tested whether the bat's ability to discriminate target size from echo amplitude is likewise affected by selectively attenuating the upper or lower frequencies. Bats were trained to perform an echo-amplitude discrimination task with virtual echo targets 83 cm away. While echo delay was held constant and echo amplitude was varied to estimate threshold, either the lower FM1 frequencies or the higher FM2 frequencies were attenuated. The results parallel the effects seen in echo-delay experiments: performance was significantly poorer when the lower frequencies in echoes were attenuated than when the higher frequencies were. The bats' ability to distinguish between virtual targets at the same simulated range, from echoes arriving at the same delay, indicates a high level of focused attention for perceptually isolating one target and suppressing the other.
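To make the frequency-dependent propagation loss concrete, the sketch below compares round-trip atmospheric absorption for representative FM1 and FM2 frequencies at the 83 cm virtual target range; the dB/m coefficients are rough illustrative values, not measurements from the study.

```python
# Illustrative round-trip absorption at the 0.83 m virtual target range.
# The dB/m coefficients below are rough placeholder values chosen only to show
# that the higher FM2 frequencies lose more energy than the lower FM1 frequencies.
ATTENUATION_DB_PER_M = {
    "FM1 (~40 kHz)": 1.3,   # hypothetical atmospheric absorption coefficient
    "FM2 (~80 kHz)": 2.7,   # hypothetical atmospheric absorption coefficient
}
TARGET_RANGE_M = 0.83       # virtual target distance used in the experiments

for band, alpha_db_per_m in ATTENUATION_DB_PER_M.items():
    round_trip_loss = alpha_db_per_m * 2 * TARGET_RANGE_M  # out to the target and back
    print(f"{band}: ~{round_trip_loss:.2f} dB absorption loss (spreading loss excluded)")
```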


2021, Vol 20 (1)
Author(s): Sophya Breedlove, Aldo Badano

Background: Amyloid deposits in the temporal and frontal lobes of patients with Alzheimer's disease make these regions potential targets for aiding early diagnosis. Recently, spectral small-angle X-ray scattering techniques have been proposed for interrogating deep targets such as amyloid plaques. Results: We describe an optimization approach for the orientation of beams for deep-target characterization. The model predicts the main features of scattering profiles from targets of varying shape, size, and location. We found that increasing target size introduced additional smearing due to location uncertainty, and that the incidence angle affected the scattering profile by altering the path length, or effective target size. For temporal and frontal lobe targets, beam effectiveness varied by up to two orders of magnitude. Conclusions: Beam orientation optimization might allow patient-specific optimal paths for improved signal characterization.
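The abstract notes that the incidence angle alters the path length, or effective target size, seen by the beam. A minimal geometric sketch of that effect, assuming a slab-like target of fixed thickness, is shown below; the thickness and angles are arbitrary illustrative values, not parameters of the authors' model.

```python
# Minimal geometry sketch: for a slab-like target of thickness t, the beam's
# path length through the target grows as 1 / cos(theta) at incidence angle
# theta from the surface normal. Values are illustrative only.
import math

def effective_path_length(thickness_mm: float, incidence_deg: float) -> float:
    return thickness_mm / math.cos(math.radians(incidence_deg))

for angle_deg in (0, 15, 30, 45):
    length = effective_path_length(5.0, angle_deg)
    print(f"theta = {angle_deg:>2} deg -> path through a 5 mm target: {length:.2f} mm")
```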


2021, Vol 5, pp. 14
Author(s): Matthew Wilson, Adib R. Karam, Grayson L. Baird, Michael S. Furman, David J. Grand

Objectives: The aim of this retrospective study was to investigate the relationship between lung lesion lobar distribution, lesion size, and lung biopsy diagnostic yield. Material and Methods: This retrospective study covered CT-guided percutaneous transthoracic needle biopsies of 1522 lung lesions performed between January 1, 2013, and April 30, 2019; the median lesion size was 3.65 cm (range: 0.5–15.5 cm). Lung lesions were localized as follows: upper lobes, right middle lobe and lingula, lower lobe superior segments, and lower lobe basal segments. Biopsies were classified as either diagnostic or non-diagnostic based on the final cytology and/or pathology reports. Results were considered diagnostic if malignancy or a specific benign diagnosis was established, whereas atypical cells, non-specific benignity, or an insufficient specimen were considered non-diagnostic. Results: The positive predictive value (PPV) of a diagnostic yield was 85%, regardless of lobar distribution. Because all PPVs were relatively high across locations (84–87%), we found no statistically significant difference in PPV between locations (P = 0.79). Furthermore, for every 1 cm increase in target size, the odds of a diagnostic yield increased 1.42-fold (i.e., by 42%) from the 85% baseline. Although target size increased the diagnostic yield differently by location (between 1.4- and 1.8-fold across locations), these differences were not statistically significant (P = 0.55). Conclusion: Percutaneous transthoracic needle biopsy of lung lesions achieved a high diagnostic yield (PPV: 84–87%) across all lobes. The odds of a diagnostic yield increased by 42% for every 1 cm increase in target size; however, this effect did not differ significantly between lobes.
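To unpack the reported effect size: an odds ratio of 1.42 per centimetre means the odds corresponding to the 85% baseline PPV are multiplied by 1.42 for each additional centimetre of target size. The sketch below performs that conversion; it is a worked illustration of the arithmetic only, not a re-analysis of the study data.

```python
# Worked illustration: convert the 85% baseline probability to odds, scale the
# odds by 1.42 per additional centimetre of target size, and convert back.
def yield_probability(baseline_p: float, odds_ratio: float, extra_cm: float) -> float:
    odds = baseline_p / (1.0 - baseline_p)
    new_odds = odds * odds_ratio ** extra_cm
    return new_odds / (1.0 + new_odds)

for extra_cm in (0, 1, 2, 3):
    p = yield_probability(0.85, 1.42, extra_cm)
    print(f"+{extra_cm} cm -> predicted diagnostic yield ~ {p:.1%}")
```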


2021, Vol 11 (13), pp. 6203
Author(s): Seonkoo Chee, Jaemyung Ryu, Hojong Choi

Recently released mobile phone cameras can photograph objects at fairly close distances, and their field angles have increased. Measuring the resolution of such a camera requires photographing a test target; to measure resolution as a function of object distance for a wide-field-angle camera, the target would have to be both large and repositioned for each distance, which is impractical because the physical target size cannot be changed. Instead, a collimator is used to create a virtual object of the target, and moving part of the lens group constituting the collimator changes the virtual object distance. If this change in virtual object distance is large, however, the collimator's own resolution may also change. We therefore designed a collimator that maintains its resolution as the virtual object distance changes, using a floating type in which two lens groups move. We propose this new floating collimator optical system for inspecting the resolution of mobile phone cameras from infinity down to close range while compensating for the aberrations caused by object distance changes.
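A thin-lens sketch of how shifting a lens group relative to a fixed target reticle moves the virtual object distance is shown below, assuming a single ideal lens; the focal length and spacings are placeholders and do not reflect the proposed floating design. Even millimetre-scale shifts move the virtual object from roughly 20 m to about 1 m, which is why a design that preserves resolution over this range is needed.

```python
# Thin-lens sketch: shifting an ideal lens relative to a fixed reticle (the target)
# changes the image distance, i.e. the virtual object distance presented to the
# camera under test. Uses 1/v = 1/f + 1/u with u < 0 for an object in front of
# the lens. The focal length and spacings are illustrative placeholders.
def image_distance_mm(focal_length_mm: float, object_distance_mm: float) -> float:
    return 1.0 / (1.0 / focal_length_mm + 1.0 / object_distance_mm)

FOCAL_LENGTH_MM = 100.0
for reticle_to_lens_mm in (-100.5, -102.0, -105.0, -110.0):  # small lens-group shifts
    v = image_distance_mm(FOCAL_LENGTH_MM, reticle_to_lens_mm)
    print(f"object at {reticle_to_lens_mm} mm -> virtual object at ~{v / 1000:.1f} m")
```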


2021, Vol 12 (3), pp. 01-16
Author(s): Chiman Kwan, David Gribben

Detecting vehicles in long-range, low-quality infrared videos with deep learning techniques such as You Only Look Once (YOLO) is challenging, mainly because of small target sizes: small targets lack detailed texture information. This paper focuses on practical approaches for target detection in infrared videos using deep learning techniques. We first investigated a newer version of YOLO (YOLO v4). We then proposed a practical and effective approach of training the YOLO model using videos from longer ranges. Experimental results on real infrared videos ranging from 1000 m to 3500 m demonstrated substantial performance improvements. In particular, the average detection percentage over the six ranges from 1000 m to 3500 m improved from 54% when the 1500 m videos were used for training to 95% when the 3000 m videos were used.
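The sketch below simply reproduces the averaging described in the last sentence, computing the mean detection percentage over six ranges for two training conditions; the per-range numbers are placeholders chosen only so that the averages match the reported 54% and 95%, and are not the paper's per-range results.

```python
# Illustrative averaging of per-range detection percentages for two training
# conditions. The per-range values are placeholders, not the paper's results;
# only their averages (54% and 95%) match the figures quoted in the abstract.
ranges_m = [1000, 1500, 2000, 2500, 3000, 3500]

detection_pct = {
    "trained on 1500 m videos": [80, 75, 60, 45, 35, 29],   # hypothetical breakdown
    "trained on 3000 m videos": [97, 96, 95, 95, 94, 93],   # hypothetical breakdown
}

for condition, per_range in detection_pct.items():
    average = sum(per_range) / len(per_range)
    print(f"{condition}: average over {len(ranges_m)} ranges = {average:.0f}%")
```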

