image region
Recently Published Documents

TOTAL DOCUMENTS: 239 (FIVE YEARS: 27)
H-INDEX: 19 (FIVE YEARS: 0)

Sensors ◽ 2021 ◽ Vol 21 (22) ◽ pp. 7618
Author(s): Jiho Seo ◽ Jonghyeok Lee ◽ Jaehyun Park ◽ Hyungju Kim ◽ Sungjin You

To estimate the range and angle information of multiple targets, FMCW MIMO radars have been used together with 2D MUSIC algorithms. To improve estimation accuracy, the received signals from multiple FMCW MIMO radars can be collected at a data fusion center and processed coherently, which increases data communication overhead and implementation complexity. To resolve these issues, we propose a distributed 2D MUSIC algorithm with coordinate transformation, in which the 2D MUSIC algorithm is run at each radar with respect to the reference radar’s coordinate system in a distributed manner. Rather than forwarding the raw received-signal data to the fusion center, each radar performs 2D MUSIC on its own received signal in the transformed coordinates. Accordingly, the distributed radars do not need to report all their measured signals to the data fusion center; instead, they forward the values of their local 2D MUSIC cost function over the radar image region of interest. The data fusion center can then jointly estimate the range and angle information of the targets from the aggregated cost function. By applying the proposed scheme to experimentally measured data, its performance is verified in a real-environment test.
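Below is a minimal Python sketch of the fusion step described in this abstract: each radar evaluates its local 2D MUSIC cost function on a shared range-angle grid expressed in the reference radar’s coordinates, the fusion center sums the grids, and the strongest peaks of the aggregated cost give the joint range-angle estimates. The function and variable names (fuse_music_costs, local_costs, num_targets) are illustrative, not from the paper.

```python
import numpy as np

def fuse_music_costs(local_costs, num_targets):
    """local_costs: list of 2-D arrays (range bins x angle bins), one per radar,
    all evaluated on the same grid in the reference radar's coordinates."""
    aggregated = np.sum(local_costs, axis=0)          # element-wise sum of the local cost grids
    flat = aggregated.ravel()
    peak_idx = np.argpartition(flat, -num_targets)[-num_targets:]   # indices of the strongest peaks
    peaks = np.column_stack(np.unravel_index(peak_idx, aggregated.shape))
    return peaks, aggregated                          # (range bin, angle bin) pairs and the fused grid
```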



2021 ◽ Author(s): Evgeny Patrikeev

Good image editing tools that modify the colors of specified image regions or deform the depicted objects have always been an important part of graphics editors. Manual approaches to these tasks are too time-consuming, while fully automatic methods are not robust enough, so the ideal editing method should combine manual and automated components. This thesis shows that radial basis functions provide a suitable “engine” for two common image editing problems in which interactivity requires both reasonable performance and fast training. There are many freeform image deformation methods available, each with advantages and disadvantages. This thesis explores the use of radial basis functions for freeform image deformation and compares it to a standard approach that uses B-spline warping. Edit propagation is a promising user-guided color editing technique which, instead of requiring precise selection of the region being edited, accepts color edits as a few brush strokes over an image region and then propagates these edits to regions with a similar appearance. This thesis focuses on an approach to edit propagation that treats the user input as an incomplete set of values of an intended edit function; the approach interpolates between the user input values using radial basis functions to find the edit function for the whole image. While the existing approach applies the user-specified edits to all regions with similar colors, this thesis presents an extension that propagates the edits more selectively. In addition to the color information of each image point, it also takes the surrounding texture into account and better distinguishes different objects, giving the algorithm more information about the user-specified region and making the edit propagation more precise.
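As an illustration of the interpolation step described above, the following sketch fits a Gaussian RBF interpolant to edit values given at stroke pixels (each pixel described by a feature vector such as color, optionally augmented with a texture descriptor) and evaluates it at every pixel to obtain the dense edit function. The Gaussian kernel, the ridge term, and all names are assumptions for illustration, not the thesis’s exact formulation.

```python
import numpy as np

def rbf_propagate(stroke_feats, stroke_values, all_feats, sigma=0.1):
    """stroke_feats: (m, d) feature vectors at stroke pixels; stroke_values: (m,) edit values;
    all_feats: (n, d) feature vectors of every image pixel. Returns (n,) propagated edits."""
    # Gaussian RBF kernel between stroke pixels, with a small ridge term for numerical stability.
    d2 = np.sum((stroke_feats[:, None, :] - stroke_feats[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    weights = np.linalg.solve(K + 1e-6 * np.eye(len(K)), stroke_values)
    # Evaluate the interpolant at every pixel to obtain the dense edit function.
    d2_all = np.sum((all_feats[:, None, :] - stroke_feats[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2_all / (2.0 * sigma ** 2)) @ weights
```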



2021 ◽ Author(s): Konstantin Willeke ◽ Araceli Ramirez Cardenas ◽ Joachim Bellet ◽ Ziad M. Hafed

The foveal visual image region provides the human visual system with the highest acuity. However, it is unclear whether this high-fidelity representational advantage is maintained when foveal image locations are committed to short-term memory. Here we describe a paradoxically large distortion in foveal target location recall by humans. We briefly presented small but high-contrast points of light at eccentricities ranging from 0.1 to 12 deg while subjects maintained their line of sight on a stable target. After a brief memory period, the subjects indicated the remembered target locations via computer-controlled cursors. The largest localization errors, in terms of both directional deviations and amplitude percentage overshoots or undershoots, occurred for the most foveal targets, and such distortions were still present, albeit with qualitatively different patterns, when subjects shifted their gaze to indicate the remembered target locations. Foveal visual images are thus severely distorted in short-term memory.
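A minimal sketch of the two error measures mentioned in the abstract, computed from the true and the remembered (cursor-reported) target positions relative to the fixation point; the variable names are illustrative.

```python
import numpy as np

def localization_errors(target_xy, report_xy):
    """target_xy, report_xy: (x, y) positions in degrees relative to the fixation point."""
    target_xy = np.asarray(target_xy, float)
    report_xy = np.asarray(report_xy, float)
    ecc_true = np.hypot(target_xy[0], target_xy[1])
    ecc_rep = np.hypot(report_xy[0], report_xy[1])
    amp_error_pct = 100.0 * (ecc_rep - ecc_true) / ecc_true     # overshoot (+) or undershoot (-)
    dir_error = np.degrees(np.arctan2(report_xy[1], report_xy[0])
                           - np.arctan2(target_xy[1], target_xy[0]))
    dir_error = (dir_error + 180.0) % 360.0 - 180.0             # wrap to [-180, 180) degrees
    return amp_error_pct, dir_error
```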



2021 ◽ Vol 2068 (1) ◽ pp. 012023
Author(s): Dejun Li ◽ Guiyang Zhou ◽ Kang Cheng ◽ Cheng Wang ◽ Yifan Wang ◽ ...

Abstract In order to improve the accuracy of spinneret defect detection, a region-of-interest segmentation algorithm for spinneret images is proposed to address the problem that the complex background of spinneret images seriously interferes with subsequent detection. A mask image is obtained by separating the fixed-plate area and the spinneret-wall area using the flood-fill method. Contour detection is then used to find the minimum enclosing circle and the maximum inscribed circle in the mask image, which yields the mask of the spinneret area, and the spinneret area, i.e. the Region of Interest (ROI), is extracted. The experimental results show that this method can effectively separate the spinneret region and reduce background interference.
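A hedged OpenCV sketch of the pipeline described above: flood fill to separate the plate from the background, contour detection for the minimum enclosing circle, a distance transform for the maximum inscribed circle, and an annular mask between the two circles as the ROI. The Otsu threshold, the corner seed point, and the annulus step are assumptions for illustration, not the paper’s exact procedure.

```python
import cv2
import numpy as np

def spinneret_roi(gray):
    """gray: 8-bit grayscale spinneret image. Returns the masked ROI image."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Flood fill from a corner to mark the background, then combine with the
    # binary image to obtain a solid mask of the plate region.
    flood = binary.copy()
    ff_mask = np.zeros((gray.shape[0] + 2, gray.shape[1] + 2), np.uint8)
    cv2.floodFill(flood, ff_mask, (0, 0), 255)
    plate_mask = cv2.bitwise_or(binary, cv2.bitwise_not(flood))
    # Minimum enclosing circle of the largest contour and maximum inscribed
    # circle (via the distance transform) bound the spinneret area.
    contours, _ = cv2.findContours(plate_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    (cx, cy), r_outer = cv2.minEnclosingCircle(largest)
    dist = cv2.distanceTransform(plate_mask, cv2.DIST_L2, 5)
    r_inner = float(dist.max())
    # Keep the annulus between the two circles as the region of interest.
    roi_mask = np.zeros_like(gray)
    cv2.circle(roi_mask, (int(cx), int(cy)), int(r_outer), 255, -1)
    cv2.circle(roi_mask, (int(cx), int(cy)), int(r_inner), 0, -1)
    return cv2.bitwise_and(gray, gray, mask=roi_mask)
```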



2021 ◽ Vol 7 (2) ◽ pp. 871-874
Author(s): Birgit Stender ◽ Oliver Blanck ◽ Sebastian D. Reinartz ◽ Olaf Dössel

Abstract One challenge in central hemodynamic monitoring based on electrical impedance tomography (EIT) is to robustly detect ventricular signal components and the corresponding EIT image region without external monitoring information. Current stimulation and voltage measurement in EIT were simulated with finite-element porcine torso models in the presence of a multitude of thoracic blood-volume shifts. The simulated measurement data were examined for linear dependence on changes in stroke volume. Based on the results, the EIT measurement information regarding stroke-volume changes is sparse.
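A minimal sketch of the kind of linear-dependence check described above: for each simulated EIT measurement channel, the voltage changes are regressed against the imposed stroke-volume changes and the coefficient of determination is recorded, so channels carrying (linearly) stroke-volume-related information can be identified. The array names, shapes, and the use of R² are assumptions, not the paper’s exact analysis.

```python
import numpy as np

def channel_r2(voltage_changes, stroke_volume_changes):
    """voltage_changes: (n_samples, n_channels); stroke_volume_changes: (n_samples,)."""
    sv = stroke_volume_changes[:, None]
    A = np.hstack([sv, np.ones_like(sv)])                     # linear model with intercept
    coeffs, *_ = np.linalg.lstsq(A, voltage_changes, rcond=None)
    fitted = A @ coeffs
    ss_res = np.sum((voltage_changes - fitted) ** 2, axis=0)
    ss_tot = np.sum((voltage_changes - voltage_changes.mean(axis=0)) ** 2, axis=0)
    return 1.0 - ss_res / ss_tot                              # R^2 per measurement channel
```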



Mathematics ◽ 2021 ◽ Vol 9 (19) ◽ pp. 2379
Author(s): Ganbayar Batchuluun ◽ Na Rae Baek ◽ Kang Ryoung Park

Various studies have been conducted on detecting humans in images. However, there are cases where part of the human body disappears from the input image as the person leaves the camera field of view (FOV). Moreover, there are cases where a pedestrian enters the FOV and parts of the body appear only gradually. In these cases, existing methods fail at human detection and tracking. Therefore, we propose a method for predicting a wider region than the FOV of a thermal camera, based on the image prediction generative adversarial network version 2 (IPGAN-2). In an experiment using the marathon subdataset of the Boston University thermal infrared video benchmark open dataset, the proposed method showed higher image prediction accuracy (structural similarity index measure (SSIM) of 0.9437) and object detection accuracy (F1 score of 0.866, accuracy of 0.914, and intersection over union (IoU) of 0.730) than state-of-the-art methods.
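For reference, a small sketch of how the quoted detection metrics (IoU and F1 score) are typically computed from predicted and ground-truth bounding boxes; counting a detection as a true positive above some IoU threshold (e.g. 0.5) is a common convention, not a detail stated in the abstract.

```python
def box_iou(a, b):
    """a, b: boxes as (x1, y1, x2, y2). Returns intersection over union."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def f1_score(tp, fp, fn):
    """F1 from true-positive, false-positive and false-negative detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```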





2021 ◽ Vol 2021 ◽ pp. 1-11
Author(s): Chen Li

The most basic feature of an image is its edges: an edge is the boundary between one attribute region and another, the most uncertain place in the image, and the place where image information is most concentrated. Because edges contain rich information, edge localization plays an important role in image processing, and the localization method directly affects the result. To further improve the accuracy of edge localization for multidimensional images, an edge localization method based on edge symmetry is proposed. The method first detects and counts the edges of the multidimensional image and sets a region of interest, preprocesses the image with a Gaussian filter, detects the vertical edges of the filtered image, and sums the vertical gradient values of the pixels along the vertical direction to obtain candidate image regions. The position of the symmetry axis of each candidate region is determined and its symmetry strength measured; the symmetry of the vertical gradient projection in the candidate region is then analyzed to verify whether the candidate is a real edge region. A multidimensional pulse-coupled neural network (PCNN) model is used to synthesize the real edge regions after edge-symmetry processing, yielding the edge localization result for the multidimensional image. The results show that the method has strong noise resistance, clear edge contours, and precise localization.
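A minimal sketch of the candidate-region stage described above: Gaussian smoothing, a vertical-edge (horizontal-gradient) filter, column-wise summing of the gradient magnitudes, and a simple mirror-symmetry score for a candidate column range. The threshold and the symmetry measure are illustrative choices, not the paper’s exact definitions.

```python
import cv2
import numpy as np

def candidate_columns(gray, thresh_ratio=0.5):
    """Return column indices whose summed vertical-edge energy exceeds a threshold."""
    blurred = cv2.GaussianBlur(gray, (5, 5), 1.0)              # Gaussian pre-filtering
    grad_x = cv2.Sobel(blurred, cv2.CV_32F, 1, 0, ksize=3)     # vertical edges (horizontal gradient)
    column_energy = np.abs(grad_x).sum(axis=0)                 # superposed vertical gradient values
    return np.where(column_energy > thresh_ratio * column_energy.max())[0]

def symmetry_score(column_energy_segment):
    """Higher when the gradient-energy profile is mirror-symmetric about its centre."""
    seg = np.asarray(column_energy_segment, float)
    return 1.0 - np.abs(seg - seg[::-1]).sum() / (seg.sum() + 1e-9)
```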



i-Perception ◽ 2021 ◽ Vol 12 (5) ◽ pp. 204166952110545
Author(s): Fumiya Kurosawa ◽ Taiki Orima ◽ Kosuke Okada ◽ Isamu Motoyoshi

The visual system represents textural image regions as simple statistics that are useful for the rapid perception of scenes and surfaces. What image ‘textures’ are, however, has so far been defined mostly subjectively. The present study investigated the empirical conditions under which natural images are processed as texture. We first show that ‘texturality’ – i.e., whether or not an image is perceived as a texture – is strongly correlated with the perceived similarity between an original image and its Portilla-Simoncelli (PS) synthesized counterpart. We also found that both judgments are highly correlated with specific PS statistics of the image. Finally, we demonstrate that a discriminant model based on a small set of image statistics can discriminate whether a given image is perceived as a texture with over 90% accuracy. The results provide a method for determining whether a given image region is represented statistically by the human visual system.
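A minimal sketch of a discriminant model of the kind described above: a logistic-regression classifier trained on a small set of per-image statistics to predict whether observers judge an image to be a texture. The use of scikit-learn and cross-validation is an assumption for illustration; the paper’s model is built on Portilla-Simoncelli statistics.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def texturality_classifier(image_stats, is_texture_labels):
    """image_stats: (n_images, n_statistics); is_texture_labels: (n_images,) of 0/1."""
    clf = LogisticRegression(max_iter=1000)
    # Cross-validated accuracy of the discriminant model, then a final fit on all data.
    acc = cross_val_score(clf, image_stats, is_texture_labels, cv=5).mean()
    clf.fit(image_stats, is_texture_labels)
    return clf, acc
```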


