image space
Recently Published Documents


TOTAL DOCUMENTS

526
(FIVE YEARS 71)

H-INDEX

34
(FIVE YEARS 5)

Author(s):  
Christian Günther ◽  
Bahareh Khazayel ◽  
Christiane Tammer

In vector optimization, there is increasing interest in problems whose image space (a real linear space) is preordered by a convex cone that is not necessarily solid (and not necessarily pointed). It is well known that in many settings the ordering cone of the image space has an empty (topological/algebraic) interior, for instance in optimal control, approximation theory, and duality theory. Our aim is to study Pareto-type solution concepts for such vector optimization problems based on the intrinsic core notion (a well-known generalized interiority notion). We propose a new Henig-type proper efficiency concept based on generalized dilating cones that are relatively solid (i.e., their intrinsic cores are nonempty). Using linear functionals from the dual cone of the ordering cone, we characterize the sets of (weakly, properly) efficient solutions under certain generalized convexity assumptions. Toward this end, we employ separation theorems that work in the considered setting.
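As a pointer for readers, the intrinsic core notion the abstract relies on is commonly formalized as follows (a standard textbook definition; the symbols A, E, f, S, K here are generic and not taken from the paper):

```latex
% Intrinsic core (relative algebraic interior) of a convex set A in a
% real linear space E:
\operatorname{icor} A :=
  \{\, a \in A \mid \forall\, x \in \operatorname{aff}(A)\;
     \exists\, \lambda \in (0,1] : a + \lambda (x - a) \in A \,\}.
% A convex cone K is relatively solid iff icor K is nonempty. One common
% weak-efficiency concept for f : S -> E with ordering cone K then reads:
% \bar{x} \in S is weakly efficient iff
f(S) \cap \bigl( f(\bar{x}) - \operatorname{icor} K \bigr) = \emptyset.
```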


2021 ◽  
Vol 13 (22) ◽  
pp. 4663
Author(s):  
Longhui Wang ◽  
Yan Zhang ◽  
Tao Wang ◽  
Yongsheng Zhang ◽  
Zhenchao Zhang ◽  
...  

A time delay and integration (TDI) charge-coupled device (CCD) is an image sensor for capturing images of moving objects at low light levels. This study examines the construction of a geometric model for stitched TDI CCD original multi-slice images. Traditional approaches include the image-space-oriented algorithm and the object-space-oriented algorithm. The former is conceptually simple and efficient, but the panoramic stitched images it generates lack clear geometric relationships. Conversely, although the object-space-oriented algorithm generates an image with a clear geometric relationship, it is time-consuming because of its complicated and intensive computation. In this study, we developed a method for multi-slice satellite image stitching and geometric model construction. The method consists of three major steps. First, high-precision reference data assist block adjustment of the original multi-slice images, yielding a bias-corrected rational function model (RFM) for each original slice image. Second, the panoramic stitched image is generated by establishing the image coordinate conversion relationship from the panoramic stitched image to the original multi-slice images. Finally, the panoramic stitched image is divided uniformly into image grids, and the established coordinate conversion relationship together with the bias-corrected RFMs of the original multi-slice images is used to generate a virtual control grid, from which the RFM of the panoramic stitched image is constructed. To evaluate the performance, we conducted experiments using Tianhui-1 (TH-1) high-resolution images and Ziyuan-3 (ZY-3) triple linear-array image data. The experimental results show that, compared with the object-space-oriented algorithm, the stitching accuracy loss of the generated panoramic stitched image was only 0.2 pixels, with a mean value of 0.799798 pixels, meeting the sub-pixel stitching requirement. Compared with the object-space-oriented algorithm, the RFM positioning difference of the panoramic stitched image was within 0.3 m, achieving equal positioning accuracy.
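The coordinate conversion from the panoramic stitched image back to the original slice images can be illustrated with a toy sketch. The left-to-right slice layout, fixed slice width, and overlap handling below are illustrative assumptions, not the paper's actual imaging geometry:

```python
# Toy sketch (not the paper's code): map a panorama pixel back to the
# original slice image that contains it, assuming slices of equal width
# placed left to right, each overlapping its predecessor by `overlap`
# columns. In overlap zones the later slice is chosen.
def panorama_to_slice(col, row, slice_width, overlap):
    """Return (slice_index, slice_col, slice_row) for a panorama pixel."""
    stride = slice_width - overlap      # new columns contributed per slice
    idx = col // stride                 # slice whose origin is nearest left
    return idx, col - idx * stride, row # column relative to that slice
```

A real implementation would replace this fixed layout with the per-slice bias-corrected RFMs described in the abstract.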


2021 ◽  
Vol 10 (11) ◽  
pp. 742
Author(s):  
Xiaoyue Luo ◽  
Yanhui Wang ◽  
Benhe Cai ◽  
Zhanxing Li

Previous research on moving object detection in traffic surveillance video has mostly adopted a single threshold to eliminate noise caused by external environmental interference, resulting in low accuracy and low efficiency. We therefore propose a moving object detection method that accounts for spatial variation of the threshold across the image: moving object detection using an adaptive threshold (MOD-AT for short). Specifically, based on the homography method, we first establish the mapping relationship between the geometric imaging characteristics of moving objects in image space and the minimum circumscribed rectangle (BLOB) of moving objects in geographic space, so as to calculate the projected size of each moving object in image space; this lets us set an adaptive threshold for each moving object and precisely remove noise interference during detection. Further, we propose a moving object detection algorithm called GMM_BLOB (GMM denotes Gaussian mixture model) to achieve high-precision detection and noise removal. The case-study results show the following: (1) compared with existing object detection algorithms, the median error (MD) of the MOD-AT algorithm is reduced by 1.2–11.05% and the mean error (MN) by 1.5–15.5%, indicating higher accuracy in single-frame detection; (2) in terms of overall accuracy, the performance and time efficiency of the MOD-AT algorithm are improved by 7.9–24.3%, reflecting its higher efficiency; (3) the average precision (MP) of the MOD-AT algorithm is improved by 17.13–44.4%, the average recall (MR) by 7.98–24.38%, and the average F1-score (MF) by 10.13–33.97%. In general, the MOD-AT algorithm is more accurate, efficient, and robust.
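To illustrate the idea of a spatially adaptive threshold, the sketch below derives a per-location minimum blob area (in pixels) from a ground-plane homography. The homography `H`, the `min_area_m2` parameter, and the finite-difference scale estimate are all illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def ground_point(H, u, v):
    """Project image pixel (u, v) to ground-plane coordinates via homography H."""
    x, y, w = H @ np.array([u, v, 1.0])
    return np.array([x / w, y / w])

def adaptive_area_threshold(H, u, v, min_area_m2, eps=1.0):
    """Minimum blob area (pixels) a real object of min_area_m2 square
    meters would occupy near pixel (u, v)."""
    p0 = ground_point(H, u, v)
    # Ground meters covered by one pixel step in each image direction.
    du = np.linalg.norm(ground_point(H, u + eps, v) - p0) / eps
    dv = np.linalg.norm(ground_point(H, u, v + eps) - p0) / eps
    m2_per_px = du * dv                 # ground area seen by one pixel
    return min_area_m2 / m2_per_px      # pixels the object must cover
```

Blobs smaller than this location-dependent threshold would then be discarded as noise, instead of applying one global cutoff.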


2021 ◽  
pp. 1-21
Author(s):  
Naomi Shamul ◽  
Leo Joskowicz

BACKGROUND: Detecting and interpreting changes between follow-up CT scans is often time-consuming and error-prone for clinicians due to changes in patient position and non-rigid anatomy deformations. Reconstructed repeat-scan images are therefore required, precluding reduced-dose sparse-view repeat scanning. OBJECTIVE: To develop a method that automatically detects changes in a region of interest (ROI) of sparse-view repeat CT scans in the presence of non-rigid deformations of the patient's anatomy, without reconstructing the original images. METHODS: The proposed method uses the sparse sinogram data of two CT scans to distinguish genuine changes in the repeat scan from differences due to non-rigid anatomic deformations. First, the size and contrast level of the changed regions are estimated from the difference between the scans' sinogram data; the estimated types of changes help optimize the method's parameter values. The two scans are then aligned using Radon-space non-rigid registration. Rays that crossed changes in the ROI are detected and back-projected onto image space in a two-phase procedure. These rays form a likelihood map from which a binary changed-region map is computed. RESULTS: Experimental studies on four pairs of clinical lung and liver CT scans with simulated changed regions yield a mean changed-region recall rate > 86% and a mean precision rate > 83% when detecting large low-contrast changes and high-contrast changes, even small ones. The new method outperforms image-space methods using prior image constrained compressed sensing (PICCS) reconstruction, particularly for small, low-contrast changes (recall = 15.8%, precision = 94.7%). CONCLUSION: Our method for automatic change detection in sparse-view repeat CT scans with non-rigid deformations may assist radiologists by highlighting changed regions and may obviate the need for a high-quality repeat-scan image when no changes are detected.
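The ray-flagging and back-projection idea can be caricatured with just two orthogonal parallel-beam views (row sums and column sums); the actual method uses full sparse-view sinograms and Radon-space non-rigid registration, so this is only a conceptual toy:

```python
import numpy as np

def changed_region_map(img_a, img_b, thresh=1.0):
    """Flag pixels where both the horizontal and the vertical ray through
    them registered a change between the two scans."""
    # "Sinograms" for two views: projections along rows and columns.
    rows = np.abs(img_a.sum(axis=1) - img_b.sum(axis=1)) > thresh
    cols = np.abs(img_a.sum(axis=0) - img_b.sum(axis=0)) > thresh
    # Back-project: intersect the flagged rays to localize the change.
    return np.outer(rows, cols)
```

With only two views the intersection is coarse; adding more projection angles sharpens the likelihood map, which is the role the sparse sinogram plays in the paper.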


2021 ◽  
Vol 23 ◽  
pp. 169-188
Author(s):  
Xiao Lin ◽  
Jin-Soo Kim ◽  
Sang-Eun Lee

Author(s):  
Wanlu Xu ◽  
Hong Liu ◽  
Wei Shi ◽  
Ziling Miao ◽  
Zhisheng Lu ◽  
...  

Most existing person re-identification methods are effective in short-term scenarios because of their dependence on appearance. However, these methods may fail in long-term scenarios where people may change their clothes. To this end, we propose an adversarial feature disentanglement network (AFD-Net), which combines intra-class reconstruction and inter-class adversary to disentangle identity-related and identity-unrelated (clothing) features. For intra-class reconstruction, person images with the same identity are disentangled into identity and clothing features by two separate encoders, and then reconstructed into the original images to reduce intra-class feature variation. For inter-class adversary, the disentangled features across different identities are exchanged and recombined to generate adversarial clothes-changing images for training, which makes the identity and clothing features more independent. In particular, to supervise these newly generated clothes-changing images, a re-feeding strategy is designed that re-disentangles and reconstructs the new images, providing image-level self-supervision in the original image space and feature-level soft supervision in the disentangled feature space. Moreover, we collect a challenging Market-Clothes dataset and a real-world PKU-Market-Reid dataset for evaluation. Results on one large-scale short-term dataset (Market-1501) and five long-term datasets (three public and the two proposed here) confirm the superiority of our method over other state-of-the-art methods.
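The exchange-and-recombine step can be sketched at the feature level; the flat vectors and simple concatenation below are illustrative stand-ins for the paper's encoder outputs and decoder input:

```python
import numpy as np

def recombine(id_feats, cloth_feats):
    """Pair each identity feature with every *other* sample's clothing
    feature, producing inputs for adversarial clothes-changing images.

    id_feats, cloth_feats: arrays of shape (n_samples, feat_dim).
    Returns an array of shape (n_samples * (n_samples - 1), 2 * feat_dim).
    """
    out = []
    for i, f_id in enumerate(id_feats):
        for j, f_cl in enumerate(cloth_feats):
            if i != j:  # skip the original pairing; only cross-identity swaps
                out.append(np.concatenate([f_id, f_cl]))
    return np.stack(out)
```

In the actual network these recombined features would be decoded into images and then re-fed through the encoders for the self- and soft-supervision losses the abstract describes.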


Author(s):  
Ziling Miao ◽  
Hong Liu ◽  
Wei Shi ◽  
Wanlu Xu ◽  
Hanrong Ye

RGB-infrared (IR) person re-identification is a challenging task due to the large modality gap between RGB and IR images. Many existing methods bridge the modality gap by style conversion, which requires high-similarity converted images produced by complex CNN structures such as GANs. In this paper, we propose a highly compact modality-aware style adaptation (MSA) framework, which explores more potential relations between the RGB and IR modalities by introducing new related modalities; the focus thus shifts from bridging the modality gap to filling it, with no requirement for high-quality generated images. To this end, we first propose a concise, feature-free image generation structure that adapts the original modalities to two new styles compatible with both inputs via patch-based pixel redistribution. Second, we devise two image style quantification metrics that discriminate styles in image space using luminance and contrast. Third, we design two image-level losses based on the quantified results to guide style adaptation during an end-to-end four-modality collaborative learning process. Experimental results on the SYSU-MM01 and RegDB datasets show that MSA achieves significant improvements with little extra computational cost and outperforms state-of-the-art methods.
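The two style metrics named in the abstract, luminance and contrast, can be sketched under one common definition: mean intensity and intensity standard deviation. The paper's exact formulas may differ, so this is only an assumed instantiation:

```python
import numpy as np

def style_metrics(img):
    """Quantify an image's style as (luminance, contrast), taken here as
    mean intensity and population standard deviation of intensity."""
    img = np.asarray(img, dtype=float)
    return img.mean(), img.std()
```

An image-level style loss could then penalize, say, the squared difference between these metrics for a generated image and its target modality.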

