image difference
Recently Published Documents

TOTAL DOCUMENTS: 87 (five years: 20)
H-INDEX: 11 (five years: 2)

2021 ◽  
Vol 11 (15) ◽  
pp. 7006
Author(s):  
Chang-Hwan Son

Layer decomposition to separate an input image into base and detail layers has been steadily used for image restoration. Existing residual networks based on an additive model require residual layers with a small output range for fast convergence and visual quality improvement. However, in inverse halftoning, homogeneous dot patterns prevent the residual layers from producing a small output range. Therefore, a new layer decomposition network based on the Gaussian convolution model (GCM) and a structure-aware deblurring strategy is presented to achieve residual learning for both the base and detail layers. For the base layer, a new GCM-based residual subnetwork is presented. The GCM exploits a statistical property: the image difference between a continuous-tone image and a halftoned image, both blurred with a Gaussian filter, falls within a narrow output range. Subsequently, the GCM-based residual subnetwork uses a Gaussian-filtered halftoned image as the input, and outputs the image difference as a residual, thereby generating the base layer, i.e., the Gaussian-blurred continuous-tone image. For the detail layer, a new structure-aware residual deblurring subnetwork (SARDS) is presented. To remove the Gaussian blurring of the base layer, the SARDS uses the predicted base layer as the input, and outputs the deblurred version. To more effectively restore image structures such as lines and text, a new image structure map predictor is incorporated into the deblurring network to induce structure-adaptive learning. This paper provides a method to realize the residual learning of both the base and detail layers based on the GCM and SARDS. In addition, it is verified that the proposed method surpasses state-of-the-art methods based on U-Net, direct deblurring networks, and progressive residual networks.
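The core observation behind the GCM, that the difference between a Gaussian-blurred continuous-tone image and a Gaussian-blurred halftone has a narrow range, can be sketched numerically. The toy NumPy example below (a synthetic ramp image and a crude random-dither halftone, both made up for illustration, not from the paper) compares the value range of the raw difference with that of the blurred difference; the paper's subnetwork learns this residual rather than computing it in closed form:

```python
import numpy as np

def gaussian_kernel(size=11, sigma=2.0):
    # 1-D Gaussian kernel, normalized to sum to 1
    x = np.arange(size) - size // 2
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_blur(img, size=11, sigma=2.0):
    # Separable Gaussian filtering with reflect padding, output same size as input
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    out = np.pad(img, pad, mode="reflect")
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, out)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, out)
    return out

rng = np.random.default_rng(0)
cont = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))    # continuous-tone ramp
half = (cont > rng.random(cont.shape)).astype(float)  # crude random-dither halftone

raw_diff = cont - half                                 # wide range: binary dots dominate
blur_diff = gaussian_blur(cont) - gaussian_blur(half)  # narrow range: the GCM residual
```

The narrow range of `blur_diff` is what makes it a well-behaved residual target for the base-layer subnetwork.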


2021 ◽  
Vol 11 (2) ◽  
pp. 82-93
Author(s):  
Gilberto Ramos Vieira ◽  
Lívia Maria de Lima Leôncio ◽  
Clécia Gabriela Bezerra ◽  
Mírian Celly Medeiros Miranda David ◽  
Rhowena Jane Barbosa de Matos

Objective: Hydration can favor cognitive functions during childhood and adolescence, helping with daily and school activities. This study aimed to identify possible interactions between hydration and memory in children and adolescents. Methods: This is a systematic review with meta-analysis. The bibliographic search was conducted in the MEDLINE/PubMed, SciELO, LILACS, Web of Science, Embase, and Cochrane Library databases, through a combination of the descriptors: “hydration” AND “memory”; “hydration” AND “memory” AND “child”; “hydration” AND “memory” AND “children”; “organism hydration status” AND “memory”; “organism hydration status” AND “memory” AND “child”. Results: The search returned 816 articles, of which ten were selected for qualitative synthesis and two for the meta-analysis. The results indicated that hydration did not enhance working, visual, or visuomotor memory, or visual attention (Line Tracing Task, MD 0.67, 95% CI -0.87 to 2.22; Indirect Image Difference, MD 0.32, 95% CI -0.75 to 1.40; Letter Cancellation, MD 1.68, 95% CI -0.81 to 4.17). Conclusion: Based on the obtained results, hydration per se does not enhance working, visual, or visuomotor memory, or visual attention. However, gaps remain regarding other types of memory and cognitive, motor, nutritional, and environmental integration.
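For readers unfamiliar with how pooled mean differences (MD) and their 95% confidence intervals arise in such meta-analyses, a fixed-effect inverse-variance pooling can be sketched as follows. The study-level numbers below are invented for illustration and are not the review's data:

```python
import numpy as np

# Hypothetical mean differences and 95% CIs from two studies (illustrative only)
md = np.array([0.32, 1.10])
lo = np.array([-0.75, -0.20])
hi = np.array([1.40, 2.40])

se = (hi - lo) / (2 * 1.96)        # back-calculate standard errors from the CIs
w = 1.0 / se**2                    # inverse-variance weights
pooled = (w * md).sum() / w.sum()  # fixed-effect pooled mean difference
pooled_se = np.sqrt(1.0 / w.sum())
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
```

The pooled estimate always lies between the study estimates, and pooling shrinks the standard error below that of any single study.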


2021 ◽  
Vol 13 (5) ◽  
pp. 868
Author(s):  
Zhenxuan Li ◽  
Wenzhong Shi ◽  
Yongchao Zhu ◽  
Hua Zhang ◽  
Ming Hao ◽  
...  

Recently, land cover change detection has become a research focus of remote sensing. To obtain change information from remote sensing images at fine spatial and temporal resolutions, subpixel change detection is widely studied and applied. In this paper, a new subpixel change detection method based on radial basis functions (RBF) for remote sensing images is proposed, in which the abundance image difference measure (AIDM) is designed and utilized to enhance the subpixel mapping (SPM) by borrowing the fine spatial distribution of the fine spatial resolution image to decrease the influence of spectral unmixing error. First, the fine and coarse spatial resolution images are used to develop subpixel change detection. Second, linear spectral mixing modeling and the degradation procedure are conducted on the coarse and fine spatial resolution images, respectively, to produce two temporal abundance images. Then, the designed AIDM is utilized to enhance the RBF-based SPM by comparing the two temporal abundance images. Finally, the proposed RBF-AIDM method is applied for SPM and subpixel change detection. Synthetic images based on Landsat-7 Enhanced Thematic Mapper Plus (ETM+) and real case images based on two temporal Landsat-8 Operational Land Imager (OLI) images and one Moderate Resolution Imaging Spectroradiometer (MODIS) image are used to validate the proposed method. The experimental results indicate that the proposed method can sufficiently decrease the influence of spectral unmixing error and improve the subpixel change detection results.
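The abundance images in this pipeline come from linear spectral unmixing, and an AIDM-style measure then compares them per class. A minimal sketch with hypothetical endmember spectra follows (unconstrained least squares for brevity; real unmixing pipelines add sum-to-one and non-negativity constraints):

```python
import numpy as np

# Hypothetical endmember spectra: rows = spectral bands, columns = land cover classes
E = np.array([[0.9, 0.1, 0.3],
              [0.2, 0.8, 0.3],
              [0.1, 0.2, 0.9],
              [0.5, 0.4, 0.2]])

true_a = np.array([0.6, 0.3, 0.1])  # ground-truth abundances for one pixel (sum to 1)
y = E @ true_a                      # linear mixing model: observed pixel spectrum

# Unconstrained least-squares unmixing recovers the abundances
a_hat, *_ = np.linalg.lstsq(E, y, rcond=None)

# AIDM-style comparison of two temporal abundance vectors for the same pixel:
# the per-class absolute change guides the RBF-based subpixel mapping
a_t1 = np.array([0.6, 0.3, 0.1])
a_t2 = np.array([0.2, 0.3, 0.5])
aidm = np.abs(a_t2 - a_t1)
```

In the full method this per-class difference is computed for every pixel, yielding the abundance image difference that enhances the SPM.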


2021 ◽  
pp. 1-1
Author(s):  
Qingbao Huang ◽  
Yu Liang ◽  
Jielong Wei ◽  
Cai Yi ◽  
Hanyu Liang ◽  
...  

Author(s):  
Pontus Andersson ◽  
Jim Nilsson ◽  
Tomas Akenine-Möller ◽  
Magnus Oskarsson ◽  
Kalle Åström ◽  
...  

Image quality measures are becoming increasingly important in the field of computer graphics. For example, there is currently a major focus on generating photorealistic images in real time by combining path tracing with denoising, for which such quality assessment is integral. We present FLIP, which is a difference evaluator with a particular focus on the differences between rendered images and corresponding ground truths. Our algorithm produces a map that approximates the difference perceived by humans when alternating between two images. FLIP is a combination of modified existing building blocks, and the net result is surprisingly powerful. We have compared our work against a wide range of existing image difference algorithms and we have visually inspected over a thousand image pairs that were either retrieved from image databases or generated in-house. We also present results of a user study which indicate that our method performs substantially better, on average, than the other algorithms. To facilitate the use of FLIP, we provide source code in C++, MATLAB, NumPy/SciPy, and PyTorch.
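As a rough illustration of what a difference evaluator produces, namely a per-pixel error map, the sketch below computes a luminance-weighted absolute difference between two images. This is emphatically not the FLIP algorithm (which additionally models color perception, edge and point features, and viewing distance); it only shows the general input/output shape of such a metric:

```python
import numpy as np

def diff_map(ref, test, eps=1e-6):
    # Crude per-pixel error map in [0, 1]: absolute channel difference
    # weighted by Rec. 709 luma coefficients, then normalized for display.
    w = np.array([0.2126, 0.7152, 0.0722])
    d = np.abs(ref - test) @ w
    return d / (d.max() + eps)

rng = np.random.default_rng(1)
ref = rng.random((8, 8, 3))   # hypothetical rendered "ground truth"
test = ref.copy()
test[2:4, 2:4] += 0.5         # inject a localized error
m = diff_map(ref, test)
```

A perceptual metric like FLIP replaces the naive weighting above with models of how humans perceive differences when flipping between the two images.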


2020 ◽  
Vol 34 (10) ◽  
pp. 13819-13820 ◽  
Author(s):  
Chong Jiang ◽  
Zongzhang Zhang ◽  
Zixuan Chen ◽  
Jiacheng Zhu ◽  
Junpeng Jiang

Third-person imitation learning (TPIL) is a variant of generative adversarial imitation learning that can learn an expert-like policy from third-person expert demonstrations. Third-person expert demonstrations usually exist in the form of videos recorded from a third-person perspective, and there is a lack of direct correspondence with the samples generated by the agent. To alleviate this problem, we improve TPIL by applying image difference and the variational discriminator bottleneck. Empirically, our new method performs better than TPIL on two MuJoCo tasks, Reacher and Inverted Pendulum.
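The variational discriminator bottleneck constrains the information flowing from the discriminator's input into its internal embedding. Assuming a Gaussian encoder, the KL penalty and the dual update of its Lagrange multiplier can be sketched as follows (the function names, information budget, and learning rate here are hypothetical, not taken from the paper):

```python
import numpy as np

def vdb_kl(mu, log_var):
    # KL( N(mu, sigma^2) || N(0, I) ) per sample, summed over latent dimensions
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

def update_beta(beta, kl_batch, i_c=0.5, lr=1e-3):
    # Dual-gradient ascent on the Lagrange multiplier: beta grows while the
    # average KL exceeds the information budget i_c, and is clipped at zero
    return max(0.0, beta + lr * (kl_batch.mean() - i_c))
```

During training, the discriminator loss would add `beta * vdb_kl(...)`, and `update_beta` would be called once per batch.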


2020 ◽  
Vol 8 (6) ◽  
pp. 5116-5118

Face detection, face tracking, and object identification are the first steps in applications such as face-detection-based attendance marking systems, video surveillance, and the tracking of human faces in emergencies. The main objective of our project is to detect and track moving human faces with a permanently placed, fixed camera. We propose a general moving-face detection and tracking system. Our project mainly focuses on detecting moving human faces in a scene, for example, people meeting one another, who remain detected as long as they stay in view. This is done with an image difference algorithm implemented in Python, and the processing time for each frame can also be calculated.
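A minimal sketch of the frame-differencing idea behind such an image difference algorithm, in plain NumPy (a real pipeline would typically use OpenCV for capture, smoothing, morphology, and the face detector itself; the threshold value here is an arbitrary illustration):

```python
import numpy as np

def motion_mask(prev, curr, thresh=25):
    # Binary motion mask via absolute frame differencing on 8-bit grayscale
    # frames; cast to a signed type first so the subtraction cannot wrap around.
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return (diff > thresh).astype(np.uint8)

# Hypothetical pair of frames: a bright region "moves" into the second frame
prev = np.zeros((120, 160), dtype=np.uint8)
curr = prev.copy()
curr[40:80, 60:100] = 200
mask = motion_mask(prev, curr)
```

In the full system, a face detector would then be run only on the regions the mask flags as moving, which also makes per-frame timing straightforward to measure.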

