difference images
Recently Published Documents

TOTAL DOCUMENTS: 126 (five years: 29)
H-INDEX: 16 (five years: 3)

Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8290
Author(s):  
Meng Jia ◽  
Zhiqiang Zhao

Change detection from synthetic aperture radar (SAR) images is of great significance for natural environmental protection and human societal activity; it can be regarded as the process of assigning a class label (changed or unchanged) to each of the image pixels. This paper presents a novel classification technique for the SAR change-detection task that employs a generalized Gamma deep belief network (gΓ-DBN) to learn features from difference images. We aim to develop a robust change detection method that can adapt to different types of scenarios in the bitemporal co-registered Yellow River SAR image data set. This data set is characterized by different numbers of looks, which means that the two images are affected by different levels of speckle. Widely used probability distributions offer limited accuracy for describing the opposite-class pixels of difference images, making change detection more difficult. To address this issue, a gΓ-DBN is first constructed to extract hierarchical features from raw data and fit the distribution of the difference images by means of a generalized Gamma distribution. Next, we propose learning the stacked spatial and temporal information extracted from various difference images by the gΓ-DBN. Consequently, a joint high-level representation can be effectively learned for the final change map. The visual and quantitative analysis results obtained on the Yellow River SAR image data set demonstrate the effectiveness and robustness of the proposed method.
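The gΓ-DBN itself is not reproduced here, but the difference-image step this abstract builds on can be illustrated with a minimal sketch. The log-ratio operator below is a common, assumed choice for speckle-affected SAR pairs rather than the paper's own formulation; the function name and parameters are placeholders.

```python
import numpy as np

def log_ratio_difference(img_t1, img_t2, eps=1e-6):
    """Log-ratio difference image for a co-registered bitemporal SAR pair.

    The logarithm turns multiplicative speckle into an additive term,
    which is why log-ratio operators are often preferred over plain
    subtraction for SAR change detection.
    """
    img_t1 = img_t1.astype(np.float64)
    img_t2 = img_t2.astype(np.float64)
    return np.abs(np.log((img_t2 + eps) / (img_t1 + eps)))
```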


2021 ◽  
Vol 13 (23) ◽  
pp. 4948
Author(s):  
Bailu Liu ◽  
Lei Guan ◽  
Hong Chen

In recent years, coral reef ecosystems have been affected by global climate change and human factors, resulting in frequent coral bleaching events. A severe coral bleaching event occurred in the northwest of Hainan Island, South China Sea, in 2020. In this study, we used CoralTemp sea surface temperature (SST) data and Sentinel-2B imagery to detect the coral bleaching event. From 31 May to 3 October, the average SST of the study area was 31.01 °C, which is higher than the local bleaching warning threshold of 30.33 °C. In the difference images of 26 July and 4 September, widespread coral bleaching was found. According to the temporal variation in single-band reflectance, the development of the bleaching is consistent with the changes in coral bleaching thermal alerts. The results show that the thermal stress level is an effective parameter for early warning of large-scale coral bleaching, and that high-resolution difference images can be used to detect the extent of coral bleaching. The combination of the two methods can provide better support for coral protection and research.
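As a hedged sketch of the thermal-stress criterion described above, the snippet below flags grid cells whose mean SST over the study period exceeds the local bleaching-warning threshold of 30.33 °C; the array layout and function name are assumptions, not part of the study.

```python
import numpy as np

def thermal_stress_mask(sst_series, threshold_c=30.33):
    """Flag grid cells whose mean SST over the period exceeds the local
    bleaching-warning threshold (30.33 degC in the study area).

    sst_series: array of shape (time, lat, lon) with SST in degC.
    Returns a boolean (lat, lon) mask of potentially stressed cells.
    """
    mean_sst = np.nanmean(sst_series, axis=0)
    return mean_sst > threshold_c
```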


2021 ◽  
Vol 162 (6) ◽  
pp. 245
Author(s):  
Hayden Smotherman ◽  
Andrew J. Connolly ◽  
J. Bryce Kalmbach ◽  
Stephen K. N. Portillo ◽  
Dino Bektesevic ◽  
...  

Abstract Trans-Neptunian objects provide a window into the history of the solar system, but they can be challenging to observe due to their distance from the Sun and relatively low brightness. Here we report the detection, using the Kernel-Based Moving Object Detection (KBMOD) platform, of 75 moving objects that we could not link to any other known objects, the faintest of which has a VR magnitude of 25.02 ± 0.93. We recover an additional 24 sources with previously known orbits. We place constraints on the barycentric distance, inclination, and longitude of ascending node of these objects. The unidentified objects have a median barycentric distance of 41.28 au, placing them in the outer solar system. The observed inclination and magnitude distributions of all detected objects are consistent with previously published KBO distributions. We describe extensions to KBMOD, including a robust percentile-based lightcurve filter, an in-line graphics-processing-unit filter, new coadded stamp generation, and a convolutional neural network stamp filter, which allow KBMOD to take advantage of difference images. These enhancements mark a significant improvement in the readiness of KBMOD for deployment on future big data surveys such as LSST.
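The robust percentile-based lightcurve filter mentioned above can be sketched as follows; this is an illustrative outlier-rejection routine in the same spirit, not KBMOD's actual implementation, and all parameter values are assumptions.

```python
import numpy as np

def percentile_lightcurve_filter(fluxes, low=25.0, high=75.0, width=2.0):
    """Percentile-based outlier rejection for a candidate light curve.

    Keeps flux samples lying within `width` times the inter-percentile
    range of the median, making a per-trajectory statistic robust to
    single-image artifacts.  Mirrors the idea of a percentile filter,
    not KBMOD's exact implementation.
    """
    p_low, median, p_high = np.percentile(fluxes, [low, 50.0, high])
    spread = p_high - p_low
    return np.abs(fluxes - median) <= width * spread
```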


2021 ◽  
Vol 13 (16) ◽  
pp. 3171
Author(s):  
Pan Shao ◽  
Wenzhong Shi ◽  
Zhewei Liu ◽  
Ting Dong

Remote sensing change detection (CD) plays an important role in Earth observation. In this paper, we propose a novel fusion approach for unsupervised CD of multispectral remote sensing images by introducing majority voting (MV) into fuzzy topological space (FTMV). The proposed FTMV approach consists of three principal stages: (1) the CD results of different difference images produced by the fuzzy C-means algorithm are combined using a modified MV to obtain an initial fusion CD map; (2) using fuzzy topology theory, the initial fusion CD map is automatically partitioned into two parts, a weakly conflicting part and a strongly conflicting part; (3) the weakly conflicting pixels, which possess little or no conflict, keep their current class, while the strongly conflicting pixels, which are often misclassified, are relabeled using the supported connectivity of fuzzy topology. FTMV integrates the merits of different CD results and largely resolves the conflicts that arise during fusion. Experimental results on three real remote sensing images confirm the effectiveness and efficiency of the proposed method.
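Stage (1), the majority-voting fusion of several binary change maps, can be illustrated with the minimal sketch below; the paper's modified MV and the fuzzy-topology relabelling of strongly conflicting pixels are not reproduced, and the tie-breaking rule shown is an assumption.

```python
import numpy as np

def majority_vote(cd_maps):
    """Fuse several binary change maps (1 = changed) by per-pixel
    majority voting.  Ties are resolved towards 'unchanged' here.
    """
    stack = np.stack(cd_maps, axis=0).astype(np.int32)
    votes = stack.sum(axis=0)
    return (votes > stack.shape[0] / 2).astype(np.uint8)
```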


2021 ◽  
Vol 13 (15) ◽  
pp. 2969
Author(s):  
Youxi He ◽  
Zhenhong Jia ◽  
Jie Yang ◽  
Nikola K. Kasabov

Due to differences in external imaging conditions, multispectral images taken at different times are subject to radiation differences, which severely affect detection accuracy. To solve this problem, a modified algorithm based on slow feature analysis is proposed for multispectral image change detection. First, single-band slow feature analysis is performed to process the bitemporal multispectral images band by band. In this way, the differences between unchanged pixels in each pair of single-band images can be sufficiently suppressed to obtain multiple feature-difference images containing real change information. Then, the feature-difference images of all bands are fused into a grayscale distance image using the Euclidean distance. Gaussian filtering of the grayscale distance image further reduces false detections. Finally, k-means clustering is performed on the filtered grayscale distance image to obtain the binary change map. Experiments reveal that the proposed algorithm is less affected by radiation differences and has clear advantages in time complexity and detection accuracy.
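A minimal sketch of the fusion-and-clustering tail of this pipeline (Euclidean-distance fusion, Gaussian filtering, k-means) is shown below; the slow-feature-analysis step is omitted, and the parameter values are illustrative rather than the paper's.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.cluster import KMeans

def change_map_from_feature_differences(diff_bands, sigma=1.0):
    """Fuse per-band feature-difference images into a grayscale distance
    image (Euclidean norm across bands), smooth it with a Gaussian
    filter, and split it into changed / unchanged classes with k-means.

    diff_bands: array of shape (bands, H, W).
    """
    distance = np.sqrt((diff_bands.astype(np.float64) ** 2).sum(axis=0))
    distance = gaussian_filter(distance, sigma=sigma)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(distance.reshape(-1, 1))
    # Label the cluster with the larger mean distance as "changed" (1).
    flat = distance.reshape(-1)
    if flat[labels == 0].mean() > flat[labels == 1].mean():
        labels = 1 - labels
    return labels.reshape(distance.shape).astype(np.uint8)
```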


2021 ◽  
Vol 11 (12) ◽  
pp. 5563
Author(s):  
Jinsol Ha ◽  
Joongchol Shin ◽  
Hasil Park ◽  
Joonki Paik

Action recognition requires the accurate analysis of action elements in the form of a video clip and a properly ordered sequence of those elements. To solve these two sub-problems, it is necessary to learn both spatio-temporal information and the temporal relationship between different action elements. Existing convolutional neural network (CNN)-based action recognition methods have focused on learning only spatial or temporal information without considering the temporal relation between action elements. In this paper, we create short-term pixel-difference images from the input video and feed the difference images to a bidirectional exponential moving average sub-network to analyze the action elements and their temporal relations. The proposed method consists of: (i) generation of RGB and differential images, (ii) extraction of deep feature maps using an image classification sub-network, (iii) weight assignment to extracted feature maps using a bidirectional exponential moving average sub-network, and (iv) late fusion with a three-dimensional convolutional (C3D) sub-network to improve the accuracy of action recognition. Experimental results show that the proposed method achieves a higher performance level than existing baseline methods. In addition, the proposed action recognition network takes only 0.075 seconds per action class, which makes it suitable for various high-speed or real-time applications, such as abnormal action classification, human–computer interaction, and intelligent visual surveillance.
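Step (i), the generation of short-term pixel-difference images, can be sketched as follows; the frame layout and the `step` parameter are assumptions, not the paper's exact settings.

```python
import numpy as np

def short_term_difference_images(frames, step=1):
    """Compute short-term pixel-difference images from a video clip.

    frames: array of shape (T, H, W, C) in uint8.
    Returns (T - step, H, W, C) absolute frame differences that can be
    fed to an image-classification sub-network alongside the RGB frames.
    """
    frames = frames.astype(np.int16)          # avoid uint8 wrap-around
    diffs = np.abs(frames[step:] - frames[:-step])
    return diffs.astype(np.uint8)
```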


2021 ◽  
Vol 6 (2) ◽  
pp. 98-105
Author(s):  
Adri Priadana ◽  
Aris Wahyu Murdiyanto

Vannamei shrimp is one of Indonesia's fishery commodities with great potential for development. One of the essential requirements in shrimp farming is a sufficient level of dissolved oxygen (DO), which can be maintained by placing a waterwheel driven by a generator-set engine in the pond. To keep the waterwheel running, cultivators must monitor it continuously in real time. Based on this problem, a method is needed to detect when the waterwheel in a shrimp pond has stopped spinning. This study aims to analyze the performance of the Accumulative Difference Images (ADI) method for detecting a stopped waterwheel. The ADI method was chosen because, compared with methods that only compare the differences between two frames at each step, it is expected to reduce the error rate, since its result accumulates motion values over several frames. Applying the ADI method to detect a stopped waterwheel gives an accuracy of 95.68%, which shows that the method can detect stopped waterwheels in shrimp ponds with very good accuracy.
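A minimal sketch of an accumulative difference image and a stop check is shown below; the thresholds are illustrative placeholders, not the values used in the study.

```python
import numpy as np

def accumulative_difference_image(frames, diff_threshold=25):
    """Accumulate, pixel by pixel, how often consecutive grayscale frames
    differ by more than `diff_threshold`.  A near-zero ADI over a window
    suggests the waterwheel has stopped spinning.
    """
    frames = frames.astype(np.int16)          # shape (T, H, W), grayscale
    adi = np.zeros(frames.shape[1:], dtype=np.int32)
    for prev, cur in zip(frames[:-1], frames[1:]):
        adi += (np.abs(cur - prev) > diff_threshold).astype(np.int32)
    return adi

def wheel_stopped(adi, motion_pixel_ratio=0.01):
    """Declare the wheel stopped if too few pixels accumulated motion."""
    return (adi > 0).mean() < motion_pixel_ratio
```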


2021 ◽  
Vol 13 (2) ◽  
pp. 103-114
Author(s):  
Yongzhen Ke ◽  
Yiping Cui

Tampering with images may be involved in crime and can also mislead the public. Local deformation is one of the most common image tampering methods; it changes the original texture features and the correlation between the pixels of an image. Multiple fusion strategies based on first-order difference images and their texture features are proposed to locate the tampered regions in a locally deformed image. First, texture features are extracted from overlapping blocks of one color channel and fed into a fuzzy c-means clustering method to generate a tamper probability map (TPM), and several TPMs with different block sizes are combined in the first fusion. Second, TPMs from different color channels and different texture features are fused in the second and third fusions, respectively. The experimental results show that the proposed method can accurately detect the location of the local deformation in an image.
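The first fusion input described above can be roughly sketched as follows; k-means is used here as a stand-in for the paper's fuzzy c-means, so the result is a hard map rather than a true tamper probability map, and the block size, stride, and texture features are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def tamper_map(channel, block=16, stride=8):
    """Sketch of one fusion input: a first-order difference image is
    computed on a single colour channel, block-wise texture statistics
    are extracted with overlapping windows, and the blocks are grouped
    into two clusters (k-means substituting for fuzzy c-means).
    """
    chan = channel.astype(np.float64)
    diff = np.abs(np.diff(chan, axis=1))       # first-order horizontal difference
    h, w = diff.shape
    feats, coords = [], []
    for y in range(0, h - block + 1, stride):
        for x in range(0, w - block + 1, stride):
            patch = diff[y:y + block, x:x + block]
            feats.append([patch.mean(), patch.std()])
            coords.append((y, x))
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(np.array(feats))
    votes = np.zeros((h, w), dtype=np.float64)
    counts = np.zeros((h, w), dtype=np.float64)
    for (y, x), lab in zip(coords, labels):
        votes[y:y + block, x:x + block] += lab
        counts[y:y + block, x:x + block] += 1
    return votes / np.maximum(counts, 1)       # averaged block votes per pixel
```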


2021 ◽  
Vol 11 (4) ◽  
pp. 1807
Author(s):  
Jae-Yeul Kim ◽  
Jong-Eun Ha

In video surveillance, robust detection of foreground objects is usually done by subtracting a background model from the current image. Most traditional approaches use a statistical method to model the background image. Recently, deep learning has also been widely used to detect foreground objects in video surveillance, and it shows dramatic improvement compared to the traditional approaches. However, it is trained through supervised learning, which requires training samples with pixel-level annotation; producing them takes a huge amount of time and is costly, whereas traditional algorithms operate unsupervised and do not require training samples. Additionally, deep learning-based algorithms lack generalization power: they operate well on scenes similar to the training conditions but poorly on scenes that deviate from them. In this paper, we present a new method to detect foreground objects in video surveillance using multiple difference images as the input of a convolutional neural network, which provides improved generalization power compared to current deep learning-based methods. First, we adjust U-Net to use multiple difference images as input. Second, we show that training on all scenes in the CDnet 2014 dataset can improve the generalization power. Hyper-parameters such as the number of difference images and the interval between images in the difference-image computation are chosen by analyzing experimental results. We demonstrate that the proposed algorithm achieves improved performance on scenes that are not used in training compared to state-of-the-art deep learning and traditional unsupervised algorithms. Diverse experiments using various open datasets and real images show the feasibility of the proposed method.
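The construction of a multi-channel network input from multiple difference images can be sketched as follows; the number of difference images and the interval are hyper-parameters, and the values shown are placeholders rather than the ones chosen in the paper.

```python
import numpy as np

def build_network_input(frames, current_idx, num_diffs=3, interval=5):
    """Stack the current grayscale frame with several difference images
    computed at increasing temporal intervals, giving a multi-channel
    tensor that can be fed to a U-Net-style network.

    frames: array of shape (T, H, W).
    Returns an array of shape (1 + num_diffs, H, W).
    """
    current = frames[current_idx].astype(np.float32)
    channels = [current]
    for k in range(1, num_diffs + 1):
        past = frames[max(current_idx - k * interval, 0)].astype(np.float32)
        channels.append(np.abs(current - past))
    return np.stack(channels, axis=0)
```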

