band image
Recently Published Documents


TOTAL DOCUMENTS: 180 (five years: 47)
H-INDEX: 13 (five years: 3)

Author(s): R. J. L. Argamosa, A. C. Blanco, R. B. Reyes

Abstract. A large oil spill that occurred in Iloilo Strait on July 3, 2020, as well as possible deliberate, small but frequent oil spills and surfactant contamination in Manila Bay, were mapped. The method employs a Sentinel-2 Level-1C image, which is transformed into principal components to reveal the presence of oil spills and, possibly, surfactants. Additionally, a gradient boosting algorithm was trained to discriminate between pixels that were contaminated with oil and those that were not. The multi-band image composed of three principal components, with a 99% cumulative explained variance ratio, highlights the occurrence of the oil spill in Iloilo Strait. Furthermore, the classified image produced by pixel-based classification clearly distinguishes between water and oil pixels in that area. The methodology was then applied to a Sentinel-2 Level-1C image of Manila Bay, in which pixels were likewise identified and classified as oil. The highest density of presumably oil-contaminated pixels (large, or small but frequent) was observed on the eastern side of Manila Bay (Bataan). While no oil spills were documented concurrently with the satellite image used, historical reports on the area indicate that the likelihood of an oil spill is extremely high due to the massive amount of shipping activity. Pixels presumably contaminated by oil spills also occur near ports, where spills can result from ship operations. Pixels with the same properties as oil contamination are also visible in areas adjacent to fishponds and aquaculture, where phytoplankton and fish contribute to surfactant contamination.
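A minimal sketch of the PCA-then-gradient-boosting workflow described above, assuming the scene has already been read into a (rows, cols, bands) reflectance array; the array shapes, band count, and training labels below are illustrative placeholders, not the authors' code or data.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier

# reflectance: (rows, cols, bands) array from a Sentinel-2 Level-1C scene (hypothetical)
rows, cols, bands = 512, 512, 12
reflectance = np.random.rand(rows, cols, bands)
pixels = reflectance.reshape(-1, bands)

# Keep enough components to reach ~99% cumulative explained variance
pca = PCA(n_components=0.99)
components = pca.fit_transform(pixels)

# Train on manually labelled oil / water pixels (labels here are placeholders)
train_idx = np.random.choice(pixels.shape[0], 2000, replace=False)
labels = np.random.randint(0, 2, size=train_idx.size)  # 1 = oil, 0 = water
clf = GradientBoostingClassifier().fit(components[train_idx], labels)

# Classify every pixel and reshape back to the image grid
oil_map = clf.predict(components).reshape(rows, cols)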


2021, Vol 2083 (3), pp. 032052
Author(s): Huixiang Liu, Yang Liu, Peili Xi, Jie Chen, Wei Yang, ...

Abstract The atmosphere is a very important factor affecting the accuracy of X-band SAR image registration, and the ionospheric effect has the most intricate influence. In response to this problem, this paper introduces mathematical models of the ionospheric dispersion and scintillation effects. Echo simulation, imaging processing, and image registration are then used to calculate the image offset caused by the ionosphere, which determines whether the ionospheric effect needs to be compensated during image registration. Simulation results show that, in X-band image registration, the dispersion effect needs to be compensated, while the impact of the scintillation effect can be ignored.
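The abstract does not reproduce the dispersion model itself. For context, a commonly used first-order expression for the two-way ionospheric phase error of a SAR signal at carrier frequency f (standard background, not necessarily the authors' exact formulation) is

\[ \Delta\phi_{\mathrm{iono}}(f) \;\approx\; -\,\frac{4\pi \cdot 40.3 \,\mathrm{TEC}}{c\, f}, \]

where TEC is the total electron content along the line of sight and c is the speed of light. Because the term scales as 1/f, it varies across the transmitted bandwidth (dispersion), producing range shifts and defocus that show up as registration offsets.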


Author(s): Kishore Raju Kalidindi, Pardha Saradhi Varma G., Rajyalakshmi Davuluri

The rich spectral and spatial information of hyperspectral images is well known in the literature. The high dimensionality of hyperspectral images (HSI) leads to the Hughes effect and increased computational complexity, which demands dimensionality reduction as a pre-processing step. The necessary reduction of bands can be achieved by a proper band selection (BS) technique. The proposed feature-based unsupervised BS technique follows five sequential steps: 1) statistical features are extracted for each band image; 2) bands are clustered with a k-means approach using the extracted features; 3) each cluster is ranked using a mean-entropy measure; 4) low-ranked (bad) clusters are removed; and 5) a representative band is selected from each remaining cluster. The proposed method is validated on three widely used standard datasets against six state-of-the-art approaches using an ensemble of binary SVM classifiers. The obtained results strongly suggest that clustering is essential to reduce redundancy and that removing low-quality clusters helps retain the informative bands.
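A minimal sketch of the five-step band-selection idea, assuming simple per-band statistics as the features and entropy as the ranking score; the feature choice, cluster count, and "bad cluster" cut-off are assumptions for illustration, not the paper's exact settings.

import numpy as np
from sklearn.cluster import KMeans
from scipy.stats import entropy

def select_bands(cube, n_clusters=10, keep_ratio=0.7):
    """cube: (rows, cols, bands) hyperspectral image; returns selected band indices."""
    bands = cube.reshape(-1, cube.shape[-1]).T                  # (bands, pixels)
    # 1) simple per-band statistical features (mean, std, median)
    feats = np.column_stack([bands.mean(1), bands.std(1), np.median(bands, 1)])
    # 2) cluster the bands on those features
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)

    def band_entropy(b):
        hist, _ = np.histogram(b, bins=64, density=True)
        return entropy(hist + 1e-12)

    # 3) rank clusters by the mean entropy of their member bands
    cluster_scores = [np.mean([band_entropy(bands[i]) for i in np.where(labels == c)[0]])
                      for c in range(n_clusters)]
    # 4) drop the lowest-scoring clusters; 5) keep one representative band per survivor
    keep = np.argsort(cluster_scores)[::-1][: int(keep_ratio * n_clusters)]
    selected = []
    for c in keep:
        idx = np.where(labels == c)[0]
        best = idx[np.argmax([band_entropy(bands[i]) for i in idx])]
        selected.append(int(best))
    return sorted(selected)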


2021
Author(s): Istvan RACZ, Andras HORVATH, Noemi KRANITZ, Gyongyi KISS, Henriett REGOCZI, ...

2021, pp. 393-402
Author(s): Min Li

In this paper, to meet the need for stable access to visual information in the intelligent management of greenhouse tomatoes, a color correction method for tomato plant images based on high-dynamic-range imaging technology was studied, with the aim of overcoming the limitation that complex natural lighting conditions impose on the stable color presentation of the working objects. To address the color distortion caused by the temporal and spatial fluctuation of illumination in the greenhouse and sudden changes of radiation intensity against a complex background, a calibration method for the camera radiometric response model based on multiple-exposure images is proposed. The fusion effect of multi-band images is evaluated by field tests. The results show that after multi-band image fusion, the brightness difference between the recognized target and other near-color background is significantly enhanced, and the brightness fluctuation of the background is suppressed. The color correction method was verified in field experiments, and the gray-level information, degree of dispersion, and clarity of tomato plant images in different scenes and periods were all improved.
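A minimal sketch of one standard way to calibrate a camera response curve from a multi-exposure stack and merge it into an HDR radiance map, using OpenCV's Debevec routines as a common stand-in rather than the paper's specific method; the file names and exposure times are placeholders.

import cv2
import numpy as np

files = ["tomato_1_125s.jpg", "tomato_1_30s.jpg", "tomato_1_8s.jpg"]   # hypothetical
times = np.array([1/125, 1/30, 1/8], dtype=np.float32)
images = [cv2.imread(f) for f in files]

# Estimate the camera radiometric response curve from the exposure stack
calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(images, times)

# Merge the stack into a single HDR radiance map using that response
merge = cv2.createMergeDebevec()
hdr = merge.process(images, times, response)

# Tone-map back to 8-bit for display / downstream color correction
ldr = cv2.createTonemap(gamma=2.2).process(hdr)
cv2.imwrite("tomato_hdr_preview.png", np.clip(ldr * 255, 0, 255).astype("uint8"))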


2021, Vol 13 (13), pp. 2509
Author(s): Yalong Gu, Slawomir Blonski, Wenhui Wang, Sirish Uprety, Taeyoung Choi, ...

Due to complex radiometric calibration, the imagery collected by the Day/Night Band (DNB) of the Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the Suomi National Polar-orbiting Partnership (Suomi-NPP) and the NOAA-20 follow-on satellite is subject to artifacts such as striping, which eventually affect Earth remote sensing applications. Through comprehensive analysis using the NOAA-20 VIIRS DNB prelaunch-test and on-orbit data, it is revealed that the striping results from flaws in the calibration process. In particular, a discrepancy between the low-gain-stage (LGS) Earth view (EV) gain and the onboard calibrator solar diffuser view gain biases the operational LGS gain coefficients of a few aggregation modes and detectors. Detector nonlinearity at low radiance levels also induces errors in the mid-gain-stage (MGS) and high-gain-stage (HGS) gains through the biased gain ratios. These systematic errors are corrected by scaling the operational LGS gains using factors derived from the NOAA-20 VIIRS DNB prelaunch test data and by adopting linear regression for evaluating the gain ratios. Striping in the NOAA-20 VIIRS DNB imagery is visibly reduced after the upgraded gain calibration process was implemented in the operational calibration.
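A hedged numerical sketch of the two corrections described above: scaling the LGS gain by a prelaunch-derived factor, and estimating a gain ratio by linear regression so that a low-radiance offset does not bias it. All numbers are placeholders for illustration, not VIIRS calibration values.

import numpy as np

lgs_gain_operational = 1.0e-4          # placeholder operational LGS gain
scale_factor = 0.98                    # placeholder factor from prelaunch tests
lgs_gain_corrected = scale_factor * lgs_gain_operational

# Simulated simultaneous MGS/LGS count pairs at low radiance, where a
# nonlinear offset would bias a simple per-sample ratio.
lgs_counts = np.linspace(5, 200, 50)
mgs_counts = 480.0 * lgs_counts + 12.0 + np.random.normal(0, 2, 50)

# Linear regression: the slope estimates the count ratio, while the intercept
# absorbs the low-radiance nonlinearity instead of biasing the ratio.
slope, intercept = np.polyfit(lgs_counts, mgs_counts, 1)
mgs_gain = lgs_gain_corrected / slope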


Author(s): Shaheera Rashwan, Walaa Sheta

The main objective of hyper/multispectral image fusion is to produce a composite color image that allows for an appropriate visualization of the relevant spatial and spectral information. In this paper, we propose a general framework for spectral weighting-based image fusion. The proposed methodology relies on weight updates conducted using nature-inspired algorithms and a goodness-of-fit criterion defined as the average root mean square error. Simulations on four public data sets and a recent Landsat 8 image of Brullus Lake, Egypt, as a study area demonstrate the efficiency of the proposed framework. The purpose of the study is to present a multi-band image fusion framework that produces a fused image of high quality for further computer processing, and the results show that the image produced by the presented framework has the highest quality compared with several state-of-the-art algorithms. To demonstrate the increase in image quality, we used general quality metrics such as the Universal Image Quality Index, Mutual Information, Variance, and Information Measure.
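A minimal sketch of spectral weighting-based fusion with an average-RMSE fitness, using SciPy's differential evolution as a generic nature-inspired optimizer in place of the paper's specific algorithms; the reference composite and the per-band weight parameterization are illustrative assumptions.

import numpy as np
from scipy.optimize import differential_evolution

def fuse(cube, weights):
    """cube: (rows, cols, bands); weights: (bands, 3) -> RGB composite."""
    return np.clip(cube @ weights, 0.0, 1.0)

def avg_rmse(weights_flat, cube, reference):
    weights = weights_flat.reshape(cube.shape[-1], 3)
    return np.sqrt(np.mean((fuse(cube, weights) - reference) ** 2))

rows, cols, bands = 64, 64, 8                      # toy dimensions
cube = np.random.rand(rows, cols, bands)
reference = np.random.rand(rows, cols, 3)          # placeholder reference composite

result = differential_evolution(avg_rmse, bounds=[(0.0, 1.0)] * (bands * 3),
                                args=(cube, reference), maxiter=30, seed=0)
best_weights = result.x.reshape(bands, 3)
composite = fuse(cube, best_weights)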


Sensors, 2021, Vol 21 (1), pp. 255
Author(s): Yi Zhang, Shizhou Zhang, Ying Li, Yanning Zhang

Recently, both single-modality and cross-modality near-duplicate image detection tasks have received wide attention in the pattern recognition and computer vision community. Existing deep neural network-based methods have achieved remarkable performance in this task. However, most methods mainly focus on learning each image of the pair separately, and thus underuse the correlation between the two images of a near-duplicate pair. In this paper, to make more use of the correlations between image pairs, we propose a spatial transformer comparing convolutional neural network (CNN) model to compare near-duplicate image pairs. Specifically, we first propose a comparing CNN framework equipped with a cross-stream to fully learn the correlation information between image pairs while still considering the features of each image. Furthermore, to deal with the local deformations caused by cropping, translation, scaling, and non-rigid transformations, we introduce a spatial transformer comparing CNN model by incorporating a spatial transformer module into the comparing CNN architecture. To demonstrate the effectiveness of the proposed method on both the single-modality and cross-modality (Optical-InfraRed) near-duplicate image pair detection tasks, we conduct extensive experiments on three popular benchmark datasets, namely CaliforniaND (ND means near duplicate), Mir-Flickr Near Duplicate, and the TNO Multi-band Image Data Collection. The experimental results show that the proposed method achieves superior performance compared with many state-of-the-art methods on both tasks.
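A hedged PyTorch sketch of the general idea: a small spatial-transformer module aligns each input, a shared backbone encodes each image, and a cross-stream head classifies the concatenated pair features. Layer sizes and the head design are illustrative, not the authors' architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTransformer(nn.Module):
    def __init__(self):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(3, 8, 7), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(8, 10, 5), nn.MaxPool2d(2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(3), nn.Flatten(), nn.Linear(10 * 9, 6))
        # Initialize the predicted affine parameters to the identity transform
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

class ComparingCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.stn = SpatialTransformer()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Cross-stream head: decides near-duplicate vs. distinct from joint features
        self.head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, img_a, img_b):
        fa = self.backbone(self.stn(img_a))
        fb = self.backbone(self.stn(img_b))
        return self.head(torch.cat([fa, fb], dim=1))

# Usage: logits over {not duplicate, near duplicate} for a batch of image pairs
model = ComparingCNN()
logits = model(torch.randn(4, 3, 128, 128), torch.randn(4, 3, 128, 128))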


2021, Vol 11 (06), pp. 140-151
Author(s): Fatima Omeis, Mondher Besbes, Christophe Sauvan, Henri Benisty
