COMPARISON OF OPEN SOURCE COMPRESSION ALGORITHMS ON VHR REMOTE SENSING IMAGES FOR EFFICIENT STORAGE HIERARCHY

Author(s):  
A. Akoguz
S. Bozkurt
A. A. Gozutok
G. Alp
E. G. Turan
...  

High-resolution satellite imagery brings with it a fundamental problem: the large volume of telemetry data that must be stored after each downlink operation. The post-processing and image enhancement steps applied after acquisition increase file sizes even further, making the data harder to store and slower to transmit from one source to another; compressing both the raw data and the various levels of processed products is therefore a necessity for archiving stations. The lossless data compression algorithms examined in this study aim to compress the data without any loss of the spectral information it holds. To this end, well-known open-source programs supporting the relevant algorithms were applied to processed GeoTIFF images from Airbus Defence and Space's SPOT 6 and 7 satellites (1.5 m GSD), acquired and stored by the ITU Center for Satellite Communications and Remote Sensing (ITU CSCRS). The algorithms tested are Lempel-Ziv-Welch (LZW), the Lempel-Ziv-Markov chain algorithm (LZMA and LZMA2), Lempel-Ziv-Oberhumer (LZO), Deflate and Deflate64, Prediction by Partial Matching (PPMd), and the Burrows-Wheeler Transform (BWT); their performance over the sample datasets is compared in terms of how much each can compress the image data while guaranteeing lossless compression.
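The comparison described above can be sketched with codecs from the Python standard library alone: zlib implements Deflate, lzma implements LZMA, and bz2 is a BWT-based compressor, so three of the algorithm families in the abstract's list can be tried on raw pixel bytes. The snippet below is a minimal illustration on a synthetic byte buffer standing in for one image band; reading actual SPOT GeoTIFF tiles (e.g. via GDAL) is omitted, and the ratios it prints say nothing about the paper's results.

```python
import bz2
import lzma
import zlib

def compression_ratios(data: bytes) -> dict:
    """Compress one byte buffer with three lossless codecs and report
    original_size / compressed_size for each (higher is better)."""
    compressed = {
        "Deflate (zlib)": zlib.compress(data, level=9),
        "LZMA (lzma)": lzma.compress(data, preset=9),
        "BWT-based (bz2)": bz2.compress(data, compresslevel=9),
    }
    return {name: len(data) / len(blob) for name, blob in compressed.items()}

if __name__ == "__main__":
    # Synthetic stand-in for a single-band tile: a smooth gradient is
    # highly redundant, so every codec should achieve a ratio above 1.
    band = bytes((x + y) % 256 for y in range(256) for x in range(256))
    for name, ratio in compression_ratios(band).items():
        print(f"{name}: {ratio:.2f}x")
```

Losslessness is easy to verify in this setting: decompressing the output must reproduce the input byte for byte, which is exactly the property the study requires for preserving spectral information.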


Author(s):  
Karl F. Warnick
Rob Maaskant
Marianna V. Ivashina
David B. Davidson
Brian D. Jeffs

2021, Vol 13 (4), pp. 1917
Author(s):  
Alma Elizabeth Thuestad
Ole Risbøl
Jan Ingolf Kleppe
Stine Barlindhaug
Elin Rose Myrvoll

What can remote sensing contribute to archaeological surveying in subarctic and arctic landscapes? The pros and cons of remote sensing data vary, as do areas of utilization and methodological approaches. We assessed the applicability of remote sensing for archaeological surveying of northern landscapes, using airborne laser scanning (LiDAR) together with satellite and aerial images to map archaeological features as a basis for (a) assessing the pros and cons of the different approaches and (b) assessing the potential detection rate of remote sensing. Interpretation of the images and of a LiDAR-based bare-earth digital terrain model (DTM) was based on visual analyses aided by processing and visualization techniques. A total of 368 features were identified in the aerial images, 437 in the satellite images, and 1186 in the DTM. LiDAR yielded the best results, especially for hunting pits, while image data proved suitable for dwellings and settlement sites. Feature characteristics proved a key factor for detectability in both LiDAR and image data. This study has shown that LiDAR and remote sensing image data are highly applicable for archaeological surveying in northern landscapes, and that a multi-sensor approach contributes to high detection rates. Our results have improved the inventory of archaeological sites in a non-destructive and minimally invasive manner.
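One widely used visualization technique for making pit-like features stand out in a bare-earth DTM is hillshading. The sketch below applies the standard hillshade formula to a 2-D elevation array using NumPy; it is a generic illustration, not the authors' processing chain, and the illumination parameters are arbitrary defaults.

```python
import numpy as np

def hillshade(dtm, cell_size=1.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Shaded-relief rendering of a bare-earth DTM, scaled to [0, 1].

    dtm: 2-D array of elevations; azimuth/altitude give the light source.
    """
    az = np.radians(360.0 - azimuth_deg + 90.0)   # compass -> math convention
    alt = np.radians(altitude_deg)
    dy, dx = np.gradient(dtm, cell_size)          # elevation derivatives
    slope = np.arctan(np.hypot(dx, dy))
    aspect = np.arctan2(-dx, dy)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)
```

Under oblique illumination a depression such as a hunting pit produces a paired bright and dark rim in the shaded relief, which is what makes such features easy to spot visually.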


2021, Vol 13 (4), pp. 742
Author(s):  
Jian Peng
Xiaoming Mei
Wenbo Li
Liang Hong
Bingyu Sun
...  

Scene understanding of remote sensing images is of great significance in many applications. Its fundamental problem is how to construct representative features. Various convolutional neural network (CNN) architectures have been proposed for automatically learning features from images. However, is the current practice of configuring the same architecture to learn from all the data, ignoring the differences between images, the right one? It seems contrary to our intuition: some images are clearly easier to recognize, and some are harder. This problem reflects the gap between the characteristics of the images and the features learned by specific network structures. Unfortunately, the literature so far lacks an analysis of the two. In this paper, we explore this problem from three aspects: first, we build a visual evaluation pipeline of scene complexity to characterize the intrinsic differences between images; second, we analyze the relationship between semantic concepts and feature representations, i.e., the scalability and hierarchy of features, which are the essential elements of CNNs with different architectures, for remote sensing scenes of different complexity; third, we introduce class activation mapping (CAM), a visualization method that explains feature learning within neural networks, to analyze the relationship between scenes of different complexity and semantic feature representations. The experimental results show that a complex scene needs deeper, multi-scale features, whereas a simpler scene needs shallower, single-scale features. In addition, the concept of a complex scene depends more on the joint semantic representation of multiple objects. Furthermore, we propose a framework for predicting the scene complexity of an image and use it to design a depth- and scale-adaptive model. The model achieves higher performance with fewer parameters than the original model, demonstrating the potential significance of scene complexity.
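The class activation mapping mentioned in the abstract reduces to a weighted sum of the last convolutional feature maps when the network ends in global average pooling followed by one fully connected layer (the setting of the original CAM paper by Zhou et al.). A minimal NumPy sketch of that computation; the arrays here are placeholders, not activations from the authors' trained networks.

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """CAM heatmap for one class, normalized to [0, 1].

    feature_maps: (C, H, W) activations of the last conv layer
    fc_weights:   (num_classes, C) classifier weights after global
                  average pooling
    """
    # Weighted sum over the C channels -> (H, W) spatial evidence map.
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)
    cam = np.maximum(cam, 0.0)          # keep only positive class evidence
    if cam.max() > 0:
        cam /= cam.max()                # normalize for visualization
    return cam
```

Upsampled to the input resolution, the heatmap highlights which regions drove the class score, which is how it can relate scene complexity to the learned semantic representation.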


2021, Vol 13 (4), pp. 747
Author(s):  
Yanghua Di
Zhiguo Jiang
Haopeng Zhang

Fine-grained visual categorization (FGVC) is an important and challenging problem because of large intra-class differences and small inter-class differences caused by deformation, illumination, viewing angle, etc. Although major advances have been achieved on natural images in the past few years, thanks to the release of popular datasets such as CUB-200-2011, Stanford Cars, and the Aircraft dataset, fine-grained ship classification in remote sensing images has rarely been studied because of the relative scarcity of publicly available datasets. In this paper, we investigate a large amount of remote sensing imagery of sea ships and identify the 42 most common categories for fine-grained visual categorization. Building on our previous DSCR dataset for ship classification in remote sensing images, we collect additional remote sensing images containing warships and civilian ships at various scales from Google Earth and other popular remote sensing image datasets, including DOTA, HRSC2016, and NWPU VHR-10. We call our dataset FGSCR-42, a dataset for Fine-Grained Ship Classification in Remote sensing images with 42 categories. The complete FGSCR-42 dataset contains 9320 images of the most common types of ships. We evaluate popular object classification algorithms and fine-grained visual categorization algorithms to build a benchmark. Our FGSCR-42 dataset is publicly available at our webpages.


2021, Vol 13 (3), pp. 531
Author(s):  
Caiwang Zheng
Amr Abd-Elrahman
Vance Whitaker

Measurement of plant characteristics is still the primary bottleneck in both plant breeding and crop management. Rapid and accurate acquisition of information about large plant populations is critical for monitoring plant health and dissecting the underlying genetic traits. In recent years, high-throughput phenotyping technology has benefitted immensely from both remote sensing and machine learning. Simultaneous use of multiple sensors (e.g., high-resolution RGB, multispectral, hyperspectral, chlorophyll fluorescence, and light detection and ranging (LiDAR)) allows a range of spatial and spectral resolutions depending on the trait in question. Meanwhile, computer vision and machine learning methodology have emerged as powerful tools for extracting useful biological information from image data. Together, these tools allow the evaluation of various morphological, structural, biophysical, and biochemical traits. In this review, we focus on the recent development of phenomics approaches in strawberry farming, particularly those utilizing remote sensing and machine learning, with an eye toward future prospects for strawberries in precision agriculture. The research discussed is broadly categorized according to strawberry traits related to (1) fruit/flower detection, fruit maturity, fruit quality, internal fruit attributes, fruit shape, and yield prediction; (2) leaf and canopy attributes; (3) water stress; and (4) pest and disease detection. Finally, we present a synthesis of the potential research opportunities and directions that could further promote the use of remote sensing and machine learning in strawberry farming.


2021, pp. 1-14
Author(s):  
Zhenggang Wang
Jin Jin

Remote sensing image segmentation provides technical support for decision making in many areas of environmental resource management. However, the quality of remote sensing images obtained from different channels can vary considerably, and manually labeling a massive amount of image data is too expensive and inefficient. In this paper, we propose a point density force field clustering (PDFC) process. According to the spectral information of different ground objects, remote sensing superpixel points are divided into core and edge data points. The differences in the densities of core data points are used to form local peaks, and the center of each initial cluster is determined from the weighted density and position of a local peak. An iterative nebular clustering process is used to obtain the result, and a new objective function is proposed to optimize the model parameters automatically and reach the globally optimal clustering solution. The proposed algorithm automatically clusters the regions of different ground objects in remote sensing images, and these categories are then simply labeled by humans.
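The abstract does not give PDFC's equations, but the core/edge split and local-peak selection it describes follow the general density-peak idea: a point is a cluster center if it is locally dense and far from any denser point. The toy sketch below implements that generic scheme with Euclidean distances on 2-D points; the cutoff `dc` and all names are illustrative assumptions, and the authors' force-field weighting and nebular iteration are not reproduced.

```python
import numpy as np

def density_peak_clusters(points, dc, n_clusters):
    """Generic density-peak clustering (Rodriguez & Laio style) sketch.

    points: (N, D) array; dc: density cutoff radius; returns (N,) labels.
    """
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    rho = (d < dc).sum(axis=1) - 1                 # local density (self excluded)
    order = np.argsort(-rho)                       # densest points first
    delta = np.full(len(points), d.max())          # dist. to nearest denser point
    nearest_denser = np.full(len(points), -1)
    for rank, i in enumerate(order):
        if rank == 0:
            continue                               # densest point keeps max delta
        denser = order[:rank]
        j = denser[np.argmin(d[i, denser])]
        delta[i] = d[i, j]
        nearest_denser[i] = j
    # Centers: large density AND large separation from denser points.
    centers = np.argsort(-(rho * delta))[:n_clusters]
    labels = np.full(len(points), -1)
    labels[centers] = np.arange(n_clusters)
    for i in order:                                # assign in density order, so the
        if labels[i] == -1:                        # denser neighbor is already labeled
            labels[i] = labels[nearest_denser[i]]
    return labels
```

Each remaining point simply inherits the label of its nearest denser neighbor, which mirrors the abstract's notion of edge points following the core points around a local peak.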

