Extension of wavelet compression algorithms to 3D and 4D image data: exploitation of data coherence in higher dimensions allows very high compression ratios

Author(s):  
Li Zeng ◽  
Christian Jansen ◽  
Michael A. Unser ◽  
Patrick Hunziker


Author(s):  
M. Chew ◽  
T. A. Good

Abstract Piston engines have generally been designed with strokes that are uniform throughout the engine cycle. Variable-stroke engines have been designed with the capability to change the stroke lengths from cycle to cycle depending on load requirement. This article examines the synthesis of piston engines that are designed to have different stroke lengths and different relative stroke timing over an engine cycle. Such piston trajectories have been found to exhibit very high thermal efficiencies without resorting to high compression ratios. An investigation into different mechanisms and approaches toward the synthesis of such engine mechanisms is presented.


In the domain of image signal processing, image compression is a significant technique, designed mainly to reduce the redundancy of image data so that image pixels can be transmitted with high quality and resolution. Standard image compression techniques, namely lossless and lossy compression, generate images with high compression ratios while meeting efficient storage and transmission requirements, respectively. Many image compression techniques are available, for example JPEG and DWT- and DCT-based compression algorithms, which provide effective results in terms of high compression ratios with clear, good-quality image transformation. However, they carry considerable computational complexity in terms of processing, encoding, energy consumption, and hardware design. Bringing out these challenges, this paper considers the most prominent research papers and discusses FPGA architecture design and future scope in the state of the art of image compression techniques. The primary aim is to investigate the research challenges in VLSI design and image compression. The core of the study is organized in three parts, viz. standard architecture designs, related work, and open research challenges in the domain of image compression.
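For context on the transform stage that such FPGA designs accelerate, the following is a minimal Python sketch of JPEG-style 8x8 block DCT compression (level shift, 2D DCT, quantization); the quantization table is the standard JPEG luminance table, but the function names and simplified handling are illustrative assumptions, not the design of any specific paper discussed above.

```python
# Minimal sketch of JPEG-style 8x8 block DCT compression (illustrative only).
import numpy as np
from scipy.fftpack import dct, idct

# Standard JPEG luminance quantization table (baseline).
Q = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
])

def dct2(block):
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(block):
    return idct(idct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def compress_block(block):
    """Transform, quantize and dequantize one 8x8 block (pixel values 0..255)."""
    coeffs = dct2(block.astype(float) - 128.0)      # level shift + 2D DCT
    quantized = np.round(coeffs / Q)                # lossy step: most coefficients -> 0
    reconstructed = idct2(quantized * Q) + 128.0    # decoder side
    return quantized, np.clip(reconstructed, 0, 255)

# Example: a smooth block compresses to very few non-zero coefficients.
block = np.tile(np.arange(8) * 4 + 100, (8, 1))
q, rec = compress_block(block)
print("non-zero coefficients:", np.count_nonzero(q), "of 64")
```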


1993 ◽  
Vol 17 ◽  
pp. 398-404 ◽  
Author(s):  
Florence Fetterer ◽  
Jeffrey Hawkins

Under the Office of Naval Research-sponsored Arctic Leads Accelerated Research Initiative (ARI), a data set of Advanced Very High Resolution Radiometer (AVHRR) imagery covering the years 1988 through 1992 is being constructed. Relatively cloud-free imagery is selected from image hardcopies. Each image examined is subjectively ranked on the percentage of each sea or seas it covers, and the cloudiness of the image within each sea. The images are then logged in a spreadsheet. From the spreadsheet, about 20 images per month (for the year 1989) are ordered from the National Oceanic and Atmospheric Administration for processing. The image data are calibrated and mapped to one of two grids, which together cover most of the Arctic at 1 km per pixel. Care has been taken to match the grid and the projection to that of Special Sensor Microwave Imager (SSM/I) data distributed by the National Snow and Ice Data Center (NSIDC). The 1989 data set is complete at this time. Presently, data are distributed to the Remote Sensing Working Group of the ARI. NSIDC will distribute the data set to a wider audience at a later date.


Author(s):  
Gody Mostafa ◽  
Abdelhalim Zekry ◽  
Hatem Zakaria

When transmitting data in digital communication, it is desirable that the number of transmitted bits be as small as possible, so many techniques are used to compress the data. In this paper, a Lempel-Ziv algorithm for data compression was implemented through VHDL coding; Lempel-Ziv is one of the most commonly used lossless data compression algorithms. The work in this paper is devoted to improving the compression rate, space saving, and utilization of the Lempel-Ziv algorithm using a systolic array approach. The developed design is validated with VHDL simulations using Xilinx ISE 14.5 and synthesized on a Virtex-6 FPGA chip. The results show that our design is efficient in providing high compression rates and space-saving percentages as well as improved utilization. Throughput is increased by 50% and the design area is decreased by more than 23%, with a high compression ratio compared to comparable previous designs.
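To illustrate what a systolic array parallelizes in hardware, the sketch below is a simplified software analogue of LZ77-style sliding-window matching in Python; the window and lookahead sizes are arbitrary assumptions, and this is not the authors' VHDL design.

```python
# Simplified LZ77-style sliding-window compressor (illustrative sketch only).
def lz77_compress(data: bytes, window: int = 255, lookahead: int = 15):
    """Return a list of (offset, length, next_byte) tokens."""
    tokens, i = [], 0
    while i < len(data):
        best_off, best_len = 0, 0
        start = max(0, i - window)
        # Search the window for the longest match with the lookahead buffer.
        for j in range(start, i):
            length = 0
            while (length < lookahead and i + length < len(data)
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = i - j, length
        next_byte = data[i + best_len] if i + best_len < len(data) else 0
        tokens.append((best_off, best_len, next_byte))
        i += best_len + 1
    return tokens

def lz77_decompress(tokens):
    out = bytearray()
    for off, length, nxt in tokens:
        for _ in range(length):
            out.append(out[-off])   # copy from the already-decoded window
        out.append(nxt)
    return bytes(out)

sample = b"abababababcababababab"
tokens = lz77_compress(sample)
assert lz77_decompress(tokens)[:len(sample)] == sample
print(f"{len(sample)} bytes -> {len(tokens)} tokens")
```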


2020 ◽  
Author(s):  
Dainora Jankauskiene ◽  
◽  
Indrius Kuklys ◽  
Lina Kukliene ◽  
Birute Ruzgiene ◽  
...  

Nowadays, the use of an Unmanned Aerial Vehicle (UAV) flying at low altitude, in conjunction with photogrammetric and LiDAR technologies, allows very high-resolution images to be collected to generate dense point clouds and to model geospatial data of territories. The technology used in this experimental research covers reconstruction of the topography of a surface with a historical structure, observation of the recreational infrastructure, and provision of geographic information for users involved in the preservation and inspection of such unique cultural heritage objects as the mounds of Lithuania. In order to obtain reliable aerial mapping products of the preserved heritage object, the following photogrammetric/GIS procedures were performed: a UAV flight to take images with the camera; simultaneous scanning of the surface by LiDAR; processing of the image data; 3D modelling; and generation of an orthophoto. Evaluation of the image processing results shows that the accuracy of surface modelling with the UAV photogrammetry method satisfied the requirements, with a mean RMSE of 0.031 m. Scanning the surface by LiDAR from low altitude is advisable; the relief representation of the experimental area was obtained with a mean accuracy of up to 0.050 m. Aerial mapping with a UAV requires an appropriate ground sample distance (GSD) to be specified, which is important for reducing the number of images and the time needed for modelling the area. The experiment shows that the specified GSD of 1.7 cm is not reasonable; a GSD increased by a factor of 1.5 would be applicable. The use of additional, different software for DSM visualization and analysis proved to be a redundant step.
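As a rough illustration of the GSD trade-off discussed above, the sketch below evaluates the standard ground-sample-distance relation for a hypothetical camera; the sensor parameters and flight altitudes are assumptions, not those used in the experiment.

```python
# Minimal sketch of the ground sample distance (GSD) relation used in UAV flight planning.
def gsd_cm(sensor_width_mm, image_width_px, focal_length_mm, altitude_m):
    """GSD in cm/pixel: ground footprint of one pixel at the given altitude."""
    return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)

# Hypothetical 1-inch sensor camera (assumed values for illustration only).
sensor_width_mm = 13.2
image_width_px = 5472
focal_length_mm = 8.8

# Flying 1.5x higher gives a 1.5x coarser GSD, so each image covers more ground
# and fewer images (and less processing time) are needed for the same area.
for altitude in (60, 90):
    print(f"altitude {altitude} m -> GSD "
          f"{gsd_cm(sensor_width_mm, image_width_px, focal_length_mm, altitude):.2f} cm/px")
```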


2021 ◽  
Vol 38 (6) ◽  
pp. 1637-1646
Author(s):  
KVSV Trinadh Reddy ◽  
S. Narayana Reddy

In distributed m-health communication, it is a major challenge to develop an efficient blind watermarking method to protect the confidential medical data of patients. This paper proposes an efficient blind watermarking method for medical images, which boasts a very high embedding capacity, good robustness, and strong imperceptibility. Three techniques, namely the discrete cosine transform (DCT), Weber's descriptors (WDs), and the Arnold chaotic map, were integrated into our method. Specifically, the Arnold chaotic map was used to scramble the watermark image. Then, the medical image was partitioned into non-overlapping blocks, and each block was subjected to the DCT. After that, the scrambled watermark image data were embedded in the middle-band DCT coefficients of each block, such that two bits were embedded in each block. Simulation results show that the proposed watermarking method provides better imperceptibility, robustness, and computational complexity, with a higher embedding capacity, than the compared method.
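The Arnold chaotic (cat) map scrambling step named above can be sketched as follows; the watermark size and iteration count are arbitrary assumptions, and the DCT middle-band embedding itself is not shown.

```python
# Sketch of Arnold cat map scrambling/unscrambling for a square watermark image.
import numpy as np

def arnold_scramble(img: np.ndarray, iterations: int) -> np.ndarray:
    """Apply the Arnold transform (x, y) -> (x + y, x + 2y) mod N to a square image."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold map needs a square image"
    out = img
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

def arnold_unscramble(img: np.ndarray, iterations: int) -> np.ndarray:
    """Invert the transform with (u, v) -> (2u - v, -u + v) mod N."""
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        restored = np.empty_like(out)
        for u in range(n):
            for v in range(n):
                restored[(2 * u - v) % n, (-u + v) % n] = out[u, v]
        out = restored
    return out

# Usage: scramble a 32x32 binary watermark with a secret iteration count (7 here, arbitrary).
watermark = (np.random.default_rng(0).random((32, 32)) > 0.5).astype(np.uint8)
scrambled = arnold_scramble(watermark, iterations=7)
assert np.array_equal(arnold_unscramble(scrambled, iterations=7), watermark)
```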


2020 ◽  
Vol 21 (6) ◽  
pp. 1366-1384 ◽  
Author(s):  
João Valente ◽  
Bilal Sari ◽  
Lammert Kooistra ◽  
Henk Kramer ◽  
Sander Mücher

Abstract Knowing before harvesting how many plants have emerged and how they are growing is key to optimizing labour and the efficient use of resources. Unmanned aerial vehicles (UAVs) are a useful tool for fast and cost-efficient data acquisition. However, the imagery needs to be converted into operational spatial products that crop producers can use to gain insight into the spatial distribution of the number of plants in the field. In this research, an automated method for counting plants from very high-resolution UAV imagery is addressed. The proposed method uses machine vision (the Excess Green Index and Otsu's method) and transfer learning with convolutional neural networks to identify and count plants. The integrated methods have been implemented to count 10-week-old spinach plants in an experimental field with a surface area of 3.2 ha. Validation data of plant counts were available for 1/8 of the surface area. The results showed that the proposed methodology can count plants with an accuracy of 95% at a spatial resolution of 8 mm/pixel in an area of up to 172 m². Moreover, when the spatial resolution decreases by 50%, the maximum additional counting error is 0.7%. Finally, a total of 170,000 plants in an area of 3.5 ha was computed, with an error of 42.5%. The study shows that it is feasible to count individual plants using UAV-based off-the-shelf products and that, via machine vision/learning algorithms, it is possible to translate image data into practical information for non-experts.
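The machine-vision part of such a pipeline (Excess Green Index followed by Otsu thresholding) can be sketched roughly as below; this is an illustrative simplification in Python and omits the CNN transfer-learning and counting stages.

```python
# Sketch of Excess Green Index (ExG) + Otsu thresholding to separate vegetation from soil.
import numpy as np

def excess_green(rgb: np.ndarray) -> np.ndarray:
    """ExG = 2g - r - b on chromatic (normalized) RGB coordinates."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=2) + 1e-9
    r, g, b = (rgb[..., i] / total for i in range(3))
    return 2.0 * g - r - b

def otsu_threshold(values: np.ndarray, bins: int = 256) -> float:
    """Return the threshold that maximizes the between-class variance."""
    hist, edges = np.histogram(values.ravel(), bins=bins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = hist[:i].sum(), hist[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (hist[:i] * centers[:i]).sum() / w0
        mu1 = (hist[i:] * centers[i:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[i]
    return best_t

# Usage on a synthetic tile: vegetation mask = pixels with ExG above Otsu's threshold.
rng = np.random.default_rng(1)
tile = rng.integers(0, 255, size=(64, 64, 3), dtype=np.uint8)
exg = excess_green(tile)
mask = exg > otsu_threshold(exg)
print("vegetation fraction:", mask.mean())
```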


2002 ◽  
Vol 111 (10) ◽  
pp. 472-481
Author(s):  
Dave Bancroft
Keyword(s):  
Bit Rate


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4153
Author(s):  
Gabriel Signoretti ◽  
Marianne Silva ◽  
Pedro Andrade ◽  
Ivanovitch Silva ◽  
Emiliano Sisinni ◽  
...  

Currently, Internet of Things (IoT) applications generate a large amount of sensor data at a very high pace, making it a challenge to collect and store the data. This scenario brings about the need for effective data compression algorithms to make the data manageable on tiny, battery-powered devices and, more importantly, shareable across the network. Additionally, considering that wireless communications (e.g., low-power wide-area networks) are very often adopted to connect field devices, user payload compression can also provide benefits derived from better spectrum usage, which in turn can result in advantages for high-density application scenarios. As a result of this increase in the number of connected devices, a new concept has emerged, called TinyML, which enables the use of machine learning on tiny, computationally constrained devices. This allows intelligent devices to analyze and interpret data locally and in real time. Therefore, this work presents a new data compression solution (algorithm) for the IoT that leverages the TinyML perspective. The new approach is called the Tiny Anomaly Compressor (TAC) and is based on data eccentricity. TAC does not require previously established mathematical models or any assumptions about the underlying data distribution. To test the effectiveness of the proposed solution and validate it, a comparative analysis was performed on two real-world datasets against two other algorithms from the literature, namely Swing Door Trending (SDT) and the Discrete Cosine Transform (DCT). The TAC algorithm showed promising results, achieving a maximum compression rate of 98.33%. It also surpassed the two other models regarding compression error and peak signal-to-noise ratio in all cases.
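As a rough sketch of eccentricity-based sample selection in the spirit of TAC, the code below keeps (transmits) a sample only when its recursively computed eccentricity flags it as atypical. The update rules follow the general TEDA formulation and the threshold is an assumed m-sigma style rule, so the details may differ from the published TAC algorithm.

```python
# Hedged sketch of eccentricity-based compression: keep only atypical samples.
class EccentricityCompressor:
    def __init__(self, m: float = 3.0):
        self.m = m        # sensitivity (analogous to an m-sigma rule); assumed default
        self.k = 0        # number of samples seen so far
        self.mean = 0.0
        self.var = 0.0

    def update(self, x: float) -> bool:
        """Return True if the sample is atypical and should be kept."""
        self.k += 1
        if self.k == 1:
            self.mean, self.var = x, 0.0
            return True                                   # always keep the first sample
        self.mean += (x - self.mean) / self.k             # recursive mean
        self.var = (self.k - 1) / self.k * self.var + (x - self.mean) ** 2 / (self.k - 1)
        if self.var == 0.0:
            return False                                  # identical samples: nothing new
        ecc = 1.0 / self.k + (self.mean - x) ** 2 / (self.k * self.var)
        threshold = (self.m ** 2 + 1.0) / (2.0 * self.k)  # normalized-eccentricity rule
        return ecc / 2.0 > threshold

# Usage: a flat signal with one spike keeps only a handful of samples.
signal = [1.0] * 50 + [9.0] + [1.0] * 50
comp = EccentricityCompressor()
kept = [x for x in signal if comp.update(x)]
print(f"kept {len(kept)} of {len(signal)} samples")
```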

