The Kolmogorov Spline Network for Image Processing

2013 ◽  
pp. 54-78
Author(s):  
Pierre-Emmanuel Leni ◽  
Yohan D. Fougerolle ◽  
Frédéric Truchetet

In 1900, Hilbert conjectured that high-order equations cannot be solved by sums and compositions of bivariate functions. In 1957, Kolmogorov proved this hypothesis wrong and presented his superposition theorem (KST), which allows every multivariate function to be written as sums and compositions of univariate functions. Sprecher proposed in (Sprecher, 1996) and (Sprecher, 1997) an algorithm for exact univariate function reconstruction. Sprecher explicitly describes construction methods for the univariate functions and introduces notions fundamental to understanding the theorem (such as the tilage). Köppen presented applications of this algorithm to image processing in (Köppen, 2002) and (Köppen & Yoshida, 2005). The lack of flexibility of this scheme has been pointed out, and another solution that approximates the univariate functions has been considered. More specifically, this has led the authors to consider Igelnik and Parikh's approach, known as the KSN, which offers several perspectives for modifying the univariate functions as well as their construction. This chapter focuses on the presentation of Igelnik and Parikh's Kolmogorov Spline Network (KSN) for image processing and details two applications: image compression and progressive transmission. Precisely, the developments presented in this chapter include: (1) Compression: the authors study the reconstruction quality using univariate functions containing only a fraction of the original image pixels. To improve the reconstruction quality, they apply this decomposition to images of details obtained by wavelet decomposition. The authors combine this approach with the JPEG 2000 encoder and show that the obtained results improve on the JPEG 2000 compression scheme, even at low bitrates. (2) Progressive Transmission: the authors propose to modify the generation of the KSN.
The image is decomposed into univariate functions that can be transmitted one after the other, each adding new data to the previously transmitted functions, which allows the original image to be reconstructed progressively and exactly. The authors evaluate the transmission robustness and provide results from the simulation of a transmission over packet-loss channels.
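For reference, the superposition theorem underlying the KSN states that any continuous function of n variables can be written as

```latex
f(x_1, \dots, x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \psi_{q,p}(x_p) \right)
```

so a grayscale image f(x, y) (n = 2) is represented by 5 outer functions Φ_q and 10 inner functions ψ_{q,p}, all univariate. This is the standard statement of the theorem, not the chapter's specific notation; the KSN approximates these univariate functions with splines rather than constructing them exactly.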


Author(s):  
Erna Verawati ◽  
Surya Darma Nasution ◽  
Imam Saputra

Sharpening a street-view image requires a degree of brightness control in the process of producing the improved image from the original. One approach to sharpening street-view images is image processing, a multimedia component that plays an important role as a form of visual information. Many image processing methods are used for sharpening street-view images; among them are the Gram-Schmidt spectral sharpening method and high-pass filtering. The Gram-Schmidt spectral sharpening method is also known as intensity modulation based on a refinement filter, while high-pass filtering is a filtering process that keeps image regions with high intensity gradients, while low intensity differences are reduced or discarded. The research results show that the Gram-Schmidt spectral sharpening method and high-pass filtering can be implemented properly, so that the sharpening of the street-view image is guaranteed, as shown by the changes from the original image to the processed image.
Keywords: image processing, Gram-Schmidt spectral sharpening, high-pass filtering.
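The high-pass filtering step described above can be sketched in a few lines. This is a minimal, illustrative plain-Python version (the paper's Gram-Schmidt spectral step and its exact kernel are not reproduced; the Laplacian kernel below is a common choice, assumed here): the high-frequency detail is extracted with a 3x3 kernel and added back to the original to sharpen it.

```python
# Sketch: high-pass sharpening with a 3x3 Laplacian kernel (illustrative only).

KERNEL = [[ 0, -1,  0],
          [-1,  4, -1],
          [ 0, -1,  0]]  # classic high-pass (Laplacian) kernel

def high_pass(img):
    """Convolve img with KERNEL; border pixels are left at zero."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for j in range(3):
                for i in range(3):
                    acc += KERNEL[j][i] * img[y + j - 1][x + i - 1]
            out[y][x] = acc
    return out

def sharpen(img):
    """Add the high-frequency detail back to the original image, clamped to [0, 255]."""
    hp = high_pass(img)
    return [[max(0, min(255, p + d)) for p, d in zip(prow, drow)]
            for prow, drow in zip(img, hp)]

image = [[10, 10, 10, 10],
         [10, 80, 80, 10],
         [10, 80, 80, 10],
         [10, 10, 10, 10]]
sharp = sharpen(image)   # interior edges are boosted, flat areas unchanged
```

The bright interior pixels gain contrast against their darker neighbors, which is exactly the edge-boosting effect the abstract attributes to high-pass filtering.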


In many image processing applications, a wide range of image enhancement techniques have been proposed. Many of these techniques demand a lot of critical and advanced steps, but the perception of the resulting image is not satisfactory. This paper proposes a novel sharpening method that is experimented with in additional steps. In the first step, the color image is transformed into a grayscale image; then an edge-detection process is applied using the Laplacian technique, and the resulting edge image is subtracted from the original image. The resulting image is as expected. After performing the enhancement process, the high quality of the image can be indicated using the Tenengrad criterion. The resulting image manifests the difference in certain areas, as well as in dimension and depth. A histogram equalization technique can also be applied to change the image's colors.
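The Tenengrad criterion mentioned above is a standard focus/sharpness measure: the sum of squared Sobel gradient magnitudes, optionally above a threshold. A minimal plain-Python sketch (the paper's exact parameters are not given, so the threshold default is an assumption):

```python
# Tenengrad sharpness score: sum of squared Sobel gradient magnitudes.
# A higher score indicates a sharper image.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def tenengrad(img, threshold=0):
    h, w = len(img), len(img[0])
    score = 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gy = 0
            for j in range(3):
                for i in range(3):
                    gx += SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                    gy += SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
            g2 = gx * gx + gy * gy
            if g2 > threshold:
                score += g2
    return score

flat  = [[50] * 4 for _ in range(4)]        # no gradients -> score 0
edges = [[0, 0, 255, 255]] * 4              # strong vertical edge -> high score
```

Comparing the score before and after enhancement gives the quantitative quality indication the paper refers to.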


Author(s):  
Pierre-Emmanuel Leni ◽  
Yohan D. Fougerolle ◽  
Frédéric Truchetet

In 1900, Hilbert declared that high-order polynomial equations could not be solved by sums and compositions of continuous functions of fewer than three variables. This statement was proven wrong by the superposition theorem, demonstrated by Arnol'd and Kolmogorov in 1957, which allows all multivariate functions to be written as sums and compositions of univariate functions. Amongst recent computable forms of the theorem, Igelnik and Parikh's approach, known as the Kolmogorov Spline Network (KSN), offers several alternatives for the univariate functions as well as for their construction. A novel approach is presented for embedding authentication data (a black-and-white logo, or a translucent or opaque image) in images. This approach offers functionalities similar to those of watermarking approaches, but relies on a totally different theory: the mark is not embedded in the 2D image space, but rather applied to an equivalent univariate representation of the transformed image. Using the progressive transmission scheme previously proposed (Leni, 2011), the pixels are re-arranged without any neighborhood consideration. Taking advantage of this naturally encrypted representation, it is proposed to embed the watermark in these univariate functions. The watermarked image can be accessed at any intermediate resolution and fully recovered (by removing the embedded mark) without loss using a secret key. Moreover, the key can be different for every resolution, and both the watermark and the image can be globally restored in case of data loss during transmission. These contributions lie in proposing a robust embedding of authentication data (represented by a watermark) into an image using the 1D space of univariate functions based on the Kolmogorov superposition theorem. Lastly, using a key, the watermark can be removed to restore the original image.


2016 ◽  
pp. 28-56 ◽  
Author(s):  
Sanjay Chakraborty ◽  
Lopamudra Dey

Image processing on quantum platforms is a hot topic for researchers nowadays. Inspired by ideas from quantum physics, researchers are trying to shift their focus from classical image processing towards quantum image processing. The storage and representation of images in binary and ternary quantum systems is one of the major issues in quantum image processing. This chapter mainly deals with several issues regarding various types of image representation and storage techniques in binary as well as ternary quantum systems. How image pixels can be organized and retrieved based on their positions and intensity values in 2-state and 3-state quantum systems is explained here in detail. Besides that, it also deals with the filtering of images in quantum systems to remove unwanted noise. This chapter also covers important applications (such as quantum image compression, quantum edge detection, and quantum histograms) in which quantum image processing is associated with natural computing techniques (such as AI, ANN, and ACO).
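To illustrate the position-plus-intensity organization of pixels described above, here is a purely classical sketch of an NEQR-style basis-state listing for a 2x2 grayscale image in a 2-state (qubit) system. The specific encoding (one position qubit per axis, 8 intensity qubits) is a common convention assumed here, not taken from the chapter, and this is an enumeration of basis-state labels, not a quantum simulation.

```python
# Illustrative sketch only: NEQR-style basis states |YX>|q7..q0> for a
# 2x2 image. Position |YX> uses one qubit per axis for a 2x2 image; the
# 8-bit intensity occupies 8 value qubits.

def neqr_states(img):
    """Return the basis-state labels |YX>|q7..q0> for each pixel."""
    states = []
    for y, row in enumerate(img):
        for x, intensity in enumerate(row):
            pos = format(y, "01b") + format(x, "01b")   # |YX> position register
            val = format(intensity, "08b")              # 8 intensity qubits
            states.append(f"|{pos}>|{val}>")
    return states

img = [[0, 128],
       [255, 64]]
for s in neqr_states(img):
    print(s)
# e.g. pixel (y=1, x=0) with value 255 is stored as |10>|11111111>
```

The quantum representation would hold a superposition of these basis states, which is what allows position- and intensity-based retrieval in a 2-state system.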


2014 ◽  
Vol 543-547 ◽  
pp. 2547-2550
Author(s):  
Yan Rui Du ◽  
Bin Xie ◽  
Li Ping Wang

Thinning is widely used in image processing and pattern recognition: it reduces the redundancy of the original image and makes features easier to extract. Based on a large number of experiments, a new improved thinning algorithm is proposed in this paper, and a flow-process diagram is given. The new algorithm takes several details into account, for example, whether the pixel P1 can be deleted and how to remove burrs. The experiments show that the algorithm achieves the anticipated results.
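The paper's specific improvements (the P1 deletion check and burr removal) are not detailed in the abstract. As a reference point, the classic Zhang-Suen thinning algorithm, a common baseline that such improvements typically build on (an assumption here, since the abstract does not name it), can be sketched as:

```python
# Baseline Zhang-Suen thinning sketch (binary image as 0/1 lists).
# The paper's specific improvements are not reproduced.

def neighbours(img, y, x):
    """P2..P9, clockwise starting from the pixel above P1."""
    return [img[y-1][x], img[y-1][x+1], img[y][x+1], img[y+1][x+1],
            img[y+1][x], img[y+1][x-1], img[y][x-1], img[y-1][x-1]]

def zhang_suen(img):
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_clear = []
            for y in range(1, len(img) - 1):
                for x in range(1, len(img[0]) - 1):
                    if img[y][x] != 1:
                        continue
                    n = neighbours(img, y, x)
                    b = sum(n)  # B(P1): number of non-zero neighbours
                    # A(P1): 0->1 transitions in the circular sequence P2..P9
                    a = sum(n[i] == 0 and n[(i + 1) % 8] == 1 for i in range(8))
                    p2, p4, p6, p8 = n[0], n[2], n[4], n[6]
                    if step == 0:
                        cond = p2 * p4 * p6 == 0 and p4 * p6 * p8 == 0
                    else:
                        cond = p2 * p4 * p8 == 0 and p2 * p6 * p8 == 0
                    if 2 <= b <= 6 and a == 1 and cond:
                        to_clear.append((y, x))
            for y, x in to_clear:
                img[y][x] = 0
                changed = True
    return img

# A 3-pixel-thick horizontal bar thins down to a 1-pixel line.
bar = [[0] * 9 for _ in range(7)]
for y in (2, 3, 4):
    for x in range(1, 8):
        bar[y][x] = 1
skeleton = zhang_suen([row[:] for row in bar])
```

Deletions within each sub-iteration are collected first and applied afterwards, so every pixel is tested against the same image state, which is what preserves connectivity.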


2003 ◽  
Vol 15 (01) ◽  
pp. 38-45
Author(s):  
JIANN-DER LEE ◽  
SHU-YEN WAN ◽  
RUI-FENG WU

In this paper, a new compression scheme called the hybrid compression model (HCM) is proposed for compressing clusters of similar images. The HCM exploits region growing to segment the median image created from a cluster of similar images; further, it uses the centroid method to predict the values of the original image data. The difference between the predicted values and the original data is stored for later use in progressive transmission. The experimental results obtained on various images show that our method provides a significant improvement in compression efficiency compared to the traditional centroid method.
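The prediction/residual idea behind HCM can be sketched as follows: predict each pixel of a cluster of similar images from the per-pixel median image, and store only the differences, which allows lossless reconstruction. (The region-growing segmentation and the centroid prediction step of the actual HCM are not reproduced; this is a minimal sketch of the residual-coding principle.)

```python
# Predict from the per-pixel median of a cluster; store residuals;
# reconstruct losslessly by adding them back.

def pixel_median(values):
    s = sorted(values)
    return s[len(s) // 2]

def median_image(cluster):
    h, w = len(cluster[0]), len(cluster[0][0])
    return [[pixel_median([img[y][x] for img in cluster]) for x in range(w)]
            for y in range(h)]

def residuals(img, pred):
    return [[p - q for p, q in zip(r1, r2)] for r1, r2 in zip(img, pred)]

def reconstruct(pred, res):
    return [[q + d for q, d in zip(r1, r2)] for r1, r2 in zip(pred, res)]

cluster = [
    [[10, 20], [30, 40]],
    [[12, 21], [29, 41]],
    [[11, 19], [31, 39]],
]
med = median_image(cluster)
res = residuals(cluster[0], med)
assert reconstruct(med, res) == cluster[0]   # lossless round trip
```

Because the images are similar, the residuals are small and compress well, and transmitting them incrementally is what enables the progressive transmission the paper describes.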


Author(s):  
Gabriel Thomas

Signal processing can be a mathematically intense subject, and undergraduate students may not be able to appreciate its enormous importance on the applied side. Even if the instructor mentions some of the fascinating application areas, it might be difficult for students to attend a laboratory session in which they have enough time to code a signal processing algorithm and see a big difference when processing data in the frequency domain. In this paper, I describe a simple example in which students have an opportunity to develop a theoretical solution to enhance an image and, with just a few lines of code, run a program that shows a dramatic difference between the original image and the enhanced version. Several fundamental signal processing concepts are reinforced with the laboratory example, and its application only requires a computer and signal processing development software such as MATLAB.
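The paper's own MATLAB image example is not reproduced in the abstract. A hypothetical sketch in the same spirit, shown here in Python on a 1D signal for brevity: a periodic interference is removed by zeroing its DFT bins, which demonstrates the "few lines of code, dramatic difference" frequency-domain effect.

```python
# Hypothetical classroom-style sketch: notch out a periodic interference
# in the frequency domain. Plain-Python DFT/IDFT, no libraries.
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

N = 32
clean = [1.0 if 8 <= t < 24 else 0.0 for t in range(N)]            # a "step" feature
noise = [0.5 * math.cos(2 * math.pi * 4 * t / N) for t in range(N)]  # sinusoidal interference
noisy = [c + v for c, v in zip(clean, noise)]

X = dft(noisy)
X[4] = X[N - 4] = 0          # zero the interference frequency (bin 4 and its mirror)
restored = idft(X)           # the underlying signal reappears
```

With a 2D image and its 2D FFT the same notch-filtering idea removes striping patterns, which is the kind of visually striking result the paper argues motivates students.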


Author(s):  
Harbi S. Jamila ◽  
Batool Daraam ◽  
Waseem M. Ali

Some image processing applications, such as segmentation, need effective techniques for edge detection and extraction. Many filters in this field fail to achieve the desired result, and the subsequent processing consequently fails, so it is sometimes necessary to modify a technique so that it works in a robust and effective way. Although ultrasound is a common, real-time, non-destructive testing method, processing and analyzing such images requires special filters and modifications to overcome some weaknesses in this field, especially when scanning objects comparable in size to the acoustic wavelength. In this paper, two modified filters are suggested. The first is a two-step unsharp filter, in which the image is enhanced twice: the first time, the edges extracted from the original scaled image are added back to the image, and the second time, the same edges are added to the scaled enhanced image. The second technique can be summarized as adding back the Low-High and High-Low bands, previously extracted from the original image by the Haar wavelet transform, to the image, which reinforces its edges.
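The two-step unsharp idea above (extract edges once, add them back twice) can be sketched as follows. This is a minimal plain-Python illustration; the edge operator (a Laplacian here) and the absence of scaling factors are assumptions, since the paper's exact parameters are not given in the abstract.

```python
# Two-pass edge add-back sketch: edges are extracted from the original
# image once, then added back in two successive enhancement passes.

LAPLACIAN = [[0, -1, 0], [-1, 4, -1], [0, -1, 0]]

def convolve(img, k):
    """3x3 convolution; border pixels are left at zero."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(k[j][i] * img[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

def add_edges(img, det):
    """Add edge detail to an image, clamped to [0, 255]."""
    return [[max(0, min(255, p + d)) for p, d in zip(pr, dr)]
            for pr, dr in zip(img, det)]

img = [[10, 10, 10, 10],
       [10, 80, 80, 10],
       [10, 80, 80, 10],
       [10, 10, 10, 10]]
e = convolve(img, LAPLACIAN)     # edges extracted from the original image
step1 = add_edges(img, e)        # first pass: edges added to the original
step2 = add_edges(step1, e)      # second pass: same edges added to the enhanced image
```

The second pass reinforces the same edge locations found in the original, which is the key difference from re-extracting edges from the already-enhanced (and noisier) image.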

