An Automatic Target Classifier using Model Based Image Processing

Author(s):  
Douglas Haanpaa ◽  
Glenn Beach ◽  
Charles J. Cohen
2020 ◽  
Vol 2020 ◽  
pp. 1-18
Author(s):  
Mingyu Tong ◽  
Kailiang Shao ◽  
Xilin Luo ◽  
Huiming Duan

Image filtering can change or enhance an image by emphasizing or removing certain features. An image is a system in which some information is known and some information is unknown. Grey system theory is an important method for dealing with this kind of system, and grey correlation analysis and grey prediction modeling are important components of it. In this paper, a filtering algorithm based on a fractional grey prediction model, combining a grey correlation model and a fractional prediction model, is proposed. In this model, first, noise points are identified by comparing the grey correlation of each pixel in the filter window against a threshold value, and several grey correlation methods are compared by varying the resolution coefficient, an important factor in image processing. Second, the pixel values surrounding a detected noise point in the filter window are taken as the original sequence: the grey level of the middle point is predicted from the surrounding pixel values with the fractional prediction model, and the prediction replaces the original noise value, effectively eliminating the noise. Finally, an empirical analysis shows that the PSNR and MSE of the new model are approximately 27 dB and 140, respectively; these values are better than those of the comparison models and correspond to a good processing effect.
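The two-stage pipeline described in the abstract can be sketched in Python as follows. This is a minimal illustration, not the authors' implementation: the function names, the window size, the resolution coefficient rho = 0.5, the detection threshold, the fractional order r = 0.5, the choice of Delta_min = 0 and Delta_max = 255 (the image dynamic range) in the relational coefficient, and the use of the sorted neighbor values as the original sequence for the fractional model are all assumptions of the sketch.

```python
import numpy as np

def binom_weights(r, n):
    """Generalized binomial coefficients C(j + r - 1, j) for j = 0..n-1."""
    w = np.ones(n)
    for j in range(1, n):
        w[j] = w[j - 1] * (j - 1 + r) / j
    return w

def frac_ago(x, r):
    """Fractional accumulated generating operation of order r (use -r to invert)."""
    x = np.asarray(x, dtype=float)
    w = binom_weights(r, len(x))
    return np.array([np.dot(w[:k + 1][::-1], x[:k + 1]) for k in range(len(x))])

def fgm11_predict(x0, r=0.5):
    """Fit a fractional-order grey model FGM(1,1) to x0 and return a one-step prediction."""
    x0 = np.asarray(x0, dtype=float) + 1.0            # shift to keep the sequence positive
    n = len(x0)
    xr = frac_ago(x0, r)                              # r-order accumulated sequence
    z = 0.5 * (xr[1:] + xr[:-1])                      # background (mean generating) values
    B = np.column_stack([-z, np.ones(n - 1)])
    Y = xr[1:] - xr[:-1]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]       # developing coefficient, grey input
    k = np.arange(n + 1)
    xr_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a # time response of the whitened equation
    x0_hat = frac_ago(xr_hat, -r)                     # inverse r-order accumulation
    return x0_hat[-1] - 1.0                           # undo the shift

def grey_relational_grade(center, neighbors, rho=0.5, d_max=255.0):
    """Deng-type grey relational grade of the center pixel against its neighbors,
    taking Delta_min = 0 and Delta_max = the image dynamic range (assumptions)."""
    delta = np.abs(np.asarray(neighbors, dtype=float) - float(center))
    coeff = (rho * d_max) / (delta + rho * d_max)
    return coeff.mean()

def grey_fractional_filter(img, window=3, threshold=0.85, rho=0.5, r=0.5):
    """Flag noisy pixels by grey relational grade, then replace each one with an
    FGM(1,1) prediction built from the surrounding pixels in the filter window."""
    pad = window // 2
    padded = np.pad(np.asarray(img, dtype=float), pad, mode="reflect")
    out = np.asarray(img, dtype=float).copy()
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = padded[i:i + window, j:j + window]
            center = win[pad, pad]
            neighbors = np.delete(win.ravel(), pad * window + pad)
            if grey_relational_grade(center, neighbors, rho) < threshold:
                # sorted neighbor values serve as the original sequence (an assumption)
                out[i, j] = fgm11_predict(np.sort(neighbors), r)
    return np.clip(out, 0, 255).astype(np.uint8)
```

Applied to an 8-bit greyscale image stored as a NumPy array, grey_fractional_filter(img) returns a denoised copy; pixels whose relational grade stays above the threshold are left untouched, so only the detected noise points are re-predicted.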


Proceedings ◽  
2019 ◽  
Vol 33 (1) ◽  
pp. 16
Author(s):  
Ali Mohammad-Djafari

Signal and image processing have always been the main tools in many areas, and in particular in medical and biomedical applications. Nowadays, there are a great number of toolboxes, general purpose and very specialized, in which classical techniques are implemented and can be used: all the transformation-based methods (Fourier, wavelets, ...) as well as model-based and iterative regularization methods. Statistical methods have also shown their success in some areas where parametric models are available. Bayesian inference based methods have had great success, in particular when the data are noisy, uncertain, incomplete (missing values) or contaminated by outliers, and where there is a need to quantify uncertainties. In some applications, nowadays, we have more and more data. To extract more knowledge from these "Big Data", Machine Learning and Artificial Intelligence tools have shown success and have become mandatory. However, even though these methods have shown success in many domains of Machine Learning such as classification and clustering, their use in real scientific problems is limited. The main reasons are twofold: first, the users of these tools cannot explain the reasons why they are successful and why they are not; second, in general, these tools cannot quantify the remaining uncertainties. Model-based and Bayesian inference approaches have been very successful in linear inverse problems. However, adjusting the hyperparameters is complex, and the cost of the computation is high. Convolutional Neural Network (CNN) and Deep Learning (DL) tools can be useful for pushing these limits farther. On the other side, model-based methods can be helpful for selecting the structure of the CNN and DL networks, which is crucial to the success of ML. In this work, I first provide an overview and then a survey of the aforementioned methods and explore the possible interactions between them.
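As a concrete illustration of the model-based Bayesian route mentioned above, the sketch below computes the MAP estimate for a linear inverse problem y = H x + n with Gaussian noise and a Gaussian prior. It is a generic textbook example, not the author's specific method; the function name and the toy forward model are assumptions. The variance ratio plays the role of the regularization hyperparameter whose tuning the abstract describes as complex, and the posterior covariance is what allows the remaining uncertainty to be quantified.

```python
import numpy as np

def map_linear_inverse(H, y, noise_var, prior_var):
    """MAP estimate for y = H x + n, with n ~ N(0, noise_var * I) and a Gaussian
    prior x ~ N(0, prior_var * I).  The ratio lam = noise_var / prior_var acts as
    the regularization hyperparameter; the posterior covariance quantifies the
    uncertainty that remains after inversion."""
    lam = noise_var / prior_var
    A = H.T @ H + lam * np.eye(H.shape[1])
    x_map = np.linalg.solve(A, H.T @ y)        # posterior mean / MAP estimate
    post_cov = noise_var * np.linalg.inv(A)    # posterior covariance
    return x_map, post_cov

# Toy usage on a small, well-posed forward model (hypothetical data)
rng = np.random.default_rng(0)
H = rng.standard_normal((50, 20))
x_true = rng.standard_normal(20)
y = H @ x_true + 0.1 * rng.standard_normal(50)
x_map, post_cov = map_linear_inverse(H, y, noise_var=0.01, prior_var=1.0)
```

Even in this simplest Gaussian case, the quality of the estimate depends on how noise_var and prior_var are chosen, which is exactly the hyperparameter-adjustment difficulty the abstract points to.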


2009 ◽  
Author(s):  
John C. Aldrin ◽  
Jeremy S. Knopp ◽  
Kumar V. Jata ◽  
Donald O. Thompson ◽  
Dale E. Chimenti
