Analysis of HMAX Algorithm on Black Bar Image Dataset

Electronics ◽  
2020 ◽  
Vol 9 (4) ◽  
pp. 567
Author(s):  
Alessandro Carlini ◽  
Olivier Boisard ◽  
Michel Paindavoine

Accurate detection and classification of scenes and objects is essential for interacting with the world, both for living beings and for artificial systems. To reproduce this ability, which is so effective in the animal world, numerous computational models have been proposed, frequently based on bioinspired computational structures. Among these, Hierarchical Max-pooling (HMAX) is probably one of the most important models. HMAX is a recognition model that mimics the structures and functions of the primate visual cortex, and it has already proven its effectiveness and versatility. Nevertheless, its computational structure presents some critical aspects whose impact on the results has never been systematically assessed. Traditional assessments based on photographs force the choice of a specific context, and the complexity of such images makes it difficult to analyze the computational structure itself. Here we present a new, general, and context-independent assessment of HMAX, introducing the Black Bar Image Dataset, a customizable set of images created to be a universal and flexible model of any ‘real’ image. The results are in part surprising: HMAX demonstrates notable sensitivity even at low luminance contrast; images containing a wider information pattern enhance performance; the presence of textures improves performance, but only if the parameterization of the Gabor filter allows their correct encoding; and in complex conditions HMAX demonstrates good classification effectiveness. Moreover, the present assessment demonstrates the benefits that the Black Bar Image Dataset, with its modularity and scalability, offers for the functional investigation of any computational model.
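As a rough illustration of the kind of computation being assessed, the sketch below builds a synthetic black-bar image and runs it through the first two HMAX stages, S1 (Gabor filtering at several orientations) and C1 (local max-pooling). The image size, filter parameters, and pooling grid are assumptions chosen for the example, not the parameterization used in the paper.

```python
# Minimal sketch of the first two HMAX stages (S1: Gabor filtering,
# C1: local max-pooling) on a synthetic "black bar" image.
# All parameter values below are illustrative assumptions.
import numpy as np
import cv2

def black_bar_image(size=128, bar_width=8, angle_deg=45, contrast=0.3):
    """Synthetic image: a single dark bar on a brighter background."""
    img = np.full((size, size), 0.5 + contrast / 2, dtype=np.float32)
    bar = np.zeros_like(img)
    bar[size // 2 - bar_width // 2: size // 2 + bar_width // 2, :] = 1.0
    rot = cv2.getRotationMatrix2D((size / 2, size / 2), angle_deg, 1.0)
    bar = cv2.warpAffine(bar, rot, (size, size))
    img[bar > 0.5] = 0.5 - contrast / 2
    return img

def s1_c1(img, orientations=(0, 45, 90, 135), ksize=11, pool=8):
    """S1: Gabor responses at several orientations; C1: max over a local grid."""
    c1_maps = []
    for theta in orientations:
        kernel = cv2.getGaborKernel((ksize, ksize), sigma=3.0,
                                    theta=np.deg2rad(theta), lambd=6.0,
                                    gamma=0.5, psi=0.0)
        s1 = np.abs(cv2.filter2D(img, cv2.CV_32F, kernel))
        h, w = (s1.shape[0] // pool) * pool, (s1.shape[1] // pool) * pool
        c1 = s1[:h, :w].reshape(h // pool, pool, w // pool, pool).max(axis=(1, 3))
        c1_maps.append(c1)
    return np.stack(c1_maps)

if __name__ == "__main__":
    responses = s1_c1(black_bar_image(contrast=0.1))   # low-contrast bar
    print("max C1 response per orientation:", responses.max(axis=(1, 2)))
```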

2019 ◽  
pp. 210-229
Author(s):  
Michael Weisberg

Michael Weisberg’s book Simulation and Similarity argued that although mathematical models are sometimes described in narrative form, they are best understood as interpreted mathematical structures. But how can a mathematical structure be causal, as many models described in narrative seem to be? This chapter argues that models with apparently narrative form are actually computational structures. It explores this suggestion in detail, examining what computational structure consists of, the resources it offers modelers, and why attempting to re-describe computational models as imaginary concrete systems fails even more dramatically than it does for mathematical models.


Author(s):  
Alifia Puspaningrum ◽  
Nahya Nur ◽  
Ozzy Secio Riza ◽  
Agus Zainal Arifin

Automatic classification of tuna images needs good segmentation as a main process. The tuna image is taken against a textural background with the tuna’s shadow behind the object. This paper proposes a new weighted thresholding method for tuna image segmentation that adapts hierarchical cluster analysis (HCA) and the percentile method. The proposed method considers both the whole image and several of its parts, which are used to estimate the object whose proportion is already known. To detect the edges of the tuna images, a 2D Gabor filter is applied to the image. The resulting image is then thresholded with a value calculated using HCA and the percentile method, and mathematical morphology operations are applied to the thresholded image. In the experimental results, the proposed method improves the accuracy value by up to 20.04%, the sensitivity value by up to 29.94%, and the specificity value by up to 17.23% compared to HCA. The results show that the proposed method can segment tuna images well and more accurately than the hierarchical cluster analysis method.
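A simplified sketch of this kind of pipeline is given below: a Gabor edge response, a percentile-based threshold standing in for the paper's weighted HCA-and-percentile rule (whose exact weighting is not reproduced here), and morphological clean-up. The file name and all parameter values are illustrative assumptions.

```python
# Simplified Gabor-filter + threshold + morphology segmentation sketch.
# The percentile threshold is a stand-in for the weighted HCA/percentile rule.
import numpy as np
import cv2

def segment(gray, percentile=85):
    gray = gray.astype(np.float32) / 255.0
    # 2D Gabor filtering to emphasise object edges against the textured background
    kernel = cv2.getGaborKernel((21, 21), sigma=4.0, theta=0.0,
                                lambd=10.0, gamma=0.5, psi=0.0)
    response = np.abs(cv2.filter2D(gray, cv2.CV_32F, kernel))
    # Threshold at a percentile of the response distribution (stand-in rule)
    t = np.percentile(response, percentile)
    mask = (response >= t).astype(np.uint8) * 255
    # Mathematical morphology: close small gaps, then remove isolated specks
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, se)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, se)
    return mask

if __name__ == "__main__":
    img = cv2.imread("tuna.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input file
    if img is not None:
        cv2.imwrite("tuna_mask.png", segment(img))
```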


Author(s):  
D. Lebedev ◽  
A. Abzhalilova

Currently, biometric methods of personal identification are becoming an increasingly relevant recognition technology. The advantage of biometric identification systems, in comparison with traditional approaches, lies in the fact that it is not an external object belonging to a person that is identified, but the person himself. The most widespread is the technology of personal identification by fingerprints, which is based on the uniqueness of each person’s papillary pattern. In recent years, many algorithms and models have appeared to improve the accuracy of recognition systems. The modern algorithms (methods) for the classification of fingerprints are analyzed. Algorithms for the classification of fingerprint images by fingerprint type, based on the Gabor filter, the Haar and Daubechies wavelet transforms, and a multilayer neural network, are proposed. Numerical experiments with the proposed algorithms are carried out and their results are presented. It is shown that the use of an algorithm based on the combined application of the Gabor filter, a five-level Daubechies wavelet transform, and a multilayer neural network makes it possible to classify fingerprints effectively.
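The sketch below illustrates a combined pipeline in the spirit of the one described: Gabor filter energies and multi-level Daubechies wavelet subband energies as features, classified by a multilayer perceptron. The filter settings, network shape, and training data are placeholders, not the configuration evaluated in the paper.

```python
# Gabor-energy + five-level Daubechies wavelet-energy features fed to an MLP.
# Data are synthetic placeholders; labels are random, for illustration only.
import numpy as np
import cv2
import pywt
from sklearn.neural_network import MLPClassifier

def gabor_energies(img, orientations=(0, 45, 90, 135)):
    feats = []
    for theta in orientations:
        k = cv2.getGaborKernel((15, 15), sigma=4.0, theta=np.deg2rad(theta),
                               lambd=8.0, gamma=0.5, psi=0.0)
        r = cv2.filter2D(img.astype(np.float32), cv2.CV_32F, k)
        feats.append(float(np.mean(r ** 2)))        # energy per orientation
    return feats

def wavelet_energies(img, wavelet="db4", level=5):
    coeffs = pywt.wavedec2(img.astype(np.float32), wavelet, level=level)
    feats = [float(np.mean(coeffs[0] ** 2))]        # approximation energy
    for (cH, cV, cD) in coeffs[1:]:                 # detail subband energies
        feats += [float(np.mean(c ** 2)) for c in (cH, cV, cD)]
    return feats

def features(img):
    return np.array(gabor_energies(img) + wavelet_energies(img))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.stack([features(rng.random((256, 256))) for _ in range(40)])
    y = rng.integers(0, 4, size=40)                 # e.g. four fingerprint types
    clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500).fit(X, y)
    print("training accuracy:", clf.score(X, y))
```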


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Ziting Zhao ◽  
Tong Liu ◽  
Xudong Zhao

Machine learning plays an important role in computational intelligence and has been widely used in many engineering fields. Surface voids or bugholes frequently appearing on concrete surfaces after the casting process make the corresponding manual inspection time consuming, costly, labor intensive, and inconsistent. In order to better inspect the concrete surface, automatic classification of concrete bugholes is needed. In this paper, a variable selection strategy is proposed to pursue feature interpretability, together with an automatic ensemble classifier designed to improve the accuracy of bughole classification. A texture feature derived from the Gabor filter and gray-level run lengths is extracted from concrete surface images. Interpretable variables, which are also the components of the feature, are selected according to a proposed cumulative voting strategy. An ensemble classifier whose base classifiers are assigned automatically is provided to detect whether or not a surface void exists in an image. Experimental results on 1000 image samples indicate the effectiveness of our method, offering comparable prediction accuracy together with an explicable model.
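A rough sketch of the classification side of such a pipeline is shown below, with synthetic texture descriptors, a univariate selection step standing in for the cumulative voting strategy, and a soft-voting ensemble standing in for the automatically assigned base classifiers. Everything in it, including the data, is an illustrative assumption rather than the paper's actual configuration.

```python
# Feature selection + voting ensemble for a binary bughole / no-bughole decision.
# SelectKBest is a stand-in for the cumulative voting variable selection.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.random((200, 24))                  # 24 texture descriptors per image patch
y = (X[:, 0] + 0.2 * rng.standard_normal(200) > 0.5).astype(int)  # synthetic labels

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("svm", SVC(probability=True)),
                ("rf", RandomForestClassifier(n_estimators=100))],
    voting="soft",
)
model = make_pipeline(SelectKBest(f_classif, k=8), ensemble)
print("5-fold CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```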


Human-computer interaction (HCI) has been gaining significance in recent times. Systems based on HCI have been designed for recognizing different facial expressions. Application areas for face recognition include robotics, safety, and surveillance systems. The emotions so captured aid in predicting future actions in addition to providing valuable information. Fear, neutral, sad, surprise, and happy are the categories of primary emotions. From a database of still images, certain features can be obtained using the Gabor Filter (GF) and the Histogram of Oriented Gradients (HOG). These two techniques are used to extract features for obtaining expressions from the face. This paper focuses on the customized classification of GF and HOG features using the KNN classifier. GF provides texture features, whereas HOG is suited to images exhibiting differing lighting conditions. The simplicity and linearity of the KNN classifier make it appealing for the present application. The paper also elaborates on the various distances used in KNN classifiers, such as the city-block, Euclidean, and correlation distances. This paper uses Matlab implementations of GF, HOG, and KNN for extracting the required features and for classification, respectively. Results show that the city-block distance yields the highest accuracy.
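For illustration, the sketch below compares KNN accuracy under the three distance metrics mentioned (city-block, Euclidean, correlation) on HOG features. It uses synthetic oriented-texture images in place of facial-expression data, and it is written in Python rather than the Matlab used in the paper, so all details are assumptions.

```python
# KNN over HOG features with three distance metrics, on synthetic oriented textures.
import numpy as np
from skimage.feature import hog
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=60)               # three synthetic "expression" classes
xx, yy = np.meshgrid(np.arange(64), np.arange(64))
angles = np.deg2rad(labels * 60)                   # class-dependent stripe orientation
images = np.stack([0.5 + 0.5 * np.sin((np.cos(a) * xx + np.sin(a) * yy) / 3.0)
                   for a in angles]) + 0.05 * rng.standard_normal((60, 64, 64))

# HOG descriptor per image (orientation histograms over cells and blocks)
X = np.stack([hog(im, orientations=8, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2)) for im in images])

for metric in ("cityblock", "euclidean", "correlation"):
    knn = KNeighborsClassifier(n_neighbors=3, metric=metric, algorithm="brute")
    acc = cross_val_score(knn, X, labels, cv=5).mean()
    print(f"{metric:11s} accuracy: {acc:.3f}")
```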


2021 ◽  
Vol 2021 (29) ◽  
pp. 83-88
Author(s):  
Sahar Azimian ◽  
Farah Torkamani Azar ◽  
Seyed Ali Amirshahi

For a long time, different studies have focused on introducing new image enhancement techniques. While these techniques show good performance and are able to increase the quality of images, little attention has been paid to how and when over-enhancement occurs in an image. This could possibly be linked to the fact that current image quality metrics are not able to accurately evaluate the quality of enhanced images. In this study we introduce the Subjective Enhanced Image Dataset (SEID), in which 15 observers were asked to enhance the quality of 30 reference images, each shown to them once at low contrast and once at high contrast. Observers were instructed to enhance the quality of the images to the point where any further enhancement would result in a drop in image quality. Results show that there is agreement between observers on when over-enhancement occurs, and that this point is closely similar whether the high-contrast or the low-contrast image is enhanced.


2015 ◽  
Vol 2 (2) ◽  
pp. 24-41 ◽  
Author(s):  
K. Viswanath ◽  
R. Gunasundari

Abnormalities of the kidney can be identified by ultrasound imaging. The kidney may have structural abnormalities such as swelling or changes in its position and appearance. Kidney abnormalities may also arise due to the formation of stones, cysts, cancerous cells, congenital anomalies, blockage of urine, etc. For surgical operations it is very important to identify the exact and accurate location of stones in the kidney. Ultrasound images have low contrast and contain speckle noise, which makes the detection of kidney abnormalities a rather challenging task. Thus, preprocessing of the ultrasound images is carried out to remove speckle noise. In preprocessing, image restoration is first performed to reduce speckle noise, and the restored image is then passed through a Gabor filter for smoothing. Next, the resulting image is enhanced using histogram equalization. The preprocessed ultrasound image is segmented using distance regularized level set segmentation (DR-LSS), since it yields better results. It uses a two-step splitting method to iteratively solve the DR-LSS equation: the first step iterates the LSS equation and solves the signed distance equation, and the second step regularizes the level set function obtained from the first step for better stability. The distance regularization is included in LSS to eliminate leakage at the image boundary. DR-LSS does not require any expensive re-initialization and operates at high speed. The DR-LSS results are compared with the distance regularized level set evolution methods DRLSE1, DRLSE2, and DRLSE3. The region of the kidney extracted after segmentation is decomposed into Symlet (Sym12), Biorthogonal (bio3.7, bio3.9, and bio4.4), and Daubechies (Db12) lifting-scheme wavelet subbands to extract energy levels. These energy levels indicate the presence of a stone at a particular location, since they vary significantly from normal energy levels. The energy levels are used to train a Multilayer Perceptron (MLP) with Back Propagation (BP) ANN to identify the type of stone with an accuracy of 98.6%.
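The preprocessing chain described (speckle reduction, Gabor smoothing, histogram equalization) can be sketched as follows. A median filter stands in for the restoration step, the DR-LSS segmentation itself is not reproduced, and the input file name and filter parameters are placeholders.

```python
# Speckle reduction -> Gabor smoothing -> histogram equalization, as a sketch
# of the ultrasound preprocessing stage. Parameters are illustrative only.
import cv2

def preprocess(gray):
    # Speckle reduction (median filter as a simple stand-in for restoration)
    denoised = cv2.medianBlur(gray, 5)
    # Gabor filter used as an orientation-selective smoothing stage;
    # with psi=0 the kernel has a positive DC gain, so normalize by its sum
    kernel = cv2.getGaborKernel((15, 15), sigma=4.0, theta=0.0,
                                lambd=10.0, gamma=0.5, psi=0.0)
    kernel /= kernel.sum()
    smoothed = cv2.filter2D(denoised, cv2.CV_8U, kernel)
    # Contrast enhancement via histogram equalization
    return cv2.equalizeHist(smoothed)

if __name__ == "__main__":
    us = cv2.imread("kidney_ultrasound.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    if us is not None:
        cv2.imwrite("kidney_preprocessed.png", preprocess(us))
```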


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Basma Abd El-Rahiem ◽  
Ahmed Sedik ◽  
Ghada M. El Banby ◽  
Hani M. Ibrahem ◽  
Mohamed Amin ◽  
...  

Purpose: The objective of this paper is to perform infrared (IR) face recognition efficiently with convolutional neural networks (CNNs). The proposed model in this paper has several advantages, such as the automatic feature extraction using convolutional and pooling layers and the ability to distinguish between faces without visual details.
Design/methodology/approach: A model which comprises five convolutional layers in addition to five max-pooling layers is introduced for the recognition of IR faces.
Findings: The experimental results and analysis reveal high recognition rates of IR faces with the proposed model.
Originality/value: A designed CNN model is presented for IR face recognition. Both the feature extraction and classification tasks are incorporated into this model. The problems of low contrast and absence of details in IR images are overcome with the proposed model. The recognition accuracy reaches 100% in experiments on the Terravic Facial IR Database (TFIRDB).
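A hedged Keras sketch of a network with five convolutional and five max-pooling layers, matching the architecture outline above, is given below. The filter counts, input resolution (128x128 grayscale IR crops), and number of identities are assumptions, not the paper's exact configuration.

```python
# Five conv + five max-pool blocks followed by a softmax classifier over identities.
from tensorflow.keras import layers, models

def build_ir_face_cnn(input_shape=(128, 128, 1), num_classes=20):
    model = models.Sequential([layers.Input(shape=input_shape)])
    for filters in (16, 32, 64, 128, 256):          # five conv + pool blocks
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.MaxPooling2D(2))
    model.add(layers.Flatten())
    model.add(layers.Dense(num_classes, activation="softmax"))
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_ir_face_cnn().summary()
```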


Poor understanding of rudist growth geometry and anatomy has hampered systematic studies of the superfamily. A flexible model that simulates the growth of rudist shells is therefore presented so that evolutionary trends in the group may be consistently analysed; this model is constructed by rotational or irrotational stacking of inclined gnomons around a contained axis. Functional analysis of shell geometry and reconstructed anatomy provides a more solid foundation for rudist systematics. The first rudists (Diceratidae) employed one or other of the spirogyrate umbones, inherited from megalodontid ancestors, as a facultatively elevating encrustation stem. Invagination of the ligament in the Caprotinidae permitted uncoiling of the shell, though this also entailed reduced gaping and therefore externalization of food entrapment, with increasing involvement of the mantle margins. Caprotinid functional design was preadapted to several new adaptive zones, which were exploited by various advanced descendant groups. Some of these groups show homeomorphic evolution and have often been assembled by earlier workers into polyphyletic ‘families’ (e.g. Caprinidae). An attempt is therefore made to establish a skeletal classification of rudists on the basis of true clades, as distinguished by careful functional analysis.


2010 ◽  
Vol 97-101 ◽  
pp. 2940-2943 ◽  
Author(s):  
Nang Seng Siri Mar ◽  
Clinton Fookes ◽  
K.D.V. Yarlagadda Prasad

This paper investigates the validity of a Gabor filter bank for feature extraction from solder joint images on Printed Circuit Boards (PCBs). A distance measure based on the Mahalanobis Cosine metric is also presented for the classification of five different types of solder joints. In the experimental results, this methodology achieved high accuracy and well-generalised performance. It can therefore be an effective method for reducing cost and improving quality in the production of PCBs in the manufacturing industry.
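The sketch below shows one common way a Mahalanobis Cosine distance is computed: features are whitened with a training-set covariance (the Mahalanobis space) and the cosine distance is then taken between the whitened vectors. The synthetic data stand in for Gabor-filter-bank features of solder joints; this reflects the metric's usual definition, not necessarily the paper's exact procedure.

```python
# Whitening transform followed by cosine distance in the whitened space.
import numpy as np

def whitening_transform(X, eps=1e-8):
    """Return mean and matrix W such that (x - mean) @ W has identity covariance."""
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(1.0 / np.sqrt(vals + eps))
    return mean, W

def mahalanobis_cosine(u, v, mean, W):
    uw, vw = (u - mean) @ W, (v - mean) @ W
    return 1.0 - np.dot(uw, vw) / (np.linalg.norm(uw) * np.linalg.norm(vw))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gallery = rng.random((100, 32))                 # placeholder Gabor-bank features
    mean, W = whitening_transform(gallery)
    d = mahalanobis_cosine(gallery[0], gallery[1], mean, W)
    print("Mahalanobis cosine distance:", d)
```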

