Detection of fake shallots using website-based haar-like features algorithm

Compiler ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 51
Author(s):  
Bambang Agus Setyawan ◽  
Mutaqin Akbar

Shallots are commonly used as an essential cooking spice or complementary seasoning. The high market demand for this commodity has triggered some people to counterfeit it: they mix shallots with defective onion products to increase their profit. This calls for a system that can help people distinguish whether a shallot is genuine or fake. This research aims to provide an object recognition system for fake shallots using the Haar-like feature algorithm. It used a cascade training data set of 59 positive images and 150 negative images, with 50 comparison images. The shallots were identified through the Haar cascade pipeline: integral image, adaptive boosting, cascade classifier, and local binary pattern histogram. The system was built as a Django website using the Python programming language. The test was conducted 30 times on Brebes shallots mixed with Mumbai mini onions, using both single and mixture test methods, and obtained an average recognition rate of 69.2% for Mumbai mini onions.
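As a rough illustration of the detection step only (not the Django web layer), the sketch below shows how a trained Haar cascade combined with a local binary pattern histogram (LBPH) model could flag candidate mini-onion regions in an image. The file names `onion_cascade.xml` and `lbph_model.yml`, the label convention, and the detection parameters are assumptions, not the authors' artifacts; `cv2.face` requires the opencv-contrib-python package.

```python
# Sketch: detect candidate regions with a trained Haar cascade, then verify
# each region with an LBPH model. File names and labels are hypothetical.
import cv2

cascade = cv2.CascadeClassifier("onion_cascade.xml")          # assumed cascade file
recognizer = cv2.face.LBPHFaceRecognizer_create()             # needs opencv-contrib
recognizer.read("lbph_model.yml")                             # assumed LBPH model

def detect_fake_shallots(image_path):
    """Return bounding boxes of regions recognised as Mumbai mini onions."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    fakes = []
    for (x, y, w, h) in boxes:
        roi = cv2.resize(gray[y:y + h, x:x + w], (100, 100))
        label, confidence = recognizer.predict(roi)
        if label == 1:                     # label 1 = mini onion in this sketch
            fakes.append((x, y, w, h, confidence))
    return fakes
```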

2021 ◽  
pp. 52-66
Author(s):  
Huang-Mei He ◽  
Yi Chen ◽  
Jia-Ying Xiao ◽  
Xue-Qing Chen ◽  
Zne-Jung Lee

China has carried out a large number of real estate market reforms that have changed real estate market demand considerably. At the same time, real estate prices have soared in some cities and have surpassed the spending power of many ordinary people. As real estate prices have received widespread attention from society, it is important to understand which factors affect them. We therefore propose a data analysis method for identifying the factors that influence real estate prices. The method first performs data cleaning and conversion on the data. To discretize the real estate price, we use the mean ± standard deviation (SD), mean ± 0.5 SD, and mean ± 2 SD of the price to divide it into three categories, which serve as the output variable. We then build decision tree and random forest models for six different situations and compare them. When the data set is divided into training data (70%) and testing data (30%), the highest testing accuracy is obtained. In addition, by observing the importance of each input variable, we find that the main factors influencing real estate price are cost, interior decoration, location, and status. The results suggest that both the real estate industry and buyers should pay attention to these factors when adjusting or purchasing real estate.
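A minimal sketch of the discretization-plus-comparison workflow described above, assuming a CSV file `real_estate.csv` with a numeric "price" column and numeric feature columns (the file name and column names are illustrative, not from the paper):

```python
# Discretize price into three classes with mean ± k*SD, then compare a
# decision tree and a random forest on a 70/30 split.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("real_estate.csv")        # assumed, already cleaned/converted

def discretize_price(price, k=1.0):
    """Split prices into low/medium/high using mean ± k*SD (k = 0.5, 1, or 2)."""
    mu, sd = price.mean(), price.std()
    bins = [-np.inf, mu - k * sd, mu + k * sd, np.inf]
    return pd.cut(price, bins=bins, labels=["low", "medium", "high"])

y = discretize_price(df["price"], k=1.0)
X = df.drop(columns=["price"])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for model in (DecisionTreeClassifier(), RandomForestClassifier(n_estimators=100)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, "test accuracy:", model.score(X_te, y_te))

# Feature importances point to the main price drivers (cost, decoration, ...).
rf = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
print(sorted(zip(rf.feature_importances_, X.columns), reverse=True))
```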


Author(s):  
Pavitra Patel ◽  
A. A. Chaudhari ◽  
M. A. Pund ◽  
D. H. Deshmukh

Speech emotion recognition is an important issue affecting human-machine interaction. Automatic recognition of human emotion in speech aims at recognizing the underlying emotional state of a speaker from the speech signal. Gaussian mixture models (GMMs) and the minimum-error-rate classifier (i.e. the Bayesian optimal classifier) are popular and effective tools for speech emotion recognition. Typically, GMMs are used to model the class-conditional distributions of acoustic features, and their parameters are estimated by the expectation-maximization (EM) algorithm on a training data set. In this paper, we introduce a boosting algorithm for reliably and accurately estimating the class-conditional GMMs. The resulting algorithm is named the Boosted-GMM algorithm. Our speech emotion recognition experiments show that the recognition rates are effectively and significantly boosted by the Boosted-GMM algorithm compared to the EM-GMM algorithm.

During human-machine interaction, human beings have feelings that they want to convey to their communication partner, whether that partner is a human or a machine. This work concerns the recognition of human emotions from the speech signal.

Emotion recognition from a speaker's speech is difficult for the following reasons. Acoustic variability is introduced by different sentences, speakers, speaking styles, and speaking rates, and the same utterance may express different emotions, which makes it very difficult to differentiate the corresponding portions of the utterance. Another problem is that emotional expression depends on the speaker and on his or her culture and environment; as the culture and environment change, the speaking style also changes, which is a further challenge for a speech emotion recognition system.
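For the plain EM-GMM baseline mentioned above (not the Boosted-GMM variant, whose boosting scheme is not reproduced here), a minimal sketch using scikit-learn could look as follows; the feature layout and class labels are assumptions.

```python
# One EM-trained GMM per emotion class; classification picks the class whose
# GMM assigns the highest log-likelihood to the utterance's acoustic frames.
from sklearn.mixture import GaussianMixture

def train_class_gmms(features_by_emotion, n_components=8):
    """features_by_emotion: dict mapping emotion -> (n_frames, n_features) array."""
    return {emo: GaussianMixture(n_components, covariance_type="diag").fit(X)
            for emo, X in features_by_emotion.items()}

def classify(gmms, utterance_frames):
    """Bayes decision with equal priors: highest average log-likelihood wins."""
    scores = {emo: gmm.score(utterance_frames) for emo, gmm in gmms.items()}
    return max(scores, key=scores.get)
```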


Author(s):  
Wening Mustikarini ◽  
Risanuri Hidayat ◽  
Agus Bejo

Abstract — Automatic Speech Recognition (ASR) is a technology that uses machines to process and recognize the human voice. One way to increase the recognition rate is to use a model of the language to be recognized. In this paper, a speech recognition application is introduced to recognize the words "atas" (up), "bawah" (down), "kanan" (right), and "kiri" (left). This research used 400 samples of speech data: 75 samples of each word for training and 25 samples of each word for testing. The speech recognition system was designed using 13 Mel Frequency Cepstral Coefficients (MFCC) as features and a Support Vector Machine (SVM) as the classifier. The system was tested with linear and RBF kernels, various cost values, and three sample sizes (n = 25, 50, 75). The best average accuracy was obtained from an SVM with a linear kernel, a cost value of 100, and a data set of 75 samples per class. During the training phase, the system achieved an F1-score (the trade-off between precision and recall) of 80% for the word "atas", 86% for "bawah", 81% for "kanan", and 100% for "kiri". Using 25 new samples per class in the testing phase, the F1-score was 76% for the "atas" class, 54% for "bawah", 44% for "kanan", and 100% for "kiri".
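A minimal sketch of such an MFCC + SVM pipeline, assuming lists of (wav_path, label) pairs named `train_files` and `test_files` (these names, and the time-averaging of MFCCs into a fixed-length vector, are assumptions rather than the authors' exact feature handling):

```python
# 13 MFCCs per utterance (averaged over time) fed to a linear-kernel SVM.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.metrics import classification_report

def mfcc_features(path, n_mfcc=13):
    """Load a WAV file and average its 13 MFCCs over time."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

X_train = np.array([mfcc_features(p) for p, _ in train_files])
y_train = [label for _, label in train_files]
X_test = np.array([mfcc_features(p) for p, _ in test_files])
y_test = [label for _, label in test_files]

# Linear kernel with cost C = 100, the best-performing setting in the paper.
clf = SVC(kernel="linear", C=100).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))   # per-class F1-scores
```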


2008 ◽  
Vol 2008 ◽  
pp. 1-6 ◽  
Author(s):  
Farid Flitti ◽  
Aicha Far ◽  
Bin Guo ◽  
Amine Bermak

Gas recognition is an emerging research area with many civil, military, and industrial applications. The success of any gas recognition system depends on its computational complexity and its robustness. In this work, we propose a new low-complexity recognition method which is tested and successfully validated on a tin-oxide gas sensor array chip. The recognition system is based on a vector-angle similarity measure between the query gas and the representatives of the different gas classes. The latter are obtained by applying a clustering algorithm based on the same measure to the training data set. Experimental results on our in-house gas sensor array show more than 98% correct recognition. The robustness of the proposed method is tested by recognizing gas measurements with simulated drift. Less than 1% performance degradation is observed in the worst-case scenario, which represents a significant improvement over the current state of the art.
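The vector-angle similarity is equivalent to cosine similarity, so the classification step can be sketched as below; the dictionary of per-class representative vectors is assumed to come from the clustering stage, and all names are illustrative.

```python
# Assign a query sensor-array reading to the gas class of its most similar
# representative vector (vector-angle / cosine similarity).
import numpy as np

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def classify_gas(query, representatives):
    """representatives: dict mapping gas name -> list of representative vectors."""
    best_gas, best_sim = None, -1.0
    for gas, reps in representatives.items():
        for rep in reps:
            sim = cosine_similarity(query, rep)
            if sim > best_sim:
                best_gas, best_sim = gas, sim
    return best_gas, best_sim
```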


2014 ◽  
Vol 34 (1) ◽  
pp. 94-105 ◽  
Author(s):  
Ognjan Luzanin ◽  
Miroslav Plancak

Purpose – The main purpose is to present a methodology that allows efficient hand gesture recognition using a low-budget, 5-sensor data glove. To allow widespread use of low-budget data gloves in engineering virtual reality (VR) applications, gesture dictionaries must be enhanced with more ergonomic and symbolically meaningful hand gestures, while providing high gesture recognition rates for both seen and unseen users. Design/methodology/approach – The simple boundary-value gesture recognition methodology was replaced by a probabilistic neural network (PNN)-based gesture recognition system able to process simple and complex static gestures. To overcome problems inherent to PNNs – primarily, slow execution with large training data sets – the proposed gesture recognition system uses a clustering ensemble to reduce the training data set without significant deterioration of the quality of training. The reduction of the training data set is performed efficiently using three types of clustering algorithms, yielding a small number of input vectors that represent the original population very well. Findings – The proposed methodology provides efficient recognition of simple and complex static gestures and was also successfully tested with gestures of an unseen user, i.e. a person who took no part in the training phase. Practical implications – The hand gesture recognition system based on the proposed methodology enables the use of affordable data gloves with a small number of sensors in VR engineering applications that require complex static gestures, including assembly and maintenance simulations. Originality/value – According to the literature, there are no similar solutions that allow efficient recognition of simple and complex static hand gestures based on a 5-sensor data glove.
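A rough sketch of the two-stage idea (training-set reduction followed by a PNN-style decision) is shown below. It uses a single k-means pass in place of the paper's clustering ensemble and a simple Gaussian Parzen score as the PNN pattern layer; the smoothing parameter and cluster counts are assumptions.

```python
# Reduce each gesture class to a few cluster centres, then classify a new
# glove reading by the largest summed Gaussian kernel response per class.
import numpy as np
from sklearn.cluster import KMeans

def reduce_class(X, n_centres=10):
    """Replace a class's raw 5-sensor samples with k-means centres."""
    return KMeans(n_clusters=n_centres, n_init=10).fit(X).cluster_centers_

def pnn_classify(x, centres_by_class, sigma=0.1):
    """Pick the class with the largest average Gaussian kernel response."""
    scores = {}
    for label, centres in centres_by_class.items():
        d2 = np.sum((centres - x) ** 2, axis=1)
        scores[label] = np.exp(-d2 / (2 * sigma ** 2)).mean()
    return max(scores, key=scores.get)
```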


2020 ◽  
Vol 17 (3) ◽  
pp. 299-305 ◽  
Author(s):  
Riaz Ahmad ◽  
Saeeda Naz ◽  
Muhammad Afzal ◽  
Sheikh Rashid ◽  
Marcus Liwicki ◽  
...  

This paper presents a deep learning benchmark on a complex dataset known as KFUPM Handwritten Arabic TexT (KHATT). The KHATT dataset consists of complex patterns of handwritten Arabic text-lines. This paper contributes mainly in three aspects: (1) pre-processing, (2) a deep learning based approach, and (3) data augmentation. The pre-processing step includes pruning extra white space and de-skewing the skewed text-lines. We deploy a deep learning approach based on Multi-Dimensional Long Short-Term Memory (MDLSTM) networks and Connectionist Temporal Classification (CTC). MDLSTM has the advantage of scanning the Arabic text-lines in all directions (horizontal and vertical) to cover dots, diacritics, strokes, and fine inflections. Data augmentation combined with the deep learning approach achieves a promising improvement in results, raising the Character Recognition (CR) rate to 80.02% from the 75.08% baseline.
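To illustrate how CTC attaches to a recurrent text-line model, here is a minimal PyTorch sketch. A bidirectional LSTM stands in for the MDLSTM used in the paper (PyTorch has no built-in MDLSTM), and the feature width, hidden size, and alphabet size are placeholder values.

```python
# CTC training step for a recurrent handwriting recognizer (simplified).
import torch
import torch.nn as nn

n_classes = 80 + 1                      # assumed Arabic character set + CTC blank
rnn = nn.LSTM(input_size=48, hidden_size=128, bidirectional=True, batch_first=True)
fc = nn.Linear(2 * 128, n_classes)
ctc = nn.CTCLoss(blank=0)

def ctc_step(features, targets, target_lengths):
    """features: (batch, time, 48) column features of a text-line image;
    targets: padded label tensor with per-sample lengths in target_lengths."""
    out, _ = rnn(features)
    log_probs = fc(out).log_softmax(dim=2).permute(1, 0, 2)    # (T, N, C)
    input_lengths = torch.full((features.size(0),), log_probs.size(0),
                               dtype=torch.long)
    return ctc(log_probs, targets, input_lengths, target_lengths)
```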


2019 ◽  
Vol 12 (2) ◽  
pp. 120-127 ◽  
Author(s):  
Wael Farag

Background: In this paper, a Convolutional Neural Network (CNN) that learns safe driving behavior and smooth steering manoeuvring is proposed as an empowerment of autonomous driving technologies. The training data are collected from a front-facing camera and the steering commands issued by an experienced driver driving in traffic as well as on urban roads. Methods: These data are then used to train the proposed CNN to perform what is called “behavioral cloning”. The proposed behavioral cloning CNN is named “BCNet”, and its deep seventeen-layer architecture was selected after extensive trials. BCNet was trained using the Adam optimization algorithm, a variant of the Stochastic Gradient Descent (SGD) technique. Results: The paper goes through the development and training process in detail and shows the image processing pipeline harnessed in the development. Conclusion: The proposed approach proved successful in cloning the driving behavior embedded in the training data set after extensive simulations.
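A much-reduced Keras sketch of the behavioral cloning setup is given below: camera frames in, one steering value out, mean-squared-error loss, Adam optimizer. This is not the seventeen-layer BCNet architecture; the layer sizes, input resolution, and learning rate are assumptions for illustration.

```python
# Toy behavioral-cloning regressor: image -> steering command, trained with Adam.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Lambda(lambda x: x / 127.5 - 1.0, input_shape=(66, 200, 3)),  # normalize
    layers.Conv2D(24, 5, strides=2, activation="relu"),
    layers.Conv2D(36, 5, strides=2, activation="relu"),
    layers.Conv2D(48, 5, strides=2, activation="relu"),
    layers.Flatten(),
    layers.Dense(100, activation="relu"),
    layers.Dense(1),                    # predicted steering angle
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")
# model.fit(images, steering_angles, epochs=10, validation_split=0.2)
```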


Author(s):  
Ritu Khandelwal ◽  
Hemlata Goyal ◽  
Rajveer Singh Shekhawat

Introduction: Machine learning is an intelligent technology that works as a bridge between business and data science. With the involvement of data science, the business goal focuses on finding valuable insights in the available data. A large part of Indian cinema is Bollywood, a multi-million-dollar industry. This paper attempts to predict whether an upcoming Bollywood movie will be a Blockbuster, Superhit, Hit, Average, or Flop by applying machine learning techniques for classification and prediction. The first step in building a classifier or prediction model is the learning stage, in which the training data set is used to train the model with a chosen technique or algorithm; the rules generated in this stage form the model and are used to predict future trends in different types of organizations. Methods: Classification and prediction techniques such as Support Vector Machine (SVM), Random Forest, Decision Tree, Naïve Bayes, Logistic Regression, AdaBoost, and KNN are applied to find the most efficient and effective results. All of these functionalities can be applied through GUI-based workflows organized into categories such as Data, Visualize, Model, and Evaluate. Result: The trained models generate rules from the training data set, and their predictions are compared to identify which algorithm forecasts movie success most reliably. Conclusion: This paper focuses on a comparative analysis performed with respect to parameters such as accuracy and the confusion matrix to identify the best possible model for predicting movie success. Using advertisement propaganda, production houses can plan the best time to release a movie according to the predicted success rate to gain higher benefits. Discussion: Data mining is the process of discovering patterns in large data sets, and the relationships discovered help to solve business problems and to predict forthcoming trends. This prediction can help production houses with advertisement propaganda and cost planning, and by accounting for these factors they can make a movie more profitable.
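A minimal sketch of the comparative analysis with scikit-learn is shown below; the feature matrix `X` and success-category labels `y` are assumed to be prepared already, and the split ratio and default hyperparameters are illustrative.

```python
# Fit each listed classifier on the same split and compare accuracy and
# confusion matrices to pick the best model for movie-success prediction.
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

classifiers = {
    "SVM": SVC(),
    "Random Forest": RandomForestClassifier(),
    "Decision Tree": DecisionTreeClassifier(),
    "Naive Bayes": GaussianNB(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "AdaBoost": AdaBoostClassifier(),
    "KNN": KNeighborsClassifier(),
}

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print(name, "accuracy:", accuracy_score(y_te, pred))
    print(confusion_matrix(y_te, pred))
```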


2019 ◽  
Vol 9 (6) ◽  
pp. 1128 ◽  
Author(s):  
Yundong Li ◽  
Wei Hu ◽  
Han Dong ◽  
Xueyan Zhang

Using aerial cameras, satellite remote sensing, or unmanned aerial vehicles (UAVs) equipped with cameras can facilitate search and rescue tasks after disasters. The traditional manual interpretation of huge aerial images is inefficient and could be replaced by machine learning-based methods combined with image processing techniques. Given the development of machine learning, researchers have found that convolutional neural networks can effectively extract features from images. Some deep learning-based target detection methods, such as the single-shot multibox detector (SSD) algorithm, can achieve better results than traditional methods. However, the impressive performance of machine learning-based methods depends on numerous labeled samples, and given the complexity of post-disaster scenarios, obtaining many samples in the aftermath of a disaster is difficult. To address this issue, a damaged building assessment method using SSD with pretraining and data augmentation is proposed in the current study, with the following highlights. (1) Objects are detected and classified into undamaged buildings, damaged buildings, and ruins. (2) A convolutional auto-encoder (CAE) based on VGG16 is constructed and trained using unlabeled post-disaster images; as a transfer learning strategy, the weights of the SSD model are initialized with the weights of the CAE counterpart. (3) Data augmentation strategies, such as image mirroring, rotation, Gaussian blur, and Gaussian noise, are utilized to augment the training data set. As a case study, aerial images of Hurricane Sandy in 2012 were used to validate the effectiveness of the proposed method. Experiments show that the pretraining strategy improves overall accuracy by 10% compared with the SSD trained from scratch, and that the data augmentation strategies improve mAP and mF1 by 72% and 20%, respectively. Finally, the experiments are further verified on another dataset, from Hurricane Irma, and it is concluded that the proposed method is feasible.
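The four augmentation operations named in point (3) can be sketched with OpenCV and NumPy as below; the rotation angle, blur kernel, and noise level are illustrative choices, not the paper's exact settings.

```python
# Generate mirrored, rotated, blurred, and noisy variants of a training image.
import cv2
import numpy as np

def augment(image):
    """Return a list of augmented variants of a single aerial training image."""
    h, w = image.shape[:2]
    mirrored = cv2.flip(image, 1)                                # horizontal mirror
    rot_mat = cv2.getRotationMatrix2D((w / 2, h / 2), 15, 1.0)   # 15-degree rotation
    rotated = cv2.warpAffine(image, rot_mat, (w, h))
    blurred = cv2.GaussianBlur(image, (5, 5), 0)                 # Gaussian blur
    noise = np.random.normal(0, 10, image.shape).astype(np.float32)
    noisy = np.clip(image.astype(np.float32) + noise, 0, 255).astype(np.uint8)
    return [mirrored, rotated, blurred, noisy]
```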

