Discrimination of civet coffee using visible spectroscopy

2020 ◽  
Vol 8 (3) ◽  
pp. 239-245
Author(s):  
Graciella Mae L Adier ◽  
Charlene A Reyes ◽  
Edwin R Arboleda

Civet coffee is considered highly marketable and rare. This specialty coffee has a distinctive flavor, commands a higher price than regular coffee, and is restricted in supply. Establishing a straightforward and efficient approach to authenticating civet coffee is fundamental for both quality assurance and consumer protection. This study utilized visible spectroscopy as a non-destructive and quick technique to measure the absorbance of civet and non-civet coffee samples from 450 nm to 650 nm. Overall, 160 samples were analyzed, yielding a total of 960 spectra. The data gathered from the first 120 samples were fed to the classification learner application and used as a training data set. The remaining samples were used for testing the classification algorithms. The study shows that civet coffee bean samples have lower absorbance values in the visible spectrum than non-civet coffee bean samples. The process yielded classification scores of 96.7% to 100% for quadratic discriminant analysis and logistic regression. Of the two classification algorithms, logistic regression had the faster training time, at 14.050 seconds. The application of visible spectroscopy combined with data mining algorithms is effective in discriminating civet coffee from non-civet coffee.
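
The workflow described above maps naturally onto off-the-shelf tools. The sketch below is a hypothetical illustration (not the authors' code) of fitting quadratic discriminant analysis and logistic regression to per-wavelength absorbance features; the 120/40 train/test split mirrors the study, but the data, wavelength grid, and labels here are synthetic placeholders.

```python
# Hypothetical sketch: classifying coffee samples from visible-spectrum
# absorbance features, in the spirit of the study (not the authors' code).
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Assumed layout: one row per sample, one column per wavelength (450-650 nm).
wavelengths = np.arange(450, 651, 25)                     # 9 bands, illustrative
X_train = rng.normal(0.5, 0.1, (120, len(wavelengths)))   # first 120 samples
y_train = rng.integers(0, 2, 120)                         # 1 = civet, 0 = non-civet
X_test = rng.normal(0.5, 0.1, (40, len(wavelengths)))     # remaining 40 samples
y_test = rng.integers(0, 2, 40)

for model in (QuadraticDiscriminantAnalysis(), LogisticRegression(max_iter=1000)):
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(type(model).__name__, f"accuracy: {acc:.3f}")
```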

Author(s):  
Barak Chizi ◽  
Lior Rokach ◽  
Oded Maimon

Dimensionality (i.e., the number of data set attributes or groups of attributes) constitutes a serious obstacle to the efficiency of most data mining algorithms (Maimon and Last, 2000). The main reason is that data mining algorithms are computationally intensive. This obstacle is sometimes known as the "curse of dimensionality" (Bellman, 1961). The objective of feature selection is to identify the important features in the data set and discard all others as irrelevant and redundant information. Since feature selection reduces the dimensionality of the data, it allows data mining algorithms to operate faster and more effectively. In some cases, feature selection also improves the performance of the data mining method, mainly because it yields a more compact, easily interpreted representation of the target concept. There are three main approaches to feature selection: wrapper, filter and embedded. The wrapper approach (Kohavi, 1995; Kohavi and John, 1996) uses an inducer as a black box along with a statistical re-sampling technique, such as cross-validation, to select the best feature subset according to some predictive measure. The filter approach (Kohavi, 1995; Kohavi and John, 1996) operates independently of the data mining method employed subsequently: undesirable features are filtered out of the data before learning begins. These algorithms use heuristics based on general characteristics of the data to evaluate the merit of feature subsets. A sub-category of filter methods, referred to here as rankers, comprises methods that employ some criterion to score each feature and provide a ranking; from this ordering, several feature subsets can be chosen by manually setting a cut-off point. The embedded approach (see, for instance, Guyon and Elisseeff, 2003) is similar to the wrapper approach in that the features are selected for a specific inducer, but it selects the features during the learning process itself.
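
As a concrete illustration of the ranker sub-category, the following sketch scores each feature independently (here with mutual information, one possible merit criterion), sorts the scores into a ranking, and cuts the ordering at a manually chosen k. The data set and the choice of criterion are assumptions made for the example.

```python
# Illustrative filter-style "ranker": score each feature independently,
# rank, then cut the ordering at a manually chosen point (top-k here).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           random_state=0)

scores = mutual_info_classif(X, y, random_state=0)   # per-feature merit
ranking = np.argsort(scores)[::-1]                   # best feature first

k = 5                                                # manually set cut-off
selected = ranking[:k]
print("top features:", selected, "scores:", scores[selected].round(3))
```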


Author(s):  
Wei Mingjun ◽  
Chai Lei ◽  
Wei Renying ◽  
Huo Wang

Our team won the Grand Champion (Tie) award in the PAKDD-2007 data mining competition. The data mining task was to score credit card customers of a consumer finance company according to the likelihood that they would take up the home loans offered by the company. This report presents our solution to this business problem. TreeNet and logistic regression are the data mining algorithms used in this project. The final score is based on a cross-algorithm ensemble of two within-algorithm ensembles, one of TreeNet and one of logistic regression. Finally, some discussion of our solution is presented.
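
TreeNet is a proprietary gradient-boosting product, so the sketch below substitutes an open-source boosted-tree learner to illustrate the cross-algorithm ensemble idea: each model produces a take-up probability, and the final score averages the two. The data, class balance, and equal averaging weights are illustrative assumptions, not the winning entry's configuration.

```python
# Sketch of a cross-algorithm ensemble: average the scores of a boosted-tree
# model (a stand-in for the proprietary TreeNet) and logistic regression.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=2000, n_features=15, weights=[0.9],
                           random_state=0)          # imbalanced, like take-up data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

gbt = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Final score: simple average of the two models' predicted probabilities.
score = (gbt.predict_proba(X_te)[:, 1] + lr.predict_proba(X_te)[:, 1]) / 2
print("ensemble AUC:", round(roc_auc_score(y_te, score), 3))
```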


2013 ◽  
Vol 2013 ◽  
pp. 1-9 ◽  
Author(s):  
R. Manjula Devi ◽  
S. Kuppuswami ◽  
R. C. Suganthe

Artificial neural networks have been used extensively for solving pattern recognition tasks. However, training a complex neural network on a very large data set requires excessively long training times. In this correspondence, a new fast Linear Adaptive Skipping Training (LAST) algorithm for training artificial neural networks (ANN) is introduced. The core idea of this paper is to improve the training speed of an ANN by presenting, in each epoch, only the input samples that were not classified correctly in the previous epoch, thereby dynamically reducing the number of input samples presented to the network in every single epoch without affecting the network's accuracy. Shrinking the training set in this way reduces the training time and thereby improves the training speed. The LAST algorithm also determines how many epochs a particular input sample should skip, depending on the successful classification of that sample. LAST can be incorporated into any supervised training algorithm. Experimental results show that the training speed attained by the LAST algorithm is considerably higher than that of conventional training algorithms.
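
A minimal sketch of the skipping idea follows, using a toy logistic model trained by stochastic gradient descent: a sample classified correctly sits out a number of subsequent epochs that grows with its streak of successes, while a misclassified sample rejoins immediately. The linear skip schedule and the model are assumptions made for illustration, not the authors' LAST implementation.

```python
# Conceptual sketch of adaptive skipping: correctly classified samples skip
# upcoming epochs; misclassified samples return at once.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = (X @ w_true > 0).astype(float)

w = np.zeros(5)
lr = 0.1
skip_until = np.zeros(len(X), dtype=int)    # epoch at which a sample re-enters
streak = np.zeros(len(X), dtype=int)        # consecutive correct classifications

for epoch in range(30):
    active = np.where(skip_until <= epoch)[0]
    for i in active:
        p = 1.0 / (1.0 + np.exp(-X[i] @ w))
        w -= lr * (p - y[i]) * X[i]                 # SGD step on logistic loss
        if (p > 0.5) == bool(y[i]):                 # correct -> skip longer
            streak[i] += 1
            skip_until[i] = epoch + 1 + streak[i]   # linear skip schedule
        else:                                       # wrong -> train next epoch
            streak[i] = 0
            skip_until[i] = epoch + 1
    print(f"epoch {epoch:2d}: trained on {len(active)} samples")
```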


2018 ◽  
Vol 7 (4.36) ◽  
pp. 845 ◽  
Author(s):  
K. Kavitha ◽  
K. Rohini ◽  
G. Suseendran

Data mining is the process by which knowledge is extracted from interesting patterns recognized in large amounts of data. It is one of the knowledge discovery areas widely used in the field of computer science. Data mining is an inter-disciplinary area with great impact on various other fields, such as data analytics in business organizations, medical forecasting and diagnosis, market analysis, statistical analysis and forecasting, and predictive analysis. Data mining has multiple forms, such as text mining, web mining, visual mining, spatial mining, knowledge mining and distributed mining. In general, the data mining process involves many tasks, beginning with pre-processing; the actual mining starts after the pre-processing task. This work deals with the analysis and comparison of various data mining algorithms, particularly Meta classifiers, based on performance and accuracy. The work is in the medical domain, using lung function test report data along with smoking data; this medical data set was created from raw data obtained from a hospital. In this paper, we analyze the performance of Meta classifiers for classifying the records. Initially, the performances of Meta and Rule classifiers were analyzed, and the Meta classifiers were found to be more efficient than the Rule classifiers in the Weka tool. The implementation then continued with a performance comparison between different types of classification algorithms, among which the Meta classifiers showed comparatively higher accuracy. Four widely explored Meta classifier algorithms in the Weka tool, namely Bagging, Attribute Selected Classifier, LogitBoost and Classification via Regression, were used to classify this medical data set, and the results obtained were evaluated and compared to identify the best classifier.
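
The Weka classifiers named above are Java components; as a rough cross-library illustration, the sketch below runs scikit-learn analogues of three of them (Bagging, an attribute-selection pipeline, and a LogitBoost-like boosted model) under cross-validation. Classification via Regression has no direct scikit-learn counterpart and is omitted, and the data set is a synthetic stand-in for the hospital data, which is not public.

```python
# Rough scikit-learn analogues of three of the four Weka meta classifiers,
# compared by 10-fold cross-validated accuracy on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.feature_selection import SelectKBest
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=25, random_state=0)

candidates = {
    "Bagging": BaggingClassifier(random_state=0),
    "AttributeSelected": make_pipeline(SelectKBest(k=10),
                                       DecisionTreeClassifier(random_state=0)),
    "LogitBoost-like": GradientBoostingClassifier(random_state=0),
}
for name, clf in candidates.items():
    acc = cross_val_score(clf, X, y, cv=10).mean()
    print(f"{name:18s} mean accuracy: {acc:.3f}")
```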


2017 ◽  
Vol 9 (1) ◽  
pp. 50-58
Author(s):  
Ali Bayır ◽  
Sebnem Ozdemir ◽  
Sevinç Gülseçen

Political elections can be defined as systems that contain political tendencies and voters' perceptions and preferences. The outputs of those systems are shaped by specific attributes of individuals, such as age, gender, occupation, educational status, socio-economic status, and religious belief. Those attributes can form a data set containing hidden information and undiscovered patterns that can be revealed using data mining methods and techniques. The main purpose of this study is to define voting tendencies in politics using data mining methods. To that end, the survey results prepared and collected before the 2011 elections in Turkey by the KONDA Research and Consultancy Company were used as the raw data set. After preprocessing the data, models were generated with data mining algorithms such as Gini, C4.5 decision trees, Naive Bayes and Random Forest. Because of their increasing popularity and flexibility in the analysis process, the R language and the RStudio environment were used.
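
The study itself was carried out in R and RStudio; purely as an illustration of the same model families, the sketch below fits Gini and entropy-based decision trees (the latter standing in for C4.5), Naive Bayes and Random Forest in Python on a synthetic stand-in for the survey attributes.

```python
# Hedged Python sketch of the model families used in the study
# (the original analysis was done in R/RStudio).
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for survey attributes (age, education, etc.) and
# a three-way voting-tendency label.
X, y = make_classification(n_samples=1000, n_features=12, n_classes=3,
                           n_informative=6, random_state=0)

models = {
    "Gini tree": DecisionTreeClassifier(criterion="gini", random_state=0),
    "C4.5-like tree": DecisionTreeClassifier(criterion="entropy", random_state=0),
    "Naive Bayes": GaussianNB(),
    "Random Forest": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:15s} CV accuracy: {acc:.3f}")
```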


Author(s):  
Ansar Abbas ◽  
Muhammad Aman Ullah ◽  
Abdul Waheed

This study was conducted to predict the body weight (BW) of Thalli sheep of southern Punjab from different body measurements. In the BW prediction, several body measurements, viz., withers height, body length (BL), head length, head width, ear length, ear width, neck length, neck width, heart girth, rump length, rump width, tail length, barrel depth and sacral pelvic width, are used as predictors. The data mining algorithms Chi-square Automatic Interaction Detector (CHAID), Exhaustive CHAID, Classification and Regression Tree (CART) and Artificial Neural Network (ANN) are used to predict the BW of a total of 85 female Thalli sheep. The data set is partitioned into training (80%) and test (20%) sets before the algorithms are applied. Minimum numbers of cases in parent (4) and child (2) nodes are set in order to ensure predictive ability. The R² % (RMSE) values for the CHAID, Exhaustive CHAID, ANN and CART algorithms are 67.38 (1.003), 64.37 (1.049), 61.45 (1.093) and 59.02 (1.125), respectively. The most significant predictor in the BW prediction of Thalli sheep is BL. The heaviest average BW, 9.596 kg, is obtained in the subgroup with BL > 25.000 inches. On the basis of several goodness-of-fit criteria, we conclude that the CHAID algorithm performs better in predicting the BW of Thalli sheep and yields a visually more interpretable decision tree diagram. The CHAID results may also help determine the body measurements positively associated with BW, supporting better selection strategies within the scope of indirect selection criteria.
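
The evaluation protocol is easy to reproduce in outline. The sketch below is a hedged illustration, not the study's code: an 80/20 split, a CART-style regression tree (CHAID has no common Python implementation) with the stated parent/child node minimums, and the two reported criteria, R² and RMSE. The two predictors and the data-generating formula are invented for the example.

```python
# Illustrative CART-style evaluation: 80/20 split, parent/child node minimums
# as in the study, scored by R-squared and RMSE. Data are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
n = 85                                        # same sample size as the study
body_length = rng.normal(24, 2, n)            # inches, illustrative
heart_girth = rng.normal(26, 2, n)            # inches, illustrative
X = np.column_stack([body_length, heart_girth])
y = 0.4 * body_length + 0.1 * heart_girth + rng.normal(0, 1, n)   # BW (kg)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Mirror the study's stopping rule: >= 4 cases per parent, >= 2 per leaf.
tree = DecisionTreeRegressor(min_samples_split=4, min_samples_leaf=2,
                             random_state=0).fit(X_tr, y_tr)
pred = tree.predict(X_te)
print("R2  :", round(r2_score(y_te, pred), 3))
print("RMSE:", round(float(np.sqrt(mean_squared_error(y_te, pred))), 3))
```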


Author(s):  
Geert Wets ◽  
Koen Vanhoof ◽  
Theo Arentze ◽  
Harry Timmermans

The utility-maximizing framework—in particular, the logit model—is the dominantly used framework in transportation demand modeling. Computational process modeling has been introduced as an alternative approach to deal with the complexity of activity-based models of travel demand. Current rule-based systems, however, lack a methodology to derive rules from data. The relevance and performance of data-mining algorithms that potentially can provide the required methodology are explored. In particular, the C4 algorithm is applied to derive a decision tree for transport mode choice in the context of activity scheduling from a large activity diary data set. The algorithm is compared with both an alternative method of inducing decision trees (CHAID) and a logit model on the basis of goodness-of-fit on the same data set. The ratio of correctly predicted cases of a holdout sample is almost identical for the three methods. This suggests that for data sets of comparable complexity, the accuracy of predictions does not provide grounds for either rejecting or choosing the C4 method. However, the method may have advantages related to robustness. Future research is required to determine the ability of decision tree-based models in predicting behavioral change.
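
A holdout comparison of this kind reduces to fitting each model on the same training partition and scoring the ratio of correctly predicted cases on the same holdout sample. The sketch below illustrates that protocol with an entropy-based decision tree standing in for the C4 family and a multinomial logit model; the mode-choice data are synthetic placeholders.

```python
# Illustrative holdout comparison: decision tree vs. multinomial logit,
# scored by the share of correctly predicted cases on the same holdout.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

# Three classes as a stand-in for transport modes (e.g., car, bike, walk).
X, y = make_classification(n_samples=3000, n_features=10, n_classes=3,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

tree = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X_tr, y_tr)
logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print("tree  holdout accuracy:", round(tree.score(X_te, y_te), 3))
print("logit holdout accuracy:", round(logit.score(X_te, y_te), 3))
```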


2021 ◽  
Author(s):  
Nguyen Ha Huy Cuong

In agriculture, a timely and accurate estimate of ripeness in the orchard improves the post-harvest process. Choosing fruits based on their maturity stage can reduce storage costs and increase market returns. In addition, estimating the ripeness of fruit from input and output indicators has practical benefits in the harvesting process, as well as in determining the amount of water needed for irrigation and the appropriate amount of end-of-season fertilizer. In this paper, we propose a technical solution for a model to detect green grapefruit at agricultural farms in Vietnam. An aggregation model and a transfer learning method are used. The proposed model contains two object-detection sub-models, and the decision model comprises the pre-processed model, the transfer model and the corresponding aggregation model. An improved YOLO algorithm, trained on more than one hundred object types with a total of 500,000 images from the COCO image data set, is used as the pre-processing model. The aggregation model and the transfer learning method are also used as an initial step to train the transferred model via the transfer learning technique; only images are used for transfer-model training. Finally, the aggregation model, using the decision techniques described, selects the best results from the pre-trained model and the transfer model. The proposed model reduces the time needed to analyze the full set of training data and the training time. The accuracy of the model union is 98.20%. Testing the proposed classifier on a data set of 10,000 images per class gives a sensitivity of 98.2% and a specificity of 97.2%, with an accuracy of 96.5% and 0.98 in training for all classes.
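
Of the pipeline's components, the final aggregation step is the simplest to sketch. The toy code below assumes each model reduces to one (label, confidence) detection per image and keeps the higher-confidence result; real YOLO outputs (bounding boxes, multiple detections, non-maximum suppression) are deliberately omitted from this illustration.

```python
# Hedged sketch of the aggregation step only: per image, keep whichever of
# the pre-trained and transfer-learned results has the higher confidence.
def aggregate(pretrained_dets, transfer_dets):
    """Pick, per image, the (label, confidence) pair with the higher score."""
    results = []
    for pre, tra in zip(pretrained_dets, transfer_dets):
        results.append(pre if pre[1] >= tra[1] else tra)
    return results

# Toy example: two images, each scored by both models.
pretrained = [("grapefruit", 0.91), ("grapefruit", 0.55)]
transferred = [("grapefruit", 0.87), ("grapefruit", 0.78)]
print(aggregate(pretrained, transferred))
# -> [('grapefruit', 0.91), ('grapefruit', 0.78)]
```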


2021 ◽  
Vol 8 ◽  
Author(s):  
I.-Ming Chiu ◽  
Wenhua Lu ◽  
Fangming Tian ◽  
Daniel Hart

Machine learning is about finding patterns and making predictions from raw data. In this study, we aimed to achieve two goals by utilizing the modern logistic regression model as both a statistical tool and a classifier. First, we analyzed the associations between Major Depressive Episode with Severe Impairment (MDESI) in adolescents and a list of broadly defined sociodemographic characteristics. Using findings from the logistic model, the second and ultimate goal was to identify potential MDESI cases using the logistic model as a classifier (i.e., a predictive mechanism). Data on adolescents aged 12–17 years who participated in the National Survey on Drug Use and Health (NSDUH), 2011–2017, were pooled and analyzed. The logistic regression model revealed that, compared with males and adolescents aged 12–13, females and those in the age groups 14–15 and 16–17 had a higher risk of MDESI. Blacks and Asians had a lower risk of MDESI than Whites. Living in a single-parent household, having less authoritative parents, and having negative school experiences further increased adolescents' risk of MDESI. The predictive model successfully identified 66% of the MDESI cases (recall rate) and accurately identified 72% of the MDESI and MDESI-free cases (accuracy rate) in the training data set. Both the recall and accuracy rates remained about the same (66% and 72%) on the test data. Results from this study confirm that the logistic model, when used as a classifier, can identify potential cases of MDESI in adolescents with an acceptable recall rate and reasonable accuracy. The algorithmic identification of adolescents at risk for depression may improve prevention and intervention.
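
The classifier use of the model is straightforward to sketch: fit the logistic regression on a training partition, predict on held-out data, and report recall (the share of true cases identified) and accuracy. The code below does exactly that on synthetic imbalanced data; the class ratio is an assumption, not the NSDUH distribution.

```python
# Sketch of logistic regression used as a classifier, reporting the two
# metrics the study cites: recall and accuracy. Data are synthetic.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score, accuracy_score

X, y = make_classification(n_samples=5000, n_features=8, weights=[0.85],
                           random_state=0)         # minority class ~ MDESI cases
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("recall  :", round(recall_score(y_te, pred), 2))   # share of cases found
print("accuracy:", round(accuracy_score(y_te, pred), 2))
```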


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Haifan Du ◽  
Haiwen Duan

This paper combines domestic and international research results to analyze the differences between the attribute features of English phrase speech and noise. Short-time energy is enhanced to improve the sensitivity of threshold judgment, and noise is added to the discrepancy data set to improve recognition robustness. The backpropagation algorithm is improved to constrain the range of weight variation, avoiding oscillation and shortening the training time. In a real English phrase recognition system, the very large-scale parameters of a convolutional neural network cause problems such as massive training data and low training efficiency. To address these problems, the NWBP algorithm targets the oscillation that tends to occur when searching for the minimum error value late in training: it uses the K-MEANS algorithm to obtain seed nodes that approach the minimal error value and applies a boundary-value rule to reduce the range of weight change, damping the oscillation so that the network error converges as soon as possible and training efficiency improves. Simulation experiments show that, compared with other algorithms, the NWBP algorithm improves the degree of fitting and the convergence speed when training complex convolutional neural networks, reduces redundant computation, and shortens the training time to a certain extent; compared with simple networks, the algorithm has the advantage of accelerating network convergence. A word-tree constraint and an efficient storage structure for it are introduced, improving the storage efficiency of the word-tree constraint and the retrieval efficiency of the English phrase recognition search.
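
The boundary-value rule at the heart of NWBP can be sketched in isolation: clamp each weight change to a fixed interval so that large late-epoch gradients cannot swing the weights back and forth around a minimum. The function below is a conceptual illustration; the bound, learning rate, and variable names are assumptions, not the paper's parameters.

```python
# Conceptual sketch of a bounded weight update: clip each backpropagation
# step to [-bound, bound] to damp late-training oscillation.
import numpy as np

def bounded_update(w, grad, lr=0.01, bound=0.05):
    """Apply a gradient step, clipping each weight change to [-bound, bound]."""
    delta = np.clip(-lr * grad, -bound, bound)
    return w + delta

w = np.array([0.3, -1.2, 0.7])
grad = np.array([4.0, -9.0, 0.2])            # a large late-epoch gradient
print(bounded_update(w, grad))               # each change capped at 0.05
```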

