A Novel Algorithm for Soil Image Segmentation using Color and Region Based System

Data mining has become an essential research domain, and its techniques are used to extract significant knowledge for agriculture management. These techniques are less time-consuming and less expensive than statistical techniques, and many researchers have developed efficient methods to improve agricultural productivity. This paper develops a new segmentation method, a Color and Region Based approach, that separates the soil region from its background and from other image information. The proposed segmentation is evaluated with five metrics: Dice coefficient, Jaccard index, sensitivity, specificity, and precision. The new approach achieved 98% accuracy, 98% sensitivity, and 98% specificity.
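For reference, the five metrics named above reduce to ratios of the pixel-level confusion counts between a predicted soil mask and a ground-truth mask. The sketch below is a minimal illustration of those standard definitions; the function and variable names are assumptions for illustration, not the paper's code.

```python
# Minimal sketch (not the paper's code): the five segmentation metrics
# computed from a predicted soil mask and a ground-truth mask.
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """pred and truth are boolean arrays of the same shape (soil = True).
    Assumes both masks contain soil and non-soil pixels (no zero denominators)."""
    tp = np.logical_and(pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    return {
        "dice":        2 * tp / (2 * tp + fp + fn),
        "jaccard":     tp / (tp + fp + fn),
        "sensitivity": tp / (tp + fn),   # recall on the soil class
        "specificity": tn / (tn + fp),
        "precision":   tp / (tp + fp),
    }
```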

2017 · Vol 49 (12) · pp. 2702-2717 · Author(s): Jim Thatcher

Recent years have seen an explosion in the investment into and valuation of mobile spatial applications. With multiple applications currently valued at well over one billion U.S. dollars, mobile spatial applications and the data they generate have come to play an increasingly significant role in the functioning of late capitalism. Empirically based upon a series of interviews conducted with mobile application designers and developers, this article details the creation of a digital commodity termed 'location.' 'Location' is developed through three discursive poles: its storing of space and time as a digital data object manipulable by code, its spatial and temporal immediacy, and its ability to 'add value' or 'tell a story' to both end-users and marketers. As a commodity it represents the sum total of targeted marketing information, including credit profiles, purchase history, and a host of other information available through data mining or sensor information, combined with temporal immediacy, physical location, and user intent. 'Location' is demonstrated to exist as a commodity from its very inception and, as such, to be a key means through which everyday life is further entangled with processes of capitalist exploitation.


2014 · Vol 61 (1) · pp. 217-221 · Author(s): J. M. Macak, D. Patil, M. Fraenkl, V. Zima, K. Shimakawa, ...

Author(s): Sam Fletcher, Md Zahidul Islam

The ability to extract knowledge from data has been the driving force of Data Mining since its inception, and of statistical modeling long before even that. Actionable knowledge often takes the form of patterns, where a set of antecedents can be used to infer a consequent. In this paper we offer a solution to the problem of comparing different sets of patterns. Our solution allows comparisons between sets of patterns that were derived from different techniques (such as different classification algorithms), or made from different samples of data (such as temporal data or data perturbed for privacy reasons). We propose using the Jaccard index to measure the similarity between sets of patterns by converting each pattern into a single element within the set. Our measure focuses on providing conceptual simplicity, computational simplicity, interpretability, and wide applicability. The results of this measure are compared to prediction accuracy in the context of a real-world data mining scenario.
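A minimal sketch of the idea described above: each 'antecedents => consequent' pattern is collapsed into a single hashable element, and two pattern sets are then compared with the Jaccard index. The pattern encoding shown (a frozenset of antecedents paired with the consequent) is an illustrative assumption, not necessarily the authors' exact representation.

```python
# Minimal sketch: compare two sets of mined patterns with the Jaccard index.

def pattern_key(antecedents, consequent):
    """Collapse one 'antecedents => consequent' pattern into one set element."""
    return (frozenset(antecedents), consequent)

def jaccard(patterns_a, patterns_b):
    a, b = set(patterns_a), set(patterns_b)
    if not a and not b:
        return 1.0  # two empty pattern sets are trivially identical
    return len(a & b) / len(a | b)

# Hypothetical example: rules mined by two different classifiers on the same data.
rules_model_1 = {pattern_key(["outlook=sunny", "humidity=high"], "play=no"),
                 pattern_key(["outlook=overcast"], "play=yes")}
rules_model_2 = {pattern_key(["outlook=overcast"], "play=yes"),
                 pattern_key(["windy=true"], "play=no")}
print(jaccard(rules_model_1, rules_model_2))  # 1 shared rule out of 3 distinct: ~0.33
```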


2019 · Vol 1 (1) · pp. 14-28 · Author(s): Ahmad Haidar Mirza

Data mining is a process that uses statistical techniques, mathematics, artificial intelligence, and machine learning to extract and identify useful information and related knowledge from large databases. It finds new patterns by filtering large amounts of data, using pattern-recognition technology similar to statistical and mathematical techniques. The patterns found can provide useful information for generating economic benefit, effectiveness, and efficiency. The Naive Bayes Classifier algorithm is one data mining method that can support effective and efficient promotion strategies; here it is used to predict study-program interest based on the calculations performed. The data used are new-student registration records from 2014 to 2016 at Bina Darma University. The result of this study is a new model expected to provide important information that can assist the Marketing Team of Bina Darma University Palembang in policy making and in implementing an appropriate marketing strategy. The results are expected to support promotion strategies that improve the effectiveness and efficiency of promotion and increase the number of new students who register.
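As an illustration of the approach described above, the sketch below fits a Naive Bayes classifier over categorical registration attributes to predict study interest. The column names and values are hypothetical placeholders, not the university's actual registration data, and scikit-learn's CategoricalNB stands in for whatever implementation the study used.

```python
# Minimal sketch, not the study's actual pipeline: Naive Bayes over
# hypothetical categorical registration attributes.
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder
from sklearn.naive_bayes import CategoricalNB

# Hypothetical registrant records (attribute names are placeholders).
df = pd.DataFrame({
    "school_major":   ["science", "social", "science", "vocational", "social"],
    "home_region":    ["Palembang", "Palembang", "outside", "outside", "Palembang"],
    "study_interest": ["informatics", "management", "informatics", "informatics", "management"],
})

# Encode the categorical predictors as integer codes for CategoricalNB.
enc = OrdinalEncoder()
X = enc.fit_transform(df[["school_major", "home_region"]])
y = df["study_interest"]

model = CategoricalNB().fit(X, y)
print(model.predict(X[:2]))  # predicted study interest for the first two records
```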


In today's world, social media is one of the most important communication tools, helping people interact with each other and share their thoughts, knowledge, and other information. Some of the most popular social media platforms are Facebook, Twitter, WhatsApp, and WeChat. Since social media has a large impact on people's daily lives, it can also become a source of fake news or misinformation, so any information presented on it should be evaluated for genuineness and originality, in terms of the probability of correctness and the reliability needed to trust the information exchanged. In this work we identify features that can help predict whether a given tweet is a rumor or genuine information. Two machine learning algorithms, Decision Tree and Support Vector Machine, are run in the WEKA tool for the classification.
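The sketch below illustrates the classification step described above with the same two algorithms, Decision Tree and SVM, but via scikit-learn rather than the WEKA tool named in the abstract. The tweet-level features and labels are hypothetical placeholders, not the paper's feature set.

```python
# Minimal sketch: Decision Tree and SVM on hypothetical tweet-level features,
# using scikit-learn in place of WEKA.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Each row: [follower_count, account_age_days, has_url, retweet_count] (illustrative only)
X = np.array([
    [120,    30, 0,  500],
    [9800, 2400, 1,   12],
    [45,     10, 0, 1500],
    [30000, 3100, 1,    8],
    [210,    90, 0,  900],
    [15000, 1800, 1,    5],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = rumor, 0 = information

for name, clf in [("Decision Tree", DecisionTreeClassifier(random_state=0)),
                  ("SVM", SVC(kernel="rbf", gamma="scale"))]:
    scores = cross_val_score(clf, X, y, cv=3)  # 3-fold cross-validated accuracy
    print(name, scores.mean())
```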

