IMAGE PROCESSING BASED TILAPIA SORTATION SYSTEM USING NAIVE BAYES

2021 ◽  
Vol 7 (1) ◽  
pp. 83-88
Author(s):  
Sukenda Sukenda ◽  
Ari Purno Wahyu ◽  
Benny Yustim ◽  
Sunjana Sunjana ◽  
Yan Puspitarani

Tilapia has export-quality value and is exported to America and Europe. It is cultivated in freshwater; the largest producing areas are Java and Bali, which supply the export market in the Middle East, where whole fish of about 250 grams each (4 fish/kg) are in great demand. According to circulating reports, fish of this size are ordered in the Middle East to meet the consumption needs of workers from Asia. Classifying fish to establish the quality grade required for export is a difficult process. One classification technique uses the Gray Level Co-occurrence Matrix (GLCM), applied to images of the fish. Each fish image is analyzed for the attributes Energy, Homogeneity, Correlation, and Contrast; from these attributes a density data matrix is generated for each image and displayed in the form of a histogram. The GLCM features are then classified with the Naive Bayes algorithm, using data taken from three types of tilapia: GIFT, Red, and Blue.
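The GLCM attributes named above (energy, contrast, homogeneity, correlation) have standard definitions over the normalized co-occurrence matrix. The following is a minimal numpy sketch of those definitions on a toy image, not the authors' implementation; the image and quantization level are hypothetical.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    h, w = img.shape
    m = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_features(p):
    """Energy, contrast, homogeneity, correlation of a normalized GLCM p."""
    levels = p.shape[0]
    i, j = np.indices((levels, levels))
    energy = np.sum(p ** 2)
    contrast = np.sum(p * (i - j) ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    mu_i, mu_j = np.sum(i * p), np.sum(j * p)
    si = np.sqrt(np.sum(p * (i - mu_i) ** 2))
    sj = np.sqrt(np.sum(p * (j - mu_j) ** 2))
    correlation = np.sum(p * (i - mu_i) * (j - mu_j)) / (si * sj)
    return {"energy": energy, "contrast": contrast,
            "homogeneity": homogeneity, "correlation": correlation}

# toy 4-level grayscale patch standing in for a quantized fish image
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
f = glcm_features(glcm(img, levels=4))
```

Feature vectors like `f`, computed per image, would then feed the Naive Bayes classifier.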

2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Mingqiang Fan ◽  
Xiangxiang Yang ◽  
Tao Ding ◽  
Yu Cao ◽  
Qiaoke Si ◽  
...  

Cardiovascular disease is a common chronic disease that has a great impact on the health of Chinese residents, especially the elderly. At present, the effectiveness of the prevention and treatment of cardiovascular disease in China is not encouraging: overall, the prevalence and mortality of CVD are still on the rise. Timely and effective detection and treatment of cardiovascular and cerebrovascular diseases are therefore of great practical significance for improving residents' health. This article studies the application of ultrasound-based virtual reality technology in the diagnosis and treatment of cardiovascular diseases, with the goal of improving the efficiency and accuracy of diagnosis by medical staff. The focus is on the application of feature-attribute selection and classification algorithms in medical diagnosis systems, and a cardiovascular and cerebrovascular disease diagnosis system based on the naive Bayes algorithm and an improved genetic algorithm is designed and developed. The system builds a diagnostic model and, from a patient's examination data, produces and displays the corresponding diagnosis. The paper first presents in detail the theoretical concepts of ultrasonic virtual reality technology, scientific computing visualization, genetic algorithms, the naive Bayes algorithm, and surgery simulation systems. It then constructs a three-dimensional ultrasonic virtual measurement system, from the collection and reconstruction of image data through filtering and segmentation, applying three-dimensional visualization and virtual reality technology.
The experimental results show that in 10 isolated congenital heart disease models with atrial septal defect (ASD), built with three-dimensional visualization and virtual reality technology, the short diameter, long diameter, and area of the defect were measured in the left and right atria. A P value less than 0.05 indicates that the statistics are significant, and r values generally greater than 0.9 indicate that the virtual measurement results are highly correlated with the real measurements.
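The diagnostic core described above is a naive Bayes classifier over numeric examination features. A minimal Gaussian naive Bayes sketch in numpy is shown below; it is not the authors' system (which also incorporates an improved genetic algorithm), and the two-class patient data are synthetic.

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian naive Bayes: one mean/variance per class per feature."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.log_prior = np.log([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # log N(x | mu, var), summed over conditionally independent features
        ll = -0.5 * (np.log(2 * np.pi * self.var)[None, :, :]
                     + (X[:, None, :] - self.mu) ** 2 / self.var).sum(axis=2)
        return self.classes[np.argmax(ll + self.log_prior, axis=1)]

rng = np.random.default_rng(0)
# synthetic stand-in for examination features of two well-separated patient groups
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(5, 1, (50, 3))])
y = np.array([0] * 50 + [1] * 50)
model = GaussianNB().fit(X, y)
```

The genetic algorithm in the paper would sit upstream of this step, selecting which feature attributes enter `X`.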


2020 ◽  
Vol 4 (2) ◽  
pp. 377-383
Author(s):  
Eko Laksono ◽  
Achmad Basuki ◽  
Fitra Bachtiar

There are many cases of email abuse with the potential to harm others. Such abuse is commonly known as spam, which may contain advertisements, phishing scams, and even malware. This study aims to classify email as spam or ham using the KNN method, as an effort to reduce the amount of spam. KNN classifies an email as spam or ham by checking it under different values of K. Evaluation of the classification with a confusion matrix showed that KNN with K = 1 had the highest accuracy, 91.4%. The study further found that optimizing the K value in KNN using frequency distribution clustering can produce an accuracy of 100%, while k-means clustering produces an accuracy of 99%. Based on these accuracy results, frequency distribution clustering and k-means clustering can both be used to find the optimal K for KNN in the classification of spam emails.
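The KNN step itself is simple: an email's feature vector is labeled by majority vote among its K nearest training emails. A minimal numpy sketch follows, with hypothetical two-dimensional word-count features standing in for the study's real email features.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=1):
    """Classify each test vector by majority vote among its k nearest neighbors."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances to training set
        nearest = y_train[np.argsort(d)[:k]]      # labels of the k closest points
        labels, counts = np.unique(nearest, return_counts=True)
        preds.append(labels[np.argmax(counts)])
    return np.array(preds)

# hypothetical counts of (spammy words, ordinary words): spam = 1, ham = 0
X_train = np.array([[9, 1], [8, 2], [7, 0], [1, 8], [0, 9], [2, 7]], dtype=float)
y_train = np.array([1, 1, 1, 0, 0, 0])
X_test = np.array([[8.0, 1.0], [1.0, 9.0]])
```

Varying `k` here is exactly the knob the study tunes via clustering.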


2012 ◽  
Vol 532-533 ◽  
pp. 1445-1449
Author(s):  
Ting Ting Tong ◽  
Zhen Hua Wu

The EM algorithm is a common method for estimating mixture model parameters in the statistical classification of remote sensing images. This paper presents an EM algorithm based on fuzzification, in which each training sample is represented by a fuzzy set. Through weighted degrees of membership, different samples exert different influence during iteration, which decreases the impact of noise on parameter learning and increases the convergence rate of the algorithm. The classification of image data can thus be carried out with improved accuracy.
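One way to realize the idea above is to scale each sample's responsibility in the E-step by its fuzzy membership weight. The sketch below does this for a 1-D two-component Gaussian mixture; it is an illustrative reading of the scheme, not the paper's algorithm, and the data and weights are synthetic (uniform weights reduce it to standard EM).

```python
import numpy as np

def weighted_em_gmm(x, w, n_iter=50):
    """EM for a 2-component 1-D Gaussian mixture in which each sample carries a
    fuzzy membership weight w[i] in [0, 1] that scales its effect on the fit."""
    mu = np.array([x.min(), x.max()])          # crude but stable initialization
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities, scaled by the per-sample fuzzy weight
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = w[:, None] * dens / dens.sum(axis=1, keepdims=True)
        # M-step: weight-aware parameter updates
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
        pi = nk / nk.sum()
    return mu, var, pi

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(6, 1, 200)])
w = np.ones_like(x)              # uniform weights = standard EM
mu, var, pi = weighted_em_gmm(x, w)
```

Down-weighting noisy samples (w < 1) shrinks their pull on `mu` and `var`, which is the stated point of the fuzzification.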


1987 ◽  
Vol 65 (3) ◽  
pp. 691-707 ◽  
Author(s):  
A. F. L. Nemec ◽  
R. O. Brinkhurst

A data matrix of 23 generic or subgeneric taxa versus 24 characters, and a shorter matrix of 15 characters, were analyzed by means of ordination, cluster analysis, parsimony, and compatibility methods (the last two being phylogenetic tree reconstruction methods), and the results were compared with one another and with traditional classifications. Various measures of fit were employed to evaluate the parsimony methods. There were few compatible characters in the data set, and much homoplasy, but most analyses separated a group based on Stylaria from the rest of the family, which could then be separated into four groups, recognized here for the first time as tribes (Naidini, Derini, Pristinini, and Chaetogastrini). There was less consistency of results within these groups. Modern methods produced results that do not conflict with traditional groupings. The Jaccard coefficient minimizes the significance of symplesiomorphy, and complete linkage avoids chaining effects and corresponds to actual similarities, unlike single or average linkage methods, respectively. Ordination complements cluster analysis. The Wagner parsimony method was superior to the less flexible Camin–Sokal approach and produced better measures of fit. All of the aforementioned methods contain areas susceptible to subjective decisions but, nevertheless, they lead to full disclosure of both the methods used and the assumptions made, and they facilitate objective hypothesis testing rather than the presentation of conflicting phylogenies based on different, undisclosed premises of manual approaches.
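The Jaccard coefficient's property noted above, that joint absences do not inflate similarity, is easy to see in code. A small illustrative sketch on hypothetical binary character rows (1 = character present in a taxon), not the paper's actual data matrix:

```python
import numpy as np

def jaccard(a, b):
    """Jaccard similarity for binary character vectors: shared presences over
    presences in either taxon. Joint absences are ignored, which is why the
    coefficient minimizes the weight of shared primitive absence."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    union = (a | b).sum()
    return (a & b).sum() / union if union else 1.0

# hypothetical character-state rows for three taxa
t1 = [1, 1, 0, 0, 1]
t2 = [1, 1, 0, 0, 0]
t3 = [0, 0, 1, 1, 0]
```

Note that `t1` and `t3` score 0 despite agreeing on two absent characters; a simple-matching coefficient would score them 2/5.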


2014 ◽  
Vol 2014 ◽  
pp. 1-19
Author(s):  
Liliana Ibeth Barbosa-Santillán ◽  
Inmaculada Álvarez-de-Mon y-Rego

This paper presents an approach to create what we call a Unified Sentiment Lexicon (USL). The approach aims at aligning, unifying, and expanding the set of sentiment lexicons available on the web in order to increase their robustness of coverage. One problem in automatically unifying the scores of different sentiment lexicons is that many lexical entries' classification as positive, negative, or neutral {P, N, Z} depends on the unit of measurement used in the annotation methodology of the source lexicon. Our USL approach computes the unified strength of polarity of each lexical entry from the Pearson correlation coefficient, which measures how correlated lexical entries are with a value between −1 and 1: a value of 1 indicates that the entries are perfectly correlated, 0 indicates no correlation, and −1 means they are perfectly inversely correlated. The UnifiedMetrics procedure is implemented for both CPU and GPU. Another problem is the high processing time required to compute all the lexical entries in the unification task; the USL approach therefore assigns a subset of lexical entries to each of the 1344 GPU cores and uses parallel processing to unify 155,802 lexical entries. The resulting USL has 95,430 lexical entries, of which 35,201 are considered positive, 22,029 negative, and 38,200 neutral. Finally, the runtime was 10 minutes for the 95,430 lexical entries, a threefold reduction in the computing time of UnifiedMetrics.
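The Pearson coefficient used above compares polarity scores for the same entries across two source lexicons even when their scales differ. A minimal numpy sketch with invented scores (the second lexicon is roughly the first on a 5× scale):

```python
import numpy as np

# polarity scores for the same five hypothetical lexical entries
# in two source lexicons, each lexicon on its own scale
lex_a = np.array([0.9, -0.7, 0.1, 0.8, -0.5])
lex_b = np.array([4.5, -3.4, 0.6, 4.0, -2.3])

# Pearson r in [-1, 1]; scale-invariant, so differing units do not matter
r = np.corrcoef(lex_a, lex_b)[0, 1]
```

A high `r` across lexicons is what licenses combining their scores into one unified polarity strength.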


2004 ◽  
Vol 34 (1) ◽  
pp. 37-52
Author(s):  
Wiktor Jassem ◽  
Waldemar Grygiel

The mid-frequencies and bandwidths of formants 1–5 were measured at the targets of vowels, and at 0.01 s before and after the targets, in a 100-word list read by five male and five female speakers, for a total of 3390 10-variable spectrum specifications. Each of the six Polish vowel phonemes was represented approximately the same number of times. The 3390 × 10 original-data matrix was processed by probabilistic neural networks to classify the spectra with respect to (a) vowel phoneme, (b) identity of the speaker, and (c) speaker gender. For (a) and (b), networks with added input information from another independent variable were also used, as well as appropriately normalized versions of the numerical data. Mean scores for phoneme classification in a multi-speaker design on the testing sets were around 95%, and mean speaker-dependent scores for the phonemes varied between 86% and 100%, with two speakers scoring 100% correct. Individual voices were identified between 95% and 96% of the time, and classification of the spectra by speaker gender was practically 100% correct.
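A probabilistic neural network is essentially a Parzen-window classifier: each class's score for a test spectrum is a sum of Gaussian kernels centered on that class's training spectra. The sketch below shows that mechanism on synthetic 2-D stand-ins for the 10-variable formant vectors; the data, dimensions, and bandwidth are hypothetical.

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    """Probabilistic neural network: per class, average Gaussian kernels centered
    on that class's training vectors; classify by the largest kernel-density score."""
    classes = np.unique(y_train)
    scores = []
    for c in classes:
        Xc = X_train[y_train == c]
        d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(axis=2)
        scores.append(np.exp(-d2 / (2 * sigma ** 2)).mean(axis=1))
    return classes[np.argmax(np.stack(scores, axis=1), axis=1)]

rng = np.random.default_rng(2)
# synthetic 2-D stand-ins for formant spectra of two vowel classes
X_train = np.vstack([rng.normal([0, 0], 0.3, (30, 2)),
                     rng.normal([3, 3], 0.3, (30, 2))])
y_train = np.array([0] * 30 + [1] * 30)
X_test = np.array([[0.1, -0.1], [2.9, 3.2]])
```

The same machinery serves all three tasks in the study by changing only the labels: phoneme, speaker identity, or gender.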


2015 ◽  
Vol 2015 ◽  
pp. 1-14 ◽  
Author(s):  
Rajesh Kumar ◽  
Rajeev Srivastava ◽  
Subodh Srivastava

A framework for automated detection and classification of cancer from microscopic biopsy images, using clinically significant and biologically interpretable features, is proposed and examined. The stages of the proposed methodology are enhancement of the microscopic images, segmentation of background cells, feature extraction, and finally classification. An appropriate and efficient method is chosen for each design step after a comparative analysis of the methods commonly used in each category. To highlight the details of tissues and structures, contrast-limited adaptive histogram equalization is used. For segmentation of background cells, the k-means algorithm is used because it performs better than other commonly used segmentation methods. In the feature extraction phase, various biologically interpretable and clinically significant shape- and morphology-based features are extracted from the segmented images, including gray-level texture features, color-based features, color gray-level texture features, Laws' texture energy features, Tamura's features, and wavelet features. Finally, the K-nearest neighbor method is used to classify images into normal and cancerous categories because it performs better than other commonly used methods for this application. The performance of the proposed framework is evaluated with well-known parameters on 1000 randomly selected microscopic biopsy images of the four fundamental tissues (connective, epithelial, muscular, and nervous).
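The k-means segmentation step amounts to clustering pixel values and thresholding by cluster label. A minimal 1-D Lloyd's-algorithm sketch on a toy grayscale patch follows; it illustrates the technique only, with invented pixel values, and is not the paper's full pipeline.

```python
import numpy as np

def kmeans_1d(values, k=2, n_iter=20):
    """Lloyd's k-means on pixel intensities: alternately assign each pixel to
    its nearest centroid, then recompute each centroid as its cluster mean."""
    centroids = np.linspace(values.min(), values.max(), k)
    for _ in range(n_iter):
        labels = np.argmin(np.abs(values[:, None] - centroids), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = values[labels == j].mean()
    return labels, centroids

# toy 4x4 grayscale patch: dark cell regions vs bright background tissue
img = np.array([[ 20,  25, 200, 210],
                [ 30,  22, 205, 215],
                [ 25, 198, 202,  28],
                [ 21,  26, 207, 212]])
labels, centroids = kmeans_1d(img.ravel().astype(float), k=2)
mask = labels.reshape(img.shape)   # binary segmentation mask
```

The resulting `mask` delimits the regions from which the shape and texture features are then extracted.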


2005 ◽  
Vol 27 (3) ◽  
pp. 533-551
Author(s):  
André Lecours

The formulation of a policy that satisfies several more or less compatible values and interests is a classic problem of political decision making. This phenomenon, in which a foreign policy issue, for example, can involve several divergent values and interests, was named value-complexity by Alexander George. When facing a value-complexity problem, a decision maker must choose some values and interests over others, and the choice he makes will not necessarily be the one made by other decision makers. This can seriously impede the decision-making process. American foreign policy towards the Middle East faced a value-complexity problem for most of the Cold War era, because it sought to reconcile four hard-to-reconcile values and interests. The Reagan government was confronted rather acutely with this problem in the making of its Iranian policies. The administration was split into at least two factions over Iran: one that thought primarily of containing the Soviet Union in the Middle East region, and another for whom the political stability of moderate regimes threatened by revolutionary Iran should be the most important priority. The existence of these factions, a consequence of value-complexity, led to the making and implementation of two distinct Iranian policies.


2017 ◽  
Vol 14 (2) ◽  
pp. 55-68 ◽  
Author(s):  
Rita Bužinskienė

In accordance with generally accepted accounting standards, most intangibles are not accounted for and are not reflected in traditional financial accounting. For this reason, most companies record intangible assets (IAs) as expenses. In this research, 57 sub-elements of IAs were applied, grouped into eight main elements of IAs. The classification of IAs consists of two parts: accounted and non-accounted assets. This classification can be applied successfully in different branches of enterprise to expand and supplement the theoretical and practical concepts of a company's financial management. The article proposes to evaluate not only the value of financial information on IAs (accounted) but also the value of non-financial information on IAs (non-accounted), thus revealing the true value of the IAs available to Lithuanian companies, here called the value of general IAs. The results of the research confirmed an IA valuation methodology that allows companies to calculate the fair value of an IA. The resulting extended IA valuation information may be valuable both to company owners and to investors, as this value plays an important practical role in assessing the impact of IAs on the market value of companies.

