THE CORRELATION BETWEEN LEAN MEAT PERCENTAGE IN PRIMAL CUTS AND TOTAL LEAN MEAT PERCENTAGE IN CARCASS

2018 ◽  
Vol 3 (2) ◽  
pp. 33-39
Author(s):  
Andrey V. Pavlov ◽  
Andrey I. Rud ◽  
Maxim A. Zankevich

Using the AutoFOM automated ultrasound system for pig carcass classification, 56,682 slaughter-pig carcasses with an average carcass weight of 94.3 kg were processed. The mass and yield of muscle tissue from the main cuts of the carcass are reported. Correlation coefficients between the mass and muscle-tissue content of the carcass and of the main (premium) cuts (ham, neck, shoulder, belly, and loin) were studied. It is shown how an increase in the weight of each cut affects the muscle-tissue content of the carcass and of the cut. For example, it was found that when the weight of the belly increased by 10 kg (from 6 to 16 kg), the percentage of muscle tissue in the carcass decreased by 3.3% (from 54.5 to 51.8%), i.e. approximately 0.33% per additional kg of belly weight. With an increase in loin weight from 4 to 14 kg, the yield of muscle tissue from the carcass, on the contrary, increased by 11.6%, i.e. 1.16% for each additional kg of loin weight. The value (in absolute and relative units) of the main cuts is given. It is concluded that the data obtained are promising for the creation of a specialized terminal line of pigs characterized by an increased share of premium cuts in the carcass weight.
Contribution: All authors bear responsibility for the work and the presented data. All authors made an equal contribution to the work, were equally involved in writing the manuscript, and bear equal responsibility for plagiarism.
Conflict of interest: The authors declare no conflict of interest.

1999 ◽  
Vol 68 (4) ◽  
pp. 641-645 ◽  
Author(s):  
B. Hulsegge ◽  
G. Mateman ◽  
G. S. M. Merkus ◽  
P. Walstra

Body length and ultrasonic fat thickness measurements were taken on 86 live pigs in order to find an optimal probing site for estimating lean meat proportion. The next day the pigs were slaughtered and measurements with the Hennessy Grading Probe (HGP) were made to estimate the lean meat proportion. Fat thickness, 6 cm off the dorsal mid line, increased from 9.5 mm at a site 4 cm cranial to the last rib, progressively through intermediate sites, to 12.4 mm at 22 cm cranial to the last rib. Fat thickness measurements at different sites on live pigs were highly correlated with HGP fat thickness at the site between the 3rd and 4th from last rib (3/4 LR) and with estimated lean meat proportion of the carcasses; correlations ranged from 0.80 to 0.89 and from -0.71 to -0.85, respectively. The most accurate predictor of estimated lean meat proportion from the live-pig measurements was the measurement 18 cm cranial to the last rib. Measurement at the site half the distance between the occipital bone and the base of the tail (midpoint) was the second best. Generally, this midpoint on live pigs was situated around the 3/4 LR site on carcasses, although the range was considerable: half of the animals had a midpoint within -2.5 to 2.5 cm of 3/4 LR. The midpoint site is easily located on the animal, and the results of this study suggest that it can be used as an accurate predictor of estimated lean meat proportion. It can therefore serve as the probing site for the classification of live pigs.


2020 ◽  
Vol 4 (2) ◽  
pp. 377-383
Author(s):  
Eko Laksono ◽  
Achmad Basuki ◽  
Fitra Bachtiar

There are many cases of email abuse with the potential to harm others. Such abusive email is commonly known as spam and may contain advertisements, phishing scams, and even malware. This study aims to classify email as spam or ham using the K-nearest neighbours (KNN) method, as an effort to reduce the amount of spam. KNN classifies an email as spam or ham and was evaluated with different values of K. Evaluation of the classification using a confusion matrix showed that KNN with K = 1 had the highest accuracy, 91.4%. The study further found that optimizing the K value of KNN using frequency distribution clustering can produce an accuracy as high as 100%, while k-means clustering produces an accuracy of 99%. Based on these accuracy values, both frequency distribution clustering and k-means clustering can be used to optimize the optimal K value of KNN for classifying spam emails.


2018 ◽  
Vol 21 (2) ◽  
pp. 125-137
Author(s):  
Jolanta Stasiak ◽  
Marcin Koba ◽  
Marcin Gackowski ◽  
Tomasz Baczek

Aim and Objective: In this study, chemometric methods such as correlation analysis, cluster analysis (CA), principal component analysis (PCA), and factor analysis (FA) were used to reduce the number of chromatographic parameters (log k / log kw) and various (e.g., 0D, 1D, 2D, 3D) structural descriptors for three groups of drugs (12 analgesic drugs, 11 cardiovascular drugs, and 36 "other" compounds) and, in particular, to select the most informative of them. Material and Methods: All chemometric analyses were carried out, presented graphically, and discussed for each group of drugs. First, the compounds' structural and chromatographic parameters were correlated. The best correlation coefficients were R = 0.93, R = 0.88, and R = 0.91 for the cardiovascular drugs, the analgesic drugs, and the 36 "other" compounds, respectively. Next, part of the molecular and HPLC experimental data from each group of drugs was submitted to the FA/PCA and CA techniques. Results: In almost all FA/PCA results, the total variance of the analyzed parameters (experimental and calculated) was explained by the first two or three factors: 84.28%, 76.38%, and 69.71% for the cardiovascular drugs, the analgesic drugs, and the 36 "other" compounds, respectively. The clusters produced by CA had similar characteristics to the groupings obtained by FA/PCA. The resulting statistical classification of the drugs is characterized and discussed extensively with respect to their molecular structure and pharmacological activity. Conclusion: The proposed QSAR strategy with a reduced number of parameters could be a useful starting point for further statistical analysis, as well as support for designing new drugs and predicting their possible activity.
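The "variance explained by the first two or three factors" figure reported above comes from the eigenvalues of the (centred) data's covariance matrix. A small sketch of that computation, using an entirely made-up descriptor matrix rather than the study's data:

```python
import numpy as np

# Hypothetical descriptor matrix: 6 compounds x 4 descriptors (made-up values).
X = np.array([
    [1.2, 0.7, 3.1, 0.2],
    [1.0, 0.9, 2.9, 0.1],
    [2.5, 1.8, 1.0, 0.9],
    [2.7, 1.6, 1.2, 1.1],
    [0.3, 0.2, 4.0, 0.0],
    [0.5, 0.4, 3.8, 0.1],
])

# Centre the data, then diagonalise the covariance matrix.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals = np.linalg.eigvalsh(cov)[::-1]   # eigenvalues, descending order
explained = eigvals / eigvals.sum()       # fraction of variance per component

# Share of total variance captured by the first two principal components.
print(round(explained[:2].sum() * 100, 2))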


2014 ◽  
Vol 2014 ◽  
pp. 1-19
Author(s):  
Liliana Ibeth Barbosa-Santillán ◽  
Inmaculada Álvarez-de-Mon y-Rego

This paper presents an approach to create what we have called a Unified Sentiment Lexicon (USL). The approach aims at aligning, unifying, and expanding the set of sentiment lexicons available on the web in order to increase their robustness of coverage. One problem in automatically unifying the scores of different sentiment lexicons is that there are multiple lexical entries whose classification as positive, negative, or neutral {P, N, Z} depends on the unit of measurement used in the annotation methodology of the source lexicon. Our USL approach computes the unified strength of polarity of each lexical entry based on the Pearson correlation coefficient, which measures how correlated lexical entries are with a value between 1 and -1, where 1 indicates that the entries are perfectly correlated, 0 indicates no correlation, and -1 means they are perfectly inversely correlated; the UnifiedMetrics procedure is implemented for both CPU and GPU. Another problem is the high processing time required to compute all the lexical entries in the unification task. The USL approach therefore computes a subset of lexical entries on each of the 1,344 GPU cores, using parallel processing to unify 155,802 lexical entries. The analysis conducted with the USL approach shows that the USL has 95,430 lexical entries, of which 35,201 are considered positive, 22,029 negative, and 38,200 neutral. Finally, the runtime was 10 minutes for the 95,430 lexical entries, a threefold reduction in computing time for the UnifiedMetrics procedure.
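The Pearson coefficient used to compare lexicon scores can be sketched in a few lines of pure Python; the polarity scores below are hypothetical, not from the USL:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical polarity scores for the same lexical entries in two lexicons.
lexicon_a = [0.9, 0.4, -0.6, -0.8, 0.1]
lexicon_b = [0.8, 0.5, -0.5, -0.9, 0.2]
print(round(pearson(lexicon_a, lexicon_b), 3))

# Perfectly inversely correlated scores give -1, as described above.
print(round(pearson([1.0, 0.0, -1.0], [-1.0, 0.0, 1.0]), 6))  # -> -1.0
```

A value near 1 indicates the two lexicons score those entries consistently, so their strengths can be unified directly; a value near -1 signals opposed annotation conventions.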


2017 ◽  
Vol 14 (2) ◽  
pp. 55-68 ◽  
Author(s):  
Rita Bužinskienė

In accordance with generally accepted accounting standards, most intangibles are not accounted for and are not reflected in traditional financial accounting. For this reason, most companies record intangible assets (IAs) as expenses. In the research, 57 sub-elements of IAs were applied, grouped into eight main elements of IAs. The classification of IAs consists of two parts: accounted and non-accounted assets. This classification can be successfully applied in different branches of enterprise to expand and supplement the theoretical and practical concepts of a company's financial management. The article proposes to evaluate not only the value of financial information on IAs (accounted) but also the value of non-financial information on IAs (non-accounted), thus revealing the true value of the IAs available to Lithuanian companies, referred to as the value of general IAs. The results of the research confirmed the IA valuation methodology, which allows companies to calculate the fair value of an IA. The resulting extended IA valuation information may be valuable both to company owners and to investors, as this value plays an important practical role in assessing the impact of IAs on the market value of companies.


Author(s):  
Misha Urooj Khan ◽  
Ayesha Farman ◽  
Asad Ur Rehman ◽  
Nida Israr ◽  
Muhammad Zulqarnain Haider Ali ◽  
...  

2005 ◽  
Vol 13 (3) ◽  
pp. 243-246 ◽  
Author(s):  
Fábio Lourenço Romano ◽  
Gláucia Maria Bovi Ambrosano ◽  
Maria Beatriz Borges de Araújo Magnani ◽  
Darcy Flávio Nouer

The coefficient of variation is a dispersion measure that does not depend on unit scales, thus allowing the comparison of experimental results involving different variables. Its calculation is crucial for adhesive experiments performed in laboratories because both precision and reliability can be verified. The aim of this study was to evaluate and suggest a classification of the coefficient of variation (CV) for in vitro experiments on shear and tensile strength. Data were drawn from fifty national and international laboratory studies on adhesion materials. Statistical data allowing estimation of the coefficient of variation were gathered from each scientific article, since none of them had previously calculated this measure. An Excel worksheet was used to organize the data, and sample normality was tested with the Shapiro-Wilk test (alpha = 0.05) in the Statistical Analysis System software (SAS). The analysis found a mean coefficient of variation of 6.11 (SD = 1.83), and the data were normally distributed (p > 0.05). From these data, a range classification was proposed for the coefficient of variation: it should be considered low for values below 2.44, intermediate for values between 2.44 and 7.94, high for values between 7.94 and 9.78, and very high for values above 9.78. This classification can be used as a guide for experiments on adhesion materials, making planning easier and revealing the precision and validity of the data.
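The CV and the proposed range classification are straightforward to compute. A small sketch, using hypothetical shear-strength readings rather than any of the fifty studies' data:

```python
import statistics

def coefficient_of_variation(values):
    """CV as a percentage: sample standard deviation over the mean."""
    return statistics.stdev(values) / statistics.mean(values) * 100

def classify_cv(cv):
    """Range classification proposed in the study above."""
    if cv < 2.44:
        return "low"
    if cv <= 7.94:
        return "intermediate"
    if cv <= 9.78:
        return "high"
    return "very high"

# Hypothetical shear-strength readings (MPa) from one adhesion experiment.
readings = [18.2, 19.1, 17.8, 18.6, 18.9]
cv = coefficient_of_variation(readings)
print(round(cv, 2), classify_cv(cv))  # -> 2.84 intermediate
```

Because the CV is dimensionless, the same classification applies whether the underlying measurements are in MPa, kgf, or any other unit, which is exactly the property the study exploits when comparing heterogeneous experiments.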


2021 ◽  
Vol 2107 (1) ◽  
pp. 012022
Author(s):  
F. Abdul Haris ◽  
M.Z.A. Ab Kadir ◽  
S. Sudin ◽  
D. Johari ◽  
J. Jasni ◽  
...  

Abstract Over the years, many studies have been conducted to measure and classify lightning-generated electric field waveforms for a better understanding of the physics of lightning. Through measurement and classification, the features of negative lightning return strokes can be accessed and analysed. In most studies, the classification of negative lightning return strokes was performed with a conventional approach based on manual visual inspection. This traditional method can compromise the accuracy of data analysis due to human error and also requires longer processing time. Hence, this study developed an automated classification system for negative lightning return strokes using MATLAB. In this study, a total of 115 return strokes were recorded and classified automatically by the developed system. Comparison with the Tenaga Nasional Berhad Research (TNBR) lightning report showed good agreement between the lightning signals detected in this study and those recorded in the report. In addition, the developed automated system successfully classified the negative lightning return strokes, with the results displayed on a Graphical User Interface (GUI). Thus, the proposed automatic system offers a practical and reliable approach, reducing human error and processing time in classifying negative lightning return strokes.


In this paper, the authors present an effort to increase the applicability domain (AD) by retraining models on a database of 701 highly dissimilar molecules with anti-tyrosinase activity and 728 drugs with other uses. Atom-based linear indices and best-subset linear discriminant analysis (LDA) were used to develop individual classification models. Eighteen individual classification-based QSAR models for tyrosinase inhibitory activity were obtained, with global accuracy ranging from 88.15% to 91.60% on the training set and Matthews correlation coefficients (C) ranging from 0.76 to 0.82. On the external validation set, global accuracy was above 85.99% and C above 0.72. All individual models were validated and fulfilled the OECD principles. A brief analysis of the AD was carried out for the training set of 478 compounds and for the new active compounds included in the retraining. Several assembled multiclassifier systems containing the eighteen models under different selection criteria were obtained, providing the possibility of selecting the best strategy for a particular problem. The assembled multiclassifier systems also estimated the potency of the identified active compounds, using eighteen potency models validated according to the OECD principles.
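The Matthews correlation coefficient reported alongside global accuracy is computed from the binary confusion matrix. A sketch with invented counts (not the paper's actual confusion matrix), chosen only to land in the reported ranges:

```python
import math

def matthews_corrcoef(tp, tn, fp, fn):
    """Matthews correlation coefficient from binary confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Hypothetical confusion matrix for one inhibitory-activity classifier.
tp, tn, fp, fn = 620, 650, 78, 81
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(round(accuracy * 100, 2), round(matthews_corrcoef(tp, tn, fp, fn), 2))
```

Unlike plain accuracy, the MCC stays informative when the active and inactive classes are imbalanced, which is why QSAR studies typically report both.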


2019 ◽  
Vol 2019 ◽  
pp. 1-17 ◽  
Author(s):  
Sufian A. Badawi ◽  
Muhammad Moazam Fraz

The classification of retinal vasculature into arterioles and venules (AV) is considered the first step in developing an automated system for analysing the association of vasculature biomarkers with disease prognosis. Most existing AV classification methods depend on accurate segmentation of the retinal blood vessels. Moreover, the unavailability of large-scale annotated data is a major hindrance to applying deep learning techniques to AV classification. This paper presents an encoder-decoder based fully convolutional neural network for classifying retinal vasculature into arterioles and venules without the preliminary step of vessel segmentation. An optimized multiloss function is used to learn the pixel-wise and segment-wise retinal vessel labels. The proposed method is trained and evaluated on DRIVE, AVRDB, and a newly created AV classification dataset, attaining 96%, 98%, and 97% accuracy, respectively. The new AV classification dataset comprises 700 annotated retinal images and will offer researchers a benchmark for comparing AV classification results.

