Euclidean Distances: Recently Published Documents

TOTAL DOCUMENTS: 229 (five years: 73)
H-INDEX: 18 (five years: 3)

Author(s):  
Elżbieta Górska-Horczyczak ◽  
Magdalena Zalewska ◽  
Agnieszka Wierzbicka

Abstract: The aim of the study was to compare the effectiveness of low-peak chromatographic fingerprints for the differentiation of various food products. Three groups of products were examined: unprocessed products (mushrooms, hazelnuts and tomatoes), food preparations (bread, dried herbs and tomato juice) and alcoholic beverages (vodka and two types of blended whiskey). A commercial electronic nose based on ultrafast gas chromatography (acquisition time 90 s) with a flame ionization detector was used for the research. Static headspace was used as a green procedure to extract volatile compounds without modifying the food matrix, and individual extraction conditions were used for each product group. Similarities and differences between profiles were analyzed by Principal Component Analysis, and similarity ratings were determined using Euclidean distances. A global model was built to recognize the chromatographic fingerprints of food samples. The best recognition results, 100% and 89%, were obtained for tomato juices, spices, separate champignon elements and hazelnuts; the worst, 56% and 77%, were obtained for breads and strong alcoholic beverages.
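As a hedged illustration of this kind of workflow (not the authors' code), the sketch below projects fingerprint profiles with PCA and rates similarity via Euclidean distances; the array shapes and the distance-to-similarity mapping are assumptions.

```python
# Sketch: PCA projection of chromatographic fingerprints, then a similarity
# rating from pairwise Euclidean distances. Data shapes are placeholders.
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
fingerprints = rng.random((24, 500))   # 24 samples x 500 chromatogram points (assumed)

scores = PCA(n_components=2).fit_transform(fingerprints)

# Pairwise Euclidean distances in PCA space; smaller distance = more similar.
dist = cdist(scores, scores)
similarity = 1.0 / (1.0 + dist)        # one simple way to turn distance into a rating
```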


2021 ◽  
Vol 10 (16) ◽  
pp. e341101623723
Author(s):  
Jéssika Andreza Oliveira Pinto ◽  
Anne Karoline de Souza Oliveira ◽  
Edmilson Willian Propheta dos Santos ◽  
Ana Mara de Oliveira e Silva ◽  
Arie Fitzgerald Blank ◽  
...  

This study investigates variations in the chemical profiles and biological activities (antioxidant and cytotoxic) of Eplingiella fruticosa, an endemic species of the Northeast region of Brazil, sampled in the state of Sergipe. The essential oils were extracted from six populations by hydrodistillation and analyzed by GC/MS-FID. Cluster analysis was performed on the essential-oil constituent data: a dissimilarity matrix based on Euclidean distances was computed, and a dendrogram was constructed using Ward's clustering method. The antioxidant activity of the essential oils was tested by different assays (DPPH, ABTS, β-carotene, and FRAP), and cytotoxic activity was tested by the SRB assay. The compounds found in greater amounts were α-pinene, β-pinene, 1,8-cineole, camphor, borneol, δ-elemene, α-cubebene, α-ylangene, (E)-caryophyllene, germacrene D, bicyclogermacrene, trans-calamenene, spathulenol, caryophyllene oxide, and viridiflorol. These compounds defined the formation of two groups. The first group comprised the populations of the São Cristóvão, Itaporanga, Japaratuba, and Malhada dos Bois municipalities and was characterized by the monoterpene camphor (8.39-11.27%) at higher concentrations than in the other municipal areas. The second group comprised the populations of the Moita Bonita and Pirambu municipalities and was characterized by a major presence of the sesquiterpene bicyclogermacrene (7.45% and 10.98%). The plants exhibited weak antioxidant activity; however, the essential oils showed significant toxicity against the A549 line (51.00% cell viability) for the Japaratuba population and the B16F10 line (64.94% cell viability) for Malhada dos Bois. The observations of this study may open a way to optimize the use of E. fruticosa populations with respect to their cytotoxic properties.
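As a minimal sketch of this clustering step (placeholder data, not the authors' measurements), a Euclidean dissimilarity matrix and a Ward dendrogram can be built with SciPy:

```python
# Sketch: Euclidean dissimilarity matrix and Ward dendrogram for composition
# data. Rows = populations, columns = relative percentages of constituents.
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import ward, dendrogram

X = np.random.rand(6, 15)                  # 6 populations x 15 compounds (assumed)
labels = [f"pop{i}" for i in range(1, 7)]  # hypothetical population labels

d = pdist(X, metric="euclidean")           # condensed dissimilarity matrix
Z = ward(d)                                # Ward's minimum-variance linkage

dendrogram(Z, labels=labels)
plt.ylabel("Euclidean distance")
plt.show()
```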


Author(s):  
Mohamed Nasor ◽  
Walid Obaid

This article presents a fully automated machine-vision technique for the detection and segmentation of mesenteric cysts in computed tomography (CT) images of the abdominal space. The proposed technique involves clustering, filtering, morphological operations and evaluation processes to detect and segment mesenteric cysts in the abdomen regardless of their texture variation and location with respect to other surrounding abdominal organs. The technique comprises several processing phases, including K-means clustering, iterative Gaussian filtering, and an evaluation of the segmented regions using area-normalized histograms and Euclidean distances. The technique was tested on 65 different abdominal CT scan images. The results showed that it was able to detect and segment mesenteric cysts, achieving 99.31% precision, 98.44% recall, 99.84% specificity, a 98.86% Dice score coefficient and 99.63% accuracy, indicating very high segmentation accuracy.
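A hedged sketch of the evaluation idea follows; the histogram bin count, intensity range and acceptance threshold are assumptions, not values from the paper. A candidate region is compared against a reference via the Euclidean distance between area-normalized histograms.

```python
# Sketch: evaluate a segmented region by the Euclidean distance between its
# area-normalised intensity histogram and a reference histogram. The bin
# count, intensity range and threshold are assumptions, not the paper's values.
import numpy as np

def region_histogram(pixels: np.ndarray, bins: int = 64) -> np.ndarray:
    """Histogram of region intensities (assumed scaled to [0, 1]),
    normalised so the bins sum to one."""
    hist, _ = np.histogram(pixels, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def is_cyst_like(candidate: np.ndarray, reference: np.ndarray,
                 threshold: float = 0.1) -> bool:
    """Accept the candidate if its histogram lies close to the reference."""
    d = np.linalg.norm(region_histogram(candidate) - region_histogram(reference))
    return d < threshold
```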


2021 ◽  
Vol 929 (1) ◽  
pp. 012014
Author(s):  
D V Kenigsberg ◽  
Yu M Salamatina ◽  
O A Prokhorov ◽  
S I Kuzikov

Abstract: As part of the research on modern movements of the Earth's crust, seven high-precision methods for calculating GNSS positions were analysed for the convergence of their daily mean coordinates. Based on Euclidean distances, the regular and maximal discrepancies between the coordinates of the different methods are given. For coordinates in the ITRF, five methods stand out, with regular coordinate discrepancies below 1 mm and individual maximum discrepancies up to 30 mm. The other two methods have regular coordinate discrepancies up to 2 cm, with maximum differences reaching 1 m. For a group of stations, transforming global coordinates into a local reference frame stabilizes the coordinates and increases their relative precision in the time series. As a result of this procedure, the level of maximum coordinate discrepancies between the methods decreased to 46%. Moreover, one of the coordinate-calculation methods improved its convergence with the other methods by 80%. Based on the Euclidean distance method, the quality of the raw data for each station was evaluated. Thus, there is a group of eight stations for which the convergence of coordinates across the methods is at approximately the same level, and 2-3 times better than for the other two stations.
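A minimal sketch of the discrepancy measure (array shapes and the noise level are placeholder assumptions): the daily Euclidean distance between two methods' coordinate solutions for one station, summarized by its mean ("regular") and maximum.

```python
# Sketch: "regular" (mean) and maximal Euclidean discrepancies between the
# daily coordinate solutions of two processing methods for one station.
import numpy as np

rng = np.random.default_rng(0)
coords_a = rng.random((365, 3))                              # method A: daily X, Y, Z
coords_b = coords_a + 0.001 * rng.standard_normal((365, 3))  # method B (assumed noise)

daily = np.linalg.norm(coords_a - coords_b, axis=1)          # per-day discrepancy
print(f"regular: {daily.mean():.4f}, maximum: {daily.max():.4f}")
```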


2021 ◽  
Vol 2021 (29) ◽  
pp. 328-333
Author(s):  
Davit Gigilashvili ◽  
Philipp Urban ◽  
Jean-Baptiste Thomas ◽  
Marius Pedersen ◽  
Jon Yngve Hardeberg

Translucency optically results from subsurface light transport and plays a considerable role in how objects and materials appear. Absorption and scattering coefficients parametrize the distance a photon travels inside the medium before it gets absorbed or scattered, respectively. Stimuli produced by a material for a distinct viewing condition are perceptually non-uniform w.r.t. these coefficients. In this work, we use multi-grid optimization to embed a non-perceptual absorption-scattering space into a perceptually more uniform space for translucency and lightness. In this process, we rely on A (alpha) as a perceptual translucency metric. Small Euclidean distances in the new space are roughly proportional to lightness and apparent translucency differences measured with A. This makes picking A more practical and predictable, and is a first step toward a perceptual translucency space.
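As an illustration only (the learned embedding is the paper's contribution and is merely stubbed here as an identity mapping), the role of Euclidean distances in the new space can be sketched like this:

```python
# Sketch: in the embedded (perceptually more uniform) space, small Euclidean
# distances approximate lightness and apparent-translucency differences.
# The embedding below is an identity stub, NOT the paper's learned mapping.
import numpy as np

def embed(absorption: float, scattering: float) -> np.ndarray:
    """Placeholder for the multi-grid-optimised embedding described in the
    paper; here it simply passes the coefficients through unchanged."""
    return np.array([absorption, scattering])

m1, m2 = embed(0.2, 1.5), embed(0.25, 1.4)
perceived_difference = np.linalg.norm(m1 - m2)  # roughly proportional to A differences
```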


2021 ◽  
Author(s):  
Jawad Khan

Emotion awareness is critical because of the role that emotions play in our daily lives. As a result, automatic emotion recognition aims to give a machine the human ability to interpret and comprehend a person's emotional state in order to predict their intent from their facial expression. In this research, a new method for improving the accuracy of emotion recognition from facial expressions is proposed, based solely on input attributes deduced from fiducial points. First, 1200 dynamic features, representing the percentage of Euclidean distances between facial fiducial points in the first frame and facial fiducial points in the last frame, are extracted from image sequences. Second, only the most relevant features are selected using an active learning method. Finally, the selected features are fed to a ResNet classifier to categorise the facial expression into an emotion.
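One plausible reading of this feature definition (an assumption, not confirmed by the abstract) is the percentage ratio of each pairwise fiducial distance in the last frame to the same distance in the first frame; with roughly 50 landmarks this yields on the order of 1200 features.

```python
# Sketch: dynamic features as percentage ratios of pairwise Euclidean
# distances between fiducial points in the last vs. the first frame.
# The pairwise-ratio interpretation and the point count are assumptions.
import numpy as np
from scipy.spatial.distance import pdist

def dynamic_features(first_pts: np.ndarray, last_pts: np.ndarray) -> np.ndarray:
    """first_pts, last_pts: (n_points, 2) arrays of (x, y) landmark coordinates."""
    d_first = pdist(first_pts)                # all pairwise distances, first frame
    d_last = pdist(last_pts)                  # same point pairs, last frame
    return 100.0 * d_last / (d_first + 1e-9)  # epsilon guards coincident points
```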


F1000Research ◽  
2021 ◽  
Vol 10 ◽  
pp. 796
Author(s):  
Darlington Chineye Ikegwuoha ◽  
Harold Louw Weepener ◽  
Megersa Olumana Dinka

Background: Land use/land cover (LULC) change is one of the major contributors to global environmental and climate variations. The ability to predict future LULC is crucial for environmental engineers, civil engineers, urban designers, and natural resource managers in planning activities. Methods: The TerrSet Geospatial Monitoring and Modelling System and ArcGIS Pro 2.8 were used to process LULC data for the Lepelle River Basin (LRB) of South Africa. Driver variables such as population density, slope and elevation, as well as the Euclidean distances within the LRB to cities, roads, highways, railroads, parks and restricted areas, and towns, were analysed in combination with LULC data using the Land Change Modeller (LCM) and the Cellular-Automata Markov (CAM) model. Results: The results reveal an array of losses (-) and gains (+) for certain LULC classes in the LRB by the year 2040: natural vegetation (+8.5%), plantations (+3.5%), water bodies (-31.6%), bare ground (-8.8%), cultivated land (-29.3%), built-up areas (+10.6%) and mines (+14.4%). Conclusions: The results point to the conversion of land uses from natural to anthropogenic by 2040. These changes also highlight potential losses of resources such as water that will negatively impact society and ecosystem functioning in the LRB by exacerbating water scarcity driven by climate change. This modelling study provides a decision support system for the establishment of sustainable land resource utilization policies in the LRB.
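As a hedged sketch of one such driver layer (the binary road raster and cell size are placeholders, and this is not the TerrSet/ArcGIS workflow itself), a Euclidean-distance surface giving each cell's distance to the nearest road can be computed with SciPy's distance transform:

```python
# Sketch: a Euclidean-distance driver raster, i.e. each cell's distance to the
# nearest road cell. The binary road mask and cell size are placeholders.
import numpy as np
from scipy.ndimage import distance_transform_edt

roads = np.zeros((200, 200), dtype=bool)   # hypothetical raster: True = road cell
roads[100, :] = True                       # a single horizontal road

# Distance (in cells) from every non-road cell to the nearest road cell;
# multiply by the cell size to obtain metres.
dist_to_roads = distance_transform_edt(~roads)
```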


Mathematics ◽  
2021 ◽  
Vol 9 (16) ◽  
pp. 1909
Author(s):  
Petr Bujok

This paper proposes a real-world application of the Differential Evolution (DE) algorithm using distance-based mutation selection, population-size adaptation, and an archive of solutions (DEDMNA). This simple framework uses three widely used mutation types with binomial crossover. For each solution, the most suitable of three newly generated positions is selected prior to evaluation using their Euclidean distances. Moreover, an efficient linear population-size reduction mechanism is employed, and an archive of older efficient solutions is used. The DEDMNA algorithm is applied to three real-life engineering problems and 13 constrained problems. Seven well-known state-of-the-art DE algorithms are used to compare the efficiency of DEDMNA, and the performance of DEDMNA and the other algorithms is comparatively assessed using statistical methods. The results show that DEDMNA is highly competitive with the best-performing DE variants. The simple idea of measuring the distance between mutant solutions increases the performance of DE significantly.
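A minimal sketch of the distance-based selection step (the choice of reference point is an assumption; the abstract does not specify DEDMNA's exact rule): from three mutant positions, keep the one closest in Euclidean distance to a reference before spending a function evaluation.

```python
# Sketch: pick one of three freshly generated trial positions by Euclidean
# distance to a reference point (assumed here to be the current best solution)
# before it is evaluated. Illustration only, not DEDMNA's exact rule.
import numpy as np

def pick_trial(trials: list[np.ndarray], reference: np.ndarray) -> np.ndarray:
    """Return the trial vector closest to `reference` in Euclidean distance."""
    dists = [np.linalg.norm(t - reference) for t in trials]
    return trials[int(np.argmin(dists))]

# Usage with placeholder vectors:
best = np.zeros(10)
candidates = [np.random.randn(10) for _ in range(3)]
chosen = pick_trial(candidates, best)
```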


2021 ◽  
Vol 8 (4) ◽  
pp. 745
Author(s):  
Bain Khusnul Khotimah ◽  
Muhammad Syarief ◽  
Miswanto Miswanto ◽  
Herry Suprajitno

Abstract: Missing values require preprocessing with imputation techniques to produce complete data. The imputation process requires appropriate initial weights, because the generated values are replacement data. Choosing optimal weight values and a suitable value of K for the K-Means Imputation (KMI) method is a major problem, and a poor choice causes errors to grow. A combined model of a genetic algorithm (GA) and KMI, known as GAKMI, is used to determine the optimal weights for each cluster of data containing missing values. The genetic algorithm selects the weights using real-number encoding on the chromosomes. The hybrid GA-KMI model clusters the data using the sum of the Euclidean distances of each data point from its cluster centre. Algorithm performance is measured by an optimal fitness function with the smallest MSE value. Experiments on hepatitis data show that the GA is efficient at finding optimal initial weight values in a large search space. The result MSE = 0.044 at K = 3 in the fifth replication shows that GAKMI achieves a low error rate for hepatitis data with mixed attributes. Testing at the imputation level shows that GAKMI achieves r = 0.526, higher than the other methods; the higher r value indicates that GAKMI performs best among the imputation techniques compared.
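A hedged sketch of the KMI step inside this scheme (function names and the handling of observed features are illustrative assumptions; the GA wrapper that evolves the initial weights is omitted): each record is assigned to its nearest centre by Euclidean distance over its observed features, and missing entries are filled from that centre.

```python
# Sketch: the K-means imputation (KMI) step. Records are assigned to the
# nearest centre by Euclidean distance over observed features, and missing
# entries are copied from that centre. Names and shapes are illustrative.
import numpy as np

def kmeans_impute(X: np.ndarray, centers: np.ndarray) -> np.ndarray:
    """X: (n_samples, n_features) with np.nan marking missing values;
    centers: (k, n_features) cluster centres (in GAKMI, evolved by the GA)."""
    X_filled = X.copy()
    for i, row in enumerate(X):
        obs = ~np.isnan(row)
        # Euclidean distance to each centre, using observed features only
        d = np.linalg.norm(centers[:, obs] - row[obs], axis=1)
        nearest = centers[np.argmin(d)]
        X_filled[i, ~obs] = nearest[~obs]   # impute missing entries
    return X_filled
```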

