metric set
Recently Published Documents

TOTAL DOCUMENTS: 15 (FIVE YEARS: 4)
H-INDEX: 7 (FIVE YEARS: 0)

Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2508
Author(s):  
Christoph-Alexander Holst ◽  
Volker Lohweg

In intelligent technical multi-sensor systems, information is often at least partly redundant, either by design or inherently due to the dynamic processes of the observed system. If sensors are known to be redundant, (i) information processing can be engineered to be more robust against sensor failures, (ii) failures themselves can be detected more easily, and (iii) computational costs can be reduced. This contribution proposes a metric which quantifies the degree of redundancy between sensors. It is set within possibility theory. Information coming from sensors in technical and cyber-physical systems is often imprecise, incomplete, biased, or affected by noise, and relations between the information of different sensors are often only spurious. In short, sensors are not fully reliable. The proposed metric exploits the ability of possibility theory to model incompleteness and imprecision exceptionally well. The focus is on avoiding the detection of spurious redundancy. This article defines redundancy in the context of possibilistic information, specifies requirements towards a redundancy metric, details the information processing, and evaluates the metric qualitatively on information coming from three technical datasets.
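As an illustration of the possibilistic setting, a standard way to compare two possibility distributions from different sensors is their consistency, the height of their minimum-based conjunction. This is a textbook possibilistic index, not the redundancy metric proposed in the article:

```python
def consistency(pi1, pi2):
    """Possibilistic consistency of two distributions: the height
    (maximum degree) of their minimum-based conjunction.

    pi1, pi2 map domain elements to possibility degrees in [0, 1];
    elements missing from a distribution are treated as impossible (0).
    """
    domain = set(pi1) | set(pi2)
    return max(min(pi1.get(x, 0.0), pi2.get(x, 0.0)) for x in domain)
```

A consistency of 1 means the two sources fully agree on at least one alternative; values near 0 indicate conflict between the sensors.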


2021 ◽  
Vol 17 (2) ◽  
pp. 55-71
Author(s):  
Rohit Vashisht ◽  
Syed Afzal Murtaza Rizvi

Cross-project defect prediction (CPDP) forecasts flaws in a target project with defect prediction models (DPMs) trained on defect data from another project. However, CPDP has a prevalent limitation: the two projects must share an identical feature set to describe themselves. This article focuses on heterogeneous CPDP (HCPDP) modeling, which does not require the same metric set in both applications and builds a DPM from metrics that show a comparable distribution of values for a given pair of datasets. The paper evaluates HCPDP modeling empirically and theoretically; the approach comprises three main phases: feature ranking and selection, metric matching, and finally predicting defects in the target application. The experiments were carried out on 13 benchmark datasets from three open-source projects. Results show that the performance of HCPDP is very much comparable to baseline within-project defect prediction (WPDP), and that the XGBoost classification model gives the best results when used in conjunction with Kendall's correlation method, compared to the other classifiers.
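The metric-matching phase can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the function names and the 0.3 cut-off are assumptions. It pairs each source metric with the target metric whose value distribution correlates most strongly under Kendall's tau:

```python
def kendall_tau(x, y):
    """Kendall rank correlation between two equal-length sequences
    (simple O(n^2) pair-counting version, ignoring tie corrections)."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            sign = (x[i] - x[j]) * (y[i] - y[j])
            if sign > 0:
                concordant += 1
            elif sign < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def match_metrics(source, target, threshold=0.3):
    """Greedily pair each source metric with the unused target metric
    whose values correlate most strongly (by |tau|), keeping only
    pairs above the cut-off.  Inputs map metric name -> value list."""
    matches, used = [], set()
    for s_name, s_vals in source.items():
        best = None
        for t_name, t_vals in target.items():
            if t_name in used:
                continue
            tau = abs(kendall_tau(s_vals, t_vals))
            if best is None or tau > best[1]:
                best = (t_name, tau)
        if best and best[1] >= threshold:
            matches.append((s_name, best[0], best[1]))
            used.add(best[0])
    return matches
```

The matched metric pairs then serve as the common feature space on which the defect prediction model is trained and applied.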


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 558
Author(s):  
Taj-Aldeen Naser Abdali ◽  
Rosilah Hassan ◽  
Azana Hafizah Mohd Aman ◽  
Quang Ngoc Nguyen ◽  
Ahmed Salih Al-Khaleefa

Fog computing is an emerging technology with the potential to enable various wireless networks to offer computational services based on requirements given by the user. Typically, users submit their computing tasks to the network manager, which is responsible for optimally allocating the fog nodes needed to conduct the computation effectively. Optimal allocation of nodes with respect to various metrics is essential for fast execution and for stable, energy-efficient, balanced, and cost-effective allocation. This article aims to optimize multiple objectives in fog computing by developing a multi-objective optimization algorithm with highly exploitative searching. The developed algorithm is of the evolutionary genetic type and is designated Hyper Angle Exploitative Searching (HAES). It uses the hyper angle along with the crowding distance to prioritize solutions within the same rank and to select the highest-priority solutions. The approach was evaluated on multi-objective mathematical problems, and its superiority was revealed by comparing its performance with benchmark approaches. A framework of multi-criteria optimization for fog computing, the Fog Computing Closed Loop Model (FCCL), was also proposed. Results show that HAES outperforms other relevant benchmarks in terms of non-domination and optimality metrics, with over 70% confidence in the t-test for rejecting the null hypothesis of non-superiority in terms of the domination metric set coverage.
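HAES combines a hyper-angle criterion with the crowding distance. The hyper-angle step is specific to the paper and is not reproduced here, but the crowding distance it builds on is the standard NSGA-II computation, sketched below:

```python
def crowding_distance(front):
    """NSGA-II crowding distance for a list of objective vectors
    belonging to the same non-domination rank.  Boundary solutions
    get infinite distance; interior ones accumulate the normalized
    gap between their neighbours along each objective."""
    n = len(front)
    if n == 0:
        return []
    m = len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float('inf')
        if hi == lo:
            continue  # objective is constant on this front
        for idx in range(1, n - 1):
            i = order[idx]
            dist[i] += (front[order[idx + 1]][k]
                        - front[order[idx - 1]][k]) / (hi - lo)
    return dist
```

Solutions with larger crowding distance lie in sparser regions of the front and are preferred when a rank must be truncated, which preserves diversity.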


Author(s):  
Ruya Samli ◽  
Zeynep Behrin Güven Aydın ◽  
Uğur Osman Yücel

Measurement is a basic process in all parts of the software development life cycle because it helps to express the quality of software. In software engineering, however, measurement is difficult and imprecise; researchers nevertheless accept that any measure is better than no measure. In this chapter, software metrics are explained and some software testing tools are introduced. The following software metric sets are explained: the Chidamber and Kemerer Metric Set (CK Metric Set), the MOOD Metric Set (Brito e Abreu Metric Set), the QMOOD Metric Set (Bansiya and Davis Software Metric Set), the Rosenberg and Hyatt Metric Set, and the Lorenz and Kidd Metric Set (L&K Metric Set). Software testing tools such as Understand, Sonargraph, FindBugs, Metrics, PMD, Coverlipse, Checkstyle, SDMetrics, and Coverity are introduced. Finally, 17 literature studies are summarized.
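As a small example from the CK metric set, Weighted Methods per Class (WMC) reduces to a per-class method count when every method is given unit weight (CK leave the complexity weight open; the unit weight here is an assumption). A minimal sketch for Python sources using the standard `ast` module:

```python
import ast

def wmc_per_class(source):
    """Unit-weight WMC: count the methods defined directly in the
    body of each class in the given Python source string."""
    tree = ast.parse(source)
    result = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            result[node.name] = sum(
                isinstance(child, (ast.FunctionDef, ast.AsyncFunctionDef))
                for child in node.body)
    return result
```

Tools such as Understand or SDMetrics compute the full metric sets directly from source or from UML models; the sketch above only illustrates the idea.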


2018 ◽  
Vol 2 (2) ◽  
pp. 88
Author(s):  
Rokhana Ayu Solekhah ◽  
Tri Atmojo Kusmayadi

<p>Let <span class="math"><em>G</em></span> be a connected graph and let <span class="math"><em>u</em>, <em>v</em></span> <span class="math"> ∈ </span> <span class="math"><em>V</em>(<em>G</em>)</span>. For an ordered set <span class="math"><em>W</em> = {<em>w</em><sub>1</sub>, <em>w</em><sub>2</sub>, ..., <em>w</em><sub><em>n</em></sub>}</span> of <span class="math"><em>n</em></span> distinct vertices in <span class="math"><em>G</em></span>, the representation of a vertex <span class="math"><em>v</em></span> of <span class="math"><em>G</em></span> with respect to <span class="math"><em>W</em></span> is the <span class="math"><em>n</em></span>-vector <span class="math"><em>r</em>(<em>v</em>∣<em>W</em>) = (<em>d</em>(<em>v</em>, <em>w</em><sub>1</sub>), <em>d</em>(<em>v</em>, <em>w</em><sub>2</sub>), ..., </span> <span class="math"><em>d</em>(<em>v</em>, <em>w</em><sub><em>n</em></sub>))</span>, where <span class="math"><em>d</em>(<em>v</em>, <em>w</em><sub><em>i</em></sub>)</span> is the distance between <span class="math"><em>v</em></span> and <span class="math"><em>w</em><sub><em>i</em></sub></span> for <span class="math">1 ≤ <em>i</em> ≤ <em>n</em></span>. The set <span class="math"><em>W</em></span> is a local metric set of <span class="math"><em>G</em></span> if <span class="math"><em>r</em>(<em>u</em> ∣ <em>W</em>) ≠ <em>r</em>(<em>v</em> ∣ <em>W</em>)</span> for every pair <span class="math"><em>u</em>, <em>v</em></span> of adjacent vertices of <span class="math"><em>G</em></span>. The local metric set of <span class="math"><em>G</em></span> with minimum cardinality is called a local metric basis for <span class="math"><em>G</em></span> and its cardinality is called a local metric dimension, denoted by <span class="math"><em>l</em><em>m</em><em>d</em>(<em>G</em>)</span>. 
In this paper we determine the local metric dimension of a <span class="math"><em>t</em></span>-fold wheel graph, <span class="math"><em>P</em><sub><em>n</em></sub></span> <span class="math"> ⊙ </span> <span class="math"><em>K</em><sub><em>m</em></sub></span> graph, and generalized fan graph.</p>
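For small graphs, the local metric dimension defined above can be computed by brute force: try vertex subsets of increasing size and keep the first one that distinguishes every pair of adjacent vertices. A minimal sketch (exponential in the number of vertices, so for illustration only):

```python
from itertools import combinations
from collections import deque

def distances_from(adj, src):
    """BFS distances from src in an unweighted graph given as an
    adjacency dict {vertex: [neighbours]}."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def local_metric_dimension(adj):
    """Smallest |W| such that every pair of ADJACENT vertices has
    distinct distance vectors r(.|W).  Brute force over subsets."""
    verts = list(adj)
    dist = {v: distances_from(adj, v) for v in verts}
    edges = [(u, v) for u in adj for v in adj[u] if u < v]
    for k in range(1, len(verts) + 1):
        for W in combinations(verts, k):
            if all(any(dist[u][w] != dist[v][w] for w in W)
                   for u, v in edges):
                return k
    return 0
```

For instance, the 4-cycle is bipartite and a single vertex already resolves all adjacent pairs, while the triangle needs two vertices.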


2018 ◽  
Vol 14 (1) ◽  
pp. 98-120
Author(s):  
Tong Ruan ◽  
Liang Zhao ◽  
Yang Li ◽  
Haofen Wang ◽  
Xu Dong

In this article, the authors design two metric sets covering Richness and Correctness based on a quasi-formal conceptual representation. They also design a novel metric set on the overlapped instances of different KBs to make the metric results comparable. Finally, they use random sampling techniques to reduce the human effort needed to assess correctness. The authors comparatively evaluate three large Chinese KBs, DBpedia Chinese, Zhishi.me, and SSCO, and further compare them with English KBs in terms of dataset quality. They also compare different versions of DBpedia and YAGO. The findings not only give a detailed report of the current state of extracted KBs, but also show the effectiveness of the proposed methods for comparatively assessing the quality of Web-scale KBs.
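The random-sampling idea for correctness assessment can be sketched as follows; the helper names and the normal-approximation confidence interval are illustrative assumptions, not the authors' exact procedure:

```python
import math
import random

def estimate_correctness(triples, is_correct, sample_size, seed=0):
    """Estimate a KB's correctness rate by having a judge (is_correct)
    label only a random sample of its triples.  Returns the estimated
    rate and a 95% normal-approximation confidence half-width."""
    random.seed(seed)
    sample = random.sample(triples, min(sample_size, len(triples)))
    p = sum(map(is_correct, sample)) / len(sample)
    half = 1.96 * math.sqrt(p * (1 - p) / len(sample))
    return p, half
```

Judging a few hundred sampled triples per KB gives a usable correctness estimate at a small fraction of the cost of exhaustive annotation.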


2015 ◽  
Vol 59 ◽  
pp. 170-190 ◽  
Author(s):  
Peng He ◽  
Bing Li ◽  
Xiao Liu ◽  
Jun Chen ◽  
Yutao Ma

2010 ◽  
Vol 26 (4) ◽  
pp. 1117-1138 ◽  
Author(s):  
Frank Scherbaum ◽  
Nicolas M. Kuehn ◽  
Matthias Ohrnberger ◽  
Andreas Koehler

Logic trees have become a popular tool for capturing epistemic uncertainties in seismic hazard analysis. They are commonly used by assigning weights to models on a purely descriptive basis (nominal scale), which invites unintended inconsistencies in the weights on the corresponding hazard curves. On the other hand, it is difficult for human experts to confidently express degrees of belief in particular numerical values. Here we demonstrate for ground-motion models how the model-based and value-based perspectives can be partially reconciled by using high-dimensional information-visualization techniques. For this purpose we use Sammon's (1969) mapping and self-organizing maps to project ground-motion models onto a two-dimensional map (an ordered metric set), where they can be evaluated jointly according to their proximity in predicting similar ground motions, potentially making the assignment of logic tree weights consistent with the models' ground-motion characteristics without having to abandon the model-based perspective.
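Sammon's mapping places the models in two dimensions by minimizing the Sammon stress between the original and projected pairwise distances. The stress itself, from Sammon (1969), is simply:

```python
def sammon_stress(d_high, d_low):
    """Sammon stress between matching lists of pairwise distances in
    the original (d_high) and projected (d_low) spaces.  Each squared
    error is divided by the original distance, so small original
    distances (nearby models) are weighted more heavily."""
    total = sum(d_high)
    return sum((dh - dl) ** 2 / dh
               for dh, dl in zip(d_high, d_low)) / total
```

The mapping iteratively adjusts the 2-D positions (e.g. by gradient descent) until this stress is minimal, so that models predicting similar ground motions end up close together on the map.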


2008 ◽  
Vol 65 (7) ◽  
pp. 1235-1247 ◽  
Author(s):  
Julian M. Burgos ◽  
John K. Horne

Abstract: Burgos, J. M., and Horne, J. K. 2008. Characterization and classification of acoustically detected fish spatial distributions. ICES Journal of Marine Science, 65: 1235–1247. High-resolution, two-dimensional measurements of aquatic-organism density are collected routinely during echo integration trawl surveys. School-detection algorithms are commonly used to describe and analyse spatial distributions of pelagic and semi-pelagic organisms observed in echograms. This approach is appropriate for species that form well-defined schools, but is limited when used for species that form demersal layers or diffuse pelagic shoals. As an alternative to metrics obtained from school-detection algorithms, we used landscape indices to quantify and characterize spatial heterogeneity in density distributions of walleye pollock (Theragra chalcogramma). Survey transects were divided into segments of equal length and echo integrated at a resolution of 20 m (horizontal) and 1 m (vertical). A series of 20 landscape metrics was calculated in each segment to measure occupancy, patchiness, size distribution of patches, distances among patches, acoustic density, and vertical location and dispersion. Factor analysis indicated that the metric set could be reduced to four factors: spatial occupancy, aggregation, packing density, and vertical distribution. Cluster analysis was used to develop a 12-category classification typology for distribution patterns. Visual inspection revealed that spatial patterns of segments assigned to each type were consistent, but that there was considerable overlap among types.
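Two of the landscape metrics named above, occupancy and patch count, can be sketched on a gridded echogram segment as follows (an illustrative reconstruction; the density threshold and 4-connectivity are assumptions):

```python
from collections import deque

def occupancy_and_patches(grid, threshold=0.0):
    """grid: 2-D list of acoustic densities for one segment.
    Returns (occupancy fraction, patch count), where a patch is a
    4-connected group of cells with density above the threshold."""
    rows, cols = len(grid), len(grid[0])
    occupied = {(r, c) for r in range(rows) for c in range(cols)
                if grid[r][c] > threshold}
    seen, patches = set(), 0
    for cell in occupied:
        if cell in seen:
            continue
        patches += 1
        q = deque([cell])        # flood-fill one patch
        seen.add(cell)
        while q:
            r, c = q.popleft()
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if nb in occupied and nb not in seen:
                    seen.add(nb)
                    q.append(nb)
    return len(occupied) / (rows * cols), patches
```

Computed per segment, such indices form the metric set that factor analysis then condenses into the occupancy, aggregation, packing-density, and vertical-distribution factors.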

