Web-Based Development of Onti Measures with Testing on Virus and Disease Ontology Data

Author(s):  
Nur Alfi Ekowati ◽  
Ika Indah Lestari ◽  
Sulistiyasni

The use of ontology documents in the COVID-19 case is an example of how much an inconsistency measure is needed for OWL ontologies. An ontology is a knowledge representation in Semantic Web technology, which is an extension of the web. The role of an inconsistency measure is important for ensuring that all information in an ontology is consistent. The research in this paper aims to build a web-based application program called Onti Measures, a further development of an ontology-based inconsistency-measures prototype built in previous research. That prototype had several weaknesses; among others, it consisted only of program code with no user interface, so it could not be accessed by the public. Data were collected through a literature study, while the system was developed using the waterfall method. The test samples in this paper are ontology files for virus and disease cases, used as input to the Onti Measures program with three kinds of OWL reasoner. The program outputs the inconsistency value, the running time, and the ontology size. Testing was carried out using white-box and black-box testing.
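As a rough illustration of the kind of consistency checking and running-time measurement described above, the sketch below loads an OWL file with the owlready2 package and times a HermiT consistency check. It is not the authors' Onti Measures code; the file name virus_disease.owl and the reported size metrics are placeholders.

```python
# Minimal sketch (not the Onti Measures implementation): load an OWL ontology,
# run a reasoner, and report consistency, reasoning time, and a rough size.
# The ontology file name below is a placeholder.
import time
from owlready2 import get_ontology, sync_reasoner, OwlReadyInconsistentOntologyError

onto = get_ontology("file://./virus_disease.owl").load()

start = time.perf_counter()
try:
    sync_reasoner()          # runs the HermiT reasoner (requires Java); raises if inconsistent
    consistent = True
except OwlReadyInconsistentOntologyError:
    consistent = False
elapsed = time.perf_counter() - start

print("classes:", len(list(onto.classes())), "individuals:", len(list(onto.individuals())))
print("consistent:", consistent, "reasoning time: %.2f s" % elapsed)
```

A numeric inconsistency measure, as opposed to this boolean check, would require further analysis of the conflicting axioms and is beyond this sketch.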

Author(s):  
Jangbae Jeon

Abstract This work presents a novel continuous-improvement method for faster, better and cheaper TEM sample preparation using Cut Look and Measure (CLM). The process is improved through operational monitoring of daily beam conditions, end products, bulk thickness control, recipe usage and tool running time. This monitoring yields a consequent decrease in rework rate and process time, and it also increases throughput while delivering better-quality TEM samples.


Author(s):  
Jeffrey L. Adler

For a wide range of transportation network path search problems, the A* heuristic significantly reduces both search effort and running time when compared to basic label-setting algorithms. The motivation for this research was to determine if additional savings could be attained by further experimenting with refinements to the A* approach. We propose a best neighbor heuristic improvement to the A* algorithm that yields additional benefits by significantly reducing the search effort on sparse networks. The level of reduction in running time improves as the average outdegree of the network decreases and the number of paths sought increases.
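For readers unfamiliar with the baseline being refined, the sketch below shows a plain A* search over an adjacency-list network in Python. It is only the standard algorithm; the paper's best-neighbor refinement is not reproduced, and the toy graph and zero heuristic in the usage lines are illustrative assumptions.

```python
# Standard A* shortest-path search on an adjacency-list network (illustrative only;
# the paper's "best neighbor" refinement is not reproduced here).
import heapq

def a_star(graph, source, target, heuristic):
    """graph: {node: [(neighbor, cost), ...]}; heuristic(n) must not overestimate
    the remaining cost to target (admissible)."""
    best = {source: 0.0}                      # best known cost from source
    parent = {source: None}
    frontier = [(heuristic(source), source)]  # priority queue keyed by g + h
    while frontier:
        _, u = heapq.heappop(frontier)
        if u == target:                       # target popped: optimal label settled
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return best[target], path[::-1]
        for v, w in graph.get(u, ()):
            g = best[u] + w
            if g < best.get(v, float("inf")):
                best[v], parent[v] = g, u
                heapq.heappush(frontier, (g + heuristic(v), v))
    return float("inf"), []

# Toy usage: with a zero heuristic, A* reduces to Dijkstra's algorithm.
toy = {"s": [("a", 1.0), ("b", 4.0)], "a": [("b", 2.0), ("t", 5.0)],
       "b": [("t", 1.0)], "t": []}
print(a_star(toy, "s", "t", heuristic=lambda n: 0.0))   # (4.0, ['s', 'a', 'b', 't'])
```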


2018 ◽  
Vol 1 (3) ◽  
pp. 2
Author(s):  
José Stênio De Negreiros Júnior ◽  
Daniel Do Nascimento e Sá Cavalcante ◽  
Jermana Lopes de Moraes ◽  
Lucas Rodrigues Marcelino ◽  
Francisco Tadeu De Carvalho Belchior Magalhães ◽  
...  

Simulating the propagation of optical pulses in a single-mode optical fiber is of fundamental importance for studying the linear and nonlinear effects that may occur within such a medium. In this work, we simulate it by implementing the nonlinear Schrödinger equation using the split-step Fourier method in several of its approaches. We then compare their running time, algorithmic complexity, and accuracy with respect to energy conservation of the optical pulse. We note that the method is simple to implement and shows good energy conservation at low computational cost. We observe greater precision for the symmetrized approach, although its running time can be up to 126% higher than that of the other approaches, depending on the parameters set. We conclude that the time window must be adjusted for each propagation length in the fiber so that the energy-conservation error during propagation can be reduced.
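A minimal sketch of the symmetrized split-step Fourier approach is given below in Python/NumPy, assuming a loss-free scalar nonlinear Schrödinger equation. All fiber parameters and the Gaussian input pulse are illustrative placeholders, not values from the paper; the final lines check pulse-energy conservation, as discussed above.

```python
# Symmetrized split-step Fourier method for the scalar NLSE (loss neglected):
#   dA/dz = -i*(beta2/2)*d2A/dt2 + i*gamma*|A|^2*A
# Parameter values are illustrative only.
import numpy as np

def ssfm_symmetric(A0, dt, dz, n_steps, beta2, gamma):
    """Propagate the envelope A0(t) over n_steps segments of length dz."""
    A = A0.astype(complex)
    w = 2 * np.pi * np.fft.fftfreq(A.size, d=dt)        # angular-frequency grid
    half_disp = np.exp(0.25j * beta2 * w**2 * dz)       # half-step dispersion operator
    for _ in range(n_steps):
        A = np.fft.ifft(half_disp * np.fft.fft(A))      # dispersion, half step
        A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)  # nonlinearity, full step
        A = np.fft.ifft(half_disp * np.fft.fft(A))      # dispersion, half step
    return A

# Gaussian input pulse over a 40 ps time window.
T = np.linspace(-20e-12, 20e-12, 2**12)
dt = T[1] - T[0]
A0 = np.exp(-(T / 2e-12) ** 2)
A = ssfm_symmetric(A0, dt, dz=10.0, n_steps=200, beta2=-21e-27, gamma=1.3e-3)
E0, E = (np.abs(A0)**2).sum() * dt, (np.abs(A)**2).sum() * dt
print("relative energy change:", abs(E - E0) / E0)      # should be close to zero
```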


2021 ◽  
Vol 15 (6) ◽  
pp. 1-27
Author(s):  
Marco Bressan ◽  
Stefano Leucci ◽  
Alessandro Panconesi

We address the problem of computing the distribution of induced connected subgraphs, aka graphlets or motifs, in large graphs. The current state-of-the-art algorithms estimate the motif counts via uniform sampling by leveraging the color coding technique by Alon, Yuster, and Zwick. In this work, we extend the applicability of this approach by introducing a set of algorithmic optimizations and techniques that reduce the running time and space usage of color coding and improve the accuracy of the counts. To this end, we first show how to optimize color coding to efficiently build a compact table of a representative subsample of all graphlets in the input graph. For 8-node motifs, we can build such a table in one hour for a graph with 65M nodes and 1.8B edges, which is times larger than the state of the art. We then introduce a novel adaptive sampling scheme that breaks the “additive error barrier” of uniform sampling, guaranteeing multiplicative approximations instead of just additive ones. This allows us to count not only the most frequent motifs, but also extremely rare ones. For instance, on one graph we accurately count nearly 10,000 distinct 8-node motifs whose relative frequency is so small that uniform sampling would literally take centuries to find them. Our results show that color coding is still the most promising approach to scalable motif counting.
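To make the color-coding idea concrete, the sketch below applies it to a much simpler target than the paper's graphlet tables: estimating the number of simple k-vertex paths in an undirected graph. It is a textbook-style illustration of the Alon-Yuster-Zwick technique, not the authors' optimized or adaptive-sampling implementation.

```python
# Illustrative color-coding estimator for the number of simple k-vertex paths.
# Not the paper's algorithm; it only shows the core color-coding trick.
import math
import random
from collections import defaultdict

def estimate_k_paths(adj, k, trials=10):
    """adj: {vertex: set of neighbours} for an undirected graph; assumes k >= 2."""
    estimates = []
    for _ in range(trials):
        color = {v: random.randrange(k) for v in adj}           # uniform random k-coloring
        # dp[v][mask] = number of colorful paths ending at v whose color set is 'mask'
        dp = {v: {1 << color[v]: 1} for v in adj}
        for _ in range(k - 1):                                   # grow paths one vertex at a time
            new_dp = {v: defaultdict(int) for v in adj}
            for u in adj:
                for mask, cnt in dp[u].items():
                    for v in adj[u]:
                        bit = 1 << color[v]
                        if not mask & bit:                       # extend only with an unused color
                            new_dp[v][mask | bit] += cnt
            dp = new_dp
        colorful = sum(c for v in adj for c in dp[v].values()) // 2   # each path counted from both ends
        # a fixed set of k vertices is colorful with probability k!/k**k, so rescale
        estimates.append(colorful * k ** k / math.factorial(k))
    return sum(estimates) / len(estimates)
```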


Mathematics ◽  
2021 ◽  
Vol 9 (11) ◽  
pp. 1306
Author(s):  
Elsayed Badr ◽  
Sultan Almotairi ◽  
Abdallah El Ghamry

In this paper, we propose a novel blended algorithm that has the advantages of the trisection method and the false position method. Numerical results indicate that the proposed algorithm outperforms the secant, the trisection, the Newton–Raphson, the bisection and the regula falsi methods, as well as the hybrid of the last two methods proposed by Sabharwal, with regard to the number of iterations and the average running time.
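The abstract does not spell out the blend, so the following sketch is only one plausible way to combine the two ingredients: each iteration evaluates the regula-falsi point and the two trisection points, then keeps the tightest sub-interval that still brackets the root. It should not be read as the authors' algorithm.

```python
# Hedged sketch of one possible trisection / false-position blend (NOT the paper's method).
def blended_root(f, a, b, tol=1e-12, max_iter=100):
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f(a) and f(b) must bracket a root"
    for _ in range(max_iter):
        # candidate interior points: the two trisection points and the regula-falsi point
        xs = sorted({a + (b - a) / 3, a + 2 * (b - a) / 3,
                     (a * fb - b * fa) / (fb - fa)})
        pts = [(a, fa)] + [(x, f(x)) for x in xs] + [(b, fb)]
        for (x0, f0), (x1, f1) in zip(pts, pts[1:]):
            if f0 == 0.0:
                return x0                       # exact root found
            if f0 * f1 <= 0:                    # first sign change: new, tighter bracket
                a, fa, b, fb = x0, f0, x1, f1
                break
        if b - a < tol:
            break
    return 0.5 * (a + b)

# Example: root of x**3 - x - 2 on [1, 2], approximately 1.52138
print(blended_root(lambda x: x**3 - x - 2, 1.0, 2.0))
```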


2021 ◽  
Vol 13 (13) ◽  
pp. 7359
Author(s):  
Sadaf Alam ◽  
Miimu Airaksinen ◽  
Risto Lahdelma

Key stakeholders in industry bear a large share of the responsibility for achieving energy performance targets. In particular, this paper assesses the attitudes, approaches, and experiences of Finnish construction professionals regarding nearly zero-energy buildings (nZEBs). A three-tier investigation was conducted, including surveys and expert interviews with several stakeholder groups; its structure was informed by preliminary data and information available on the Finnish construction sector. The questionnaire showed that the stakeholders ranked energy efficiency and embodied energy/carbon as very important. The survey highlighted that, from many stakeholders’ points of view, the embodied carbon (CO2) in materials is less important than energy efficiency. “Energy efficiency” is very important for ESCOs, contractors, and facility managers, followed by architects, HVAC engineers, and construction design engineers. Architects, however, ranked “embodied energy CO2” as the most important aspect of nZEBs. Regarding the importance of “running time emissions” toward nZEBs, contractors and ESCO companies ranked it first in importance, followed by property owners (78%) and tenants (75%); indeed, “running time carbon emissions” was ranked 1 (very important) by all stakeholders. This study will enable construction industry stakeholders to make provisions for overcoming the barriers, gaps, and challenges identified in the practices of nZEB projects. It will also inform the formulation of policies that drive retrofit uptake.


Author(s):  
Markus Ekvall ◽  
Michael Höhle ◽  
Lukas Käll

Abstract Motivation Permutation tests offer a straightforward framework to assess the significance of differences in sample statistics. A significant advantage of permutation tests is that relatively few assumptions about the distribution of the test statistic are needed, as they rely only on the assumption of exchangeability of the group labels. They have great value, as they allow a sensitivity analysis to determine the extent to which the assumed broad sampling distribution of the test statistic applies. However, permutation tests are rarely applied in this situation because the running time of naïve implementations is too slow and grows exponentially with the sample size. Nevertheless, developments in the 1980s introduced dynamic programming algorithms that compute exact permutation tests in polynomial time. Despite this significant reduction in running time, the exact test has not yet become one of the predominant statistical tests for medium sample sizes. Here, we propose a computational parallelization of one such dynamic programming-based permutation test, the Green algorithm, which makes the permutation test more attractive. Results Parallelization of the Green algorithm was found to be possible through a non-trivial rearrangement of the structure of the algorithm. A speed-up of orders of magnitude is achievable by executing the parallelized algorithm on a GPU. We demonstrate that the execution time essentially becomes a non-issue, even for sample sizes as high as hundreds of samples. This improvement makes our method an attractive alternative to, e.g., the widely used asymptotic Mann-Whitney U-test. Availability and implementation Python 3 code is available from the GitHub repository https://github.com/statisticalbiotechnology/parallelPermutationTest under an Apache 2.0 license. Supplementary information Supplementary data are available at Bioinformatics online.
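To illustrate the kind of polynomial-time counting a Green-type dynamic program performs, the sketch below computes an exact two-sided p-value for a rank-sum statistic by counting, for each subset size and rank sum, how many group assignments realize that sum. It is a plain serial NumPy version under a no-ties assumption, not the authors' GPU-parallelized implementation.

```python
# Exact permutation p-value for a rank-sum statistic via dynamic programming.
# Serial illustration of Green-style counting; not the paper's GPU code.
import numpy as np
from scipy.stats import rankdata

def exact_rank_sum_pvalue(x, y):
    """Exact two-sided p-value for the rank sum of group x vs. group y (assumes no ties)."""
    pooled = np.concatenate([x, y])
    ranks = rankdata(pooled).astype(int)
    m, max_sum = len(x), int(ranks.sum())
    observed = int(ranks[:m].sum())
    # count[k, s] = number of size-k subsets of the pooled ranks whose ranks sum to s
    count = np.zeros((m + 1, max_sum + 1))
    count[0, 0] = 1.0
    for r in ranks:
        for k in range(m, 0, -1):                      # descending k: each rank used at most once
            count[k, r:] += count[k - 1, :max_sum + 1 - r]
    null = count[m]                                    # unnormalised null distribution of the statistic
    total = null.sum()                                 # equals C(n, m)
    p_upper = null[observed:].sum() / total
    p_lower = null[:observed + 1].sum() / total
    return min(1.0, 2 * min(p_upper, p_lower))         # two-sided: double the smaller tail

# Example with two small samples
print(exact_rank_sum_pvalue(np.array([1.2, 3.4, 2.2]), np.array([4.1, 5.0, 6.3, 4.4])))
```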

