The Odyssey Project – Understanding and Implementing User Needs in the Context of Ballistic Crime Data Exchange

Author(s):  
Simeon J. Yates ◽  
Chris Bates ◽  
Babak Akhgar ◽  
Lucasz Jopek ◽  
Richard Wilson ◽  
...  
2021 ◽  
Vol 10 (2) ◽  
pp. 170-175
Author(s):  
I Putu Agus Eka Pratama ◽  
Kevin Christopher Bakkara

The development of information technology and computer networks continues to grow, along with increasing user needs in business, education, industry, and data security. Increasingly dense network traffic from communication and data exchange between users can become a problem when conventional computer network technology is used. This calls for a new technology to be implemented in computer networks, together with measurement of its Quality of Service (QoS). Software-Defined Networking (SDN) is such a solution: in the stages of network design, management, and implementation, it separates the data plane from the control plane. In this research, SDN was implemented as a simulation using Mininet and OpenDaylight with a tree topology, and QoS measurements were then carried out on it. Testing and measuring QoS on the SDN simulation with a tree topology using Mininet and OpenDaylight showed a jitter value of 0.425 ms, a packet loss value of 0.266%, a bandwidth value of 9.3925 Mbps, a UDP throughput value of 2.348 bits/sec, and a TCP throughput value of 2.335 bits/sec.
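As a rough illustration of how such a simulation can be set up, the sketch below uses Mininet's Python API with a built-in tree topology and a remote OpenDaylight controller. It is a minimal sketch, not the authors' exact setup: the controller address and OpenFlow port (6633; newer OpenDaylight releases often use 6653), the tree depth and fanout, and the host names are assumptions, and jitter and packet loss would normally be measured with separate iperf UDP runs rather than the single throughput call shown.

```python
#!/usr/bin/env python
# Minimal sketch: Mininet tree topology attached to an external OpenDaylight controller.
# Assumptions: OpenDaylight listens for OpenFlow on 127.0.0.1:6633; depth/fanout are illustrative.
from mininet.net import Mininet
from mininet.node import RemoteController, OVSSwitch
from mininet.topolib import TreeTopo
from mininet.log import setLogLevel

def run():
    topo = TreeTopo(depth=2, fanout=2)              # tree topology: 3 switches, 4 hosts
    net = Mininet(topo=topo, switch=OVSSwitch,
                  controller=None, autoSetMacs=True)
    net.addController('odl', controller=RemoteController,
                      ip='127.0.0.1', port=6633)    # point switches at OpenDaylight
    net.start()
    net.pingAll()                                   # basic connectivity check

    h1, h4 = net.get('h1', 'h4')                    # two leaf hosts in the tree
    # Rough TCP throughput between two leaf hosts; UDP jitter/packet loss would
    # typically be gathered with a separate iperf UDP measurement.
    net.iperf((h1, h4), l4Type='TCP')
    net.stop()

if __name__ == '__main__':
    setLogLevel('info')
    run()
```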


2020 ◽  
Vol 51 (2) ◽  
pp. 479-493
Author(s):  
Jenny A. Roberts ◽  
Evelyn P. Altenberg ◽  
Madison Hunter

Purpose: The results of automatic machine scoring of the Index of Productive Syntax from the Computerized Language ANalysis (CLAN) tools of the Child Language Data Exchange System of TalkBank (MacWhinney, 2000) were compared to manual scoring to determine the accuracy of the machine-scored method. Method: Twenty transcripts of 10 children from archival data of the Weismer Corpus from the Child Language Data Exchange System at 30 and 42 months were examined. Measures of absolute point difference and point-to-point accuracy were compared, as well as points erroneously given and missed. Two new measures for evaluating automatic scoring of the Index of Productive Syntax were introduced: Machine Item Accuracy (MIA) and Cascade Failure Rate; these measures further analyze points erroneously given and missed. Differences in total scores, subscale scores, and individual structures were also reported. Results: Mean absolute point difference between machine and hand scoring was 3.65, point-to-point agreement was 72.6%, and MIA was 74.9%. There were large differences in subscales, with Noun Phrase and Verb Phrase subscales generally providing greater accuracy and agreement than Question/Negation and Sentence Structures subscales. There were significantly more erroneous than missed items in machine scoring, attributed to problems of mistagging of elements, imprecise search patterns, and other errors. Cascade failure resulted in an average of 4.65 points lost per transcript. Conclusions: The CLAN program showed relatively inaccurate outcomes in comparison to manual scoring on both traditional and new measures of accuracy. Recommendations for improvement of the program include accounting for second exemplar violations and applying cascaded credit, among other suggestions. It was proposed that research on machine-scored syntax routinely report accuracy measures detailing erroneous and missed scores, including MIA, so that researchers and clinicians are aware of the limitations of a machine-scoring program. Supplemental Material: https://doi.org/10.23641/asha.11984364
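As an illustration of the traditional agreement measures referred to above, the following hypothetical Python sketch (not part of CLAN, and not the authors' scoring code) computes an absolute point difference and a point-to-point agreement proportion between machine and hand IPSyn item scores; the item codes and score values are invented for the example.

```python
# Hypothetical illustration of two traditional accuracy measures for IPSyn scoring.
def absolute_point_difference(machine_total, hand_total):
    """Absolute difference between machine and hand total IPSyn scores."""
    return abs(machine_total - hand_total)

def point_to_point_agreement(machine_items, hand_items):
    """Proportion of IPSyn items (0/1/2 points) on which the two methods agree.

    `machine_items` and `hand_items` map item codes (e.g. "N1", "V5") to the
    points awarded for one transcript.
    """
    items = machine_items.keys() | hand_items.keys()
    agree = sum(machine_items.get(i, 0) == hand_items.get(i, 0) for i in items)
    return agree / len(items)

# Example with made-up scores for a few items:
machine = {"N1": 2, "N2": 2, "V1": 1, "Q1": 0}
hand    = {"N1": 2, "N2": 1, "V1": 1, "Q1": 2}
print(absolute_point_difference(sum(machine.values()), sum(hand.values())))  # 1
print(point_to_point_agreement(machine, hand))                               # 0.5
```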


Author(s):  
Scot D. Weaver ◽  
Thomas E. Lefchik ◽  
Marc I. Hoit ◽  
Kirk Beach

2014 ◽  
Vol 61 (1) ◽  
pp. 57-74 ◽  
Author(s):  
AS Kiem ◽  
DC Verdon-Kidd ◽  
EK Austin

2016 ◽  
Vol 0 (2) ◽  
Author(s):  
Oleksandr V. Blintsov ◽  
Viktor I. Korytskyi

2009 ◽  
Vol 17 (3) ◽  
pp. 333-334 ◽  
Author(s):  
David Edwards

Author(s):  
Markus Krötzsch

To reason with existential rules (a.k.a. tuple-generating dependencies), one often computes universal models. Among the many such models of different structure and cardinality, the core is arguably the “best”. Especially for finitely satisfiable theories, where the core is the unique smallest universal model, it has advantages in query answering, non-monotonic reasoning, and data exchange. Unfortunately, computing cores is difficult and not supported by most reasoners. We therefore propose ways of computing cores using practically implemented methods from rule reasoning and answer set programming. Our focus is on cases where the standard chase algorithm produces a core. We characterise this desirable situation in general terms that apply to a large class of cores, derive concrete approaches for decidable special cases, and generalise these approaches to non-monotonic extensions of existential rules.
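The standard chase referred to here can be sketched in a few lines. The following is a minimal, hypothetical Python illustration (not the paper's implementation) of a restricted/standard chase over sets of ground atoms: a rule fires only when its head is not already satisfied by some extension of the body match, and each existential variable is replaced by a fresh labeled null. The atom encoding (tuples of strings, variables prefixed with "?"), the predicate names in the example, and the round cut-off are all assumptions made for illustration; termination is not guaranteed in general.

```python
# Minimal sketch of a standard (restricted) chase for existential rules.
# Atoms are tuples of strings, e.g. ("worksIn", "?x", "?d"); variables start with "?".
# A rule is (body_atoms, head_atoms); head variables absent from the body are existential.
from itertools import count

_nulls = count()

def match(pattern, fact, subst):
    """Extend `subst` so that `pattern` maps onto `fact`, or return None."""
    if pattern[0] != fact[0] or len(pattern) != len(fact):
        return None
    s = dict(subst)
    for p, f in zip(pattern[1:], fact[1:]):
        if p.startswith("?"):
            if p in s and s[p] != f:
                return None
            s[p] = f
        elif p != f:
            return None
    return s

def homomorphisms(atoms, facts, subst=None):
    """Yield every substitution mapping all `atoms` into the set of `facts`."""
    subst = {} if subst is None else subst
    if not atoms:
        yield subst
        return
    first, rest = atoms[0], atoms[1:]
    for fact in list(facts):
        s = match(first, fact, subst)
        if s is not None:
            yield from homomorphisms(rest, facts, s)

def instantiate_head(head, subst):
    """Instantiate head atoms, inventing a fresh labeled null per existential variable."""
    s = dict(subst)
    out = []
    for atom in head:
        args = []
        for a in atom[1:]:
            if a.startswith("?") and a not in s:
                s[a] = "_:n%d" % next(_nulls)   # fresh labeled null
            args.append(s.get(a, a))
        out.append((atom[0],) + tuple(args))
    return out

def chase(facts, rules, max_rounds=100):
    """Restricted chase: fire a trigger only if its head is not already satisfied."""
    facts = set(facts)
    for _ in range(max_rounds):
        changed = False
        for body, head in rules:
            for s in list(homomorphisms(body, facts)):
                if next(homomorphisms(head, facts, s), None) is not None:
                    continue                     # head already satisfied: skip trigger
                facts.update(instantiate_head(head, s))
                changed = True
        if not changed:
            return facts                         # fixpoint: a universal model (not necessarily the core)
    return facts                                 # cut off; the chase need not terminate in general

# Example: every employee works in some department; anything worked in is a department.
rules = [
    ((("employee", "?x"),), (("worksIn", "?x", "?d"), ("dept", "?d"))),
    ((("worksIn", "?x", "?d"),), (("dept", "?d"),)),
]
print(chase({("employee", "alice")}, rules))
# {('employee', 'alice'), ('worksIn', 'alice', '_:n0'), ('dept', '_:n0')}
```

On this small example the restricted chase terminates and its result happens to be a core; in general, as the abstract notes, the chase result is a universal model that may strictly contain the core.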

