Ontological Collaboration Engineering

Author(s):  
Stefan Werner Knoll ◽  
Till Plumbaum ◽  
Ernesto William De Luca ◽  
Livia Predoiu

This chapter gives a comprehensive overview of ongoing research on semantic approaches for Collaboration Engineering. The authors present a new ontology-based approach to collecting, managing, and sharing collaborative knowledge, in which each concept of the ontology corresponds to a specific collaboration step or resource. The chapter discusses the utility of the proposed ontology in the context of a real-world example, where the authors explain how collaboration can be modelled and applied using their ontology in order to improve the collaboration process. Furthermore, they discuss how well-known ontologies, such as FOAF, can be linked to their ontology and extend it. While the focus of the chapter is on semantic Collaboration Engineering, the authors additionally present methods of reasoning and machine learning to derive new knowledge about the collaboration process as a further research direction.
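The kind of linking the chapter describes, connecting a collaboration-ontology concept to the FOAF vocabulary, can be sketched with a minimal triple store. All the collaboration-side names here (`ex:BrainstormStep`, `ex:hasParticipant`, `ex:alice`) are hypothetical illustrations; only the FOAF IRIs are the well-known vocabulary terms.

```python
# Illustrative sketch: linking a hypothetical collaboration-ontology concept
# to FOAF, using plain (subject, predicate, object) triples.

FOAF = "http://xmlns.com/foaf/0.1/"
EX = "http://example.org/collab#"  # hypothetical namespace for the sketch

triples = [
    (EX + "BrainstormStep", EX + "hasParticipant", EX + "alice"),
    (EX + "alice", "rdf:type", FOAF + "Person"),
    (EX + "alice", FOAF + "name", "Alice"),
]

def participants(step, graph):
    """Return the FOAF names of all participants of a collaboration step."""
    people = [o for s, p, o in graph if s == step and p == EX + "hasParticipant"]
    return [o2 for s2, p2, o2 in graph
            if s2 in people and p2 == FOAF + "name"]

print(participants(EX + "BrainstormStep", triples))  # ['Alice']
```

In a real system the same query would run against an RDF store, but the principle is the same: because the participant is typed as a `foaf:Person`, any FOAF-aware tool can reuse the collaboration data.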

2016 ◽  
Vol 31 (3) ◽  
pp. 278-321 ◽  
Author(s):  
Fu Zhang ◽  
Jingwei Cheng ◽  
Zongmin Ma

Abstract Ontology, as a standard (a World Wide Web Consortium recommendation) for representing knowledge in the Semantic Web, has become a fundamental and critical component for developing applications in different real-world scenarios. However, it is widely pointed out that the classical ontology model is not sufficient to deal with the imprecise and vague knowledge that strongly characterizes some real-world applications. Thus, a requirement to extend ontologies naturally arises in many practical applications of knowledge-based systems, in particular the Semantic Web. To provide the necessary means to handle such vague and imprecise information, many fuzzy extensions to ontologies have been proposed, and the literature on fuzzy ontologies has been flourishing. To investigate fuzzy ontologies and, more importantly, help readers grasp their main ideas and results, and to highlight ongoing research on fuzzy approaches for semantic knowledge representation based on ontologies, as well as their applications in various domains, in this paper we provide a comprehensive overview of fuzzy ontologies. In detail, we first introduce fuzzy ontologies from the most common aspects: representation (including categories, formal definitions, representation languages, and tools of fuzzy ontologies), reasoning (including reasoning techniques and reasoners), and applications (the most relevant applications of fuzzy ontologies). Then, other important issues on fuzzy ontologies, such as construction, mapping, integration, query, storage, evaluation, extension, and directions for future research, are also discussed in detail. We also make comparisons and analyses throughout the review.
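The core idea behind the fuzzy extensions this survey covers is that an individual belongs to a concept to a degree in [0, 1] rather than crisply. A minimal sketch, with hypothetical concepts and individuals, and concept conjunction interpreted with the Gödel t-norm (minimum), one common choice in fuzzy description logics:

```python
# Minimal sketch of fuzzy concept assertions. The individuals and concepts
# ("mary", "TallPerson", "YoungPerson") are illustrative assumptions.

memberships = {
    ("mary", "TallPerson"): 0.8,   # mary is tall to degree 0.8
    ("mary", "YoungPerson"): 0.6,  # mary is young to degree 0.6
}

def degree(individual, concept):
    """Membership degree of an individual in a concept (0.0 if unasserted)."""
    return memberships.get((individual, concept), 0.0)

def conj(individual, *concepts):
    # Gödel t-norm: the degree of (C1 ⊓ C2 ⊓ ...) is the minimum of the degrees.
    return min(degree(individual, c) for c in concepts)

print(conj("mary", "TallPerson", "YoungPerson"))  # 0.6
```

Other t-norms (product, Łukasiewicz) give different conjunction semantics; which one applies depends on the fuzzy ontology language in question.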


Author(s):  
G. Dheepak ◽  
Dr. D. Vaishali

Machine learning (ML), a subdivision of artificial intelligence (AI) and computer science, utilises data and algorithms to simulate the way people learn, improving its accuracy over time. Within AI, ML is a relatively recent domain that involves studying computational methods for discovering new knowledge and managing existing knowledge. Methods of machine learning have been applied to a diversity of application domains. However, in recent years, as a result of various technological advancements and research efforts, new data has become available, resulting in new domains in which machine learning can be applied. This paper introduces the definition of machine learning and its basic structure. Machine learning algorithms are used for various purposes, including data mining, image processing, and predictive analytics. The primary benefit of using machine learning is that once an algorithm learns what to do with data, it can do so automatically. This survey provides a brief outline of, and outlook on, numerous machine learning applications.


Author(s):  
Tausifa Jan Saleem ◽  
Mohammad Ahsan Chishti

The rapid progress in domains like machine learning and big data has created plenty of opportunities in data-driven applications, particularly healthcare. Incorporating machine intelligence in healthcare can result in breakthroughs like precise disease diagnosis, novel methods of treatment, remote healthcare monitoring, drug discovery, and curtailment of healthcare costs. The implementation of machine intelligence algorithms on massive healthcare datasets is computationally expensive. However, substantial progress in computational power during recent years has facilitated the deployment of machine intelligence algorithms in healthcare applications. Motivated to explore these applications, this paper presents a review of research works dedicated to the implementation of machine learning on healthcare datasets. The studies reviewed have been categorized into the following groups: (a) disease diagnosis and detection, (b) disease risk prediction, (c) health monitoring, (d) healthcare-related discoveries, and (e) epidemic outbreak prediction. The objective of the research is to help researchers in this field get a comprehensive overview of machine learning applications in healthcare. Apart from revealing the potential of machine learning in healthcare, this paper will serve as a motivation to foster advanced research in the domain of machine intelligence-driven healthcare.


2021 ◽  
Vol 186 (Supplement_1) ◽  
pp. 445-451
Author(s):  
Yifei Sun ◽  
Navid Rashedi ◽  
Vikrant Vaze ◽  
Parikshit Shah ◽  
Ryan Halter ◽  
...  

ABSTRACT
Introduction: Early prediction of the acute hypotensive episode (AHE) in critically ill patients has the potential to improve outcomes. In this study, we apply different machine learning algorithms to the MIMIC III Physionet dataset, containing more than 60,000 real-world intensive care unit records, to test commonly used machine learning technologies and compare their performances.
Materials and Methods: Five classification methods, including K-nearest neighbor, logistic regression, support vector machine, random forest, and a deep learning method called long short-term memory, are applied to predict an AHE 30 minutes in advance. An analysis comparing model performance when including versus excluding invasive features was conducted. To further study the pattern of the underlying mean arterial pressure (MAP), we apply linear regression to predict the continuous MAP values over the next 60 minutes.
Results: The support vector machine yields the best performance in terms of recall (84%). Including the invasive features in the classification improves the performance significantly, with both recall and precision increasing by more than 20 percentage points. We were able to predict the MAP 60 minutes in the future with a root mean square error (a frequently used measure of the differences between predicted and observed values) of 10 mmHg. After converting continuous MAP predictions into binary AHE predictions, we achieve 91% recall and 68% precision. In addition to predicting AHE, the MAP predictions provide clinically useful information regarding the timing and severity of the AHE occurrence.
Conclusion: With this large real-world dataset, we were able to predict AHE 30 minutes in advance with precision and recall above 80%. The predictions of the regression model can provide a more fine-grained, interpretable signal to practitioners. Model performance improves when invasive features are included, compared to predicting AHE from only the available, restricted set of noninvasive technologies. This demonstrates the importance of exploring more noninvasive technologies for AHE prediction.
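The conversion step the abstract describes, turning continuous MAP predictions into binary AHE alerts and scoring them, can be sketched as follows. The 60 mmHg threshold and the toy series are assumptions for illustration, not the authors' exact protocol.

```python
# Hedged sketch: converting continuous MAP predictions (mmHg) into binary
# AHE labels and computing precision/recall against ground truth.

AHE_THRESHOLD = 60  # mmHg; assumed threshold below which an episode is flagged

def to_ahe_labels(map_series, threshold=AHE_THRESHOLD):
    """1 = AHE predicted for this interval, 0 = no episode."""
    return [1 if v < threshold else 0 for v in map_series]

def precision_recall(pred, true):
    tp = sum(p and t for p, t in zip(pred, true))
    fp = sum(p and not t for p, t in zip(pred, true))
    fn = sum((not p) and t for p, t in zip(pred, true))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return prec, rec

predicted_map = [75, 68, 59, 55, 62, 58]  # toy regression output
true_ahe =      [0,  0,  1,  1,  0,  1]   # toy ground-truth labels
pred = to_ahe_labels(predicted_map)       # [0, 0, 1, 1, 0, 1]
print(precision_recall(pred, true_ahe))   # (1.0, 1.0)
```

The advantage over a direct classifier, as the abstract notes, is that the underlying MAP trajectory also conveys how deep and how soon the hypotensive dip is expected to be.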


2021 ◽  
Vol 51 (3) ◽  
pp. 9-16
Author(s):  
José Suárez-Varela ◽  
Miquel Ferriol-Galmés ◽  
Albert López ◽  
Paul Almasan ◽  
Guillermo Bernárdez ◽  
...  

During the last decade, Machine Learning (ML) has increasingly become a hot topic in the field of Computer Networks and is expected to be gradually adopted for a plethora of control, monitoring, and management tasks in real-world deployments. This creates the need for new generations of students, researchers, and practitioners with a solid background in ML applied to networks. During 2020, the International Telecommunication Union (ITU) organized the "ITU AI/ML in 5G Challenge", an open global competition that introduced a broad audience to some of the current main challenges in ML for networks. This large-scale initiative gathered 23 different challenges proposed by network operators, equipment manufacturers, and academia, and attracted a total of 1300+ participants from 60+ countries. This paper narrates our experience organizing one of the proposed challenges: the "Graph Neural Networking Challenge 2020". We describe the problem presented to participants, the tools and resources provided, some organizational aspects and participation statistics, an outline of the top-3 awarded solutions, and a summary of lessons learned along the way. As a result, this challenge leaves a curated set of educational resources openly available to anyone interested in the topic.


Animals ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 1549
Author(s):  
Robert D. Chambers ◽  
Nathanael C. Yoder ◽  
Aletha B. Carson ◽  
Christian Junge ◽  
David E. Allen ◽  
...  

Collar-mounted canine activity monitors can use accelerometer data to estimate dog activity levels, step counts, and distance traveled. With recent advances in machine learning and embedded computing, much more nuanced and accurate behavior classification has become possible, giving these affordable consumer devices the potential to improve the efficiency and effectiveness of pet healthcare. Here, we describe a novel deep learning algorithm that classifies dog behavior at sub-second resolution using commercial pet activity monitors. We built machine learning training databases from more than 5000 videos of more than 2500 dogs and ran the algorithms in production on more than 11 million days of device data. We then surveyed project participants representing 10,550 dogs, who provided 163,110 event responses to validate real-world detection of eating and drinking behavior. The resultant algorithm displayed high sensitivity and specificity for detecting drinking behavior (0.949 and 0.999, respectively) and eating behavior (0.988, 0.983). We also demonstrated detection of licking (0.772, 0.990), petting (0.305, 0.991), rubbing (0.729, 0.996), scratching (0.870, 0.997), and sniffing (0.610, 0.968). We show that the device's position on the collar had no measurable impact on performance. In production, users reported a true positive rate of 95.3% for eating (among 1514 users) and of 94.9% for drinking (among 1491 users). The study demonstrates the accurate detection of important health-related canine behaviors using a collar-mounted accelerometer. We trained and validated our algorithms on a large and realistic training dataset, and we assessed and confirmed accuracy in production via user validation.
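Sub-second behavior classification from accelerometer streams generally means segmenting the signal into short windows and classifying each window. A minimal sketch of that pipeline; the window length, the mean-magnitude feature, and the threshold rule are stand-ins for the paper's deep learning model and are purely illustrative:

```python
# Illustrative sketch: windowed classification of (x, y, z) accelerometer
# samples. A real system would feed each window to a trained neural network.

def windows(samples, size):
    """Split a stream of (x, y, z) samples into consecutive fixed-size windows."""
    return [samples[i:i + size] for i in range(0, len(samples) - size + 1, size)]

def mean_magnitude(window):
    """Average acceleration magnitude over one window."""
    return sum((x*x + y*y + z*z) ** 0.5 for x, y, z in window) / len(window)

def classify(window, threshold=1.5):
    # Placeholder rule: high average motion -> "active", else "still".
    return "active" if mean_magnitude(window) > threshold else "still"

stream = [(0.1, 0.1, 1.0)] * 4 + [(1.5, 1.2, 0.8)] * 4  # toy 8-sample stream
print([classify(w) for w in windows(stream, 4)])  # ['still', 'active']
```

At a typical sampling rate of tens of hertz, a window of a few dozen samples yields the sub-second resolution the abstract mentions.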


Entropy ◽  
2021 ◽  
Vol 23 (4) ◽  
pp. 460
Author(s):  
Samuel Yen-Chi Chen ◽  
Shinjae Yoo

Distributed training across several quantum computers could significantly improve training time, and if we could share the learned model rather than the data, it could potentially improve data privacy, as training would happen where the data are located. One potential scheme to achieve this is federated learning (FL), in which several clients or local nodes learn on their own data and a central node aggregates the models collected from those local nodes. However, to the best of our knowledge, no work has yet been done on quantum machine learning (QML) in a federated setting. In this work, we present federated training of hybrid quantum-classical machine learning models, although our framework could be generalized to pure quantum machine learning models. Specifically, we consider a quantum neural network (QNN) coupled with a classical pre-trained convolutional model. Our distributed federated learning scheme achieved almost the same trained-model accuracy while delivering significantly faster distributed training. It demonstrates a promising future research direction for scaling and privacy.
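The central aggregation step in federated learning, the part that lets nodes share models instead of data, reduces to averaging parameters collected from the clients (FedAvg-style, with equal client weighting assumed). A minimal sketch with toy parameter vectors:

```python
# Sketch of FedAvg-style aggregation: each client trains on its own private
# data and sends only model parameters; the central node averages them.
# The parameter vectors below are toy values, not a trained model.

def federated_average(client_weights):
    """Element-wise average of equally weighted client parameter vectors."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

client_a = [0.2, 0.4, 0.6]  # parameters learned on client A's private data
client_b = [0.4, 0.2, 1.0]  # parameters learned on client B's private data
global_model = federated_average([client_a, client_b])
print([round(w, 3) for w in global_model])  # [0.3, 0.3, 0.8]
```

In the hybrid quantum-classical setting the abstract describes, the averaged parameters would be the trainable angles of the QNN circuit, but the aggregation logic on the central node is the same.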


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Alan Brnabic ◽  
Lisa M. Hess

Abstract
Background: Machine learning is a broad term encompassing a number of methods that allow the investigator to learn from the data. These methods may permit large real-world databases to be more rapidly translated into applications that inform patient-provider decision making.
Methods: This systematic literature review was conducted to identify published observational research that employed machine learning to inform decision making at the patient-provider level. The search strategy was implemented, and studies meeting eligibility criteria were evaluated by two independent reviewers. Relevant data related to study design, statistical methods, and strengths and limitations were identified; study quality was assessed using a modified version of the Luo checklist.
Results: A total of 34 publications from January 2014 to September 2020 were identified and evaluated for this review. Diverse methods, statistical packages, and approaches were used across the identified studies. The most common methods included decision tree and random forest approaches. Most studies applied internal validation, but only two conducted external validation. Most studies utilized one algorithm, and only eight studies applied multiple machine learning algorithms to the data. Seven items on the Luo checklist failed to be met by more than 50% of published studies.
Conclusions: A wide variety of approaches, algorithms, statistical software, and validation strategies were employed in the application of machine learning methods to inform patient-provider decision making. Multiple machine learning approaches should be used, the model selection strategy should be clearly defined, and both internal and external validation are needed to ensure that decisions for patient care are made with the highest quality evidence. Future work should routinely employ ensemble methods incorporating multiple machine learning algorithms.
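The review's closing recommendation, combining multiple machine learning algorithms rather than relying on one, can be sketched as a simple majority-vote ensemble. The three "models" below are stand-in threshold rules; in practice they would be, for example, a decision tree, a random forest, and a logistic regression fitted to the same data:

```python
# Illustrative majority-vote ensemble over stand-in base classifiers.

def majority_vote(models, x):
    """Ensemble prediction: the label chosen by the most base models."""
    votes = [m(x) for m in models]
    return max(set(votes), key=votes.count)

# Hypothetical base classifiers, each a crude decision rule on one feature.
model_tree = lambda x: 1 if x > 0.5 else 0
model_forest = lambda x: 1 if x > 0.4 else 0
model_logit = lambda x: 1 if x > 0.7 else 0

ensemble = [model_tree, model_forest, model_logit]
print(majority_vote(ensemble, 0.6))  # 1 (two of three models vote positive)
print(majority_vote(ensemble, 0.3))  # 0 (all three models vote negative)
```

Even this toy version shows why ensembles help: a single miscalibrated threshold is outvoted by the other models, which is the robustness argument behind the review's recommendation.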

