Visualizing risk factors of dementia from scholarly literature using knowledge maps and next-generation data models

Author(s):  
Kiran Fahd ◽  
Sitalakshmi Venkatraman

Abstract: Scholarly communication of knowledge is predominantly document-based in digital repositories, and researchers find it tedious to automatically capture and process the semantics among related articles. Despite the present digital era of big data, there is a lack of visual representations of the knowledge present in scholarly articles, and a time-saving approach for literature search and visual navigation is warranted. The majority of knowledge display tools cannot cope with current big data trends and fall short of the requirements of automatic knowledge representation, storage, and dynamic visualization. To address this limitation, the main aim of this paper is to model the visualization of unstructured data and explore the feasibility of achieving visual navigation for researchers to gain insight into the knowledge hidden in scientific articles of digital repositories. Contemporary topics of research and practice, including modifiable risk factors leading to a dramatic increase in Alzheimer’s disease and other forms of dementia, warrant deeper insight into the evidence-based knowledge available in the literature. The goal is to provide researchers with a visual-based, easy traversal through a digital repository of research articles. This paper takes the first step in proposing a novel integrated model using knowledge maps and next-generation graph datastores to achieve semantic visualization with domain-specific knowledge, such as dementia risk factors. The model facilitates a deep conceptual understanding of the literature by automatically establishing visual relationships among the knowledge extracted from the big data resources of research articles. It also serves as an automated tool for visual navigation through the knowledge repository for faster identification of dementia risk factors reported in scholarly articles. Further, it facilitates semantic visualization and the discovery of domain-specific knowledge and its associations from a large digital repository. In this study, the implementation of the proposed model in the Neo4j graph data repository, along with the results achieved, is presented as a proof of concept. Using scholarly research articles on dementia risk factors as a case study, automatic knowledge extraction, storage, intelligent search, and visual navigation are illustrated. The implementation of contextual knowledge and its relationships for visual exploration by researchers shows promising results in the knowledge discovery of dementia risk factors. Overall, this study demonstrates the significance of semantic visualization with the effective use of knowledge maps and paves the way for extending visual modeling capabilities in the future.
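As a rough illustration of the graph storage and intelligent search the model builds on, the following sketch loads article/risk-factor relationships into Neo4j and retrieves the articles reporting a given factor. This is a minimal sketch, not the authors' implementation: it assumes a local Neo4j instance, the official neo4j Python driver (v5), and hypothetical node labels (Article, RiskFactor) and relationship type (MENTIONS).

```python
# Minimal sketch (not the paper's code): storing extracted article/risk-factor
# knowledge as a graph in Neo4j and querying it for navigation.
# Assumes a local Neo4j instance at bolt://localhost:7687 and hypothetical
# labels/relationships (Article, RiskFactor, MENTIONS).
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def add_mention(tx, title, year, factor):
    # MERGE avoids duplicate article and risk-factor nodes.
    tx.run(
        "MERGE (a:Article {title: $title, year: $year}) "
        "MERGE (r:RiskFactor {name: $factor}) "
        "MERGE (a)-[:MENTIONS]->(r)",
        title=title, year=year, factor=factor,
    )

def articles_for_factor(tx, factor):
    result = tx.run(
        "MATCH (a:Article)-[:MENTIONS]->(:RiskFactor {name: $factor}) "
        "RETURN a.title AS title ORDER BY a.year DESC",
        factor=factor,
    )
    return [record["title"] for record in result]

with driver.session() as session:
    session.execute_write(add_mention, "Example dementia study", 2019, "hypertension")
    print(session.execute_read(articles_for_factor, "hypertension"))
driver.close()
```

In the paper's setting, such relationships would be produced by automatic knowledge extraction over the article repository rather than written by hand, and the resulting graph can then be browsed visually, e.g. in the Neo4j Browser.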

Information ◽  
2020 ◽  
Vol 11 (7) ◽  
pp. 341
Author(s):  
Vibhuti Gupta ◽  
Rattikorn Hewett

Twitter is a microblogging platform that generates large volumes of data at high velocity. This daily generation of unbounded and continuous data leads to Big Data streams that often require real-time, distributed, and fully automated processing. Hashtags, hyperlinked words in tweets, are widely used for tweet topic classification, retrieval, and clustering, and for analyzing tweet sentiment, where emotions can be classified without context. However, despite their wide usage, general tweet topic classification using hashtags is challenging due to their evolving nature, lack of context, slang, abbreviations, and non-standardized expression by users. Most existing approaches that utilize hashtags for tweet topic classification focus on extracting hashtag concepts from external lexicon resources to derive semantics. However, due to the rapid evolution and non-standardized expression of hashtags, the majority of these lexicon resources either lack hashtag words in their knowledge bases or must combine multiple resources to derive semantics, which makes them unscalable. Along with scalable and automated techniques for tweet topic classification using hashtags, real-time analytics approaches are also required to handle the huge and dynamic flows of textual streams generated by Twitter. To address these problems, this paper first presents a novel semi-automated technique that derives semantically relevant hashtags using a domain-specific knowledge base of topic concepts and combines them with the existing tweet-based hashtags to produce Hybrid Hashtags. Further, to deal with the speed and volume of Big Data streams of tweets, we present an online approach that updates the preprocessing and learning model incrementally in a real-time streaming environment using the distributed framework Apache Storm. Finally, to fully exploit the performance advantages of both batch and stream environments, we propose a comprehensive framework, the Hybrid Hashtag-based Tweet topic classification (HHTC) framework, that combines batch and online mechanisms in the most effective way. Extensive experimental evaluations on a large volume of Twitter data show that the batch and online mechanisms, along with their combination in the proposed framework, are scalable, efficient, and provide effective tweet topic classification using hashtags.
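To make the hybrid-hashtag idea concrete, the sketch below expands the hashtags found in a tweet with topic concepts from a small, hypothetical domain knowledge base and feeds the result to an incrementally updated classifier, mirroring the online (streaming) mode. It is an illustrative approximation rather than the HHTC implementation; the knowledge base, feature size, and scikit-learn components are assumptions.

```python
# Illustrative sketch (not the HHTC implementation): hybrid hashtags +
# incremental (online) learning over a stream of tweets.
import re
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

# Hypothetical domain-specific knowledge base: hashtag -> topic concepts.
KNOWLEDGE_BASE = {
    "covid": ["health", "pandemic"],
    "nba": ["sports", "basketball"],
}

def hybrid_hashtags(tweet):
    # Combine the tweet's own hashtags with knowledge-base concepts.
    tags = re.findall(r"#(\w+)", tweet.lower())
    expanded = [concept for tag in tags for concept in KNOWLEDGE_BASE.get(tag, [])]
    return " ".join(tags + expanded)

# A hashing vectorizer needs no fitted vocabulary, which suits unbounded streams.
vectorizer = HashingVectorizer(n_features=2 ** 18, alternate_sign=False)
classifier = SGDClassifier()
TOPICS = ["health", "sports"]

def learn_batch(tweets, labels):
    X = vectorizer.transform(hybrid_hashtags(t) for t in tweets)
    classifier.partial_fit(X, labels, classes=TOPICS)  # incremental update

def predict(tweet):
    return classifier.predict(vectorizer.transform([hybrid_hashtags(tweet)]))[0]

learn_batch(["Stay safe #covid", "Great game #nba"], ["health", "sports"])
print(predict("New #covid guidelines announced"))
```

In the proposed framework, the equivalent per-tweet preprocessing and model update would run inside Apache Storm components so that learning keeps pace with the stream, while a batch variant covers historical data.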


2021 ◽  
Vol 3 ◽  
Author(s):  
Sabrina Luftensteiner ◽  
Michael Mayr ◽  
Georgios C. Chasparis ◽  
Mario Pichler

The number of sensors in the process industry is continuously increasing as sensors become faster, better, and cheaper. Due to the rising amount of available data, the processing of generated data has to be automated in a computationally efficient manner. Such a solution should also be easily implementable and reproducible, independently of the details of the application domain. This paper provides a suitable and versatile infrastructure that deals with Big Data in the process industry on various platforms, using efficient, fast, and modern technologies for data gathering, processing, storage, and visualization. Contrary to prior work, we provide an easy-to-use, easily reproducible, adaptable, and configurable Big Data management solution with a detailed implementation description that does not require expert or domain-specific knowledge. In addition to the infrastructure implementation, we focus on monitoring both infrastructure inputs and outputs, including incoming process data as well as model predictions and performance, thus allowing for early interventions and actions if problems occur.
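A minimal sketch of the monitoring idea, under assumed thresholds and window sizes rather than the paper's actual configuration: incoming sensor values and model prediction errors are tracked over sliding windows, and alerts are emitted when either drifts out of bounds.

```python
# Assumed design sketch: monitoring infrastructure inputs (sensor readings) and
# outputs (model prediction error) over sliding windows for early warnings.
from collections import deque
from statistics import mean

class StreamMonitor:
    def __init__(self, window=100, value_bounds=(0.0, 120.0), max_error=5.0):
        self.values = deque(maxlen=window)   # recent sensor readings
        self.errors = deque(maxlen=window)   # recent |prediction - actual|
        self.value_bounds = value_bounds
        self.max_error = max_error

    def observe(self, sensor_value, prediction, actual):
        self.values.append(sensor_value)
        self.errors.append(abs(prediction - actual))
        alerts = []
        lo, hi = self.value_bounds
        if not lo <= sensor_value <= hi:
            alerts.append(f"sensor value {sensor_value} outside [{lo}, {hi}]")
        if len(self.errors) == self.errors.maxlen and mean(self.errors) > self.max_error:
            alerts.append(f"mean prediction error {mean(self.errors):.2f} above threshold")
        return alerts

monitor = StreamMonitor(window=5, value_bounds=(0, 100), max_error=2.0)
print(monitor.observe(sensor_value=140, prediction=95, actual=90))
```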


2014 ◽  
Vol 10 (3) ◽  
pp. 249-261 ◽  
Author(s):  
Tessa Sanderson ◽  
Jo Angouri

The active involvement of patients in decision-making and the focus on patient expertise in managing chronic illness constitute a priority in many healthcare systems, including the NHS in the UK. With easier access to health information, patients are almost expected to be (or present themselves as) ‘expert patients’ (Ziebland 2004). This paper draws on the meta-analysis of interview data collected for identifying treatment outcomes important to patients with rheumatoid arthritis (RA). Taking a discourse approach to identity, the discussion focuses on the resources used in the negotiation and co-construction of expert identities, including domain-specific knowledge, access to institutional resources, and the ability to self-manage. The analysis shows that expertise is both projected (institutionally sanctioned) and claimed by the patient (self-defined). We close the paper by highlighting the limitations of our pilot study and suggesting avenues for further research.


2021 ◽  
pp. 135910532098831
Author(s):  
Zoe Brown ◽  
Marika Tiggemann

Celebrities are well-known individuals who receive extensive public and media attention. There is an increasing body of research on the effect of celebrities on body dissatisfaction and disordered eating. Yet, there has been no synthesis of the research findings. A systematic search for research articles on celebrities and body image or eating disorders resulted in 36 studies meeting inclusion criteria. Overall, the qualitative, correlational, big data, and experimental methodologies used in these studies demonstrated that exposure to celebrity images, appearance comparison, and celebrity worship are associated with maladaptive consequences for individuals’ body image.


2020 ◽  
Vol 20 (S10) ◽  
Author(s):  
Ankur Agrawal ◽  
Licong Cui

Abstract: Biological and biomedical ontologies and terminologies are used to organize and store various domain-specific knowledge to provide standardization of terminology usage and to improve interoperability. The growing number of such ontologies and terminologies and their increasing adoption in clinical, research, and healthcare settings call for effective and efficient quality assurance and semantic enrichment techniques for these ontologies and terminologies. In this editorial, we provide an introductory summary of nine articles included in this supplement issue on quality assurance and enrichment of biological and biomedical ontologies and terminologies. The articles cover a range of standards including SNOMED CT, the National Cancer Institute Thesaurus, the Unified Medical Language System, the North American Association of Central Cancer Registries, and OBO Foundry ontologies.


Semantic Web ◽  
2020 ◽  
pp. 1-45
Author(s):  
Valentina Anita Carriero ◽  
Aldo Gangemi ◽  
Maria Letizia Mancinelli ◽  
Andrea Giovanni Nuzzolese ◽  
Valentina Presutti ◽  
...  

Ontology Design Patterns (ODPs) have become an established and recognised practice for guaranteeing good quality ontology engineering. There are several ODP repositories where ODPs are shared, as well as ontology design methodologies recommending their reuse. Rigorous testing is also recommended for supporting ontology maintenance and validating the resulting resource against its motivating requirements. Nevertheless, it is less than straightforward to find guidelines on how to apply such methodologies to developing domain-specific knowledge graphs. ArCo is the knowledge graph of Italian Cultural Heritage (CH) and has been developed using eXtreme Design (XD), an ODP- and test-driven methodology. During its development, XD was adapted to the needs of the CH domain: for example, requirements were gathered from an open, diverse community of consumers, a new ODP was defined, and many were specialised to address specific CH requirements. This paper presents ArCo and describes how to apply XD to the development and validation of a CH knowledge graph, also detailing the (intellectual) process implemented for matching the encountered modelling problems to ODPs. Relevant contributions also include a novel web tool for supporting unit-testing of knowledge graphs, a rigorous evaluation of ArCo, and a discussion of methodological lessons learned during ArCo’s development.
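To illustrate what unit-testing a knowledge graph can look like in practice, the sketch below turns a competency question into a SPARQL ASK query and runs it with rdflib against a tiny sample graph. The namespace, classes, and properties are hypothetical placeholders, not ArCo's actual vocabulary or the web tool described in the paper.

```python
# Hypothetical example: a competency question ("does every cultural property
# have an author?") expressed as a SPARQL ASK query and used as a unit test.
from rdflib import Graph

SAMPLE_DATA = """
@prefix ex: <http://example.org/arco-demo/> .
ex:painting1 a ex:CulturalProperty ;
    ex:hasAuthor ex:artist1 .
ex:artist1 a ex:Agent .
"""

# The ASK query succeeds only if some cultural property lacks an author,
# so the test passes when the answer is False.
ASK_QUERY = """
PREFIX ex: <http://example.org/arco-demo/>
ASK {
  ?cp a ex:CulturalProperty .
  FILTER NOT EXISTS { ?cp ex:hasAuthor ?author }
}
"""

graph = Graph()
graph.parse(data=SAMPLE_DATA, format="turtle")

violation_found = graph.query(ASK_QUERY).askAnswer
assert violation_found is False, "competency question violated"
print("unit test passed")
```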

