Knowledge Management in Big Data Times for Global Health

2022 ◽  
pp. 149-163
Author(s):  
Jorge Lima de Magalhães ◽  
Luc Quoniam ◽  
Zulmira Hartz ◽  
Henrique Silveira ◽  
Priscila da Nobrega Rito

The 21st century brings an information revolution unprecedented in human history. Managing the knowledge in the data generated every day is a constant challenge for organizations and across all areas of science. It is especially relevant to health, where it promotes individual well-being. Accordingly, the quality of data, information, processes, and the production of healthcare products and services for the populations of all countries has increasingly become a global concern. Thinking of health only as a burden is therefore short-sighted. The new era of big data requires innovative knowledge management for global health, with quality guiding the new times. This chapter reflects on these new times and on the management challenges for quality in global health and One Health.

2021 ◽  
Vol 23 (06) ◽  
pp. 1011-1018
Author(s):  
Aishrith P Rao ◽  
Raghavendra J C ◽  
Dr. Sowmyarani C N ◽  
Dr. Padmashree T ◽  
...  

With the advancement of technology and the large volumes of data produced, processed, and stored, it is becoming increasingly important to maintain data quality in a cost-effective and productive manner. The most important aspects of Big Data (BD) are storage, processing, privacy, and analytics. The Big Data community has identified quality as a critical marker of its maturity. Quality management is therefore an approach that should be adopted early in the lifecycle and gradually extended to the other primary processes. Companies rely heavily on, and derive profits from, the huge amounts of data they collect; when that data's consistency deteriorates, the ramifications are unpredictable and may lead to entirely undesirable conclusions. Assessing data quality in the BD setting is difficult, yet quality must be upheld before any analytics can proceed. In this paper we investigate data quality during the data-gathering, preprocessing, data-repository, and evaluation/analysis stages of BD processing, and we suggest solutions based on an elaboration and review of the identified problems.
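A minimal sketch of the kind of stage-level checks described above, assuming the data at a given stage can be viewed as a table; the function, column names, and valid ranges are illustrative assumptions, not from the paper:

```python
# Illustrative stage-level data-quality checks (hypothetical schema).
import pandas as pd

def quality_report(df: pd.DataFrame, valid_ranges: dict) -> dict:
    """Score completeness, uniqueness, and validity for one pipeline stage."""
    report = {
        # completeness: share of cells that are not missing
        "completeness": 1.0 - df.isna().mean().mean(),
        # uniqueness: share of rows that are not duplicates
        "uniqueness": 1.0 - df.duplicated().mean(),
    }
    # validity: share of values inside an expected range, per column
    # (missing values count as invalid here)
    for col, (low, high) in valid_ranges.items():
        report[f"validity_{col}"] = df[col].between(low, high).mean()
    return report

if __name__ == "__main__":
    df = pd.DataFrame({"age": [25, 40, -3, 61], "bmi": [22.1, None, 27.5, 30.2]})
    print(quality_report(df, {"age": (0, 120), "bmi": (10, 60)}))
```

Checks like these can run at ingestion and again after preprocessing, so a drop in any score localizes the stage where quality deteriorated.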


Author(s):  
Irene U. Osisioma

The development of science and technology has been positively associated with every nation's economic well-being and quality of life. Even though the importance of science in people's daily lives may not be readily noticeable, people engage in many science-related activities and experiences, most of which enable them to make science-related decisions and choices every day. This implies that science education will continue to shape humanity, the environment, quality of life, the sustainability of the planet, and peaceful coexistence. Effective participation in the scientifically and technologically driven world of the 21st century demands a science education that produces scientifically literate citizens. This chapter provides a justification for rethinking the way science education is done in Africa generally, and in Nigeria in particular. Recommendations are made for the use of context-based science instruction as an effective way to Africanize science instruction.


Information ◽  
2020 ◽  
Vol 11 (4) ◽  
pp. 175 ◽  
Author(s):  
Tibor Koltay

This paper focuses on the characteristics of research data quality and aims to cover the most important issues related to it, giving particular attention to its attributes and to data governance. The corporate world's considerable interest in the quality of data is evident in the thoughts and issues reported in business-related publications, even though there are apparent differences between the values and approaches to data in corporate and in academic (research) environments. The paper also takes into consideration that addressing data quality would be unimaginable without considering big data.


2014 ◽  
Vol 12 (2) ◽  
pp. 93-106 ◽  
Author(s):  
Tobias Matzner

Purpose – Ubiquitous computing and "big data" have been widely recognized as requiring new concepts of privacy and new mechanisms to protect it. While improved concepts of privacy have been suggested, the paper aims to argue that people acting in full conformity to those privacy norms still can infringe the privacy of others in the context of ubiquitous computing and "big data".

Design/methodology/approach – New threats to privacy are described. Helen Nissenbaum's concept of "privacy as contextual integrity" is reviewed concerning its capability to grasp these problems. The argument is based on the assumption that the technologies work, persons are fully informed and capable of deciding according to advanced privacy considerations.

Findings – Big data and ubiquitous computing enable privacy threats for persons whose data are only indirectly involved and even for persons about whom no data have been collected and processed. Those new problems are intrinsic to the functionality of these new technologies and need to be addressed on a social and political level. Furthermore, a concept of data minimization in terms of the quality of the data is proposed.

Originality/value – The use of personal data as a threat to the privacy of others is established. This new perspective is used to reassess and recontextualize Helen Nissenbaum's concept of privacy. Data minimization in terms of quality of data is proposed as a new concept.


Author(s):  
Varun Vasudevan ◽  
Abeynaya Gnanasekaran ◽  
Varsha Sankar ◽  
Siddarth A. Vasudevan ◽  
James Zou

Background. Transparent and accessible reporting of COVID-19 data is critical for public health efforts. Each state and union territory (UT) of India has its own mechanism for reporting COVID-19 data, and the quality of their reporting has not been systematically evaluated. We present a comprehensive assessment of the quality of COVID-19 data reporting done by the Indian state and union territory governments. This assessment informs the public health efforts in India and serves as a guideline for pandemic data reporting by other governments.

Methods. We designed a semi-quantitative framework to assess the quality of COVID-19 data reporting done by the states and union territories of India. This framework captures four key aspects of public health data reporting: availability, accessibility, granularity, and privacy. We then used this framework to calculate a COVID-19 Data Reporting Score (CDRS, ranging from 0 to 1) for 29 states based on the quality of COVID-19 data reporting done by the state during the two-week period from 19 May to 1 June 2020. States that reported fewer than 10 total confirmed cases as of 18 May were excluded from the study.

Findings. Our results indicate a strong disparity in the quality of COVID-19 data reporting done by the state governments in India. CDRS varies from 0.61 (good) in Karnataka to 0.0 (poor) in Bihar and Uttar Pradesh, with a median value of 0.26. Only ten states provide a visual representation of the trend in COVID-19 data. Ten states do not report any data stratified by age, gender, comorbidities, or districts. In addition, we identify that Punjab and Chandigarh compromised the privacy of individuals under quarantine by releasing their personally identifiable information on official websites. Across the states, CDRS is positively associated with the state's sustainable development index for good health and well-being (Pearson correlation: r = 0.630, p = 0.0003).

Interpretation. The disparity in CDRS across states highlights three important findings at the national, state, and individual levels. At the national level, it shows the lack of a unified framework for reporting COVID-19 data in India and highlights the need for a central agency to monitor or audit the quality of data reporting done by the states. Without a unified framework, it is difficult to aggregate the data from different states, gain insights from them, and coordinate an effective nationwide response to the pandemic. Moreover, it reflects inadequate coordination and sharing of resources among the states in India. Coordination among states is particularly important as more people start moving across states in the coming months. The disparate reporting scores also reflect inequality in individual access to public health information and privacy protection based on the state of residence.

Funding. J.Z. is supported by NSF CCF 1763191, NIH R21 MD012867-01, NIH P30AG059307, NIH U01MH098953, and grants from the Silicon Valley Foundation and the Chan-Zuckerberg Initiative.
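As a rough illustration of how a composite score like CDRS behaves, here is a hedged sketch (not the authors' released code): four aspect scores in [0, 1] are averaged into a single reporting score, and the Pearson correlation against a development index is computed with SciPy. The equal weighting, state labels, and all values are invented for illustration.

```python
# Hypothetical composite reporting score over the four aspects named above.
from scipy.stats import pearsonr

ASPECTS = ("availability", "accessibility", "granularity", "privacy")

def reporting_score(aspect_scores: dict) -> float:
    """Average the four aspect scores (each in [0, 1]) into one score."""
    return sum(aspect_scores[a] for a in ASPECTS) / len(ASPECTS)

# Invented per-state aspect scores paired with an invented health-index value.
states = {
    "State A": ({"availability": 0.8, "accessibility": 0.7,
                 "granularity": 0.5, "privacy": 1.0}, 72.0),
    "State B": ({"availability": 0.3, "accessibility": 0.2,
                 "granularity": 0.1, "privacy": 1.0}, 55.0),
    "State C": ({"availability": 0.6, "accessibility": 0.5,
                 "granularity": 0.4, "privacy": 1.0}, 64.0),
}

scores = [reporting_score(aspects) for aspects, _ in states.values()]
index = [value for _, value in states.values()]
r, p = pearsonr(scores, index)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")
```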


2021 ◽  
Vol 6 (1) ◽  
pp. 1-8
Author(s):  
Michela Di Trani

This study evaluated the psycho-physical well-being of people coping with infertility who were forced to suspend assisted reproductive technology (ART) treatment because of restrictions related to the COVID-19 global health emergency.


Big data faces challenges in many respects, reflected in characteristics such as velocity, volume, value, and veracity. Processing and analyzing big data to acquire quality information that supports accurate medical drug practice are challenging tasks. The quality of a data taxonomy is indicated by three basic elements: meaningfulness, prediction, and decision-making. These elements were motivated by previous work that focused on the same big-data challenges. Accordingly, the proposed approach preserves the quality of medical drug data in a meaningful data lake through clustering. It consists of four components. The first component of the data lake covers data collection and pre-processing; profile data are treated as semi-structured data and cleaned. The second component extracts data by enforcing rules on the whole data set to produce different groups and to generate weights based on constraints within the groups. In the third component, the data are organized and clustered; this component complies with schema profiling and refers back to the second component of the data lake. The weights output by the third component are the inputs to the fourth, where K-Means clustering is applied to obtain different clusters. Each cluster presents an alternative drug, yielding meaningful drug data consistent with the third component of the data lake.

This paper addresses two main challenges: extracting meaningful data from big data, and applying a big-data technique with the K-Means clustering algorithm. An experimental approach was followed using Food and Drug Administration (FDA) data and symptoms in the R framework. An ANOVA statistical test was carried out to calculate the sum of squared errors, P-value, and F-value in order to evaluate the variance between clusters and the variance within clusters. The results showed the efficiency of the proposed approach.
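To make the final components concrete, below is a minimal sketch in Python (the authors worked in the R framework) of the K-Means-plus-ANOVA step: records are clustered on weighted features, the within-cluster sum of squared errors is read off, and a one-way ANOVA compares the clusters. The feature matrix is synthetic; nothing here is the FDA data set.

```python
# Synthetic stand-in for weighted drug-record features; not FDA data.
import numpy as np
from sklearn.cluster import KMeans
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Three synthetic groups of 40 records, each with 3 weighted features.
weights = np.vstack([rng.normal(loc, 0.3, size=(40, 3))
                     for loc in (0.2, 1.0, 2.0)])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(weights)
labels = kmeans.labels_

# Within-cluster sum of squared errors (the "sum of squared errors" above).
print("SSE:", kmeans.inertia_)

# One-way ANOVA on the first feature across the three clusters:
groups = [weights[labels == k, 0] for k in range(3)]
f_val, p_val = f_oneway(*groups)
print(f"F-value = {f_val:.2f}, P-value = {p_val:.3g}")
```

A low P-value with a high F-value indicates that the variance between clusters dominates the variance within them, which is the separation criterion the evaluation describes.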


1999 ◽  
Vol 22 (3) ◽  
pp. 180-194
Author(s):  
Roy I Brown ◽  
Jo Shearer

Quality of life is now well developed as a concept in the disability literature, yet few studies relate it to children. In this paper the implications of quality-of-life models for the field of inclusion are discussed. Quality of life is seen as an attribute of well-being, and the principles relevant to this are outlined. Inclusion is not seen simply as an educational process, for the authors argue that educational inclusion can only be effective when it is set within proactive community and family behaviour that is also inclusive. Together these concepts give rise to broad educational criteria, and it is the discussion of these criteria which forms the central focus of this paper. Implications for family and community, as well as for the education system, including professional education, are also discussed.


1989 ◽  
Vol 3 (4) ◽  
pp. 195-200
Author(s):  
Lawrence P. Grayson

If the USA is to retain its preeminent economic position in the world, it must improve the quality of its education and training. This article discusses the contribution of education to a nation's well-being, examines the current education system in the USA, and analyses education and training needs for the 21st century.


2021 ◽  
Vol 13 (3) ◽  
pp. 1-15
Author(s):  
Rada Chirkova ◽  
Jon Doyle ◽  
Juan Reutter

Assessing and improving the quality of data are fundamental challenges in Big-Data applications. These challenges have given rise to numerous solutions targeting transformation, integration, and cleaning of data. However, while schema design, data cleaning, and data migration are nowadays reasonably well understood in isolation, not much attention has been given to the interplay between standalone tools in these areas. In this article, we focus on the problem of determining whether the available data-transforming procedures can be used together to bring about the desired quality characteristics of the data in business or analytics processes. For example, to help an organization avoid building a data-quality solution from scratch when facing a new analytics task, we ask whether the data quality can be improved by reusing the tools that are already available, and if so, which tools to apply, and in which order, all without presuming knowledge of the internals of the tools, which may be external or proprietary. Toward addressing this problem, we conduct a formal study in which individual data cleaning, data migration, or other data-transforming tools are abstracted as black-box procedures with only some of the properties exposed, such as their applicability requirements, the parts of the data that the procedure modifies, and the conditions that the data satisfy once the procedure has been applied. As a proof of concept, we provide foundational results on sequential applications of procedures abstracted in this way, to achieve prespecified data-quality objectives, for the use case of relational data and for procedures described by standard relational constraints. We show that, while reasoning in this framework may be computationally infeasible in general, there exist well-behaved cases in which these foundational results can be applied in practice for achieving desired data-quality results on Big Data.
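As a toy illustration of this black-box abstraction, assuming nothing about the authors' formal machinery beyond what the abstract states, the sketch below models each tool as a pair of required and guaranteed quality properties and uses a breadth-first search to find an order of application that reaches a quality objective. All tool names and properties are hypothetical.

```python
# Toy model: data-transforming tools as black boxes over quality properties.
from collections import deque

# Each tool: (name, properties it requires, properties that hold afterwards).
TOOLS = [
    ("dedupe", {"loaded"}, {"loaded", "unique_keys"}),
    ("migrate", {"loaded"}, {"loaded", "target_schema"}),
    ("clean_nulls", {"loaded", "target_schema"},
     {"loaded", "target_schema", "no_nulls"}),
]

def plan(start: frozenset, goal: frozenset):
    """Breadth-first search for a tool sequence whose guarantees cover the goal."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, seq = queue.popleft()
        if goal <= state:          # all goal properties achieved
            return seq
        for name, pre, post in TOOLS:
            if pre <= state:       # tool is applicable in this state
                nxt = frozenset(state | post)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, seq + [name]))
    return None                    # goal unreachable with the available tools

print(plan(frozenset({"loaded"}),
           frozenset({"unique_keys", "target_schema", "no_nulls"})))
# -> ['dedupe', 'migrate', 'clean_nulls'] (one valid order)
```

The point of the toy matches the article's framing: the search never inspects a tool's internals, only its exposed applicability requirements and postconditions, so external or proprietary tools can be sequenced the same way.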

