Explicitly Disclosing Clients' Illness Catalogue Using Data Science Techniques

Author(s):  
A. Sai Ram

Abstract: Across the world in our day-to-day lives, we come across various medical inaccuracies caused by unreliable patient recollection. Statistically, communication problems are the most significant factor hampering the diagnosis of patients' diseases. This paper therefore presents a theoretical solution for achieving patient care in the most adequate way. In these pandemic days, the communication gap between patient and physician has begun to decline to a nominal level. This paper demonstrates a vital solution and a stepping stone toward the complete digitalization of the client's illness catalogue. To attain the solution in a specified manner, we use diverse pre-existing technologies such as data warehousing, database management systems, cloud computing, and big data. We also persistently maintain a secure, impenetrable infrastructure that protects the client's data privacy.

Keywords: Illness catalogue, cloud computing, data warehousing, database management systems, big data.

Author(s):  
Shaveta Bhatia

The epoch of big data presents many opportunities for development in areas such as data science, biomedical research, cyber security, and cloud computing. Big data has now gained wide popularity. It also invites many challenges affecting the security and privacy of big data. Various threats and attacks, such as data leakage, unauthorized third-party access, viruses, and vulnerabilities, stand against the security of big data. This paper discusses these security threats and methods for addressing them in the fields of biomedical research, cyber security, and cloud computing.


Big Data ◽  
2016 ◽  
pp. 1495-1518
Author(s):  
Mohammad Alaa Hussain Al-Hamami

Big Data comprises the systems and emerging techniques that organizations adopt to remain competitive. Big Data includes structured, semi-structured, and unstructured data. Structured data are data formatted for use in a database management system. Semi-structured and unstructured data include all types of unformatted data, including multimedia and social media content. Among practitioners and applied researchers, the reaction to data available through blogs, Twitter, Facebook, and other social media can be described as a "data rush" promising new insights about consumers' choices and behavior and many other issues. In the past, Big Data was used only by very large organizations, governments, and large enterprises with the ability to create their own infrastructure for hosting and mining large amounts of data. This chapter shows the requirements for protecting Big Data environments using the same rigorous security strategies applied to traditional database systems.


2015 ◽  
Vol 3 (2) ◽  
pp. 16-23
Author(s):  
Nir Kshetri

Cloud computing and big data applications are likely to have far-reaching and profound impacts on smallholder farmers in the developing world. In particular, the use of mobile devices to access cloud-based applications is a promising approach to delivering value to smallholder farmers in developing countries since, according to the International Telecommunication Union, mobile-cellular penetration in developing countries was expected to reach 90% by the end of 2014. This article examines the contexts, mechanisms, processes, and consequences associated with cloud computing and big data deployments in farming activities that could affect the lives of developing-world smallholder farmers. We analyze the roles of big data and cloud-based applications in facilitating input availability, providing access to resources, enhancing farming processes and productivity, and improving market access, marketability of products, and bargaining power for smallholders. In the developing world's context, an even bigger question than whether agricultural productivity can be improved by using cloud computing and big data is who is likely to benefit from the growth in productivity. The paper analyzes the conditions under which gains in agricultural productivity associated with the utilization of cloud computing and big data applications in developing countries may benefit smallholder farmers. Also investigated in the paper are important privacy and ethical issues surrounding cloud computing and big data. While some analysts hold the view that people in developing countries do not need privacy, the paper challenges this view and points out that data privacy and security issues are even more important to smallholder farmers in developing countries.


2018 ◽  
Author(s):  
Jen Schradie

With a growing interest in data science and online analytics, researchers are increasingly using data derived from the Internet. Whether for qualitative or quantitative analysis, online data, including “Big Data,” can often exclude marginalized populations, especially those from the poor and working class, as the digital divide remains a persistent problem. This methodological commentary on the current state of digital data and methods disentangles the hype from the reality of digitally produced data for sociological research. In the process, it offers strategies to address the weaknesses of data that is derived from the Internet in order to represent marginalized populations.


Author(s):  
H. Wu ◽  
K. Fu

Abstract. As an information carrier with high capacity, remarkable reliability, ease of acquisition, and other favorable features, remote sensing image data are widely used in fields such as natural resources survey, monitoring, planning, and disaster prevention (Huang, Jie, et al., 2008). Considering the daily application scenarios of remote sensing imagery in professional departments, this paper analyses the requirements for the usage and management of remote sensing big data. Combining the professional department scenario, and on the premise of respecting existing working habits, the paper takes a remote sensing image metadata standard as the reference index and proposes a time-serialized big data management method based on remote sensing image files and a database management system. This method realizes the design of metadata-standard products as well as the metadata-indexed storage and database management of massive remote sensing imagery.
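The file-plus-database scheme the abstract describes could be sketched roughly as follows. This is a minimal illustration, not the authors' actual design: the schema fields are loosely modeled on common image-metadata standards, and the SQLite backend, table name, and sample values are all assumptions.

```python
# Hypothetical sketch: large image files stay on the file system, while a small
# indexed metadata table supports fast time-based retrieval of scenes.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE image_metadata (
        image_id   TEXT PRIMARY KEY,
        file_path  TEXT NOT NULL,      -- the image itself remains on disk
        sensor     TEXT,
        acquired   TEXT,               -- acquisition time, ISO 8601
        min_lon    REAL, min_lat REAL, -- bounding box of the scene
        max_lon    REAL, max_lat REAL
    )
""")
# An index on acquisition time supports time-serialized management of the archive.
conn.execute("CREATE INDEX idx_acquired ON image_metadata (acquired)")

conn.execute(
    "INSERT INTO image_metadata VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
    ("scene_001", "/archive/scene_001.tif", "GF-1",
     "2016-05-02T03:14:00", 116.0, 39.5, 117.0, 40.5),
)

def scenes_between(start: str, end: str):
    """Look up scenes by acquisition time via the metadata index."""
    return conn.execute(
        "SELECT image_id, file_path FROM image_metadata "
        "WHERE acquired BETWEEN ? AND ? ORDER BY acquired",
        (start, end),
    ).fetchall()

print(scenes_between("2016-01-01", "2016-12-31"))
```

Queries resolve against the compact metadata table, and only the matching file paths are then used to fetch the heavyweight imagery from storage.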


Big data is traditionally associated with distributed systems, and this is understandable given that the volume dimension of Big Data appears to be best accommodated by the continuous addition of resources over a distributed network rather than the continuous upgrade of a central storage resource. Based on this implementation context, non-distributed relational database models are considered volume-inefficient, and a departure from their usage has been contemplated by the database community. Distributed systems depend on data partitioning to determine chunks of related data and where in storage they can be accommodated. In existing Database Management Systems (DBMS), data partitioning is automated, which in the opinion of this paper does not give the best results, since partitioning is an NP-hard problem in terms of algorithmic time complexity. The NP-hardness is shown to be reduced by a partitioning strategy that relies on the discretion of the programmer, which is more effective and flexible though it requires extra coding effort. NP-hard problems are solved more effectively by a measure of programmer discretion rather than by full automation. In this paper, the partitioning process is reviewed and a programmer-based partitioning strategy implemented for an application with a relational DBMS backend. By doing this, the relational DBMS is made adaptive in the volume dimension of big data. The ACID properties (atomicity, consistency, isolation, and durability) of the relational database model, which constitute a major attraction especially for applications that process transactions, are thus harnessed. On a more general note, the results of this research suggest that databases can be made adaptive in the areas of their weaknesses, as a one-size-fits-all database management system may no longer be feasible.
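The abstract does not spell out its programmer-based partitioning strategy; a minimal sketch of the general idea, using SQLite and hypothetical table and column names, might look like this: the application, not the DBMS, routes each row to a partition chosen by a key under the programmer's control.

```python
# Hypothetical sketch of programmer-directed partitioning: rows are routed to
# one of several partition tables by a hash key the programmer chooses.
import sqlite3

NUM_PARTITIONS = 4

def partition_for(customer_id: int) -> str:
    """Map a row to a partition table by a programmer-chosen hash key."""
    return f"orders_p{customer_id % NUM_PARTITIONS}"

conn = sqlite3.connect(":memory:")
for p in range(NUM_PARTITIONS):
    conn.execute(f"CREATE TABLE orders_p{p} (customer_id INTEGER, amount REAL)")

def insert_order(customer_id: int, amount: float) -> None:
    # Writes go only to the partition the key maps to.
    table = partition_for(customer_id)
    conn.execute(f"INSERT INTO {table} VALUES (?, ?)", (customer_id, amount))

def orders_for(customer_id: int):
    # Reads touch only the single partition that can hold the key.
    table = partition_for(customer_id)
    return conn.execute(
        f"SELECT amount FROM {table} WHERE customer_id = ?", (customer_id,)
    ).fetchall()

insert_order(7, 19.99)
insert_order(11, 5.00)
print(orders_for(7))  # → [(19.99,)]
```

The trade-off matches the abstract's claim: the routing function is extra application code, but the programmer can exploit knowledge of the workload (which keys are queried together) that an automated partitioner cannot, while each partition remains an ordinary ACID-compliant relational table.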


2019 ◽  
Vol 11 (1) ◽  
pp. 36-40 ◽  
Author(s):  
Venky Shankar

Abstract: Big data are taking center stage for decision-making in many retail organizations. Customer data on attitudes and behavior across channels, touchpoints, devices and platforms are often readily available and constantly recorded. These data are integrated from multiple sources and stored or warehoused, often in a cloud-based environment. Statistical, econometric and data science models are developed for enabling appropriate decisions. Computer algorithms and programs are created for these models. Machine learning based models are particularly useful for learning from the data and making predictive decisions. These machine learning models form the backbone for the generation and development of AI-assisted decisions. In many cases, such decisions are automated using systems such as chatbots and robots. Of special interest are issues such as omnichannel shopping behavior, resource allocation across channels, the effects of the mobile channel and mobile apps on shopper behavior, dynamic pricing, and data privacy and security. Research on these issues reveals several interesting insights on which retailers can build. To fully leverage big data in today's retailing environment, CRM strategies must be location specific, time specific and channel specific in addition to being customer specific.


2021 ◽  
Vol 9 ◽  
Author(s):  
Andrea Rau

Data collected in very large quantities are called big data, and big data has changed the way we think about and answer questions in many different fields, like weather forecasting and biology. With all this information available, we need computers to help us store, process, analyze, and understand it. Data science combines tools from fields like statistics, mathematics, and computer science to find interesting patterns in big data. Data scientists write step-by-step instructions called algorithms to teach computers how to learn from data. To help computers understand these instructions, algorithms must be translated from the original question asked by a data scientist into a programming language—and the results must be translated back, so that humans can understand them. That means that data scientists are data detectives, programmers, and translators all in one!

