Big Data Analytics in Cloud Platform

Author(s):  
Sathishkumar S. ◽  
Devi Priya R. ◽  
Karthika K.

Big data computing in clouds is a new paradigm for next-generation analytics development. It enables organizations to share and explore large quantities of ever-increasing data types using cloud computing technology as a back-end. Knowledge exploration and decision-making from this rapidly increasing volume of data demand new approaches to data organization, access, and timely processing, an evolving trend known as big data computing. This modern paradigm incorporates large-scale computing, new data-intensive techniques, and mathematical models to create data analytics for intrinsic information extraction. Cloud computing emerged as a service-oriented computing model that delivers infrastructure, platform, and applications as services from providers to consumers while meeting QoS parameters, enabling large volumes of rapidly growing data to be archived and processed quickly and economically.

Author(s):  
Marcus Tanque ◽  
Harry J Foxwell

Big data and cloud computing are transforming information technology. These complementary technologies are the result of dramatic developments in computational power, virtualization, network bandwidth, availability, storage capability, and cyber-physical systems. The crossroads of these two areas involves the use of cloud computing services and infrastructure to support large-scale data analytics research, providing relevant solutions and future possibilities for supply chain management. This chapter broadens the current posture of cloud computing and big data as they relate to supply chain solutions. It focuses on areas of significant technological and scientific advancement that are likely to enhance supply chain systems, and it emphasizes the security challenges and mega-trends affecting cloud computing and big data analytics as they pertain to supply chain management.


2017 ◽  
pp. 83-99
Author(s):  
Sivamathi Chokkalingam ◽  
Vijayarani S.

The term Big Data refers to large-scale information management and analysis technologies that exceed the capability of traditional data processing technologies. Big Data is differentiated from traditional technologies in three ways: the volume, velocity, and variety of data. Big data analytics is the process of analyzing large data sets that contain a variety of data types to uncover hidden patterns, unknown correlations, market trends, customer preferences, and other useful business information. Since Big Data is a new and emerging field, there is a need to develop new technologies and algorithms for handling it. The main objective of this paper is to provide knowledge about the various research challenges of Big Data analytics. A brief overview of the various types of Big Data analytics is given; for each type, the paper describes the process steps, the tools, and a banking application. Some of the research challenges of big data analytics, and possible solutions to them, are also discussed.
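The banking pairing can be made concrete with a small sketch. Below is a minimal, hypothetical example of a descriptive/diagnostic analytics pass over transaction data; the column names, figures, and the flagging rule are illustrative assumptions, not drawn from the paper.

```python
# Illustrative sketch only: a toy descriptive-analytics pass over banking
# transactions, in the spirit of the banking examples the paper mentions.
# Column names, amounts, and the flagging threshold are assumptions.
import pandas as pd

transactions = pd.DataFrame({
    "account": ["A1", "A1", "A2", "A2", "A2"],
    "amount":  [120.0, 80.0, 15000.0, 95.0, 110.0],
    "channel": ["atm", "online", "online", "atm", "online"],
})

# Descriptive analytics: summarize spending per account and channel.
summary = transactions.groupby(["account", "channel"])["amount"].agg(["count", "sum", "mean"])
print(summary)

# Diagnostic flavor: flag transactions far above an account's typical amount.
typical = transactions.groupby("account")["amount"].transform("median")
flagged = transactions[transactions["amount"] > 10 * typical]
print(flagged)  # the 15000.0 online transaction stands out
```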


2020 ◽  
Vol 22 (4) ◽  
pp. 60-74
Author(s):  
Emmanuel Wusuhon Yanibo Ayaburi ◽  
Michele Maasberg ◽  
Jaeung Lee

Organizations face both opportunities and risks with big data analytics vendors, and the risks are now profound, as data has been likened to the oil of the digital era. The growing body of research at the nexus of big data analytics and cloud computing is examined from the economic perspective, based on agency theory (AT). A conceptual framework is developed for analyzing these opportunities and challenges regarding the use of big data analytics and cloud computing in e-business environments. This framework allows organizations to engage in contracts that target competitive parity with their service-oriented decision support system (SODSS) to achieve a competitive advantage related to their core business model. A unique contribution of this paper is its perspective on how to engage a vendor contractually to achieve this competitive advantage. The framework provides insights for a manager in selecting a vendor for cloud-based big data services.
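The framework itself is conceptual, but its final step, selecting a vendor for cloud-based big data services, can be sketched as a simple weighted scoring exercise. The criteria, weights, and vendor scores below are hypothetical illustrations and are not part of the authors' AT-based framework.

```python
# Hypothetical multi-criteria scoring for cloud big-data vendors.
# Criteria, weights, and scores are illustrative assumptions only.

# Weights reflect how much each agency-style concern matters to the buyer
# (e.g., monitoring cost, goal alignment); they should sum to 1.0.
weights = {"cost": 0.25, "sla_strength": 0.30, "auditability": 0.25, "lock_in_risk": 0.20}

# Scores on a 0-10 scale; higher is better (lock-in risk already inverted).
vendors = {
    "VendorA": {"cost": 7, "sla_strength": 9, "auditability": 8, "lock_in_risk": 5},
    "VendorB": {"cost": 9, "sla_strength": 6, "auditability": 5, "lock_in_risk": 8},
}

def weighted_score(scores: dict[str, float]) -> float:
    return sum(weights[c] * scores[c] for c in weights)

best = max(vendors, key=lambda v: weighted_score(vendors[v]))
for name, scores in vendors.items():
    print(f"{name}: {weighted_score(scores):.2f}")
print("Preferred vendor:", best)
```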




Author(s):  
Jayashree K. ◽  
Abirami R.

Developments in information technology and its prevalent growth in several areas of business, engineering, medicine, and scientific studies are resulting in an explosion of information and data. Knowledge discovery and decision making from such rapidly growing voluminous data are challenging tasks in terms of data organization and processing, an emerging trend known as big data computing. Big data has gained much attention from academia and the IT industry as a new paradigm that combines large-scale computing, new data-intensive techniques, and mathematical models to build data analytics. This chapter discusses the background of big data, examines its various applications in detail, and addresses related work and future directions.




2021 ◽  
Vol 3 (1) ◽  
Author(s):  
Ibrahim Muzaferija ◽  
Zerina Mašetić

While leveraging cloud computing for large-scale distributed applications allows seamless scaling, many companies struggle to keep up with the amount of data generated, in terms of both efficient processing and anomaly detection, which is a necessary part of managing modern applications. As records of user behavior, weblogs have naturally become a subject of anomaly detection research, and many anomaly detection methods based on automated log analysis have been proposed. However, they have not been proposed in the context of big data applications, where anomalous behavior needs to be detected in the understanding phases, prior to modeling a system for such use. Big Data Analytics often ignores anomalous points due to the high volume of data. To address this problem, we propose a complementary methodology for Big Data Analytics: Exploratory Data Analysis, which assists in gaining insight into data relationships without classical hypothesis modeling. In this way we can gain a better understanding of the patterns and spot anomalies. Results show that Exploratory Data Analysis facilitates anomaly detection and the CRISP-DM Business Understanding phase, making it one of the key steps in the Data Understanding phase.
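As a minimal sketch of the kind of exploratory pass the methodology describes, the following applies a simple Tukey (IQR) fence to per-minute request counts from a weblog; the field names, data, and threshold are illustrative assumptions rather than the authors' exact procedure.

```python
# Exploratory Data Analysis sketch: surface anomalous traffic in a weblog
# before any modeling. Field names and the IQR rule are illustrative
# assumptions, not the paper's exact procedure.
import pandas as pd

log = pd.DataFrame({
    "minute":   pd.date_range("2021-01-01 00:00", periods=8, freq="min"),
    "requests": [42, 39, 45, 41, 40, 380, 43, 44],  # one suspicious spike
})

q1, q3 = log["requests"].quantile([0.25, 0.75])
iqr = q3 - q1
upper = q3 + 1.5 * iqr  # classic Tukey fence used in EDA

anomalies = log[log["requests"] > upper]
print(log["requests"].describe())  # EDA: look before you model
print(anomalies)                   # the 380-request minute is flagged
```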




2019 ◽  
Author(s):  
Meghana Bastwadkar ◽  
Carolyn McGregor ◽  
S Balaji

BACKGROUND: This paper presents a systematic literature review of existing remote health monitoring systems, with special reference to the neonatal intensive care unit (NICU). Articles on NICU clinical decision support systems (CDSSs) that used cloud computing and big data analytics were surveyed.
OBJECTIVE: The aim of this study is to review the technologies used to provide NICU CDSSs. The literature review highlights the gaps within frameworks providing the Health Analytics as a Service (HAaaS) paradigm for big data analytics.
METHODS: Literature searches were performed in Google Scholar, the IEEE Digital Library, JMIR Medical Informatics, JMIR Human Factors, and JMIR mHealth; only English-language articles published in or after 2015 were included. The overall search strategy was to retrieve articles containing terms related to "health analytics" and "as a service", or "internet of things"/"IoT" and "neonatal intensive care unit"/"NICU". Titles and abstracts were reviewed to assess relevance.
RESULTS: In total, 17 full papers met all criteria and were selected for full review. In most cases, bedside medical devices such as pulse oximeters were used as the sensor devices. The results revealed great diversity in the data acquisition techniques used; however, in most cases the same physiological data (heart rate, respiratory rate, blood pressure, blood oxygen saturation) were acquired. Data analytics mostly involved data mining classification techniques, fuzzy-logic NICU decision support systems (DSSs), and similar approaches, whereas big data analytics involving Artemis cloud data analysis used the CRISP-TDM and STDM temporal data mining techniques to support clinical research studies. In most scenarios, both real-time and retrospective analytics were performed. Most of the research has been performed within small and medium-sized urban hospitals, so there is wide scope for research within rural and remote hospitals with NICU setups. Creating an HAaaS approach in which data acquisition and data analytics are not tightly coupled remains an open research area. The reviewed articles describe architectures and base technologies for neonatal health monitoring with an IoT approach.
CONCLUSIONS: The current work supports implementation of the expanded Artemis cloud as a commercial offering to healthcare facilities in Canada and worldwide to provide cloud computing services to critical care. However, no work to date has been completed for low-resource-setting environments within healthcare facilities in India, which leaves scope for research. All the big data analytics frameworks reviewed in this study have tight coupling of components, so there is a need for a framework with functional decoupling of components.
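The temporal analytics surveyed above operate on continuous physiological streams. As a purely illustrative sketch (not the CRISP-TDM or STDM algorithms themselves), the following flags heart-rate samples that drift from a rolling baseline; the window size, threshold, and data are assumptions.

```python
# Hypothetical sketch of stream-style temporal analytics on NICU vital signs.
# Window size, threshold, and data are illustrative assumptions; this is
# not the CRISP-TDM/STDM methodology itself.
import pandas as pd

# One reading per second from a bedside monitor (simulated heart rate, bpm).
hr = pd.Series([142, 144, 141, 143, 145, 144, 120, 118, 143, 142])

baseline = hr.rolling(window=5, min_periods=5).mean()
deviation = (hr - baseline).abs()

# Flag samples deviating more than 15 bpm from the recent baseline.
alerts = hr[deviation > 15]
print(alerts)  # the 120/118 dip would be surfaced for clinical review
```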


2020 ◽  
Vol 4 (2) ◽  
pp. 5
Author(s):  
Ioannis C. Drivas ◽  
Damianos P. Sakas ◽  
Georgios A. Giannakopoulos ◽  
Daphne Kyriaki-Manessi

In the Big Data era, search engine optimization (SEO) deals with the encapsulation of datasets related to website performance in terms of architecture, content curation, and user behavior, with the purpose of converting them into actionable insights that improve visibility and findability on the Web. In this respect, big data analytics expands the opportunities for developing new methodological frameworks composed of valid, reliable, and consistent analytics that are practically useful for developing well-informed strategies for organic traffic optimization. In this paper, a novel methodology is implemented to increase organic search engine visits based on the impact of multiple SEO factors. To achieve this, the authors examined 171 cultural heritage websites and the retrieved analytics about their performance and the user experience within them. Massive Web-based collections are included and presented by cultural heritage organizations through their websites. Users interact with these collections, producing behavioral analytics in a variety of data types that come from multiple devices, at high velocity, and in large volumes. Nevertheless, prior research indicates that these massive cultural collections are difficult to browse and exhibit low visibility and findability in the semantic Web era. Against this backdrop, this paper proposes the computational development of an SEO strategy that utilizes the generated big cultural data analytics to improve the visibility of cultural heritage websites. One step further, the statistical results of the study are integrated into a two-stage predictive model: first, a fuzzy cognitive mapping process is generated as an aggregated macro-level descriptive model; second, a micro-level data-driven agent-based model follows. The purpose of the model is to predict the most effective combinations of factors for achieving enhanced visibility and organic traffic on cultural heritage organizations' websites. To this end, the study contributes to expanding the knowledge of researchers and practitioners in the big cultural analytics sector, with the purpose of implementing potential strategies for greater visibility and findability of cultural collections on the Web.
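The fuzzy cognitive mapping stage can be sketched generically: concepts such as SEO factors and organic traffic are nodes, signed weights encode influence between them, and activations are iterated through a squashing function until they stabilize. In the sketch below, the concepts, weights, and initial activations are hypothetical placeholders, not the model fitted in the paper.

```python
# Generic fuzzy cognitive map (FCM) iteration. The concepts and the weight
# matrix are hypothetical placeholders, not the fitted model from the paper.
import numpy as np

concepts = ["page_speed", "content_quality", "backlinks", "organic_traffic"]

# W[i, j]: signed influence of concept i on concept j, in [-1, 1].
W = np.array([
    [0.0, 0.0, 0.0, 0.4],   # page_speed      -> organic_traffic
    [0.0, 0.0, 0.5, 0.6],   # content_quality -> backlinks, organic_traffic
    [0.0, 0.0, 0.0, 0.7],   # backlinks       -> organic_traffic
    [0.0, 0.0, 0.0, 0.0],   # organic_traffic (no outgoing edges)
])

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

# Initial activations for a what-if scenario: strong content, slow pages.
A = np.array([0.2, 0.9, 0.5, 0.5])

for _ in range(50):  # iterate the standard FCM update until it stabilizes
    A_next = sigmoid(A + A @ W)
    if np.max(np.abs(A_next - A)) < 1e-5:
        break
    A = A_next

print(dict(zip(concepts, A.round(3))))
```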

