Bio-Inspired Cost-Effective Access to Big Data

Author(s):
Lijuan Wang ◽
Jun Shen
Keyword(s):
Big Data
Author(s):  
Marco Angrisani ◽  
Anya Samek ◽  
Arie Kapteyn

The number of data sources available for academic research on retirement economics and policy has increased rapidly in the past two decades. Data quality and comparability across studies have also improved considerably, with survey questionnaires progressively converging towards common ways of eliciting the same measurable concepts. Probability-based Internet panels have become a more accepted and recognized tool to obtain research data, allowing for fast, flexible, and cost-effective data collection compared to more traditional modes such as in-person and phone interviews. In an era of big data, academic research has also increasingly been able to access administrative records (e.g., Kostøl and Mogstad, 2014; Cesarini et al., 2016), private-sector financial records (e.g., Gelman et al., 2014), and administrative data married with surveys (Ameriks et al., 2020), to answer questions that could not be successfully tackled otherwise.


Author(s):  
Dr. Anita K. Patil ◽
Dr. A. R. Laware

Advances in Internet of Things (IoT) research are helping to make water management smarter and to optimize consumption in the smart agriculture industry. Ongoing development and research into intelligent, IoT-based smart farming devices is transforming agricultural production, enhancing output while making it cost-effective and reducing waste. To create environmental conditions suitable for the growth of animals and plants, modern agriculture uses a highly efficient protected-agriculture mode in which artificial techniques control climatic factors such as temperature. To handle the increasing challenges of agricultural production, complex agricultural ecosystems need to be better understood. Modern digital technology continuously monitors the physical environment and produces large quantities of data at an unprecedented pace. Analyzing this big data would enable farmers and companies to extract value from it and improve their productivity. Yet although big data analysis is driving advances in various industries, it has not yet been widely applied in agriculture. The objective of this paper is to review current studies and research works in agriculture that employ big data analysis to solve relevant problems.


2021 ◽  
Vol 23 (06) ◽  
pp. 1011-1018
Author(s):  
Aishrith P Rao ◽  
Raghavendra J C ◽  
Dr. Sowmyarani C N ◽  
Dr. Padmashree T ◽  
...  

With the advancement of technology and the large volume of data produced, processed, and stored, it is becoming increasingly important to maintain the quality of data in a cost-effective and productive manner. The most important aspects of Big Data (BD) are storage, processing, privacy, and analytics. The Big Data community has identified quality as a critical aspect of its maturity; it is an approach that should be adopted early in the lifecycle and gradually extended to other primary processes. Companies rely heavily on, and derive profits from, the huge amounts of data they collect. When data consistency deteriorates, the ramifications are unpredictable and may lead to entirely undesirable conclusions. In the context of BD, assessing data quality is difficult, but it is essential to uphold data quality before proceeding with any analytics. In this paper, we investigate data quality during the data-gathering, preprocessing, data-repository, and evaluation/analysis stages of BD processing. Solutions to the identified problems are also suggested, based on their elaboration and review.
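To make the stage-wise view concrete, here is a minimal sketch of batch-level quality gating at the gathering/preprocessing stages (not from the paper; the pandas-based checks, column names, and thresholds are illustrative assumptions):

```python
import pandas as pd

def quality_report(df: pd.DataFrame) -> dict:
    """Hypothetical quality rules for an ingested batch; the column names,
    ranges, and thresholds are illustrative assumptions, not from the paper."""
    report = {
        "row_count": len(df),
        # Completeness: share of missing values per column.
        "missing_ratio": df.isna().mean().to_dict(),
        # Uniqueness: duplicated rows inflate downstream aggregates.
        "duplicate_rows": int(df.duplicated().sum()),
    }
    # Validity: domain check on a known column, if present
    # (missing values also count as out of range here).
    if "age" in df.columns:
        report["age_out_of_range"] = int((~df["age"].between(0, 120)).sum())
    return report

def gate(df: pd.DataFrame, max_missing: float = 0.05) -> bool:
    """Return True if the batch is clean enough to enter the repository."""
    r = quality_report(df)
    return r["duplicate_rows"] == 0 and max(r["missing_ratio"].values()) <= max_missing

batch = pd.DataFrame({"age": [34, None, 250], "id": [1, 2, 3]})
print(quality_report(batch))
print("accept batch:", gate(batch))
```

A gate of this kind, placed before the data-repository stage, keeps low-quality batches from silently contaminating downstream analytics.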


Author(s):  
Forest Jay Handford

The number of tools available for Big Data processing has grown exponentially as cloud providers have introduced solutions for businesses that have little or no money for capital expenditures. The chapter starts by discussing historic data tools and their evolution into those of today. With Cloud Computing, the need for upfront costs has been removed; costs continue to fall and can be negotiated. This chapter reviews the current types of Big Data tools and how they evolved. To give readers an idea of costs, the chapter shows example costs (in today's market) for a sampling of the tools, along with relative cost comparisons with other tools, such as the Grid tools used by government, scientific, and academic communities. Readers will take away from this chapter an understanding of which tools work best for several scenarios and how to select cost-effective tools (even tools that are unknown today).


Author(s):  
C. K. M. Lee ◽  
Yi Cao ◽  
Kam Hung Ng

Maintenance aims to reduce or eliminate failures during production, as any breakdown of machines or equipment may disrupt the supply chain. A maintenance policy provides guidance for selecting the most cost-effective maintenance approach and system to achieve operational safety. For example, predictive maintenance is most recommended for crucial components whose failure would cause severe function loss and safety risk. The recent utilization of big data and related techniques in predictive maintenance greatly improves the transparency of system health conditions and boosts the speed and accuracy of maintenance decision making. In this chapter, a Maintenance Policies Management framework under a Big Data Platform is designed, and the process of a maintenance decision support system is simulated for a sensor-monitored semiconductor manufacturing plant. Artificial Intelligence is applied to classify the likely failure patterns and estimate the machine condition of the faulty component.
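As a rough illustration of the classification step, the sketch below trains a generic classifier on synthetic sensor features (a stand-in, not the authors' model; the feature set, failure labels, and decision threshold are invented for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for windowed sensor readings from a semiconductor tool:
# columns = [vibration_rms, temperature_c, pressure_kpa]; labels are
# hypothetical failure patterns (0 = healthy, 1 = bearing wear, 2 = overheating).
X = rng.normal(size=(600, 3))
y = rng.integers(0, 3, size=600)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Estimated condition of a monitored component: class probabilities can
# feed a policy rule, e.g. schedule predictive maintenance when the
# probability of any failure pattern exceeds a threshold.
proba = clf.predict_proba(X_test[:1])[0]
print("failure-pattern probabilities:", proba)
print("schedule maintenance:", proba[1:].sum() > 0.5)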


Author(s):  
Manujakshi B. C ◽  
K. B. Ramesh

With the increasing adoption of sensor-based applications, there is an exponential rise in sensory data that eventually takes the shape of big data. However, the practicality of executing high-end analytical operations over resource-constrained big data has never been studied closely. A review of existing approaches shows that there are no cost-effective schemes for big data analytics over large-scale sensory data processing that can be used directly as a service. Therefore, the proposed system introduces a holistic architecture where knowledge extracted from streamed data can be offered in the form of services. Implemented in MATLAB, the proposed study uses a very simple approach that considers the energy constraints of the sensor nodes. The results show that the proposed system offers better accuracy, reduced mining duration (i.e., faster response time), and reduced memory dependencies, proving that it is a cost-effective analytical solution compared to existing systems.
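The paper's prototype is in MATLAB; as a language-agnostic illustration of offering extracted knowledge as a service, the Python sketch below summarizes a sensor stream with cheap window statistics (the window size and anomaly threshold are assumptions):

```python
from collections import deque
from statistics import mean, stdev

class StreamSummarizer:
    """Extracts lightweight 'knowledge' (window statistics and an anomaly
    flag) from a sensor stream so that consumers query summaries instead
    of raw readings; window size and threshold are assumptions."""

    def __init__(self, window: int = 50, z_threshold: float = 1.5):
        self.buffer = deque(maxlen=window)
        self.z_threshold = z_threshold

    def ingest(self, reading: float) -> None:
        self.buffer.append(reading)

    def summary(self) -> dict:
        if len(self.buffer) < 2:
            return {"count": len(self.buffer)}
        mu, sigma = mean(self.buffer), stdev(self.buffer)
        latest = self.buffer[-1]
        return {
            "count": len(self.buffer),
            "mean": mu,
            "stdev": sigma,
            # A cheap z-score flag that a resource-constrained node can afford.
            "anomaly": sigma > 0 and abs(latest - mu) / sigma > self.z_threshold,
        }

svc = StreamSummarizer()
for r in [20.1, 20.3, 19.9, 20.2, 35.0]:  # last reading is a spike
    svc.ingest(r)
print(svc.summary())
```

Serving summaries rather than raw readings is one way an architecture like this can reduce memory dependencies and response time on energy-constrained nodes.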


2021 ◽  
Author(s):  
Fatimah Alsayoud

Big data ecosystems contain a mix of sophisticated hardware storage components to support heterogeneous workloads. Storage components and workloads interact and affect each other; therefore, their relationship has to be considered when modeling workloads or managing storage. Efficient workload modeling guides optimal storage management decisions, and the right decisions help guarantee that the workload's needs are met. The first part of this thesis focuses on workload modeling efficiency, and the second part focuses on cost-effective storage management.

Workload performance modeling is an essential step in management decisions. The standard modeling approach constructs the model from a historical dataset collected under one set of setups (a scenario) and requires the model to be reconstructed from scratch every time the setup changes. To address this issue, we propose a cross-scenario modeling approach that improves workload performance classification accuracy by up to 78% by adopting Transfer Learning (TL).

The storage system is the most crucial component of the big data ecosystem: a workload's execution starts by fetching data from storage and ends by storing data into it, so workload performance is directly affected by storage capability. To provide high I/O performance, Solid State Drives (SSDs) are utilized as a tier or as a cache in big data distributed ecosystems. SSDs have a short lifespan that is affected by data size and the number of write operations. Balancing performance requirements against SSD lifespan consumption is never easy, and it is even harder when interacting with huge amounts of data and heterogeneous I/O patterns. In this thesis, we analyze the impact of big data workloads' I/O patterns on SSD lifespan when the SSD is used as a tier or as a cache. We then design a Hidden Markov Model (HMM) based I/O pattern controller that manages workload placement and guarantees cost-effective storage, enhancing workload performance by up to 60% and improving SSD lifespan by up to 40%.

The designed transfer-learning modeling approach and the storage management solutions improve workload modeling accuracy and the quality of storage management policies as the testing setup changes.
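A minimal sketch of the HMM-driven placement idea follows, using hmmlearn's GaussianHMM as a stand-in (the two-state model, trace features, and SSD/HDD placement rule are assumptions, not the thesis design):

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(1)

# Synthetic I/O trace features per time window: [read_ratio, write_MBps].
# Two regimes stand in for latent I/O patterns: read-heavy vs. write-heavy.
read_heavy = np.column_stack([rng.normal(0.9, 0.05, 200), rng.normal(20, 5, 200)])
write_heavy = np.column_stack([rng.normal(0.2, 0.05, 200), rng.normal(120, 10, 200)])
trace = np.vstack([read_heavy, write_heavy, read_heavy])

# Fit an HMM whose hidden states approximate the latent I/O patterns.
hmm = GaussianHMM(n_components=2, covariance_type="diag", n_iter=50, random_state=1)
hmm.fit(trace)
states = hmm.predict(trace)

# Identify the write-heavy state by its mean write throughput, then apply
# a simple placement rule: keep write-heavy phases off the SSD to save
# its limited program/erase cycles.
write_state = int(np.argmax(hmm.means_[:, 1]))
placement = ["HDD" if s == write_state else "SSD" for s in states]
print(placement[:5], "...", placement[250:255])
```

Routing write-heavy phases away from the SSD trades some I/O speed for program/erase cycles, which is the lifespan/performance balance the thesis targets.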


2021 ◽  
Author(s):  
Shuo Chen ◽  
Yu Sun

While assembling a computer, I encountered a problem: choosing a desktop whose configuration and price we are satisfied with takes a great deal of time and energy [5]. Some computer websites recommend only ordinary desktops, which does not let users get what they really want, and some shops that assemble computer mainframes exploit customers' limited understanding of computers to raise prices. I therefore wanted to create software that helps people who need to assemble a computer find the most suitable machine efficiently and in accordance with their requirements [6]. Based on the user's needs, this program applies artificial intelligence and crawler technology to help users find the most suitable computer parts from big data and obtain the most cost-effective self-assembled computer host. We applied our application to match a person in need of a computer host with My Platform and conducted a qualitative evaluation of the method [7]. The results show that My Platform can efficiently and accurately match the user's needs and find the best solution for the user.
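The paper does not give implementation details; as a purely hypothetical illustration of matching parts to a budget, the toy sketch below greedily picks the best performance-per-price part per category (the catalog, scores, and greedy rule are all invented, and a real system would crawl prices from retailers):

```python
# Invented catalog: category -> list of (name, price, performance score).
CATALOG = {
    "cpu": [("cpu_a", 200, 7.0), ("cpu_b", 350, 9.0)],
    "gpu": [("gpu_a", 300, 6.5), ("gpu_b", 600, 9.5)],
    "ram": [("ram_a", 80, 6.0), ("ram_b", 150, 8.0)],
}

def build_under_budget(budget: float) -> dict:
    """Greedy matcher: per category, take the affordable part with the
    best performance per unit price; deliberately simple."""
    build, remaining = {}, budget
    for category, parts in CATALOG.items():
        affordable = [p for p in parts if p[1] <= remaining]
        if not affordable:
            raise ValueError(f"budget too small for a {category}")
        name, price, perf = max(affordable, key=lambda p: p[2] / p[1])
        build[category] = name
        remaining -= price
    return build

print(build_under_budget(800))
```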


2020 ◽  
Vol 10 (5) ◽  
pp. 1705
Author(s):  
Martin Štufi ◽  
Boris Bačić ◽  
Leonid Stoimenov

Big data analytics (BDA) in healthcare has made a positive difference in the integration of Artificial Intelligence (AI), advancing analytical capabilities while lowering the costs of medical care. The aim of this study is to improve the existing healthcare eSystem by implementing a Big Data Analytics (BDA) platform that meets the requirements of the Czech Republic National Health Service (Tender-Id. VZ0036628, No. Z2017-035520). In addition to providing analytical capabilities on Linux platforms supporting current and near-future AI with machine-learning and data-mining algorithms, there is a need for ethical considerations mandating new ways to preserve privacy, all of which are preconditioned by a growing body of regulations and expectations. The presented BDA platform has met all requirements (N > 100), including the healthcare industry-standard Transaction Processing Performance Council (TPC-H) decision-support benchmark, in compliance with European Union (EU) and Czech Republic legislation. The presented Proof of Concept (PoC), since upgraded to a production environment, has unified previously isolated parts of Czech healthcare over the past seven months. The reported PoC BDA platform, artefacts, and concepts are transferable to healthcare systems in other countries interested in developing or upgrading their own national healthcare infrastructure in a cost-effective, secure, scalable, and high-performance manner.

