Quality Assessment of "Stress-Strength" Models in the Conditions of Big Data

A conceptual approach is proposed for assessing the quality of complex structural systems on the basis of the large volumes of data generated during the monitoring of the structures of controlled objects. The methodological basis of the study comprises big data analytics, methods for processing unstructured information, the representation of the process of structural change in complex objects as a Markov-type sequence, and methods of statistical analysis. It is proposed to structure monitoring data by time slices (subsets of "stress"-level measurements of the controlled parameters) corresponding to a given stage of the object's life cycle; to model the change in the object's structure as a dichotomous Markov chain; and, on the basis of the "stress-strength" model, to evaluate probabilistic quality indicators of the structural state of the controlled object, where the indicator of a transition from one state to another is the event that the "stress" level exceeds the "strength" value. The study of the "stress-strength" model reduces to finding the extremum of a definite integral under equality constraints, which is one of the isoperimetric problems. The results can be used in decision support systems for the structural analysis of complex systems. The effectiveness of the approach is confirmed by a numerical example.
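As a point of reference, the reliability indicator at the heart of a "stress-strength" model can be estimated directly from monitoring samples. The sketch below assumes independent, normally distributed stress and strength measurements (an illustrative assumption, not the paper's isoperimetric formulation) and compares a Monte Carlo estimate of P(strength > stress) with the closed-form normal approximation; all sample sizes and distribution parameters are invented.

```python
# Hedged sketch: estimating the "stress-strength" reliability R = P(strength > stress)
# from monitoring samples. The normal distributions below are hypothetical.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical monitoring slice: measured "stress" levels and known "strength" values.
stress = rng.normal(loc=60.0, scale=8.0, size=10_000)
strength = rng.normal(loc=75.0, scale=5.0, size=10_000)

# Monte Carlo estimate of R = P(strength > stress).
r_mc = np.mean(strength > stress)

# Closed form for two independent normals: R = Phi((mu_s - mu_t) / sqrt(sig_s^2 + sig_t^2)).
mu_t, sig_t = stress.mean(), stress.std(ddof=1)
mu_s, sig_s = strength.mean(), strength.std(ddof=1)
r_normal = norm.cdf((mu_s - mu_t) / np.hypot(sig_s, sig_t))

# Probability that stress exceeds strength, i.e. the transition event of the dichotomous chain.
p_transition = 1.0 - r_mc
print(f"R (Monte Carlo) = {r_mc:.4f}, R (normal approx.) = {r_normal:.4f}, "
      f"P(transition) = {p_transition:.4f}")
```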

Author(s): Monika, Pardeep Kumar, Sanjay Tyagi

In a cloud computing environment, Quality of Service (QoS) and cost are the key elements to be taken care of. Today, in the era of big data, data must be handled properly while satisfying each request. When handling requests for large data volumes or for scientific applications, the flow of information must be sustained. This paper gives a brief introduction to workflow scheduling and presents a detailed survey of various scheduling algorithms, compared against several parameters.


2018, Vol 18 (03), pp. e23
Author(s): María José Basgall, Waldo Hasperué, Marcelo Naiouf, Alberto Fernández, Francisco Herrera

The volume of data in today's applications has forced a change in the way Machine Learning problems are addressed. Indeed, the Big Data scenario involves scalability constraints that can only be met through intelligent model design and the use of distributed technologies. In this context, solutions based on the Spark platform have established themselves as a de facto standard. In this contribution, we focus on a very important problem within Big Data analytics, namely classification with imbalanced datasets. The main characteristic of this problem is that one of the classes is underrepresented, and therefore it is usually more difficult to find a model that identifies it correctly. For this reason, it is common to apply preprocessing techniques such as oversampling to balance the distribution of examples across classes. In this work we present SMOTE-BD, a fully scalable preprocessing approach for imbalanced classification in Big Data. It is based on one of the most widespread preprocessing solutions for imbalanced classification, the SMOTE algorithm, which creates new synthetic instances according to the neighborhood of each example of the minority class. Our development is designed to be independent of the number of partitions or processes created, in order to achieve a higher degree of efficiency. Experiments conducted on different standard and Big Data datasets show the quality of the proposed design and implementation.
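For readers unfamiliar with the underlying preprocessing step, the sketch below shows the core SMOTE interpolation rule (a synthetic point placed between a minority example and one of its k nearest minority neighbours) on a single machine with plain NumPy. It illustrates the principle only; it is not the authors' Spark-based SMOTE-BD implementation, and the helper name and toy data are invented.

```python
# Minimal, single-machine sketch of the SMOTE interpolation step that SMOTE-BD
# distributes over Spark partitions. Brute-force neighbours, for illustration only.
import numpy as np

def smote_sketch(X_min, n_synthetic, k=5, seed=0):
    """Generate synthetic minority samples by interpolating towards k-NN neighbours."""
    rng = np.random.default_rng(seed)
    # Pairwise distances within the minority class; exclude each point from its own neighbours.
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    neighbours = np.argsort(d, axis=1)[:, :k]                    # k nearest minority neighbours

    base = rng.integers(0, len(X_min), size=n_synthetic)          # seed examples
    picked = neighbours[base, rng.integers(0, k, size=n_synthetic)]
    gap = rng.random((n_synthetic, 1))                            # interpolation factor in [0, 1)
    return X_min[base] + gap * (X_min[picked] - X_min[base])

# Toy usage with a hypothetical 2-D minority class of 20 points.
X_min = np.random.default_rng(1).normal(size=(20, 2))
X_new = smote_sketch(X_min, n_synthetic=40)
print(X_new.shape)   # (40, 2)
```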


2022, Vol 11 (3), pp. 0-0

The emergence of big data in today's world poses new challenges for sorting strategies to analyze data more effectively. In most analysis techniques, sorting is treated as an implicit attribute of the technique used. The availability of huge volumes of data has changed the way data is analyzed across industries. Healthcare is one of the notable areas where data analytics is making big changes: efficient analysis has the potential to reduce treatment costs and improve quality of life in general. Healthcare industries collect massive amounts of data and look for the best strategies to use these numbers. This research proposes a novel non-comparison-based approach to sorting large data sets, whose output can then be used by any big data analytical technique for further analyses.
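The paper's own algorithm is not reproduced here; as a standard representative of the non-comparison family it refers to, the sketch below shows a least-significant-digit radix sort for non-negative integers, whose cost grows with the number of digits rather than with n log n comparisons.

```python
# Illustrative standard non-comparison sort (LSD radix sort for non-negative integers).
# It stands in for the general family discussed; it is not the authors' proposed algorithm.
def radix_sort(values, base=256):
    """Stable least-significant-digit radix sort; O(n * k) rather than O(n log n)."""
    if not values:
        return values
    max_val = max(values)
    exp = 1
    while max_val // exp > 0:
        # Counting-sort pass on the current digit (bucket by digit value, stable order).
        buckets = [[] for _ in range(base)]
        for v in values:
            buckets[(v // exp) % base].append(v)
        values = [v for bucket in buckets for v in bucket]
        exp *= base
    return values

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```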


Author(s): Sana Rekik

The advent of geospatial big data has led to a paradigm shift in which most related applications have become data driven, and therefore intensive in both data and computation. This revolution covers most domains, in particular real-time systems such as web search engines, social networks, and tracking systems. These latter are tied to the high-velocity characteristic of big data, which reflects the dynamism of fast-changing, moving data streams. The response time and speed of such queries, along with their space complexity, are therefore among the requirements of data stream analysis systems, and they still call for improvement through sophisticated algorithms. In this vein, this chapter discusses new approaches that can reduce the complexity and the costs in time and space while improving the efficiency and quality of the responses of geospatial big data stream analysis, so as to efficiently detect changes over time, draw conclusions, and predict future events.
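As a rough illustration of bounded-space change detection on a stream, the sketch below compares the mean of a short recent window against a running baseline maintained with Welford's update. The window size, threshold and toy data are invented, and the method is a generic one, not one of the chapter's geospatial approaches.

```python
# Hedged sketch of constant-space change detection on a data stream: compare the mean of
# a short recent window against a long-run running mean. Parameters are illustrative.
from collections import deque

def detect_changes(stream, window=50, threshold=3.0):
    """Yield indices where the recent window mean drifts from the running baseline."""
    recent = deque(maxlen=window)
    count, mean, m2 = 0, 0.0, 0.0          # Welford running mean / variance
    for i, x in enumerate(stream):
        recent.append(x)
        count += 1
        delta = x - mean
        mean += delta / count
        m2 += delta * (x - mean)
        if count > window:
            std = (m2 / (count - 1)) ** 0.5 or 1e-12
            window_mean = sum(recent) / len(recent)
            if abs(window_mean - mean) > threshold * std / len(recent) ** 0.5:
                yield i

# Toy stream: a level shift halfway through.
data = [0.0] * 500 + [2.0] * 500
print(next(detect_changes(data)))   # first flagged index, shortly after the shift at 500
```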


Author(s): Fenio Annansingh

The concept of a smart city as a means to enhance the quality of life of citizens has been gaining importance globally in recent years. A smart city consists of city infrastructure, which includes smart services, devices, and institutions. Every second, these components of the smart city infrastructure generate data; this vast amount of data is called big data. This chapter explores the possibilities of using big data analytics to prevent cybersecurity threats in a smart city. It also analyzes how big data tools and concepts can solve cybersecurity challenges and detect and prevent attacks. Using interviews and an extensive review of the literature, the author developed a data analytics and cyber prevention model. The chapter concludes by indicating that big data analytics allows a smart city to identify and solve cybersecurity challenges quickly and efficiently.
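As a toy illustration of the kind of data-driven detection the chapter advocates, the sketch below flags unusually active sources in a hypothetical smart-city access log using a crude median-based volume threshold. The log format, source names and cutoff are assumptions, not the chapter's model.

```python
# Toy illustration: flag sources whose event volume is far above the median,
# the sort of simple signal a big-data pipeline over city logs could compute.
from collections import Counter
from statistics import median

def flag_suspicious_sources(events, k=10.0):
    """events: iterable of (source_id, timestamp) pairs; flag sources far above the median volume."""
    counts = Counter(src for src, _ in events)
    cutoff = k * max(median(counts.values()), 1)
    return {src for src, n in counts.items() if n > cutoff}

# Hypothetical log slice: routine sensor traffic plus one unusually chatty source.
events = ([("cam-01", t) for t in range(20)]
          + [("cam-02", t) for t in range(25)]
          + [("unknown-77", t) for t in range(400)])
print(flag_suspicious_sources(events))   # {'unknown-77'}
```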


2017, pp. 83-99
Author(s): Sivamathi Chokkalingam, Vijayarani S.

The term Big Data refers to large-scale information management and analysis technologies that exceed the capability of traditional data processing technologies. Big Data is differentiated from traditional technologies in three ways: the volume, velocity and variety of data. Big data analytics is the process of analyzing large data sets containing a variety of data types to uncover hidden patterns, unknown correlations, market trends, customer preferences and other useful business information. Since Big Data is a newly emerging field, there is a need to develop new technologies and algorithms for handling it. The main objective of this paper is to provide knowledge about the various research challenges of Big Data analytics. A brief overview of the various types of Big Data analytics is given; for each type, the paper describes the process steps and tools and gives a banking application. Some research challenges of big data analytics, and possible solutions to them, are also discussed.


2020, pp. 1499-1521
Author(s): Sukhpal Singh Gill, Inderveer Chana, Rajkumar Buyya

Cloud computing has emerged as a new model for managing and delivering applications as services efficiently. The convergence of cloud computing with technologies such as wireless sensor networking, the Internet of Things (IoT) and Big Data analytics offers new applications of cloud services. This paper proposes a cloud-based autonomic information system for delivering Agriculture-as-a-Service (AaaS) through the use of cloud and big data technologies. The proposed system gathers information from users through preconfigured devices and IoT sensors, processes it in the cloud using big data analytics, and automatically provides the required information back to users. The performance of the proposed system has been evaluated in a cloud environment, and experimental results show that it offers better service and better Quality of Service (QoS) in terms of the measured QoS parameters.
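The following deliberately simplified sketch illustrates the sense-analyze-advise loop such a system implies: per-field sensor readings are aggregated and an automatic advisory is produced. The data class, field names and thresholds are hypothetical and are not taken from the proposed AaaS system.

```python
# Hypothetical sketch of an IoT sense-analyze-advise loop; units and thresholds invented.
from dataclasses import dataclass
from statistics import mean

@dataclass
class SensorReading:
    field_id: str
    soil_moisture: float   # volumetric %, hypothetical unit
    temperature_c: float

def advise(readings, moisture_threshold=25.0):
    """Aggregate per-field readings and emit an automatic irrigation advisory."""
    by_field = {}
    for r in readings:
        by_field.setdefault(r.field_id, []).append(r)
    advisories = {}
    for field_id, rs in by_field.items():
        avg_moisture = mean(r.soil_moisture for r in rs)
        advisories[field_id] = ("irrigate" if avg_moisture < moisture_threshold
                                else "no action")
    return advisories

readings = [SensorReading("field-A", 18.0, 31.0), SensorReading("field-A", 22.0, 30.5),
            SensorReading("field-B", 34.0, 29.0)]
print(advise(readings))   # {'field-A': 'irrigate', 'field-B': 'no action'}
```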


2019
Author(s): Abhishek Singh

Abstract Background: Big data analysis requires the ability to process large data sets that are held and fine-tuned for use by corporations. Only very recently has the need for big data analysis caught the attention of low-budget corporate groups and academia, which typically lack the money and resources to buy expensive licenses for big data analysis platforms such as SAS. Corporations continue to work with the SAS data format, largely because of organizational history and because their existing code has been built around it, and data providers therefore continue to supply data in SAS formats. An acute need has arisen from this gap: the data arrive in SAS format, while many coders have no SAS expertise or training, since the economic and inertial forces that shaped these two groups of people have been different. Method: We analyze these differences and the resulting need for SasCsvToolkit, which generates a CSV file from SAS-format data so that data scientists can apply their skills in other tools that process CSVs, such as R, SPSS, or even Microsoft Excel; it also converts CSV files to SAS format. In addition, SAS database programmers often struggle to find the right method for database search, exact match, substring match, except conditions, filters, unique values, table joins and data mining, for which the toolkit provides template scripts to modify and use from the command line. Results: The toolkit has been implemented on the SLURM scheduler platform as a `bag-of-tasks` algorithm for parallel and distributed workflows, and a serial version has also been incorporated. Conclusion: In the age of Big Data, where there are many file formats, software packages and analytics environments, each with its own semantics for specific file types, data engineers will find the functions of SasCsvToolkit very handy.
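The toolkit itself targets SAS together with the SLURM scheduler; purely as a point of comparison, a minimal chunked SAS-to-CSV conversion can also be sketched in Python with pandas.read_sas, as below. The file paths are placeholders, and this sketch is not SasCsvToolkit and covers only the SAS-to-CSV direction.

```python
# Minimal pandas-based SAS-to-CSV conversion, streamed in chunks to keep memory bounded.
# Illustrative only; paths and chunk size are placeholders, not the toolkit's interface.
import pandas as pd

def sas_to_csv(sas_path: str, csv_path: str, chunksize: int = 100_000) -> None:
    """Stream a .sas7bdat / .xpt file to CSV chunk by chunk."""
    reader = pd.read_sas(sas_path, chunksize=chunksize, encoding="latin-1")
    first = True
    for chunk in reader:
        chunk.to_csv(csv_path, mode="w" if first else "a", header=first, index=False)
        first = False

# Hypothetical usage:
# sas_to_csv("input_data.sas7bdat", "output_data.csv")
```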


2019, Vol 8 (S3), pp. 35-40
Author(s): S. Mamatha, T. Sudha

In this digital world, as organizations evolve rapidly around data-centric assets, the explosion of data and the size of databases have grown exponentially. Data is generated from different sources such as business processes, transactions, social networking sites and web servers, and exists in structured as well as unstructured form. The term "Big Data" is used for large data sets whose size is beyond the ability of commonly used software tools to capture, manage and process within a tolerable elapsed time. Big data varies in size, ranging from a few dozen terabytes to many petabytes in a single data set. The difficulties include capture, storage, search, sharing, analytics and visualization. Big data is available in structured, unstructured and semi-structured formats, and relational databases fail to store such multi-structured data. Apache Hadoop is an efficient, robust, reliable and scalable framework to store, process, transform and extract big data; it is open-source, free software available from the Apache Software Foundation. In this paper we present Hadoop, HDFS, MapReduce and a c-means big data algorithm to minimize the effort of big data analysis using MapReduce code. The objective of this paper is to summarize the state-of-the-art efforts in clinical big data analytics and to highlight what might be needed to enhance the outcomes of clinical big data analytics tools and related fields.
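To make the clustering step concrete, the sketch below shows the standard fuzzy c-means updates on a single machine with NumPy: the membership update corresponds to the per-record "map" work and the weighted centroid sums to the "reduce" step that a MapReduce job would distribute. It is illustrative only and not the paper's Hadoop implementation; the toy data and parameter values are invented.

```python
# Single-machine NumPy sketch of fuzzy c-means (the "c-means" step), for illustration only.
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=c, replace=False)]
    for _ in range(iters):
        # Distances of every point to every centroid, shape (n, c).
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1) + 1e-12
        # Membership update: u[k, i] proportional to d[k, i]^(-2 / (m - 1)); rows sum to 1.
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)
        # Centroid update: membership-weighted means (the "reduce"-style aggregation).
        w = u ** m
        centroids = (w.T @ X) / w.sum(axis=0)[:, None]
    return centroids, u

# Toy data: three well-separated 2-D blobs.
X = np.vstack([np.random.default_rng(1).normal(loc=mu, size=(50, 2)) for mu in (0, 5, 10)])
centroids, memberships = fuzzy_c_means(X)
print(np.round(centroids, 2))
```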

