Formal verification of the extension of iStar to support Big data projects

2021 ◽  
Vol 22 (3) ◽  
Author(s):  
Chabane Djeddi ◽  
Nacer-eddine Zarour ◽  
Pierre-Jean Charrel

Identifying all the right requirements is indispensable for the success of any system. These requirements need to be engineered with precision in the early phases: late corrections are estimated to cost more than 200 times as much as corrections made during requirements engineering (RE). This is especially true in the Big data area, which is becoming more and more crucial due to its importance and characteristics. In fact, after analyzing the literature, we note that current RE methods do not support the elicitation of requirements for Big data projects. In this study, we propose BiStar, a novel method that extends iStar to undertake some Big data characteristics such as volume and variety. As a first step, we identify some missing concepts that current requirements engineering methods do not support. Next, BiStar, an extension of iStar, is developed to take the specific characteristics of Big data into account while dealing with requirements. To ensure the integrity property of BiStar, formal proofs were made: we perform a bigraph-based description of iStar and BiStar. Finally, both iStar and BiStar are applied to the same illustrative scenario. BiStar shows important results and proves more suitable for eliciting the requirements of Big data projects.

2021 ◽  
Vol 11 (13) ◽  
pp. 6047
Author(s):  
Soheil Rezaee ◽  
Abolghasem Sadeghi-Niaraki ◽  
Maryam Shakeri ◽  
Soo-Mi Choi

A lack of required data resources is one of the challenges to the adoption of Augmented Reality (AR) for providing the right services to users, whereas the amount of spatial information produced by people is increasing daily. This research aims to design a personalized AR-based tourist system that retrieves big data according to the users' demographic contexts in order to enrich the AR data source in tourism. The research is conducted in two main steps. First, the type of tourist attraction in which a user is interested is predicted from the user's demographic contexts, which include age, gender, and education level, using a machine learning method. Second, the right data for the user are extracted from the big data by considering time, distance, popularity, and the neighborhood of the tourist places, using the VIKOR and SWARA decision-making methods. The results show that the decision tree performs about 6% better than the SVM method at predicting the type of tourist attraction. In addition, the user study of the system shows overall satisfaction of the participants of about 55% in terms of ease of use and about 56% in terms of the system's usefulness.
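The VIKOR ranking step mentioned in the abstract can be sketched generically as follows. This is a minimal, textbook-style VIKOR implementation; the criteria (popularity, closeness, rating), scores, and weights are hypothetical illustrations, not values from the paper, and all criteria are assumed to be benefit criteria (higher is better):

```python
import numpy as np

def vikor(matrix, weights, v=0.5):
    """Rank alternatives (rows) over benefit criteria (columns); lower Q is better."""
    X = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    f_best = X.max(axis=0)                                     # ideal value per criterion
    f_worst = X.min(axis=0)                                    # anti-ideal value per criterion
    span = np.where(f_best == f_worst, 1.0, f_best - f_worst)  # avoid division by zero
    d = w * (f_best - X) / span                                # weighted distance to the ideal
    S = d.sum(axis=1)                                          # group utility per alternative
    R = d.max(axis=1)                                          # individual regret per alternative
    S_span = (S.max() - S.min()) or 1.0
    R_span = (R.max() - R.min()) or 1.0
    Q = v * (S - S.min()) / S_span + (1 - v) * (R - R.min()) / R_span
    return Q

# Three hypothetical tourist places scored on popularity, closeness, rating
scores = [[0.9, 0.2, 0.8],
          [0.5, 0.9, 0.6],
          [0.3, 0.4, 0.4]]
weights = [0.5, 0.3, 0.2]
Q = vikor(scores, weights)
best = int(np.argmin(Q))  # index of the top-ranked place
```

In a system like the one described, the rows would be candidate attractions and the weights could come from a weighting method such as SWARA rather than being fixed by hand.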


2002 ◽  
Vol 17 (9) ◽  
pp. 2457-2464 ◽  
Author(s):  
Yafei Zhang ◽  
Mikka N.-Gamo ◽  
Kiyoharu Nakagawa ◽  
Toshihiro Ando

A simple and novel method was developed for efficient synthesis of aligned multiwalled carbon nanotubes (CNTs) in methanol and ethanol under normal pressure. The CNTs' alignment and structures were investigated using Raman scattering and x-ray diffraction spectroscopy. A unique kind of coupled CNT was synthesized in which one rotated to the left and one rotated to the right. Chains periodically bridged the coupled CNTs. The growth mechanism of the CNTs within organic liquid is proposed to be a catalytic process at the Fe film surface in a dynamic and thermal nonequilibrium condition in organic liquids.


2017 ◽  
Vol 2 (Suppl. 1) ◽  
pp. 1-10
Author(s):  
Denis Horgan

In the fast-moving arena of modern healthcare, with its cutting-edge science, it is already vital, and will become more so, that stakeholders collaborate openly and effectively. Transparency, especially on drug pricing, is of paramount importance. There is also a need to ensure that regulations and legislation covering, for example, the new, smaller clinical trials required to make personalised medicine work effectively, and the huge practical and ethical issues surrounding Big Data and data protection, are common, understood, and enforced across the EU. With more integration, collaboration, dialogue, and increased trust among everyone in the field, stakeholders can help mould the right frameworks, in the right place, at the right time. Once achieved, this will allow us all to work more quickly and more effectively towards creating a healthier - and thus wealthier - European Union.


Author(s):  
Dr. Pasumponpandian A.

The integration of two of the biggest giants in the computing world has resulted in the development and advancement of new methodologies in data processing. Cognitive computing and big data analytics are integrated to give rise to technologically advanced algorithms such as MOIWO and NSGA. The E-projects portfolio selection (EPPS) problem plays an important role in the web development environment and is handled with the help of a decision-making algorithm based on big data. The EPPS problem involves choosing the right projects for investment on social media in order to achieve maximum return under minimal risk. To address this issue and further optimize EPPS on social media, the proposed work builds a hybrid algorithm known as NSGA-II-MOIWO, which combines the strengths of the MOIWO and NSGA-II algorithms into an efficient whole. The experimental results are recorded and analyzed to determine the most optimal algorithm based on the return and risk of investment. Based on the results, NSGA-II-MOIWO outperforms both MOIWO and NSGA, proving to be a better hybrid alternative.
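The return/risk trade-off described above is what NSGA-II (and thus a hybrid such as NSGA-II-MOIWO) optimizes via Pareto dominance. The sketch below shows only that dominance-sorting building block, not the full evolutionary algorithm, and the portfolio figures are hypothetical:

```python
def dominates(a, b):
    """a = (return, risk) dominates b if its return is no worse (higher) and its
    risk no worse (lower), with at least one strictly better."""
    return a[0] >= b[0] and a[1] <= b[1] and (a[0] > b[0] or a[1] < b[1])

def pareto_front(portfolios):
    """Keep the non-dominated portfolios (maximize return, minimize risk)."""
    front = []
    for i, p in enumerate(portfolios):
        if not any(dominates(q, p) for j, q in enumerate(portfolios) if j != i):
            front.append(p)
    return front

# Hypothetical (expected return, risk) pairs for candidate e-project portfolios
candidates = [(0.12, 0.30), (0.10, 0.20), (0.15, 0.45), (0.08, 0.25)]
front = pareto_front(candidates)
# (0.08, 0.25) is dominated by (0.10, 0.20): lower return AND higher risk
```

NSGA-II repeatedly applies this kind of non-dominated sorting to a population of candidate solutions; the MOIWO side would contribute its own variation operators on top of it.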


Author(s):  
Javier Conejero ◽  
Sandra Corella ◽  
Rosa M Badia ◽  
Jesus Labarta

Task-based programming has proven to be a suitable model for high-performance computing (HPC) applications. Different implementations have been good demonstrators of this fact and have promoted the acceptance of task-based programming in the OpenMP standard. Furthermore, in recent years, Apache Spark has gained wide popularity in business and research environments as a programming model for addressing emerging big data problems. COMP Superscalar (COMPSs) is a task-based environment that tackles distributed computing (including Clouds) and is a good alternative as a task-based programming model for big data applications. This article describes why we consider task-based programming models a good approach for big data applications. The article includes a comparison of Spark and COMPSs in terms of architecture, programming model, and performance. It focuses on the structural differences between the two frameworks, on their programmability interfaces, and on their efficiency, measured by means of three widely known benchmarking kernels: Wordcount, Kmeans, and Terasort. These kernels enable the evaluation of the most important functionalities of both programming models and the analysis of different workflows and conditions. The main results of this comparison are: (1) COMPSs is able to extract the inherent parallelism from the user code with minimal coding effort, as opposed to Spark, which requires existing algorithms to be adapted and rewritten to explicitly use its predefined functions; (2) COMPSs offers better performance than Spark; and (3) COMPSs has been shown to scale better than Spark in most cases. Finally, we discuss the advantages and disadvantages of both frameworks, highlighting the differences that make them unique, thereby helping to choose the right framework for each particular objective.
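The task-based decomposition idea behind the Wordcount kernel can be sketched in plain Python. This is neither the COMPSs nor the Spark API, just an illustration of the pattern both frameworks exploit: split the input into chunks, count each chunk as an independent task, and merge the partial results:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def count_chunk(lines):
    """One task: count the words in a single chunk of the input."""
    c = Counter()
    for line in lines:
        c.update(line.split())
    return c

def wordcount(lines, n_tasks=4):
    """Split the input into chunks, run one counting task per chunk, merge."""
    chunks = [lines[i::n_tasks] for i in range(n_tasks)]
    with ThreadPoolExecutor(max_workers=n_tasks) as pool:
        partials = pool.map(count_chunk, chunks)  # independent tasks
    total = Counter()
    for p in partials:
        total.update(p)  # reduction step: merge partial counts
    return total

text = ["big data big compute", "task based big data"]
counts = wordcount(text, n_tasks=2)
```

In COMPSs the task functions would be annotated so the runtime discovers the parallelism from the sequential code, whereas in Spark the same computation would be rewritten in terms of its predefined map/reduce operations.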


Author(s):  
Kawa Nazemi ◽  
Martin Steiger ◽  
Dirk Burkhardt ◽  
Jörn Kohlhammer

Policy design requires the investigation of various data in several design steps for making the right decisions, validating, or monitoring the political environment. The increasing amount of data is challenging for the stakeholders in this domain. One promising way to access the “big data” is through abstracted visual patterns and pictures, as proposed by information visualization. This chapter introduces the main idea of information visualization in policy modeling. First, abstracted steps of policy design are introduced that enable the identification of information visualization in the entire policy life-cycle. Thereafter, the foundations of information visualization are introduced based on an established reference model. The authors aim to amplify the incorporation of information visualization in the entire policy design process; therefore, the aspects of data and human interaction are introduced, too. These foundations lead to the description of a conceptual design for social data visualization, in which the aspect of semantics plays an important role.


Web Services ◽  
2019 ◽  
pp. 1301-1329
Author(s):  
Suren Behari ◽  
Aileen Cater-Steel ◽  
Jeffrey Soar

The chapter discusses how Financial Services organizations can take advantage of Big Data analysis for disruptive innovation through examination of a case study in the financial services industry. Popular tools for Big Data Analysis are discussed and the challenges of big data are explored as well as how these challenges can be met. The work of Hayes-Roth in Valued Information at the Right Time (VIRT) and how it applies to the case study is examined. Boyd's model of Observe, Orient, Decide, and Act (OODA) is explained in relation to disruptive innovation in financial services. Future trends in big data analysis in the financial services domain are explored.


Author(s):  
Maryam Fazel-Zarandi ◽  
Mark S. Fox ◽  
Eric Yu

Knowledge Management Systems that enhance and facilitate the process of finding the right expert in an organization have gained much attention in recent years. This chapter explores the potential benefits and challenges of using ontologies for improving existing systems. A modeling technique from requirements engineering is used to evaluate the proposed system and analyze the impact it would have on the goals of the stakeholders. Based on the analysis, an ontology-based expertise finding system is proposed. This chapter also discusses the organizational settings required for the successful deployment of the system in practice.


Big Data ◽  
2016 ◽  
pp. 139-180
Author(s):  
Kawa Nazemi ◽  
Martin Steiger ◽  
Dirk Burkhardt ◽  
Jörn Kohlhammer

Policy design requires the investigation of various data in several design steps for making the right decisions, validating, or monitoring the political environment. The increasing amount of data is challenging for the stakeholders in this domain. One promising way to access the “big data” is through abstracted visual patterns and pictures, as proposed by information visualization. This chapter introduces the main idea of information visualization in policy modeling. First, abstracted steps of policy design are introduced that enable the identification of information visualization in the entire policy life-cycle. Thereafter, the foundations of information visualization are introduced based on an established reference model. The authors aim to amplify the incorporation of information visualization in the entire policy design process; therefore, the aspects of data and human interaction are introduced, too. These foundations lead to the description of a conceptual design for social data visualization, in which the aspect of semantics plays an important role.


Author(s):  
Robert C. May ◽  
Kai F. Wehmeier

Beginning in Grundgesetze §53, Frege presents proofs of a set of theorems known to encompass the Peano-Dedekind axioms for arithmetic. The initial part of Frege’s deductive development of arithmetic, up to theorems (32) and (49), contains fully formal proofs that had merely been sketched out in Grundlagen. Theorems (32) and (49) are significant because they are, respectively, the right-to-left and left-to-right directions of what we today call “Hume’s Principle” (HP). The core observation that we explore is that in Grundgesetze, Frege does not prove Hume’s Principle, at least not if we take HP to be the principle he introduces, and then rejects, as a definition of number in Grundlagen. In order better to understand why Frege never considers HP as a biconditional principle in Grundgesetze, we explicate the theorems Frege actually proves in that work, clarify their conceptual and logical status within the overall derivation of arithmetic, and ask how the definitional content that Frege intuited in Hume’s Principle is reconstructed by the theorems that Frege does prove.
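Assuming the standard modern statement of Hume's Principle found in the literature (with #F the number of Fs and ≈ equinumerosity, i.e. the existence of a one-to-one correspondence), the biconditional and the two directions the abstract attributes to theorems (32) and (49) can be written as:

```latex
% Hume's Principle in modern notation:
\[
  \mathrm{HP}:\qquad \#F = \#G \;\longleftrightarrow\; F \approx G
\]
% Per the abstract, (32) is the right-to-left direction and
% (49) the left-to-right direction of this biconditional:
\[
  (32)\colon\; F \approx G \;\rightarrow\; \#F = \#G
  \qquad\qquad
  (49)\colon\; \#F = \#G \;\rightarrow\; F \approx G
\]
```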

