Conceptual Model for Predictive Analysis on Large Data

This chapter provides an overview of the proposed model for pattern extraction and pattern prediction over data warehouses. As discussed earlier, the main objective of the research is to provide a single model for both pattern extraction and pattern prediction. The specific objectives include an automated way to select variables for the mining process, automated schema design, advanced evaluation of extracted patterns, and visualization of extracted patterns.

This chapter presents an experimental study of the proposed model on the Adult data set. It covers the implementation of pattern extraction from this data set, following the series of steps discussed earlier, and a detailed implementation of pattern prediction for numeric variables, nominal variables, and aggregate data. The implementation of pattern prediction likewise proceeds through the series of steps discussed earlier.
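Since this overview does not fix the algorithms used at each step, the following is only a hedged sketch of what pattern prediction on the Adult data set could look like: a regressor for a numeric target and a classifier for a nominal target. The UCI download URL and column names are standard for this data set; the scikit-learn estimators and their parameters are illustrative assumptions, not the implementation described in the chapter.

```python
# A minimal, illustrative sketch (not the author's implementation): pattern prediction on
# the UCI Adult data set for one numeric target (hours-per-week) and one nominal target
# (income class), using off-the-shelf scikit-learn estimators as stand-ins.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

COLS = ["age", "workclass", "fnlwgt", "education", "education-num", "marital-status",
        "occupation", "relationship", "race", "sex", "capital-gain", "capital-loss",
        "hours-per-week", "native-country", "income"]
URL = "https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data"
df = pd.read_csv(URL, names=COLS, skipinitialspace=True, na_values="?").dropna()

nominal = ["workclass", "education", "marital-status", "occupation",
           "relationship", "race", "sex", "native-country"]
numeric = ["age", "fnlwgt", "education-num", "capital-gain", "capital-loss"]
X = df[nominal + numeric]

def make_prep():
    # One-hot encode the nominal columns, pass the numeric ones through unchanged.
    return ColumnTransformer([("cat", OneHotEncoder(handle_unknown="ignore"), nominal)],
                             remainder="passthrough")

# Prediction of a numeric variable: hours-per-week.
X_tr, X_te, y_tr, y_te = train_test_split(X, df["hours-per-week"], random_state=0)
reg = make_pipeline(make_prep(), RandomForestRegressor(n_estimators=100, random_state=0))
reg.fit(X_tr, y_tr)
print("R^2, hours-per-week:", round(reg.score(X_te, y_te), 3))

# Prediction of a nominal variable: income class (<=50K vs. >50K).
X_tr, X_te, y_tr, y_te = train_test_split(X, df["income"], random_state=0)
clf = make_pipeline(make_prep(), RandomForestClassifier(n_estimators=100, random_state=0))
clf.fit(X_tr, y_tr)
print("Accuracy, income class:", round(clf.score(X_te, y_te), 3))
```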


This chapter presents the implementation of the proposed model on the Automobile data set. It covers the implementation of pattern extraction from this data set, following the series of steps described in the proposed-model chapter, and a detailed implementation of pattern prediction from the Automobile data set for numeric variables, nominal variables, and aggregate data. The implementation of pattern prediction likewise proceeds through the series of steps discussed earlier.


Author(s):  
Jamel Feki

Within today’s competitive economic context, information acquisition, analysis and exploitation have become strategic and unavoidable requirements for every enterprise. Moreover, in order to guarantee their persistence and growth, enterprises are henceforth forced to capitalize on expertise in this domain. Data warehouses (DW) emerged as a potential solution answering the needs of storage and analysis of large data volumes. In fact, a DW is a database system specialized in the storage of data used for decisional ends. This type of system was proposed to overcome the inability of OLTP (On-Line Transaction Processing) systems to offer analysis functionalities. It offers integrated, consolidated and temporal data to perform decisional analyses. However, the different objectives and functionalities of OLTP and DW systems created a need for a development method appropriate for DW. Indeed, data warehouses still attract considerable effort and interest from a large community of both software editors of decision support systems (DSS) and researchers (Kimball, 1996; Inmon, 2002). Current software tools for DW focus on meeting end-user needs. OLAP (On-Line Analytical Processing) tools are dedicated to multidimensional analyses and graphical visualization of results (e.g., Oracle Discoverer); some products permit the description of DW and Data Mart (DM) schemes (e.g., Oracle Warehouse Builder). One major limit of these tools is that the schemes must be built beforehand and, in most cases, manually. However, such a task can be tedious, error-prone and time-consuming, especially with heterogeneous data sources. On the other hand, the majority of research efforts focus on particular aspects of DW development, e.g., multidimensional modeling and physical design (materialized views (Moody & Kortnik, 2000), index selection (Golfarelli, Rizzi, & Saltarelli, 2002), schema partitioning (Bellatreche & Boukhalfa, 2005)), and more recently applying data mining for better data interpretation (Mikolaj, 2006; Zubcoff, Pardillo & Trujillo, 2007). While these practical issues determine the performance of a DW, other, just as important, conceptual issues (e.g., requirements specification and DW schema design) still require further investigation. In fact, few propositions have been put forward to assist in and/or automate the design process of DW, cf. (Bonifati, Cattaneo, Ceri, Fuggetta & Paraboschi, 2001; Hahn, Sapia & Blaschka, 2000; Phipps & Davis, 2002; Peralta, Marotta & Ruggia, 2003).


2012, Vol 01 (08), pp. 29-34
Author(s):  
Shih-Chih Chen
Huei-Huang Chen
Mei-Tzu Lin
Yu-Bei Chen

Recently, social networking applications have expanded rapidly and attracted a large number of users within a short period of time. This study attempts to develop a conceptual model to understand continuance intention in the context of social networking. The conceptual model integrates the post-acceptance model of information system continuance with perceived ease of use and perceived usefulness, proposed by Bhattacherjee (2001a) and Davis (1989), respectively. In the proposed model, continuance intention is influenced by relationship quality and information system quality. Additionally, nine propositions are developed based on the proposed model and a literature review. Finally, conclusions, managerial implications, and future directions for research are also provided.


2021
Author(s):  
Zhenling Jiang

This paper studies price bargaining when both parties have left-digit bias when processing numbers. The empirical analysis focuses on the auto finance market in the United States, using a large data set of 35 million auto loans. Incorporating left-digit bias in bargaining is motivated by several intriguing observations. The scheduled monthly payments of auto loans bunch at both $9- and $0-ending digits, especially over $100 marks. In addition, $9-ending loans carry a higher interest rate, and $0-ending loans have a lower interest rate. We develop a Nash bargaining model that allows for left-digit bias from both consumers and finance managers of auto dealers. Results suggest that both parties are subject to this basic human bias: the perceived difference between $9- and the next $0-ending payments is larger than $1, especially between $99- and $00-ending payments. The proposed model can explain the phenomena of payments bunching and differential interest rates for loans with different ending digits. We use counterfactuals to show a nuanced impact of left-digit bias, which can both increase and decrease the payments. Overall, bias from both sides leads to a $33 increase in average payment per loan compared with a benchmark case with no bias. This paper was accepted by Matthew Shum, marketing.
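As a purely illustrative aid (the notation below is assumed, not taken from the paper), the mechanism can be sketched in two lines: a payment's sub-$100 remainder is discounted by a bias parameter, which makes crossing a $100 mark feel like more than a one-dollar jump, and the bargained payment maximizes a Nash product over the two parties' biased perceptions.

```latex
% Illustrative notation only: theta, lambda, u_c, u_f are assumptions, not the paper's symbols.
% Perceived monthly payment with left-digit bias at the hundreds digit:
\[
  \tilde{m}(m;\theta) \;=\; 100\left\lfloor \tfrac{m}{100} \right\rfloor
      \;+\; (1-\theta)\left(m - 100\left\lfloor \tfrac{m}{100} \right\rfloor\right),
  \qquad \theta \in [0,1].
\]
% For example, \tilde{m}(200;\theta) - \tilde{m}(199;\theta) = 1 + 99\theta > 1 whenever
% \theta > 0, so the step from a $199 to a $200 payment is perceived as more than $1.
% A Nash-bargained payment m* then, schematically, solves
\[
  m^{*} \;\in\; \arg\max_{m}\;
     \Big[u_{c}\!\big(-\tilde{m}(m;\theta_{c})\big)\Big]^{\lambda}\,
     \Big[u_{f}\!\big(\tilde{m}(m;\theta_{f})\big)\Big]^{1-\lambda},
\]
% with separate bias parameters for the consumer (theta_c) and the finance manager (theta_f)
% and a bargaining weight lambda.
```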


Author(s):  
Oleksandra Bazko
Nataliia Yushchenko

The article proposes a conceptual model for improving the efficiency of project activities at the local level. The essence of project activity efficiency is defined, and the basic approaches to analyzing the criteria and factors of project activity success in general are examined. The factors involved in organizing group project activities are generalized and analyzed, including social-psychological and external organizational factors and the level of readiness of territorial communities for project activities. The level of readiness of territorial communities to carry out project activities is determined to be a set of motives, knowledge, skills, abilities, methods of project action, and personal qualities that ensure the successful interaction of its subjects. It is substantiated that the concept of the proposed model lies in making the most effective use of group project activity to ensure the sound formation of the communicative competence of territorial communities and their personal and professional development. The key components of the proposed model are the conceptual-target, functional, structural, and diagnostic components. The main purpose of the conceptual-target component is to form communicative competence and to master methods of solving problem-oriented tasks by means of project activities. The functional component of the model identifies the entities on which the effectiveness of the project activity depends and which carry out its evaluation. The structural component of the model contains the stages according to which project activities should be organized at the local level. The diagnostic component of the model contains the criteria, indicators, levels, and means of evaluating the effectiveness of project activities. Within this component, three levels of project activity efficiency (low, medium, high) are defined on the basis of the results achieved in the course of the project activity.


Author(s):  
Aya Taleb
Rizik M. H. Al-Sayyed
Hamed S. Al-Bdour

In this research, a new technique is proposed to improve the accuracy of link prediction for most networks; it is based on a prediction ensemble approach using a voting merging technique. The new ensemble, called the Jaccard, Katz, and Random models Wrapper (JKRW), scales up the prediction accuracy and provides better predictions for populations of different sizes, including small, medium, and large data. The proposed model has been tested and evaluated using the area under the curve (AUC) and accuracy (ACC) measures. These measures were also applied to the other models used in this study, which were built based on the Jaccard coefficient, Katz, Adamic/Adar, and preferential attachment. Results from applying the evaluation metrics verify the improvement in JKRW's effectiveness and stability in comparison to the other tested models. The results from applying the Wilcoxon signed-rank method (one of the non-parametric paired tests) indicate that JKRW has significant differences compared with the other models in the different populations at a 0.95 confidence interval.
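The abstract does not spell out the exact voting-merging rule, so the Python sketch below only illustrates the general idea behind such an ensemble: score every candidate (currently unlinked) node pair with Jaccard, Katz, and random-baseline models, let each model vote for its top-ranked pairs, and keep the pairs that win a majority of votes. The function names, the top-k threshold, and the Katz damping factor beta are assumptions, not the authors' settings.

```python
# Hedged sketch of a JKRW-style voting ensemble for link prediction (not the authors' code).
import networkx as nx
import numpy as np

def katz_matrix(G, beta=0.05):
    """Katz similarity: sum_{l>=1} beta^l A^l = (I - beta*A)^-1 - I (beta must keep the series convergent)."""
    nodes = list(G.nodes())
    A = nx.to_numpy_array(G, nodelist=nodes)
    S = np.linalg.inv(np.eye(len(nodes)) - beta * A) - np.eye(len(nodes))
    idx = {u: i for i, u in enumerate(nodes)}
    return S, idx

def top_k(scores, k):
    """The k candidate pairs with the highest score under one model."""
    return {pair for pair, _ in sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]}

def jkrw_vote(G, k=50, seed=0):
    rng = np.random.default_rng(seed)
    candidates = list(nx.non_edges(G))

    jaccard = {(u, v): p for u, v, p in nx.jaccard_coefficient(G, candidates)}
    S, idx = katz_matrix(G)
    katz = {(u, v): S[idx[u], idx[v]] for u, v in candidates}
    random_model = {pair: rng.random() for pair in candidates}  # the "Random" member of the ensemble

    votes = [top_k(m, k) for m in (jaccard, katz, random_model)]
    # Majority voting: predict a link if at least two of the three models rank the pair in their top k.
    return {pair for pair in candidates if sum(pair in v for v in votes) >= 2}

if __name__ == "__main__":
    G = nx.karate_club_graph()
    print(sorted(jkrw_vote(G))[:10])
```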


2011, Vol 1 (1), pp. 5
Author(s):  
Bilal Alvi
M. Wasim Qureshi
Shakir Karim

All too often, resources are unavailable exactly when they are needed most, causing unavoidable or irreversible loss. Such scenarios can cost organizations their business; enterprises have suffered, and are still suffering, major losses in terms of revenue, customer dissatisfaction and more. Although availability is one of the three basic components of security, alongside confidentiality and integrity, it has not received its due share of attention and has remained in the background. In this paper, the importance of information availability and its key determinants are discussed and analyzed, and a conceptual model is proposed. The model is validated by carrying out an in-depth study of three major multinational oil enterprises in Pakistan. The study showed that if information availability is properly taken care of, success in business can be better assured.


2022
Author(s):  
Jyostna Bodapati
Rohith V N
Venkatesulu Dondeti

Abstract Pneumonia is the primary cause of death in children under the age of 5 years. Faster and more accurate laboratory testing aids in prescribing appropriate treatment for children suspected of having pneumonia, lowering mortality. In this work, we implement a deep neural network model to efficiently evaluate pediatric pneumonia from chest radiograph images. Our network uses a combination of convolutional and capsule layers to capture abstract details as well as low-level hidden features from the radiographic images, allowing the model to generate more generalizable predictions. Furthermore, we combine several capsule networks by stacking them together and connecting them with dense layers. The joint model is trained as a single model using a joint loss, and the weights of the capsule layers are updated using the dynamic routing algorithm. The proposed model is evaluated using a benchmark pneumonia dataset (Kermany et al., 2018), and the outcomes of our experimental studies indicate that the capsules employed in the network enhance the learning of disease-level features that are essential in diagnosing pneumonia. According to our comparison studies, the proposed model, with a convolutional base from InceptionV3 attached to capsule layers at the end, surpasses several existing models by achieving an accuracy of 94.84%. The proposed model is superior in terms of various performance measures such as accuracy and recall, and is well suited to real-time pediatric pneumonia diagnosis, substituting for manual chest radiograph examination.
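Very roughly, the architecture described above can be summarized in the Keras sketch below: an InceptionV3 convolutional base followed by a primary-capsule style head (channel grouping plus the capsule "squash" non-linearity) and a dense softmax classifier. The stacked capsule layers, dynamic routing, and joint-loss training reported in the paper are deliberately omitted here, and the input size, capsule dimension, and optimizer are assumptions made only for illustration.

```python
# Hedged architectural sketch (not the authors' model): InceptionV3 base + a simplified
# primary-capsule style head. Input size, capsule dimension and optimizer are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def squash(s, axis=-1, eps=1e-7):
    # Capsule "squash" non-linearity: short vectors shrink toward 0, long ones toward unit length.
    sq_norm = tf.reduce_sum(tf.square(s), axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / tf.sqrt(sq_norm + eps)

def build_model(input_shape=(224, 224, 3), n_classes=2, caps_dim=8):
    base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet",
                                             input_shape=input_shape)
    x = layers.Conv2D(256, 3, padding="same", activation="relu")(base.output)
    x = layers.Reshape((-1, caps_dim))(x)       # group channels into capsule vectors
    x = layers.Lambda(squash)(x)                # capsule non-linearity
    x = layers.Flatten()(x)
    x = layers.Dense(128, activation="relu")(x)
    out = layers.Dense(n_classes, activation="softmax")(x)  # normal vs. pneumonia
    model = models.Model(base.input, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_model().summary()
```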


Author(s):  
Dedi Irwansyah

The emerging interest in using literature to teach English has not yet highlighted the significance of Islamic literature within the Indonesian educational context. This article presents the portrayal of Islamic literature in the English language teaching (ELT) study area and offers a possible conceptual model for integrating Islamic literature into ELT. Following a library research method, with a corpus consisting of fourteen stories and one poem drawn from fifteen books, the findings of this study show that most works of Islamic literature are designed for fluent readers; that the presentation of Islamic literature is dominated by Middle Eastern and Western writers; and that the Western writers are not always sensitive to the symbols glorified by Muslim English learners in Indonesia. To address these findings, this study proposes a conceptual model consisting of input, process, and output elements. Not only does the proposed model strengthen the position of Islamic literature, but it also integrates Islamic literature into English language teaching so that it can reach both fluent readers and beginning readers. The output of the proposed model, abridged and unabridged texts of Islamic literature, can be used to teach vocabulary, grammar, the four basic language skills, and Islamic values.

