Combining event- and variable-centred approaches to institution-facing learning analytics at the unit of study level

Author(s): Nick Kelly, Maximiliano Montenegro, Carlos Gonzalez, Paula Clasing, Augusto Sandoval, ...

Purpose – The purpose of this paper is to demonstrate the utility of combining event-centred and variable-centred approaches when analysing big data for higher education institutions. It uses a large, university-wide data set to demonstrate the methodology for this analysis by using the case study method. It presents empirical findings about relationships between student behaviours in a learning management system (LMS) and the learning outcomes of students, and further explores these findings using process modelling techniques.

Design/methodology/approach – The paper describes a two-year study in a Chilean university, using big data from an LMS and from the central university database of student results and demographics. Descriptive statistics of LMS use in different years present an overall picture of student use of the system. Process mining is described as an event-centred approach that gives a deeper level of understanding of these findings.

Findings – The study found evidence to support the idea that instructors do not strongly influence student use of an LMS. It replicates existing studies to show that higher-performing students use an LMS differently from lower-performing students. It shows the value of combining variable- and event-centred approaches to learning analytics.

Research limitations/implications – The study is limited by its institutional context, its two-year time frame and its exploratory, case-study mode of investigation.

Practical implications – The paper is useful for institutions developing a methodology for using big data from an LMS and making use of event-centred approaches.

Originality/value – The paper is valuable in replicating and extending recent studies that use event-centred approaches to the analysis of learning data. The study here is on a larger scale than the existing studies (using a university-wide data set) and in a novel context (Latin America), and it provides a clear description of how and why the methodology should inform institutional approaches.
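As an illustration of the two analytical lenses this abstract contrasts, the Python sketch below (not the authors' pipeline; the event fields and values are invented) computes a per-student activity count as a simple variable-centred summary and a directly-follows graph, the basic event-centred structure behind most process-mining models, from a toy LMS event log.

```python
from collections import Counter, defaultdict
from datetime import datetime

# Hypothetical LMS event log: (student_id, activity, timestamp).
events = [
    ("s1", "login",        datetime(2015, 3, 2, 9, 0)),
    ("s1", "view_content", datetime(2015, 3, 2, 9, 5)),
    ("s1", "submit_quiz",  datetime(2015, 3, 2, 9, 40)),
    ("s2", "login",        datetime(2015, 3, 2, 10, 0)),
    ("s2", "view_forum",   datetime(2015, 3, 2, 10, 3)),
]

# Variable-centred view: one summary number per student (activity count).
activity_counts = Counter(student for student, _, _ in events)

# Event-centred view: directly-follows relation per student trace,
# the basic building block of process-mining models.
traces = defaultdict(list)
for student, activity, ts in sorted(events, key=lambda e: (e[0], e[2])):
    traces[student].append(activity)

dfg = defaultdict(int)
for trace in traces.values():
    for a, b in zip(trace, trace[1:]):
        dfg[(a, b)] += 1

print(activity_counts)   # e.g. Counter({'s1': 3, 's2': 2})
print(dict(dfg))         # e.g. {('login', 'view_content'): 1, ...}
```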

2014, Vol. 21(1), pp. 111-126
Author(s): Palaneeswaran Ekambaram, Peter E.D. Love, Mohan M. Kumaraswamy, Thomas S.T. Ng

Purpose – Rework is an endemic problem in construction projects and has been identified as a significant factor contributing to cost and schedule overruns. Causal ascription is necessary to obtain knowledge about the underlying nature of rework so that appropriate prevention mechanisms can be put in place. The paper aims to discuss these issues.

Design/methodology/approach – Using a supervised questionnaire survey and case-study interviews, data about the sources and causes of rework were obtained from 112 building and civil engineering projects. A multivariate exploration was conducted to examine the underlying relationships between rework variables.

Findings – The analysis revealed a significant difference between rework causes for building and civil engineering projects. The set of associations explored in the analyses will be useful for developing a generic causal model to examine the quantitative impact of rework on project performance so that appropriate prevention strategies can be identified and developed.

Research limitations/implications – The limitations include the small data set (112 projects, comprising 75 building and 37 civil engineering projects).

Practical implications – Meaningful insights into rework occurrences in construction projects will pave pathways for rational mitigation and effective management measures.

Originality/value – To date there has been limited empirical research seeking to determine the causal ascription of rework, particularly in Hong Kong.
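The abstract does not specify the multivariate technique used. As one hedged illustration of what such an exploration can look like, the sketch below runs a principal component analysis over invented project ratings of rework causes; the cause labels and values are assumptions, not the study's data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical ratings (1-5) of four rework causes for a handful of projects;
# the real study covered 112 building and civil engineering projects.
# Columns: [design_change, poor_communication, site_error, client_change]
ratings = np.array([
    [4, 2, 5, 1],
    [3, 3, 4, 2],
    [1, 5, 2, 4],
    [2, 4, 1, 5],
    [5, 1, 4, 2],
])

# Standardize, then extract principal components to explore which
# rework causes co-vary across projects.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(ratings))
print(scores.shape)  # (5, 2): each project located in a 2-component space
```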


2018, Vol. 36(3), pp. 458-481
Author(s): Yezheng Liu, Lu Yang, Jianshan Sun, Yuanchun Jiang, Jinkun Wang

Purpose – Academic groups are designed specifically for researchers. A group recommendation procedure is essential to support scholars' research-based social activities. However, group recommendation methods are rarely applied in online libraries, and they often suffer from scalability problems in big data contexts. The purpose of this paper is to facilitate academic group activities in big data-based library systems by recommending satisfying articles for academic groups.

Design/methodology/approach – The authors propose a collaborative matrix factorization (CoMF) mechanism and implement parallelized CoMF under the Hadoop framework. Its rationale is to collaboratively decompose the researcher-article interaction matrix and the group-article interaction matrix. Furthermore, three extended models of CoMF are proposed.

Findings – Empirical studies on the CiteULike data set demonstrate that CoMF and its three variants outperform baseline algorithms in terms of accuracy and robustness. The scalability evaluation of parallelized CoMF shows its potential value in scholarly big data environments.

Research limitations/implications – The proposed methods fill the gap of group-article recommendation in the online libraries domain. They enrich group recommendation methods by considering the interaction effects between groups and members, and they are the first attempt to implement group recommendation methods in big data contexts.

Practical implications – The proposed methods can improve group activity effectiveness and information shareability in academic groups, which is beneficial to membership retention and enhances the service quality of online library systems. Furthermore, the proposed methods are applicable to big data contexts and make library system services more efficient.

Social implications – The proposed methods have potential value for improving scientific collaboration and research innovation.

Originality/value – The proposed CoMF method is a novel group recommendation method based on the collaborative decomposition of the researcher-article matrix and the group-article matrix. The process indirectly reflects the interaction between groups and members, which accords with actual library environments and provides interpretable recommendation results.
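A minimal sketch of the collaborative-factorization idea described above, assuming the common formulation in which the researcher-article and group-article matrices share the same article factors. The toy matrices, dimensions and learning rate are illustrative; the paper's actual CoMF variants and Hadoop parallelization are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy interaction matrices: rows are researchers/groups, columns are articles.
R = rng.integers(0, 2, size=(6, 8)).astype(float)   # researcher-article
G = rng.integers(0, 2, size=(3, 8)).astype(float)   # group-article

k, lr, reg = 4, 0.05, 0.01
U = 0.1 * rng.standard_normal((6, k))   # researcher factors
W = 0.1 * rng.standard_normal((3, k))   # group factors
V = 0.1 * rng.standard_normal((8, k))   # shared article factors

for _ in range(200):
    # Reconstruction errors for both matrices.
    E_r = R - U @ V.T
    E_g = G - W @ V.T
    # Gradient steps on a squared-error loss; V receives signal from both
    # matrices, which is the "collaborative" part of the factorization.
    U += lr * (E_r @ V - reg * U)
    W += lr * (E_g @ V - reg * W)
    V += lr * (E_r.T @ U + E_g.T @ W - reg * V)

# Score articles for group 0 and recommend the top three.
group_scores = (W @ V.T)[0]
print(np.argsort(-group_scores)[:3])
```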


2019, Vol. 33(4), pp. 369-379
Author(s): Xia Liu

Purpose – Social bots are prevalent on social media. Malicious bots can severely distort the true voices of customers. This paper aims to examine social bots in the context of big data of user-generated content. In particular, the author investigates the scope of information distortion for 24 brands across seven industries. Furthermore, the author studies the mechanisms that make social bots viral. Last, approaches to detecting and preventing malicious bots are recommended.

Design/methodology/approach – A Twitter data set of 29 million tweets was collected. Latent Dirichlet allocation and word clouds were used to visualize the unstructured big data of textual content. Sentiment analysis was used to automatically classify the 29 million tweets. A fixed-effects model was run on the final panel data.

Findings – The findings demonstrate that social bots significantly distort brand-related information across all industries and among all brands under study. Moreover, Twitter social bots are significantly more effective at spreading word of mouth. In addition, social bots use volume and emotion as their major mechanisms to influence and manipulate the spread of information about brands. Finally, the bot detection approaches are effective at identifying bots.

Research limitations/implications – As brand companies use social networks to monitor brand reputation and engage customers, it is critical for them to distinguish true consumer opinions from fake ones that are artificially created by social bots.

Originality/value – This is the first big data examination of social bots in the context of brand-related user-generated content.
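As a hedged illustration of the topic-modelling step mentioned in the methodology (not the author's actual code or corpus), the sketch below applies latent Dirichlet allocation to a handful of made-up brand tweets using scikit-learn; the tweet texts and the topic count are assumptions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented brand-related tweets standing in for the 29m-tweet corpus.
tweets = [
    "love the new phone battery lasts all day",
    "terrible service waited hours for support",
    "great deal on the new phone this week",
    "support never answers the phone terrible",
]

# Bag-of-words representation, then LDA to surface latent topics.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Top words per topic, roughly what would feed a word cloud.
terms = vectorizer.get_feature_names_out()  # scikit-learn >= 1.0
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-4:]]
    print(f"topic {i}: {top}")
```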


2019, Vol. ahead-of-print (ahead-of-print)
Author(s): Islam Mohamed Hegazy

Purpose – The purpose of this paper is to better understand the growing relationship between big data 2.0 and neuromarketing, particularly as used to influence election outcomes, with a special focus on the doubts raised about Trump's 2016 presidential campaign and its ability to hijack American political consumers' minds and direct their votes.

Design/methodology/approach – This paper combines deductive and inductive methodology to define the term political neuromarketing 2.0 through a brief literature review of the related concepts of big data 2.0, virtual identity and neuromarketing. It then applies a single qualitative case study by presenting the history and causes of online voter microtargeting in the USA and analyzing the political neuromarketing 2.0 mechanisms adopted by Trump's campaign team in the 2016 presidential election.

Findings – Based on an analysis of Trump's political marketing mechanisms, the paper argues that big data 2.0 and neuromarketing techniques played an unusual role in reading political consumers' minds and helped the controversial candidate achieve one of the most unexpected victories in presidential elections. Nevertheless, this paper argues that the ethics of using political neuromarketing 2.0 to sell candidates, and its negative impacts on the quality of democracy, are and will continue to be a subject of ongoing debate.

Originality/value – The marriage of big data 2.0 and political neuromarketing is a new interdisciplinary field of inquiry. This paper provides a useful introduction and further explanation of why and how Trump's campaign defied initial predictions of a loss and attained victory in this election.


2017, Vol. 21(1), pp. 57-70
Author(s): Lorna Uden, Wu He

Purpose – Current knowledge management (KM) systems cannot be used effectively for decision-making because of the lack of real-time data. This study aims to discuss how KM can benefit from embedding the Internet of Things (IoT).

Design/methodology/approach – The paper uses a case study to discuss how IoT can help KM capture data and convert those data into knowledge to improve parking services in transportation.

Findings – The case study of an intelligent parking service supported by IoT devices in vehicles shows that KM can play a role in turning the incoming big data collected from IoT devices into useful knowledge more quickly and effectively.

Originality/value – The literature review shows that there are few papers discussing how KM can benefit from embedding IoT and processing the incoming big data collected from IoT devices. The case study developed in this study provides evidence of how IoT can help KM capture big data and convert it into knowledge to improve parking services in transportation.
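To make the "data into knowledge" step concrete, here is a minimal sketch, under invented sensor readings and zone names, of how raw IoT parking events might be aggregated into actionable occupancy knowledge; it illustrates the idea rather than the system described in the case study.

```python
from collections import defaultdict

# Hypothetical stream of parking-sensor readings: (zone, spot_id, occupied).
readings = [
    ("zone_a", 1, True), ("zone_a", 2, False), ("zone_a", 3, True),
    ("zone_b", 1, False), ("zone_b", 2, False),
]

# Convert raw IoT readings into knowledge a driver or planner can act on:
# occupancy rate per zone and how many free spots remain.
occupied = defaultdict(int)
total = defaultdict(int)
for zone, _, is_occupied in readings:
    total[zone] += 1
    occupied[zone] += int(is_occupied)

for zone in total:
    rate = occupied[zone] / total[zone]
    print(f"{zone}: {rate:.0%} occupied, {total[zone] - occupied[zone]} spots free")
```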


foresight, 2017, Vol. 19(4), pp. 409-420
Author(s): Stuti Saxena, Tariq Ali Said Mansour Al-Tamimi

Purpose – The purpose of this paper is to underline the significance of applying Big Data and Internet of Things (IoT) technologies in Omani banks. Opportunities and challenges are also discussed in the case study.

Design/methodology/approach – Four Omani banks, representative of local, international, Islamic and specialized banks, are studied in terms of their social networking presence on Facebook and their e-banking facilities. Emphasis is also placed on the aggregation of internal data and vast amounts of semi-structured external data from public sources, including social media.

Findings – The case study shows that Big Data analytics and IoT technologies may be utilized by Omani banks to facilitate "forecasting" and "nowcasting". In addition, customers may be better managed with better and more efficient services. However, there are challenges in tapping these technologies, such as security, infrastructure and regulatory norms.

Practical implications – Banks in Oman need to appreciate the utility of Big Data and IoT technologies, and for this, a robust IT infrastructure should be institutionalized.

Originality/value – The case study is a major step in integrating Big Data and IoT technologies in Omani banks across the four variants of national, international, Islamic and specialized banks. This is the first study to emphasize such integration in the Omani banking sector.


2020, Vol. ahead-of-print (ahead-of-print)
Author(s): Alexander Schlegel, Hendrik Sebastian Birkel, Evi Hartmann

Purpose – The purpose of this study is to investigate how big data analytics capabilities (BDAC) enable the implementation of integrated business planning (IBP) – the advanced form of sales and operations planning (S&OP) – by counteracting the increasing information processing requirements.

Design/methodology/approach – The research model is grounded in organizational information processing theory (OIPT). An embedded single case study was conducted on a multinational agrochemical company with multiple geographically distinct sub-units of analysis. Data were collected in workshops, semi-structured interviews and direct observations, and enriched by secondary data from internal company sources as well as publicly available sources.

Findings – The results show the relevance of establishing BDAC within an organization to apply IBP by providing empirical evidence of BDA solutions in S&OP. The study highlights how BDAC increase an organization's information processing capacity and consequently enable efficient and effective S&OP. Practical guidance toward the development of tangible, human and intangible BDAC in a particular sequence is given.

Originality/value – This study is the first theoretically grounded, empirical investigation of S&OP implementation journeys under consideration of the impact of BDAC.


2019, Vol. 32(5), pp. 807-823
Author(s): Wu He, Xin Tian, Feng-Kwei Wang

Purpose – Few academic studies specifically investigate how businesses can use social media to innovate customer loyalty programs. The purpose of this paper is to present an in-depth case study of the Shop Your Way (SYW) program, which is regarded as one of the most successful customer loyalty programs with social media.

Design/methodology/approach – This paper uses case study research as the methodology to uncover innovative features associated with the SYW customer loyalty program. The authors collected the data from SYW's social media forums and tweets. The data set was analyzed using social media analytics tools, including the R package and Lexicon.

Findings – Based on the research results, the authors summarize the innovative social media features identified in SYW. The authors also provide insights and recommendations for businesses that are seeking to innovate their customer loyalty programs using social media technologies.

Originality/value – The results of this case study set a good example for businesses that want to innovate and improve their customer loyalty programs using social media technologies. This is the first in-depth case study of the SYW program, one of the most successful customer loyalty programs with social media. The results shed light on how social media can innovate customer loyalty programs in both theory and practice.
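The methodology mentions R-based analytics and Lexicon. As a hedged stand-in (not the authors' toolchain), here is a minimal Python sketch of lexicon-based sentiment scoring over invented forum posts; the word lists are illustrative only.

```python
# Minimal lexicon-based sentiment scoring: count positive minus negative words.
positive = {"love", "great", "rewards", "easy"}
negative = {"hate", "slow", "confusing", "useless"}

def score(post: str) -> int:
    """Return a crude sentiment score for one post."""
    words = post.lower().split()
    return sum(w in positive for w in words) - sum(w in negative for w in words)

forum_posts = [
    "I love the rewards program, so easy to redeem points",
    "The app is slow and the points page is confusing",
]
for post in forum_posts:
    print(score(post), post)
```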


2014, Vol. 10(4), pp. 394-412
Author(s): Mai Miyabe, Akiyo Nadamoto, Eiji Aramaki

Purpose – The aim of this paper is to elucidate rumor propagation on microblogs and to assess a system for collecting rumor information to prevent rumor-spreading.

Design/methodology/approach – We present a case study of how rumors spread on Twitter during a recent disaster, the Great East Japan earthquake of March 11, 2011, compared with a normal situation. We specifically examine rumor disaffirmation because automatic rumor extraction is difficult; extracting rumor disaffirmation is easier than extracting the rumors themselves. We classify tweets in disaster situations, analyze them based on users' impressions and compare the spread of rumor tweets in a disaster situation to that in a normal situation.

Findings – The analysis results showed the following characteristics of rumors in a disaster situation. Information transmission accounts for 74.9 per cent, representing the greatest number of tweets in our data set. Rumor tweets give users strong behavioral facilitation, make them feel negative and foment disorder. Rumors in a normal situation spread through many hierarchies, but rumors in disaster situations spread through only two or three, which means that the rumor-spreading style differs between disaster and normal situations.

Originality/value – The originality of this paper lies in targeting rumors on Twitter and analyzing rumor characteristics from multiple aspects, using not only rumor tweets but also disaffirmation tweets as objects of investigation.
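As an illustration of how the "hierarchies" through which a rumor spreads can be measured, the sketch below computes the cascade depth of retweet chains over an invented set of tweet IDs; it is a toy stand-in, not the paper's analysis code.

```python
# Each retweet points back to the tweet it retweeted; tweet IDs are made up.
retweet_of = {
    "t2": "t1", "t3": "t1",   # t1 retweeted directly twice
    "t4": "t2", "t5": "t4",   # a chain reaching three levels below t1
}

def depth(tweet_id: str) -> int:
    """Number of hops from this tweet back to the original post."""
    d = 0
    while tweet_id in retweet_of:
        tweet_id = retweet_of[tweet_id]
        d += 1
    return d

max_depth = max(depth(t) for t in retweet_of)
print(max_depth)  # 3: this rumor spread through three hierarchies
```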


2015, Vol. 22(4), pp. 624-642
Author(s): Subhadip Sarkar

Purpose – Identification of the best school among competitors is carried out using a new technique called most productive scale size-based data envelopment analysis (DEA). The paper aims to discuss this issue.

Design/methodology/approach – A non-central principal component analysis is used to create a new plane according to constant returns to scale. This plane contains only the ultimate performers.

Findings – The new method shows complete discord with the results of CCR DEA. However, after incorporating the ultimate performers into the original data set, this difference was eliminated.

Practical implications – The proposed frontier provides a way to identify those DMUs which follow the cost strategy proposed by Porter.

Originality/value – A case study of six schools is included to identify the superior school and to visualize gaps in their performances.
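For reference, a minimal sketch of the CCR DEA model that the proposed technique is compared against: the input-oriented multiplier form solved as a linear program with SciPy. The six schools' inputs and outputs below are invented numbers, not the case-study data.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data for six schools (DMUs): one input and two outputs each.
X = np.array([[5.0], [8.0], [7.0], [4.0], [6.0], [9.0]])                        # inputs
Y = np.array([[60, 70], [82, 65], [75, 80], [50, 55], [68, 72], [90, 85]], float)  # outputs

def ccr_efficiency(o: int) -> float:
    """Input-oriented CCR efficiency of DMU o (multiplier form)."""
    n_out, n_in = Y.shape[1], X.shape[1]
    # Decision variables z = [u (output weights), v (input weights)], all >= 0.
    c = np.concatenate([-Y[o], np.zeros(n_in)])        # maximize u . y_o
    A_eq = [np.concatenate([np.zeros(n_out), X[o]])]   # v . x_o == 1
    b_eq = [1.0]
    A_ub = np.hstack([Y, -X])                          # u . y_j - v . x_j <= 0 for all j
    b_ub = np.zeros(len(X))
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return -res.fun  # optimal u . y_o, bounded above by 1

for o in range(len(X)):
    print(f"school {o}: efficiency = {ccr_efficiency(o):.3f}")
```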

