Extracting scientific trends by mining topics from Call for Papers

2019
Vol ahead-of-print (ahead-of-print)
Author(s):
Noor Arshad
Abu Bakar
Saira Hanif Soroya
Iqra Safder
Sajjad Haider
...

Purpose: The purpose of this paper is to present a novel approach for mining scientific trends using topics from Calls for Papers (CFPs). The work contributes valuable input for researchers, academics, funding institutes and research administration departments by sharing trends that help set the direction of future research.
Design/methodology/approach: The authors procure an innovative CFP data set to analyse the scientific evolution and prestige of conferences that set scientific trends, using scientific publications indexed in DBLP. Using Field of Research code 804 from the Australian Research Council, the authors classify 146 conferences (from 2006 to 2015) into different thematic areas by matching the terms extracted from publication titles with the Association for Computing Machinery Computing Classification System. Furthermore, the authors enrich the vocabulary of terms with the WordNet dictionary and the Growbag data set. To measure the significance of terms, the authors adopt the following weighting schemas: probabilistic, gram, relative, accumulative and hierarchical.
Findings: The results indicate the rise of "big data analytics" among CFP topics in the last few years. Whereas topics related to "privacy and security" show an exponential increase, topics related to the "semantic web" show a downfall in recent years. While analysing publication output in DBLP that matches CFPs indexed in ERA Core A* to C rank conferences, the authors find that A* and A tier conferences do not exclusively set publication trends, since B and C tier conferences target similar CFP topics.
Originality/value: Overall, the analyses presented in this research are useful for the scientific community and research administrators to study research trends and improve the data management of digital libraries pertaining to the scientific literature.
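
A minimal sketch, assuming a toy vocabulary, of the term-matching and relative-weighting step the abstract describes; the VOCABULARY mapping and topic_weights helper are illustrative, not the authors' implementation:

```python
# Match terms extracted from publication titles against a classification
# vocabulary and score thematic areas with a simple relative weighting.
from collections import Counter

# Hypothetical vocabulary mapping terms to thematic areas, loosely in the
# spirit of the ACM Computing Classification System.
VOCABULARY = {
    "big data": "big data analytics",
    "analytics": "big data analytics",
    "privacy": "privacy and security",
    "security": "privacy and security",
    "ontology": "semantic web",
    "semantic web": "semantic web",
}

def topic_weights(titles):
    """Count vocabulary hits per thematic area and normalize to relative weights."""
    counts = Counter()
    for title in titles:
        lowered = title.lower()
        for term, theme in VOCABULARY.items():
            if term in lowered:
                counts[theme] += 1
    total = sum(counts.values()) or 1
    return {theme: n / total for theme, n in counts.items()}

titles = [
    "Privacy-preserving big data analytics in the cloud",
    "Security challenges for the semantic web",
]
print(topic_weights(titles))
```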

2016
Vol 24 (1)
pp. 93-115
Author(s):
Xiaoying Yu
Qi Liao

Purpose: Passwords have been designed to protect individual privacy and security and are widely used in almost every area of our life. The strength of passwords is therefore critical to the security of our systems. However, due to the explosion of user accounts and the increasing complexity of password rules, users are struggling to find ways to make up sufficiently secure yet easy-to-remember passwords. This paper aims to investigate whether there are repetitive patterns when users choose passwords and how such behaviors should prompt us to rethink password security policy.
Design/methodology/approach: The authors develop a model to formalize the password repetition problem and design efficient algorithms to analyze the repeat patterns. To help security practitioners analyze patterns, the authors design and implement a lightweight, Web-based visualization tool for interactive exploration of password data.
Findings: Through case studies on a real-world leaked password data set, the authors demonstrate how the tool can be used to identify various interesting patterns, e.g. shorter substrings of the same type used to make up longer strings, which are then repeated to make up the final passwords, suggesting that the length requirement of password policy does not necessarily increase security.
Originality/value: The contributions of this study are two-fold. First, the authors formalize the problem of password repetitive patterns by considering both short and long substrings and both directions, which has not been considered in past work. Efficient algorithms are developed and implemented that can analyze various repeat patterns quickly, even in large data sets. Second, the authors design and implement four novel visualization views that are particularly useful for the exploration of password repeat patterns, i.e. the character frequency charts view, the short repeat heatmap view, the long repeat parallel coordinates view and the repeat word cloud view.
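
A minimal sketch of repeat-pattern detection in the spirit of the abstract, considering substrings in both the forward and reversed directions; repeated_substrings is a hypothetical helper, not the authors' algorithm:

```python
def repeated_substrings(password, min_len=2):
    """Return substrings of length >= min_len that occur more than once,
    counting a reversed occurrence as a repeat as well."""
    repeats = set()
    n = len(password)
    for length in range(min_len, n // 2 + 1):
        seen = {}
        for i in range(n - length + 1):
            sub = password[i:i + length]
            key = min(sub, sub[::-1])   # forward and reversed forms share a key
            seen.setdefault(key, []).append(i)
        repeats.update(k for k, pos in seen.items() if len(pos) > 1)
    return repeats

print(repeated_substrings("abc123abc"))   # {'abc'} plus the shorter repeats 'ab', 'bc'
```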


2021
Vol ahead-of-print (ahead-of-print)
Author(s):
A. D'Amato

Purpose: The purpose of this paper is to analyze the relationship between intellectual capital and firm capital structure by exploring whether firm profitability and risk are drivers of this relationship.
Design/methodology/approach: Based on a comprehensive data set of Italian firms over the 2008–2017 period, this paper examines whether intellectual capital affects firm financial leverage. Moreover, it analyzes whether firm profitability and risk mediate the abovementioned relationship. Financial leverage is measured by the debt/equity ratio. Intellectual capital is measured via the value-added intellectual coefficient approach.
Findings: The findings show that firms with a high level of intellectual capital have lower financial leverage and are more profitable and riskier than firms with a low level of intellectual capital. Furthermore, this study finds that firm profitability and risk mediate the relationship between intellectual capital and financial leverage. Thus, the higher profitability and risk of intellectual capital-intensive firms help explain their lower financial leverage.
Research limitations/implications: The findings have several implications. From a theoretical standpoint, the paper presents and tests a mediating model of the relationship between intellectual capital and financial leverage and its underlying processes. In terms of the more general managerial implications, the results provide managers with a clear interpretation of the relationship between intellectual capital and financial leverage and point to the need to strengthen the capital structure of intangible-intensive firms.
Originality/value: Through a mediation framework, this study provides empirical evidence on the relationship between intellectual capital and firm financial leverage by exploring the underlying mechanisms behind that relationship, which is a novel approach in the literature.
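
For concreteness, a sketch of the two measures named above: leverage as the debt/equity ratio and intellectual capital via Pulic's value-added intellectual coefficient (VAIC = HCE + SCE + CEE); the input figures are illustrative:

```python
def vaic(value_added, human_capital, capital_employed):
    """VAIC = HCE + SCE + CEE (Pulic's formulation)."""
    hce = value_added / human_capital                   # human capital efficiency
    sce = (value_added - human_capital) / value_added   # structural capital efficiency
    cee = value_added / capital_employed                # capital employed efficiency
    return hce + sce + cee

def financial_leverage(total_debt, total_equity):
    """Financial leverage as the debt/equity ratio."""
    return total_debt / total_equity

print(vaic(value_added=500.0, human_capital=300.0, capital_employed=1_000.0))
print(financial_leverage(total_debt=400.0, total_equity=600.0))
```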


2018
Vol 32 (1)
pp. 75-92
Author(s):
Lisa M. Young
Swapnil Rajendra Gavade

Purpose: The purpose of this paper is to use sentiment analysis to improve the understanding of a large data set of employee comments from an annual employee job satisfaction survey of a US hospitality organization.
Design/methodology/approach: Sentiment analysis is used to examine the employee comments by identifying meaningful patterns, frequently used words and emotions. The statistical computing language R is used to run the sentiment analysis process, which scans each employee survey comment, compares the words with a predefined word dictionary and classifies the comments into the appropriate emotion categories.
Findings: Employee responses written in English and in Spanish are compared, with significant differences identified between the two groups, triggering further investigation of the Spanish comments. Sentiment analysis was then conducted on the Spanish comments, comparing two groups: front-of-house vs back-of-house employees, and employees with male supervisors vs female supervisors. Results from the analysis of the comments written in Spanish point to higher scores for sadness and anger. The negative comments referred to desires for improved healthcare, requests for increased wages and frustration with difficult supervisor relationships. The findings from this study add to the growing body of literature that has begun to focus on the unique work experiences of Latino employees in the USA.
Originality/value: This is the first study to examine a large unstructured English and Spanish text database from a hospitality organization's employee job satisfaction surveys using sentiment analysis. Applying this big data analytics process yields new insights into the human capital aspects of hospitality management. The results of this study demonstrate an issue that needs to be investigated further, particularly considering the hospitality industry's employee demographics.
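
The authors worked in R; the following Python sketch only illustrates the dictionary-based classification process the abstract describes, with a tiny hypothetical emotion dictionary:

```python
# Scan each comment, compare words against a predefined dictionary and
# tally the emotion categories they map to.
from collections import Counter
import re

EMOTION_DICT = {
    "angry": "anger", "unfair": "anger",
    "sad": "sadness", "tired": "sadness",
    "happy": "joy", "proud": "joy",
}

def classify_comment(comment):
    """Return the emotion counts for one employee comment."""
    words = re.findall(r"[a-z']+", comment.lower())
    return Counter(EMOTION_DICT[w] for w in words if w in EMOTION_DICT)

comments = ["I am proud of my team but tired of the long hours."]
for c in comments:
    print(classify_comment(c))   # Counter({'joy': 1, 'sadness': 1})
```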


2019
Vol 31 (1)
pp. 21-41
Author(s):
Rafay Ishfaq
Uzma Raja

Purpose: The purpose of this paper is to focus on the effectiveness of the inventory audit process in managing operational issues related to inventory errors in retail stores. An evaluation framework is proposed based on developing an error profile of store inventory using product attributes and inventory information.
Design/methodology/approach: A store inventory error profile is developed using data on price, sales, popularity, replenishment cycle, inventory levels and inventory errors. A simulation model of the store inventory management system, grounded in empirical data, is used to evaluate the effectiveness of the inventory audit process in a high-SKU-variety retail store. The framework is tested using a large transaction data set comprising over 200,000 records for 7,400 SKUs.
Findings: The results show that store inventory exhibits different inventory error profile groups that determine the effectiveness of store inventory audits. The results also identify an interaction effect between store inventory policies and the replenishment process that moderates the effectiveness of inventory audits.
Research limitations/implications: The analysis is based on data collected from a single focal firm and does not cover all the different segments of the retail industry. However, the evaluation framework presented in the paper is fully generalizable to different retail settings, offering opportunities for additional studies.
Practical implications: The findings about the role of different error profile groups and the interaction effect of store audits with inventory and store replenishments would help retailers incorporate a more effective inventory audit process in their stores.
Originality/value: This paper presents a novel approach that uses store inventory profiles to evaluate the effectiveness of inventory audits. Unlike previous papers, it is the first empirical study in this area based on inventory error data gathered from multiple audits, and it identifies the interaction effect of inventory policy and replenishments on the inventory audit process.
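
A minimal simulation sketch in the spirit of the framework, not the paper's model: one SKU accumulates random record errors between audits, and each audit resets the record to the actual count; all parameters are assumptions:

```python
import random

random.seed(7)

def simulate_sku(days=365, audit_every=90, error_rate=0.05):
    """Track absolute record inaccuracy for one SKU, with periodic audits."""
    actual, recorded = 100, 100
    inaccuracy = []
    for day in range(1, days + 1):
        sold = random.randint(0, 5)
        actual = max(actual - sold, 0)
        recorded = max(recorded - sold, 0)
        if random.random() < error_rate:        # shrinkage/misplacement error
            actual = max(actual - random.randint(1, 3), 0)
        if day % audit_every == 0:              # audit corrects the record
            recorded = actual
        inaccuracy.append(abs(recorded - actual))
    return sum(inaccuracy) / days

print(f"mean record inaccuracy: {simulate_sku():.2f} units")
```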


2021
Vol ahead-of-print (ahead-of-print)
Author(s):
Rayees Farooq

Purpose: The purpose of this study is to offer a bibliometric analysis of the Journal of Knowledge Management (JKM) to understand how the literature has developed over time.
Design/methodology/approach: This study used bibliometric approaches to analyze a sample of 669 studies from 1997 to 2021, focusing on performance analysis and scientific mapping of articles using the R package.
Findings: The results indicate that the number of publications over the period has significantly increased, which shows a growing interest of researchers in the JKM. This study highlights new emerging themes such as change management, change readiness, product innovation and digital libraries, which uncover exciting avenues for new research opportunities. The USA and the UK were the most productive countries in terms of the number of citations, followed by several European countries including Spain, Finland, Germany and Sweden. However, it is worth noting that India was the most productive country among the emerging economies.
Practical implications: This study will act as a guide for researchers of various fields to evaluate the development of scientific publications in a particular theme over time, especially for those in the field of knowledge management (KM).
Originality/value: This study provides a systematic bibliometric analysis of the JKM spanning more than two decades, offering useful insights into the key developments in the field of KM. It is more rigorous and comprehensive than prior work in terms of the analytical techniques used.
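
The authors used an R package; this Python sketch merely illustrates the performance-analysis side of such a study (publications per year, citations per country) on a toy record list:

```python
from collections import Counter, defaultdict

records = [
    {"year": 2019, "country": "India", "citations": 12},
    {"year": 2020, "country": "USA", "citations": 30},
    {"year": 2020, "country": "UK", "citations": 18},
    {"year": 2021, "country": "India", "citations": 9},
]

# Publications per year.
pubs_per_year = Counter(r["year"] for r in records)

# Total citations per country.
citations_by_country = defaultdict(int)
for r in records:
    citations_by_country[r["country"]] += r["citations"]

print(sorted(pubs_per_year.items()))
print(sorted(citations_by_country.items(), key=lambda kv: -kv[1]))
```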


2018
Vol 35 (4)
pp. 481-504
Author(s):
Samit Paul
Prateek Sharma

Purpose: This study aims to implement a novel approach of using the Realized generalized autoregressive conditional heteroskedasticity (GARCH) model within the conditional extreme value theory (EVT) framework to generate quantile forecasts. The Realized GARCH-EVT models are estimated with different realized volatility measures, and their forecasting ability is compared with that of the standard GARCH-EVT models.
Design/methodology/approach: One-step-ahead forecasts of Value-at-Risk (VaR) and expected shortfall (ES) for five European stock indices are generated using different two-stage GARCH-EVT models. The forecasting ability of the standard GARCH-EVT model and the asymmetric exponential GARCH (EGARCH)-EVT model is compared with that of the Realized GARCH-EVT model. Additionally, five realized volatility measures are used to test whether the choice of realized volatility measure affects the forecasting performance of the Realized GARCH-EVT model.
Findings: In terms of the out-of-sample comparisons, the Realized GARCH-EVT models generally outperform the standard GARCH-EVT and EGARCH-EVT models. However, the choice of realized estimator does not affect the forecasting ability of the Realized GARCH-EVT model.
Originality/value: This is one of the earliest implementations of the two-stage Realized GARCH-EVT model for generating quantile forecasts and, to the best of the authors' knowledge, the first study to compare the performance of different realized estimators within the Realized GARCH-EVT framework. In the context of high-frequency data-based forecasting studies, a sample period of around 11 years is reasonably large. More importantly, the data set has a cross-sectional dimension with multiple European stock indices, whereas most earlier studies are based on the US market.
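
A hedged sketch of a two-stage GARCH-EVT quantile forecast (the standard variant, not the authors' Realized GARCH code): stage one fits a GARCH(1,1), stage two fits a generalized Pareto distribution to the tail of the standardized residuals, in the spirit of the McNeil-Frey approach; the data are simulated:

```python
import numpy as np
from arch import arch_model
from scipy.stats import genpareto

rng = np.random.default_rng(0)
returns = rng.standard_t(df=5, size=2000)        # toy heavy-tailed return series

# Stage 1: fit a GARCH(1,1) and extract standardized residuals.
fit = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
z = -np.asarray(fit.std_resid)                   # standardized losses

# Stage 2: fit a generalized Pareto distribution to losses beyond a threshold.
u = np.quantile(z, 0.90)
excesses = z[z > u] - u
xi, _, beta = genpareto.fit(excesses, floc=0)    # shape xi, scale beta

# EVT quantile of the standardized loss at level q.
q = 0.99
n, n_u = len(z), len(excesses)
z_q = u + (beta / xi) * ((n / n_u * (1 - q)) ** (-xi) - 1)

# One-step-ahead 99% VaR: scale the EVT quantile by the forecast volatility.
sigma_next = float(np.sqrt(fit.forecast(horizon=1).variance.values[-1, 0]))
mu_next = float(fit.params["mu"])
var_99 = sigma_next * z_q - mu_next
print(f"one-step-ahead 99% VaR: {var_99:.2f}")
```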


2016
Vol 11 (4)
pp. 584-606
Author(s):
Montfort Mlachila
Sarah Sanya

Purpose: The purpose of this paper is to answer one important question: in the aftermath of a systemic banking crisis, can the expected deviations in credit supply, liquidity and other bank characteristics become entrenched, in that they do not converge back to "normal"?
Design/methodology/approach: Using a panel data set of commercial banks in the Mercosur during the period 1990-2006, the authors analyze the impact of crises on four sets of financial indicators of bank behavior and outcomes: profitability, maturity preference, credit supply and risk taking. The authors employ convergence methodology, which is often used in the growth literature, to identify the evolution of bank behavior in the region after crises.
Findings: A key finding of the paper is that bank risk-taking behavior is significantly modified, leading to a prolonged reduction of intermediation to the private sector in favor of less risky government securities and a preference for high levels of excess liquidity well after the crisis. This can be attributed to the role played by macroeconomic and institutional volatility, which has nurtured a relatively high level of risk aversion in Mercosur banks.
Originality/value: To the best of the authors' knowledge, using convergence methodology is a relatively novel approach in this area. An added advantage of this approach over others currently used in the literature is that the authors can empirically quantify the rate of convergence and the institutional and macroeconomic factors that condition it. Moreover, the methodology allows one to identify, in some hierarchical order, the factors that condition persistent deviation from "normality." The lessons learned from the Mercosur case study are useful for countries that suffered systemic banking crises in the aftermath of the global financial crisis.
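
A minimal sketch of the beta-convergence regression borrowed from the growth literature: the change in a bank indicator is regressed on its lagged level, and a negative coefficient indicates convergence back toward "normal"; the panel below is simulated:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Toy cross-section: 50 banks with a mean-reverting indicator (e.g. liquidity ratio).
y_prev = rng.normal(0.30, 0.10, size=50)
y_next = 0.30 + 0.6 * (y_prev - 0.30) + rng.normal(0, 0.02, size=50)

# Regress the change on the lagged level: delta_y = alpha + beta * y_prev.
X = sm.add_constant(y_prev)
model = sm.OLS(y_next - y_prev, X).fit()
beta = model.params[1]
print(f"convergence coefficient beta = {beta:.3f}")         # ~ -0.4 here
print(f"implied convergence speed    = {-np.log(1 + beta):.3f} per period")
```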


2021
Vol ahead-of-print (ahead-of-print)
Author(s):
Ridhima Mehta

Purpose: This paper aims to evaluate the user satisfaction criterion for qualitative assessment of the timeliness and efficacy of digital libraries, based on a multivariate fuzzy logic technique.
Design/methodology/approach: In this paper, the performance of digital library services is evaluated using fuzzy logic modeling. A model based on fuzzy logic control is used to compute the dynamic response of users from multiple independent variables. These parameters, which carry inherent uncertainties in practical scenarios, are characterized by fuzzy linguistic information.
Findings: Several parameters determining the user satisfaction metric in the deployment of a digital library exhibit implicit uncertainties that can be intelligently modeled by means of fuzzy control systems. Given the sample data set for the proposed fuzzy multi-attribute decision-making framework, the simulation results are used to compute various error performance measures in the estimation of the fuzzy output variables.
Research limitations/implications: The sample data set considered is small. Scalable real-world data sets could be used to reinforce the statistical efficiency and accuracy of the proposed model. Moreover, other techniques such as evolutionary multi-objective optimization and Markovian processes could be implemented to explore the correlation between the different parameters influencing users' behavior and to facilitate more general application of the proposed technique.
Originality/value: The paper applies a fuzzy design methodology in which several attributes related to digital library services and the affiliated online resource provisions are used to assess their joint impact on user convenience in accessing and manipulating library information. End-users' satisfaction is crucial for quality-based evaluation of the timeliness and proficiency of digital libraries.
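
A hand-rolled sketch of the kind of Mamdani-style fuzzy inference the abstract describes, with two illustrative input attributes (timeliness, efficacy), triangular membership functions and centroid defuzzification; the breakpoints and rules are assumptions, not the paper's model:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def satisfaction(timeliness, efficacy):
    """Two-rule Mamdani inference, defuzzified by centroid."""
    low_t, high_t = tri(timeliness, 0, 0.25, 0.5), tri(timeliness, 0.5, 0.75, 1)
    low_e, high_e = tri(efficacy, 0, 0.25, 0.5), tri(efficacy, 0.5, 0.75, 1)

    s = np.linspace(0, 1, 101)                   # satisfaction universe
    # Rule 1: timeliness high AND efficacy high -> satisfaction high.
    r1 = np.minimum(min(high_t, high_e), tri(s, 0.5, 0.75, 1))
    # Rule 2: timeliness low OR efficacy low -> satisfaction low.
    r2 = np.minimum(max(low_t, low_e), tri(s, 0, 0.25, 0.5))
    agg = np.maximum(r1, r2)                     # aggregate the rule outputs
    return float((s * agg).sum() / agg.sum())    # centroid defuzzification

print(f"satisfaction: {satisfaction(timeliness=0.8, efficacy=0.6):.2f}")
```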


2021
Vol 6 (3)
pp. 063-068
Author(s):
Barida Baah
Onate Egerton Taylor
Chioma Lizzy Nwagbo

Privacy and security are becoming major challenges in distributed systems and federated machine learning systems, especially when data are transmitted or learned over a network. This motivated the present research work on a wireless federated machine learning process using a Raspberry Pi. The Raspberry Pi 4 is a single-board computer with a built-in Linux operating system. We used a data set of names from nine (9) different languages and developed a training model using a recurrent neural network that compares a given name against names in existing languages, such as French and Scottish, to predict which language the name comes from. Training is done wirelessly over a Wi-Fi network in a federated machine learning environment, with PySyft installed in the Python environment for the experimental setup. The system was able to predict the language a given name originates from. The methodology employed in this research work is Rapid Application Development (RAD). The benefits of this system are that it ensures privacy, reduces the required computing power, enables real-time learning and, most importantly, is cost-effective.
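
A minimal PyTorch sketch of the classification core described above: a character-level recurrent network that predicts a name's language of origin; the federated/PySyft distribution across Raspberry Pi workers is omitted, and the languages and sizes are illustrative:

```python
import string
import torch
import torch.nn as nn

LETTERS = string.ascii_letters + " .,;'"
LANGUAGES = ["French", "Scottish", "German"]     # nine in the paper; three here

def encode(name):
    """One-hot encode a name as a (seq_len, 1, n_letters) tensor."""
    t = torch.zeros(len(name), 1, len(LETTERS))
    for i, ch in enumerate(name):
        t[i, 0, LETTERS.index(ch)] = 1.0
    return t

class NameRNN(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.RNN(len(LETTERS), hidden)
        self.out = nn.Linear(hidden, len(LANGUAGES))

    def forward(self, x):
        _, h = self.rnn(x)                       # final hidden state
        return self.out(h[-1])                   # logits over languages

model = NameRNN()
logits = model(encode("Dubois"))
print(LANGUAGES[logits.argmax(dim=1).item()])    # untrained: an arbitrary guess
```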


2018
Vol 44 (3)
pp. 389-402
Author(s):
Anni Lapatto
Vesa Puttonen

Purpose: The purpose of this paper is to study how target funds in mutual fund mergers would have performed compared to the acquiring funds had they not been merged but continued on their own as buy-and-hold portfolios.
Design/methodology/approach: The authors develop a novel approach to examine post-merger wealth effects: they study how the target portfolios would have performed compared to the funds acquiring them had they not been merged but continued on their own as passive portfolios. The data set consists of 793 merging US equity funds from January 2003 to December 2014.
Findings: The authors find that the target portfolio shareholders would have been better off if the target fund had been converted from an actively managed fund to a passively managed fund that maintained its current holdings.
Research limitations/implications: The findings are the opposite of those of many previous studies, which view target fund shareholders as the major beneficiaries of mutual fund mergers.
Practical implications: Investors receiving notification of their fund merging should reconsider their investment strategy. If they wish to maintain the original strategy of their fund, they should oppose the merger. Alternatively, they may withdraw their money from the (soon-to-be) merged fund, replicate the latest portfolio of their fund and buy and hold that portfolio.
Originality/value: The authors develop a novel approach to examine post-merger wealth effects.
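
A minimal sketch of the counterfactual comparison: the acquiring fund's realized growth versus a buy-and-hold portfolio frozen at the target fund's final holdings; all weights and returns are illustrative:

```python
import numpy as np

target_weights = np.array([0.5, 0.3, 0.2])          # target fund's final holdings
holding_returns = np.array([                        # monthly returns of those holdings
    [0.02, -0.01, 0.03],
    [0.01, 0.02, -0.02],
    [0.00, 0.01, 0.02],
])
acquirer_returns = np.array([0.010, 0.004, 0.008])  # merged fund's monthly returns

# Buy-and-hold: initial weights drift with returns instead of being rebalanced.
values = target_weights * np.cumprod(1 + holding_returns, axis=0)
bh_growth = values[-1].sum()                        # terminal value per $1 invested
acq_growth = np.prod(1 + acquirer_returns)

print(f"buy-and-hold target portfolio: {bh_growth:.4f}")
print(f"merged (acquiring) fund:       {acq_growth:.4f}")
```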

