Journal of Global Business Insights
Latest Publications


TOTAL DOCUMENTS: 71 (FIVE YEARS: 37)
H-INDEX: 2 (FIVE YEARS: 2)
Published by: University of South Florida Libraries
ISSN: 2640-6489, 2640-6470

2021, Vol. 6 (2), pp. 117-140
Author(s): Vincent Karovič, Jakub Bartaloš, Vincent Karovič, Michal Greguš

The article presents the design of a model environment for penetration testing of an organization using virtualization. The need for this model was based on the constantly increasing requirements for the security of information systems, both in legal terms and in accordance with international security standards. The model was created based on a specific team from an unnamed company. The virtual working environment offered the same functions as the physical environment. The virtual working environment was created in OpenStack and tested with the Kali Linux distribution. We demonstrated that the virtual environment is functional and that its security is testable. Virtualizing the work environment simplified the organization’s security testing, increased resource efficiency, and reduced the total cost of ownership of certain devices.


2021, Vol. 6 (2), pp. 186-206
Author(s): Murat Aydinay, Aysehan Cakici, A. Celil Cakici

The aim of this study was to find out the effect of destructive leadership on employees’ self-efficacy and counterproductive work behaviors. The data were collected from a convenience sample of 486 service sector employees in Mersin, Turkey. Descriptive statistics, exploratory factor analysis, and regression analysis were conducted to analyze the data. The results showed that the lack of competence in leadership, excessive authoritarianism, and favoritism dimensions increased organization-oriented counterproductive work behaviors, while the resistance to technology and change dimension decreased these behaviors. In contrast, insensitivity to subordinates had no effect on counterproductive work behaviors. Furthermore, destructive leadership had no effect on employees’ self-efficacy, but self-efficacy affected counterproductive work behaviors. This study provides theoretical and practical implications for understanding the effect of destructive leadership behaviors on employees’ self-efficacy and counterproductive work behaviors in the context of the service sector.


2021, Vol. 6 (2), pp. 154-171
Author(s): Louis Jourdan, Michael Smith

The purposes of this study were twofold. The first was to encourage other investigators to examine more closely three indices related to economic growth, specifically innovation, entrepreneurship, and creativity. The second was to encourage further investigation of Hofstede’s national culture dimensions as explanatory variables. This investigation addressed this research gap by examining the relationships among indices of nations’ creativity, entrepreneurship, and innovation, and their relationships with Hofstede’s (2015) national culture dimensions. No previous research was identified that examined countries’ creativity, entrepreneurship, and innovation in the same study. The relationships among four measures associated with economic development were studied: the Global Innovation Index (GII), the Global Entrepreneurship Index (GEI), the Global Creativity Index (GCI), and the Bloomberg 50 most innovative countries (B50). Two rarely investigated indices (B50 and GCI) were included in this research. Results indicated that all four indices were highly correlated. The factor structure of Hofstede’s six cultural dimensions was reduced to three major factors: heteronomy-autonomy, gratification, and competition-altruism. Using multiple regression analysis, heteronomy-autonomy and gratification predicted the GII, while gratification predicted the remaining three criteria. The study addressed the research gap in criterion development by examining the relationships among these variables, their relationships with national culture, and their predictability from different national culture dimensions. Practical implications of these findings for decision-makers and policymakers who want to increase their country’s economic growth through the support of creativity, innovation, and entrepreneurship were discussed.


2021, Vol. 6 (2), pp. 141-153
Author(s): Yazeed Muhammed, Mohammed Dantsoho, Adamu Abubakar

Despite the usefulness of the theory of planned behavior in predicting intention and behavior in different domains, the sufficiency of its use in predicting and determining intention has been debated by many scholars. This paper extended the theory of planned behavior by including social support as a possible determinant of intention in the entrepreneurship domain, drawing on one of the largest universities in Nigeria. Data were collected from 432 final-year students of Ahmadu Bello University in Zaria using a simple random sampling technique. Structural equation modeling with the partial least squares technique was adopted for data analysis. Perceived social support, attitude towards entrepreneurship, and perceived behavioral control were all found to have a significant effect on entrepreneurial intention, while subjective norms had an insignificant effect. The study found perceived social support to be an important social influence factor in the theory of planned behavior because of its influence on entrepreneurial intention. Hence, perceived social support is recommended for inclusion as a major construct in the theory of planned behavior.


2021, Vol. 6 (2), pp. 172-185
Author(s): Hasibul Islam, Fatema Johora, Asma Abbasy, Masud Rana, Niyungeko Antoine

The study examined the effect of the COVID-19 pandemic on healthcare expenses, including the prices of medicines, protective equipment, medical devices, healthcare facilities, and food. A self-administered questionnaire was used as the data collection tool, and 400 people from different divisions of Bangladesh (Dhaka, Chittagong, Barisal, Khulna, Mymensingh, Rajshahi, and Sylhet) participated in the study. Multiple regression analysis was used to estimate the impact of the independent variables on the dependent variable, and the R programming environment was used to perform the statistical analysis. Cronbach’s alpha was used to assess reliability and indicated acceptable internal consistency. The price of protective equipment (POPE), the price of healthcare facilities (POHCF), and the consequences of rising prices (CRP) were the independent variables, and the impact of COVID-19 (IC) was the dependent variable. The results of the regression analysis indicated a positive and significant impact of POPE, POHCF, and CRP on IC, although the variance explained was still low (54.4%). Bangladesh should control the prices of all goods and services because of their influence on the impact of COVID-19. Future research should be conducted to discover other variables that affect the impact of COVID-19.


2021, Vol. 6 (2), pp. 98-116
Author(s): Ozgur Ozdemir, Murat Kizildag, Tarik Dogru, Ilhan Demirer

In this study, the moderating effect of board diversity on the complex relationship between corporate social responsibility (CSR) performance and financial performance is examined. The resource-based view of the firm and stakeholder theory serve as the theoretical foundation of the study. The hypotheses are tested via fixed-effects regression using data for a sample of 1,234 firms and 5,102 firm-year observations over the period 2009–2013. The study finds evidence that CSR performance and financial performance are positively related and that the magnitude of this relationship is contingent on the level of board diversity: as corporate boardrooms become more diverse across several diversity attributes, the positive effect of CSR performance on financial performance becomes more pronounced. The study also reveals that the race and age diversity constructs have a stand-alone moderating effect on this relationship. The study offers significant insights for practitioners regarding the potential role of a diverse board structure in effectively monitoring management actions on CSR concerns.


2021, Vol. 6 (1), pp. 73-90
Author(s): Serban Bakay Ergene, Erdinc Karadeniz

This study examined the relationships and interactions between corporate governance and the firm value of lodging companies with different characteristics. The companies were analyzed separately using a classification and regression tree (CRT) analysis. The results did not show a direct relationship between value and governance, yet that does not mean there is no relationship between them: when the companies’ governance scores were similar, corporate governance did not act as a distinguishing variable for firm value but rather as a hygiene factor. The analysis also found negative relationships between value and size, which may be important in preventing companies from becoming cumbersome. In addition, positive relationships were found between value and the debt ratio for the lodging companies with the most valuable brands. This relationship showed the significance of using the debt ratio as a control tool in evaluating management performance.


2021, Vol. 6 (1), pp. 92-97
Author(s): Cihan Cobanoglu, Muhittin Cavusoglu, Gozde Turktarhan

Introduction

Researchers around the globe are utilizing crowdsourcing tools to reach respondents for quantitative and qualitative research (Chambers & Nimon, 2019). Many social science and business journals are receiving studies that utilize crowdsourcing tools such as Amazon Mechanical Turk (MTurk), Qualtrics, MicroWorkers, ShortTask, ClickWorker, and Crowdsource (e.g., Ahn & Back, 2019; Ali et al., 2021; Esfahani & Ozturk, 2019; Jeong & Lee, 2017; Zhang et al., 2017). Even though the use of these tools presents a great opportunity for gathering large quantities of data quickly, some challenges must also be addressed. The purpose of this guide is to present the basic ideas behind the use of crowdsourcing for survey research and to provide a primer on best practices that will increase its validity and reliability.

What is crowdsourcing research?

Crowdsourcing describes the collection of information, opinions, or other types of input from a large number of people, typically via the internet, who may or may not receive (financial) compensation (Hargrave, 2019; Oxford Dictionary, n.d.). Within the behavioral sciences, crowdsourcing is defined as the use of internet services for hosting research activities and for creating opportunities for a large population of participants. Applications of crowdsourcing techniques have evolved over the decades, establishing the strong informational power of crowds. The advent of Web 2.0 has expanded the possibilities of crowdsourcing, with new online tools such as online reviews, forums, Wikipedia, Qualtrics, or MTurk, as well as other platforms such as Crowdflower and Prolific Academic (Peer et al., 2017; Sheehan, 2018). Crowdsourcing platforms in the age of Web 2.0 use remote labor recruited via the internet to assist employers in completing tasks that cannot be left to machines. Key characteristics of crowdsourcing include payment for workers, their recruitment from any location, and the completion of tasks (Behrend et al., 2011). These platforms also allow for a relatively quick collection of data compared to data collection in the field, and participants are rewarded with an incentive, often financial compensation. Crowdsourcing not only offers a large participation pool but also a streamlined process for study design, participant recruitment, and data collection, as well as an integrated participant compensation system (Buhrmester et al., 2011). Also, compared to traditional marketing research firms, crowdsourcing makes it easier to detect possible sampling biases (Garrow et al., 2020). Due to advantages such as reduced costs, diversity of participants, and flexibility, crowdsourcing platforms have surged in popularity among researchers.

Advantages

MTurk is one of the most popular crowdsourcing platforms among researchers, allowing Requesters to submit tasks for Workers to complete (Cummings & Sibona, 2017). MTurk has been used as an online crowdsourcing platform for the recruitment of human subjects for research purposes (Paolacci & Chandler, 2014). Research has also shown MTurk to be a reliable and cost-effective tool, capable of providing representative data for research in the behavioral sciences (e.g., Crump et al., 2013; Goodman et al., 2013; Mason & Suri, 2012; Rand, 2012; Simcox & Fiez, 2014). In addition to its use in social science studies, the platform has been used in marketing, hospitality and tourism, psychology, political science, communication, and sociology contexts (Sheehan, 2018).
To illustrate, between 2012 and 2017, more than 40% of the studies published in the Journal of Consumer Research used crowdsourcing websites for their data collection (Goodman & Paolacci, 2017).

Disadvantages

Although researchers have assessed crowdsourcing platforms as reliable and cost-effective for data collection in the behavioral sciences, they are not exempt from flaws. One disadvantage is the possibility of unsatisfactory data quality. In fact, the virtual setting of the survey implies that the investigator is physically separated from the participant, and this lack of monitoring can lead to data quality issues (Sheehan, 2018). In addition, participants in survey research on crowdsourcing platforms are not always who they claim to be, creating issues of trust in the data provided and, ultimately, in the quality of the research findings (McGonagle, 2015; Smith et al., 2016). A recurrent concern with MTurk workers, for instance, is that they are experienced survey takers (Chandler et al., 2015). This experience is mainly acquired through the completion of dozens of surveys per day, especially when workers are faced with similar items and scales. Smith et al. (2016) identified two types of problems in data collection using MTurk, namely cheaters and speeders. Compared to Qualtrics, which has strict screening and quality-control processes to ensure that participants are who they claim to be, MTurk appears to be less exacting regarding its workers. However, a downside of data collection with Qualtrics is higher fees: about $5.00 per questionnaire on Qualtrics, against $0.50 to $1.50 on MTurk (Ford, 2017). Hence, few researchers have been able to conduct surveys and compare respondent pools with Qualtrics or other traditional marketing research firms (Garrow et al., 2020). Another challenge with MTurk arises when trying to collect a desired number of responses from a population targeted to a specific city or area (Ross et al., 2010). The issues inherent to the selection process of MTurk have been the subject of investigation in several studies (e.g., Berinsky et al., 2012; Chandler et al., 2014, 2015; Harms & DeSimone, 2015; Paolacci et al., 2010; Rand, 2012). Feitosa et al. (2015) pointed out that international respondents may still identify themselves as U.S. respondents through the use of fake addresses and accounts; they found that 5% to 10% of participants identifying themselves as U.S. respondents were actually from overseas locations. Moreover, Babin et al. (2016) found that the use of trap questions allowed researchers to uncover that many respondents change their genders, ages, careers, or income within the course of a single survey. The issues of (a) experienced workers, for whom quality control of the questions is needed, and (b) speeders, which for MTurk can be attributed to the platform being the main source of revenue for a given respondent, remain inherent to crowdsourcing platforms used for research purposes.

Best practices

Some best practices can be recommended for the use of crowdsourcing platforms for data collection. Worker IDs can be matched with IDs from previous studies, allowing researchers to exclude responses from workers who answered previous similar studies (Goodman & Paolacci, 2017). Furthermore, researchers can manually assign qualifications on MTurk prior to data collection (Litman et al., 2015; Park & Park, 2020).
When dealing with experienced workers, it is also recommended to use multiple attention checks and to design the survey so that participants are exposed to the stimuli for a length of time sufficient to properly address the questions (Sheehan, 2018). In this sense, shorter surveys are preferred to longer ones, which strain the participants’ concentration and may, in turn, adversely affect the quality of their answers. Most importantly, pretest the survey to make sure that all parts are working as expected. Researchers should also keep in mind that, in the context of MTurk, the primary method for measurement is the web interface. Thus, to avoid method biases, researchers should consider whether method factors emerge in the latent measurement models (Podsakoff et al., 2012). As such, time-lagged research designs may be preferred, as predictor and criterion variables can be measured at different points in time or administered on different platforms, such as Qualtrics vs. MTurk (Cheung et al., 2017). In general, the use of crowdsourcing platforms, including MTurk, may be appropriate depending on the research question, and the quality of the data relies on the quality-control strategies researchers use to enhance it. Trade-offs between various validity types need to be prioritized according to the research objectives (Cheung et al., 2017). From our experience using crowdsourcing tools in our own research, as editorial team members of several journals and chairs of several conferences, we provide the best practices outlined below.

MTurk Worker (Respondent) Selection

Researchers should consider their study population before using MTurk for data collection; the platform should be used only for an appropriate study population. For example, if the study targets restaurant owners or company CEOs, MTurk workers may not be suitable. However, if the target population is diners, hotel guests, grocery shoppers, online shoppers, students, or hourly employees, utilizing a sample from MTurk would be suitable.

Researchers should use the selection tools in the software. For example, if you target workers from only one country, exclude responses that came from an internet protocol (IP) address outside the targeted country and report the results in the method section.

Researchers should consider the demographics of workers on MTurk, which must reflect the study’s target population. For example, if the study focuses on baby boomers’ use of technology, then the MTurk sample should include only baby boomers. Similarly, the gender balance, racial composition, and income of people on MTurk should mirror the target population.

Researchers should use multiple screening tools that identify quality respondents and avoid problematic response patterns. For example, MTurk provides an approval rate for each respondent, which reflects how many times a respondent has been rejected for various reasons (e.g., entering a wrong code). We recommend using a 90% or higher approval rate.

Researchers should include screening questions of different types in different places to make sure that the respondents are appropriate for the study. One way is to use knowledge-based questions about the subject. For example, rather than asking “How experienced are you with accounting practices?”, a supplemental question such as “Which of the following is a component of an income statement?” should be integrated into a different section of the survey.
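As a practical illustration of these selection settings, the following is a minimal sketch of posting a survey task restricted to U.S.-based Workers with at least a 90% approval rate, assuming the boto3 (Python) client for the MTurk API. The title, reward, survey URL, and assignment counts are placeholders, and the qualification type IDs shown are MTurk’s built-in system qualifications for approval rate and Worker locale; they should be verified against the current MTurk documentation before use.

```python
# Sketch only: posting a survey HIT limited to U.S. Workers with a >= 90% approval rate.
# Assumes the boto3 MTurk client; titles, reward, and the survey URL are placeholders.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

qualification_requirements = [
    {   # System qualification: percentage of the Worker's past assignments that were approved
        "QualificationTypeId": "000000000000000000L0",
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [90],
    },
    {   # System qualification: Worker locale restricted to the United States
        "QualificationTypeId": "00000000000000000071",
        "Comparator": "EqualTo",
        "LocaleValues": [{"Country": "US"}],
    },
]

# ExternalQuestion XML pointing at a (hypothetical) survey hosted on Qualtrics
external_question = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.qualtrics.com/jfe/form/SV_placeholder</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

hit = mturk.create_hit(
    Title="10-minute survey about grocery shopping habits",
    Description="Answer a short academic survey. Payment is based on the piloted completion time.",
    Keywords="survey, research, shopping",
    Reward="1.50",                      # set from pilot timing so pay meets or exceeds minimum wage
    MaxAssignments=650,                 # ~30% above a target of 500 clean responses
    AssignmentDurationInSeconds=3600,
    LifetimeInSeconds=7 * 24 * 3600,
    QualificationRequirements=qualification_requirements,
    Question=external_question,
)
print(hit["HIT"]["HITId"])
```

The IP-based country check described above would still be applied afterward, since the locale qualification reflects the Worker’s registered address rather than where the survey was actually taken.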
Survey Validity

Researchers should conduct a pilot survey with MTurk workers to identify and fix any potential data quality and programming problems before the entire data set is collected. The researcher can estimate the time required to complete the survey from the pilot study; this average time should be used in calculating the incentive payment for the workers, so that the payment equals or exceeds the minimum wage in the targeted country.

Researchers should build multiple validity-check tools into the survey. One is to ask attention-check questions such as “Please click on ‘strongly agree’ in this question” or “What is 2+2? Please choose 5” (Cobanoglu et al., 2016). Even though these attention questions are useful and should be implemented, experienced survey takers or bots easily identify them and answer them correctly, but then give random answers to other questions. Instead, we recommend building in more involved validity-check questions. One of the best is asking the same question in different places and in different forms. For example, asking the age of the respondent at the beginning of the survey and then asking the year of their birth at the end of the survey is an effective way to check that they are replying honestly. Exclude all those who answered the same question differently, and report the results of these validity checks in the methodology. Cavusoglu (2019) found that almost 20% of surveys were eliminated due to failure of the validity-check questions, which were embedded in different places and in different forms in his survey.

Researchers should be aware of internet bots, software that runs automated tasks; some respondents use a bot to reply to surveys. To avoid this, use Captcha verification, which forces respondents to perform random tasks such as moving a bar to a certain area, clicking on the boxes that contain cars, or checking boxes to verify that the person taking the survey is not a bot.

Whenever appropriate, researchers should use the time-limit options offered by online survey tools such as Qualtrics to control the time a survey taker must spend before advancing to the next question. We found this to be a great tool, especially when you want respondents to watch a video, read a scenario, or look at a picture before they respond to other questions.

Researchers should collect data on different days and at different times during the week to obtain a more diverse and representative sample.

Data Cleaning

Researchers should be aware that some respondents do not read the questions; they simply select random answers or type nonsense text. To exclude them from the study, manually inspect the data. Exclude anyone who filled out the survey too quickly. We recommend excluding all responses completed in less than 40% of the average time needed to take the survey. For example, if it takes 10 minutes to fill out a survey, we exclude everyone who fills it out in 4 minutes or less. After we separated these two groups, we compared them and found that the speeders’ (aka cheaters’) data were significantly different from those of the regular group.

Researchers should always collect more data than needed. Our rule of thumb is to collect 30% more data than needed. For example, if 500 clean responses are wanted, collect at least 650; the targeted number of responses will still be available after the data are cleaned.
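As an illustration of these cleaning rules, the sketch below applies the speeder cutoff (less than 40% of the average completion time), the attention-check rule, and the repeated-question consistency check (stated age vs. reported birth year) to a table of responses. It assumes pandas and hypothetical column names (duration_sec, age, birth_year, attention_check) that would need to match the actual export from the survey tool.

```python
# Sketch only: applying the cleaning rules described above to a survey export.
# Column names (duration_sec, age, birth_year, attention_check) are hypothetical.
import pandas as pd

SURVEY_YEAR = 2021

df = pd.read_csv("raw_responses.csv")

# 1. Speeders: drop responses completed in less than 40% of the average time.
cutoff = 0.40 * df["duration_sec"].mean()
speeders = df["duration_sec"] < cutoff

# 2. Attention check: the item asked respondents to choose "5" for "What is 2+2?".
failed_attention = df["attention_check"] != 5

# 3. Consistency check: stated age (asked early) vs. birth year (asked at the end).
implied_age = SURVEY_YEAR - df["birth_year"]
inconsistent = (implied_age - df["age"]).abs() > 1  # allow a one-year tolerance

clean = df[~(speeders | failed_attention | inconsistent)]
print(f"Kept {len(clean)} of {len(df)} responses "
      f"({speeders.sum()} speeders, {failed_attention.sum()} failed attention, "
      f"{inconsistent.sum()} inconsistent)")
clean.to_csv("clean_responses.csv", index=False)
```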
Report the process of cleaning data in the method section of your article, showing the editor and reviewers that you have taken steps to increase the validity and reliability of the survey responses. Calculating a traditional response rate for samples recruited via MTurk is not possible; however, it is possible to calculate an active response rate (Ali et al., 2021), defined as the number of raw responses remaining after all screening and validity-check eliminations, divided by the total number of raw responses. For example, if you have 1,000 raw responses and you eliminated 100 responses for coming from an IP address outside of the United States and another 100 for failing the validity-check questions, then your active response rate would be 800/1000 = 80%.
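The active response rate calculation can be expressed as a small helper; the counts used below are the ones from the worked example above and would, in practice, come from the screening and validity-check steps.

```python
# Sketch only: active response rate = responses surviving screening / raw responses.
def active_response_rate(raw_responses, excluded_counts):
    """Return the share of raw responses that survived all screening and validity checks."""
    surviving = raw_responses - sum(excluded_counts)
    return surviving / raw_responses

# Worked example from the text: 1,000 raw responses, 100 dropped for foreign IP addresses,
# 100 dropped for failing validity checks -> 800/1000 = 80%.
rate = active_response_rate(1000, excluded_counts=[100, 100])
print(f"Active response rate: {rate:.0%}")   # Active response rate: 80%
```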


2021, Vol. 6 (1), pp. 26-38
Author(s): Pairoj Piyawongwathana, Sak Onkvisit

The pioneering work of Campbell et al. (1995a) presented four ways in which a corporate parent either creates or destroys value for the companies it owns: (a) stand-alone, (b) linkage, (c) central functions and services, and (d) corporate development. Despite widespread acceptance of the concept of parenting advantage, empirical research remains scarce. Examining the methodological issues, this research describes the development of an instrument to measure the four strategies. An exploratory factor analysis yielded six distinct factors, accounting for 74.11% of the variance. The results partially validated the a priori classification scheme: a few factors partly reflected the measurement items (variables) gleaned from the four basic strategies, and the factors were represented by a hybrid of items from different strategies. The paper concluded that the original conceptualizations of the strategies need to be scrutinized more closely and that further refinement of the operational definitions is also necessary.


2021, Vol. 6 (1), pp. 22-26
Author(s): Joseph Dipoli

Book review: Foundations of real-world economics: What every economics student needs to know (2nd ed.), by John Komlos, New York, Routledge, 2019, 306 pp., $42.95 (paperback), ISBN 9781138296541.

Many people in the United States of America are dissatisfied with the outcomes of the economy, and some are suggesting measures that seem socialistic. The time has come to recognize that mainstream economics as taught in our schools is not serving students at all. The research-sharing platform of the Latin American and Caribbean Economic Association (LACEA) states that John Komlos’s textbook Foundations of Real-World Economics: What Every Economics Student Needs to Know demonstrates how misleading it can be to oversimplify models of perfect competition relative to the real world (Vox LACEA, 2019). The math works well on college blackboards, but not so well on the main streets of America. This edition of Komlos’s text explores the realities of oligopolies, the real impact of the minimum wage, the double-edged sword of free trade, and other ways in which powerful institutions cause distortions in mainstream models. Norbert Haring wrote that “there is no doubt that a student who would otherwise be left with just the textbook wisdom will benefit greatly from reading this book and seeing that” (Haring, 2014, p. 4).

