The cleanliness of restaurants: ATP tests (reality) vs consumers’ perception

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Tony J. Kim ◽  
Barbara Almanza ◽  
Jing Ma ◽  
Haeik Park ◽  
Sheryl F. Kline

Purpose This study aims to empirically assess the cleanliness of restaurant surfaces and compare the results to customers' perceptions of surface cleanliness when dining in a restaurant.

Design/methodology/approach This study used two methods to collect data: a survey to gauge customers' perceptions and an empirical test of cleanliness using an adenosine triphosphate (ATP) meter. Two data sets were collected so that customers' perceptions could be compared with actual cleanliness measurements. The first surveyed respondents on 19 dining room surfaces and 15 restroom surfaces, asking which they perceived as high- or low-touch and which they perceived as clean or dirty. The second consisted of empirical measurements of the cleanliness of the same surfaces taken with an ATP meter, which were then compared to customers' perceptions.

Findings Although all surfaces had ATP readings above the 30 relative light units (RLU) threshold, there were significant differences in ATP readings among surfaces. Results showed a fair amount of consistency between consumers' perceptions of cleanliness and the ATP readings for the cleanest areas, but very little consistency between perceptions and measurements for the dirtiest areas.

Practical implications This study empirically demonstrated the need for improved cleaning techniques and the importance of proper training for foodservice employees. Especially during the COVID-19 pandemic, the results place an additional responsibility on managers and staff to ensure clean environments and to address the concerns of their customers.

Originality/value Based on an extensive literature review, to the best of the authors' knowledge, no prior studies have compared consumers' cleanliness perceptions with empirical measurements of cleanliness in restaurant settings using an ATP meter. The results give restaurant managers a better understanding of customers' perceptions of cleanliness and provide managers and staff with information to develop more effective cleaning procedures. In the context of the COVID-19 pandemic, perceptions of cleanliness and measures of actual cleanliness are more important than they have been in the past.
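
The comparison the study describes (measured ATP readings against a fixed RLU threshold, and perceived cleanliness against measured cleanliness) can be illustrated with a minimal sketch; the surface names, readings and survey scores below are hypothetical placeholders, not the study's data.

```python
# A minimal, hypothetical sketch of the comparison the study describes:
# flag surfaces whose ATP readings exceed the 30 RLU cleanliness threshold and
# check how well perceived-dirtiness ranks agree with measured ATP ranks.
from scipy.stats import spearmanr

ATP_THRESHOLD_RLU = 30  # readings above this are considered not clean

surfaces = ["table top", "menu", "chair seat", "restroom door handle"]
atp_rlu = [120, 450, 300, 90]               # hypothetical ATP meter readings
perceived_dirtiness = [2.1, 3.8, 2.5, 4.2]  # hypothetical survey means (1 = clean, 5 = dirty)

for name, reading in zip(surfaces, atp_rlu):
    status = "above threshold" if reading > ATP_THRESHOLD_RLU else "clean"
    print(f"{name}: {reading} RLU ({status})")

# Rank agreement between perception and measurement
rho, p_value = spearmanr(perceived_dirtiness, atp_rlu)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```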

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Tressy Thomas ◽  
Enayat Rajabi

Purpose The primary aim of this study is to review novel approaches proposed for data imputation, particularly in the machine learning (ML) area, along several dimensions: the type of method, the experimental setup and the evaluation metrics used. This ultimately provides an understanding of how well the proposed frameworks are evaluated and what types and ratios of missingness they address. The review questions are: (1) What ML-based imputation methods were studied and proposed during 2010–2020? (2) How are the experimental setup, data set characteristics and missingness handled in these studies? (3) What metrics are used to evaluate the imputation methods?

Design/methodology/approach The review followed the standard identification, screening and selection process. The initial search of electronic databases for missing value imputation (MVI) based on ML algorithms returned 2,883 papers, most of which were not MVI techniques relevant to this study. Titles were first screened for relevance, and 306 papers were identified as appropriate. After review of the abstracts, 151 ineligible papers were dropped, leaving 155 papers for full-text review. Of these, 117 papers were used to assess the review questions.

Findings This study shows that clustering- and instance-based algorithms are the most frequently proposed MVI methods. Percentage of correct prediction (PCP) and root mean square error (RMSE) are the most used evaluation metrics. For experimentation, the majority of studies sourced their data sets from publicly available repositories. A common approach is to treat the complete data set as a baseline and evaluate the effectiveness of imputation on test data sets with artificially induced missingness. Data set size and missingness ratio varied across experiments, while the missing-data type and mechanism relate to the capability of the imputation method. Computational expense is a concern, and experimentation with large data sets appears to be a challenge.

Originality/value The review shows that there is no single universal solution to the missing data problem. Variants of ML approaches work well for particular forms of missingness, depending on the characteristics of the data set. Most of the methods reviewed lack generalizability, and a related concern is the complexity of formulating and implementing the algorithms. Imputation based on k-nearest neighbors (kNN) and clustering algorithms, being simple and easy to implement, is popular across various domains.
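
The common evaluation protocol described in the findings (complete data set as baseline, artificially induced missingness, kNN imputation, RMSE scoring) can be sketched as follows; the data set, missingness ratio and parameters are illustrative assumptions.

```python
# A sketch of the evaluation protocol the review describes: start from a
# complete data set, artificially induce missingness (MCAR here), impute with
# a kNN imputer and score the imputation with RMSE against the known values.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.impute import KNNImputer

rng = np.random.default_rng(0)
X_complete = load_iris().data            # complete baseline data set
X_missing = X_complete.copy()

# Induce roughly 20% missing-completely-at-random (MCAR) values
mask = rng.random(X_missing.shape) < 0.20
X_missing[mask] = np.nan

# Impute with k-nearest neighbors
imputer = KNNImputer(n_neighbors=5)
X_imputed = imputer.fit_transform(X_missing)

# RMSE computed only over the artificially removed entries
rmse = np.sqrt(np.mean((X_imputed[mask] - X_complete[mask]) ** 2))
print(f"kNN imputation RMSE on induced-missing cells: {rmse:.3f}")
```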


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Jiawei Lian ◽  
Junhong He ◽  
Yun Niu ◽  
Tianze Wang

Purpose Popular image processing technologies based on convolutional neural networks involve heavy computation, high storage cost and low accuracy for tiny defect detection, which conflicts with the real-time performance, high accuracy and limited computing and storage resources required by industrial applications. Therefore, an improved YOLOv4, named YOLOv4-Defect, is proposed to address these problems.

Design/methodology/approach On the one hand, this study performs multi-dimensional compression of the YOLOv4 feature extraction network to simplify the model and improves its feature extraction ability through knowledge distillation. On the other hand, a prediction scale with a finer receptive field is added to optimize the model structure, which improves detection performance for tiny defects.

Findings The effectiveness of the method is verified on the public data sets NEU-CLS and DAGM 2007 and on a steel ingot data set collected in an actual industrial setting. The experimental results demonstrate that the proposed YOLOv4-Defect method greatly improves recognition efficiency and accuracy while reducing the model's size and computational cost.

Originality/value This paper proposes an improved YOLOv4, named YOLOv4-Defect, for surface defect detection. It is well suited to industrial scenarios with limited storage and computing resources and meets their requirements for real-time performance and high precision.
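
The knowledge distillation mentioned in the approach can be illustrated with a generic distillation loss; this is a standard formulation in PyTorch, not the authors' exact loss or network, and the tensors below are stand-ins for detector class scores.

```python
# A generic knowledge-distillation loss: the compressed student mimics the
# temperature-softened outputs of a larger teacher while also fitting the
# ground-truth labels. Included only to illustrate the technique referenced
# in the abstract.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      temperature=4.0, alpha=0.5):
    # Soft targets: KL divergence between softened teacher and student outputs
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: standard cross-entropy with the labels
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard

# Illustrative usage with random tensors standing in for class scores
student = torch.randn(8, 10, requires_grad=True)
teacher = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student, teacher, labels)
loss.backward()
```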


2019 ◽  
Vol 39 (2) ◽  
pp. 357-380 ◽  
Author(s):  
Eve Rosenzweig ◽  
Carrie Queenan ◽  
Ken Kelley

Purpose Research on the service–profit chain (SPC) provides important insights regarding how organizations attain service excellence. However, this research stream does not shed light on the mechanisms by which service organizations sustain such excellence, despite the struggles of many organizations to do so. Thus, the purpose of this paper is to develop the SPC as a more dynamic system characterized by feedback loops, accumulation processes and time delays, drawing on the service operations, human resources and marketing literatures.

Design/methodology/approach The authors posit that the feedback loops operate as virtuous cycles: increases in customer perceptions of service quality and in profit margins lead to subsequent increases in the quality of the internal working environment, which in turn further improves performance, and so on. The authors test the hypotheses using five years of archival data on 417 full-service US hotels. The unique data set combines longitudinal data from multiple functions, including employee assessments of their tools, practices and ability to serve customers, customer perceptions of service quality and objective measures of financial performance.

Findings The authors find support for the idea that some organizations provide customers with high-quality service over time by reinvesting in the inputs responsible for generating the initial success, i.e. in various aspects of the internal working environment.

Research limitations/implications The analysis of 417 hotels from a single firm may limit the extent to which the findings can be generalized.

Originality/value By expanding the boundaries of previous conceptual and empirical models investigating SPCs, the authors offer a deeper understanding of the cross-functional character of modern operational systems and the complex dynamics that these systems generate.
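
The dynamic structure described here (a virtuous cycle with accumulation, reinvestment and a time delay) can be made concrete with a toy simulation; all parameters and functional forms below are illustrative assumptions, not the authors' estimated model.

```python
# A toy system-dynamics sketch of the virtuous cycle: profit is partly
# reinvested in the internal working environment, which lifts perceived
# service quality and, after a delay, profit again.
delay = 2            # periods before reinvestment affects the work environment
reinvest_rate = 0.3  # share of margin reinvested
decay = 0.1          # per-period erosion of the internal working environment

work_env = [1.0]     # quality of internal working environment (a stock)
quality = []         # customer-perceived service quality
margin = []          # profit margin

for t in range(12):
    q = 0.8 * work_env[t]              # quality driven by the work environment
    m = 0.5 * q                        # margin driven by quality
    quality.append(q)
    margin.append(m)
    # accumulation with decay plus delayed reinvestment of earlier profits
    reinvestment = reinvest_rate * margin[t - delay] if t >= delay else 0.0
    work_env.append((1 - decay) * work_env[t] + reinvestment)

for t, (q, m) in enumerate(zip(quality, margin)):
    print(f"t={t:2d}  work_env={work_env[t]:.2f}  quality={q:.2f}  margin={m:.2f}")
```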


2017 ◽  
Vol 24 (4) ◽  
pp. 1052-1064 ◽  
Author(s):  
Yong Joo Lee ◽  
Seong-Jong Joo ◽  
Hong Gyun Park

Purpose The purpose of this paper is to measure the comparative efficiency of 18 Korean commercial banks in the presence of negative observations and to examine performance differences among them by grouping them according to their market conditions.

Design/methodology/approach The authors employ two data envelopment analysis (DEA) models that can handle negative data: the Banker, Charnes and Cooper (BCC) model and a modified slacks-based measure of efficiency (MSBM) model. The BCC model is translation invariant for inputs or outputs, depending on whether it is output- or input-oriented, whereas the MSBM model is unit invariant in addition to translation invariant. The authors compare the results from both models and choose one for interpreting the results.

Findings Most Korean banks recovered from their worst performance in 2011 and showed similar performance in recent years. Among the three groups (national banks, regional banks and special banks), the special banks generally demonstrated superior performance across models and years. In particular, the performance difference between the special banks and the regional banks was statistically significant. The authors conclude that the high performance of the special banks was due to their nationwide market access and ownership type.

Practical implications This study demonstrates how to analyze and measure the efficiency of entities when variables contain negative observations, using a data set for Korean banks. The authors tried two major DEA models that can handle negative data and propose a practical direction for future studies.

Originality/value Although there are papers measuring the performance of banks in Korea, all prior papers on this topic have studied efficiency or productivity using positive data sets. However, variables such as net income and growth rates frequently include negative observations in bank data sets. This is the first paper to investigate the efficiency of bank operations in the presence of negative data in Korea.
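
The BCC model's translation invariance is what allows negative outputs to be shifted by a constant before solving. The sketch below solves an input-oriented BCC envelopment linear program with SciPy on illustrative data (not the Korean bank data set); the MSBM model is not shown.

```python
# Input-oriented BCC (variable-returns-to-scale) DEA solved as a linear program.
# The convexity constraint (sum of lambdas = 1) makes this model translation
# invariant in outputs, so negative outputs can be shifted before solving.
import numpy as np
from scipy.optimize import linprog

# rows = DMUs (e.g. banks); columns = inputs / outputs (illustrative data)
X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 4.0], [5.0, 2.0]])    # inputs
Y = np.array([[1.0, -0.5], [2.0, 0.4], [3.0, -1.2], [2.5, 0.8]])  # outputs, some negative

# Translation: shift each output column so all values are positive
Y_shifted = Y + np.abs(Y.min(axis=0)) + 1.0

def bcc_input_efficiency(X, Y, o):
    """Efficiency score theta of DMU o under the BCC envelopment form."""
    n, m = X.shape          # n DMUs, m inputs
    s = Y.shape[1]          # s outputs
    c = np.zeros(n + 1)     # decision variables: [theta, lambda_1, ..., lambda_n]
    c[0] = 1.0              # minimise theta
    A_ub, b_ub = [], []
    for i in range(m):      # sum_j lambda_j * x_ij <= theta * x_io
        A_ub.append(np.concatenate(([-X[o, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(s):      # sum_j lambda_j * y_rj >= y_ro
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[o, r])
    A_eq = [np.concatenate(([0.0], np.ones(n)))]     # convexity: sum lambda = 1
    b_eq = [1.0]
    bounds = [(None, None)] + [(0.0, None)] * n      # theta free, lambda >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[0]

for o in range(X.shape[0]):
    print(f"DMU {o}: BCC efficiency = {bcc_input_efficiency(X, Y_shifted, o):.3f}")
```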


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Hendri Murfi

Purpose The aim of this research is to develop an eigenspace-based fuzzy c-means method for scalable topic detection.

Design/methodology/approach The eigenspace-based fuzzy c-means (EFCM) method combines representation learning and clustering. The textual data are transformed into a lower-dimensional eigenspace using truncated singular value decomposition, and fuzzy c-means is performed on the eigenspace to identify the centroids of each cluster. The topics are obtained by transforming the centroids back into the nonnegative subspace of the original space. In this paper, we extend the EFCM method for scalability using two approaches, i.e. single-pass and online processing, and call the resulting topic detection methods spEFCM and oEFCM, respectively.

Findings Our simulations show that both oEFCM and spEFCM provide faster running times than EFCM for data sets that do not fit in memory, at the cost of a decrease in the average coherence score. For data sets that both fit and do not fit in memory, oEFCM provides a better tradeoff between running time and coherence score than spEFCM.

Originality/value This research produces a scalable topic detection method. Besides this scalability, the developed method also provides a faster running time for data sets that fit in memory.
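
A sketch of the EFCM pipeline described above: project TF-IDF vectors into a lower-dimensional eigenspace with truncated SVD, run fuzzy c-means there, then map the centroids back and clip to the nonnegative subspace to read off topic terms. This shows the batch version only (not the spEFCM/oEFCM variants), the fuzzy c-means is a minimal hand-rolled implementation, and the toy corpus is a placeholder assumption.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means; returns cluster centroids in the input space."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, X.shape[0]))
    u /= u.sum(axis=0)                        # memberships sum to 1 per point
    for _ in range(n_iter):
        um = u ** m
        centroids = um @ X / um.sum(axis=1, keepdims=True)
        dist = np.linalg.norm(X[None, :, :] - centroids[:, None, :], axis=2) + 1e-12
        u = 1.0 / (dist ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=0)
    return centroids

corpus = ["stock markets fall", "markets rally on earnings",
          "team wins the cup", "coach praises the team"]
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(corpus)

svd = TruncatedSVD(n_components=2, random_state=0)
X_eig = svd.fit_transform(X)                  # documents in the eigenspace

centroids_eig = fuzzy_c_means(X_eig, n_clusters=2)
# Back-transform the centroids and keep the nonnegative part to read topics
topics = np.clip(svd.inverse_transform(centroids_eig), 0, None)

terms = np.array(tfidf.get_feature_names_out())
for k, topic in enumerate(topics):
    print(f"topic {k}:", ", ".join(terms[np.argsort(topic)[::-1][:3]]))
```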


Kybernetes ◽  
2019 ◽  
Vol 48 (9) ◽  
pp. 2006-2029
Author(s):  
Hongshan Xiao ◽  
Yu Wang

Purpose Feature space heterogeneity exists widely in various application fields of classification techniques, such as customs inspection decisions, credit scoring and medical diagnosis. This paper aims to study the relationship between feature space heterogeneity and classification performance.

Design/methodology/approach A measurement is first developed for measuring and identifying any significant heterogeneity in the feature space of a data set; its main idea is derived from meta-analysis. For data sets with significant feature space heterogeneity, a classification algorithm based on factor analysis and clustering is proposed to learn the data patterns, which, in turn, are used for classification.

Findings The proposed approach has two main advantages over previous methods. The first lies in feature transformation using orthogonal factor analysis, which yields new features free of redundancy and irrelevance. The second rests on partitioning samples to capture the feature space heterogeneity reflected in differences of factor scores. The validity and effectiveness of the proposed approach are verified on a number of benchmark data sets.

Research limitations/implications The measurement should be used to guide the heterogeneity elimination process, which is an interesting topic for future research. In addition, developing a classification algorithm that enables scalable and incremental learning for large data sets with significant feature space heterogeneity is also an important issue.

Practical implications Measuring and eliminating the feature space heterogeneity that may exist in the data are important for accurate classification. This study provides a systematic approach to measuring and eliminating feature space heterogeneity for better classification performance, which is favorable for applying classification techniques to real-world problems.

Originality/value A meta-analysis-based measurement for identifying any significant feature space heterogeneity in a classification problem is developed, and an ensemble classification framework is proposed to deal with the heterogeneity and improve classification accuracy.
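
The general idea of the classification algorithm (factor analysis for non-redundant features, clustering of factor scores to partition samples, then per-partition classifiers) can be sketched as follows. The meta-analysis-based heterogeneity measurement is not shown, and the data set and model choices are illustrative assumptions rather than the paper's specification.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

fa = FactorAnalysis(n_components=5, random_state=0)
F_tr = fa.fit_transform(X_tr)                 # factor scores with low redundancy
F_te = fa.transform(X_te)

km = KMeans(n_clusters=2, n_init=10, random_state=0)
part_tr = km.fit_predict(F_tr)                # partition samples by factor-score pattern
part_te = km.predict(F_te)

# One classifier per partition, applied to the matching test samples
models = {p: DecisionTreeClassifier(max_depth=3, random_state=0)
             .fit(F_tr[part_tr == p], y_tr[part_tr == p])
          for p in np.unique(part_tr)}
y_pred = np.empty_like(y_te)
for p, model in models.items():
    idx = part_te == p
    if idx.any():
        y_pred[idx] = model.predict(F_te[idx])

print("partition-wise classification accuracy:", round(accuracy_score(y_te, y_pred), 3))
```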


2020 ◽  
Vol 41 (4/5) ◽  
pp. 247-268 ◽  
Author(s):  
Starr Hoffman ◽  
Samantha Godbey

Purpose This paper explores trends over time in library staffing and staffing expenditures among two- and four-year colleges and universities in the United States.

Design/methodology/approach The researchers merged and analyzed data from 1996 to 2016 from the National Center for Education Statistics for over 3,500 libraries at postsecondary institutions. The study is primarily descriptive and addresses two research questions: How do staffing trends in academic libraries over this period relate to Carnegie classification and institution size? How do trends in library staffing expenditures over this period correspond to these same variables?

Findings Across all institutions, on average, total library staff decreased from 1998 to 2012. Numbers of librarians declined at master's and doctoral institutions between 1998 and 2016. Numbers of students per librarian increased over time in each Carnegie and size category. Average inflation-adjusted staffing expenditures remained steady for master's, baccalaureate and associate's institutions. Salaries as a percentage of library budget decreased only among doctoral institutions and institutions with 20,000 or more students.

Originality/value This is a valuable study of trends over time, an analysis that has previously been difficult without downloading and merging separate data sets from multiple government sources; few studies have taken such an approach, and as a result institutions and libraries have been making resource allocation decisions based on only a fraction of the available data. Academic libraries can use this study and the resulting data set to benchmark key staffing characteristics.
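
The kind of merge-and-trend computation the study describes (combining yearly staffing records with enrollment and computing students per librarian by Carnegie classification) can be sketched in pandas; the tables, column names and values below are hypothetical, not the actual NCES/IPEDS field names or data.

```python
import pandas as pd

# Hypothetical yearly staffing and enrollment extracts (tiny illustrative data)
staffing = pd.DataFrame({
    "institution_id": [1, 2, 1, 2, 1, 2],
    "year":           [1996, 1996, 2006, 2006, 2016, 2016],
    "carnegie_class": ["Doctoral", "Associate's"] * 3,
    "librarians_fte": [40, 6, 36, 5, 30, 5],
})
enrollment = pd.DataFrame({
    "institution_id": [1, 2, 1, 2, 1, 2],
    "year":           [1996, 1996, 2006, 2006, 2016, 2016],
    "fte_students":   [20000, 3000, 24000, 3500, 27000, 4200],
})

# Merge the sources and compute the students-per-librarian ratio
merged = staffing.merge(enrollment, on=["institution_id", "year"], how="inner")
merged["students_per_librarian"] = merged["fte_students"] / merged["librarians_fte"]

# Mean students per librarian by Carnegie classification and year
trend = (merged
         .groupby(["carnegie_class", "year"])["students_per_librarian"]
         .mean()
         .unstack("year"))
print(trend.round(1))
```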


2019 ◽  
Vol 10 (5) ◽  
pp. 1015-1046 ◽  
Author(s):  
Sung Min Kim ◽  
Gopesh Anand ◽  
Eric C. Larson ◽  
Joseph Mahoney

Purpose Enterprise systems are commonly implemented by firms through outsourcing arrangements with software vendors. However, deriving benefits from these implementations has proved to be a challenge, and a great deal of variation has been observed in the extent of value generated for client and vendor firms. This research examines the role of co-specialization as a strategy for getting the most out of outsourced enterprise systems. The authors develop hypotheses relating resource co-specialization to two indicators of success for the implementation of enterprise software: (1) exchange success and (2) firm growth.

Design/methodology/approach The hypotheses are tested using a unique panel data set of 175 firms adopting advanced planning and scheduling (APS) software, a type of enterprise system used for managing manufacturing and logistics. The authors identify organizational factors that support co-specialization and then examine how co-specialization is associated with enterprise software implementation success, controlling for the endogenous choice to co-specialize.

Findings The empirical results suggest that resource co-specialization is positively associated with implementation success and that the two co-specialization pathways examined complement each other in providing performance benefits.

Originality/value This paper contributes to the research literature on outsourcing. The study also provides a new empirical test using a unique data set of 175 firms adopting APS software.
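
One standard way to control for an endogenous binary choice such as the decision to co-specialize is a Heckman-style two-step: a probit models the choice, and its inverse Mills ratio enters the outcome regression as a control. The sketch below uses synthetic data and an illustrative specification; it is not necessarily the estimator used in the paper.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 175
firm_size = rng.normal(size=n)                         # observed covariate
z = rng.normal(size=n)                                 # variable shifting the choice
cospecialize = (0.8 * z + 0.3 * firm_size + rng.normal(size=n) > 0).astype(int)
growth = 1.0 + 0.5 * cospecialize + 0.4 * firm_size + rng.normal(size=n)

# Step 1: probit for the co-specialization choice
W = sm.add_constant(np.column_stack([z, firm_size]))
probit = sm.Probit(cospecialize, W).fit(disp=False)
xb = W @ probit.params
mills = np.where(cospecialize == 1,
                 norm.pdf(xb) / norm.cdf(xb),
                 -norm.pdf(xb) / (1 - norm.cdf(xb)))

# Step 2: outcome regression with the inverse Mills ratio as a control
X = sm.add_constant(np.column_stack([cospecialize, firm_size, mills]))
ols = sm.OLS(growth, X).fit()
print(ols.params)   # coefficient on cospecialize, adjusted for the endogenous choice
```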


2014 ◽  
Vol 31 (8) ◽  
pp. 1778-1789
Author(s):  
Hongkang Lin

Purpose – The clustering/classification method proposed in this study, designated the PFV-index method, provides the means to solve the following problems for a data set characterized by imprecision and uncertainty: first, discretizing the continuous values of all individual attributes within a data set; second, evaluating the optimality of the discretization results; third, determining the optimal number of clusters per attribute; and fourth, improving the classification accuracy (CA) of data sets characterized by uncertainty. The paper aims to discuss these issues.

Design/methodology/approach – The proposed method, designated the PFV-index method, combines a particle swarm optimization algorithm, the fuzzy C-means method, variable precision rough sets theory and a new cluster validity index function.

Findings – The method clusters the values of the individual attributes within the data set and achieves both the optimal number of clusters and the optimal CA.

Originality/value – The validity of the proposed approach is investigated by comparing the classification results obtained on UCI data sets with those obtained by supervised classification methods, namely back-propagation neural networks (BPNNs) and decision trees.
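
The per-attribute discretization step can be illustrated in simplified form: for each continuous attribute, candidate cluster counts are tried and the count that maximizes a cluster validity score is kept. Plain KMeans and the silhouette index stand in here for the paper's fuzzy C-means, PSO search and custom validity index, so this illustrates the idea rather than the PFV-index method itself.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X = load_iris().data

for j in range(X.shape[1]):                      # discretize one attribute at a time
    values = X[:, j].reshape(-1, 1)
    best_k, best_score = None, -1.0
    for k in range(2, 7):                        # candidate numbers of clusters
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(values)
        score = silhouette_score(values, labels) # validity of this clustering
        if score > best_score:
            best_k, best_score = k, score
    print(f"attribute {j}: optimal clusters = {best_k} (silhouette = {best_score:.2f})")
```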


2011 ◽  
Vol 77 (19) ◽  
pp. 7000-7006 ◽  
Author(s):  
Nicola M. Reid ◽  
Sarah L. Addison ◽  
Lucy J. Macdonald ◽  
Gareth Lloyd-Jones

ABSTRACT Huhu grubs (Prionoplus reticularis) are wood-feeding beetle larvae endemic to New Zealand and belonging to the family Cerambycidae. Compared to the wood-feeding lower termites, very little is known about the diversity and activity of microorganisms associated with xylophagous cerambycid larvae. To address this, we used pyrosequencing to evaluate the diversity of metabolically active and inactive bacteria in the huhu larval gut. Our estimate, that the gut harbors at least 1,800 phylotypes, is based on 33,420 sequences amplified from genomic DNA and reverse-transcribed RNA. Analysis of genomic DNA- and RNA-derived data sets revealed that 71% of all phylotypes (representing 95% of all sequences) were metabolically active. Rare phylotypes contributed considerably to the richness of the community and were also largely metabolically active, indicating their participation in digestive processes in the gut. The dominant families in the active community (RNA data set) included Acidobacteriaceae (24.3%), Xanthomonadaceae (16.7%), Acetobacteraceae (15.8%), Burkholderiaceae (8.7%), and Enterobacteriaceae (4.1%). The most abundant phylotype comprised 14% of the active community and affiliated with Dyella ginsengisoli (Gammaproteobacteria), suggesting that a Dyella-related organism is a likely symbiont. This study provides new information on the diversity and activity of gut-associated microorganisms that are essential for the digestion of the nutritionally poor diet consumed by wood-feeding larvae. Many huhu gut phylotypes affiliated with insect symbionts or with bacteria present in acidic environments or associated with fungi.

