Computational humor and Christie Davies’ basis for joke comparison

2017 ◽  
Vol 5 (4) ◽  
pp. 169 ◽  
Author(s):  
Julia Rayz

While computational humor has historically paid very little attention to sociology, drawing mostly on subfields of linguistics and some psychology, Christie Davies wrote a number of papers that should directly affect the study of computational humor. This paper looks at one of them to illustrate this point, namely Davies’s chapter in the Primer of Humor Research. With the advancements in computational processing and big data analysis/analytics, it is becoming possible to examine the large collections of humorous texts available on the web, in particular older texts, including joke materials, that are being scanned from previously published printed versions. Most approaches within computational humor have concentrated on comparing existing jokes, without taking into account classes of jokes that are absent in a given setting. While the absence of a class is unlikely to affect classification – something that researchers in computational humor seem to be interested in – it does come to light when the features of various classes are compared and conclusions are drawn. This paper describes existing approaches and how they could be enhanced, thanks to Davies’s contributions and the advancements in data processing.

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Jui-Chan Huang ◽  
Po-Chang Ko ◽  
Cher-Min Fong ◽  
Sn-Man Lai ◽  
Hsin-Hung Chen ◽  
...  

With the increase in the number of online shopping users, customer loyalty is directly related to product sales. This research explores statistical modeling and simulation of online shopping customer loyalty based on machine learning and big data analysis, mainly using a machine learning clustering algorithm to simulate customer loyalty. A k-means interactive mining algorithm based on a hash structure is called to perform data mining on a multidimensional hierarchical tree of corporate credit risk; the support thresholds for the different levels of data mining are continuously adjusted according to specific requirements, and effective association rules are selected until satisfactory results are obtained. After credit risk assessment and early-warning modeling for the enterprise, an initial preselected model is obtained.

The information to be collected is first fetched by a web crawler from the target website into a temporary web page database, where it goes through a series of preprocessing steps such as completion, deduplication, analysis, and extraction. These steps ensure that a crawled page is parsed correctly and avoid incorrect data caused by network errors during crawling. Correctly parsed data are stored for the next step of data cleaning or data analysis. To parse HTML documents with a Java program, the subject keyword and URL are first set, and the HTML is parsed from the obtained file or string by analyzing the structure of the website; then a CSS selector is used to find the web page list information, and the retrieved data are stored in Elements.

In the overall fit test of the model, the root mean square error of approximation (RMSEA) value is 0.053, between 0.05 and 0.08. The results show that the model designed in this study achieves a relatively good fit, strengthens customers’ perception of shopping websites, and indicates that relationship trust plays a greater role in maintaining customer loyalty.
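The abstract above names k-means clustering as the core modeling step but does not give the feature set or the hash-structure optimization in reproducible detail. The following is a minimal sketch in Java (the language the abstract itself mentions) of plain k-means on hypothetical one-dimensional loyalty scores; the data, the choice of k = 2, and the initialization scheme are illustrative assumptions, not the paper's method.

```java
import java.util.Arrays;

// Illustrative sketch: plain k-means on one-dimensional customer
// loyalty scores. The paper's actual features and hash-based
// optimization are not specified, so this only shows the basic
// assign/update loop the abstract refers to.
public class LoyaltyKMeans {
    // Clusters the scores into k groups; returns final centroids, sorted.
    static double[] cluster(double[] scores, int k, int maxIter) {
        double[] sorted = scores.clone();
        Arrays.sort(sorted);
        double[] centroids = new double[k];
        // Spread initial centroids across the sorted value range.
        for (int i = 0; i < k; i++)
            centroids[i] = sorted[i * (sorted.length - 1) / Math.max(1, k - 1)];
        int[] assign = new int[scores.length];
        for (int iter = 0; iter < maxIter; iter++) {
            boolean changed = false;
            // Assignment step: each point goes to its nearest centroid.
            for (int i = 0; i < scores.length; i++) {
                int best = 0;
                for (int c = 1; c < k; c++)
                    if (Math.abs(scores[i] - centroids[c])
                            < Math.abs(scores[i] - centroids[best]))
                        best = c;
                if (assign[i] != best) { assign[i] = best; changed = true; }
            }
            // Update step: move each centroid to the mean of its cluster.
            for (int c = 0; c < k; c++) {
                double sum = 0; int n = 0;
                for (int i = 0; i < scores.length; i++)
                    if (assign[i] == c) { sum += scores[i]; n++; }
                if (n > 0) centroids[c] = sum / n;
            }
            if (!changed) break; // converged: assignments are stable
        }
        Arrays.sort(centroids);
        return centroids;
    }

    public static void main(String[] args) {
        // Hypothetical loyalty scores forming two clear groups.
        double[] scores = {0.1, 0.2, 0.25, 0.15, 0.75, 0.8, 0.85, 0.9};
        double[] c = cluster(scores, 2, 100);
        // Prints the two centroids (about 0.175 and 0.825 for this data).
        System.out.printf("centroids: %.3f %.3f%n", c[0], c[1]);
    }
}
```

In a real pipeline the scores would come from the crawled and cleaned web data described above, and k would be chosen by inspecting cluster quality rather than fixed in advance.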


Big data marks a major turning point in the use of data and is a powerful vehicle for growth and profitability. A comprehensive understanding of a company's data and its potential can be a new vector for performance. It must be recognized that, without adequate analysis, data are just an unusable raw material. In this context, traditional data processing tools cannot support such an explosion of volume; they cannot respond to new needs in a timely manner and at a reasonable cost. Big data is a broad term generally referring to very large data collections that complicate the analytics tools used to harness and manage them. This chapter details what big data analysis is, presents the development of its applications, and examines the important changes that have affected the analytics context.


2021 ◽  
Author(s):  
Vasyl Melnyk ◽  
Olena Kuzmych ◽  
Nataliia Bahniuk ◽  
Nataliia Cherniashchuk ◽  
Liudmyla Hlynchuk ◽  
...  

Author(s):  
Fernando Almeida ◽  
Mário Santos

Big data is a term that has risen to prominence to describe data that exceed the processing capacity of conventional database systems. Big data is a disruptive force that will affect organizations across industries, sectors, and economies. Hidden in the immense volume, variety, and velocity of data produced today are new information, facts, relationships, indicators, and pointers that either could not practically be discovered in the past or simply did not exist before. This new information, effectively captured, managed, and analyzed, has the power to profoundly enhance the effectiveness of government. This chapter looks at the main challenges and issues that will have to be addressed to capture the full potential of big data. Additionally, the authors present a conceptual framework for big data analysis structured in three layers: (a) data capture and preprocessing, (b) data processing and interaction, and (c) auxiliary tools. Each layer has a different role to play in capturing, processing, accessing, and analyzing big data.


2021 ◽  
Vol 8 (7) ◽  
pp. 112-116
Author(s):  
Yundong Hao ◽  

With the development of the social economy, information technology is constantly advancing, and society as a whole has entered the era of big data. This era is mainly characterized by large volumes of data, diverse data types, low value density, and high requirements for data processing speed and timeliness. In the era of big data, data should be extracted promptly and effectively so as to promote the development of various industries. As institutions that play an important role in treating human diseases, medical and health institutions have gradually begun to use big data in managing patients' diagnosis and treatment. Taking medical and health institutions in Chengdu as an example, this paper explores how to use big data analysis in medical and health institutions so as to improve the efficiency of medical treatment and medical quality.


2019 ◽  
Vol 9 (1) ◽  
pp. 01-12 ◽  
Author(s):  
Kristy F. Tiampo ◽  
Javad Kazemian ◽  
Hadi Ghofrani ◽  
Yelena Kropivnitskaya ◽  
Gero Michel
