Evaluation of User Reactions and Verification of the Authenticity of the User’s Identity during a Long Web Survey

2021 ◽  
Vol 11 (22) ◽  
pp. 11034
Author(s):  
Evgeny Nikulchev ◽  
Alexander Gusev ◽  
Dmitry Ilin ◽  
Nurziya Gazanova ◽  
Sergey Malykh

Web surveys are very popular in the Internet space. They are widely used to gather customer opinions about Internet services, for sociological and psychological research, and as part of knowledge-testing systems in electronic learning. When conducting web surveys, one of the issues to consider is the respondents’ authenticity throughout the entire survey process. We took 20,000 responses to an online questionnaire as experimental data. The survey took about 45 min on average. We did not take the given answers into account; we considered only the response time to the first question on each page of the survey interface, that is, only the users’ reaction time. Data analysis showed that respondents get used to the interface elements and want to finish a long survey as soon as possible, which leads to quicker reactions. Based on the data, we built two neural network models that identify records in which the respondent’s authenticity was violated or the respondent acted as a random clicker. The amount of data allows us to conclude that the identified dependencies are widely applicable.
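The abstract does not specify the architecture of the two models, but the core idea — classifying respondents from per-page reaction-time vectors — can be illustrated with a minimal sketch. Everything below is hypothetical: the synthetic reaction-time distributions, the number of pages, and the plain logistic-regression "network" all stand in for the authors' actual data and models.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pages = 20  # hypothetical number of survey pages

# Hypothetical synthetic data: genuine respondents start slow and speed up
# as they habituate to the interface; random clickers are uniformly fast.
def genuine(n):
    base = np.linspace(8.0, 3.0, n_pages)  # seconds, habituation trend
    return base + rng.normal(0, 0.8, (n, n_pages))

def clicker(n):
    return rng.uniform(0.2, 1.0, (n, n_pages))

X = np.vstack([genuine(200), clicker(200)])
y = np.array([0] * 200 + [1] * 200)  # 1 = random clicker

# Minimal logistic-regression classifier trained by gradient descent,
# standing in for the neural network models described in the abstract.
w = np.zeros(n_pages)
b = 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * (p - y).mean()

pred = ((X @ w + b) > 0).astype(int)
accuracy = (pred == y).mean()
```

Because the two synthetic populations are strongly separable on reaction time alone, even this linear model separates them; the point is only that the per-page response-time vector is the feature representation, as in the study.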

Author(s):  
Seung-Geon Lee ◽  
Jaedeok Kim ◽  
Hyun-Joo Jung ◽  
Yoonsuck Choe

Estimating the relative importance of each sample in a training set has important practical and theoretical value, such as in importance sampling or curriculum learning. This kind of focus on individual samples invokes the concept of sample-wise learnability: How easy is it to correctly learn each sample (cf. PAC learnability)? In this paper, we approach the sample-wise learnability problem within a deep learning context. We propose a measure of the learnability of a sample with a given deep neural network (DNN) model. The basic idea is to train the given model on the training set and, for each sample, aggregate the hits and misses over the entire training epochs. Our experiments show that the sample-wise learnability measure collected this way is highly linearly correlated across different DNN models (ResNet-20, VGG-16, and MobileNet), suggesting that such a measure can provide general insights into the data’s properties. We expect our method to help develop better curricula for training, and help us better understand the data itself.
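The hit-aggregation idea is simple enough to sketch. The toy below uses a perceptron on synthetic 2-D data as a stand-in for the paper's DNNs (ResNet-20 etc.); the data, learning rate, and epoch count are all illustrative assumptions. What it demonstrates is only the bookkeeping: after every epoch, record which samples the current model classifies correctly, then divide accumulated hits by the number of epochs.

```python
import numpy as np

rng = np.random.default_rng(1)
n, epochs = 200, 30

# Hypothetical linearly separable 2-D data: label is the sign of x0.
X = rng.normal(0, 1, (n, 2))
y = (X[:, 0] > 0).astype(int)

w = np.zeros(2)
b = 0.0
hits = np.zeros(n)  # per-sample hit counts, accumulated across epochs

for _ in range(epochs):
    # One SGD pass (perceptron updates as a stand-in for DNN training).
    for i in rng.permutation(n):
        pred = int(X[i] @ w + b > 0)
        if pred != y[i]:
            step = 0.1 * (y[i] - pred)
            w += step * X[i]
            b += step
    # Aggregate hits: which samples does the current model get right?
    hits += ((X @ w + b > 0).astype(int) == y)

# Fraction of epochs each sample was classified correctly.
learnability = hits / epochs
```

Samples far from the decision boundary tend to be learned in early epochs and so accumulate high scores, while boundary-adjacent samples score lower — the kind of per-sample signal the paper correlates across architectures.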


2018 ◽  
Vol 26 (1) ◽  
pp. 190-204 ◽  
Author(s):  
Pitoyo Hartono

Recently, many neural network models have been successfully applied to histopathological analysis, including cancer classification. While some of them reach human-expert-level accuracy in classifying cancers, most have to be treated as black boxes, offering no explanation of how they arrived at their decisions. This lack of transparency may hinder further application of neural networks in realistic clinical settings, where not only the decision but also its explainability is important. This study proposes a transparent neural network that complements its classification decisions with visual information about the given problem. The auxiliary visual information allows the user to understand, to some extent, how the neural network arrives at its decision. This transparency potentially increases the usability of neural networks in realistic histopathological analysis. In the experiment, the accuracy of the proposed neural network is compared against some existing classifiers, and the visual information is compared against some dimensionality reduction methods.


Field Methods ◽  
2017 ◽  
Vol 29 (3) ◽  
pp. 266-280 ◽  
Author(s):  
Melanie Revilla

The development of web surveys has been accompanied by the emergence of new scales that take advantage of the visual and interactive features the Internet provides, such as drop-down menus, sliders, drag-and-drop, or order-by-click scales. This article focuses on order-by-click scales, studying the comparability of the data obtained for this scale when answered through PCs versus smartphones. I used data from an experiment in which panelists from the Netquest opt-in panel in Spain were randomly assigned to a PC, smartphone-optimized, or smartphone non-optimized version of the same questionnaire in two waves. I found significant differences due to device and optimization for at least some indicators and questions.


Every user of the Internet has high expectations of its reliability, efficiency, and productivity, among other aspects. Providing uninterrupted service is of prime importance. The amount of data, along with the enormous number of residual traces, is increasing rapidly and significantly. As a result, the analysis of log data has profoundly influenced many research domains. Social media is an integral part of the Internet, and real-time blogging services like Twitter are widely used because they inherently depict the social graph, information propagation, and social dynamics as a whole. The content of tweets is of major interest to researchers, as tweets reflect individuals’ experiences and real-time events. Researchers have explored several applications of tweet analysis. One such application is detecting service outages through the myriad of messages users post about unavailability. Simple techniques suffice to extract the key semantics from tweets, making them fast alerts warning of service unavailability. Similarly, outage mailing lists are text-based messages rich in semantic information about the underlying outages. Automatically parsing and processing these data with NLP and text mining for service outage detection remains a great challenge. An extensive study was conducted to explore the research directions and opportunities in log analysis, tweet analysis, and outage mailing list analysis for detecting and predicting service outages. A systematic framework is also articulated, covering all stages of the analytics, and potential research challenges and paths in these analyses are discussed. We introduce three major data analysis methods for diagnosing the causes of service failures, detecting service failures prematurely, and predicting them.
We analyze syslogs (log data generated by the system) to detect the cause of a failure by automatically learning over millions of logs, and we analyze data from a social networking service (namely, Twitter, plus outage mails) to detect possible service failures by extracting failure-related tweets — which account for less than one percent of all tweets — in real time with high accuracy. The paper is an effort not only to detect outages but also to forecast them using Twitter analysis based on time-series and neural network models. We further propose a log analysis model for the same purpose.
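The claim that "simple techniques suffice" for first-pass tweet filtering can be made concrete with a keyword match. This sketch is purely illustrative — the term list, example tweets, and function name are assumptions, not the paper's method, which additionally uses time-series and neural network models.

```python
# Hypothetical first-pass filter for failure-related tweets.
OUTAGE_TERMS = {"down", "outage", "unavailable", "can't connect", "not working"}

def is_outage_tweet(text: str) -> bool:
    """Flag a tweet if it mentions any outage-related term."""
    t = text.lower()
    return any(term in t for term in OUTAGE_TERMS)

tweets = [
    "Is the payment service down for anyone else?",
    "Loving the new UI update!",
    "Site outage again?? third time this week",
]
flags = [is_outage_tweet(t) for t in tweets]
```

A substring filter like this is fast enough for real-time alerting but produces false positives (e.g. "down" matches "download"), which is exactly the gap the abstract's learned models are meant to close.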


Humans have developed many ways of expressing their thoughts through various media. The Internet has not only become a credible means of expressing one’s thoughts but is also rapidly becoming the single largest means of doing so. In this context, one area of focus is the study of users’ negative online behaviors, such as toxic comments containing threats, obscenity, insults, and abuse. Identifying and removing toxic communication from public forums is a critical task, and analyzing a large corpus of comments by hand is infeasible for human moderators. Our approach is to use Natural Language Processing (NLP) techniques to provide an efficient and accurate tool for detecting online toxicity. We apply the TF-IDF feature extraction technique and neural network models to a toxic-comment classification problem with a labeled dataset from the Wikipedia Talk Pages.
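The TF-IDF front end of such a pipeline can be sketched in a few lines. The toy corpus, labels, and insult-term list below are invented for illustration, and a simple term-weight score stands in for the abstract's neural network classifier; only the TF-IDF computation itself (with smoothed IDF, as in scikit-learn's default) is the real technique.

```python
import numpy as np
from collections import Counter

# Hypothetical toy corpus standing in for the Wikipedia Talk Page dataset.
docs = [
    "you are a helpful editor thanks",
    "this edit is terrible you idiot",
    "thanks for the revert great work",
    "shut up you stupid idiot troll",
]
labels = np.array([0, 1, 0, 1])  # 1 = toxic

# Vocabulary and raw term-frequency matrix.
vocab = sorted({w for d in docs for w in d.split()})
idx = {w: i for i, w in enumerate(vocab)}
tf = np.zeros((len(docs), len(vocab)))
for r, d in enumerate(docs):
    for w, c in Counter(d.split()).items():
        tf[r, idx[w]] = c

# Smoothed inverse document frequency: log((1 + N) / (1 + df)) + 1.
df = (tf > 0).sum(axis=0)
idf = np.log((1 + len(docs)) / (1 + df)) + 1
tfidf = tf * idf

# Toy scorer: TF-IDF mass on known insult terms (stand-in for the NN model).
toxic_terms = ["idiot", "stupid", "troll"]
score = tfidf[:, [idx[w] for w in toxic_terms if w in idx]].sum(axis=1)
pred = (score > 0).astype(int)
```

In a real system the `tfidf` matrix would be fed to a trained classifier rather than a hand-picked term list; the sketch only shows how raw comments become the numeric features the abstract refers to.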


2020 ◽  
Vol 5 ◽  
pp. 140-147 ◽  
Author(s):  
T.N. Aleksandrova ◽  
E.K. Ushakov ◽  
A.V. Orlova ◽  
...  

Author(s):  
Vera P. Chudinova

The article considers the results of research into the problems of interaction between children, teenagers, and the Internet, and into the protection and realization of their rights by librarians. Results of foreign research on this theme are also presented. A number of its legal aspects are analysed, and the children’s rights of direct relevance to the theme of “Children and Information”, based on the UN Convention, are placed in strong relief. The main features, possibilities, and dangers of the Internet for personal development are shown. Surveys of schoolchildren, teachers, and librarians, held by researchers of the Russian State Children’s Library, allowed the author to examine various aspects of the younger generation’s interaction with the Internet and to set out the actual problems facing library experts today.


The series of neural network models used in the development of an aggregated digital twin of equipment as a cyber-physical system is presented. The twins of machining accuracy, chip formation, and tool wear are examined in detail. On their basis, systems for stabilizing the chip formation process during cutting and for diagnosing cutting tool wear are developed.
Keywords: cyber-physical system; neural network model of equipment; big data; digital twin of chip formation; digital twin of tool wear; digital twin of nanostructured coating choice


Author(s):  
Ann-Sophie Barwich

How much does stimulus input shape perception? The common-sense view is that our perceptions are representations of objects and their features and that the stimulus structures the perceptual object. The problem for this view concerns perceptual biases as responsible for distortions and the subjectivity of perceptual experience. These biases are increasingly studied as constitutive factors of brain processes in recent neuroscience. In neural network models the brain is said to cope with the plethora of sensory information by predicting stimulus regularities on the basis of previous experiences. Drawing on this development, this chapter analyses perceptions as processes. Looking at olfaction as a model system, it argues for the need to abandon a stimulus-centred perspective, where smells are thought of as stable percepts, computationally linked to external objects such as odorous molecules. Perception here is presented as a measure of changing signal ratios in an environment informed by expectancy effects from top-down processes.

