Measuring Task Learning Curve with Usage Graph Eccentricity Distribution Peaks

2018 ◽  
Vol 9 (3) ◽  
pp. 1
Author(s):  
Vagner Figueredo de Santana ◽  
Rogério Abreu de Paula ◽  
Claudio Santos Pinhanez

Interaction logs (or usage data) are abundant in the era of Big Data, but making sense of these data from a Human-Computer Interaction (HCI) perspective is an increasing challenge. Interaction log analysis involves tackling problems such as automatic task identification, modeling task deviation, and computing task learning curves. In this work, we propose a way of measuring task learning curves empirically, based on how task deviations (represented as eccentricity distribution peaks) decrease over time. From the analysis of 427 event-by-event logged sessions (captured with users’ consent) of a technical reference website, this work shows the different types of learning curves obtained by computing how deviations decrease over time. The proposed technique supported the identification of 6 different task learning curves in a set of 17 tasks, allowing tasks that are easy to perform (e.g., view content and login) to be differentiated from tasks with which users face more difficulty (e.g., register user and delete content). With such results, HCI specialists can focus their reviews on the specific tasks where users faced difficulties during real interaction, drawn from large datasets.
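The idea of reading task deviations off eccentricity distribution peaks can be made concrete with a small sketch. The usage graph below and its construction are hypothetical (the paper's exact graph definition is not reproduced in this abstract); the sketch only shows how per-node eccentricities of a usage graph yield a distribution whose peaks can then be inspected:

```python
from collections import Counter, deque

def eccentricities(adj):
    """Eccentricity of each node: the greatest shortest-path
    distance from that node to any reachable node (via BFS)."""
    ecc = {}
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        ecc[src] = max(dist.values())
    return ecc

# Hypothetical usage graph: nodes are UI states, edges are
# transitions observed in a logged session.
usage_graph = {
    "home":    ["search", "login"],
    "search":  ["results"],
    "results": ["content", "search"],
    "content": ["home"],
    "login":   ["home"],
}
ecc = eccentricities(usage_graph)
# The eccentricity distribution: over-represented high
# eccentricities would be read as task deviations.
distribution = Counter(ecc.values())
```

Tracking how the high-eccentricity peaks of this distribution shrink across a user's successive sessions is, in the paper's terms, the empirical learning curve.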

2021 ◽  
pp. neurintsurg-2021-017460
Author(s):  
Michael K Tso ◽  
Gary B Rajah ◽  
Rimal H Dossani ◽  
Michael J Meyer ◽  
Matthew J McPheeters ◽  
...  

Background: The perception of a steep learning curve associated with transradial access has resulted in its limited adoption in neurointervention despite the demonstrated benefits, including decreased access-site complications. Objective: To compare learning curves of transradial versus transfemoral diagnostic cerebral angiograms obtained by five neurovascular fellows as primary operator. Methods: The first 100–150 consecutive transradial and transfemoral angiograms performed by each fellow between July 2017 and March 2020 were identified. Mean fluoroscopy time per artery injected (angiographic efficiency) was calculated as a marker of technical proficiency and compared for every 25 consecutive procedures performed (eg, 1–25, 26–50, 51–75). Results: We identified 1242 diagnostic angiograms, 607 transradial and 635 transfemoral. The radial cohort was older (64.3 years vs 62.3 years, p=0.01) and demonstrated better angiographic efficiency (3.4 min/vessel vs 3.7 min/vessel, p=0.03). For the three fellows without previous endovascular experience, proficiency was obtained between 25 and 50 transfemoral angiograms. One fellow achieved proficiency after performing 25–50 transradial angiograms; the two other fellows, in <25 transradial angiograms. The two fellows with previous experience had flattened learning curves for both access types. Two patients experienced transient neurologic symptoms postprocedure. Transradial angiograms were associated with significantly fewer access-site complications (3/607, 0.5% vs 22/635, 3.5%, p<0.01). Radial-to-femoral conversion occurred in 1.2% (7/607); femoral-to-radial conversion occurred in 0.3% (2/635). Over time, the proportion of transradial angiographic procedures increased. Conclusion: Technical proficiency improved significantly over time for both access types, typically requiring between 25 and 50 diagnostic angiograms to achieve asymptotic improvement in efficiency. Reduced access-site complications and decreased fluoroscopy time were benefits associated with transradial angiography.


2010 ◽  
Vol 38 (2) ◽  
pp. 404-432 ◽  
Author(s):  
TAMAR KEREN-PORTNOY ◽  
MICHAEL KEREN

Abstract: This paper sets out to show how facilitation between different clause structures operates over time in syntax acquisition. The phenomenon of facilitation within given structures has been widely documented, yet inter-structure facilitation has rarely been reported so far. Our findings are based on the naturalistic production corpora of six toddlers learning Hebrew as their first language. We use regression analysis, a method that has not been used to study this phenomenon. We find that the proportion of errors among the earliest produced clauses in a structure is related to the degree of acceleration of that structure's learning curve; that with the accretion of structures the proportion of errors among the first clauses of new structures declines, as does the acceleration of their learning curves. We interpret our findings as showing that learning new syntactic structures is made easier, or facilitated, by previously acquired ones.


2019 ◽  
Vol 64 (2) ◽  
pp. 157-174 ◽  
Author(s):  
Tore Nesset

Summary: With the advent of large web-based corpora, Russian linguistics steps into the era of “big data”. But how useful are large datasets in our field? What are the advantages? Which problems arise? The present study seeks to shed light on these questions based on an investigation of the Russian paucal construction in the RuTenTen corpus, a web-based corpus with more than ten billion words. The focus is on the choice between adjectives in the nominative (dve/tri/četyre starye knigi) and genitive (dve/tri/četyre staryx knigi) in paucal constructions with the numerals dve, tri or četyre and a feminine noun. Three generalizations emerge. First, the large RuTenTen dataset enables us to identify predictors that could not be explored in smaller corpora. In particular, it is shown that predicates, modifiers, prepositions and word order affect the case of the adjective. Second, we identify situations where the RuTenTen data cannot be straightforwardly reconciled with findings from earlier studies or where there appear to be discrepancies between different statistical models. In such cases, further research is called for. The effect of the numeral (dve, tri vs. četyre) and verbal government are relevant examples. Third, it is shown that adjectives in the nominative have more easily learnable predictors that cover larger classes of examples and show clearer preferences for the relevant case. It is therefore suggested that nominative adjectives have the potential to outcompete adjectives in the genitive over time. Although these three generalizations are valuable additions to our knowledge of Russian paucal constructions, three problems arise. Large internet-based corpora like the RuTenTen corpus (a) are not balanced, (b) involve a certain amount of “noise”, and (c) do not provide metadata. As a consequence, it is argued, it may be wise to exercise some caution with regard to conclusions based on “big data”.


2021 ◽  
Vol 37 (2) ◽  
pp. 107-122
Author(s):  
Anh-Cang Phan ◽  
Thanh-Ngoan Trieu ◽  
Thuong-Cang Phan

In the era of information explosion, Big Data is receiving increased attention as having important implications for the growth, profitability, and survival of modern organizations. However, it also poses many challenges in the way data is processed and queried over time. A join operation is one of the most common operations appearing in many data queries. In particular, a recursive join is a join type used to query hierarchical data, but it is far more complex and costly. The evaluation of a recursive join in MapReduce consists of iterations of two tasks: a join task and an incremental computation task. These tasks are significantly expensive and reduce the performance of queries on large datasets because they generate a great deal of intermediate data transmitted over the network. In this study, we thus propose a simple but efficient approach for Big recursive joins based on reducing by half the number of required iterations in the Spark environment. This improvement significantly reduces the number of required tasks as well as the amount of intermediate data generated and transferred over the network. Our experimental results show that the improved recursive join is more efficient and faster than the traditional one on large-scale datasets.
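The iteration structure described above (a join task followed by an incremental computation task) can be sketched as a semi-naive transitive-closure loop. The plain-Python stand-in below only illustrates that two-task iteration; it is not the paper's Spark implementation, nor its halved-iteration optimization:

```python
def transitive_closure(edges):
    """Semi-naive recursive join: each iteration joins only the
    newly derived pairs (the delta) with the base relation, then
    merges them in (the incremental-computation step)."""
    closure = set(edges)
    delta = set(edges)
    while delta:
        # Join task: delta(x, y) joined with edges(y, z) -> (x, z)
        derived = {(x, z) for (x, y) in delta
                          for (y2, z) in edges if y == y2}
        # Incremental task: keep only genuinely new pairs
        delta = derived - closure
        closure |= delta
    return closure

edges = {("a", "b"), ("b", "c"), ("c", "d")}
tc = transitive_closure(edges)
```

Each pass of the `while` loop corresponds to one MapReduce/Spark iteration; the intermediate data the abstract refers to is the `derived` set shuffled between the join and incremental steps.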


2016 ◽  
Author(s):  
Σοφία Κλεισαρχάκη

Variability in Big Data refers to data whose meaning changes continuously. For instance, data derived from social platforms and from monitoring applications exhibits great variability. This variability is essentially the result of changes in the underlying data distributions of attributes of interest, such as user opinions/ratings, computer network measurements, etc. Difference Analysis aims to study variability in Big Data. To achieve that goal, data scientists need: (a) measures to compare data in various dimensions such as age for users or topic for network traffic, and (b) efficient algorithms to detect changes in massive data. In this thesis, we identify and study three novel analytical tasks to capture data variability: Difference Exploration, Difference Explanation and Difference Evolution. Difference Exploration is concerned with extracting the opinion of different user segments (e.g., on a movie rating website). We propose appropriate measures for comparing user opinions in the form of rating distributions, and efficient algorithms that, given an opinion of interest in the form of a rating histogram, discover agreeing and disagreeing populations. Difference Explanation tackles the question of providing a succinct explanation of differences between two datasets of interest (e.g., buying habits of two sets of customers). We propose scoring functions designed to rank explanations, and algorithms that guarantee explanation conciseness and informativeness. Finally, Difference Evolution tracks change in an input dataset over time and summarizes change at multiple time granularities. We propose a query-based approach that uses similarity measures to compare consecutive clusters over time. Our indexes and algorithms for Difference Evolution are designed to capture different data arrival rates (e.g., low, high) and different types of change (e.g., sudden, incremental).
The utility and scalability of all our algorithms rely on hierarchies inherent in data (e.g., time, demographic). We run extensive experiments on real and synthetic datasets to validate the usefulness of the three analytical tasks and the scalability of our algorithms. We show that Difference Exploration guides end-users and data scientists in uncovering the opinion of different user segments in a scalable way. Difference Explanation reveals the need to parsimoniously summarize differences between two datasets and shows that parsimony can be achieved by exploiting hierarchy in data. Finally, our study on Difference Evolution provides strong evidence that a query-based approach is well-suited to tracking change in datasets with varying arrival rates and at multiple time granularities. We also show that different clustering approaches can be used to capture different types of change.
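One concrete way to compare rating distributions, as Difference Exploration requires, is a one-dimensional earth mover's distance over rating histograms. Both the measure and the segment data below are illustrative assumptions for the sketch, not necessarily the specific measures proposed in the thesis:

```python
def emd_1d(p, q):
    """1-D earth mover's distance between two rating histograms
    over the same ordered bins (e.g. 1-5 stars): normalize both,
    then sum the absolute differences of the running CDFs."""
    assert len(p) == len(q)
    total_p, total_q = sum(p), sum(q)
    p = [x / total_p for x in p]
    q = [x / total_q for x in q]
    cdf_diff, dist = 0.0, 0.0
    for pi, qi in zip(p, q):
        cdf_diff += pi - qi
        dist += abs(cdf_diff)
    return dist

# Hypothetical user segments: counts of 1..5-star ratings
segment_a = [5, 10, 20, 40, 25]   # skews positive
segment_b = [30, 25, 20, 15, 10]  # skews negative
```

Given an opinion of interest as a histogram, ranking segments by such a distance separates agreeing populations (small distance) from disagreeing ones (large distance).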


2020 ◽  
Author(s):  
Christa Kelleher ◽  
Anna Braswell

In the age of big data, hydrologic studies contain more sites, longer and more highly resolved simulated and observed time series, and finer-resolution spatial data than ever before. This growth in the capability to collect and generate data represents a tremendous opportunity for hydrologic science, but it challenges the creation and presentation of figures that summarize this information in succinct, interpretable, and meaningful ways. To address this challenge, this presentation reviews several plotting approaches from across the literature focused on the synthesis of large hydrologic and environmental datasets. We highlight plots that can be used to visualize multi-dimensional spatial and temporal modeling and observational data, to synthesize patterns, to highlight outliers, and above all to convey key messages. Building on these different types of plots, we highlight a set of best practices for how we as a community can create effective visualizations that synthesize large datasets of a variety of types in scientific presentations and publications.


2019 ◽  
Author(s):  
Vince Polito ◽  
Amanda Barnier ◽  
Erik Woody

Building on Hilgard’s (1965) classic work, the domain of hypnosis has been conceptualised by Barnier, Dienes, and Mitchell (2008) as comprising three levels: (1) classic hypnotic items, (2) responding between and within items, and (3) state and trait. The current experiment investigates sense of agency across each of these three levels. Forty-six high hypnotisable participants completed an ideomotor (arm levitation), a challenge (arm rigidity) and a cognitive (anosmia) item either following a hypnotic induction (hypnosis condition) or without a hypnotic induction (wake condition). In a postexperimental inquiry, participants rated their feelings of control at three time points for each item: during the suggestion, test and cancellation phases. They also completed the Sense of Agency Rating Scale (Polito, Barnier, & Woody, 2013) for each item. Pass rates, control ratings, and agency scores fluctuated across the different types of items and for the three phases of each item; also, control ratings and agency scores often differed across participants who passed versus failed each item. Interestingly, whereas a hypnotic induction influenced the likelihood of passing items, it had no direct effect on agentive experiences. These results suggest that altered sense of agency is not a unidimensional or static quality “switched on” by hypnotic induction, but a dynamic multidimensional construct that varies across items, over time and according to whether individuals pass or fail suggestions.


Author(s):  
Konrad Huber

The chapter first surveys different types of figurative speech in Revelation, including simile, metaphor, symbol, and narrative image. Second, it considers the way images are interrelated in the narrative world of the book. Third, it notes how the images draw associations from various backgrounds, including biblical and later Jewish sources, Greco-Roman myths, and the imperial cult, and how this enriches the understanding of the text. Fourth, the chapter looks at the rhetorical impact of the imagery on readers and stresses in particular its evocative, persuasive, and parenetic function together with its emotional effect. And fifth, it looks briefly at the way reception history shows how the imagery has engaged readers over time. Thus, illustrated by numerous examples, it becomes clear how essentially the imagery of the book of Revelation constitutes and determines its theological message.


2021 ◽  
pp. 135910452199970
Author(s):  
Naomi Gibbons ◽  
Emma Harrison ◽  
Paul Stallard

Background: There is increased emphasis on the national reporting of Routine Outcome Measures (ROMS) as a way of improving Child and Adolescent Mental Health Services (CAMHS). This data needs to be viewed in context so that reasons for outcome completion rates are understood and monitored over time. Method: We undertook an in-depth prospective audit of consecutive referrals accepted into the Bath and North East Somerset, Swindon and Wiltshire (BSW) CAMHS service from November 2017 to January 2018 ( n = 1074) and April to September 2019 ( n = 1172). Results: Across both audits 90% of those offered an appointment were seen with three quarters completing baseline ROMS. One in three were not seen again with around 30% still being open to the service at the end of each audit. Of those closed to the service, paired ROMS were obtained for 46% to 60% of cases. There were few changes in referral problems or complexity factors over time. Conclusion: Understanding the referral journey and the reasons for attrition will help to put nationally collected data in context and can inform and monitor service transformation over time.


Entropy ◽  
2021 ◽  
Vol 23 (5) ◽  
pp. 510
Author(s):  
Taiyong Li ◽  
Duzhong Zhang

Image security is a hot topic in the era of the Internet and big data. Hyperchaotic image encryption, which can effectively prevent unauthorized users from accessing image content, has become more and more popular in the image security community. In general, such approaches conduct encryption on pixel-level, bit-level, or DNA-level data, or their combinations, which limits the diversity of processed data levels and thus the achievable security. This paper proposes a novel hyperchaotic image encryption scheme via multiple bit permutation and diffusion, namely MBPD, to cope with this issue. Specifically, a four-dimensional hyperchaotic system with three positive Lyapunov exponents is first proposed. Second, a hyperchaotic sequence is generated from the proposed hyperchaotic system for the subsequent encryption operations. Third, multiple bit permutation and diffusion (permutation and/or diffusion can be conducted with 1–8 or more bits), determined by the hyperchaotic sequence, is designed. Finally, the proposed MBPD is applied to image encryption. We conduct extensive experiments on a couple of public test images to validate the proposed MBPD. The results verify that MBPD can effectively resist different types of attacks and has better performance than the compared popular encryption methods.
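A minimal sketch of keystream-driven bit permutation followed by diffusion is shown below. It substitutes a logistic map for the paper's four-dimensional hyperchaotic system (whose equations are not given in this abstract), and the rotation-based permutation, fixed IV, and parameters are illustrative assumptions rather than the MBPD design:

```python
def logistic_stream(x0, n, r=3.99):
    """Stand-in chaotic keystream (logistic map); MBPD instead
    derives its keystream from a 4-D hyperchaotic system."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(int(x * 255) & 0xFF)
    return xs

def encrypt(data, x0=0.4137):
    ks = logistic_stream(x0, len(data))
    out, prev = [], 0x5A  # illustrative fixed IV for the chain
    for byte, k in zip(data, ks):
        rot = k % 8
        # Bit-level permutation: keystream-driven left rotation
        permuted = ((byte << rot) | (byte >> (8 - rot))) & 0xFF
        # Diffusion: chain each ciphertext byte into the next
        prev = permuted ^ k ^ prev
        out.append(prev)
    return bytes(out)

def decrypt(cipher, x0=0.4137):
    ks = logistic_stream(x0, len(cipher))
    out, prev = [], 0x5A
    for c, k in zip(cipher, ks):
        permuted = c ^ k ^ prev       # undo diffusion
        rot = k % 8
        byte = ((permuted >> rot) | (permuted << (8 - rot))) & 0xFF
        out.append(byte)
        prev = c
    return bytes(out)
```

Here both the rotation amount and the diffusion mask depend on the keystream, so a different seed `x0` (the key) yields an entirely different permutation-diffusion sequence; MBPD generalizes this by permuting and diffusing groups of 1–8 or more bits at a time.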

