Twitter content classification

Author(s):  
Stephen Dann

This paper delivers a new Twitter content classification framework based on sixteen existing Twitter studies and a grounded theory analysis of a personal Twitter history. It expands the existing understanding of Twitter as a multifunction tool for personal, professional, commercial, and phatic communications with a split-level classification scheme that offers broad categorization and specific subcategories for deeper insight into the real-world application of the service.
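To make the split-level idea concrete, here is a minimal Python sketch of how such a two-tier scheme might be represented and applied. The category and subcategory labels and the keyword rules below are invented placeholders for illustration, not Dann's actual coding frame.

```python
# Illustrative sketch of a split-level tweet classification scheme.
# Labels and rules are simplified placeholders, not Dann's coding frame.

TAXONOMY = {
    "conversational": ["query", "response", "referral"],
    "status":         ["personal", "work", "location"],
    "pass-along":     ["retweet", "endorsement"],
    "news":           ["headline", "event", "sport"],
    "phatic":         ["greeting", "small-talk"],
    "spam":           ["advertising", "automated"],
}

# Toy keyword rules standing in for a human coder's judgment.
RULES = [
    ("RT @", ("pass-along", "retweet")),
    ("?",    ("conversational", "query")),
    ("http", ("news", "headline")),
]

def classify(tweet: str) -> tuple[str, str]:
    """Return a (broad category, subcategory) pair for a tweet."""
    for marker, label in RULES:
        if marker in tweet:
            return label
    return ("status", "personal")  # default bucket

if __name__ == "__main__":
    print(classify("RT @someone: great read"))   # ('pass-along', 'retweet')
    print(classify("Anyone know a good cafe?"))  # ('conversational', 'query')
```

The point of the two-tier structure is that the broad level supports quick cross-study comparison while the subcategory level preserves the finer distinctions a grounded analysis surfaces.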

Author(s):  
Peter Rich

Qualitative research methods have long set an example of rich description, in which data and researchers’ hermeneutics work together to inform readers of findings in specific contexts. Among published works, insight into the analytical process is most often represented in the form of methodological propositions or research results. This paper presents a third type of qualitative report, one in which the researcher’s process of coding, finding themes, and arriving at findings is the focus. Grounded theory analysis methods were applied to the interpretation of a single interview. The resulting document provides a narrative of the process one researcher followed when attempting to apply recommended methodological procedures to a single interview, providing a peek inside the black box of analysis often left unopened in final reports.


Author(s):  
León Illanes ◽  
Sheila A. McIlraith

The real-world application of planning techniques often requires models with numeric fluents. However, these fluents are not directly supported by most planners and heuristics. We describe a family of planning algorithms that takes a numeric planning problem and produces an abstracted representation that can be solved using any classical planner. The resulting abstract plan is generalized into a policy and then used to guide the search in the original numeric domain. We prove that our approach is sound, and we evaluate it on a set of standard benchmarks. We show that it can provide competitive performance when compared to other well-known algorithms for numeric planning, and a significant performance improvement in certain domains.
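A minimal sketch of this abstract-then-guide loop, on a toy numeric task, might look as follows in Python. The abstraction, the BFS stand-in for a classical planner, and the policy extraction are all illustrative assumptions, not the paper's constructions.

```python
# A minimal sketch of the abstract-then-guide pipeline on a toy numeric task:
# raise an integer fluent x from 0 to at least 10. The abstraction and policy
# extraction below are illustrative stand-ins, not the paper's constructions.

from collections import deque

ACTIONS = {"inc": 1, "big_inc": 3}          # action name -> numeric effect on x
GOAL, TOP_LEVEL = 10, 4

def abstract(x: int) -> int:
    """Collapse the numeric fluent into a few qualitative levels (0..4)."""
    return min(x // 3, TOP_LEVEL)

def abstract_successor(level: int, delta: int) -> int:
    """Apply an action to a representative concrete value, then re-abstract."""
    return abstract(level * 3 + delta)

def solve_abstract(start: int) -> list[tuple[int, str]]:
    """BFS over abstract states, standing in for a call to a classical planner."""
    frontier, seen = deque([(abstract(start), [])]), {abstract(start)}
    while frontier:
        level, plan = frontier.popleft()
        if level == TOP_LEVEL:               # abstract goal region
            return plan
        for name, delta in ACTIONS.items():
            nxt = abstract_successor(level, delta)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [(level, name)]))
    raise ValueError("abstract problem unsolvable")

def run(start: int) -> list[str]:
    """Generalize the abstract plan into a policy, then follow it concretely."""
    policy = dict(solve_abstract(start))     # abstract level -> suggested action
    x, trace = start, []
    while x < GOAL:
        action = policy.get(abstract(x), "inc")  # fall back if level unmapped
        x += ACTIONS[action]
        trace.append(action)
    return trace

if __name__ == "__main__":
    print(run(0))   # ['big_inc', 'big_inc', 'big_inc', 'big_inc']
```

The generalization step is what makes the abstract solution reusable: rather than replaying a fixed action sequence, the concrete search consults the policy at whatever numeric state it actually reaches.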


Author(s):  
Tony T. Tran ◽  
Tiago Vaquero ◽  
Goldie Nejat ◽  
J. Christopher Beck

We investigate Constraint Programming and Planning Domain Definition Language-based technologies for planning and scheduling multiple robots in a retirement home environment to assist elderly residents. Our robotics problem and investigation into proposed solution approaches provide a real-world application of planning and scheduling, while highlighting the different modeling assumptions required to solve such a problem. This information is valuable to the planning and scheduling community as it provides insight into potential application avenues, in particular for robotics problems. Based on empirical results, we conclude that a constraint-based scheduling approach, specifically a decomposition using constraint programming, provides the most promising results for our application.
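For a flavor of what a constraint-based scheduling model looks like, the sketch below schedules three invented resident-assistance tasks on two robots using Google OR-Tools CP-SAT. The tasks, durations, and horizon are hypothetical, and the paper's models are considerably richer.

```python
# Toy constraint-based schedule: two robots, three invented assistance tasks,
# sketched with Google OR-Tools CP-SAT. Not the paper's actual model.
from ortools.sat.python import cp_model

TASKS = {"telepresence": 20, "bingo": 45, "reminder": 5}   # name -> minutes
ROBOTS, HORIZON = 2, 120

model = cp_model.CpModel()
starts, assign = {}, {}
intervals = {r: [] for r in range(ROBOTS)}

for name, dur in TASKS.items():
    starts[name] = model.NewIntVar(0, HORIZON - dur, f"start_{name}")
    literals = []
    for r in range(ROBOTS):
        lit = model.NewBoolVar(f"{name}_on_{r}")
        literals.append(lit)
        # Optional interval: occupies robot r only if the task is assigned to it.
        intervals[r].append(model.NewOptionalIntervalVar(
            starts[name], dur, starts[name] + dur, lit, f"iv_{name}_{r}"))
    model.AddExactlyOne(literals)        # each task runs on exactly one robot
    assign[name] = literals

for r in range(ROBOTS):
    model.AddNoOverlap(intervals[r])     # a robot does one task at a time

# Minimize the time at which the last task finishes (makespan).
makespan = model.NewIntVar(0, HORIZON, "makespan")
for name, dur in TASKS.items():
    model.Add(makespan >= starts[name] + dur)
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for name in TASKS:
        robot = next(r for r, lit in enumerate(assign[name])
                     if solver.Value(lit))
        print(f"{name}: robot {robot}, start t={solver.Value(starts[name])}")
    print("makespan:", solver.Value(makespan))
```

A decomposition in the paper's spirit would split such a model into stages (e.g., assignment first, then sequencing), letting each subproblem stay small enough to solve quickly.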


2021 ◽  
Vol 5 (OOPSLA) ◽  
pp. 1-30
Author(s):  
Justin Lubin ◽  
Sarah E. Chasins

How working statically-typed functional programmers write code is largely understudied. And yet, a better understanding of developer practices could pave the way for the design of more useful and usable tooling, more ergonomic languages, and more effective on-ramps into programming communities. The goal of this work is to address this knowledge gap: to better understand the high-level authoring patterns that statically-typed functional programmers employ. We conducted a grounded theory analysis of 30 programming sessions of practicing statically-typed functional programmers, 15 of which also included a semi-structured interview. The theory we developed gives insight into how the specific affordances of statically-typed functional programming affect domain modeling, type construction, focusing techniques, exploratory and reasoning strategies, and expressions of intent. We conducted a set of quantitative lab experiments to validate our findings, including that statically-typed functional programmers often iterate between editing types and expressions, that they often run their compiler on code even when they know it will not successfully compile, and that they make textual program edits that reliably signal future edits that they intend to make. Lastly, we outline the implications of our findings for language and tool design. The success of this approach in revealing program authorship patterns suggests that the same methodology could be used to study other understudied programmer populations.


2020 ◽  
Vol 7 (6) ◽  
pp. 911-914
Author(s):  
Allyson S Hughes ◽  
Jeoffrey Bispham ◽  
Ludi Fan ◽  
Magaly Nieves-Perez ◽  
Alicia H McAuliffe-Fogarty

Limited research exists regarding the burdens associated with type 1 diabetes (T1D). The study’s objective was to understand the impact of T1D from the perspectives of people with T1D and caregivers of minors with T1D. Six focus groups were conducted, with a total of 31 participants. Participants included people with T1D, ages 23 to 72 (n = 17), and caregivers, ages 34 to 55 (n = 14). Participants were recruited from T1D Exchange Glu. People with T1D reported that time spent managing diabetes had the greatest impact, while caregivers reported financial and employment sacrifices as most impactful. Our findings provide insight into the real-world daily impact of diabetes.


Author(s):  
Gopala Krishna Behara

This chapter covers the essentials of big data analytics ecosystems, primarily from a business and technology context. It delivers insight into key concepts and terminology that define the essence of big data and the promise it holds for delivering sophisticated business insights. The characteristics that distinguish big data datasets are articulated. The chapter also describes a conceptual and logical reference architecture for managing the huge volumes of data generated by an enterprise's various data sources, and covers the drivers, opportunities, and benefits of real-world big data analytics implementations.


2018 ◽  
Vol 24 (4) ◽  
pp. 523-549 ◽  
Author(s):  
BO LI ◽  
ERIC GAUSSIER ◽  
DAN YANG

Comparable corpora serve as an important substitute for parallel resources for under-resourced language pairs. Previous work has mostly aimed to find better strategies for exploiting existing comparable corpora, while ignoring variation in corpus quality. The quality of a comparable corpus strongly affects its usability in practice, a fact confirmed by several studies. However, researchers have not established a widely accepted and fully validated framework for measuring corpus quality. In this paper, we therefore investigate a comprehensive methodology for assessing the quality of comparable corpora. Specifically, we propose several comparability measures and a quantitative strategy for testing them. Our experiments show that the proposed comparability measure captures gold-standard comparability levels very well and is robust to the bilingual dictionary used. Moreover, we show in the task of bilingual lexicon extraction that the proposed measure correlates well with the performance of the real-world application.
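To illustrate the general shape of a dictionary-based comparability measure (the family this paper works in), here is a toy Python sketch that scores two corpora by the share of vocabulary with translations attested on the other side. The dictionary and corpora are invented for illustration; the paper's measures are more refined.

```python
# A minimal sketch of a bilingual-dictionary-based comparability measure:
# the share of one corpus's vocabulary whose translations appear in the
# other corpus, averaged over both directions. Toy data, invented here.

TOY_DICT = {          # source word -> possible target translations
    "house": {"maison"},
    "water": {"eau"},
    "city":  {"ville", "cité"},
}
REVERSE = {t: {s} for s, ts in TOY_DICT.items() for t in ts}

def coverage(vocab: set[str], other_vocab: set[str], dictionary: dict) -> float:
    """Fraction of dictionary-known words with a translation in the other corpus."""
    known = [w for w in vocab if w in dictionary]
    if not known:
        return 0.0
    hits = sum(1 for w in known if dictionary[w] & other_vocab)
    return hits / len(known)

def comparability(corpus_en: str, corpus_fr: str) -> float:
    """Symmetrized comparability score in [0, 1]."""
    en = set(corpus_en.lower().split())
    fr = set(corpus_fr.lower().split())
    return 0.5 * (coverage(en, fr, TOY_DICT) + coverage(fr, en, REVERSE))

if __name__ == "__main__":
    print(comparability("the house near the water", "la maison près du lac"))
```

Because the score depends only on translation coverage, its robustness to the particular bilingual dictionary used, as the abstract reports, is the property that makes such a measure practical.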

