Vague Language
Recently Published Documents


TOTAL DOCUMENTS: 103 (five years: 41)
H-INDEX: 11 (five years: 1)

2021 · Vol 14 (2) · pp. 185-202
Author(s): Jaufillaili Jaufillaili, Riska Nurmalita, Endang Herawan

This paper presents an analysis of the categories and functions of vague language in disaster news articles on Thejakartapost.com, based on Channell's (1994) theory. In journalism, and especially in disaster reporting, exact information is often unavailable, and imprecise statements can be harmful. To avoid making wrong claims, reporters therefore use vague language to present information accurately. The study employed a descriptive qualitative method. The data consisted of 24 news articles published between April 2018 and March 2019: 12 on natural disasters and 12 on human-caused disasters. The findings showed three categories of vague language: vague additives to numbers, realized by approximators and adjectives; vagueness in the choice of vague words, realized by nouns; and vagueness by scalar implicature, realized by quantifiers, numbers, and exaggerations. Three functions of vague language were also identified. First, giving the right amount of information: reporters shared approximate figures when exact numbers were not available. Second, filling lexical gaps of uncertainty: reporters covered imprecise information with more general words when the precise term was difficult to identify. Finally, self-protection: reporters hedged their statements to protect themselves from the consequences of imprecise information.
Keywords: vague language, categories, news articles, disasters, implicature


2021 · Vol 18 (04)
Author(s): Spencer Andrews, Cara DeAngelis, Somayeh Hooshmand, Neysha Martinez-Orengo, Melissa Zajdel

The state of Maryland has consistently ranked among the top states for opioid-involved overdose deaths. Emergency rooms in Maryland have been overrun with patients struggling with opioid use disorder (OUD). While hospitals are heavily burdened, it has become clear that they serve as a critical entry point for OUD prevention programs. Despite this, when section 19-310 of the Maryland Heroin and Opioid Prevention Effort (HOPE) and Treatment Act of 2017 passed, it included vague language requiring hospitals to create their own discharge protocols for such patients rather than putting statewide mandates into place. We propose two alternative solutions. First, the Maryland General Assembly can amend the HOPE and Treatment Act of 2017 to mandate that peer recovery services be made available during inpatient care, within the emergency department, and post-discharge for patients presenting with OUD. Second, we recommend adding a subtitle describing how to establish and operate mobile clinic treatment programs. The former amendment would offer a prompt solution that could reduce opioid-related hospitalizations and deaths in the state. It would also help reach underrepresented populations, who are the least likely to access peer recovery support and other health services in response to OUD.


Author(s): Mark Amsler

This chapter continues the previous analysis of heretics’ speech from the perspective of Conversation Analysis. Bakhtin’s theory of dialogism sets Kempe’s pragmatic thinking in a sociolinguistic frame. The narrative of her examinations at the Archbishop of York’s court suggests that people’s thinking about how language defines, expresses, controls, and resists also informed how they pragmatically and metapragmatically constructed their speech for social survival, subjective authority, or agency in asymmetric or hostile interactions. Medieval grammarians’ and logicians’ concerns with reference and equivocatio (ambiguity, polysemy, vagueness) were reinterpreted in controversies about how heretics and nonconformists talk in hostile institutional situations. Kempe’s sophisticated use of evasive, vague, hedged, and recontextualized speech and situational pragmatics proves more than a match for the Archbishop and his clerks.


2021 · Vol 3
Author(s): Yannick Frommherz, Alessandra Zarcone

Despite their increasing success, user interactions with smart speech assistants (SAs) are still very limited compared to human-human dialogue. One way to make SA interactions more natural is to train the underlying natural language processing modules on data which reflects how humans would talk to an SA if it were capable of understanding and producing natural dialogue given a specific task. Such data can be collected using a Wizard-of-Oz (WOz) approach, in which both the user side and the system side are played by humans. WOz allows researchers to simulate human-machine interaction while benefiting from the fact that all participants are human and thus dialogue-competent. More recent approaches have leveraged simple templates specifying a dialogue scenario for crowdsourcing large-scale datasets. Template-based collection efforts, however, come at the cost of data diversity and naturalness. We present a method to crowdsource dialogue data for the SA domain in the WOz framework, which aims at limiting researcher-induced bias in the data while still allowing for a low-resource, scalable data collection. Our method can also be applied to languages other than English (in our case German), for which fewer crowd-workers may be available. We collected data asynchronously, relying only on existing functionalities of Amazon Mechanical Turk, by formulating the task as a dialogue continuation task. Coherence in dialogues is ensured, as crowd-workers always read the dialogue history, and as a unifying scenario is provided for each dialogue. To limit bias in the data, rather than using template-based scenarios, we handcrafted situated scenarios which aimed at not pre-scripting the task in every detail and not priming the participants' lexical choices. Our scenarios cued people's knowledge of common situations and entities relevant for our task without mentioning them directly, relying instead on vague language and circumlocutions.
We compare our data (which we publish as the CROWDSS corpus; n = 113 dialogues) with data from MultiWOZ, showing that our scenario approach led to considerably less scripting and priming and thus to more ecologically valid dialogue data. This suggests that small investments in the collection setup can go a long way toward improving data quality, even in a low-resource setup.
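The dialogue continuation setup described in this abstract can be illustrated with a minimal sketch: each crowdsourcing task shows a worker the situated scenario plus the dialogue so far, and asks for the next turn in whichever role's turn it is. All names, the scenario text, and the turn-alternation scheme below are illustrative assumptions, not details taken from the CROWDSS corpus or its actual Mechanical Turk setup.

```python
def build_task(scenario, history):
    """Assemble the text shown to a crowd-worker for one continuation step.

    Assumes strictly alternating roles: the user speaks first, then the
    assistant, and so on. `history` is a list of previous turns (strings).
    """
    next_role = "user" if len(history) % 2 == 0 else "assistant"
    lines = [f"Scenario: {scenario}", ""]
    for i, turn in enumerate(history):
        role = "user" if i % 2 == 0 else "assistant"
        lines.append(f"{role}: {turn}")
    lines.append(f"Write the next {next_role} turn:")
    return next_role, "\n".join(lines)


# Example of a situated scenario that cues the task (asking about the
# weather) without pre-scripting it or priming specific words.
scenario = ("You are getting ready to leave the house and are unsure "
            "whether you will need an umbrella.")
history = ["Hey, what's it like outside today?"]

role, prompt = build_task(scenario, history)
print(role)    # assistant
print(prompt)
```

Because every worker sees the full history and the same unifying scenario, successive continuations stay coherent even though the dialogue is assembled asynchronously from contributions by different workers.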

