GENDER PECULIARITIES OF USAGE OF CITATION-APHORISTIC PHRASEOLOGICAL UNITS IN GERMAN NEWSPAPER TEXT: GENDER ASPECT (ON THE MATERIAL OF GERMAN NEWSPAPER ARTICLES OF “DIE ZEIT” AND “SÜDDEUTSCHE ZEITUNG”)

2020 · Vol 1 (46) · pp. 183-186
Author(s): I. Kolomyis’ka

2020 · Vol 6 (4) · pp. 75-81
Author(s): I. S. Kashenkova

The article highlights some features of German newspaper texts and the related difficulties in translating them into Russian. The topicality of this study is explained by various linguistic and extralinguistic factors. The newspaper text should provide information in an understandable and accessible language form. However, in an effort to produce a greater effect on the reader, as well as to fit the maximum amount of information into a compressed form, journalists resort to special techniques to influence the reader, which, in particular, include the use of composites. The main purpose of the article is to resolve some of the difficulties of translating composites in German newspaper texts caused by implicit evaluation or background information (Trümmerfrau, Teflonkanzlerin, N-Wort), which requires additional “decryption” of the overall semantic meaning of the composite. To this end, the article analyzes the specifics of the German newspaper text and establishes the most active types of composites in German newspaper texts. The high frequency of complex nouns in German newspaper texts is attributed to the principle of linguistic economy, which is intrinsic to the German communicative style, and to German word formation. Options for translating particularly difficult composites into Russian are offered. Attention is drawn to conveying the cultural specifics of the original when lexico-grammatical transformations are used to preserve its semantic load and to achieve an adequate translation. It is concluded that the cultural specifics of composites in German newspaper texts must be taken into account when translating them into Russian.
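As a rough illustration of why composites need “decryption” before translation, the sketch below splits a compound into known constituents with a greedy dictionary lookup. The mini-lexicon and the splitting strategy are invented for this example and are not the method of the article; real splitters use full German lexica plus linking elements (-s-, -n-, etc.).

```python
# Hypothetical mini-lexicon of known constituents.
lexicon = {"teflon", "kanzlerin", "trümmer", "frau", "wort"}

def split_compound(word, lexicon):
    """Greedy left-to-right split of a compound into known constituents."""
    word = word.lower()
    parts, start = [], 0
    while start < len(word):
        # take the longest known substring starting at the current position
        for end in range(len(word), start, -1):
            if word[start:end] in lexicon:
                parts.append(word[start:end])
                start = end
                break
        else:
            return None  # unanalysable remainder
    return parts

print(split_compound("Teflonkanzlerin", lexicon))  # ['teflon', 'kanzlerin']
print(split_compound("Trümmerfrau", lexicon))      # ['trümmer', 'frau']
```

Splitting is only the first step: the translator still has to recover the implicit evaluation behind each constituent pair, which no lexicon lookup provides.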


Author(s): Marcus Müller, Sabine Bartsch, Jens O. Zinn

Abstract This paper presents an annotation approach to examine uncertainty in British and German newspaper articles on the coronavirus pandemic. We develop a tagset in an interdisciplinary team from corpus linguistics and sociology. After working out a gold standard on a pilot corpus, we apply the annotation to the entire corpus drawing on an “annotation-by-query” approach in CQPWeb, based on uncertainty constructions that have been extracted from the gold standard data. The annotated data are then evaluated and sociologically contextualised. On this basis, we study the development of uncertainty markers in the period under study and compare media discourses in Germany and the UK. Our findings reflect the different courses of the pandemic in Germany and the UK as well as the different political responses, media traditions and cultural concerns: While markers of fear are more important in British discourse, we see a steadily increasing level of disagreement in German discourse. Other forms of uncertainty such as ‘possibility’ or ‘probability’ are similarly frequent in both discourses.
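The “annotation-by-query” idea, tagging every corpus match of a construction extracted from the gold standard, can be sketched in a few lines. The sentences, tagset labels, and regular expressions below are invented stand-ins for the CQPWeb queries used in the study.

```python
import re

# Hypothetical mini-corpus; the study works on full British and German
# newspaper corpora in CQPWeb.
corpus = [
    "The vaccine might reduce transmission, experts say.",
    "It is possible that restrictions will be extended.",
    "Cases fell sharply last week.",
]

# Uncertainty constructions (extracted from a hypothetical gold standard),
# each mapped to a tag from the tagset.
queries = {
    "possibility": re.compile(r"\b(might|may|possible|possibly)\b", re.I),
    "probability": re.compile(r"\b(probably|likely)\b", re.I),
}

def annotate_by_query(sentences, queries):
    """Tag every sentence that matches an uncertainty construction."""
    annotated = []
    for sent in sentences:
        tags = [tag for tag, pat in queries.items() if pat.search(sent)]
        annotated.append((sent, tags))
    return annotated

for sent, tags in annotate_by_query(corpus, queries):
    print(tags, "-", sent)
```

The payoff of the approach is scale: once the constructions are validated against the gold standard, the same queries annotate the entire corpus consistently, and the resulting counts can be tracked over time and compared across the two discourses.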


Author(s): Valerie Hase

Actors in coverage might be individuals, groups, or organizations that are discussed, described, or quoted in the news. The datasets referred to in the table are described in the following paragraph: Benoit and Matsuo (2020) use fictional sentences (N = 5) to demonstrate how named entities and noun phrases can be identified automatically. Lind and Meltzer (2020) demonstrate the use of organic dictionaries to identify actors in German newspaper articles (2013-2017, N = 348,785). Puschmann (2019) uses four data sets to demonstrate how sentiment/tone may be analyzed by the computer. Using tweets (2016, N = 18,826), German newspaper articles (2011-2016, N = 377), Swiss newspaper articles (2007-2012, N = 21,280), and debate transcripts (1970-2017, N = 7,897), he extracts nouns and named entities from text. Lastly, Wiedemann and Niekler (2017) extract proper nouns from State of the Union speeches (1790-2017, N = 233).

Field of application/theoretical foundation: Related to theories of “Agenda Setting” and “Framing”, analyses might ask how much weight is given to a specific actor, how these actors are evaluated, and how prominently they bring particular perspectives and frames into the discussion.

References/combination with other methods of data collection: Oftentimes, studies use both manual and automated content analysis to identify actors in text. Combining the two can extend the lists of actors that are found as well as validate the automated analyses. For example, Lind and Meltzer (2020) combine manual coding and dictionaries to identify the salience of women in the news.

Table 1. Measurement of “Actors” using automated content analysis.
Author(s): Benoit & Matsuo (2020)
Sample: Fictional sentences
Procedure: Part-of-Speech tagging; syntactic parsing
Formal validity check with manual coding as benchmark*: Not reported
Code: https://cran.r-project.org/web/packages/spacyr/vignettes/using_spacyr.html

Author(s): Lind & Meltzer (2020)
Sample: Newspapers
Procedure: Dictionary approach
Formal validity check with manual coding as benchmark*: Reported
Code: https://osf.io/yqbcj/?view_only=369e2004172b43bb91a39b536970e50b

Author(s): Puschmann (2019)
Sample: (a) Tweets; (b) German newspaper articles; (c) Swiss newspaper articles; (d) United Nations General Debate transcripts
Procedure: Part-of-Speech tagging; syntactic parsing
Formal validity check with manual coding as benchmark*: Not reported
Code: http://inhaltsanalyse-mit-r.de/ner.html

Author(s): Wiedemann & Niekler (2017)
Sample: State of the Union speeches
Procedure: Part-of-Speech tagging
Formal validity check with manual coding as benchmark*: Not reported
Code: https://tm4ss.github.io/docs/Tutorial_8_NER_POS.html

*Please note that many of the sources listed here are tutorials on how to conduct automated analyses and are therefore not focused on the validation of results. Readers should read this column as an indication of which sources they can refer to if they are interested in the validation of results.

References
Benoit, K., & Matsuo, A. (2020). A Guide to Using spacyr. Retrieved from https://cran.r-project.org/web/packages/spacyr/vignettes/using_spacyr.html
Lind, F., & Meltzer, C. E. (2020). Now you see me, now you don’t: Applying automated content analysis to track migrant women’s salience in German news. Feminist Media Studies, 1–18.
Puschmann, C. (2019). Automatisierte Inhaltsanalyse mit R [Automated content analysis with R]. Retrieved from http://inhaltsanalyse-mit-r.de/index.html
Wiedemann, G., & Niekler, A. (2017). Hands-on: A five day text mining course for humanists and social scientists in R. Proceedings of the 1st Workshop Teaching NLP for Digital Humanities (Teach4DH@GSCL 2017), Berlin. Retrieved from https://tm4ss.github.io/docs/index.html
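The dictionary approach in the table can be sketched as counting how many articles mention each actor under any of its surface forms. The actor dictionary and articles below are invented; the organic dictionaries of Lind and Meltzer (2020) are far larger and built semi-automatically.

```python
from collections import Counter

# Hypothetical actor dictionary: canonical actor -> surface-form variants.
actor_dict = {
    "Angela Merkel": ["angela merkel", "merkel", "the chancellor"],
    "SPD": ["spd", "social democrats"],
}

def actor_salience(articles, actor_dict):
    """Count how many articles mention each actor at least once."""
    counts = Counter()
    for text in articles:
        lower = text.lower()
        for actor, variants in actor_dict.items():
            if any(v in lower for v in variants):
                counts[actor] += 1
    return counts

articles = [
    "Merkel met SPD leaders on Tuesday.",
    "The chancellor announced new measures.",
    "Markets rallied after the report.",
]
print(actor_salience(articles, actor_dict))
```

Naive substring matching like this is prone to false positives (e.g. short acronyms inside longer words), which is one reason the table's validity column matters: dictionary output should be benchmarked against manual coding before substantive conclusions are drawn.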


2008 · Vol 1
Author(s): Aljoscha Burchardt, Sebastian Padó, Dennis Spohr, Anette Frank, Ulrich Heid

We present a general approach to formally modelling corpora with multi-layered annotation in a typed logical representation language, OWL DL. By defining abstractions over the corpus data, we can generalise from a large set of individual corpus annotations, thereby inducing a lexicon model. The resulting combined corpus and lexicon model can be interpreted as a graph structure that offers flexible querying functionality beyond current XML-based query languages. Its powerful methods for characterising and checking consistency can be used for incremental model refinement. In addition, the formalisation in a graph-based structure offers the means of defining flexible lexicon views over the corpus data. These views can be tailored for linguistic inspection or to define clean interfaces with other linguistic resources. We illustrate our approach by applying it to the syntactically and semantically annotated SALSA/TIGER corpus, a collection of German newspaper text.
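The core idea, multi-layered annotations as a graph of typed relations that can be queried flexibly and viewed lexicon-wise, can be sketched without OWL DL. The node and relation names below are invented, and a real model would use an OWL reasoner and consistency checking rather than plain pattern matching.

```python
# Minimal sketch: annotations stored as (subject, relation, object) triples.
triples = {
    ("sent1_tok3", "hasLemma", "geben"),
    ("sent1_tok3", "hasPOS", "VVFIN"),
    ("sent1_tok3", "evokesFrame", "Giving"),
    ("sent2_tok7", "hasLemma", "geben"),
    ("sent2_tok7", "evokesFrame", "Commitment"),
}

def query(triples, subj=None, rel=None, obj=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return sorted(t for t in triples
                  if (subj is None or t[0] == subj)
                  and (rel is None or t[1] == rel)
                  and (obj is None or t[2] == obj))

# Lexicon-style view over the corpus data: which frames does the lemma
# "geben" evoke across all its annotated tokens?
tokens = [s for s, _, _ in query(triples, rel="hasLemma", obj="geben")]
frames = sorted(o for tok in tokens
                for _, _, o in query(triples, subj=tok, rel="evokesFrame"))
print(frames)  # ['Commitment', 'Giving']
```

The point of the graph view is exactly this kind of cross-layer generalisation: the lexicon entry for a lemma is induced from all its corpus instances rather than written by hand.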

