Zentrum und Peripherie in der deutschen Syntax ("Centre and Periphery in German Syntax")

2021, Vol. 106 (1), pp. 115–147
Author(s): Anton Näf

The present paper is the first part of a larger essay which, due to space constraints, will be published in two separate parts. Using evidence from two small corpora, I develop a centre-periphery model inspired by prototype theory and apply it to the syntax of German. In doing so I proceed in two steps. In the first part, published here, the graded four-level model of linguistic variation presented below (with the categories "prototype", "variants", "competitive forms" and "free stylistic variation") is tested and refined on two already well-researched grammatical phenomena, namely conditionality and passive structures. In the second part (to be published in the next issue of Linguistik online) I apply the model to the more complex subject of sentence types in German, in particular the so-called minor sentence types. A complete description of a language should not only list the grammatical categories but also contain quantitative information on both the frequency of occurrence of a particular category and the position and relative share of this category in the field of its competing means of linguistic expression. A grammar of contemporary German that not only records the structures but also shows the "structures in use" in different domains and text types remains a desideratum.
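
The quantitative profile called for here can be made concrete with a small Python sketch (the construction labels and counts are invented for illustration, not drawn from the author's corpora): given corpus counts for the competing realizations of one functional field, each form's relative share falls out directly.

```python
from collections import Counter

# Hypothetical corpus counts for competing realizations of conditionality
# (labels and figures are invented for illustration, not taken from the corpora).
counts = Counter({
    "wenn-clause (prototype)": 412,
    "verb-first conditional (variant)": 57,
    "falls-clause (competitive form)": 31,
    "bei + NP (competitive form)": 19,
})

total = sum(counts.values())
for form, n in counts.most_common():
    print(f"{form:36s} {n:4d}  {n / total:6.1%}")
```

Sorted output of this kind gives exactly the "position and relative share" figures the abstract asks grammars to report.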

2021, Vol. 108 (3), pp. 67–114
Author(s): Anton Näf

In Linguistik online 2021, I presented a four-level centre-periphery model of German syntax (with the categories “prototype”, “variants”, “competitive forms” and “free stylistic variation”) and tested and refined it on the basis of two well-researched grammatical phenomena, conditionality and passive structures. In the present paper, this model is applied to the sentence types of German, with comparative side glances at English and French. The model proves to be particularly fruitful in the functional area of exclamation, where a great variety of forms can be observed. I argue here that scientific grammars should not only record the form inventory of sentence types but should supplement this with information on their frequency of occurrence, especially with key figures on the relative proportions of the individual structures in their functional field of competition, broken down by different communicative situations or text types. The motto for the grammar writing of the future should be: From the “structures” to the “structures in use”.


Author(s): Lita Lundquist

The work reported here explores a cognitive-communicative hypothesis of text typology: that text types defined on external communicative criteria also exhibit typical constellations of linguistic features text-internally. Inspired by Tversky's (1981) mathematical "contrast model of similarities", a French contract, a law and a judgment were analyzed with the computer program 'Cohérelle' and decomposed into sets of syntactic, semantic, pragmatic, and text-linguistic features. Subsequent computations showed that reliable similarities (in the linguistic expression of cognitive content) and differences (in the use of communicative grounding expressions) could in fact be distinguished among the linguistic features of the three text exemplars, thus permitting the postulation of different text types on text-internal linguistic grounds.
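
For orientation, Tversky's contrast model scores the similarity of two objects with feature sets A and B as s(A, B) = θ·f(A ∩ B) − α·f(A − B) − β·f(B − A). The following minimal sketch applies that formula to invented feature sets; it does not reproduce the 'Cohérelle' analysis, and the weights are arbitrary.

```python
def tversky_similarity(a, b, theta=1.0, alpha=0.5, beta=0.5):
    """Contrast model: weigh shared features against distinctive ones,
    with set cardinality standing in for the salience function f."""
    return (theta * len(a & b)
            - alpha * len(a - b)
            - beta * len(b - a))

# Invented stand-ins for the feature sets extracted from the three texts.
contract = {"nominalization", "passive", "deontic_modal", "performative"}
law      = {"nominalization", "passive", "deontic_modal", "generic_subject"}
judgment = {"passive", "past_tense", "connective", "performative"}

for x, y, label in [(contract, law, "contract~law"),
                    (contract, judgment, "contract~judgment"),
                    (law, judgment, "law~judgment")]:
    print(f"{label:18s} {tversky_similarity(x, y): .1f}")
```

On these toy sets the contract and the law come out most alike, which is the kind of text-internal grouping the study computes at scale.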


2011, Vol. 2 (2), pp. 153–167
Author(s): Kawai Chui

The present study investigates whether, and to what extent, motion-event gestures compensate for the omission of linguistic expression in Chinese discourse, and compares the findings across languages in order to probe language specificity versus language universality and the coordination of motion information across the two modalities. The Chinese conversational and narrative data consistently show no instances of manner fog (i.e., manner absent from speech but present in gesture). Chinese speakers also demonstrate a preference for compensation: gestures tend to supply the path content that is lacking in speech. These results differ from those for English and Turkish, where speakers do not favour path gestures in manner-only clauses. The cross-linguistic variation provides evidence for language specificity in gestural compensation. The language-specific coordination of information in speech and gesture suggests Chinese speakers' habitual focus of attention on PATH in multimodal communication.
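
The categories at issue can be made concrete with a toy tabulation (the coded clause-gesture pairs below are invented): manner fog is manner encoded in gesture but absent from speech, while path compensation is a path gesture accompanying a clause whose speech lacks path.

```python
# Each record codes which motion components are expressed in the speech of a
# clause and in its co-occurring gesture (data invented for illustration).
clauses = [
    {"speech": {"manner"},         "gesture": {"path"}},    # path compensation
    {"speech": {"manner", "path"}, "gesture": {"path"}},
    {"speech": {"manner"},         "gesture": {"path"}},    # path compensation
    {"speech": {"path"},           "gesture": {"manner"}},  # manner fog
]

manner_fog = sum("manner" in c["gesture"] and "manner" not in c["speech"]
                 for c in clauses)
path_compensation = sum("path" in c["gesture"] and "path" not in c["speech"]
                        for c in clauses)
print(f"manner fog: {manner_fog}, path compensation: {path_compensation}")
```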


Polymers, 2020, Vol. 12 (12), pp. 2917
Author(s): Thomas Gkourmpis, Geoffrey R. Mitchell

Scattering data for polymers in the non-crystalline state, i.e., the glassy state or the molten state, may appear to contain little information. In this work, we review recent developments in the use of scattering data to evaluate in a quantitative manner the molecular organization of such polymer systems. The focus is on the local structure of chain segments, on the details of the chain conformation and on the imprint the inherent chemical connectivity has on this structure. We show the value of tightly coupling the scattering data to atomistic-level computer models. We show how quantitative information about the details of the chain conformation can be obtained directly using a model built from relatively few parameters. We show how scattering measurements may be supplemented with data from specifically deuterated sites to recover information otherwise hidden in the signal. Finally, we show how the reverse Monte Carlo approach can use the data to drive the scattering calculated from a 3D atomistic-level model into convergence with the experimental data. We highlight the importance of the quality of the scattering data and the value of broad-Q scattering data obtained using neutrons. We illustrate these various methods with results drawn from a diverse range of polymers.
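
The reverse Monte Carlo step mentioned above can be sketched schematically as follows; this is a toy illustration of the general technique, not the authors' implementation, with a simplified Debye-formula scattering calculator and a Metropolis acceptance rule on the misfit chi-squared.

```python
import numpy as np

rng = np.random.default_rng(0)

def scattering(positions, q):
    """Toy isotropic scattering via the Debye formula for a monatomic model."""
    s = np.zeros_like(q)
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            r = np.linalg.norm(positions[i] - positions[j])
            s += 2 * np.sinc(q * r / np.pi)   # sin(q r)/(q r)
    return s

def chi_squared(s_model, s_exp, sigma=0.05):
    return float(np.sum((s_model - s_exp) ** 2)) / sigma**2

def rmc_step(positions, q, s_exp, step=0.1):
    """Move one atom at random; accept with the Metropolis rule on chi^2."""
    old = chi_squared(scattering(positions, q), s_exp)
    trial = positions.copy()
    trial[rng.integers(len(trial))] += rng.normal(scale=step, size=3)
    new = chi_squared(scattering(trial, q), s_exp)
    if new < old or rng.random() < np.exp(old - new):
        return trial, new
    return positions, old

# Stand-in "experiment": scattering from a reference configuration.
q = np.linspace(0.5, 10.0, 50)
s_exp = scattering(rng.normal(size=(20, 3)), q)
positions = rng.normal(size=(20, 3))
for _ in range(200):
    positions, chi2 = rmc_step(positions, q, s_exp)
print(f"final chi^2: {chi2:.1f}")
```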


2017, Vol. 10 (2), pp. 85–101
Author(s): Nelly Tincheva

The notion of text type emerged as far back as Aristotelian times, but it is still surrounded by considerable conceptual confusion. Due to the existence of multiple – and conflicting – viewpoints on the notion, the related term of text type is also characterized by multiple interpretations. Seeking to propose a means of overcoming the ambiguity surrounding text type, the present paper argues a case for the overt application of prototype theory to the notion and term. In accordance with basic postulates of prototypology, the suggestions put forward here are supported by results from a study involving actual users of the notion of text type. The study includes 28 linguists working in the field of text linguistics and discourse analysis. The general method adopted is cognitive, as it coheres with (and can even be argued to derive historically from) prototypology.


2020, Vol. 4 (1), pp. 73–94
Author(s): Jonathan R. Kasstan

The centrality of style is uncontested in sociolinguistics: it is an essential construct in the study of linguistic variation and change in the speech community. This is not the case in the language-obsolescence literature, where stylistic variation among endangered-language speakers is described as an ephemeral or "marginal" resource, and where speakers exhibiting "stylistic shrinkage" become "monostylistic". This argument is invoked in variationist theory too, where "monostylism" is presented as support for the tenets of Audience Design (Bell 1984). This article reports on a study that adopts variationist methods in a context of severe language endangerment. Evidence from two linguistic variables in Francoprovençal demonstrates the presence of socially meaningful stylistic variation among the last generation of fluent speakers, offering counter-evidence to classic claims. This evidence is used to argue that accounts of stylistic variation in language obsolescence are not sufficiently nuanced and should be reconsidered in light of recent research.


1965, Vol. 5, pp. 120–130
Author(s): T. S. Galkina

It is necessary to have quantitative estimates of the intensity of lines (both absorption and emission) to obtain the physical parameters of the atmospheres of the components. Some years ago at the Crimean Observatory we began the spectroscopic investigation of close binary systems of early spectral type with components WR, Of, O and B, in order to obtain more quantitative information from the study of the spectra of the components.
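
The standard line-intensity measure behind such work is the equivalent width, W = ∫ (1 − F(λ)/F_c) dλ. A minimal numerical sketch with a synthetic line (not observational data):

```python
import numpy as np

# Synthetic Gaussian absorption line of depth 0.6 and sigma 0.5 A
# on a flat, unit continuum (invented for illustration).
wl = np.linspace(4990.0, 5010.0, 2001)                 # wavelength grid, Angstrom
flux = 1.0 - 0.6 * np.exp(-0.5 * ((wl - 5000.0) / 0.5) ** 2)

# Equivalent width W = integral of (1 - F/F_c) d(lambda), here with F_c = 1.
dx = wl[1] - wl[0]
ew = np.sum(1.0 - flux) * dx
print(f"equivalent width ~ {ew:.3f} A")   # analytic: 0.6*0.5*sqrt(2*pi) ~ 0.752
```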


Author(s): J.N. Chapman, P.E. Batson, E.M. Waddell, R.P. Ferrier

By far the most commonly used mode of Lorentz microscopy in the examination of ferromagnetic thin films is the Fresnel or defocus mode. Use of this mode in the conventional transmission electron microscope (CTEM) is straightforward and immediately reveals the existence of all domain walls present. However, if such quantitative information as the domain wall profile is required, the technique suffers from several disadvantages. These include the inability to directly observe fine image detail on the viewing screen because of the stringent illumination coherence requirements, the difficulty of accurately translating part of a photographic plate into quantitative electron intensity data, and, perhaps most severe, the difficulty of interpreting this data. One solution to the first-named problem is to use a CTEM equipped with a field emission gun (FEG) (Inoue, Harada and Yamamoto 1977) whilst a second is to use the equivalent mode of image formation in a scanning transmission electron microscope (STEM) (Chapman, Batson, Waddell, Ferrier and Craven 1977), a technique which largely overcomes the second-named problem as well.
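
For scale, the magnetic deflection that Fresnel imaging exploits is the classical Lorentz angle β = e·λ·B_s·t / h for electrons of wavelength λ crossing a film of thickness t with in-plane induction B_s; a quick numerical check with illustrative values:

```python
# Classical Lorentz deflection angle: beta = e * lambda * B_s * t / h.
# Values are illustrative, not taken from the paper.
e = 1.602e-19     # electron charge, C
h = 6.626e-34     # Planck constant, J s
lam = 3.7e-12     # electron wavelength at ~100 keV, m
B_s = 1.0         # saturation induction of the film, T
t = 50e-9         # film thickness, m

beta = e * lam * B_s * t / h
print(f"Lorentz deflection ~ {beta * 1e6:.0f} microradians")   # ~45 urad
```

Deflections of a few tens of microradians explain the stringent illumination-coherence requirements the abstract mentions.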


Author(s): Jerrold L. Abraham

Inorganic particulate material of diverse types is present in the ambient and occupational environment, and exposure to such materials is a well recognized cause of some lung disease. To investigate the interaction of inhaled inorganic particulates with the lung it is necessary to obtain quantitative information on the particulate burden of lung tissue in a wide variety of situations. The vast majority of diagnostic and experimental tissue samples (biopsies and autopsies) are fixed with formaldehyde solutions, dehydrated with organic solvents and embedded in paraffin wax. Over the past 16 years, I have attempted to obtain maximal analytical use of such tissue with minimal preparative steps. Unique diagnostic and research data result from both qualitative and quantitative analyses of sections. Most of the data has been related to inhaled inorganic particulates in lungs, but the basic methods are applicable to any tissues. The preparations are primarily designed for SEM use, but they are stable for storage and transport to other laboratories and several other instruments (e.g., for SIMS techniques).


Author(s): R.D. Leapman, S.B. Andrews

Elemental mapping of biological specimens by electron energy loss spectroscopy (EELS) can be carried out both in the scanning transmission electron microscope (STEM) and in the energy-filtering transmission electron microscope (EFTEM). Choosing between these two approaches is complicated by the variety of specimens that are encountered (e.g., cells or macromolecules; cryosections, plastic sections or thin films) and by the range of elemental concentrations that occur (from a few percent down to a few parts per million). Our aim here is to consider the strengths of each technique for determining elemental distributions in these different types of specimen.

On the one hand, it is desirable to collect a parallel EELS spectrum at each point in the specimen using the 'spectrum-imaging' technique in the STEM. This minimizes the electron dose and retains as much quantitative information as possible about the inelastic scattering processes in the specimen. On the other hand, collection times in the STEM are often limited by the detector read-out and by the available probe current. For example, a 256 x 256 pixel image in the STEM takes at least 30 minutes to acquire with a read-out time of 25 ms. The EFTEM is able to collect parallel image data using slow-scan CCD array detectors from as many as 1024 x 1024 pixels with integration times of a few seconds. Furthermore, the EFTEM has an available beam current in the µA range compared with just a few nA in the STEM. Indeed, for some applications this can result in a factor of ~100 shorter acquisition time for the EFTEM relative to the STEM. However, the EFTEM provides much less spectral information, so the technique of choice ultimately depends on requirements for processing the spectrum at each pixel (viz., isolated edges vs. overlapping edges, uniform thickness vs. non-uniform thickness, molar vs. millimolar concentrations).
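
The acquisition-time comparison follows from simple arithmetic; a sketch using the figures quoted above (the EFTEM energy-window count and integration time are assumptions):

```python
# STEM spectrum imaging: one detector read-out per pixel (figures from the abstract).
stem_pixels = 256 * 256
readout_s = 0.025                                  # 25 ms per pixel
stem_time_s = stem_pixels * readout_s
print(f"STEM: {stem_time_s:.0f} s ({stem_time_s / 60:.0f} min)")   # ~27 min

# EFTEM: one CCD frame per energy window; the window count is an assumption.
eftem_windows = 3                                  # e.g., two pre-edge + one post-edge
integration_s = 5.0                                # "a few seconds" per frame
eftem_time_s = eftem_windows * integration_s
print(f"EFTEM: {eftem_time_s:.0f} s for a 1024 x 1024 pixel map")
```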

