Prosodic Phrasing and Syllable Prominence in Spoken Prose: A Validated Coding Manual

2022
Author(s): Isabelle Franz, Christine A. Knoop, Gerrit Kentner, Sascha Rothbart, Vanessa Kegel, et al.

Current systems for predicting prosodic prominence and boundaries in texts focus on syntax- and semantics-based automatic decoding of sentences that need to be annotated syntactically (Atterer & Klein 2002; Windmann et al. 2011). However, to date, there is no phonetically validated, replicable system for manually coding prosodic boundaries and syllable prominence in longer sentences or texts. Based on work in the fields of metrical phonology (Liberman & Prince 1977), phrase formation (Hayes 1989) and existing pause coding systems (Gee & Grosjean 1983), we developed a manual for coding prosodic boundaries (with 6 degrees of juncture) and syllable prominence (8 degrees). Three independent annotators applied the coding system to the beginning pages of four German novels and to four short stories (20 058 syllables, Fleiss kappa .82). For the phonetic validation, eight professional speakers read the excerpts of the novels aloud. We annotated the speech signal automatically with MAUS (Schiel 1999). Using Praat (Boersma & Weenink 2019), we extracted pitch, duration, and intensity for each syllable, as well as several phonetic parameters for pauses, and compared all measures obtained to the theoretically predicted levels of syllable prominence and prosodic boundary strength. The validation against the speech signal shows that our annotation system reliably predicts syllable prominence and prosodic boundaries. Since our annotation works with plain text, the coding system has many potential applications, covering research on prose rhythm, synthetic speech, and (psycho)linguistic research on prosody.
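
Per-syllable acoustic measures of the kind used in the validation step can be extracted programmatically. The following is a minimal sketch using the praat-parselmouth Python interface to Praat; the file name and syllable boundaries are hypothetical stand-ins for a real recording and a MAUS alignment, not the authors' actual pipeline.

```python
import numpy as np
import parselmouth

snd = parselmouth.Sound("novel_excerpt.wav")   # hypothetical recording
pitch = snd.to_pitch()
intensity = snd.to_intensity()

f0_times = pitch.xs()
f0_values = pitch.selected_array["frequency"]  # Hz; 0 where unvoiced
int_times = intensity.xs()
int_values = intensity.values[0]               # dB contour

# Hypothetical syllable boundaries in seconds, e.g. from a MAUS alignment.
syllables = [(0.10, 0.32), (0.32, 0.55), (0.55, 0.91)]

for start, end in syllables:
    voiced = (f0_times >= start) & (f0_times < end) & (f0_values > 0)
    window = (int_times >= start) & (int_times < end)
    mean_f0 = f0_values[voiced].mean() if voiced.any() else float("nan")
    mean_db = int_values[window].mean() if window.any() else float("nan")
    print(f"{start:.2f}-{end:.2f} s  dur={end - start:.3f} s  "
          f"F0={mean_f0:.1f} Hz  intensity={mean_db:.1f} dB")
```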

2021
Vol 21 (1)
Author(s): Jerome Niyirora

Abstract. Background: Transitioning from an old medical coding system to a new one can be challenging, especially when the two coding systems are significantly different. The US experienced such a transition in 2015. Objective: This research aims to introduce entropic measures that help users prepare for the migration to a new medical coding system by identifying, and focusing preparation initiatives on, clinical concepts with a higher likelihood of adoption challenges. Methods: Two entropic measures of coding complexity are introduced. The first measure is a function of the variation in the alphabets of new codes. The second measure is based on the possible number of valid representations of an old code. Results: A demonstration of how to implement the proposed techniques is carried out using the 2015 mappings between ICD-9-CM and ICD-10-CM/PCS. The significance of the resulting entropic measures is discussed in the context of clinical concepts that were likely to pose challenges regarding documentation, coding errors, and longitudinal data comparisons. Conclusion: The proposed entropic techniques are suitable for assessing the complexity between any two medical coding systems for which mappings or crosswalks exist. The higher the entropy, the greater the likelihood of adoption challenges. Users can apply the suggested techniques as a guide to prioritize training efforts, improve documentation, and increase the chances of accurate coding, code validity, and sound longitudinal data comparisons.
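
To make the idea concrete, here is a hedged sketch in the spirit of the second measure: the more distinct valid new-code representations an old code has, the higher the Shannon entropy of the mapping distribution. The crosswalk rows are invented for illustration, and the formula is standard Shannon entropy, not necessarily the paper's exact measure.

```python
from collections import Counter
from math import log2

def mapping_entropy(candidate_codes):
    """Shannon entropy (bits) of the empirical distribution of candidate codes."""
    counts = Counter(candidate_codes)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Invented crosswalk sample: one ICD-9-CM code with several possible
# ICD-10-CM representations. More (and more evenly used) candidates
# means higher entropy, hence a higher likelihood of adoption challenges.
print(mapping_entropy(["E11.9"]))                             # 0.0 bits
print(mapping_entropy(["E11.9", "E11.8", "E13.9", "E11.9"]))  # 1.5 bits
```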


2018
Vol 2018
pp. 1-16
Author(s): Octavio Flores Siordia, Juan Carlos Estrada Gutiérrez, Carlos Eduardo Padilla Leyferman, Jorge Aguilar Santiago, Maricela Jiménez Rodríguez

Safeguarding the identity of people in photographs or videos published through social networks or television is of great importance to those who do not wish to be recognized. In this paper, a face detection and coding system is designed with the goal of solving this problem. Mathematical models to generate chaotic orbits are deployed. One of them applies the diffusion technique to scramble the pixels of each face, while another implements the confusion technique to alter the relation between plain text and ciphered text. Afterward, two further orbits are utilized in the steganography stage, which modifies the least significant bit (LSB) to conceal the data that allows authorized users to decipher the faces. To verify the robustness of the proposed encryption algorithm, different tests are performed with the standard Lena image, such as correlation diagrams, histograms, and entropy. In addition, occlusion, noise, and plain-image attacks are performed. The results are compared with those of other works; the proposed system provides high sensitivity to the secret key, a large encryption key space, good ciphering speed, a highly disordered cryptogram, security, data integrity, and robustness against different attacks.
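
As a rough illustration of two of the ingredients (a chaotic keystream for scrambling, LSB embedding for steganography), the sketch below uses a logistic map. It is a simplified stand-in, not the authors' algorithm; all parameters and data are invented.

```python
import numpy as np

def logistic_keystream(x0, r, n):
    """Generate n keystream bytes from a logistic-map orbit x -> r*x*(1-x)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1 - x)
        xs[i] = x
    return (xs * 255).astype(np.uint8)

# Invented stand-in for a detected face region (8x8 grayscale pixels).
face = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)

# Confusion: XOR pixels with a keystream derived from the secret key (x0, r).
ks = logistic_keystream(0.3456, 3.99, face.size).reshape(face.shape)
cipher = face ^ ks
assert ((cipher ^ ks) == face).all()  # holders of (x0, r) can invert

# Steganography: hide one data bit per pixel in the least significant bit.
secret_bits = np.random.randint(0, 2, size=cipher.shape, dtype=np.uint8)
stego = (cipher & 0xFE) | secret_bits
```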


2020
Vol 12 (2)
pp. 161-171
Author(s): Tsalits Abdul Aziz Al farisi

The purpose of this research is to describe 1) how the coding system for creating poetry works, 2) how students understand objects around them and then apply them to the diction of a text, and 3) how students create particular diction in relation to what they imagine. The research uses quantitative methods with a model design pattern that employs media as a basis, applied to the study of literature in class X at SMA Kanjeng Sepuh. The result of the study is an observation table assessing students' interest in literature, especially poetry; observations of teacher and student activities during the study of poetry provide a further source of quantitative data. The study concludes by identifying a point of accuracy for literary learning through quantitative measures in coding systems, which is needed to find concrete steps toward a pattern for learning literary writing, a subject that currently holds little interest for students.


10.14311/906
2007
Vol 47 (1)
Author(s): M. Herrera Martinez

This paper deals with the subjective evaluation of audio-coding systems. From this evaluation, it is found that, depending on the type of signal and the algorithm of the audio-coding system, different types of audible errors arise. These errors are called coding artifacts. Although three kinds of artifacts are perceivable in the auditory domain, the author proposes that in the coding domain there is only one common cause for their appearance: inefficient tracking of transient-stochastic signals. To address this, state-of-the-art audio-coding systems use a wide range of signal processing techniques, including the wavelet transform, which is described here.
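
To illustrate why the wavelet transform suits transient tracking, here is a small hedged sketch using the PyWavelets library; the test signal and wavelet choice are illustrative, not taken from the paper.

```python
import numpy as np
import pywt

fs = 44_100
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 440 * t)   # steady tone
signal[fs // 2] += 0.8                 # idealized transient (a click)

# Multi-level discrete wavelet transform; the finest detail coefficients
# localize the transient in time, unlike a block-based spectral view.
coeffs = pywt.wavedec(signal, "db4", level=5)
finest_detail = coeffs[-1]             # level-1 detail, downsampled by 2
peak = np.argmax(np.abs(finest_detail))
print(f"transient near sample {peak * 2} (true position {fs // 2})")
```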


2018
Vol 57 (01/02)
pp. 01-42
Author(s): Yong Chen, Marko Zivkovic, Su Su, Jianyi Lee, Edward Bortnichak, et al.

Summary. Background: Clinical coding systems have been developed to translate real-world healthcare information such as prescriptions, diagnoses and procedures into standardized codes appropriate for use in large healthcare datasets. Due to the lack of information on coding system characteristics and insufficient uniformity in coding practices, there is a growing need for a better understanding of coding systems and their use in pharmacoepidemiology and observational real-world data research. Objectives: To determine: 1) the number of available coding systems and their characteristics, 2) the pharmacoepidemiology databases in which they are adopted, 3) the outcomes and exposures that can be identified from each coding system, and 4) how robust they are with respect to consistency and validity in pharmacoepidemiology and observational database studies. Methods: Electronic literature database and unpublished literature searches, as well as hand searching of relevant journals, were conducted to identify eligible articles discussing characteristics and applications of coding systems in use and published in the English language between 1986 and 2016. Characteristics considered included the type of information captured by codes, clinical setting(s) of use, adoption by a pharmacoepidemiology database, region, and available mappings. Application articles describing the use and validity of specific codes, code lists, or algorithms were also included. Data extraction was performed independently by two reviewers and a narrative synthesis was performed. Results: A total of 897 unique articles and 57 coding systems were identified; 17% of the coding systems included country-specific modifications or multiple versions. Procedures (55%), diagnoses (36%), drugs (38%), and site of disease (39%) were most commonly and directly captured by these coding systems. The systems were used to capture information from the following clinical settings: inpatient (63%), ambulatory (55%), emergency department (ED, 34%), and pharmacy (13%). More than half of all coding systems were used in Europe (59%) and North America (57%). Of the reviewed coding systems, 34% were utilized in at least one of the 16 pharmacoepidemiology databases of interest, and 21% had studies evaluating the validity and consistency of their use in research within those databases. The most prevalent validation method was comparison with a review of patient charts, case notes or medical records (64% of reviewed validation studies). The reported performance measures in the reviewed studies varied across a large range of values (PPV 0-100%, NPV 6-100%, sensitivity 0-100%, specificity 23-100% and accuracy 16-100%) and were dependent on many factors including coding system(s), therapeutic area, pharmacoepidemiology database, and outcome. Conclusions: Coding systems vary by type of information captured, clinical setting, and pharmacoepidemiology database and region of use. Of the 57 reviewed coding systems, few are routinely and widely applied in pharmacoepidemiology database research. Indication- and outcome-dependent heterogeneity in coding system performance suggests that accurate definitions and algorithms for capturing specific exposures and outcomes within large healthcare datasets should be developed on a case-by-case basis and in consultation with clinical experts.
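
For reference, the validation measures quoted above relate to a chart-review confusion matrix in the usual way; the counts below are invented purely to show the arithmetic.

```python
# Invented chart-review counts: code-identified cases vs. medical-record truth.
tp, fp, fn, tn = 80, 20, 10, 890

ppv = tp / (tp + fp)                    # positive predictive value
npv = tn / (tn + fn)                    # negative predictive value
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + fp + fn + tn)
print(f"PPV={ppv:.2f}  NPV={npv:.2f}  Se={sensitivity:.2f}  "
      f"Sp={specificity:.2f}  Acc={accuracy:.2f}")
```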


2020
pp. 002383092097184
Author(s): Jeremy Steffman, Hironori Katsuda

Recent research has proposed that listeners use prosodic information to guide their processing of phonemic contrasts. Given that prosodic organization of the speech signal systematically modulates durational patterns (e.g., accentual lengthening and phrase-final (PF) lengthening), listeners’ perception of durational contrasts has been argued to be influenced by prosodic factors. For example, given that sounds are generally lengthened preceding a prosodic boundary, listeners may adjust their perception of durational cues accordingly, effectively compensating for prosodically driven temporal patterns. The present study reports two experiments designed to test, along these lines, the importance of pitch-based cues to prosodic structure for listeners’ perception of contrastive vowel length (CVL) in Tokyo Japanese. We tested whether, when a target sound is cued as being PF, listeners compensatorily adjust categorization of vowel duration, in accordance with PF lengthening. Both experiments were a two-alternative forced choice task in which listeners categorized a vowel duration continuum as a phonemically short or long vowel. We manipulated only the pitch surrounding the target sound in a carrier phrase to cue it as intonational-phrase-final or accentual-phrase-medial. In Experiment 1 we tested perception of an accented target word, and in Experiment 2 we tested perception of an unaccented target word. In both experiments, we found that contextual changes in pitch influenced listeners’ perception of CVL, in accordance with their function of signaling intonational structure. The results therefore suggest that listeners use tonal information to compute prosodic structure and bring it to bear on their perception of durational contrasts in speech.
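
Categorization data from such a two-alternative forced choice task are typically summarized by a psychometric function whose 50% point marks the category boundary; a boundary shifted toward longer durations in the phrase-final context would indicate compensation for PF lengthening. Below is a sketch with invented response proportions, not the authors' data or analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, x0, k):
    """Logistic proportion of 'long' responses; x0 is the category boundary."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

durations = np.linspace(60, 160, 9)  # vowel duration continuum in ms (invented)
p_long_medial = np.array([0.02, 0.05, 0.12, 0.30, 0.55, 0.78, 0.90, 0.97, 0.99])
p_long_final = np.array([0.01, 0.03, 0.07, 0.18, 0.40, 0.65, 0.84, 0.94, 0.98])

popt_medial, _ = curve_fit(psychometric, durations, p_long_medial, p0=[110, 0.1])
popt_final, _ = curve_fit(psychometric, durations, p_long_final, p0=[110, 0.1])
print(f"boundary: phrase-medial {popt_medial[0]:.1f} ms, "
      f"phrase-final {popt_final[0]:.1f} ms")
```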


Author(s): Ana Aleixo, António Pazo Pires, Lynne Angus, David Neto, Alexandre Vaz

Abstract. Despite the importance of narrative, emotional and meaning-making processes in psychotherapy, there has been no review of studies using the main instruments developed to assess these processes. The objective is to review the studies of client narrative and narrative-emotional processes in psychotherapy that used the Narrative Process Coding System or the Narrative-Emotion Process Coding System (1.0 and 2.0). To identify the studies, we searched The Book Collection, PsycINFO, PsycARTICLES, PsycBOOKS, PEP Archive, Psychology and Behavioral Sciences Collection, Academic Search Complete and the Web of Knowledge databases. We found 27 empirical studies using one of the three coding systems. The studies applied the Narrative Process Coding System and the Narrative-Emotion Process Coding System to different therapeutic modalities and to patients with various clinical disorders. Some studies compared early, middle and late phases of therapy, while others conducted intensive case analyses of Narrative Process Coding System and Narrative-Emotion Process Coding System patterns, comparing recovered vs. unchanged clients. The review supports the importance of examining the contribution of narrative, emotion and meaning-making patterns, or narrative-emotion markers, to treatment outcomes, and encourages the application of these instruments in process-outcome research in psychotherapy.


1994
Vol 42 (2/3/4)
pp. 664-672
Author(s): K. Itoh, N. Kitawaki, H. Irii, H. Nagabuchi

2019
Author(s): Nicolas Delvaux, Bert Vaes, Bert Aertgeerts, Stijn Van de Velde, Robert Vander Stichele, et al.

BACKGROUND Effective clinical decision support systems require accurate translation of practice recommendations into machine-readable artifacts; developing code sets that represent clinical concepts is an important step in this process. Many clinical coding systems are currently used in electronic health records, and it is unclear whether all of these systems are capable of efficiently representing the clinical concepts required for executing clinical decision support systems. OBJECTIVE The aim of this study was to evaluate which clinical coding systems are capable of efficiently representing clinical concepts that are necessary for translating artifacts into executable code for clinical decision support systems. METHODS Two methods were used to evaluate a set of clinical coding systems. In a theoretical approach, we extracted all the clinical concepts from 3 preventive care recommendations and constructed a series of code sets containing codes from a single clinical coding system. In a practical approach using data from a real-world setting, we studied the content of 1890 code sets used in an internationally available clinical decision support system and compared the usage of various clinical coding systems. RESULTS SNOMED CT and ICD-10 (International Classification of Diseases, Tenth Revision) proved to be the most accurate clinical coding systems for most concepts in our theoretical evaluation. In our practical evaluation, we found that ICD-10 was most often used to construct code sets. Some coding systems were very accurate in representing specific types of clinical concepts, for example, LOINC (Logical Observation Identifiers Names and Codes) for investigation results and ATC (Anatomical Therapeutic Chemical Classification) for drugs. CONCLUSIONS No single coding system seems to fulfill all the needs of representing clinical concepts for clinical decision support systems. The comprehensiveness of a coding system seems to be offset by complexity, which forms a barrier to usability for code set construction. Clinical vocabularies mapped to multiple clinical coding systems could facilitate clinical code set construction.
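
As a concrete, hypothetical illustration of a code set that mixes coding systems for one clinical concept, consider the sketch below. The grouping itself is invented for illustration; the individual codes are standard examples of each system's format (SNOMED CT 44054006 and ICD-10 E11 for type 2 diabetes, LOINC 4548-4 for HbA1c, ATC A10 for drugs used in diabetes), but any real code set should be built against the current terminologies.

```python
# A hypothetical code set for one clinical concept, drawing on several
# coding systems as the abstract describes. Invented grouping; verify
# against the current terminology releases before any real use.
code_set = {
    "concept": "type 2 diabetes mellitus (diagnosis and monitoring)",
    "SNOMED CT": ["44054006"],   # diabetes mellitus type 2 (disorder)
    "ICD-10": ["E11"],           # type 2 diabetes mellitus
    "LOINC": ["4548-4"],         # hemoglobin A1c / total hemoglobin in blood
    "ATC": ["A10"],              # drugs used in diabetes
}

def codes_for(system: str) -> list[str]:
    """Return the codes contributed by one coding system."""
    return code_set.get(system, [])

print(codes_for("SNOMED CT"))
```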


2003
Vol 42 (03)
pp. 236-242
Author(s): R. Jameson, D. P. Lorence

Summary. Objective: To assess the adoption of automated classification (encoder) systems in healthcare settings and the related effects on perceived data quality. Methods: Survey of all U.S. accredited medical records managers, summarizing their reports on automated encoding systems and on changes in data quality following adoption of such systems. Results: Significant improvement in data quality was seen following the adoption of automated encoding systems, though variation existed across regions and key demographic variables. Conclusion: At a national level, there is a need to minimize variation in data quality and to ensure some degree of nationwide uniformity in the performance of coding systems. If healthcare providers are expected to trust coded data for comparative purposes, there will be a corresponding need for more uniform and standardized system-based performance benchmarks.

