original sentence
Recently Published Documents

TOTAL DOCUMENTS: 40 (FIVE YEARS 21)
H-INDEX: 5 (FIVE YEARS 2)

2021 ◽ Vol 34 (1) ◽ pp. 71-79
Author(s): Colleen M. Berryessa

Using a national sample of U.S. adults (N = 371), this study experimentally examines (1) public support for the use of strategies that provide early release (i.e., “second chance” mechanisms) to individuals serving long-term prison sentences for drug crimes; and (2) how levels of support, and reasons for support, may vary depending on the type of drug-related offense. Results show moderate levels of support for using second chance mechanisms, both generally and in relation to specific strategies commonly available across jurisdictions, for a range of drug offenders. Yet participants showed significantly more support for using presumptive parole, elimination of parole revocations for technical violations, second-look sentencing, and compassionate release in the cases of those incarcerated long term for serious trafficking of marijuana, as compared to serious trafficking of more serious drugs. Data also suggest that the public views a range of factors—including the original sentence being extreme by international standards, extreme due to racially biased practices, out of step with current sentencing values/practices, too costly, and continuing to incarcerate someone unlikely to be a public safety threat—as at least moderately important to their support for the use of second chance mechanisms across drug crimes, and the importance of these factors to that support does not appear to differ significantly based on the type of drug offense. The importance of these results for policy making and utilization is discussed, as well as implications for reducing our historical reliance on drug-related incarceration.


Author(s): Linqing Chen ◽ Junhui Li ◽ Zhengxian Gong ◽ Xiangyu Duan ◽ Boxing Chen ◽ ...

Document context-aware machine translation remains challenging due to the lack of large-scale document-level parallel corpora. To make full use of source-side monolingual documents for context-aware NMT, we propose a Pre-training approach with Global Context (PGC). In particular, we first propose a novel self-supervised pre-training task, which contains two training objectives: (1) reconstructing the original sentence from a corrupted version; (2) generating a gap sentence from its left and right neighbouring sentences. Then we design a universal model for PGC, which consists of a global context encoder, a sentence encoder and a decoder, with an architecture similar to typical context-aware NMT models. We evaluate the effectiveness and generality of our pre-trained PGC model by adapting it to various downstream context-aware NMT models. Detailed experimentation on four different translation tasks demonstrates that our PGC approach significantly improves the translation performance of context-aware NMT. For example, based on the state-of-the-art SAN model, we achieve an average improvement of 1.85 BLEU points and 1.59 Meteor points on the four translation tasks.
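
To make the two pre-training objectives concrete, the following is a minimal, hypothetical Python sketch of how such training triples could be derived from a source-side monolingual document; the corruption scheme, mask tokens and function names are illustrative assumptions rather than the paper's actual implementation.

```python
# Hypothetical sketch of PGC-style pre-training data construction.
# The token-level corruption and the "<mask>"/"<gap>" symbols are
# illustrative; the paper's actual scheme may differ.
import random

def corrupt(sentence, drop_prob=0.15, mask_token="<mask>"):
    """Corrupt a sentence by randomly dropping or masking tokens."""
    corrupted = []
    for tok in sentence.split():
        r = random.random()
        if r < drop_prob:              # drop the token entirely
            continue
        if r < 2 * drop_prob:          # replace it with a mask symbol
            corrupted.append(mask_token)
        else:
            corrupted.append(tok)
    return " ".join(corrupted)

def build_pretraining_pairs(document):
    """Yield (global_context, model_input, target) triples for the two objectives."""
    for i, sent in enumerate(document):
        left = document[i - 1] if i > 0 else ""
        right = document[i + 1] if i + 1 < len(document) else ""
        context = f"{left} {right}".strip()
        # Objective 1: reconstruct the original sentence from a corrupted version.
        yield context, corrupt(sent), sent
        # Objective 2: generate the gap sentence from its neighbours alone.
        yield context, "<gap>", sent

doc = ["The cat sat on the mat.", "It looked very content.", "Then it fell asleep."]
for ctx, inp, tgt in build_pretraining_pairs(doc):
    print(ctx, "||", inp, "||", tgt)
```

In a setup along these lines, the global context encoder would consume the neighbouring sentences while the sentence encoder and decoder handle the (possibly corrupted) input and the reconstruction target.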


2021 ◽ Vol 12
Author(s): Tamás Káldi ◽ Ágnes Szöllösi ◽ Anna Babarczy

The present work investigates the memory accessibility of linguistically focused elements and the representation of the alternatives to these elements (i.e., their possible replacements) in Working Memory (WM) and in delayed recognition memory in the case of the Hungarian pre-verbal focus construction (preVf). In two probe recognition experiments we presented preVf sentences and corresponding focusless neutral sentences embedded in five-sentence stories. Stories were followed by the presentation of sentence probes in one of three conditions: (i) the probe was identical to the original sentence in the story, (ii) the focused word (i.e., the target) was replaced by a semantically related word, and (iii) the target word was replaced by a semantically unrelated but contextually suitable word. In Experiment 1, probes were presented immediately after the stories, measuring WM performance, while in Experiment 2, blocks of six stories were presented and sentences were probed with a 2-minute delay, measuring delayed recognition memory performance. Results revealed an advantage of the focused element in immediate but not in delayed retrieval. We found no effect of sentence type on the recognition of the two different probe types in WM performance. However, results pertaining to the memory accessibility of focus alternatives in delayed retrieval showed an interference effect resulting in lower memory performance. We conclude that this effect is indirect evidence for the enhanced activation of focus alternatives. The present work is novel in two respects. First, no study has been conducted on the memory representation of focused elements and their alternatives in the case of the structurally marked Hungarian pre-verbal focus construction. Second, to our knowledge, this is the first study that investigates the focus representation accounts for WM and delayed recognition memory using the same stimuli and the same measured variables. Since both experiments used exactly the same stimulus set and differed only in the timing of the recognition probes, the principle of ceteris paribus fully applied to how we addressed our research question regarding the two different memory systems.


Author(s): Jia Jun, Dong et al.

Paraphrasing is a process of restating the meaning of a text or a passage using different words in the same language to give readers a clearer understanding of the original sentence. Paraphrasing is important in many natural language processing tasks such as plagiarism detection, information retrieval, and machine translation. In this article, we describe our work on paraphrasing Chinese idioms by using the definitions from dictionaries. The definitions of the idioms are reworded and then scored to find the best paraphrase candidates to be used for the given context. With the proposed approach to paraphrasing Chinese idioms in sentences, the BLEU score was 75.69%, compared to 66.34% for the baseline approach.
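
As a rough illustration of the pipeline described above (look up the idiom's dictionary definition, reword it into candidates, score the candidates in context, and substitute the best one), here is a hypothetical Python sketch; the dictionary entry, the candidates and the toy scorer are assumptions for illustration, not the authors' actual resources or scoring model.

```python
# Illustrative only: the dictionary entry, candidates and scorer are placeholders.
IDIOM_DICTIONARY = {
    # a toy entry mapping an idiom to reworded definition candidates
    "画蛇添足": ["做了多余的事", "多此一举"],
}

def toy_scorer(sentence):
    """Stand-in for a real fluency or context score; prefers shorter rewrites."""
    return -len(sentence)

def paraphrase_idiom(sentence, idiom, scorer=toy_scorer):
    """Replace the idiom with the dictionary-derived candidate that scores best in context."""
    candidates = IDIOM_DICTIONARY.get(idiom, [idiom])   # fall back to the idiom itself
    best = max(candidates, key=lambda c: scorer(sentence.replace(idiom, c)))
    return sentence.replace(idiom, best)

print(paraphrase_idiom("他这样做完全是画蛇添足。", "画蛇添足"))
# -> 他这样做完全是多此一举。
```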


2021 ◽ Vol 30 (1) ◽ pp. 868-880
Author(s): Xi Wang

Abstract: Basic syntactic analysis refers to sentence-level syntactic analysis. In the process of developing the Mat Link English–Chinese machine translation system, the Generalized Maximum Likelihood Ratio algorithm was improved, and a basic English syntax analyzer for English–Chinese translation was designed and implemented. The analyzer adopts an analysis-table structure that supports multiple outputs, introduces a character mapping function to recognize sentence boundaries automatically, uses same-level child nodes to describe the grammatical structure of a sentence, and realizes a staged conversion from the original sentence to the target sentence. Finally, the design concept and working process of the basic grammar analyzer are explained through the analysis of example sentences.
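
The abstract mentions a character mapping function for automatic sentence-boundary recognition; the following minimal Python sketch illustrates that general idea under stated assumptions (the character classes, abbreviation list and decision rule are hypothetical and are not taken from the Mat Link system).

```python
# Hypothetical illustration: map characters to coarse classes and accept a
# boundary only for a "terminator + space + capital" pattern that does not
# close a known abbreviation.
TERMINATORS = {".", "!", "?"}
ABBREVIATIONS = {"Mr.", "Mrs.", "Dr.", "e.g.", "i.e.", "etc."}

def char_class(ch):
    """Map a character to the coarse class used for boundary decisions."""
    if ch in TERMINATORS:
        return "TERM"
    if ch.isspace():
        return "SPACE"
    if ch.isupper():
        return "UPPER"
    return "OTHER"

def split_sentences(text):
    sentences, start = [], 0
    for i, ch in enumerate(text):
        if char_class(ch) != "TERM":
            continue
        tokens = text[start:i + 1].split()
        if tokens and tokens[-1] in ABBREVIATIONS:
            continue                      # the period belongs to an abbreviation
        nxt = text[i + 1:i + 3]
        if not nxt or (nxt[0].isspace() and (len(nxt) < 2 or nxt[1].isupper())):
            sentences.append(text[start:i + 1].strip())
            start = i + 1
    if text[start:].strip():
        sentences.append(text[start:].strip())
    return sentences

print(split_sentences("Dr. Smith improved the parser. It handles e.g. abbreviations. Does it work? Yes!"))
```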


Author(s): O. Sierhieieva

The article considers phraseological units and antonymic translation as one of the most effective methods of conveying lexical units. Antonymic translation is shown to be an independent type of translation. Antonymic translation is defined as a translation mode whereby an affirmative (positive) element in the ST is translated by a negative element in the TT and, vice versa, a negative element in the ST is translated using an affirmative element in the TT, without changing the meaning of the original sentence. It is not a word-for-word translation but a transformation in which the translator selects an antonym and combines it with a negation element. Antonymic translation as such can be understood in broader and narrower terms, i.e. it may cover instances of a simple substitution of an element in the ST by its antonymic counterpart (negative or positive) in translation; positive/negative recasting, a translation procedure whereby the translator modifies the order of the units in the ST in order to conform to the syntactic or idiomatic constraints of the TT; and narrowing of the scope of negation, whereby the original negative sentence is turned into an affirmative one in translation by moving the negation element to a word, a phrase or an elliptical sentence. The term antonymic translation covers all three types. Generally, antonymic translation consists not only in the transformation of negative constructions into affirmative ones or vice versa: an original phraseological unit can be substituted with another expression of the opposite meaning in the target language or with an occasional antonym. The usage of antonymic translation as one of the methods of contextual replacement is investigated. The main types of this lexical and grammatical transformation are systematized. Attention is focused on the reasons for using antonymic translation.


Author(s): Toms Bergmanis ◽ Artūrs Stafanovičs ◽ Mārcis Pinnis

Neural machine translation systems are typically trained on curated corpora and break when faced with non-standard orthography or punctuation. Resilience to spelling mistakes and typos, however, is crucial, as machine translation systems are used to translate texts of informal origins, such as chat conversations, social media posts and web pages. We propose a simple generative noise model to generate adversarial examples of ten different types. We use these to augment machine translation systems’ training data and show that, when tested on noisy data, systems trained using adversarial examples perform almost as well as when translating clean data, while baseline systems’ performance drops by 2-3 BLEU points. To measure the robustness and noise invariance of machine translation systems’ outputs, we use the average translation edit rate between the translation of the original sentence and its noised variants. Using this measure, we show that systems trained on adversarial examples yield on average 50% consistency improvements compared to baselines trained on clean data.
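
As a rough sketch of what such a generative noise model could look like, the snippet below produces noised variants of a clean source sentence; the noise operations and probabilities shown are illustrative assumptions and do not reproduce the paper's ten noise types. Each (noised source, clean target) pair would be added to the training data, and output consistency could then be measured by comparing translations of the original and noised inputs, for example with translation edit rate.

```python
# Hypothetical noise model sketch; operations and probabilities are illustrative.
import random

random.seed(0)  # reproducible example output

def swap_adjacent_chars(word):
    """Swap two adjacent characters, simulating a typo."""
    if len(word) < 2:
        return word
    i = random.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def drop_char(word):
    """Delete a random character from the word."""
    if len(word) < 2:
        return word
    i = random.randrange(len(word))
    return word[:i] + word[i + 1:]

def remove_punctuation(sentence):
    return "".join(ch for ch in sentence if ch not in ".,!?;:")

def noise_sentence(sentence, word_noise_prob=0.1):
    """Apply character-level noise to random words; sometimes strip punctuation."""
    words = []
    for w in sentence.split():
        if random.random() < word_noise_prob:
            w = random.choice([swap_adjacent_chars, drop_char])(w)
        words.append(w)
    noised = " ".join(words)
    if random.random() < 0.3:
        noised = remove_punctuation(noised)
    return noised

src = "This is a clean training sentence, ready for augmentation."
print([noise_sentence(src) for _ in range(3)])
```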


2020 ◽ Vol 10 (18) ◽ pp. 6196
Author(s): Chunhua Wu ◽ Xiaolong Chen ◽ Xingbiao Li

Currently, most text style transfer methods encode the text into a style-independent latent representation and decode it into new sentences with the target style. Due to the limitations of the latent representation, previous works can hardly obtain satisfactory target-style sentences, especially in terms of preserving the semantics of the original sentence. We propose a “Mask and Generation” structure, which obtains an explicit representation of the content of the original sentence and generates the target sentence with a transformer. This explicit representation is a masked text in which the words with a strong style attribute are masked out. Therefore, it can preserve most of the semantic meaning of the original sentence. In addition, as it is the input to the generator, it also simplifies the generation process compared to current works that generate the target sentence from scratch. As the explicit representation is readable and the model has better interpretability, we can clearly see which words are changed and why. We evaluate our model on two review datasets with quantitative, qualitative, and human evaluations. The experimental results show that our model generally outperforms other methods in terms of transfer accuracy and content preservation.
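
To make the masking step concrete, here is a hypothetical Python sketch in which words carrying a strong style attribute are replaced by a mask token before generation; the toy sentiment lexicon and threshold stand in for whatever attribute scorer the model actually uses.

```python
# Illustrative masking step; the lexicon and threshold are assumptions.
STYLE_LEXICON = {   # toy attribute scores: positive > 0, negative < 0
    "delicious": 1.0, "friendly": 0.8, "terrible": -1.0, "rude": -0.9,
}
MASK = "<mask>"

def mask_style_words(sentence, threshold=0.5):
    """Mask every word whose absolute style score exceeds the threshold."""
    out = []
    for word in sentence.split():
        key = word.lower().strip(".,!?")
        out.append(MASK if abs(STYLE_LEXICON.get(key, 0.0)) > threshold else word)
    return " ".join(out)

print(mask_style_words("The food was delicious and the staff were friendly."))
# -> "The food was <mask> and the staff were <mask>"
# A transformer generator conditioned on the target style would then rewrite
# the masked positions, e.g. to produce a negative-style version of the review.
```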

