simple substitution
Recently Published Documents


TOTAL DOCUMENTS

87
(FIVE YEARS 14)

H-INDEX

12
(FIVE YEARS 2)

2021 ◽  
Author(s):  
◽  
Christopher Doran

<p>Codeswitching is the action of switching between languages to better impart information to a recipient. This thesis introduces a set of codeswitching translator tools as a method of disrupting the potentially damaging structures of tribal politics through the manipulation of ideologically specific language norms. We first investigate how tribalism and group identity impact our ability to participate in political discourse. Using this insight from a host of different research disciplines, we design an iterative testing environment for a variety of ‘codeswitching’ translators to assess the impact of translations ranging in complexity from simple word and syntax substitution through to machine learning back-translation. Though back-translation was not found to be an effective technique, simple substitution methods provided a foundation of effectiveness and proof of concept among test participants, especially those who identified as politically aligned.</p>
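The simplest of the translators described above, word substitution, can be sketched as a dictionary lookup over ideologically marked phrases. This is a minimal illustration, not the thesis's implementation; the lexicon entries below are hypothetical examples of ideologically loaded term pairs.

```python
import re

# Hypothetical lexicon mapping one group's phrasing to the other's.
LEXICON = {
    "death tax": "estate tax",
    "illegal aliens": "undocumented immigrants",
}

def substitute(text: str, lexicon: dict[str, str]) -> str:
    """Replace each source phrase with its 'codeswitched' counterpart."""
    for src, dst in lexicon.items():
        text = re.sub(re.escape(src), dst, text, flags=re.IGNORECASE)
    return text

print(substitute("The death tax hurts families.", LEXICON))
```

Syntax substitution and back-translation would layer additional rewriting on top of this lookup; the thesis found only the substitution layer reliably effective.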


PLoS ONE ◽  
2021 ◽  
Vol 16 (8) ◽  
pp. e0253425
Author(s):  
Ari Ercole ◽  
Abhishek Dixit ◽  
David W. Nelson ◽  
Shubhayu Bhattacharyay ◽  
Frederick A. Zeiler ◽  
...  

Statistical models for outcome prediction are central to traumatic brain injury research and critical to baseline risk adjustment. Glasgow coma score (GCS) and pupil reactivity are crucial covariates in all such models but may be measured at multiple time points between the time of injury and hospital admission and are subject to a variable degree of unreliability and/or missingness. Imputation of missing data may be undertaken using full multiple imputation or by simple substitution of measurements from other time points. However, it is unknown which strategy is best or which time points are more predictive. We evaluated the pseudo-R2 of logistic regression models (dichotomous survival) and proportional odds models (Glasgow Outcome Score-Extended) using different imputation strategies on The Collaborative European NeuroTrauma Effectiveness Research in Traumatic Brain Injury (CENTER-TBI) study dataset. Substitution strategies were easy to implement, achieved low levels of missingness (<10%) and could outperform multiple imputation without the need for computationally costly calculations and the pooling of multiple final models. While model performance was sensitive to imputation strategy, this effect was small in absolute terms and of limited clinical relevance. A strategy of using the emergency department discharge assessments and working back in time when these were missing generally performed well. Full multiple imputation had the advantage of preserving time-dependence in the models: the pre-hospital assessments were found to be relatively unreliable predictors of survival or outcome. The predictive performance of later assessments was model-dependent. In conclusion, simple substitution strategies for imputing baseline GCS and pupil response can perform well and may be a simple alternative to full multiple imputation in many cases.


2021 ◽  
pp. tobaccocontrol-2020-056427
Author(s):  
Ebrahim Karam ◽  
Soha Talih ◽  
Rola Salman ◽  
Rachel El-Hage ◽  
Nareg Karaoghlanian ◽  
...  

In 2019, JUUL Labs began marketing in the European Union ‘new technology’ pods that incorporated a new wick that it claimed provided ‘more satisfaction’. In this study, we compared design and materials of construction, electrical characteristics, liquid composition and nicotine and carbonyl emissions of new technology JUUL pods to their predecessors. Consistent with the manufacturer’s claims, we found that the new pods incorporated a different wicking material. However, we also found that the new pod design resulted in 50% greater nicotine emissions per puff than its predecessor, despite exhibiting unchanged liquid composition, device geometry and heating coil resistance. We found that when connected to the new technology pods, the JUUL power unit delivered a more consistent voltage to the heating coil. This behaviour suggests that the new coil-wick system resulted in better surface contact between the liquid and the temperature-regulated heating coil. Total carbonyl emissions did not differ across pod generations. That nicotine yields can be greatly altered with a simple substitution of wick material underscores the fragility of regulatory approaches that centre on product design rather than product performance specifications.


2021 ◽  
Author(s):  
Ari Ercole ◽  
Abhishek Dixit ◽  
David W Nelson ◽  
Frederick A Zeiler ◽  
Daan Nieboer ◽  
...  

Statistical models for outcome prediction are central to traumatic brain injury research and critical to baseline risk adjustment. Glasgow coma score (GCS) and pupil reactivity are crucial covariates in all such models but may be measured at multiple time points between the time of injury and hospital admission and are subject to a variable degree of unreliability and/or missingness. Imputation of missing data may be undertaken using full multiple imputation or by simple substitution of measurements from other time points. However, it is unknown which strategy is best or which time points are more predictive. We evaluated the pseudo-R2 of logistic regression models (dichotomous survival) and proportional odds models (Glasgow Outcome Score-Extended) using different imputation strategies on data from The Collaborative European NeuroTrauma Effectiveness Research in Traumatic Brain Injury (CENTER-TBI) study. Substitution strategies were easy to implement, achieved low levels of missingness (<10%) and could outperform multiple imputation without the need for computationally costly calculations and the pooling of multiple final models. Model performance was sensitive to imputation strategy, although this effect was small in absolute terms and of limited clinical relevance. A strategy of using the emergency department discharge assessments and working back in time when these were missing generally performed well. Full multiple imputation had the advantage of preserving time-dependence in the models: the pre-hospital assessments were found to be relatively unreliable predictors of survival or outcome. The predictive performance of later assessments was model-dependent. In conclusion, simple substitution strategies for imputing baseline GCS and pupil response can perform well and may be a simple alternative to full multiple imputation in many cases.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Veny Cahya Hardita ◽  
Eka Wahyu Sholeha

The principle that important information must not be known by unauthorized parties can be realized by keeping data confidential using cryptography. The security of the Vigenère cipher depends on the length of the key used: the longer the key, the larger the key space. The Caesar cipher, also called the Caesar code, is a classic encoding system based on simple substitution. This study uses a combination of the Vigenère cipher and Caesar cipher algorithms so that the contents of a message are protected more strongly and safely: if the transmitted message is intercepted by a malicious party, the hijacker has difficulty recovering the contents of the message, so its confidentiality and authenticity are preserved until it reaches the recipient. The study successfully implemented both methods, producing output in the form of reading symbols rather than letters.
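The cascade described above can be sketched as a Vigenère pass followed by a Caesar shift. This is a minimal illustration over the Latin alphabet, not the paper's implementation; in particular the paper's symbol-based output alphabet is not reproduced here.

```python
ALPHA = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    """Shift each letter by the corresponding (repeating) key letter."""
    sign = -1 if decrypt else 1
    return "".join(
        ALPHA[(ALPHA.index(ch) + sign * ALPHA.index(key[i % len(key)])) % 26]
        for i, ch in enumerate(text)
    )

def caesar(text: str, shift: int) -> str:
    """Shift every letter by a fixed amount (simple substitution)."""
    return "".join(ALPHA[(ALPHA.index(ch) + shift) % 26] for ch in text)

def encrypt(plain: str, key: str, shift: int) -> str:
    return caesar(vigenere(plain, key), shift)

def decrypt(cipher: str, key: str, shift: int) -> str:
    return vigenere(caesar(cipher, -shift), key, decrypt=True)

ciphertext = encrypt("ATTACKATDAWN", "LEMON", 3)
assert decrypt(ciphertext, "LEMON", 3) == "ATTACKATDAWN"
```

Because the Caesar layer is a fixed simple substitution, the combined scheme's security still rests mainly on the Vigenère key; the second layer mainly obscures the direct letter frequencies of a single-pass cipher.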


Author(s):  
O. Sierhieieva ◽  

The article considers phraseological units and antonymic translation as one of the most effective methods of rendering lexical units. Antonymic translation is shown to be an independent type of translation. Antonymic translation is defined as a translation mode whereby an affirmative (positive) element in the ST is translated by a negative element in the TT and, vice versa, a negative element in the ST is translated using an affirmative element in the TT, without changing the meaning of the original sentence. It is not a word-for-word translation, but a transformation in which the translator selects an antonym and combines it with a negation element. Antonymic translation as such can be understood in broader and narrower terms, i.e. it may cover instances of a simple substitution of an element in the ST by its antonymic counterpart (negative or positive) in translation; positive/negative recasting, a translation procedure whereby the translator modifies the order of the units in the ST in order to conform to the syntactic or idiomatic constraints of the TT; and narrowing of the scope of negation, whereby the original negative sentence is turned into an affirmative one in translation by moving the negation element to a word phrase or an elliptical sentence. The term antonymic translation covers all three types. Generally, antonymic translation consists not only in the transformation of negative constructions to affirmative or vice versa: an original phraseological unit can also be substituted in the target language by an expression with the opposite meaning or by an occasional antonym. The usage of antonymic translation as one of the methods of contextual replacement has been investigated. The main types of this lexical and grammatical transformation are systematized. Attention is focused on the reasons for using antonymic translation.


Author(s):  
Darya Makarskaya ◽  
◽  
Yurii Kotov ◽  

The article considers text analysis tools based on a numerical estimate obtained using a cryptographic approach: frequency cryptanalysis methods for one-to-one substitution are applied to plaintext. In this setting, the cryptanalysis methods become methods for identifying the letters of the text, and the task of cryptanalysis reduces to the task of identifying those letters. For frequency cryptanalysis methods of one-to-one substitution, the identification error is determined as a statistic of the number of cryptanalysis errors over texts of given volumes. The resulting numerical score, called the quality factor of a method, can be transferred from methods to texts. Text analysis based on the quality factor of texts involves selecting a cryptographic method with whose help a quantitative assessment of the text is given, followed by calculation of the text's quality-factor vector. To support text analysis based on cryptanalysis of a simple substitution, the main stages of this analysis and the necessary tools are determined. These tools are implemented in the graphical user interface of MS Access and include 30 navigation tabs, on which 143 information and control elements are located. The article includes examples of the implementation of some of these tools.
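The core step the article builds on, frequency cryptanalysis of a one-to-one substitution, can be sketched as ranking ciphertext letters by frequency and mapping them onto the expected frequency ranking of the language. This is a generic illustration, not the article's tooling; the English frequency ranking below is a standard approximation, and real identification error falls as text volume grows, which is exactly the statistic the article turns into a "quality factor".

```python
from collections import Counter

# Common approximation of English letters ordered by frequency.
ENGLISH_BY_FREQ = "ETAOINSHRDLCUMWFGYPBVKJXQZ"

def identify_letters(ciphertext: str) -> dict[str, str]:
    """Guess a cipher-to-plain letter mapping from single-letter frequencies:
    the most frequent ciphertext letter is assumed to stand for 'E', etc."""
    ranked = [ch for ch, _ in
              Counter(filter(str.isalpha, ciphertext.upper())).most_common()]
    return dict(zip(ranked, ENGLISH_BY_FREQ))

# Toy ciphertext where X is most frequent, then Y, then Z.
print(identify_letters("XXXXX YYYY ZZZ"))
```

Counting identification errors of such a mapping against the true key, over many texts of a fixed volume, yields the per-method error statistic that the article generalizes into per-text quality factors.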

