String Comparison in XSLT with tan:diff()

Author(s):  
Joel Kalvesmaki

Classical models of string comparison have been difficult to implement in XSLT, in part because those models are designed for imperative, stateful programming. In this article I introduce tan:diff(), an XSLT function built upon a different approach to string comparison, one more conducive to a declarative, stateless language. tan:diff() is efficient and fast, even on pairs of very long strings (100K to 1M characters), in part because of its staggered-sample approach and in part because of its strategies for optimizing comparison of enormous strings (> 1M characters). Its results are of optimal quality: the function normally returns a minimal diff (shortest edit script). As an open-source function, tan:diff() enables developers to incorporate robust text comparison directly into XML applications.
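The abstract does not show a call signature, but a minimal sketch of how such a function might be invoked from a stylesheet looks like the following. The include path, the namespace URI, and the exact shape of the returned fragment are assumptions for illustration, not details taken from the article.

<xsl:stylesheet version="3.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:tan="tag:textalign.net,2015:ns">

   <!-- Placeholder path: point this at the stylesheet in the open-source
        TAN library that defines tan:diff(). -->
   <xsl:include href="TAN-function-library.xsl"/>

   <xsl:output method="xml" indent="yes"/>

   <!-- Entry point, e.g. invoked with Saxon's -it option. -->
   <xsl:template name="xsl:initial-template">
      <!-- Compare two strings; the result is assumed to be an XML fragment
           whose children mark text unique to the first string, unique to
           the second, and common to both. -->
      <xsl:sequence select="tan:diff('The quick brown fox', 'The quick red fox')"/>
   </xsl:template>

</xsl:stylesheet>

Because the output is ordinary XML, downstream templates can render, count, or otherwise post-process the common and divergent segments like any other node sequence.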

2021, pp. e20210023
Author(s):  
Stéphanie E. M. Gauvin  
Kathleen E. Merwin  
Jessica A. Maxwell  
Chelsea D. Kilimnik  
John Kitchener Sakaluk

Sexual scientists typically default to appraising the reliability of their self-report measures by calculating one or more α coefficients. Despite the prolific use of α, few researchers understand how to situate and make sense of α within the psychometric theories used to develop their measures (e.g., latent variable theory), and many unknowingly violate the assumptions of α. In this paper, we describe the disconnect between α and latent variable theory and the restrictive assumptions α consequently makes. We also introduce an alternative metric of reliability, omega (ω), that is compatible with latent variable theory. We then provide a tutorial that walks readers through didactic examples of how to calculate ω metrics of reliability using getOmega(), a simple open-source function we created to automate the estimation of ω. Next, we introduce the Measurement of Sexuality and Intimacy Constructs (MoSaIC) project to provide insight into the state of reliability in sexuality science, contrasting α and ω estimates of reliability across seven sexuality measures, selected for their emerging or established relevance and influence in the field of sexuality, in both a queer (LGBTQ+) sample (n = 545) and a United States representative sample (n = 548). We finish with pragmatic suggestions for editors, reviewers, and authors. By better understanding the available reliability metrics, sexual scientists can consider more carefully how they present and assess the reliability of their measures, and ultimately help improve the replicability of our science.
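For readers who have not seen the two coefficients side by side, the standard textbook formulas (not given in the abstract; the λ/θ notation follows the common single-factor model) make the contrast concrete:

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^2}{\sigma_X^2}\right),
\qquad
\omega = \frac{\bigl(\sum_{i=1}^{k}\lambda_i\bigr)^2}{\bigl(\sum_{i=1}^{k}\lambda_i\bigr)^2 + \sum_{i=1}^{k}\theta_{ii}}
\]

where k is the number of items, σ_i² and σ_X² are the item and total-score variances, and λ_i and θ_ii are the loadings and unique (error) variances from a single-factor model. In effect, α assumes every item loads equally on the latent variable (the restrictive assumption the paper refers to), whereas ω uses the estimated loadings themselves.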

