combining methods
Recently Published Documents

TOTAL DOCUMENTS: 167 (five years: 44)
H-INDEX: 14 (five years: 2)

2022 ◽  
Author(s):  
Chris Haffenden ◽  
Elena Fano ◽  
Martin Malmsten ◽  
Love Börjeson

How can novel AI techniques be made and put to use in the library? Combining methods from data science and library science, this article focuses on Natural Language Processing technologies, particularly in national libraries. It explains how the National Library of Sweden’s collections enabled the development of a new BERT language model for Swedish. It also outlines specific use cases for the model in the context of academic libraries, detailing strategies for how such a model could make digital collections available for new forms of research: from automated classification to enhanced searchability and improved OCR cohesion. Highlighting the potential for cross-fertilizing AI with libraries, the conclusion suggests that while AI may transform the workings of the library, libraries can also play a key role in the future development of AI.
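
As a concrete illustration of the kind of use case described above, the following minimal sketch loads a Swedish BERT checkpoint with the Hugging Face transformers library and scores a catalogue record against a small label set. The checkpoint name (KB/bert-base-swedish-cased) and the labels are assumptions for illustration only; the classification head shown here is untrained and would need fine-tuning on labelled records before real use.

    # Minimal sketch: load an assumed Swedish BERT checkpoint and score one catalogue record.
    # The checkpoint name and label set are assumptions, not taken from the article.
    from transformers import AutoTokenizer, AutoModelForSequenceClassification
    import torch

    MODEL_NAME = "KB/bert-base-swedish-cased"          # assumed Hugging Face checkpoint
    LABELS = ["fiction", "non-fiction", "periodical"]  # hypothetical catalogue classes

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    # num_labels attaches a fresh (untrained) classification head; in practice it would be
    # fine-tuned on labelled catalogue records before being used for automated classification.
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=len(LABELS))

    record = "En roman om en familj i Norrland under 1800-talet."
    inputs = tokenizer(record, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    print(LABELS[int(logits.argmax(dim=-1))])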


Author(s):  
John M Westfall ◽  
Linda Zittleman ◽  
Maret Felzien ◽  
Jodi Summers Holtrop ◽  
Tristen Hall ◽  
...  

2021 ◽  
pp. 132-152
Author(s):  
Edwin Amenta ◽  
Alexander M. Hicks

This chapter reviews research methods in the study of welfare states and social policy, focusing on causal research. It addresses both comparative studies, which examine the experiences of two or more national or subnational cases, and historical studies, which examine variation over time, often with deep knowledge of cases grounded in primary research. It also addresses studies that combine comparative and historical approaches. The chapter highlights the variety of methodological work in the welfare state area and delineates the advantages and disadvantages of the different approaches employed. These methods range from in-depth historical analyses of a single-country case, historical analyses of a few countries, and Boolean Qualitative Comparative Analysis (QCA) across medium-N samples of countries, to cross-sectional, event-history, and pooled cross-sectional and time-series analyses of large numbers of countries. The chapter concludes with suggestions for synthesizing, triangulating, and combining methods in order to minimize the disadvantages and maximize the advantages of the different approaches.
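
To make the Boolean QCA step mentioned above concrete, here is a minimal crisp-set sketch that collapses binary country-level conditions into a truth table and reports row consistency. The cases, conditions and outcome are invented; real analyses would use dedicated QCA software and formal minimisation.

    # Minimal crisp-set QCA sketch: group cases by configuration of binary conditions and
    # tabulate the outcome per configuration. All data below are hypothetical.
    from collections import defaultdict

    # conditions: (left_government, strong_unions, open_economy) -> outcome: generous_welfare
    cases = {
        "SE": ((1, 1, 1), 1),
        "NO": ((1, 1, 1), 1),
        "DE": ((0, 1, 1), 1),
        "US": ((0, 0, 1), 0),
        "UK": ((0, 0, 1), 0),
    }

    truth_table = defaultdict(list)
    for country, (conditions, outcome) in cases.items():
        truth_table[conditions].append((country, outcome))

    for config, members in sorted(truth_table.items()):
        outcomes = [o for _, o in members]
        consistency = sum(outcomes) / len(outcomes)  # share of cases in this row showing the outcome
        print(config, [c for c, _ in members], f"consistency={consistency:.2f}")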


2021 ◽  
Vol 6 (1) ◽  
Author(s):  
Hannah Lutzenberger ◽  
Connie de Vos ◽  
Onno Crasborn ◽  
Paula Fikkert

Sign language lexicons incorporate phonological specifications. Evidence from emerging sign languages suggests that phonological structure emerges gradually in a new language. In this study, we investigate variation in the form of signs across 20 deaf adult signers of Kata Kolok, a sign language that emerged spontaneously in a Balinese village community. Combining methods previously used for sign comparisons, we introduce a new numeric measure of variation. Our nuanced yet comprehensive approach to form variation integrates three levels (iconic motivation, surface realisation, feature differences) and allows for refinement through weighting the variation score by token and signer frequency. We demonstrate that variation in the form of signs appears in different degrees at different levels. Token frequency in a given dataset greatly affects how much variation can surface, suggesting caution in interpreting previous findings. Different sign variants have different scopes of use among the signing population, with some more widely used than others. Both frequency weightings (token and signer) identify dominant sign variants, i.e., sign forms that are produced frequently or by many signers. We argue that variation does not equal the absence of conventionalisation. Indeed, especially in micro-community sign languages, variation may be key to understanding patterns of language emergence.
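
The following toy sketch illustrates, with invented numbers, what weighting a variation score by token and signer frequency can look like. It is not the authors' actual measure; the weighting scheme and all values are deliberately simplified assumptions.

    # Hypothetical frequency-weighted variation score. Each variant of a sign carries a raw
    # variation score (e.g. derived from feature differences), and the aggregate score
    # down-weights variants that are produced rarely or by few signers. For simplicity,
    # signer proportions are treated independently per variant.
    variants = [
        # (raw_variation_score, token_count, n_signers) -- all values invented
        (0.0, 34, 15),   # dominant variant
        (0.4, 6, 3),     # minor variant differing in a few features
        (0.7, 2, 1),     # idiosyncratic variant
    ]

    total_tokens = sum(t for _, t, _ in variants)
    total_signers = sum(s for _, _, s in variants)

    weighted = sum(
        score * (tokens / total_tokens) * (signers / total_signers)
        for score, tokens, signers in variants
    )
    print(f"token- and signer-weighted variation: {weighted:.3f}")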


2021 ◽  
Author(s):  
James Panayis ◽  
Navodya S. Römer ◽  
Dom Bellini ◽  
A. Katrine Wallis ◽  
Rudolf A. Römer

Abstract We use in silico modelling of the SARS-CoV-2 spike protein and its mutations, as deposited on the Protein Data Bank (PDB), to ascertain their dynamics, flexibility and rigidity. Identifying the precise nature of the dynamics for the spike proteins enables, in principle, the use of further in silico design methods to quickly screen for existing and novel drug molecules that might prohibit the natural protein dynamics. We employ a recent protein flexibility modelling approach, combining methods for deconstructing a protein structure into a network of rigid and flexible units with a method that explores the elastic modes of motion of this network, and a geometric modelling of flexible motion. Our results thus far indicate that the overall motion of wild-type and mutated spike protein structures remains largely the same.
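
For readers who want to experiment with the elastic-mode step of such a pipeline, the sketch below uses the ProDy library's anisotropic network model on a spike structure. This is a simplified stand-in, not the authors' exact toolchain (the rigidity decomposition and geometric simulation stages are not reproduced), and the PDB ID 6VXX is chosen only for illustration.

    # Simplified elastic-network sketch with ProDy: coarse-grain a spike structure to
    # C-alpha atoms, build the Hessian, and compute low-frequency normal modes, which
    # approximate the large-scale flexible motions of the trimer.
    from prody import parsePDB, ANM

    structure = parsePDB("6vxx")          # fetches the structure from the PDB
    calphas = structure.select("calpha")  # coarse-grain to C-alpha atoms

    anm = ANM("spike")
    anm.buildHessian(calphas)             # elastic-network Hessian
    anm.calcModes(n_modes=10)             # ten lowest-frequency modes

    for mode in anm:
        print(mode, "eigenvalue:", mode.getEigval())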


2021 ◽  
Vol 2021 (9) ◽  
Author(s):  
J. Mago ◽  
A. Schreiber ◽  
M. Spradlin ◽  
A. Yelleshpur Srikant ◽  
A. Volovich

Abstract Symbol alphabets of n-particle amplitudes in $\mathcal{N} = 4$ super-Yang-Mills theory are known to contain certain cluster variables of G(4, n) as well as certain algebraic functions of cluster variables. In this paper we solve the $C \cdot Z = 0$ matrix equations associated to several cells of the totally non-negative Grassmannian, combining methods of arXiv:2012.15812 for rational letters and arXiv:2007.00646 for algebraic letters. We identify sets of parameterizations of the top cell of G+(5, 9) for which the solutions produce all of (and only) the cluster variable letters of the 2-loop nine-particle NMHV amplitude, and identify plabic graphs from which all of its algebraic letters originate.
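
A toy version of the $C \cdot Z = 0$ setup, far smaller than the G+(5, 9) computation in the paper, can be reproduced symbolically: parameterize the top cell of G+(1, 5), take generic momentum twistors Z, and solve the four equations (linear in the parameters), which come out as ratios of 4x4 minors of Z. The sketch below is only a schematic of that step.

    # Toy illustration of the C·Z = 0 setup, not the G+(5, 9) computation of the paper.
    # For k = 1, n = 5, parameterize the top cell as C = (1, c2, c3, c4, c5), take a generic
    # 5x4 matrix Z of momentum twistors, and solve the four equations (linear in the c_i).
    # By Cramer's rule the solutions are ratios of 4x4 minors (Plucker coordinates) of Z.
    import sympy as sp

    c2, c3, c4, c5 = sp.symbols("c2 c3 c4 c5")
    C = sp.Matrix([[1, c2, c3, c4, c5]])
    Z = sp.Matrix(5, 4, lambda i, j: sp.Symbol(f"Z{i + 1}{j + 1}"))

    equations = list(C * Z)   # four equations, linear in c2..c5
    solution = sp.solve(equations, [c2, c3, c4, c5], dict=True)[0]

    for var, expr in solution.items():
        print(var, "=", expr)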


2021 ◽  
Vol 14 (36) ◽  
pp. 167-196
Author(s):  
Thales Silva

This article advocates for a set of recent transdisciplinary options for the History of Religion, combining methods from the Natural and Human Sciences, with a special focus on the study of so-called “complex systems”. We elucidate their theoretical bases and limitations while adopting a pragmatic position between a defense of the historical-scientific study of religion and the promotion of groundbreaking methodological outlooks emerging from the Digital Humanities. From this background, throughout the text we argue for complementing historiographical “close reading” with both “distant reading” techniques and interdisciplinary research, using computer-based methods and a range of formal modeling techniques. In short, we conclude that such methods offer novel ways of representing data and are best understood not only as creative schemes for solving issues in historiography, but also as a springboard for new inquiries arising from the transdisciplinarity between the Humanities and the Natural Sciences.
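
As an illustration of one widely used “distant reading” technique, the sketch below fits a small topic model with scikit-learn. The corpus, topic count and settings are invented for illustration and are not drawn from the article, which discusses the methodological stance rather than a specific pipeline.

    # Toy "distant reading" example: topic modelling over a tiny invented corpus.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    documents = [
        "ritual offerings at the temple during the harvest festival",
        "monastic rules copied by scribes in the scriptorium",
        "pilgrimage routes and the trade in relics across the region",
        "harvest rites, offerings, and communal feasting",
    ]

    vectorizer = CountVectorizer(stop_words="english")
    X = vectorizer.fit_transform(documents)

    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    lda.fit(X)

    terms = vectorizer.get_feature_names_out()
    for i, topic in enumerate(lda.components_):
        top = [terms[j] for j in topic.argsort()[-5:][::-1]]
        print(f"topic {i}:", ", ".join(top))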


Author(s):  
Nicholas M. Sard ◽  
Robert D. Hunter ◽  
Edward F. Roseman ◽  
Daniel B. Hayes ◽  
Robin L. DeBruyne ◽  
...  

2021 ◽  
Author(s):  
Kathryn S Taylor ◽  
James W Taylor

Background: Forecasting models have played a pivotal role in decision making during the COVID-19 pandemic, predicting the numbers of cases, hospitalisations and deaths. However, questions have been raised about the role and reliability of these models. The aim of this study was to investigate the potential benefits of combining probabilistic forecasts from multiple models for forecasts of incident and cumulative COVID-19 mortality.
Methods: We considered 95% interval and point forecasts of weekly incident and cumulative COVID-19 mortality between 16 May 2020 and 8 May 2021 in multiple locations in the United States. We compared the accuracy of simple and more complex combining methods, as well as of the individual models.
Results: The average of the forecasts from the individual models was consistently more accurate than the average performance of these models, which provides a fundamental motivation for combining. Weighted combining performed well for both incident and cumulative mortality, and for both interval and point forecasting. Inverse score with tuning was the most accurate method overall. The median combination was a leading method in the last quarter for both mortality types, and it was consistently more accurate than the mean combination for point forecasting of both. For interval forecasts of cumulative mortality, the mean performed better than the median. The leading individual models were most competitive for point forecasts of incident mortality.
Conclusions: We recommend harnessing the wisdom of the crowd, which can improve the contribution of probabilistic forecasting of epidemics to health policy decision making; the relative performance of the different combining methods depends on several factors, including the type of data, the type of forecast, and the timing.
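
The sketch below illustrates three of the combining approaches compared (mean, median, and inverse-score weighting with a tuning parameter) on invented point forecasts. The study itself evaluates 95% interval and point forecasts with proper scoring rules, so this is only a schematic of the weighting logic, not the paper's evaluation.

    # Schematic of mean, median, and inverse-score combining. Forecasts and historical
    # scores are invented; "lower score = better past performance" is assumed.
    import numpy as np

    point_forecasts = np.array([1200.0, 1350.0, 980.0, 1500.0])   # hypothetical weekly deaths
    historical_scores = np.array([10.0, 25.0, 18.0, 40.0])        # hypothetical past error per model

    mean_combo = point_forecasts.mean()
    median_combo = np.median(point_forecasts)

    # Inverse-score weighting: models with smaller past error get larger weight. A tuning
    # exponent sharpens or flattens the weights (the role played by "tuning" in the paper
    # is assumed to be of this kind for this illustration).
    tuning = 2.0
    weights = historical_scores ** -tuning
    weights /= weights.sum()
    weighted_combo = float(weights @ point_forecasts)

    print(f"mean: {mean_combo:.0f}  median: {median_combo:.0f}  inverse-score: {weighted_combo:.0f}")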

