North American
Recently Published Documents


TOTAL DOCUMENTS

32338
(FIVE YEARS 7674)

H-INDEX

209
(FIVE YEARS 48)

Author(s):  
Lisa M. Holsinger ◽  
Sean A. Parks ◽  
Lisa B. Saperstein ◽  
Rachel A. Loehman ◽  
Ellen Whitman ◽  
...  

2021 ◽  
Author(s):  
Chris Bryan ◽  
Ehsaan Nasir

Abstract Evaluating Electrical Submersible Pump (ESP) run-lives and performance in unconventional well environments is challenging due to many different factors, including the reservoir, well design, and production fluids. Moreover, reviewing the run-lives of ESPs in a field can be rather complex because the run-life data are incomplete: ESPs are often pulled while still operational, or an ESP has not yet been allowed to run until failure. These are some of the complications that arise when gauging ESP performance. A large dataset of ESP installs in North American unconventional applications was assessed using Kaplan-Meier survival analysis to better understand the factors that may affect ESP run-lives. The factors studied included, but were not limited to: basin and producing formation; ESP component types, such as pumps and motors, and new versus used components; and completion intensity of the frac job (lb/ft of proppant). Kaplan-Meier survival analysis is one of the most commonly used methods for measuring the fraction or probability of a group surviving past given time periods because it accounts for incomplete (censored) observations. Kaplan-Meier analysis generates a survival curve showing the declining fraction of surviving ESPs over time. Survival curves can be compared by segmenting the run-life data into buckets based on different factors, which allows the statistical significance of each factor, and its effect on ESP survivability, to be analyzed; a sketch of this approach appears below. Kaplan-Meier analysis was performed on the aforementioned dataset to answer these questions and to better understand the factors that affect ESP run-lives in North American unconventional plays. This work uses a unique dataset that encompasses several different ESP designs installed across different North American plays. The observations and conclusions drawn from it through survival analysis can help in benchmarking ESP runtimes and in identifying what works to prolong ESP run-life. The workflow is also applicable to any asset in order to better understand the drivers behind ESP run-life performance.
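For illustration, here is a minimal sketch of the kind of censored survival comparison the abstract describes, using the open-source lifelines library. The column names, the two example basins, and all numbers are hypothetical; the paper's data and code are not published.

```python
# Kaplan-Meier comparison of ESP run-lives across two hypothetical buckets.
# "event" is 1 when the ESP ran to failure and 0 when it was pulled while
# still operational (right-censored) -- the incompleteness the abstract notes.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical run-life records (days) for two basins.
data = pd.DataFrame({
    "basin":    ["Permian"] * 4 + ["Bakken"] * 4,
    "run_days": [120, 340, 510, 95, 200, 410, 150, 620],
    "event":    [1, 1, 0, 1, 1, 0, 1, 0],   # 0 = censored (pulled while running)
})

kmf = KaplanMeierFitter()
for basin, grp in data.groupby("basin"):
    # Fit a survival curve per bucket; censored pulls are handled correctly.
    kmf.fit(grp["run_days"], event_observed=grp["event"], label=basin)
    print(kmf.survival_function_)

# Log-rank test: do the two buckets' survival curves differ significantly?
a = data[data["basin"] == "Permian"]
b = data[data["basin"] == "Bakken"]
result = logrank_test(a["run_days"], b["run_days"],
                      event_observed_A=a["event"], event_observed_B=b["event"])
print(f"log-rank p-value: {result.p_value:.3f}")
```

The same pattern extends to any of the segmentations mentioned above (component type, new versus used equipment, proppant intensity buckets) by changing the grouping column.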


2021 ◽  
Vol 263 ◽  
pp. 109329
Author(s):  
Nicole L. Michel ◽  
Keith A. Hobson ◽  
Christy A. Morrissey ◽  
Robert G. Clark

2021 ◽  
pp. 1-16
Author(s):  
Miguel Vilches Hinojosa ◽  
Jaime Rivas Castillo ◽  
María Vidal De Haymes

2021 ◽  
Vol 4 ◽  
Author(s):  
Rolando Coto-Solano ◽  
James N. Stanford ◽  
Sravana K. Reddy

In recent decades, computational approaches to sociophonetic vowel analysis have been steadily increasing, and sociolinguists now frequently use semi-automated systems for phonetic alignment and vowel formant extraction, including FAVE (Forced Alignment and Vowel Extraction; Rosenfelder et al., 2011; Evanini et al., Proceedings of Interspeech, 2009), the Penn Aligner (Yuan and Liberman, J. Acoust. Soc. America, 2008, 123, 3878), and DARLA (Dartmouth Linguistic Automation; Reddy and Stanford, DARLA Dartmouth Linguistic Automation: Online Tools for Linguistic Research, 2015a). Yet these systems still have a major bottleneck: manual transcription. For most modern sociolinguistic vowel alignment and formant extraction, researchers must first create manual transcriptions. This human step is painstaking, time-consuming, and resource-intensive. If this manual step could be replaced with completely automated methods, sociolinguists could potentially tap into vast datasets that have previously been unexplored, including legacy recordings that are underutilized due to lack of transcriptions. Moreover, if sociolinguists could quickly and accurately extract phonetic information from the millions of hours of new audio content posted on the Internet every day, a virtual ocean of speech from newly created podcasts, videos, live-streams, and other audio content would now inform research. How close are the current technological tools to achieving such groundbreaking changes for sociolinguistics? Prior work (Reddy et al., Proceedings of the North American Association for Computational Linguistics 2015 Conference, 2015b, 71–75) showed that an HMM-based automatic speech recognition (ASR) system, trained with CMU Sphinx (Lamere et al., 2003), was accurate enough for DARLA to uncover evidence of the US Southern Vowel Shift without any human transcription. Even so, because that ASR system relied on a small training set, it produced numerous transcription errors. Six years have passed since that study, and in that time numerous end-to-end ASR algorithms have shown considerable improvement in transcription quality. One example of such a system is the RNN/CTC-based DeepSpeech from Mozilla (Hannun et al., 2014). (RNN stands for recurrent neural network, the learning mechanism for DeepSpeech; CTC stands for connectionist temporal classification, the mechanism that merges phones into words.) The present paper combines DeepSpeech with DARLA to push the technological envelope and determine how well contemporary ASR systems can perform in completely automated vowel analyses with sociolinguistic goals. Specifically, we used these techniques on audio recordings from 352 North American English speakers in the International Dialects of English Archive (IDEA), extracting 88,500 tokens of vowels in stressed position from spontaneous, free speech passages. With this large dataset we conducted acoustic sociophonetic analyses of the Southern Vowel Shift and the Northern Cities Chain Shift in the North American IDEA speakers. We compared the results using three different sources of transcriptions: 1) IDEA’s manual transcriptions as the baseline “ground truth”, 2) the ASR built on CMU Sphinx used by Reddy et al. (2015b), and 3) the latest publicly available Mozilla DeepSpeech system.
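Concretely, the fully automated transcription step might look like the following sketch, built on Mozilla's publicly released DeepSpeech 0.9 Python bindings. The model and audio filenames are placeholders, and this is an assumed reconstruction rather than the authors' exact pipeline.

```python
# Transcribe one recording with a pretrained DeepSpeech model; the resulting
# plain-text transcript is what a tool like DARLA needs for forced alignment.
import wave
import numpy as np
import deepspeech

model = deepspeech.Model("deepspeech-0.9.3-models.pbmm")        # acoustic model
model.enableExternalScorer("deepspeech-0.9.3-models.scorer")    # language model rescoring

with wave.open("speaker_001.wav", "rb") as w:
    # DeepSpeech expects 16 kHz, 16-bit mono PCM.
    assert w.getframerate() == model.sampleRate()
    audio = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)

transcript = model.stt(audio)
print(transcript)
```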
We input these three different transcriptions to DARLA, which automatically aligned and extracted the vowel formants for the 352 IDEA speakers. Our quantitative results show that newer ASR systems like DeepSpeech hold considerable promise for sociolinguistic applications like DARLA. We found that DeepSpeech’s automated transcriptions had a significantly lower character error rate than those from the prior Sphinx system (35% versus 46%). When we performed the sociolinguistic analysis of the vowel formants extracted by DARLA, we found that the automated transcriptions from DeepSpeech matched the results from the ground truth for the Southern Vowel Shift (SVS): five vowels showed a shift in both transcriptions, and two vowels showed no shift in either transcription. The Northern Cities Shift (NCS) was more difficult to detect, but ground truth and DeepSpeech matched for four vowels: one vowel showed a clear shift, and three showed no shift in either transcription. Our study therefore shows how technology has made progress toward greater automation in vowel sociophonetics, while also showing what remains to be done. Our statistical modeling provides a quantified view of both the abilities and the limitations of a completely “hands-free” analysis of vowel shifts in a large dataset. Naturally, when comparing a completely automated system against a semi-automated system involving human manual work, there will always be a tradeoff between accuracy on the one hand and speed and replicability on the other (Kendall and Joseph, Towards best practices in sociophonetics (with Marianna DiPaolo), 2014). The amount of “noise” that can be tolerated for a given study will depend on the particular research goals and the researchers’ preferences. Nonetheless, our study shows that, for certain large-scale applications and research goals, a completely automated approach using publicly available ASR can produce meaningful sociolinguistic results across large datasets, and these results can be generated quickly, efficiently, and with full replicability.
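For reference, the character error rate used to compare the two ASR systems is simply the Levenshtein (edit) distance between the hypothesis and the ground-truth transcription, normalized by the length of the ground truth. A minimal self-contained sketch (the example strings are invented):

```python
def levenshtein(ref: str, hyp: str) -> int:
    """Edit distance between two strings via dynamic programming."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (r != h)))   # substitution
        prev = cur
    return prev[-1]

def cer(ref: str, hyp: str) -> float:
    """Character error rate: edit distance / reference length."""
    return levenshtein(ref, hyp) / max(len(ref), 1)

print(cer("the quick brown fox", "the quick browne fax"))  # 2 edits / 19 chars ~ 0.105
```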


2021 ◽  
Author(s):  
J. S Sinclair ◽  
M. E. Fraker ◽  
J. M. Hood ◽  
K. T. Frank ◽  
M. R. DuFour ◽  
...  

Author(s):  
Beckett Sterner ◽  
Nathan Upham ◽  
Prashant Gupta ◽  
Caleb Powell ◽  
Nico Franz

Making the most of biodiversity data requires linking observations of biological species from multiple sources both efficiently and accurately (Bisby 2000, Franz et al. 2016). Aggregating occurrence records using taxonomic names and synonyms is computationally efficient but known to suffer significant limitations on accuracy when the assumption of one-to-one relationships between names and biological entities breaks down (Remsen 2016, Franz and Sterner 2018). Taxonomic treatments and checklists provide authoritative information about the correct usage of names for species, including operational representations of the meanings of those names in the form of range maps, reference genetic sequences, or diagnostic traits. They increasingly provide taxonomic intelligence in the form of precise descriptions of the semantic relationships between different published names in the literature. Making this authoritative information Findable, Accessible, Interoperable, and Reusable (FAIR; Wilkinson et al. 2016) would be a transformative advance for biodiversity data sharing and would help drive adoption and novel extensions of existing standards such as the Taxonomic Concept Schema and the OpenBiodiv Ontology (Kennedy et al. 2006, Senderov et al. 2018). We call on the broader, global Biodiversity Information Standards (TDWG) and taxonomy community to commit to extending and expanding how FAIR applies to biodiversity data, and to include practical targets and criteria for the publication and digitization of taxonomic concept representations and alignments in taxonomic treatments, checklists, and backbones. As a motivating case, consider the abundantly sampled North American deer mouse—Peromyscus maniculatus (Wagner 1845)—which was recently split from one continental species into five more narrowly defined forms, so that the name P. maniculatus is now only applied east of the Mississippi River (Bradley et al. 2019, Greenbaum et al. 2019). That single change instantly rendered ambiguous ~7% of North American mammal records in the Global Biodiversity Information Facility (n=242,663, downloaded 2021-06-04; GBIF.org 2021) and one-third of all National Ecological Observatory Network (NEON) small mammal samples (n=10,256, downloaded 2021-06-27). While this type of ambiguity is common in name-based databases when species are split, the example of P. maniculatus is particularly striking for its impact on biological questions ranging from hantavirus surveillance in North America to studies of climate change impacts on rodent life-history traits. Of special relevance to NEON sampling is recent evidence suggesting that deer mice may transmit SARS-CoV-2 (Griffin et al. 2021). Automating the updating of occurrence records in these and other cases will require operational representations of taxonomic concepts—e.g., range maps, reference sequences, and diagnostic traits—that are FAIR, in addition to taxonomic concept alignment information (Franz and Peet 2009). Despite steady progress, it remains difficult to find, access, and reuse authoritative information about how to apply taxonomic names even when it is already digitized. It can also be difficult to tell without manual inspection whether similar types of concept representations derived from multiple sources, such as range maps or reference sequences selected from different research articles or checklists, are in fact interoperable for a particular application.
This issue is distinct from important ongoing efforts to digitize trait information in species circumscriptions, for example, and focuses instead on how already digitized knowledge can best be packaged to inform human experts and artificial intelligence applications (Sterner and Franz 2017). We therefore propose developing community guidelines and criteria for FAIR taxonomic concept representations as "semantic artefacts" of general relevance to linked open data and life sciences research (Le Franc et al. 2020).
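To make the idea concrete, here is a deliberately simplified, hypothetical sketch of how a machine-readable concept alignment plus an operational range rule could re-resolve name-based occurrence records after a split. The alignment table, the longitude cutoff, and the concept labels are illustrative only, not authoritative taxonomy.

```python
# Hypothetical sketch: re-resolving name-based occurrence records after a
# taxonomic split, using a concept alignment paired with a crude operational
# range rule (longitude east/west of the Mississippi, here fixed at -90.0).
from dataclasses import dataclass

@dataclass
class Occurrence:
    name: str        # taxonomic name as recorded in the source database
    lon: float       # decimal degrees
    lat: float

# Concept alignment: the pre-split name maps to post-split concepts, each
# carrying an operational test standing in for a FAIR range-map artefact.
ALIGNMENT = {
    "Peromyscus maniculatus": [
        ("Peromyscus maniculatus sec. Bradley et al. 2019",
         lambda occ: occ.lon > -90.0),   # roughly east of the Mississippi
        ("Peromyscus sonoriensis sec. Bradley et al. 2019",
         lambda occ: occ.lon <= -90.0),  # western form split off
    ],
}

def resolve(occ: Occurrence) -> str:
    """Return the post-split concept label, or flag the record as ambiguous."""
    for concept, test in ALIGNMENT.get(occ.name, []):
        if test(occ):
            return concept
    return f"AMBIGUOUS: {occ.name}"

print(resolve(Occurrence("Peromyscus maniculatus", lon=-105.1, lat=40.0)))
```

In practice the lambda tests would be replaced by queries against published, FAIR concept representations (range maps, reference sequences, diagnostic traits), which is precisely what the proposed guidelines would make findable and interoperable.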


2021 ◽  
Vol 13 ◽  
pp. 1436-1441
Author(s):  
Luciana Martins Da Rosa ◽  
Bruna Aline Irmão ◽  
Laura Cavalcanti de Farias Brehmer ◽  
Amanda Espíndola De Andrade ◽  
Melissa Orlandi Honório Locks ◽  
...  

Objective: To identify the sociodemographic and clinical profile and the nursing diagnoses of people with diabetes mellitus, established during bedside nursing consultations. Method: Descriptive observational study, conducted in 2017 with 37 participants (non-probabilistic sample) in a medical or surgical inpatient unit of a teaching hospital in southern Brazil. Study variables: sociodemographic and clinical data and nursing diagnoses from the North American Nursing Diagnosis Association, submitted to simple descriptive statistics. Results: 89.21% of participants had type 2 diabetes; mean time since diagnosis was 9.6 years; 70.2% were hypertensive; 56.7% were smokers; 16.2% were insulin-dependent; 32.4% used refined sugar; 59.45% combined two or more carbohydrates in the same meal. The most frequent diagnoses were: Risk for unstable blood glucose level (97.37%), Risk for infection (97.37%), Deficient knowledge (81.58%), Sedentary lifestyle (60.53%), and Ineffective health management (60.53%). Conclusion: Identifying the profile and the nursing diagnoses enables better nursing care planning.

