Deriving Habitat Models for Northern Long-Eared Bats from Historical Detection Data: A Case Study Using the Fernow Experimental Forest

2016
Vol 7 (1)
pp. 86-98
Author(s):
W. Mark Ford
Alexander Silvis
Jane L. Rodrigue
Andrew B. Kniowski
Joshua B. Johnson

Abstract The listing of the northern long-eared bat (Myotis septentrionalis) as federally threatened under the Endangered Species Act following severe population declines from white-nose syndrome presents considerable challenges to natural resource managers. Because the northern long-eared bat is a forest habitat generalist, development of effective conservation measures will depend on an appropriate understanding of its habitat relationships at individual locations. However, severely reduced population sizes make gathering data for such models difficult. As a result, historical data may be essential in the development of habitat models. To date, there has been little evaluation of how effective historical bat presence data, such as data derived from mist-net captures, acoustic detection, and day-roost locations, may be in developing habitat models, nor is it clear how models created using different data sources may differ. We explored this issue by creating presence probability models for the northern long-eared bat on the Fernow Experimental Forest in the central Appalachian Mountains of West Virginia using a historical, presence-only data set. Each presence data type produced outputs that were dissimilar but that still corresponded with known traits of the northern long-eared bat or were easily explained in the context of the particular data collection protocol. However, our results also highlight potential limitations of individual data types. For example, models from mist-net capture data showed high probability of presence only along the dendritic network of riparian areas, an obvious artifact of the sampling methodology. Development of ecological niche and presence models for northern long-eared bat populations could be highly valuable for resource managers going forward with this species. We caution, however, that efforts to create such models should consider the substantial limitations of models derived from historical data and address model assumptions.
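Presence-only modeling of this kind is commonly implemented by contrasting detection sites with background pseudo-absence points sampled across the study area; the sketch below illustrates that generic workflow, not the authors' specific models. The covariates (elevation, distance to stream, canopy cover) and all values are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Hypothetical covariates at known detection sites:
# (elevation m, distance to stream m, canopy cover fraction)
presence = rng.normal(loc=[400, 50, 0.8], scale=[80, 30, 0.1], size=(120, 3))

# Background pseudo-absence points sampled across the study area
background = rng.normal(loc=[500, 300, 0.6], scale=[150, 200, 0.2], size=(1000, 3))

X = np.vstack([presence, background])
y = np.concatenate([np.ones(len(presence)), np.zeros(len(background))])

model = LogisticRegression(max_iter=1000).fit(X, y)
# Scores are relative (not absolute) presence probabilities,
# since background points are not confirmed absences
print(model.predict_proba(X[:5])[:, 1])
```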

2021
pp. 135481662110088
Author(s):
Sefa Awaworyi Churchill
John Inekwe
Kris Ivanovski

Using a historical data set and recent advances in non-parametric time series modelling, we investigate the nexus between tourism flows and house prices in Germany over nearly 150 years. We use time-varying non-parametric techniques because historical data tend to exhibit abrupt changes and other forms of non-linearity. Our findings show evidence of a time-varying effect of tourism flows on house prices, although the effects are mixed. The pre-World War II time-varying estimates show both positive and negative effects of tourism on house prices. Changes in tourism flows contribute to rising house prices over the post-1950 period, but this effect is short-lived and declines until the mid-1990s. After 2000, however, we find a positive and significant relationship, with the impact of tourism on house prices becoming more pronounced in recent years.
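The paper's exact estimator is not reproduced here, but the flavor of a time-varying, non-parametric coefficient can be conveyed with kernel-weighted local least squares: at each point in time, the slope of house prices on tourism flows is re-estimated with observations weighted by their temporal distance. A minimal sketch with made-up data:

```python
import numpy as np

def time_varying_slope(t, x, y, bandwidth=10.0):
    """Local-linear estimate of a time-varying slope: at each time t0,
    regress y on x with Gaussian kernel weights in |t - t0|."""
    slopes = np.empty(len(t))
    X = np.column_stack([np.ones_like(x), x])
    for i, t0 in enumerate(t):
        w = np.exp(-0.5 * ((t - t0) / bandwidth) ** 2)
        Xw = X * w[:, None]                      # weight each observation
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)
        slopes[i] = beta[1]
    return slopes

# Toy usage: 150 "years" where the true effect flips sign mid-sample
rng = np.random.default_rng(0)
t = np.arange(150.0)
tourism = rng.normal(size=150).cumsum()
prices = np.where(t > 75, 0.8, -0.2) * tourism + rng.normal(size=150)
print(time_varying_slope(t, tourism, prices)[::30])
```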


Geophysics
2014
Vol 79 (1)
pp. IM1-IM9
Author(s):
Nathan Leon Foks
Richard Krahenbuhl
Yaoguo Li

Compressive inversion uses computational algorithms that decrease the time and storage needs of a traditional inverse problem. Most compression approaches focus on the model domain, and very few, other than traditional downsampling, focus on the data domain for potential-field applications. To further compression in the data domain, we developed a direct and practical approach to the adaptive downsampling of potential-field data for large inversion problems. The approach is formulated to significantly reduce the quantity of data in relatively smooth or quiet regions of the data set, while preserving the signal anomalies that contain the relevant target information. Two major benefits arise from this form of compressive inversion. First, because the approach compresses the problem in the data domain, it can be applied immediately without the addition of, or modification to, existing inversion software. Second, because most industry software uses some form of model or sensitivity compression, the addition of this adaptive data sampling creates a complete compressive inversion methodology whereby the reduction of computational cost is achieved simultaneously in the model and data domains. We applied the method to a synthetic magnetic data set and two large field magnetic data sets; the method is also applicable to other data types. Our results showed that the relevant model information is maintained after inversion despite using only 1%–5% of the data.
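The abstract does not give the algorithm's details, but the core idea of adaptive data-domain downsampling can be sketched: keep dense sampling where the field varies rapidly (anomalies) and only a coarse subset in smooth, quiet regions. The threshold and stride below are hypothetical knobs, not values from the paper.

```python
import numpy as np

def adaptive_downsample(positions, values, grad_frac=0.1, quiet_stride=10):
    """Adaptively downsample a potential-field profile: retain all points
    whose local gradient exceeds `grad_frac` of the maximum gradient
    (signal anomalies), plus every `quiet_stride`-th point elsewhere."""
    grad = np.abs(np.gradient(values, positions))
    keep = grad >= grad_frac * grad.max()
    keep[::quiet_stride] = True            # coarse background coverage
    return positions[keep], values[keep]

# Toy profile: smooth regional trend plus one sharp magnetic anomaly
x = np.linspace(0.0, 10.0, 2000)
field = 5.0 / (1.0 + (x - 6.0) ** 2 / 0.05) + 0.01 * x
xs, fs = adaptive_downsample(x, field)
print(f"kept {len(xs)} of {len(x)} samples")
```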


2020
Vol 8
Author(s):
Devasis Bassu
Peter W. Jones
Linda Ness
David Shallcross

Abstract In this paper, we present a theoretical foundation for representing a data set as a measure in a very large, hierarchically parametrized family of positive measures, whose parameters can be computed explicitly (rather than estimated by optimization), and we illustrate its applicability to a wide range of data types. The preprocessing step then consists of representing data sets as simple measures. The theoretical foundation consists of a dyadic product formula representation lemma and a visualization theorem. We also define an additive multiscale noise model that can be used to sample from dyadic measures, and a more general multiplicative multiscale noise model that can be used to perturb continuous functions, Borel measures, and dyadic measures. The first two results are based on theorems in [15, 3, 1]. The representation uses the very simple concept of a dyadic tree and hence is widely applicable, easily understood, and easily computed. Since the data sample is represented as a measure, subsequent analysis can exploit statistical and measure-theoretic concepts and theories. Because the representation is built on a dyadic tree defined on the universe of a data set, and its parameters are explicitly computable, easily interpretable, and visualizable, we hope that this approach will be broadly useful to mathematicians, statisticians, and computer scientists who are intrigued by or involved in data science, including its mathematical foundations.
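As a concrete illustration of "explicitly computable parameters", here is a minimal sketch (our reading of the idea, not the paper's code) that represents a 1-D sample on [0, 1) as a dyadic measure: for each dyadic interval down to a fixed depth, the parameter is simply the fraction of the interval's mass falling in its left half, obtained by counting rather than optimization.

```python
import numpy as np

def dyadic_parameters(sample, depth=4):
    """For every dyadic interval of [0, 1) down to `depth`, record the
    fraction of the sample mass in its left half.
    Returns {(level, index): left_mass_fraction}."""
    sample = np.asarray(sample)
    params = {}
    for level in range(depth):
        width = 2.0 ** -level
        for index in range(2 ** level):
            lo = index * width
            in_interval = (sample >= lo) & (sample < lo + width)
            total = in_interval.sum()
            if total == 0:
                continue  # empty intervals carry no parameter
            left = ((sample >= lo) & (sample < lo + width / 2)).sum()
            params[(level, index)] = left / total
    return params

# Example: parameters of 1,000 Beta(2, 5)-distributed points
rng = np.random.default_rng(0)
print(dyadic_parameters(rng.beta(2, 5, size=1000), depth=3))
```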


ZooKeys
2014
Vol 368
pp. 79-89
Author(s):
Giovanni Amori
Gaetano Aloise
Luca Luiselli


Author(s):
José Caldas
Samuel Kaski

Biclustering is the unsupervised learning task of mining a data matrix for useful submatrices, for instance groups of genes that are co-expressed under particular biological conditions. As these submatrices are expected to partly overlap, a significant challenge in biclustering is to develop methods that are able to detect overlapping biclusters. The authors propose a probabilistic mixture modelling framework for biclustering biological data that accommodates various data types and allows biclusters to overlap. Their framework is akin to the latent feature and mixture-of-experts model families, with inference and parameter estimation performed via a variational expectation-maximization algorithm. The model compares favorably with competing approaches, both on a binary DNA copy number variation data set and on a miRNA expression data set, indicating that it may potentially be used as a general problem-solving tool in biclustering.
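The latent-feature view can be made concrete with a toy generative model (our illustration, not the authors' exact formulation): rows and columns carry binary memberships in several biclusters, memberships may overlap, and each cell accumulates the effects of all biclusters covering it.

```python
import numpy as np

rng = np.random.default_rng(1)
n_rows, n_cols, n_biclusters = 60, 40, 3

# Binary latent features: a row/column may belong to several biclusters
row_member = rng.random((n_rows, n_biclusters)) < 0.3
col_member = rng.random((n_cols, n_biclusters)) < 0.3
effects = rng.normal(loc=2.0, scale=0.5, size=n_biclusters)

# Each cell sums the effects of every bicluster covering it, plus noise
signal = (row_member * effects) @ col_member.astype(float).T
data = signal + rng.normal(scale=0.5, size=(n_rows, n_cols))

coverage = row_member.astype(int) @ col_member.astype(int).T
print("cells covered by more than one bicluster:", int((coverage > 1).sum()))
```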


2020
Vol 17 (6)
pp. 607-616
Author(s):
Anthony Hatswell
Nick Freemantle
Gianluca Baio
Emmanuel Lesaffre
Joost van Rosmalen

Background While placebo-controlled randomised controlled trials remain the standard way to evaluate drugs for efficacy, historical data are used extensively across the development cycle, ranging from supplementing contemporary data to increase the power of trials, to cross-trial comparisons for estimating comparative efficacy. In many cases, these approaches are applied without an in-depth review of the context of the data, which may lead to bias and incorrect conclusions. Methods We discuss the original ‘Pocock’ criteria for the use of historical data and how the use of historical data has evolved over time. Based on these factors and personal experience, we created a series of questions that may be asked of historical data prior to their use. Based on the answers to these questions, various statistical approaches are recommended. The strategy is illustrated with a case study in colorectal cancer. Results A number of areas need to be considered with historical data, which we split into three categories: outcome measurement, study/patient characteristics (including setting and inclusion/exclusion criteria), and disease process/intervention effects. Each of these areas may introduce issues if not appropriately handled, while some may preclude the use of historical data entirely. We present a tool (in the form of a table) for highlighting any such issues. Application of the tool to a colorectal cancer data set demonstrates under what conditions historical data could be used and what the limitations of such an analysis would be. Conclusion Historical data can be a powerful tool to augment or compare with contemporary trial data, though caution is required. We present some of the issues that should be considered when using historical data and what (if any) statistical approaches may account for differences between studies. We recommend that, where historical data are to be used in analyses, potential differences between studies be addressed explicitly.
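The paper does not prescribe a single method, but one widely used way to borrow from historical data while guarding against such differences is a power prior, which raises the historical likelihood to a discount exponent a0 between 0 and 1. A minimal beta-binomial sketch with made-up response counts:

```python
from scipy import stats

# Hypothetical counts: historical trial 18/50 responders, current trial 12/40
hist_x, hist_n = 18, 50
curr_x, curr_n = 12, 40

def posterior(a0):
    """Power prior: a0 = 0 ignores the historical data, a0 = 1 pools it
    fully. With a Beta(1, 1) initial prior the posterior stays conjugate."""
    return stats.beta(1 + a0 * hist_x + curr_x,
                      1 + a0 * (hist_n - hist_x) + (curr_n - curr_x))

for a0 in (0.0, 0.5, 1.0):
    post = posterior(a0)
    print(f"a0={a0}: mean={post.mean():.3f}, "
          f"95% CrI=({post.ppf(0.025):.3f}, {post.ppf(0.975):.3f})")
```

Larger a0 narrows the credible interval but pulls the estimate toward the historical response rate, which is exactly the trade-off the paper's screening questions are meant to inform.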


2020
Vol 12 (23)
pp. 4007
Author(s):
Kasra Rafiezadeh Shahi
Pedram Ghamisi
Behnood Rasti
Robert Jackisch
Paul Scheunders
...  

The increasing amount of information acquired by imaging sensors in Earth Sciences results in the availability of a multitude of complementary data (e.g., spectral, spatial, elevation) for monitoring of the Earth’s surface. Many studies have investigated the use of multi-sensor data sets to improve the performance of supervised learning-based approaches at various tasks (e.g., classification and regression), while unsupervised learning-based approaches have received less attention. In this paper, we propose a new approach to fuse multiple data sets from imaging sensors using a multi-sensor sparse-based clustering algorithm (Multi-SSC). A technique for the extraction of spatial features (i.e., morphological profiles (MPs) and invariant attribute profiles (IAPs)) is applied to high-spatial-resolution data to derive the spatial and contextual information. This information is then fused with spectrally rich data such as multi- or hyperspectral data. In order to fuse multi-sensor data sets, a hierarchical sparse subspace clustering approach is employed. More specifically, a lasso-based binary algorithm is used to fuse the spectral and spatial information prior to automatic clustering. The proposed framework ensures that the generated clustering map is smooth and preserves the spatial structures of the scene. In order to evaluate the generalization capability of the proposed approach, we investigate its performance not only on diverse scenes but also on different sensors and data types. The first two data sets are geological data sets consisting of hyperspectral and RGB data. The third data set is the well-known benchmark Trento data set, which includes hyperspectral and LiDAR data. Experimental results indicate that this novel multi-sensor clustering algorithm can provide an accurate clustering map compared to state-of-the-art sparse subspace-based clustering algorithms.
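Multi-SSC itself is not reproduced here, but the lasso-based self-representation step at the heart of sparse subspace clustering can be sketched generically: each sample (e.g., a pixel's stacked spectral and spatial features after fusion) is expressed as a sparse combination of the others, and the resulting coefficients define an affinity for spectral clustering. Parameter values are illustrative.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.linear_model import Lasso

def sparse_subspace_cluster(features, n_clusters, alpha=0.01):
    """Generic sparse subspace clustering: write each sample as a sparse
    combination of all other samples (self-expressiveness), symmetrize the
    coefficients into an affinity matrix, and spectrally cluster it.
    `features`: (n_samples, n_features), e.g. fused spectral+spatial vectors."""
    n = features.shape[0]
    coef = np.zeros((n, n))
    for i in range(n):
        others = np.delete(features, i, axis=0)   # exclude self-representation
        lasso = Lasso(alpha=alpha, max_iter=5000)
        lasso.fit(others.T, features[i])          # columns act as samples
        coef[i, np.arange(n) != i] = lasso.coef_
    affinity = np.abs(coef) + np.abs(coef).T      # symmetric, nonnegative
    return SpectralClustering(n_clusters=n_clusters,
                              affinity='precomputed').fit_predict(affinity)
```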


Author(s):  
Arminée Kazanjian
Kathryn Friesen

Abstract In order to explore the diffusion of the selected technologies in one Canadian province (British Columbia), two administrative data sets were analyzed. The data included over 40 million payment records for each fiscal year on medical services provided to British Columbia residents (2,968,769 in 1988) and information on physical facilities, services, and personnel from 138 hospitals in the province. Three specific time periods were examined in each data set, starting with 1979–80 and ending with the most current data available at the time. The detailed retrospective analysis of laboratory and imaging technologies provides historical data in three areas of interest: (a) patterns of diffusion and volume of utilization, (b) institutional profile, and (c) provider profile. The framework for the analysis focused, where possible, on the examination of determinants of diffusion that may be amenable to policy influence.


2014
Vol 74 (2)
pp. 509-534
Author(s):
James Kai-sing Kung
Chicheng Ma

We examine the impact of rigorous trade suppression during 1550–1567 on the sharp rise of piracy in this period of Ming China. By analyzing a uniquely constructed historical data set, we find that the enforcement of a “sea (trade) ban” policy led to a rise in pirate attacks that was 1.3 times greater among the coastal prefectures more suitable for silk manufactures—our proxy for greater trade potential. Our study illuminates the conflicts in which China subsequently engaged with the Western powers, conflicts that eventually resulted in the forced abandonment of its long upheld autarkic principle.

