Relationship between acoustic indices, length of recordings and processing time: a methodological test

2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Edgar Cifuentes ◽  
Juliana Vélez Gómez ◽  
Simon J Butler

Ecoacoustic approaches have the potential to provide rapid biodiversity assessments and avoid costly fieldwork, and their use in biodiversity studies for improving the management and conservation of natural landscapes has grown considerably in recent years. However, standardised methods for sampling acoustic information that deliver reliable and consistent results within and between ecosystems are still lacking. Sampling frequency and duration are particularly important considerations because shorter, intermittent recordings mean recorder batteries last longer and data processing is less computationally intensive, but a smaller proportion of the available soundscape is sampled. Here, we compare acoustic indices and processing times for subsamples of increasing duration clipped from 94 one-hour recordings, in order to test how different acoustic indices behave and to identify the minimum sample length required. Our results suggest that short recordings distributed across the survey period accurately represent acoustic patterns while optimizing data collection and processing. The Acoustic Complexity Index (ACI) and acoustic entropy (H) are the most stable indices, with an ideal sampling schedule of ten 1-minute samples per hour. Although the Acoustic Diversity Index (ADI), Acoustic Evenness Index (AEI) and Normalized Difference Soundscape Index (NDSI) also represent acoustic patterns well under the same sampling schedule, they are more robust under continuous recording. Such targeted subsampling could greatly reduce data storage and computational power requirements in large-scale and long-term projects.
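The schedule this study identifies is easy to prototype. The sketch below clips ten evenly spaced 1-minute windows from a one-hour recording and scores each with a simple spectral-entropy formulation standing in for H; the file name, the soundfile reader, and the entropy definition are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import soundfile as sf  # third-party WAV reader, assumed available

def spectral_entropy(signal, n_fft=512):
    """Normalised Shannon entropy of the frame-averaged power spectrum (a simple H)."""
    n_frames = len(signal) // n_fft
    frames = signal[: n_frames * n_fft].reshape(n_frames, n_fft)
    spectrum = (np.abs(np.fft.rfft(frames, axis=1)) ** 2).mean(axis=0)
    p = spectrum / spectrum.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum() / np.log2(p.size))

audio, sr = sf.read("recording_hour.wav")  # hypothetical one-hour recording
if audio.ndim > 1:
    audio = audio.mean(axis=1)             # mix down to mono
minute = 60 * sr
starts = np.linspace(0, len(audio) - minute, 10).astype(int)  # ten clips per hour
values = [spectral_entropy(audio[s:s + minute]) for s in starts]
print("H per 1-minute subsample:", np.round(values, 3))
```

Processing ten minutes of audio instead of sixty cuts the index computation proportionally, which is the storage and compute saving the abstract refers to.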

2019 ◽  
Vol 11 (2) ◽  
pp. 14-40
Author(s):  
Stephen Dass A. ◽  
Prabhu J.

In today's fast-growing data universe, data generation and storage are scaling to gigabytes and even petabytes per hour. This accumulation often leaves privacy and preservation neglected. The data include sensitive, high-privacy fields that must be hidden or removed using hashing or anonymization algorithms. In this article, the authors propose a hybrid k-anonymity algorithm for large-scale aircraft datasets that combines Big Data analytics with privacy preservation, using MapReduce to store the dataset. The published anonymized data are moved by MapReduce into the Hive database for storage. The proposed multi-dimensional hybrid k-anonymity technique is compared with two other anonymization methods, BUG and TDS. Three experiments were performed: evaluating classifier error, calculating the disruption value and p% hybrid anonymity, and estimating processing time.
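For readers unfamiliar with the underlying property, the sketch below checks k-anonymity on toy records after a hypothetical generalisation step (ages binned into decades, cities generalised to a country); the article's MapReduce/Hive pipeline and the BUG and TDS methods are not reproduced here.

```python
from collections import Counter

# Toy records: (age, city) are quasi-identifiers; diagnosis is the sensitive field.
records = [
    {"age": 34, "city": "Lyon", "diagnosis": "A"},
    {"age": 36, "city": "Lyon", "diagnosis": "B"},
    {"age": 35, "city": "Nice", "diagnosis": "A"},
    {"age": 52, "city": "Nice", "diagnosis": "C"},
    {"age": 57, "city": "Lyon", "diagnosis": "B"},
]

def generalise(rec):
    """One illustrative generalisation level: decade age band, city -> country."""
    low = rec["age"] // 10 * 10
    return (f"{low}-{low + 9}", "France")

def is_k_anonymous(recs, k):
    """Every generalised quasi-identifier combination must occur at least k times."""
    counts = Counter(generalise(r) for r in recs)
    return all(n >= k for n in counts.values())

print(is_k_anonymous(records, k=2))  # True: each (age band, country) group has >= 2 rows
```

In a MapReduce setting, the same check maps naturally onto a shuffle keyed by the generalised quasi-identifiers followed by a per-key count.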


2008 ◽  
Vol 2008 ◽  
pp. 1-5 ◽  
Author(s):  
Tomas Hruz ◽  
Oliver Laule ◽  
Gabor Szabo ◽  
Frans Wessendorp ◽  
Stefan Bleuler ◽  
...  

The Web-based software Genevestigator provides powerful tools for biologists to explore gene expression across a wide variety of biological contexts. Its first releases, however, were limited by the scaling ability of the system architecture, multi-organism data storage and analysis capability, and the availability of computationally intensive analysis methods. Genevestigator V3 is a novel meta-analysis system resulting from new algorithmic and software development using a client/server architecture, large-scale manual curation and quality control of microarray data for several organisms, and curation of pathway data for mouse and Arabidopsis. In addition to improved querying features, Genevestigator V3 provides new tools to analyze the expression of genes in many different contexts, to identify biomarker genes, to cluster genes into expression modules, and to model expression responses in the context of metabolic and regulatory networks. As a reference expression database with user-friendly tools, Genevestigator V3 facilitates discovery research and hypothesis validation.


2016 ◽  
Author(s):  
Azat Akhmetov ◽  
Andrew D. Ellington ◽  
Edward M. Marcotte

Encoding arbitrary digital information in DNA has attracted attention as a potential avenue for large-scale and long-term data storage. However, enabling DNA data storage technologies requires improvements in storage fidelity (tolerance to mutation), in the facility of writing and reading the data (biases and systematic error arising from synthesis and sequencing), and in overall scalability. To this end, we have developed and implemented an encoding scheme suited to detecting and correcting errors that may arise during storage, writing, and reading, such as nucleotide substitutions, insertions, and deletions. We propose a scheme for parallelized long-term storage of encoded sequences that relies on overlaps rather than the address blocks found in previously published work. Using computer simulations, we illustrate the encoding, sequencing, decoding, and recovery of encoded information, ultimately demonstrating a successful round-trip read/write. These demonstrations show that, in theory, precise control over error tolerance is possible. Even after simulated degradation of the DNA, the original data can be recovered thanks to the error-correction capabilities built into the encoding strategy. A secondary advantage of our method is that the statistical characteristics of encoded sequences (such as repetitiveness and GC-composition) can be tailored without sacrificing the overall ability to store large amounts of data. Finally, because the overlap-based partitioning of data is combined with the LZMA compression that is integral to encoding, the entire sequence must be present for successful decoding. This feature enables exceptionally strong encryption. As a potential application, an encrypted pathogen genome could be distributed and carried by cells without danger of being expressed, and could not even be read out in the absence of the entire DNA consortium.
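A minimal sketch, using only Python's standard library, of two ingredients the abstract names: LZMA compression of the payload and a 2-bits-per-base mapping onto nucleotides. The overlap-based partitioning and the substitution/insertion/deletion-correcting code are deliberately omitted, and the byte-to-base mapping here is an illustrative assumption, not the authors' codec.

```python
import lzma

BASES = "ACGT"  # 2 bits per nucleotide: 00->A, 01->C, 10->G, 11->T

def encode(data: bytes) -> str:
    """LZMA-compress the payload, then map each byte to four bases (MSB first)."""
    packed = lzma.compress(data)
    return "".join(BASES[(byte >> shift) & 0b11]
                   for byte in packed for shift in (6, 4, 2, 0))

def decode(dna: str) -> bytes:
    """Invert the base mapping back to bytes, then decompress."""
    packed = bytes(
        (BASES.index(dna[i]) << 6) | (BASES.index(dna[i + 1]) << 4)
        | (BASES.index(dna[i + 2]) << 2) | BASES.index(dna[i + 3])
        for i in range(0, len(dna), 4)
    )
    return lzma.decompress(packed)

message = b"large scale and long term data storage"
assert decode(encode(message)) == message  # successful round-trip read/write
```

Because decompression fails on a truncated or corrupted stream, even this toy version hints at the property the abstract highlights: without the entire sequence, the payload cannot be read out.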



1994 ◽  
Vol 144 ◽  
pp. 29-33
Author(s):  
P. Ambrož

The large-scale coronal structures observed during the sporadically visible solar eclipses were compared with the numerically extrapolated field-line structures of the coronal magnetic field. A characteristic relationship between the observed structures of coronal plasma and the magnetic field line configurations was determined. The long-term evolution of large-scale coronal structures inferred from photospheric magnetic observations over the course of the 11- and 22-year solar cycles is described. Some known parameters, such as the source surface radius and the coronal rotation rate, are discussed and interpreted. A relation between the evolution of the large-scale photospheric magnetic field and the rearrangement of coronal structure is demonstrated.


1967 ◽  
Vol 06 (01) ◽  
pp. 8-14 ◽  
Author(s):  
M. F. Collen

The utilization of an automated multitest laboratory as a data acquisition center, together with a computer for the data processing and analysis, permits large-scale preventive medical research that was previously not feasible. Normal test values are easily generated for the particular population studied. Long-term epidemiological research on large numbers of persons becomes practical. It is our belief that the advent of automation and computers has introduced a new era of preventive medicine.


2014 ◽  
pp. 124-129
Author(s):  
Z. V. Karamysheva

The review contains a detailed description of the «Atlas of especially protected natural areas of Saint Petersburg», published in 2013. This publication presents the results of long-term studies of 12 protected natural areas carried out by a large research team from 2002 to 2013 (see References). The Atlas contains a large number of historical maps, new satellite images, original illustrations, detailed texts on the nature of the protected areas, and summary tables of the rare species of vascular plants, fungi and vertebrates recorded in these areas. Special attention is paid to the principles of thematic large-scale mapping. Landscape maps, vegetation maps and maps of natural processes in landscapes are included. The reviewed Atlas deserves the highest praise.


2000 ◽  
Vol 151 (3) ◽  
pp. 80-83
Author(s):  
Pascal Schneider ◽  
Jean-Pierre Sorg

In and around the state-owned forest of Farako in the Sikasso region of Mali, a large-scale study focused on finding a compromise that would meet the existential and legitimate needs of the population while conserving the forest resources in the long term. The first step of the research was to sketch out the rural socio-economic context and determine the population's needs for natural resources, both for its own consumption and for commercial use, as well as the demand for non-material forest services. Simultaneously, the environmental context of the forest and the available resources were evaluated by means of qualitative and quantitative inventories. An in-depth comparison of demand and potential yields a differentiated view of the forest's suitability to meet the needs of the people living nearby. Proposals for multipurpose management of the forest were drawn up. This contribution deals with some basic elements of the research methodology as well as with the results of the study.

