free data
Recently Published Documents


TOTAL DOCUMENTS

560
(FIVE YEARS 207)

H-INDEX

34
(FIVE YEARS 5)

2022 ◽  
Vol 18 (1) ◽  
pp. 1-31
Author(s):  
Chaojie Gu ◽  
Linshan Jiang ◽  
Rui Tan ◽  
Mo Li ◽  
Jun Huang

Low-power wide-area network technologies such as long-range wide-area network (LoRaWAN) are promising for collecting low-rate monitoring data from geographically distributed sensors, in which timestamping the sensor data is a critical system function. This article considers a synchronization-free approach to timestamping LoRaWAN uplink data based on signal arrival time at the gateway, which matches LoRaWAN’s one-hop star topology well and frees bandwidth otherwise spent transmitting timestamps and keeping end devices’ clocks synchronized at all times. However, we show that this approach is susceptible to a frame delay attack consisting of malicious frame collision and delayed replay. Real experiments show that the attack can affect end devices over large areas of up to about 50,000 m². In a broader sense, the attack threatens any system function requiring timely delivery of LoRaWAN frames. To address this threat, we propose a LoRaTS gateway design that integrates a commodity LoRaWAN gateway and a low-power software-defined radio receiver to track the inherent frequency biases of the end devices. Based on an analytic model of LoRa’s chirp spread spectrum modulation, we develop signal processing algorithms to estimate the frequency biases with accuracy beyond that achieved by LoRa’s default demodulation. This accurate frequency bias tracking enables detection of the attack, which introduces additional frequency biases. We also investigate and implement a craftier attack that uses advanced radio apparatus to eliminate the frequency biases. To address this craftier attack, we propose a pseudorandom interval hopping scheme to enhance our frequency bias tracking approach. Extensive experiments show the effectiveness of our approach in deployments under real-world influences such as temperature variations.
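The dechirp-and-FFT idea behind frequency offset estimation can be sketched as follows. This is a minimal illustration of the general technique, not the paper's LoRaTS algorithm: a received upchirp is multiplied by the conjugate of a clean reference chirp, which turns any carrier frequency offset into a pure tone whose FFT peak reveals the offset. The parameter values (SF7, 125 kHz bandwidth, baseband sampling) are assumptions chosen for the sketch.

```python
import numpy as np

def make_chirp(sf=7, bw=125e3, fs=125e3, f_offset=0.0):
    """Generate one LoRa-style upchirp sweeping -bw/2 .. +bw/2,
    optionally shifted by a carrier frequency offset (Hz)."""
    n = 2 ** sf                       # samples per symbol when fs == bw
    t = np.arange(n) / fs
    k = bw / (n / fs)                 # chirp rate in Hz/s
    phase = 2 * np.pi * (-bw / 2 * t + 0.5 * k * t ** 2 + f_offset * t)
    return np.exp(1j * phase)

def estimate_offset(rx, sf=7, bw=125e3, fs=125e3):
    """Estimate the frequency offset of a received upchirp by dechirping
    (multiplying with a conjugate reference) and locating the FFT peak."""
    ref = make_chirp(sf, bw, fs)
    dechirped = rx * np.conj(ref)     # offset chirp -> pure tone at f_offset
    n = len(dechirped)
    spec = np.fft.fft(dechirped, 16 * n)          # zero-pad for finer bins
    freqs = np.fft.fftfreq(16 * n, d=1 / fs)
    return freqs[np.argmax(np.abs(spec))]

# A chirp arriving with a 300 Hz bias should be estimated to within a bin.
est = estimate_offset(make_chirp(f_offset=300.0))
```

With 16x zero-padding the bin spacing is about 61 Hz, so the estimate lands within roughly half a bin of the true 300 Hz offset; finer interpolation around the peak would sharpen this further.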


Author(s):  
Mathias Artus ◽  
Mohamed Alabassy ◽  
Christian Koch

Current bridge inspection practices rely on paper-based data acquisition, digitization, and multiple conversions between incompatible formats to facilitate data exchange. This practice is time-consuming, error-prone, cumbersome, and leads to information loss. One aim for future inspection procedures is a fully digitized workflow that achieves loss-free data exchange, lowering costs and offering higher efficiency. On the one hand, existing studies have proposed methods to automate data acquisition and visualization for inspections, but they lack an open standard to make the gathered data available to other processes. On the other hand, several studies discuss data structures for exchanging damage information among different stakeholders, but do not cover the process of automatic data acquisition and transfer. This study focuses on a framework that incorporates automatic damage data acquisition, data transfer, and a damage information model for data exchange. This enables inspectors to use damage data for subsequent analyses and simulations. The proposed framework demonstrates the potential of a comprehensive damage information model with related (semi-)automatic data acquisition and processing.
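To make the idea of loss-free, tool-neutral exchange concrete, a damage record can be modeled as a typed structure serialized to an open text format. This is a hedged illustration only; the field names and JSON layout below are assumptions for the sketch, not the damage information model proposed in the study.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class DamageRecord:
    """Minimal, illustrative damage record; field names are assumptions,
    not the schema proposed in the paper."""
    element_id: str        # bridge component the damage belongs to
    damage_type: str       # e.g. "crack", "spalling"
    width_mm: float        # measured crack width in millimetres
    location: tuple        # local (x, y, z) position on the element
    inspected_on: str      # ISO 8601 inspection date

record = DamageRecord("girder-03", "crack", 0.4, (1.2, 0.1, 0.0), "2021-06-14")
payload = json.dumps(asdict(record))          # open, text-based exchange format
restored = DamageRecord(**json.loads(payload))
```

Round-tripping through a plain-text format like this is what allows acquisition tools and analysis software to exchange damage data without the manual conversions the abstract criticizes.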


2021 ◽  
Vol 14 (1) ◽  
pp. 16
Author(s):  
Wioleta Błaszczak-Bąk ◽  
Joanna Janicka ◽  
Tomasz Kozakiewicz ◽  
Krystian Chudzikiewicz ◽  
Grzegorz Bąk

Airborne Laser Scanning (ALS) is a technology often used to study forest areas. The main application of ALS in forests is collecting data to determine the height of individual trees and entire stands, tree density, and stand biomass. The content of ALS data is also classified, i.e., registered objects are identified, including the species of individual trees. Other important information for forest districts includes parameters related to the structure and composition of stands and the number of trees in the forest district. The main goal of this study was to propose a new ALS data processing methodology for detecting single trees in the Samławki Forest District. The idea behind the proposed methodology is to provide a free and accessible solution for any user (at least in Poland). This new methodology contributes to research on the use of ALS data in forest districts to maintain up-to-date and accurate stand statistics. It is based on free data from the geoportal.gov.pl portal and free software, which allowed us to minimize the costs of preparing data for forestry activities. In cooperation with the Samławki Forest District, the proposed methodology was used to detect the number and heights of trees for two forest addresses, 13b and 30a, and then to calculate the volume of the stands. The calculated volumes differed from the nominal values in the FMP (Forest Management Plan) by about 25% and 5% for larch and oak, respectively.
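A common way to detect single trees from ALS data (a standard technique, not necessarily the exact method of this study) is local-maximum filtering on a rasterized canopy height model (CHM): a cell is a candidate tree top if it is the tallest cell in its neighborhood and exceeds a minimum height. A minimal sketch, with the 3x3 window and 2 m height threshold as assumed parameters:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def detect_tree_tops(chm, window=3, min_height=2.0):
    """Return (row, col) indices of local maxima in a canopy height model.
    A cell counts as a tree top if it equals the maximum of its window
    and exceeds min_height (filtering out ground and shrubs)."""
    local_max = maximum_filter(chm, size=window) == chm
    tops = local_max & (chm > min_height)
    return np.argwhere(tops)

# Toy 5x5 CHM (heights in metres) with two obvious crowns
chm = np.array([
    [0, 0, 0, 0, 0],
    [0, 9, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 7, 0],
    [0, 0, 0, 0, 0],
], dtype=float)
tops = detect_tree_tops(chm)   # two tree tops: (1, 1) and (3, 3)
```

Tree heights then come directly from the CHM values at the detected tops, which is the kind of per-tree statistic the forest district needs for volume calculations.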


Mathematics ◽  
2021 ◽  
Vol 9 (24) ◽  
pp. 3314
Author(s):  
Yang You ◽  
Guang Jin ◽  
Zhengqiang Pan ◽  
Rui Guo

Space-filling design selects points uniformly in the experimental space, bringing considerable flexibility to complex-model-based and model-free data analysis. At present, space-filling designs mostly focus on regular spaces and continuous factors, with few studies on discrete factors and the constraints among factors. Most existing experimental design methods for qualitative factors are not applicable to discrete factors, since they ignore the potential order of, or spatial distance between, discrete factor levels. This paper proposes a space-filling method, called maximum projection coordinate-exchange (MP-CE), that takes into account both the diversity of factor types and the complexity of factor constraints. Specifically, the maximum projection criterion and a distance criterion are introduced to identify “bad” coordinates, and the coordinate exchange and optimization of the experimental design are realized by solving a one-dimensional constrained optimization problem. Meanwhile, by adding iterative perturbations to the traditional coordinate-exchange process, the neighborhood of the local optimal solution is explored and the strengths of the current optimal solution are retained, while the shortcomings of random restarts are effectively avoided. Experiments in regular and constrained spaces, as well as an experimental design for the terminal interception effectiveness of a missile defense system, show that the MP-CE method significantly outperforms popular existing space-filling design methods in terms of space-projection properties, while yielding comparable or superior space-filling properties.
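The generic coordinate-exchange idea can be sketched in a few lines. This is not the MP-CE method itself: for simplicity it scores designs with a plain maximin-distance criterion rather than the maximum projection criterion, and it sweeps a fixed candidate grid rather than solving a constrained one-dimensional optimization. The point is only the mechanic of swapping one coordinate at a time and keeping improvements.

```python
import numpy as np

def maximin(design):
    """Smallest pairwise distance in the design (larger is better)."""
    d = np.linalg.norm(design[:, None, :] - design[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min()

def coordinate_exchange(design, candidates, sweeps=5):
    """Greedy coordinate exchange: for each run and each factor, try every
    candidate level and keep any swap that improves the criterion."""
    design = design.copy()
    best = maximin(design)
    for _ in range(sweeps):
        for i in range(design.shape[0]):        # runs
            for j in range(design.shape[1]):    # factors
                old = design[i, j]
                for c in candidates:
                    design[i, j] = c
                    score = maximin(design)
                    if score > best:
                        best, old = score, c    # keep the improving level
                    design[i, j] = old          # restore best level so far
    return design, best

rng = np.random.default_rng(0)
init = rng.random((6, 2))                       # 6 runs, 2 continuous factors
final, score = coordinate_exchange(init, np.linspace(0.0, 1.0, 11))
```

Because the search only ever accepts improving swaps, the criterion value is monotonically non-decreasing; the perturbation scheme described in the abstract exists precisely to escape the local optima this greedy loop gets stuck in.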


Author(s):  
Damjan Gjurovski ◽  
Sebastian Michel

Author(s):  
Aman Bhonsale ◽  
Ashok Kumar Ahirwar ◽  
Kirti Kaim ◽  
Puja Kumari Jha

Abstract Objective To evaluate the potential of artificial intelligence in combating the COVID-19 pandemic. Methods PubMed, Embase, Cochrane Library and Google Scholar were searched for the term “Artificial intelligence and COVID-19” up to March 31, 2021. Results Artificial intelligence (AI) is a potential tool for containing the current pandemic. AI can be used in many areas, such as early detection and diagnosis, supervision of treatment, projection of cases and mortality, contact tracing of individuals, development of drugs and vaccines, reduction of the workload on health workers, prevention of disease, and analysis of people's mental health amid the pandemic. Conclusions AI is being updated and improved continuously so that it can interpret like an actual human mind. This advancement may lead to a very different future for the COVID-19 pandemic, in which most of the simpler work is done by AI and only essential work is done by health workers, improving patient care in the current COVID-19 outbreak. However, one of the main constraints is the limited supply of trustworthy, noise-free sources of information. The need of the hour is therefore a free data system in which most of the analysed data is available to feed AI, which could effectively help halt the current pandemic.


2021 ◽  
Vol 17 (11) ◽  
pp. e1008591
Author(s):  
Ege Altan ◽  
Sara A. Solla ◽  
Lee E. Miller ◽  
Eric J. Perreault

It is generally accepted that the number of neurons in a given brain area far exceeds the number of neurons needed to carry any specific function controlled by that area. For example, motor areas of the human brain contain tens of millions of neurons that control the activation of tens or at most hundreds of muscles. This massive redundancy implies the covariation of many neurons, which constrains the population activity to a low-dimensional manifold within the space of all possible patterns of neural activity. To gain a conceptual understanding of the complexity of the neural activity within a manifold, it is useful to estimate its dimensionality, which quantifies the number of degrees of freedom required to describe the observed population activity without significant information loss. While there are many algorithms for dimensionality estimation, we do not know which are well suited for analyzing neural activity. The objective of this study was to evaluate the efficacy of several representative algorithms for estimating the dimensionality of linearly and nonlinearly embedded data. We generated synthetic neural recordings with known intrinsic dimensionality and used them to test the algorithms’ accuracy and robustness. We emulated some of the important challenges associated with experimental data by adding noise, altering the nature of the embedding of the low-dimensional manifold within the high-dimensional recordings, varying the dimensionality of the manifold, and limiting the amount of available data. We demonstrated that linear algorithms overestimate the dimensionality of nonlinear, noise-free data. In cases of high noise, most algorithms overestimated the dimensionality. We thus developed a denoising algorithm based on deep learning, the “Joint Autoencoder”, which significantly improved subsequent dimensionality estimation. Critically, we found that all algorithms failed when the intrinsic dimensionality was high (above 20) or when the amount of data used for estimation was low. Based on the challenges we observed, we formulated a pipeline for estimating the dimensionality of experimental neural data.
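The simplest linear dimensionality estimate the abstract alludes to can be sketched directly: count how many principal components are needed to explain a chosen fraction of the variance. This is only the baseline linear approach, not the paper's full evaluation pipeline or the Joint Autoencoder; the 95% threshold and the synthetic data are assumptions for the sketch.

```python
import numpy as np

def pca_dimensionality(X, var_threshold=0.95):
    """Number of principal components needed to explain var_threshold
    of the total variance -- a simple linear dimensionality estimate."""
    Xc = X - X.mean(axis=0)
    # Singular values give the component variances: var_i = s_i^2 / (n - 1)
    s = np.linalg.svd(Xc, compute_uv=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(cum, var_threshold) + 1)

# Synthetic recording: 3-D latent activity linearly embedded in 20 channels
rng = np.random.default_rng(1)
latent = rng.standard_normal((500, 3))          # known intrinsic dimensionality 3
embedding = rng.standard_normal((3, 20))        # linear embedding into 20-D
X = latent @ embedding + 0.01 * rng.standard_normal((500, 20))
dim = pca_dimensionality(X)                     # recovers 3 at low noise
```

On this linearly embedded, nearly noise-free data the estimate recovers the true dimensionality; the abstract's point is that the same linear estimator overestimates it once the embedding is nonlinear or the noise is high.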


2021 ◽  
Author(s):  
Abinaya Govindan ◽  
Gyan Ranjan ◽  
Amit Verma

This paper presents named entity recognition (NER) as a multi-answer QA task combined with contextual natural-language-inference-based noise reduction. This method allows us to use pre-trained models that have been trained for certain downstream tasks to generate unsupervised data, reducing the need for manual annotation to create token-level named entity tags. For each entity, we provide a unique context, such as entity types, definitions, questions, and a few empirical rules, along with the target text to train a named entity model for the domain of interest. This formulation (a) allows the system to jointly learn NER-specific features from the datasets provided, (b) can extract multiple NER-specific features, thereby boosting the performance of existing NER models, and (c) provides business-contextualized definitions to reduce ambiguity among similar entities. We conducted numerous tests to determine the quality of the created data and found that this method of data generation yields clean, noise-free data with minimal effort and time. The approach has been demonstrated to be successful in extracting named entities, which are then used in subsequent components.
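The input framing the abstract describes (entity type plus definition plus question, paired with the target text) can be illustrated with a small helper. This shows only how such QA-style examples might be assembled; the field layout and the sample strings are assumptions for illustration, not the authors' actual data format or pipeline.

```python
def build_qa_example(text, entity_type, definition, question):
    """Frame NER as multi-answer QA: pack the entity type, its definition,
    and a question into one context string paired with the target text.
    (The layout is an assumed illustration, not the paper's format.)"""
    context = (
        f"Entity type: {entity_type}\n"
        f"Definition: {definition}\n"
        f"Question: {question}"
    )
    return {"context": context, "text": text, "entity_type": entity_type}

example = build_qa_example(
    text="Alice Chen joined Acme Corp in Berlin.",
    entity_type="ORG",
    definition="A company, institution, or other organization.",
    question="Which organizations are mentioned in the text?",
)
```

A QA model consuming such examples can return multiple answer spans per question, which is what makes the multi-answer framing a natural fit for sentences containing several entities of the same type.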

