Simultaneous contact and aerodynamic force estimation (s-CAFE) for aerial robots

2020 ◽  
Vol 39 (6) ◽  
pp. 688-728
Author(s):  
Teodor Tomić ◽  
Philipp Lutz ◽  
Korbinian Schmid ◽  
Andrew Mathers ◽  
Sami Haddadin

In this article, we consider the problem of multirotor flying robots physically interacting with the environment under the influence of wind. We present the first algorithms for simultaneous online estimation of the contact and aerodynamic wrenches acting on the robot, based on real-world data and without the need for dedicated sensors. For this purpose, we investigated two model-based techniques for discriminating between aerodynamic and interaction forces. The first technique is based on aerodynamic and contact torque models, and uses the external force to estimate wind speed. Contacts are then detected based on the residual between the estimated external torque and the expected (modeled) aerodynamic torque. Upon detecting contact, the wind speed is assumed to change very slowly. From the estimated interaction wrench, we are also able to determine the contact location. This is embedded into a particle filter framework to further improve contact location estimation. The second algorithm uses the propeller aerodynamic power and angular speed, as measured by the speed controllers, to obtain an estimate of the airspeed. An aerodynamics model is then used to determine the aerodynamic wrench. Both methods rely on accurate aerodynamics models. Therefore, we evaluate data-driven and physics-based models as well as offline system identification for flying robots. To obtain ground-truth data, we performed autonomous flights in a 3D wind tunnel. Using these data, we carried out aerodynamic model selection, parameter identification, and discrimination between aerodynamic and contact forces. Finally, the developed methods could serve as useful estimators for interaction control schemes with simultaneous compensation of wind disturbances.
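As a rough illustration of the first technique's residual test (a sketch under our own assumptions, not the authors' implementation), the expected aerodynamic torque for the current wind-speed estimate can be subtracted from the estimated external torque, with contact declared when the residual exceeds a threshold. The linear drag-moment model, the coefficient matrix `K_aero`, and the threshold below are all illustrative:

```python
import numpy as np

def detect_contact(tau_ext_est, wind_est, torque_threshold=0.05,
                   aero_torque_model=None):
    """Residual-based contact test: flag contact when the estimated external
    torque cannot be explained by the modeled aerodynamic torque."""
    if aero_torque_model is None:
        # Illustrative linear drag-moment model (assumed, not identified).
        K_aero = 0.01 * np.eye(3)
        aero_torque_model = lambda v: K_aero @ v
    residual = tau_ext_est - aero_torque_model(wind_est)
    return bool(np.linalg.norm(residual) > torque_threshold), residual

# A torque component not explained by the wind model flags a contact.
in_contact, r = detect_contact(np.array([0.00, 0.12, 0.01]),
                               np.array([2.0, 0.0, 0.0]))
print(in_contact, r)
```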

Author(s):  
Muhammad Awais ◽  
Syed Suleman Abbas Zaidi ◽  
Murk Marvi ◽  
Muhammad Khurram

Communication and computing form the foundation of the Internet of Things (IoT) era. Because of IoT, humans can efficiently control the devices in their environment as required, and the communication between different devices brings more flexibility to their surroundings. Useful data gathered from some of these devices also creates Big Data, whose further analysis makes life easier by supporting business models tailored to user needs, enhancing scientific research, enabling weather prediction and monitoring systems, and contributing to other related fields. In this research, a remotely deployable, IoT-enabled Wind Sonic Anemometer was therefore designed and deployed to calculate average wind speed, direction, and gust. The proposed design is remotely deployable, user-friendly, power-efficient, and cost-effective because of the chosen modules, i.e., ultrasonic sensors, a GSM module, and a solar panel. The testbed was deployed on the roof of the Computer & Information Systems Engineering (CIS) department, NED UET. Its calibration was carried out using long short-term memory (LSTM), a deep learning technique, with ground truth data gathered from a mechanical wind speed sensor (NRG-40 H) deployed on top of the Industrial & Manufacturing (IM) department of NED UET. The obtained results are satisfactory, and the performance of the designed sensor is good under various weather conditions.
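For context on how an ultrasonic anemometer derives wind speed, the along-path component follows from the difference between downwind and upwind times of flight. The short sketch below uses the standard reciprocal-transit relation with a made-up path length and transit times; it is not the deployed firmware:

```python
import math

def axis_wind_speed(t_downwind_s, t_upwind_s, path_length_m=0.20):
    """Wind component along one ultrasonic path, v = (L/2)(1/t_dn - 1/t_up)."""
    return (path_length_m / 2.0) * (1.0 / t_downwind_s - 1.0 / t_upwind_s)

def speed_and_direction(v_north, v_east):
    """Combine two orthogonal axis components into speed and bearing."""
    speed = math.hypot(v_north, v_east)
    direction_deg = math.degrees(math.atan2(v_east, v_north)) % 360.0
    return speed, direction_deg

# Illustrative transit times for a 0.20 m path (values are made up).
v_n = axis_wind_speed(5.70e-4, 5.90e-4)
v_e = axis_wind_speed(5.82e-4, 5.84e-4)
print(speed_and_direction(v_n, v_e))
```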


Information ◽  
2018 ◽  
Vol 9 (8) ◽  
pp. 189 ◽  
Author(s):  
Jose Paredes ◽  
Gerardo Simari ◽  
Maria Martinez ◽  
Marcelo Falappa

In traditional databases, the entity resolution problem (also known as deduplication) refers to the task of mapping multiple manifestations of virtual objects to their corresponding real-world entities. When addressing this problem, in both theory and practice, it is widely assumed that such sets of virtual objects appear as the result of clerical errors, transliterations, missing or updated attributes, abbreviations, and so forth. In this paper, we address this problem under the assumption that this situation is caused by malicious actors operating in domains in which they do not wish to be identified, such as hacker forums and markets in which the participants are motivated to remain semi-anonymous (though they wish to keep their true identities secret, they find it useful for customers to identify their products and services). We are therefore in the presence of a different, and even more challenging, problem that we refer to as adversarial deduplication. We study this problem via examples that arise from real-world data on malicious hacker forums and markets, obtained through collaborations with a cyber threat intelligence company focused on understanding this kind of behavior. We argue that it is very difficult, if not impossible, to find ground truth data on which to build solutions to this problem, and we develop a set of preliminary experiments based on training machine learning classifiers that leverage text analysis to detect potential cases of duplicate entities. Our results are encouraging as a first step towards building tools that human analysts can use to enhance their capabilities for fighting cyber threats.
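As a loose sketch of the kind of text-analysis signal such classifiers can exploit (our illustration, not the authors' feature set), character n-gram overlap between posts from two handles can serve as one similarity feature; the threshold and toy strings below are placeholders:

```python
def char_ngrams(text, n=3):
    """Character n-grams as a crude writing-style / vocabulary fingerprint."""
    text = text.lower()
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 0.0

def possible_duplicate(post_a, post_b, threshold=0.35):
    """Flag two handles' posts as possibly the same actor.

    A trained classifier would combine many such similarity features;
    the single Jaccard feature and the threshold here are placeholders.
    """
    score = jaccard(char_ngrams(post_a), char_ngrams(post_b))
    return score >= threshold, score

flag, score = possible_duplicate(
    "selling custom exploit kits, escrow accepted, pm for samples",
    "custom exploit kits for sale, escrow ok, PM me for samples")
print(flag, round(score, 2))
```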


Author(s):  
Junji Maeda ◽  
Takashi Takeuchi ◽  
Eriko Tomokiyo ◽  
Yukio Tamura

To quantitatively investigate gusty winds from the viewpoint of aerodynamic forces, a wind tunnel that can control the rise time of a step-function-like gust was devised and utilized. When the non-dimensional rise time, which is calculated from the rise time of the gusty wind, the wind speed, and the size of the object, is less than a certain value, the wind force is greater than under the corresponding steady wind. This wind force is therefore called the "overshoot wind force" for objects the size of orbital vehicles in actual wind observations. Detecting the overshoot wind force imposes requirements on the wind speed recording specification and depends on the object size and the gust wind speed.
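Assuming the usual reduced-time normalization (an assumption on our part; the exact definition is in the paper and not reproduced here), the non-dimensional rise time scales the gust rise time by the wind speed and the object size:

```python
def nondimensional_rise_time(rise_time_s, wind_speed_ms, object_size_m):
    """Reduced rise time t* = t_r * U / D (assumed normalization).

    A small t* means the gust reaches full speed before the flow around
    the object can adjust, the regime where an overshoot wind force is
    expected to appear.
    """
    return rise_time_s * wind_speed_ms / object_size_m

# Illustrative numbers only: a 0.5 s rise, a 30 m/s gust, a 3 m object.
print(nondimensional_rise_time(0.5, 30.0, 3.0))  # -> 5.0
```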


2021 ◽  
Vol 13 (10) ◽  
pp. 1966
Author(s):  
Christopher W Smith ◽  
Santosh K Panda ◽  
Uma S Bhatt ◽  
Franz J Meyer ◽  
Anushree Badola ◽  
...  

In recent years, rapid improvements in both remote sensing methods and satellite image availability have opened the way to greatly improved burn severity assessments of the Alaskan boreal forest. In this study, we utilized recent pre- and post-fire Sentinel-2 satellite imagery of the 2019 Nugget Creek and Shovel Creek burn scars in Interior Alaska to assess burn severity across the burn scars and to test the effectiveness of several remote sensing methods for generating accurate map products: the Normalized Difference Vegetation Index (NDVI), the Normalized Burn Ratio (NBR), and Random Forest (RF) and Support Vector Machine (SVM) supervised classification. We used 52 Composite Burn Index (CBI) plots from the Shovel Creek burn scar and 28 from the Nugget Creek burn scar for classifier training and product validation. For the Shovel Creek burn scar, the RF and SVM machine learning (ML) classifiers outperformed the traditional spectral indices that use linear regression to separate burn severity classes (RF and SVM accuracy 83.33% versus NBR accuracy 73.08%). However, for the Nugget Creek burn scar, the NDVI product (accuracy 96%) outperformed the other indices and the ML classifiers. We demonstrated that the ML classifiers can map burn severity in the Alaskan boreal forest reliably when sufficient ground truth data are available. Because classifier performance depends on the quantity of ground truth data, the ML classification methods are better suited to assessing burn severity when ground truth data are plentiful, whereas the traditional spectral indices are better suited when ground truth data are limited. We also examined the relationship between burn severity, fuel type, and topography (aspect and slope) and found that it is site-dependent.
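For reference, NDVI and the differenced NBR used for burn-severity mapping have standard definitions from Sentinel-2 surface-reflectance bands (NIR = B8, red = B4, SWIR2 = B12). The sketch below shows the index arithmetic on toy reflectance arrays; it is not the study's processing chain:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + 1e-6)

def nbr(nir, swir2):
    """Normalized Burn Ratio."""
    return (nir - swir2) / (nir + swir2 + 1e-6)

def dnbr(nir_pre, swir2_pre, nir_post, swir2_post):
    """Differenced NBR: pre-fire minus post-fire (higher = more severe)."""
    return nbr(nir_pre, swir2_pre) - nbr(nir_post, swir2_post)

# Toy 2x2 reflectance arrays standing in for Sentinel-2 B8/B4/B12 rasters.
nir_pre = np.array([[0.45, 0.40], [0.42, 0.44]])
swir2_pre = np.array([[0.18, 0.20], [0.19, 0.17]])
nir_post = np.array([[0.20, 0.38], [0.22, 0.41]])
swir2_post = np.array([[0.30, 0.21], [0.28, 0.18]])
print(np.round(dnbr(nir_pre, swir2_pre, nir_post, swir2_post), 2))
```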


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Nick Le Large ◽  
Frank Bieder ◽  
Martin Lauer

For the application of an automated, driverless race car, we aim to ensure high map and localization quality for successful driving on previously unknown, narrow race tracks. To achieve this goal, it is essential to choose an algorithm that fulfills the requirements in terms of accuracy, computational resources, and run time. We propose both a filter-based and a smoothing-based Simultaneous Localization and Mapping (SLAM) algorithm and evaluate them using real-world data collected by a Formula Student Driverless race car. Accuracy is measured by comparing the SLAM-generated map to a ground truth map acquired with high-precision Differential GPS (DGPS) measurements. The evaluation shows that both algorithms meet the required time constraints thanks to a parallelized architecture, with GraphSLAM draining the computational resources much faster than Extended Kalman Filter (EKF) SLAM. However, analysis of the maps generated by the algorithms shows that GraphSLAM outperforms EKF SLAM in terms of accuracy.
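One simple way to score a SLAM map against DGPS-surveyed cone positions, broadly in the spirit of the evaluation described (the authors' exact metric and map-alignment step are not reproduced here), is a nearest-neighbour RMSE over landmarks:

```python
import numpy as np

def map_rmse(estimated_cones, ground_truth_cones):
    """Nearest-neighbour RMSE between estimated and surveyed cone positions.

    Assumes both maps are already expressed in the same frame (e.g. after
    aligning the SLAM map to the DGPS frame); alignment is not shown here.
    """
    errors = []
    for cone in estimated_cones:
        d = np.linalg.norm(ground_truth_cones - cone, axis=1)
        errors.append(d.min())
    return float(np.sqrt(np.mean(np.square(errors))))

# Toy 2D cone maps (metres), purely illustrative.
gt = np.array([[0.0, 0.0], [5.0, 0.2], [10.0, -0.1]])
est = np.array([[0.1, -0.05], [5.2, 0.15], [9.8, 0.0]])
print(round(map_rmse(est, gt), 3))
```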


2020 ◽  
Vol 13 (1) ◽  
pp. 26
Author(s):  
Wen-Hao Su ◽  
Jiajing Zhang ◽  
Ce Yang ◽  
Rae Page ◽  
Tamas Szinyei ◽  
...  

In many regions of the world, wheat is vulnerable to severe yield and quality losses from the fungal disease Fusarium head blight (FHB). The development of resistant cultivars is one means of ameliorating the devastating effects of this disease, but the breeding process requires the evaluation of hundreds of lines each year for reaction to the disease. These field evaluations are laborious, expensive, time-consuming, and prone to rater error. A phenotyping cart that can quickly capture images of the spikes of wheat lines and their level of FHB infection would greatly benefit wheat breeding programs. In this study, a mask region-based convolutional neural network (Mask-RCNN) allowed reliable identification of symptom location and disease severity on wheat spikes. Within a wheat line planted in the field, color images of individual wheat spikes and their corresponding diseased areas were labeled and segmented into sub-images. Images with annotated spikes and sub-images of individual spikes with labeled diseased areas were used as ground truth data to train Mask-RCNN models for automatic image segmentation of wheat spikes and FHB diseased areas, respectively. A feature pyramid network (FPN) based on the ResNet-101 network was used as the backbone of Mask-RCNN for constructing the feature pyramid and extracting features. After generating mask images of wheat spikes from full-size images, Mask-RCNN was applied to predict diseased areas on each individual spike. This protocol enabled the rapid recognition of wheat spikes and diseased areas with detection rates of 77.76% and 98.81%, respectively. A prediction accuracy of 77.19% was achieved, calculated as the ratio of predicted to ground-truth wheat FHB severity. This study demonstrates the feasibility of rapidly determining levels of FHB in wheat spikes, which will greatly facilitate the breeding of resistant cultivars.
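Once per-spike and per-lesion masks are available, a per-spike severity value reduces to pixel counting. The sketch below (our simplification, not the authors' evaluation code) computes the diseased fraction of a spike from synthetic boolean masks and compares predicted and ground-truth severities as a ratio:

```python
import numpy as np

def fhb_severity(spike_mask, disease_mask):
    """Fraction of spike pixels that fall inside the diseased-area mask."""
    spike_pixels = spike_mask.sum()
    if spike_pixels == 0:
        return 0.0
    return float((spike_mask & disease_mask).sum()) / float(spike_pixels)

def severity_agreement(pred_severity, truth_severity):
    """Ratio of predicted to ground-truth severity, capped at 1.0."""
    if truth_severity == 0:
        return 1.0 if pred_severity == 0 else 0.0
    return min(pred_severity, truth_severity) / max(pred_severity, truth_severity)

# Synthetic 4x4 masks standing in for Mask-RCNN outputs.
spike = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 1, 0]], dtype=bool)
lesion = np.array([[0, 1, 0, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]], dtype=bool)
pred = fhb_severity(spike, lesion)          # 3 of 8 spike pixels diseased -> 0.375
print(pred, severity_agreement(pred, 0.40))
```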


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4050
Author(s):  
Dejan Pavlovic ◽  
Christopher Davison ◽  
Andrew Hamilton ◽  
Oskar Marko ◽  
Robert Atkinson ◽  
...  

Monitoring cattle behaviour is core to the early detection of health and welfare issues and to optimising the fertility of large herds. Accelerometer-based sensor systems that provide activity profiles are now used extensively on commercial farms and have evolved to identify behaviours such as the time spent ruminating and eating at the individual animal level. Acquiring this information at scale is central to informing on-farm management decisions. This paper presents the development of a Convolutional Neural Network (CNN) that classifies cattle behavioural states ('rumination', 'eating', and 'other') using data generated from neck-mounted accelerometer collars. During three farm trials in the United Kingdom (Easter Howgate Farm, Edinburgh, UK), 18 steers were monitored to provide raw acceleration measurements, with ground truth data provided by muzzle-mounted pressure-sensor halters. A range of neural network architectures is explored and rigorous hyper-parameter searches are performed to optimise the network. The computational complexity and memory footprint of CNN models are not readily compatible with deployment on low-power processors, which are both memory- and energy-constrained. Thus, progressive reductions of the CNN were executed with minimal loss of performance in order to address these practical implementation challenges, defining the trade-off between model performance and computational complexity/memory footprint so as to permit deployment on micro-controller architectures. The proposed methodology achieves a compression ratio of 14.30 compared to the unpruned architecture, yet is able to accurately classify cattle behaviours with an overall F1 score of 0.82 at both FP32 and FP16 precision, while achieving a battery lifetime in excess of 5.7 years.
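A minimal PyTorch sketch of a compact 1-D CNN over tri-axial accelerometer windows, of the general kind described above; the window length, channel counts, and layer sizes are illustrative and do not reproduce the paper's architecture, pruning schedule, or FP16 pipeline:

```python
import torch
import torch.nn as nn

class TinyBehaviourCNN(nn.Module):
    """Small 1-D CNN over tri-axial accelerometer windows.

    Input: (batch, 3 axes, 128 samples); output: 3 behaviour classes
    ('rumination', 'eating', 'other'). All sizes are illustrative.
    """
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(8, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(16 * 8, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyBehaviourCNN()
window = torch.randn(2, 3, 128)   # two 128-sample windows
print(model(window).shape)        # torch.Size([2, 3])
# model.half() would cast the weights to FP16, one of the precisions evaluated.
```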


2021 ◽  
pp. 0021955X2110210
Author(s):  
Alejandro E Rodríguez-Sánchez ◽  
Héctor Plascencia-Mora

Traditional modeling of mechanical energy absorption due to compressive loading in expanded polystyrene foams relies on mathematical descriptions derived from stress/strain continuum mechanics models. Nevertheless, most of those models are constrained to using the strain as the only variable at large-deformation regimes and usually neglect parameters that are important for energy absorption, such as the material density or the rate of the applied load. This work presents a neural-network-based approach that produces models capable of mapping the compressive stress response and the energy absorption parameters of an expanded polystyrene foam by considering its deformation, the compressive loading rate, and different densities. The models are trained with ground-truth data obtained in compressive tests. Two methods to select neural network architectures are also presented, one of which is based on a Design of Experiments strategy. The results show that it is possible to obtain a single artificial neural network model that can abstract the stress and energy absorption solution spaces for the conditions studied in the material. Additionally, such a model is compared with a phenomenological model, and the results show that the neural network model outperforms it in terms of prediction capability, with errors of around 2% of the experimental data. In this sense, it is demonstrated that by following the presented approach it is possible to obtain a model capable of reproducing compressive polystyrene foam stress/strain data and, consequently, of simulating its energy absorption parameters.
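Because the energy absorbed per unit volume is the area under the compressive stress-strain curve, the quantity the network learns to reproduce can be recovered from predicted stresses by numerical integration. The sketch below integrates a synthetic EPS-like curve that stands in for the network's output; it is not the paper's model:

```python
import numpy as np

def energy_absorbed(strain, stress_mpa):
    """Energy absorption per unit volume (MJ/m^3) as the area under the
    stress-strain curve, computed by trapezoidal integration."""
    return float(np.trapz(stress_mpa, strain))

# Synthetic EPS-like curve: elastic rise, plateau, then densification.
strain = np.linspace(0.0, 0.8, 200)
stress = np.where(strain < 0.05,
                  2.0 * strain / 0.05,
                  2.0 + 8.0 * np.maximum(strain - 0.6, 0.0) ** 2)
print(round(energy_absorbed(strain, stress), 3))
```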

