median error
Recently Published Documents

TOTAL DOCUMENTS: 129 (FIVE YEARS: 78)
H-INDEX: 10 (FIVE YEARS: 3)

Author(s):  
Weiyan Chen ◽  
Fusang Zhang ◽  
Tao Gu ◽  
Kexing Zhou ◽  
Zixuan Huo ◽  
...  

Floor plan construction is a key technique in many important applications such as indoor navigation, location-based services, and emergency rescue. Existing floor plan construction methods require expensive dedicated hardware (e.g., Lidar or a depth camera) and may not work in low-visibility environments (e.g., smoke, fog, or dust). In this paper, we develop a low-cost Ultra-Wideband (UWB)-based system (named UWBMap) that is mounted on a mobile robot platform to construct floor plans through smoke. UWBMap leverages low-cost, off-the-shelf UWB radar and is able to construct an indoor map with an accuracy comparable to Lidar (i.e., the state of the art). The underpinning technique is to exploit the mobility of the radar to form virtual antennas and gather spatial information about a target. UWBMap also eliminates both robot motion noise and environmental noise, and enhances the weak reflections from small objects, to make the construction process robust. In addition, we overcome the limited view of a single radar by combining views from multiple radars. Extensive experiments in different indoor environments show that UWBMap achieves map construction with a median error of 11 cm and a 90-percentile error of 26 cm, and that it operates effectively in indoor scenarios with glass walls and dense smoke.
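
For illustration, here is a minimal sketch of how the reported map metrics (median and 90-percentile error) are typically computed from per-point distances between an estimated map contour and a ground-truth reference; the nearest-point matching and all variable names are assumptions, not UWBMap's actual evaluation code:

```python
import numpy as np

def map_error_stats(estimated_pts, ground_truth_pts):
    """Per-point error: distance from each estimated map point to the
    nearest ground-truth point (e.g., from a Lidar reference scan)."""
    est = np.asarray(estimated_pts)    # shape (N, 2), meters
    gt = np.asarray(ground_truth_pts)  # shape (M, 2), meters
    # Pairwise distances, then nearest ground-truth point per estimate.
    d = np.linalg.norm(est[:, None, :] - gt[None, :, :], axis=-1)
    per_point_error = d.min(axis=1)
    return np.median(per_point_error), np.percentile(per_point_error, 90)

# Errors of 0.11 m (median) and 0.26 m (90th percentile) would correspond
# to the 11 cm / 26 cm figures reported in the abstract.
```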


2021 ◽  
Vol 10 (1) ◽  
pp. 7
Author(s):  
Christoph Tholen ◽  
Tarek A. El-Mihoub ◽  
Lars Nolle ◽  
Oliver Zielinski

In this study, a set of different search strategies for locating submarine groundwater discharge (SGD) is investigated. This set includes pre-defined path planning (PPP), adapted random walk (RW), particle swarm optimisation (PSO), inertia Levy-flight (ILF), the self-organising migration algorithm (SOMA), and the bumblebee search algorithm (BB). The influences of self-localisation and communication errors and of the limited travel distance of the autonomous underwater vehicles (AUVs) on the performance of the proposed algorithms are investigated. This study shows that the proposed search strategies could not outperform the classic search heuristic based on full-coverage path planning if all AUVs followed the same search strategy. The simulations showed that, based on the median error of the search runs, the performance of SOMA stayed in the same order of magnitude regardless of the strength of the localisation error, whereas the performance of BB was highly affected by increasing localisation errors. The simulations also revealed that all the algorithms, except for PSO and SOMA, were unaffected by disturbed communications; here, the best performance was shown by PPP, followed by BB, SOMA, ILF, PSO, and RW. Furthermore, the influence of the limited travel distances of the AUVs on the search performance was evaluated. All the algorithms, except for PSO, were affected by shorter maximum travel distances of the AUVs. The performance of PPP increased with increasing maximum travel distance; however, for maximum travel distances > 1800 m the median error appeared constant. The effect of shorter travel distances on SOMA was smaller than on PPP, and for maximum travel distances < 1200 m, SOMA outperformed all other strategies. Only BB showed better performance for shorter travel distances than for longer ones. On the other hand, when each AUV may follow a different search strategy, the search performance of the whole swarm can be improved by incorporating population-based search strategies such as PSO and SOMA within the PPP scheme. The best performance was achieved by a combination of two AUVs following PPP while the third AUV utilised PSO. The best fitness of this combination was 15.9, which is 26.4% better than the performance of PPP, which was 20.4 on average. In addition, a novel mechanism, based on fuzzy logic, for dynamically selecting a search strategy for an AUV is proposed. This dynamic approach performs at least as well as PPP and SOMA for different travel distances of the AUVs. Moreover, owing to its better adaptation to the current situation, the overall performance of the proposed dynamic search-strategy selection, calculated from the fitness achieved over the different maximum travel distances, was 32.8% better than PPP and 34.0% better than SOMA.
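
For context, the canonical PSO position/velocity update that population-based strategies of this kind build on is sketched below; the parameter values and the toy plume-source fitness are illustrative assumptions, not the authors' simulation setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO update for a swarm of AUV positions, shape (N, 2)."""
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel

# Toy usage: minimise distance to a hypothetical SGD source at (500, 300) m.
source = np.array([500.0, 300.0])
pos = rng.uniform(0, 1000, size=(3, 2))  # three AUVs
vel = np.zeros_like(pos)
pbest = pos.copy()
gbest = pbest[np.argmin(np.linalg.norm(pbest - source, axis=1))]
for _ in range(100):
    pos, vel = pso_step(pos, vel, pbest, gbest)
    better = np.linalg.norm(pos - source, axis=1) < np.linalg.norm(pbest - source, axis=1)
    pbest[better] = pos[better]
    gbest = pbest[np.argmin(np.linalg.norm(pbest - source, axis=1))]
```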


Ingenius ◽  
2021 ◽  
Author(s):  
Lucas C. Lampier ◽  
Yves L. Coelho ◽  
Eliete M. O. Caldeira ◽  
Teodiano Bastos-Filho

This article describes the methodology used to train and test a Deep Neural Network (DNN) on photoplethysmography (PPG) data, performing a regression task to estimate the respiratory rate (RR). The DNN architecture is based on a model used to infer the heart rate (HR) from noisy PPG signals, which is optimized for the RR problem using genetic optimization. Two open-access datasets were used in the tests: BIDMC and CapnoBase. On the CapnoBase dataset, the DNN achieved a median error of 1.16 breaths/min, which is comparable with analytical methods in the literature, whose best reported error is 1.1 breaths/min (excluding the 8% noisiest data). The BIDMC dataset appears to be more challenging: the minimum median error of the literature's methods is 2.3 breaths/min (excluding the 6% noisiest data), whereas the DNN-based approach achieved a median error of 1.52 breaths/min on the whole dataset.
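
As a rough illustration of the approach, the sketch below defines a small 1-D convolutional regression network over fixed-length PPG windows; the window length, layer sizes, and loss are assumptions (the paper's actual architecture is derived from an HR model via genetic optimization):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Assumed input: 32 s PPG windows resampled to 125 Hz -> 4000 samples.
WINDOW = 4000

model = keras.Sequential([
    layers.Input(shape=(WINDOW, 1)),
    layers.Conv1D(16, 31, activation="relu"),
    layers.MaxPooling1D(4),
    layers.Conv1D(32, 15, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1),  # RR in breaths/min
])
model.compile(optimizer="adam", loss="mae")  # MAE aligns with median-error reporting

# x: (n_windows, WINDOW, 1) PPG segments, y: (n_windows,) reference RR
# model.fit(x, y, epochs=50, validation_split=0.2)
```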


AI ◽  
2021 ◽  
Vol 2 (4) ◽  
pp. 662-683
Author(s):  
Heiko Oppel ◽  
Michael Munz

Sports climbing has grown as a competitive sport over the last decades, which has led to increasing interest in guaranteeing the safety of the climber. In particular, operational errors caused by the belayer are one of the major issues leading to severe injuries. The objective of this study is to analyze and predict the severity of a pendulum fall based on the movement information from the belayer alone. The impact force, extracted using an Inertial Measurement Unit (IMU) on the climber, served as a reference. Additionally, another IMU was attached to the belayer, from which several hand-crafted features were explored. As this led to a high-dimensional feature space, dimension reduction techniques were required to improve the performance. We were able to predict the impact force with a median error of about 4.96%. Pre-defined windows as well as the applied feature dimension reduction techniques allowed for a meaningful interpretation of the results. The belayer was able to reduce the impact force acting on the climber by over 30%. A monitoring system in a training center could therefore improve the skills of a belayer and hence alleviate the severity of injuries.
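
One plausible shape for such a pipeline is sketched below: dimension reduction followed by a regressor, evaluated with the median relative error used in the abstract. The specific reducer (PCA), regressor, and placeholder data are assumptions, not the authors' exact method:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline

# x: (n_falls, n_features) hand-crafted belayer-IMU features
# y: (n_falls,) reference impact force from the climber's IMU
rng = np.random.default_rng(1)
x = rng.normal(size=(120, 200))      # placeholder feature matrix
y = rng.uniform(1.0, 5.0, size=120)  # placeholder impact forces (kN)

model = make_pipeline(PCA(n_components=20), RandomForestRegressor(random_state=0))
y_hat = cross_val_predict(model, x, y, cv=5)
median_pct_error = np.median(np.abs(y_hat - y) / y) * 100
print(f"median error: {median_pct_error:.2f}%")
```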


PLoS ONE ◽  
2021 ◽  
Vol 16 (11) ◽  
pp. e0260609
Author(s):  
Carina Albuquerque ◽  
Leonardo Vanneschi ◽  
Roberto Henriques ◽  
Mauro Castelli ◽  
Vanda Póvoa ◽  
...  

Cell counting is a frequent task in medical research studies. However, it is often performed manually; thus, it is time-consuming and prone to human error. Even so, automating cell counting can be challenging, especially when dealing with crowded scenes and overlapping cells of different shapes and sizes. In this paper, we introduce a deep learning-based cell detection and quantification methodology to automate the cell counting process in the zebrafish xenograft cancer model, an innovative technique for studying tumor biology and for personalizing medicine. First, we implemented a fine-tuned architecture based on Faster R-CNN with the Inception ResNet V2 feature extractor. Second, we performed several adjustments to optimize the process, paying attention to constraints such as the presence of overlapping cells, the high number of objects to detect, the heterogeneity of the cells' size and shape, and the small size of the data set. This method resulted in a median error of approximately 1% of the total number of cell units. These results demonstrate the potential of our novel approach for quantifying cells in poorly labeled images. Compared to traditional Faster R-CNN, our method improved the average precision from 71% to 85% on the studied data set.
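
A minimal sketch of the count-level metric implied by the abstract follows; the detection-score threshold and variable names are assumptions (the detections themselves would come from the fine-tuned Faster R-CNN):

```python
import numpy as np

def median_count_error(pred_scores_per_image, true_counts, score_thr=0.5):
    """Relative cell-count error per image, summarised by the median.

    pred_scores_per_image: list of arrays of detection confidences
    true_counts: array of manually counted cells per image
    """
    true = np.asarray(true_counts, dtype=float)
    pred = np.array([(s >= score_thr).sum() for s in pred_scores_per_image])
    rel_err = np.abs(pred - true) / true
    return np.median(rel_err) * 100  # percent of the true cell count

# A value near 1 would match the "median error of approximately 1%" reported.
```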


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 7976
Author(s):  
Remo Lazazzera ◽  
Pablo Laguna ◽  
Eduardo Gil ◽  
Guy Carrault

The present paper proposes the design of a sleep monitoring platform. It consists of an entire sleep monitoring system based on a smart glove sensor called UpNEA, worn during the night for signal acquisition, a mobile application, and a remote server called AeneA for cloud computing. UpNEA acquires a 3-axis accelerometer signal, a photoplethysmography (PPG) signal, and a peripheral oxygen saturation (SpO2) signal from the index finger. Overnight recordings are sent from the hardware to a mobile application and then transferred to AeneA. After cloud computing, the results are shown in a web application accessible to the user and the clinician. The AeneA sleep monitoring activity performs different tasks: sleep stage classification and oxygen desaturation assessment; heart rate and respiration rate estimation; tachycardia, bradycardia, atrial fibrillation, and premature ventricular contraction detection; and apnea and hypopnea identification and classification. The PPG breathing rate estimation algorithm showed an absolute median error of 0.5 breaths per minute for a 32 s window and 0.2 for a 64 s window. The apnea and hypopnea detection algorithm showed an accuracy (Acc) of 75.1% when windowing the PPG in one-minute segments. The classification task revealed 92.6% Acc in separating central from obstructive apnea, 83.7% in separating central apnea from central hypopnea, and 82.7% in separating obstructive apnea from obstructive hypopnea. The novelty of the integrated algorithms and the top-notch cloud computing products deployed encourage the production of the proposed solution for home sleep monitoring.
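
For illustration, one common way to estimate the breathing rate from a PPG window is to locate the dominant spectral peak in the respiratory band; the sketch below uses this standard approach and is not necessarily the UpNEA/AeneA algorithm:

```python
import numpy as np
from scipy.signal import welch

def ppg_breathing_rate(ppg, fs, window_s=64):
    """Estimate breathing rate from a PPG segment via the dominant
    spectral peak in the respiratory band (0.1-0.5 Hz, i.e., 6-30 bpm)."""
    seg = ppg[: int(window_s * fs)]
    f, pxx = welch(seg, fs=fs, nperseg=len(seg))
    band = (f >= 0.1) & (f <= 0.5)
    return f[band][np.argmax(pxx[band])] * 60.0  # breaths per minute

# Toy check: a 0.25 Hz respiratory baseline wander on a 1.2 Hz cardiac
# pulse should come out near 15 breaths/min.
fs = 25.0
t = np.arange(0, 64, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 0.25 * t)
print(ppg_breathing_rate(ppg, fs))
```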


2021 ◽  
Vol 17 (11) ◽  
pp. e1009449
Author(s):  
Shahab Sarmashghi ◽  
Metin Balaban ◽  
Eleonora Rachtman ◽  
Behrouz Touri ◽  
Siavash Mirarab ◽  
...  

The cost of sequencing a genome is dropping at a much faster rate than the cost of assembling and finishing it. The use of lightly sampled genomes (genome-skims) could be transformative for genomic ecology, and results using k-mers have shown the advantage of this approach for the identification and phylogenetic placement of eukaryotic species. Here, we revisit the basic question of estimating genomic parameters such as genome length, coverage, and repeat structure, focusing specifically on estimating the k-mer repeat spectrum. Using a mix of theoretical and empirical analysis, we show that there are fundamental limitations to estimating the k-mer spectra due to ill-conditioned systems, and that this has implications for other genomic parameters. We get around this problem using a novel constrained optimization approach (Spline Linear Programming), where the constraints are learned empirically. On reads simulated at 1X coverage from 66 genomes, our method, REPeat SPECTra Estimation (RESPECT), had < 1.5% error in length estimation, compared to the 34% error previously achieved. On shotgun-sequenced read samples with contaminants, RESPECT length estimates had a median error of 4%, in contrast to other methods, which had a median error of 80%. Together, the results suggest that low-pass genomic sequencing can yield reliable estimates of the length and repeat content of the genome. The RESPECT software will be publicly available at https://github.com/shahab-sarmashghi/RESPECT.git.
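
For background, the observed k-mer count spectrum that RESPECT models can be computed from reads as sketched below; this toy counter is illustrative only (production genome-skim pipelines use dedicated k-mer counters):

```python
from collections import Counter

def kmer_spectrum(reads, k=31):
    """Observed k-mer spectrum: spectrum[i] = number of distinct k-mers
    seen exactly i times across all the reads."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i : i + k]
            if "N" not in kmer:  # skip ambiguous bases
                counts[kmer] += 1
    spectrum = Counter(counts.values())
    return dict(sorted(spectrum.items()))

reads = ["ACGTACGTACGTACGTACGTACGTACGTACGTACGT"]
print(kmer_spectrum(reads, k=31))  # repeats show up as multiplicity > 1
```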


2021 ◽  
Vol 38 (6) ◽  
pp. 109-119
Author(s):  
Aleksandr S. Prylutskyi ◽  
Sergey V. Kapranov ◽  
Kseniia E. Tkachenko ◽  
Lubov I. Yalovega

Objective. To assess the effectiveness of low-dose air ozonation for disinfection of the air in a working room. Materials and methods. We investigated 90 air samples (3 samples were taken weekly, before and after the production meeting, using the PU-1B automatic sampling device for biological air aerosols). The total bacterial contamination and the content of staphylococci and mold spores were determined. Ozonation of the room (83.3 m3) was carried out for 20 minutes by means of a domestic ozonator. The accumulated dose of ozone was 133.3 mg (1.6 mg/m3). Statistical data processing was carried out using the licensed MedStat program. The median with its error (Me ± me) and the left and right 95% confidence intervals (95% CI) were calculated. Paired comparisons were made using Wilcoxon's T-test. Results. After the meeting, the total bacterial contamination of the air was 56.0 ± 9.3 (47.0–78.0) CFU. The content of staphylococci and mold spores in the air was 85.5 ± 12.5 (76.0–100.0) and 44.5 ± 6.5 (32.0–54.0) CFU, respectively. After ozonation, the total bacterial contamination of the air was 14.5 ± 3.6 (10.0–21.0) CFU. The content of staphylococci and mold spores in the air after ozonation was 35.5 ± 6.7 (25.0–52.0) and 26.0 ± 5.0 (18.0–32.0) CFU, respectively. The room ozonation thus provided a significant decrease (p < 0.001) in all three of the above indicators. Conclusions. The above data and the analysis of the literature show the possibility of using low doses of ozone for the prevention of bacterial, fungal, and viral infections, including SARS-CoV-2. Further study and development of reasonable modes of ozone disinfection, including low doses of ozone, is needed, as is determination of the degree of efficiency of air disinfection with non-toxic gas concentrations.
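
A minimal sketch of the paired Wilcoxon comparison described above follows; the per-sample CFU values are hypothetical placeholders chosen to be consistent with the reported medians, since the raw data are not given in the abstract:

```python
import numpy as np
from scipy.stats import wilcoxon

# Paired CFU counts per sampling session (hypothetical values consistent
# with the reported medians; the real per-sample data are not published here).
before = np.array([47, 52, 56, 60, 63, 70, 74, 78, 55, 58])
after = np.array([10, 12, 14, 15, 16, 18, 19, 21, 13, 17])

stat, p = wilcoxon(before, after)
print(f"median before: {np.median(before):.1f} CFU, "
      f"after: {np.median(after):.1f} CFU, p = {p:.4f}")
```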


2021 ◽  
Author(s):  
Sophie Goliber ◽  
Taryn Black ◽  
Ginny Catania ◽  
James M. Lea ◽  
Helene Olsen ◽  
...  

Abstract. Marine-terminating outlet glacier terminus traces, mapped from satellite and aerial imagery, have been used extensively to understand how outlet glaciers adjust to climate variability over a range of time scales. Numerous studies have digitized termini manually, but this process is labor-intensive, and no consistent approach exists. The lack of coordination leads to duplication of effort, particularly for Greenland, which is a major focus of scientific research. At the same time, machine learning techniques are rapidly improving in their ability to automate the accurate extraction of glacier termini, with promising developments across a number of optical and SAR satellite sensors. These techniques rely on high-quality, manually digitized terminus traces as training data for robust automatic tracing. Here we present a database of manually digitized terminus traces for machine learning and scientific applications. These data have been collected, cleaned, assigned appropriate metadata (including image scenes), and compiled so that they can be easily accessed by scientists. The TermPicks data set includes 39,060 individual terminus traces for 278 glaciers, with a mean and median number of traces per glacier of 136 ± 190 and 93, respectively. Across all glaciers, 32,567 dates have been picked, of which 4,467 have traces from more than one author (a duplication rate of 14%). We find a median error of ∼100 m among manually traced termini. Most traces were obtained after 1999, when Landsat 7 was launched. We also provide an overview of an updated version of the Google Earth Engine Digitization Tool (GEEDiT), which has been developed specifically for future manual picking over the Greenland Ice Sheet.
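
For illustration, a simple way to quantify disagreement between two manual traces of the same terminus, in the spirit of the ∼100 m median error, is the symmetric nearest-vertex distance sketched below; the actual comparison metric used for TermPicks may differ:

```python
import numpy as np

def trace_disagreement(trace_a, trace_b):
    """Median symmetric nearest-vertex distance (m) between two terminus
    traces given as (N, 2) arrays of projected x/y coordinates."""
    a = np.asarray(trace_a, dtype=float)
    b = np.asarray(trace_b, dtype=float)
    d_ab = np.linalg.norm(a[:, None] - b[None, :], axis=-1)
    nearest = np.concatenate([d_ab.min(axis=1), d_ab.min(axis=0)])
    return np.median(nearest)

# Two hypothetical traces of the same terminus, offset by ~100 m:
x = np.linspace(0, 5000, 50)
t1 = np.column_stack([x, np.zeros_like(x)])
t2 = np.column_stack([x, np.full_like(x, 100.0)])
print(trace_disagreement(t1, t2))  # -> 100.0
```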

