computational simplicity
Recently Published Documents


TOTAL DOCUMENTS

78
(FIVE YEARS 21)

H-INDEX

10
(FIVE YEARS 2)

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Petr Jančík ◽  
Michal Schmirler ◽  
Tomáš Hyhlík ◽  
Adam Bláha ◽  
Pavel Sláma ◽  
...  

Abstract: Efficient heat storage is required to maximize the potential of combined heat and power generation or renewable energy sources for heating, and using a phase change material (PCM) can be an attractive choice in several instances. A commercially available paraffin-based PCM was investigated using the T-history method, with sufficient agreement with the manufacturer's data. The introduced latent heat thermal energy storage (LHTES) with cylindrical capsules is simple and scalable in capacity, charging/discharging time, and temperature level; its overall stored energy density is 9% higher than a previously proposed design of similar complexity. The discharging process of the designed LHTES was evaluated for two different flow rates, with the PCM temperature inside the capsules, the heat transfer fluid (HTF) temperature, and the HTF flow rate all measured. A lumped-parameter numerical model was developed and validated successfully. The advantage of the proposed model is its computational simplicity, which makes it usable in simulations of a whole heat distribution network. The so-called state of charge (SoC), which plays a crucial role in successful heat storage management, is part of the evaluation of both the experimental and computational data.
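To make the SoC notion above concrete, here is a minimal sketch of how a lumped-parameter model might map a measured PCM temperature to a state of charge. All parameter values (melting range, specific heat, latent heat, charge/discharge temperatures) are hypothetical, not the authors' values.

```python
# Illustrative sketch (not the authors' model): a lumped-parameter estimate of
# the state of charge (SoC) of a PCM capsule from its temperature, assuming an
# enthalpy curve with sensible heat outside the melting range and latent heat
# released linearly across it. All parameter values are hypothetical.

def enthalpy(T, T_solidus=52.0, T_liquidus=58.0, c_p=2.0, L=180.0):
    """Specific enthalpy (kJ/kg) relative to T_solidus for a paraffin-like PCM."""
    if T <= T_solidus:
        return c_p * (T - T_solidus)
    if T >= T_liquidus:
        return L + c_p * (T - T_solidus)
    # Linear enthalpy rise across the (assumed) melting range.
    return L * (T - T_solidus) / (T_liquidus - T_solidus) + c_p * (T - T_solidus)

def state_of_charge(T, T_min=40.0, T_max=70.0):
    """SoC = stored energy fraction between discharged (T_min) and charged (T_max) states."""
    return (enthalpy(T) - enthalpy(T_min)) / (enthalpy(T_max) - enthalpy(T_min))
```

With these assumed parameters, a capsule at the discharge temperature gives SoC = 0 and one at the charge temperature gives SoC = 1; mid-melting temperatures fall in between.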


2021 ◽  
Vol 5 (1) ◽  
Author(s):  
Walter Ikegami Andersson ◽  
Adeel Akram ◽  
Tord Johansson ◽  
Ralf Kliemt ◽  
Michael Papenbrock ◽  
...  

Abstract: The upcoming PANDA experiment at FAIR will be among a new generation of particle physics experiments to employ a novel event filtering system realised purely in software, i.e., a software trigger. To inform its triggering decisions, online reconstruction algorithms need to offer outstanding efficiency and track quality. We present a method to reconstruct longitudinal track parameters in PANDA's Straw Tube Tracker that is general enough to be easily added to other track-finding algorithms focused on transverse reconstruction. For the pattern recognition part of this method, three approaches are employed and compared: a combinatorial path-finding approach, a Hough transformation, and a recursive annealing fit. In a systematic comparison, the recursive annealing fit was found to outperform the other approaches in every category of quality parameters and reaches a reconstruction efficiency of 95% and higher. Due to its computational simplicity, the recursive annealing fit was also found to have faster execution times than the other algorithms.
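For readers unfamiliar with the Hough approach mentioned above: reconstructing longitudinal track parameters amounts to fitting a line z = z0 + slope * s through hits in the arc-length/z plane, and a Hough transform lets each hit vote for the (z0, slope) bins compatible with it. The sketch below is a generic, heavily simplified illustration, not PANDA code; the binning and parameterization are assumptions.

```python
# A minimal Hough-transform sketch (not the PANDA implementation): each hit
# (s_i, z_i) votes for all (z0, slope) parameter bins consistent with the line
# z = z0 + slope * s; the most-voted bin is the reconstructed track.
import numpy as np

def hough_line(s, z, z0_bins, slope_bins):
    """Return the (z0, slope) bin centers receiving the most votes."""
    acc = np.zeros((len(z0_bins), len(slope_bins)), dtype=int)
    for si, zi in zip(s, z):
        for j, m in enumerate(slope_bins):
            z0 = zi - m * si                      # intercept implied by this hit/slope
            i = np.argmin(np.abs(z0_bins - z0))   # nearest intercept bin
            acc[i, j] += 1
    i, j = np.unravel_index(acc.argmax(), acc.shape)
    return z0_bins[i], slope_bins[j]
```

On hits lying on a common line, the correct parameter bin accumulates one vote per hit and dominates the accumulator, which is what makes the method robust to isolated noise hits.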


2021 ◽  
Vol 11 (19) ◽  
pp. 8867
Author(s):  
Michele Scarpiniti ◽  
Sima Sarv Ahrabi ◽  
Enzo Baccarelli ◽  
Lorenzo Piazzo ◽  
Alireza Momenzadeh

The global COVID-19 pandemic has posed one of the most difficult challenges researchers have faced this century. The development of an automatic diagnostic tool, able to detect the disease in its early stage, could undoubtedly offer a great advantage in the battle against the pandemic. In this regard, most research efforts have focused on applying Deep Learning (DL) techniques to chest images, including traditional chest X-rays (CXRs) and Computed Tomography (CT) scans. Although these approaches have demonstrated their effectiveness in detecting COVID-19, they are computationally expensive and require large datasets for training. In addition, large numbers of COVID-19 CXRs and CT scans may not be available to researchers. To this end, in this paper, we propose an approach based on evaluating the histogram of a common class of images, taken as the target. A suitable inter-histogram distance measures how far this target histogram is from the histogram evaluated on a test image: if this distance exceeds a threshold, the test image is labeled as an anomaly, i.e., the scan belongs to a patient affected by COVID-19. Extensive experimental results and comparisons with benchmark state-of-the-art methods support the effectiveness of the developed approach and demonstrate that, at least when the images of the considered datasets are homogeneous enough (i.e., few outliers are present), it is not necessary to resort to complex-to-implement DL techniques to attain effective detection of COVID-19. Despite the simplicity of the proposed approach, all the considered metrics (i.e., accuracy, precision, recall, and F-measure) attain a value of 1.0 on the selected datasets, a result comparable to the corresponding state-of-the-art DNN approaches, but with remarkable computational simplicity.
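A hedged sketch of the histogram-threshold idea described above: build a target histogram from "normal" images, then flag a test image whose histogram is too far from the target. The specific distance (a chi-square-like measure here), bin count, and threshold are illustrative assumptions; the paper's exact choices may differ.

```python
# Sketch of histogram-based anomaly detection (distance measure and threshold
# are assumed for illustration, not taken from the paper).
import numpy as np

def normalized_histogram(image, bins=64):
    """Intensity histogram of an 8-bit image, normalized to sum to 1."""
    h, _ = np.histogram(image, bins=bins, range=(0, 255))
    return h / h.sum()

def is_anomalous(test_image, target_hist, threshold=0.2, bins=64):
    """Chi-square-like inter-histogram distance compared against a threshold."""
    h = normalized_histogram(test_image, bins)
    d = 0.5 * np.sum((h - target_hist) ** 2 / (h + target_hist + 1e-12))
    return d > threshold
```

The appeal is exactly the computational simplicity claimed in the abstract: one histogram and one vector distance per test image, with no training beyond estimating the target histogram.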


2021 ◽  
Vol 22 (4) ◽  
pp. 878-881
Author(s):  
Sara Nourazari ◽  
Jonathan Harding ◽  
Samuel Davis ◽  
Ori Litvak ◽  
Stephen Traub ◽  
...  

Introduction: Daily patient volume in emergency departments (EDs) varies considerably between days and sites. Although studies have attempted to define “high-volume” days, no standard definition exists. Furthermore, it is not clear whether the frequency of high-volume days, by any definition, is related to the size of an ED. We aimed to determine the correlation between ED size and the frequency of high-volume days for various volume thresholds, and to develop a measure to identify high-volume days. Methods: We queried retrospective patient arrival data comprising 1,682,374 patient visits from 32 EDs in 12 states between July 1, 2018 and June 30, 2019, and developed linear regression models to determine the correlation between ED size and volume variability. In addition, we performed a regression analysis and applied the Pearson correlation test to investigate the significance of median daily volumes with respect to the percent of days that crossed four volume thresholds ranging from 5–20% (in 5% increments) above each site’s median daily volume. Results: We found a strong negative correlation between ED median daily volume and volume variability (R2 = 81.0%; P < 0.0001). In addition, the four regression models for the percent of days exceeding specified thresholds above their daily median volumes had R2 values of 49.4%, 61.2%, 70.0%, and 71.8%, respectively, all with P < 0.0001. Conclusion: We sought to determine whether smaller EDs experience high-volume days more frequently than larger EDs. We found that high-volume days, when defined as days with a count of arrivals at or above certain median-based thresholds, are significantly more likely to occur in lower-volume EDs than in higher-volume EDs. To the extent that EDs allocate resources and plan staffing based on median volumes, these results suggest that smaller EDs are more likely to experience unpredictable, volume-based staffing challenges and operational costs. Given the lack of a standard measure to define a high-volume day in an ED, we recommend 10% above the median daily volume as a metric, for its relevance, generalizability across a broad range of EDs, and computational simplicity.
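The recommended metric is simple enough to state in a few lines of code. The sketch below uses hypothetical arrival counts; only the 10%-above-median rule comes from the abstract.

```python
# Flag "high-volume" days: arrivals at or above (1 + margin) * median daily
# volume, with margin = 0.10 as recommended above. Counts are hypothetical.
import statistics

def high_volume_days(daily_counts, margin=0.10):
    """Return indices of days whose arrival count meets the threshold."""
    threshold = (1 + margin) * statistics.median(daily_counts)
    return [i for i, n in enumerate(daily_counts) if n >= threshold]
```

For example, for a week of counts [100, 102, 98, 100, 115, 100, 95] the median is 100, the threshold is 110, and only the fifth day qualifies.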


Author(s):  
Wenbing Wang ◽  
Shengli Liu ◽  
Liu Feng

The generic polar complex exponential transform (GPCET), as a continuous orthogonal moment, has the advantages of computational simplicity, numerical stability, and resistance to geometric transforms, which make it suitable for watermarking. However, errors in kernel function discretization can degrade these advantages. To maximize the utility of the GPCET in robust watermarking, this paper proposes a secondary grid-division (SGD)-based moment calculation method that divides each grid cell corresponding to one pixel into nonoverlapping subgrids, increasing the number of sampling points. Using this accurate moment calculation method, a nonsubsampled contourlet transform (NSCT)–GPCET-based watermarking scheme with resistance to image processing and geometrical attacks is proposed. In this scheme, the accurate moment calculation reduces the numerical and geometrical errors of the traditional methods, which is verified by an image reconstruction comparison. Additionally, the NSCT and the accurate GPCET are utilized to achieve watermark stability. Subsequent experiments test the proposed watermarking scheme for invisibility and robustness, and verify that the robustness of the proposed scheme outperforms that of other schemes while its level of invisibility is significantly higher.
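The secondary grid-division idea can be illustrated generically: instead of sampling the moment kernel once per pixel, each pixel is split into k × k subgrids and the kernel is averaged over the subgrid centers. The kernel below is a placeholder function, not the actual GPCET basis, which involves radial complex exponentials; this sketch only shows the sampling scheme.

```python
# Illustrative sketch of secondary grid division (SGD) for moment calculation.
# `kernel` is a hypothetical stand-in for a GPCET basis function.
import numpy as np

def moment_sgd(image, kernel, k=4):
    """Approximate sum over pixels of image * mean(kernel at k*k subgrid centers)."""
    h, w = image.shape
    offs = (np.arange(k) + 0.5) / k          # subgrid centers within a unit pixel
    total = 0.0
    for y in range(h):
        for x in range(w):
            samples = [kernel(x + ox, y + oy) for ox in offs for oy in offs]
            total += image[y, x] * np.mean(samples)
    return total
```

With k = 1 this reduces to the usual one-sample-per-pixel discretization; larger k reduces the discretization error of rapidly varying kernels at the cost of more kernel evaluations.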


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 3959
Author(s):  
Krzysztof Zarzycki ◽  
Maciej Ławryńczuk

This work is concerned with an original ball-on-plate laboratory process. First, a simplified process model based on a state-space description is derived. Next, a fast state-space MPC algorithm is discussed. Its main advantage is computational simplicity: the manipulated variables are found online using explicit formulas with parameters calculated offline; no real-time optimization is necessary. Software and hardware implementation details of the considered MPC algorithm on the STM32 microcontroller are presented, and tuning of the fast MPC algorithm is discussed. To show the efficacy of the MPC algorithm, it is compared with classical PID and LQR controllers.
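The "explicit formulas with parameters calculated off-line" structure can be sketched for the unconstrained linear case, where the MPC optimization has a closed form. This is a generic illustration with hypothetical model matrices, not the paper's STM32 implementation; the real scheme's model, horizon, and weights differ.

```python
# Sketch of fast unconstrained state-space MPC: all matrices are computed
# offline; the online law is a single matrix-vector product (no optimizer).
# Model, horizon, and weights below are hypothetical.
import numpy as np

def mpc_gains(A, B, Q, R, N):
    """Offline: prediction matrices F, Phi and the first-move gain K1."""
    n, m = B.shape
    F = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
    Phi = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            Phi[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    Qb = np.kron(np.eye(N), Q)
    Rb = np.kron(np.eye(N), R)
    K = np.linalg.solve(Phi.T @ Qb @ Phi + Rb, Phi.T @ Qb)  # all N moves
    return K[:m], F                                          # keep first move only

def mpc_control(x, K1, F, x_ref_traj):
    """Online: explicit control law, receding-horizon first move."""
    return K1 @ (x_ref_traj - F @ x)
```

Because the online step is just `K1 @ (ref - F @ x)`, it fits comfortably on a microcontroller; this is the essential reason for the computational simplicity the abstract highlights.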


2021 ◽  
Vol 13 (11) ◽  
pp. 2125
Author(s):  
Bardia Yousefi ◽  
Clemente Ibarra-Castanedo ◽  
Martin Chamberland ◽  
Xavier P. V. Maldague ◽  
Georges Beaudoin

Clustering methods strongly influence many recent algorithms and play an important role in hyperspectral data analysis. Here, we evaluate clustering for mineral identification using two different strategies in the hyperspectral long-wave infrared (LWIR, 7.7–11.8 μm), comparing two algorithms on a single dataset. The first algorithm applies spectral comparison techniques to all pixel spectra and creates RGB false color composites (FCC); a color-based clustering then groups the regions (called FCC-clustering). The second algorithm clusters all pixel spectra directly; the first rank of a non-negative matrix factorization (NMF) then extracts a representative of each cluster and compares it with the JPL/NASA spectral library. These comparison values serve as features that are converted into an RGB-FCC result (called clustering-rank1-NMF). We applied K-means as the clustering approach, which can be replaced by any similar clustering method. The clustering-rank1-NMF algorithm shows significant computational efficiency (more than 20 times faster than the other approach) and promising performance for mineral identification, with average accuracies of up to 75.8% and 84.8% for the FCC-clustering and clustering-rank1-NMF algorithms (using the spectral angle mapper (SAM)), respectively. Furthermore, several spectral comparison techniques were also used for both algorithms, such as the adaptive matched subspace detector (AMSD), the orthogonal subspace projection (OSP) algorithm, principal component analysis (PCA), the local matched filter (PLMF), SAM, and normalized cross correlation (NCC), and most of them show a similar range of accuracy; however, SAM and NCC are preferred due to their computational simplicity.
Our algorithms strive to identify eleven different mineral grains (biotite, diopside, epidote, goethite, kyanite, scheelite, smithsonite, tourmaline, pyrope, olivine, and quartz).
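The spectral angle mapper (SAM) favored above for its computational simplicity is just the angle between two spectra viewed as vectors, which makes it insensitive to overall illumination scaling. A minimal sketch (the library spectra here are hypothetical placeholders, not JPL/NASA entries):

```python
# Spectral angle mapper (SAM): angle between two spectra; 0 means the spectra
# have identical shape regardless of overall intensity.
import numpy as np

def spectral_angle(a, b):
    """Angle in radians between spectra a and b."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def best_match(spectrum, library):
    """Index of the library spectrum with the smallest spectral angle."""
    return min(range(len(library)), key=lambda i: spectral_angle(spectrum, library[i]))
```

Note that a spectrum and any positive scalar multiple of it have spectral angle zero, which is exactly the scale invariance that makes SAM attractive for hyperspectral matching.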


Author(s):  
Rupam Mukherjee

For prognostics in industrial applications, the degree of anomaly of a test point with respect to a baseline cluster is estimated using a statistical distance metric. Among the different statistical distance metrics, energy distance is an interesting concept based on Newton’s law of gravitation, promising simpler computation than classical distance metrics. In this paper, we review the state-of-the-art formulations of energy distance and point out several reasons why they are not directly applicable to the anomaly-detection problem. We therefore propose a new energy-based metric called the P-statistic that addresses these issues, is applicable to anomaly detection, and retains the computational simplicity of the energy distance. We also demonstrate its effectiveness on a real-life dataset.
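For context, the standard sample energy distance that the P-statistic builds on can be written as E(X, Y) = 2·E|X−Y| − E|X−X′| − E|Y−Y′|, estimated from pairwise distances. This sketch shows the baseline formulation only; the paper's P-statistic modifies it in ways not reproduced here.

```python
# Sample energy distance between two 1-D samples (baseline formulation, not
# the paper's P-statistic): zero for identical distributions, positive otherwise.
import numpy as np

def energy_distance(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    d_xy = np.abs(x[:, None] - y[None, :]).mean()   # mean cross-distance
    d_xx = np.abs(x[:, None] - x[None, :]).mean()   # mean within-X distance
    d_yy = np.abs(y[:, None] - y[None, :]).mean()   # mean within-Y distance
    return 2 * d_xy - d_xx - d_yy
```

The computation is a handful of pairwise-distance averages, which is the "simpler computation than classical distance metrics" the abstract refers to.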


Author(s):  
Bakhe Nleya

The emergence of the Internet of Things (IoT) and cloud computing, as well as the introduction of device-to-device communication for devices in proximity, has resulted in innovative new services such as tele-care in the health sector. However, the privacy and security issues associated with such a service are a challenge, as most of the associated devices are resource-constrained in terms of both operating power and computing capability. As such, it becomes problematic to implement traditional or current privacy and security measures. Thus, in this paper, we work toward a framework that ensures robust privacy and security for a tele-care service. Notably, our focus is on ensuring computational simplicity, privacy preservation, and energy efficiency. Overall, analysis shows that the proposed protocol improves on the performance of existing ones.


Author(s):  
Keun Ha Choi ◽  
SooHyun Kim

In this paper, we propose a novel method, illumination-invariant vegetation detection (IVD), to improve many aspects of agriculture for vision-based autonomous machines and robots. The proposed method derives new color feature functions by simultaneously modeling the spectral properties of the color camera and the scene illumination. An experiment was performed in which an image sample dataset was acquired under natural illumination, including various intensities, weather conditions, shadows, and reflections. The results show that the proposed method (IVD) yields the highest performance with the lowest error and standard deviation, and is superior to six typical methods. Our method has multiple strengths, including computational simplicity and uniformly high-accuracy image segmentation.
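For comparison with the color-feature approach described above: this is NOT the paper's IVD method, but a classic, much simpler baseline for vegetation segmentation is thresholding the excess-green index ExG = 2g − r − b on chromaticity-normalized pixels; IVD is designed to outperform precisely this kind of illumination-sensitive feature.

```python
# Excess-green (ExG) vegetation segmentation, a standard baseline (not IVD).
# The threshold value is an assumption for illustration.
import numpy as np

def vegetation_mask(rgb, threshold=0.1):
    """Boolean vegetation mask for an (H, W, 3) float RGB image."""
    s = rgb.sum(axis=2, keepdims=True) + 1e-12
    r, g, b = np.moveaxis(rgb / s, 2, 0)       # chromaticity coordinates
    return (2 * g - r - b) > threshold
```

A strongly green pixel scores high on ExG while gray or brown soil pixels score near zero; the baseline's weakness under shadows and reflections is what motivates illumination-invariant features like those of IVD.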

