memory footprint
Recently Published Documents

TOTAL DOCUMENTS: 158 (five years: 79)
H-INDEX: 9 (five years: 3)
Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 548
Author(s):  
Manuel Córdova ◽  
Allan Pinto ◽  
Christina Carrozzo Hellevik ◽  
Saleh Abdel-Afou Alaliyat ◽  
Ibrahim A. Hameed ◽  
...  

Pollution in the form of litter in the natural environment is one of the great challenges of our times. Automated litter detection can help assess waste occurrences in the environment. Different machine learning solutions have been explored to develop litter detection tools, thereby supporting research, citizen science, and volunteer clean-up initiatives. However, to the best of our knowledge, no work has investigated the performance of state-of-the-art deep learning object detection approaches in the context of litter detection. In particular, no studies have assessed these methods with a view to their use on devices with low processing capabilities, e.g., the mobile phones typically employed in citizen science activities. In this paper, we fill this literature gap. We performed a comparative study involving state-of-the-art CNN architectures (e.g., Faster R-CNN, Mask R-CNN, EfficientDet, RetinaNet, and YOLOv5), two litter image datasets, and a smartphone. We also introduce a new dataset for litter detection, named PlastOPol, composed of 2418 images and 5300 annotations. The experimental results demonstrate that object detectors from the YOLO family are promising for building litter detection solutions, with superior performance in terms of detection accuracy, processing time, and memory footprint.
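To get a rough sense of why memory footprint separates such detectors, the weight memory of a model can be estimated from its parameter count. The counts below are approximate public figures for common variants, used purely for illustration (not the paper's measurements):

```python
# Rough weight-memory comparison of object detectors by parameter count.
# Parameter counts (in millions) are approximate public figures for common
# variants; the footprint is simply params * 4 bytes (FP32), ignoring
# activations and framework overhead.

PARAM_COUNTS = {
    "YOLOv5s": 7.2,
    "EfficientDet-D0": 3.9,
    "RetinaNet (R50)": 36.4,
    "Faster R-CNN (R50)": 41.5,
    "Mask R-CNN (R50)": 44.2,
}

def fp32_footprint_mb(params_millions: float) -> float:
    """Weight memory in MiB assuming 4 bytes per FP32 parameter."""
    return params_millions * 1e6 * 4 / 2**20

footprints = {name: fp32_footprint_mb(p) for name, p in PARAM_COUNTS.items()}
for name, mb in sorted(footprints.items(), key=lambda kv: kv[1]):
    print(f"{name:20s} ~{mb:6.1f} MiB")
```

Even before any benchmark, this back-of-the-envelope estimate suggests why the smaller YOLO variants fit smartphone deployments more comfortably than two-stage detectors.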


Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8241
Author(s):  
Mitko Aleksandrov ◽  
Sisi Zlatanova ◽  
David J. Heslop

Voxel-based data structures, algorithms, frameworks, and interfaces have been used in computer graphics and many other applications for decades. There is a general need for digital representations, such as voxels, that provide unified data structures, multi-resolution options, robust validation procedures, and flexible algorithms for different 3D tasks. In this review, we evaluate the most common properties and algorithms for voxelisation of 2D and 3D objects, presenting many voxelisation algorithms and their characteristics for points, lines, triangles, surfaces, and solids as geometric primitives. For lines, we identify three groups of algorithms: the first two achieve different voxelisation connectivity, while the third voxelises curves. Surface voxelisation is generally preferable to solid voxelisation, as it can be achieved faster and requires less memory when voxels are stored sparsely. We also evaluate the available voxel data structures, splitting them into static and dynamic grids according to how frequently the structure is updated. Static grids are dominated by SVO-based data structures focused on memory footprint reduction and attribute preservation, with SVDAG and SSVDAG being the most advanced methods. The state-of-the-art dynamic voxel data structure is NanoVDB, which is superior to the rest in terms of speed as well as support for out-of-core processing and data management, the key to handling large, dynamically changing scenes. To our knowledge, this is the first review to evaluate voxelisation algorithms for different geometric primitives together with voxel data structures.
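The memory argument for sparse storage is easy to see in a toy example: voxelise a line segment, then compare a fully allocated dense grid against a sparse set of occupied voxels. The voxelisation here is naive dense sampling, a deliberately simple hypothetical scheme (real algorithms such as 3D DDA guarantee 6- or 26-connectivity):

```python
# Toy comparison of sparse vs dense voxel storage for a voxelised line.

def voxelise_line(p0, p1, n_samples=1000):
    """Return the set of integer voxel coordinates a segment passes through,
    found by dense parameter sampling (no connectivity guarantee)."""
    voxels = set()
    for i in range(n_samples + 1):
        t = i / n_samples
        voxels.add(tuple(int(p0[k] + t * (p1[k] - p0[k])) for k in range(3)))
    return voxels

GRID = 128  # grid resolution per axis
line_voxels = voxelise_line((0, 0, 0), (GRID - 1, GRID - 1, GRID - 1))

dense_bytes = GRID ** 3                # 1 byte per voxel, fully allocated
sparse_bytes = len(line_voxels) * 12   # 3 x 4-byte ints per occupied voxel
print(f"occupied voxels: {len(line_voxels)}")
print(f"dense grid : {dense_bytes / 2**20:.2f} MiB")
print(f"sparse set : {sparse_bytes / 1024:.2f} KiB")
```

For a 1D primitive in a 128³ grid, the sparse representation is three orders of magnitude smaller, which is the basic motivation behind SVO-style structures.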


AI ◽  
2021 ◽  
Vol 2 (4) ◽  
pp. 705-719
Author(s):  
Qian Huang ◽  
Chenghung Hsieh ◽  
Jiaen Hsieh ◽  
Chunchen Liu

Artificial intelligence (AI) is fundamentally transforming smart buildings by increasing energy efficiency and operational productivity, improving the life experience, and enabling better healthcare services. Sudden Infant Death Syndrome (SIDS) is the unexpected and unexplained death of an infant under one year old. Previous research reports that sleeping on the back can significantly reduce the risk of SIDS. Existing sensor-based wearable or touchable monitors have serious drawbacks, such as inconvenience and false alarms, so they are unattractive for monitoring infant sleeping postures. Several recent studies use a camera, portable electronics, and an AI algorithm to monitor the sleep postures of infants. However, two major bottlenecks prevent AI from detecting potential baby sleeping hazards in smart buildings: the lack of a suitable dataset and the huge memory demand of existing models. To overcome these bottlenecks, in this work we create a complete dataset containing 10,240 day and night vision samples, and use post-training weight quantization to solve the memory demand problem. Experimental results verify the effectiveness and benefits of the proposed idea. Compared with state-of-the-art AI algorithms in the literature, the proposed method reduces the memory footprint by at least 89% while achieving a similarly high detection accuracy of about 90%. Our AI algorithm requires only 6.4 MB of memory, whereas existing AI algorithms for sleep posture detection require 58.2 MB to 275 MB; that is, memory is reduced by at least 9 times without sacrificing detection accuracy. Our memory-efficient AI algorithm therefore has great potential to be deployed on edge devices, such as microcontrollers and the Raspberry Pi, which have limited memory, power budgets, and computing resources.
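Post-training weight quantization of the kind mentioned above can be sketched in a few lines: FP32 weights are mapped to int8 with a per-tensor scale, cutting weight memory by 4x at the cost of a small rounding error. The layer shape and symmetric scheme below are illustrative assumptions, not the paper's exact recipe:

```python
import numpy as np

# Minimal post-training weight quantization sketch on hypothetical layer
# weights: symmetric per-tensor int8 quantization.

rng = np.random.default_rng(0)
w_fp32 = rng.normal(0.0, 0.05, size=(256, 128)).astype(np.float32)

scale = np.abs(w_fp32).max() / 127.0           # symmetric per-tensor scale
w_int8 = np.clip(np.round(w_fp32 / scale), -127, 127).astype(np.int8)
w_dequant = w_int8.astype(np.float32) * scale  # reconstruction at inference

ratio = w_fp32.nbytes / w_int8.nbytes          # 4.0: fp32 -> int8
max_err = np.abs(w_fp32 - w_dequant).max()     # bounded by scale / 2
print(f"memory reduction: {ratio:.0f}x, max abs error: {max_err:.6f}")
```

The paper's reported reductions (9x and more) go beyond this 4x because memory savings compound with architecture choices, but the quantization step itself follows this pattern.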


2021 ◽  
Author(s):  
Dan Flomin ◽  
David Pellow ◽  
Ron Shamir

The rapid, continuous growth of deep sequencing experiments requires the development and improvement of many bioinformatics applications for the analysis of large sequencing datasets, including k-mer counting and assembly. Several applications reduce RAM usage by binning sequences. Binning is done with minimizer schemes, which rely on a specific ordering of the minimizers, and the choice of order has been shown to have a major impact on application performance. Here we introduce a method for tailoring the order to the dataset. Our method repeatedly samples the dataset and modifies the order so as to flatten the k-mer load distribution across minimizers. We integrated our method into Gerbil, a state-of-the-art memory-efficient k-mer counter, and were able to reduce its memory footprint by 50% or more for large k, with only a minor increase in runtime. Our tests also showed that the orders produced by our method transfer well across datasets from the same species, giving superior results with little or no order change. This enables the memory reduction with essentially no increase in runtime.
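The load-flattening idea can be sketched on a toy sequence: a minimizer scheme picks the lowest-ranked m-mer in each k-mer, and demoting an overloaded m-mer in the order redistributes its k-mers to other bins. The order representation and the single-demotion step below are simplified assumptions, not the paper's sampling procedure:

```python
from collections import Counter

# Toy minimizer binning under a configurable order. 'order' maps an m-mer
# to a rank (default 0); ties break lexicographically. Flattening the load
# means choosing ranks so no minimizer attracts too many k-mers.

def minimizer(kmer, m, order):
    mmers = [kmer[i:i + m] for i in range(len(kmer) - m + 1)]
    return min(mmers, key=lambda x: (order.get(x, 0), x))

def bin_load(seq, k, m, order):
    """Count how many k-mers of seq map to each minimizer (bin)."""
    load = Counter()
    for i in range(len(seq) - k + 1):
        load[minimizer(seq[i:i + k], m, order)] += 1
    return load

seq = "ACGTACGTTTTTTACGAACGTACGT"
load = bin_load(seq, k=8, m=3, order={})        # pure lexicographic order
heavy, _ = load.most_common(1)[0]               # most overloaded minimizer
load2 = bin_load(seq, k=8, m=3, order={heavy: 1})  # demote it in the order
print("before:", dict(load))
print("after :", dict(load2))
```

In a real k-mer counter the order is tuned from samples of the whole dataset rather than by demoting one m-mer, but the mechanism, re-ranking to flatten the per-bin load, is the same.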


2021 ◽  
Vol 144 ◽  
pp. 407-418
Author(s):  
Jary Pomponi ◽  
Simone Scardapane ◽  
Aurelio Uncini

Mathematics ◽  
2021 ◽  
Vol 9 (23) ◽  
pp. 3091
Author(s):  
Jelena Nikolić ◽  
Danijela Aleksić ◽  
Zoran Perić ◽  
Milan Dinčić

Motivated by the fact that uniform quantization is not suitable for signals with non-uniform probability density functions (pdfs), such as the Laplacian pdf, in this paper we divide the support region of the quantizer into two disjoint regions and use the simplest uniform quantization with equal bit-rates within both regions. In particular, we assume a narrow central granular region (CGR) covering the peak of the Laplacian pdf and a wider peripheral granular region (PGR) covering its tails. We optimize the widths of the CGR and PGR by minimizing distortion with respect to the border-to-clipping-threshold scaling ratio, which results in an iterative formula for parametrizing our piecewise uniform quantizer (PWUQ). For medium and high bit-rates, we demonstrate the advantage of the PWUQ over the uniform quantizer, paying special attention to the case where 99.99% of the signal amplitudes fall within the support region rather than the clipping region. We believe the resulting formulas for PWUQ design and performance assessment are highly beneficial for neural networks, where weights and activations are typically modelled by the Laplacian distribution and uniform quantization is commonly used to decrease the memory footprint.
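A Monte Carlo sketch shows why a two-region quantizer suits a Laplacian source: a fine grid over the narrow CGR where the pdf peaks, plus a coarser grid over the PGR, beats a single uniform grid with the same total number of levels. The border t, threshold, and level split below are illustrative choices, not the paper's optimised values:

```python
import numpy as np

# Two-region piecewise uniform quantizer (PWUQ) vs plain uniform quantizer
# for Laplacian samples. Both use 384 reconstruction levels in total:
# uniform has 384 over [-x_max, x_max]; PWUQ has 128 over the CGR [-t, t]
# plus 128 per side over the PGR (t, x_max]. t and x_max are hypothetical.

rng = np.random.default_rng(1)
x = rng.laplace(0.0, 1.0, 200_000)
x_max = 8.0                      # support-region (clipping) threshold
t = 1.0                          # CGR/PGR border

def uniform_q(v, lo, hi, levels):
    """Mid-rise uniform quantizer on [lo, hi]."""
    step = (hi - lo) / levels
    idx = np.clip(np.floor((v - lo) / step), 0, levels - 1)
    return lo + (idx + 0.5) * step

xc = np.clip(x, -x_max, x_max)
q_uni = uniform_q(xc, -x_max, x_max, 384)

inner = np.abs(xc) <= t
q_pw = np.empty_like(xc)
q_pw[inner] = uniform_q(xc[inner], -t, t, 128)
s = np.sign(xc[~inner])
q_pw[~inner] = s * uniform_q(np.abs(xc[~inner]), t, x_max, 128)

mse_uni = np.mean((xc - q_uni) ** 2)
mse_pw = np.mean((xc - q_pw) ** 2)
print(f"uniform MSE: {mse_uni:.2e}, PWUQ MSE: {mse_pw:.2e}")
```

Because roughly 63% of Laplacian(1) samples fall inside [-1, 1], the fine CGR step dominates the average distortion, which is the intuition the paper's optimisation formalises.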


2021 ◽  
pp. 419-429
Author(s):  
Vivek Kumar ◽  
Dilip K. Sharma ◽  
Vinay K. Mishra

2021 ◽  
Author(s):  
Felix Lucka ◽  
Mailyn Pérez-Liva ◽  
Bradley E Treeby ◽  
Ben Cox

Ultrasound tomography (UST) scanners allow quantitative images of the human breast's acoustic properties to be derived, with potential applications in screening, diagnosis, and therapy planning. Time-domain full waveform inversion (TD-FWI) is a promising UST image formation technique that fits the parameter fields of a wave physics model by gradient-based optimization. For high-resolution 3D UST it poses three key challenges. Firstly, its central building block, the computation of the gradient for a single US measurement, has a prohibitively large memory footprint. Secondly, this building block must be computed for each of the 10^3–10^4 measurements, resulting in a massive parallel computation usually performed on large computational clusters for days. Lastly, the structure of the underlying optimization problem may cause slow progress of the solver and convergence to a local minimum. In this work, we design and evaluate a comprehensive computational strategy to overcome these challenges. Firstly, we exploit a gradient computation based on time reversal that dramatically reduces the memory footprint at the expense of one additional wave simulation per source. Secondly, we break the dependence on the number of measurements by using source encoding (SE) to compute stochastic gradient estimates; we also describe a more accurate, TD-specific SE technique with finer variance control, and use a state-of-the-art stochastic L-BFGS method. Lastly, we design an efficient TD multi-grid scheme together with preconditioning to speed up convergence while avoiding local minima. All components are evaluated in extensive numerical proof-of-concept studies simulating a bowl-shaped 3D UST breast scanner prototype. Finally, we demonstrate that their combination allows us to obtain an accurate 442 × 442 × 222 voxel image with a resolution of 0.5 mm using MATLAB on a single GPU within 24 hours.
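The source-encoding idea can be illustrated on a toy linear problem: with random Rademacher sign weights, a single encoded "supershot" yields an unbiased estimate of the gradient summed over all sources, so one simulation replaces one per measurement. The small random matrices below stand in for wave simulations and are purely hypothetical:

```python
import numpy as np

# Source encoding (SE) for stochastic gradients on a linear toy model
# d_i = A_i @ m. Since E[c_i c_j] = delta_ij for Rademacher signs c, the
# gradient of the encoded misfit is an unbiased estimate of the full one.

rng = np.random.default_rng(2)
S, n, p = 64, 40, 30                   # sources, data size, model size
A = rng.normal(size=(S, n, p))         # stand-ins for per-source simulations
m_true = rng.normal(size=p)
d = A @ m_true                         # observed data, shape (S, n)
m = np.zeros(p)                        # current model estimate

# full gradient of 0.5 * sum_i ||A_i m - d_i||^2: one "simulation" per source
g_full = sum(A[i].T @ (A[i] @ m - d[i]) for i in range(S))

# SE estimate, averaged over random sign vectors to show unbiasedness
est = np.zeros(p)
trials = 2000
for _ in range(trials):
    c = rng.choice([-1.0, 1.0], size=S)    # Rademacher encoding weights
    A_enc = np.tensordot(c, A, axes=1)     # encoded "supershot" operator
    d_enc = c @ d                          # encoded data
    est += A_enc.T @ (A_enc @ m - d_enc)
est /= trials

rel_err = np.linalg.norm(est - g_full) / np.linalg.norm(g_full)
print(f"relative error of SE gradient estimate: {rel_err:.3f}")
```

In FWI the per-trial variance is what the paper's TD-specific SE technique controls; the averaging here only demonstrates that the estimator targets the right gradient.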


Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7506
Author(s):  
Francisco Erivaldo Fernandes Junior ◽  
Luis Gustavo Nonato ◽  
Caetano Mazzoni Ranieri ◽  
Jó Ueyama

Automatic flood detection may be an important component for triggering damage-control systems and minimizing the risk of the social and economic impacts caused by flooding. Riverside images from ordinary cameras are a widely available resource that can be used to tackle this problem. Nevertheless, state-of-the-art neural networks, the most suitable approach for this type of computer vision task, are usually resource-consuming, which poses a challenge for deploying these models on low-capability Internet of Things (IoT) devices with unstable internet connections. In this work, we propose a deep neural network (DNN) architecture pruning algorithm capable of finding a pruned version of a given DNN within a user-specified memory footprint. Our results demonstrate that the proposed algorithm can find a pruned DNN model that meets the specified memory footprint with little to no degradation in segmentation performance. Finally, we show that our algorithm can be used in a memory-constrained wireless sensor network (WSN) employed to detect flooding events in urban rivers, and that the resulting pruned models achieve results competitive with the original models.
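Pruning to a user-specified memory budget can be sketched with simple magnitude pruning: zero the smallest-magnitude weights until the sparse storage estimate fits the budget. This is a generic weight-pruning scheme sketched under stated assumptions (8 bytes per nonzero: value + index), not the paper's architecture-pruning algorithm:

```python
import numpy as np

# Magnitude pruning to a byte budget, assuming sparse storage of
# bytes_per_nz per surviving weight (e.g., 4-byte value + 4-byte index).

def prune_to_budget(weights, budget_bytes, bytes_per_nz=8):
    """Keep only the largest-magnitude weights that fit the byte budget."""
    flat = weights.ravel()
    max_nz = budget_bytes // bytes_per_nz
    if flat.size <= max_nz:
        return weights.copy(), int(flat.size)
    keep = np.argpartition(np.abs(flat), -max_nz)[-max_nz:]
    mask = np.zeros(flat.size, dtype=bool)
    mask[keep] = True
    pruned = np.where(mask.reshape(weights.shape), weights, 0.0)
    return pruned, int(np.count_nonzero(pruned))

rng = np.random.default_rng(3)
w = rng.normal(size=(512, 512)).astype(np.float32)  # 1 MiB dense (FP32)
budget = 256 * 1024                                  # 256 KiB target
pruned, nnz = prune_to_budget(w, budget)
print(f"kept {nnz} of {w.size} weights; ~{nnz * 8 / 1024:.0f} KiB sparse")
```

The paper prunes at the architecture level rather than individual weights, but the constraint is the same shape: search for the best model whose storage estimate stays under the user's footprint.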


Author(s):  
K. Manikandan ◽  
E. Chandra

Speaker identification takes a speech sample and finds the best-matching model among known speakers. The SGMFC method combines a Sub-Gaussian Mixture Model (SGMM) with Mel-frequency cepstral coefficients (MFCCs) for feature extraction. SGMFC reduces the error rate, memory footprint, and computational throughput requirements of a medium-vocabulary speaker identification system intended to run on a portable device or similar platform. Fuzzy C-means and k-means clustering are used in the SGMM method to improve efficiency, and their outcomes are compared using metrics such as precision, sensitivity, and specificity.
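The identification step can be sketched with synthetic "MFCC-like" feature vectors and centroid matching, mirroring the k-means-style clustering the abstract mentions. Real systems extract MFCCs from audio and model each speaker with a GMM/SGMM; the speaker names, feature statistics, and nearest-centroid rule below are all illustrative assumptions:

```python
import numpy as np

# Toy centroid-based speaker identification on synthetic 13-dimensional
# "MFCC-like" frames (13 is a typical MFCC dimensionality).

rng = np.random.default_rng(4)
DIM = 13

# enrollment: frames from two hypothetical speakers with distinct means
speakers = {
    "alice": rng.normal(loc=+1.0, scale=0.3, size=(200, DIM)),
    "bob":   rng.normal(loc=-1.0, scale=0.3, size=(200, DIM)),
}
centroids = {name: frames.mean(axis=0) for name, frames in speakers.items()}

def identify(frames):
    """Return the enrolled speaker whose centroid is nearest on average."""
    dists = {name: np.linalg.norm(frames - c, axis=1).mean()
             for name, c in centroids.items()}
    return min(dists, key=dists.get)

test_frames = rng.normal(loc=+1.0, scale=0.3, size=(50, DIM))  # unseen frames
print("identified:", identify(test_frames))
```

Replacing the single centroid per speaker with a mixture of Gaussians (and fuzzy or hard cluster assignments during training) is what takes this sketch toward the SGMM approach.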

