K-Means Based Prediction of Transcoded JPEG File Size and Structural Similarity

Author(s):  
Steven Pigeon ◽  
Stéphane Coulombe

The problem of efficiently adapting JPEG images to satisfy given constraints, such as maximum file size and resolution, arises in a number of applications, from universal media access for mobile browsing to multimedia messaging services. However, optimizing for perceived quality (user experience) carries a non-negligible computational cost, which the authors aim to minimize through the use of low-cost predictors. In previous work, the authors presented predictors and predictor-based systems that achieve low-cost, near-optimal adaptation of JPEG images under given file size and resolution constraints. In this work, they extend and improve these solutions by including more information about the images to obtain more accurate predictions of the file size and quality resulting from transcoding. The authors show that the proposed method, based on clustering transcoding operations represented as high-dimensional vectors, significantly outperforms previous methods in accuracy.
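The clustering-based predictor can be illustrated with a minimal sketch (not the authors' implementation; the feature vector, training operations, and file sizes below are made up): past transcoding operations are clustered with k-means, and a query operation is predicted from the mean outcome of its nearest cluster.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: returns centroids and the cluster label of each row of X."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

# Hypothetical training set: each row describes a past transcoding operation
# (input quality factor, scaling percentage) with a known resulting size (kB).
X = np.array([[80, 100], [75, 100], [50, 50], [45, 50]], dtype=float)
sizes = np.array([120.0, 110.0, 40.0, 35.0])

centroids, labels = kmeans(X, k=2)

def predict_size(op):
    """Predict the transcoded size as the mean outcome of the nearest cluster."""
    j = np.argmin(((centroids - op) ** 2).sum(-1))
    return sizes[labels == j].mean()

print(predict_size(np.array([78.0, 100.0])))  # falls in the high-quality cluster: 115.0
```

The prediction costs one distance computation per cluster, which is what makes this kind of predictor cheap compared with actually transcoding the image.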

2021 ◽  
Author(s):  
Illia Horenko ◽  
Lukas Pospisil ◽  
Edoardo Vecci ◽  
Steffen Albrecht ◽  
Alexander Gerber ◽  
...  

We propose a pipeline for the synthetic generation of personalized Computed Tomography (CT) images, with a radiation exposure evaluation and a lifetime attributable risk (LAR) assessment. We perform a patient-specific performance evaluation for a broad range of denoising algorithms (including the most popular deep learning denoising approaches, wavelet-based methods, methods based on Mumford-Shah denoising, etc.), focusing both on assessing the capability to reduce the patient-specific CT-induced LAR and on computational cost scalability. We introduce a parallel probabilistic Mumford-Shah denoising model (PMS), showing that it markedly outperforms the compared common denoising methods in denoising quality and cost scaling. In particular, we show that it allows an approximately 22-fold robust patient-specific LAR reduction for infants and a 10-fold LAR reduction for adults. On an ordinary laptop, the proposed PMS algorithm allows cheap and robust (Multiscale Structural Similarity index > 90%) denoising of very large 2D videos and 3D images (with over 10^7 voxels) subject to ultra-strong Gaussian and various non-Gaussian noises, even for signal-to-noise ratios well below 1.0. The code is provided for open access.
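The PMS model itself is not reproduced here; as a toy illustration of the variational-denoising family the comparison covers, a minimal quadratic-smoothness (Tikhonov-style) denoiser on a 1D signal, with made-up parameters:

```python
import numpy as np

def smooth_denoise(f, lam=2.0, iters=500, step=0.05):
    """Minimise  sum (u - f)^2 + lam * sum (u[i+1] - u[i])^2  by gradient descent."""
    u = f.copy()
    for _ in range(iters):
        grad = 2 * (u - f)                       # data-fidelity term
        grad[:-1] += 2 * lam * (u[:-1] - u[1:])  # smoothness term, left neighbour
        grad[1:] += 2 * lam * (u[1:] - u[:-1])   # smoothness term, right neighbour
        u -= step * grad
    return u

rng = np.random.default_rng(1)
clean = np.concatenate([np.zeros(50), np.ones(50)])   # a step signal
noisy = clean + 0.5 * rng.standard_normal(100)        # strong additive noise
denoised = smooth_denoise(noisy)
print(np.mean((noisy - clean) ** 2) > np.mean((denoised - clean) ** 2))
```

Unlike this quadratic stand-in, Mumford-Shah-type energies penalize smoothness only away from detected edges, which is what preserves sharp structures in the CT setting.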


Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 645
Author(s):  
Muhammad Farooq ◽  
Sehrish Sarfraz ◽  
Christophe Chesneau ◽  
Mahmood Ul Hassan ◽  
Muhammad Ali Raza ◽  
...  

Expectiles have gained considerable attention in recent years due to their wide applicability in many areas. In this study, the k-nearest neighbours approach, together with the asymmetric least squares loss function, called ex-kNN, is proposed for computing expectiles. Firstly, the effect of various distance measures on ex-kNN in terms of test error and computational time is evaluated. It is found that the Canberra, Lorentzian, and Soergel distance measures lead to the minimum test error, whereas the Euclidean, Canberra, and Average of (L1,L∞) measures lead to a low computational cost. Secondly, the performance of ex-kNN is compared with the existing expectile packages er-boost and ex-svm on nine real-life datasets. Depending on the nature of the data, ex-kNN showed two to ten times better performance than er-boost and comparable performance with ex-svm regarding test error. Computationally, ex-kNN is found to be two to five times faster than ex-svm and much faster than er-boost, particularly in the case of high-dimensional data.
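The idea can be sketched in a few lines (an illustrative reimplementation, not the ex-kNN package itself): find the k nearest neighbours of a query, then compute the tau-expectile of their responses via the standard asymmetric-least-squares fixed-point iteration.

```python
import numpy as np

def expectile(y, tau=0.5, iters=100):
    """Tau-expectile of a sample: the m minimising the asymmetric squared loss,
    found by iterating the weighted-mean fixed point."""
    m = y.mean()
    for _ in range(iters):
        w = np.where(y > m, tau, 1.0 - tau)   # asymmetric weights
        m = np.average(y, weights=w)
    return m

def ex_knn_predict(X, y, x_query, k=3, tau=0.5, p=1):
    """Predict the tau-expectile of the k nearest neighbours' responses.
    p=1 gives the Manhattan (L1) distance; p=2 the Euclidean distance."""
    d = np.sum(np.abs(X - x_query) ** p, axis=1) ** (1.0 / p)
    nearest = np.argsort(d)[:k]
    return expectile(y[nearest], tau)

X = np.array([[0.0], [1.0], [2.0], [10.0]])
y = np.array([1.0, 2.0, 3.0, 50.0])
# tau = 0.5 reduces to the mean of the 3 nearest responses {1, 2, 3}:
print(ex_knn_predict(X, y, np.array([1.2]), k=3, tau=0.5))  # 2.0
```

Swapping `p` (or the distance function entirely) is the knob the study tunes when comparing Canberra, Lorentzian, Soergel, and other measures.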


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5038
Author(s):  
Kosuke Shima ◽  
Masahiro Yamaguchi ◽  
Takumi Yoshida ◽  
Takanobu Otsuka

IoT-based measurement systems for manufacturing have been widely implemented. As components that can be deployed at low cost, BLE beacons have been used in several systems developed in previous research. In this work, we focus on the Kanban system, an inventory-management practice in manufacturing used to produce only the required amounts. In the Kanban system, Kanban cards circulate through the factory along with the products; when the products move to a different process route, the Kanban card is removed from the products and the products are assigned to another Kanban. For this reason, a single Kanban cannot trace products from plan to completion. In this work, we propose a system that uses Bluetooth Low Energy (BLE) beacons to connect Kanbans that are in different routes but assigned to the same products. The proposed method estimates whether a beacon, and hence its Kanban, is inside or outside a postbox, which can be computed by a microcontroller at low computational cost. In addition, the system connects the Kanbans by using the beacons as paired connection targets. In an experiment, we confirmed that the system connected 70% of the beacons accurately. We also confirmed that the system could connect the Kanbans at a small implementation cost.
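The paper's exact status estimator is not given in the abstract; one common way to make such an in/out decision cheap enough for a microcontroller is exponential smoothing of the received signal strength plus a threshold with hysteresis. A hypothetical sketch (all thresholds and readings are made up):

```python
def make_status_estimator(enter_dbm=-60.0, exit_dbm=-75.0, alpha=0.3):
    """Return an update(rssi) callback deciding 'inside the postbox' or not.
    Hysteresis (enter_dbm > exit_dbm) prevents flapping near the boundary."""
    state = {"inside": False, "rssi": None}
    def update(rssi_dbm):
        # exponential smoothing: a few multiply-adds per sample
        s = state["rssi"]
        state["rssi"] = rssi_dbm if s is None else (1 - alpha) * s + alpha * rssi_dbm
        if state["inside"] and state["rssi"] < exit_dbm:
            state["inside"] = False
        elif not state["inside"] and state["rssi"] > enter_dbm:
            state["inside"] = True
        return state["inside"]
    return update

update = make_status_estimator()
# simulated RSSI trace: far away, dropped into the postbox, then removed
readings = [-80, -78, -55, -54, -53, -52, -85, -90, -92]
print([update(r) for r in readings])
```

The smoothing delays the transitions by a few samples, which is usually an acceptable trade-off for rejecting single spurious readings.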


Inventions ◽  
2021 ◽  
Vol 6 (4) ◽  
pp. 70
Author(s):  
Elena Solovyeva ◽  
Ali Abdullah

In this paper, the structure of a separable convolutional neural network, consisting of an embedding layer, separable convolutional layers, a convolutional layer, and global average pooling, is presented for binary and multiclass text classification. The advantage of the proposed structure is the absence of multiple fully connected layers, which are often used to increase classification accuracy but raise the computational cost. The combination of low-cost separable convolutional layers with a convolutional layer is proposed to attain high accuracy while reducing the complexity of the neural classifiers. These advantages are demonstrated on binary and multiclass classification of written texts using the proposed networks with sigmoid and softmax activation functions in the convolutional layer. In both binary and multiclass classification, the accuracy obtained by the separable convolutional neural networks is higher than that of several investigated types of recurrent neural networks and fully connected networks.
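The cost advantage of separable convolutions comes from their parameter count: a depthwise pass (one kernel per input channel) followed by a 1x1 pointwise mix, instead of a full kernel per input-output channel pair. A quick count (the layer sizes here are examples, not the paper's):

```python
def conv1d_params(c_in, c_out, kernel):
    """Weight count of a standard 1D convolution (biases ignored)."""
    return c_in * c_out * kernel

def separable_conv1d_params(c_in, c_out, kernel):
    """Depthwise pass (one kernel per input channel) + 1x1 pointwise mix."""
    return c_in * kernel + c_in * c_out

c_in, c_out, kernel = 128, 128, 5
standard = conv1d_params(c_in, c_out, kernel)             # 81,920 weights
separable = separable_conv1d_params(c_in, c_out, kernel)  # 17,024 weights
print(standard, separable, round(standard / separable, 1))
```

For these sizes the separable layer uses roughly 4.8x fewer weights, which is why stacking several of them stays cheaper than adding fully connected layers.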


2020 ◽  
Author(s):  
Qiyuan Zhao ◽  
Brett Savoie

Automated reaction prediction has the potential to elucidate complex reaction networks for applications ranging from combustion to materials degradation. Although substantial progress has been made in predicting specific reaction pathways and resolving mechanisms, the computational cost and inconsistent reaction coverage of automated prediction are still obstacles to exploring deep reaction networks without using heuristics. Here we show that cost can be reduced and reaction coverage can be increased simultaneously by relatively straightforward modifications of the reaction enumeration, geometry initialization, and transition state convergence algorithms that are common to many emerging prediction methodologies. These changes are implemented in the context of Yet Another Reaction Program (YARP), our reaction prediction package, for which we report a head-to-head comparison with prevailing methods for two benchmark reaction prediction tasks. In all cases, we observe near perfect recapitulation of established reaction pathways and products by YARP, without the use of heuristics or other domain knowledge to guide reaction selection. In addition, YARP also discovers many new kinetically relevant pathways and products reported here for the first time. This is achieved while simultaneously reducing the cost of reaction characterization nearly 100-fold and increasing transition state success rates and intended rates over 2-fold and 10-fold, respectively, compared with recent benchmarks. This combination of ultra-low cost and high reaction coverage creates opportunities to explore the reactivity of larger systems and more complex reaction networks for applications like chemical degradation, where approaches based on domain heuristics fail.


2021 ◽  
Vol 13 (1) ◽  
Author(s):  
Beihong Ji ◽  
Xibing He ◽  
Yuzhao Zhang ◽  
Jingchen Zhai ◽  
Viet Hoang Man ◽  
...  

In this study, we developed a novel algorithm to improve the screening performance of an arbitrary docking scoring function by recalibrating the docking score of a query compound based on its structural similarity with a set of training compounds, while the extra computational cost is negligible. Two popular docking methods, Glide and AutoDock Vina, were adopted as the original scoring functions to be processed with our new algorithm, and similar improvements were achieved for both. Predicted binding affinities were compared against experimental data from the ChEMBL and DUD-E databases. Eleven representative drug receptors from diverse drug target categories were used to evaluate the hybrid scoring function. The effects of four different fingerprints (FP2, FP3, FP4, and MACCS) and four different compound similarity effect (CSE) functions were explored. Encouragingly, the screening performance was significantly improved for all 11 drug targets, especially when CSE = S4 (S is the Tanimoto structural similarity) and the FP2 fingerprint were applied. The average predictive index (PI) values increased from 0.34 to 0.66 and from 0.39 to 0.71 for the Glide and AutoDock Vina scoring functions, respectively. To evaluate the performance of the calibration algorithm in drug lead identification, we also imposed an upper limit on the structural similarity to mimic the real scenario of screening diverse libraries, in which query ligands are general-purpose screening compounds that are not necessarily structurally similar to reference ligands. Encouragingly, we found that our hybrid scoring function still outperformed the original docking scoring function. The hybrid scoring function was further evaluated using external datasets for two systems, and we found the PI values increased from 0.24 to 0.46 and from 0.14 to 0.42 for the A2AR and CFX systems, respectively.
In conclusion, our calibration algorithm can significantly improve virtual screening performance in both the drug lead optimization and identification phases with negligible computational cost.
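The paper's exact recalibration formula is not reproduced in the abstract; as a hypothetical illustration of the idea (shifting the docking score by training-set errors weighted by the fourth power of the Tanimoto similarity, the CSE = S4 setting reported to work best), a minimal sketch with made-up fingerprints and affinities:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity of two fingerprints represented as bit-index sets."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

def hybrid_score(dock_score, query_fp, train):
    """Shift the docking score toward similarity-weighted training corrections.
    `train` holds (fingerprint, known_affinity, docking_score) triples."""
    num = den = 0.0
    for fp, affinity, dock in train:
        w = tanimoto(query_fp, fp) ** 4     # CSE = S^4 weighting
        num += w * (affinity - dock)        # how far docking was off for this neighbour
        den += w
    correction = num / den if den else 0.0
    return dock_score + correction

train = [
    ({1, 2, 3, 4}, -9.0, -7.5),   # docking underestimated this neighbour's binding
    ({7, 8, 9}, -5.0, -5.2),      # dissimilar to the query, so barely weighted
]
query = {1, 2, 3, 5}
print(round(hybrid_score(-7.0, query, train), 2))
```

The S^4 power sharpens the weighting so that only close structural neighbours influence the correction, which is consistent with the reported benefit of capping similarity for diverse libraries.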


2021 ◽  
Author(s):  
Janis Heuel ◽  
Wolfgang Friederich

Over the last years, installations of wind turbines (WTs) have increased worldwide. Owing to negative effects on humans, WTs are often installed in areas with low population density. Because of the low anthropogenic noise, these areas are also well suited as sites for seismological stations. As a consequence, WTs are often installed in the same areas as seismological stations. By comparing the noise in recorded data before and after the installation of WTs, seismologists noticed a substantial worsening of station quality, leading to conflicts between WT operators and earthquake services.

In this study, we compare different techniques to reduce or eliminate the disturbing WT signal at seismological stations. For this purpose, we selected a seismological station that shows a significant correlation between the power spectral density and hourly wind speed measurements. Usually, spectral filtering is used to suppress noise in seismic data processing. However, this approach is not effective when noise and signal have overlapping frequency bands, which is the case for WT noise. As a first method, we applied the continuous wavelet transform (CWT) to our data to obtain a time-scale representation. From this representation, we estimated a noise threshold function (Langston & Mousavi, 2019), either from noise before the theoretical P-arrival (pre-noise) or from a past noise signal recorded under similar ground velocity conditions at the surrounding WTs. To this end, we installed low-cost seismometers at the surrounding WTs to find similar signals at each WT. From these similar signals, we obtain a noise model at the seismological station, which is used to estimate the threshold function. As a second method, we used a denoising autoencoder (DAE) that learns mapping functions to distinguish between noise and signal (Zhu et al., 2019).

In our tests, the threshold function performs well when the event is visible in the raw or spectrally filtered data, but it fails when WT noise dominates and the event is hidden. In these cases, the DAE removes the WT noise from the data. However, the DAE must be trained with typical noise samples and high signal-to-noise-ratio events to distinguish between signal and interfering noise. The threshold function with pre-noise can be applied immediately to real-time data and has a low computational cost. Using a noise model from our prerecorded database at the seismological station does not improve the result, and it is more time-consuming to find similar ground velocity conditions at the surrounding WTs.
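The pre-noise thresholding idea can be sketched simply (a short-time Fourier frame stands in for the CWT used in the study, and all signals are synthetic): estimate a per-frequency noise level from a window before the event, then soft-threshold the event window's coefficients against it.

```python
import numpy as np

def soft_threshold(c, thresh):
    """Shrink coefficient magnitudes by thresh, preserving phase."""
    mag = np.abs(c)
    scale = np.maximum(mag - thresh, 0.0) / np.where(mag > 0, mag, 1.0)
    return c * scale

rng = np.random.default_rng(0)
n = 256
t = np.arange(n)
noise = 0.5 * rng.standard_normal(2 * n)
signal = np.concatenate([np.zeros(n), np.sin(2 * np.pi * t / 16)])  # "event" in 2nd half
trace = signal + noise

pre = np.fft.rfft(trace[:n])           # pre-event window: noise only
event = np.fft.rfft(trace[n:])
thresh = np.abs(pre)                   # noise threshold function from pre-noise
denoised = np.fft.irfft(soft_threshold(event, thresh), n)

err_raw = np.mean((trace[n:] - signal[n:]) ** 2)
err_den = np.mean((denoised - signal[n:]) ** 2)
print(err_den < err_raw)               # thresholding reduced the error
```

As in the study, this works because the narrow-band event stands far above the noise floor in a few coefficients, while broadband noise is shrunk everywhere; when the noise dominates all coefficients, the threshold suppresses the event too, which is where the DAE takes over.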

