aggregation step
Recently Published Documents


TOTAL DOCUMENTS

17
(FIVE YEARS 7)

H-INDEX

6
(FIVE YEARS 2)

Author(s):  
Mohd Saad Hamid ◽  
Nurulfajar Abd Manap ◽  
Rostam Affendi Hamzah ◽  
Ahmad Fauzan Kadmin ◽  
Shamsul Fakhar Abd Gani ◽  
...  

This paper proposes a new hybrid method that combines learning-based and handcrafted approaches in a stereo matching algorithm. The main purpose of a stereo matching algorithm is to produce a disparity map, which is essential for many applications, including three-dimensional (3D) reconstruction. The raw disparity map computed by a convolutional neural network (CNN) is still prone to errors in low-texture regions. The algorithm improves the matching cost computation stage by combining a hybrid CNN-based cost with a truncated directional intensity computation. The difference in truncated directional intensity values is employed to decrease radiometric errors. The proposed method's raw matching cost then goes through a cost aggregation step using the bilateral filter (BF) to improve accuracy. The winner-take-all (WTA) optimization uses the aggregated cost volume to produce an initial disparity map. Finally, a series of refinement processes enhances the initial disparity map into a more accurate final disparity map. The performance of the algorithm is verified using the Middlebury online stereo benchmarking system. The proposed algorithm achieves its objective of generating a more accurate and smoother disparity map with distinct depths in low-texture regions through better matching cost quality.
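The winner-take-all stage described above can be sketched in a few lines: given an aggregated cost volume, each pixel takes the disparity with the lowest cost. The array shapes and function name below are illustrative, not taken from the paper:

```python
import numpy as np

def winner_take_all(cost_volume):
    """Pick, per pixel, the disparity candidate with the lowest aggregated cost.

    cost_volume: array of shape (H, W, D) -- matching cost for each of the
    D candidate disparities at every pixel (lower is better).
    Returns an (H, W) initial disparity map.
    """
    return np.argmin(cost_volume, axis=2)

# Toy example: a 2x2 image with 3 candidate disparities per pixel.
costs = np.array([[[3., 1., 2.], [0., 5., 4.]],
                  [[2., 2., 0.], [9., 1., 3.]]])
disparity = winner_take_all(costs)
```

In the full pipeline, `cost_volume` would already have passed through the bilateral-filter aggregation, and `disparity` would feed the refinement stage.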


2021 ◽  
Vol 14 (2) ◽  
pp. 183-200
Author(s):  
Vito Walter Anelli ◽  
Yashar Deldjoo ◽  
Tommaso Di Noia ◽  
Antonio Ferrara

In Machine Learning scenarios, privacy is a crucial concern when models have to be trained with private data coming from users of a service, such as a recommender system, a location-based mobile service, a mobile phone text messaging service providing next-word prediction, or a face image classification system. The main issue is that, often, data are collected, transferred, and processed by third parties. These transactions can violate regulations such as the GDPR. Furthermore, users are usually unwilling to share private data, such as their visited locations, the text messages they wrote, or the photos they took, with a third party. On the other hand, users appreciate services that work based on their behaviors and preferences. To address these issues, Federated Learning (FL) has recently been proposed as a means to build ML models based on private datasets distributed over a large number of clients, while preventing data leakage. A federation of users is asked to train the same global model on their private data, while a central coordinating server receives locally computed updates from clients and aggregates them to obtain a better global model, without needing access to the clients' actual data. In this work, we extend the FL approach by pushing forward the state of the art in the aggregation step of FL, which we deem crucial for building a high-quality global model. Specifically, we propose an approach that takes into account a suite of client-specific criteria that constitute the basis for assigning a score to each client based on a priority of criteria defined by the service provider. Extensive experiments on two publicly available datasets indicate the merits of the proposed approach compared to a standard FL baseline.
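A score-weighted aggregation of this kind can be sketched as follows. This is a minimal illustration of the general idea (replacing uniform FedAvg weights with server-assigned client scores), not the paper's actual scoring criteria; the function and variable names are assumptions:

```python
import numpy as np

def weighted_aggregate(client_updates, scores):
    """Aggregate locally computed model updates into a global update,
    weighting each client by a server-assigned score instead of the
    uniform (or sample-count) weights of plain FedAvg."""
    scores = np.asarray(scores, dtype=float)
    weights = scores / scores.sum()      # normalize to a convex combination
    stacked = np.stack(client_updates)   # shape (n_clients, n_params)
    return weights @ stacked             # score-weighted average

# Two clients; the second one's criteria give it three times the score.
updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
global_update = weighted_aggregate(updates, scores=[1.0, 3.0])
```

The server would apply `global_update` to the global model and redistribute it to the federation for the next round.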


Entropy ◽  
2020 ◽  
Vol 23 (1) ◽  
pp. 41
Author(s):  
Adam Thor Thorgeirsson ◽  
Frank Gauterin

Probabilistic predictions with machine learning are important in many applications. They are commonly obtained with Bayesian learning algorithms, which are, however, computationally expensive compared with non-Bayesian methods. Furthermore, the data used to train these algorithms are often distributed over a large group of end devices. Federated learning can be applied in this setting in a communication-efficient and privacy-preserving manner, but it does not include predictive uncertainty. To represent predictive uncertainty in federated learning, we suggest introducing uncertainty in the aggregation step of the algorithm by treating the set of local weights as a posterior distribution for the weights of the global model. We compare our approach to state-of-the-art Bayesian and non-Bayesian probabilistic learning algorithms. By applying proper scoring rules to evaluate the predictive distributions, we show that our approach can achieve performance similar to what the benchmark would achieve in a non-distributed setting.
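The core idea — treating the collected local weight vectors as posterior samples and ensembling predictions over them — can be sketched as below. The toy linear model and all names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def predictive_distribution(local_weights, x, model_fn):
    """Treat each client's weight vector as a sample from a posterior over
    the global model's weights; evaluate the model under every sample and
    summarize the resulting predictive distribution by mean and spread."""
    preds = np.array([model_fn(w, x) for w in local_weights])
    return preds.mean(), preds.std()

# Toy linear model y = w . x, with three clients' local weights.
model_fn = lambda w, x: float(np.dot(w, x))
local_weights = [np.array([1.0, 0.0]), np.array([2.0, 0.0]), np.array([3.0, 0.0])]
mean, std = predictive_distribution(local_weights, np.array([1.0, 1.0]), model_fn)
```

The spread `std` is what a single aggregated (averaged) model would discard: it reflects disagreement among the local models and serves as the predictive uncertainty.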


Sensors ◽  
2019 ◽  
Vol 19 (14) ◽  
pp. 3057
Author(s):  
Xiaoshuang Ma ◽  
Penghai Wu

This paper presents a despeckling method for multitemporal images acquired by synthetic aperture radar (SAR) sensors. The proposed method uses the scattering covariance matrix of each image patch as the basic processing unit, which exploits both the amplitude information of each pixel and the phase difference between any two pixels in a patch. The proposed filtering framework consists of four main steps: (1) a prefiltered result of each image is obtained by a nonlocal weighted average using only the information of the corresponding time phase; (2) an adaptive temporal linear filter is employed to further suppress the speckle; (3) the final output of each patch is obtained by a guided filter using both the original speckled data and the filtering result of step (2); and (4) an aggregation step is used to tackle the multiple-estimations problem for each pixel. The despeckling experiments conducted on both simulated and real multitemporal SAR datasets demonstrate the strong performance of the proposed method in both suppressing speckle and retaining details, compared with advanced single-temporal and multitemporal SAR despeckling techniques.
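Step (4) addresses the fact that overlapping patches give each pixel several filtered estimates; a simple way to reconcile them is to average. The sketch below shows that aggregation pattern under assumed names and a plain (unweighted) average, which is a simplification of whatever weighting the paper actually uses:

```python
import numpy as np

def aggregate_overlapping_patches(patches, positions, image_shape):
    """Combine per-patch filtered estimates into one image: accumulate every
    patch at its top-left position, count how many patches cover each pixel,
    and average the multiple estimates."""
    acc = np.zeros(image_shape)
    cnt = np.zeros(image_shape)
    for patch, (r, c) in zip(patches, positions):
        h, w = patch.shape
        acc[r:r + h, c:c + w] += patch
        cnt[r:r + h, c:c + w] += 1
    return acc / np.maximum(cnt, 1)  # guard against uncovered pixels

# Two overlapping 2x2 patches on a 2x3 image; the shared column is averaged.
p1 = np.full((2, 2), 2.0)
p2 = np.full((2, 2), 4.0)
out = aggregate_overlapping_patches([p1, p2], [(0, 0), (0, 1)], (2, 3))
```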


2019 ◽  
Author(s):  
Florian Wagner ◽  
Dalia Barkley ◽  
Itai Yanai

Single-cell RNA-Seq measurements are commonly affected by high levels of technical noise, posing challenges for data analysis and visualization. A diverse array of methods has been proposed to computationally remove noise by sharing information across similar cells or genes; however, their respective accuracies have been difficult to establish. Here, we propose a simple denoising strategy based on principal component analysis (PCA). We show that while PCA performed on raw data is biased towards highly expressed genes, this bias can be mitigated with a cell aggregation step, allowing the recovery of denoised expression values for both highly and lowly expressed genes. We benchmark our resulting ENHANCE algorithm and three previously described methods on simulated data that closely mimic real datasets, showing that ENHANCE provides the best overall denoising accuracy, recovering modules of co-expressed genes and cell subpopulations. Implementations of our algorithm are available at https://github.com/yanailab/enhance.
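The PCA-denoising core can be sketched as a truncated SVD reconstruction. Note this omits the cell aggregation step that ENHANCE performs beforehand (pooling transcripts from similar cells); the function name and the choice of `k` are illustrative:

```python
import numpy as np

def pca_denoise(X, k):
    """Denoise a cells-by-genes matrix by projecting onto the top-k principal
    components and reconstructing; variance outside those components
    (assumed to be technical noise) is discarded."""
    mu = X.mean(axis=0)
    Xc = X - mu                                   # center per gene
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mu + (U[:, :k] * S[:k]) @ Vt[:k]       # rank-k reconstruction

# A noiseless rank-1 expression matrix is recovered exactly with k = 1.
X = np.outer(np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0]))
denoised = pca_denoise(X, 1)
```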


2019 ◽  
Author(s):  
Bowen Shi ◽  
Shan Shi ◽  
Junhua Wu ◽  
Musheng Chen

In this paper, we propose a new stereo matching algorithm to measure the correlation between two rectified image patches. Difficulty near object boundaries and in textureless areas is a widely discussed issue in local correlation-based algorithms, and most approaches focus on the cost aggregation step to solve the problem. We analyze the inherent limitations of the sum of absolute differences (SAD) and the sum of squared differences (SSD), then propose a new difference computation method to restrain the noise near object boundaries and enlarge the intensity variations in textureless areas. The proposed algorithm can effectively deal with these problems and generate more accurate disparity maps than SAD and SSD without increasing time complexity. Furthermore, as shown by experiments, the algorithm can also be applied in some SAD-based and SSD-based algorithms to achieve better results than the originals.
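For reference, the two baseline measures the paper analyzes are standard patch-difference costs (the paper's own replacement measure is not reproduced here):

```python
import numpy as np

def sad(p, q):
    """Sum of absolute differences between two image patches (lower = more similar)."""
    return np.abs(p - q).sum()

def ssd(p, q):
    """Sum of squared differences; squaring penalizes outlier pixels more
    heavily than SAD, which is one source of noise near object boundaries."""
    return ((p - q) ** 2).sum()

p = np.array([[1.0, 2.0], [3.0, 4.0]])
q = np.array([[1.0, 0.0], [5.0, 4.0]])
# Two pixels differ by 2 each: SAD = 2 + 2, SSD = 4 + 4.
```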


2019 ◽  
Vol 58 (2) ◽  
pp. 269-289 ◽  
Author(s):  
Moosup Kim ◽  
Yoo-Bin Yhang ◽  
Chang-Mook Lim

The daily precipitation data generated by dynamical models, including regional climate models, generally suffer from biases in distribution and spatial dependence. These are serious flaws if the data are intended for hydrometeorological studies. This paper proposes a scheme for correcting the biases in both aspects simultaneously. The proposed scheme consists of two steps: an aggregation step and a disaggregation step. The first aims to obtain a smoothed precipitation pattern that must be retained in correcting the bias, and the second aims to make up for the deficient spatial variation of the smoothed pattern. In both steps, the Gaussian copula plays an important role, since it not only provides a feasible way to correct the spatial correlation of model simulations but can also be extended to large-dimension cases by imposing a covariance function on its correlation structure. The proposed scheme is applied to the daily precipitation data generated by a regional climate model. We verify that the biases are satisfactorily corrected by examining several statistics of the corrected data.
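The Gaussian copula mechanism underlying both steps can be sketched as follows: draw correlated normals, map each margin to a uniform, and then (not shown) apply fitted marginal quantile functions. This is a generic illustration of a Gaussian copula, not the paper's specific aggregation or disaggregation procedure; all names are assumptions:

```python
import math
import numpy as np

def gaussian_copula_uniforms(corr, n, rng):
    """Draw n samples of dependent uniforms whose dependence structure is a
    Gaussian copula with correlation matrix `corr`. Fitted marginal quantile
    functions (e.g. for daily precipitation at each site) could then be
    applied column-wise to impose the desired marginals."""
    L = np.linalg.cholesky(corr)                        # corr = L @ L.T
    z = rng.standard_normal((n, corr.shape[0])) @ L.T   # correlated normals
    # The standard normal CDF maps each margin to (0, 1).
    return 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))

rng = np.random.default_rng(0)
corr = np.array([[1.0, 0.9], [0.9, 1.0]])
u = gaussian_copula_uniforms(corr, 2000, rng)  # strongly dependent uniform pairs
```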


2018 ◽  
Vol 7 (4.11) ◽  
pp. 9
Author(s):  
RA. Hamzah ◽  
MGY. Wei ◽  
NS. Nik Anwar ◽  
AF. Kadmin ◽  
SF. Abd Gani ◽  
...  

This paper presents a new algorithm for object detection using a stereo camera system, which is applicable to machine vision applications. The proposed algorithm has four stages, the first of which is matching cost computation. This step acquires the preliminary result using a pixel-based differences method. The second stage, known as the aggregation step, uses a guided filter with a fixed window support size. This filter efficiently reduces noise and enhances edge properties. After that, the optimization stage uses a winner-takes-all (WTA) approach, which selects the smallest matching difference value and normalizes it to the disparity level. The last stage in the framework uses a bilateral filter, which effectively reduces the remaining noise on the disparity map. This map is a two-dimensional mapping of the final result, which contains information on object detection and locations. Based on the standard benchmarking stereo dataset, the proposed work produces good results and performs much better compared with some recently published methods.
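The second stage (cost aggregation over a fixed support window) can be sketched with a plain box average standing in for the guided filter; the real guided filter additionally weights the window by local image structure, so this is a simplified illustration with assumed names:

```python
import numpy as np

def aggregate_costs(cost, radius):
    """Smooth a per-pixel matching cost slice by averaging over a fixed
    square window -- a box-filter stand-in for the guided filter."""
    H, W = cost.shape
    out = np.empty_like(cost)
    for i in range(H):
        for j in range(W):
            r0, r1 = max(0, i - radius), min(H, i + radius + 1)
            c0, c1 = max(0, j - radius), min(W, j + radius + 1)
            out[i, j] = cost[r0:r1, c0:c1].mean()  # clipped at borders
    return out

raw = np.array([[0.0, 8.0], [4.0, 4.0]])
smoothed = aggregate_costs(raw, radius=1)  # here every window covers all 4 pixels
```

In the full pipeline this would run once per disparity candidate, and the smoothed volume would feed the WTA stage.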


Author(s):  
Lívia Dias Campêlo ◽  
Carlos Adolpho Magalhães Baltar ◽  
Silvia Cristina Alves França

2016 ◽  
Vol 15 (02) ◽  
pp. 285-310 ◽  
Author(s):  
Enrico Bernardi ◽  
Silvia Romagnoli

In this paper, we propose a novel approach for the computation of the probability distribution of a counting variable linked to a multivariate hierarchical Archimedean copula function. The hierarchy has a twofold impact: it acts on the aggregation step, but it also determines the arrival policy of the random event. The novelty of this work is to introduce this policy, formalized as an arrival matrix, i.e., a random matrix of dependent 0–1 random variables, into the model. This arrival matrix represents the set of distorted (by the policy itself) combinatorial distributions of the event, i.e., of the most probable scenarios. To this distorted version of the [Formula: see text] approach [see Ref. 7 and Ref. 27], we are now able to apply a pure hierarchical Archimedean dependence structure among variables. As an empirical application, we study the problem of evaluating the probability distribution of losses related to the default of various types of counterparties in a structured portfolio exposed to the credit risk of a selected set of major banks in the European area and to the correlations among these risks.

