DPlis: Boosting Utility of Differentially Private Deep Learning via Randomized Smoothing

2021, Vol 2021 (4), pp. 163-183
Author(s): Wenxiao Wang, Tianhao Wang, Lun Wang, Nanqing Luo, Pan Zhou, ...

Abstract: Deep learning techniques have achieved remarkable performance in a wide range of tasks. However, when models are trained on privacy-sensitive datasets, their parameters may expose private information in the training data. Prior attempts at differentially private training, although offering rigorous privacy guarantees, lead to much lower model performance than their non-private counterparts. Moreover, different runs of the same training algorithm produce models with large variance in performance. To address these issues, we propose DPlis (Differentially Private Learning wIth Smoothing). The core idea of DPlis is to construct a smooth loss function that favors noise-resilient models lying in large flat regions of the loss landscape. We provide theoretical justification for the utility improvements of DPlis. Extensive experiments also demonstrate that DPlis can effectively boost model quality and training stability under a given privacy budget.
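
As a rough illustration of the smoothing idea (a hedged sketch, not the authors' released code), the gradient of a smoothed loss can be approximated by averaging ordinary gradients over a few Gaussian perturbations of the model parameters; in differentially private training this averaged gradient would then be clipped and noised as usual. The function name, sigma and k below are illustrative choices.

```python
import torch
import torch.nn as nn

def smoothed_grad_step(model, loss_fn, x, y, sigma=0.01, k=4):
    """Accumulate gradients of the loss averaged over k Gaussian parameter perturbations."""
    model.zero_grad()
    originals = [p.detach().clone() for p in model.parameters()]
    for _ in range(k):
        with torch.no_grad():                          # perturb the weights in place
            for p, p0 in zip(model.parameters(), originals):
                p.copy_(p0 + sigma * torch.randn_like(p0))
        (loss_fn(model(x), y) / k).backward()          # gradients accumulate in p.grad
    with torch.no_grad():                              # restore the unperturbed weights
        for p, p0 in zip(model.parameters(), originals):
            p.copy_(p0)

# Toy usage with a stand-in linear model and random data.
model = nn.Linear(10, 1)
x, y = torch.randn(32, 10), torch.randn(32, 1)
smoothed_grad_step(model, nn.MSELoss(), x, y)
```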

2021, Vol 7 (3), pp. 59
Author(s): Yohanna Rodriguez-Ortega, Dora M. Ballesteros, Diego Renza

With the exponential growth of high-quality fake images in social networks and media, it is necessary to develop recognition algorithms for this type of content. One of the most common types of image and video editing consists of duplicating areas of the image, known as the copy-move technique. Traditional image processing approaches manually look for patterns related to the duplicated content, limiting their use in mass data classification. In contrast, approaches based on deep learning have shown better performance and promising results, but they suffer from generalization problems, with a high dependence on training data and the need for appropriate selection of hyperparameters. To overcome this, we propose two deep learning approaches: a model with a custom architecture and a model based on transfer learning. In each case, the impact of the depth of the network is analyzed in terms of precision (P), recall (R) and F1 score. Additionally, the problem of generalization is addressed with images from eight different open-access datasets. Finally, the models are compared in terms of evaluation metrics and training and inference times. The transfer-learning model based on VGG-16 achieves metrics about 10% higher than the custom-architecture model; however, it requires approximately twice as much inference time as the latter.
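
As an illustration of the transfer-learning variant (a hedged sketch; the exact head and frozen layers used in the paper may differ), a pretrained VGG-16 backbone can be reused with its convolutional features frozen and a new two-class head for authentic vs. copy-move images:

```python
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained VGG-16 and freeze its convolutional features.
backbone = models.vgg16(weights="IMAGENET1K_V1")
for p in backbone.features.parameters():
    p.requires_grad = False

# Replace the 1000-class ImageNet head with a 2-class head (authentic vs. copy-move).
backbone.classifier[-1] = nn.Linear(4096, 2)
```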


2018
Author(s): Uri Shaham

Abstract: Biological measurements often contain systematic errors, also known as "batch effects", which may invalidate downstream analysis when not handled correctly. The problem of removing batch effects is of major importance in the biological community. Despite recent advances in this direction via deep learning techniques, most current methods may not fully preserve the true biological patterns the data contains. In this work we propose a deep learning approach for batch effect removal. The crux of our approach is learning a batch-free encoding of the data, representing its intrinsic biological properties but not batch effects. In addition, we encode the systematic factors through a decoding mechanism and require accurate reconstruction of the data. Altogether, this allows us to fully preserve the true biological patterns represented in the data. Experimental results are reported on data obtained from two high-throughput technologies, mass cytometry and single-cell RNA-seq. Beyond good performance on training data, we also observe that our system performs well on test data from new patients that was not available at training time. Our method is easy to use; publicly available code can be found at https://github.com/ushaham/BatchEffectRemoval2018.
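
A minimal sketch of the batch-free encoding idea (an assumed architecture, not the code in the linked repository): the encoder produces a code intended to be free of batch effects, while the decoder receives the code together with a one-hot batch label so that accurate reconstruction is still possible; the additional term that actively removes batch information from the code is omitted here.

```python
import torch
import torch.nn as nn

class BatchFreeAutoencoder(nn.Module):
    def __init__(self, n_features, n_batches, code_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 128), nn.ReLU(),
                                     nn.Linear(128, code_dim))
        # The decoder sees the code plus the batch label, so batch effects can be
        # re-introduced during reconstruction while the code itself stays batch-free.
        self.decoder = nn.Sequential(nn.Linear(code_dim + n_batches, 128), nn.ReLU(),
                                     nn.Linear(128, n_features))

    def forward(self, x, batch_onehot):
        code = self.encoder(x)
        recon = self.decoder(torch.cat([code, batch_onehot], dim=1))
        return code, recon
```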


Author(s): Vu Tuan Hai, Dang Thanh Vu, Huynh Ho Thi Mong Trinh, Pham The Bao

Recent advances in deep learning models have shown promising potential in object removal, the task of replacing undesired objects with appropriate pixel values using the known context. Deep-learning-based object removal is commonly solved by modeling it as image-to-image (Img2Img) translation or inpainting. Instead of dealing with a large context, this paper aims at a specific application of object removal, namely erasing the trace of braces from an image of teeth with braces (the braces2teeth problem). We solve the problem with three methods corresponding to different datasets. First, we use the CycleGAN model to handle the case where paired training data are not available. In the second case, we create pseudo-paired data to train the Pix2Pix model. In the last case, we combine GraphCut with a generative inpainting model to build a user-interactive tool that can improve the result when the user is not satisfied with previous results. To the best of our knowledge, this study is one of the first attempts to address the braces2teeth problem using deep learning techniques, and it can be applied in various fields, from health care to entertainment.
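
One plausible way to build the pseudo-paired data mentioned for the Pix2Pix case (a hedged sketch under the assumption that brace-like marks are synthesized onto clean teeth crops; the paper's actual construction may differ):

```python
import numpy as np

def add_synthetic_braces(clean, n_brackets=8, size=6, value=200):
    """Stamp bright square 'brackets' along the middle row of a grayscale teeth crop,
    returning an (input, target) pseudo-pair for a Pix2Pix-style model."""
    corrupted = clean.copy()
    h, w = clean.shape
    y = h // 2
    for x in np.linspace(size, w - size, n_brackets).astype(int):
        corrupted[y - size // 2:y + size // 2, x - size // 2:x + size // 2] = value
    return corrupted, clean
```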


Geophysics, 2021, pp. 1-45
Author(s): Runhai Feng, Dario Grana, Niels Balling

Segmentation of faults based on seismic images is an important step in reservoir characterization. With the recent development of deep-learning methods and the availability of massive computing power, automatic interpretation of seismic faults has become possible. The likelihood of occurrence of a fault can be quantified using a sigmoid function. Our goal is to quantify the fault-model uncertainty that is generally not captured by deep-learning tools. We propose to use the dropout approach, a regularization technique for preventing overfitting and co-adaptation in hidden units, to approximate Bayesian inference and estimate principled uncertainty over functions. In particular, the variance of the learned model is decomposed into aleatoric and epistemic parts. The proposed method is applied to a real dataset from the Netherlands F3 block with two different dropout ratios in convolutional neural networks. The aleatoric uncertainty is irreducible, since it relates to the stochastic dependency within the input observations. As the number of Monte Carlo realizations increases, the epistemic uncertainty asymptotically converges and the model standard deviation decreases, because the variability of the model parameters is better simulated or explained with a larger sample size. This analysis quantifies the confidence with which fault predictions can be used where uncertainty is low. Additionally, it suggests where more training data are needed to reduce the uncertainty in low-confidence regions.
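
A hedged sketch of the Monte Carlo dropout inference described above (illustrative names; one simple decomposition in which the sample variance of the sigmoid outputs is read as epistemic uncertainty and the average Bernoulli variance as aleatoric uncertainty):

```python
import torch

def mc_dropout_fault_probability(model, x, n_samples=50):
    """Run the network n_samples times with dropout left active and summarize the outputs."""
    model.train()  # keeps dropout on at inference; ideally only dropout layers are in train mode
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(model(x)) for _ in range(n_samples)])
    mean = probs.mean(dim=0)                        # fault likelihood
    epistemic = probs.var(dim=0)                    # spread across MC samples (model uncertainty)
    aleatoric = (probs * (1 - probs)).mean(dim=0)   # average Bernoulli variance (data noise)
    return mean, epistemic, aleatoric
```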


2018, Vol 7 (3.27), pp. 258
Author(s): Yecheng Yao, Jungho Yi, Shengjun Zhai, Yuwen Lin, Taekseung Kim, ...

The decentralization of cryptocurrencies has greatly reduced the level of central control over them, impacting international relations and trade. Further, wide fluctuations in cryptocurrency prices indicate an urgent need for an accurate way to forecast these prices. This paper proposes a novel method to predict cryptocurrency price by considering various factors, such as market cap, volume, circulating supply, and maximum supply, using deep learning techniques such as the recurrent neural network (RNN) and the long short-term memory (LSTM) network, which are effective models for learning from training data, with the LSTM being better at recognizing longer-term associations. The proposed approach is implemented in Python and validated on benchmark datasets. The results verify the applicability of the proposed approach for the accurate prediction of cryptocurrency prices.
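
A minimal sketch of the kind of LSTM predictor described (assumed layer sizes and feature set, not the paper's exact configuration): a sequence of daily features such as price, market cap, volume and supply is mapped to the next price.

```python
import torch.nn as nn

class PriceLSTM(nn.Module):
    """Daily feature sequences (price, market cap, volume, supply, ...) -> next price."""
    def __init__(self, n_features=5, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # predict from the last time step
```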


Author(s): Daniel Gebler, Agnieszka Kolada, Agnieszka Pasztaleniec, Krzysztof Szoszkiewicz

Abstract: Since 2000, after the Water Framework Directive came into force, the bioassessment of aquatic ecosystems has acquired immense practical importance for water management. Thanks to extensive scientific research and monitoring, comprehensive hydrobiological databases have now been gathered. The amount of available data increases with each subsequent year of monitoring, and efficient analysis of these data requires proper mathematical tools. Our study compares the modelling potential of four indices for the ecological status assessment of lakes based on three groups of aquatic organisms, i.e. phytoplankton, phytobenthos and macrophytes. Artificial neural networks, one of the deep learning techniques, were used to predict the values of the four biological indices from a limited set of physicochemical water parameters. All analyses were conducted separately for lakes with different stratification regimes, as they function differently. The best modelling quality, in terms of high coefficients of determination and low normalised root mean square errors, was obtained for chlorophyll a, followed by the phytoplankton multimetric. A lower degree of fit was obtained for the macrophyte index, and the poorest model quality was obtained for the phytobenthos index. For all indices, modelling quality for non-stratified lakes was higher than that for stratified lakes, with a higher percentage of variance explained by the networks and lower errors. Sensitivity analysis showed that, among the physicochemical parameters, water transparency (Secchi disk reading) exhibits the strongest relationship with the ecological status of lakes derived from phytoplankton and macrophytes. At the same time, all input variables had a negligible impact on the phytobenthos index. In this way, different explanations of the relationship between biological and trophic variables were revealed.
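
For illustration, a small feed-forward network of the kind used here can be set up as follows (a hedged sketch with illustrative input variables and layer sizes; the study fits separate networks per index and per stratification type):

```python
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative physicochemical predictors for one biological index (e.g. chlorophyll a).
features = ["secchi_depth", "total_phosphorus", "total_nitrogen", "conductivity", "ph"]
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000))
# model.fit(X[features], y_index)   # fit separately for each index and lake type
```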


2020
Author(s): Ghazi Abdalla, Fatih Özyurt

Abstract: In the modern era, Internet usage has become a basic necessity in people's lives. Nowadays, people can shop online and check other customers' views about products purchased online. Social networking services enable users to post opinions on public platforms. Analyzing people's opinions helps corporations improve the quality of their products and provide better customer service. However, analyzing this content manually is a daunting task; therefore, we implemented sentiment analysis to automate the process. The entire process includes data collection, pre-processing, word embedding, sentiment detection and classification using deep learning techniques. Twitter was chosen as the source of data, and tweets were collected automatically using Tweepy. In this paper, three deep learning techniques were implemented: CNN, Bi-LSTM and CNN-Bi-LSTM. Each model was trained on three datasets consisting of 50K, 100K and 200K tweets. The experimental results revealed that, as the amount of training data increased, the performance of the models improved, especially that of the Bi-LSTM model. When trained on the 200K dataset, the model achieved about 3% higher accuracy than on the 100K dataset and about 7% higher accuracy than on the 50K dataset. Finally, the Bi-LSTM model scored the highest on all metrics, achieving an accuracy of 95.35%.
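
A minimal sketch of the Bi-LSTM classifier (assumed vocabulary, embedding and hidden sizes; not the paper's exact hyperparameters):

```python
import torch.nn as nn

class BiLSTMSentiment(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128, hidden=64, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, tokens):           # tokens: (batch, seq_len) integer word ids
        out, _ = self.lstm(self.embed(tokens))
        return self.head(out[:, -1])     # classify from the final time step
```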


2018, Vol 30 (4), pp. 513-522
Author(s): Yuichi Konishi, Kosuke Shigematsu, Takashi Tsubouchi, Akihisa Ohya

The Tsukuba Challenge is an open experiment competition held annually since 2007, in which autonomous navigation robots developed by the participants must navigate an urban setting where pedestrians and cyclists are present. One of the required tasks in the Tsukuba Challenge from 2013 to 2017 was to search for persons wearing designated clothes within the search area. This is a very difficult task, since these persons must be sought out in an environment that includes regular pedestrians and whose lighting changes easily with the weather. Moreover, the recognition system must have a low computational cost because of the limited performance of the computer mounted on the robot. In this study, we focused on a deep learning method for detecting the target persons in captured images. The developed detection system was expected to achieve high detection performance even when small input images were used for deep learning. Experiments demonstrated that the proposed system achieved better performance than an existing object detection network. However, because a vast amount of training data is necessary for deep learning, a method of generating training data for detecting the target persons is also discussed in this paper.
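
One simple way to generate additional training data of the kind the paper discusses (a hedged sketch under the assumption that labelled examples are synthesized by pasting target-person crops onto background scenes; the authors' actual procedure may differ):

```python
import numpy as np

def paste_person(background, person_crop, top, left):
    """Paste a target-person crop onto a background scene and return the image
    together with its ground-truth bounding box (left, top, right, bottom)."""
    out = background.copy()
    h, w = person_crop.shape[:2]
    out[top:top + h, left:left + w] = person_crop
    return out, (left, top, left + w, top + h)
```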


Forecasting, 2021, Vol 3 (4), pp. 741-762
Author(s): Panagiotis Stalidis, Theodoros Semertzidis, Petros Daras

In this paper, a detailed study on crime classification and prediction using deep learning architectures is presented. We examine the effectiveness of deep learning algorithms in this domain and provide recommendations for designing and training deep learning systems for predicting crime areas, using open data from police reports. Using time series of crime types per location as training data, a comparative study of 10 state-of-the-art methods against 3 different deep learning configurations is conducted. In our experiments on 5 publicly available datasets, we demonstrate that the deep learning-based methods consistently outperform the existing best-performing methods. Moreover, we evaluate the effectiveness of different parameters in the deep learning architectures and give insights for configuring them to achieve improved performance in crime classification and, ultimately, crime prediction.
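
As an illustration of the input representation, police report rows can be aggregated into per-location time series of crime-type counts (a hedged sketch with assumed column names timestamp, cell_id and crime_type):

```python
import pandas as pd

def build_crime_series(reports, freq="W"):
    """Aggregate (timestamp, cell_id, crime_type) report rows into weekly
    crime-type counts per spatial cell, ready to feed a sequence model."""
    counts = (reports
              .assign(period=reports["timestamp"].dt.to_period(freq))
              .groupby(["cell_id", "period", "crime_type"])
              .size())
    return counts.unstack(fill_value=0)   # rows: (cell_id, period); columns: crime types
```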


2020
Author(s): Haiming Tang, Nanfei Sun, Steven Shen

Artificial intelligence (AI) has made emerging progress in diagnostic pathology. A large number of studies applying deep learning models to histopathological images have been published in recent years. While many studies claim high accuracy, they may fall into the pitfalls of overfitting and lack of generalization due to the high variability of histopathological images. We use the example of osteosarcoma to illustrate these pitfalls and how adding variability to the model input can help improve model performance. We use the publicly available osteosarcoma dataset to retrain a previously published classification model for osteosarcoma. We partition the same set of images into training and testing datasets differently than the original study: the test dataset consists of images from one patient, while the training dataset consists of images from all other patients. The performance of the model on the test set under the new partition scheme declines dramatically, indicating a lack of model generalization and overfitting. We also show the influence of training data variability on model performance by collecting a minimal dataset of 10 osteosarcoma subtypes as well as benign tissues and benign bone tumors for differentiation. We show that adding more and more subtypes to the training data step by step, under the same model scheme, yields a series of coherent models with increasing performance. In conclusion, we put forward data preprocessing and collection tactics for highly variable histopathological images to avoid the pitfalls of overfitting and to build deep learning models with better generalization ability.
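
The patient-level partitioning described above can be reproduced with a grouped split, so that all images from a held-out patient land in the test set (a hedged sketch with illustrative variable names):

```python
from sklearn.model_selection import GroupShuffleSplit

# images, labels: per-image arrays; patient_ids: the patient each image came from.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.1, random_state=0)
# train_idx, test_idx = next(splitter.split(images, labels, groups=patient_ids))
```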

