How to Sell a Data Set? Pricing Policies for Data Monetization

Author(s):  
Sameer Mehta ◽  
Milind Dawande ◽  
Ganesh Janakiraman ◽  
Vijay Mookerjee

The wide variety of pricing policies used in practice by data sellers suggests that there are significant challenges in pricing data sets. In this paper, we develop a utility framework that is appropriate for data buyers and the corresponding pricing of the data by the data seller. Buyers interested in purchasing a data set have private valuations in two aspects—their ideal record that they value the most, and the rate at which their valuation for the records in the data set decays as they differ from the buyers’ ideal record. The seller allows individual buyers to filter the data set and select the records that are of interest to them. The multidimensional private information of the buyers coupled with the endogenous selection of records makes the seller’s problem of optimally pricing the data set a challenging one. We formulate a tractable model and successfully exploit its special structure to obtain optimal and near-optimal data-selling mechanisms. Specifically, we provide insights into the conditions under which a commonly used mechanism—namely, a price-quantity schedule—is optimal for the data seller. When the conditions leading to the optimality of a price-quantity schedule do not hold, we show that the optimal price-quantity schedule offers an attractive worst-case guarantee relative to an optimal mechanism. Further, we numerically solve for the optimal mechanism and show that the actual performance of two simple and well-known price-quantity schedules—namely, two-part tariff and two-block tariff—is near optimal. We also quantify the value to the seller from allowing buyers to filter the data set.
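
To make the two schedules mentioned above concrete, here is a minimal sketch, in Python, of how a two-part tariff and one common form of a two-block tariff map a purchased quantity of records to a payment. The fee and price values are invented for illustration and are not taken from the paper.

```python
# Hypothetical illustration of two price-quantity schedules; the fee and
# price values below are made up for the example.

def two_part_tariff(q: int, fixed_fee: float, unit_price: float) -> float:
    """Payment for q records: a fixed fee plus a constant per-record price."""
    return 0.0 if q == 0 else fixed_fee + unit_price * q

def two_block_tariff(q: int, price_1: float, price_2: float,
                     breakpoint: int) -> float:
    """Payment for q records: the first `breakpoint` records at price_1,
    any further records at price_2 (one common form of a two-block tariff)."""
    return price_1 * min(q, breakpoint) + price_2 * max(q - breakpoint, 0)

# Example: a buyer filters the data set and keeps 500 records.
print(two_part_tariff(500, fixed_fee=100.0, unit_price=0.20))     # 200.0
print(two_block_tariff(500, price_1=0.40, price_2=0.15,
                       breakpoint=300))                           # 150.0
```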

Author(s):  
Antonia J. Jones ◽  
Dafydd Evans ◽  
Steve Margetts ◽  
Peter J. Durrant

The Gamma Test is a non-linear modelling analysis tool that allows us to quantify the extent to which a numerical input/output data set can be expressed as a smooth relationship. In essence, it allows us to efficiently calculate that part of the variance of the output that cannot be accounted for by the existence of any smooth model based on the inputs, even though this model is unknown. A key aspect of this tool is its speed: the Gamma Test has time complexity O(M log M), where M is the number of data points. For data sets consisting of a few thousand points and a reasonable number of attributes, a single run of the Gamma Test typically takes a few seconds. In this chapter we will show how the Gamma Test can be used in the construction of predictive models and classifiers for numerical data. In doing so, we will demonstrate the use of this technique for feature selection, and for the selection of the embedding dimension when dealing with a time series.
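
As a rough illustration of the idea (not the authors' implementation), the sketch below estimates the output noise variance from nearest-neighbour statistics in the spirit of the Gamma Test, using a k-d tree so the neighbour search stays near O(M log M). The parameter p (number of near neighbours) and the synthetic data are assumptions for the example.

```python
# Gamma-Test-style noise-variance estimate: y = f(x) + r, estimate Var(r).
import numpy as np
from scipy.spatial import cKDTree

def gamma_test(X, y, p=10):
    X, y = np.asarray(X, float), np.asarray(y, float)
    tree = cKDTree(X)
    # The nearest "neighbour" of each point is itself, so query p + 1.
    dist, idx = tree.query(X, k=p + 1)
    delta = (dist[:, 1:] ** 2).mean(axis=0)                          # delta(k), k = 1..p
    gamma = 0.5 * ((y[idx[:, 1:]] - y[:, None]) ** 2).mean(axis=0)   # gamma(k)
    slope, intercept = np.polyfit(delta, gamma, 1)                   # regress gamma on delta
    return intercept, slope        # intercept approximates the noise variance

# Synthetic check: y = sin(x) + Gaussian noise with variance 0.01.
rng = np.random.default_rng(0)
X = rng.uniform(0, 2 * np.pi, size=(2000, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 2000)
print(gamma_test(X, y))            # intercept should be close to 0.01
```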


1988 ◽  
Vol 254 (1) ◽  
pp. E104-E112
Author(s):  
B. Candas ◽  
J. Lalonde ◽  
M. Normand

The aim of this study is the selection of the number of compartments required for a model to represent the distribution and metabolism of corticotropin-releasing factor (CRF) in rats. The dynamics of labeled rat CRF were measured in plasma for seven rats after a rapid injection. The sampling schedule resulted from the combination of the two D-optimal sampling sets of times corresponding to both rival models. This protocol improved the numerical identifiability of the parameters and consequently facilitated the selection of the relevant model. A three-compartment model fits the seven individual dynamics adequately and represents four of them better than the lower-order model does. It was demonstrated, using simulations in which the measurement errors and the interindividual variability of the parameters are included, that this four-to-seven ratio of data sets is consistent with the relevance of the three-compartment model for every individual kinetic data set. Kinetic and metabolic parameters were then derived for each individual rat, their values being consistent with the prolonged effects of CRF on pituitary-adrenocortical secretion.
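
For readers unfamiliar with this kind of model comparison: a mammillary n-compartment model observed in plasma after a rapid injection reduces to a sum of n decaying exponentials, so the two rival models can be compared by fitting bi- and tri-exponential curves. The sketch below uses hypothetical data, not the study's measurements, and an AIC criterion as one possible selection rule.

```python
# Compare two- vs three-compartment (bi- vs tri-exponential) fits on one
# kinetic data set; times and concentrations below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, A1, l1, A2, l2):
    return A1 * np.exp(-l1 * t) + A2 * np.exp(-l2 * t)

def triexp(t, A1, l1, A2, l2, A3, l3):
    return biexp(t, A1, l1, A2, l2) + A3 * np.exp(-l3 * t)

def aic(y, yhat, n_params):
    rss = np.sum((y - yhat) ** 2)
    n = len(y)
    return n * np.log(rss / n) + 2 * n_params

# Sampling times (min) and tracer concentrations for one animal (synthetic).
rng = np.random.default_rng(2)
t = np.array([1, 2, 4, 6, 10, 15, 20, 30, 45, 60, 90, 120], float)
c = 50 * np.exp(-0.8 * t) + 20 * np.exp(-0.15 * t) + 5 * np.exp(-0.02 * t)
c = c + rng.normal(0, 0.3, t.size)

p2, _ = curve_fit(biexp, t, c, p0=[40, 1.0, 10, 0.05], maxfev=10000)
p3, _ = curve_fit(triexp, t, c, p0=[40, 1.0, 15, 0.1, 5, 0.01], maxfev=10000)
print("AIC two-compartment:", aic(c, biexp(t, *p2), 4))
print("AIC three-compartment:", aic(c, triexp(t, *p3), 6))   # lower AIC wins
```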


2021 ◽  
Vol 79 (1) ◽  
Author(s):  
Romana Haneef ◽  
Sofiane Kab ◽  
Rok Hrzic ◽  
Sonsoles Fuentes ◽  
Sandrine Fosse-Edorh ◽  
...  

Abstract Background The use of machine learning techniques is increasing in healthcare, allowing health outcomes to be estimated and predicted from large administrative data sets more efficiently. The main objective of this study was to develop a generic machine learning (ML) algorithm to estimate the incidence of diabetes based on the number of reimbursements over the last 2 years. Methods We selected a final data set from a population-based epidemiological cohort (i.e., CONSTANCES) linked with the French National Health Database (i.e., SNDS). To develop this algorithm, we adopted a supervised ML approach. The following steps were performed: (i) selection of the final data set, (ii) target definition, (iii) coding of variables for a given window of time, (iv) splitting the final data into training and test data sets, (v) variable selection, (vi) model training, (vii) validation of the model with the test data set, and (viii) selection of the model. We used the area under the receiver operating characteristic curve (AUC) to select the best algorithm. Results The final data set used to develop the algorithm included 44,659 participants from CONSTANCES. Of the 3468 coded variables from the SNDS linked to the CONSTANCES cohort, 23 were selected to train different algorithms. The final algorithm to estimate the incidence of diabetes was a Linear Discriminant Analysis model based on the number of reimbursements of selected variables related to biological tests, drugs, medical acts and hospitalization without a procedure over the last 2 years. This algorithm has a sensitivity of 62%, a specificity of 67% and an accuracy of 67% [95% CI: 0.66–0.68]. Conclusions Supervised ML is an innovative tool for developing new methods to exploit large health administrative databases. In the context of the InfAct project, we have developed and applied, for the first time, a generic ML algorithm to estimate the incidence of diabetes for public health surveillance. The algorithm we have developed has moderate performance. The next step is to apply this algorithm to the SNDS to estimate the incidence of type 2 diabetes cases. More research is needed to apply various ML techniques to estimate the incidence of various health conditions.
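
A minimal sketch of steps (iv) to (viii) with scikit-learn is shown below; the data frame, column names and split proportions are placeholders rather than the actual CONSTANCES/SNDS variables.

```python
# Supervised workflow sketch: split, train an LDA classifier on reimbursement
# counts, and validate with AUC, sensitivity, specificity and accuracy.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score, confusion_matrix, accuracy_score

def fit_incidence_model(df: pd.DataFrame, feature_cols,
                        target_col="incident_diabetes"):
    X, y = df[feature_cols], df[target_col]
    # (iv) split into training and test data sets
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              stratify=y, random_state=42)
    # (vi) train the model: LDA on reimbursement counts over the last 2 years
    model = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
    # (vii) validate on the test data set
    proba = model.predict_proba(X_te)[:, 1]
    pred = model.predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    return model, {
        "auc": roc_auc_score(y_te, proba),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": accuracy_score(y_te, pred),
    }
```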


2019 ◽  
Vol 5 (10) ◽  
pp. 2120-2130 ◽  
Author(s):  
Suraj Kumar ◽  
Thendiyath Roshni ◽  
Dar Himayoun

A reliable method of rainfall-runoff modelling is a prerequisite for proper management and mitigation of extreme events such as floods. The objective of this paper is to contrast the hydrological performance of the Emotional Neural Network (ENN) and the Artificial Neural Network (ANN) for modelling rainfall-runoff in the Sone Command, Bihar, as this area experiences floods due to heavy rainfall. The ENN is a modified version of the ANN, as it includes neural parameters that enhance the network learning process. Selection of inputs is a crucial task for a rainfall-runoff model. This paper uses cross-correlation analysis for the selection of potential predictors. Three sets of input data (Set 1, Set 2 and Set 3) were prepared using weather and discharge data from 2 raingauge stations and 1 discharge station located in the command for the period 1986-2014. Principal Component Analysis (PCA) was then performed on the selected data sets to identify the data sets showing the principal tendencies. The data sets obtained after PCA were then used in the development of the ENN and ANN models. Performance indices were computed for the developed models on the three data sets. The results obtained from Set 2 showed that the ENN, with R = 0.933, R2 = 0.870, Nash-Sutcliffe efficiency = 0.8689, RMSE = 276.1359 and Relative Peak Error = 0.00879, outperforms the ANN in simulating the discharge. Therefore, the ENN is suggested as the better model for rainfall-runoff modelling of discharge in the Sone Command, Bihar.
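
The sketch below illustrates the ANN branch of such a workflow (an emotional neural network is not available in standard libraries): lagged predictors chosen by cross-correlation, PCA, an MLP, and Nash-Sutcliffe/RMSE scoring. All data, lags and thresholds are synthetic placeholders, not the Sone Command records.

```python
# Cross-correlation lag selection + PCA + ANN on synthetic rainfall-runoff data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def lagged_corr(x, y, lag):
    """Pearson correlation between x lagged by `lag` steps and y."""
    return np.corrcoef(x[:len(x) - lag], y[lag:])[0, 1]

def nse(obs, sim):
    """Nash-Sutcliffe efficiency."""
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Placeholder daily series for two raingauges and the discharge station.
rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 5.0, size=(3000, 2))
q = (0.6 * np.roll(rain[:, 0], 1) + 0.3 * np.roll(rain[:, 1], 2)
     + rng.normal(0, 1, 3000))

# Keep lags whose cross-correlation with discharge exceeds a threshold.
lags = [lag for lag in range(6)
        if max(abs(lagged_corr(rain[:, 0], q, lag)),
               abs(lagged_corr(rain[:, 1], q, lag))) > 0.2]
X = np.column_stack([np.roll(rain, lag, axis=0) for lag in lags])[max(lags):]
y = q[max(lags):]
split = int(0.8 * len(y))                 # chronological train/test split

model = make_pipeline(StandardScaler(), PCA(n_components=0.95),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                                   random_state=1))
model.fit(X[:split], y[:split])
sim = model.predict(X[split:])
print("NSE:", nse(y[split:], sim),
      "RMSE:", np.sqrt(np.mean((y[split:] - sim) ** 2)))
```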


2019 ◽  
Vol 12 (3) ◽  
pp. 427-466
Author(s):  
Yiyi Chen

Abstract Existing research on mediation finds that mediation by a strong mediator is both more prevalent and more conducive to a negotiated settlement. However, why disputants select a weak mediator remains unclear. From the perspective of the uncertainty mechanism, mediation is essentially a procedure for sharing private information and reducing disputants’ uncertainty about each other’s resolve to continue fighting. Disputants can benefit from mediation by gaining a comparative advantage regarding uncertainty, focusing either on controlling the sharing of their own information or on increasing their opponents’ sharing of information. With regard to these two strategic choices, this article argues that the selection of a weak mediator is more likely when disputants prefer controlling the sharing of their own information to expanding their opponents’ information sharing. Correspondingly, three potential factors that influence the disputants’ strategic choice for gaining a comparative advantage regarding uncertainty are examined: a previous mediation in the dispute; the dispute’s level of hostility; and the power disparity between the disputants. The author compiles data from the International Crisis Behaviour (ICB, 1918–2015) data set and the International Conflict Management (ICM, 1945–2003) data set for the empirical analysis. The results show that mediation by a weak mediator is more likely when it is the first time that the disputants have submitted to mediation in the dispute and when the dispute’s level of hostility is low. In some cases, a large power disparity between the disputants also makes the selection of a weak mediator more likely.
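
The empirical analysis described above implies a binary-outcome model of mediator selection; a hypothetical sketch of such a specification is given below. The data frame and column names are placeholders, not the actual ICB/ICM variables.

```python
# Hypothetical logit of weak- vs strong-mediator selection on the three
# factors named above; all variable names are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

def fit_weak_mediator_model(df: pd.DataFrame):
    # weak_mediator: 1 if a weak mediator was selected, 0 otherwise
    # first_mediation: 1 for the first mediation attempt in the dispute
    # hostility: level of hostility; power_disparity: capability ratio
    model = smf.logit(
        "weak_mediator ~ first_mediation + hostility + power_disparity",
        data=df).fit()
    return model.summary()
```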


2012 ◽  
Vol 52 (No. 4) ◽  
pp. 188-196 ◽  
Author(s):  
Y. Lei ◽  
S. Y Zhang

Forest modellers have long faced the problem of selecting an appropriate mathematical model to describe tree ontogenetic or size-shape empirical relationships for tree species. A common practice is to develop many models (or a model pool) that include different functional forms, and then to select the most appropriate one for a given data set. However, this process may impose subjective restrictions on the functional form. In this process, little attention is paid to the features of the different functional forms (e.g. whether they are asymptotic or non-asymptotic, or possess an inflection point) and to the intrinsic curve shape of a given data set. In order to find a better way of comparing and selecting growth models, this paper describes and analyses the characteristics of the Schnute model. This model possesses a flexibility and versatility that have not been fully exploited in forestry. In this study, the Schnute model was applied to different data sets of selected forest species to determine their functional forms. The results indicate that the model shows some desirable properties for the examined data sets and allows the different intrinsic curve shapes, such as sigmoid, concave and other shapes, to be discerned. Since the suitable functional form for a given data set is usually not known prior to the comparison of candidate models, it is recommended that the Schnute model be used as a first step to determine an appropriate functional form for the data set under investigation, in order to avoid imposing a functional form a priori.
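
For reference, one common four-parameter case of the Schnute model (a ≠ 0, b ≠ 0) and a fitting sketch are shown below; the age-height data are invented for illustration and the estimated signs of a and b indicate the curve shape.

```python
# Fit the Schnute growth model (case a != 0, b != 0) to hypothetical data.
import numpy as np
from scipy.optimize import curve_fit

def schnute(t, y1, y2, a, b, t1, t2):
    """y(t) = [y1^b + (y2^b - y1^b) * (1 - e^{-a(t-t1)}) / (1 - e^{-a(t2-t1)})]^{1/b}"""
    num = 1 - np.exp(-a * (t - t1))
    den = 1 - np.exp(-a * (t2 - t1))
    return (y1 ** b + (y2 ** b - y1 ** b) * num / den) ** (1 / b)

# Hypothetical age (years) and height (m) observations for one species.
age = np.array([5, 10, 15, 20, 30, 40, 60, 80, 100], float)
height = np.array([2.1, 5.0, 8.2, 11.0, 15.5, 18.4, 21.9, 23.5, 24.2])

t1, t2 = age.min(), age.max()     # first and last ages in the data
popt, _ = curve_fit(lambda t, y1, y2, a, b: schnute(t, y1, y2, a, b, t1, t2),
                    age, height, p0=[2.0, 24.0, 0.05, 1.0], maxfev=20000)
y1, y2, a, b = popt
print(f"a = {a:.4f}, b = {b:.4f}")   # shape parameters of the fitted curve
```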


2001 ◽  
Vol 57 (4) ◽  
pp. 497-506 ◽  
Author(s):  
A. T. H. Lenstra ◽  
O. N. Kataeva

The crystal structures of the title compounds were determined with net intensities I derived via the background–peak–background procedure. Least-squares optimizations reveal differences between the low-order (0 < s < 0.7 Å−1) and high-order (0.7 < s < 1.0 Å−1) structure models. The scale factors indicate discrepancies of up to 10% between the low-order and high-order reflection intensities. This observation is compound independent. It reflects the scan-angle-induced truncation error, because the applied scan angle (0.8 + 2.0 tan θ)° underestimates the wavelength dispersion in the monochromated X-ray beam. The observed crystal structures show pseudo-I-centred sublattices for three of the non-H atoms in the asymmetric unit. Our selection of observed intensities (I > 3σ) stresses that pseudo-symmetry. Model refinements on individual data sets with (h + k + l) = 2n and (h + k + l) = 2n + 1 illustrate the lack of model robustness caused by that pseudo-symmetry. To obtain a better balanced data set, and thus a more robust structure, we decided to exploit background modelling. We described the background intensities B(H) with an 11th-degree polynomial in θ, where H denotes the reciprocal-lattice vector. This function predicts the local background b at each position H and defines the counting-statistical distribution P(B), in which b serves as both average and variance. The observation R defines P(R). This leads to P(I) = P(R)/P(B), and thus I = R − b and σ²(I) = I, so that the error σ(I) is background independent. Within this framework we reanalysed the structure of the copper(II) derivative. Background modelling resulted in a structure model with improved internal consistency. At the same time, the unweighted R value based on all observations decreased from 10.6 to 8.4%. A redetermination of the structure at 120 K concluded the analysis.
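
The background-modelling step can be illustrated schematically as follows (this is not the authors' software): fit an 11th-degree polynomial in θ to the measured backgrounds, then form I = R − b with σ²(I) = I for each reflection. All numbers below are hypothetical.

```python
# Background modelling sketch: polynomial background in theta, net intensities.
import numpy as np

def background_model(theta, background_counts, degree=11):
    """Fit b(theta) as a polynomial of the given degree (scaled domain)."""
    return np.polynomial.Polynomial.fit(theta, background_counts, degree)

def net_intensities(theta, raw_counts, b_of_theta):
    b = b_of_theta(theta)                     # predicted local background
    I = raw_counts - b                        # I = R - b
    sigma_I = np.sqrt(np.clip(I, 0, None))    # sigma^2(I) = I
    return I, sigma_I

# Hypothetical reflections: theta (degrees), raw counts R, measured backgrounds.
rng = np.random.default_rng(1)
theta = np.sort(rng.uniform(2, 25, 400))
true_b = 50 + 3 * theta + 0.05 * theta ** 2
bkg = rng.poisson(true_b).astype(float)
R = rng.poisson(true_b + 200).astype(float)

b_fun = background_model(theta, bkg)
I, sig = net_intensities(theta, R, b_fun)
strong = I > 3 * sig                          # the I > 3*sigma selection
print(I[:5].round(1), strong.mean())
```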


2015 ◽  
Vol 2015 ◽  
pp. 1-10 ◽  
Author(s):  
J. Zyprych-Walczak ◽  
A. Szabelska ◽  
L. Handschuh ◽  
K. Górczak ◽  
K. Klamecka ◽  
...  

High-throughput sequencing technologies, such as the Illumina HiSeq, are powerful new tools for investigating a wide range of biological and medical problems. The massive and complex data sets produced by the sequencers create a need for the development of statistical and computational methods that can tackle the analysis and management of the data. Data normalization is one of the most crucial steps of data processing, and this process must be carefully considered as it has a profound effect on the results of the analysis. In this work, we focus on a comprehensive comparison of five normalization methods related to sequencing depth, widely used for transcriptome sequencing (RNA-seq) data, and their impact on the results of gene expression analysis. Based on this study, we suggest a universal workflow that can be applied for the selection of the optimal normalization procedure for any particular data set. The described workflow includes calculation of the bias and variance values for the control genes, the sensitivity and specificity of the methods, and classification errors, as well as generation of diagnostic plots. Combining the above information facilitates the selection of the most appropriate normalization method for the studied data sets and determines which methods can be used interchangeably.
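
As an illustration of comparing depth-related scaling methods, the sketch below computes three common sets of normalization factors on a synthetic count matrix; these are generic examples and not necessarily the five methods evaluated in the study.

```python
# Depth-related scaling factors on a genes x samples count matrix.
import numpy as np

def total_count_factors(counts):
    lib = counts.sum(axis=0)
    return lib / lib.mean()

def upper_quartile_factors(counts):
    uq = np.array([np.percentile(c[c > 0], 75) for c in counts.T])
    return uq / uq.mean()

def median_of_ratios_factors(counts):
    """DESeq-style median of ratios, with a small pseudo-count for zeros."""
    log_geomean = np.log(counts + 0.5).mean(axis=1)          # per-gene reference
    ratios = np.log(counts + 0.5) - log_geomean[:, None]
    return np.exp(np.median(ratios, axis=0))

# Hypothetical count matrix: 1000 genes x 6 samples with unequal depths.
rng = np.random.default_rng(3)
depth = np.array([1.0, 1.2, 0.8, 2.0, 1.5, 0.9])
counts = rng.poisson(rng.gamma(0.5, 20, size=(1000, 1)) * depth)

for name, f in [("total count", total_count_factors),
                ("upper quartile", upper_quartile_factors),
                ("median of ratios", median_of_ratios_factors)]:
    print(name, np.round(f(counts), 2))
# Normalized expression = counts / factors; bias/variance of control genes,
# sensitivity/specificity and classification error can then be compared
# across methods, as in the workflow described above.
```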


2003 ◽  
Vol 127 (6) ◽  
pp. 680-686 ◽  
Author(s):  
Jules J. Berman

Abstract Context.—In the normal course of activity, pathologists create and archive immense data sets of scientifically valuable information. Researchers need pathology-based data sets, annotated with clinical information and linked to archived tissues, to discover and validate new diagnostic tests and therapies. Pathology records can be used for research purposes (without obtaining informed patient consent for each use of each record), provided the data are rendered harmless. Large data sets can be made harmless through 3 computational steps: (1) deidentification, the removal or modification of data fields that can be used to identify a patient (name, social security number, etc); (2) rendering the data ambiguous, ensuring that every data record in a public data set has a nonunique set of characterizing data; and (3) data scrubbing, the removal or transformation of words in free text that can be used to identify persons or that contain information that is incriminating or otherwise private. This article addresses the problem of data scrubbing. Objective.—To design and implement a general algorithm that scrubs pathology free text, removing all identifying or private information. Methods.—The Concept-Match algorithm steps through confidential text. When a medical term matching a standard nomenclature term is encountered, the term is replaced by a nomenclature code and a synonym for the original term. When a high-frequency “stop” word, such as a, an, the, or for, is encountered, it is left in place. When any other word is encountered, it is blocked and replaced by asterisks. This produces a scrubbed text. An open-source implementation of the algorithm is freely available. Results.—The Concept-Match scrub method transformed pathology free text into scrubbed output that preserved the sense of the original sentences, while it blocked terms that did not match terms found in the Unified Medical Language System (UMLS). The scrubbed product is safe, in the restricted sense that the output retains only standard medical terms. The software implementation scrubbed more than half a million surgical pathology report phrases in less than an hour. Conclusions.—Computerized scrubbing can render the textual portion of a pathology report harmless for research purposes. Scrubbing and deidentification methods allow pathologists to create and use large pathology databases to conduct medical research.
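
A highly simplified sketch of the Concept-Match idea is given below: keep high-frequency stop words, replace recognized nomenclature terms with a code plus a synonym, and block everything else with asterisks. The tiny term list and the codes are illustrative placeholders for UMLS-scale resources; this is not the authors' open-source implementation.

```python
# Simplified Concept-Match-style scrubbing of free text.
STOPWORDS = {"a", "an", "the", "for", "of", "with", "and", "in"}
NOMENCLATURE = {            # term -> (illustrative code, preferred synonym)
    "adenocarcinoma": ("C0000001", "glandular cancer"),
    "colon": ("C0000002", "large intestine"),
    "biopsy": ("C0000003", "tissue sampling"),
}

def concept_match_scrub(text: str) -> str:
    out = []
    for token in text.lower().split():
        word = token.strip(".,;:")
        if word in STOPWORDS:
            out.append(word)                        # high-frequency word: keep
        elif word in NOMENCLATURE:
            code, synonym = NOMENCLATURE[word]
            out.append(f"{code} ({synonym})")       # standard term: code + synonym
        else:
            out.append("*" * len(word))             # anything else: block
    return " ".join(out)

print(concept_match_scrub("Biopsy of the colon for Mr. Jones: adenocarcinoma."))
# Names such as "Jones" come out as asterisks; medical terms are preserved
# as codes with synonyms, so the sense of the sentence survives.
```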


Author(s):  
DAVE WIGHTMAN ◽  
TONY BENDELL

In an industrial reliability setting, a number of modeling techniques are available that allow the incorporation of explanatory variables; for example, Proportional Hazards Modeling, Proportional Intensity Modeling and Additive Hazards Modeling. However, in many applied settings it is unclear what the form of the underlying process is, and thus which of the above modeling structures is the most appropriate, if any. In this paper we discuss the different modeling formulations with regard to such features as their appropriateness, flexibility, robustness and ease of implementation, together with the authors' experience gained from applying the models to a wide selection of reliability data sets. In particular, a comparative study of the models when applied to a software reliability data set is provided.
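
As a generic example of the first of these formulations, the sketch below fits a proportional hazards model with explanatory covariates using the lifelines package; the data frame and column names are assumed placeholders, not the data sets discussed in the paper.

```python
# Generic proportional hazards fit with explanatory covariates.
import pandas as pd
from lifelines import CoxPHFitter

def fit_ph_model(df: pd.DataFrame) -> CoxPHFitter:
    # df columns: "time" (time to failure), "failed" (1 = failure observed),
    # plus covariates such as operating temperature or software release.
    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="failed")
    return cph

# Usage (placeholder data frame):
# cph = fit_ph_model(reliability_df); cph.print_summary()
```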

