generalization theory
Recently Published Documents


TOTAL DOCUMENTS

15
(FIVE YEARS 5)

H-INDEX

4
(FIVE YEARS 0)

2021 ◽  
pp. 129-145
Author(s):  
Nataliya KRAVCHUK ◽  
Oleh LUTSYSHYN

Introduction. Global transformation processes, and the globalization of the financial sphere in particular, intensified at the turn of the 20th and 21st centuries, creating new preconditions for the economization of international relations and expanding the classical diplomatic toolkit (including economic and financial instruments) available to individual actors in international relations. Orthodox approaches are giving way to the realization that financial diplomacy is the basis for forming the economic image of the state; an institution for promoting national interests beyond states and integration associations; a form of political influence and foreign economic policy; and a public-private partnership in international affairs. A great variety of interests and contradictions are intertwined in financial diplomacy. In this sense, its study requires going beyond the disciplinary divisions of the social sciences and formulating the principles of a generalization theory adequate to modern transformations in the system of international relations. The purpose of the article is to substantiate the introduction of the concept of "financial diplomacy" into scientific circulation and, following the logic of methodological monism, to reveal the interdisciplinary nature of financial diplomacy as a new, independent object of economic diplomacy that requires deepened conceptual foundations and expanded application in practice. Methods: scientific synthesis, interdisciplinary exchange, cross-sectional research, integrated thinking. Results. Financial diplomacy is found to be an insufficiently discussed concept in science. It is argued that financial diplomacy, as an interdisciplinary phenomenon, should be considered within interdisciplinary discourse at the conceptual and empirical levels, combining the pragmatism of politics, the rationality of economics, and the art of diplomacy. At the conceptual level, financial diplomacy is proposed as an important component of economic diplomacy and as an independent area of diplomatic practice formed under the influence of systemic determinants of global development. At the empirical level, financial diplomacy is a multidisciplinary institution of diplomacy. The paper singles out models, forms, and levels of financial diplomacy; supplements its universal methods with specific tools; and emphasizes the problems of forming a modern network structure of financial diplomacy at both the formal and informal levels. Conclusions. The objective necessity of separating financial diplomacy into a distinct branch of modern diplomatic activity is scientifically substantiated; its interdisciplinary nature is revealed; and a polymodal concept of research at the conceptual and empirical levels is proposed.


2020 ◽  
pp. 027836492095944
Author(s):  
Anirudha Majumdar ◽  
Alec Farid ◽  
Anoopkumar Sonar

Our goal is to learn control policies for robots that provably generalize well to novel environments given a dataset of example environments. The key technical idea behind our approach is to leverage tools from generalization theory in machine learning by exploiting a precise analogy (which we present in the form of a reduction) between generalization of control policies to novel environments and generalization of hypotheses in the supervised learning setting. In particular, we utilize the probably approximately correct (PAC)-Bayes framework, which allows us to obtain upper bounds that hold with high probability on the expected cost of (stochastic) control policies across novel environments. We propose policy learning algorithms that explicitly seek to minimize this upper bound. The corresponding optimization problem can be solved using convex optimization (relative entropy programming in particular) in the setting where we are optimizing over a finite policy space. In the more general setting of continuously parameterized policies (e.g., neural network policies), we minimize this upper bound using stochastic gradient descent. We present simulated results of our approach applied to learning (1) reactive obstacle avoidance policies and (2) neural network-based grasping policies. We also present hardware results for the Parrot Swing drone navigating through different obstacle environments. Our examples demonstrate the potential of our approach to provide strong generalization guarantees for robotic systems with continuous state and action spaces, complicated (e.g., nonlinear) dynamics, rich sensory inputs (e.g., depth images), and neural network-based policies.
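The bound-minimization idea can be sketched numerically. Below is a generic McAllester-style PAC-Bayes bound for costs in [0, 1]; this is an illustrative sketch, not the specific bound or reduction used in the paper, and the function name and numbers are hypothetical:

```python
import math

def pac_bayes_bound(emp_cost, kl, n, delta=0.01):
    """McAllester-style PAC-Bayes upper bound on the expected cost of a
    stochastic policy across novel environments (costs assumed in [0, 1]).

    emp_cost: average cost of the posterior policy on the n training environments
    kl:       KL divergence between the posterior and the prior over policies
    n:        number of training environments
    delta:    failure probability; the bound holds with probability >= 1 - delta
    """
    reg = math.sqrt((kl + math.log(2 * math.sqrt(n) / delta)) / (2 * n))
    return emp_cost + reg

# A posterior with low training cost but far from the prior can have a
# worse certificate than a slightly costlier posterior close to the prior.
loose = pac_bayes_bound(emp_cost=0.05, kl=50.0, n=1000)
tight = pac_bayes_bound(emp_cost=0.10, kl=2.0, n=1000)
```

Minimizing such a bound over the posterior trades empirical cost against the KL term, which is what makes the relative-entropy-programming formulation natural in the finite-policy-space setting.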


Author(s):  
Maria-Florina Balcan ◽  
Siddharth Prasad ◽  
Tuomas Sandholm

A two-part tariff is a pricing scheme that consists of an up-front lump sum fee and a per unit fee. Various products in the real world are sold via a menu, or list, of two-part tariffs---for example gym memberships, cell phone data plans, etc. We study learning high-revenue menus of two-part tariffs from buyer valuation data, in the setting where the mechanism designer has access to samples from the distribution over buyers' values rather than an explicit description thereof. Our algorithms have clear direct uses, and provide the missing piece for the recent generalization theory of two-part tariffs. We present a polynomial time algorithm for optimizing one two-part tariff. We also present an algorithm for optimizing a length-L menu of two-part tariffs with run time exponential in L but polynomial in all other problem parameters. We then generalize the problem to multiple markets. We prove how many samples suffice to guarantee that a two-part tariff scheme that is feasible on the samples is also feasible on a new problem instance with high probability. We then show that computing revenue-maximizing feasible prices is hard even for buyers with additive valuations. Then, for buyers with identical valuation distributions, we present a condition that is sufficient for the two-part tariff scheme from the unsegmented setting to be optimal for the market-segmented setting. Finally, we prove a generalization result that states how many samples suffice so that we can compute the unsegmented solution on the samples and still be guaranteed that we get a near-optimal solution for the market-segmented setting with high probability.
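The revenue objective can be made concrete. The sketch below (hypothetical helper names, and the assumption that a buyer's valuation is given by weakly decreasing marginal values per unit) computes the average revenue of a menu on sampled buyer valuations, which is the quantity the learning algorithms optimize:

```python
def buyer_choice(marginal_values, menu):
    """Utility-maximizing purchase for a single buyer.

    marginal_values: buyer's value for each successive unit (weakly decreasing)
    menu: list of (lump_sum, per_unit) two-part tariffs
    Returns (utility, revenue) at the buyer's best option; the outside
    option of buying nothing gives utility 0 and revenue 0.
    """
    best = (0.0, 0.0)  # (utility, revenue) of not buying at all
    for lump, per_unit in menu:
        value = 0.0
        for q, v in enumerate(marginal_values, start=1):
            value += v  # total value of the first q units
            utility = value - lump - per_unit * q
            if utility > best[0]:
                best = (utility, lump + per_unit * q)
    return best

def menu_revenue(buyers, menu):
    """Average revenue of a menu over a sample of buyer valuations."""
    return sum(buyer_choice(b, menu)[1] for b in buyers) / len(buyers)
```

Evaluating `menu_revenue` on samples is cheap; the hardness lies in searching over menus, which is why the length-L menu algorithm pays an exponential factor in L.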


2020 ◽  
Author(s):  
Saket Pande ◽  
Mehdi Moayeri

It is intuitive that the instability of a hydrological system representation, in the sense of how perturbations in input forcings translate into perturbations in the hydrologic response, may depend on its hydrological characteristics. Responses of unstable systems are thus complex to model. We interpret complexity in this context as a measure of instability in hydrological system representation, and quantify model complexity using algorithms from Pande et al. (2014). We use the Sacramento Soil Moisture Accounting model (SAC-SMA), parameterized for basins in the CAMELS data set (Addor et al., 2017), and quantify the complexities of the corresponding models. We then investigate relationships between hydrologic characteristics of CAMELS basins, such as location, precipitation seasonality index, slope, hydrologic ratios, saturated hydraulic conductivity, and NDVI, and the respective model complexities.

Recently, Pande and Moayeri (2018) introduced an index of basin complexity based on another, non-parametric, model of the least statistical complexity needed to reliably model the daily streamflow of a basin. This method essentially interprets complexity as the difficulty of predicting historically similar streamflow events. Daily streamflow is modeled using a k-nearest neighbor model of lagged streamflow. Such models are parameterized by the number of lags and the radius of the neighborhood used to identify similar streamflow events from the past. These parameters need to be selected for each prediction 'query' time step. We 1) use the Tukey half-space data depth function to identify time steps corresponding to 'difficult' queries and 2) then use Vapnik-Chervonenkis (VC) generalization theory, which trades off model performance against VC dimension (a measure of model complexity), to select the parameters of the k-nearest neighbor model of appropriate complexity for modelling difficult queries. The average of the selected model complexities for difficult queries is then related to the same hydrologic characteristics of the CAMELS basins as above.

We find that complexities estimated on the SAC-SMA model using the algorithm of Pande et al. (2014) are correlated with those estimated on the knn model using VC generalization theory. Further, the relationships between the two complexities and hydrologic characteristics are also similar. This indicates that interpreting complexity as a measure of instability in hydrological system representation is consistent with the interpretation provided by VC generalization theory, namely the difficulty of predicting historically similar streamflow events.

References:
Addor, N., Newman, A. J., Mizukami, N., and Clark, M. P. (2017) The CAMELS data set: catchment attributes and meteorology for large-sample studies, Hydrol. Earth Syst. Sci., 21, 5293-5313, https://doi.org/10.5194/hess-21-5293-2017.
Pande, S., Arkesteijn, L., Savenije, H. H. G., and Bastidas, L. A. (2014) Hydrological model parameter dimensionality is a weak measure of prediction uncertainty, Hydrol. Earth Syst. Sci. Discuss., 11, 2555-2582, https://doi.org/10.5194/hessd-11-2555-2014.
Pande, S., and Moayeri, M. (2018) Hydrological interpretation of a statistical measure of basin complexity, Water Resources Research, 54, https://doi.org/10.1029/2018WR022675.
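The lagged k-nearest-neighbor predictor described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the number of lags m and the neighborhood radius are the two complexity parameters, and the persistence fallback for an empty neighborhood is an assumption of the sketch:

```python
import math

def knn_streamflow_forecast(flows, m, radius, t):
    """Predict flows[t] from its m lagged values flows[t-m:t], averaging the
    next-day flow of all past lag vectors within `radius` of the query.

    flows:  list of daily streamflow values
    m:      number of lags (one complexity parameter)
    radius: neighborhood radius (the other complexity parameter)
    t:      index of the day being predicted (the 'query' time step)
    """
    query = flows[t - m:t]
    neighbors = []
    for s in range(m, t):  # only historically available days
        if math.dist(flows[s - m:s], query) <= radius:
            neighbors.append(flows[s])
    if not neighbors:  # empty neighborhood: fall back to persistence
        return flows[t - 1]
    return sum(neighbors) / len(neighbors)
```

Selecting (m, radius) per query by trading empirical fit against VC dimension is the step that yields a complexity estimate for each 'difficult' query.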


Nadwa ◽  
2019 ◽  
Vol 1 (1) ◽  
pp. 75
Author(s):  
Lian G. Otaya ◽  
Herson Anwar ◽  
Rahmin Talib Husain

The purpose of this study was to estimate the reliability of an instrument measuring Qur'an reading and writing ability among students in the practicum program of the Faculty of Tarbiyah and Teacher Training, IAIN Sultan Amai Gorontalo. The research is quantitative, using the variance-analysis approach of generalizability theory through a G-study with a fully crossed multifacet p x r x i design (three facets). The results confirm the instrument: the estimated reliability coefficient of the combined reading and writing scores of 20 students, rated by 4 raters on 10 items, is a generalizability coefficient of 0.82749. This value indicates that the true-score component of the assessment results is high relative to the minimum reliability criterion of 0.70, so the instrument meets the reliability criterion.
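For readers unfamiliar with G-theory, the generalizability coefficient for a fully crossed p x r x i (persons x raters x items) design relates the person variance to the rater- and item-related relative error variances. The sketch below uses hypothetical variance components (not those estimated in the study), with the study's design sizes of 4 raters and 10 items:

```python
def g_coefficient(var_p, var_pr, var_pi, var_pri, n_r, n_i):
    """Generalizability coefficient E-rho^2 for a crossed p x r x i design.

    var_p:   variance component for persons (students)
    var_pr:  person x rater interaction variance
    var_pi:  person x item interaction variance
    var_pri: person x rater x item interaction (plus residual) variance
    n_r, n_i: numbers of raters and items in the measurement design
    """
    rel_error = var_pr / n_r + var_pi / n_i + var_pri / (n_r * n_i)
    return var_p / (var_p + rel_error)

# Hypothetical variance components; 4 raters and 10 items as in the study.
g = g_coefficient(var_p=4.0, var_pr=1.0, var_pi=2.0, var_pri=4.0,
                  n_r=4, n_i=10)
```

Adding raters or items shrinks the relative error term, which is how a D-study would explore designs that push the coefficient above a target such as 0.70.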


2017 ◽  
Vol 2 (1) ◽  
pp. 80-90
Author(s):  
Helmi Noviar

This article discusses rational expectations theory from the perspective of its emergence and its contribution to economic thought, both as a science and in its application to the economy. The methodology is a literature review, drawn mainly from journal articles, examined from two viewpoints: that of the economists who initiated and supported the theory, and that of those who opposed it. The article also discusses the theory's application, particularly in future economic studies. As part of New Classical economics, the theory pioneered by Robert E. Lucas, Jr. together with Thomas J. Sargent has made a substantial contribution through empirical and multifaceted approaches; the compilation of studies by numerous macroeconomists and econometricians in Rational Expectations and Econometric Practice (1981) proved quite influential in both academic and applied economics. In other words, the rational expectations revolution is one of the important components of modern economic theory, in both New Classical and New Keynesian economics.


2014 ◽  
Vol 11 (3) ◽  
pp. 2555-2582 ◽  
Author(s):  
S. Pande ◽  
L. Arkesteijn ◽  
H. H. G. Savenije ◽  
L. A. Bastidas

Abstract. This paper presents evidence that model prediction uncertainty does not necessarily rise with parameter dimensionality (the number of parameters). Here by prediction we mean future simulation of a variable of interest conditioned on certain future values of input variables. We utilize a relationship between prediction uncertainty, sample size and model complexity based on Vapnik–Chervonenkis (VC) generalization theory. It suggests that models with higher complexity tend to have higher prediction uncertainty for limited sample size. However, model complexity is not necessarily related to the number of parameters. Here by limited sample size we mean a sample size that is limited in representing the dynamics of the underlying processes. Based on VC theory, we demonstrate that model complexity crucially depends on the magnitude of model parameters. We do this by using two model structures, SAC-SMA and its simplification, SIXPAR, and 5 MOPEX basin data sets across the United States. We conclude that parsimonious model selection based on parameter dimensionality may lead to a less informed model choice.
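The trade-off between complexity and prediction uncertainty for limited samples can be illustrated with one classical additive form of the VC bound for bounded losses. This is an illustrative choice, not the specific relationship used in the paper:

```python
import math

def vc_bound(emp_risk, vc_dim, n, delta=0.05):
    """Classical VC generalization bound (bounded loss, additive form):
    with probability >= 1 - delta, the expected risk is at most
    emp_risk + sqrt((vc_dim * (log(2n/vc_dim) + 1) + log(4/delta)) / n).
    """
    conf = math.sqrt((vc_dim * (math.log(2 * n / vc_dim) + 1)
                      + math.log(4 / delta)) / n)
    return emp_risk + conf

# For a fixed, limited sample, a more complex model class (larger VC
# dimension) carries a larger confidence term even at equal empirical fit.
simple = vc_bound(emp_risk=0.2, vc_dim=5, n=500)
complex_ = vc_bound(emp_risk=0.2, vc_dim=50, n=500)
```

The paper's point is that the relevant complexity here is not the parameter count: two models with the same number of parameters can occupy function classes of very different effective complexity, depending on the magnitudes of the parameters.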

