The Guidance of an Enterprise Economy

Author(s):  
Martin Shubik ◽  
Eric Smith

This final chapter splits naturally into two parts. The first presents an overview of a theory of money and financial institutions. It covers the abstract, pre-institutional structure of the price system as an allocation device in a tightly defined setting where neither money nor financial institutions need to be specified to illustrate the efficient equilibrium condition. It stresses that, by merely following well-defined precepts of basic physics and game theory, process models can be completed at the same level of abstraction, imposing a discipline on the models in which items such as money, default laws, grid size, loose coupling, specification of clearance rules, and time lags are all logical necessities. These steps convert a static pre-institutional set of models into their natural minimal-institution monetary and institutional counterparts. Applying them still requires the addition of ad hoc physical facts and a relevant understanding of behavior. Our broader goal is a general theory of organization, of which this work is only a narrow part. The second part of the chapter concludes with our Nostradamus section, where we conjecture about the future. This includes the need for supranational organizations, especially for weapons control. We also suggest that the fundamental problem of controlling dangerous fluctuations in an enterprise economy is predominantly a politico-bureaucratic one, calling for stress on the design of flexible coordinating devices within the politico-economic system. A sketch of such a device is presented. In a free society the stress should be less on control and forecasting than on guidance and flexibility.

2016 ◽  
Vol 2 (2) ◽  
Author(s):  
Amit Singh ◽  
Nitin Mishra ◽  
Angad Singh

A wireless mobile ad hoc network (MANET) consists of a collection of mobile nodes that temporarily form a dynamic, infrastructure-less network. To enable communication between nodes that do not have direct radio contact, every node must operate as a wireless router and potentially forward data traffic on behalf of other nodes. Localization is a fundamental problem in MANETs. Current localization algorithms mainly focus on checking the localizability of a network and/or on localizing as many nodes as possible; accurate position information serves a growing range of applications, providing information about coverage, deployment, routing, location-based services, target tracking, and rescue. When mobility among the nodes is high, paths break and location information can no longer be predicted. Here we propose a localization-based algorithm that identifies the localized and non-localized nodes in a network. The proposed approach uses the DREAM and AODV protocols to determine the localizability of a node: DREAM is a location protocol that helps find the position of a node in the network, whereas AODV is a routing protocol that discovers routes on demand rather than maintaining routes from every node to every other node. A node identification algorithm is used to locate the mobile nodes, so that localized and non-localized nodes can easily be distinguished with respect to radio range. This method improves module performance, minimizes location error, and achieves improved performance in terms of UDP packet loss, received and transmitted packets, throughput, routing overhead, and packet delivery fraction. All simulations were carried out in NS-2 and tested on a mobile ad hoc network.
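The abstract does not spell out the node identification algorithm itself; the following is a minimal sketch, under assumed rules, of how localized and non-localized nodes might be separated by radio range: a node counts as localizable once at least three already-localized references are within range, the usual requirement for 2-D trilateration. All names and the 250 m range are illustrative, not taken from the paper.

```python
import math

# Minimal sketch (not the paper's algorithm): a node is treated as
# localizable once at least three already-localized references lie
# within its radio range -- the usual requirement for 2-D trilateration.
# The 250 m range is an assumed value (the NS-2 default for 802.11).

RADIO_RANGE = 250.0  # metres

def in_range(a, b, radio_range=RADIO_RANGE):
    """True if positions a and b (x, y tuples) are within radio range."""
    return math.dist(a, b) <= radio_range

def classify(nodes, anchors):
    """Split node ids into (localized, non_localized) sets.

    nodes   -- dict: node id -> (x, y) position
    anchors -- ids whose positions are known a priori (e.g. via GPS)
    """
    localized = set(anchors)
    changed = True
    while changed:  # newly localized nodes can in turn localize others
        changed = False
        for nid, pos in nodes.items():
            if nid in localized:
                continue
            refs = sum(1 for a in localized if in_range(nodes[a], pos))
            if refs >= 3:  # enough references to trilaterate
                localized.add(nid)
                changed = True
    return localized, set(nodes) - localized

# example: nodes 0-2 are anchors; node 3 hears all three, node 4 none
nodes = {0: (0, 0), 1: (200, 0), 2: (0, 200), 3: (100, 100), 4: (900, 900)}
print(classify(nodes, anchors={0, 1, 2}))  # ({0, 1, 2, 3}, {4})
```

Iterating until no new node qualifies mimics the incremental spread of location information through the network as freshly localized nodes become references for their neighbours.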


Author(s):  
Richard S Collier

This book seeks to explain why and how banks ‘game the system’. More specifically, its objective is to account for why banks are so often involved in cases of misconduct and why those cases often involve the exploitation of tax systems. To do this, a case study is presented in Part I of the book. This case study concerns a highly complex transaction (often referred to as ‘cum-ex’) designed to exploit a flaw at the intersection of the tax system and the financial markets settlements system. It was entered into by a very large number of banks and other financial institutions. A number of factors make the cum-ex transaction remarkable, including the sheer scale of the financial amounts involved, the large number of banks and financial institutions involved, the comprehensive failure of the controls infrastructure in this highly regulated sector, and the fact that authorities across Europe have found it so difficult to deal with the transaction. Part II of the book draws out the wider significance of cum-ex and what it tells us about modern banks and their interactions with tax systems. The account demonstrates why the exploitation of tax systems by banks is practically inevitable due to a variety of systemic features of the financial markets and of tax systems themselves. A number of possible responses to the current position are suggested in the final chapter.


1999 ◽  
Vol 48 (4) ◽  
pp. 889-900 ◽  
Author(s):  
Stephen M. Schwebel

When the Statute of the Permanent Court of International Justice was drafted by an Advisory Committee of Jurists in 1920, a paramount question was, should a judge of the nationality of a State party to the case sit? The sensitivity of the issue was encapsulated by a report of a committee of the Court in 1927 on the occasion of a revision of the Rules of Court. It observed that: “In the attempt to establish international courts of justice, the fundamental problem always has been, and probably always will be, that of the representation of the litigants in the constitution of the tribunal. Of all influences to which men are subject, none is more powerful, more pervasive, or more subtle, than the tie of allegiance that binds them to the land of their homes and kindred and to the great sources of the honours and preferments for which they are so ready to spend their fortunes and to risk their lives. This fact, known to all the world, the [Court's] Statute frankly recognises and deals with.”1


2021 ◽  
Vol 5 (1) ◽  
pp. 4-21

Received 30 January 2021. Accepted for publication 20 March 2021.

The Convention on the Prohibition of the Development, Production and Stockpiling of Bacteriological (Biological) and Toxin Weapons and on Their Destruction (BTWC) does not have a legally binding verification regime. An attempt by the Ad Hoc Group of Experts, created by the UN Committee on Disarmament, to strengthen the BTWC by developing a legally binding document, the Protocol, was blocked by the United States in July 2001. The purpose of this work is to study the history, main provisions, and significance of the Protocol to the BTWC, and the reasons it was not signed. Attention is paid to the events in biological weapons control that led a number of countries to recognize the need to develop the Protocol. The background of the US actions to block this document is the subject of special consideration. During the Second Review Conference on the Implementation of the Convention (8–25 September 1986, Geneva), the USSR, the German Democratic Republic, and the Hungarian People's Republic proposed to develop and adopt the Protocol as an addition to the BTWC. This document was to establish general provisions, definitions of terms, lists of agents and toxins, lists of equipment present or used at production facilities, and threshold quantities of biological agents for assessing means and methods of protection. The proposed verification mechanism rested on three "pillars": initial declarations giving basic information about the capabilities of each State Party; inspections to assess the reliability of those declarations; and investigations to verify and confirm or refute alleged non-compliance with the Convention. The verification regime was to be under the control of an international organization, the Organization for the Prohibition of Bacteriological (Biological) and Toxin Weapons. However, the US military and pharmaceutical companies opposed the idea of international inspections. The then US Undersecretary of State for Arms Control and International Security, John Robert Bolton II, played a special role in blocking the Protocol. During the Fifth Review Conference in December 2001, he demanded the termination of the Ad Hoc Group of Experts' negotiating mandate on the pretext that any international agreement would constrain US actions. The current situation with biological weapons control should not be left to chance. Measures to strengthen the BTWC should be developed that take into account fundamental changes in dual-use biotechnology. It should be borne in mind that the Protocol, developed in the 1990s, is now outdated.


2012 ◽  
Vol 2 (1) ◽  
pp. 7 ◽  
Author(s):  
Andrzej Kijko

This work focuses on the Bayesian procedure for estimating the regional maximum possible earthquake magnitude m_max. The paper briefly discusses the currently used Bayesian procedure for m_max, as developed by Cornell, and suggests a statistically justifiable alternative approach. The fundamental problem in applying the current Bayesian formalism to m_max estimation is that one of the components of the posterior distribution is the sample likelihood function, for which the range of observations (earthquake magnitudes) depends on the unknown parameter m_max. This dependence violates the regularity property of the maximum likelihood function. The resulting likelihood function therefore reaches its maximum at the maximum observed earthquake magnitude m_max^obs and not at the required maximum possible magnitude m_max. Since the sample likelihood function is a key component of the posterior distribution, the posterior estimate of m_max is biased. The degree of the bias and its sign depend on the applied Bayesian estimator, the quantity of information provided by the prior distribution, and the sample likelihood function. It is shown that if the maximum posterior estimate is used, the bias is negative and the resulting underestimation of m_max can be as large as 0.5 magnitude units. This study explores only the maximum posterior estimate of m_max, which is conceptually close to classic maximum likelihood estimation. However, the conclusions regarding the shortfall of the current Bayesian procedure apply to all Bayesian estimators, e.g. the posterior mean and posterior median. A simple, ad hoc solution to this non-regular maximum likelihood problem is also presented.
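To make the non-regularity concrete, here is a minimal numerical sketch (not the paper's procedure): magnitudes are drawn from a doubly truncated Gutenberg-Richter (exponential) law with an assumed beta, and the log-likelihood is scanned over trial values of m_max. The likelihood peaks at the largest observed magnitude rather than at the true m_max.

```python
import numpy as np

# Sketch: for magnitudes following a doubly truncated exponential
# (Gutenberg-Richter) law on [m_min, m_max], the likelihood, viewed as
# a function of m_max, peaks at the largest observed magnitude -- the
# non-regularity discussed above. Synthetic catalogue, assumed beta.

rng = np.random.default_rng(0)
m_min, m_max_true, beta = 4.0, 8.0, 2.3      # beta = b * ln(10), b ~ 1

u = rng.uniform(size=200)                    # inverse-CDF sampling
span = 1.0 - np.exp(-beta * (m_max_true - m_min))
mags = m_min - np.log(1.0 - u * span) / beta

def log_likelihood(m_max):
    """Log-likelihood of the truncated exponential at a trial m_max."""
    if m_max < mags.max():
        return -np.inf                       # data above m_max impossible
    norm = 1.0 - np.exp(-beta * (m_max - m_min))
    # pdf: beta * exp(-beta * (m - m_min)) / norm  on [m_min, m_max]
    return np.sum(np.log(beta) - beta * (mags - m_min) - np.log(norm))

grid = np.linspace(mags.max(), m_max_true + 1.0, 500)
ll = [log_likelihood(m) for m in grid]
print("max observed magnitude    :", round(mags.max(), 3))
print("likelihood-maximizing m_max:", round(grid[int(np.argmax(ll))], 3))
# the maximizer coincides with the largest observed magnitude, not the
# true m_max = 8.0 -- the bias that the Bayesian posterior inherits
```

The normalizing constant shrinks as the trial m_max falls toward the largest observation, so the likelihood rises monotonically there; any posterior built on this likelihood is pulled toward m_max^obs, which is the bias quantified in the paper.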


2006 ◽  
Vol 54 (6-7) ◽  
pp. 213-221 ◽  
Author(s):  
E. Lindblom ◽  
K.V. Gernaey ◽  
M. Henze ◽  
P.S. Mikkelsen

This paper presents a dynamic mathematical model that describes the fate and transport of two selected xenobiotic organic compounds (XOCs) in a simplified representation of an integrated urban wastewater system. A simulation study, where the xenobiotics bisphenol A and pyrene are used as reference compounds, is carried out. Sorption and specific biological degradation processes are integrated with standardised water process models to model the fate of both compounds. Simulated mass flows of the two compounds during one dry weather day and one wet weather day are compared for realistic influent flow rate and concentration profiles. The wet weather day induces resuspension of stored sediments, which increases the pollutant load on the downstream system. The potential of the model to elucidate important phenomena related to origin and fate of the model compounds is demonstrated.
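As a rough illustration of the kind of fate process such a model integrates, the sketch below couples linear sorption partitioning with first-order biodegradation of a xenobiotic in a single completely mixed tank; all parameter values are illustrative placeholders, not the calibrated values from the paper.

```python
# Minimal sketch of a single XOC fate process: linear sorption
# partitioning plus first-order biodegradation in a completely mixed
# tank. Parameter values are illustrative, not the paper's.

V     = 1000.0   # tank volume, m3
Q     = 100.0    # flow, m3/h
X_tss = 0.2      # suspended solids, kg/m3
K_d   = 0.5      # sorption partition coefficient, m3/kg (assumed)
k_bio = 0.05     # first-order biodegradation rate, 1/h (assumed)

def step(c_total, c_in, dt=0.1):
    """Advance total XOC concentration (g/m3) by one Euler step.

    Dissolved and sorbed fractions are assumed at local equilibrium;
    only the dissolved fraction is biodegraded.
    """
    f_diss = 1.0 / (1.0 + K_d * X_tss)          # dissolved fraction
    dcdt = Q / V * (c_in - c_total) - k_bio * f_diss * c_total
    return c_total + dt * dcdt

# dry-weather day: constant influent; the tank relaxes toward steady state
c = 0.0
for _ in range(240):                            # 24 h at dt = 0.1 h
    c = step(c, c_in=1.0)
print(f"total concentration after 24 h ~ {c:.3f} g/m3")
```

A wet-weather run would add a resuspension term releasing the sorbed, stored pollutant back into the water column, which is how the model produces the increased downstream load described above.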


1959 ◽  
Vol 14 (1) ◽  
pp. 55-59 ◽  
Author(s):  
Freeman W. Cope

When isolated segments of human descending thoracic aorta were caused to change their volume rapidly and continuously in sinusoidal fashion with pulse pressures and pulse rates maintained in the physiological range, the resulting pressure-volume curves showed slight but consistent increases in stiffness, compared to pressure-volume curves obtained on the same specimens when inflated stepwise. There was introduced into the pressure measuring system a time lag of sufficient magnitude to eliminate the hysteresis loop. The extent of hysteresis in the aorta was not determined because time lags in the aorta could not be distinguished from time lags in the measuring equipment. Submitted on September 10, 1958


2020 ◽  
Vol 10 (5) ◽  
pp. 1719
Author(s):  
Yang Zhou ◽  
Yan Shi ◽  
Shanzhi Chen

Most mobile ad hoc wireless networks have social features, and understanding the performance of social-aware mobile ad hoc wireless networks is a fundamental problem. In this paper, we consider a wireless network area with a restricted mobility model and a rank-based social model. On this basis, we investigate the upper bound of throughput capacity in such networks under the protocol interference model. By tessellating the network area into cells spatially and dividing time into slots temporally, we propose a multi-hop relay and slot-allocation scheduling strategy, and we derive the throughput capacity achieved under this strategy. Results show that per-node throughput depends on the social-model parameter and the range of node motion. In addition, we study how the delay varies in such networks using queueing theory, and we discuss capacity-delay tradeoffs. These results are beneficial to the design of network protocols in large social-aware mobile ad hoc wireless networks.
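The abstract does not detail the scheduling strategy; as a rough illustration of cell tessellation with spatial slot reuse under a protocol interference model, the sketch below uses a standard k^2-TDMA scheme in which cells whose indices agree modulo k share a slot (not necessarily the paper's exact construction).

```python
# Rough illustration of the tessellate-and-schedule idea: divide the
# network area into s x s cells and let cells whose (row, col) indices
# agree modulo k transmit in the same slot, so simultaneous senders are
# at least (k - 1) cell-widths apart -- a standard k^2-TDMA reuse scheme
# under the protocol interference model (not necessarily the paper's
# exact construction).

def slot_of(row, col, k=3):
    """TDMA slot (0 .. k*k - 1) assigned to cell (row, col)."""
    return (row % k) * k + (col % k)

def cells_active_in_slot(slot, s=9, k=3):
    """All cells of an s x s tessellation that transmit in `slot`."""
    return [(r, c) for r in range(s) for c in range(s)
            if slot_of(r, c, k) == slot]

# every cell gets one slot per frame of k*k slots; within a slot,
# co-active cells are spaced k cells apart in each direction
frame = {t: cells_active_in_slot(t) for t in range(9)}
print(frame[0][:5])   # [(0, 0), (0, 3), (0, 6), (3, 0), (3, 3)]
```

Each cell transmits once per frame of k*k slots, so the spatial reuse factor (and hence the per-node throughput scaling) is governed by how far apart interfering senders must be kept.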


Industries such as textile, food processing, chemical, and water treatment plants are part of our global development, and the efficiency of the processes they use is always a matter of great importance. Efficiency can be greatly improved by obtaining an exact model of the process. This paper studies the two main classes of model development: first-principles models and empirical models. A first-principles model can be obtained from an understanding of the basic physics of the system. Empirical models, on the other hand, require only input-output data and can thus factor in process non-linearity, disturbances, and unexpected errors. This paper uses the System Identification Toolbox in MATLAB for empirical model development. Models are developed for a single-tank system, a classic SISO problem, and for a two-tank interacting system. Both systems are studied at three operating points, each from a local linear region. The obtained models are validated against the real-time setup; they are satisfactorily close to the real-time process and hence deemed fit for use in control algorithms and other process manipulations.
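The paper performs this step with MATLAB's System Identification Toolbox; as a language-neutral illustration of the same empirical-modelling idea, the sketch below fits a first-order ARX model y[k+1] = a*y[k] + b*u[k] to input-output data by least squares, with synthetic data standing in for the tank measurements.

```python
import numpy as np

# Minimal empirical-model sketch: fit a first-order ARX model
#   y[k+1] = a * y[k] + b * u[k]
# to input-output data by least squares. Synthetic data stand in for
# the tank-level measurements; the paper itself uses MATLAB's System
# Identification Toolbox.

rng = np.random.default_rng(1)
a_true, b_true = 0.95, 0.08            # assumed plant near one operating point
u = rng.uniform(0.0, 1.0, size=500)    # inflow valve signal
y = np.zeros(501)
for k in range(500):
    y[k + 1] = a_true * y[k] + b_true * u[k] + rng.normal(0, 0.002)

# stack regressors [y[k], u[k]] and solve for theta = [a, b]
Phi = np.column_stack([y[:-1], u])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta
print(f"estimated a = {a_hat:.3f}, b = {b_hat:.3f}")  # close to 0.95, 0.08
```

Repeating the fit around each operating point yields the set of local linear models described above, each valid within its own linear region.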


2020 ◽  
Vol 16 (1) ◽  
Author(s):  
Madapuri Rudra Kumar ◽  
Vinit Kumar Gunjan

Introduction: Increases in computing power and the deeper use of robust computing systems in the financial system are propelling business growth, improving the operational efficiency of financial institutions, and increasing the effectiveness of the transaction-processing solutions used by organizations.
Problem: Although financial institutions rely on credit-scoring patterns to analyze the creditworthiness of their clients, many factors of the credit-score evaluation process remain open to improvement.
Objective: Machine learning offers immense potential in the fintech space for determining personal credit scores. By applying deep-learning and machine-learning techniques, organizations can reach individuals who are not serviced by traditional financial institutions.
Methodology: A major insight is that traditional banking-intelligence solutions are predominantly programmed models aligned with the information and banking systems banks already use, whereas machine-learning models relying on algorithmic systems require more integral, intrinsic computation.
Results: Test analysis of the proposed machine-learning model indicates a more effective and enhanced analysis process than non-machine-learning solutions, and the model's use of various classifiers points to ways in which the solution can be significant.
Conclusion: If such systems are developed along more pragmatic lines of analysis, they can improve customer-profile analysis, with process models built for comprehensive analysis that make a sustainable solution for credit-system management.
Originality: The proposed solution is conceptualized to improve credit-scoring patterns.
Limitations: The model is tested in isolation and not in comparison with any existing credit-scoring pattern.
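The abstract does not name the specific classifiers it tests; the sketch below shows the kind of comparison described, using scikit-learn on synthetic data standing in for applicant features (everything here is illustrative, not the paper's dataset or models).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Sketch of the classifier comparison the abstract describes, on
# synthetic data standing in for applicant features (income, repayment
# history, utilization, ...). Illustrative only -- not the paper's data.

X, y = make_classification(n_samples=2000, n_features=10,
                           n_informative=6, weights=[0.8, 0.2],
                           random_state=0)   # y: 0 = repaid, 1 = default
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=0)

for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(n_estimators=200, random_state=0)):
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{type(model).__name__}: test AUC = {auc:.3f}")
```

Comparing classifiers on a held-out set with a ranking metric such as AUC is the standard way to judge whether a learned scoring model improves on a fixed, rule-programmed one.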

