A Hierarchy Model of Income Distribution

2018 ◽  
Author(s):  
Blair Fix

Based on worldly experience, most people would agree that firms are hierarchically organized and that pay tends to increase as one moves up the hierarchy. But how this hierarchical structure affects income distribution has not been widely studied. To remedy this situation, this paper presents a new model of income distribution that explores the effects of social hierarchy. This 'hierarchy model' takes the limited available evidence on the structure of firm hierarchies and generalizes it to create a large-scale simulation of the hierarchical structure of the United States economy. Using this model, I conduct the first quantitative investigation of hierarchy's effect on income distribution. I find that hierarchy plays a dominant role in shaping the tail of the US income distribution. The model suggests that hierarchy is responsible for generating the power-law scaling of top incomes. Moreover, I find that hierarchy can be used to unify the study of personal and functional income distribution, as well as to understand historical trends in income inequality.

2018 ◽  
Author(s):  
Blair Fix

What explains the power-law distribution of top incomes? This paper tests the hypothesis that it is firm hierarchy that creates the power-law income distribution tail. Using the available case-study evidence on firm hierarchy, I create the first large-scale simulation of the hierarchical structure of the US private sector. Although not tuned to do so, this model reproduces the power-law scaling of top US incomes. I show that this is purely an effect of firm hierarchy. This raises the possibility that the ubiquity of power-law income distribution tails is due to the ubiquity of hierarchical organization in human societies.
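The mechanism the abstract describes can be illustrated with a toy sketch: firms with a fixed span of control, where each rank is smaller but better paid than the one below it. The span of control, pay ratio, and firm-size distribution below are illustrative stand-ins, not the paper's empirical estimates.

```python
import random

def firm_incomes(size, base_pay, span=4, pay_ratio=1.5):
    """Incomes within one firm: each rank is roughly `span` times
    smaller than the rank below it and paid `pay_ratio` times more."""
    incomes, rank = [], 0
    while size >= 1:
        n_here = max(size - size // span, 1)   # employees at this rank
        incomes.extend([base_pay * pay_ratio ** rank] * n_here)
        size //= span
        rank += 1
    return incomes

def economy_incomes(n_firms=3000, seed=42):
    """Aggregate incomes across many firms; lognormal sizes stand in
    for the heavy-tailed empirical firm-size distribution."""
    rng = random.Random(seed)
    incomes = []
    for _ in range(n_firms):
        size = int(rng.lognormvariate(2.0, 1.6)) + 1
        incomes.extend(firm_incomes(size, rng.lognormvariate(0.0, 0.5)))
    return sorted(incomes, reverse=True)
```

Even though each firm's pay schedule is deterministic, aggregating over a heavy-tailed firm-size distribution concentrates very high pay in the few deep hierarchies, which is the effect the paper attributes the power-law tail to.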



1982 ◽  
Vol 12 (2) ◽  
pp. 227-239 ◽  
Author(s):  
Carol Nackenoff

During the past fifteen years, several economists, historians and sociologists have propounded a sectoral model of economic growth and change in the United States. According to this analysis, as large-scale, monopolistic enterprises began to emerge in the late nineteenth century, different investment considerations and labour market requirements were also evolving. A dual economy was beginning to form. The large-scale capital sector and the small-scale capital sector each had its own economic environment of conduct. Each sector also tended to develop its own corresponding labour market, with monopoly sector or ‘core’ firms holding out certain economic advantages for employees: money, job security, benefits, and opportunities for advancement within the firm. Thus, the work experience in these two sectors increasingly diverged. Even though the large-scale capital sector did offer economic advantages, its growth tended to be capital-intensive, and the growth of employment in this sector slowed and then stopped by the end of the Second World War. Employment shifted to trades and services, with lower wage rates, and, of course, to the public sector, which currently employs nearly a third of the American workforce.


2017 ◽  
Vol 9 (3) ◽  
pp. 36-71 ◽  
Author(s):  
Shuhei Aoki ◽  
Makoto Nirei

We construct a tractable neoclassical growth model that generates Pareto's law of income distribution and Zipf's law of the firm size distribution from idiosyncratic, firm-level productivity shocks. Executives and entrepreneurs invest in risk-free assets, as well as their own firms' risky stocks, through which their wealth and income depend on firm-level shocks. By using the model, we evaluate how changes in tax rates can account for the evolution of top incomes in the United States. The model matches the decline in the Pareto exponent of the income distribution and the trend of the top 1 percent income share in recent decades. (JEL D31, H24, L11)
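This is not the paper's full general-equilibrium model, but its core statistical mechanism — idiosyncratic multiplicative shocks to wealth with a reflecting lower barrier — can be sketched directly. All parameter values below are illustrative, not calibrated to the paper; with lognormal shocks the stationary Pareto exponent solves E[growth^α] = 1, which for the values below gives α = -2μ/σ² = 1.

```python
import random

def simulate_wealth(n=5000, periods=400, mu=-0.02, sigma=0.2,
                    w_min=1.0, seed=0):
    """Toy Kesten-style wealth process: each agent's wealth is hit by
    an idiosyncratic multiplicative (lognormal) shock each period and
    is reflected at the lower barrier w_min. The stationary
    distribution has a Pareto tail with exponent -2*mu/sigma**2."""
    rng = random.Random(seed)
    w = [w_min] * n
    for _ in range(periods):
        w = [max(x * rng.lognormvariate(mu, sigma), w_min) for x in w]
    return w
```

The negative drift keeps most agents near the barrier while a lucky few compound upward, which is what produces the heavy upper tail that the paper ties to executive and entrepreneurial stock holdings.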


2004 ◽  
Vol 126 (2) ◽  
pp. 217-224 ◽  
Author(s):  
Masataka Yoshimura ◽  
Kazuhiro Izui

A large-scale machine system often has a general hierarchical structure. For hierarchical structures, optimization is difficult because many local optima almost always arise; however, genetic algorithms that have a hierarchical genotype can be applied to treat such problems directly. Relations between the structural components are analyzed, and this information is used to partition the hierarchical structure. Partitioning large-scale problems into sub-problems that can be solved using parallel-processed GAs increases the efficiency of the optimization search. The optimization of large-scale systems then becomes possible through information sharing of Pareto-optimal solutions among the sub-problems.
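The partitioning idea can be illustrated with a toy single-objective sketch (the paper handles multiobjective Pareto sets): a separable problem is split into blocks, each block is solved by its own small GA, and the block solutions are assembled. The block structure, fitness function, and GA settings below are hypothetical stand-ins, not the authors' formulation.

```python
import random

def sub_ga(fitness, length, pop=30, gens=60, rng=None):
    """Minimal elitist GA over bitstrings for one sub-problem."""
    rng = rng or random.Random()
    popn = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        survivors = popn[: pop // 2]           # elitism: keep the top half
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, length)
            child = a[:cut] + b[cut:]          # one-point crossover
            child[rng.randrange(length)] ^= 1  # point mutation
            children.append(child)
        popn = survivors + children
    return max(popn, key=fitness)

def hierarchical_optimize(blocks, rng_seed=0):
    """Partition a separable problem into sub-problems, solve each
    with its own GA, and assemble the combined solution."""
    rng = random.Random(rng_seed)
    solution = []
    for target in blocks:                      # each block: a target bitstring
        fit = lambda s, t=target: sum(a == b for a, b in zip(s, t))
        solution.extend(sub_ga(fit, len(target), rng=rng))
    return solution
```

Because each sub-problem's search space is exponentially smaller than the combined space, the partitioned runs can proceed independently (and in parallel), which is the efficiency gain the abstract refers to.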


2021 ◽  
Author(s):  
İpek Tekin ◽  
Başak Gül Akar

In the neoliberal era, the financialization of economies has been associated both with large-scale speculative movements in the financial sector and with over-indebtedness. The significant increases in household indebtedness in the United States before the 2008/09 global financial crisis made growing indebtedness an outstanding issue to be examined in terms of its supply-side and demand-side causes and its distributive consequences. Increasing inequality in income distribution has been an important consideration associated with the increase in household indebtedness. In a sense, borrowing opportunities enable working households to maintain their consumption and living standards in the short term despite stagnant wages and thus increasing inequality, but they do not prevent those households from taking on an unsustainable debt burden. This debt burden creates a feedback effect by deepening the existing inequality. The purpose of this study is to reveal the macro and micro dynamics associated with the neoliberal policies that create this supposed relationship between inequality and household indebtedness, and to interpret the increasing household indebtedness and income inequality in Turkey in the 2000s within this framework.


2019 ◽  
Vol 12 (S10) ◽  
Author(s):  
Junning Gao ◽  
Lizhi Liu ◽  
Shuwei Yao ◽  
Xiaodi Huang ◽  
Hiroshi Mamitsuka ◽  
...  

Abstract Background As a standardized vocabulary of phenotypic abnormalities associated with human diseases, the Human Phenotype Ontology (HPO) has been widely used by researchers to annotate the phenotypes of genes/proteins. To save the cost and time spent on experiments, many computational approaches have been proposed. They are able to alleviate the problem to some extent, but their performance is still far from satisfactory. Method For inferring large-scale protein-phenotype associations, we propose HPOAnnotator, which incorporates multiple Protein-Protein Interaction (PPI) networks and the hierarchical structure of HPO. Specifically, we use a dual graph to regularize Non-negative Matrix Factorization (NMF) in a way that the information from different sources can be seamlessly integrated. In essence, HPOAnnotator solves the sparsity problem of a protein-phenotype association matrix by using a low-rank approximation. Results By combining the hierarchical structure of HPO and the co-annotations of proteins, our model can well capture the HPO semantic similarities. Moreover, graph Laplacian regularizations are imposed in the latent space so as to utilize multiple PPI networks. The performance of HPOAnnotator has been validated under cross-validation and an independent test. Experimental results have shown that HPOAnnotator outperforms the competing methods significantly. Conclusions Through extensive comparisons with state-of-the-art methods, we conclude that the proposed HPOAnnotator achieves superior performance as a result of using a low-rank approximation with graph regularization. Our approach is promising in that it can serve as a starting point for studying more efficient matrix factorization-based algorithms.
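The core of such an approach — NMF with a graph-Laplacian penalty tying one factor to a network — can be sketched in a few lines. The update rules below follow the standard multiplicative-update form for graph-regularized NMF, not HPOAnnotator's exact algorithm (which integrates multiple PPI networks and the HPO hierarchy); all dimensions and parameters are illustrative.

```python
import numpy as np

def gnmf(X, A, k=5, lam=0.1, iters=100, eps=1e-9, seed=0):
    """Graph-regularized NMF via multiplicative updates:
    minimize ||X - U @ V.T||_F**2 + lam * trace(V.T @ L @ V),
    where L = D - A is the Laplacian of the adjacency matrix A.
    Factors stay non-negative by construction."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.random((m, k))
    V = rng.random((n, k))
    D = np.diag(A.sum(axis=1))
    for _ in range(iters):
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U + lam * (A @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
    return U, V

def objective(X, A, U, V, lam=0.1):
    """Reconstruction error plus the graph-smoothness penalty."""
    L = np.diag(A.sum(axis=1)) - A
    return np.linalg.norm(X - U @ V.T) ** 2 + lam * np.trace(V.T @ L @ V)
```

The low-rank product `U @ V.T` fills in the sparse association matrix, while the Laplacian term pulls network-adjacent rows of `V` toward similar latent representations — the "guilt by association" effect the abstract relies on.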


2021 ◽  
Vol 58 (2) ◽  
pp. 4903-4909
Author(s):  
Dhiraj Vij ◽  
Jyotika Teckchandani

India and the international community at large are set to witness transformations in the global order in the post-Covid-19 era, and this provides large-scale opportunities for the nation of 1.3 billion to play a dominant role on the international geopolitical map. In the pre-Covid-19 period, the world witnessed the dominance of the United States of America and of the newly emerging superpower China during the first two decades of the 21st century. The international community was always sceptical of China's economic growth story and military might, and worried about its muscular and aggressive foreign policy within the Asian continent and beyond. Business will no longer be equivalent to the pre-Covid days, and other nations are looking for self-reliance, or for sources other than China to meet their indispensable needs. Therefore, in meeting these global expectations in the post-Covid era, India can be a global leader, whether in the health, economic, technological or pharmaceutical sectors. India, the largest democracy in the world, enjoys international support and cooperation and, with US support, can be a leading player in the international order, being better placed than any other nation in fighting this pandemic. India can become the new supply chain for the US and other western states. To become a global leader, India first needs to act to survive the onslaught of Covid-19, and then thrive towards achieving global-leader status by taking advantage of opportunities such as new alliances in the Middle East and the rivalry between the United States of America and China.


1966 ◽  
Vol 05 (02) ◽  
pp. 67-74 ◽  
Author(s):  
W. I. Lourie ◽  
W. Haenszel

Quality control of data collected in the United States by the Cancer End Results Program, utilizing punchcards prepared by participating registries in accordance with a Uniform Punchcard Code, is discussed. Existing arrangements decentralize responsibility for editing and related data processing to the local registries, with centralization of tabulating and statistical services in the End Results Section, National Cancer Institute. The most recent deck of punchcards represented over 600,000 cancer patients; approximately 50,000 newly diagnosed cases are added annually.

Mechanical editing and inspection of punchcards and field audits are the principal tools for quality control. Mechanical editing of the punchcards includes testing for blank entries and detection of inadmissible or inconsistent codes. Highly improbable codes are subjected to special scrutiny. Field audits include the drawing of a 1-10 percent random sample of punchcards submitted by a registry; the charts are then reabstracted and recoded by an NCI staff member, and differences between the punchcard and the results of independent review are noted.

