Pricing the common stocks in an emerging capital market: Comparison of the factor models

2020 ◽  
Vol 20 (4) ◽  
pp. 334-346 ◽  
Author(s):  
Asil Azimli


2021 ◽ 
Vol 66 (1) ◽  
pp. 140-149
Author(s):  
Eric A. Posner

Empirical findings that common ownership is associated with anticompetitive outcomes, including higher prices, raise questions about possible policy responses. This comment evaluates the major proposals, including antitrust enforcement against common owners, regulation of corporate governance, regulation of the compensation of portfolio-firm management, regulation of capital market structure, and greater antitrust enforcement against portfolio firms.


2002 ◽  
Vol 227 (7) ◽  
pp. 500-508 ◽  
Author(s):  
Richard A. Miller ◽  
James M. Harper ◽  
Robert C. Dysko ◽  
Stephen J. Durkee ◽  
Steven N. Austad

Nearly all the experimental mice used in aging research are derived from lineages that have been selected for many generations for adaptation to laboratory breeding conditions and are subsequently inbred. To see if inbreeding and laboratory adaptation might have altered the frequencies of genes that influence life span, we have developed three lines of mice (Idaho [Id], Pohnpei [Po], and Majuro [Ma]) from wild-trapped progenitors, and have compared them with a genetically heterogeneous mouse stock (DC) representative of the laboratory-adapted gene pool. Mean life span of the Id stock exceeded that of the DC stock by 24% (P < 0.00002), and maximal life span, estimated as mean longevity of the longest-lived 10% of the mice, was also increased by 16% (P < 0.003). Mice of the Ma stock also had a significantly longer maximal longevity than DC mice (9%, P = 0.04). The longest-lived Id mouse died at the age of 1450 days, which appears to exceed the previous longevity record for fully fed, non-mutant mice. The life table of the Po mice resembled that of the DC controls. Ma and Id mice differ from DC mice in several respects: both are shorter and lighter, and females of both stocks, particularly Id, are much slower to reach sexual maturity. As young adults, Id mice have lower levels of insulin-like growth factor 1 (IGF-I), leptin, and glycosylated hemoglobin compared with DC controls, implicating several biochemical pathways as potential longevity mediators. The results support the idea that inadvertent selection for rapid maturation and large body size during the adaptation of the common stocks of laboratory mice may have forced the loss of natural alleles that retard the aging process. Genes present in the Id and Ma stocks may be valuable tools for the analysis of the physiology and biochemistry of aging in mice.


Author(s):  
Marco Lippi

High-dimensional dynamic factor models have their origin in macroeconomics, more specifically in empirical research on business cycles. The central idea, going back to the work of Burns and Mitchell in the 1940s, is that the fluctuations of all the macro and sectoral variables in the economy are driven by a “reference cycle,” that is, a one-dimensional latent cause of variation. After a fairly long process of generalization and formalization, the literature settled at the beginning of the 2000s on a model in which (a) both n, the number of variables in the data set, and T, the number of observations for each variable, may be large; and (b) all the variables in the data set depend dynamically on a fixed, independent of n, number of common shocks, plus variable-specific, usually called idiosyncratic, components. The structure of the model can be exemplified as follows:

x_it = α_i u_t + β_i u_{t-1} + ξ_it,  i = 1, …, n,  t = 1, …, T,  (*)

where the observable variables x_it are driven by the white noise u_t, which is common to all the variables (the common shock), and by the idiosyncratic component ξ_it. The common shock u_t is orthogonal to the idiosyncratic components ξ_it, and the idiosyncratic components are mutually orthogonal (or weakly correlated). Lastly, the variations of the common shock u_t affect the variable x_it dynamically, that is, through the lag polynomial α_i + β_i L. Asymptotic results for high-dimensional factor models, consistency of estimators of the common shocks in particular, are obtained for both n and T tending to infinity. The time-domain approach to these factor models is based on the transformation of dynamic equations into static representations. For example, equation (*) becomes

x_it = α_i F_1t + β_i F_2t + ξ_it,  F_1t = u_t,  F_2t = u_{t-1}.

Instead of the dynamic equation (*) there is now a static equation, while instead of the white noise u_t there are now two factors, also called static factors, which are dynamically linked: F_1t = u_t, F_2t = F_{1,t-1}.
This transformation into a static representation, whose general form is

x_it = λ_i1 F_1t + ⋯ + λ_ir F_rt + ξ_it,

is extremely convenient for the estimation and forecasting of high-dimensional dynamic factor models. In particular, the factors F_jt and the loadings λ_ij can be consistently estimated from the principal components of the observable variables x_it. Assumptions allowing consistent estimation of the factors and loadings are discussed in detail. Moreover, it is argued that in general the vector of the factors is singular; that is, it is driven by a number of shocks smaller than its dimension. This fact has very important consequences. In particular, singularity implies that the fundamentalness problem, which is hard to solve in structural vector autoregressive (VAR) analysis of macroeconomic aggregates, disappears when the latter are studied as part of a high-dimensional dynamic factor model.
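The principal-components estimation of the static representation can be sketched on simulated data generated from equation (*). This is an illustrative simulation, not code from the article; all parameter values (n, T, noise scale) are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, r = 100, 500, 2            # variables, observations, static factors

# Simulate x_it = alpha_i * u_t + beta_i * u_{t-1} + xi_it
u = rng.standard_normal(T + 1)              # common shock (white noise)
alpha = rng.standard_normal(n)
beta = rng.standard_normal(n)
xi = 0.5 * rng.standard_normal((T, n))      # idiosyncratic components
F_true = np.column_stack([u[1:], u[:-1]])   # static factors F1_t = u_t, F2_t = u_{t-1}
X = F_true @ np.vstack([alpha, beta]) + xi  # T x n panel

# Principal-components estimator: the top r eigenvectors of the sample
# covariance matrix give the loadings; projecting X on them gives the factors
X_c = X - X.mean(axis=0)
eigval, eigvec = np.linalg.eigh(X_c.T @ X_c / T)
loadings_hat = eigvec[:, -r:] * np.sqrt(n)  # estimated loadings
F_hat = X_c @ loadings_hat / n              # estimated static factors, T x r

# The estimated factors span the true factor space only up to rotation,
# so we check the fit by regressing the true factors on the estimates
resid = np.linalg.lstsq(F_hat, F_true, rcond=None)[1]
trace_R2 = 1 - resid.sum() / (F_true ** 2).sum()
print(round(trace_R2, 3))                   # close to 1
```

The rotation indeterminacy is why the check regresses F_true on F_hat rather than comparing them entry by entry: principal components identify the factor space, not the individual factors.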


Author(s):  
Marco Lippi

High-Dimensional Dynamic Factor Models have their origin in macroeconomics, more precisely in empirical research on Business Cycles. The central idea, going back to the work of Burns and Mitchell in the 1940s, is that the fluctuations of all the macro and sectoral variables in the economy are driven by a “reference cycle,” that is, a one-dimensional latent cause of variation. After a fairly long process of generalization and formalization, the literature settled at the beginning of the 2000s on a model in which (1) both n, the number of variables in the dataset, and T, the number of observations for each variable, may be large, and (2) all the variables in the dataset depend dynamically on a fixed, independent of n, number of “common factors,” plus variable-specific, usually called “idiosyncratic,” components. The structure of the model can be exemplified as follows:

x_it = α_i u_t + β_i u_{t-1} + ξ_it,  i = 1, …, n,  t = 1, …, T,  (*)

where the observable variables x_it are driven by the white noise u_t, which is common to all the variables (the common factor), and by the idiosyncratic component ξ_it. The common factor u_t is orthogonal to the idiosyncratic components ξ_it, and the idiosyncratic components are mutually orthogonal (or weakly correlated). Lastly, the variations of the common factor u_t affect the variable x_it dynamically, that is, through the lag polynomial α_i + β_i L. Asymptotic results for High-Dimensional Factor Models, particularly consistency of estimators of the common factors, are obtained for both n and T tending to infinity. Model (*), generalized to allow for more than one common factor and a rich dynamic loading of the factors, has been studied in a fairly vast literature, with many applications based on macroeconomic datasets: (a) forecasting of inflation, industrial production, and unemployment; (b) structural macroeconomic analysis; and (c) construction of indicators of the Business Cycle.
This literature can be broadly classified as belonging to the time-domain or the frequency-domain approach. The works based on the latter are the subject of the present chapter. We start with a brief description of early work on Dynamic Factor Models. Formal definitions and the main Representation Theorem follow. The latter determines the number of common factors in the model by means of the spectral density matrix of the vector (x_1t, x_2t, …, x_nt). Dynamic principal components, based on the spectral density of the x’s, are then used to construct estimators of the common factors. These results, obtained in the early 2000s, are compared to the literature based on the time-domain approach, in which the covariance matrix of the x’s and its (static) principal components are used instead of the spectral density and dynamic principal components. Dynamic principal components produce two-sided estimators, which are good within the sample but unfit for forecasting. The estimators based on the time-domain approach are simple and one-sided. However, they require the restriction of finite dimension for the space spanned by the factors. Recent papers have constructed one-sided estimators based on the frequency-domain method for the unrestricted model. These results exploit properties of stochastic processes of dimension n that are driven by a q-dimensional white noise, with q < n, that is, singular vector stochastic processes. The main features of this literature are described in some detail. Lastly, we report and comment on the results of an empirical paper, the latest in a long list, comparing predictions obtained with time- and frequency-domain methods. The paper uses a large monthly U.S. dataset covering the Great Moderation and the Great Recession.
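The frequency-domain idea of detecting common factors through the spectral density matrix can be sketched as follows. This is an illustrative simulation, not code from the chapter: it estimates the spectral density of a panel driven by one common shock (q = 1) with a Bartlett-weighted smoothed periodogram and inspects its dynamic eigenvalues; the bandwidth, frequency, and all parameter values are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 60, 400

# Panel driven by a single common shock, loaded dynamically: q = 1 < n
u = rng.standard_normal(T + 1)
a, b = rng.standard_normal(n), rng.standard_normal(n)
X = np.outer(u[1:], a) + np.outer(u[:-1], b) + rng.standard_normal((T, n))

def spectral_density(X, theta, bandwidth=20):
    """Bartlett-weighted (lag-window) estimate of the spectral density at theta."""
    T = X.shape[0]
    Xc = X - X.mean(axis=0)
    S = np.zeros((X.shape[1], X.shape[1]), dtype=complex)
    for k in range(-bandwidth, bandwidth + 1):
        w = 1 - abs(k) / (bandwidth + 1)          # Bartlett weight
        if k >= 0:
            G = Xc[k:].T @ Xc[:T - k] / T         # sample autocovariance at lag k
        else:
            G = (Xc[-k:].T @ Xc[:T + k] / T).T    # Gamma(-k) = Gamma(k)'
        S += w * G * np.exp(-1j * k * theta)
    return S / (2 * np.pi)

S = spectral_density(X, theta=0.5)
eig = np.sort(np.linalg.eigvalsh(S))[::-1]
# With one common shock the first dynamic eigenvalue dominates all the others,
# which stay bounded; this gap identifies q
print(eig[:3].real)
```

The divergence of the first q dynamic eigenvalues as n grows, while the remaining ones stay bounded, is the criterion the Representation Theorem formalizes for determining the number of common factors.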


2021 ◽  
Vol 0 (0) ◽  
pp. 1-30
Author(s):  
Chunyan Lin ◽  
Jia Liu ◽  
Peide Liu

In this paper, a quantitative analysis is conducted of the relationship between the strategy deviation of listed firms and institutional investors’ recognition. Methodologically, financial complex networks and clustering techniques are employed to measure the degree of recognition by creating links based on the common stockholding behaviour of institutional investors. In addition, quarterly panel data from 2006 to 2020 are constructed for an innovative study of institutional investors’ recognition of the strategy deviation of listed firms under different innovation fields, firm properties, and market style heterogeneity and asymmetry. A stability test is conducted by varying the measures and methods, thereby effectively avoiding the “cluster fallacy”. We validate the mechanism by which differences in the strategic choices and propensities of listed firms affect capital market recognition, and enrich the microscopic research perspective and methodology on related issues.


Author(s):  
Oldřich Rejnuš

The article deals with the theoretical classification of “classic” capital market securities, i.e. corporate stocks and bonds. Its aim was to make a detailed analysis of the individual types of these securities from the viewpoint of their main characteristic features, and to look for possible ways of systemizing them and distinguishing them as unambiguously as possible. As the aim of this analysis was to identify the most important and typical properties of not only corporate stocks and long-term bonds that are commonplace in investing but also of those that are rare on financial markets, the analysis was made from a global viewpoint, i.e. without regard to the individual countries’ legislative conditions. The analysis focused, above all, on looking for ways to construct the individual types of stocks and bonds and of the most important rights connected with them. Using the obtained results, these types were mutually compared and possible ways of their systemization were explored. Taking into account these facts, certain significant properties (which, however, concern all securities in general, such as “issuer type” or conditions of transferability/ways of tradeability) were intentionally abstracted. The result of the analysis confirms the meaningfulness of certain existing theories concerning the existence of three relatively different groups of “classic” securities: common stocks, preferred stocks, and bonds. At the same time, the analysis has shown that as far as this classification is concerned, it is based mainly on the function of the securities, which means that the properties regarding their structure and legal content are covered only partially. 
This is also demonstrated by a proposal for a comprehensive systemization, which shows that on the current financial market there are many situations in which (apart from the legal identification) it is difficult to judge from the particular properties of a security whether it is a bond or a stock, or (in the latter case) which type of stock it is. For the above-mentioned reason, the conclusion stresses the necessity of creating at least partly harmonised international legislation in the given area, and presents recommendations for establishing the fundamental part of a harmonised system of legislation, which increasingly appears to be essential.


Author(s):  
Sangeeta Gupta ◽  
Rajanikanth Aluvalu

Large numbers of people turning to smart investment options need a secure and trustworthy environment in which to earn good profits. Twitter, a social networking platform, is a major source of information on the share market. People get excited when they come across tweets claiming that certain shares yield huge profits within a short time. As a result, they end up tweeting their credentials and the amount they are willing to invest. This paves a path for intruders to access confidential data and endanger the common man by gaining access to and misusing the information. To this end, the goal of this work is to address the challenge of providing better inputs, in a secure way, to customers interested in investing in the share market so that they can earn better returns on investment. In the first module of this work, pre-processing techniques are used to remove unwanted characters from tweets. In the second, to enhance security, an encryption module is developed, and the data is then stored in Cassandra. It is observed from the results that the time taken to encrypt 100,000 tweets after pre-processing is 500 msec, and the time taken to decrypt the same set of 100,000 tweets is 50 msec. This shows the effectiveness of the proposed work in attaining fast outcomes for a huge set of tweets after filling the voids through pre-processing.
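A minimal sketch of the two modules follows, assuming a simple regex-based pre-processing step and using a one-time-pad XOR as a stand-in for the paper’s unspecified encryption scheme (the Cassandra storage step is omitted); the sample tweet is invented.

```python
import re
import secrets

def preprocess(tweet: str) -> str:
    """Strip URLs, @mentions, and non-alphanumeric noise from a raw tweet."""
    tweet = re.sub(r"https?://\S+|@\w+", "", tweet)   # drop links and mentions
    tweet = re.sub(r"[^A-Za-z0-9\s#]", "", tweet)     # drop punctuation/symbols
    return re.sub(r"\s+", " ", tweet).strip()         # collapse whitespace

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # One-time-pad XOR: applying it twice with the same key recovers the input.
    # The key must be random, secret, as long as the message, and never reused.
    return bytes(d ^ k for d, k in zip(data, key))

raw = "Buy $XYZ now!!! huge profits @broker http://spam.example"
clean = preprocess(raw)
key = secrets.token_bytes(len(clean))
cipher = xor_cipher(clean.encode(), key)
print(clean)
print(xor_cipher(cipher, key).decode() == clean)
```

In a production setting an authenticated cipher (e.g. AES-GCM) and a key-management scheme would replace the XOR stand-in, and the ciphertext would be written to a Cassandra table keyed by tweet id.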


2018 ◽  
Vol 19 (4) ◽  
pp. 466-492
Author(s):  
Holger Gillet ◽  
Johannes Pauser

Abstract This paper examines efficiency in public input provision in two large regions with labor market imperfections. Because employment and pecuniary externalities are associated with public input provision, the provision level exceeds the optimal amount in the presence of wage rigidities in the capital-exporting jurisdiction if only head taxes are used to finance government expenditures. Efficiency in public input provision remains ambiguous in the capital-importing jurisdiction unless a specific functional form is assumed for the production technology. Constrained efficient provision of public inputs can be restored with an additional tax (subsidy) on capital that is used to strategically influence the interest rate on the common capital market and to increase employment by attracting foreign capital.


Author(s):  
Dr. L. Senthilkumar

A Mutual Fund is a professionally managed form of collective investment that pools money from many investors and invests it in stocks, bonds, short-term money market instruments, and/or other securities. A Mutual Fund is a trust that pools the savings of a number of investors who share a common financial goal. This pool of money is invested in accordance with a stated objective. The joint ownership of the fund is thus “Mutual”, i.e. the fund belongs to all investors. The money thus collected is then invested in capital market instruments such as shares, debentures, and other securities. The income earned through these investments and the capital appreciation realized are shared by its unit holders in proportion to the number of units owned by them. Thus a Mutual Fund is the most suitable investment for the common man, as it offers an opportunity to invest in a diversified, professionally managed basket of securities at a relatively low cost.
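The proportional sharing of fund income among unit holders can be illustrated with a small, entirely hypothetical example:

```python
# Hypothetical fund: income is shared in proportion to the units each holder owns
units = {"A": 100, "B": 300, "C": 600}   # units held by each unit holder
income = 5000.0                          # income earned by the fund

total_units = sum(units.values())
payout = {holder: income * u / total_units for holder, u in units.items()}
print(payout)   # A: 500.0, B: 1500.0, C: 3000.0
```

Holder C owns 60% of the 1,000 outstanding units and therefore receives 60% of the income; the same pro-rata rule applies to realized capital appreciation.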

