IMPLICATIONS OF THE CURRENT TREND IN MORTGAGE VALUATION PRACTICE IN NIGERIA

2007 ◽  
Vol 11 (1) ◽  
pp. 17-31 ◽  
Author(s):  
Bioye Tajudeen Aluko

The emerging concern about the reliability of property valuations, coupled with the consequences of a growing number of distressed banks in Nigeria, necessitated this study. The study therefore examined the mortgage valuation process, including the sources of valuation instructions and the bases and methods being adopted, and their implications for lending decisions and for the valuation profession in the study area. Questionnaires were administered to random samples of estate surveying and valuation firms and of lending institutions in Lagos Metropolis. The resulting data were analysed using frequency distributions, mean ranking and relative importance index ranking. The study established, prima facie, that mortgage valuation has been an important input in lending decisions. It further showed the implications of the blind adoption of the cost approach, of inconsistency amongst valuers and of the non-inclusion of insurance valuation in the mortgage valuation process.

Mortgage valuation in Nigeria: implications of current trends. Summary. This study was prompted by growing concern over the variance and accuracy of property valuations and by the consequences of Nigeria's distressed banks. Since valuation is essential when deciding whether to grant a loan, the findings may help in the search for policy responses to the uncontrolled bank failures in the country. The study therefore identified and examined the importance of mortgage valuation, including the significance of the sources of valuation instructions and of the bases and methods applied, and considered mortgage valuation in lending decisions and the valuer's profession in this field. Questionnaires and interviews covered 59 randomly selected estate surveying and valuation firms (of 146 in active practice; there are 239 such firms in total) and 42 of the 89 lending institutions operating in Lagos. The data obtained were analysed using frequency distributions, mean ranking and the relative importance index.
The study found that valuation of the loans being granted is important. Further analysis showed that insurance valuation is not included in mortgage valuation, that the cost approach is adopted blindly, and that valuers' opinions on mortgage valuations differ. The foregoing is addressed to valuers, lenders and other users of valuation services, and to the property market generally.
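The abstract reports ranking survey factors with a relative importance index but does not print its formula. A minimal sketch, assuming the conventional definition RII = ΣW / (A·N), where W is each respondent's Likert rating, A the highest possible rating and N the number of respondents; the ratings below are hypothetical:

```python
# Relative importance index (RII) as conventionally defined in survey
# research: RII = sum(ratings) / (max_score * n_respondents).
# This is the standard formula, not necessarily the authors' exact one.

def relative_importance_index(responses, max_score=5):
    """responses: list of Likert ratings (1..max_score) for one factor."""
    return sum(responses) / (max_score * len(responses))

# Hypothetical ratings from 10 valuation firms for one lending factor
ratings = [5, 4, 4, 3, 5, 4, 2, 5, 4, 4]
rii = relative_importance_index(ratings)
```

Factors are then ranked by descending RII, which is how such surveys typically order the importance of valuation inputs.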

2007 ◽  
Vol 8 (3) ◽  
pp. 225-233 ◽  
Author(s):  
Bioye Tajudeen Aluko

Of all the sub‐sectors of the national economy, the banking industry and the property market have arguably been the most severely affected by the current recession. The prevailing credit crunch in real estate finance and market conditions thus has implications for the disposal and valuation of real estate for mortgage purposes. The study examined whether forced sale valuations of mortgage properties were a good proxy for their auction sale prices. Relevant data on 67 auction sales of foreclosed residential properties, together with their contemporaneous forced sale valuations, were pooled for Lagos Metropolis over the period 1994 to 2003 from a sample of estate surveying and valuation/auctioneering firms, the lending institutions and the Nigeria Deposit Insurance Corporation. The data obtained were analyzed with the aid of frequency distributions and multiple regression models. The study revealed, amongst other findings, that forced sale values are not good proxies for auction sale prices, in contrast to the conclusions of previous studies on the accuracy of open market valuations in Nigeria and in other countries such as the UK, the USA and Australia. The implications of the foregoing conclusions for lending decisions and the valuation profession in the country were further examined in the paper.
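The proxy question the study poses is typically tested by regressing auction prices on their contemporaneous forced sale valuations: a good proxy would yield a slope near 1 and an intercept near 0. A minimal sketch with illustrative figures (not the paper's data):

```python
# Ordinary least squares fit of auction price against forced sale value.
# If the valuation were an unbiased proxy, slope ~ 1 and intercept ~ 0.

def ols(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

forced_sale_values = [10.0, 12.5, 8.0, 15.0, 9.5]   # N million, hypothetical
auction_prices     = [7.5, 9.0, 6.5, 11.0, 7.0]     # hypothetical outcomes
slope, intercept = ols(forced_sale_values, auction_prices)
```

A slope well below 1, as in this fabricated example, would be consistent with the paper's finding that forced sale values systematically overstate realized auction prices.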


1997 ◽  
Vol 64 (3) ◽  
pp. 413-421 ◽  
Author(s):  
A. B. Pleasants

A model of the birthdate distribution for a herd of beef cows is constructed using the probability distributions of the variables that affect reproduction in the cow: anoestrous interval, oestrous cycle length, conception to each oestrus, gestation length, period of mating and the prior calving frequency distribution. The model is general and can be reparameterized to deal with issues such as intervention to synchronize oestrous cycles among cows in the herd by changing the form of the relevant probability distributions.

The model is applied to the question of what time to begin mating in a herd of beef cows. The average calf live weight at day 200, the herd conception rate and the proportion of cows calving before the planned start of calving were calculated from the model output. The model parameters given by the anoestrous period, the conception rate to each oestrus and the regression between prior calving date and anoestrous period were varied in a factorial design to investigate a range of circumstances found on a farm. Prior calving distributions were generated by random sampling from eight actual calving frequency distributions.

Generally, starting mating earlier produced an advantage in terms of extra calf live weight and herd conception rate. However, the proportion of the herd calving earlier than expected increased with early mating. Thus, the feasibility of early mating depends on the cost to the farmer of dealing with early-calving cows as well as on the advantage of heavier, older calves.

Altering the fixed parameters in the model (variances and covariances, prior calving distributions, mating period) to accommodate the circumstances of herds run under different conditions may produce different results. The model structure allows easy alteration of these parameters and also the introduction of different probability distributions for some variables. This might be necessary to model oestrous synchronization and artificial insemination, issues not considered in this paper.
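The paper builds the birthdate distribution analytically from the component probability distributions; the same logic can be illustrated by Monte Carlo simulation. A sketch under assumed parameters (21-day oestrous cycle, Gaussian anoestrous interval and gestation length, fixed conception probability per oestrus), not the paper's actual parameterization:

```python
import random

# Simulate one cow's calving day: she resumes cycling after her
# anoestrous interval, then has one conception opportunity per 21-day
# oestrous cycle during the mating period; conception is followed by a
# random gestation length. All parameter values are illustrative.

def calving_day(mating_start, mating_length=63, p_conceive=0.6, rng=random):
    anoestrus = rng.gauss(60, 10)          # days from calving to first oestrus
    t = max(anoestrus, mating_start)       # first oestrus inside mating period
    while t < mating_start + mating_length:
        if rng.random() < p_conceive:
            return t + rng.gauss(283, 5)   # conception day + gestation length
        t += 21                            # wait for the next oestrous cycle
    return None                            # failed to conceive this season

random.seed(1)
days = [calving_day(mating_start=80) for _ in range(10000)]
conception_rate = sum(d is not None for d in days) / len(days)
```

Shifting `mating_start` earlier and recomputing the simulated birthdate distribution reproduces the trade-off the paper describes: more early calves, but also more cows calving before the planned start of calving.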


2012 ◽  
Vol 157-158 ◽  
pp. 784-787
Author(s):  
Yu Guo Zhuo ◽  
Jun Liu ◽  
Jia Min Gao ◽  
Yu Liu ◽  
Zhen Zhen Kang

Total hardness is an important index for evaluating drinking water quality. Total hardness in drinking water was determined by micro titration and by conventional titration separately, and the parallel determination results of the microscale chemical laboratory and the conventional experiment were compared. The results of the micro titration were accurate and reliable. The microscale laboratory offers clear endpoint phenomena, reagent savings and fast analysis, so the cost was reduced and, in full compliance with the concept of green chemistry, students' innovative ability was improved. The microscale laboratory is worth promoting.
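The underlying calculation is the same for both scales: total hardness, expressed as mg/L CaCO3, follows from the EDTA volume at the endpoint, the EDTA molarity and the sample volume. A sketch using the standard EDTA titration formula with illustrative volumes (the paper does not list its working):

```python
# Total hardness from an EDTA titration, expressed as mg/L CaCO3.
# 1 mol EDTA complexes 1 mol Ca2+/Mg2+; CaCO3 molar mass ~100.09 g/mol.

def total_hardness_mg_per_L(v_edta_mL, m_edta_mol_L, v_sample_mL):
    moles_edta = v_edta_mL / 1000 * m_edta_mol_L      # mol of hardness ions
    mg_caco3 = moles_edta * 100.09 * 1000             # equivalent mg CaCO3
    return mg_caco3 / (v_sample_mL / 1000)            # per litre of sample

# Conventional titration: 50 mL sample, 0.01 M EDTA, 8.2 mL to endpoint
conventional = total_hardness_mg_per_L(8.2, 0.01, 50)
# Micro titration: one-tenth the sample and titrant volumes
micro = total_hardness_mg_per_L(0.82, 0.01, 5)
```

Scaling sample and titrant down by the same factor leaves the computed hardness unchanged, which is why the microscale result can match the conventional one while consuming a tenth of the reagents.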


2020 ◽  
Vol 27 (10) ◽  
pp. 2859-2891
Author(s):  
Douglas Alleman ◽  
Eul-Bum Lee

Purpose – The publication presents an analysis of the cost and schedule performance of incentive/disincentive projects, and case studies, toward developing a systematic disincentive valuation process with Construction Analysis for Pavement Rehabilitation Strategies (CA4PRS) software integration that aids agencies in minimizing the likelihood of court challenges to disincentives. Design/methodology/approach – From a California transportation database, the authors performed cost and schedule analyses of 43 incentive/disincentive (I/D) projects and case studies on four of those I/D projects. Interviewees included subject matter experts from transportation organizations to ensure applicability and maximum value-adding, and the process was implemented on ten California transportation projects and monitored for performance. Findings – The presented process mitigates the contractor's ability to claim disincentives as penalties in a court of law through the following: (1) all calculations are performed using project-specific bases, backed by estimations of actual incurred costs; (2) the CA4PRS software allows for estimation transparency; and (3) the clarity of cost inclusions reduces any chance of "double-dipping" between disincentives and liquidated damages. Practical implications – Transportation agencies have historically faced legal challenges to their enforcement of disincentives. As agencies continue to apply disincentives on more megaprojects, contractors will likely attempt to pursue legal challenges more frequently. The presented process mitigates the likelihood of these challenges going to court and increases the accuracy and efficiency of disincentives. Originality/value – While there have been publications that discuss the legal challenges of imposing disincentives, they mainly provide guidelines and lack applicable processes. Existing literature that does present an incentive/disincentive valuation process focuses on incentive valuations and neglects the disincentives' legal challenges. This publication fills that gap by presenting an applicable disincentive valuation process for transportation projects that incorporates the guidelines for legal mitigation.
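The cost basis the findings describe can be illustrated in miniature: a daily disincentive built from project-specific estimated actual costs, with any component already recovered through liquidated damages subtracted out. The components and figures below are illustrative placeholders, not the paper's CA4PRS outputs:

```python
# Daily disincentive assembled from estimated actual incurred costs
# (e.g. road-user delay cost plus agency oversight cost), so it can be
# defended as compensation rather than a penalty. Subtracting any
# overlap with liquidated damages avoids "double-dipping".

def daily_disincentive(road_user_cost, agency_oversight_cost,
                       liquidated_damages_overlap=0.0):
    return road_user_cost + agency_oversight_cost - liquidated_damages_overlap

rate = daily_disincentive(
    road_user_cost=38000,              # delay cost to the traveling public
    agency_oversight_cost=4500,        # inspection, traffic control, admin
    liquidated_damages_overlap=4500,   # oversight already covered by LDs
)
```

Keeping every term traceable to a project-specific estimate is what the paper argues makes such a rate defensible if challenged in court.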


2014 ◽  
Vol 2014 ◽  
pp. 1-7 ◽  
Author(s):  
Amjed S. Al-Fahoum ◽  
Ausilah A. Al-Fraihat

Technically, a feature represents a distinguishing property, a recognizable measurement, or a functional component obtained from a section of a pattern. Extracted features are meant to minimize the loss of important information embedded in the signal, and they also reduce the resources needed to describe a huge set of data accurately. This is necessary to minimize the complexity of implementation, to reduce the cost of information processing, and to remove any potential need to compress the information. More recently, a variety of methods have been widely used to extract features from EEG signals; among these methods are time-frequency distributions (TFD), the fast Fourier transform (FFT), eigenvector methods (EM), the wavelet transform (WT) and the autoregressive method (ARM). In general, the analysis of EEG signals has been the subject of several studies because of its ability to yield an objective record of brain activity, which is widely used in brain-computer interface research with applications in medical diagnosis and rehabilitation engineering. The purposes of this paper, therefore, are to discuss some conventional EEG feature extraction methods, to compare their performance on a specific task, and to recommend the most suitable method for feature extraction based on performance.
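A common instance of the Fourier-based extraction the survey covers is spectral band power, e.g. power in the alpha band (8-13 Hz). A minimal sketch on a synthetic signal; a real pipeline would use an FFT library, but a naive DFT keeps this self-contained (the signal and band limits are illustrative, not from the paper):

```python
import cmath, math

# Power in a frequency band of a signal, computed via a naive DFT.
# For each bin whose frequency falls in [lo, hi], accumulate |X_k|^2 / n.

def band_power(signal, fs, lo, hi):
    n = len(signal)
    power = 0.0
    for k in range(n // 2):
        f = k * fs / n                     # frequency of bin k in Hz
        if lo <= f <= hi:
            X = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
            power += abs(X) ** 2 / n
    return power

fs = 128                                   # sampling rate, Hz
t = [i / fs for i in range(fs)]            # one second of samples
eeg = [math.sin(2 * math.pi * 10 * ti) for ti in t]   # pure 10 Hz "alpha"
alpha = band_power(eeg, fs, 8, 13)         # alpha band (8-13 Hz)
beta = band_power(eeg, fs, 13, 30)         # beta band (13-30 Hz)
```

For a pure 10 Hz tone, essentially all power lands in the alpha band; a feature vector of such band powers is the kind of compact, information-preserving representation the paper evaluates.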


2012 ◽  
Vol 10 (2) ◽  
pp. 97
Author(s):  
Denis O. Boudreaux ◽  
Praveen Das ◽  
Nancy Rumore ◽  
SPUma Rao

A company's cost of capital is the average rate it pays for the use of its capital funds. Estimating the cost of equity capital for a publicly traded firm is much simpler than estimating the same for a small privately held firm, because privately owned firms lack market-based financial information. In business damage cases, valuation of the firm is often of prime interest, and a necessary variable in the valuation process is an estimate of the firm's cost of capital. Part of the cost of capital is the equity holders' or owners' required rate of return. The purpose of this paper is to explore the theoretical structure that underlies the valuation process for business damage cases involving privately owned businesses. Specifically, the cost of equity capital estimation methods that appear in the current literature are examined, and a theoretically correct and simple method to measure the cost of equity capital for closely held companies is offered.
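One family of estimation methods the literature offers for firms without market data is the build-up approach: stack risk premiums on a risk-free rate in place of a market-derived beta. A sketch of that general technique; the premium values are illustrative placeholders, not figures from the article, and this is not necessarily the method the authors ultimately recommend:

```python
# Build-up estimate of the cost of equity for a closely held firm:
# risk-free rate plus equity, size, and company-specific risk premiums.

def build_up_cost_of_equity(risk_free, equity_premium,
                            size_premium, firm_specific_premium):
    return risk_free + equity_premium + size_premium + firm_specific_premium

ke = build_up_cost_of_equity(
    risk_free=0.04,               # e.g. long-term government bond yield
    equity_premium=0.055,         # broad market equity risk premium
    size_premium=0.03,            # small-firm premium
    firm_specific_premium=0.02,   # company-specific (unsystematic) risk
)
```

The resulting rate is then used to discount expected cash flows or lost profits in the damage valuation.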


2008 ◽  
Vol 6 (2) ◽  
pp. 157
Author(s):  
Felipe Pretti Casotti ◽  
Luiz Felipe Jacques da Motta

The pricing of new shares in IPOs has been studied in several countries. This paper first looks at the valuation process using multiples and seeks to classify the new shares under two categories: underpriced or overpriced at the time of the IPO. An analysis of the cost of equity is also carried out, comparing betas at the time of the offerings (usually calculated as the betas of comparable companies) with the betas of the companies after 12 months of trading. Companies in the sample are those that went public between 2004 and 2006. Results indicated that companies were not undervalued, even after some high short-term returns; however, there is no statistical evidence that they were overvalued. Finally, results indicated that betas after twelve months of trading are significantly higher than the comparable companies' betas used at the time of the IPOs.
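The 12-month betas the paper compares against comparable-company betas come from the standard estimator, beta = cov(stock returns, market returns) / var(market returns). A sketch on illustrative return series (not the paper's data):

```python
# Beta from paired return series: covariance of stock and market
# returns divided by the variance of market returns.

def beta(stock_returns, market_returns):
    n = len(stock_returns)
    ms = sum(stock_returns) / n
    mm = sum(market_returns) / n
    cov = sum((s - ms) * (m - mm)
              for s, m in zip(stock_returns, market_returns)) / n
    var = sum((m - mm) ** 2 for m in market_returns) / n
    return cov / var

market = [0.01, -0.02, 0.015, 0.03, -0.01]   # hypothetical index returns
stock  = [0.02, -0.03, 0.02, 0.05, -0.02]    # amplifies the market's moves
b = beta(stock, market)
```

A post-listing beta above 1, as here, indicates the stock moves more than the market, consistent with the paper's finding that traded betas exceed the comparables' betas used at the IPO.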


Author(s):  
Alison Fraser ◽  
Stevie Kinnear ◽  
Ken Smith

Introduction – Hispanic naming conventions frequently follow historical traditions. A person's name consists of a given name or names followed by the father's first surname and the mother's first surname, or reversed if the parents wish. The challenge occurs in keying and linking these non-standard names, resulting in a potential linking bias. Objectives and Approach – Historically the Utah Population Database (UPDB) has combined multiple surnames into a single surname to standardize names such as VAN WINKLE; however, this resulted in Hispanic surnames being combined into nonsensical names, for example 'MARTINEZCRUZ', that were difficult to match to surnames stored in separate fields in assorted combinations. The objective of this study was to see whether name-specific frequencies and name arrays created from the second and third given names, maiden name and surname allowed for ultimate flexibility in matching to records which did not adhere to any standardized keying convention, and resulted in better linking results. Results – A "gold standard" set of Hispanic individuals with multiple record sources in UPDB and the presence of two names in the surname field was evaluated. Two linking approaches, one using the UPDB standard methodology and the other using name arrays, were compared. Both methodologies resulted in high linking rates into complete or partial sets of records per individual. Overall, the array methodology linked more records into complete sets than the standard (94.7% cf. 82.6%). Using arrays, males linked at a higher rate than females, and persons from Spanish-speaking countries linked at the highest rate compared with those born in the USA or other countries. However, there was an increase in incorrect links using arrays. Name frequency distributions specific to Hispanics also proved important. Conclusion/Implications – This study found that weights based on frequencies specific to the population being linked are critical to complete linking. Using name arrays for Hispanics was most effective in males with indicators of strong ethnic ties. However, the cost of using arrays was an increase in incorrect links, and further refinement is needed.
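The name-array idea can be sketched simply: store each person's name parts as a set and score candidate pairs by overlap, so surnames keyed in any order or combination can still match. The names, scoring rule and 0.5 threshold below are illustrative, not UPDB's actual linkage algorithm (which also applies name-frequency weights):

```python
# Name arrays: a set of name parts per record, compared by overlap,
# so 'MARTINEZ CRUZ' keyed in reversed or partial form still matches.

def name_array(*parts):
    return {p.upper() for p in parts if p}

def overlap_score(a, b):
    return len(a & b) / min(len(a), len(b))

record_1 = name_array("Maria", "Elena", "Martinez", "Cruz")
record_2 = name_array("Maria", "Cruz", "Martinez")    # reversed surnames
record_3 = name_array("Maria", "Lopez", "Garcia")     # different person

match_12 = overlap_score(record_1, record_2) >= 0.5   # links
match_13 = overlap_score(record_1, record_3) >= 0.5   # does not link
```

The flexibility that lets record_2 link despite its non-standard keying is also the source of the extra incorrect links the study reports, which is why population-specific name frequency weights matter.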


1983 ◽  
Vol 12 (1) ◽  
pp. 33-40
Author(s):  
George A. Stevens ◽  
Herbert L. Brodie

This study examines the economic feasibility of substituting electricity generated on dairy farms by methane gas systems for electricity purchased from local utility companies. Electric power is an important input in the operation of a dairy farm, and the central question was which source of this input was cheaper. Herd sizes included in the study were 50, 100, 200 and 300 cows. The cost of methane-generated electricity is compared with the cost of purchased electricity, and results are presented by size of dairy herd.
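The comparison reduces to a cost-per-kWh calculation: annualize the digester's capital cost, add operating costs, divide by annual generation, and compare with the utility rate. A sketch using a standard capital recovery factor; every figure below is an illustrative placeholder, not one of the paper's 1983 estimates:

```python
# Levelized cost per kWh of digester-generated electricity: capital cost
# annualized with a capital recovery factor, plus annual operating cost,
# divided by annual kWh generated.

def methane_cost_per_kwh(capital_cost, annual_operating_cost,
                         annual_kwh, lifetime_years, discount_rate):
    r, n = discount_rate, lifetime_years
    crf = r * (1 + r) ** n / ((1 + r) ** n - 1)   # capital recovery factor
    return (capital_cost * crf + annual_operating_cost) / annual_kwh

cost = methane_cost_per_kwh(
    capital_cost=60000,          # digester + generator, hypothetical
    annual_operating_cost=2500,  # labor, maintenance, hypothetical
    annual_kwh=45000,            # e.g. a 100-cow herd, hypothetical
    lifetime_years=15,
    discount_rate=0.10,
)
utility_rate = 0.07              # $/kWh purchased, hypothetical
methane_cheaper = cost < utility_rate
```

Because the capital cost does not scale linearly with herd size, the per-kWh cost falls as the herd grows, which is why results are reported by herd size.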


2016 ◽  
Vol 43 (2) ◽  
pp. 178-202
Author(s):  
Marcos Valli Jorge ◽  
Wilfredo Leiva Maldonado

Purpose – The purpose of this paper is to model a credit card market where retailers may charge differential prices depending on the instrument of payment used by the consumer. Following the research agenda proposed by Rochet and Wright (2010), the authors find conditions for the existence of a differential prices equilibrium and analyze the effects of that price differentiation on consumers' welfare. Design/methodology/approach – This is done when the consumer also has store credit as an alternative means of payment. The equilibrium prices are computed assuming Hotelling competition among retailers in both scenarios: when the cost of the store credit provided by the retailer is greater than that of the credit card, and vice versa. Findings – From this, the authors prove that the average price under price differentiation is lower than the single price under the no-surcharge rule; nevertheless, the retailers' margins remain the same in both situations. Furthermore, some cross-subsidies are expunged when price differentiation is allowed. The authors also conclude that consumers' welfare is greater when the no-surcharge rule is abolished. Finally, if retailers face menu costs whenever they differentiate prices, the authors provide sufficient conditions for differential prices to remain an equilibrium. Practical implications – This is an important input for discussions among regulators and players in the credit card market. Originality/value – From the analysis the authors conclude that price differentiation according to the instrument of payment is a welfare-improving policy. The authors explicitly determine the average price in that setting and the differentiated prices, even in the presence of costs that arise from price differentiation. The theoretical results obtained can be used as an input for econometric modeling.
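The cross-subsidy the findings mention can be shown with a stylized accounting sketch: under the no-surcharge rule the single price folds the expected merchant fee into every sale, so cash payers cover part of the card fee; with surcharging, each instrument bears its own cost. All numbers are illustrative, and this accounting is a stand-in for, not a reproduction of, the paper's Hotelling equilibrium:

```python
# Stylized cross-subsidy under the no-surcharge rule versus
# instrument-specific pricing. Margins are held fixed, mirroring the
# paper's finding that retailer margins are the same in both regimes.

marginal_cost = 100.0
margin = 10.0            # retailer's Hotelling margin, both regimes
merchant_fee = 3.0       # fee per card transaction
card_share = 0.4         # fraction of consumers paying by card

# No-surcharge rule: one price covering the expected merchant fee
single_price = marginal_cost + margin + card_share * merchant_fee

# Differential pricing: each payment instrument bears its own cost
cash_price = marginal_cost + margin
card_price = marginal_cost + margin + merchant_fee

# Cross-subsidy each cash payer provides under the no-surcharge rule
cash_subsidy = single_price - cash_price
```

Removing this subsidy is one channel behind the paper's welfare result; the average-price comparison itself requires the full equilibrium model, since demand responds to the differentiated prices.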

