Threshold value estimation of journey-distance using generalized polynomial function

2021 ◽  
pp. 1-17
Author(s):  
Roy Subhojit

The present work demonstrates an approach to estimating the threshold value of journey distances travelled by transit passengers using a generalized polynomial function. The threshold value of journey distance may be defined as the distance beyond which passengers are no longer willing to travel by their reported mode. Knowledge of this threshold value is useful for limiting the upper-most slab of transit fare when preparing a length-based fare matrix table. Theoretically, the threshold value is obtained at the point on the cumulative frequency distribution (CFD) curve of journey distances at which the maximum rate of change of the slope of the curve occurs. In this work, the CFD curve of the journey-distance values is empirically modelled using Newton’s polynomial interpolation method, which helps to overcome the various challenges usually encountered when a theoretical probability distribution is assumed a priori for the CFD.
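As a rough illustration of the idea (with invented journey-distance data, not the paper's survey), the sketch below fits a Newton divided-difference polynomial to an empirical CFD and picks the distance at which the rate of change of the slope, estimated as the numerical second derivative, is largest.

```python
# Minimal sketch: Newton polynomial fit of a CFD and threshold at the peak
# of the second derivative. The distance/CFD values below are hypothetical.
import numpy as np

def newton_coefficients(x, y):
    """Divided-difference coefficients of the Newton interpolating polynomial."""
    coef = np.array(y, dtype=float)
    n = len(x)
    for j in range(1, n):
        coef[j:] = (coef[j:] - coef[j - 1:-1]) / (x[j:] - x[:n - j])
    return coef

def newton_eval(coef, x_nodes, t):
    """Evaluate the Newton-form polynomial at points t (Horner-like scheme)."""
    result = np.full_like(t, coef[-1], dtype=float)
    for c, xn in zip(coef[-2::-1], x_nodes[-2::-1]):
        result = result * (t - xn) + c
    return result

# Hypothetical nodes: journey distance (km) vs cumulative share of passengers.
dist = np.array([1.0, 3.0, 5.0, 8.0, 12.0, 18.0, 25.0])
cfd  = np.array([0.05, 0.22, 0.48, 0.74, 0.90, 0.97, 1.00])

coef = newton_coefficients(dist, cfd)
t = np.linspace(dist[0], dist[-1], 500)
p = newton_eval(coef, dist, t)

# Rate of change of the slope ~ second derivative, estimated numerically.
d2 = np.gradient(np.gradient(p, t), t)
threshold_distance = t[np.argmax(np.abs(d2))]
print(f"estimated threshold distance ~ {threshold_distance:.1f} km")
```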

1928 ◽  
Vol 11 (6) ◽  
pp. 715-741 ◽  
Author(s):  
Hudson Hoagland

1. The durations of successive periods of induced tonic immobility in the lizard Anolis carolinensis were examined as a function of temperature. An automatic recording method was employed and observations were made of 12,000 to 15,000 immobilizations with six animals over a temperature range of 5° to 35°C. during 5 months. 2. The durations of the immobile periods were found to vary rhythmically in most cases. The reciprocal of the duration of the rhythm, i.e., the rate of change of the process underlying the rhythms, when plotted as a function of temperature according to the Arrhenius equation, shows distributions of points in two straight-line groups. One of these groups or bands of points extends throughout the entire temperature range with a temperature characteristic of approximately µ = 31,000 calories, and the other covers the range of 20° to 35°C. with µ equal to approximately 9,000 calories. 3. The initial stimulus in a series of inductions of immobility appears to set off a mechanism which determines the duration of the state of quiescence. Succeeding forced recoveries seem to have no effect on the normal duration of the rhythm. 4. These results are interpreted by assuming the release, through reflex stimulation, of hormonal substances, one effective between 5° and 35°C. and the other effective between 20° and 35°C. These substances are assumed to act as selective inhibitors of impulses from so-called "higher centers," allowing impulses from tonic centers to pass to the muscles. 5. In some experiments a progressive lengthening in successively induced periods of immobility was observed. The logarithm of the frequency of recovery, when plotted against time, gave in most of these cases (i.e., except for a few in which irregularities occurred) a linear function of negative slope which was substantially unaffected by temperature. In these cases it is assumed that a diffusion process is controlling the amount of available A substance. 6. The results are similar to those obtained by Crozier with Cylisticus convexus. The duration of tonic immobility seems to be maintained in both arthropod and vertebrate by the chemical activity of "hormonal" selective inhibitors. The details of the mechanisms differ, but there is basic similarity. 7. Injections of small amounts of adrenalin above a threshold value are found to prolong the durations of tonic immobility of Anolis by an amount which is a logarithmic function of the "dose." It is possible that internally secreted adrenalin, above a threshold amount, may be involved in the maintenance of tonic immobility. 8. The production of tonic immobility reflexly is a problem distinct from that of the duration of immobility. It is suggested that the onset may be induced by "shock" to the centers of reflex tonus causing promiscuous discharge of these centers with accompanying inhibition of the higher centers. Such a condition may result when an animal is suddenly lifted from the substratum and overturned, or when, as in the case of Anolis, it struggles with dorsum down. This reaction of the "tonic centers" may at the same time lead to discharge of the adrenal glands by way of their spinal connections, thus prolonging the state.
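For readers unfamiliar with the temperature-characteristic notation, the short sketch below (with made-up rates, not Hoagland's measurements) shows how a value of µ is read off an Arrhenius plot: ln(rate) is regressed on 1/T and the slope equals -µ/R.

```python
# Sketch of reading a temperature characteristic mu from an Arrhenius plot.
# Rates below are hypothetical; R is the gas constant in cal/(mol K).
import numpy as np

R = 1.987  # cal/(mol K)

# Hypothetical data: temperature (deg C) and reciprocal rhythm duration (1/min).
temp_c = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0, 35.0])
rate   = np.array([0.004, 0.010, 0.024, 0.055, 0.120, 0.250, 0.500])

inv_T = 1.0 / (temp_c + 273.15)                 # reciprocal absolute temperature
slope, intercept = np.polyfit(inv_T, np.log(rate), 1)
mu = -slope * R                                 # temperature characteristic, calories
print(f"mu ~ {mu:,.0f} cal")
```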


2018 ◽  
Vol 10 (8) ◽  
pp. 2749
Author(s):  
Qi Wang ◽  
Fenzhen Su ◽  
Yu Zhang ◽  
Huiping Jiang ◽  
Fei Cheng

In addition to remote-sensing monitoring, reconstructing morphologic surface models through interpolation is an effective means of reflecting geomorphological evolution, especially for the lagoons of coral atolls, which are underwater. However, which interpolation method is optimal for lagoon geomorphological reconstruction, and how to assess the morphological precision, have remained unclear. To address these problems, this study proposed a morphological precision index system comprising the root mean square error (RMSE) of the elevation, the change rate of the local slope shape (CRLSS), and the change rate of the local slope aspect (CRLSA), and introduced the spatial appraisal and valuation approach of environment and ecosystems (SAVEE). In detail, ordinary kriging (OK), inverse distance weighting (IDW), radial basis function (RBF), and local polynomial interpolation (LPI) were used to reconstruct the lagoon surface models of a typical coral atoll in the South China Sea, and their morphological precision was assessed. The results are as follows: (i) OK, IDW, and RBF exhibit the best performance in terms of RMSE (0.3584 m), CRLSS (51.43%), and CRLSA (43.29%), respectively, but none is sufficiently robust when all three aspects are considered; (ii) IDW, LPI, and RBF are suitable for lagoon slopes, lagoon bottoms, and patch reefs, respectively; (iii) the geomorphic decomposition scale is an important factor that affects the precision of geomorphologic reconstructions; and (iv) the proposed index system and evaluation approach can more comprehensively account for the differences among multiple precision indices.
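The sketch below, using fabricated soundings rather than the study's atoll data, illustrates one of the compared interpolators (IDW) together with the RMSE component of the proposed precision index; CRLSS and CRLSA would additionally compare the local slope shape and aspect of the reconstructed surfaces.

```python
# Rough sketch: inverse distance weighting of scattered depths and an RMSE check
# on held-out points. All data below are invented for illustration.
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse distance weighting interpolation of scattered elevations."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, eps) ** power
    return (w @ z_known) / w.sum(axis=1)

rng = np.random.default_rng(0)
xy = rng.uniform(0, 1000, size=(200, 2))                          # positions (m)
z = -30 + 10 * np.sin(xy[:, 0] / 200) + rng.normal(0, 0.5, 200)   # depths (m)

# Hold out check points to assess the precision of the reconstruction.
train, test = xy[:150], xy[150:]
z_train, z_test = z[:150], z[150:]
z_pred = idw(train, z_train, test)
rmse = np.sqrt(np.mean((z_pred - z_test) ** 2))
print(f"RMSE of elevation: {rmse:.3f} m")
```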


Robotica ◽  
2005 ◽  
Vol 23 (6) ◽  
pp. 709-720 ◽  
Author(s):  
F. Belkhouche ◽  
B. Belkhouche

This paper deals with a method for robot navigation towards a moving goal whose maneuvers are not known a priori to the robot. Our method is based on the kinematics equations of the robot and the goal, combined with geometrical rules. First, a kinematics model for the tracking problem is derived, and two strategies are suggested for robot navigation, namely the velocity pursuit guidance law and the deviated pursuit guidance law. It turns out that in both cases the robot's angular velocity is equal to the line-of-sight angle rate. Important properties of the navigation strategies are discussed and proven. In the presence of obstacles, two navigation modes are used: the tracking mode, which has a global aspect, and the obstacle avoidance mode, which has a local aspect. In the obstacle avoidance mode, a polar diagram combining information about obstacles and the directions corresponding to the pursuit is constructed. An extensive simulation study is carried out, in which the efficiency of both strategies is illustrated for different scenarios.
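A simplified simulation sketch of the guidance idea is given below; the constants, goal maneuver, and unicycle model are assumptions for illustration, not the paper's exact formulation. The robot's angular velocity is simply set equal to the measured line-of-sight angle rate.

```python
# Sketch of a pursuit-style guidance law: a unicycle robot chases a moving goal
# by steering with the line-of-sight (LOS) angle rate. All numbers are made up.
import numpy as np

dt, v_robot, v_goal = 0.05, 1.2, 0.8
robot = np.array([0.0, 0.0])
goal = np.array([10.0, 5.0])
goal_dir = np.array([1.0, 0.2]) / np.hypot(1.0, 0.2)   # goal maneuver, unknown to robot

dx, dy = goal - robot
theta = np.arctan2(dy, dx)          # heading initially along the line of sight
prev_los = theta

for step in range(2000):
    goal = goal + v_goal * dt * goal_dir
    dx, dy = goal - robot
    los = np.arctan2(dy, dx)                                        # LOS angle
    los_rate = np.arctan2(np.sin(los - prev_los), np.cos(los - prev_los)) / dt
    omega = los_rate                # guidance law: angular velocity = LOS angle rate
    theta += omega * dt
    robot = robot + v_robot * dt * np.array([np.cos(theta), np.sin(theta)])
    prev_los = los
    if np.hypot(dx, dy) < 0.2:
        print(f"goal reached after {step} steps")
        break
```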


Author(s):  
Saif Ur Rehman ◽  
Kexing Liu ◽  
Tariq Ali ◽  
Asif Nawaz ◽  
Simon James Fong

Graph mining is a well-established research field that has lately drawn considerable attention from the research community. It allows researchers to process, analyze, and discover significant knowledge from graph data. One of the most challenging tasks in graph mining is frequent subgraph mining (FSM), which applies data mining algorithms to extract interesting, unexpected, and useful graph patterns from graphs. FSM has been applied to many domains, such as graphical data management and knowledge discovery, social network analysis, bioinformatics, and security. In this context, a large number of techniques have been suggested for dealing with graph data. These techniques can be classified into two primary categories: (i) a priori-based FSM approaches and (ii) pattern growth-based FSM approaches, and extensive research is available in both. However, FSM approaches face several challenges, including the enormous number of frequent subgraph patterns (FSPs); the lack of a suitable mechanism for ranking at the appropriate level during the discovery of FSPs; the extraction of repetitive and duplicate FSPs; the need for user involvement in supplying the support threshold value; and the generation of a large number of subgraph candidates. The aim of this research is therefore to cope with the enormous number of FSPs, avoid duplicate discovery of FSPs, and rank such patterns. To address these challenges, a new FSM framework, A RAnked Frequent pattern-growth Framework (A-RAFF), is suggested. A-RAFF answers these challenges through a new ranking measure called FSP-Rank, which effectively reduces duplicate and excessive frequent patterns. The effectiveness of the proposed techniques is validated by extensive experimental analysis using different benchmark and synthetic graph datasets. Our experiments consistently demonstrate promising empirical results, confirming the superiority and practical feasibility of the proposed FSM framework.
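As a toy illustration (not A-RAFF itself), the sketch below shows the support-threshold step common to FSM approaches: count how many database graphs contain each candidate pattern, keep those meeting a user-supplied support threshold, and rank the survivors. The score used here (support × pattern size) is only a stand-in for the paper's FSP-Rank measure.

```python
# Toy frequent-subgraph support counting and ranking with networkx.
# The ranking score is a placeholder, not the FSP-Rank measure from the paper.
import networkx as nx
from networkx.algorithms import isomorphism

def support(pattern, graph_db):
    """Number of database graphs containing a node-induced subgraph isomorphic to `pattern`."""
    count = 0
    for g in graph_db:
        gm = isomorphism.GraphMatcher(g, pattern)
        if gm.subgraph_is_isomorphic():
            count += 1
    return count

# Tiny synthetic graph database and two candidate patterns.
graph_db = [nx.path_graph(4), nx.cycle_graph(5), nx.complete_graph(4)]
candidates = {"triangle": nx.cycle_graph(3), "path3": nx.path_graph(3)}

min_support = 2
supports = {name: support(p, graph_db) for name, p in candidates.items()}
frequent = {n: s for n, s in supports.items() if s >= min_support}

# Rank surviving patterns by (support x number of edges) as a stand-in score.
ranked = sorted(frequent.items(),
                key=lambda kv: kv[1] * candidates[kv[0]].number_of_edges(),
                reverse=True)
print(ranked)
```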


2021 ◽  
Vol 119 ◽  
pp. 07002
Author(s):  
Youness Rtal ◽  
Abdelkader Hadjoudja

Graphics Processing Units (GPUs) are microprocessors attached to graphics cards, dedicated to displaying and manipulating graphics data. Such processors now equip all modern graphics cards, and in just a few years they have become potent tools for massively parallel computing. They serve in several fields, such as image processing, video and audio encoding and decoding, and the solution of physical systems with one or more unknowns, offering faster processing and lower energy consumption than the central processing unit (CPU). In this paper, we define and implement the Lagrange polynomial interpolation method on the GPU and CPU to calculate the sodium density at different temperatures Ti using the NVIDIA CUDA C parallel programming model, which can increase computational performance by harnessing the power of the GPU. The objective of this study is to compare the performance of the Lagrange interpolation method implemented on CPU and GPU processors and to assess the efficiency of GPUs for parallel computing.
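A CPU-side sketch of Lagrange interpolation is shown below with invented temperature-density values rather than the paper's sodium data. The evaluation over query points is embarrassingly parallel, which is what a CUDA kernel would distribute across GPU threads, one thread per query temperature.

```python
# CPU sketch of Lagrange polynomial interpolation over many query points.
# Nodes and densities are hypothetical placeholders.
import numpy as np

def lagrange_eval(x_nodes, y_nodes, x_query):
    """Evaluate the Lagrange interpolating polynomial at the query points."""
    x_query = np.atleast_1d(x_query).astype(float)
    result = np.zeros_like(x_query)
    n = len(x_nodes)
    for i in range(n):
        # Basis polynomial L_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j)
        L = np.ones_like(x_query)
        for j in range(n):
            if j != i:
                L *= (x_query - x_nodes[j]) / (x_nodes[i] - x_nodes[j])
        result += y_nodes[i] * L
    return result

# Hypothetical tabulated density values at a few temperatures.
T = np.array([100.0, 200.0, 300.0, 400.0, 500.0])       # temperature
rho = np.array([0.95, 0.92, 0.88, 0.83, 0.78])           # density (arbitrary units)

T_query = np.linspace(100.0, 500.0, 10_000)              # many query points
rho_query = lagrange_eval(T, rho, T_query)
print(rho_query[:5])
```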


Author(s):  
Claudio Garuti

This paper has two main objectives. The first is to provide a mathematically grounded technique for constructing local and global thresholds using the well-known rate of change method. The second, more general objective is to show the relevance and possibilities of applying the AHP/ANP in absolute measurement (AM) mode compared to the relative measurement (RM) mode currently widespread in the AHP/ANP community. The ability to construct a global threshold would help increase the use of AHP/ANP in AM (rating) mode; therefore, achieving the first, specific objective would facilitate reaching the second, more general one. For this purpose, a real-life example based on the construction of a multi-criteria index and threshold is described. The index measures the degree of lag of a neighborhood through the Urban and Social Deterioration Index (USDI), based on an AHP risks model. The global threshold represents the tolerable lag value for the specific neighborhood. The difference, or gap, between the neighborhood's current status (actual USDI value) and this threshold represents the level of neighborhood deterioration that must be addressed to close the gap from a social and urban standpoint. The global threshold value is a composition of 45 terminal criteria, each with its own local threshold, that must be evaluated for the specific neighborhood. This example is the most recent in a long list of AHP applications in AM mode in vastly different decision-making fields, such as disaster risk assessment, environmental assessment, medical diagnosis, social responsibility problems, BOCR analysis for the evolution of nuclear energy in Chile over the next 20 years, and many others (see the list of projects in the Appendix).
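A heavily hedged sketch of the aggregation step implied by the abstract follows: each terminal criterion carries an AHP priority and a local threshold, and the global threshold is taken as the priority-weighted composition of the local values. The weights, thresholds, and scores below are invented for illustration and are not the USDI model's actual numbers.

```python
# Sketch (assumed structure, not the actual USDI model): compose 45 local
# thresholds into a global threshold using AHP global priorities, then compare
# the neighborhood's composed score against it.
import numpy as np

rng = np.random.default_rng(1)
n_criteria = 45
weights = rng.random(n_criteria)
weights /= weights.sum()                               # AHP global priorities sum to 1
local_thresholds = rng.uniform(0.3, 0.7, n_criteria)   # tolerable values per criterion
local_scores = rng.uniform(0.0, 1.0, n_criteria)       # neighborhood's rated values

global_threshold = weights @ local_thresholds
usdi_value = weights @ local_scores
gap = usdi_value - global_threshold    # positive gap = deterioration to be addressed
print(f"global threshold = {global_threshold:.3f}, USDI = {usdi_value:.3f}, gap = {gap:.3f}")
```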


2007 ◽  
Vol 7 (4) ◽  
pp. 321-340
Author(s):  
A. Masjukov

For bivariate and trivariate interpolation we propose in this paper a set of integrable radial basis functions (RBFs). These RBFs are found as fundamental solutions of appropriate PDEs and they are optimal in a special sense. The condition number of the interpolation matrices as well as the order of convergence of the interpolation are estimated. Moreover, the proposed RBFs provide smooth approximations and approximate fulfillment of the interpolation conditions. This property allows us to avoid the undecidable problem of choosing the right scale parameter for the RBFs. Instead we propose an iterative procedure in which a sequence of improving approximations is obtained by means of a decreasing sequence of scale parameters in an a priori given range. The paper provides a few clear examples of the advantage of the proposed interpolation method.
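The sketch below illustrates the iterative scale-reduction idea with a standard Gaussian RBF rather than the paper's integrable RBFs: each pass approximates the current residual with a smaller scale parameter taken from an a priori range, so no single scale has to be chosen exactly.

```python
# Sketch of residual iteration over a decreasing sequence of RBF scale parameters.
# A plain Gaussian kernel stands in for the paper's integrable RBFs.
import numpy as np

def rbf_pass(x, y, x_eval, scale):
    """One RBF pass: solve for weights on the nodes, evaluate at nodes and x_eval."""
    A = np.exp(-((x[:, None] - x[None, :]) / scale) ** 2)
    w = np.linalg.solve(A + 1e-8 * np.eye(len(x)), y)     # mild regularization
    A_eval = np.exp(-((x_eval[:, None] - x[None, :]) / scale) ** 2)
    return A @ w, A_eval @ w

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 10, 40))
y = np.sin(x) + 0.05 * rng.normal(size=x.size)
x_eval = np.linspace(0, 10, 400)

approx_nodes = np.zeros_like(y)
approx_eval = np.zeros_like(x_eval)
for scale in [4.0, 2.0, 1.0, 0.5]:          # a priori given, decreasing scale range
    residual = y - approx_nodes
    fit_nodes, fit_eval = rbf_pass(x, residual, x_eval, scale)
    approx_nodes += fit_nodes
    approx_eval += fit_eval

print("max error at the nodes:", np.max(np.abs(y - approx_nodes)))
```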


Interpolation methods and curve fitting represent such a broad problem that each individual interpolation task is exceptional and requires a specific solution. The PNC method is one such novel tool, with all its pros and cons. The user has to decide which interpolation method is best in a given situation; the choice is theirs, if any choice exists. The presented method offers a new possibility for curve fitting and interpolation when specific data (for example, a handwritten symbol or character) comes with no rules suitable for polynomial interpolation. This chapter consists of two generalizations: a generalization of the previous MHR method with various node combinations, and a generalization of linear interpolation with different (non-basic) probability distribution functions and node combinations. This probabilistic view is a novel approach to the problem of modeling and interpolation. Computer vision and pattern recognition require appropriate methods of shape representation and curve modeling.
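As an illustration of the general direction (not the chapter's exact formulas), the sketch below generalizes two-node linear interpolation by replacing the blending parameter α with γ = F(α), where F is some distribution-like function on [0, 1]; different choices of F and node combinations yield different curve shapes.

```python
# Sketch: piecewise interpolation between consecutive nodes with a blending
# function gamma = F(alpha). F(a) = a recovers ordinary linear interpolation;
# the other choices of F below are arbitrary illustrative distributions.
import numpy as np

def blended_interpolation(x_nodes, y_nodes, x_query, F=lambda a: a):
    """Two-node interpolation with blending function gamma = F(alpha)."""
    x_query = np.atleast_1d(x_query).astype(float)
    idx = np.clip(np.searchsorted(x_nodes, x_query) - 1, 0, len(x_nodes) - 2)
    x0, x1 = x_nodes[idx], x_nodes[idx + 1]
    y0, y1 = y_nodes[idx], y_nodes[idx + 1]
    alpha = (x_query - x0) / (x1 - x0)
    gamma = F(alpha)
    return (1 - gamma) * y0 + gamma * y1

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 0.8, 0.9, 2.0])

print(blended_interpolation(x, y, 1.5))                                   # F(a) = a
print(blended_interpolation(x, y, 1.5, F=lambda a: a ** 2))               # F(a) = a^2
print(blended_interpolation(x, y, 1.5, F=lambda a: np.sin(a * np.pi / 2)))
```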


1988 ◽  
Vol 129 ◽  
pp. 353-353
Author(s):  
Jeanne Sauber ◽  
Thomas H. Jordan ◽  
Gregory C. Beroza ◽  
Thomas A. Clark ◽  
Michael Lisowski

To accommodate the relative motion across the North American-Pacific plate boundary predicted by global plate solutions, significant deformation on faults other than the San Andreas is necessary. In central California, this deformation is thought to include distributed compression perpendicular to the San Andreas as well as right-lateral strike-slip motion parallel to the San Andreas on faults such as the San Gregorio/Hosgri system. A self-consistent set of VLBI observations from experiments beginning in October 1982 is used to determine the vector rate of change of station position at central California VLBI sites Ovro, Mojave, Vandenberg, Fort Ord, Presidio, and Point Reyes. To estimate VLBI station positions, a procedure is used that minimizes the uncertainties in defining a reference frame by including a priori geologic and geodetic information. The vector rate of change of station positions provides constraints on the integrated deformation rates between stations. Geologic and geophysical data suggest that the rate and mode of deformation varies on both local and regional scales. Thus, the VLBI derived results are interpreted in the context of an overall tectonic framework by examining geologic and ground-based geodetic data.


Perfusion ◽  
2000 ◽  
Vol 15 (6) ◽  
pp. 485-494 ◽  
Author(s):  
J W Mulholland ◽  
W Massey ◽  
J C Shelton

Blood is exposed to various dynamic forces during cardiopulmonary bypass (CPB). Understanding the damaging nature of these forces is paramount for research and development of the CPB circuit. The object of this study was to identify the most damaging dynamic non-physiological forces and then quantify this damage. A series of in vitro experiments simulated the different combinations of dynamic forces experienced during CPB while damage to the blood was closely monitored. A combination of air interface (a) and negative pressure (P) caused the greatest rate of change in plasma Hb (ΔpHb) (4.94 × 10⁻³ mg/dl/s), followed by negative pressure and then an air interface. Shear stresses, positive pressures, wall impact forces and a blood-nonendothelial surface caused the least damage (0.26 × 10⁻³ mg/dl/s). An air interface showed no threshold value for blood damage, with the relationship between the size of the interface and the blood damage modelled by a second-order polynomial. However, negative pressure did exhibit a threshold value at -120 mmHg, beyond which there was a linear relationship. Investigating the reasons for the increased blood trauma caused by the low-pressure suction (LPS) system makes it clear that research into minimizing or completely avoiding certain forces must be the next step in advancing extracorporeal technology.
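The two reported relationships can be sketched with a simple curve fit, using invented numbers rather than the study's measurements: a second-order polynomial for damage rate against air-interface size, and a linear fit applied only beyond the roughly -120 mmHg negative-pressure threshold.

```python
# Sketch of the two fits described in the abstract, with fabricated data points.
import numpy as np

# Hypothetical measurements: air-interface size (arbitrary units) vs delta-pHb rate.
air_size = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
dpHb_air = np.array([0.3, 0.6, 1.2, 2.1, 3.4, 5.0]) * 1e-3   # mg/dl/s
quad = np.polyfit(air_size, dpHb_air, 2)      # second-order polynomial, no threshold
print("quadratic coefficients:", quad)

# Hypothetical negative-pressure data: flat below the threshold, linear beyond it.
pressure = np.array([-40.0, -80.0, -120.0, -160.0, -200.0, -240.0])   # mmHg
dpHb_neg = np.array([0.3, 0.3, 0.3, 1.1, 1.9, 2.7]) * 1e-3
threshold = -120.0
beyond = pressure <= threshold
slope, intercept = np.polyfit(pressure[beyond], dpHb_neg[beyond], 1)
print(f"beyond {threshold:.0f} mmHg: dpHb ~ {slope:.2e} * P + {intercept:.2e}")
```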

