Mapping similarities in temporal parking occupancy behavior based on city-wide parking meter data

2018 ◽  
Vol 1 ◽  
pp. 1-5
Author(s):  
Fabian Bock ◽  
Karen Xia ◽  
Monika Sester

The search for a parking space is a severe and stressful problem for drivers in many cities. The provision of maps with parking space occupancy information assists drivers in avoiding the most crowded roads at certain times. Since parking occupancy follows a repetitive daily and weekly pattern, typical occupancy patterns can be extracted from historical data.

In this paper, we analyze city-wide parking meter data from Hannover, Germany, for a full year. We describe an approach of clustering these parking meters to reduce the complexity of this parking occupancy information and to reveal areas with similar parking behavior. The parking occupancy at every parking meter is derived from the timestamp of ticket payment and the validity period of the parking tickets. The similarity of the parking meters is computed as the mean-squared deviation of the average daily patterns in parking occupancy at the parking meters. Based on this similarity measure, hierarchical clustering is applied. The number of clusters is determined with the Davies-Bouldin index and the Silhouette index.

Results show that, after extensive data cleansing, the clustering leads to three clusters representing typical daily parking occupancy patterns. These clusters differ mainly in the hour of maximum occupancy. In addition, the locations of the parking meter clusters, although computed only from temporal similarity, also show clear spatial distinctions between clusters.
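A minimal sketch of the clustering pipeline described above, assuming an array `profiles` of average hourly occupancy per meter; the data here are random placeholders and all parameter choices are illustrative, not the paper's settings.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import silhouette_score, davies_bouldin_score

# profiles: (n_meters, 24) array of average hourly occupancy rates per meter
# (placeholder data; the paper derives these from ticket payment timestamps
# and ticket validity periods over a full year)
rng = np.random.default_rng(0)
profiles = rng.random((50, 24))

# Pairwise distance as the mean-squared deviation of the daily patterns
dist_condensed = pdist(profiles, metric="sqeuclidean") / profiles.shape[1]

# Agglomerative (hierarchical) clustering on the precomputed distances
Z = linkage(dist_condensed, method="average")

# Choose the number of clusters using the Silhouette and Davies-Bouldin indices
for k in range(2, 8):
    labels = fcluster(Z, t=k, criterion="maxclust")
    sil = silhouette_score(squareform(dist_condensed), labels, metric="precomputed")
    dbi = davies_bouldin_score(profiles, labels)
    print(f"k={k}: silhouette={sil:.3f}, Davies-Bouldin={dbi:.3f}")
```

In this kind of setup, a higher silhouette value and a lower Davies-Bouldin value both point toward a better-separated clustering.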

2020 ◽  
Vol 30 (Supplement_5) ◽  
Author(s):  
C Quercioli ◽  
G A Carta ◽  
G Cevenini ◽  
G Messina ◽  
N Nante ◽  
...  

Abstract Background Careful scheduling of elective surgery Operating Rooms (ORs) is crucial for their efficient use, to avoid low/over-utilization and staff overtime. Accurate estimation of procedure duration is essential to improve OR scheduling; therefore, analysis of historical data on surgical times is fundamental to OR management. We analyzed the effect, in a real setting, of an OR scheduling model based on estimated optimum surgical time in improving OR efficiency and decreasing the risk of overtime.

Methods We studied all the 2014-2019 elective surgery sessions (3,758 sessions, 12,449 interventions) of a district general hospital in Siena's Province, Italy. The hospital had 3 ORs open 5 days/week, 08:00-14:00. Surgery specialties were general surgery, orthopedics, gynecology and urology. Based on a pilot study conducted in 2016, which estimated a 5 times greater risk of OR overtime for sessions with a surgical time (incision-suture) >200 minutes, from 2017 all the ORs were scheduled using a maximum surgical time of 200 minutes, calculated by summing the mean surgical times per intervention and surgeon (obtained from 2014-2016 data). We carried out multivariate logistic regression to calculate the probability of OR overtime (of 15 and 30 minutes) for the periods 2014-2016 and 2017-2019, adjusting for raw OR utilization.

Results The 2017-2019 risk of an OR overtime of 15 minutes decreased by 25% compared to the 2014-2016 period (OR = 0.75, 95% CI = 0.618-0.902, p = 0.003); the risk of an OR overtime of 30 minutes decreased by 33% (OR = 0.67, 95% CI = 0.543-0.831, p < 0.001). Mean raw OR utilization increased from 62% to 66% (p < 0.001). The mean number of interventions per surgery session increased from 3.1 to 3.5 (p < 0.001).

Conclusions This study has shown that an analysis of historical data and an estimate of the optimal surgical time per surgical session can help avoid both under- and over-utilization of the ORs and therefore increase OR efficiency. Key messages An accurate analysis of surgical procedure duration is crucial to optimize operating room utilization. A data-based approach can improve OR management efficiency without extra resources.
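A minimal sketch of the kind of adjusted logistic regression described above, assuming a session-level data frame with hypothetical column names (overtime_15, period_2017_2019, raw_utilization); the data generated here are random placeholders, not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative session-level data; variable names are assumptions for this sketch.
# overtime_15:      1 if the session ran at least 15 minutes over, 0 otherwise
# period_2017_2019: 1 for sessions scheduled under the 200-minute rule
# raw_utilization:  raw OR utilization of the session (fraction of the 6-hour slot)
rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "overtime_15": rng.integers(0, 2, n),
    "period_2017_2019": rng.integers(0, 2, n),
    "raw_utilization": rng.uniform(0.3, 0.9, n),
})

# Logistic regression of overtime risk on period, adjusted for raw utilization
model = smf.logit("overtime_15 ~ period_2017_2019 + raw_utilization", data=df).fit()

# Odds ratio and 95% CI for the scheduling-policy period
params = model.params
conf = model.conf_int()
print("OR:", np.exp(params["period_2017_2019"]))
print("95% CI:", np.exp(conf.loc["period_2017_2019"]).values)
```

Exponentiating the period coefficient yields the odds ratio reported in the abstract's style (e.g., OR = 0.75 would correspond to a 25% reduction in the odds of overtime).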


1983 ◽  
Vol 104 ◽  
pp. 185-186
Author(s):  
M. Kalinkov ◽  
K. Stavrev ◽  
I. Kuneva

An attempt is made to establish the membership of Abell clusters in superclusters of galaxies. The relation is used to calibrate the distances to the clusters of galaxies with two redshift estimates. One is m10, the magnitude of the tenth-ranked galaxy, and the other is the "mean population," P, defined in terms of p = 40, 65, 105 … galaxies for richness groups 0, 1, 2 …, and the apparent radius r in degrees. The first iteration for redshift, z1, is obtained from m10 alone (Eq. 1). The standard deviation for Eq. (1) is 0.105, the number of clusters with known velocities is 342, and the correlation coefficient between observed and fitted values is 0.921. With zi from Eq. (1), we define Cartesian galactic coordinates Xi = Ri h−1 cosBi cosLi, Yi = Ri h−1 cosBi sinLi, Zi = Ri h−1 sinBi for each Abell cluster, i = 1, …, 2712, where Ri is the distance to the cluster (Mpc) and H0 = 100 h km s−1 Mpc−1.
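A short helper that transcribes the Cartesian galactic coordinate definitions quoted above; the function name and the example values are purely illustrative.

```python
import numpy as np

def cartesian_galactic(R_mpc, B_deg, L_deg, h=1.0):
    """Cartesian galactic coordinates of a cluster, following the abstract:
    X = R h^-1 cos(B) cos(L), Y = R h^-1 cos(B) sin(L), Z = R h^-1 sin(B),
    with R the distance in Mpc and H0 = 100 h km/s/Mpc."""
    B = np.radians(B_deg)
    L = np.radians(L_deg)
    X = R_mpc / h * np.cos(B) * np.cos(L)
    Y = R_mpc / h * np.cos(B) * np.sin(L)
    Z = R_mpc / h * np.sin(B)
    return X, Y, Z

# Example: a hypothetical cluster at 300 Mpc, galactic latitude 30 deg, longitude 120 deg
print(cartesian_galactic(300.0, 30.0, 120.0))
```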


2018 ◽  
Vol 7 (4.30) ◽  
pp. 281
Author(s):  
Nazirah Ramli ◽  
Siti Musleha Ab Mutalib ◽  
Daud Mohamad

This paper proposes an enhanced fuzzy time series (FTS) prediction model that can retain information at various levels of confidence throughout the forecasting procedure. The forecasting accuracy is assessed based on the similarity between the fuzzified historical data and the fuzzy forecast values; no defuzzification process is involved in the proposed method. The frequency density method is used to partition the interval, and an area-and-height type of similarity measure is utilized to obtain the forecasting accuracy. The proposed model is applied to a numerical example of the unemployment rate in Malaysia. The results show that, on average, 96.9% of the forecast values are similar to the historical data. The forecasting error based on the distance of the similarity measure is 0.031. The forecasting accuracy can be obtained directly from the forecast values in trapezoidal fuzzy number form without undergoing a defuzzification procedure.
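A rough sketch of how a similarity score between a fuzzified historical value and a trapezoidal fuzzy forecast might be computed; the combination of point, area, and height terms below is a simplified stand-in for illustration, not the exact area-and-height measure used in the paper.

```python
import numpy as np

def trapezoid_area(t, height=1.0):
    """Area of a trapezoidal fuzzy number t = (a, b, c, d) with the given height."""
    a, b, c, d = t
    return 0.5 * height * ((d - a) + (c - b))

def similarity(t1, t2, h1=1.0, h2=1.0):
    """Simplified area-and-height style similarity between two trapezoidal
    fuzzy numbers scaled to [0, 1]; an assumed stand-in for the paper's measure."""
    point_term = 1.0 - np.mean(np.abs(np.array(t1) - np.array(t2)))
    area_term = 1.0 - abs(trapezoid_area(t1, h1) - trapezoid_area(t2, h2))
    height_term = 1.0 - abs(h1 - h2)
    return point_term * area_term * height_term

# Example: fuzzified historical value vs. fuzzy forecast (supports normalized to [0, 1])
hist = (0.20, 0.25, 0.30, 0.35)
fcst = (0.22, 0.27, 0.32, 0.37)
print(similarity(hist, fcst))
```

Averaging such a score over all forecast periods gives an accuracy figure of the kind reported above (e.g., 96.9%), without any defuzzification step.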


2013 ◽  
Vol 126 (3) ◽  
pp. 204 ◽  
Author(s):  
Philip A. Cochran ◽  
Mark A. Ross ◽  
Thomas S. Walker ◽  
Trevor Biederman

Record-setting warm temperatures in the upper Midwest during early 2012 resulted in early spawning by the American Brook Lamprey (Lethenteron appendix) in southeastern Minnesota. American Brook Lampreys in a total of five streams in three drainages spawned up to one month earlier than typical. Mean day of year of spawning groups observed in 2012 was significantly different from the mean for groups observed during the period 2002–2010, but mean water temperature was not significantly different. Limited historical data are not sufficient to show an effect of climate change on spawning phenology because some data are confounded with the effects of latitude and year-to-year variability in thermal regime.


Entropy ◽  
2020 ◽  
Vol 22 (3) ◽  
pp. 332 ◽  
Author(s):  
Peter Joseph Mercurio ◽  
Yuehua Wu ◽  
Hong Xie

This paper presents an improved method of applying entropy as a risk measure in portfolio optimization. A new family of portfolio optimization problems, called return-entropy portfolio optimization (REPO), is introduced that simplifies the computation of portfolio entropy using a combinatorial approach. REPO addresses five main practical concerns with mean-variance portfolio optimization (MVPO). Pioneered by Harry Markowitz, MVPO revolutionized the financial industry as the first formal mathematical approach to risk-averse investing. REPO uses a mean-entropy objective function instead of the mean-variance objective function used in MVPO, and it simplifies the portfolio entropy calculation by employing combinatorial generating functions in the optimization objective. REPO and MVPO were compared by emulating competing portfolios over historical data, and REPO significantly outperformed MVPO in a strong majority of cases.
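A minimal sketch of a mean-entropy objective evaluated over historical returns, assuming a simple histogram estimate of the portfolio return distribution; the paper's combinatorial generating-function computation of entropy is not reproduced here, and all names and parameters are illustrative.

```python
import numpy as np
from scipy.stats import entropy

def portfolio_entropy(returns, weights, bins=20):
    """Shannon entropy of the binned distribution of portfolio returns.
    (A plain histogram estimate; the paper computes entropy combinatorially.)"""
    port_returns = returns @ weights
    counts, _ = np.histogram(port_returns, bins=bins)
    probs = counts / counts.sum()
    return entropy(probs[probs > 0])

def mean_entropy_objective(returns, weights, risk_aversion=1.0):
    """Mean-entropy objective: expected return penalized by entropy as the risk term."""
    mean_ret = np.mean(returns @ weights)
    return mean_ret - risk_aversion * portfolio_entropy(returns, weights)

# Example with simulated daily returns for 4 assets
rng = np.random.default_rng(42)
returns = rng.normal(0.0005, 0.01, size=(500, 4))
weights = np.array([0.25, 0.25, 0.25, 0.25])
print(mean_entropy_objective(returns, weights))
```

Maximizing such an objective over the weight simplex plays the role that minimizing variance for a target return plays in MVPO.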


Entropy ◽  
2019 ◽  
Vol 21 (6) ◽  
pp. 588 ◽  
Author(s):  
Tao Zhang ◽  
Shiyuan Wang ◽  
Haonan Zhang ◽  
Kui Xiong ◽  
Lin Wang

As a nonlinear similarity measure defined in a reproducing kernel Hilbert space (RKHS), the correntropic loss (C-Loss) has been widely applied in robust learning and signal processing. However, the highly non-convex nature of the C-Loss results in performance degradation. To address this issue, the convex kernel risk-sensitive loss (KRL) was proposed to measure similarity in the RKHS; it is a risk-sensitive loss defined as the expectation of an exponential function of the squared estimation error. In this paper, a novel nonlinear similarity measure, namely the kernel risk-sensitive mean p-power error (KRP), is proposed by incorporating the mean p-power error into the KRL, yielding a generalization of the KRL measure. The KRP with p = 2 reduces to the KRL and can outperform the KRL when an appropriate p is chosen in robust learning. Some properties of the KRP are presented and discussed. To improve the robustness of the kernel recursive least squares (KRLS) algorithm and reduce its network size, two robust recursive kernel adaptive filters, namely the recursive minimum kernel risk-sensitive mean p-power error algorithm (RMKRP) and its quantized version (QRMKRP), are proposed in the RKHS under the minimum kernel risk-sensitive mean p-power error (MKRP) criterion. Monte Carlo simulations are conducted to confirm the superiority of the proposed RMKRP and its quantized version.
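A hedged, sample-based sketch of a KRP-style loss, assuming a Gaussian kernel of the estimation error and the particular functional form noted in the comments; that form is an assumption for illustration, and the paper's exact definition and normalization may differ.

```python
import numpy as np

def gaussian_kernel(e, sigma=1.0):
    """Gaussian kernel of the estimation error, kappa_sigma(e) = exp(-e^2 / (2 sigma^2))."""
    return np.exp(-e**2 / (2.0 * sigma**2))

def krp_loss(errors, lam=0.5, sigma=1.0, p=2.0):
    """Sample estimate of an assumed kernel risk-sensitive mean p-power error:
    (1 / lam) * mean( exp( lam * (1 - kappa_sigma(e))**(p / 2) ) ).
    With p = 2 this reduces to a kernel risk-sensitive loss of the kind
    described in the abstract (an exponential of a kernel of the error)."""
    k = gaussian_kernel(np.asarray(errors), sigma)
    return np.mean(np.exp(lam * (1.0 - k) ** (p / 2.0))) / lam

# Small errors are penalized gently; outliers saturate because the kernel is
# bounded, which is what gives kernel-based risk-sensitive losses their robustness.
errors = np.array([0.1, -0.2, 0.05, 5.0])   # last entry mimics an outlier
print(krp_loss(errors, lam=0.5, sigma=1.0, p=2.0))
```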


2012 ◽  
Vol 433-440 ◽  
pp. 3910-3917
Author(s):  
Hilary Green ◽  
Nino Kordzakhia ◽  
Ruben Thoplan

In this paper, a modelling methodology that earlier studies applied only to the spot price of electricity or to the demand for electricity separately is extended to the bivariate process of spot price and demand. The suggested model accommodates common idiosyncrasies observed in deregulated electricity markets, such as cyclical trends in price and demand, the occurrence of extreme spikes in prices, and the mean-reversion effect seen as prices settle from extreme values back to the mean level over a short period of time. The paper presents a detailed statistical analysis of historical daily averages of electricity spot prices and the corresponding demand for electricity. The data are obtained from the NSW section of the Australian energy market.
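An illustrative simulation in the spirit of the features listed above (weekly cycle, mean reversion, correlated shocks, rare price spikes); every parameter value is made up, and this is not the model fitted in the paper.

```python
import numpy as np

def simulate_price_demand(n_days=365, seed=0):
    """Toy bivariate simulation of daily electricity spot price and demand with a
    weekly cycle, mean reversion, correlated noise, and occasional price spikes."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_days)
    season = np.sin(2 * np.pi * t / 7)              # weekly cycle
    price_mean, demand_mean = 50.0, 8000.0          # long-run levels (arbitrary)
    kappa_p, kappa_d = 0.3, 0.2                     # mean-reversion speeds
    price, demand = np.empty(n_days), np.empty(n_days)
    price[0], demand[0] = price_mean, demand_mean
    for i in range(1, n_days):
        # correlated shocks for price and demand
        z = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]])
        spike = rng.random() < 0.02                  # rare extreme price spike
        price[i] = (price[i-1]
                    + kappa_p * (price_mean + 5 * season[i] - price[i-1])
                    + 3 * z[0] + (80 if spike else 0))
        demand[i] = (demand[i-1]
                     + kappa_d * (demand_mean + 500 * season[i] - demand[i-1])
                     + 150 * z[1])
    return price, demand

price, demand = simulate_price_demand()
print(price[:5], demand[:5])
```

The mean-reversion terms pull each series back toward its (seasonally shifted) long-run level, so spikes decay within a few days, mimicking the settling behavior described above.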


Author(s):  
Jordan Anaya

GRIMMER (Granularity-Related Inconsistency of Means Mapped to Error Repeats) builds upon the GRIM test and allows for testing whether reported measures of variability are mathematically possible. GRIMMER relies upon the statistical phenomenon that variances display a simple repetitive pattern when the data is discrete, i.e. granular. This observation allows for the generation of an algorithm that can quickly identify whether a reported statistic of any size or precision is consistent with the stated sample size and granularity. My implementation of the test is available at PrePubMed (http://www.prepubmed.org/grimmer) and currently allows for testing variances, standard deviations, and standard errors for integer data. It is possible to extend the test to other measures of variability such as deviation from the mean, or apply the test to non-integer data such as data reported to halves or tenths. The ability of the test to identify inconsistent statistics relies upon four factors: (1) the sample size; (2) the granularity of the data; (3) the precision (number of decimals) of the reported statistic; and (4) the size of the standard deviation or standard error (but not the variance). The test is most powerful when the sample size is small, the granularity is large, the statistic is reported to a large number of decimal places, and the standard deviation or standard error is small (variance is immune to size considerations). This test has important implications for any field that routinely reports statistics for granular data to at least two decimal places because it can help identify errors in publications, and should be used by journals during their initial screen of new submissions. The errors detected can be the result of anything from something as innocent as a typo or rounding error to large statistical mistakes or unfortunately even fraud. In this report I describe the mathematical foundations of the GRIMMER test and the algorithm I use to implement it.
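A simplified GRIM/GRIMMER-style consistency check for integer data, sketched from the description above; it uses only the fact that n(n-1) times the variance of integer data must be an integer, and omits additional constraints (for example, linking the sum of the data to the reported mean) that the full published algorithm exploits.

```python
import math

def grim_check(mean, n, decimals=2):
    """GRIM: the reported mean of n integers must round to `mean` at the given precision."""
    target = round(mean * n)                     # nearest achievable sum of n integers
    return abs(target / n - mean) <= 0.5 * 10**(-decimals)

def grimmer_check_simplified(sd, n, decimals=2):
    """Simplified GRIMMER-style check (not the full published algorithm):
    for integer data, n*(n-1)*variance = n*sum(x^2) - (sum x)^2 is an integer,
    so the range of variances consistent with the rounded SD must contain at
    least one value whose n*(n-1) multiple is an integer."""
    half_ulp = 0.5 * 10**(-decimals)
    lo = n * (n - 1) * (sd - half_ulp)**2
    hi = n * (n - 1) * (sd + half_ulp)**2
    return math.floor(hi) >= math.ceil(lo)       # some integer lies in [lo, hi]

# Example: a reported mean of 3.48 and SD of 1.21 for n = 25, both to two decimals
print(grim_check(3.48, 25), grimmer_check_simplified(1.21, 25))
```

As the description above notes, the check is most discriminating when n is small, the data are coarsely granular, and the statistic is reported to many decimal places, because the admissible interval then contains few (or no) valid values.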

