A structured model for evaluating countermeasures at highway–railway grade crossings

2005 ◽  
Vol 32 (4) ◽  
pp. 627-635 ◽  
Author(s):  
Young-Jin Park ◽  
Frank F Saccomanno

Various countermeasures can be introduced to reduce collisions at highway–railway grade crossings. These countermeasures may take different forms, such as passive and (or) active driver warning devices; supplementary traffic controls such as four-quadrant barriers, wayside horns, and closed-circuit television (CCTV) monitoring; illumination; signage; and highway speed limits. In this research, we present a structured model that makes use of data mining techniques to estimate the effect of changes in countermeasures on the expected number of collisions at a given crossing. This model serves as a decision-support tool for the evaluation and development of cost-effective and practicable safety programs at highway–railway grade crossings. The use of data mining techniques helps to resolve many of the problems associated with conventional statistical models used to predict the expected number of collisions for a given type of crossing. Statistical models introduce biases that limit their ability to fully represent the relationship between selected countermeasures and resultant collisions for a mix of crossing attributes. This paper makes use of Canadian inventory and collision data to illustrate the potential merits of the proposed model in providing decision support.

Key words: highway–railway grade crossing, collision prediction model, countermeasures, Poisson regression.
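The abstract does not name the specific data mining algorithms used. As a purely illustrative sketch, the following snippet shows one way a fitted collision-frequency model could estimate the effect of toggling a countermeasure: predict expected collisions with the countermeasure flag off and on, and compare. All field names, the synthetic data, and the choice of a random forest are assumptions, not the authors' model.

```python
# Hypothetical sketch: estimating the effect of a countermeasure change on
# expected collisions with a tree-based model (not the authors' actual model).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic crossing inventory: traffic exposure, train volume, and a
# binary "gates installed" countermeasure flag (all names are assumptions).
n = 500
X = np.column_stack([
    rng.lognormal(7, 1, n),   # AADT (highway traffic)
    rng.poisson(20, n),       # daily train movements
    rng.integers(0, 2, n),    # gates installed? 0/1
])
# Synthetic collision counts that decrease when gates are present.
lam = np.exp(-5 + 0.5 * np.log(X[:, 0]) + 0.03 * X[:, 1] - 0.6 * X[:, 2])
y = rng.poisson(lam)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Effect of adding gates at a crossing: predict with the flag off vs. on.
crossing = np.array([[3000.0, 25.0, 0.0]])
with_gates = crossing.copy()
with_gates[0, 2] = 1.0
print("expected collisions without gates:", model.predict(crossing)[0])
print("expected collisions with gates:   ", model.predict(with_gates)[0])
```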

Author(s):  
Neeti Sangwan ◽  
Naveen Dahiya

Recommendation making is an important part of the information and e-commerce ecosystem. Recommender systems are a powerful method of filtering large amounts of information to provide relevant choices to end users. To provide recommendations to users, efficient and cost-effective methods need to be introduced. Collaborative filtering is an emerging recommendation technique that applies data mining to the filtering task. This chapter presents a classification framework on the use of data mining techniques in collaborative filtering to extract the best recommendations for users on the basis of their interests.
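As a minimal illustration of the collaborative filtering idea the chapter builds on (not the chapter's own framework), the sketch below predicts a missing rating as a similarity-weighted average of other users' ratings:

```python
# Minimal user-based collaborative filtering sketch (illustrative only).
import numpy as np

# Rows = users, columns = items; 0 means "not yet rated".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine(u, v):
    mask = (u > 0) & (v > 0)            # compare only co-rated items
    if not mask.any():
        return 0.0
    u, v = u[mask], v[mask]
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def predict(user, item):
    """Similarity-weighted average of other users' ratings for `item`."""
    sims, vals = [], []
    for other in range(ratings.shape[0]):
        if other != user and ratings[other, item] > 0:
            sims.append(cosine(ratings[user], ratings[other]))
            vals.append(ratings[other, item])
    if not sims or sum(sims) <= 0:
        return 0.0                      # no similar raters to draw on
    return float(np.average(vals, weights=sims))

print(predict(user=0, item=2))  # predicted rating for user 0 on item 2
```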


2013 ◽  
Vol 19 (2) ◽  
pp. 121 ◽  
Author(s):  
Peyman Rezaei Hachesu ◽  
Maryam Ahmadi ◽  
Somayyeh Alizadeh ◽  
Farahnaz Sadoughi

Author(s):  
Naveen Dahiya ◽  
Vishal Bhatnagar ◽  
Manjeet Singh ◽  
Neeti Sangwan

Data mining has proven to be an important technique for efficient information extraction, classification, clustering, and prediction of future trends from a database. The valuable properties of data mining have been put to use in many applications. One such application is the Software Development Life Cycle (SDLC), where researchers have made effective use of data mining techniques. An exhaustive survey on the application of data mining in SDLC has not been done in the past. In this chapter, the authors carry out an in-depth survey of the existing literature on the application of data mining in SDLC and propose a framework that classifies the work done by various researchers, identifies the prominent data mining techniques used in the various phases of SDLC, and paves the way for future research in this emerging area.


Author(s):  
ThippaReddy Gadekallu ◽  
Bushra Kidwai ◽  
Saksham Sharma ◽  
Rishabh Pareek ◽  
Sudheer Karnam

Weather forecasting is a vital application in meteorology and has been one of the most scientifically and technologically challenging problems around the world in the last century. In this chapter, the authors investigate the use of data mining techniques in forecasting maximum temperature, rainfall, evaporation, and wind speed. This was carried out using decision tree, naive Bayes, random forest, and k-nearest neighbors (IBk) algorithms on meteorological data collected between 2013 and 2014 from the city of Delhi. The performances of these algorithms were compared using standard performance metrics, and the algorithm that gave the best results was used to generate classification rules for the mean weather variables. The results show that, given enough case data, data mining techniques can be used for weather forecasting and climate change studies.
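The chapter reports using WEKA-style implementations (IBk is WEKA's k-nearest-neighbours classifier). A rough scikit-learn sketch of the same kind of comparison is shown below; the synthetic features and labels stand in for the Delhi meteorological data, which are not reproduced here:

```python
# Rough sketch of the classifier comparison described above, using
# scikit-learn on synthetic stand-in data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
# Placeholder features: e.g. humidity, pressure, wind speed, prior-day max temp.
X = rng.normal(size=(300, 4))
# Placeholder target: e.g. a discretised "rain tomorrow" label.
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=300) > 0).astype(int)

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "naive Bayes": GaussianNB(),
    "random forest": RandomForestClassifier(random_state=0),
    "k-NN (IBk analogue)": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # standard accuracy metric
    print(f"{name:22s} mean accuracy = {scores.mean():.3f}")
```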


Author(s):  
Sunny Sharma ◽  
Manisha Malhotra

Web usage mining is the use of data mining techniques to analyze user behavior in order to better serve the needs of the user. This process of personalization uses a set of techniques and methods for discovering the linking structure of information on the web. The goal of web personalization is to improve the user experience by mining meaningful information and presenting the retrieved information in the way the user intends. The arrival of big data has raised novel issues for the personalization community. This chapter provides an overview of personalization and big data and identifies challenges related to web personalization with respect to big data. It also presents some approaches and models to bridge the gap between big data and web personalization. Further, this research highlights additional opportunities that big data brings to web personalization.


Author(s):  
Chitrasen Samantra ◽  
Saurav Datta ◽  
Siba Sankar Mahapatra

Recently, competition in the global marketplace has focused enterprises' attention on securing the highest-quality, most cost-effective components and materials, consistently delivered on time. This objective can only be achieved by establishing long-term, close working relationships with suppliers who adopt a proper quality philosophy. Supplier quality assurance is the confidence in a supplier's ability to deliver a commodity or service that satisfies the customer's needs. It can be achieved through an interactive relationship between the customer and the supplier; it aims at ensuring that the product is a suitable fit to the customer's requirements with little or no adjustment or inspection. In the present context, the study develops a decision-making framework to assure as well as assess suppliers' existing quality philosophy, current policy, and related practices. Interval-valued fuzzy set (IVFS) theory has been adopted to develop such an evaluation model.
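The abstract does not give the paper's exact IVFS formulation. The sketch below illustrates only the basic idea: each criterion rating is an interval of membership values, intervals are combined by a weighted mean, and suppliers are ranked by the interval midpoint. The weights, ratings, and ranking rule are all illustrative assumptions:

```python
# Illustrative interval-valued fuzzy aggregation (not the paper's exact
# model): each criterion rating is an interval [lower, upper] of membership.
criteria_weights = [0.5, 0.3, 0.2]  # quality policy, delivery, practices (assumed)

suppliers = {
    "supplier A": [(0.6, 0.8), (0.5, 0.7), (0.7, 0.9)],
    "supplier B": [(0.4, 0.6), (0.7, 0.9), (0.5, 0.6)],
}

def aggregate(intervals, weights):
    """Weighted mean of interval ratings, endpoint by endpoint."""
    lo = sum(w * a for w, (a, _) in zip(weights, intervals))
    hi = sum(w * b for w, (_, b) in zip(weights, intervals))
    return lo, hi

for name, ratings in suppliers.items():
    lo, hi = aggregate(ratings, criteria_weights)
    midpoint = (lo + hi) / 2          # simple defuzzification for ranking
    print(f"{name}: [{lo:.2f}, {hi:.2f}]  score={midpoint:.2f}")
```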


2012 ◽  
Vol 39 (3) ◽  
pp. 192 ◽  
Author(s):  
Michael Bode ◽  
Karl E. C. Brennan ◽  
Keith Morris ◽  
Neil Burrows ◽  
Neville Hague

Context Exclosure fences are widely used to reintroduce locally extinct animals. These fences function either as permanent landscape-scale areas free from most predators, or as small-scale temporary acclimatisation areas from which newly translocated individuals are 'soft released' into the wider landscape. Existing research can help managers identify the best design for their exclosure fence, but there are currently no methods available to help identify the optimal location for these exclosures in the local landscape (e.g. within a property). Aims We outline a flexible decision-support tool that can help managers choose the best location for a proposed exclosure fence. We applied this method to choose the site of a predator-exclusion fence within the proposed Lorna Glen (Matuwa) Conservation Park in the rangelands of central Western Australia. Methods The decision was subject to a set of economic, ecological and political constraints that were applied sequentially. The final exclosure fence location, chosen from among those sites that satisfied the constraints, optimised conservation outcomes by maximising the area enclosed. Key results From a prohibitively large set of potential exclosure locations, the series of constraints reduced the number of candidates to 32. When ranked by total area enclosed, one exclosure location was clearly superior. Conclusions By describing the decision-making process explicitly and quantitatively, and systematically considering each of the candidate solutions, our approach identifies an efficient exclosure fence location via a repeatable and transparent process. Implications The construction of an exclusion fence is an expensive management option, and managers therefore need to demonstrate convincingly that it offers a high expected return on investment. A systematic approach to choosing the location of an exclosure fence provides managers with a decision that can be justified to funding sources and stakeholders.
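The decision rule described here (apply constraints sequentially, then maximise enclosed area among the survivors) can be sketched in a few lines. The candidate data and constraint thresholds below are synthetic placeholders, not the study's spatial data:

```python
# Sketch of the sequential constraint-then-maximise decision rule described
# above, on synthetic candidate fence locations (all fields are assumptions).
import numpy as np

rng = np.random.default_rng(2)
n = 10_000  # a large initial candidate set, as in the study

candidates = {
    "area_ha": rng.uniform(100, 5000, n),     # area enclosed
    "cost": rng.uniform(0.5e6, 5e6, n),       # construction cost
    "habitat_ok": rng.random(n) > 0.3,        # ecological suitability
    "tenure_ok": rng.random(n) > 0.5,         # political/tenure constraint
}

# Apply the constraints sequentially, as the abstract describes.
feasible = np.ones(n, dtype=bool)
feasible &= candidates["cost"] <= 2e6         # economic constraint (assumed cap)
feasible &= candidates["habitat_ok"]          # ecological constraint
feasible &= candidates["tenure_ok"]           # political constraint
print("candidates remaining:", feasible.sum())

# Among feasible sites, choose the one maximising enclosed area.
idx = np.flatnonzero(feasible)
best = idx[np.argmax(candidates["area_ha"][idx])]
print("best site:", best, "area (ha):", round(candidates["area_ha"][best], 1))
```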


2003 ◽  
Vol 1856 (1) ◽  
pp. 125-135 ◽  
Author(s):  
Sravanthi Konduri ◽  
Samuel Labi ◽  
Kumares C. Sinha

Incident prediction models are presented for the Interstate 80/Interstate 94 (Borman Expressway in northwestern Indiana) and Interstate 465 (northeastern Indianapolis, Indiana) freeway sections, developed as a function of traffic volume, truck percentage, and weather. Separate models were developed for all incidents and for noncrash incidents. Three model types were considered (Poisson regression, negative binomial regression, and nonlinear regression), and the results were compared based on the magnitudes and signs of the model parameter estimates and t-statistics. Least-squares estimation and maximum-likelihood methods were used to estimate the model parameters. Data from the Indiana Department of Transportation and the Indiana Climatology Database were used to establish the relationships. For a given section and incident category, the results from the Poisson and negative binomial models were found to be consistent. It was observed that, unlike section length, traffic volume is nonlinearly related to incidents, and therefore these two variables have to be considered as separate terms in the modeling process. Truck percentage was found to be a statistically significant factor affecting incident occurrence. It was also found that the weather variable (rain and snow) was negatively correlated with incidents. The freeway incident models developed constitute a useful decision-support tool for the implementation of new freeway patrol systems or for the expansion of existing ones. They are also useful for simulating incident occurrences with a view to identifying elements of cost-effective freeway patrol strategies (patrol deployment policies, fleet size, crew size, and beat routes).
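As a hedged sketch of the model comparison described above, the snippet below fits Poisson and negative binomial count models with statsmodels on synthetic stand-in data and prints the parameter estimates and t-statistics; the variables and coefficients are placeholders, not the paper's results:

```python
# Sketch: fitting and comparing Poisson and negative binomial incident
# models, as the paper does, using statsmodels on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 400
traffic = rng.uniform(20_000, 120_000, n)   # volume per section (assumed units)
truck_pct = rng.uniform(5, 40, n)
rain = rng.integers(0, 2, n)                # weather indicator

# Synthetic incident counts: nonlinear in traffic volume, as the paper found.
lam = np.exp(-8 + 0.8 * np.log(traffic) + 0.02 * truck_pct - 0.2 * rain)
incidents = rng.poisson(lam)

X = sm.add_constant(np.column_stack([np.log(traffic), truck_pct, rain]))
poisson_fit = sm.GLM(incidents, X, family=sm.families.Poisson()).fit()
negbin_fit = sm.GLM(incidents, X, family=sm.families.NegativeBinomial()).fit()

# Compare magnitudes, signs, and t-statistics of the estimates, as in the paper.
print(poisson_fit.params.round(3), poisson_fit.tvalues.round(2))
print(negbin_fit.params.round(3), negbin_fit.tvalues.round(2))
```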


Author(s):  
Young-Jin Park ◽  
Frank F. Saccomanno

Various countermeasures can be introduced to reduce collisions at highway–railway grade crossings. Existing improvements to crossings include the installation of flashing lights or gates, the addition of extra warning devices such as four-quadrant barriers or wayside horns, and the enforcement of speed limits on the approaching highway. Statistical models are needed to ensure that countermeasures introduced at a given crossing are both cost-effective and practicable. However, in large part because of issues of collinearity, poor statistical significance, and parametric bias, many existing statistical models are simple in structure and feature few statistically significant explanatory variables. Accordingly, they fail to reflect the full gamut of factor inputs that explain variation in collision frequency at individual crossings over a given period of time. Before statistical models can be used to investigate the cost-effectiveness of specific countermeasures, models must be developed that more fully reflect the complex relationships linking a specific countermeasure to collision occurrence. This study presents a sequential modeling approach based on data mining and statistical methods to estimate the main and interactive effects of introducing countermeasures at individual grade crossings. This paper makes use of Canadian inventory and collision data to illustrate the potential merits of the model in decision support.
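The abstract does not detail the sequential approach. One plausible, purely illustrative reading is a two-stage pipeline: a data mining step (here a shallow decision tree) partitions crossings into homogeneous groups, and a statistical step (a Poisson regression) then estimates the countermeasure effect within each group. Every field name and modelling choice below is an assumption:

```python
# Illustrative two-stage "data mining then statistics" sketch (a plausible
# reading of the sequential approach, not the authors' code).
import numpy as np
import statsmodels.api as sm
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(4)
n = 600
trains = rng.poisson(15, n).astype(float)     # daily trains (assumed field)
aadt = rng.lognormal(7, 1, n)                 # highway traffic
gates = rng.integers(0, 2, n).astype(float)   # countermeasure flag
lam = np.exp(-4 + 0.5 * np.log(aadt) + 0.04 * trains - 0.5 * gates)
collisions = rng.poisson(lam)

# Stage 1 (data mining): a shallow tree partitions crossings into leaves.
features = np.column_stack([trains, aadt])
tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(features, collisions)
leaf = tree.apply(features)

# Stage 2 (statistics): per-leaf Poisson GLM isolates the countermeasure effect.
for leaf_id in np.unique(leaf):
    rows = leaf == leaf_id
    if len(np.unique(gates[rows])) < 2:
        continue  # cannot estimate the effect in this leaf
    X = sm.add_constant(gates[rows])
    fit = sm.GLM(collisions[rows], X, family=sm.families.Poisson()).fit()
    print(f"leaf {leaf_id}: gate coefficient = {fit.params[1]:.3f}")
```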

