Modeling HMM Map Matching Using Multi-label Classification

Author(s):  
Atichart Sinsongsuk ◽  
Thapana Boonchoo ◽  
Wanida Putthividhya

Map matching deals with matching GPS coordinates to corresponding points or segments on a road network map. It has various applications in both vehicle navigation and tracking domains. The traditional rule-based approach to the map matching problem yields good matching results. However, its performance depends on the underlying algorithm and the mathematical/statistical models employed. For example, HMM map matching has O(N²) time complexity, where N is the number of states in the underlying Hidden Markov Model. Map matching techniques with a high time complexity are impractical for providing services, especially within time-sensitive applications, because of their slow responsiveness and the substantial computing power required to obtain results. This paper proposes a novel data-driven approach for projecting a GPS trajectory onto a road network. We constructed a supervised-learning classifier using the Multi-Label Classification (MLC) technique and HMM map matching results. Analytically, our approach yields O(N) time complexity, suggesting better running performance in map matching-based applications where response time is the major concern. In addition, our experimental results indicate that we can achieve a Jaccard similarity index of 0.30 and an overlap coefficient of 0.70.
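The O(N²) cost mentioned above comes from the Viterbi recursion at the heart of HMM map matching: every GPS observation requires scoring every transition between the N candidate road segments. A minimal sketch of one such update (the NumPy formulation and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def viterbi_step(prev_scores, trans_log, emit_log):
    """One Viterbi update over N candidate road segments (states).

    prev_scores: (N,) log-probabilities from the previous GPS point
    trans_log:   (N, N) log transition probabilities between segments
    emit_log:    (N,) log emission probabilities for the current GPS point

    The N x N maximisation below is what gives HMM map matching its
    O(N^2) cost per observation.
    """
    scores = prev_scores[:, None] + trans_log   # (N, N) candidate paths
    best_prev = scores.argmax(axis=0)           # backpointers per state
    return scores.max(axis=0) + emit_log, best_prev
```

An MLC classifier, by contrast, scores each of the N candidate segments independently per observation, which is where the O(N) claim comes from.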

2021 ◽  
Vol 67 (4) ◽  
pp. 37-41
Author(s):  
Simon Koblar

Public transport plays a major role in sustainable mobility planning. This is even more obvious at the regional level, where distances are often too long for cycling, so public transport remains the only viable sustainable travel mode. In preparing a regional SUMP (Sustainable Urban Mobility Plan), evaluating accessibility is one of the crucial steps. However, accessibility measurement can be a challenging task. In Slovenia, several studies have measured frequency and access to the closest stop, ignoring travel speed and the destinations that could be reached. The rapid increase in computing power, software development, and the availability of schedule data in GTFS format have opened an opportunity to evaluate accessibility more precisely. We performed an analysis for the Koroška region in Slovenia. Accessibility was measured with OpenTripPlanner, using OpenStreetMap data for the road network and schedules in GTFS format. Travel times were measured in both directions between all inhabited cells in a one-hectare grid and the central settlements of intermunicipal importance. The results of the analysis are important for understanding how many citizens can access settlements of intermunicipal importance by public transport. This will serve as a baseline measure in regional SUMP preparation and will enable future iterations and comparisons. It also enables us to see the gaps in public transport supply and to propose improvements. Open-source tools and open data enable this method to be used in other regions as well.
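The per-cell travel times feed a simple aggregation: how many residents can reach a centre within a given time budget. A minimal sketch of that final step, with illustrative placeholder data (not Koroška figures):

```python
# Each one-hectare cell contributes (population, transit travel time in
# minutes to the nearest settlement of intermunicipal importance);
# None marks cells with no usable public transport connection.
def reachable_population(cells, threshold_min):
    return sum(pop for pop, t in cells if t is not None and t <= threshold_min)

cells = [(120, 25.0), (80, 55.0), (40, None)]  # illustrative values
within_30 = reachable_population(cells, 30)    # only the first cell qualifies
```

Varying the threshold (e.g. 30 vs. 60 minutes) is what exposes the supply gaps the abstract refers to.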


2021 ◽  
pp. 1-16
Author(s):  
Xiaohan Wang ◽  
Pei Wang ◽  
Weilong Chen ◽  
Wangwu Hu ◽  
Long Yang

Many location-based services require a pre-processing step of map matching. Due to errors in the original position data and the complexity of the road network, matching algorithms produce matching errors on complex road networks, which makes the task challenging. Aiming at the low matching accuracy and low efficiency of existing algorithms at Y-shaped intersections and roundabouts, this paper proposes a space-time-based continuous-window average-direction-feature trajectory map matching algorithm (STDA-matching). Specifically, the algorithm not only adaptively generates road network topology data but also obtains more accurate road network relationships. Based on this, the transition probability is calculated using the average direction feature of a continuous window of trajectory points, improving the matching accuracy of the algorithm. Secondly, the algorithm simplifies the trajectory by clustering GPS trajectory aggregation points to improve matching efficiency. Finally, we compare the algorithm with two existing algorithms on a real-world data set. Experimental results show that our algorithm is effective.
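The key transition feature here is the average direction of a continuous window of trajectory points. One way such a feature could be computed and turned into a transition weight (a sketch assuming planar coordinates, not the authors' exact formulation):

```python
import math

def avg_direction(points):
    """Mean heading (radians) over a window of consecutive (x, y) points."""
    dx = sum(b[0] - a[0] for a, b in zip(points, points[1:]))
    dy = sum(b[1] - a[1] for a, b in zip(points, points[1:]))
    return math.atan2(dy, dx)

def direction_weight(points, segment_bearing):
    """Illustrative transition weight: agreement between the window's
    average direction and a candidate road segment's bearing."""
    return max(0.0, math.cos(avg_direction(points) - segment_bearing))
```

Averaging over a window, rather than using a single point-to-point heading, damps GPS jitter, which is what helps at Y-shaped forks where the two branches differ only slightly in bearing.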


2021 ◽  
Vol 19 (3) ◽  
pp. 125-138
Author(s):  
S. Inichinbia ◽  
A.L. Ahmed

This paper presents a rigorous but pragmatic and data-driven approach to the science of making seismic-to-well ties. This pragmatic approach is consistent with the interpreter’s desire to correlate geology to seismic information through the convolution model, together with least-squares matching techniques and statistical measures of fit and accuracy to match the seismic data to the well data. Three wells available on the field provided a chance to estimate the wavelet (both in terms of shape and timing) directly from the seismic and to ascertain the level of confidence that should be placed in the wavelet. The reflections were interpreted clearly as hard sand at H1000 and soft sand at H4000. A synthetic seismogram was constructed and matched to a real seismic trace, and features from the well were correlated to the seismic data. The prime concept in constructing the synthetic is the convolution model, which represents a seismic reflection signal as a sequence of interfering reflection pulses of different amplitudes and polarities but all of the same shape. This pulse shape is the seismic wavelet, which is, formally, the reflection waveform returned by an isolated reflector of unit strength at the target depth. The wavelets are near zero phase. The goal of these seismic-to-well ties was to obtain information on the sediments, calibrate seismic processing parameters, correlate formation tops and seismic reflectors, and derive a wavelet for seismic inversion, among others. Three seismic-to-well ties were done using three partial angle stacks, and basically two formation tops were correlated. Keywords: seismic, well logs, tie, synthetics, angle stacks, correlation
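The convolution model described above fits in a few lines: the synthetic trace is the reflectivity series convolved with the wavelet. A minimal NumPy sketch (the Ricker wavelet is a common zero-phase stand-in; the paper's actual wavelet was estimated from the seismic itself):

```python
import numpy as np

def ricker(f_hz, dt_s, n):
    """Zero-phase Ricker wavelet with peak frequency f_hz, sampled at dt_s."""
    t = (np.arange(n) - n // 2) * dt_s
    a = (np.pi * f_hz * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

def synthetic_trace(reflectivity, wavelet):
    """Convolution model: trace = reflectivity * wavelet (same-length output)."""
    return np.convolve(reflectivity, wavelet, mode="same")
```

A single unit reflector simply reproduces the wavelet at the reflector's position; closely spaced reflectors interfere, which is exactly the behaviour the seismic-to-well tie exploits.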


Author(s):  
Ahmed Gater ◽  
Daniela Grigori ◽  
Mokrane Bouzeghoub

One of the key tasks in the service-oriented architecture that Semantic Web services aim to automate is the discovery of services that can fulfill application or user needs. OWL-S, which is based on the OWL ontology language, is one of the proposals for describing semantic metadata about Web services. The majority of current approaches for matching OWL-S processes take into account only the inputs/outputs service profile. This chapter argues that, in many situations, service matchmaking should also take into account the process model. We present matching techniques that operate on OWL-S process models and allow retrieving, from a given repository, the processes most similar to a query. To do so, the chapter proposes to reduce the problem of process matching to a graph matching problem and to adapt existing algorithms for this purpose. It proposes a similarity measure used to rank the discovered services; this measure captures differences in process structure as well as semantic differences between the inputs/outputs used in the processes.
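Once a process model is encoded as a labelled graph (activities as nodes, flow as edges), even a crude set-based comparison conveys the idea of ranking repository processes against a query; the chapter's actual measure is richer, also weighting semantic input/output similarity. A deliberately simplified illustration:

```python
def graph_similarity(edges_a, edges_b):
    """Jaccard similarity over labelled flow edges of two process graphs."""
    a, b = set(edges_a), set(edges_b)
    return len(a & b) / len(a | b) if a or b else 1.0

# hypothetical process graphs, edges as (source_activity, target_activity)
query   = {("receive_order", "validate"), ("validate", "ship")}
process = {("receive_order", "validate"), ("validate", "bill")}
score = graph_similarity(query, process)  # 1 shared edge out of 3 distinct
```

Ranking the repository then amounts to sorting processes by this score against the query.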


PLoS ONE ◽  
2016 ◽  
Vol 11 (5) ◽  
pp. e0156089 ◽  
Author(s):  
Hongsheng Qi ◽  
Meiqi Liu ◽  
Lihui Zhang ◽  
Dianhai Wang

Literator ◽  
2008 ◽  
Vol 29 (1) ◽  
pp. 21-42 ◽  
Author(s):  
S. Pilon ◽  
M.J. Puttkammer ◽  
G.B. Van Huyssteen

The development of a hyphenator and compound analyser for Afrikaans

The development of two core technologies for Afrikaans, viz. a hyphenator and a compound analyser, is described in this article. As no annotated Afrikaans data existed prior to this project to serve as training data for a machine learning classifier, the core technologies in question were first developed using a rule-based approach. The rule-based hyphenator and compound analyser were evaluated; the hyphenator obtained an f-score of 90.84%, while the compound analyser reached an f-score of only 78.20%. Since these results were somewhat disappointing and/or insufficient for practical implementation, it was decided that a machine learning technique (memory-based learning) would be used instead. Training data for each of the two core technologies was then developed using “TurboAnnotate”, an interface designed to improve the accuracy and speed of manual annotation. The hyphenator developed using machine learning was trained with 39 943 words and reaches an f-score of 98.11%, while the f-score of the compound analyser is 90.57% after being trained with 77 589 annotated words. It is concluded that machine learning (specifically memory-based learning) seems an appropriate approach for developing core technologies for Afrikaans.
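Memory-based learning, as used here, stores every annotated character window and classifies new windows by their nearest stored neighbour. A toy sketch of the hyphenation case (the window size, similarity function, and data are illustrative; the project itself used a dedicated memory-based learner):

```python
def windows(word, size=1):
    """Character windows around each position, padded with '_'."""
    padded = "_" * size + word + "_" * size
    return [(padded[i : i + 2 * size + 1], i) for i in range(len(word))]

def overlap(a, b):
    """Similarity: number of matching characters at the same position."""
    return sum(x == y for x, y in zip(a, b))

def hyphen_points(word, memory):
    """1-NN lookup: memory is a list of (window, may_break_after_centre)."""
    points = []
    for win, i in windows(word):
        _, label = max(memory, key=lambda m: overlap(win, m[0]))
        if label:
            points.append(i)
    return points

# training instances derived from the annotated form "wa-ter"
memory = [("_wa", False), ("wat", True), ("ate", False),
          ("ter", False), ("er_", False)]
```

The appeal of the memory-based approach in this setting is that it needs no hand-written rules, only the TurboAnnotate-style annotated word lists described above.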


2013 ◽  
Vol 33 (2) ◽  
pp. 145-164
Author(s):  
Peili Wu ◽  
Kuien Liu ◽  
Kai Zheng ◽  
Zhiming Ding ◽  
Yuan Tan
