From machine learning to sustainable taxation: GPS traces of trucks circulating in Belgium

Author(s):  
Arnaud Adam ◽  
Isabelle Thomas

<p>Transport geography has always been characterized by a lack of accurate data, leading to surveys that are often based on spatially unrepresentative samples. However, the current deluge of data collected through sensors promises to overcome this scarcity. We consider one example here: since April 1<sup>st</sup> 2016, a GPS tracker has been mandatory in every truck circulating in Belgium for kilometre-tax purposes. Every 30 seconds, this tracker records the position of the truck (along with other information such as speed and direction), enabling the individual taxation of trucks. This contribution uses a one-week exhaustive database covering all trucks circulating in Belgium, in order to understand transport flows within the country as well as the spatial effects of the tax on the circulation of trucks.</p><p>Machine learning techniques are applied to over 270 million GPS points to detect truck stops, transforming the GPS sequences into a complete origin-destination matrix. Machine learning makes it possible to accurately classify stops of different natures (leisure stops, (un)loading areas, or congested roads). Based on this matrix, we first propose an overview of the daily traffic, as well as an evaluation of the number of stops made in every Belgian place. Second, GPS sequences and stops are combined to characterise the sub-trajectories of each truck (first/last miles and transit) by their fiscal charge. This individual characterisation, as well as its variation in space and time, is discussed here: is the individual taxation system always efficient in space and time?</p><p>This contribution helps to better understand the circulation of trucks in Belgium, the places where they stop, and the importance of their locations from a fiscal point of view. What modifications of truck routes would lead to a more sustainable kilometre tax? This contribution illustrates that combining big data and machine learning opens new roads for accurately measuring and modelling transportation.</p>
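As a rough illustration of the stop-detection step described above, the following sketch flags a stop whenever consecutive 30-second fixes stay within a small radius for a minimum dwell time. The radius and duration thresholds, the fix layout, and the dwell-time rule are all illustrative assumptions, not the authors' actual machine learning classifier:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def detect_stops(fixes, radius_m=150.0, min_duration_s=300):
    """Flag a stop whenever consecutive fixes stay within `radius_m`
    of the run's first fix for at least `min_duration_s`.
    Each fix is a (timestamp_s, lat, lon) tuple."""
    stops, i = [], 0
    while i < len(fixes):
        j = i
        while (j + 1 < len(fixes) and
               haversine_m(fixes[i][1], fixes[i][2],
                           fixes[j + 1][1], fixes[j + 1][2]) <= radius_m):
            j += 1
        if fixes[j][0] - fixes[i][0] >= min_duration_s:
            # (start_s, end_s, lat, lon) of the detected stop
            stops.append((fixes[i][0], fixes[j][0], fixes[i][1], fixes[i][2]))
        i = j + 1
    return stops

# A truck idling at one point for ~10 minutes, then jumping far away.
trace = [(t * 30, 50.85, 4.35) for t in range(20)] + [(630, 50.95, 4.45)]
stops = detect_stops(trace)  # one stop covering the stationary run
```

Pairs of consecutive stops per truck would then form the rows of the origin-destination matrix.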

Author(s):  
Giovanni Semeraro ◽  
Pierpaolo Basile ◽  
Marco de Gemmis ◽  
Pasquale Lops

Exploring digital collections to find information relevant to a user’s interests is a challenging task. Information preferences vary greatly across users; therefore, filtering systems must be highly personalized to serve the individual interests of the user. Algorithms designed to solve this problem base their relevance computations on user profiles in which representations of the users’ interests are maintained. The main focus of this chapter is the adoption of machine learning to build user profiles that capture user interests from documents. These profiles are used for intelligent document filtering in digital libraries. This work suggests exploiting the knowledge stored in machine-readable dictionaries to obtain accurate user profiles that describe user interests by referring to concepts in those dictionaries. The main aim of the proposed approach is to show a real-world scenario in which the combination of machine learning techniques and linguistic knowledge helps to achieve intelligent document filtering.
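A minimal sketch of profile-based filtering, assuming a plain bag-of-words profile and cosine similarity rather than the chapter's dictionary-based concept representation; the documents and threshold are hypothetical:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    num = sum(a[w] * b.get(w, 0) for w in a)
    den = (math.sqrt(sum(v * v for v in a.values())) *
           math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def build_profile(liked_docs):
    """User profile = aggregated term counts over documents the user liked."""
    profile = Counter()
    for doc in liked_docs:
        profile.update(doc.lower().split())
    return profile

def filter_docs(profile, candidates, threshold=0.2):
    """Keep candidates whose similarity to the profile passes the threshold."""
    scored = [(cosine(Counter(d.lower().split()), profile), d) for d in candidates]
    return [d for s, d in scored if s >= threshold]

profile = build_profile(["machine learning for text filtering",
                         "learning user interests from documents"])
hits = filter_docs(profile, ["deep learning for document filtering",
                             "gardening tips for spring"])
```

A concept-based profile would replace the raw tokens with dictionary concepts, but the filtering loop stays the same.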


Predicting the price of a vehicle has become a popular research problem; it demands considerable effort and expert knowledge of this particular field. A number of different attributes are measured, and considerable care is needed to make the prediction reliable and accurate. To estimate the price of used vehicles, a well-defined model was developed using three machine learning techniques: Artificial Neural Network, Support Vector Machine, and Random Forest. These techniques were applied not to individual items but to the whole group of data items. This data group was taken from a web portal and used for the prediction; the data were collected with a web scraper written in the PHP programming language. Distinct machine learning algorithms of varying performance were compared to obtain the best result on the given data set. The final prediction model was integrated into a Java application.
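To illustrate the model-comparison step on a holdout set, the following sketch evaluates a hand-rolled k-nearest-neighbour regressor against a mean-price baseline on synthetic mileage/price data. It stands in for the ANN/SVM/Random Forest comparison in the paper, which it does not reproduce:

```python
import random

def mae(y_true, y_pred):
    """Mean absolute error of a list of predictions."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def knn_predict(train, x, k=3):
    """One-feature k-NN regressor: average price of the k training
    cars closest in mileage to x."""
    nearest = sorted(train, key=lambda row: abs(row[0] - x))[:k]
    return sum(price for _, price in nearest) / k

random.seed(0)
# Toy data: (mileage_km, price) with price falling as mileage rises.
data = [(m, 20000 - 0.08 * m + random.gauss(0, 300)) for m in range(0, 200000, 2000)]
random.shuffle(data)
split = int(0.8 * len(data))
train, test = data[:split], data[split:]

mean_price = sum(p for _, p in train) / len(train)
baseline_mae = mae([p for _, p in test], [mean_price] * len(test))
knn_mae = mae([p for _, p in test], [knn_predict(train, m) for m, _ in test])
# A learned model should beat the constant mean-price baseline.
```

Each real algorithm (ANN, SVM, Random Forest) would slot into the place of `knn_predict` and be ranked by the same holdout error.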


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Qiang Zhao

Archaeological sites are a heritage we have gained from our ancestors. These sites are crucial for understanding the past and the way people lived during those times. The monuments and immovable relics of ancient times are a gateway to the past. Over the years, however, critical cultural relics have borne the brunt of nature: environmental conditions have deteriorated many important immovable relics, since these could not simply be moved away. People also move around the ancient cultural relics, which may further deform them. Machine learning algorithms were used to identify the locations of the relics: data from satellite images were used, and a machine learning algorithm was implemented to maintain and monitor the relics. This research study examines the importance of the area from a research point of view and utilizes a deep convolutional neural network (CaffeNet). The result showed 96% accuracy in predicting the images, which can be used to track human activity and protect heritage sites in a unique way.


2020 ◽  
Vol 8 (6) ◽  
pp. 1667-1671

Speech is the most proficient method of communication between people. Speech recognition is an interdisciplinary subfield of computational linguistics that develops approaches and technologies enabling the recognition and translation of spoken language into text by computers. It is also known as automatic speech recognition (ASR), computer speech recognition, or speech-to-text (STT). It combines knowledge and research from the fields of linguistics, computer science, and electrical engineering. Speech, being the most effective method of communication, could likewise be a helpful interface for communicating with machines. Machine learning consists of supervised and unsupervised learning, of which supervised learning is used for speech recognition objectives. Supervised learning is the data-processing task of inferring a function from labeled training data. Speech recognition is a current trend that has gained focus over the decades, and most automation technologies use speech and speech recognition from various perspectives. This paper offers an overview of the major technological perspectives and an appreciation of the fundamental progress of speech recognition, and reviews the methods developed in each phase of speech recognition using supervised learning. The project will use an ANN to recognize speech using magnitudes with large datasets.
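As a toy illustration of supervised learning (inferring a function from labeled training data), the sketch below trains a nearest-centroid classifier on hypothetical 2-D acoustic feature vectors; it is a stand-in for the idea, not the ANN pipeline the paper describes:

```python
import math

def train_centroids(samples):
    """Minimal supervised learner: one centroid (mean vector) per label."""
    sums, counts = {}, {}
    for vec, label in samples:
        s = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in s] for lbl, s in sums.items()}

def classify(centroids, vec):
    """Predict the label whose centroid is closest to the input vector."""
    return min(centroids, key=lambda lbl: math.dist(centroids[lbl], vec))

# Hypothetical 2-D acoustic feature vectors labelled with the spoken word.
train = [([1.0, 0.2], "yes"), ([1.1, 0.1], "yes"),
         ([0.1, 1.0], "no"),  ([0.2, 1.1], "no")]
model = train_centroids(train)
print(classify(model, [1.0, 0.0]))  # yes
```

An ANN replaces the centroids with learned weights, but the supervised setup, labeled vectors in, a predictive function out, is the same.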


Author(s):  
Matthias Mühlbauer ◽  
Hubert Würschinger ◽  
Dominik Polzer ◽  
Nico Hanenkamp

The prediction of power consumption increases the transparency and understanding of a cutting process, which offers various potentials. Besides the planning and optimization of manufacturing processes, there are application areas in different kinds of deviation detection and condition monitoring. Due to the complicated stochastic processes during cutting, analytical approaches quickly reach their limits. Since the 1980s, approaches for predicting time or energy consumption have used empirical models. Nevertheless, most existing models consider only static snapshots and are unable to capture the dynamic load fluctuations during the entire milling process. This paper describes a data-driven way to predict the power consumption of a milling process in more detail using machine learning techniques. To increase accuracy, we used separate models and machine learning algorithms for the different operations of the milling machine to predict the required time and energy. Merging the individual models finally allows an accurate forecast of the load profile of the milling process for a specific machine tool. The presented method covers the whole pipeline, from data acquisition through preprocessing and model building to validation.
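The merging of per-operation models into one load profile can be sketched as follows; the operation names, sample values, and constant-mean "models" are illustrative assumptions, far simpler than the machine learning models used in the paper:

```python
# Past runs per milling operation, as (duration_s, power_kW) samples.
history = {
    "rapid_traverse": [(2.0, 1.1), (2.2, 1.0), (1.9, 1.2)],
    "roughing":       [(30.0, 6.5), (28.0, 6.8), (31.0, 6.4)],
    "finishing":      [(12.0, 3.1), (13.0, 2.9), (11.5, 3.0)],
}

def fit(samples):
    """Per-operation 'model': mean duration and mean power of past runs."""
    d = sum(s[0] for s in samples) / len(samples)
    p = sum(s[1] for s in samples) / len(samples)
    return d, p

models = {op: fit(samples) for op, samples in history.items()}

def predict_load_profile(plan):
    """Merge per-operation predictions into one (op, duration, power) sequence."""
    return [(op, *models[op]) for op in plan]

profile = predict_load_profile(["rapid_traverse", "roughing", "finishing"])
energy_kj = sum(d * p for _, d, p in profile)  # total energy of the whole job
```

In the paper, each `fit` would be a separate learned model per operation type; concatenating their predictions yields the dynamic load profile of the complete milling job.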


2021 ◽  
pp. 01-07
Author(s):  
Gande Akhila ◽  
Hemachandran K ◽  
...  

The purpose of the present article is to predict the outcomes of Indian Premier League cricket matches using a supervised learning approach from a team-based point of view. The methodology consists of descriptive and predictive models. The descriptive model focuses mainly on two aspects: it describes the data and statistics of previous matches (i.e., batting, bowling, or all-rounder performance), and it analyses past IPL matches. The predictive model predicts the ranking and winning percentage of each team. The two models show the predicted winning percentage of the team that the user has selected, and the paper reports which technique yields the best result on the given data. The dataset consists of two groups, namely the toss outcome and the venue and date, which describe the setting of each match. Since the impact of weather cannot be anticipated in the game, 109 matches that were either ended by rain or finished in a draw/tie have been removed from the dataset. The dataset is partitioned into two parts, namely the training data and the test data: the training dataset contains 70% of the data and the test dataset contains 30%. There were a total of 3500 matches in the training dataset and 1500 matches in the test dataset. This problem has been researched earlier by scholars such as Pathak and Wadwa, Munir et al., and many others. This work discusses the application to Indian Premier League matches held in different states, gives the scores of batsmen and bowlers with the help of machine learning techniques, and focuses on predictive analysis obtained by applying various AI strategies, comparing the predicted results with the actual outcomes and giving the percentage of correct predictions.
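The 70/30 partition described above can be sketched as follows; the match records are placeholders, and only the split proportions and counts come from the abstract:

```python
import random

random.seed(42)
matches = list(range(5000))      # placeholder match records (rain/tie games already removed)
random.shuffle(matches)          # shuffle before splitting to avoid ordering bias
split = int(0.7 * len(matches))  # 70% for training, 30% for testing
train, test = matches[:split], matches[split:]
print(len(train), len(test))  # 3500 1500
```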


Author(s):  
Abikoye Oluwakemi Christiana ◽  
Benjamin Aruwa Gyunka ◽  
Akande Noah

<p class="0abstract">The open-source nature of the Android operating system has attracted wide adoption of the system by many types of developers. This phenomenon has further fostered an exponential proliferation of devices running Android OS into different sectors of the economy. Although this development has brought about great technological advancement and ease of doing business (e-commerce) and social interaction, these devices have also become strong mediums for the uncontrolled, rising cyberattacks and espionage against business infrastructures and the individual users of these mobile devices. Different cyberattack techniques exist, but attacks through malicious applications have taken the lead over other attack methods such as social engineering. Android malware has evolved in sophistication and intelligence to the point that it has become highly resistant to existing detection systems, especially those that are signature-based. Machine learning techniques have risen to become a more competent choice for combating the kind of sophistication and novelty deployed by emerging Android malware. Models created via machine learning methods work by first learning the existing patterns of malware behaviour and then using this knowledge to separate or identify any similar behaviour in unknown attacks. This paper provides a comprehensive review of machine learning techniques and their applications in Android malware detection as found in contemporary literature.</p>


Author(s):  
Aleksei Netšunajev ◽  
Sven Nõmm ◽  
Aaro Toomela ◽  
Kadri Medijainen ◽  
Pille Taba

An analysis of the sentence-writing test is conducted in this paper to support the diagnostics of Parkinson's disease. The digitization of drawing and writing tests has become a trend in which the synergy of machine learning techniques on the one side and the knowledge base of neurology and psychiatry on the other leads to sophisticated results in computer-aided diagnostics. Such rapid progress has a drawback: in many cases, decisions made by a machine learning algorithm are difficult to explain in a language a human practitioner is familiar with. The method proposed in this paper employs unsupervised learning techniques to segment the sentence into individual characters. Then, a feature engineering process is applied to describe the writing of each letter using a set of kinematic and pressure parameters. Following the feature selection process, the applicability of different machine learning classifiers is evaluated. To guarantee that the achieved results can be interpreted by a human, two major guidelines are established. The first is to keep the dimensionality of the feature set low. The second is the clear physical meaning of the features describing the writing process. Features describing the amount and smoothness of the motion observed during writing, alongside letter size, are considered. The resulting algorithm does not take into account any semantic information or language particularities and may therefore easily be adapted to any language based on the Latin or Cyrillic alphabet.
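A minimal sketch of the kinematic feature engineering step, assuming a pen trajectory sampled at fixed intervals; the specific features here (mean speed, mean squared jerk as a smoothness proxy, letter height) are plausible examples with clear physical meaning, not necessarily the exact parameter set used by the authors:

```python
import math

def kinematic_features(points, dt=0.01):
    """Speed- and smoothness-oriented features for one written letter.
    `points` is a list of (x, y) pen positions sampled every `dt` seconds."""
    vx = [(points[i + 1][0] - points[i][0]) / dt for i in range(len(points) - 1)]
    vy = [(points[i + 1][1] - points[i][1]) / dt for i in range(len(points) - 1)]
    speed = [math.hypot(a, b) for a, b in zip(vx, vy)]
    accel = [(speed[i + 1] - speed[i]) / dt for i in range(len(speed) - 1)]
    jerk = [(accel[i + 1] - accel[i]) / dt for i in range(len(accel) - 1)]
    return {
        "mean_speed": sum(speed) / len(speed),
        # Lower mean squared jerk = smoother motion.
        "mean_sq_jerk": sum(j * j for j in jerk) / len(jerk),
        "letter_height": max(y for _, y in points) - min(y for _, y in points),
    }

# A perfectly smooth vertical stroke: constant speed, zero jerk.
smooth = [(0.0, float(i)) for i in range(20)]
feats = kinematic_features(smooth, dt=1.0)
print(feats["mean_speed"])  # 1.0
```

Each segmented character would yield one such low-dimensional, physically interpretable feature vector for the downstream classifiers.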


2020 ◽  
Vol 9 (1) ◽  
pp. 1000-1004

The automatic extraction of bibliographic data remains a difficult task to the present day, because scientific publications do not follow a standard format and every publication has its own template. There are many “regular expression” techniques and “supervised machine learning” techniques for extracting the details of the references listed in a bibliographic section, but there is not much difference in their success rates. Our idea is to find out whether unsupervised machine learning techniques can help increase the success rate. This paper presents a technique for segregating and automatically extracting the individual components of references, such as authors, title, and publication details, using an “unsupervised” technique and a “named-entity recognition” (NER) technique, and for linking these references to their corresponding full-text articles with the assistance of Google.
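For contrast with the unsupervised/NER approach, the sketch below shows the kind of hand-written regular-expression baseline the paper aims to improve on, applied to a single hypothetical reference string in one fixed format (which is exactly its weakness: every other template needs another rule):

```python
import re

def split_reference(ref):
    """Very simplified regex baseline for segmenting one reference string
    into authors / year / title. It only handles 'Authors (YYYY). Title.'
    style references; other templates would silently fail."""
    m = re.match(r"^(?P<authors>.+?)\s*\((?P<year>\d{4})\)\.\s*(?P<title>[^.]+)\.", ref)
    return m.groupdict() if m else None

ref = "Smith, J. and Jones, A. (2019). Extracting bibliographic metadata. Journal of X."
parts = split_reference(ref)  # {'authors': ..., 'year': '2019', 'title': ...}
```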

