Anticipatory routing methods for an on-demand ridepooling mobility system

2021 ◽  
Author(s):  
Andres Fielbaum ◽  
Maximilian Kronmueller ◽  
Javier Alonso-Mora

Abstract On-demand mobility systems in which passengers use the same vehicle simultaneously are a promising transport mode, yet difficult to control. One of the most relevant challenges relates to spatial imbalances in the demand, which induce a mismatch between the positions of the vehicles and the origins of emerging requests. Most ridepooling models face this problem through rebalancing methods only, i.e., moving idle vehicles towards areas with high rejection rates. This is done independently from routing and vehicle-to-order assignments, so vehicles serving passengers (a large portion of the total fleet) remain unaffected. This paper introduces two types of techniques for anticipatory routing that affect both how vehicles are assigned to users and how vehicles are routed to serve those users, so that the whole operation of the system is steered towards states that are more efficient for future requests. Neither technique requires any assumption or exogenous knowledge about future demand, as both depend only on current and recent requests. First, we introduce rewards that reduce the cost of an assignment between a vehicle and a group of passengers if the vehicle gets routed towards a high-demand zone. Second, we include a small set of artificial requests, whose request times lie in the near future and whose origins are sampled from a probability distribution that mimics observed generation rates; these artificial requests are assigned together with the real ones. We propose, formally discuss, and experimentally evaluate several formulations of both approaches. We test these techniques in combination with a state-of-the-art trip-vehicle assignment method, using a set of real rides from Manhattan. Introducing rewards can diminish the rejection rate to about nine-tenths of its original value, whereas including future requests can reduce users' traveling times by about one-fifth, at the cost of increased rejections. Both methods increase the vehicle-hours traveled by about 10%. Spatial analysis reveals that vehicles are indeed moved towards the most demanded areas, so that the reduction in the rejection rate is achieved mostly there.
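The reward idea described above can be sketched as follows. This is a minimal illustration under assumed names (`reward_adjusted_cost`, `theta`, and the zone/demand inputs are all hypothetical, not from the paper): the assignment cost of pairing a vehicle with a group of passengers is reduced in proportion to the recent demand share of the zone where the planned route ends.

```python
def reward_adjusted_cost(base_cost, end_zone, recent_requests, theta=1.0):
    """Lower the assignment cost when the planned route terminates in a
    zone with a high recent request rate (hedged sketch, not the paper's
    exact formulation).

    base_cost       -- delay/waiting cost of the vehicle-trip assignment
    end_zone        -- zone where the planned route terminates
    recent_requests -- dict mapping zone -> count of recent request origins
    theta           -- weight of the anticipatory reward (assumed parameter)
    """
    total = sum(recent_requests.values()) or 1
    demand_share = recent_requests.get(end_zone, 0) / total
    return base_cost - theta * demand_share
```

With two otherwise-identical assignments, the one whose route ends in the busier zone receives the lower adjusted cost, so the standard assignment optimization naturally steers vehicles towards high-demand areas.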

Author(s):  
Chen Luo ◽  
Anshumali Shrivastava

Split-Merge MCMC (Markov chain Monte Carlo) is one of the essential and popular variants of MCMC for problems in which an MCMC state consists of an unknown number of components. It is well known that state-of-the-art methods for split-merge MCMC do not scale well. Strategies for rapid mixing require smart, informative proposals to reduce the rejection rate; however, all known smart proposals involve expensive operations to suggest informative transitions, making the cost of each iteration prohibitive for massive datasets. It is further known that uninformative but computationally efficient proposals, such as random split-merge, lead to extremely slow convergence. This tradeoff between mixing time and per-update cost seems hard to get around. We leverage some unique properties of weighted MinHash, a popular locality-sensitive hashing (LSH) scheme, to design a novel class of split-merge proposals that are significantly more informative than random sampling yet efficient to compute. Overall, we obtain a superior tradeoff between convergence and per-update cost. As a direct consequence, our proposals are around 6x faster than state-of-the-art sampling methods on two large real datasets, KDDCUP and PubMed, with several million entities and thousands of clusters.
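The core intuition, that hash collisions cheaply surface similar clusters as merge candidates, can be sketched with a plain (unweighted) MinHash. All names here are hypothetical and the scheme is deliberately simplified relative to the paper's weighted MinHash proposals: clusters whose feature sets collide under the same signature are proposed for a merge, which is far more informative than picking a pair uniformly at random.

```python
import random

def minhash_signature(features, hash_seeds):
    """One MinHash value per seed; similar feature sets collide often."""
    return tuple(min(hash((seed, f)) for f in features) for seed in hash_seeds)

def propose_merge(clusters, num_hashes=4, seed=0):
    """Bucket clusters by MinHash signature and propose merging a colliding
    pair; fall back to a uniformly random pair when no bucket has two
    clusters (simplified sketch, not the paper's weighted scheme)."""
    rng = random.Random(seed)
    seeds = [rng.random() for _ in range(num_hashes)]
    buckets = {}
    for cid, feats in clusters.items():
        buckets.setdefault(minhash_signature(feats, seeds), []).append(cid)
    for members in buckets.values():
        if len(members) >= 2:
            return members[0], members[1]
    return tuple(rng.sample(list(clusters), 2))
```

Computing a signature is linear in the cluster's feature set, which is what keeps the per-update cost low compared to proposals that score all candidate pairs.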


2000 ◽  
Vol 151 (1) ◽  
pp. 1-10 ◽  
Author(s):  
Stephan Wild-Eck ◽  
Willi Zimmermann

Two large-scale surveys looking at attitudes towards forests, forestry and forest policy were carried out in the second half of the nineties. This work was done on behalf of the Swiss Confederation by the Chair of Forest Policy and Forest Economics of the Federal Institute of Technology (ETH) in Zurich. Not only did the two studies use very different methods, but they also varied greatly in infrastructure and basic conditions. One of the main differences was that the first dealt only with mountainous areas, whereas the second covered the whole Swiss population. The results of the studies reflect these differences: each produced its own specific findings. Where the same (or similar) questions were asked, the answers highlight not only how the attitudes of those questioned differ, but also the views they hold in common. Both surveys showed positive attitudes towards forests in general, a deep-seated appreciation of the forest as a recreational area, and a positive approach to tending. Detailed results of the two surveys will be available in the near future.


2021 ◽  
Vol 11 (10) ◽  
pp. 4553
Author(s):  
Ewelina Ziajka-Poznańska ◽  
Jakub Montewka

The development of autonomous ship technology is currently in focus worldwide and the literature on the topic is growing. However, in-depth cost and benefit estimation of such endeavours is in its infancy. With this systematic literature review, we present the state of the art regarding the costs and benefits of operating prospective autonomous merchant ships, with the objective of identifying contemporary research on the estimation of operating, voyage, and capital costs in prospective autonomous shipping and vessel platooning. Additionally, the paper outlines research gaps and the need for more detailed business models for operating autonomous ships. The results reveal that valid financial models of autonomous shipping are lacking and that significant uncertainty affects the cost estimates, permitting reliable evaluation only of specific case studies. The findings of this paper may be relevant not only to academia, but also to organisations considering the challenge of implementing Maritime Autonomous Surface Ships in their operations.


2020 ◽  
Vol 9 (1) ◽  
pp. 303-322 ◽  
Author(s):  
Zhifang Zhao ◽  
Tianqi Qi ◽  
Wei Zhou ◽  
David Hui ◽  
Cong Xiao ◽  
...  

Abstract The behavior of cement-based materials is governed by chemical and physical processes at the nanolevel. The application of nanomaterials in civil engineering to develop nano-modified cement-based materials is therefore a promising research direction. In recent decades, many researchers have tried to improve the properties of cement-based materials by employing various nanomaterials and to characterize the mechanisms of nano-strengthening. In this study, the state-of-the-art progress of nano-modified cement-based materials is systematically reviewed and summarized. First, the study reviews the basic properties and dispersion methods of nanomaterials commonly used in cement-based materials, including carbon nanotubes, carbon nanofibers, graphene, graphene oxide, nano-silica, nano-calcium carbonate, and nano-calcium silicate hydrate. Then the research progress on nano-engineered cementitious composites is reviewed from the viewpoints of accelerating cement hydration, reinforcing mechanical properties, and improving durability. In addition, the market and applications of nanomaterials for cement-based materials are briefly discussed, and their cost is summarized through a market survey. Finally, the study summarizes open problems in current research and provides future perspectives accordingly.


2021 ◽  
Vol 15 (1) ◽  
pp. 408-433
Author(s):  
Margaux Dugardin ◽  
Werner Schindler ◽  
Sylvain Guilley

Abstract Extra-reductions occurring in Montgomery multiplications disclose side-channel information that can be exploited even in stringent contexts. In this article, we derive stochastic attacks to defeat Rivest-Shamir-Adleman (RSA) with Montgomery-ladder regular exponentiation coupled with base blinding. Namely, we leverage precharacterized multivariate probability mass functions of extra-reductions between pairs of (multiplication, square) operations in one iteration of the RSA algorithm and the next one(s) to build a maximum-likelihood distinguisher. The efficiency of our attack (in terms of required traces) is more than double that of the state of the art. In addition to this result, we also apply our method to the case of regular exponentiation, base blinding, and modulus blinding. Quite surprisingly, modulus blinding does not make our attack impossible, even for large sizes of the modulus-randomizing element. At the cost of larger sample sizes, our attacks tolerate noisy measurements. Fortunately, effective countermeasures exist.
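To see where the leaked bit comes from, here is a textbook Montgomery (REDC) multiplication sketch. The function name and interface are illustrative, not from the article; what matters is the `extra` flag: the data-dependent conditional final subtraction whose occurrence pattern feeds the distinguisher described in the abstract.

```python
def mont_mul(a, b, n, R):
    """Textbook Montgomery multiplication: returns a*b*R^-1 mod n together
    with whether the conditional final subtraction (the extra-reduction)
    fired -- the single bit of side-channel information exploited above.
    R must be a power of two with R > n and gcd(R, n) = 1."""
    n_prime = -pow(n, -1, R) % R      # n * n_prime ≡ -1 (mod R)
    t = a * b
    m = (t * n_prime) % R             # chosen so that t + m*n ≡ 0 (mod R)
    u = (t + m * n) // R              # exact division; u < 2n
    extra = u >= n                    # the leaking conditional subtraction
    return (u - n if extra else u), extra
```

Whether `extra` is True depends on the operand values, so across an exponentiation the sequence of extra-reductions is statistically correlated with the secret exponent bits, even under base blinding.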


2020 ◽  
Vol 15 (1) ◽  
pp. 4-17
Author(s):  
Jean-François Biasse ◽  
Xavier Bonnetain ◽  
Benjamin Pring ◽  
André Schrottenloher ◽  
William Youmans

Abstract We propose a heuristic algorithm to solve the underlying hard problem of the CSIDH cryptosystem (and other isogeny-based cryptosystems using elliptic curves with endomorphism ring isomorphic to an imaginary quadratic order 𝒪). Let Δ = Disc(𝒪) (in CSIDH, Δ = −4p for p the security parameter) and let 0 < α < 1/2. Our algorithm requires:
a classical circuit of size $2^{\tilde{O}\left(\log(|\Delta|)^{1-\alpha}\right)}$;
a quantum circuit of size $2^{\tilde{O}\left(\log(|\Delta|)^{\alpha}\right)}$;
polynomial classical and quantum memory.
Essentially, we propose to reduce the size of the quantum circuit below the state-of-the-art complexity $2^{\tilde{O}\left(\log(|\Delta|)^{1/2}\right)}$ at the cost of increasing the required classical circuit size. The required classical circuit remains subexponential, a superpolynomial improvement over the exponential classical state-of-the-art solutions to these problems. Our method requires polynomial memory, both classical and quantum.


2021 ◽  
Vol 54 (6) ◽  
pp. 1-35
Author(s):  
Ninareh Mehrabi ◽  
Fred Morstatter ◽  
Nripsuta Saxena ◽  
Kristina Lerman ◽  
Aram Galstyan

With the widespread use of artificial intelligence (AI) systems and applications in our everyday lives, accounting for fairness has gained significant importance in the design and engineering of such systems. AI systems can be used in many sensitive environments to make important and life-changing decisions; thus, it is crucial to ensure that these decisions do not reflect discriminatory behavior toward certain groups or populations. More recently, work in traditional machine learning and deep learning has begun to address such challenges in different subdomains. With the commercialization of these systems, researchers are becoming more aware of the biases that these applications can contain and are attempting to address them. In this survey, we investigate different real-world applications that have shown biases in various ways, and we list the different sources of bias that can affect AI applications. We then create a taxonomy of the fairness definitions that machine-learning researchers have proposed to avoid the existing bias in AI systems. In addition, we examine different domains and subdomains in AI, showing what researchers have observed with regard to unfair outcomes in state-of-the-art methods and the ways they have tried to address them. Many directions and solutions remain open for mitigating the problem of bias in AI systems. We hope that this survey will motivate researchers to tackle these issues in the near future by building on existing work in their respective fields.
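As a concrete taste of the kind of fairness definition such a taxonomy covers, here is a sketch of statistical (demographic) parity, one of the standard group-fairness criteria. The function name and inputs are illustrative, not drawn from the survey: it measures the gap between positive-prediction rates for two groups, with zero meaning parity.

```python
def statistical_parity_difference(y_pred, groups, a="A", b="B"):
    """Gap between the positive-prediction rates of groups `a` and `b`
    (a hedged sketch of one common group-fairness definition).

    y_pred -- iterable of 0/1 predictions
    groups -- iterable of group labels, aligned with y_pred
    """
    def rate(g):
        members = [p for p, grp in zip(y_pred, groups) if grp == g]
        return sum(members) / len(members)
    return rate(a) - rate(b)
```

Definitions like this one often conflict with each other (e.g. parity vs. equalized error rates), which is precisely why a taxonomy of them is useful.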


2021 ◽  
Vol 15 (3) ◽  
pp. 1-28
Author(s):  
Xueyan Liu ◽  
Bo Yang ◽  
Hechang Chen ◽  
Katarzyna Musial ◽  
Hongxu Chen ◽  
...  

Stochastic blockmodel (SBM) is a widely used statistical network representation model with good interpretability, expressiveness, generalization, and flexibility, which has become prevalent and important in the field of network science in recent years. However, learning an optimal SBM for a given network is an NP-hard problem. This leads to significant limitations in applying SBMs to large-scale networks, because of the significant computational overhead of existing SBM models and their learning methods. Reducing the cost of SBM learning and making it scalable to large-scale networks, while maintaining the good theoretical properties of SBM, remains an unresolved problem. In this work, we address this challenging task from the novel perspective of model redefinition. We propose a redefined SBM with Poisson distribution, together with a block-wise learning algorithm, that can efficiently analyse large-scale networks. Extensive validation conducted on both artificial and real-world data shows that our proposed method significantly outperforms state-of-the-art methods in terms of a reasonable trade-off between accuracy and scalability.
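The modelling assumption behind a Poisson SBM can be made concrete with a small log-likelihood sketch. All names are hypothetical and this is not the paper's algorithm, only the generative idea: the count on each node pair is Poisson with a rate that depends only on the blocks the two nodes belong to, so a good block assignment scores a higher likelihood.

```python
import math

def poisson_sbm_loglik(adj, labels, rates):
    """Log-likelihood of an undirected multigraph under a Poisson SBM
    (illustrative sketch).

    adj    -- symmetric matrix of edge counts
    labels -- block label per node
    rates  -- rates[r][s] is the Poisson rate between blocks r and s
    """
    ll, n = 0.0, len(adj)
    for i in range(n):
        for j in range(i + 1, n):
            lam = rates[labels[i]][labels[j]]
            x = adj[i][j]
            ll += x * math.log(lam) - lam - math.lgamma(x + 1)
    return ll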


2018 ◽  
Vol 27 (07) ◽  
pp. 1860013 ◽  
Author(s):  
Swair Shah ◽  
Baokun He ◽  
Crystal Maung ◽  
Haim Schweitzer

Principal Component Analysis (PCA) is a classical dimensionality-reduction technique that computes a low-rank representation of the data. Recent studies have shown how to compute this low-rank representation from most of the data, excluding a small amount of outlier data. We show how to convert this problem into a graph search, and describe an algorithm that solves it optimally by applying a variant of the A* algorithm to search for the outliers. The results obtained by our algorithm are optimal in terms of accuracy, and are more accurate than those of the current state-of-the-art algorithms, which are shown not to be optimal. This comes at the cost of running time, which is typically slower than the current state of the art. We also describe a related variant of the A* algorithm that runs much faster than the optimal variant and produces a solution guaranteed to be near-optimal. This variant is shown experimentally to be more accurate than the current state of the art while having a comparable running time.
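The "search over which points to exclude" framing can be illustrated with a toy best-first search. To keep it self-contained this sketch is restricted to 2-D data and a rank-1 fit (the residual is the smaller eigenvalue of the 2x2 scatter matrix, computable in closed form), and it is a plain best-first search, not the authors' optimal A* variant; all names are hypothetical.

```python
import heapq
import itertools

def rank1_residual(points):
    """Residual of the best rank-1 (line) fit to 2-D points: the smaller
    eigenvalue of the 2x2 scatter matrix [[a, b], [b, c]], closed form."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    a = sum((x - mx) ** 2 for x, _ in points)
    c = sum((y - my) ** 2 for _, y in points)
    b = sum((x - mx) * (y - my) for x, y in points)
    return (a + c - ((a - c) ** 2 + 4 * b * b) ** 0.5) / 2

def search_outliers(points, k):
    """Best-first search over subsets of points to exclude; returns the k
    indices whose removal gives the lowest residual found first (toy
    sketch; exponential in the worst case, for tiny data only)."""
    n = len(points)
    tie = itertools.count()                 # tiebreaker for the heap
    heap = [(rank1_residual(points), next(tie), frozenset())]
    seen = {frozenset()}
    while heap:
        _, _, out = heapq.heappop(heap)
        if len(out) == k:
            return sorted(out)
        for i in range(n):
            cand = out | {i}
            if i not in out and cand not in seen:
                seen.add(cand)
                keep = [p for j, p in enumerate(points) if j not in cand]
                heapq.heappush(heap, (rank1_residual(keep), next(tie), cand))
```

Each search state is a candidate outlier set scored by the PCA residual of the remaining points; the paper's contribution is an admissible-heuristic A* over this space that makes the search optimal, which this greedy sketch does not attempt.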


2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Khalid Mahmood ◽  
Muhammad Amir Khan ◽  
Mahmood ul Hassan ◽  
Ansar Munir Shah ◽  
Shahzad Ali ◽  
...  

Wireless sensor networks are envisioned to play a very important role in the Internet of Things in the near future, and the challenges associated with them have therefore attracted researchers from all around the globe. A common, well-studied issue is how to restore network connectivity after the failure of single or multiple nodes. Because energy is a scarce resource in sensor networks, all proposed connectivity-restoration solutions must be energy efficient. In this paper, we introduce an intelligent on-demand connectivity restoration technique for wireless sensor networks, in which nodes utilize their transmission range to maintain connectivity and failed nodes are replaced by redundant nodes. The proposed technique keeps track of the system topology and can respond to node failures effectively. Our system can thus better handle node failure, introducing less overhead on the sensor nodes and achieving more efficient energy utilization, better coverage, and connectivity without moving the sensor nodes.
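The replacement step can be sketched very simply. This is a hypothetical simplification (the function name, inputs, and the nearest-within-range policy are illustrative, not the paper's algorithm): when a node fails, pick the closest redundant node that lies within communication range of the failed node's position.

```python
import math

def nearest_replacement(failed_pos, redundant, comm_range):
    """Return the id of the closest redundant node within `comm_range`
    of the failed node's position, or None if none is reachable
    (illustrative sketch of on-demand replacement).

    failed_pos -- (x, y) position of the failed node
    redundant  -- dict mapping node id -> (x, y) position
    comm_range -- maximum usable transmission distance
    """
    best, best_d = None, comm_range
    for node_id, pos in redundant.items():
        d = math.dist(failed_pos, pos)
        if d <= best_d:
            best, best_d = node_id, d
    return best
```

Choosing the nearest in-range replacement keeps the repair local, which is consistent with the abstract's goals of low overhead and energy-efficient restoration without moving other sensor nodes.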

