Planning of Electric Taxi Charging Stations Based on Travel Data Characteristics

Electronics ◽  
2021 ◽  
Vol 10 (16) ◽  
pp. 1947
Author(s):  
Yan Wang ◽  
Shan Gao ◽  
Hongyan Chu ◽  
Xuefei Wang

Motivated by the rapid expansion of electric taxis (ETs) and the practical need for well-planned charging stations, this paper presents a method that mines latent semantic correlations from large-scale ET trajectory data and plans charging stations at optimal cost. Firstly, a vector space model of the ET trajectory data is constructed, and the semantic similarity of the trajectory data matrix is evaluated. Secondly, hidden characteristics of the massive trajectory data are extracted by matrix decomposition. Then, latent semantic correlation characteristics of the trajectory data are mined. Finally, fast clustering of ETs is achieved with spectral clustering. On this basis, an optimal charging station planning scheme for ETs is derived with the objective of minimizing the annual construction and maintenance costs of the stations. The proposed spectral clustering of latent semantic correlations in ET trajectory big data is combined with the operation and maintenance costs of charging stations, while also accounting for the charging convenience of ET users. This provides decision-support information for the reasonable planning of charging stations.
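The spectral clustering step described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the Gaussian affinity kernel, its bandwidth, and the k-means initialization are all assumptions.

```python
import numpy as np

def spectral_clusters(features, k):
    """Cluster feature vectors with the normalized graph Laplacian.

    features: (n, d) array, one row per trajectory descriptor.
    Returns an integer label in [0, k) for each row.
    """
    # Gaussian affinity between descriptors (bandwidth sigma = 1, an assumption)
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / 2.0)
    np.fill_diagonal(W, 0.0)
    # Symmetric normalized Laplacian L = I - D^{-1/2} W D^{-1/2}
    dinv = 1.0 / np.sqrt(np.maximum(W.sum(1), 1e-12))
    L = np.eye(len(W)) - dinv[:, None] * W * dinv[None, :]
    # Spectral embedding: eigenvectors of the k smallest eigenvalues
    _, vecs = np.linalg.eigh(L)
    U = vecs[:, :k]
    U /= np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    # Lloyd's k-means with deterministic farthest-point initialization
    centers = [U[0]]
    for _ in range(1, k):
        dist = np.min([((U - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(U[int(np.argmax(dist))])
    centers = np.array(centers)
    for _ in range(100):
        labels = np.argmin(((U[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = U[labels == j].mean(0)
    return labels
```

The paper's taxi clusters would then seed candidate station sites, with the cost objective evaluated per cluster.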

2019 ◽  
Vol 35 (22) ◽  
pp. 4748-4753 ◽  
Author(s):  
Ahmad Borzou ◽  
Razie Yousefi ◽  
Rovshan G Sadygov

Abstract
Motivation: High-throughput technologies are widely employed in modern biomedical research. They yield measurements of a large number of biomolecules in a single experiment. The number of experiments is usually much smaller than the number of measurements in each experiment. The simultaneous measurement of biomolecules provides a basis for a comprehensive, systems-level view of the relevant biological processes. Often it is necessary to determine correlations between data matrices under different conditions or pathways. However, techniques for analyzing data with a low number of samples for possible correlations within or between conditions are still in development. Earlier correlative measures, such as the RV coefficient, use the trace of the product of data matrices as the most relevant characteristic. However, a recent study has shown that the RV coefficient consistently overestimates correlations when sample numbers are low. To correct for this bias, it was suggested to discard the diagonal elements of the outer products of each data matrix. In this work, a principled approach based on matrix decomposition generates three trace-independent components for every matrix. These components are unique, and they are used to determine different aspects of correlations between the original datasets.
Results: Simulations show that the decomposition removes the high-correlation bias and the dependence on sample number intrinsic to the RV coefficient. We then use the correlations to analyze a real proteomics dataset.
Availability and implementation: The Python code can be downloaded from http://dynamic-proteome.utmb.edu/MatrixCorrelations.aspx.
Supplementary information: Supplementary data are available at Bioinformatics online.
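The RV coefficient and the diagonal-discarding adjustment mentioned above can be sketched as follows. This is a minimal NumPy illustration of those two baseline measures only; the authors' trace-independent decomposition itself is not reproduced here.

```python
import numpy as np

def rv_coefficient(X, Y, drop_diagonal=False):
    """RV coefficient between two data matrices sharing a sample axis.

    X: (n, p) and Y: (n, q) arrays, one row per sample.
    drop_diagonal: if True, zero the diagonals of the outer products
    XX' and YY', the adjustment proposed to reduce small-sample bias.
    """
    # Column-center each matrix so the outer products are scatter matrices
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    Sx, Sy = X @ X.T, Y @ Y.T          # n x n sample-by-sample products
    if drop_diagonal:
        np.fill_diagonal(Sx, 0.0)
        np.fill_diagonal(Sy, 0.0)
    num = np.trace(Sx @ Sy)
    den = np.sqrt(np.trace(Sx @ Sx) * np.trace(Sy @ Sy))
    return num / den
```

The RV coefficient is scale-invariant and equals 1 when the two matrices carry the same sample-by-sample structure.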


2019 ◽  
Vol 191 (1) ◽  
pp. 1-17 ◽  
Author(s):  
Matt H Buys ◽  
Richard C Winkworth ◽  
Peter J de Lange ◽  
Peter G Wilson ◽  
Nora Mitchell ◽  
...  

Abstract
Leptospermum scoparium (Myrtaceae) is a morphologically highly variable species found in mainland Australia, Tasmania and New Zealand. For example, in New Zealand up to six morphologically distinct varieties of this species have been described, although only two (var. scoparium and var. incanum) are now formally recognized. In the present study we provide a first examination of genetic diversity in this culturally and commercially important species, with the aim of gaining insights into its origins and evolution. We used anchored hybrid enrichment to acquire sequence data from 485 orthologous low-copy nuclear loci for 27 New Zealand and three Australian accessions of L. scoparium and representatives of several other Leptospermum spp. The final concatenated data matrix contained 421 687 nucleotide positions, of which 55 102 were potentially informative. Despite the relatively large data set, our analyses suggest that a combination of low and incompatible signal in the data limits the resolution of relationships among New Zealand populations of L. scoparium. Nevertheless, our analyses are consistent with genetic diversity being geographically structured, with three groups of L. scoparium recovered. We discuss the evolutionary and taxonomic implications of our findings.


2009 ◽  
Vol 60 (10) ◽  
pp. 1995-2003 ◽  
Author(s):  
Dietmar Wolfram ◽  
Hope A. Olson ◽  
Raina Bloom

2003 ◽  
Vol 57 (8) ◽  
pp. 996-1006 ◽  
Author(s):  
Slobodan Šašić ◽  
Yukihiro Ozaki

In this paper we report two new developments in two-dimensional (2D) correlation spectroscopy: one is the combination of the moving window concept with 2D spectroscopy to facilitate the analysis of complex data sets, and the other is the definition of the noise level in synchronous/asynchronous maps. A graphical criterion for the latter is also proposed. The combination of the moving window concept with correlation spectra allows one to split a large data matrix into smaller and simpler subsets and to analyze them instead of computing an overall correlation. A three-component system that mimics a consecutive chemical reaction is used as a model to illustrate the two ideas. Both types of correlation matrices, variable–variable and sample–sample, are analyzed, and very good agreement between the two is found. The proposed innovations enable one to grasp the complexity of the data to be analyzed by 2D spectroscopy and thus to avoid the risk of over-interpretation, liable to occur whenever insufficient care is taken about the number of coexisting species in the system.
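The synchronous and asynchronous maps at the core of 2D correlation spectroscopy can be sketched as follows, using Noda's standard formulation with the Hilbert–Noda transformation matrix. This is a minimal NumPy illustration; the moving-window extension and the noise-level criterion proposed in the paper are not shown.

```python
import numpy as np

def two_d_correlation(spectra):
    """Generalized 2D correlation maps from a stack of spectra.

    spectra: (m, n) array, m perturbation steps x n spectral variables.
    Returns the (synchronous, asynchronous) n x n correlation maps.
    """
    m, n = spectra.shape
    Y = spectra - spectra.mean(0)            # dynamic spectra
    sync = Y.T @ Y / (m - 1)
    # Hilbert-Noda transformation matrix: 0 on the diagonal,
    # 1 / (pi * (k - j)) off the diagonal
    j, k = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
    with np.errstate(divide="ignore"):
        N = np.where(j == k, 0.0, 1.0 / (np.pi * (k - j)))
    asyn = Y.T @ N @ Y / (m - 1)
    return sync, asyn
```

The synchronous map is symmetric with the variable variances on its diagonal, while the asynchronous map is antisymmetric, which is a quick internal consistency check.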


2020 ◽  
Vol 9 (3) ◽  
pp. 43-52
Author(s):  
Alaidine Ben Ayed ◽  
Ismaïl Biskri ◽  
Jean Guy Meunier

2021 ◽  
Author(s):  
Joris Vanhoutven ◽  
Bart Cuypers ◽  
Pieter Meysman ◽  
Jef Hooyberghs ◽  
Kris Laukens ◽  
...  

Abstract
In high-throughput omics disciplines like transcriptomics, researchers need to assess the quality of an experiment prior to an in-depth statistical analysis. To efficiently analyze such voluminous collections of data, researchers need triage methods that are both quick and easy to use. One such normalization method for relative quantitation, CONSTANd, was recently introduced for isobarically labeled mass spectra in proteomics. It transforms the data matrix of abundances through an iterative, convergent process enforcing three constraints: (I) identical column sums; (II) each row sum fixed across matrices; and (III) all row sums identical to each other. In this study, we investigate whether CONSTANd is suitable for count data from massively parallel sequencing by qualitatively comparing its results to those of DESeq2. Further, we propose an adjustment of the method so that it may be applied to identically balanced but differently sized experiments for joint analysis. We find that CONSTANd can process large data sets with about 2 million count records in less than a second while removing unwanted systematic bias, thus quickly uncovering the underlying biological structure when combined with a PCA plot or hierarchical clustering. Moreover, it allows joint analysis of data sets obtained from different batches, with different protocols and from different labs, without exploiting any information from the experimental setup other than the delineation of samples into identically processed sets (IPSs). CONSTANd's simplicity and its applicability to proteomics as well as transcriptomics data make it an interesting candidate for integration in multi-omics workflows.
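The three-constraint iteration described above resembles Sinkhorn-style iterative proportional fitting. The following is a minimal sketch under that assumption, with arbitrary target sums; it is not the published CONSTANd implementation, whose exact targets and stopping rule differ.

```python
import numpy as np

def constand_like(A, iters=50, tol=1e-12):
    """Alternately rescale rows and columns of a nonnegative matrix
    until every row sums to 1 and every column sums to m/n.

    A sketch of CONSTANd-style iterative proportional fitting: row
    scaling enforces identical, fixed row sums; column scaling
    enforces identical column sums.
    """
    A = np.asarray(A, dtype=float).copy()
    m, n = A.shape
    for _ in range(iters):
        A *= (1.0 / A.sum(1))[:, None]        # each row sum -> 1
        A *= (m / n) / A.sum(0)[None, :]      # each column sum -> m/n
        if np.allclose(A.sum(1), 1.0, atol=tol):
            break                             # both constraints hold
    return A
```

For strictly positive matrices this alternation converges quickly, which is consistent with the sub-second runtimes reported above.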


2014 ◽  
Vol 6 (1) ◽  
pp. 14-33
Author(s):  
Ali Gürkan ◽  
Luca Iandoli

While online conversations are very popular, the content generated by participants is often overwhelming in volume, poorly organized and of questionable quality. In this article we combine two methods, a text-analysis technique, vector space modeling (VSM), and clustering, into a methodology for organizing and aggregating the information that users generate through online collaborative argumentation (OA) in online debates. An alternative to widely used conversational tools such as online forums, OA is intended to help users pool their efforts to construct a shared knowledge representation in the form of an argument map, in which multiple points of view can coexist and be presented as a well-organized knowledge object. To test whether this holds, we first apply VSM to summarize the argument map content as a document space and then use clustering to reduce it to a limited number of higher-order semantic categories. We apply the methodology to more than 3000 posts created in an online debate among about 160 participants on an online argumentation platform, and we show how it can be used to effectively organize and evaluate content generated by a large number of users in online discussions.
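The VSM summarization step can be sketched as a plain TF-IDF vectorization with cosine similarity between posts. This is a minimal illustration only; the whitespace tokenization and the specific weighting scheme are assumptions, not the authors' pipeline.

```python
import numpy as np

def tfidf_matrix(posts):
    """Build a post-by-term TF-IDF matrix from raw post strings.

    Returns (matrix, vocabulary); rows are posts, columns are terms.
    """
    docs = [p.lower().split() for p in posts]
    vocab = sorted({w for d in docs for w in d})
    index = {w: i for i, w in enumerate(vocab)}
    tf = np.zeros((len(docs), len(vocab)))
    for r, d in enumerate(docs):
        for w in d:
            tf[r, index[w]] += 1
        tf[r] /= max(len(d), 1)               # term frequency
    df = (tf > 0).sum(0)                      # document frequency
    idf = np.log(len(docs) / df)
    return tf * idf, vocab

def cosine_similarity(M):
    """Pairwise cosine similarities between the rows of M."""
    norms = np.maximum(np.linalg.norm(M, axis=1, keepdims=True), 1e-12)
    U = M / norms
    return U @ U.T
```

A similarity matrix of this kind is the usual input to the clustering stage that groups posts into higher-order semantic categories.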


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3461 ◽  
Author(s):  
Jingwei Yin ◽  
Bing Liu ◽  
Guangping Zhu ◽  
Zhinan Xie

Detecting a moving target in a reverberant environment has long been a challenge. In recent years, methods based on low-rank and sparse theory have been developed to study this problem. Multiframe data containing the target echo and reverberation are arranged in a matrix, and detection is achieved by a low-rank and sparse decomposition of the data matrix. In this paper, we introduce a new method for this matrix decomposition using dynamic mode decomposition (DMD). DMD is usually used to calculate the eigenmodes of an approximate linear model. We divide the eigenmodes into two categories to realize the low-rank and sparse decomposition, and detect the target from the sparse component. Compared with previous methods based on low-rank and sparse theory, our method improves computation speed by a factor of approximately 4 to 90 at the expense of a slight loss of detection gain. This efficiency is a significant advantage for real-time processing, as it leaves time for other processing stages to improve detection performance. We have validated the method on three sets of underwater acoustic data.
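The DMD-based low-rank/sparse separation can be sketched as follows. This is a minimal NumPy illustration: the eigenvalue threshold and the rule of treating near-unit eigenvalues as the slowly varying (reverberation-like) component are assumptions, not the authors' exact mode-selection criterion.

```python
import numpy as np

def dmd_separate(frames, tol=1e-2):
    """Split a frame sequence into a slowly varying (low-rank) part
    and a residual (sparse) part via exact dynamic mode decomposition.

    frames: (d, m) array, one column per time frame.
    Modes whose eigenvalue is close to 1 barely change between frames
    and are treated as background.
    """
    X1, X2 = frames[:, :-1], frames[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    r = int((s > 1e-10 * s[0]).sum())             # numerical rank
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atil = (U.conj().T @ X2 @ Vh.conj().T) / s    # projected operator
    lam, W = np.linalg.eig(Atil)
    Phi = (X2 @ Vh.conj().T / s) @ W              # exact DMD modes
    b = np.linalg.lstsq(Phi, frames[:, 0].astype(complex), rcond=None)[0]
    t = np.arange(frames.shape[1])
    dyn = b[:, None] * lam[:, None] ** t[None, :]
    bg = np.abs(lam - 1.0) < tol                  # near-unit eigenvalues
    background = (Phi[:, bg] @ dyn[bg]).real
    foreground = frames - background              # sparse residual
    return background, foreground
```

The target would then be sought in the foreground component, where the static reverberation structure has been removed.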

