Elevator Trip Distribution for Inconsistent Passenger Input-Output Data

2007 ◽  
Vol 1 (2) ◽  
pp. 175-190 ◽  
Author(s):  
Kiyoshi Yoneda

Accurate traffic data are the basis for elevator group control and for its performance evaluation by trace-driven simulation. Present practice estimates a time series of inter-floor passenger traffic from commonly available elevator sensor data. The method demands that the sensor data be transformed into sets of passenger input-output data that are consistent in the sense that transportation preserves the number of passengers. Since observation involves measurement errors as well as behavioral assumptions that may in fact be violated, data adjustment procedures have been necessary to secure this consistency. This paper proposes an alternative algorithm that reconstructs elevator passenger origin-destination tables from inconsistent passenger input-output data sets, eliminating the ad hoc data adjustment.
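As a concrete illustration of the underlying problem (a stand-in, not the paper's algorithm), the sketch below reconstructs a single-trip origin-destination table from per-floor boarding and alighting counts that do not balance, using non-negative least squares so that inconsistency is absorbed as residual rather than removed by prior adjustment. All counts are invented.

```python
# A minimal sketch, assuming invented per-floor counts for one trip;
# the boarding and alighting sums disagree (8 vs 7), i.e. the data are
# inconsistent in the paper's sense.
import numpy as np
from scipy.optimize import nnls

boardings  = np.array([5.0, 2.0, 0.0, 1.0])
alightings = np.array([0.0, 1.0, 4.0, 2.0])
n = len(boardings)

# Row sums of the OD table T should match boardings, column sums should
# match alightings. Solve for x = vec(T) >= 0 in the least-squares sense,
# which yields a compromise instead of requiring prior data adjustment.
A = np.zeros((2 * n, n * n))
for i in range(n):
    for j in range(n):
        A[i, i * n + j] = 1.0        # origin-floor (row-sum) equation i
        A[n + j, i * n + j] = 1.0    # destination-floor (column-sum) equation j
y = np.concatenate([boardings, alightings])

x, residual = nnls(A, y)
T = x.reshape(n, n)                  # T[i, j]: passengers from floor i to floor j
print(np.round(T, 2))
print("residual from inconsistency:", round(residual, 3))
```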

Author(s):  
Lu Sun ◽  
Jie Zhou

Empirical speed–density relationships are important not only because of the central role that they play in macroscopic traffic flow theory but also because of their connection to car-following models, which are essential components of microscopic traffic simulation. Multiregime traffic speed–density relationships are more plausible than single-regime models for representing traffic flow over the entire range of density. However, a major difficulty associated with multiregime models is that the breakpoints of regimes are determined in an ad hoc and subjective manner. This paper proposes the use of cluster analysis as a natural tool for the segmentation of speed–density data. After data segmentation, regression analysis can be used to fit each data subset individually. Numerical examples with three real traffic data sets are presented to illustrate such an approach. Using cluster analysis, modelers have the flexibility to specify the number of regimes. It is shown that the K-means algorithm (where K represents the number of clusters) with original (nonstandardized) data works well for this purpose and can be conveniently used in practice.
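A minimal sketch of this two-step approach on synthetic data: K-means segments the raw (nonstandardized) speed-density observations into regimes, then a separate linear model is fitted to each regime. The data-generating formula and K = 3 are illustrative choices, not from the paper.

```python
# Cluster-then-regress sketch with synthetic speed-density data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
density = rng.uniform(0, 120, 500)                                     # veh/km (synthetic)
speed = np.clip(90 - 0.7 * density + rng.normal(0, 5, 500), 0, None)   # km/h

# Segment the raw observations; the modeler chooses the number of regimes K.
X = np.column_stack([density, speed])
K = 3
labels = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(X)

# Fit one regression per regime on its own data subset.
for k in range(K):
    d = density[labels == k].reshape(-1, 1)
    v = speed[labels == k]
    fit = LinearRegression().fit(d, v)
    print(f"regime {k}: v = {fit.intercept_:.1f} + {fit.coef_[0]:.2f} * density "
          f"(densities {d.min():.0f}-{d.max():.0f})")
```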


Author(s):  
Yiannis G. Smirlis ◽  
Dimitris K. Despotis

A recent development in data envelopment analysis (DEA) concerns the introduction of a piece-wise linear representation of the virtual inputs and/or outputs as a means to model situations where the marginal value of an output (input) is assumed to diminish (increase) as the output (input) increases. Currently, this approach is limited to crisp data sets. In this paper, the authors extend the piece-wise linear approach to interval DEA, i.e., to cases where the input/output data are only known to lie within intervals with given bounds. They define appropriate interval segmentations to implement the piece-wise linear forms in conjunction with the interval bounds of the input/output data, and propose new models compliant with the interval DEA methodology. They finally illustrate these developments with an artificial data set.
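For orientation, the sketch below shows only the plain interval DEA ingredient (a standard CCR multiplier model under optimistic interval scenarios), without the piece-wise linear extension the paper contributes. The data are artificial and the scenario construction follows the common interval DEA convention.

```python
# Optimistic (upper-bound) CCR efficiency under interval data, via linprog.
import numpy as np
from scipy.optimize import linprog

# [lower, upper] bounds: 4 DMUs, 1 input, 1 output (artificial data)
X_lo = np.array([[2.0], [3.0], [4.0], [5.0]]); X_hi = X_lo + 0.5
Y_lo = np.array([[1.0], [2.0], [2.5], [3.0]]); Y_hi = Y_lo + 0.4
n, m = X_lo.shape
s = Y_lo.shape[1]

for o in range(n):
    # Optimistic scenario: DMU o at its best (low input, high output),
    # every other DMU at its worst (high input, low output).
    X = X_hi.copy(); Y = Y_lo.copy()
    X[o] = X_lo[o];  Y[o] = Y_hi[o]
    # Variables: [u_1..u_s, v_1..v_m]; maximize u.y_o == minimize -u.y_o
    c = np.concatenate([-Y[o], np.zeros(m)])
    A_ub = np.hstack([Y, -X])                                  # u.y_j - v.x_j <= 0, all j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)  # v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    print(f"DMU {o}: optimistic efficiency = {-res.fun:.3f}")
```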


Author(s):  
Kyungkoo Jun

Background & Objective: This paper proposes a Fourier-transform-inspired method to classify human activities from time series sensor data. Methods: Our method begins by decomposing a 1D input signal into 2D patterns, a step motivated by the Fourier transform. The decomposition is aided by a Long Short-Term Memory (LSTM) network, which captures the temporal dependencies in the signal and produces encoded sequences. The sequences, once arranged into a 2D array, represent fingerprints of the signals. The benefit of this transformation is that we can exploit recent advances in deep learning models for image classification, such as Convolutional Neural Networks (CNNs). Results: The proposed model is thus a combination of LSTM and CNN. We evaluate the model on two data sets. On the first data set, which is more standardized than the other, our model outperforms, or at least equals, previous work. For the second data set, we devise schemes to generate training and testing data by varying the window size, the sliding step, and the labeling scheme. Conclusion: The evaluation results show accuracy above 95% in some cases. We also analyze the effect of these parameters on performance.
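A minimal PyTorch sketch of the architecture as described in the abstract: an LSTM encodes a 1D sensor window into a sequence of hidden states, the states are stacked into a 2D "fingerprint", and a small CNN classifies it. All layer sizes and the class count are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class LSTMCNN(nn.Module):
    def __init__(self, seq_len=64, hidden=64, n_classes=6):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, n_classes),
        )

    def forward(self, x):                  # x: (batch, seq_len) raw sensor values
        h, _ = self.lstm(x.unsqueeze(-1))  # -> (batch, seq_len, hidden) encoded sequence
        img = h.unsqueeze(1)               # -> (batch, 1, seq_len, hidden): the 2D pattern
        return self.cnn(img)

model = LSTMCNN()
logits = model(torch.randn(8, 64))         # 8 windows of 64 samples each
print(logits.shape)                        # torch.Size([8, 6])
```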


Author(s):  
Cong Gao ◽  
Ping Yang ◽  
Yanping Chen ◽  
Zhongmin Wang ◽  
Yue Wang

With the large-scale deployment of wireless sensor networks, anomaly detection for sensor data is becoming increasingly important in various fields. As a vital form of sensor data, time series exhibit three main types of anomaly: point anomaly, pattern anomaly, and sequence anomaly. In production environments, the analysis of pattern anomalies is the most rewarding. However, the traditional cloud computing processing model struggles with large amounts of widely distributed data. This paper presents an edge-cloud collaboration architecture for pattern anomaly detection in time series. A task migration algorithm is developed to alleviate the backlog of detection tasks at edge nodes. In addition, detection tasks related to long-term and short-term correlations in the time series are allocated to the cloud and edge nodes, respectively. A multi-dimensional feature representation scheme is devised for efficient dimension reduction. Two key components of the feature representation, trend identification and feature point extraction, are elaborated. Based on the resulting feature representation, pattern anomaly detection is performed with an improved kernel density estimation method. Finally, extensive experiments are conducted with synthetic and real-world data sets.
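A hedged sketch of the kernel density estimation step only: each window of a series is reduced to two simple features (a stand-in for the paper's trend identification and feature point extraction), a KDE is fitted on normal windows, and windows with unusually low density are flagged. Data, features, and thresholds are all illustrative.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def window_features(x, w=32):
    # Non-overlapping windows; slope and range as crude pattern features.
    wins = np.lib.stride_tricks.sliding_window_view(x, w)[::w]
    slope = np.polyfit(np.arange(w), wins.T, 1)[0]     # per-window trend
    spread = wins.max(axis=1) - wins.min(axis=1)       # per-window range
    return np.column_stack([slope, spread])

rng = np.random.default_rng(1)
normal = np.sin(np.linspace(0, 60, 4096)) + rng.normal(0, 0.1, 4096)
test = normal.copy()
test[2000:2032] += 3.0                                 # injected pattern anomaly

F_train = kde_input = window_features(normal)
kde = KernelDensity(kernel="gaussian", bandwidth=0.2).fit(F_train)
threshold = kde.score_samples(F_train).min()           # lowest log-density seen in training
scores = kde.score_samples(window_features(test))
print("anomalous windows:", np.where(scores < threshold)[0])
```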


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1573
Author(s):  
Loris Nanni ◽  
Giovanni Minchio ◽  
Sheryl Brahnam ◽  
Gianluca Maguolo ◽  
Alessandra Lumini

Traditionally, classifiers are trained to predict patterns within a feature space. The image classification system presented here trains classifiers to predict patterns within a vector space by combining the dissimilarity spaces generated by a large set of Siamese Neural Networks (SNNs). A set of centroids is calculated from the patterns in the training data sets with supervised k-means clustering. The centroids are used to generate the dissimilarity space via the Siamese networks. Vector space descriptors are extracted by projecting patterns onto the dissimilarity spaces, and SVMs classify an image by its dissimilarity vector. The versatility of the proposed approach in image classification is demonstrated by evaluating the system on different types of images across two domains: two medical data sets and two animal audio data sets with vocalizations represented as images (spectrograms). Results show that the proposed system performs competitively against the best methods in the literature, obtaining state-of-the-art performance on one of the medical data sets, and does so without ad hoc optimization of the clustering methods on the tested data sets.
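A simplified sketch of the dissimilarity-space pipeline: k-means centroids are computed as in the abstract, but plain Euclidean distance stands in for the learned Siamese dissimilarity, and the digits data set is a placeholder for the medical and spectrogram images.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Centroids from the training patterns (plain k-means here; the paper
# uses supervised k-means).
centroids = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X_tr).cluster_centers_

def dissimilarity_vectors(X, centroids):
    # Each pattern is described by its distances to all centroids;
    # an SNN would supply a learned dissimilarity here instead.
    return np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)

# SVM classifies each image by its dissimilarity vector.
svm = SVC(kernel="rbf").fit(dissimilarity_vectors(X_tr, centroids), y_tr)
pred = svm.predict(dissimilarity_vectors(X_te, centroids))
print(f"accuracy in the dissimilarity space: {accuracy_score(y_te, pred):.3f}")
```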


2021 ◽  
Vol 10 (1) ◽  
Author(s):  
Ikuo Kuroiwa

Extending the technique of unit structure analysis, which was originally developed by Ozaki (J Econ 73(5):720–748, 1980), this study introduces a method of value chain mapping that uses international input–output data and reveals both the upstream and downstream transactions of goods and services, as well as primary input (value added) and final output (final demand) transactions, which emerge along the entire value chain. This method is then applied to the agricultural value chain of three Greater Mekong Subregion countries: Thailand, Vietnam, and Cambodia. The results show that the agricultural value chain has been increasingly internationalized, although there is still room to benefit from participating in global value chains, especially in a country such as Cambodia. Although there are some constraints regarding the methodology and data, the method proves useful in tracing the entire value chain.
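For readers unfamiliar with the input-output machinery that unit structure analysis builds on, the sketch below derives the technical coefficient matrix and the Leontief inverse from a toy three-sector transaction table; the inverse traces how final demand for one sector pulls output through the whole chain. The numbers are invented, not from the study's international IO data.

```python
import numpy as np

Z = np.array([[10.,  5., 15.],     # inter-sector transactions
              [ 8., 20.,  5.],     # rows: selling sector, cols: buying sector
              [ 4.,  6., 12.]])
f = np.array([70., 67., 78.])      # final demand by sector
x = Z.sum(axis=1) + f              # gross output by sector

A = Z / x                          # technical coefficients a_ij = z_ij / x_j
L = np.linalg.inv(np.eye(3) - A)   # Leontief inverse (I - A)^-1

# Output required across all sectors to serve 1 unit of sector-2 final demand:
print(L @ np.array([0., 1., 0.]))
```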


Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 573
Author(s):  
Xiaochang Li ◽  
Zhengjun Zhai ◽  
Xin Ye

Emerging scale-out I/O-intensive applications, now in broad use, process large amounts of data in the buffer/cache for reorganization or analysis, and their performance is greatly affected by the speed of the I/O system. Efficient management of the limited kernel buffer plays a key role in improving I/O system performance through what are called proactive mechanisms: caching hinted data for future reuse, prefetching hinted data, and evicting from the buffer data that will not be accessed again. However, most existing buffer management schemes cannot identify the data reference regularities (i.e., sequential or looping patterns) that proactive mechanisms can exploit, nor can they operate at the application level to manage specific applications. In this paper, we present an Application Oriented I/O Optimization (AOIO) technique that automatically benefits the kernel buffer/cache by exploring the I/O regularities of applications based on the program counter technique. In our design, the input/output data and the looping pattern are in strict symmetry. With AOIO, each application can provide the operating system with predictions that achieve significantly better accuracy than other buffer management schemes. Trace-driven simulation experiments show that, for the workloads we used, hit ratios improve by an average of 25.9% and execution times are reduced by as much as 20.2% compared to other schemes.
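A hedged sketch of the reference-pattern idea this abstract relies on: classify the blocks touched at a given program counter as sequential, looping, or other, from a recorded access trace. The heuristic and thresholds are illustrative, not AOIO's actual classifier.

```python
from collections import defaultdict

def classify(trace):
    """trace: list of (program_counter, block_number) accesses."""
    by_pc = defaultdict(list)
    for pc, block in trace:
        by_pc[pc].append(block)
    patterns = {}
    for pc, blocks in by_pc.items():
        steps = [b2 - b1 for b1, b2 in zip(blocks, blocks[1:])]
        if steps and all(s == 1 for s in steps):
            patterns[pc] = "sequential"        # candidate for prefetching
        elif len(set(blocks)) < len(blocks) / 2:
            patterns[pc] = "looping"           # candidate for caching (reuse)
        else:
            patterns[pc] = "other"             # safe to evict early
    return patterns

# One PC scans blocks 0..7 in order; another revisits blocks 3 and 7.
trace = [(0x40, b) for b in range(8)] + [(0x88, b) for b in [3, 7, 3, 7, 3, 7]]
print(classify(trace))  # {64: 'sequential', 136: 'looping'}
```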

