transmission cost
Recently Published Documents

TOTAL DOCUMENTS: 204 (five years: 56)
H-INDEX: 13 (five years: 4)

2021 ◽  
Vol 2108 (1) ◽  
pp. 012068
Author(s):  
Jianlong Zhang

Abstract With the rapid development of computer, sensor, and communication technology, health monitoring systems for bridge structures have been progressively designed and improved. Because a bridge monitoring system operates continuously over long periods, its many sensors generate massive amounts of data, and transmitting that data from the collectors at the bridge site to the monitoring center over a bandwidth-limited wireless network consumes considerable time. This reduces transmission efficiency and throughput, increases transmission costs, and degrades the entire bridge inspection system. This paper proposes an FPGA-based parallel LZSS data compression method that compresses the incoming sensor data in parallel within the data acquisition system, improving transmission efficiency, increasing throughput, and reducing transmission cost.
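To make the underlying technique concrete, here is a minimal software model of LZSS: each position is encoded either as a literal byte or as an (offset, length) back-reference into a sliding window. The window and match limits below are illustrative, and the paper's contribution is a parallel FPGA realisation rather than this sequential sketch.

```python
WINDOW = 255      # how far back a reference may reach
MIN_MATCH = 3     # shorter matches are cheaper to emit as literals
MAX_MATCH = 15

def lzss_compress(data: bytes):
    """Return a token list: ('lit', byte) or ('ref', offset, length)."""
    out, i = [], 0
    while i < len(data):
        best_off, best_len = 0, 0
        for j in range(max(0, i - WINDOW), i):   # scan the sliding window
            length = 0
            while (length < MAX_MATCH and i + length < len(data)
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = i - j, length
        if best_len >= MIN_MATCH:
            out.append(('ref', best_off, best_len))
            i += best_len
        else:
            out.append(('lit', data[i]))
            i += 1
    return out

def lzss_decompress(tokens) -> bytes:
    buf = bytearray()
    for t in tokens:
        if t[0] == 'lit':
            buf.append(t[1])
        else:                        # copy byte-by-byte so overlapping
            _, off, length = t       # references work correctly
            for _ in range(length):
                buf.append(buf[-off])
    return bytes(buf)
```

Repetitive sensor streams compress well under this scheme because long repeated runs collapse into single back-references.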


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Xinyu Cui ◽  
Guifen Chen

In recent years, the application of intelligent transportation systems has gradually made the transportation industry safer, more efficient, and more environmentally friendly, and has opened broad research prospects for vehicular wireless communication technology. Distributed vehicular self-organizing networks are mobile ad hoc networks operating in realistic traffic situations; data interaction and transmission between nodes are achieved by establishing such a network. This paper proposes a multipath routing protocol that considers path stability and load balancing, addressing the shortcomings of existing distributed vehicular wireless self-organizing routing protocols. The protocol establishes three loop-free paths in the route discovery phase and uses a path stability parameter together with a load level parameter to measure the total transmission cost. In the route selection phase, the path with the lowest total transmission cost is selected as the highest-priority path for data transmission and the other two serve as alternates; when the primary path breaks, the highest-priority remaining path continues to transmit data as the primary route. To improve content distribution to target vehicles in scenarios where communication blind zones exist between adjacent roadside units, this paper also designs an assisted download mechanism for large content such as video files under a cooperative V2V and V2I communication regime. Considering a two-way lane scenario, vehicles driving in the same direction form clusters while vehicles driving in the opposite direction carry prefetched data; the clusters forward the prefetched data to increase the download volume of target vehicles in non-hotspot scenarios such as highways with sparsely deployed roadside units, meeting in-vehicle users' demand for large-file downloads and offering guidance for efficient distribution of large files in highway scenarios.
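The route selection step can be sketched as follows. The abstract does not give the exact weighting of stability against load, so a simple weighted sum of the two normalised parameters is assumed here purely for illustration:

```python
def total_cost(path, alpha=0.5):
    """Assumed combined metric: path['stability'] in (0, 1] (higher is
    more stable), path['load'] in [0, 1] (higher is more congested)."""
    return alpha * (1.0 - path['stability']) + (1.0 - alpha) * path['load']

def select_routes(paths):
    """Rank the three loop-free paths: the lowest total cost becomes the
    primary route; the rest serve as alternates in priority order."""
    ranked = sorted(paths, key=total_cost)
    return ranked[0], ranked[1:]
```

On a primary-path break, the protocol would simply promote the first entry of the alternate list.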


2021 ◽  
Vol 12 (4) ◽  
pp. 98-117
Author(s):  
Arun Agarwal ◽  
Khushboo Jain ◽  
Amita Dev

Recent developments in information-gathering procedures and the accumulation of big data over time, driven by the introduction of high-performance computing devices, pose new challenges in sensor networks. Data prediction has emerged as a key research area for reducing transmission cost, acting as a principal analytic tool. Transforming a huge amount of data into an equivalent reduced dataset while maintaining data accuracy and integrity is a prerequisite of any sensor network application. To meet these challenges, a data prediction technique is suggested that reduces the transmission of redundant data by building a regression model with linear descriptors over continuously sensed data values. The proposed model addresses the basic issues involved in data aggregation. It uses a buffer-based linear filter algorithm that compares all incoming values and establishes a correlation between them. The cluster head is accountable for predicting data values in the same time slot, calculating the deviation of the data values, and propagating the predicted values to the sink.
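The idea of suppressing redundant transmissions via a linear predictor can be sketched as below. The buffer size, threshold, and least-squares fit are illustrative assumptions, not the paper's exact algorithm:

```python
def fit_line(ys):
    """Least-squares slope/intercept for y over x = 0..n-1."""
    n = len(ys)
    mx, my = (n - 1) / 2.0, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in range(n))
    sxy = sum((x - mx) * (y - my) for x, y in enumerate(ys))
    b = sxy / sxx
    return my - b * mx, b          # intercept, slope

def filter_stream(readings, buf_size=4, threshold=0.5):
    """Transmit a reading only when it deviates from the linear
    prediction by more than `threshold`; otherwise the sink can
    reuse its own prediction, saving the transmission."""
    buf, transmitted = [], []
    for v in readings:
        if len(buf) < buf_size:
            buf.append(v)
            transmitted.append(v)   # warm-up phase: send everything
            continue
        a, b = fit_line(buf)
        predicted = a + b * len(buf)       # next point on the fitted line
        if abs(v - predicted) > threshold:
            transmitted.append(v)          # deviation too large: send it
        buf = buf[1:] + [v]                # slide the buffer forward
    return transmitted
```

On a smoothly varying signal, most readings fall within the threshold and are never sent, which is exactly the transmission-cost saving the abstract targets.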


2021 ◽  
Vol 11 (18) ◽  
pp. 8645
Author(s):  
Davide Careglio ◽  
Fernando Agraz ◽  
Dimitri Papadimitriou

With the globalisation of the multimedia entertainment industry and the popularity of streaming and content services, multicast routing is (re-)gaining interest as a bandwidth-saving technique. In the 1990s, multicast routing received a great deal of attention from the research community; nevertheless, its main problems remain mostly unaddressed, and it has not reached the acceptance level required for wide deployment. Among other reasons, the scaling limitations and relative complexity of the standard multicast protocol architecture can be attributed to the conventional approach of overlaying multicast routing on top of the unicast routing topology. In this paper, we present the Greedy Compact Multicast Routing (GCMR) scheme. GCMR is characterised by a scalable architecture and independence from any addressing and unicast routing scheme; more specifically, local knowledge of the cost to direct neighbour nodes is enough for GCMR to operate properly. The branches of the multicast tree are constructed directly by the joining destination nodes, which acquire the routing information needed to reach the multicast source by means of an incremental two-stage search process. We present the details of GCMR and evaluate its performance in terms of multicast tree size (i.e., the stretch), memory space consumption, communication cost, and transmission cost. The comparative performance analysis is carried out against one reference algorithm and two well-known protocol standards. Both simulation and emulation results show that GCMR achieves the expected performance objectives, and they provide guidelines for further improvements.
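The receiver-initiated join described above can be modelled roughly as follows: the destination searches outward with an expanding hop radius (standing in for the incremental two-stage search) until it reaches any node already on the multicast tree, and the discovered path becomes the grafted branch. The graph representation and bounds are illustrative assumptions, not GCMR's actual message exchange:

```python
from collections import deque

def join_tree(graph, tree_nodes, dest, max_hops=8):
    """graph: node -> iterable of neighbours. Returns the branch
    (dest .. first tree node reached) or None if none is reachable."""
    for radius in range(1, max_hops + 1):      # expanding-ring search
        frontier = deque([(dest, [dest])])
        seen = {dest}
        while frontier:
            node, path = frontier.popleft()
            if node in tree_nodes:
                return path                     # graft this branch
            if len(path) > radius:
                continue                        # beyond current ring
            for nb in graph[node]:
                if nb not in seen:
                    seen.add(nb)
                    frontier.append((nb, path + [nb]))
    return None
```

The point of the ring structure is that most joins terminate within a small radius, so a node never needs global routing state, only reachability of its direct neighbours.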


Author(s):  
Puneet Raj ◽  
Kirti Pal

Abstract In this paper, power factor correction coefficient based transmission pricing is proposed to analyze the effect of green energy transactions on individual customers in an existing power system. In this novel approach, a power factor correction coefficient is calculated for each customer under every transaction. This coefficient is then incorporated into the conventional embedded-cost, distance-based MW-mile and MVA-mile methods to calculate transmission prices for both active and reactive power flow through the transmission lines. The proposed transmission pricing method calculates transmission charges for each customer and also helps an ISO (independent system operator) decide whether a transaction increases or decreases the transmission cost. On the basis of a transaction's performance, the ISO can penalize or reward it. The proposed analysis is implemented on a 3-area IEEE 30-bus system with seven tie-lines in the MATLAB environment. To show the effectiveness of the proposed method, the results are compared with and without power factor correction based transmission pricing for each customer.
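For orientation, the classic MW-mile allocation charges each customer in proportion to the flow it causes on each line times that line's embedded cost. The paper additionally scales the charge by a per-customer power factor correction coefficient; its exact definition is not reproduced in the abstract, so the ratio form below is purely an assumed placeholder:

```python
def mw_mile_charge(line_flows, line_costs, pf, pf_ref=0.95):
    """line_flows: MW the customer causes on each line;
    line_costs: embedded $ cost per MW for each line (rate x length);
    pf: the customer's power factor. In this assumed form, a power
    factor below the reference raises the corrected charge."""
    base = sum(abs(f) * c for f, c in zip(line_flows, line_costs))
    correction = pf_ref / pf       # hypothetical coefficient form
    return base * correction
```

A customer at the reference power factor pays the plain MW-mile charge; one at half that power factor would pay double under this assumed coefficient, which is the kind of penalty/reward signal the abstract describes the ISO applying.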


2021 ◽  
Vol 7 ◽  
pp. e633
Author(s):  
Swaraj Dube ◽  
Yee Wan Wong ◽  
Hermawan Nugroho

Incremental learning evolves deep neural network knowledge over time by learning continuously from new data, instead of training a model once with all data available before training starts. In incremental learning, new samples are always streaming in, and the model must continuously adapt to them. Images are high-dimensional data, so training deep neural networks on them is very time-consuming. Fog computing is a paradigm that uses fog devices to carry out computation near data sources, reducing the computational load on the server. Fog computing democratizes deep learning by enabling intelligence at the fog devices; however, one of its main challenges is the high communication cost between fog devices and centralized servers, especially in incremental learning, where data samples continuously arrive and must be transmitted to the server for training. Working with Convolutional Neural Networks (CNNs), we demonstrate a novel data sampling algorithm that discards certain training images per class before training even starts, reducing both the transmission cost from the fog device to the server and the model training time while maintaining model learning performance in both static and incremental learning. Results show that the proposed method performs data sampling effectively regardless of model architecture, dataset, and learning settings.
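The shape of such a pre-training sampling step can be sketched as below. The paper's selection criterion is more sophisticated; random per-class subsampling with a fixed keep ratio is used here only as a stand-in to show where the transmission saving comes from:

```python
import random

def sample_per_class(dataset, keep_ratio=0.7, seed=0):
    """dataset: list of (image, label) pairs. Keeps roughly
    `keep_ratio` of each class, so class balance is preserved
    while fewer images are sent from the fog device to the server."""
    rng = random.Random(seed)
    by_class = {}
    for img, label in dataset:
        by_class.setdefault(label, []).append((img, label))
    kept = []
    for label, items in by_class.items():
        rng.shuffle(items)
        kept.extend(items[:max(1, int(len(items) * keep_ratio))])
    return kept
```

Because the discard happens on the fog device before any upload, the communication saving scales directly with (1 − keep_ratio).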


2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Mahsa Beigrezaei ◽  
Abolfazel Toroghi Haghighat ◽  
Seyedeh Leili Mirtaheri

The efficiency of data-intensive applications in distributed environments such as Cloud, Fog, and Grid is directly related to data access delay. Delays caused by queue workload or by failures decrease data access efficiency. Data replication is a critical technique for reducing access latency. In this paper, a fuzzy-based replication algorithm is proposed that avoids these imposed delays by considering a comprehensive set of significant parameters to improve performance. The proposed algorithm selects the appropriate replica using a hierarchical method, taking into account the transmission cost, queue delay, and failure probability. It determines the best place for replication using a fuzzy inference system that considers the queue workload, the number of future accesses, the last access time, and the communication capacity, and it uses the Simple Exponential Smoothing method to predict future file popularity. The OptorSim simulator is used to evaluate the proposed algorithm under different access patterns. The results show that the algorithm improves performance in terms of the number of replications, the percentage of storage filled, and the mean job execution time. The proposed algorithm is most efficient under random access patterns, especially the random Zipf access pattern, and it also performs well as the number of jobs and the file sizes increase.
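The Simple Exponential Smoothing forecast used for file popularity follows the standard recurrence s_t = α·x_t + (1 − α)·s_{t−1}; the smoothed level after the latest observation serves as the one-step-ahead forecast. The smoothing factor below is illustrative:

```python
def ses_forecast(access_counts, alpha=0.5):
    """Simple Exponential Smoothing over a file's per-period access
    counts; returns the final smoothed level, used as the forecast of
    the file's popularity in the next period."""
    s = access_counts[0]               # initialise with first observation
    for x in access_counts[1:]:
        s = alpha * x + (1 - alpha) * s
    return s
```

A larger alpha weights recent accesses more heavily, reacting faster to popularity shifts at the cost of noisier forecasts.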


Author(s):  
K. S. Surekha ◽  
B. P. Patil ◽  
Ranjeet Kumar ◽  
Davinder Pal Sharma

An electrocardiogram (ECG) signal is an important diagnostic tool for cardiologists to detect abnormalities. Continuous ambulatory monitoring produces a huge amount of ECG data, which leads to high storage requirements and transmission costs. Hence, to reduce storage and transmission cost, an efficient compression or coding technique is required. One of the most promising compression techniques is Compressive Sensing (CS), which compresses signals efficiently: a signal can easily be reconstructed if it has a sparse representation. This paper presents a Block Sparse Bayesian Learning (BSBL)-based multiscale compressed sensing (MCS) method for the compression of ECG signals. The main focus of the proposed technique is to achieve a reconstructed signal with less error and more energy efficiency. The ECG signal is sparsely represented by the wavelet transform, the MIT-BIH Arrhythmia database is used for testing, and the Huffman technique is used for encoding and decoding. Signal recovery is adequate up to 75% compression. The quality of the signal is ascertained using standard performance measures such as the signal-to-noise ratio (SNR) and the percent root-mean-square difference (PRD), and the reconstructed ECG signal is also validated visually. This method is most suitable for telemedicine applications.
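The two quality measures named in the abstract are standard and straightforward to compute from an original segment x and its reconstruction x_rec:

```python
import math

def prd(x, x_rec):
    """Percent root-mean-square difference: lower is better."""
    num = sum((a - b) ** 2 for a, b in zip(x, x_rec))
    den = sum(a ** 2 for a in x)
    return 100.0 * math.sqrt(num / den)

def snr_db(x, x_rec):
    """Signal-to-noise ratio of the reconstruction in dB: higher is better."""
    num = sum(a ** 2 for a in x)
    den = sum((a - b) ** 2 for a, b in zip(x, x_rec))
    return 10.0 * math.log10(num / den)
```

Note the two are tied together: SNR in dB equals −20·log10(PRD/100), so reporting both is a consistency check rather than independent evidence.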

