data burst
Recently Published Documents


TOTAL DOCUMENTS

23
(FIVE YEARS 6)

H-INDEX

3
(FIVE YEARS 0)

MAUSAM ◽  
2021 ◽  
Vol 57 (3) ◽  
pp. 499-506
Author(s):  
E. MUTHURAMALINGAM ◽  
SANJAY KUMAR ◽  
R. D. VASHISTHA

An Automatic Weather Station (AWS) is a system consisting of sensors, associated field sub-systems and communication equipment that automatically and continuously measures real-time surface weather conditions. The hourly observed meteorological parameters are transmitted three times to the central station through a satellite link, in a self-timed pseudo-random manner within the prescribed 10-minute time slots of the 60 minutes preceding the next observation. AWS data are lost when data bursts transmitted simultaneously by two or more AWS collide. In general, the collision of AWS data bursts depends on the transmission time (or transmission baud rate), the number of AWS in the network, and the total number of bits in an AWS data burst. This paper describes the influence of data burst collisions on the transmission of AWS data through satellite.
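The collision behaviour described above resembles an unslotted random-access channel: with N stations each transmitting a burst of some duration at a random instant inside a shared window, two bursts collide when their transmission intervals overlap. The sketch below estimates a per-window collision probability by Monte Carlo simulation under that simplified model; the parameter names and numbers (n_stations, burst_bits, baud_rate, window_s) are illustrative assumptions, not values from the paper.

```python
import random

def collision_probability(n_stations: int, burst_bits: int, baud_rate: float,
                          window_s: float, trials: int = 10_000) -> float:
    """Estimate the probability that at least one pair of bursts overlaps
    when every station transmits once at a random instant in the window."""
    burst_s = burst_bits / baud_rate          # burst duration on air
    collisions = 0
    for _ in range(trials):
        starts = sorted(random.uniform(0.0, window_s) for _ in range(n_stations))
        # Adjacent start times closer than one burst duration imply an overlap.
        if any(b - a < burst_s for a, b in zip(starts, starts[1:])):
            collisions += 1
    return collisions / trials

# Illustrative numbers only: 100 stations, 1200-bit bursts at 4800 baud,
# sharing a 600-second (10-minute) transmission slot.
print(collision_probability(n_stations=100, burst_bits=1200,
                            baud_rate=4800.0, window_s=600.0))
```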


2021 ◽  
Vol 6 (7) ◽  
pp. 38-41
Author(s):  
Raghavendra Dakuri Venkata

The main aim of the report was to transmit frequency signals without disturbance and to compute the time- and frequency-domain distribution of their energy. The data burst system was operated inside an anechoic chamber to study electromagnetic interference, and a spectrum analyser was used to identify the required frequency during signal transmission and to minimise noise disturbances.
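As a rough illustration of identifying the dominant frequency of a captured signal (the role the spectrum analyser plays above), the following sketch estimates the peak spectral component of a sampled waveform with an FFT. The sample rate, capture length, and tone frequency are made-up values, not data from the report.

```python
import numpy as np

fs = 100_000.0                       # assumed sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)       # 10 ms capture window
# Made-up test signal: a 12 kHz tone buried in noise.
signal = np.sin(2 * np.pi * 12_000 * t) + 0.3 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
print(f"dominant component ~ {freqs[np.argmax(spectrum)]:.0f} Hz")
```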


Author(s):  
Dariusz Jacek Jakobczak ◽  
Ahan Chatterjee

The huge burst of data that came with affordable access to the internet gave rise to the cloud computing market that stores it, and the need to extract results from these data drove the growth of the "big data" industry, which analyses this enormous volume of data and draws conclusions from it using various algorithms. Hadoop, as a big data platform, uses the MapReduce framework to produce analysis reports over big data. The term "big data" can be defined as the modern set of techniques for capturing, storing, and managing datasets at petabyte scale or larger, with high velocity and varied structure. Addressing this massive growth of data requires enormous computing capacity to process it fruitfully, and cloud computing is the technology able to perform such large-scale and highly complex computation. Cloud analytics enables organizations to perform better business intelligence, data warehousing operations, and online analytical processing (OLAP).
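To make the MapReduce idea mentioned above concrete, the toy sketch below runs a word count as an explicit map, shuffle, and reduce over an in-memory list; it only illustrates the programming model and is not Hadoop's API.

```python
from collections import defaultdict

documents = ["cloud stores big data", "big data needs cloud analytics"]

# Map: emit (word, 1) pairs from every document.
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle: group intermediate pairs by key.
grouped = defaultdict(list)
for word, count in mapped:
    grouped[word].append(count)

# Reduce: combine the values for each key.
word_counts = {word: sum(counts) for word, counts in grouped.items()}
print(word_counts)
```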


In an Optical Burst Switched (OBS) network, data packets sourced from peripheral networks are assembled into large data bursts. For each assembled data burst, an associated control signal in the form of a burst control packet (BCP) is generated and scheduled an offset time ahead of the data burst. The offset time allows the required resources to be pre-configured at all subsequent intermediate nodes before the actual data burst arrives; the data burst can then fly through each node with no need for temporary buffering at intermediate nodes. An operational requirement of an OBS network is that it be lossless, as only then can a consistent and acceptable quality of service (QoS) be guaranteed for all applications and services it serves as a platform. Losses in such a network are mainly caused by improper provisioning and dimensioning of resources, which leads to contention among bursts and consequently to the discarding of some of the contending data bursts. Key to both provisioning and properly dimensioning the available resources in an optimized way is the implementation of an effective routing and wavelength assignment (RWA) scheme that precludes data losses due to contention. Building on the streamline effect (SLE), which effectively precludes primary contention among flows (streams) in the network, we propose in this paper a limited intermediate buffering scheme coupled with SLE-aware prioritized RWA (LIB-PRWA) that combats secondary contention as well. The scheme makes routing decisions, such as the selection of primary and deflection routes, based on the current resource states of the candidate paths. A performance comparison of the proposed scheme is carried out, and simulation results demonstrate its ability to effectively reduce losses while maintaining both high network resource utilization and QoS.
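The offset-time mechanism described above is typically sized so that the BCP has been processed at every downstream node before the burst itself arrives. Below is a minimal sketch of that bookkeeping, assuming a fixed per-hop BCP processing delay and an optional extra offset used to give one burst class priority over another; the names and numbers (per_hop_processing_s, qos_extra_s) are illustrative assumptions, not the paper's scheme.

```python
def required_offset_s(hops_remaining: int,
                      per_hop_processing_s: float,
                      qos_extra_s: float = 0.0) -> float:
    """Base offset covers BCP processing at every remaining hop;
    an optional extra offset raises the burst's effective priority."""
    return hops_remaining * per_hop_processing_s + qos_extra_s

# Illustrative numbers: 5 hops, 50 microseconds of BCP processing per hop,
# plus a 200-microsecond extra offset for a high-priority class.
print(required_offset_s(5, 50e-6))           # best-effort burst
print(required_offset_s(5, 50e-6, 200e-6))   # high-priority burst
```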


2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Bakhe Nleya ◽  
Andrew Mutsvangwa

The Optical Burst Switching (OBS) paradigm, coupled with Dense Wavelength Division Multiplexing (DWDM), has become a practical candidate solution for next-generation optical backbone networks. In practical deployments only the edge nodes are provisioned with buffering capabilities, whereas all interior (core) nodes remain buffer-less. This keeps the implementation simple and cost effective, since no optical buffers are needed in the interior. However, the buffer-less nature of the interior nodes makes such networks prone to data burst contention, which degrades overall network performance through sporadic heavy burst losses. These drawbacks can be partly countered proactively, by appropriately dimensioning the available network resources, and reactively, by deflecting excess and contending data bursts onto available least-cost alternate paths. However, the deflected data bursts (traffic) must not degrade network performance on the deflection routes. Because minimizing contention occurrences is key to provisioning a consistent Quality of Service (QoS), in this paper we propose and analyze a framework (scheme) that intelligently deflects traffic in the core network so that QoS degradation caused by contention is minimized. This is achieved through regulated deflection routing (rDr), in which neural network agents reinforce the deflection route choices made at core nodes. The framework relies on both reactive and proactive regulated deflection routing to prevent or resolve data burst contentions. Simulation results show that the scheme effectively improves overall network performance when compared with existing contention resolution approaches. Notably, it reduces burst losses, end-to-end delays, the frequency of contention occurrences, and the number of burst deflections.
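A rough sketch of the reinforcement idea behind rDr follows, assuming each core node keeps a running score per candidate deflection route and updates it from observed delivery outcomes. The epsilon-greedy agent here is only a stand-in for the neural network agents described in the paper, and the route names and parameters are made up.

```python
import random

class DeflectionAgent:
    """Keeps a running success score per deflection route and picks routes
    epsilon-greedily (a stand-in for the paper's neural-network agent)."""

    def __init__(self, routes, epsilon=0.1):
        self.scores = {route: 0.0 for route in routes}
        self.counts = {route: 0 for route in routes}
        self.epsilon = epsilon

    def choose(self):
        if random.random() < self.epsilon:              # explore
            return random.choice(list(self.scores))
        return max(self.scores, key=self.scores.get)    # exploit best route

    def update(self, route, delivered: bool):
        # Incremental average of delivery successes observed on this route.
        self.counts[route] += 1
        reward = 1.0 if delivered else 0.0
        self.scores[route] += (reward - self.scores[route]) / self.counts[route]

agent = DeflectionAgent(["path_A", "path_B", "path_C"])
route = agent.choose()
agent.update(route, delivered=True)
```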


2016 ◽  
Vol 65 (11) ◽  
pp. 9414-9419 ◽  
Author(s):  
Francesco Chiti ◽  
Romano Fantacci ◽  
Tommaso Pecorella

Author(s):  
A. K. Rauniyar ◽  
A. S. Mandloi

Optical Burst Switching (OBS) is considered a promising paradigm for carrying IP traffic in Wavelength Division Multiplexing (WDM) optical networks. Scheduling data bursts onto data channels in an optimal way is one of the key problems in Optical Burst Switched networks. The main concern in this paper is to schedule incoming bursts on the proper data channels so that more bursts can be scheduled and burst loss is reduced. Different algorithms exist for scheduling data bursts on data channels; Non-preemptive Delay-First Minimum Overlap Channel with Void Filling (NP-DFMOC-VF) and Non-preemptive Segment-First Minimum Overlap Channel with Void Filling (NP-SFMOC-VF) are the best among the existing segmentation-based void-filling algorithms. Although they give lower burst loss, they do not use the existing channels efficiently. In this paper we propose a new approach that yields lower burst loss while also utilizing the existing channels efficiently. We also analyze the performance of the proposed scheduling algorithm and compare it with the existing void-filling algorithms. It is shown that the proposed algorithm performs somewhat better than the existing algorithms.

Journal of Advanced College of Engineering and Management, Vol. 1, 2015, pp. 1-10
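As a simple illustration of void filling on data channels (not the NP-DFMOC-VF or NP-SFMOC-VF algorithms themselves), the sketch below checks whether a new burst fits into a gap (void) between bursts already scheduled on a channel and assigns it to the first channel that can accommodate it; the data structures and names are illustrative assumptions.

```python
def fits_in_void(scheduled, start, end):
    """scheduled: sorted list of (start, end) bursts already on the channel.
    Returns True if [start, end) overlaps none of them, i.e. fits a void."""
    return all(end <= s or start >= e for s, e in scheduled)

def schedule_with_void_filling(channels, start, end):
    """Place the burst on the first channel whose voids can accommodate it."""
    for idx, scheduled in enumerate(channels):
        if fits_in_void(scheduled, start, end):
            scheduled.append((start, end))
            scheduled.sort()
            return idx          # channel chosen for this burst
    return None                 # contention: burst would be dropped or segmented

# Two channels with existing bursts; try to fit a burst occupying t = [4, 6).
channels = [[(0, 3), (7, 9)], [(2, 5)]]
print(schedule_with_void_filling(channels, 4, 6))   # -> 0 (fits the void 3..7)
```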


2015 ◽  
Vol 73 (2) ◽  
Author(s):  
Mohammed Al-Shargabi ◽  
Faisal Saeed ◽  
Zaid Shamsan ◽  
Abdul Samad Ismail ◽  
Sevia M Idrus

Optical burst switching (OBS) networks have attracted much attention as a promising approach to building the next-generation optical Internet. Aggregating bursts in OBS networks from high-priority traffic increases the average loss of its packets. The ratio of high-priority traffic (e.g. real-time traffic) within the burst is therefore a very important factor for reducing data loss and ensuring fairness between network traffic types. This paper introduces a statistical study, based on the significant difference between the traffic types, to find the fairness ratio of high-priority traffic packets to low-priority traffic packets inside the data burst under various network traffic loads. The results show an improvement in OBS quality of service (QoS) performance, with the fairness ratio of high-priority packets inside the data burst being 50 to 60%, 30 to 40%, and 10 to 20% for high, normal, and low traffic loads, respectively.

