tree traversal
Recently Published Documents

Total documents: 109 (five years: 24)
H-index: 11 (five years: 2)

2021 · Vol 28 (4)
Author(s): Shishuo Fu, Zhicong Lin, Yaling Wang

A di-sk tree is a rooted binary tree whose nodes are labeled by $\oplus$ or $\ominus$, and no node has the same label as its right child. Di-sk trees are in natural bijection with separable permutations. We construct a combinatorial bijection on di-sk trees proving that the two quintuples $(\mathrm{LMAX},\mathrm{LMIN},\mathrm{DESB},\mathsf{iar},\mathsf{comp})$ and $(\mathrm{LMAX},\mathrm{LMIN},\mathrm{DESB},\mathsf{comp},\mathsf{iar})$ have the same distribution over separable permutations. Here, for a permutation $\pi$, $\mathrm{LMAX}(\pi)/\mathrm{LMIN}(\pi)$ is the set of values of the left-to-right maxima/minima of $\pi$ and $\mathrm{DESB}(\pi)$ is the set of descent bottoms of $\pi$, while $\mathsf{comp}(\pi)$ and $\mathsf{iar}(\pi)$ are, respectively, the number of components of $\pi$ and the length of the initial ascending run of $\pi$. Interestingly, our bijection specializes to a bijection on $312$-avoiding permutations, which provides (up to the classical Knuth–Richards bijection) an alternative approach to a result of Rubey (2016) asserting that the two triples $(\mathrm{LMAX},\mathsf{iar},\mathsf{comp})$ and $(\mathrm{LMAX},\mathsf{comp},\mathsf{iar})$ are equidistributed on $321$-avoiding permutations. Rubey's result is a symmetric extension of an equidistribution due to Adin–Bagno–Roichman, which implies that the class of $321$-avoiding permutations with a prescribed number of components is Schur positive. Some equidistribution results for various statistics concerning tree traversal are presented at the end.
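
To make the five statistics concrete, here is a short Python sketch (ours, not the authors') that computes LMAX, LMIN, DESB, iar, and comp for a permutation given in one-line notation.

```python
def stats(perm):
    """Compute LMAX, LMIN, DESB, iar and comp for a permutation of 1..n
    given in one-line notation, e.g. (3, 1, 2, 5, 4)."""
    n = len(perm)

    # LMAX / LMIN: values of the left-to-right maxima / minima
    lmax, lmin = set(), set()
    cur_max, cur_min = float("-inf"), float("inf")
    for v in perm:
        if v > cur_max:
            lmax.add(v); cur_max = v
        if v < cur_min:
            lmin.add(v); cur_min = v

    # DESB: descent bottoms, i.e. values perm[i+1] with perm[i] > perm[i+1]
    desb = {perm[i + 1] for i in range(n - 1) if perm[i] > perm[i + 1]}

    # iar: length of the initial (maximal) ascending run
    iar = 1
    while iar < n and perm[iar - 1] < perm[iar]:
        iar += 1

    # comp: number of components, i.e. prefixes whose value set is {1,...,i}
    comp, seen = 0, set()
    for i, v in enumerate(perm, start=1):
        seen.add(v)
        if max(seen) == i:
            comp += 1

    return lmax, lmin, desb, iar, comp


# Example: the separable permutation 3 1 2 5 4
print(stats((3, 1, 2, 5, 4)))
# LMAX = {3, 5}, LMIN = {1, 3}, DESB = {1, 4}, iar = 1, comp = 2
```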


2021 · Vol 20 · pp. 108-125
Author(s): Indranil Roy, Swathi Kaluvakuri, Koushik Maddali, Ziping Liu, Bidyut Gupta

In this paper, we consider a recently reported 2-layer non-DHT-based structured P2P network. Residue classes based on modular arithmetic are used to realize the overlay topology. At the heart of the architecture (layer 1) there exists a tree-like structure, known as a pyramid tree, which is not a conventional tree. A node i in this tree represents the cluster-head of a cluster of peers that are interested in a particular resource of type Ri (i.e. peers with a common interest); the cluster-head is the first among these peers to join the system. The root of the tree is assumed to be at level 1. Such a tree is complete if each level j contains exactly j nodes; it is incomplete if only its leaf level, say level k, contains fewer than k nodes. Layer 2 consists of the different clusters. The network has some unique structural properties, e.g. each cluster has a diameter of only 1 overlay hop and the diameter of the network is just (2+2d), d being the number of levels of the layer-1 pyramid tree; d depends only on the number of distinct resources. Therefore, the diameter of the network is independent of the number of peers in the whole network. In the present work, we use some of these properties to design low-latency intra- and inter-cluster data lookup protocols. Our choice of non-DHT, interest-based overlay networks is justified by the following facts: 1) the intra-cluster data lookup protocol has constant complexity and the complexity of inter-cluster data lookup is O(d) if tree traversal is used, and 2) search latency is independent of the total number of peers present in the overlay network, unlike any structured DHT-based network (as a matter of fact, unlike any existing P2P network, structured or unstructured). Experimental results also show the superiority of the proposed protocols over some noted structured networks with respect to search latency and the complexity involved in it. In addition, we present in detail the process of handling churn and propose a simple yet very effective technique related to cluster partitioning which, in turn, helps reduce the number of messages that need to be exchanged to handle churn.
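
The toy Python sketch below (our illustration, not the paper's protocol) mimics the two lookup ideas: a pyramid-like tree of cluster-heads whose depth d depends only on the number of distinct resource types, an inter-cluster lookup that traverses at most d tree levels, and one extra hop inside the target cluster (cluster diameter 1). The parent/child wiring of the tree is an assumption made purely for illustration.

```python
from collections import deque

class ClusterHead:
    def __init__(self, resource_type):
        self.resource_type = resource_type   # resource type R_i this cluster serves
        self.peers = set()                   # peers interested in R_i (reachable in 1 hop)
        self.children = []                   # tree links to other cluster-heads

def build_pyramid(resource_types):
    """Arrange one cluster-head per resource type in levels 1, 2, ..., d with
    j heads at level j; the parent/child wiring here is an illustrative assumption."""
    heads = [ClusterHead(r) for r in resource_types]
    levels, i, j = [], 0, 1
    while i < len(heads):
        levels.append(heads[i:i + j])
        i += j
        j += 1
    for upper, lower in zip(levels, levels[1:]):
        for k, child in enumerate(lower):
            upper[min(k, len(upper) - 1)].children.append(child)
    return levels[0][0], len(levels)         # root cluster-head and depth d

def inter_cluster_lookup(root, wanted_type):
    """Traverse the cluster-head tree (BFS): at most d levels to reach the right
    cluster-head, then one more overlay hop inside that cluster."""
    queue = deque([(root, 0)])
    while queue:
        head, hops = queue.popleft()
        if head.resource_type == wanted_type:
            return hops + 1                  # +1 for the intra-cluster hop
        queue.extend((child, hops + 1) for child in head.children)
    return None

root, d = build_pyramid([f"R{i}" for i in range(1, 11)])  # 10 resource types -> d = 4
print(d, inter_cluster_lookup(root, "R7"))                # cost grows with d, not with #peers
```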


2021 · Vol 20 · pp. 66-73
Author(s): Mohammad A. Jassim, Wesam A. Almobaideen

Wireless Sensor Networks (WSNs) are sink-based networks in which assigned sinks gather all data sensed by lightweight devices deployed in natural areas. The sensor devices are energy-scarce; therefore, energy-efficient protocols need to be designed for this kind of technology. The Power-Efficient GAthering in Sensor Information Systems (PEGASIS) protocol is an energy-efficient data gathering protocol in which a chain is constructed using a greedy approach. This greedy approach has been shown to produce unbalanced distances among the nodes, which results in unfair energy consumption. Tree traversal algorithms have been used to improve the constructed chain so as to distribute the energy consumption fairly. In this research, however, a new segment-based tree traversal approach is introduced to further improve the constructed chain. Our proposed algorithm first constructs initial segments from a list of nodes sorted according to post-order traversal. It then groups these segments and concatenates them one by one according to their location; thus, our approach uses location awareness to construct a single balanced chain for the data gathering process. This approach has been evaluated under various numbers of sensor devices in the network field with respect to several crucial performance metrics. Our simulation results show that the proposed segment-based chain construction approach produces shorter chains and shorter transmission ranges, which in turn improves the overall energy consumption per round, network lifetime, and end-to-end delay.
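
The following sketch is a simplified illustration of the segment-and-concatenate idea, under assumptions of ours (the post-order node list is given, segments have a fixed size, and segments are joined greedily by nearest endpoints); the paper's exact construction rules may differ.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def build_chain(postorder_nodes, seg_size=4):
    # 1) cut the post-order sorted node list into initial segments
    segments = [postorder_nodes[i:i + seg_size]
                for i in range(0, len(postorder_nodes), seg_size)]

    # 2) concatenate segments one by one, always appending the segment whose
    #    nearest endpoint is closest to the current chain's tail (location-aware)
    chain = segments.pop(0)
    while segments:
        tail = chain[-1]
        best = min(segments, key=lambda s: min(dist(tail, s[0]), dist(tail, s[-1])))
        segments.remove(best)
        if dist(tail, best[-1]) < dist(tail, best[0]):
            best = best[::-1]          # flip so its nearer endpoint joins the tail
        chain.extend(best)
    return chain

def chain_length(chain):
    # total transmission distance along the chain, a proxy for energy per round
    return sum(dist(a, b) for a, b in zip(chain, chain[1:]))

# Example: a few sensor positions assumed to be already listed in post-order
nodes = [(0, 0), (1, 0), (1, 1), (0, 1), (5, 5), (6, 5), (6, 6), (5, 6)]
chain = build_chain(nodes, seg_size=4)
print(chain, round(chain_length(chain), 2))
```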


Author(s): Jose Triny K, et al.

Web pages are increasingly used as the user interface of many software systems. The simplicity of interacting with web pages is a key benefit of using them. However, the user interface can also become more complicated when more complex web pages are used to build it. Understanding the complexity of web pages as perceived subjectively by users is therefore crucial to better design such a user interface. Searching is one of the most common tasks performed on the Internet. Search engines are the essential tool of the web, through which one can gather related information retrieved according to a keyword given by the user. The information on the web is growing dramatically, so the user has to spend more time on the web to find the exact information they are interested in. Existing web search engines do not take into account the specific needs of individual users and serve every user in the same way. For an ambiguous query, a number of documents on different topics are returned by the search engines. Hence it becomes difficult for the user to get the required content, and it also takes more time to locate relevant content. In this paper, we survey the various algorithms for reducing complexity in web page navigation.


Author(s): Iain Duncan Stalker, Nikolai Kazantsev

Our interest here lies in supporting important, but routine and time-consuming, activities that underpin success in highly distributed, collaborative design and manufacturing environments, and in how information structuring can facilitate this. To that end, we present a simple yet powerful approach to team formation, partner selection, scheduling, and communication that takes a different approach to the task of matching candidates to opportunities or partners to requirements (matchmaking): traditionally, this is approached using either an idea of ‘nearness’ or ‘best fit’ (metric-based paradigms), or by finding a subtree within a tree data structure (tree traversal). Instead, we prefer concept lattices to establish notions of ‘inclusion’ or ‘membership’: essentially, a topological paradigm. While our approach is substantive, it can be used alongside traditional approaches, and in this way one could harness the strengths of multiple paradigms.
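
A minimal sketch of the contrast, using hypothetical supplier names and capability sets of our own invention (not the authors' data or implementation): inclusion-based matchmaking is a membership test over capability sets, the kind of test a concept-lattice view supports, whereas a metric "best fit" ranks candidates by similarity.

```python
# Hypothetical candidates (partners) and their capability sets
candidates = {
    "SupplierA": {"milling", "anodising", "ISO9001"},
    "SupplierB": {"milling", "3d_printing"},
    "SupplierC": {"milling", "anodising", "ISO9001", "assembly"},
}
requirement = {"milling", "anodising", "ISO9001"}

# Inclusion-based matchmaking: a candidate matches iff it offers at least
# every required capability -- a membership/inclusion test, not a distance.
matches = [name for name, caps in candidates.items() if requirement <= caps]
print(matches)   # ['SupplierA', 'SupplierC']

# Metric-based matchmaking ("best fit") for contrast: rank by Jaccard similarity,
# which can prefer a near-miss candidate even when it lacks a required capability.
def jaccard(a, b):
    return len(a & b) / len(a | b)

ranked = sorted(candidates, key=lambda n: jaccard(candidates[n], requirement), reverse=True)
print(ranked)    # SupplierA first (exact match), then SupplierC, then SupplierB
```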


Author(s): Keshav Sinha, Partha Paul, Amritanjali

Distributed computing is one of the thrust areas in the field of computer science, but when security is a concern, a question arises: "Can it be secure?" The authors start this chapter from this note. In a distributed environment, when the system is connected to a network and the operating system firewall is active, it takes care of all authentication and access control requests. There are several traditional cryptographic approaches that implement authentication and access control. Encryption algorithms such as Rijndael, RSA, A3, and A5 are used for providing data secrecy. Some key distribution techniques are discussed, such as Diffie-Hellman key exchange for symmetric keys, and a random key generation (LCG) technique is used in red-black tree traversal, which provides security for the digital contents. The chapter deals with advanced versions of network security techniques and cryptographic algorithms for securing multimedia contents over the internet.
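
As a toy illustration of two of the techniques named above (with small, insecure parameters chosen by us, and a pairing of the two techniques that is our assumption rather than the chapter's design), the Python sketch below shows Diffie-Hellman key agreement producing a shared secret and a linear congruential generator (LCG) deriving pseudo-random key values from it.

```python
import secrets

# Diffie-Hellman over a small prime group (real deployments use 2048-bit+ groups)
p, g = 0xFFFFFFFB, 5                 # toy public parameters: prime modulus, generator
a = secrets.randbelow(p - 2) + 1     # Alice's private exponent
b = secrets.randbelow(p - 2) + 1     # Bob's private exponent
A, B = pow(g, a, p), pow(g, b, p)    # public values exchanged over the network
assert pow(B, a, p) == pow(A, b, p)  # both sides compute the same shared secret
shared = pow(B, a, p)

# Linear congruential generator: x_{n+1} = (a*x_n + c) mod m
def lcg(seed, a=1103515245, c=12345, m=2**31):
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

keystream = lcg(shared)
print([next(keystream) for _ in range(3)])   # first few pseudo-random key values
```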


2021 · pp. 418-428
Author(s): Zhuowei Li, Qing Xia, Zhiqiang Hu, Wenji Wang, Lijian Xu, ...
