segment tree
Recently Published Documents

TOTAL DOCUMENTS: 37 (five years: 12)
H-INDEX: 4 (five years: 1)

2021, Vol 13 (23), pp. 4816
Author(s): Jianmei Ling, Lu Li, Haiyan Wang

Compared with traditional optical and multispectral remote sensing images, hyperspectral images have hundreds of bands, which makes fine classification of the Earth's surface possible. At the same time, a hyperspectral image carries both spatial and spectral information, and combining the two for hyperspectral classification has become an active research topic. Building on the idea of spatial–spectral classification, this paper proposes a novel hyperspectral image classification method based on a segment forest (SF). First, the first principal component of the image is extracted by principal component analysis (PCA) for dimension reduction, and the segment forest is constructed from the reduced data to capture the image's non-local prior spatial information. Second, the initial classification results and class probability distribution are obtained with a support vector machine (SVM), which extracts the spectral information of the image. Finally, the segment forest constructed above is used to optimize the initial classification results and produce the final classification. Three publicly available domestic and foreign data sets were selected to verify segment forest classification. SF effectively improved the classification accuracy of SVM: overall accuracy was enhanced by 11.16% on Salinas, 15.89% on WHU-Hi-HongHu, and 19.56% on XiongAn. The method was then compared with six decision-level spatial–spectral classification methods: guided filtering (GF), Markov random field (MRF), random walk (RW), minimum spanning tree (MST), MST+, and segment tree (ST). The results show that segment forest-based hyperspectral image classification improves both accuracy and efficiency over the other algorithms, demonstrating the algorithm's effectiveness.
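The abstract outlines a three-stage pipeline (PCA first principal component, per-pixel SVM probabilities, segment-forest optimization). A minimal sketch of the first two stages is given below, assuming standard scikit-learn components; the segment-forest optimization itself is not described in the abstract, so it appears only as a placeholder, and all function names and parameters here are illustrative.

```python
# Sketch of the generic spatial-spectral pipeline described in the abstract:
# PCA first principal component + per-pixel SVM class probabilities.
# The segment-forest optimization step is NOT specified in the abstract and is
# represented here only by a placeholder (assumption for illustration).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def classify_hyperspectral(cube, train_pixels, train_labels):
    """cube: (H, W, B) hyperspectral image; train_pixels: (N, B); train_labels: (N,)."""
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b)

    # Stage 1: first principal component, used later to build the spatial structure.
    first_pc = PCA(n_components=1).fit_transform(pixels).reshape(h, w)

    # Stage 2: spectral classification with an SVM that outputs class probabilities.
    svm = SVC(probability=True).fit(train_pixels, train_labels)
    proba = svm.predict_proba(pixels).reshape(h, w, -1)      # (H, W, n_classes)
    initial_labels = proba.argmax(axis=-1)                   # indices into svm.classes_

    # Stage 3 (placeholder): the paper optimizes `proba` with a segment forest
    # built from `first_pc`; that data structure is not described in the abstract.
    final_labels = initial_labels
    return first_pc, proba, final_labels
```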


2021, Vol 12 (5)
Author(s): Josué Ttito, Renato Marroquín, Sergio Lifschitz, Lewis McGibbney, José Talavera

Key-value stores offer a straightforward yet powerful data model. Data is modeled as key-value pairs, where values can be arbitrary objects that are written and read using their associated keys. In addition to this simple interface, such data stores also provide read operations such as full and range scans. However, precisely because the interface is so simple, optimizing data accesses becomes challenging. This work aims to enable the shared execution of concurrent range and point queries on key-value stores, thereby reducing the overall data movement when executing a complete workload. To accomplish this, we analyze several candidate data structures and propose our variation of a segment tree, the Updatable Interval Tree. Our data structure helps us co-plan and co-execute multiple range queries together and reduces redundant work. This results in more efficient workload execution and higher overall throughput, as we show in our evaluation.
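The Updatable Interval Tree itself is not described in the abstract, but the underlying idea of sharing work among concurrent range queries can be illustrated with a simple assumed scheme: merge pending key ranges into disjoint intervals, scan each interval once, and route results back to every covering query. The `scan` callback and all names below are hypothetical, not the paper's API.

```python
from typing import Callable, Iterable, List, Tuple

Range = Tuple[int, int]  # half-open key range [lo, hi)

def shared_range_scan(queries: List[Range],
                      scan: Callable[[int, int], Iterable[Tuple[int, object]]]):
    """Execute many range queries with one pass per merged interval.

    `scan(lo, hi)` is assumed to stream (key, value) pairs from the store.
    This is an illustrative sharing scheme, not the Updatable Interval Tree.
    """
    # Merge overlapping query ranges so each key region is scanned only once.
    merged: List[List[int]] = []
    for lo, hi in sorted(queries):
        if merged and lo <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], hi)
        else:
            merged.append([lo, hi])

    results = [[] for _ in queries]
    for mlo, mhi in merged:
        for key, value in scan(mlo, mhi):
            # Route each scanned pair back to every query whose range covers it.
            for i, (qlo, qhi) in enumerate(queries):
                if qlo <= key < qhi:
                    results[i].append((key, value))
    return results
```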


2021
Author(s): Yang Zhao, Zhou Zhao, Zhu Zhang, Zhijie Lin

2021, Vol 13 (3), pp. 352
Author(s): Romain Neuville, Jordan Steven Bates, François Jonard

Monitoring the structure of forest stands is highly important for forest managers to maintain ecosystem services. For that purpose, Unmanned Aerial Vehicles (UAVs) open new prospects, especially in combination with Light Detection and Ranging (LiDAR) technology. Indeed, the shorter distance from the Earth's surface significantly increases the point density beneath the canopy, thus offering new possibilities for extracting the underlying semantics. For example, tree stems can now be captured with sufficient detail, which is a gateway to accurately locating trees and directly retrieving metrics such as the Diameter at Breast Height (DBH). Current practices usually require numerous site-specific parameters, which may preclude their use beyond their initial application context. To overcome this shortcoming, the Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN) machine learning clustering algorithm was further improved and implemented to segment tree stems. Afterwards, Principal Component Analysis (PCA) was applied to extract tree stem orientation for subsequent DBH estimation. This workflow was then validated using LiDAR point clouds collected in a temperate deciduous closed-canopy forest stand during the leaf-on and leaf-off seasons, along with multiple scanning angle ranges. The results show that the proposed methodology correctly detects up to 82% of tree stems (with a precision of 98%) during the leaf-off season with a Maximum Scanning Angle Range (MSAR) of 75 degrees, without requiring any site-specific parameters for the segmentation procedure. In the future, our method could minimize omission and commission errors when initially detecting trees and assist further retrieval of tree metrics. Finally, this research shows that, under the study conditions, the point density below approximately 1.3 meters above the ground remains low within closed-canopy forest stands even during the leaf-off season, which restricts accurate estimation of the DBH. As a result, autonomous UAVs that can fly both above and under the canopy provide a clear opportunity to achieve this purpose.
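The stem-segmentation workflow (HDBSCAN clustering, then PCA for stem orientation and a DBH estimate) could be sketched roughly as below. The sketch assumes the open-source `hdbscan` package and scikit-learn, clusters on horizontal coordinates, and uses a crude median-radius DBH estimate; the parameter values and estimator are illustrative assumptions, not the improved algorithm described in the paper.

```python
# Rough sketch of the stem-segmentation / DBH workflow outlined in the abstract.
import numpy as np
import hdbscan                        # pip install hdbscan
from sklearn.decomposition import PCA

def segment_stems(points, min_cluster_size=50):
    """points: (N, 3) array of ground-normalized LiDAR coordinates (x, y, z in meters)."""
    # Cluster candidate stem points by their horizontal position (illustrative choice).
    labels = hdbscan.HDBSCAN(min_cluster_size=min_cluster_size).fit_predict(points[:, :2])

    stems = []
    for label in set(labels) - {-1}:              # -1 marks HDBSCAN noise
        stem = points[labels == label]

        # Stem orientation: first principal axis of the 3-D point cluster.
        axis = PCA(n_components=3).fit(stem).components_[0]

        # Crude DBH estimate: horizontal spread of points in a slice around 1.3 m height.
        slab = stem[np.abs(stem[:, 2] - 1.3) < 0.1]
        dbh = None
        if len(slab) >= 10:
            center = slab[:, :2].mean(axis=0)
            dbh = 2.0 * np.median(np.linalg.norm(slab[:, :2] - center, axis=1))

        stems.append({"points": stem, "axis": axis, "dbh": dbh})
    return stems
```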


2020
Author(s): Josue Joel Ttito, Renato Marroquin, Sergio Lifschitz

Key-value stores offer a very simple yet powerful data model. Data is modeled as key-value pairs, where values can be arbitrary objects that are written and read using their associated keys. In addition to this simple interface, such data stores also provide read operations such as full and range scans. However, precisely because the interface is so simple, optimizing data accesses becomes challenging. This work aims to enable the shared execution of concurrent range and point queries on key-value stores, thereby reducing the overall data movement when executing a complete workload. To accomplish this, we analyze several candidate data structures and propose our variation of a segment tree, the Updatable Interval Tree. This data structure helps us co-plan and co-execute multiple range queries together, as we show in our evaluation.


2020, Vol 15 (1)
Author(s): John L. Spouge, Joseph M. Ziegelbauer, Mileidy Gonzalez

Abstract
Background: Data about herpesvirus microRNA motifs on human circular RNAs suggested the following statistical question. Consider independent random counts, not necessarily identically distributed. Conditioned on the sum, decide whether one of the counts is unusually large. Exact computation of the p-value leads to a specific algorithmic problem. Given $n$ elements $g_0, g_1, \ldots, g_{n-1}$ in a set $G$ with the closure and associative properties and a commutative product without inverses, compute the jackknife (leave-one-out) products $\bar{g}_j = g_0 g_1 \cdots g_{j-1} g_{j+1} \cdots g_{n-1}$ ($0 \le j < n$).
Results: This article gives a linear-time Jackknife Product algorithm. Its upward phase constructs a standard segment tree for computing segment products like $g_{[i,j)} = g_i g_{i+1} \cdots g_{j-1}$; its novel downward phase mirrors the upward phase while exploiting the symmetry of $g_j$ and its complement $\bar{g}_j$. The algorithm requires storage for $2n$ elements of $G$ and only about $3n$ products. In contrast, the standard segment tree algorithms require about $n$ products for construction and $\log_2 n$ products for calculating each $\bar{g}_j$, i.e., about $n \log_2 n$ products in total; and a naïve quadratic algorithm using $n-2$ element-by-element products to compute each $\bar{g}_j$ requires $n(n-2)$ products.
Conclusions: In the herpesvirus application, the Jackknife Product algorithm required 15 min; standard segment tree algorithms would have taken an estimated 3 h; and the quadratic algorithm, an estimated 1 month. The Jackknife Product algorithm has many possible uses in bioinformatics and statistics.
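The upward/downward scheme described above maps naturally onto a short recursive routine. The sketch below is an illustrative reconstruction under the abstract's assumptions (an associative, commutative operation `op` with an identity element), not the authors' implementation; it uses about $n$ products on the way up and about $2n$ on the way down, consistent with the quoted $3n$ total.

```python
def jackknife_products(g, op, identity):
    """Leave-one-out products: out[j] = g[0]*...*g[j-1]*g[j+1]*...*g[n-1].

    Assumes `op` is associative and commutative; no inverses are needed.
    """
    n = len(g)
    if n == 1:
        return [identity]

    # Upward phase: build a segment tree of products over segments [lo, hi).
    def build(lo, hi):
        if hi - lo == 1:
            return (lo, hi, g[lo], None, None)
        mid = (lo + hi) // 2
        left, right = build(lo, mid), build(mid, hi)
        return (lo, hi, op(left[2], right[2]), left, right)

    root = build(0, n)
    out = [None] * n

    # Downward phase: complement(child) = op(complement(parent), sibling product).
    def down(node, comp):
        lo, hi, _val, left, right = node
        if left is None:      # leaf: comp is the product of everything except g[lo]
            out[lo] = comp
            return
        down(left, op(comp, right[2]))
        down(right, op(comp, left[2]))

    down(root, identity)
    return out

# Example with ordinary integer multiplication:
# jackknife_products([2, 3, 5, 7], lambda a, b: a * b, 1) == [105, 70, 42, 30]
```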


2020
Author(s): John Spouge, Joseph M. Ziegelbauer, Mileidy Gonzalez

Abstract
Background: Data about herpesvirus microRNA motifs on human circular RNAs suggested the following statistical question. Consider independent random counts, not necessarily identically distributed. Conditioned on the sum, decide whether one of the counts is unusually large. Exact computation of the p-value leads to a specific algorithmic problem. Given $n$ elements $g_0, g_1, \ldots, g_{n-1}$ in a set $G$ with the closure and associative properties and a commutative product without inverses, compute the jackknife (leave-one-out) products $\bar{g}_j = g_0 g_1 \cdots g_{j-1} g_{j+1} \cdots g_{n-1}$ ($0 \le j < n$).
Results: This article gives a linear-time Jackknife Product algorithm. Its upward phase constructs a standard segment tree for computing segment products like $g_{[i,j)} = g_i g_{i+1} \cdots g_{j-1}$; its novel downward phase mirrors the upward phase while exploiting the symmetry of $g_j$ and its complement $\bar{g}_j$. The algorithm requires storage for $2n$ elements of $G$ and only about $3n$ products. In contrast, the standard segment tree algorithms require about $n$ products for construction and $\log_2 n$ products for calculating each $\bar{g}_j$, i.e., about $n \log_2 n$ products in total; and a naïve quadratic algorithm using $n-2$ element-by-element products to compute each $\bar{g}_j$ requires $n(n-2)$ products.
Conclusions: In the herpesvirus application, the Jackknife Product algorithm required 15 minutes; standard segment tree algorithms would have taken an estimated 3 hours; and the quadratic algorithm, an estimated 1 month. The Jackknife Product algorithm has many possible uses in bioinformatics and statistics.


2020
Author(s): John Spouge, Joseph M. Ziegelbauer, Mileidy Gonzalez

Abstract
Background: Data about herpesvirus microRNA motifs on human circular RNAs suggested the following statistical question. Consider independent random counts, not necessarily identically distributed. Conditioned on the sum, decide whether one of the counts is unusually large. Exact computation of the p-value leads to a specific algorithmic problem. Given $n$ elements $g_0, g_1, \ldots, g_{n-1}$ in a set $G$ with the closure and associative properties and a commutative product without inverses, compute the jackknife (leave-one-out) products $\bar{g}_j = g_0 g_1 \cdots g_{j-1} g_{j+1} \cdots g_{n-1}$ ($0 \le j < n$).
Results: This article gives a linear-time Jackknife Product algorithm. Its upward phase constructs a standard segment tree for computing segment products like $g_{[i,j)} = g_i g_{i+1} \cdots g_{j-1}$; its novel downward phase mirrors the upward phase while exploiting the symmetry of $g_j$ and its complement $\bar{g}_j$. The algorithm requires storage for $2n$ elements of $G$ and only about $3n$ products. In contrast, the standard segment tree algorithms require about $n$ products for construction and $\log_2 n$ products for calculating each $\bar{g}_j$, i.e., about $n \log_2 n$ products in total; and a naïve quadratic algorithm using $n-2$ element-by-element products to compute each $\bar{g}_j$ requires $n(n-2)$ products.
Conclusions: In the herpesvirus application, the Jackknife Product algorithm required 15 minutes; standard segment tree algorithms would have taken an estimated 3 hours; and the quadratic algorithm, an estimated 1 month. The Jackknife Product algorithm has many possible uses in bioinformatics and statistics.


2020
Author(s): John Spouge, Joseph M. Ziegelbauer, Mileidy Gonzalez

Abstract
Background: Data about herpesvirus microRNA motifs on human circular RNAs suggested the following statistical question. Consider independent random counts, not necessarily identically distributed. Conditioned on the sum, decide whether one of the counts is unusually large. Exact computation of the p-value leads to a specific algorithmic problem. Given $n$ elements $g_0, g_1, \ldots, g_{n-1}$ in a set $G$ with the closure and associative properties and a commutative product without inverses, compute the jackknife (leave-one-out) products $\bar{g}_j = g_0 g_1 \cdots g_{j-1} g_{j+1} \cdots g_{n-1}$ ($0 \le j < n$).
Results: This article gives a linear-time Jackknife Product algorithm. Its upward phase constructs a standard segment tree for computing segment products like $g_{[i,j)} = g_i g_{i+1} \cdots g_{j-1}$; its novel downward phase mirrors the upward phase while exploiting the symmetry of $g_j$ and its complement $\bar{g}_j$. The algorithm requires storage for $2n$ elements of $G$ and only about $3n$ products. In contrast, the standard segment tree algorithms require about $n$ products for construction and $\log_2 n$ products for calculating each $\bar{g}_j$, i.e., about $n \log_2 n$ products in total; and a naïve quadratic algorithm using $n-2$ element-by-element products to compute each $\bar{g}_j$ requires $n(n-2)$ products.
Conclusions: In the herpesvirus application, the Jackknife Product algorithm required 15 minutes; standard segment tree algorithms would have taken an estimated 3 hours; and the quadratic algorithm, an estimated 1 month. The Jackknife Product algorithm has many possible uses in bioinformatics and statistics.


Author(s): Yanyan Xu, Xiangyang Xu, Rui Yu

A disparity optimization algorithm based on an improved guided filter is proposed to smooth the disparity image. A well-known problem in local stereo matching is low matching accuracy and a staircase effect in regions with weak texture and in sloped regions. Our disparity optimization method addresses this problem and achieves a smooth disparity map. First, the initial disparity image is obtained by a local stereo matching algorithm using a segment tree. Then, the guided filter is improved by using gradient-domain information. Lastly, the improved guided filter is adopted as the disparity optimization method to smooth the disparity image. Experiments conducted on the Middlebury data sets demonstrate that the proposed algorithm improves the smoothness of the disparity map in sloped regions and obtains a dense disparity of higher precision.
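For context, the classic guided filter (He et al.) that the paper builds on can be applied to an initial disparity map roughly as follows. This is the standard textbook formulation, not the gradient-domain improvement the abstract describes, and the function and parameter names are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Classic guided filter: smooths `src` while following the edges of `guide`.

    guide, src: 2-D float arrays of the same shape (e.g. grayscale left image
    and initial disparity map). radius and eps are illustrative defaults.
    """
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x, size)      # box filter over the local window

    mean_i, mean_p = mean(guide), mean(src)
    var_i = mean(guide * guide) - mean_i * mean_i
    cov_ip = mean(guide * src) - mean_i * mean_p

    a = cov_ip / (var_i + eps)                    # local linear model q = a * I + b
    b = mean_p - a * mean_i
    return mean(a) * guide + mean(b)

# Usage sketch: smooth the initial disparity with the left image as guide.
# smoothed = guided_filter(left_gray.astype(float), initial_disparity.astype(float))
```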

