centroid point
Recently Published Documents

TOTAL DOCUMENTS: 33 (FIVE YEARS: 12)
H-INDEX: 6 (FIVE YEARS: 0)

2021 ◽  
Author(s):  
Maryam Farzam ◽  
Mozhdeh Afshar kermani ◽  
Tofigh Allahviranloo

Abstract Since real-world data are often imprecise, working with fuzzy data and Z-numbers is important, and in practice we need to rank and compare such data. In this paper, we introduce a new method for ranking Z-numbers. The ranking algorithm is based on the centroid point: we evaluate the distance between centroid points and rank the Z-numbers according to this distance. We apply this method to two practical examples: first, ranking the return on assets of the Tehran Stock Exchange, and second, ranking the factors affecting the productivity of tourism security. The advantage of this method over conventional fuzzy methods is that it accounts for uncertainty and assigns credibility to expert opinion when estimating fuzzy parameters.
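The abstract does not spell out the centroid formulation, so the following is a minimal sketch of centroid-based ranking, assuming trapezoidal membership functions for the fuzzy parts of the Z-numbers and ranking by the Euclidean distance of each centroid from a reference point; the function names and candidate values are illustrative only.

```python
import numpy as np

def trapezoid_centroid(a, b, c, d, n=2001):
    """Numerically approximate the centroid (x, y) of the region under a
    trapezoidal membership function with support [a, d] and core [b, c]."""
    x = np.linspace(a, d, n)                       # uniform grid over the support
    mu = np.interp(x, [a, b, c, d], [0.0, 1.0, 1.0, 0.0])
    cx = np.sum(x * mu) / np.sum(mu)               # centroid abscissa
    cy = np.sum(0.5 * mu ** 2) / np.sum(mu)        # centroid ordinate
    return cx, cy

def rank_by_centroid_distance(fuzzy_numbers, reference=(0.0, 0.0)):
    """Rank trapezoidal fuzzy numbers by the Euclidean distance of their
    centroid from a reference point (larger distance ranks first here)."""
    def dist(fn):
        cx, cy = trapezoid_centroid(*fn)
        return np.hypot(cx - reference[0], cy - reference[1])
    return sorted(fuzzy_numbers, key=dist, reverse=True)

# Example: three hypothetical trapezoidal fuzzy returns on assets.
candidates = [(0.1, 0.2, 0.3, 0.4), (0.2, 0.3, 0.4, 0.5), (0.0, 0.1, 0.2, 0.3)]
print(rank_by_centroid_distance(candidates))
```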


2021 ◽  
Vol 23 (Supplement_6) ◽  
pp. vi129-vi129
Author(s):  
Lubna Hammoudeh ◽  
Ho Young Lee ◽  
Evangelia Kaza ◽  
Jeffrey Guenette ◽  
Tracy Balboni

Abstract BACKGROUND Currently, the standard MRI sequence for spinal cord (SC) imaging in SBRT is axial 2D T2-weighted Turbo Spin Echo (TSE). Although 3D T2-weighted sequences such as SPACE (Sampling Perfection with Application-optimized Contrasts using different flip angle Evolution) image a whole volume simultaneously and thus offer better reconstruction, they have not been clinically implemented because of their long acquisition times. However, applying Compressed Sensing (CS) to SPACE sequences achieves clinically acceptable acquisition times. METHODS A 3D T2 CS SPACE sequence was obtained and evaluated against the standard 2D TSE for spine SBRT using a MagPhan RT quality assurance phantom and patient data. Analysis was performed with the phantom manufacturer's software, ImageOwl, which quantifies image distortions by comparing the known positions of phantom features to their detected positions in the image. RESULTS The phantom comparison between 3D T2 and 2D T2 indicates that, although the 3D sequence had a lower signal-to-noise ratio (SNR) than the 2D sequence, it exhibited fewer geometric distortions caused by gradient non-linearities, particularly in the anterior-posterior (A/P) and head-feet (H/F) directions. Distortions caused by chemical shift are theoretically smaller for 3D T2 CS SPACE, amounting to 0.85 mm compared with 1.62 mm for 2D T2. Between the 2D and 3D MRI-defined SC data of 4 patients, the average deviation of the centroid point of the cord contours was 0.08 cm, and the cord volumes were about 1 cc larger on 3D than on 2D T2. Finally, the mean voxel-count overlap coefficient and DICE coefficient were 0.92 and 0.87, respectively. CONCLUSIONS Since 3D MRI is under consideration to replace 2D MRI, it is important to compare SC contours from 3D with those from 2D MRI and to assess their impact on treatment plans. Positive results would pave the path toward evaluation in a larger subject cohort.
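For readers unfamiliar with the agreement metrics quoted above, a minimal sketch of the voxel-wise overlap and DICE coefficients between two binary spinal cord masks is shown below; the arrays are placeholder data, not the authors' contours.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice = 2|A∩B| / (|A| + |B|) for binary voxel masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def overlap_coefficient(mask_a, mask_b):
    """Overlap = |A∩B| / min(|A|, |B|) for binary voxel masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return inter / min(a.sum(), b.sum())

# Toy example: two overlapping 3D masks standing in for 2D- and 3D-defined cords.
cord_2d = np.zeros((10, 10, 10), dtype=bool); cord_2d[2:8, 2:8, 2:8] = True
cord_3d = np.zeros((10, 10, 10), dtype=bool); cord_3d[3:9, 2:8, 2:8] = True
print(dice_coefficient(cord_2d, cord_3d), overlap_coefficient(cord_2d, cord_3d))
```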


2021 ◽  
Vol 12 (2) ◽  
pp. 138
Author(s):  
Hashfi Fadhillah ◽  
Suryo Adhi Wibowo ◽  
Rita Purnamasari

Abstract Combining the real world with the virtual world and then modeling it in 3D is the aim of Augmented Reality (AR) technology. Using fingers for computer operations across multiple devices makes the system more interactive. Marker-based AR is one type of AR that uses markers for detection. This study designed an AR system that detects fingertips as markers. The system is built with the Region-based Fully Convolutional Network (R-FCN) deep learning method, which extends the detection results obtained from a Fully Convolutional Network (FCN). The detection results are integrated with a computer pointer for basic operations. This study uses a predetermined training-step scheme (25K, 50K, and 75K steps) to obtain the best IoU, precision, and accuracy. High precision keeps changes in the centroid point small, and high accuracy improves AR performance under rapid movement and imperfect finger conditions. The system is trained on a dataset of index-finger images with 10,800 training images and 3,600 test images. The model is tested for each scheme using videos recorded at different distances, locations, and times. The best results were obtained with the 25K-step scheme: IoU of 69%, precision of 5.56, and accuracy of 96%.
Keywords: Augmented Reality, Region-based Fully Convolutional Network, Fully Convolutional Network, Pointer, Step training
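The evaluation hinges on the IoU between detected and annotated fingertip boxes and on the centroid of the detection box used to drive the pointer. A minimal sketch of both computations (not the paper's R-FCN code; the box coordinates are made up) is:

```python
def bbox_iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def bbox_centroid(box):
    """Centroid of a detection box, usable as a pointer coordinate."""
    return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)

# Example: hypothetical ground-truth vs. detected fingertip boxes.
gt, pred = (100, 120, 140, 180), (105, 125, 150, 185)
print(bbox_iou(gt, pred), bbox_centroid(pred))
```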


2021 ◽  
Vol 13 (3) ◽  
pp. 472
Author(s):  
Yang Chen ◽  
Guanlan Liu ◽  
Yaming Xu ◽  
Pai Pan ◽  
Yin Xing

Airborne laser scanning (ALS) point clouds have been widely used in ground powerline surveying, forest monitoring, urban modeling, and other fields because of the great convenience they bring to daily life. However, the sparsity and uneven distribution of point clouds increase the difficulty of setting uniform parameters for semantic classification. The PointNet++ network is an end-to-end learning network for irregular point data that is highly robust to small perturbations and corruption of the input points. It eliminates the need to compute costly handcrafted features and provides a new paradigm for 3D understanding. However, each local region in its output is abstracted by its centroid and a local feature that encodes the centroid's neighborhood. Because of random sampling, the feature learned for the centroid point may not contain relevant information about the point itself, especially in large-scale neighborhood balls. Moreover, the centroid point's global-level information in each sampling layer is also not marked. Therefore, this study proposes a modified PointNet++ architecture that incorporates the point-level and global features of the centroid point into the local features to facilitate classification. The proposed approach also utilizes a modified Focal Loss function to address the extremely uneven category distribution of ALS point clouds. An elevation- and distance-based interpolation method is also proposed for objects in ALS point clouds that exhibit discrepancies in elevation distribution. Experiments on the Vaihingen dataset of the International Society for Photogrammetry and Remote Sensing and on the GML(B) 3D dataset demonstrate that the proposed method, which provides additional contextual information to support classification, achieves high accuracy with simple discriminative models and new state-of-the-art performance in power line categories.
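The abstract mentions a modified Focal Loss for the heavily imbalanced class distribution; a minimal sketch of the standard multi-class focal loss in PyTorch (not the authors' exact modification; the class weights are illustrative) is:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=None):
    """Multi-class focal loss: FL = -alpha_c * (1 - p_c)^gamma * log(p_c).
    logits: (N, C) raw scores, targets: (N,) class indices."""
    log_p = F.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)   # log prob of true class
    pt = log_pt.exp()
    loss = -((1.0 - pt) ** gamma) * log_pt                       # down-weight easy examples
    if alpha is not None:                                        # optional per-class weights
        loss = alpha[targets] * loss
    return loss.mean()

# Example: 5 points, 4 classes, stronger weight on a rare "power line" class.
logits = torch.randn(5, 4)
targets = torch.tensor([0, 2, 1, 3, 2])
weights = torch.tensor([1.0, 1.0, 1.0, 4.0])
print(focal_loss(logits, targets, gamma=2.0, alpha=weights))
```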


2021 ◽  
Vol 13 (3) ◽  
pp. 432
Author(s):  
Shiyu Yan ◽  
Guohui Yang ◽  
Qingyan Li ◽  
Bin Zhang ◽  
Yu Wang ◽  
...  

We report a self-adaptive waveform centroid algorithm that combines double-scale data selection with intensity weighting (DSIW) for accurate LiDAR distance–intensity imaging. A time window is set to adaptively select the effective data, while the intensity-weighted method reduces the influence of sharp noise on the calculation. The horizontal and vertical coordinates of the centroid point obtained by the proposed algorithm record the distance and echo intensity information, respectively. The proposed algorithm was experimentally tested and achieved an average ranging error of less than 0.3 ns under the various noise conditions in the listed tests, thus providing better precision than the digital constant fraction discriminator (DCFD), peak (PK), Gaussian fitting (GF), and traditional waveform centroid (TC) algorithms. The proposed algorithm is also fairly robust, with remarkably successful ranging rates above 97% in all tests in this paper. In addition, the laser echo intensity measured by the proposed algorithm proved robust to noise and consistent with the transmission characteristics of LiDAR. Finally, we provide a distance–intensity point cloud image calibrated by our algorithm. These findings provide a new understanding of using LiDAR to draw multi-dimensional point cloud images.
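The published DSIW algorithm is not reproduced here, but a rough sketch of the general idea, assuming a single echo, a peak-relative selection window, and simple intensity weighting, might look like this (all parameter values are illustrative):

```python
import numpy as np

def windowed_intensity_weighted_centroid(t, w, window_frac=0.5):
    """Rough sketch: keep samples around the waveform peak whose amplitude
    exceeds a fraction of the peak (the 'time window'), then return the
    intensity-weighted centroid time and the weighted mean amplitude."""
    t = np.asarray(t, dtype=float)
    w = np.asarray(w, dtype=float)
    mask = w >= window_frac * w.max()          # adaptive selection window
    ts, ws = t[mask], w[mask]
    t_c = np.sum(ts * ws) / np.sum(ws)         # horizontal coordinate -> distance (time)
    i_c = np.sum(ws * ws) / np.sum(ws)         # vertical coordinate -> echo intensity
    return t_c, i_c

# Example: noisy Gaussian echo centered at 50 ns.
t = np.linspace(0, 100, 1000)
echo = np.exp(-0.5 * ((t - 50) / 3.0) ** 2) + 0.02 * np.random.randn(t.size)
print(windowed_intensity_weighted_centroid(t, echo))
```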


Author(s):  
S. Vasuhi ◽  
A. Samydurai ◽  
Vijayakumar M.

In this paper, a novel approach is proposed to track humans for video surveillance using multiple cameras and video stitching techniques. SIFT key points are extracted from all camera inputs. Using a k-d tree algorithm, the key points are matched, and random sample consensus (RANSAC) is used to identify the correct correspondences among the matched points. A homography matrix is calculated from four robust feature correspondences, the images are warped with respect to one another, and human tracking is performed on the stitched image. To identify humans in the stitched video, background modeling is performed using a fuzzy inference system, followed by foreground extraction. After foreground extraction, a blob is constructed around each detected human and the centroid point of each blob is calculated. Finally, multiple humans are tracked with a Kalman filter (KF) combined with the Hungarian algorithm.
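As a hedged illustration of the blob and centroid step only (not the full stitching and tracking pipeline), OpenCV's connected-components analysis can produce per-blob centroids from a binary foreground mask:

```python
import cv2
import numpy as np

def blob_centroids(foreground_mask, min_area=200):
    """Return the centroid (cx, cy) of each sufficiently large foreground blob."""
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(
        foreground_mask.astype(np.uint8), connectivity=8)
    out = []
    for i in range(1, n):                              # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            out.append(tuple(centroids[i]))            # (cx, cy) for this blob
    return out

# Example: synthetic mask with one rectangular "person" blob.
mask = np.zeros((480, 640), dtype=np.uint8)
mask[100:300, 200:280] = 1
print(blob_centroids(mask))
```

These centroids would then serve as the measurements fed to the Kalman filter, with the Hungarian algorithm assigning detections to existing tracks.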


Author(s):  
Trenouth MJ
Keyword(s):  

Objective: The variance of cephalometric points depends not only on the variability of the points themselves but also on the magnitude of the distances between them. This study was undertaken to test the effect of standardizing the variance by dividing it by the mean distance to give a standardized variance.
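A minimal worked example of the standardization described, using hypothetical repeated measurements of a single inter-landmark distance:

```python
import numpy as np

# Hypothetical repeated measurements (mm) of one inter-landmark distance.
distances = np.array([52.1, 51.8, 52.4, 52.0, 51.9])

variance = distances.var(ddof=1)            # sample variance of the distance
standardized = variance / distances.mean()  # variance divided by the mean distance
print(variance, standardized)
```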


2020 ◽  
Author(s):  
Jiajia Liu ◽  
Zuhao Zhou ◽  
Ziqi Yan ◽  
Yangwen Jia ◽  
Hao Wang

Precipitation and other meteorological variables are essential input data for distributed hydrological models and determine the simulation accuracy of these models. It is common to subdivide a large watershed into numerous subbasins to reflect spatial variation, with a single value assigned within each subbasin. In most model applications, the values of the meteorological variables are interpolated from meteorological station observations to the centroid point of each subbasin (called one-cell interpolation). Because the centroid point cannot represent the whole subbasin, one-cell interpolation introduces input-data uncertainty into the model. In this study, a new method is introduced to analyze this uncertainty: the values are first interpolated onto numerous cells smaller than the subbasin and then aggregated to the subbasin (called multi-cell interpolation). The results show that one-cell interpolation is not always consistent with multi-cell interpolation, and the difference is greater in summer than in winter. The consistency grows as the number of cells increases, indicating that a few dozen cells are enough to reach a stable state. The difference is also influenced by the density of meteorological stations, but the minimal cell number remains almost the same. Thus, when interpolating meteorological variables in a distributed hydrological model, we recommend interpolating the values onto numerous smaller cells and then aggregating to the subbasins, rather than interpolating only to the centroid point.
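A minimal sketch contrasting the two strategies, assuming inverse-distance weighting as the interpolation method and made-up station locations and precipitation values:

```python
import numpy as np

def idw(points, values, targets, power=2.0):
    """Inverse-distance-weighted interpolation from station points to target points."""
    d = np.linalg.norm(targets[:, None, :] - points[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Hypothetical station coordinates (km) and precipitation values (mm).
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
precip = np.array([5.0, 12.0, 8.0, 20.0])

# One-cell: interpolate only to the subbasin centroid.
centroid = np.array([[6.0, 4.0]])
one_cell = idw(stations, precip, centroid)[0]

# Multi-cell: interpolate to a grid of cells covering the subbasin, then average.
xs, ys = np.meshgrid(np.linspace(3, 9, 20), np.linspace(1, 7, 20))
cells = np.column_stack([xs.ravel(), ys.ravel()])
multi_cell = idw(stations, precip, cells).mean()
print(one_cell, multi_cell)
```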


2020 ◽  
Vol 132 (2) ◽  
pp. 624-630
Author(s):  
Matthew J. Zdilla ◽  
Brianna K. Ritz ◽  
Nicholas S. Nestor

OBJECTIVE The first attempt to cannulate the foramen ovale is oftentimes unsuccessful and requires subsequent reattempts, thereby increasing the risk of an adverse event and radiation exposure to the patient and surgeon. Failure in cannulation may be attributable to variation in the soft-tissue-based landmarks used for needle guidance. Also, the incongruity between guiding marks on the face and bony landmarks visible on fluoroscopic images may complicate cannulation. Therefore, the object of this study was to assess the location of the foramen ovale by way of bony landmarks, exclusive of soft-tissue guidance. METHODS A total of 817 foramina ovalia (411 left-sided, 406 right-sided) from cranial base images of 424 dry crania were included in the study. The centroid point of each foramen ovale was identified. A sagittal plane through the posterior-most molar (molar plane) and a coronal plane passing through the articular eminences of the temporal bones (inter-eminence plane) were superimposed on the images. The distances of the planes from the centroids of the foramina were measured. Also, counts were taken to assess how often the planes and their intersections crossed the boundary of the foramen ovale. RESULTS The average distance between the molar plane and the centroid of the foramen was 1.53 ± 1.24 mm (mean ± SD). The average distance between the inter-eminence plane and the centroid was 1.69 ± 1.49 mm. The molar and inter-eminence planes crossed through the foramen ovale boundary 83.7% (684/817) and 81.6% (667/817) of the time, respectively. The molar and inter-eminence planes passed through the boundary of the foramen together 73.5% (302/411) of the time. The molar and inter-eminence planes intersected within the boundary of the foramen half of the time (49.4%; 404/817). CONCLUSIONS The results of this study provide a novel means of identifying the location of the foramen ovale. Unlike the soft-tissue landmarks used in the many variations of the route of Härtel, the bony landmarks identified in this study can be palpated, marked on the face, appreciated fluoroscopically, and do not require any measurement from soft-tissue structures. Utilizing the molar and inter-eminence planes as cannulation guides will improve the approach to the foramen ovale and decrease the amount of radiation exposure to both the patient and surgeon.
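As a geometric aside (not the authors' measurement software), the reported plane-to-centroid distances amount to point-to-plane distances; a minimal sketch with hypothetical coordinates:

```python
import numpy as np

def point_to_plane_distance(point, plane_point, plane_normal):
    """Perpendicular distance from a point (e.g. a foramen centroid) to a plane
    defined by a point on the plane and its normal vector."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return abs(np.dot(point - plane_point, n))

# Hypothetical coordinates (mm): foramen centroid vs. a coronal plane through
# the articular eminences (stand-in for the inter-eminence plane).
centroid = np.array([12.4, -3.1, 0.0])
eminence_point = np.array([10.0, 0.0, 0.0])   # a point on the plane
coronal_normal = np.array([0.0, 1.0, 0.0])    # plane normal along the A/P direction
print(point_to_plane_distance(centroid, eminence_point, coronal_normal))
```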


Author(s):  
Irfan Deli ◽  
Emel Kırmızı Öztürk

In this chapter, some basic definitions and operations concerning fuzzy sets, fuzzy numbers, intuitionistic fuzzy sets, single-valued neutrosophic sets, and single-valued neutrosophic numbers (SVN-numbers) are presented. Secondly, two centroid points, called the first and second centroid points, are presented for single-valued trapezoidal neutrosophic numbers (SVTN-numbers) and single-valued triangular neutrosophic numbers (SVTrN-numbers). Then, some desired properties of the first and second centroid points of SVTN-numbers and SVTrN-numbers are studied. Also, based on the concept of the first and second centroid points of SVTrN-numbers, a new single-valued neutrosophic multiple-attribute decision-making method is proposed. Moreover, a numerical example is introduced to illustrate the availability and practicability of the proposed method. Finally, since the centroid points of normalized SVTN-numbers or SVTrN-numbers are fuzzy values, all definitions and properties of fuzzy graph theory can be applied to them; for example, a definition from fuzzy graph theory based on the centroid points of normalized SVTN-numbers and SVTrN-numbers is given.

