Automated Extraction of Anthropometric Data from 3D Images

Author(s):  
Steven Paquette ◽  
J. David Brantley ◽  
Brian D. Corner ◽  
Peng Li ◽  
Thomas Oliver

The use of 3D scanning systems for the capture and measurement of human body dimensions is becoming commonplace. While the ability of available scanning systems to record the surface anatomy of the human body is generally regarded as acceptable for most applications, effective use of the images to obtain anthropometric data requires specially developed data extraction software. However, for large data sets, extraction of useful information can be quite time consuming. An automated software program that quickly extracts reliable anthropometric data from 3D scanned images is therefore a major benefit. In this paper the accuracy and variability of two fully automated data extraction systems (the Cyberware WB-4 scanner with Natick-Scan software and the Hamamatsu BL Scanner with its accompanying software) are examined and compared with measurements obtained from traditional anthropometry. In order to remove many of the confounding variables that living humans introduce during the scanning process, a set of clothing dressforms was chosen as the focus of study. An analysis of the measurement data generally indicates that automated data extraction compares favorably with standard anthropometry for some measurements but requires additional refinement for others.
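To illustrate how such an accuracy and variability comparison is typically summarized, the short sketch below computes the mean difference (bias) and the spread of differences between automated and manually obtained values; the measurement names and numbers are hypothetical placeholders, not the dressform data from the study.

```python
import statistics

# Hypothetical paired measurements (cm) for one dressform:
# traditional anthropometry vs. automated extraction from a 3D scan.
manual    = {"chest_circumference": 96.2, "waist_circumference": 78.5, "hip_circumference": 101.3}
automated = {"chest_circumference": 96.8, "waist_circumference": 77.9, "hip_circumference": 101.1}

# Per-measurement differences (automated minus manual).
diffs = [automated[name] - manual[name] for name in manual]

bias = statistics.mean(diffs)     # systematic offset of the automated system
spread = statistics.stdev(diffs)  # variability of the differences

for name in manual:
    print(f"{name}: difference = {automated[name] - manual[name]:+.1f} cm")
print(f"mean difference (bias): {bias:+.2f} cm, std. dev. of differences: {spread:.2f} cm")
```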

2017 ◽  
Vol 1 (2) ◽  
pp. 47
Author(s):  
Andi Asrul Sani

Attention to the dimensions of the human body has existed for centuries. Philosophers, artists, and architects have admired the measures of the human body. Ancient places of worship such as the Greek temple were designed from proportional measures gathered from various parts of the human body. This study aims to explore the golden section proportions found in the human body, for both men and women. Data were collected from anthropometric data of Indonesian people, and the proportions were then analyzed against the golden section. The results indicate that the Indonesian human body contains golden section proportions in the ratio between standing elbow height and stature; hip height and standing eye height; sitting eye height and sitting vertical reach; sitting vertical reach and standing vertical reach; and knee height and sitting height. These golden section proportions in the human body hold for both men and women. The results can serve as an initial reference for the finding that the human body contains golden section proportions. The finding still needs to be explored in further research, given that the golden section proportion is a matter of numbers and therefore requires accurate and precise measurement data. Keywords: Anthropometry, Proportion, Golden Section
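For concreteness, a minimal sketch of the ratio check against the golden section is given below, assuming hypothetical body dimensions rather than the Indonesian anthropometric data used in the study.

```python
import math

GOLDEN_RATIO = (1 + math.sqrt(5)) / 2   # phi, approximately 1.618

def golden_section_error(larger: float, smaller: float) -> float:
    """Relative deviation of larger/smaller from the golden ratio."""
    ratio = larger / smaller
    return abs(ratio - GOLDEN_RATIO) / GOLDEN_RATIO

# Hypothetical example: stature vs. standing elbow height (cm).
stature = 165.0
standing_elbow_height = 102.0

error = golden_section_error(stature, standing_elbow_height)
print(f"ratio = {stature / standing_elbow_height:.3f}, "
      f"deviation from phi = {error:.1%}")
```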


2020 ◽  
Vol 13 (4) ◽  
pp. 588-594
Author(s):  
Saravana Kumar Coimbatore Shanmugam ◽  
Santhosh Rajendran ◽  
Amudhavalli Padmanabhan ◽  
Kalaiarasan Chellan

Background: The growth of data on the internet has raised the priority of accuracy in data extraction. Accuracy here lies in how well the retrieved data matches what the user requested. The large data sets that need to be analyzed make retrieving the required information a challenging task. Objective: To propose a new algorithm that improves on traditional methods for classifying the category or group to which each training sentence belongs. Method: The category to which an input sentence belongs is identified by analyzing the noun and verb of each training sentence. NLP is applied to each training sentence, and the group or category classification is performed with the proposed GENI algorithm, so that the classifier is trained efficiently to extract the user-requested information. Results: The input sentences are transformed into a data table by applying the GENI algorithm for group categorization. Graphs plotted in the R tool show that the accuracy of the groups extracted by the classifier using the GENI approach is higher than that of Naive Bayes and Decision Trees. Conclusion: Extracting the user-requested data remains a challenging task when the user query is complex. Existing techniques are based largely on fixed attributes, and moving beyond those fixed attributes makes it too complex, or even impossible, to determine the common group from the base sentence. Existing techniques are also better suited to smaller datasets, whereas the proposed GENI algorithm places no such restriction on group categorization of larger data sets.
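The GENI algorithm itself is not spelled out in the abstract, so the sketch below only illustrates the noun/verb extraction step on which the categorization is said to rest, using NLTK part-of-speech tagging; the example sentence and tag handling are assumptions.

```python
import nltk

# Tokenizer and POS-tagger models (resource names vary slightly across NLTK versions).
for resource in ("punkt", "punkt_tab",
                 "averaged_perceptron_tagger", "averaged_perceptron_tagger_eng"):
    nltk.download(resource, quiet=True)

def nouns_and_verbs(sentence: str):
    """Return the nouns and verbs of a sentence, the cues a category classifier is trained on."""
    tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
    nouns = [word for word, tag in tagged if tag.startswith("NN")]
    verbs = [word for word, tag in tagged if tag.startswith("VB")]
    return nouns, verbs

nouns, verbs = nouns_and_verbs("The customer requested a refund for the damaged laptop.")
print("nouns:", nouns)   # candidate group/category cues
print("verbs:", verbs)
```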


2015 ◽  
Vol 48 (6) ◽  
pp. 2019-2025 ◽  
Author(s):  
Simon Frølich ◽  
Henrik Birkedal

Modern advanced diffraction experiments such as in situ diffraction, position-resolved diffraction or diffraction tomography generate extremely large data sets with hundreds to many thousands of diffractograms. Analyzing such data sets by Rietveld refinement is hampered by the logistics of running the Rietveld refinement program, extracting and analyzing the results, and possibly re-refining the data set based on an analysis of the preceding cycle of refinements. The complexity of the analysis may prevent some researchers either from performing the experiments or from conducting an exhaustive analysis of collected data. To this end, a MATLAB framework, MultiRef, which facilitates automated refinements, data extraction and intelligent choice of refinement model based on user choices, has been developed. The use of MultiRef is illustrated on data sets from diffraction tomography, position-resolved diffraction and in situ powder diffraction investigations of crystallization.
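MultiRef is a MATLAB framework and its interface is not described here, so the Python sketch below only illustrates the general batch-refinement pattern being automated: looping a refinement routine (a hypothetical refine() placeholder) over many diffractograms and tabulating the fit results for later inspection or re-refinement.

```python
from pathlib import Path
import csv

def refine(diffractogram_path: Path, model: dict) -> dict:
    """Hypothetical stand-in for one Rietveld refinement run.
    In practice this would drive an external refinement engine and parse its output."""
    # Dummy result so the sketch runs end to end.
    return {"file": diffractogram_path.name, "Rwp": 5.0, "scale": 1.0}

def refine_batch(data_dir: str, model: dict, out_csv: str) -> list[dict]:
    """Refine every diffractogram in a directory and tabulate the results."""
    results = [refine(p, model) for p in sorted(Path(data_dir).glob("*.xy"))]
    if results:
        with open(out_csv, "w", newline="") as fh:
            writer = csv.DictWriter(fh, fieldnames=results[0].keys())
            writer.writeheader()
            writer.writerows(results)
    return results

# Example: refine all patterns of one scan, then inspect the fits and,
# if needed, adjust the model and re-run the batch.
results = refine_batch("scan_001", model={"phase": "calcite"}, out_csv="fits.csv")
print(f"refined {len(results)} diffractograms")
```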


2018 ◽  
Vol 7 (3.29) ◽  
pp. 12
Author(s):  
L Chandra Sekhar Reddy ◽  
Dr D. Murali

We live today in a digital world where a tremendous amount of data is generated by every digital service we use. This vast amount of data is called Big Data. According to Wikipedia, Big Data is a term for data sets so large or complex that traditional data-processing application software is inadequate to handle them [5]. The challenges of such data include capturing, storing, analysing, searching, sharing, transferring, viewing, consulting, and updating the data, as well as maintaining the confidentiality of the information. Google's streaming service, YouTube, is one of the best examples of services that produce a massive amount of data in a brief period. Data extraction from this significant amount of data is done using Hadoop and MapReduce, and their performance is measured. Hadoop is a framework that offers reliable, distributed storage through HDFS (the Hadoop Distributed File System) and analysis through MapReduce. MapReduce is a programming model and a corresponding implementation for processing large data sets. This article presents the analysis of Big Data on YouTube using the Hadoop and MapReduce techniques.
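As a concrete illustration of the MapReduce model in this setting, the Hadoop Streaming-style sketch below counts videos per category from tab-separated YouTube metadata; the column layout and file names are assumptions, not the dataset actually analysed in the article.

```python
#!/usr/bin/env python3
"""Toy Hadoop Streaming job: count YouTube videos per category.
The same file serves as mapper and reducer, e.g.
  hadoop jar hadoop-streaming.jar -input youtube.tsv -output counts \
    -mapper "yt_count.py map" -reducer "yt_count.py reduce" -file yt_count.py
"""
import sys

def mapper():
    # Assumed layout: tab-separated metadata with the category in column 4.
    for line in sys.stdin:
        fields = line.rstrip("\n").split("\t")
        if len(fields) > 3:
            print(f"{fields[3]}\t1")

def reducer():
    # Hadoop delivers the mapper output sorted by key, so counts can be summed per run of keys.
    current, total = None, 0
    for line in sys.stdin:
        key, value = line.rstrip("\n").split("\t")
        if key != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = key, 0
        total += int(value)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```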


2021 ◽  
Author(s):  
Shahriar Shirvani Moghaddam ◽  
Kiaksar Shirvani Moghaddam

Designing an efficient data sorting algorithm with low time and space complexity is essential for large data sets in wireless networks, the Internet of Things, data mining systems, computer science, and communications engineering. This paper proposes a low-complexity data sorting algorithm that distinguishes the sorted/similar data, makes independent subarrays, and sorts the subarrays' data using one of the popular sorting algorithms. It is proved that the mean-based pivot is as efficient as the median-based pivot for making equal-length subarrays. The numerical analyses indicate slight improvements in the elapsed time and the number of swaps of the proposed serial Merge-based and Quick-based algorithms compared to the conventional ones for low/high-variance integer/non-integer uniform/Gaussian data, at different data lengths. Moreover, thanks to the gradual data extraction feature, the sorted parts can be extracted sequentially before the sorting process ends. In addition, making independent subarrays provides a general framework for parallel realization of sorting algorithms with separate parts. Simulation results indicate the effectiveness of the proposed parallel Merge-based and Quick-based algorithms compared with the conventional serial and multi-core parallel algorithms. Finally, the complexity of the proposed algorithm in both serial and parallel realizations is analyzed, showing an impressive improvement.
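The full algorithm, including the detection of sorted/similar data and the gradual extraction feature, is not reproduced here; the sketch below only shows the core idea of using the mean as the pivot to split the data into independent subarrays, which can then be sorted separately (serially or in parallel) and concatenated.

```python
import statistics

def mean_pivot_sort(data: list[float]) -> list[float]:
    """Recursively split around the mean, then concatenate the sorted halves.
    The two subarrays are independent, so the recursive calls could run in parallel."""
    if len(data) <= 1 or min(data) == max(data):   # already trivial or all values equal
        return list(data)
    pivot = statistics.fmean(data)                 # mean-based pivot instead of the median
    lower = [x for x in data if x <= pivot]
    upper = [x for x in data if x > pivot]
    return mean_pivot_sort(lower) + mean_pivot_sort(upper)

print(mean_pivot_sort([7.0, 2.0, 9.0, 2.0, 5.0, 1.0, 8.0]))
# [1.0, 2.0, 2.0, 5.0, 7.0, 8.0, 9.0]
```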


Author(s):  
John A. Hunt

Spectrum-imaging is a useful technique for comparing different processing methods on very large data sets that are identical for each method. This paper is concerned with comparing methods of electron energy-loss spectroscopy (EELS) quantitative analysis on the Al-Li system. The spectrum-image analyzed here was obtained from an Al-10 at.% Li foil aged to produce δ' precipitates that can span the foil thickness. Two 1024-channel EELS spectra offset in energy by 1 eV were recorded and stored at each pixel in the 80x80 spectrum-image (25 Mbytes). An energy range of 39-89 eV (20 channels/eV) is represented. During processing the spectra are either subtracted to create an artifact-corrected difference spectrum, or the energy offset is numerically removed and the spectra are added to create a normal spectrum. The spectrum-images are processed into 2D floating-point images using methods and software described in [1].
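As a rough illustration of the two processing routes, the numpy sketch below either subtracts the two spectra directly (the artifact-corrected difference spectrum) or removes the 1 eV offset, here 20 channels at 20 channels/eV, and adds them (the normal spectrum); the synthetic spectra are stand-ins, since the stored data are not available here.

```python
import numpy as np

CHANNELS = 1024
CHANNELS_PER_EV = 20
OFFSET_CHANNELS = 1 * CHANNELS_PER_EV   # the 1 eV energy offset

# Synthetic stand-ins for the two spectra stored at one pixel.
energy = np.linspace(39.0, 89.0, CHANNELS)
spectrum_a = np.exp(-(energy - 55.0) ** 2 / 20.0)           # nominal spectrum
spectrum_b = np.exp(-((energy - 1.0) - 55.0) ** 2 / 20.0)   # recorded shifted by 1 eV

# Route 1: artifact-corrected difference spectrum.
difference = spectrum_a - spectrum_b

# Route 2: remove the offset numerically, then add to form a normal spectrum.
realigned_b = np.roll(spectrum_b, -OFFSET_CHANNELS)
normal = (spectrum_a + realigned_b)[: CHANNELS - OFFSET_CHANNELS]   # drop wrapped channels

print(difference.shape, normal.shape)
```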


Author(s):  
Thomas W. Shattuck ◽  
James R. Anderson ◽  
Neil W. Tindale ◽  
Peter R. Buseck

Individual particle analysis involves the study of tens of thousands of particles using automated scanning electron microscopy and elemental analysis by energy-dispersive X-ray emission spectroscopy (EDS). EDS produces large data sets that must be analyzed using multivariate statistical techniques. A complete study uses cluster analysis, discriminant analysis, and factor or principal components analysis (PCA). The three techniques are used in the study of particles sampled during the FeLine cruise to the mid-Pacific Ocean in the summer of 1990. The mid-Pacific aerosol provides information on long-range particle transport, iron deposition, sea-salt ageing, and halogen chemistry. Aerosol particle data sets suffer from a number of difficulties for pattern recognition using cluster analysis. There is a great disparity in the number of observations per cluster and the range of the variables in each cluster. The variables are not normally distributed, they are subject to considerable experimental error, and many values are zero because of finite detection limits. Many of the clusters show considerable overlap because of natural variability, agglomeration, and chemical reactivity.
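To make the multivariate workflow, and the need to pre-treat skewed, zero-inflated variables, concrete, the sketch below standardizes a small synthetic particle-composition matrix, reduces it by PCA, and clusters the scores with k-means; the element set, particle counts, and number of clusters are assumptions, not the FeLine data.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic EDS data: rows are particles, columns are element intensities
# (Na, Mg, Si, S, Cl, Fe).  Values are skewed and include many zeros,
# mimicking finite detection limits.
counts = rng.lognormal(mean=1.0, sigma=1.0, size=(300, 6))
counts[rng.random(counts.shape) < 0.3] = 0.0

# Log-transform (after adding 1) and standardize to tame the skew and
# the very different ranges of the variables.
features = StandardScaler().fit_transform(np.log1p(counts))

# Principal components analysis, then k-means on the leading scores.
scores = PCA(n_components=3).fit_transform(features)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)

print("particles per cluster:", np.bincount(labels))
```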


Author(s):  
Mykhajlo Klymash ◽  
Olena Hordiichuk-Bublivska ◽  
Ihor Tchaikovskyi ◽  
Oksana Urikova

This article investigates the features of processing large arrays of information in distributed systems. A method of singular data decomposition is used to reduce the amount of data processed by eliminating redundancy. Dependencies of computational efficiency for distributed systems were obtained using the MPI message-passing protocol and the MapReduce model of node interaction. The efficiency of each technology was analyzed for processing different sizes of data: non-distributed systems are inefficient for large volumes of information because of their low computing performance. It is therefore proposed to use distributed systems that apply the method of singular data decomposition, which reduces the amount of information processed. The study of systems using the MPI protocol and the MapReduce model yielded the dependence of computation time on the number of processes, which confirms the expediency of using distributed computing when processing large data sets. It was also found that distributed systems using the MapReduce model work much more efficiently than MPI, especially with large amounts of data, while MPI performs calculations more efficiently for small amounts of information. As the data sets grow, it is advisable to use the MapReduce model.
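The decomposition step is not given in code in the article, so the numpy sketch below only illustrates the underlying idea: a singular value decomposition truncated to rank k approximates a redundant data matrix with far fewer values, which is what reduces the volume of data handed to the distributed workers; the matrix size and rank are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# A large, redundant data matrix: rows are near-combinations of a few patterns.
base = rng.normal(size=(10, 400))
data = rng.normal(size=(5000, 10)) @ base + 0.01 * rng.normal(size=(5000, 400))

# Singular value decomposition, truncated to rank k.
k = 10
U, s, Vt = np.linalg.svd(data, full_matrices=False)
compressed = (U[:, :k] * s[:k], Vt[:k, :])          # the factors that would be distributed

original_size = data.size
reduced_size = compressed[0].size + compressed[1].size
approx = compressed[0] @ compressed[1]
rel_error = np.linalg.norm(data - approx) / np.linalg.norm(data)

print(f"kept {reduced_size / original_size:.1%} of the values, "
      f"relative reconstruction error {rel_error:.2e}")
```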


2018 ◽  
Vol 2018 (6) ◽  
pp. 38-39
Author(s):  
Austa Parker ◽  
Yan Qu ◽  
David Hokanson ◽  
Jeff Soller ◽  
Eric Dickenson ◽  
...  
