2D Images
Recently Published Documents

TOTAL DOCUMENTS: 621 (five years: 209)
H-INDEX: 26 (five years: 4)

2022 ◽  
Vol 209 ◽  
pp. 109974
Author(s):  
Lixin Wang ◽  
Yanshu Yin ◽  
Changmin Zhang ◽  
Wenjie Feng ◽  
Guoyong Li ◽  
...  

Author(s):  
V. A. Ganchenko ◽  
E. E. Marushko ◽  
L. P. Podenok ◽  
A. V. Inyutin

This article evaluates the information content of metal-object surface features for classifying fractures using 2D and 3D data. The features considered are Haralick textural characteristics and local binary patterns of pixels for 2D images, and macrogeometric descriptors of metal objects digitized by a 3D scanner. The analysis is carried out on the basis of information-content estimation, in order to select the features most suitable for classifying metal fractures. The results will be used to develop methods for complex forensic examination of polygonal surfaces of solid objects within an automated digital-image analysis system.
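Local binary patterns, one of the 2D texture features the abstract mentions, can be sketched in a few lines. This minimal NumPy version (8-neighbour, no rotation invariance) is an illustrative simplification, not the authors' implementation:

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour local binary pattern histogram.

    Each interior pixel is compared with its 8 neighbours; every
    neighbour whose intensity is >= the centre sets one bit of an
    8-bit code. The normalised histogram of codes is a compact
    texture descriptor.
    """
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]  # interior pixels (the code centres)
    # Offsets of the 8 neighbours, clockwise from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

On a perfectly flat patch every neighbour equals the centre, so all codes are 255, which makes a quick sanity check.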


2022 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Chih-Hao Wen ◽  
Chih-Chan Cheng ◽  
Yuh-Chuan Shih

Purpose – This research aims to collect human body variables via 2D images captured by digital cameras. Based on those variables, Digital Camouflage Uniform (DCU) sizes for Taiwan's military personnel are forecast and recommended.
Design/methodology/approach – A total of 375 subjects were recruited (253 male; 122 female). OpenPose converts the photographed 2D images into four body variables, which are compared simultaneously with tape-measure and 3D-scanning results. A decision tree then builds the DCU recommendation model, and the Euclidean distance to each DCU size in the manufacturing specification yields the best three recommendations.
Findings – The fitting score of the single size recommended by the decision tree is only 0.62 and 0.63, but for the best three options the DCU Fitting Score reaches 0.8 or more. Although the measurement methods differ, OpenPose and 3D scanning show the highest correlation coefficient, confirming that OpenPose has significant measurement validity: inexpensive equipment can yield reasonable results.
Originality/value – The proposed method suits long-distance, non-contact and non-pre-labeled applications in e-commerce and the apparel industry while the world faces Covid-19. In particular, it can spare ordinary users the trouble of taking measurements when purchasing clothing online.
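The best-three recommendation step can be sketched as a nearest-neighbour search over a size chart. The size names and specification values below are hypothetical stand-ins, not the actual DCU manufacturing specification:

```python
import numpy as np

# Hypothetical size chart: each row gives a size's specification for
# four body variables (e.g. height, chest, waist, hip in cm).
SIZE_SPEC = {
    "S":  np.array([165.0,  90.0, 75.0,  92.0]),
    "M":  np.array([170.0,  96.0, 80.0,  97.0]),
    "L":  np.array([175.0, 102.0, 86.0, 102.0]),
    "XL": np.array([180.0, 108.0, 92.0, 107.0]),
}

def top3_sizes(body):
    """Return the three sizes whose spec vectors are nearest in
    Euclidean distance to the measured body variables."""
    body = np.asarray(body, dtype=float)
    dists = {name: float(np.linalg.norm(body - spec))
             for name, spec in SIZE_SPEC.items()}
    return sorted(dists, key=dists.get)[:3]
```

For a subject measuring (171, 95, 79, 96) this ranks M first, with S and L as fallbacks, mirroring how a best-three list tolerates error in any single body variable.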


Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8397
Author(s):  
Van-Hung Le ◽  
Rafal Scherer

Human segmentation and tracking often build on person detection in video, so their results depend heavily on the detection results. With the advent of Convolutional Neural Networks (CNNs), excellent results have been achieved in this field. Segmenting and tracking people in video has significant applications in monitoring and in estimating human pose in 2D images and 3D space. In this paper, we survey studies, methods, datasets, and results for human segmentation and tracking in video, and also touch upon person detection, since it affects the segmentation and tracking results. The survey is detailed down to source-code paths. The MADS (Martial Arts, Dancing and Sports) dataset comprises fast and complex activities and was published for the task of estimating human posture; before the pose can be determined, however, the person needs to be detected and segmented in the video. We therefore also publish a mask dataset to evaluate the segmentation and tracking of people in video: our MASK MADS dataset contains 28,000 mask images. Finally, we evaluate many recently published CNN methods for segmenting and tracking people on the MADS dataset.
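Segmentation quality on a mask dataset such as MASK MADS is typically scored with intersection-over-union; a minimal sketch, assuming binary person masks:

```python
import numpy as np

def mask_iou(pred, gt):
    """Intersection-over-union between two binary person masks.

    Both masks are (H, W) arrays; nonzero pixels count as person.
    Two empty masks are treated as a perfect match.
    """
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0
```

Averaging this score over frames gives a per-sequence tracking-quality figure that different CNN methods can be ranked by.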


2021 ◽  
Vol 66 (2) ◽  
pp. 69
Author(s):  
A.-I. Marinescu

This paper tackles the sensitive subject of face-shape identification from near-neutral-pose 2D images of human subjects. An extension to 3D facial models is also proposed, which would remove the need for the neutral stance. Accurate face-shape classification is a vital building block of any hairstyle or eyewear recommender system. Our approach extracts relevant facial-landmark measurements and passes them through a naive Bayes classifier to yield the final decision. The literature on this subject is particularly scarce owing to the very subjective nature of human face-shape classification. We contribute a robust, automatic system that performs this task and highlight future development directions on this matter.
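The classifier unit can be illustrated with a minimal Gaussian naive Bayes. The single landmark measurement used here (a face width-to-height ratio) and the training values are invented for illustration; they are not the paper's actual measurements or data:

```python
import numpy as np

class TinyGaussianNB:
    """Minimal Gaussian naive Bayes over landmark-derived measurements."""

    def fit(self, X, y):
        X, y = np.asarray(X, float), np.asarray(y)
        self.classes_ = np.unique(y)
        # Per-class feature means, variances (with a small floor), priors.
        self.mu_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.var_ = np.array([X[y == c].var(axis=0) + 1e-9
                              for c in self.classes_])
        self.prior_ = np.array([(y == c).mean() for c in self.classes_])
        return self

    def predict(self, X):
        X = np.atleast_2d(np.asarray(X, float))
        # Log-likelihood of each class: sum of independent Gaussians
        # over features, plus the log prior.
        ll = -0.5 * (np.log(2 * np.pi * self.var_)[None]
                     + (X[:, None] - self.mu_[None]) ** 2
                     / self.var_[None]).sum(-1)
        ll += np.log(self.prior_)[None]
        return self.classes_[ll.argmax(axis=1)]

# Invented ratios: round faces near 1.0, oblong faces near 0.75.
model = TinyGaussianNB().fit(
    [[1.00], [0.98], [0.75], [0.72]],
    ["round", "round", "oblong", "oblong"])
```

With more landmark measurements per subject (jaw width, cheekbone width, forehead height, and so on), the same class would take multi-column feature rows unchanged.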


Author(s):  
Christina Konstantopoulos ◽  
Tejas S Mehta ◽  
Alexander Brook ◽  
Vandana Dialani ◽  
Rashmi Mehta ◽  
...  

Abstract
Objective: Low-energy (LE) images of contrast-enhanced mammography (CEM) have been shown to be noninferior to digital mammography; our experience, however, is that LE images are superior to 2D mammography. Our purpose was to compare cancer appearance on LE versus 2D images.
Methods: In this IRB-approved retrospective study, seven breast radiologists evaluated 40 biopsy-proven cancer cases on craniocaudal (CC) and mediolateral oblique (MLO) LE images and recent 2D images for cancer visibility, confidence in margins, and conspicuity of findings, using a Likert scale. Objective measurements used the contrast-to-noise ratio (CNR) estimated from regions of interest placed on tumor and background parenchyma. Reader agreement was evaluated with the Fleiss kappa, per-reader comparisons with the Wilcoxon test, and overall comparisons with three-way analysis of variance.
Results: Low-energy images showed improved performance for visibility (CC LE 4.0 vs 2D 3.5, P < 0.001 and MLO LE 3.7 vs 2D 3.5, P = 0.01), confidence in margins (CC LE 3.2 vs 2D 2.8, P < 0.001 and MLO LE 3.1 vs 2D 2.9, P = 0.008), and conspicuity relative to background tissue density (CC LE 3.6 vs 2D 3.2, P < 0.001 and MLO LE 3.5 vs 2D 3.2, P < 0.001). The average CNR was significantly higher for LE than for digital mammography (CC: 2D 2.1 vs LE 3.2, P < 0.001; MLO: 2D 2.1 vs LE 3.4, P < 0.001).
Conclusion: Our results suggest that cancers may be better visualized on LE CEM images than on the 2D digital mammogram.
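The CNR measurement from tumor and background regions of interest can be sketched as follows; the abstract does not spell out its exact formula, so this uses one common definition (tumor-background mean difference over background noise):

```python
import numpy as np

def cnr(tumor_roi, background_roi):
    """Contrast-to-noise ratio between a tumor ROI and background
    parenchyma: (mean_tumor - mean_background) / std_background.

    One standard variant; published CNR definitions differ in the
    noise term, so this is an assumption, not the study's formula.
    """
    t = np.asarray(tumor_roi, dtype=float)
    b = np.asarray(background_roi, dtype=float)
    return (t.mean() - b.mean()) / b.std()
```

In practice the ROIs would be pixel patches drawn by a radiologist on the LE or 2D image; larger CNR means the tumor stands out further from the parenchymal background relative to its noise.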


OENO One ◽  
2021 ◽  
Vol 55 (4) ◽  
pp. 209-226
Author(s):  
Carlos Lopes ◽  
Jorge Cadima

Recent advances in machine vision have provided a multitude of automatic tools for recognizing and quantitatively estimating grapevine bunch features in 2D images. Converting those features into bunch weight (BuW), however, is still a major challenge. This paper compares the explanatory power of the number of visible berries (#vBe) and of the bunch area (BuA) in 2D images for predicting BuW. A set of 300 bunches from four grapevine cultivars was picked at harvest and imaged with a digital RGB camera. Each bunch was then manually assessed for several morphological attributes; from each image, #vBe was visually assessed while BuA was segmented using manual labelling combined with image-processing software. Single and multiple regressions of BuW on the image-based variables were fitted, and the resulting models were validated with two independent datasets. The high goodness of fit of all the linear regression models indicates that either image-based variable can serve as an accurate proxy of actual bunch weight, and that a general model is also suitable. Comparing the explanatory power of the two image-based attributes showed that models based on the predictor #vBe had a slightly lower coefficient of determination (R2) than models based on BuA. Combining the two variables in a multiple regression produced models with similar or noticeably higher R2 than the single-predictor models; however, adding a second variable yielded a larger and more general gain in accuracy for models based on #vBe than for those based on BuA. Our results recommend the models based on both image-based variables, as they were generally more accurate and robust than the single-variable models.
When the gain in accuracy from adding a second image-based feature is small, a single predictor can be used; in that case, our results indicate that BuA is a more accurate and less cultivar-dependent option than #vBe.
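The single- and multiple-regression modelling the paper describes can be sketched with an ordinary least-squares fit that reports R2. The variable names follow the abstract, but any sample values fed to it here are invented, not the study's data:

```python
import numpy as np

def fit_bunch_weight(vbe, bua, buw):
    """Least-squares fit of bunch weight on visible-berry count and
    bunch area: BuW = b0 + b1 * #vBe + b2 * BuA.

    Returns the coefficients (b0, b1, b2) and the coefficient of
    determination R^2 of the fit.
    """
    vbe = np.asarray(vbe, dtype=float)
    X = np.column_stack([np.ones_like(vbe), vbe, bua])
    coef, *_ = np.linalg.lstsq(X, np.asarray(buw, dtype=float),
                               rcond=None)
    pred = X @ coef
    ss_res = ((buw - pred) ** 2).sum()
    ss_tot = ((buw - np.mean(buw)) ** 2).sum()
    return coef, 1.0 - ss_res / ss_tot
```

Dropping either column of `X` gives the corresponding single-predictor model, so the same routine supports the paper's model comparison.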


Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8208
Author(s):  
Jaerock Kwon ◽  
Yunju Lee ◽  
Jehyung Lee

Model-based gait analysis of the kinematic characteristics of the human body has been used to identify individuals. For extracting gait features, spatiotemporal changes of anatomical landmarks of the body in 3D are preferable, yet without special lab settings only 2D images are easily acquired, by monocular video cameras in real-world settings. The 2D and 3D locations of key joints are estimated by 2D and 3D pose estimators, so 3D joint positions can be recovered from 2D image sequences of human gait. However, obtaining exact gait features of a person remains challenging because of viewpoint variance and occlusion of body parts in 2D images. In this study, we conducted a comparative study of two approaches to viewpoint-invariant person re-identification from gait patterns: feature-based and spatiotemporal. The first method identifies an individual using gait features extracted from time-series 3D joint positions. The second uses a Siamese Long Short-Term Memory (LSTM) network on the 3D spatiotemporal changes of key joint positions in a gait cycle to classify an individual without extracting gait features. To validate and compare the two methods, we ran experiments on two open datasets, MARS and CASIA-A. The Siamese LSTM outperforms the gait-feature-based approach by 20% on the MARS dataset and by 55% on the CASIA-A dataset, which suggests that feature-based gait analysis using 2D and 3D pose estimators is still premature. As future work, we suggest developing large-scale human gait datasets and designing accurate 2D and 3D joint-position estimators specifically for gait patterns. We expect this comparative study and the future work to contribute to rehabilitation research, forensic gait analysis, and early detection of neurological disorders.
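The feature-based branch of the comparison can be illustrated with two toy descriptors computed from 3D ankle trajectories. The joint choice and the features (maximum ankle separation as a stride proxy, mean midpoint speed) are simplifications for illustration, not the study's actual feature set:

```python
import numpy as np

def gait_features(ankles_l, ankles_r):
    """Toy gait descriptors from 3D ankle trajectories.

    ankles_l, ankles_r: (T, 3) arrays of left/right ankle positions
    over T frames. Returns (stride, speed): the maximum ankle
    separation across the sequence and the mean per-frame speed of
    the ankle midpoint. Real feature-based systems use many more
    joints and cycle-normalised statistics.
    """
    l = np.asarray(ankles_l, dtype=float)
    r = np.asarray(ankles_r, dtype=float)
    # Stride proxy: widest left-right ankle separation in any frame.
    stride = np.linalg.norm(l - r, axis=1).max()
    # Speed proxy: mean frame-to-frame displacement of the midpoint.
    mid = (l + r) / 2.0
    speed = np.linalg.norm(np.diff(mid, axis=0), axis=1).mean()
    return stride, speed
```

Two sequences can then be compared by the distance between their feature vectors, whereas the Siamese LSTM learns that distance directly from the raw joint trajectories.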

