single view
Recently Published Documents

TOTAL DOCUMENTS: 810 (FIVE YEARS: 267)
H-INDEX: 40 (FIVE YEARS: 8)

Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 518
Author(s):  
Ashraf Siddique ◽  
Seungkyu Lee

Three-dimensional (3D) shape symmetry plays a critical role in the reconstruction and recognition of 3D objects under occlusion or partial viewpoint observation. A symmetry-structure prior is particularly useful for recovering missing or unseen parts of an object. In this work, we propose Sym3DNet for single-view 3D reconstruction, which employs a 3D reflection-symmetry prior of an object. More specifically, Sym3DNet comprises 2D-to-3D encoder-decoder networks followed by a symmetry fusion step and a multi-level perceptual loss. The symmetry fusion step builds flipped and overlapped 3D shapes that are fed to a 3D shape encoder to compute the multi-level perceptual loss. The perceptual loss, computed in several feature spaces, accounts not only for voxel-wise shape symmetry but also for the overall global symmetry of an object. Experimental evaluations are conducted on both large-scale synthetic 3D data (ShapeNet) and real-world 3D data (Pix3D). The proposed method outperforms state-of-the-art approaches in both efficiency and accuracy on the two datasets. To demonstrate the generalization ability of our approach, we also conduct an experiment on unseen-category samples of ShapeNet, with promising reconstruction results.
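To make the symmetry-fusion step concrete, here is a minimal PyTorch sketch of the idea: the predicted voxel grid is overlapped with its mirror image, and both the fused prediction and the ground truth are passed through a shared 3D encoder to compute a multi-level perceptual loss. The voxel-grid layout, the element-wise max fusion rule, and the tiny encoder are illustrative choices, not the authors' exact implementation.

```python
# A minimal sketch of symmetry fusion + multi-level perceptual loss,
# assuming a (B, 1, D, H, W) voxel occupancy grid whose reflection
# plane is aligned with the last axis. All layer sizes are placeholders.
import torch
import torch.nn as nn

def symmetry_fuse(vox: torch.Tensor) -> torch.Tensor:
    """Overlap a predicted voxel grid with its mirror image."""
    flipped = torch.flip(vox, dims=[-1])   # reflect about the W axis
    return torch.maximum(vox, flipped)     # keep the union of both halves

class Tiny3DEncoder(nn.Module):
    """Stand-in 3D shape encoder exposing intermediate feature maps."""
    def __init__(self):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.ReLU()),
            nn.Sequential(nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.ReLU()),
            nn.Sequential(nn.Conv3d(32, 64, 4, stride=2, padding=1), nn.ReLU()),
        ])

    def forward(self, x):
        feats = []
        for block in self.blocks:
            x = block(x)
            feats.append(x)          # collect one feature map per level
        return feats

def multilevel_perceptual_loss(encoder, pred, target):
    """L2 distance between encoder features of the symmetry-fused
    prediction and of the ground-truth shape, summed over levels."""
    pred_feats = encoder(symmetry_fuse(pred))
    with torch.no_grad():
        tgt_feats = encoder(target)
    return sum(nn.functional.mse_loss(p, t)
               for p, t in zip(pred_feats, tgt_feats))
```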


PLoS ONE ◽  
2022 ◽  
Vol 17 (1) ◽  
pp. e0262181
Author(s):  
Prasetia Utama Putra ◽  
Keisuke Shima ◽  
Koji Shimatani

Multiple cameras are often used to resolve the occlusion problems that arise in single-view human activity recognition. Building on the success of representation learning with deep neural networks (DNNs), recent works have proposed DNN models that estimate human activity from multi-view inputs. However, currently available datasets are too small to train such models to a high accuracy. To address this issue, this study presents a DNN model, trained using transfer learning and shared-weight techniques, that classifies human activity from multiple cameras. The model comprises pre-trained convolutional neural networks (CNNs), attention layers, long short-term memory networks with residual learning (LSTMRes), and Softmax layers. The experimental results suggest that the proposed model achieves promising performance on challenging multi-view human activity recognition (MVHAR) datasets: IXMAS (97.27%) and i3DPost (96.87%). A competitive recognition rate was also observed in online classification.
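A rough sketch of the shared-weight, multi-view design described above, assuming each sample is a clip of shape (views, time, C, H, W): the same pre-trained CNN encodes every view (weight sharing), an attention layer fuses the views at each time step, and an LSTM models the temporal sequence. The backbone choice, the plain (non-residual) LSTM, and all layer sizes are placeholders, not the paper's configuration.

```python
# Hedged sketch of a shared-weight multi-view activity classifier.
import torch
import torch.nn as nn
from torchvision import models

class MultiViewHAR(nn.Module):
    def __init__(self, num_classes: int, feat_dim: int = 512):
        super().__init__()
        backbone = models.resnet18(weights="IMAGENET1K_V1")   # transfer learning
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # drop fc head
        self.attn = nn.Linear(feat_dim, 1)        # scores each view
        self.lstm = nn.LSTM(feat_dim, 256, batch_first=True)
        self.head = nn.Linear(256, num_classes)   # Softmax applied in the loss

    def forward(self, clips):                     # (B, V, T, C, H, W)
        b, v, t, c, h, w = clips.shape
        # One CNN, applied with the SAME weights to every view and frame.
        feats = self.cnn(clips.reshape(b * v * t, c, h, w)).reshape(b, v, t, -1)
        # Attention fuses per-view features at each time step.
        weights = torch.softmax(self.attn(feats), dim=1)       # over views
        fused = (weights * feats).sum(dim=1)                   # (B, T, feat)
        out, _ = self.lstm(fused)
        return self.head(out[:, -1])               # logits for the final step
```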


2022 ◽  
Vol 14 (2) ◽  
pp. 257
Author(s):  
Yu Tao ◽  
Siting Xiong ◽  
Jan-Peter Muller ◽  
Greg Michael ◽  
Susan J. Conway ◽  
...  

We propose coupling deep-learning-based super-resolution restoration (SRR) and single-image digital terrain model (DTM) estimation (SDE) methods to produce subpixel-scale topography from single-view ESA Trace Gas Orbiter Colour and Stereo Surface Imaging System (CaSSIS) and NASA Mars Reconnaissance Orbiter High Resolution Imaging Science Experiment (HiRISE) images. We present qualitative and quantitative assessments of the resultant 2 m/pixel CaSSIS SRR DTM mosaic over the planned landing site of the ESA and Roscosmos Rosalind Franklin ExoMars rover (RFEXM22) at Oxia Planum. Quantitative evaluation shows that SRR improves the effective resolution of the resultant CaSSIS DTM by a factor of 4 or more, while achieving fairly good height accuracy, measured by root mean squared error (1.876 m) and structural similarity (0.607), against the ultra-high-resolution HiRISE SRR DTMs at 12.5 cm/pixel. Alongside this paper, we make available the resultant CaSSIS SRR image and SRR DTM mosaics, as well as the HiRISE full-strip SRR images and SRR DTMs, to support landing-site characterisation and future rover engineering for the RFEXM22 mission.
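A minimal sketch of the kind of quantitative check reported above: height RMSE and structural similarity between an SRR DTM and a reference DTM, assumed already co-registered and resampled onto the same grid. The function and variable names are hypothetical.

```python
# Hedged sketch: compare two DTM height arrays by RMSE and SSIM.
import numpy as np
from skimage.metrics import structural_similarity

def dtm_agreement(dtm_test: np.ndarray, dtm_ref: np.ndarray):
    """Height RMSE (in the DTMs' units) on valid pixels, plus SSIM."""
    valid = np.isfinite(dtm_test) & np.isfinite(dtm_ref)
    diff = dtm_test[valid] - dtm_ref[valid]
    rmse = float(np.sqrt(np.mean(diff ** 2)))
    ssim = structural_similarity(
        np.nan_to_num(dtm_test), np.nan_to_num(dtm_ref),
        data_range=float(np.nanmax(dtm_ref) - np.nanmin(dtm_ref)))
    return rmse, ssim

# Usage (arrays assumed co-registered and on the same grid):
# rmse, ssim = dtm_agreement(cassis_srr_dtm, hirise_dtm)
# print(f"RMSE {rmse:.3f} m, SSIM {ssim:.3f}")
```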


2022 ◽  
Vol 6 ◽  
Author(s):  
Sukanto Limbong ◽  
Senada Siallagan

This is an ethnographic study aimed at understanding the real condition of poverty in Indonesia. A preliminary study showed that the new normal brought about by COVID-19 has had massive social impacts. Three kinds of poverty can be observed in real terms: extreme poverty, absolute poverty, and relative poverty. Moreover, viewed from a biblical perspective, two words informed our research on poverty: "race", meaning "poor people", and "dal", which is translated as "weak" more often than "poor". The Bible does not present a single view of poverty; rather, several passages address it. The first is idleness poverty, caused by laziness or the neglect of one's personal responsibility to seek the means to meet one's needs; the Bible holds up the ant as the opposite of laziness in Proverbs 6:6. The second is theodicy poverty, illustrated by Job, who was stripped of his riches yet was able to accept and embrace whatever the Lord gave him.


Author(s):  
Javier Rodriguez-Puigvert ◽  
Ruben Martinez-Cantin ◽  
Javier Civera

Author(s):  
Guiju Ping ◽  
Mahdi Abolfazli Esfahani ◽  
Jiaying Chen ◽  
Han Wang

2022 ◽  
Vol 88 (1) ◽  
pp. 65-72
Author(s):  
Wanxuan Geng ◽  
Weixun Zhou ◽  
Shuanggen Jin

Traditional urban scene-classification approaches focus on images taken from either a satellite or an aerial view. Although single-view images achieve satisfactory scene-classification results in most situations, the complementary information provided by other image views is needed to further improve performance. We therefore present a complementary information-learning model (CILM) to perform multi-view scene classification of aerial and ground-level images. Specifically, CILM takes aerial and ground-level image pairs as input and learns view-specific features that are later fused to integrate the complementary information. To train CILM, a unified loss consisting of cross-entropy and contrastive losses is used to make the network more robust. Once CILM is trained, the features of each view are extracted via the two proposed feature-extraction scenarios and then fused to train a support vector machine classifier. Experimental results on two publicly available benchmark data sets demonstrate that CILM achieves remarkable performance, indicating that it is an effective model for learning complementary information and thus improving urban scene classification.
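A hedged sketch of a unified loss combining cross entropy with a pairwise contrastive term, in the spirit of the training objective described above. The margin, the weighting factor, and the use of Euclidean distance are assumptions, not CILM's published settings.

```python
# Sketch: cross entropy + contrastive loss over aerial/ground pairs.
import torch
import torch.nn.functional as F

def unified_loss(aerial_feat, ground_feat, logits, labels,
                 same_scene, margin: float = 1.0, lam: float = 0.5):
    """aerial_feat, ground_feat: (B, D) view-specific embeddings.
    logits: (B, num_classes); labels: (B,) class indices;
    same_scene: (B,) float, 1.0 if the pair shows the same scene."""
    ce = F.cross_entropy(logits, labels)
    dist = F.pairwise_distance(aerial_feat, ground_feat)
    # Pull matching aerial/ground pairs together, push others apart.
    contrastive = (same_scene * dist.pow(2) +
                   (1 - same_scene) * F.relu(margin - dist).pow(2)).mean()
    return ce + lam * contrastive
```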


2021 ◽  
Vol 38 (6) ◽  
pp. 1699-1711
Author(s):  
Devanshu Tiwari ◽  
Manish Dixit ◽  
Kamlesh Gupta

This paper presents a fully automated breast cancer detection system, "Deep Multi-view Breast Cancer Detection", based on deep transfer learning. A deep transfer-learning model, Visual Geometry Group 16 (VGG 16), is used to classify breast thermal images as either normal or abnormal. The VGG 16 model is trained on static as well as dynamic breast thermal image datasets consisting of multi-view and single-view images. The multi-view breast thermal images are generated, for the first time, by concatenating the conventional left, frontal, and right-view thermal images taken from the Database for Mastology Research with Infrared Image, producing a more informative and complete thermal temperature map of the breast and enhancing the accuracy of the overall system. For a fair comparison, three other popular deep transfer-learning models, Residual Network 50 (ResNet50V2), InceptionV3, and Visual Geometry Group 19 (VGG 19), are trained on the same augmented dataset of multi-view and single-view breast thermal images. The VGG 16 based system delivers the best training, validation, and testing accuracies of the four models: it achieves an encouraging testing accuracy of 99% on the dynamic breast thermal image testing dataset using multi-view images as input, whereas VGG 19, ResNet50V2, and InceptionV3 achieve 95%, 94%, and 89%, respectively, on the same data.
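A simplified sketch of the multi-view construction and VGG 16 transfer learning described above: the left, frontal, and right thermograms are concatenated side by side into one image before classification. The horizontal concatenation axis, the frozen convolutional base, and the two-class head are illustrative assumptions.

```python
# Sketch: build a multi-view thermogram and fine-tune VGG 16 on it.
import torch
import torch.nn as nn
from torchvision import models

def make_multiview(left, frontal, right):
    """Stack three (C, H, W) view tensors horizontally into one image."""
    return torch.cat([left, frontal, right], dim=-1)   # (C, H, 3W)

vgg = models.vgg16(weights="IMAGENET1K_V1")
for p in vgg.features.parameters():
    p.requires_grad = False                # freeze the convolutional base
vgg.classifier[-1] = nn.Linear(4096, 2)    # normal vs. abnormal head
```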


2021 ◽  
Author(s):  
mariam hameed ◽  
Raja Shahbaz ◽  
Javed Iqbal

Many people searching for a vehicle browse the most popular listing websites to find the one they want. Sellers post their ads on multiple sites to reach a larger audience, which means buyers must search several sites to compare prices, since buyers always look for the lowest price. Visiting and searching multiple vehicle listing websites takes considerable time. In this paper, we propose an "Automobile Search Engine" (ASE) that aggregates the automobile ads posted across different websites. Our solution offers a single search over all of them: the user enters search criteria, ASE finds the matching ads from the different websites, and the results are displayed so the user can compare them in one single view. ASE is built on a multi-threaded crawler.
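A minimal sketch of the multi-threaded crawling idea behind ASE: several listing sites are fetched in parallel and their ads merged into one result list. The site URLs and the parsing step are placeholders, not the authors' implementation.

```python
# Hedged sketch: fetch several listing sites concurrently, merge results.
from concurrent.futures import ThreadPoolExecutor
import requests

SITES = [
    "https://example-cars-a.test/search?q={query}",
    "https://example-cars-b.test/search?q={query}",
]  # hypothetical listing sites

def fetch_ads(url_template: str, query: str) -> list[dict]:
    resp = requests.get(url_template.format(query=query), timeout=10)
    resp.raise_for_status()
    # Site-specific parsing (HTML or JSON) would go here.
    return [{"source": url_template, "raw": resp.text[:200]}]

def search(query: str) -> list[dict]:
    """Crawl all sites concurrently and merge into a single view."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = pool.map(lambda s: fetch_ads(s, query), SITES)
    return [ad for site_ads in results for ad in site_ads]

# ads = search("honda civic 2018")   # one search across every site
```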



