Simulating visual geometry

Author(s):  
Matthias Müller ◽  
Nuttapong Chentanez ◽  
Miles Macklin

Author(s):  
Cristina Dondi ◽  
Abhishek Dutta ◽  
Matilde Malaspina ◽  
Andrew Zisserman

A presentation of the 15cILLUSTRATION database and website, a searchable database of 15th-century printed illustrations developed by the 15cBOOKTRADE Project in collaboration with the Visual Geometry Group (VGG) at the Department of Engineering Science of the University of Oxford. 15cILLUSTRATION is the first comprehensive and systematic tool for tracking and investigating the production, use, circulation, and copying of woodblocks, iconographic subjects, and artistic styles within 15th-century printed illustrated editions. The paper illustrates the potential of the 15cILLUSTRATION website as a research support tool for art historians, book historians, philologists, and historians of visual and material culture.


Entropy ◽  
2019 ◽  
Vol 21 (10) ◽  
pp. 999
Author(s):  
Yuting Pu ◽  
Honggeng Yang ◽  
Xiaoyang Ma ◽  
Xiangxun Sun

The recognition of voltage sag sources is the basis for formulating a voltage sag governance plan and for clarifying responsibility for an accident. To address this recognition problem, a method based on phase space reconstruction and improved Visual Geometry Group (VGG) transfer learning is proposed from the perspective of image classification. First, phase space reconstruction is used to transform voltage sag signals into reconstruction images, from which the intuitive characteristics of different sag sources can be analyzed. Second, the standard VGG-16 model is improved with an attention mechanism so that features are extracted more completely and over-fitting is prevented. Finally, the model is trained with transfer learning, which improves both training efficiency and the recognition accuracy of sag sources; training minimizes the cross-entropy loss function. Simulation analysis verifies the effectiveness and superiority of the proposed method.
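The pipeline this abstract describes maps naturally to code. Below is a minimal sketch, not the authors' implementation: a 2-D time-delay embedding renders the sag waveform as a trajectory image, and an ImageNet-pretrained VGG-16 with a replaced classifier head stands in for the improved model. The attention module is omitted, and the delay tau, image size, and six sag-source classes are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision import models

def phase_space_image(signal, tau=32, size=224):
    """Map a 1-D waveform to a 2-D trajectory image via time-delay embedding."""
    x, y = signal[:-tau], signal[tau:]                    # delayed coordinate pairs (x(t), x(t+tau))
    ix = ((x - x.min()) / (np.ptp(x) + 1e-9) * (size - 1)).astype(int)
    iy = ((y - y.min()) / (np.ptp(y) + 1e-9) * (size - 1)).astype(int)
    img = np.zeros((size, size), dtype=np.float32)
    img[iy, ix] = 1.0                                     # draw the reconstructed orbit
    return img

# ImageNet-pretrained VGG-16; only the classifier head is retrained here,
# the usual transfer-learning shortcut (the paper's attention module is omitted).
num_classes = 6                                           # assumed number of sag-source classes
model = models.vgg16(weights="DEFAULT")
for p in model.features.parameters():
    p.requires_grad = False                               # freeze convolutional features
model.classifier[6] = nn.Linear(4096, num_classes)
loss_fn = nn.CrossEntropyLoss()                           # the cross-entropy objective the paper minimizes

# Example: synthetic 50 Hz waveform with a 60% sag -> image -> class logits.
t = np.linspace(0, 0.2, 2048)
wave = np.sin(2 * np.pi * 50 * t) * np.where((t > 0.05) & (t < 0.15), 0.4, 1.0)
batch = torch.from_numpy(phase_space_image(wave)).expand(3, -1, -1).unsqueeze(0)
logits = model(batch)
```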


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Mohsen Sadeghi ◽  
Hannah R. Sheahan ◽  
James N. Ingram ◽  
Daniel M. Wolpert

2020 ◽  
Vol 10 (21) ◽  
pp. 7464
Author(s):  
Donghyun Kim ◽  
Eunhye Choi ◽  
Ho Gul Jeong ◽  
Joonho Chang ◽  
Sekyoung Youm

Temporomandibular joint osteoarthritis (TMJ OA) is a degenerative condition of the TMJ driven by a pathological tissue response of the joint under mechanical loading. It is characterized by progressive destruction of the internal surfaces of the joint, which can result in debilitating pain and joint noise. Panoramic imaging, combined with a thorough clinical examination, can serve as a basic screening tool for diagnosing TMJ OA. This paper proposes an algorithm that extracts the condylar region and determines its abnormality using convolutional neural networks (CNNs) and Faster region-based CNNs (R-CNNs). Panoramic images are collected retrospectively, and 1000 images are classified into three categories (normal, abnormal, and unreadable) by a dentist or orofacial pain specialist. Each image is labeled with whether the condyle is detected and with its location, making the panoramic images more clearly recognizable. The uneven proportion of normal to abnormal data is adjusted by duplicating and rotating images. A Faster R-CNN model is used for condyle detection and a Visual Geometry Group-16 (VGG16) model for condyle discrimination. To prevent overfitting, the images are rotated by ±10° and shifted by 10%. The average precision of condyle detection using the Faster R-CNN at an intersection over union (IoU) > 0.5 is 99.4% (right side) and 100% (left side). The sensitivity, specificity, and accuracy of the TMJ OA classification algorithm using a CNN are 0.54, 0.94, and 0.84, respectively. The findings demonstrate that classifying panoramic images through CNNs is possible, and artificial intelligence is expected to be applied more actively to the analysis of panoramic X-ray images in the future.
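The detect-then-classify design can be sketched as follows. This is not the paper's code: torchvision's COCO-pretrained Faster R-CNN (ResNet-50 FPN backbone) stands in for the condyle detector, a two-class VGG16 head for the normal/abnormal discriminator, and both would need fine-tuning on labeled panoramic images before the outputs mean anything. The score threshold and label convention are assumptions; the ±10° / 10% augmentation from the abstract is expressed as one transform.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# The paper's ±10° rotation / 10% shift augmentation, as a single transform.
augment = transforms.RandomAffine(degrees=10, translate=(0.1, 0.1))

detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()   # stand-in condyle detector
classifier = models.vgg16(weights="DEFAULT")
classifier.classifier[6] = nn.Linear(4096, 2)                  # normal vs. abnormal condyle
classifier.eval()

def classify_condyles(panoramic, score_thresh=0.5):
    """panoramic: float tensor (3, H, W) scaled to [0, 1]."""
    results = []
    with torch.no_grad():
        det = detector([panoramic])[0]                         # boxes, labels, scores
        for box, score in zip(det["boxes"], det["scores"]):
            if score < score_thresh:
                continue                                       # drop low-confidence detections
            x1, y1, x2, y2 = box.int().tolist()
            crop = panoramic[:, y1:y2, x1:x2].unsqueeze(0)     # cut out the detected condyle
            crop = nn.functional.interpolate(crop, size=(224, 224),
                                             mode="bilinear", align_corners=False)
            label = classifier(crop).argmax(dim=1).item()      # 0 = normal, 1 = abnormal (assumed)
            results.append(((x1, y1, x2, y2), label))
    return results
```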


1995 ◽  
Vol 79 (485) ◽  
pp. 420
Author(s):  
D. R. J. Chillingworth ◽  
Anatolij Fomenko

2021 ◽  
Vol 16 (1) ◽  
Author(s):  
Dian Hong ◽  
Ying-Yi Zheng ◽  
Ying Xin ◽  
Ling Sun ◽  
Hang Yang ◽  
...  

Abstract

Background: Many genetic syndromes (GSs) present with distinct facial dysmorphism, and facial gestalts can be used as a diagnostic tool for recognizing a syndrome. Facial recognition technology has advanced in recent years, making the screening of GSs by facial recognition feasible. This study constructed an automatic facial recognition model for identifying children with GSs.

Results: A total of 456 frontal facial photos were collected from 228 children with GSs and 228 healthy children at Guangdong Provincial People's Hospital from June 2016 to January 2021, with one frontal facial image selected per participant. The VGG-16 network (named after the Visual Geometry Group at Oxford University, which proposed it) was pretrained by transfer-learning methods, and a facial recognition model based on the VGG-16 architecture was constructed. Its performance was evaluated by five-fold cross-validation, and the model was also compared with five physicians. The VGG-16 model achieved an accuracy of 0.8860 ± 0.0211, specificity of 0.9124 ± 0.0308, recall of 0.8597 ± 0.0190, F1-score of 0.8829 ± 0.0215, and an area under the receiver operating characteristic curve of 0.9443 ± 0.0276 (95% confidence interval: 0.9210–0.9620) for GS screening, significantly higher than the human experts.

Conclusions: This study highlights the feasibility of facial recognition technology for GS identification. The VGG-16 recognition model can play a prominent role in GS screening in clinical practice.
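The evaluation protocol (five-fold cross-validation of a fine-tuned VGG-16 on 228 cases and 228 controls, reporting mean ± sd) can be sketched as below. This is a scaffold only: train_and_score is a hypothetical placeholder returning stand-in probabilities, and the real study's data loading, fine-tuning schedule, and thresholds are not reproduced.

```python
import numpy as np
import torch.nn as nn
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, roc_auc_score
from torchvision import models

def build_model():
    m = models.vgg16(weights="DEFAULT")        # ImageNet weights = the transfer-learning start point
    m.classifier[6] = nn.Linear(4096, 2)       # GS vs. healthy head
    return m

def train_and_score(model, train_idx, test_idx):
    """Hypothetical placeholder: fine-tune on one fold, score the held-out fold."""
    rng = np.random.default_rng(seed=len(test_idx))
    return rng.random(len(test_idx))           # stand-in GS-class probabilities

X = np.arange(456)                             # indices of the 456 frontal facial photos
y = np.array([1] * 228 + [0] * 228)            # 228 GS cases, 228 healthy controls

accs, aucs = [], []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    probs = train_and_score(build_model(), train_idx, test_idx)
    accs.append(accuracy_score(y[test_idx], probs > 0.5))
    aucs.append(roc_auc_score(y[test_idx], probs))
print(f"accuracy {np.mean(accs):.4f} ± {np.std(accs):.4f}, "
      f"AUC {np.mean(aucs):.4f} ± {np.std(aucs):.4f}")   # mirrors the paper's mean ± sd reporting
```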


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-11 ◽  
Author(s):  
Muhammad Mateen ◽  
Junhao Wen ◽  
Nasrullah Nasrullah ◽  
Song Sun ◽  
Shaukat Hayat

In the field of ophthalmology, diabetic retinopathy (DR) is a major cause of blindness. DR manifests as retinal lesions, including exudates. Exudates are among the early signs and serious anomalies of DR, so these lesions should be detected and treated promptly to prevent loss of vision. In this paper, a pretrained convolutional neural network (CNN)-based framework is proposed for exudate detection. Deep CNNs have commonly been applied individually to specific problems, but pretrained CNN models with transfer learning can reuse previously learned knowledge to solve related problems. In the proposed approach, data preprocessing is first performed to standardize the exudate patches. Region-of-interest (ROI) localization is then used to localize exudate features, and transfer learning is performed for feature extraction using pretrained CNN models (Inception-v3, Residual Network-50, and Visual Geometry Group Network-19). The fused features from the fully connected (FC) layers are fed into a softmax classifier for exudate classification. The performance of the proposed framework is analyzed on two well-known publicly available databases, e-Ophtha and DIARETDB1. The experimental results demonstrate that the proposed pretrained CNN-based framework outperforms existing techniques for the detection of exudates.
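The feature-fusion step is the core of this design, and a minimal sketch follows. Assumptions throughout: the 4096-d penultimate FC layer of VGG-19 and the 2048-d pooled features of ResNet-50 and Inception-v3 stand in for the paper's exact layer choices, and the two-class head (exudate vs. background patch) is untrained for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load the three ImageNet-pretrained backbones named in the abstract.
vgg = models.vgg19(weights="DEFAULT").eval()
res = models.resnet50(weights="DEFAULT").eval()
inc = models.inception_v3(weights="DEFAULT").eval()

# Expose deep features instead of the 1000-way ImageNet logits.
vgg.classifier = nn.Sequential(*list(vgg.classifier.children())[:-1])  # 4096-d FC features
res.fc = nn.Identity()                                                 # 2048-d pooled features
inc.fc = nn.Identity()                                                 # 2048-d pooled features

fusion_head = nn.Linear(4096 + 2048 + 2048, 2)   # exudate vs. background patch (assumed classes)

def classify_patch(patch224, patch299):
    """patch224: (N,3,224,224) for VGG/ResNet; patch299: (N,3,299,299) for Inception-v3."""
    with torch.no_grad():
        # Concatenate the three feature vectors into one fused descriptor.
        fused = torch.cat([vgg(patch224), res(patch224), inc(patch299)], dim=1)
    return torch.softmax(fusion_head(fused), dim=1)

probs = classify_patch(torch.rand(1, 3, 224, 224), torch.rand(1, 3, 299, 299))
```

Fusing complementary descriptors from heterogeneous backbones is what lets the softmax head see more evidence per patch than any single pretrained network provides.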


Author(s):  
Huanhuan Ran ◽  
Shiping Wen ◽  
Qian Li ◽  
Yuting Cao ◽  
Kaibo Shi ◽  
...  
