Behind the Scene of “The Holy Family with St. Anne and the Young St. John” by Bernardino Luini: A Computer-Assisted Method to Unveil the Underdrawings

2020 ◽  
pp. 000370282094992
Author(s):  
Michele Caccia ◽  
Letizia Bonizzoni ◽  
Marco Martini ◽  
Raffaella Fontana ◽  
Valeria Villa ◽  
...  

Uncovering the underdrawings (UDs), the preliminary sketch made by the painter on the grounded preparatory support, is a keystone for understanding a painting's history, including the artist's original project, the pentimenti (underlying images that provide evidence of revision by the artist), and the possible presence of co-workers' contributions. The application of infrared reflectography (IRR) has made the dream of discovering the UDs come true: since its introduction, there has been growing interest in the technology, which has therefore evolved, leading to advanced instruments. Most of the literature either reports on technological advances in IRR devices or presents case studies, but a straightforward method to improve the visibility of the UDs has not yet been presented. Most data handling methods are devoted to a specific painting, or they are not user-friendly enough to be applied by non-specialized users, thus hampering their widespread application in areas other than the scientific one, e.g., the art history field. We developed a computer-assisted method, based on principal component analysis (PCA) and image processing, to enhance the visibility of UDs and to support the work of art historians and curators. Built on ImageJ/Fiji, one of the most widely used image analysis software packages, the algorithm is very easy to use and, in principle, can be applied to any multi- or hyperspectral image data set. In the present paper, after describing the method, we present in detail the extraction of the UD for the panel “The Holy Family with St. Anne and the Young St. John” and for four other paintings by Luini and his workshop, paying particular attention to the painting known as “The Child with the Lamb”.
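The PCA step at the core of such a method can be sketched in a few lines. The function below is a generic illustration (not the authors' ImageJ/Fiji implementation), assuming a stack of co-registered spectral band images; in IRR data sets, later principal components often isolate underdrawing contrast hidden in the visible bands.

```python
import numpy as np

def pca_bands(stack):
    """Project a (bands, H, W) multispectral stack onto its principal
    components; each output slice is one component image."""
    b, h, w = stack.shape
    X = stack.reshape(b, -1).astype(float)  # one row per spectral band
    X -= X.mean(axis=1, keepdims=True)      # center each band
    cov = np.cov(X)                         # (bands, bands) covariance
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues, ascending
    order = np.argsort(vals)[::-1]          # sort by descending variance
    comps = vecs[:, order].T @ X            # component images
    return comps.reshape(b, h, w)

# Tiny synthetic example: 3 bands of a 4x4 image
rng = np.random.default_rng(0)
stack = rng.random((3, 4, 4))
pcs = pca_bands(stack)
print(pcs.shape)  # (3, 4, 4)
```

An art historian would then inspect each component image for pen- or brush-stroke features invisible in the individual bands.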

PeerJ ◽  
2016 ◽  
Vol 4 ◽  
pp. e2211 ◽  
Author(s):  
Rui Alves ◽  
Marc Piñol ◽  
Jordi Vilaplana ◽  
Ivan Teixidó ◽  
Joaquim Cruz ◽  
...  

Introduction. Most documented rare diseases have a genetic origin. Because of their low individual frequency, an initial diagnosis based on phenotypic symptoms is not always easy, as practitioners may never have been exposed to patients suffering from the relevant disease. It is thus important to develop tools that facilitate symptom-based initial diagnosis of rare diseases by clinicians. In this work we aimed to develop a computational approach to aid in that initial diagnosis, to implement it in a user-friendly web prototype, which we call Rare Disease Discovery, and to test the prototype's performance. Methods. Rare Disease Discovery uses the publicly available ORPHANET data set of associations between rare diseases and their symptoms to automatically predict the most likely rare diseases based on a patient's symptoms. We applied the method to retrospectively diagnose a cohort of 187 rare disease patients with confirmed diagnoses. Subsequently, we tested the precision, sensitivity, and global performance of the system under different scenarios by running large-scale Monte Carlo simulations. All settings account for situations where absent and/or unrelated symptoms are considered in the diagnosis. Results. We find that this expert system has high diagnostic precision (≥80%) and sensitivity (≥99%), and is robust to both absent and unrelated symptoms. Discussion. The Rare Disease Discovery prediction engine appears to provide a fast and robust method for initial assisted differential diagnosis of rare diseases. We coupled this engine with a user-friendly web interface; it can be freely accessed at http://disease-discovery.udl.cat/. The code and the most current database for the whole project can be downloaded from https://github.com/Wrrzag/DiseaseDiscovery/tree/no_classifiers.
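The general idea of ranking diseases by symptom overlap can be illustrated with a minimal sketch. The data and scoring rule below are hypothetical, not the actual Rare Disease Discovery engine or the ORPHANET schema:

```python
# Hypothetical disease-symptom associations (illustration only)
disease_symptoms = {
    "disease_A": {"fever", "rash", "joint_pain"},
    "disease_B": {"fever", "fatigue"},
    "disease_C": {"rash", "fatigue", "headache"},
}

def rank_diseases(patient_symptoms, catalog):
    """Score each disease by the fraction of its known symptoms
    that the patient presents, then rank by score."""
    scores = {}
    for disease, symptoms in catalog.items():
        scores[disease] = len(symptoms & patient_symptoms) / len(symptoms)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranking = rank_diseases({"fever", "rash"}, disease_symptoms)
print(ranking[0][0])  # disease_A scores highest (2 of 3 symptoms matched)
```

A production system would additionally weight symptoms by frequency and handle absent or unrelated symptoms, as the paper's Monte Carlo evaluation does.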


2019 ◽  
Vol 5 (1) ◽  
pp. 231-234 ◽  
Author(s):  
Thomas Wittenberg ◽  
Pascal Zobel ◽  
Magnus Rathke ◽  
Steffen Mühldorfer

Abstract Early detection of polyps is a central goal of colonoscopic screening programs. To support gastroenterologists during this examination process, deep convolutional neural networks can be applied for computer-assisted detection of neoplastic lesions. In this work, a Mask R-CNN architecture was applied. For training and testing, three independent colonoscopy data sets were used, including 2484 HD labelled images with polyps from our clinic, as well as two public image data sets from the MICCAI 2015 polyp detection challenge, consisting of 612 SD and 194 HD labelled images with polyps. After training the deep neural network, the best results for the three test data sets were recall = 0.92, precision = 0.86, F1 = 0.89 (data set A); recall = 0.86, precision = 0.80, F1 = 0.82 (data set B); and recall = 0.83, precision = 0.74, F1 = 0.79 (data set C).
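The reported metrics follow from the standard detection counts. The helper below shows the computation; the counts are illustrative only, not taken from the paper's data sets:

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from true positive, false positive,
    and false negative detection counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative counts only
p, r, f1 = detection_metrics(tp=86, fp=14, fn=8)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.86 0.91 0.89
```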


Author(s):  
Rudolf Oldenbourg

The recent renaissance of the light microscope is fueled in part by technological advances in components on the periphery of the microscope, such as the laser as an illumination source, electronic image recording (video), computer-assisted image analysis, and the biochemistry of fluorescent dyes for labeling specimens. After great progress in these peripheral parts, it seems timely to examine the optics itself and ask how progress in the periphery facilitates the use of new optical components and new optical designs inside the microscope. Some results of this fruitful reflection are presented in this symposium. We have considered the polarized light microscope and developed a design that replaces the traditional compensator, typically a birefringent crystal plate, with a precision universal compensator made of two liquid crystal variable retarders. A video camera and digital image processing system provide fast measurements of specimen anisotropy (retardance magnitude and azimuth) at all points of the image forming the field of view. The images document fine structural and molecular organization within a thin optical section of the specimen.


2015 ◽  
Vol 14 (4) ◽  
pp. 165-181 ◽  
Author(s):  
Sarah Dudenhöffer ◽  
Christian Dormann

Abstract. The purpose of this study was to replicate the dimensions of the customer-related social stressors (CSS) concept across service jobs, to investigate their consequences for service providers' well-being, and to examine emotional dissonance as a mediator. Data from 20 studies comprising different service jobs (N = 4,199) were integrated into a single data set and meta-analyzed. Confirmatory factor analyses and exploratory principal component analysis confirmed four CSS scales: disproportionate expectations, verbal aggression, ambiguous expectations, and disliked customers. These CSS scales were associated with burnout and job satisfaction. Most of the effects were partially mediated by emotional dissonance. Further analyses revealed that differences among jobs exist with regard to the factor solution. However, associations between CSS and outcomes are largely invariant across service jobs.


2019 ◽  
Vol 2019 (1) ◽  
pp. 360-368
Author(s):  
Mekides Assefa Abebe ◽  
Jon Yngve Hardeberg

Various whiteboard image degradations greatly reduce the legibility of pen-stroke content as well as the overall quality of the images. Consequently, researchers have addressed the problem through different image enhancement techniques. Most state-of-the-art approaches apply common image processing techniques such as background-foreground segmentation, text extraction, contrast and color enhancement, and white balancing. However, such conventional enhancement methods are incapable of recovering severely degraded pen-stroke content and produce artifacts in the presence of complex pen-stroke illustrations. To surmount these problems, the authors have proposed a deep learning based solution. They have contributed a new whiteboard image data set and adopted two deep convolutional neural network architectures for whiteboard image quality enhancement. Their evaluations of the trained models demonstrated superior performance over the conventional methods.
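To make the comparison concrete, a classical baseline of the kind the paper improves upon can be sketched as background estimation followed by division. This is a generic illustration (not the authors' method or data), assuming a grayscale image normalized to [0, 1]:

```python
import numpy as np

def enhance_whiteboard(gray):
    """Classical baseline: estimate the bright whiteboard background
    with a 3x3 local maximum filter, then divide it out so pen strokes
    stand out on a uniform white field."""
    h, w = gray.shape
    pad = np.pad(gray, 1, mode="edge")
    # Local maximum approximates the stroke-free background brightness
    background = np.max(
        [pad[i:i + h, j:j + w] for i in range(3) for j in range(3)], axis=0
    )
    return np.clip(gray / np.maximum(background, 1e-6), 0.0, 1.0)

board = np.full((8, 8), 0.8)   # grey, dimly lit board
board[4, 2:6] = 0.2            # dark pen stroke
out = enhance_whiteboard(board)
print(out[0, 0], out[4, 3] < 0.5)  # background -> 1.0, stroke stays dark
```

Such division-based methods fail exactly where the abstract says: severely faded strokes are divided away along with the background, which motivates the learned approach.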


2018 ◽  
Author(s):  
Peter De Wolf ◽  
Zhuangqun Huang ◽  
Bede Pittenger

Abstract Methods are available to measure conductivity, charge, surface potential, carrier density, piezoelectric, and other electrical properties with nanometer-scale resolution. One of these methods, scanning microwave impedance microscopy (sMIM), has gained interest due to its capability to measure the full impedance (capacitive and resistive parts) with high sensitivity and high spatial resolution. This paper introduces a novel data-cube approach that combines sMIM imaging and sMIM point spectroscopy, producing an integrated and complete 3D data set. This approach replaces the subjective practice of guessing locations of interest (for single-point spectroscopy) with a big-data approach, resulting in higher-dimensional data that can be sliced along any axis or plane and is conducive to principal component analysis or other machine learning approaches to data reduction. The data-cube approach is also applicable to other AFM-based electrical characterization modes.
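The slicing and reduction described above can be sketched on a synthetic cube (illustrative values only, not sMIM measurements): a spectrum of readings at every (x, y) pixel, sliceable along any axis, with PCA applied across the spectral dimension.

```python
import numpy as np

# Synthetic data cube: n_spec spectroscopy points at every (x, y) pixel
rng = np.random.default_rng(1)
n_spec, h, w = 16, 32, 32
cube = rng.random((n_spec, h, w))

# Slice along any axis or plane:
image_at_step5 = cube[5]           # (h, w) image at one spectroscopy step
spectrum_at_px = cube[:, 10, 20]   # full spectrum at a single pixel

# PCA-style reduction: treat each pixel's spectrum as a sample
X = cube.reshape(n_spec, -1).T     # (pixels, n_spec)
X -= X.mean(axis=0)
_, s, vt = np.linalg.svd(X, full_matrices=False)
scores = X @ vt[:2].T              # first two component scores per pixel
print(image_at_step5.shape, spectrum_at_px.shape, scores.shape)
```

Reshaping the score columns back to (h, w) would give component maps that highlight regions with similar spectral behavior, replacing the guesswork of choosing single-point locations.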


2020 ◽  
Vol 33 (6) ◽  
pp. 838-844
Author(s):  
Jan-Helge Klingler ◽  
Ulrich Hubbe ◽  
Christoph Scholz ◽  
Florian Volz ◽  
Marc Hohenhaus ◽  
...  

OBJECTIVE Intraoperative 3D imaging and navigation are increasingly used for minimally invasive spine surgery. A novel, noninvasive patient tracker that is adhered as a mask on the skin for 3D navigation necessitates a larger intraoperative 3D image set for appropriate referencing. This enlarged 3D image data set can be acquired by a state-of-the-art 3D C-arm device equipped with a large flat-panel detector. However, the presumably higher radiation exposure to the patient has essentially not yet been investigated and is therefore the objective of this study. METHODS Patients were retrospectively included if a thoracolumbar 3D scan was performed intraoperatively between 2016 and 2019 using a 3D C-arm with a large 30 × 30–cm flat-panel detector (3D scan volume 4096 cm3) or a 3D C-arm with a smaller 20 × 20–cm flat-panel detector (3D scan volume 2097 cm3), and the dose area product was available for the 3D scan. Additionally, the fluoroscopy time and the number of fluoroscopic images per 3D scan, as well as the BMI of the patients, were recorded. RESULTS The authors compared 62 intraoperative thoracolumbar 3D scans using the 3D C-arm with a large flat-panel detector and 12 3D scans using the 3D C-arm with a small flat-panel detector. Overall, the 3D C-arm with a large flat-panel detector required more fluoroscopic images per scan (mean 389.0 ± 8.4 vs 117.0 ± 4.6, p < 0.0001), leading to a significantly higher dose area product (mean 1028.6 ± 767.9 vs 457.1 ± 118.9 cGy × cm2, p = 0.0044). CONCLUSIONS The novel, noninvasive patient tracker mask facilitates intraoperative 3D navigation while eliminating the need for an additional skin incision with detachment of the autochthonous muscles. However, the use of this patient tracker mask requires a larger intraoperative 3D image data set for accurate registration, resulting in a 2.25 times higher radiation exposure to the patient. The use of the patient tracker mask should thus be based on an individual decision, especially taking into consideration the radiation exposure and the extent of instrumentation.


2020 ◽  
Vol 16 (8) ◽  
pp. 1088-1105
Author(s):  
Nafiseh Vahedi ◽  
Majid Mohammadhosseini ◽  
Mehdi Nekoei

Background: Poly(ADP-ribose) polymerases (PARPs) are a nuclear enzyme superfamily present in eukaryotes. Methods: In the present report, some efficient linear and non-linear methods, including multiple linear regression (MLR), support vector machine (SVM), and artificial neural networks (ANN), were successfully used to develop and establish quantitative structure-activity relationship (QSAR) models capable of predicting pEC50 values of tetrahydropyridopyridazinone derivatives as effective PARP inhibitors. Principal component analysis (PCA) was used for a rational division of the whole data set into training and test sets. A genetic algorithm (GA) variable selection method was employed to select, from the large pool of calculated descriptors, the optimal subset of descriptors with the most significant contributions to the overall inhibitory activity. Results: The accuracy and predictability of the proposed models were further confirmed using cross-validation, validation through an external test set, and Y-randomization (chance correlation) approaches. Moreover, an exhaustive statistical comparison was performed on the outputs of the proposed models. The results revealed that the non-linear modeling approaches, SVM and ANN, provide much greater prediction capability. Conclusion: Among the constructed models, in terms of root mean square error of prediction (RMSEP), cross-validation coefficients (Q2 LOO and Q2 LGO), as well as R2 and the F statistic for the training set, the predictive power of the GA-SVM approach was better. However, compared with MLR and SVM, the statistical parameters for the test set were better for the GA-ANN model.
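The simplest of the compared models, MLR with an external-test RMSEP, can be sketched as follows. The descriptor matrix and activities below are synthetic stand-ins, not the paper's tetrahydropyridopyridazinone data or its GA-selected descriptors:

```python
import numpy as np

# Hypothetical descriptors (40 training / 10 test compounds, 5 descriptors)
rng = np.random.default_rng(2)
X_train, X_test = rng.random((40, 5)), rng.random((10, 5))
true_w = np.array([1.5, -2.0, 0.5, 3.0, -1.0])
y_train = X_train @ true_w + rng.normal(0, 0.05, 40)  # pEC50-like values
y_test = X_test @ true_w + rng.normal(0, 0.05, 10)

# Fit MLR by least squares (intercept column prepended)
A = np.c_[np.ones(len(X_train)), X_train]
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

# External-test RMSEP, one of the criteria used to compare models
pred = np.c_[np.ones(len(X_test)), X_test] @ coef
rmsep = np.sqrt(np.mean((y_test - pred) ** 2))
print(rmsep < 0.2)  # small residual noise -> low prediction error
```

The paper's SVM and ANN models replace the linear fit with non-linear estimators, which is why they capture the activity data better.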


2019 ◽  
Vol 11 (10) ◽  
pp. 1157 ◽  
Author(s):  
Jorge Fuentes-Pacheco ◽  
Juan Torres-Olivares ◽  
Edgar Roman-Rangel ◽  
Salvador Cervantes ◽  
Porfirio Juarez-Lopez ◽  
...  

Crop segmentation is an important task in Precision Agriculture, where the use of aerial robots with an on-board camera has contributed to the development of new solution alternatives. We address the problem of fig plant segmentation in top-view RGB (Red-Green-Blue) images of a crop grown under difficult open-field circumstances: complex lighting conditions and the non-ideal crop maintenance practices of local farmers. We present a Convolutional Neural Network (CNN) with an encoder-decoder architecture that classifies each pixel as crop or non-crop using only raw colour images as input. Our approach achieves a mean accuracy of 93.85% despite the complexity of the background and the highly variable visual appearance of the leaves. We make our CNN code available to the research community, as well as the aerial image data set and a hand-made ground truth segmentation with pixel precision, to facilitate comparison among different algorithms.
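The reported mean accuracy is a pixel-wise comparison against the ground truth masks. The sketch below shows the computation on tiny illustrative masks (not the paper's data):

```python
import numpy as np

def mean_pixel_accuracy(pred, truth):
    """Fraction of pixels whose crop/non-crop label matches ground truth."""
    return float(np.mean(pred == truth))

# Tiny illustrative masks (1 = crop, 0 = non-crop)
truth = np.array([[1, 1, 0, 0],
                  [1, 0, 0, 0]])
pred  = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 0]])
print(mean_pixel_accuracy(pred, truth))  # 7 of 8 pixels agree -> 0.875
```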


2017 ◽  
Vol 727 ◽  
pp. 447-449 ◽  
Author(s):  
Jun Dai ◽  
Hua Yan ◽  
Jian Jian Yang ◽  
Jun Jun Guo

To evaluate the aging behavior of high-density polyethylene (HDPE) in an artificial accelerated environment, principal component analysis (PCA) was used to establish a non-dimensional expression Z from a data set of multiple degradation parameters of HDPE. In this study, HDPE samples were exposed to an accelerated thermal-oxidative environment for time intervals of up to 64 days. The results showed that the combined evaluation parameter Z was characterized by three-stage changes: Z increased quickly in the first 16 days of exposure, then leveled off, and after 40 days began to increase again. Among the 10 degradation parameters, the branching degree, carbonyl index, and hydroxyl index are strongly associated. The tensile modulus is highly correlated with the impact strength. The tensile strength, tensile modulus, and impact strength are negatively correlated with the crystallinity.
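Collapsing many degradation parameters into a single PCA-based index can be sketched as follows. The values are synthetic stand-ins for the HDPE measurements, and the first principal component of the standardized parameters plays the role of Z:

```python
import numpy as np

# 10 hypothetical degradation parameters sampled every 8 days (synthetic)
rng = np.random.default_rng(3)
days = np.arange(0, 64, 8)
trend = days / 64.0
params = trend[:, None] * rng.uniform(0.5, 2.0, 10) \
         + rng.normal(0, 0.02, (len(days), 10))

# Standardize each parameter, then project onto the first principal component
X = (params - params.mean(axis=0)) / params.std(axis=0)
_, _, vt = np.linalg.svd(X, full_matrices=False)
Z = X @ vt[0]
Z *= np.sign(np.corrcoef(Z, days)[0, 1])  # orient Z to grow with exposure
print(Z[0] < Z[-1])  # combined index increases with aging time
```

With real measurements, plotting Z against exposure time would reveal the three-stage behavior the study reports.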

