Noise Doesn't Lie: Towards Universal Detection of Deep Inpainting

Author(s):  
Ang Li ◽  
Qiuhong Ke ◽  
Xingjun Ma ◽  
Haiqin Weng ◽  
Zhiyuan Zong ◽  
...  

Deep image inpainting aims to restore damaged or missing regions of an image with realistic content. While it has a wide range of applications, such as object removal and image recovery, deep inpainting can also be exploited for image forgery. A promising countermeasure against such forgeries is deep inpainting detection, which aims to locate the inpainted regions in an image. In this paper, we make the first attempt towards universal detection of deep inpainting, where the detection network generalizes well across different deep inpainting methods. To this end, we first propose a novel data generation approach that produces a universal training dataset by imitating the noise discrepancies that exist between real and inpainted image contents. We then design a Noise-Image Cross-fusion Network (NIX-Net) to effectively exploit the discriminative information contained in both the images and their noise patterns. We show empirically, on multiple benchmark datasets, that our approach outperforms existing detection methods by a large margin and generalizes well to unseen deep inpainting techniques. Our universal training dataset can also significantly boost the generalizability of existing detection methods.
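The noise-discrepancy cue behind this detector can be sketched in a few lines: real camera content carries sensor noise that a high-pass filter exposes, while synthesized (inpainted) content tends to be statistically smoother. The kernel, image sizes and noise levels below are illustrative assumptions, not the paper's actual data generation pipeline.

```python
import numpy as np

def noise_residual(img):
    """High-pass filter the image to expose its noise pattern.
    The 3x3 Laplacian-style kernel is a generic stand-in for the
    noise extraction step, not the paper's exact filter."""
    k = np.array([[-1.0,  2.0, -1.0],
                  [ 2.0, -4.0,  2.0],
                  [-1.0,  2.0, -1.0]]) / 4.0
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = np.sum(img[i - 1:i + 2, j - 1:j + 2] * k)
    return out

rng = np.random.default_rng(0)
real = 128.0 + 4.0 * rng.standard_normal((32, 32))  # camera content carries sensor noise
inpainted = np.full((32, 32), 128.0)                # synthesized content is overly smooth
e_real = noise_residual(real).var()                 # large residual energy
e_inpainted = noise_residual(inpainted).var()       # near-zero residual energy
```

A detector trained on such residual statistics keys on the noise gap rather than on the semantics of any one inpainting method, which is why the noise channel helps generalization.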

2014 ◽  
Vol 12 (06) ◽  
pp. 1442009 ◽  
Author(s):  
Broto Chakrabarty ◽  
Nita Parekh

Repetition of a structural motif within a protein is associated with a wide range of structural and functional roles. In most cases the repeating units are well conserved at the structural level, while at the sequence level they are mostly undetectable, suggesting the need for structure-based methods. Since most known methods require a training dataset, a de novo approach is desirable. Here, we propose an efficient graph-based approach for detecting structural repeats in proteins. In a protein structure represented as a graph, interactions between and within repeat units are well captured by the eigen spectra of the adjacency matrix of the graph. These conserved interactions give rise to similar connections and a unique profile of the principal eigen spectra for each repeating unit. The efficacy of the approach is shown on eight repeat families annotated in UniProt, comprising both solenoid and nonsolenoid repeats with varied secondary-structure architectures and repeat lengths. The performance of the approach is also tested on other known benchmark datasets and compared with that of two repeat-identification methods. For a known repeat type, the algorithm also identifies the type of repeat present in the protein. A web tool implementing the algorithm is available at http://bioinf.iiit.ac.in/PRIGSA/.
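The idea that conserved intra-repeat contacts leave matching signatures in the principal eigenvector can be illustrated on a toy graph; the adjacency matrix below is an assumed example, not one derived from a real protein structure.

```python
import numpy as np

# Toy contact graph of two identical 4-residue repeat units linked in
# sequence; the real method builds this graph from a protein structure.
unit = np.array([[0., 1., 1., 0.],
                 [1., 0., 1., 1.],
                 [1., 1., 0., 1.],
                 [0., 1., 1., 0.]])
A = np.zeros((8, 8))
A[:4, :4] = unit
A[4:, 4:] = unit
A[3, 4] = A[4, 3] = 1.0   # contact between consecutive repeat units

# Conserved intra-repeat interactions give each unit a similar profile
# in the eigenvector of the largest eigenvalue of the adjacency matrix.
vals, vecs = np.linalg.eigh(A)          # eigenvalues in ascending order
profile = np.abs(vecs[:, -1])           # principal (Perron) eigenvector
unit1, unit2 = profile[:4], profile[4:] # per-unit eigenvector profiles
```

Because the two units contribute identical connection patterns, their slices of the principal eigenvector carry the same set of component values, which is the repeating signature the method looks for.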


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ibtissame Khaoua ◽  
Guillaume Graciani ◽  
Andrey Kim ◽  
François Amblard

For a wide range of purposes, one faces the challenge of detecting light from extremely faint and spatially extended sources. In such cases, detector noise dominates over the photon noise of the source, and quantum detectors in photon-counting mode are generally the best option. Here, we combine a statistical model with an in-depth analysis of detector noises and calibration experiments, and we show that visible light can be detected with an electron-multiplying charge-coupled device (EM-CCD) at a signal-to-noise ratio (SNR) of 3 for fluxes below $30\,\text{photon}\,\text{s}^{-1}\,\text{cm}^{-2}$. For green photons, this corresponds to $12\,\text{aW}\,\text{cm}^{-2} \approx 9 \times 10^{-11}$ lux, i.e. 15 orders of magnitude less than typical daylight. The strong nonlinearity of the SNR with the sampling time leads to a dynamic range of detection of 4 orders of magnitude. To detect possibly varying light fluxes, we operate in conditions of maximal detectivity $\mathcal{D}$ rather than maximal SNR. Given the quantum efficiency $QE(\lambda)$ of the detector, we find $\mathcal{D} = 0.015\,\text{photon}^{-1}\,\text{s}^{1/2}\,\text{cm}$, and a non-negligible sensitivity to blackbody radiation for T > 50 °C. This work should help design highly sensitive luminescence detection methods and develop experiments to explore dynamic phenomena involving ultra-weak luminescence in biology, chemistry, and materials science.
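The square-root scaling of the SNR with sampling time, which underlies the quoted dynamic range, can be sketched with a simple Poisson counting model; the quantum efficiency, dark-count rate and detector area used here are assumed round numbers, not the calibrated EM-CCD values.

```python
import math

def snr(flux, t, qe=0.9, dark=0.002, area=1.0):
    """Shot-noise-limited SNR of an idealized photon counter with dark
    counts: flux in photon/s/cm^2, integration time t in s. qe, dark
    (counts per cm^2 per s) and area (cm^2) are assumed values."""
    signal = flux * qe * area * t                 # detected photons
    noise = math.sqrt(signal + dark * area * t)   # Poisson noise of signal + dark counts
    return signal / noise

# Both detected counts and dark counts grow linearly with t, so the SNR
# grows like sqrt(t): quadrupling the integration time doubles the SNR.
r = snr(30.0, 400.0) / snr(30.0, 100.0)
```

This nonlinearity is what lets one trade sampling time against flux: a 4-orders-of-magnitude change in integration time spans a 2-orders-of-magnitude change in detectable flux at fixed SNR.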


2021 ◽  
Vol 3 (9) ◽  
Author(s):  
Sadik Omairey ◽  
Nithin Jayasree ◽  
Mihalis Kazilas

The increasing use of fibre-reinforced polymer composite materials in a wide range of applications increases the use of similar and dissimilar joints. Traditional joining methods such as welding, mechanical fastening and riveting are challenging in composites due to their material properties, heterogeneous nature, and layup configuration. Adhesive bonding allows flexibility in materials selection and offers improved production efficiency from product design and manufacture to final assembly, enabling cost reduction. However, the performance of adhesively bonded composite structures cannot be fully verified by inspection and testing, owing to the unforeseen nature of defects and the manufacturing uncertainties present in this joining method. These uncertainties can manifest as kissing bonds, porosity and voids in the adhesive. As a result, the use of adhesively bonded joints is often constrained by conservative certification requirements, limiting the potential of composite materials in weight reduction, cost saving, and performance. There is a need to identify these uncertainties and understand their effects when designing adhesively bonded joints. This article reports and categorises these uncertainties, offering the reader a reliable and inclusive source for further research, such as the development of probabilistic reliability-based design optimisation, sensitivity analysis, defect detection methods and process development.


Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2499
Author(s):  
Michael Dillon ◽  
Maja A. Zaczek-Moczydlowska ◽  
Christine Edwards ◽  
Andrew D. Turner ◽  
Peter I. Miller ◽  
...  

In the past twenty years, marine biotoxin analysis in routine regulatory monitoring has advanced significantly in Europe (EU) and other regions, moving from the mouse bioassay (MBA) towards high-end analytical techniques such as high-performance liquid chromatography (HPLC) with tandem mass spectrometry (MS). Previously, acceptance of these advanced methods as replacements for the MBA was hindered by a lack of commercial certified analytical standards for method development and validation. This has now been addressed: the availability of a wide range of analytical standards from several companies in the EU, North America and Asia has enhanced the development and validation of methods to the required regulatory standards. However, the cost of the high-end analytical equipment, lengthy procedures and the need for qualified personnel to perform the analysis can still be a challenge for routine monitoring laboratories. In developing regions, aquaculture production is increasing, and inexpensive alternatives remain an objective for both regulators and industry: Sensitive, Measurable, Accurate and Real-Time (SMART) rapid point-of-site testing (POST) methods suitable for novice end users that can be validated and internationally accepted. The range of commercial testing kits on the market for marine toxin analysis remains limited, and even more so the range meeting the requirements for use in regulatory control. Individual assays include enzyme-linked immunosorbent assays (ELISA) and lateral flow membrane-based immunoassays (LFIA) for EU-regulated toxins, such as okadaic acid (OA) and dinophysistoxins (DTXs), saxitoxin (STX) and its analogues, and domoic acid (DA), in the form of three separate tests offering varying costs and benefits for the industry.
The literature shows that not only are these assays being developed and improved, but novel assays are also being devised using up-and-coming state-of-the-art biosensor technology. This review covers both currently available methods and recent advances in innovative methods for marine biotoxin testing, together with the end-user practicalities that need to be observed. Furthermore, it highlights trends that are influencing assay development, such as multiplexing capabilities and rapid POST, indicating potential detection methods that will shape the future market.


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2144
Author(s):  
Stefan Reitmann ◽  
Lorenzo Neumann ◽  
Bernhard Jung

Common machine-learning (ML) approaches for scene classification require a large amount of training data. However, for the classification of depth-sensor data, in contrast to image data, relatively few databases are publicly available, and manual generation of semantically labeled 3D point clouds is an even more time-consuming task. To simplify the training-data generation process for a wide range of domains, we have developed the BLAINDER add-on package for the open-source 3D modeling software Blender, which enables largely automated generation of semantically annotated point-cloud data in virtual 3D environments. In this paper, we focus on the classical depth-sensing techniques Light Detection and Ranging (LiDAR) and Sound Navigation and Ranging (Sonar). Within the BLAINDER add-on, different depth sensors can be loaded from presets, customized sensors can be implemented, and different environmental conditions (e.g., the influence of rain or dust) can be simulated. The semantically labeled data can be exported to various 2D and 3D formats and are thus optimized for different ML applications and visualizations. In addition, semantically labeled images can be exported using the rendering functionalities of Blender.


Electronics ◽  
2020 ◽  
Vol 10 (1) ◽  
pp. 52
Author(s):  
Richard Evan Sutanto ◽  
Sukho Lee

Several recent studies have shown that artificial intelligence (AI) systems can malfunction due to intentionally manipulated data coming through normal channels. Such manipulated data are called adversarial examples, and they can pose a major threat to an AI-led society when an attacker uses them to attack an AI system, which is called an adversarial attack. Therefore, major IT companies such as Google are now studying ways to build AI systems which are robust against adversarial attacks by developing effective defense methods. However, one reason it is difficult to establish an effective defense is that it is hard to know in advance what kind of adversarial attack method the opponent is using. In this paper, we therefore propose a method to detect adversarial noise without knowledge of the kind of adversarial noise used by the attacker. To this end, we propose a blurring network that is trained only on normal images and also use it as the initial condition of the Deep Image Prior (DIP) network. This is in contrast to other neural-network-based detection methods, which require many adversarial noisy images to train the neural network. Experimental results indicate the validity of the proposed method.
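The underlying detection rule, flag an input whose predicted class changes after a blurring step, can be sketched on a toy linear classifier; the moving-average "blur", the weights and the perturbation below are illustrative stand-ins for the trained blurring network, the real model and a real adversarial example.

```python
import numpy as np

n = 64
w = np.array([(-1.0) ** i for i in range(n)])  # toy classifier keyed to high frequencies
b = 1.0

def classify(x):
    return int(x @ w + b > 0)

def blur(x):
    """Stand-in for the learned blurring network: a moving average that
    suppresses high-frequency, adversarial-like components."""
    out = x.copy()
    out[1:-1] = (x[:-2] + x[1:-1] + x[2:]) / 3.0
    return out

def is_adversarial(x):
    # Flag the input if blurring changes the predicted class.
    return classify(x) != classify(blur(x))

clean = np.zeros(n)                 # classified by the bias alone
adv = clean - (2.0 / n) * w         # tiny high-frequency push across the boundary
```

Here `adv` flips the classifier's decision, but the blur removes the high-frequency perturbation and the prediction flips back, exposing the attack without any knowledge of how it was crafted.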


2021 ◽  
Vol 11 (13) ◽  
pp. 6085
Author(s):  
Jesus Salido ◽  
Vanesa Lomas ◽  
Jesus Ruiz-Santaquiteria ◽  
Oscar Deniz

There is a great need to implement preventive mechanisms against shootings and terrorist acts in public spaces with a large influx of people. While surveillance cameras have become common, the need for monitoring 24/7 and real-time response requires automatic detection methods. This paper presents a study based on three convolutional neural network (CNN) models applied to the automatic detection of handguns in video surveillance images. It aims to investigate the reduction of false positives by including pose information associated with the way the handguns are held in the images belonging to the training dataset. The results highlighted the best average precision (96.36%) and recall (97.23%) obtained by RetinaNet fine-tuned with the unfrozen ResNet-50 backbone and the best precision (96.23%) and F1 score values (93.36%) obtained by YOLOv3 when it was trained on the dataset including pose information. This last architecture was the only one that showed a consistent improvement—around 2%—when pose information was expressly considered during training.


2017 ◽  
Vol 17 (4) ◽  
pp. 850-868 ◽  
Author(s):  
William Soo Lon Wah ◽  
Yung-Tsang Chen ◽  
Gethin Wyn Roberts ◽  
Ahmed Elamin

Analyzing changes in the vibration properties (e.g., natural frequencies) of structures as a result of damage has been used heavily by researchers for damage detection in civil structures. These changes, however, are caused not only by damage to the structural components; they are also affected by the varying environmental conditions the structures face, such as temperature change, which limits the use of the many damage detection methods in the literature that do not account for these effects. In this article, a damage detection method capable of distinguishing between the effects of damage and of changing environmental conditions on damage-sensitivity features is proposed. This method eliminates the need to form the baseline of the undamaged structure from damage-sensitivity features obtained over a wide range of environmental conditions, as is conventionally done, and instead uses features from two extreme and opposite environmental conditions as baselines. To allow near real-time monitoring, subsequent measurements are added one at a time to the baseline to create new data sets. Principal component analysis is then introduced to process each data set so that patterns can be extracted and damage can be distinguished from environmental effects. The proposed method is tested on a two-dimensional truss structure and validated on measurements from the Z24 Bridge, which was monitored for nearly a year with damage scenarios applied near the end of the monitoring period. The results demonstrate the robustness of the proposed method for damage detection under changing environmental conditions. The method also works despite the nonlinear effects of environmental conditions on damage-sensitivity features. Moreover, since each measurement can be analyzed one at a time, near real-time monitoring is possible. The method can also track damage progression, which makes it advantageous for monitoring damage evolution.
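The two-baselines-plus-PCA idea can be sketched on simulated natural-frequency data: the principal component learned from the two extreme conditions absorbs the environmental variation, so damage shows up as a residual outside that subspace. The frequencies, noise level and damage factors below are assumed numbers, not the Z24 measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated natural frequencies (Hz) at two extreme, opposite
# environmental conditions, used as the baselines.
cold = np.array([5.0, 12.0, 21.0])
hot = np.array([4.8, 11.5, 20.4])
baseline = np.vstack(
    [cold + 0.02 * rng.standard_normal(3) for _ in range(50)] +
    [hot + 0.02 * rng.standard_normal(3) for _ in range(50)])

# PCA via SVD: the first component tracks the temperature-driven shift.
mu = baseline.mean(axis=0)
_, _, vt = np.linalg.svd(baseline - mu, full_matrices=False)
env_axis = vt[:1]                        # 1-D environmental subspace

def novelty(x):
    """Residual of a measurement outside the environmental subspace."""
    d = x - mu
    return float(np.linalg.norm(d - (d @ env_axis.T) @ env_axis))

healthy = 0.5 * cold + 0.5 * hot                   # intermediate temperature
damaged = healthy * np.array([0.97, 0.95, 0.98])   # stiffness loss lowers frequencies
```

A healthy measurement at any intermediate temperature lies near the environmental axis and yields a small residual, while the damage-induced frequency drop does not fit that axis and produces a large novelty score.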


Author(s):  
Haidi Hasan Badr ◽  
Nayer Mahmoud Wanas ◽  
Magda Fayek

Since labeled-data availability differs greatly across domains, domain adaptation focuses on learning in new and unfamiliar domains by reducing distribution divergence. Recent research suggests that adversarial learning, a strategy for learning domain-transferable features in robust deep networks, could be a promising way to achieve the domain adaptation objective. This paper introduces the TSAL paradigm, a two-step adversarial learning framework. It addresses the real-world problem of text classification where the source domain(s) have labeled data but the target domain(s) have only unlabeled data. TSAL utilizes joint adversarial learning with class information and a domain-alignment deep network architecture to learn both domain-invariant and domain-specific feature extractors. It consists of two training steps, similar to the fine-tuning paradigm in which pre-trained model weights serve as the initialization for training on new data. TSAL's two training phases, however, are based on the same data, not different data as in fine-tuning. Furthermore, TSAL only uses the domain-invariant feature extractor learned in the first training step as the initialization for its peer in the subsequent step. By training twice, TSAL can better leverage the small unlabeled target domain and learn effectively what to share between the various domains. A detailed analysis on many benchmark datasets reveals that our model consistently outperforms the prior art across a wide range of dataset distributions.


Diagnostics ◽  
2021 ◽  
Vol 12 (1) ◽  
pp. 40
Author(s):  
Meike Nauta ◽  
Ricky Walsh ◽  
Adam Dubowski ◽  
Christin Seifert

Machine learning models have been successfully applied to the analysis of skin images. However, due to the black-box nature of such deep learning models, it is difficult to understand their underlying reasoning, which prevents a human from validating whether the model is right for the right reasons. Spurious correlations and other biases in the data can cause a model to base its predictions on such artefacts rather than on the truly relevant information. These learned shortcuts can in turn cause incorrect performance estimates and unexpected outcomes when the model is applied in clinical practice. This study presents a method to detect and quantify such shortcut learning in trained classifiers for skin cancer diagnosis, since dermoscopy images are known to contain artefacts. Specifically, we train a standard VGG16-based skin cancer classifier on the public ISIC dataset, in which colour calibration charts (elliptical, coloured patches) occur only in benign images and not in malignant ones. Our methodology artificially inserts those patches, and uses inpainting to automatically remove them, to assess the resulting changes in predictions. We find that our standard classifier partly bases its predictions of benign images on the presence of such a coloured patch. More importantly, by artificially inserting coloured patches into malignant images, we show that shortcut learning results in a significant increase in misdiagnoses, making the classifier unreliable when used in clinical practice. With these results, we therefore want to increase awareness of the risks of using black-box machine learning models trained on potentially biased datasets. Finally, we present a model-agnostic method to neutralise shortcut learning by removing the bias from the training dataset: coloured patches are exchanged for benign skin tissue using image inpainting, and the classifier is re-trained on this de-biased dataset.
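The probing step, insert the artefact and watch the prediction flip, can be sketched with a toy stand-in classifier; the patch-detection rule below merely simulates the learned shortcut, whereas the study probes the actual trained VGG16 model.

```python
import numpy as np

def shortcut_classifier(img):
    """Toy stand-in for the trained classifier: like the shortcut learner
    described above, it leans towards 'benign' whenever a bright
    calibration patch is present. Purely illustrative."""
    has_patch = (img > 0.9).mean() > 0.01
    return "benign" if has_patch else "malignant"

def insert_patch(img, value=1.0):
    """Artificially insert a coloured patch, as in the probing step."""
    out = img.copy()
    out[2:6, 2:6] = value
    return out

rng = np.random.default_rng(0)
malignant_img = 0.5 * rng.random((28, 28))           # lesion image without any patch
before = shortcut_classifier(malignant_img)          # correct diagnosis
after = shortcut_classifier(insert_patch(malignant_img))  # flipped by the artefact
```

A prediction that flips when only the artefact changes demonstrates that the model relies on the shortcut rather than on the lesion itself, which is exactly the misdiagnosis risk the study quantifies.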

