Detection accuracy for epithelial dysplasia using an objective autofluorescence visualization method based on the luminance ratio

2017 ◽  
Vol 9 (11) ◽  
pp. e2-e2 ◽  
Author(s):  
Nanami Yamamoto ◽  
Koji Kawaguchi ◽  
Hisako Fujihara ◽  
Mitsuhiko Hasebe ◽  
Yuta Kishi ◽  
...  

2020 ◽  
Vol 2020 ◽  
pp. 1-21
Author(s):  
Hua Zhang ◽  
Jiawei Qin ◽  
Boan Zhang ◽  
Hanbing Yan ◽  
Jing Guo ◽  
...  

The visual recognition of Android malicious applications (Apps) has mainly focused on binary classification using grayscale images, while the multiclass classification of malicious App families is rarely studied. Visualizing Android malicious Apps as color images yields more features than grayscale images. In this paper, a method of color visualization for Android Apps is proposed and implemented. On this basis, combined with deep learning models, a multiclass classifier for Android malicious App families is implemented that can distinguish 10 common malicious App families. To better understand the behavioral characteristics of malicious Apps, we conduct a comprehensive manual analysis of a large number of malicious Apps and summarize 1695 malicious behavior characteristics as customized features. Compared with an App classifier based on the grayscale visualization method, the classifier using the color visualization method is verified to achieve better classification results. We use four types of Android App features as input for App visualization: the classes.dex file, sets of class names, APIs, and customized features. According to the experimental results, we find that using the customized features as the color visualization input achieves the highest detection accuracy, 96%, across the ten malicious families.
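
A color-visualization step of this kind can be sketched as follows. This is a minimal illustration that packs the raw bytes of a feature file (e.g., classes.dex) three at a time into RGB pixels; the row width, padding, and file names are assumptions, not the authors' exact mapping.

```python
# Minimal sketch, not the authors' exact mapping: pack the raw bytes of a
# feature file (e.g., classes.dex) into RGB pixels to obtain a color image
# that a CNN classifier can consume. Width and padding are assumptions.
import numpy as np
from PIL import Image

def bytes_to_color_image(data: bytes, width: int = 256) -> Image.Image:
    buf = np.frombuffer(data, dtype=np.uint8)
    pad = (-len(buf)) % (3 * width)                     # pad to full RGB rows
    buf = np.concatenate([buf, np.zeros(pad, dtype=np.uint8)])
    pixels = buf.reshape(-1, width, 3)                  # height x width x RGB
    return Image.fromarray(pixels)                      # inferred as an RGB image

# Hypothetical usage: visualize one App's classes.dex as a color image.
with open("classes.dex", "rb") as f:
    bytes_to_color_image(f.read()).save("app_color.png")
```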


2006 ◽  
Vol 27 (4) ◽  
pp. 218-228 ◽  
Author(s):  
Paul Rodway ◽  
Karen Gillies ◽  
Astrid Schepman

This study examined whether individual differences in the vividness of visual imagery influenced performance on a novel long-term change detection task. Participants were presented with a sequence of pictures, with each picture and its title displayed for 17 s, and then presented with changed or unchanged versions of those pictures and asked to detect whether the picture had been changed. Cuing the retrieval of the picture's image, by presenting the picture's title before the arrival of the changed picture, facilitated change detection accuracy. This suggests that retrieving the picture's representation immunizes it against overwriting by the arrival of the changed picture. High- and low-vividness participants did not differ in overall levels of change detection accuracy. However, replicating Gur and Hilgard (1975), high-vividness participants were significantly more accurate at detecting salient changes to pictures than low-vividness participants. The results suggest that vivid images are not characterised by a high level of detail and that vivid imagery enhances memory for the salient aspects of a scene but not all of the details of a scene. Possible causes of this difference, and how they may lead to an understanding of individual differences in change detection, are considered.


Author(s):  
Gregor Volberg

Previous studies often revealed a right-hemisphere specialization for processing the global level of compound visual stimuli. Here we explore whether a similar specialization exists for the detection of intersected contours defined by a chain of local elements. Subjects were presented with arrays of randomly oriented Gabor patches that could contain a global path of collinearly arranged elements in the left or in the right visual hemifield. As expected, the detection accuracy was higher for contours presented to the left visual field/right hemisphere. This difference was absent in two control conditions where the smoothness of the contour was decreased. The results demonstrate that the contour detection, often considered to be driven by lateral coactivation in primary visual cortex, relies on higher-level visual representations that differ between the hemispheres. Furthermore, because contour and non-contour stimuli had the same spatial frequency spectra, the results challenge the view that the right-hemisphere advantage in global processing depends on a specialization for processing low spatial frequencies.
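
For readers unfamiliar with the stimuli, a single Gabor patch (the local element from which the contour and background arrays are built) can be generated as in the sketch below; the size, wavelength, and envelope parameters are illustrative assumptions, not the study's values.

```python
# Illustrative sketch, not the study's stimulus code: one Gabor patch, i.e. a
# sinusoidal grating at orientation theta windowed by a Gaussian envelope.
import numpy as np

def gabor_patch(size=64, wavelength=12.0, sigma=10.0, theta=0.0, phase=0.0):
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    xr = x * np.cos(theta) + y * np.sin(theta)           # rotated coordinate
    grating = np.cos(2 * np.pi * xr / wavelength + phase)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return grating * envelope                            # values roughly in [-1, 1]

patch = gabor_patch(theta=np.pi / 4)                     # one 45-degree oriented element
```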


2009 ◽  
Vol 3 (1) ◽  
pp. 1-8 ◽  
Author(s):  
Allyson Barnacz ◽  
Franco Amati ◽  
Christina Fenton ◽  
Amanda Johnson ◽  
Julian Paul Keenan

2018 ◽  
Author(s):  
Menghua Duan ◽  
Lin Chen ◽  
Yongchang Feng ◽  
Junnosuke Okajima ◽  
Atsuki Komiya

2020 ◽  
Vol 2020 (4) ◽  
pp. 76-1-76-7
Author(s):  
Swaroop Shankar Prasad ◽  
Ofer Hadar ◽  
Ilia Polian

Image steganography can have legitimate uses, for example, augmenting an image with a watermark for copyright reasons, but it can also be utilized for malicious purposes. We investigate the detection of malicious steganography using neural-network-based classification when images are transmitted through a noisy channel. Noise makes detection harder because the classifier must not only detect perturbations in the image but also decide whether they are due to malicious steganographic modifications or to natural noise. Our results show that reliable detection is possible even for state-of-the-art steganographic algorithms that insert stego bits without affecting an image’s visual quality. The detection accuracy is high (above 85%) if the payload, the amount of steganographic content in an image, exceeds a certain threshold. At the same time, noise critically affects the steganographic information being transmitted, both through desynchronization (loss of the information about which bits of the image carry steganographic content) and by flipping those bits themselves. This forces the adversary to use a redundant encoding with a substantial number of error-correction bits for reliable transmission, making detection feasible even for small payloads.
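
The detector described here is a neural-network classifier applied to images received over a noisy channel. A minimal PyTorch sketch of such a binary clean-versus-stego classifier follows; the architecture, layer sizes, and image dimensions are assumptions for illustration, not the authors' network.

```python
# Minimal sketch, not the authors' network: a small CNN that labels a noisy
# image as "clean" (0) or "stego" (1). Architecture and sizes are assumptions.
import torch
import torch.nn as nn

class StegoDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)               # logits: clean vs. stego

    def forward(self, x):                                # x: (batch, 1, H, W) grayscale
        return self.classifier(self.features(x).flatten(1))

model = StegoDetector()
noisy_batch = torch.randn(4, 1, 128, 128)                # stand-in for received images
logits = model(noisy_batch)                              # train with nn.CrossEntropyLoss
```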


2019 ◽  
Vol 11 (1) ◽  
Author(s):  
M.A. Boyarchuk ◽  
I.G. Zhurkin ◽  
V.B. Nepoklonov

2019 ◽  
Vol 31 (6) ◽  
pp. 844-850 ◽  
Author(s):  
Kevin T. Huang ◽  
Michael A. Silva ◽  
Alfred P. See ◽  
Kyle C. Wu ◽  
Troy Gallerani ◽  
...  

OBJECTIVE Recent advances in computer vision have revolutionized many aspects of society but have yet to find significant penetrance in neurosurgery. One proposed use for this technology is to aid in the identification of implanted spinal hardware. In revision operations, knowing the manufacturer and model of previously implanted fusion systems upfront can facilitate a faster and safer procedure, but this information is frequently unavailable or incomplete. The authors present one approach for the automated, high-accuracy classification of anterior cervical hardware fusion systems using computer vision. METHODS Patient records were searched for those who underwent anterior-posterior (AP) cervical radiography following anterior cervical discectomy and fusion (ACDF) at the authors’ institution over a 10-year period (2008–2018). These images were then cropped and windowed to include just the cervical plating system. Images were then labeled with the appropriate manufacturer and system according to the operative record. A computer vision classifier was then constructed using the bag-of-visual-words technique and KAZE feature detection. Accuracy and validity were tested using an 80%/20% training/testing pseudorandom split over 100 iterations. RESULTS A total of 321 images were isolated containing 9 different ACDF systems from 5 different companies. The correct system was identified as the top choice in 91.5% ± 3.8% of the cases and as one of the top 2 or top 3 choices in 97.1% ± 2.0% and 98.4% ± 1.3% of the cases, respectively. Performance persisted despite the inclusion of variable sizes of hardware (i.e., 1-level, 2-level, and 3-level plates). Stratification by the size of hardware did not improve performance. CONCLUSIONS A computer vision algorithm was trained to classify at least 9 different types of anterior cervical fusion systems using relatively sparse data sets and was demonstrated to perform with high accuracy. This represents one of many potential clinical applications of machine learning and computer vision in neurosurgical practice.
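
The abstract names the bag-of-visual-words technique with KAZE feature detection; a minimal OpenCV sketch of that pipeline appears below. The vocabulary size, the SVM classifier, and the file names are illustrative assumptions, not the authors' implementation details.

```python
# Minimal sketch of a bag-of-visual-words pipeline over KAZE descriptors, as
# named in the abstract. Vocabulary size, classifier, and file names are
# assumptions, not the authors' implementation.
import cv2
import numpy as np
from sklearn.svm import SVC                              # stand-in multiclass classifier

kaze = cv2.KAZE_create()

def kaze_descriptors(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)         # cropped AP radiograph
    return kaze.detectAndCompute(img, None)[1]

# Hypothetical training data: paths to cropped plate images and system labels.
train_paths, train_labels = ["plate_a.png", "plate_b.png"], [0, 1]

# 1. Build the visual vocabulary by k-means clustering of KAZE descriptors.
bow_trainer = cv2.BOWKMeansTrainer(200)                  # vocabulary size is assumed
for p in train_paths:
    bow_trainer.add(kaze_descriptors(p))
vocabulary = bow_trainer.cluster()

# 2. Encode each image as a histogram of visual words.
bow_extractor = cv2.BOWImgDescriptorExtractor(kaze, cv2.BFMatcher(cv2.NORM_L2))
bow_extractor.setVocabulary(vocabulary)

def bow_histogram(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return bow_extractor.compute(img, kaze.detect(img, None))[0]

# 3. Train a conventional multiclass classifier on the histograms.
X = np.array([bow_histogram(p) for p in train_paths])
clf = SVC().fit(X, train_labels)
```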


2019 ◽  
Vol 8 (3) ◽  
pp. 5926-5929

Blind forensic investigation of digital images is a new research direction in image security. It aims to discover altered image content without any embedded security scheme. Block-based and keypoint-based methods are the two main approaches to blind image forensic investigation. Both techniques perform well at revealing tampered images, but their success is limited by computational complexity and by reduced detection accuracy under various image distortions and geometric transformations. This article reviews blind image tampering detection methods and introduces a robust forensic investigation method that identifies copy-move tampering by means of a fuzzy logic approach. Empirical results show that the proposed scheme effectively classifies copy-move tampered images as well as blurred tampered images. The overall detection accuracy of this method is higher than that of existing methods.
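
As a concrete illustration of the block-based option mentioned above, the sketch below finds candidate duplicated regions by sorting compact DCT signatures of overlapping blocks; it is a generic block-matching step under assumed parameters, not the paper's fuzzy-logic scheme.

```python
# Generic block-based copy-move candidate search, not the paper's fuzzy-logic
# scheme: overlapping blocks are summarized by a few DCT coefficients, sorted
# lexicographically, and near-identical neighbors reported as candidate pairs.
import numpy as np
from scipy.fft import dctn

def copy_move_candidates(gray: np.ndarray, block: int = 8, keep: int = 16):
    h, w = gray.shape
    feats, pos = [], []
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            patch = gray[y:y + block, x:x + block].astype(np.float64)
            coef = dctn(patch, norm="ortho").flatten()
            feats.append(coef[:keep])                    # compact block signature
            pos.append((y, x))
    feats = np.array(feats)
    order = np.lexsort(feats.T[::-1])                    # lexicographic row sort
    pairs = []
    for a, b in zip(order[:-1], order[1:]):              # matching blocks end up adjacent
        if np.allclose(feats[a], feats[b], atol=1e-3):
            pairs.append((pos[a], pos[b]))
    return pairs
```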

