Image Detection System for Schmidt Plates

1984 ◽  
Vol 78 ◽  
pp. 169-171 ◽  
Author(s):  
Hideo Maehara ◽  
Tomohiko Yamagata

A 14-inch Schmidt plate contains about 10⁹ photographic grains and 10⁵ to 10⁶ images of stars and galaxies. Such a quantity of data is too large to be handled in a conventional way, even for a large computer. In general, there are two alternative methods to solve this problem: one is to store the data of all pixels on an intermediate medium (e.g., magnetic tape) and reduce them into image parameters afterwards; the other is to do all the processing simultaneously with the measurement. The latter is very useful for the automated detection of celestial images on large Schmidt plates.

Computers ◽  
2020 ◽  
Vol 9 (4) ◽  
pp. 103
Author(s):  
Jun Park ◽  
Youngho Cho

As the popularity of social network service (SNS) messengers (such as Telegram, WeChat, or KakaoTalk) grows rapidly, cyberattackers and cybercriminals have begun targeting them, and various media report numerous cyber incidents occurring on SNS messenger platforms. In particular, existing studies show that a novel type of botnet, the so-called steganography-based botnet (stego-botnet), can be constructed and implemented in SNS chat messengers. In a stego-botnet, various steganography techniques are used to secretly embed all command and control (C&C) messages into multimedia files (such as image or video files) frequently shared in the SNS messenger. As a result, a stego-botnet can hide its malicious messages between a bot master and bots much better than existing botnets, evading traditional botnet-detection methods that lack steganography-detection functions. Meanwhile, existing studies have focused on devising and improving steganography-detection algorithms, but no study has built an automated steganography-image-detection system, even though there are a large number of SNS chatrooms on the Internet, and thus many potential steganography images on those chatrooms that need to be inspected for security. Consequently, in this paper, we propose an automated system that detects steganographic image files by collecting and inspecting all image files shared in an SNS chatroom, based on open image steganography tools. In addition, we implement the proposed system with two open steganography tools (Stegano and Cryptosteganography) in the KakaoTalk SNS messenger and present experimental results validating that the proposed automated detection system works according to our design purposes.
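The detection strategy described above — attempting extraction with each open steganography tool and flagging any image from which hidden data can actually be recovered — can be sketched roughly as follows. This is an illustrative sketch, not the authors' code; `reveal_funcs` is a hypothetical registry whose entries could, for example, wrap the real Stegano library's `lsb.reveal` function.

```python
def detect_stego_images(image_paths, reveal_funcs):
    """Flag images from which any known steganography tool can extract
    hidden data. `reveal_funcs` maps a tool name to a callable that
    returns the hidden payload, or raises / returns None when nothing
    recognizable is embedded."""
    findings = {}
    for path in image_paths:
        for tool, reveal in reveal_funcs.items():
            try:
                payload = reveal(path)
            except Exception:
                payload = None  # extraction failed: not this tool's format
            if payload:
                findings.setdefault(path, {})[tool] = payload
    return findings
```

In the paper's setting, the registry would hold the extraction routines of the two open tools (Stegano and Cryptosteganography), and `image_paths` would be the image files collected from the monitored KakaoTalk chatroom.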


Author(s):  
S. W. Kwon ◽  
I. S. Song ◽  
S. W. Lee ◽  
J. S. Lee ◽  
J. H. Kim ◽  
...  

1998 ◽  
Vol 25 (2) ◽  
pp. 110-114
Author(s):  
P. D. Blankenship ◽  
J. W. White ◽  
M. C. Lamb

Abstract Some farmers mechanically screen farmer stock (FS) peanuts after combining to remove undesired materials for value and quality improvement. Screening is accomplished with low-capacity, portable screens at the field after combining or with high-capacity cleaners or screens at buying points. An alternative method for FS peanut screening has been developed cooperatively by Amadas Industries and the USDA-ARS National Peanut Research Laboratory, utilizing an experimental combine screening attachment. The attachment is a hydraulically driven, rotating cylindrical screen (trommel) with an axis inclined less than 10° from horizontal during operation. Peanuts are screened with the trommel prior to entering the combine basket, and smaller, unwanted materials are returned to the soil. Thirty-eight lots of FS peanuts averaging 3.27 t/lot were combined throughout all U.S. peanut-producing regions to examine performance. Foreign materials for the screened lots averaged 2.15% less than for the unscreened lots (P = 0.05). Hulls were 0.62% less in the screened lots (P = 0.05). None of the other grade factors or market values per hectare were significantly different for runner peanuts. Foreign materials for screened virginia peanuts were 2.44% less than in unscreened (P = 0.01). Loose shelled kernels were 0.44% higher (P = 0.05), hulls 0.67% lower (P = 0.10), and damage 0.56% higher in screened peanuts than in unscreened. None of the other grade factors or market values per hectare were significantly different for virginia peanuts. Although most grade factors and values per hectare were not significantly different for screened and unscreened peanuts tested, foreign materials were reduced significantly, providing needed quality improvement. Possible cleaning costs also could be reduced with the attachment.


2009 ◽  
Vol 69 (5) ◽  
pp. AB189 ◽  
Author(s):  
Jean-Christophe Saurin ◽  
Emmanuel Ben Soussan ◽  
Marianne Gaudric ◽  
Marie-George Lapalus ◽  
Franck Cholet ◽  
...  

Author(s):  
Mohamed Estai ◽  
Marc Tennant ◽  
Dieter Gebauer ◽  
Andrew Brostek ◽  
Janardhan Vignarajan ◽  
...  

Objective: This study aimed to evaluate an automated detection system to detect and classify permanent teeth on orthopantomogram (OPG) images using convolutional neural networks (CNNs). Methods: In total, 591 digital OPGs were collected from patients older than 18 years. Three qualified dentists performed individual tooth labelling on the images to generate the ground truth annotations. A three-step procedure, relying upon CNNs, was proposed for automated detection and classification of teeth. First, U-Net, a type of CNN, performed preliminary segmentation of tooth regions, detecting regions of interest (ROIs) on the panoramic images. Second, Faster R-CNN, an advanced object detection architecture, identified each tooth within the ROI determined by the U-Net. Third, a VGG-16 architecture classified each tooth into one of 32 categories, and a tooth number was assigned. A total of 17,135 teeth cropped from the 591 radiographs were used to train and validate the tooth detection and tooth numbering modules. 90% of the OPG images were used for training, and the remaining 10% were used for validation. Ten-fold cross-validation was performed to measure performance. The intersection over union (IoU), F1 score, precision, and recall (i.e., sensitivity) were used as metrics to evaluate the performance of the resultant CNNs. Results: The ROI detection module had an IoU of 0.70. The tooth detection module achieved a recall of 0.99 and a precision of 0.99. The tooth numbering module had a recall, precision, and F1 score of 0.98. Conclusion: The resultant automated method achieved high performance for automated tooth detection and numbering from OPG images. Deep learning can be helpful in the automatic filing of dental charts in general dentistry and forensic medicine.
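The IoU metric used to evaluate the ROI detection module is the standard intersection-over-union of bounding boxes. For axis-aligned boxes given as `(x1, y1, x2, y2)` it can be computed as follows; this is a generic sketch of the metric, not the authors' implementation:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Intersection rectangle (may be empty).
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    # Union = sum of areas minus the overlap counted twice.
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

An IoU of 0.70 for the ROI module thus means the predicted region and the ground-truth region overlap by 70% of their combined area on average.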


Sensors ◽  
2018 ◽  
Vol 18 (7) ◽  
pp. 2327 ◽  
Author(s):  
Jinsong Zhang ◽  
Wenjie Xing ◽  
Mengdao Xing ◽  
Guangcai Sun

In recent years, terahertz imaging systems and techniques have been developed and have gradually become a leading frontier field. With the advantages of low radiation and the ability to penetrate clothing, terahertz imaging technology has been widely used for detecting concealed weapons or other contraband carried on personnel at airports and other secure locations. This paper aims to detect these concealed items with a deep learning method, chosen for its strong detection performance and real-time detection speed. Based on an analysis of the characteristics of terahertz images, an effective detection system is proposed in this paper. First, a large set of terahertz images is collected and labeled in a standard data format. Second, the paper establishes a terahertz classification dataset and proposes a classification method based on transfer learning. Then, considering the particular intensity distribution of terahertz images, an improved faster region-based convolutional neural network (Faster R-CNN) method based on threshold segmentation is proposed for detecting the human body and other objects independently. Finally, experimental results demonstrate the effectiveness and efficiency of the proposed method for terahertz image detection.
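The threshold-segmentation step — isolating the bright human-body region from the low-intensity terahertz background before running the detector — might look roughly like the sketch below on a 2-D intensity grid. The function name and the fixed threshold are illustrative assumptions; the paper's actual preprocessing is more involved:

```python
def threshold_roi(image, thresh):
    """Return the bounding box (r1, c1, r2, c2) enclosing all pixels
    with intensity >= thresh, or None if no pixel reaches it.
    `image` is a list of rows of intensity values."""
    rows = [r for r, row in enumerate(image) if any(v >= thresh for v in row)]
    cols = [c for row in image for c, v in enumerate(row) if v >= thresh]
    if not rows:
        return None
    return (min(rows), min(cols), max(rows), max(cols))
```

The detector would then search for concealed objects only inside (or relative to) this body region, rather than over the whole frame.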


10.2196/27663 ◽  
2021 ◽  
Vol 8 (5) ◽  
pp. e27663
Author(s):  
Sandersan Onie ◽  
Xun Li ◽  
Morgan Liang ◽  
Arcot Sowmya ◽  
Mark Erik Larsen

Background Suicide is a recognized public health issue, with approximately 800,000 people dying by suicide each year. Among the different technologies used in suicide research, closed-circuit television (CCTV) and video have been used for a wide array of applications, including assessing crisis behaviors at metro stations and using computer vision to identify a suicide attempt in progress. However, there has been no review of suicide research and interventions using CCTV and video. Objective The objective of this study was to review the literature to understand how CCTV and video data have been used in understanding and preventing suicide. Furthermore, to more fully capture progress in the field, we report on an ongoing study that responds to a gap identified in the narrative review by using a computer vision–based system to identify behaviors prior to a suicide attempt. Methods We conducted a search using the keywords “suicide,” “cctv,” and “video” on PubMed, Inspec, and Web of Science. We included any study that used CCTV or video footage to understand or prevent suicide. If a study fell into our area of interest, we included it regardless of its quality, as our goal was to understand the scope of how CCTV and video have been used rather than to quantify any specific effect size; however, we noted shortcomings in design and analysis when discussing the studies. Results The review found that CCTV and video have primarily been used in 3 ways: (1) to identify risk factors for suicide (eg, inferring depression from facial expressions), (2) to understand suicide after an attempt (eg, forensic applications), and (3) as part of an intervention (eg, using computer vision and automated systems to identify whether a suicide attempt is in progress). Furthermore, work in progress demonstrates how we can identify behaviors prior to an attempt at a hotspot, an important gap identified by papers in the literature.
Conclusions Thus far, CCTV and video have been used in a wide array of applications, most notably in designing automated detection systems, with the field heading toward automated detection for early intervention. Despite many challenges, we show promising progress in developing an automated detection system for preattempt behaviors, which may allow for early intervention.


2020 ◽  
Vol 10 (7) ◽  
pp. 2511
Author(s):  
Young-Joo Han ◽  
Ha-Jin Yu

As defect detection using machine vision is diversifying and expanding, approaches using deep learning are increasing. Recently, there has been much research on detecting and classifying defects using image segmentation, image detection, and image classification. These methods are effective but require a large amount of actual defect data. However, it is very difficult to obtain a large amount of actual defect data in industrial settings. To overcome this problem, we propose a method for defect detection using stacked convolutional autoencoders. The proposed autoencoders are trained using only non-defect data and synthetic defect data generated from the characteristics of defects based on expert knowledge. A key advantage of our approach is that actual defect data are not required, and we verified that the performance is comparable to that of systems trained using real defect data.
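At inference time, an autoencoder-based defect detector typically scores an image by its reconstruction error: a model trained only on non-defect data reconstructs defects poorly, so a high error flags a defect. A minimal sketch of that decision rule follows; the `reconstruct` callable stands in for the trained stacked convolutional autoencoder, and the threshold is an assumed tuning parameter, not a value from the paper:

```python
def anomaly_score(image, reconstruct):
    """Mean absolute reconstruction error over flattened pixel values."""
    recon = reconstruct(image)
    return sum(abs(a - b) for a, b in zip(image, recon)) / len(image)

def is_defective(image, reconstruct, threshold):
    """Flag the image as defective when its reconstruction error is high."""
    return anomaly_score(image, reconstruct) > threshold
```

In practice, `reconstruct` is the forward pass of the trained autoencoder, and the threshold would be chosen on a validation set containing non-defect images and synthetic defect images.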

