Unsupervised Segmentation of Natural Images

2002 ◽  
Vol 9 (5) ◽  
pp. 197-201
Author(s):  
Xiao Yan Dai ◽  
Junji Maeda

2017 ◽  
Vol 252 ◽  
pp. 95-101 ◽  
Author(s):  
Zhong-jie Zhu ◽  
Yu-er Wang ◽  
Gang-yi Jiang


2008 ◽  
Vol 110 (2) ◽  
pp. 212-225 ◽  
Author(s):  
Allen Y. Yang ◽  
John Wright ◽  
Yi Ma ◽  
S. Shankar Sastry


Author(s):  
Yuki HAYAMI ◽  
Daiki TAKASU ◽  
Hisakazu AOYANAGI ◽  
Hiroaki TAKAMATSU ◽  
Yoshifumi SHIMODAIRA ◽  
...  


2021 ◽  
pp. 096372142199033
Author(s):  
Katherine R. Storrs ◽  
Roland W. Fleming

One of the deepest insights in neuroscience is that sensory encoding should take advantage of statistical regularities. Humans’ visual experience contains many redundancies: Scenes mostly stay the same from moment to moment, and nearby image locations usually have similar colors. A visual system that knows which regularities shape natural images can exploit them to encode scenes compactly or guess what will happen next. Although these principles have been appreciated for more than 60 years, until recently they could be converted into explicit models only for the earliest stages of visual processing. But recent advances in unsupervised deep learning have changed that. Neural networks can be taught to compress images or make predictions in space or time. In the process, they learn the statistical regularities that structure images, which in turn often reflect physical objects and processes in the outside world. The astonishing accomplishments of unsupervised deep learning reaffirm the importance of learning statistical regularities for sensory coding and provide a coherent framework for how knowledge of the outside world gets into visual cortex.
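The compression idea in the abstract can be sketched with a linear "autoencoder" (PCA) applied to synthetic, spatially redundant signals: because nearby locations are correlated, a few learned components capture most of the variance. All data, sizes, and dimensions below are illustrative assumptions, not from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D "scenes": smoothing white noise makes nearby locations
# similar, mimicking the spatial redundancy of natural images
# (hypothetical toy data, not the article's stimuli).
n_signals, length = 500, 32
noise = rng.standard_normal((n_signals, length))
kernel = np.exp(-0.5 * (np.arange(-4, 5) / 2.0) ** 2)
signals = np.apply_along_axis(
    lambda s: np.convolve(s, kernel, mode="same"), 1, noise)

# PCA acts as a linear autoencoder: project onto the top-k principal
# components (encode), then project back (decode).
centered = signals - signals.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
k = 8
codes = centered @ vt[:k].T   # compressed code: k numbers per signal
recon = codes @ vt[:k]        # reconstruction from the code

# Thanks to the built-in redundancy, 8 of 32 dimensions retain most
# of the variance.
explained = (s[:k] ** 2).sum() / (s ** 2).sum()
err = np.mean((recon - centered) ** 2) / np.mean(centered ** 2)
print(f"variance explained by {k}/{length} dims: {explained:.2f}")
```

A deep unsupervised network generalizes this linear sketch with nonlinear encoders and decoders, but the principle is the same: exploit regularities to encode compactly.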



2021 ◽  
Vol 13 (3) ◽  
pp. 1-19
Author(s):  
Sreelakshmy I. J. ◽  
Binsu C. Kovoor

Image inpainting is a technique in image editing where missing portions of an image are estimated and filled in using available or external information. In the proposed model, a novel hybrid inpainting algorithm is implemented that combines the benefits of a diffusion-based inpainting method with an enhanced exemplar algorithm. The structural part of the image is handled by a diffusion-based method, followed by an adaptive patch size–based exemplar inpainting. Due to its hybrid nature, the proposed model exceeds the output quality obtained by applying either conventional method individually. A new term, the coefficient of smoothness, is introduced and used to compute the adaptive patch size for the enhanced exemplar method. An automatic mask generation module relieves the user of the burden of creating an additional mask input. Quantitative and qualitative evaluations are performed on images from various datasets. The results confirm that the proposed model is faster on smooth images. Moreover, it produces good-quality results when inpainting natural images containing both texture and structure regions.
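The diffusion component of such a hybrid can be sketched as iterated neighbor averaging, i.e., solving a discrete Laplace equation over the hole. This is a generic minimal illustration, not the paper's algorithm: the enhanced exemplar stage and the coefficient of smoothness are not reproduced, and the mask convention (True = missing) is an assumption.

```python
import numpy as np

def diffusion_inpaint(image, mask, iters=500):
    """Fill masked pixels by iterated 4-neighbor averaging (heat diffusion).

    Sketch of diffusion-based inpainting only; `mask` is True where
    pixels are missing (assumed convention). Known pixels are never touched.
    """
    filled = image.astype(float).copy()
    filled[mask] = filled[~mask].mean()      # crude initialization of the hole
    for _ in range(iters):
        # 4-neighbor average via shifted copies (Jacobi iteration).
        up    = np.roll(filled,  1, axis=0)
        down  = np.roll(filled, -1, axis=0)
        left  = np.roll(filled,  1, axis=1)
        right = np.roll(filled, -1, axis=1)
        avg = (up + down + left + right) / 4.0
        filled[mask] = avg[mask]             # update only the missing region
    return filled

# Usage: a smooth horizontal gradient with an interior square hole.
h, w = 32, 32
image = np.tile(np.linspace(0.0, 1.0, w), (h, 1))
mask = np.zeros((h, w), dtype=bool)
mask[12:20, 12:20] = True
result = diffusion_inpaint(image, mask)
```

On smooth content like this gradient, pure diffusion already recovers the hole; the exemplar stage becomes necessary when the hole crosses textured regions, which is exactly the case the hybrid targets.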



Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4818
Author(s):  
Nils Mandischer ◽  
Tobias Huhn ◽  
Mathias Hüsing ◽  
Burkhard Corves

In the EU project SHAREWORK, methods are developed that allow humans and robots to collaborate in an industrial environment. One of the major contributions is a framework for task planning coupled with automated item detection and localization. In this work, we present the methods used for detecting and classifying items on the shop floor. Important in the context of SHAREWORK is the user-friendliness of the methodology. Thus, we forgo heavy learning-based methods in favor of unsupervised segmentation coupled with lightweight machine learning methods for classification. Our algorithm is a combination of established methods adjusted for fast and reliable item detection at ranges of up to eight meters. In this work, we present the full pipeline from calibration, through segmentation, to item classification in the industrial context. The pipeline is validated on a shop floor of 40 m² with up to nine different items and assemblies, reaching a mean accuracy of 84% at 0.85 Hz.
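The segmentation-then-lightweight-classification idea can be sketched with 4-connected component labeling on a binary occupancy grid followed by a nearest-prototype classifier on a single feature (object area). This is a generic illustration, not the SHAREWORK pipeline; the grid, item names, and prototype areas are all invented for the example.

```python
import numpy as np
from collections import deque

def segment(binary):
    """Label 4-connected foreground components (unsupervised segmentation).

    Generic flood-fill labeling; the actual pipeline's calibration and
    range handling are not reproduced here.
    """
    labels = np.zeros(binary.shape, dtype=int)
    count = 0
    for i, j in zip(*np.nonzero(binary)):
        if labels[i, j]:
            continue
        count += 1
        labels[i, j] = count
        queue = deque([(i, j)])
        while queue:
            y, x = queue.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                        and binary[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = count
                    queue.append((ny, nx))
    return labels, count

def classify_by_area(labels, n, prototypes):
    """Nearest-prototype classification on pixel area (a lightweight
    stand-in for the classifier; prototype areas are assumed values)."""
    out = {}
    for k in range(1, n + 1):
        area = int((labels == k).sum())
        out[k] = min(prototypes, key=lambda name: abs(prototypes[name] - area))
    return out

# Usage on a toy occupancy grid: two items of different size.
grid = np.zeros((20, 20), dtype=bool)
grid[2:5, 2:5] = True        # small item, area 9
grid[10:16, 10:16] = True    # large item, area 36
labels, n = segment(grid)
kinds = classify_by_area(labels, n, {"bolt_box": 10, "assembly": 40})
```

The appeal of this style of pipeline, as the abstract argues, is that it needs no training corpus: segmentation is unsupervised, and the classifier only needs a handful of prototype measurements per item.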


