Image Modelling
Recently Published Documents


TOTAL DOCUMENTS: 71 (FIVE YEARS: 19)

H-INDEX: 8 (FIVE YEARS: 2)

2022 ◽  
Vol 12 (1) ◽  
Author(s):  
Junyu Chen ◽  
Haiwei Li ◽  
Liyao Song ◽  
Geng Zhang ◽  
Bingliang Hu ◽  
...  

Abstract: Developing an efficient, high-quality remote sensing (RS) simulation technology capable of modelling large volumes of aircraft RS images efficiently is challenging. Generative models serve as a natural and convenient simulation method. Because aircraft types are fine-grained classes within a coarser class, feature entanglement may occur while modelling multiple aircraft classes. Our solution to this issue was a novel first-generation realistic aircraft type simulation system (ATSS-1) based on RS images. It realised fine-grained modelling of seven aircraft types based on a real scene by establishing an adaptive weighted conditional attention generative adversarial network and a joint geospatial embedding (GE) network. An adaptive weighted conditional batch normalisation attention block solved the subclass entanglement by reassigning the intra-class characteristic responses. Subsequently, an asymmetric residual self-attention module was developed that establishes asymmetric relationships between remote image regions to mine finer potential spatial representations. The mapping relationship between the input RS scene and the latent space of the generated samples was explored by constructing the GE network, which uses the selected prior distribution z as an intermediate representation. A public RS dataset (OPT-Aircraft_V1.0) and two public datasets (MNIST and Fashion-MNIST) were used for simulation model testing. The results demonstrated the effectiveness of ATSS-1, promoting further development of realistic automatic RS simulation.
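As a rough illustration of the class-conditioning idea described above, the sketch below shows a conditional batch-normalisation layer whose per-class scale and shift are blended by a learned channel-wise attention gate. This is not the authors' ATSS-1 implementation; the class name, layer sizes, and gating scheme are assumptions for illustration only.

```python
# Illustrative sketch (not the authors' code): conditional batch normalisation
# whose per-class scale/shift are reweighted by a learned attention gate,
# approximating the idea of reassigning intra-class characteristic responses.
import torch
import torch.nn as nn

class AdaptiveConditionalBatchNorm2d(nn.Module):  # hypothetical name
    def __init__(self, num_features: int, num_classes: int):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        self.gamma = nn.Embedding(num_classes, num_features)   # per-class scale
        self.beta = nn.Embedding(num_classes, num_features)    # per-class shift
        self.gate = nn.Sequential(                              # adaptive channel weights
            nn.Linear(num_features, num_features), nn.Sigmoid()
        )
        nn.init.ones_(self.gamma.weight)
        nn.init.zeros_(self.beta.weight)

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        h = self.bn(x)
        g, b = self.gamma(y), self.beta(y)                      # (N, C)
        w = self.gate(g)                                        # attention over channels
        g, b, w = (t.unsqueeze(-1).unsqueeze(-1) for t in (g, b, w))
        return w * (g * h + b) + (1.0 - w) * h                  # blend conditioned and plain responses

# usage: layer = AdaptiveConditionalBatchNorm2d(64, num_classes=7)
# out = layer(torch.randn(8, 64, 16, 16), torch.randint(0, 7, (8,)))
```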


Author(s):  
Rohan Bolusani

Abstract: Generating realistic images from text is an innovative and interesting problem, but modern machine learning models are still far from this goal. Thanks to research and development in natural language processing, neural network architectures have been developed that learn discriminative text feature representations. Meanwhile, in machine learning, generative adversarial networks (GANs) have begun to generate extremely realistic images, especially in categories such as faces, album covers, and room interiors. In this work, the main goal is to develop a neural network that bridges these advances in text and image modelling; by essentially translating characters into pixels, the project demonstrates the capability of generative models to take detailed text descriptions and generate plausible images. Keywords: Deep Learning, Computer Vision, NLP, Generative Adversarial Networks
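To make the text-to-image bridge concrete, the following minimal sketch shows a GAN generator conditioned on a text embedding by concatenating it with the noise vector before upsampling. It is not the architecture used in the work above; the class name, embedding dimension, and layer sizes are assumptions.

```python
# Illustrative sketch (not the paper's architecture): a text-conditioned GAN
# generator that fuses a noise vector with a text embedding before upsampling.
import torch
import torch.nn as nn

class TextConditionedGenerator(nn.Module):  # hypothetical name
    def __init__(self, noise_dim=100, text_dim=256, img_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(noise_dim + text_dim, 512, 4, 1, 0), nn.BatchNorm2d(512), nn.ReLU(True),
            nn.ConvTranspose2d(512, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, img_channels, 4, 2, 1), nn.Tanh(),   # 32x32 RGB output
        )

    def forward(self, z: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        # concatenate noise and text embedding, reshape to a 1x1 spatial map
        h = torch.cat([z, text_emb], dim=1).unsqueeze(-1).unsqueeze(-1)
        return self.net(h)

# usage: g = TextConditionedGenerator()
# img = g(torch.randn(4, 100), torch.randn(4, 256))   # -> (4, 3, 32, 32)
```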


Food Control ◽  
2021 ◽  
pp. 108316
Author(s):  
Hasitha Priyashantha ◽  
Annika Höjer ◽  
Karin Hallin Saedén ◽  
Åse Lundh ◽  
Monika Johansson ◽  
...  

2021 ◽  
Vol 15 (2-3) ◽  
pp. 2170024
Author(s):  
Juliane Hermann ◽  
Kai Brehmer ◽  
Vera Jankowski ◽  
Michaela Lellig ◽  
Mathias Hohl ◽  
...  

2021 ◽  
Vol 15 (1) ◽  
pp. 2170011
Author(s):  
Juliane Hermann ◽  
Kai Brehmer ◽  
Vera Jankowski ◽  
Michaela Lellig ◽  
Mathias Hohl ◽  
...  

2020 ◽  
pp. 1900143
Author(s):  
Juliane Hermann ◽  
Kai Brehmer ◽  
Vera Jankowski ◽  
Michaela Lellig ◽  
Mathias Hohl ◽  
...  

Author(s):  
Oktay Karakuş ◽  
Ercan E Kuruoglu ◽  
Alin Achim

This paper presents a novel statistical model, the Laplace-Rician distribution, for the characterisation of synthetic aperture radar (SAR) images. Since accurate statistical models lead to better results in applications such as target tracking, classification, and despeckling, characterising SAR images of various scenes, including urban, sea-surface, and agricultural scenes, is essential. The proposed Laplace-Rician model is investigated on SAR images from several frequency bands and various scenes, in comparison with state-of-the-art statistical models including the K, Weibull, and Lognormal distributions. The results demonstrate the superior performance and flexibility of the proposed model across all frequency bands and scenes.
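The kind of model comparison described in this abstract can be sketched as follows: fit several candidate amplitude distributions to SAR pixel data and rank them by a goodness-of-fit statistic. The sketch below uses only standard SciPy distributions (Weibull, Lognormal, Rayleigh); the paper's Laplace-Rician density would be evaluated in the same way once its pdf is implemented. The function name and synthetic data are assumptions for illustration.

```python
# Illustrative sketch of a model-comparison workflow (not the paper's code):
# fit candidate amplitude models to SAR pixel data and compare goodness of fit.
import numpy as np
from scipy import stats

def compare_amplitude_models(amplitudes: np.ndarray) -> dict:
    """Fit each candidate model and report its Kolmogorov-Smirnov statistic."""
    candidates = {
        "weibull": stats.weibull_min,
        "lognormal": stats.lognorm,
        "rayleigh": stats.rayleigh,   # classical fully-developed-speckle baseline
    }
    results = {}
    for name, dist in candidates.items():
        params = dist.fit(amplitudes, floc=0)                   # fix location at 0 for amplitude data
        ks_stat, _ = stats.kstest(amplitudes, dist.cdf, args=params)
        results[name] = ks_stat                                 # smaller = better fit
    return results

# usage with synthetic speckle-like data (placeholder for real SAR amplitudes):
if __name__ == "__main__":
    data = stats.rayleigh.rvs(scale=1.0, size=50_000, random_state=0)
    print(compare_amplitude_models(data))
```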

