image content
Recently Published Documents

Total documents: 545 (five years: 114)
H-index: 26 (five years: 4)

2022 · Vol 3
Author(s): Karolina Pakėnaitė, Petar Nedelev, Eirini Kamperou, Michael J. Proulx, Peter M. Hall

Millions of people with a visual impairment across the world are denied access to visual images. They are unable to enjoy the simple pleasures of viewing family photographs, pictures in textbooks or tourist brochures, or the pictorial embellishment of news stories. We propose a simple, inexpensive but effective approach to make content accessible via touch. We use state-of-the-art algorithms to automatically process an input photograph into a collage of icons that depict the most important semantic aspects of a scene. This collage is then printed onto swell paper. Our experiments show that people can recognise content with an accuracy exceeding 70% and create plausible narratives to explain it, which means that people can understand image content via touch. Communicating scene foreground is a step forward, but many other steps are needed to provide the visually impaired with the fullest possible access to visual content.


2021 · Vol 9 (17) · pp. 123-145
Author(s): Liz Watkins

Colorization describes the digitization and retrospective addition of color to photographic and film materials (celluloid nitrate, glass negatives) initially made and circulated in a black-and-white format. Revisiting the controversial 1980s colorization of 24 classic Hollywood studio titles, which incited debate over questions of copyright, authorship and artistic expression, this essay examines the use of colorization to interpret museum collections for new audiences. The aesthetics of colorization have been criticized for prioritizing image content over the history of film technologies, practices and exhibition. An examination of They Shall Not Grow Old (Jackson, 2018) finds a use of digital editing and coloring techniques in the colorization of First World War film footage held in the Imperial War Museum archives that is familiar from the director’s fiction films. Jackson’s film is a commemorative project, yet the “holistic unity” of authorial technique operates across fragments of archive film and photographs to imbricate fiction and nonfiction, signaling vital questions around the ethics and ideologies of “natural color”, historiography, and the authenticity of materials and spectator experience.


2021 · Vol 2021 (29) · pp. 288-293
Author(s): Alexandra Spote, Pierre-Jean Lapray, Jean-Baptiste Thomas, Ivar Farup

This article considers the joint demosaicing of colour and polarisation image content captured with a Colour and Polarisation Filter Array (CPFA) imaging system. The Linear Minimum Mean Square Error (LMMSE) algorithm is applied to this case, and its performance is compared to the state-of-the-art Edge-Aware Residual Interpolation algorithm. Results show that the LMMSE demosaicing method gives statistically higher scores on the largest tested database, in terms of peak signal-to-noise ratio, relative to a CPFA-dedicated algorithm.
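The abstract does not give the estimator itself, but the core of any LMMSE method is the same closed form: given a jointly distributed target x and observation y, the linear estimate x̂ = μx + Cxy·Cyy⁻¹·(y − μy) minimises mean square error among linear estimators. A minimal toy sketch (not the authors' demosaicing pipeline; the simulated signal model is an assumption for illustration):

```python
import numpy as np

# Toy illustration of the scalar LMMSE estimator:
#   x_hat = mu_x + (C_xy / C_yy) * (y - mu_y)
rng = np.random.default_rng(0)

# Simulate a target channel x and a degraded observation y (assumed model).
n = 10_000
x = rng.normal(size=n)
y = 0.8 * x + 0.2 * rng.normal(size=n)

mu_x, mu_y = x.mean(), y.mean()
c_xy = np.cov(x, y)[0, 1]   # cross-covariance of target and observation
c_yy = y.var()              # observation variance

x_hat = mu_x + (c_xy / c_yy) * (y - mu_y)

mse_lmmse = np.mean((x - x_hat) ** 2)
mse_naive = np.mean((x - y) ** 2)
print(mse_lmmse < mse_naive)  # True: LMMSE beats using y directly
```

In a demosaicing setting the same idea is applied per pixel neighbourhood, with the covariances learned from training images rather than simulated as here.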


2021
Author(s): Koen C. Kusters, Luis A. Zavala-Mondragon, Javier Olivan Bescos, Peter Rongen, Peter H.N. de With, ...

Author(s): Ximena Pocco, Jorge Poco, Matheus Viana, Rogerio de Paula, Luis G. Nonato, ...

2021 · Vol 2021 · pp. 1-12
Author(s): Zelin Deng, Qiran Zhu, Pei He, Dengyong Zhang, Yuansheng Luo

Using the convolutional neural network (CNN) method for image emotion recognition is a research hotspot of deep learning. Previous studies tend to use visual features obtained from a global perspective and ignore the role of local visual features in emotional arousal. Moreover, the CNN shallow feature maps contain image content information; using such maps directly to describe low-level visual features may introduce redundancy. In order to enhance image emotion recognition performance, an improved CNN is proposed in this work. Firstly, a saliency detection algorithm is used to locate the emotional region of the image, which serves as supplementary information for better emotion recognition. Secondly, a Gram matrix transform is performed on the CNN shallow feature maps to decrease the redundancy of image content information. Finally, a new loss function is designed using both hard labels and probability labels of the image emotion category to reduce the influence of the subjectivity of image emotion. Extensive experiments have been conducted on benchmark datasets, including FI (Flickr and Instagram), IAPSsubset, ArtPhoto, and Abstract. The experimental results show that the method compares favourably with existing approaches.
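The Gram matrix transform mentioned above has a standard form (familiar from style-transfer work): for a feature tensor of shape (C, H, W), flatten each channel map and take inner products between channels. The result summarises channel co-activation statistics while discarding spatial layout, which is one way to suppress image-content information in shallow features. A minimal sketch on random data (the shapes and normalisation are assumptions, not the paper's exact implementation):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (C, H, W) feature tensor, normalised by map size."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)   # each row: one flattened channel map
    return (f @ f.T) / (h * w)       # (C, C) channel co-activation matrix

rng = np.random.default_rng(0)
fmap = rng.normal(size=(8, 16, 16))  # toy shallow feature maps
g = gram_matrix(fmap)
print(g.shape)  # (8, 8)
```

Note that the output size depends only on the channel count C, not on the spatial resolution, so the transform also gives a fixed-size descriptor regardless of input image size.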


2021 · Vol 11 (15) · pp. 6752
Author(s): Changhui You, Hong Zheng, Zhongyuan Guo, Tianyu Wang, Xiongbin Wu

In recent years, source camera identification has become a research hotspot in the field of image forensics and has received increasing attention. It has high application value in combating the spread of pornographic photos, copyright authentication of art photos, image tampering forensics, and so on. Although existing algorithms have greatly advanced research on source camera identification, they still cannot effectively reduce the interference of image content with image forensics. To suppress the influence of image content on source camera identification, a multiscale content-independent feature fusion network (MCIFFN) is proposed. MCIFFN is composed of three parallel branch networks. Before an image is sent to the first two branch networks, an adaptive filtering module filters out the image content and extracts noise features, and these noise features are then sent to the corresponding convolutional neural networks (CNNs). In order to retain information related to image color, the third branch applies no preprocessing and sends the image data directly to its CNN. Finally, the content-independent features of different scales extracted by the three branch networks are fused, and the fused features are used for image source identification. The CNN feature extraction network in MCIFFN is a shallow network embedded with a squeeze-and-excitation (SE) structure, called SE-SCINet. The experimental results show that the proposed MCIFFN is effective and robust, and that its classification accuracy is approximately 2% higher than that of the SE-SCINet network alone.
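The abstract does not specify the adaptive filtering module, but a common stand-in in forensics pipelines is a fixed high-pass residual filter (an SRM-style kernel from steganalysis). Subtracting a local prediction of each pixel suppresses smooth image content and keeps the sensor/processing noise that camera-identification networks rely on. A hedged sketch of that idea (the kernel choice and `noise_residual` helper are illustrative assumptions, not the paper's module):

```python
import numpy as np

# A classic second-order SRM-style high-pass kernel: each output value is the
# difference between a pixel's neighbourhood prediction and the pixel itself.
KERNEL = np.array([[-1,  2, -1],
                   [ 2, -4,  2],
                   [-1,  2, -1]], dtype=np.float64) / 4.0

def noise_residual(img):
    """Valid-mode 2-D convolution of a grayscale image with KERNEL."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += KERNEL[i, j] * img[i:i + h - 2, j:j + w - 2]
    return out

rng = np.random.default_rng(0)
smooth = np.outer(np.arange(32.0), np.ones(32))  # pure content: a gradient
res = noise_residual(smooth)
print(np.allclose(res, 0))  # True: the filter annihilates smooth content
```

Because the kernel's rows and columns each sum to zero, any locally linear intensity ramp is mapped to zero, while high-frequency noise passes through, which is exactly the content-suppression behaviour the first two MCIFFN branches need.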

