Automatic Translation of Medical Information Based on Text Feature Recognition

Author(s):  
Suyang Zhang

To improve automatic translation of medical information, this paper builds an automatic translation system that can recognize medical images, based on text feature recognition technology for medical images. Building on medical image retrieval that combines textual and visual information, the paper applies automatic image annotation technology and semantic similarity calculation to extract the textual and semantic features of medical information images. It then uses the inherent multi-source information fusion capability of a Bayesian inference network to fuse the textual features of medical information images with the semantic features of image content, realizing medical information image retrieval. Finally, experiments are designed to test the performance of the automatic translation system; the results show that the system is effective.
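
The abstract gives no implementation details, but the fusion step it describes can be illustrated. The following minimal Python sketch fuses a text-match score and a visual-match score in naive-Bayes fashion, treating the two evidence sources as conditionally independent given the candidate image; all scores, names and the scoring interface are hypothetical stand-ins, not the paper's actual method.

    def fuse(prior, text_likelihood, visual_likelihood):
        # Posterior is proportional to the prior times the product of
        # the (assumed conditionally independent) evidence likelihoods.
        return prior * text_likelihood * visual_likelihood

    # Each candidate image carries a text-match score and a visual-match
    # score in [0, 1] (illustrative values, e.g. from annotation matching
    # and semantic similarity calculation).
    candidates = {
        "img_001": (0.9, 0.4),
        "img_002": (0.6, 0.8),
        "img_003": (0.2, 0.3),
    }

    ranked = sorted(candidates,
                    key=lambda k: fuse(1.0, *candidates[k]),
                    reverse=True)
    print(ranked)  # images ordered by fused relevance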

2018, Vol. 15(3), pp. 517-531
Author(s):  
Pinle Qin, Jun Chen, Kai Zhang, Rui Chai

With the widespread use of digital imaging data in hospitals, the size of medical image repositories is increasing rapidly. This makes such large databases difficult to manage and query, creating the need for content-based medical image retrieval (CBMIR) systems. A major challenge in CBMIR systems is the “semantic gap” between the low-level visual information captured by imaging devices and the high-level semantic information perceived by humans. Building a CBMIR system on a deep convolutional neural network (CNN) can fully characterize high-level semantic feature information for medical image retrieval. However, the existing networks, designed mostly for natural images, do not produce good results when applied directly to medical images. This paper used a UNet-based method for preprocessing under the guidance of medical knowledge. Then, a multi-scale receptive field convolution module is used to extract features from the segmented images at different scales. Finally, the features are encoded and a coarse-to-fine search strategy is applied, achieving an average search accuracy of 0.73.
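
The coarse-to-fine strategy can be sketched independently of the network. Assuming features are already extracted, the Python snippet below shortlists candidates with cheap binary codes compared by Hamming distance, then re-ranks the shortlist with full-precision cosine similarity; the code layout and shortlist size are illustrative assumptions, not the authors' exact procedure.

    import numpy as np

    def coarse_to_fine(query, db, shortlist=100, top_k=10):
        # Coarse stage: sign-based binary codes, Hamming distance.
        q_code = query > 0
        db_codes = db > 0
        hamming = np.count_nonzero(db_codes != q_code, axis=1)
        cand = np.argsort(hamming)[:shortlist]
        # Fine stage: re-rank the shortlist by cosine similarity on
        # the full real-valued features.
        sims = db[cand] @ query / (
            np.linalg.norm(db[cand], axis=1) * np.linalg.norm(query) + 1e-12)
        return cand[np.argsort(-sims)[:top_k]]

    rng = np.random.default_rng(0)
    db = rng.standard_normal((5000, 128))  # 5000 encoded images
    hits = coarse_to_fine(db[42], db)      # query with a known item
    print(hits[0])                         # 42: the query retrieves itself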


Information, 2021, Vol. 12(7), p. 285
Author(s):  
Wenjing Yang, Liejun Wang, Shuli Cheng, Yongming Li, Anyu Du

Recently, deep learning to hash has been extensively applied to image retrieval due to its low storage cost and fast query speed. However, existing hashing methods that use a convolutional neural network (CNN) to extract image semantic features suffer from insufficiency and imbalance: the extracted features contain no contextual information and lack relevance among one another. Furthermore, relaxing the hash code during optimization leads to an inevitable quantization error. To solve these problems, this paper proposes deep hash with improved dual attention for image retrieval (DHIDA), whose main contributions are as follows: (1) it introduces an improved dual attention mechanism (IDA), consisting of a position attention module and a channel attention module, on top of a pre-trained ResNet18 backbone to extract image feature information; (2) when calculating the spatial attention matrix and the channel attention matrix, the average and maximum values of the columns of the feature map matrix are integrated to promote feature representation ability and fully leverage the features at each position; and (3) to reduce quantization error, a new piecewise function is designed to directly guide the discrete binary codes. Experiments on CIFAR-10, NUS-WIDE and ImageNet-100 show that DHIDA achieves better performance than comparable methods.
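
As a rough illustration of point (3), the snippet below shows one common form such a piecewise function can take: a hard-tanh-style surrogate that is the identity near zero and saturates to ±1, so network outputs are pushed toward binary values without the zero gradient of sign(x). The breakpoints are assumptions for illustration, not the DHIDA definition.

    import numpy as np

    def piecewise_code(x, t=1.0):
        # Identity inside (-t, t), saturated to -1/+1 outside. Outputs
        # at the tails are already binary, which shrinks the quantization
        # error of the final sign() step. The threshold t is illustrative.
        return np.clip(x / t, -1.0, 1.0)

    x = np.array([-2.3, -0.4, 0.1, 1.7])
    relaxed = piecewise_code(x)   # [-1.0, -0.4, 0.1, 1.0]
    binary = np.sign(relaxed)     # final discrete codes: [-1, -1, 1, 1]
    print(relaxed, binary)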


Content-based image retrieval systems retrieve images according to strong features of the desired image, such as its color, texture and shape. Although visual features alone cannot fully capture image semantics, they can easily be integrated into mathematical formulas. This paper focuses on retrieving images from a large image collection based on color projection, applying segmentation and quantification in different color models and comparing the models for the best result. The method is applied to different categories of image sets, and its retrieval rate is evaluated in the different models.
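
A minimal sketch of this kind of color-based retrieval is given below: pixel colors are quantized into a fixed number of bins per channel, a normalized histogram is built, and database images are ranked by histogram intersection. The bin count and the color model are illustrative assumptions; the paper compares several models.

    import numpy as np

    def color_histogram(image, bins=8):
        # image: H x W x 3 uint8 array in any 3-channel color model.
        q = (image.astype(np.uint32) * bins) // 256            # quantize channels
        idx = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2]
        hist = np.bincount(idx.ravel(), minlength=bins ** 3)
        return hist / hist.sum()

    def rank_by_color(query_hist, db_hists):
        # Histogram intersection: higher overlap means a better match.
        scores = [np.minimum(query_hist, h).sum() for h in db_hists]
        return np.argsort(scores)[::-1]

    rng = np.random.default_rng(1)
    imgs = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(4)]
    hists = [color_histogram(im) for im in imgs]
    print(rank_by_color(hists[2], hists)[0])  # 2: best match is the query itself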


Biometrics, 2017, pp. 652-689
Author(s):  
Anupam Mukherjee

This chapter will focus on the concept of content-based image retrieval. Searching an image or video database using text-based descriptions is a manual, labor-intensive process: descriptions are usually typed for each image by human operators, because automatically generating keywords for images is difficult without incorporating visual information and feature extraction. This approach is impractical in today's multimedia information era. “Content-based” means that the search analyzes the actual contents of the image rather than metadata such as keywords, tags, and descriptions associated with the image. The term “content” in this context might refer to colors, shapes, textures, or any other information that can be derived from the image itself. Several important topics are highlighted in this chapter, including architectures, query techniques, multidimensional indexing, video retrieval and various applications of CBIR.
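
To make the contrast with metadata search concrete, here is a minimal query-by-example skeleton in Python: every image is reduced to a feature vector derived from its pixels, and retrieval becomes a nearest-neighbour search in that feature space. The mean-color extractor is a deliberately trivial stand-in for the richer color, shape and texture descriptors the chapter discusses.

    import numpy as np

    def mean_color(image):
        # Trivial content descriptor: average color per channel.
        return image.reshape(-1, 3).mean(axis=0)

    def query_by_example(query_img, db_imgs, top_k=3, extract=mean_color):
        q = extract(query_img)
        dists = [np.linalg.norm(extract(im) - q) for im in db_imgs]
        return np.argsort(dists)[:top_k]  # nearest images first

    rng = np.random.default_rng(2)
    db = [rng.integers(0, 256, (32, 32, 3), dtype=np.uint8) for _ in range(5)]
    print(query_by_example(db[0], db))  # index 0 comes back first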


Terminology, 2019, Vol. 25(2), pp. 222-258
Author(s):  
Pilar León-Araúz, Arianne Reimerink, Pamela Faber

Reutilization and interoperability are major issues in the fields of knowledge representation and extraction, as reflected in initiatives such as the Semantic Web and the Linked Open Data Cloud. This paper shows how terminological resources can be integrated and reused within different types of application. EcoLexicon is a multilingual terminological knowledge base (TKB) on environmental science that integrates conceptual, linguistic and visual information. It has led to the following by-products: (i) the EcoLexicon English Corpus; (ii) EcoLexiCAT, a terminology-enhanced translation tool; and (iii) Manzanilla, an image annotation tool. This paper explains EcoLexicon and its by-products, and shows how the latter exploit and enhance the data in the TKB.

