A Novel Multi Hidden Layer Convolutional Neural Network for Content Based Image Retrieval

Content-based image retrieval systems have applications in fields such as multimedia, security, medicine, and entertainment, and can be implemented on large real-time databases using a convolutional neural network architecture. Thus far, content-based image retrieval systems have generally been implemented with machine learning algorithms. A machine learning algorithm is applicable only to a limited database because of the few feature-extraction hidden layers between the input and output layers. The proposed convolutional neural network architecture was successfully implemented using 128 convolutional layers, pooling layers, rectified linear units (ReLU), and fully connected layers. A convolutional neural network architecture yields better results because of its ability to extract features from an image. The Euclidean distance metric is used to calculate the similarity between the query image and the database images. The system is implemented on the COREL database, and its performance is evaluated using precision, recall, and F-score.
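The retrieval step described above — ranking database images by Euclidean distance between feature vectors and scoring the result with precision, recall, and F-score — can be sketched as follows. This is an illustrative sketch with made-up feature vectors, not the paper's implementation:

```python
import math

def euclidean(a, b):
    # L2 distance between two equal-length feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def rank_database(query, database):
    # Sort database image ids by ascending distance to the query features
    return sorted(database, key=lambda item: euclidean(query, database[item]))

def precision_recall_f1(retrieved, relevant):
    hits = len(set(retrieved) & set(relevant))
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Toy example: 3-dimensional feature vectors for four database images
db = {"img1": [0.9, 0.1, 0.0], "img2": [0.0, 1.0, 0.2],
      "img3": [0.8, 0.2, 0.1], "img4": [0.1, 0.9, 0.8]}
query = [1.0, 0.0, 0.0]
ranking = rank_database(query, db)   # nearest images first
p, r, f = precision_recall_f1(ranking[:2], {"img1", "img3"})
```

In a real system the feature vectors would come from the network's final layers rather than being hand-written, but the ranking and evaluation logic is the same.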

Author(s):  
Vijayaprabakaran K. ◽  
Sathiyamurthy K. ◽  
Ponniamma M.

A typical healthcare application for elderly people involves monitoring daily activities and providing assistance. Automatic analysis and classification of an image by a system is difficult compared to human vision. Activity recognition from surveillance video poses several challenging problems, owing to the complexity of scene analysis under irregular lighting and low-quality frames. In this article, the authors use machine learning algorithms to improve the accuracy of activity recognition. Their system employs a convolutional neural network (CNN), a machine learning algorithm widely used for image classification, and aims to recognize human activities and assist elderly people using input surveillance videos. The RGB images in the dataset are used for training, which requires more computational power for classifying images. Using the CNN for image classification, the authors obtain 79.94% accuracy in the experiments, showing that their model achieves good image-classification accuracy compared with other pre-trained models.
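As a minimal illustration of the CNN building blocks used in systems like this (convolution, ReLU, and max pooling), the following toy forward pass operates on a single-channel image. The image and kernel values are invented for the example; this is not the authors' network:

```python
def conv2d_valid(img, kernel):
    # 2-D 'valid' convolution (no padding, stride 1) of one channel
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(img) - kh + 1
    out_w = len(img[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(img[i + a][j + b] * kernel[a][b]
                            for a in range(kh) for b in range(kw))
    return out

def relu(feat):
    # Element-wise rectified linear unit
    return [[max(0.0, v) for v in row] for row in feat]

def maxpool2x2(feat):
    # Non-overlapping 2x2 max pooling
    return [[max(feat[i][j], feat[i][j + 1], feat[i + 1][j], feat[i + 1][j + 1])
             for j in range(0, len(feat[0]) - 1, 2)]
            for i in range(0, len(feat) - 1, 2)]

img = [[1, 2, 0, 1],
       [0, 1, 3, 1],
       [2, 1, 0, 0],
       [1, 0, 1, 2]]
kernel = [[1, 0],
          [0, -1]]          # toy 2x2 difference filter
feature_map = maxpool2x2(relu(conv2d_valid(img, kernel)))
```

Real CNNs stack many such layers with learned kernels; the data flow per layer is what this sketch shows.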


2020 ◽  
Vol 102-B (6 Supple A) ◽  
pp. 101-106
Author(s):  
Romil F. Shah ◽  
Stefano A. Bini ◽  
Alejandro M. Martinez ◽  
Valentina Pedoia ◽  
Thomas P. Vail

Aims The aim of this study was to evaluate the ability of a machine-learning algorithm to diagnose prosthetic loosening from preoperative radiographs and to investigate the inputs that might improve its performance. Methods A group of 697 patients underwent a first-time revision of a total hip (THA) or total knee arthroplasty (TKA) at our institution between 2012 and 2018. Preoperative anteroposterior (AP) and lateral radiographs, and historical and comorbidity information were collected from their electronic records. Each patient was defined as having loose or fixed components based on the operation notes. We trained a series of convolutional neural network (CNN) models to predict a diagnosis of loosening at the time of surgery from the preoperative radiographs. We then added historical data about the patients to the best performing model to create a final model and tested it on an independent dataset. Results The convolutional neural network we built performed well when detecting loosening from radiographs alone. The first model built de novo with only the radiological image as input had an accuracy of 70%. The final model, which was built by fine-tuning a publicly available model named DenseNet, combining the AP and lateral radiographs, and incorporating information from the patient’s history, had an accuracy, sensitivity, and specificity of 88.3%, 70.2%, and 95.6% on the independent test dataset. It performed better for cases of revision THA with an accuracy of 90.1%, than for cases of revision TKA with an accuracy of 85.8%. Conclusion This study showed that machine learning can detect prosthetic loosening from radiographs. Its accuracy is enhanced when using highly trained public algorithms, and when adding clinical data to the algorithm. While this algorithm may not be sufficient in its present state of development as a standalone metric of loosening, it is currently a useful augment for clinical decision making. 
Cite this article: Bone Joint J 2020;102-B(6 Supple A):101–106.
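The final model's combination of radiograph-derived features with patient history can be illustrated, in highly simplified form, as late fusion followed by a single logistic unit. The features, weights, and bias below are invented for the example; the study's actual model is a fine-tuned DenseNet:

```python
import math

def late_fusion_score(image_feats, clinical_feats, weights, bias):
    # Concatenate image-derived features with clinical/history features,
    # then apply a single logistic unit as a stand-in for the final
    # fully connected layer of a fine-tuned network.
    fused = list(image_feats) + list(clinical_feats)
    z = sum(w * x for w, x in zip(weights, fused)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # probability of "loose"

# Invented example: two image features, one clinical feature
score = late_fusion_score([0.4, 0.7], [1.0], [0.5, -0.2, 0.8], -0.3)
```

The sketch shows why adding clinical data can help: the classifier sees both modalities in one fused vector, so history can shift a borderline radiographic score.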


IoT ◽  
2021 ◽  
Vol 2 (2) ◽  
pp. 222-235
Author(s):  
Guillaume Coiffier ◽  
Ghouthi Boukli Hacene ◽  
Vincent Gripon

Deep Neural Networks are state-of-the-art in a large number of machine learning challenges. However, to reach the best performance they require a huge pool of parameters. Indeed, typical deep convolutional architectures present an increasing number of feature maps as we go deeper in the network, whereas the spatial resolution of inputs is decreased through downsampling operations. This means that most of the parameters lie in the final layers, while a large portion of the computations is performed by a small fraction of the total parameters in the first layers. In an effort to use every parameter of a network to its maximum, we propose a new convolutional neural network architecture, called ThriftyNet. In ThriftyNet, only one convolutional layer is defined and used recursively, leading to maximal parameter factorization. In complement, normalization, non-linearities, downsampling and shortcut connections ensure sufficient expressivity of the model. ThriftyNet achieves competitive performance on a tiny parameter budget, exceeding 91% accuracy on CIFAR-10 with less than 40 k parameters in total, 74.3% on CIFAR-100 with less than 600 k parameters, and 67.1% on ImageNet ILSVRC 2012 with no more than 4.15 M parameters. However, the proposed method typically requires more computations than existing counterparts.
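The core idea of ThriftyNet — one parameter set reused recursively, so the parameter count is independent of the effective depth — can be sketched with a toy fully connected step standing in for the shared convolution. All values here are illustrative, not the paper's architecture:

```python
def shared_step(x, weights):
    # One pass through the single shared layer: a toy linear transform
    # followed by ReLU (stands in for the shared convolution + non-linearity).
    return [max(0.0, sum(w * v for w, v in zip(row, x))) for row in weights]

def thrifty_forward(x, weights, iterations):
    # The same `weights` are reused at every iteration, so the parameter
    # count stays fixed no matter how many iterations (depth) we run.
    for _ in range(iterations):
        x = shared_step(x, weights)
    return x

shared = [[0.5, 0.0],
          [0.0, 0.5]]       # one shared weight matrix, reused every step
out = thrifty_forward([2.0, 4.0], shared, iterations=2)
```

Running more iterations deepens the computation without adding parameters, which is why the real architecture fits in such small budgets; the paper's extra normalization, downsampling, and shortcut machinery is omitted here.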


10.2196/14502 ◽  
2019 ◽  
Vol 7 (4) ◽  
pp. e14502
Author(s):  
Po-Ting Lai ◽  
Wei-Liang Lu ◽  
Ting-Rung Kuo ◽  
Chia-Ru Chung ◽  
Jen-Chieh Han ◽  
...  

Background Research on disease-disease association (DDA), such as comorbidity and complication, provides important insights into disease treatment and drug discovery, and a large body of literature has been published in the field. However, using current search tools, it is not easy for researchers to retrieve information on the latest DDA findings. First, comorbidity and complication keywords pull up large numbers of PubMed studies. Second, disease is not highlighted in search results. Finally, DDA is not identified, as currently no disease-disease association extraction (DDAE) dataset or tools are available. Objective As there are no available DDAE datasets or tools, this study aimed to develop (1) a DDAE dataset and (2) a neural network model for extracting DDA from the literature. Methods In this study, we formulated DDAE as a supervised machine learning classification problem. To develop the system, we first built a DDAE dataset. We then employed two machine learning models, a support vector machine and a convolutional neural network, to extract DDA. Furthermore, we evaluated the effect of using the convolutional neural network's output layer as features for the support vector machine-based model. Finally, we implemented a large-margin context-aware convolutional neural network architecture to integrate context features and convolutional neural networks through the large-margin function. Results Our DDAE dataset consisted of 521 PubMed abstracts. Experimental results showed that the support vector machine-based approach achieved an F1 measure of 80.32%, higher than the convolutional neural network-based approach (73.32%). Using the output layer of the convolutional neural network as features for the support vector machine did not further improve its performance. However, our large-margin context-aware convolutional neural network achieved the highest F1 measure of 84.18%, demonstrating that combining the hinge loss function of the support vector machine with a convolutional neural network in a single neural network architecture outperforms the other approaches. Conclusions To facilitate the development of text-mining research for DDAE, we developed the first publicly available DDAE dataset, consisting of disease mentions, Medical Subject Heading IDs, and relation annotations. We developed different conventional machine learning models and neural network architectures and evaluated their effects on our DDAE dataset. To further improve DDAE performance, we propose a large-margin context-aware convolutional neural network model for DDAE that outperforms the other approaches.
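The large-margin (hinge) function that the study combines with a convolutional neural network can be illustrated on raw class scores. This is the standard multiclass hinge loss, shown here as a sketch rather than the paper's full model:

```python
def large_margin_loss(scores, gold, margin=1.0):
    # Multiclass hinge loss: penalise any class whose score comes within
    # `margin` of the gold class's score. Zero loss means the gold class
    # beats every other class by at least the margin.
    return sum(max(0.0, scores[j] - scores[gold] + margin)
               for j in range(len(scores)) if j != gold)

# Invented example: three relation classes, gold class 0
loss = large_margin_loss([2.0, 0.5, 1.5], gold=0)
```

In the paper's architecture a loss of this family replaces the usual softmax cross-entropy at the network's output, which is what "combining the hinge loss of the SVM with a CNN" amounts to.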


2020 ◽  
Vol 4 (4) ◽  
pp. 291-296
Author(s):  
Ziyang Wang ◽  
Wei Zheng ◽  
Youguang Chen

Collections of bronze inscription images are increasing rapidly. To use these images efficiently, we propose an effective content-based image retrieval framework using deep learning. Specifically, we first extract discriminative local features for image retrieval using the activations of a convolutional neural network, and binarize the extracted features to improve retrieval efficiency. Then, we use the cosine and Euclidean metrics to calculate the similarity between the query image and dataset images. The results show that the proposed framework achieves impressive accuracy.
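The feature binarization and similarity steps can be sketched as follows. The threshold and vectors are illustrative, and the binary signatures are compared with a Hamming-style match ratio, a common choice that the abstract does not specify:

```python
import math

def binarize(feats, threshold=0.0):
    # Threshold real-valued CNN activations into a compact binary signature
    return [1 if v > threshold else 0 for v in feats]

def hamming_similarity(a, b):
    # Fraction of matching bits between two binary signatures
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cosine_similarity(a, b):
    # Cosine of the angle between two real-valued feature vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0
```

Binary signatures make the database comparison cheap (bit operations instead of floating-point arithmetic), which is the efficiency gain the abstract refers to.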


2021 ◽  
Author(s):  
Shashidhar R ◽  
S Patilkulkarni ◽  
Nishanth S Murthy

Abstract Communication is about expressing one's thoughts to another person through speech and facial expressions. But for people with hearing impairment, it is difficult to communicate without assistance. In many such cases, visual speech recognition (VSR) systems simplify the task by using machine learning algorithms, helping users understand speech and socialize without depending on auditory perception. One can thus view a VSR system as a lifeline for people with hearing impairment, providing a way to understand the words being conveyed to them through speech. In this work we used the VGG16 convolutional neural network architecture on Kannada and English datasets. We used a custom dataset for the research and achieved an accuracy of 90.10% on the English database and 91.90% on the Kannada database.


2020 ◽  
Vol 31 (4) ◽  
pp. 43
Author(s):  
Nuha Mohammed Khassaf ◽  
Shaimaa Hameed Shaker

At present, everyone is interested in dealing with images in different fields such as geographic maps, medical images, images obtained by camera, microscope, or telescope, agricultural field photos, paintings, industrial part drawings, and space photos. Content-based image retrieval (CBIR) is the efficient retrieval of relevant images from databases based on features extracted from the image. The proposed system retrieves images related to a query image from a large set of images, using an approach that extracts the texture features present in the image with statistical methods (PCA, MAD, GLCM, and their fusion) after pre-processing. The proposed system was trained using a 1D CNN on the Corel10k dataset, which is widely used for experimental evaluation of CBIR performance. The results show that the highest accuracy is 97.5% using the fusion of PCA and MAD, compared with 95% using MAD alone and 90% using PCA alone. The performance is acceptable compared to previous work.
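Two of the statistical texture descriptors named above, GLCM and MAD, can be sketched in a few lines. The tiny input image and value list are invented for illustration:

```python
def glcm(img, levels, dx=1, dy=0):
    # Gray-level co-occurrence matrix for offset (dy, dx): counts how often
    # gray level i appears at a pixel whose (dy, dx) neighbour has level j.
    m = [[0] * levels for _ in range(levels)]
    for r in range(len(img)):
        for c in range(len(img[0])):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < len(img) and 0 <= c2 < len(img[0]):
                m[img[r][c]][img[r2][c2]] += 1
    return m

def mad(values):
    # Median absolute deviation: median distance of each value
    # from the median, a robust measure of spread
    s = sorted(values)
    med = s[len(s) // 2]
    return sorted(abs(v - med) for v in values)[len(values) // 2]
```

Texture statistics such as GLCM entries and MAD are then flattened into a feature vector (here that would feed the 1D CNN); fusing several descriptors simply concatenates their vectors.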

