Image Captioning Using Deep Learning
An image caption generator extracts the salient features and activities in an image and produces human-readable captions that describe the objects it contains. Describing the contents of an image draws on both computer vision and natural language processing. The features are extracted with a convolutional neural network; the authors apply transfer learning with the Xception model. Xception, short for "extreme Inception," is a feature-extraction architecture with 36 convolutional layers that yields accurate results compared with other CNNs. A recurrent neural network is then used to describe the image and generate accurate sentences: the feature vector extracted by the CNN is fed to an LSTM. The network is trained on the Flickr8k dataset, in which the data is properly labelled. Given an input image, the model generates captions that closely describe the activities carried out in the image. Further, the authors use BLEU scores to validate the model.
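The encoder-decoder pipeline described above can be sketched in Keras. This is a minimal illustration, not the authors' exact implementation: the merge-style decoder layout and the sizes `VOCAB_SIZE`, `MAX_LEN`, `EMBED_DIM`, and `UNITS` are assumptions that would depend on how the Flickr8k captions are preprocessed. Only the Xception encoder with ImageNet weights, its 2048-dimensional pooled feature vector, and the LSTM decoder follow from the text.

```python
# Sketch of the CNN encoder + LSTM decoder described in the text.
# Sizes below are hypothetical; the paper does not specify them.
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications.xception import preprocess_input
from tensorflow.keras.layers import Input, Dense, Dropout, Embedding, LSTM, add
from tensorflow.keras.models import Model

def build_encoder():
    # Transfer learning: Xception pretrained on ImageNet, classifier head
    # removed; global average pooling yields a 2048-d feature vector.
    # (ImageNet weights download on first use.)
    return Xception(weights="imagenet", include_top=False, pooling="avg")

def extract_features(encoder, images):
    """images: float array of shape (n, 299, 299, 3), pixels in [0, 255].
    Returns an (n, 2048) feature matrix fed to the LSTM decoder."""
    return encoder.predict(preprocess_input(images), verbose=0)

# Hypothetical decoder hyperparameters.
VOCAB_SIZE, MAX_LEN, EMBED_DIM, UNITS = 8000, 34, 256, 256

# Image branch: project the 2048-d CNN feature vector to UNITS dims.
img_in = Input(shape=(2048,))
img_feat = Dense(UNITS, activation="relu")(Dropout(0.5)(img_in))

# Text branch: embed the partial caption and summarize it with an LSTM.
seq_in = Input(shape=(MAX_LEN,))
seq_emb = Embedding(VOCAB_SIZE, EMBED_DIM, mask_zero=True)(seq_in)
seq_feat = LSTM(UNITS)(Dropout(0.5)(seq_emb))

# Merge both branches and predict the next word of the caption.
merged = Dense(UNITS, activation="relu")(add([img_feat, seq_feat]))
out = Dense(VOCAB_SIZE, activation="softmax")(merged)

caption_model = Model(inputs=[img_in, seq_in], outputs=out)
caption_model.compile(loss="categorical_crossentropy", optimizer="adam")
```

At inference time the caption would be generated word by word: starting from a start token, the model's softmax output supplies the next word, which is appended to the sequence and fed back in until an end token or `MAX_LEN` is reached.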
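To make the BLEU validation step concrete, here is a simplified, self-contained BLEU implementation (uniform n-gram weights, no smoothing). It is an illustration of the metric only; in practice a library implementation such as NLTK's `corpus_bleu` would typically be used, and the paper does not state which one the authors chose.

```python
# Simplified BLEU score: modified n-gram precision with a brevity penalty.
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, references, max_n=4):
    """candidate: list of tokens; references: list of token lists."""
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        if not cand_counts:
            return 0.0  # candidate too short to form any n-grams
        # Clip each candidate n-gram count by its max count in any reference.
        max_ref = Counter()
        for ref in references:
            for gram, count in Counter(ngrams(ref, n)).items():
                max_ref[gram] = max(max_ref[gram], count)
        clipped = sum(min(c, max_ref[g]) for g, c in cand_counts.items())
        if clipped == 0:
            return 0.0  # unsmoothed BLEU is zero if any precision is zero
        precisions.append(clipped / sum(cand_counts.values()))
    # Brevity penalty against the closest reference length.
    c = len(candidate)
    r = min((abs(len(ref) - c), len(ref)) for ref in references)[1]
    bp = 1.0 if c > r else math.exp(1 - r / c)
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

A caption identical to a reference scores 1.0, while a caption sharing no 4-grams with any reference scores 0.0 under this unsmoothed variant, which is why smoothed or corpus-level BLEU is preferred for short captions.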