A deep neural network and random forests driven computer vision framework for identification and prediction of metanil yellow adulteration in turmeric powder

Author(s):  
Dipankar Mandal ◽  
Arpitam Chatterjee ◽  
Bipan Tudu


2021 ◽  
Vol 3 (1) ◽  
Author(s):  
Mohammed Aliy Mohammed ◽  
Fetulhak Abdurahman ◽  
Yodit Abebe Ayalew

Abstract Background Automating cytology-based cervical cancer screening could alleviate the shortage of skilled pathologists in developing countries. Computer vision experts have so far attempted numerous semi- and fully automated approaches to address this need, and leveraging the accuracy and reproducibility of deep neural networks has become common practice. In this regard, the purpose of this study is to classify single-cell Pap smear (cytology) images using pre-trained deep convolutional neural network (DCNN) image classifiers. We have fine-tuned the top ten pre-trained DCNN image classifiers and evaluated them using five-class single-cell Pap smear images from the SIPaKMeD dataset. The pre-trained DCNN image classifiers were selected from Keras Applications based on their top-1 accuracy. Results Our experiments demonstrated that, of the selected top-ten pre-trained DCNN image classifiers, DenseNet169 performed best, with an average accuracy, precision, recall, and F1-score of 0.990, 0.974, 0.974, and 0.974, respectively. Moreover, it surpassed the benchmark accuracy reported by the creators of the dataset by 3.70%. Conclusions Although DenseNet169 is small compared to the other pre-trained DCNN image classifiers evaluated, it is still not suitable for mobile or edge devices. Further experimentation with mobile or small-size DCNN image classifiers is required to extend the applicability of the models to real-world demands. In addition, since all experiments used the SIPaKMeD dataset, additional experiments with new datasets will be needed to establish the generalizability of the models.
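The abstract reports macro-averaged precision, recall, and F1 alongside accuracy. A minimal sketch of how those averages are computed from a multi-class confusion matrix is below; the matrix values are illustrative toy counts, not the SIPaKMeD results.

```python
# Macro-averaged precision, recall and F1 for a multi-class classifier.
# confusion[i][j] = number of samples with true class i predicted as j.

def macro_metrics(confusion):
    n = len(confusion)
    precisions, recalls = [], []
    for c in range(n):
        tp = confusion[c][c]
        fp = sum(confusion[r][c] for r in range(n)) - tp  # column minus TP
        fn = sum(confusion[c]) - tp                       # row minus TP
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    p = sum(precisions) / n
    r = sum(recalls) / n
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Toy 3-class confusion matrix (10 samples per true class).
cm = [[8, 1, 1],
      [0, 9, 1],
      [1, 0, 9]]
p, r, f1 = macro_metrics(cm)
```

Macro averaging weights every class equally, which matters for cytology datasets where some cell classes are rarer than others.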


2020 ◽  
Vol 23 (6) ◽  
pp. 1172-1191
Author(s):  
Artem Aleksandrovich Elizarov ◽  
Evgenii Viktorovich Razinkov

Reinforcement learning has recently been developing actively as a branch of machine learning. As a consequence, attempts are being made to apply reinforcement learning to computer vision problems, in particular to image classification. Computer vision tasks are currently among the most pressing tasks of artificial intelligence. The article proposes a method for image classification with a deep neural network using reinforcement learning. The idea of the developed method comes down to solving the problem of a contextual multi-armed bandit, using various strategies for balancing exploration and exploitation together with reinforcement learning algorithms. Strategies such as ε-greedy, softmax, decay-softmax, and the UCB1 method are considered, along with reinforcement learning algorithms such as DQN, REINFORCE, and A2C. The influence of various parameters on the efficiency of the method is analysed, and options for further development of the method are proposed.
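The contextual-bandit framing described above can be sketched in miniature: each image is a context, each class label is an arm, and the reward is 1 for a correct prediction. The ε-greedy policy below is one of the exploration strategies the article names, but the tabular value estimates and toy data are illustrative, not the authors' deep-network implementation.

```python
import random

def epsilon_greedy(q_values, epsilon, rng):
    """Pick a random arm with probability epsilon, else the greedy arm."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

def run_bandit(contexts, labels, n_classes, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # One running value estimate per (context, arm); contexts here are ids.
    q = {c: [0.0] * n_classes for c in set(contexts)}
    counts = {c: [0] * n_classes for c in set(contexts)}
    correct = 0
    for c, y in zip(contexts, labels):
        a = epsilon_greedy(q[c], epsilon, rng)
        reward = 1.0 if a == y else 0.0
        correct += reward
        counts[c][a] += 1
        q[c][a] += (reward - q[c][a]) / counts[c][a]  # incremental mean
    return correct / len(labels)

# Two context ids, each with a fixed correct label; accuracy rises well
# above chance once the value estimates converge.
ctx = [0, 1] * 500
lab = [2, 0] * 500
acc = run_bandit(ctx, lab, n_classes=3)
```

In the article's setting the tabular estimates would be replaced by a deep network mapping image features to arm values, trained with DQN, REINFORCE, or A2C.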


2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Daniel G. E. Thiem ◽  
Paul Römer ◽  
Matthias Gielisch ◽  
Bilal Al-Nawas ◽  
Martin Schlüter ◽  
...  

Abstract Background Hyperspectral imaging (HSI) is a promising non-contact approach to tissue diagnostics, generating large amounts of raw data for whose processing computer vision (i.e. deep learning) is particularly suitable. The aim of this proof-of-principle study was the classification of hyperspectral (HS) reflectance values into the human oral tissue types fat, muscle and mucosa using deep learning methods. Furthermore, the tissue-specific hyperspectral signatures collected will serve as a representative reference for the future assessment of oral pathological changes, in the sense of an HS-library. Methods A total of 316 samples of healthy human oral fat, muscle and mucosa were collected from 174 different patients and imaged using an HS-camera covering the wavelength range from 500 nm to 1000 nm. The HS raw data were labelled and processed for tissue classification using a light-weight 6-layer deep neural network (DNN). Results The reflectance values differed significantly (p < .001) for fat, muscle and oral mucosa at almost all wavelengths, with the signature of muscle differing the most. The deep neural network distinguished the tissue types with an accuracy of > 80% each. Conclusion Oral fat, muscle and mucosa can be classified automatically and with sufficient accuracy by their specific HS-signatures using a deep learning approach. Early detection of premalignant mucosal lesions using hyperspectral imaging and deep learning is so far rarely represented in the medical and computer vision research domains, but it has high potential and is part of subsequent studies.
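The HS-library idea can be illustrated in miniature: store one reference reflectance signature per tissue type and label a new spectrum by its closest reference. The spectra below are made-up four-band toy values, and the nearest-signature rule stands in for the study's 6-layer DNN.

```python
# Minimal HS-library sketch: label a reflectance spectrum by the nearest
# stored tissue signature (squared Euclidean distance across wavelengths).

def closest_tissue(spectrum, library):
    def sq_dist(name):
        ref = library[name]
        return sum((s - r) ** 2 for s, r in zip(spectrum, ref))
    return min(library, key=sq_dist)

# Hypothetical 4-band reference signatures (one mean spectrum per tissue).
library = {
    "fat":    [0.60, 0.65, 0.70, 0.72],
    "muscle": [0.20, 0.25, 0.40, 0.55],
    "mucosa": [0.35, 0.45, 0.50, 0.58],
}
label = closest_tissue([0.22, 0.27, 0.41, 0.54], library)
```

A real HS pipeline works on hundreds of wavelength bands per pixel, which is why the study uses a learned classifier rather than a fixed distance rule.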


2018 ◽  
Vol 18 (17) ◽  
pp. 7315-7324 ◽  
Author(s):  
Ku-Young Chung ◽  
Kwangsub Song ◽  
Seok Hyun Cho ◽  
Joon-Hyuk Chang

Deep learning has brought numerous advances to machine learning research and its models, especially in fields such as NLP and computer vision. In conventional supervised learning, a dataset is fixed in advance and the model is trained on it completely before making predictions; if new data samples arrive on which the model should also make predictions, the entire model must be retrained, which is computationally costly. Incremental learning avoids this retraining by adding the new samples on top of the features already learnt by the pre-trained model. In this paper, we propose a system that overcomes catastrophic forgetting by building on a pre-trained model.
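A minimal sketch of the incremental idea described above: keep a frozen, pre-trained feature extractor and update only a light classifier head as new samples arrive, instead of retraining everything. The extractor here is a stand-in (a fixed linear map) and the nearest-centroid head is illustrative, not the paper's system.

```python
# Incremental learning sketch: frozen feature extractor + online head.

class IncrementalCentroids:
    """Nearest-centroid head whose class means are updated online."""
    def __init__(self):
        self.sums, self.counts = {}, {}

    def partial_fit(self, features, label):
        if label not in self.sums:
            self.sums[label] = [0.0] * len(features)
            self.counts[label] = 0
        self.counts[label] += 1
        for i, v in enumerate(features):
            self.sums[label][i] += v

    def predict(self, features):
        def dist(label):
            c = self.counts[label]
            return sum((features[i] - s / c) ** 2
                       for i, s in enumerate(self.sums[label]))
        return min(self.sums, key=dist)

def extract(x):
    # Frozen "pre-trained" extractor: a fixed linear feature map.
    return [x[0] + x[1], x[0] - x[1]]

head = IncrementalCentroids()
for x, y in [([0, 0], "a"), ([1, 1], "b"), ([0.1, 0], "a"), ([0.9, 1.1], "b")]:
    head.partial_fit(extract(x), y)
# A new sample is absorbed without revisiting earlier training data:
head.partial_fit(extract([1.0, 0.9]), "b")
pred = head.predict(extract([0.05, 0.05]))
```

Because only per-class running sums are updated, adding a sample is O(feature dimension), and nothing previously learnt is overwritten, which is the property catastrophic forgetting destroys when a whole network is retrained.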


2019 ◽  
Vol 77 (4) ◽  
pp. 1340-1353 ◽  
Author(s):  
Geoff French ◽  
Michal Mackiewicz ◽  
Mark Fisher ◽  
Helen Holah ◽  
Rachel Kilburn ◽  
...  

Abstract We report on the development of a computer vision system that analyses video from CCTV systems installed on fishing trawlers for the purpose of monitoring and quantifying discarded fish catch. Our system is designed to operate in spite of the challenging computer vision problem posed by conditions on-board fishing trawlers. We describe the approaches developed for isolating and segmenting individual fish and for species classification. We present an analysis of the variability of manual species identification performed by expert human observers and contrast the performance of our species classifier against this benchmark. We also quantify the effect of the domain gap on the performance of modern deep neural network-based computer vision systems.


PLoS ONE ◽  
2021 ◽  
Vol 16 (6) ◽  
pp. e0251667
Author(s):  
Imran N. Junejo

Surveillance cameras are everywhere, keeping an eye on pedestrians as they navigate through a scene. In this context, our paper addresses the problem of pedestrian attribute recognition (PAR), which entails recognizing attributes such as age group, clothing style, accessories, footwear style, etc. This multi-label problem is extremely challenging even for human observers and has rightly garnered attention from the computer vision community. Towards a solution, we adopt trainable Gabor wavelet (TGW) layers and cascade them with a convolutional neural network (CNN). Whereas other researchers use fixed Gabor filters with the CNN, the proposed layers are learnable and adapt to the dataset for better recognition. We propose a two-branch neural network in which mixed layers, a combination of TGW and convolutional layers, make up the building block of our deep neural network. We test our method on two challenging publicly available datasets and compare our results with the state of the art.
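The core of a trainable-Gabor layer is that the convolution kernel is generated from a handful of parameters (orientation θ, envelope width σ, wavelength λ, aspect ratio γ, phase ψ), so a network can learn those few parameters instead of the raw filter weights. The sketch below builds such a kernel using the standard Gabor formulation; it is not the authors' exact layer.

```python
import math

def gabor_kernel(size, theta, sigma, lambda_, gamma=0.5, psi=0.0):
    """Real part of a 2D Gabor filter: Gaussian envelope times cosine wave."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates by the filter orientation theta.
            xp = x * math.cos(theta) + y * math.sin(theta)
            yp = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xp * xp + gamma * gamma * yp * yp)
                                / (2 * sigma * sigma))
            row.append(envelope * math.cos(2 * math.pi * xp / lambda_ + psi))
        kernel.append(row)
    return kernel

k = gabor_kernel(size=7, theta=0.0, sigma=2.0, lambda_=4.0)
```

In a TGW layer, gradients flow back to θ, σ, λ, γ, ψ themselves, so the filter bank adapts to the dataset while keeping the Gabor structure that fixed-filter approaches hard-code.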


2021 ◽  
Author(s):  
Callum Newman ◽  
Jon Petzing ◽  
Yee Mey Goh ◽  
Laura Justham

Artificial intelligence in computer vision has focused on improving test performance using techniques and architectures related to deep neural networks. However, improvements can also be achieved by carefully selecting the training dataset images. Environmental factors, such as light intensity, affect an image's appearance, and by choosing optimal factor levels the neural network's performance can be improved. However, little research is available into processes that help identify optimal levels. This research presents a case study using a process for developing an optimised dataset for training an object-detection neural network. Images are gathered under controlled conditions with multiple factors to construct various training datasets. Each dataset is used to train the same neural network, and the test performance is compared to identify the optimal factors. The opportunity to use synthetic images is introduced, which has many advantages, including creating images when real-world images are unavailable and controlling factors more easily.
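The selection process described above, building one training set per factor combination, training the same model on each, and keeping the best, can be sketched as an exhaustive grid over factor levels. The factor names, levels, and scoring function below are hypothetical placeholders for the object-detection pipeline.

```python
from itertools import product

def evaluate_factors(levels_by_factor, train_and_score):
    """Try each combination of factor levels; return the best one found."""
    best_combo, best_score = None, float("-inf")
    names = sorted(levels_by_factor)
    for combo in product(*(levels_by_factor[n] for n in names)):
        settings = dict(zip(names, combo))
        score = train_and_score(settings)  # train on this dataset, test it
        if score > best_score:
            best_combo, best_score = settings, score
    return best_combo, best_score

# Placeholder scorer standing in for "train the network and measure test
# performance": here mid light intensity and no blur score best.
def toy_score(s):
    return -abs(s["light"] - 50) - 10 * s["blur"]

best, score = evaluate_factors(
    {"light": [10, 50, 90], "blur": [0, 1]}, toy_score)
```

With real training runs each evaluation is expensive, which is one motivation the abstract gives for synthetic images: factor levels can be varied cheaply and exactly.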

