Text based smart answering system in agriculture using RNN.

Author(s):  
C. A. Rose Mary ◽  
A. Raji Sukumar ◽  
N. Hemalatha

Abstract Agriculture is an important aspect of India's economy, and the country has one of the highest proportions of farm producers in the world. The business of agriculture is also becoming increasingly demanding: delivering a high-quality product is only part of the equation in today's industry. A chatbot is a tool or assistant that you can communicate with via instant messages; it understands what you are trying to say and responds with a sensible, relevant reply or simply completes the requested task for you. The goal of this project is to create a chatbot that uses natural language processing to enable remote interaction between users/farmers and the agricultural environment. The chatbot being developed can answer basic questions from farmers as well as provide agricultural knowledge and possible solutions. This technology assists farmers in distant areas without internet access in better understanding which crop to cultivate based on atmospheric conditions and in answering fundamental agricultural questions. In this project we implemented a Multi-Layer Perceptron (MLP) model and a Recurrent Neural Network (RNN) model on the dataset. The RNN achieved an accuracy of 97.83%, considerably better than the MLP.
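
As a rough illustration of how such an intent-based chatbot can be built, the sketch below trains a small RNN (LSTM) classifier that maps a farmer's question to an intent label, which would then be mapped to a stored answer. The sample questions, intent names and hyperparameters are illustrative assumptions, not the authors' dataset or architecture.

```python
# Minimal sketch of an RNN intent classifier for a FAQ-style agricultural chatbot (Keras).
# The toy intents and hyperparameters are illustrative, not the authors' dataset.
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

# Toy training data: (question, intent) pairs a farmer might ask.
questions = ["which crop suits dry weather", "how much water does rice need",
             "best fertilizer for wheat", "when should I sow maize"]
intents = ["crop_selection", "irrigation", "fertilizer", "sowing_time"]
labels = {name: i for i, name in enumerate(sorted(set(intents)))}
y = np.array([labels[i] for i in intents])

tokenizer = Tokenizer(oov_token="<unk>")
tokenizer.fit_on_texts(questions)
X = pad_sequences(tokenizer.texts_to_sequences(questions), maxlen=8)

model = Sequential([
    Embedding(input_dim=len(tokenizer.word_index) + 2, output_dim=32),
    LSTM(64),                                   # recurrent layer reads the question word by word
    Dense(len(labels), activation="softmax"),   # one score per intent
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=50, verbose=0)

# At inference time the predicted intent would be mapped to a canned agricultural answer.
query = pad_sequences(tokenizer.texts_to_sequences(["what crop for dry climate"]), maxlen=8)
print(list(labels)[int(model.predict(query, verbose=0).argmax())])
```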

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Rohit Kundu ◽  
Hritam Basak ◽  
Pawan Kumar Singh ◽  
Ali Ahmadian ◽  
Massimiliano Ferrara ◽  
...  

Abstract COVID-19 has crippled the world's healthcare systems, setting back the economy and claiming many lives. Although potential vaccines are being tested and supplied around the world, it will take a long time for them to reach every human being, more so with new variants of the virus emerging and enforcing lockdown-like situations in parts of the world. Thus, there is a dire need for early and accurate detection of COVID-19 to prevent the disease from spreading further. The current gold-standard RT-PCR test is only 71% sensitive and laborious to perform, making population-wide screening infeasible. To this end, in this paper we propose an automated COVID-19 detection system that uses CT-scan images of the lungs to classify cases as COVID or Non-COVID. The proposed method applies an ensemble strategy that generates fuzzy ranks of the base classification models using the Gompertz function and fuses the decision scores of the base models adaptively to make the final predictions on the test cases. Three transfer learning-based convolutional neural network models, namely VGG-11, Wide ResNet-50-2, and Inception v3, are used to generate the decision scores to be fused by the proposed ensemble model. The framework has been evaluated on two publicly available chest CT scan datasets, achieving state-of-the-art performance and justifying the reliability of the model. The source code for the present work is available on GitHub.
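
To make the fusion step concrete, the sketch below shows one way a Gompertz-based fuzzy-rank ensemble can combine per-class decision scores from several base CNNs. The specific rank and fusion formulas used here are assumptions for illustration and are not necessarily the exact equations of the cited paper.

```python
# Minimal sketch of fuzzy-rank fusion with a Gompertz-style re-ranking (numpy).
# The exact rank/fusion formulas are illustrative assumptions, not necessarily
# the equations used in the cited paper.
import numpy as np

def gompertz_rank(p):
    """Map a confidence score p in [0, 1] to a fuzzy rank (lower = better)."""
    return 1.0 - np.exp(-np.exp(-2.0 * p))

def fuse(prob_list):
    """Fuse per-class probability vectors from several base CNNs for one sample."""
    ranks = np.stack([gompertz_rank(p) for p in prob_list])   # (n_models, n_classes)
    complement = np.stack([1.0 - p for p in prob_list])       # penalise low confidence
    fused = ranks.prod(axis=0) * complement.sum(axis=0)       # smaller is better
    return int(fused.argmin())

# Example: decision scores for one CT slice from three hypothetical base models
# (e.g. VGG-11, Wide ResNet-50-2, Inception v3); classes = [COVID, Non-COVID].
scores = [np.array([0.80, 0.20]), np.array([0.65, 0.35]), np.array([0.55, 0.45])]
print(fuse(scores))   # -> 0, i.e. the COVID class wins the fused ranking
```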


Author(s):  
Yonatan Belinkov ◽  
James Glass

The field of natural language processing has seen impressive progress in recent years, with neural network models replacing many of the traditional systems. A plethora of new models have been proposed, many of which are thought to be opaque compared to their feature-rich counterparts. This has led researchers to analyze, interpret, and evaluate neural networks in novel and more fine-grained ways. In this survey paper, we review analysis methods in neural language processing, categorize them according to prominent research trends, highlight existing limitations, and point to potential directions for future work.


Author(s):  
Vo Ngoc Phu ◽  
Vo Thi Ngoc Tran

Artificial intelligence (ARTINT) and information science have been prominent fields for many years, because many different areas have advanced rapidly on the basis of them and have created significant value. That value is increasingly being used by national economies around the world, by other sciences, and by companies and organizations. Many large corporations and organizations have grown rapidly as these economies have developed, and they in turn generate enormous amounts of information and large-scale data sets. Processing and storing these data successfully has become a major challenge for many commercial applications and studies. To handle this problem, many algorithms have been proposed for processing such big data sets.


2020 ◽  
pp. 1-22 ◽  
Author(s):  
D. Sykes ◽  
A. Grivas ◽  
C. Grover ◽  
R. Tobin ◽  
C. Sudlow ◽  
...  

Abstract Using natural language processing, it is possible to extract structured information from raw text in the electronic health record (EHR) at reasonably high accuracy. However, the accurate distinction between negated and non-negated mentions of clinical terms remains a challenge. EHR text includes cases where diseases are stated not to be present or are only hypothesised, meaning a disease can be mentioned in a report without being reported as present. This makes tasks such as document classification and summarisation more difficult. We have developed the rule-based EdIE-R-Neg, part of an existing text mining pipeline called EdIE-R (Edinburgh Information Extraction for Radiology reports) developed to process brain imaging reports (https://www.ltg.ed.ac.uk/software/edie-r/), and two machine learning approaches: one using a bidirectional long short-term memory network and another using a feedforward neural network. These were developed on data from the Edinburgh Stroke Study (ESS) and tested on data from routine reports from NHS Tayside (Tayside). Both datasets consist of written reports from medical scans. These models are compared with two existing rule-based models: pyConText (Harkema et al. 2009. Journal of Biomedical Informatics 42(5), 839–851), a Python implementation of a generalisation of NegEx, and NegBio (Peng et al. 2017. NegBio: A high-performance tool for negation and uncertainty detection in radiology reports. arXiv:1712.05898), which identifies negation scopes through patterns applied to a syntactic representation of the sentence. On both the test set of the dataset on which our models were developed and the largely similar Tayside test set, the neural network models and our custom-built rule-based system outperformed the existing methods. EdIE-R-Neg achieved the highest F1 score, particularly on the Tayside test set, from which no development data were used in these experiments, showing the power of custom-built rule-based systems for negation detection on datasets of this size. The performance gap between the machine learning models and EdIE-R-Neg on the Tayside test set was reduced by adding Tayside development data to the ESS training set, demonstrating the adaptability of the neural network models.
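
For readers unfamiliar with pattern-based negation detection, the toy sketch below shows the general idea behind NegEx-style scoping: a negation trigger opens a fixed window of tokens, and clinical terms falling inside that window are marked as negated. The trigger list and window size are illustrative assumptions and are not the rules used by EdIE-R-Neg.

```python
# Toy illustration of pattern-based negation scoping in the spirit of NegEx-style
# systems; the trigger list and scope window are assumptions, not EdIE-R-Neg's rules.
import re

NEG_TRIGGERS = ["no evidence of", "no", "without", "not"]
SCOPE_WINDOW = 5   # number of tokens after a trigger treated as negated

def negated_terms(sentence, terms):
    """Return the subset of clinical terms that fall inside a negation scope."""
    tokens = re.findall(r"[a-z]+", sentence.lower())
    negated = set()
    for i in range(len(tokens)):
        window = " ".join(tokens[i:i + SCOPE_WINDOW + 1])
        for trig in NEG_TRIGGERS:
            if window.startswith(trig):
                start = i + len(trig.split())
                scope = tokens[start:start + SCOPE_WINDOW]
                negated.update(t for t in terms if t in scope)
    return negated

print(negated_terms(
    "CT head shows no evidence of acute infarct, chronic small vessel disease noted",
    {"infarct", "haemorrhage", "disease"}))
# -> {'infarct'}  ('disease' lies outside the 5-token scope of the trigger)
```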


2008 ◽  
Vol 1 (1) ◽  
pp. 95-102 ◽  
Author(s):  
F. Wu

The European Union (EU) has some of the strictest standards for mycotoxins in food and feed in the world. This paper explores the economic impacts of these standards on other nations that attempt to export foods that are susceptible to one mycotoxin, aflatoxin, to the EU. The current EU standard for total aflatoxins in food is 4 ng/g in food other than peanuts, and 15 ng/g in peanuts. Under certain conditions, export markets may actually benefit from the strict EU standard. These conditions include a consistently high-quality product, and a global scene that allows market shifts. Even lower-quality export markets can benefit from the strict EU standard, primarily by technology forcing. However, if the above conditions are not met, export markets suffer from the strict EU standard. Two case studies are presented to illustrate these two different scenarios: the U.S. pistachio and almond industries. Importantly, within the EU, food processors may suffer as well from the strict aflatoxin standard. EU policymakers should consider these more nuanced economic impacts when developing mycotoxin standards for food and feed.


Author(s):  
Megha J Panicker ◽  
Vikas Upadhayay ◽  
Gunjan Sethi ◽  
Vrinda Mathur

In the modern era, image captioning has become one of the most widely needed tools. Many applications now generate and provide a caption for a given image automatically, and all of this is done with the help of deep neural network models. The process of generating a description of an image is called image captioning. It requires recognizing the important objects, their attributes, and the relationships among the objects in an image, and generating syntactically and semantically correct sentences. In this paper, we present a deep learning model to describe images and generate captions using computer vision and machine translation. This paper aims to detect the different objects found in an image, recognize the relationships between those objects, and generate captions. The dataset used is Flickr8k, the programming language used is Python3, and an ML technique called transfer learning is implemented with the help of the Xception model to demonstrate the proposed experiment. This paper also elaborates on the functions and structure of the various neural networks involved. Generating image captions is an important aspect of computer vision and natural language processing. Image caption generators find applications in image segmentation, as used by Facebook and Google Photos, and their use can be extended to video frames. They can automate the work of a person who has to interpret images, and they have immense scope in helping visually impaired people.
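
The sketch below outlines the kind of transfer-learning setup described above: a frozen Xception encoder turns each image into a feature vector, and an LSTM decoder generates the caption one word at a time. The layer sizes, vocabulary size and maximum caption length are illustrative assumptions.

```python
# Minimal sketch of a transfer-learning captioning model: a frozen Xception encoder
# feeding an LSTM decoder. Layer sizes, vocabulary size and caption length are
# illustrative assumptions, not necessarily the configuration used in the paper.
from tensorflow.keras.applications import Xception
from tensorflow.keras.layers import Input, Dense, Dropout, Embedding, LSTM, add
from tensorflow.keras.models import Model

VOCAB_SIZE, MAX_LEN = 8000, 34   # assumed vocabulary size and maximum caption length

# Image branch: Xception (ImageNet weights, no top) yields a 2048-d feature vector
# per image; in practice these features are precomputed once for the whole dataset.
image_features = Input(shape=(2048,))
img = Dropout(0.5)(image_features)
img = Dense(256, activation="relu")(img)

# Text branch: the partial caption generated so far, embedded and fed to an LSTM.
caption_input = Input(shape=(MAX_LEN,))
txt = Embedding(VOCAB_SIZE, 256, mask_zero=True)(caption_input)
txt = Dropout(0.5)(txt)
txt = LSTM(256)(txt)

# Decoder: merge both branches and predict the next word of the caption.
merged = add([img, txt])
merged = Dense(256, activation="relu")(merged)
output = Dense(VOCAB_SIZE, activation="softmax")(merged)

caption_model = Model(inputs=[image_features, caption_input], outputs=output)
caption_model.compile(loss="categorical_crossentropy", optimizer="adam")

# The encoder itself, used offline to turn each Flickr8k image into a 2048-d vector.
encoder = Xception(include_top=False, pooling="avg", weights="imagenet")
```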


Over the past few years, the world has seen a surge in fake news, which some are even calling an epidemic. Misleading false articles are passed off as news items on social media, WhatsApp and similar platforms, where no proper barrier exists to check the authenticity of posts. Beyond the text itself, news items also contain images that are doctored to mislead the public or cause sabotage. Hence, a proper mechanism to check the authenticity of images attached to news items is absolutely necessary, and classifying such images on the basis of authenticity is imperative. This paper discusses the possibilities of identifying fake images using machine learning techniques, and serves as an introduction to fake news detection using the latest neural network models.
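
As one possible instance of such a classifier, the sketch below trains a small convolutional network to label images as authentic or doctored. The architecture and the hypothetical directory of labelled news images are assumptions for illustration only.

```python
# Minimal sketch of a CNN that classifies news images as authentic vs. doctored.
# The architecture and the "news_images/" folder (with one subfolder per class)
# are hypothetical, illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # 1 = authentic, 0 = doctored
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical dataset of labelled news images, one subfolder per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "news_images", image_size=(128, 128), batch_size=32)
model.fit(train_ds, epochs=5)
```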

