PP234 Analysis Of Discussions On Twitter On The Topic Of COVID-19 Tests: Exploring A Natural Language Processing Approach

2021 ◽  
Vol 37 (S1) ◽  
pp. 30-30
Author(s):  
Savitri Pandey ◽  
Christopher Marshall ◽  
Maria Pokora ◽  
Anne Oyewole ◽  
Dawn Craig

Introduction: Various strategies to suppress the coronavirus have been adopted by governments across the world; one such strategy is diagnostic testing. The anxiety that testing causes individuals is difficult to quantify. This analysis explores the use of soft intelligence from Twitter (USA, UK, and India) to help better understand this issue.

Methods: A total of 650,000 tweets were collected between September and October 2020 via the Twitter API, using hashtags such as ‘#oxymeter’, ‘#oximeter’, ‘#antibodytest’, ‘#infraredthermometer’, ‘#swabtest’, ‘#rapidtest’, and ‘#antigen’. We applied natural language processing (TextBlob) to assign sentiment and categorize the tweets by emotion and attitude. WordCloud was then used to identify the 500 most frequent words in the whole tweet dataset.

Results: Global analysis and pre-processing of the tweets indicated that 21 percent, 7 percent, and 4 percent of tweets originated from the USA, UK, and India, respectively. Tweets using #antibody, #rapid, #antigen, and #swabtest expressed mostly positive sentiment, whereas #oxymeter and #infraredthermometer tweets were mostly neutral. The underlying emotions of the tweets were approximately 2.5 times more positive than negative. The most used words in the tweets included ‘hope’, ‘insurance’, ‘symptoms’, ‘love’, ‘painful’, ‘cough’, ‘fast test’, ‘wife’, and ‘kids’.

Conclusions: The findings suggest it may be reasonable to infer that people are generally concerned about their personal and social wellbeing, want to keep themselves safe, and perceive testing to deliver some component of that feeling of safety. This study has several limitations: it was restricted to three countries and includes only English-language tweets collected with a limited number of hashtags.
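The categorization step described in the Methods can be sketched as follows. This is a minimal illustration, not the study's pipeline: the hashtag/polarity pairs and the neutral-band threshold are invented for the example, and in the study the polarity scores themselves came from TextBlob's pretrained sentiment analyzer.

```python
def categorize(polarity, neutral_band=0.05):
    """Map a TextBlob-style polarity score in [-1, 1] to a sentiment label."""
    if polarity > neutral_band:
        return "positive"
    if polarity < -neutral_band:
        return "negative"
    return "neutral"

# Hypothetical (hashtag, polarity) pairs standing in for scored tweets.
scored_tweets = [
    ("#antibodytest", 0.40),
    ("#rapidtest", 0.25),
    ("#oximeter", 0.02),
    ("#swabtest", -0.30),
]

labels = {tag: categorize(p) for tag, p in scored_tweets}
print(labels)
```

A neutral band around zero (rather than a strict sign test) is a common choice so that near-zero polarity scores are not forced into a positive or negative bucket.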

2021 ◽  
Vol 1 (1) ◽  
pp. 2-11
Author(s):  
Sae Dieb ◽  
Kou Amano ◽  
Kosuke Tanabe ◽  
Daitetsu Sato ◽  
Masashi Ishii ◽  
...  

Author(s):  
Santosh Kumar Mishra ◽  
Rijul Dhir ◽  
Sriparna Saha ◽  
Pushpak Bhattacharyya

Image captioning is the process of generating a textual description of an image, aiming to describe its salient parts. It is an important problem because it combines computer vision and natural language processing: computer vision for understanding the image and natural language processing for language modeling. Much work has been done on image captioning for the English language. In this article, we develop a model for image captioning in Hindi. Hindi is the official language of India and the fourth most spoken language in the world, spoken in India and South Asia. To the best of our knowledge, this is the first attempt to generate image captions in Hindi. A dataset was manually created by translating the well-known MSCOCO dataset from English to Hindi. Finally, different types of attention-based architectures are developed for image captioning in Hindi; these attention mechanisms are new for the language, as they have never been applied to it before. The results of the proposed model are compared with several baselines in terms of BLEU scores and show that our model performs better than the others. Manual evaluation of the obtained captions in terms of adequacy and fluency also reveals the effectiveness of the proposed approach. Availability of resources: the code for the article is available at https://github.com/santosh1821cs03/Image_Captioning_Hindi_Language ; the dataset will be made available at http://www.iitp.ac.in/∼ai-nlp-ml/resources.html .
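The attention-based architectures mentioned above score image-region features against the decoder's hidden state at each step. A minimal sketch of one such mechanism, additive (Bahdanau-style) attention, is shown below; the dimensions and random weights are toy values for illustration, not the paper's actual architecture.

```python
import numpy as np

def additive_attention(features, hidden, W_f, W_h, v):
    """Score each image-region feature against the decoder hidden state
    and return the attention-weighted context vector."""
    # features: (regions, d), hidden: (d,)
    scores = np.tanh(features @ W_f + hidden @ W_h) @ v   # (regions,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                              # softmax over regions
    return weights @ features, weights                    # context: (d,)

rng = np.random.default_rng(0)
d, regions = 4, 3
features = rng.normal(size=(regions, d))
hidden = rng.normal(size=d)
W_f, W_h, v = rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=d)

context, weights = additive_attention(features, hidden, W_f, W_h, v)
print(weights.sum())  # attention weights form a distribution over regions
```

The context vector is then fed to the decoder when generating the next Hindi word, letting the model attend to different image regions at each step.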


Author(s):  
TIAN-SHUN YAO

Based on a word-based theory of natural language processing, a word-based Chinese language understanding system has been developed. This theory is presented in the light of psychological language analysis and the features of the Chinese language, together with a description of the computer programs built on it. The heart of the system is the definition of a Total Information Dictionary and the World Knowledge Source used in the system. The purpose of this research is to develop a system that can understand not only individual Chinese sentences but also whole texts.
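A word-based approach to Chinese must first recover word boundaries, since Chinese text is written without spaces. The toy sketch below shows forward maximum matching against a tiny hypothetical word list; the real system's Total Information Dictionary carries far richer entries (syntactic and semantic information), not just word forms.

```python
# Assumed toy dictionary entries for illustration only.
DICTIONARY = {"中国", "人民", "中", "国", "人", "民"}

def forward_max_match(text, dictionary, max_len=4):
    """Greedily segment text into the longest dictionary words, left to right.

    Falls back to a single character when no dictionary word matches."""
    words, i = [], 0
    while i < len(text):
        for j in range(min(len(text), i + max_len), i, -1):
            if text[i:j] in dictionary or j == i + 1:
                words.append(text[i:j])
                i = j
                break
    return words

print(forward_max_match("中国人民", DICTIONARY))  # -> ['中国', '人民']
```

Once the text is segmented into words, each word's dictionary entry can supply the information the understanding system needs at the sentence and text levels.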


2020 ◽  
Author(s):  
David DeFranza ◽  
Himanshu Mishra ◽  
Arul Mishra

Language provides an ever-present context for our cognitions and has the ability to shape them. Languages across the world can be gendered (languages in which nouns, verbs, or pronouns take feminine or masculine forms) or genderless. In an ongoing debate, one stream of research suggests that gendered languages are more likely to display gender prejudice than genderless languages; another stream suggests that language does not have the ability to shape gender prejudice. In this research, we contribute to the debate by using a natural language processing (NLP) method that captures the meaning of a word from the context in which it occurs. Using text data from Wikipedia and the Common Crawl project (which contains text from billions of publicly facing websites) across 45 world languages, covering the majority of the world’s population, we test for gender prejudice in gendered and genderless languages. We find that gender prejudice occurs more in gendered than in genderless languages. Moreover, we examine whether the genderedness of a language influences the stereotype dimensions of warmth and competence, using the same NLP method.
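Context-based NLP methods of this kind typically quantify prejudice as a difference in embedding similarity between a target word and gendered anchor words. The sketch below illustrates that core operation with cosine similarity; the 3-d vectors are invented toy values, not real embeddings trained on Wikipedia or Common Crawl, and the word choices are hypothetical.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical embeddings for illustration.
vectors = {
    "she":      [0.9, 0.1, 0.0],
    "he":       [0.1, 0.9, 0.0],
    "engineer": [0.2, 0.8, 0.1],
}

# A positive score means the target word sits closer to "he" than to "she".
bias = cosine(vectors["engineer"], vectors["he"]) - cosine(vectors["engineer"], vectors["she"])
print(round(bias, 3))
```

In practice such single-pair differences are aggregated over many target and attribute words, and over many languages, before any cross-language comparison is made.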


The software development process begins with requirements analysis. Requirements work proceeds from analyzing the requirements to sketching the design of the program, which is critical work for programmers and software engineers. Moreover, many errors made during the requirements-analysis cycle are transferred to other stages, raising the cost of the process well beyond the initially specified one. The underlying reason is that software requirement specifications are written in natural language. To minimize these errors, the software requirements can be transferred into a computerized form as UML diagrams. To this end, a tool has been designed that provides semi-automated aid for designers, producing a UML class model from software specifications using natural language processing techniques. The proposed technique outputs the class diagram in a standard format and also identifies the relationships between classes. In this research, we propose to enhance the procedure for producing UML diagrams by exploiting natural language, which will help software developers analyze the software requirements more efficiently and with fewer errors. The proposed approach uses a parser and a part-of-speech (POS) tagger to analyze the user requirements entered in the English language, then extracts the verbs, phrases, and so on from the user's text. The obtained results show that the proposed method performs better than other methods published in the literature, giving a better analysis of the given requirements and better diagram presentation, which can help software engineers. Key words: Part of Speech, UML
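The extraction step described above can be sketched as follows: POS-tagged requirement tokens (as a parser/tagger would emit them) are mapped to candidate UML classes (nouns) and operations (verbs). The tag set (Penn-Treebank-style) and the sample sentence are assumptions for illustration, not the paper's actual tool.

```python
def extract_candidates(tagged_tokens):
    """Nouns become candidate classes; verbs become candidate operations."""
    classes = [w for w, tag in tagged_tokens if tag.startswith("NN")]
    operations = [w for w, tag in tagged_tokens if tag.startswith("VB")]
    return classes, operations

# "The customer places an order", pre-tagged with Penn-Treebank-style tags.
tagged = [("The", "DT"), ("customer", "NN"), ("places", "VBZ"),
          ("an", "DT"), ("order", "NN")]

classes, operations = extract_candidates(tagged)
print(classes, operations)  # -> ['customer', 'order'] ['places']
```

A real tool would go further, e.g. attaching the verb to the subject noun as a class method and inferring associations between the noun pairs a verb connects.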

