A Deep Learning-based Mobile Application for Segmenting Tuta Absoluta’s Damage on Tomato Plants

2021 ◽  
Vol 11 (5) ◽  
pp. 7730-7737
Author(s):  
L. Loyani ◽  
D. Machuve

With the advances in technology, computer vision applications using deep learning methods like Convolutional Neural Networks (CNNs) have been extensively applied in agriculture. Deploying these CNN models on mobile phones is beneficial in making them accessible to everyone, especially farmers and agricultural extension officers. This paper aims to automate the detection of damage caused by a devastating tomato pest known as Tuta Absoluta. To accomplish this objective, a CNN segmentation model trained on a tomato leaf image dataset is deployed on a smartphone application for early, real-time diagnosis of the pest and effective management at early tomato growth stages. The application can precisely detect and segment the shapes of Tuta Absoluta-infected areas on tomato leaves with a minimum confidence of 70% in only 5 seconds.
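The abstract does not specify the deployment stack, so the following is only a minimal sketch of how such a segmentation model might run on a device, assuming a TensorFlow Lite export; the model file name, input preprocessing, and output layout are hypothetical, and only the 70% confidence threshold comes from the abstract.

# On-device-style inference with a hypothetical TFLite export of the segmentation CNN.
import numpy as np
import tensorflow as tf
from PIL import Image

interpreter = tf.lite.Interpreter(model_path="tuta_segmentation.tflite")  # hypothetical file name
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Resize a leaf photo to the model's expected input shape and normalize to [0, 1].
h, w = int(inp["shape"][1]), int(inp["shape"][2])
image = np.asarray(Image.open("leaf.jpg").resize((w, h)), dtype=np.float32) / 255.0
interpreter.set_tensor(inp["index"], image[np.newaxis, ...])
interpreter.invoke()

# Keep only pixels whose predicted infection probability is at least 70%,
# mirroring the minimum confidence reported in the abstract (assumed output layout).
mask = interpreter.get_tensor(out["index"])[0]
infected = mask[..., -1] >= 0.70
print(f"Infected pixels: {infected.sum()} of {infected.size}")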

2020 ◽  
Vol 10 ◽  
pp. e00590
Author(s):  
Lilian Mkonyi ◽  
Denis Rubanga ◽  
Mgaya Richard ◽  
Never Zekeya ◽  
Shimada Sawahiko ◽  
...  

GigaScience ◽  
2021 ◽  
Vol 10 (5) ◽  
Author(s):  
Teng Miao ◽  
Weiliang Wen ◽  
Yinglun Li ◽  
Sheng Wu ◽  
Chao Zhu ◽  
...  

Abstract
Background: The 3D point cloud is the most direct and effective data form for studying plant structure and morphology. In point cloud studies, the segmentation of individual plants into organs directly determines the accuracy of organ-level phenotype estimation and the reliability of 3D plant reconstruction. However, highly accurate, automatic, and robust point cloud segmentation approaches for plants are unavailable, so the high-throughput segmentation of many shoots is challenging. Although deep learning can feasibly solve this issue, software tools for 3D point cloud annotation to construct the training dataset are lacking.
Results: We propose a top-down point cloud segmentation algorithm using optimal transportation distance for maize shoots. We apply our point cloud annotation toolkit for maize shoots, Label3DMaize, to achieve semi-automatic point cloud segmentation and annotation of maize shoots at different growth stages through a series of operations, including stem segmentation, coarse segmentation, fine segmentation, and sample-based segmentation. The toolkit takes ∼4–10 minutes to segment a maize shoot and consumes 10–20% of the total time if only coarse segmentation is required. Fine segmentation is more detailed than coarse segmentation, especially at the organ connection regions. The accuracy of coarse segmentation can reach 97.2% of that of fine segmentation.
Conclusion: Label3DMaize integrates point cloud segmentation algorithms and manual interactive operations, realizing semi-automatic point cloud segmentation of maize shoots at different growth stages. The toolkit provides a practical data annotation tool for further online segmentation research based on deep learning and is expected to promote automatic point cloud processing of various plants.
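The toolkit's segmentation algorithm is not reproduced here; the snippet below only illustrates the general idea behind an optimal-transportation distance between two point sets, computed as an optimal one-to-one assignment with SciPy, with synthetic stem- and leaf-like clouds standing in for real maize data.

# Illustrative optimal-transport-style distance between two small 3D point sets,
# computed via optimal assignment; a sketch of the general idea only, not the
# Label3DMaize implementation.
import numpy as np
from scipy.spatial.distance import cdist
from scipy.optimize import linear_sum_assignment

def transport_distance(points_a, points_b):
    """Mean cost of the optimal one-to-one matching between two equally sized point sets."""
    cost = cdist(points_a, points_b)          # pairwise Euclidean costs
    rows, cols = linear_sum_assignment(cost)  # optimal assignment (a special case of transport)
    return cost[rows, cols].mean()

# Two synthetic clusters standing in for a stem region and a leaf region.
rng = np.random.default_rng(0)
stem = rng.normal(0.0, 0.05, size=(200, 3)) + np.array([0.0, 0.0, 0.5])
leaf = rng.normal(0.0, 0.05, size=(200, 3)) + np.array([0.3, 0.0, 0.8])
print(transport_distance(stem, leaf))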


Drones ◽  
2021 ◽  
Vol 5 (2) ◽  
pp. 52
Author(s):  
Thomas Lee ◽  
Susan Mckeever ◽  
Jane Courtney

With the rise of Deep Learning approaches in computer vision applications, significant strides have been made towards vehicular autonomy. Research activity in autonomous drone navigation has increased rapidly in the past five years, and drones are moving fast towards the ultimate goal of near-complete autonomy. However, while much work in the area focuses on specific tasks in drone navigation, the contribution to the overall goal of autonomy is often not assessed, and a comprehensive overview is needed. In this work, a taxonomy of drone navigation autonomy is established by mapping the definitions of vehicular autonomy levels, as defined by the Society of Automotive Engineers, to specific drone tasks in order to create a clear definition of autonomy when applied to drones. A top–down examination of research work in the area is conducted, focusing on drone navigation tasks, in order to understand the extent of research activity in each area. Autonomy levels are cross-checked against the drone navigation tasks addressed in each work to provide a framework for understanding the trajectory of current research. This work serves as a guide to research in drone autonomy with a particular focus on Deep Learning-based solutions, indicating key works and areas of opportunity for development of this area in the future.
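As an illustration of the kind of mapping such a taxonomy establishes, the lookup table below pairs SAE-style autonomy levels with example drone navigation tasks; the level labels and task groupings are assumptions for illustration, not the paper's actual taxonomy.

# Hypothetical mapping of SAE-style autonomy levels to example drone navigation tasks.
DRONE_AUTONOMY_LEVELS = {
    0: {"label": "No automation", "tasks": ["manual piloting"]},
    1: {"label": "Pilot assistance", "tasks": ["altitude hold", "GPS position hold"]},
    2: {"label": "Partial automation", "tasks": ["waypoint following", "return to home"]},
    3: {"label": "Conditional automation", "tasks": ["obstacle avoidance", "visual odometry"]},
    4: {"label": "High automation", "tasks": ["autonomous exploration", "dynamic path planning"]},
    5: {"label": "Full automation", "tasks": ["end-to-end mission execution"]},
}

def level_for_task(task):
    """Return the lowest autonomy level whose example task list contains the given task."""
    for level, entry in sorted(DRONE_AUTONOMY_LEVELS.items()):
        if task in entry["tasks"]:
            return level
    return None

print(level_for_task("obstacle avoidance"))  # -> 3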


IEEE Software ◽  
2020 ◽  
Vol 37 (4) ◽  
pp. 67-74
Author(s):  
Tao Zhang ◽  
Ying Liu ◽  
Jerry Gao ◽  
Li Peng Gao ◽  
Jing Cheng

Author(s):  
Blanka Klimova ◽  
Lukas Sanda

Modern technologies surround people every day, including seniors. The aim of this pilot study was to create a maximally user-friendly mobile application in order to meet older users’ individual needs. The research sample consisted of 13 older individuals aged 55+ years (mean age 67 years) living in the Czech Republic. The key assessment tools of this pilot study were the developed application and usability testing. The findings confirmed that the newly developed mobile application for teaching English met the needs of cognitively healthy seniors and was acceptable and feasible. In addition, it indicated which technical (e.g., visual interface or easy navigation) and pedagogical (e.g., an instructional manual, adjusting to seniors’ learning pace, or clear instructions) aspects should be strictly followed when designing such an educational smartphone application. The authors of this pilot study also provide several implications for pedagogical practice. Further research should include more empirical studies exploring educational mobile applications for older generation groups with respect to meeting their individual needs in order to enhance their overall well-being; such studies are currently very rare.


2018 ◽  
Vol 111 (3) ◽  
pp. 1080-1086 ◽  
Author(s):  
Joop C van Lenteren ◽  
V H P Bueno ◽  
F J Calvo ◽  
Ana M Calixto ◽  
Flavio C Montes

2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Lara Lloret Iglesias ◽  
Pablo Sanz Bellón ◽  
Amaia Pérez del Barrio ◽  
Pablo Menéndez Fernández-Miranda ◽  
David Rodríguez González ◽  
...  

Abstract
Deep learning is nowadays at the forefront of artificial intelligence. More precisely, the use of convolutional neural networks has drastically improved the learning capabilities of computer vision applications, which can directly consider raw data without any prior feature extraction. Advanced methods in the machine learning field, such as adaptive momentum algorithms or dropout regularization, have dramatically improved the predictive ability of convolutional neural networks, which now outperform conventional fully connected neural networks. This work summarizes, in an intentionally didactic way, the main aspects of these cutting-edge techniques from a medical imaging perspective.
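Since the review is didactic, a minimal sketch of the two techniques it names (an adaptive-momentum optimizer and dropout regularization) may be useful; the small Keras model below is an arbitrary illustrative architecture, not one proposed in the article.

# Small CNN illustrating dropout regularization and the Adam adaptive-momentum
# optimizer; the architecture and sizes are arbitrary choices for illustration.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),           # e.g. a small grayscale image
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.5),                        # dropout regularization
    tf.keras.layers.Dense(2, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # adaptive momentum algorithm
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()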


Author(s):  
Murizah Kassim ◽  
Ahmad Syafiq Aiman A Bakar

Public bus transportation has become an integral part of society, but the disruption of bus services is a major concern. This project presents the development of Smart Bus Transportation using Augmented Reality (TRANSPAR), implemented as a mobile application. One of the major issues with public transportation is real-time responsiveness. Most bus schedules are presented online, but customers still face many failures: some schedules are not updated when changes happen over time, and some existing schedule systems are fixed to the bus stations. This research identifies bus schedules and the characteristics of their routes. A 3D AR animation based on the identified characteristics was designed using Unity 3D image marker detection on a mobile Android platform. The smartphone application was developed using Vuforia and Google Firebase. TRANSPAR is an AR mobile application for acquiring bus timetables. The phone camera is used for marker image detection and for scanning bus station images; both an AR scanner and a normal image scanner were designed. The Google Firebase Database is used to store and retrieve the timetable data for every bus station. Analysis of the interactivity and benefits of TRANSPAR shows that, of 50 sampled respondents, about 90% agreed on the use of AR and more than 76% agreed on its functionality, indicating a positive impact of the designed TRANSPAR. The research is significant in encouraging the public to experience new technological applications for public transportation, and it has an impact on society.
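The abstract names Google Firebase as the timetable store; as a rough sketch of that part of the architecture (outside Unity), the snippet below reads a timetable node over the Firebase Realtime Database REST interface, with a hypothetical database URL and JSON layout.

# Reading a bus-station timetable from a Firebase Realtime Database over its
# REST interface; the database URL and JSON layout are hypothetical.
import requests

FIREBASE_URL = "https://transpar-demo.firebaseio.com"    # hypothetical project URL

def fetch_timetable(station_id):
    """Return the data stored under /timetables/<station_id>, or None if absent."""
    response = requests.get(f"{FIREBASE_URL}/timetables/{station_id}.json", timeout=10)
    response.raise_for_status()
    return response.json()

timetable = fetch_timetable("station_01")
if timetable:
    for departure in timetable.get("departures", []):
        print(departure.get("route"), departure.get("time"))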


Dengue has become endemic in Malaysia, and the cost of operations to exterminate mosquito habitats is high. Effective operations depend on information from the community, but without knowing the characteristics of Aedes larvae it is hard to recognize the larvae without guidance from an expert. Deep learning for image classification and recognition is well suited to tackle this problem. The purpose of this project is to study the characteristics of Aedes larvae and determine the best convolutional neural network model for classifying mosquito larvae. Three performance evaluation metrics, namely accuracy, log-loss, and AUC-ROC, are used to measure each model's individual performance. Performance categories consisting of Accuracy Score, Loss Score, File Size Score, and Training Time Score are then used to evaluate which model is best to implement in a web or mobile application. From the scores collected for each model, ResNet50 proved to be the best model for classifying the mosquito larvae species.
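A minimal sketch of computing the three metrics named above with scikit-learn follows; the synthetic binary predictions stand in for the probabilities each candidate CNN (e.g., ResNet50) would produce on a held-out larvae test set.

# Accuracy, log-loss, and AUC-ROC on synthetic binary predictions; in the study
# these would come from each candidate CNN's held-out predictions.
import numpy as np
from sklearn.metrics import accuracy_score, log_loss, roc_auc_score

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=200)                    # 0 = non-Aedes, 1 = Aedes (illustrative labels)
y_prob = np.clip(y_true * 0.7 + rng.normal(0.15, 0.2, 200), 0.01, 0.99)
y_pred = (y_prob >= 0.5).astype(int)

print("accuracy:", accuracy_score(y_true, y_pred))
print("log-loss:", log_loss(y_true, y_prob))
print("AUC-ROC :", roc_auc_score(y_true, y_prob))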

