Detection and Quantification of Tar Spot Foliar Infection in Maize Using Machine Learning, Object Detection, and Application Development Framework

2021 ◽  
Author(s):  
Jordan Manchego ◽  
Addie Thompson


Author(s):
Toivo Ylinampa ◽  
Hannu Saarenmaa

New innovations are needed to speed up the digitisation of insect collections. More than one half of all specimens in scientific collections are pinned insects; in Europe this means 500-1,000 million such specimens. Today’s fastest mass-digitisation (i.e., imaging) systems for pinned insects achieve circa 70 specimens/hour and 500/day per operator (Tegelberg et al. 2014, Tegelberg et al. 2017). This is in contrast to the 5,000/day rate of state-of-the-art mass-digitisation systems for herbarium sheets (Oever and Gofferje 2012). The slowness of imaging pinned insects follows from the fact that they are essentially 3D objects. Although butterflies, moths, dragonflies and similar large-winged insects can be prepared (spread) as 2D objects, the labels pinned under the insect specimen make even these samples 3D. In imaging, the labels are often removed manually, which slows down the process. If manual handling of the labels can be skipped, the imaging speed can easily be multiplied.

ENTODIG-3D (Fig. 1) is an automated camera system that photographs insect collection boxes (units and drawers) and digitises them, minimising time-consuming manual handling of specimens. “Units” are small boxes or trays contained in the drawers of collection cabinets, and are used in most major insect collections. A camera is mounted on motorised rails and moves in two dimensions over a unit or a drawer; its movement is guided by a machine learning object detection program. QR codes are printed and placed underneath the unit or drawer, and may contain additional information about each specimen, for example its place of origin in the collection. The object detection program also detects each specimen and stores its coordinates. The camera mount rotates and tilts, so the camera can take photographs from all angles and positions. The pictures are transferred to a computer, which calculates a 3D model with photogrammetry, from which the label text beneath each specimen may be read. This approach requires heavy computation for the segmentation of the top images, the creation of a 3D model of the unit, and the extraction of label images of many specimens. Firstly, a sparse point cloud is calculated; secondly, a dense point cloud; finally, a textured mesh. With machine learning object detection, the top layer, which consists of the insect, may be removed. This leaves the bottom layer with the labels visible for later processing by OCR (optical character recognition).

This is a new approach to digitising pinned insects in collections. The physical setup is not expensive, so many systems could be installed in parallel to work overnight and produce the images of tens of drawers. The setup is not physically demanding for the specimens, as they can be left untouched in the unit or drawer. A digital object is created, consisting of the label text, the unit or drawer QR code, the specimen coordinates in a drawer with a unique identifier, and a top-view photo of the specimen. The drawback of this approach is the heavy computing needed to create the 3D models. ENTODIG-3D can currently digitise one sample in five minutes, almost without manual work; a sustainable rate of approximately one hundred thousand samples per year is theoretically possible. This rate is similar to that of the current insect digitisation system in Helsinki (Tegelberg et al. 2017), but without the need for manual handling of individual specimens. By adding more computing power, the rate may be increased linearly.
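As a concrete illustration of the digital object enumerated above (label text, unit or drawer QR code, specimen coordinates, top-view photo), the minimal Java sketch below models one specimen record; the field names and units are assumptions for illustration, not ENTODIG-3D's actual schema:

// Hypothetical sketch of the digital object described above; field
// names and units are assumptions, not ENTODIG-3D's actual schema.
import java.nio.file.Path;

public record SpecimenDigitalObject(
        String labelText,     // OCR'd text from the labels under the pin
        String unitQrCode,    // QR code identifying the unit or drawer
        String drawerId,      // unique identifier of the drawer
        double xMillimetres,  // specimen x-coordinate within the drawer
        double yMillimetres,  // specimen y-coordinate within the drawer
        Path topViewPhoto) {  // path to the top-view photograph

    public static void main(String[] args) {
        var specimen = new SpecimenDigitalObject(
                "Lepidoptera, Helsinki, 1952, leg. Example",
                "QR-UNIT-0042", "DRAWER-17", 31.5, 88.0,
                Path.of("images/drawer17/specimen042_top.jpg"));
        System.out.println(specimen);
    }
}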


2020 ◽  
Author(s):  
Darshak Mota ◽  
Neel Zadafiya ◽  
Jinan Fiaidhi

Java Spring is an application development framework for enterprise Java. It is an open source platform used to develop robust Java applications easily. Spring applications can also be built using the MVC structure. The MVC architecture is based on the Model-View-Controller pattern, in which the project code is divided into three parts or sections, helping to keep code files and other files organised. The Model, View and Controller code are interrelated and often pass and fetch information from each other without all the code being in a single file, which makes the program easier to test. Testing the application during and after development is an integral part of the Software Development Life Cycle (SDLC). Different techniques have been used to test a web application developed using the Java Spring MVC architecture, and this work compares the results of the three different techniques used to test the web application.
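The abstract does not include the application under test; as a generic sketch of the MVC separation and of one common Spring testing technique (a MockMvc web-layer slice test), consider the following two files. The class names, the /greeting route, and the view name are invented for illustration:

// GreetingController.java -- Controller: receives the request,
// populates the Model, and names the View to render.
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.GetMapping;

@Controller
public class GreetingController {
    @GetMapping("/greeting")
    public String greeting(Model model) {
        model.addAttribute("message", "Hello, Spring MVC");
        return "greeting"; // resolved to the "greeting" view template
    }
}

// GreetingControllerTest.java -- exercises only the web layer,
// without starting a full server.
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest;
import org.springframework.test.web.servlet.MockMvc;

@WebMvcTest(GreetingController.class)
class GreetingControllerTest {
    @Autowired
    private MockMvc mockMvc;

    @Test
    void greetingReturnsOk() throws Exception {
        mockMvc.perform(get("/greeting"))
               .andExpect(status().isOk());
    }
}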


2021 ◽  
Vol 11 (5) ◽  
pp. 2177
Author(s):  
Zuo Xiang ◽  
Patrick Seeling ◽  
Frank H. P. Fitzek

With increasing numbers of computer vision and object detection application scenarios, those requiring ultra-low service latency have become increasingly prominent, e.g., autonomous and connected vehicles or smart city applications. The incorporation of machine learning through the application of trained models in these scenarios can pose a computational challenge. The softwarization of networks provides opportunities to incorporate computing into the network, increasing flexibility by distributing workloads through offloading from client and edge nodes over in-network nodes to servers. In this article, we present an example of splitting the inference component of the YOLOv2 trained machine learning model between client, network, and service side processing to reduce the overall service latency. Assuming a client has 20% of the server's computational resources, we observe a more than 12-fold reduction of service latency when incorporating our service split compared to on-client processing, and an increase in speed of more than 25% compared to performing everything on the server. Our approach is not only applicable to object detection, but can also be applied in a broad variety of machine learning-based applications and services.
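The trade-off behind such a split can be reasoned about with a toy sequential model: the client runs the first fraction of the network, ships the intermediate tensor, and the server finishes. The Java sketch below uses invented per-inference and transfer costs, and it ignores the pipelining and in-network processing the paper exploits, so it will not reproduce the reported gains; it only makes the client-compute versus transfer trade-off explicit:

// Toy model of split-inference latency. All numbers are illustrative
// assumptions, not measurements from the paper.
public class SplitLatencyModel {
    static final double SERVER_INFER_MS = 40.0; // full inference on the server
    static final double CLIENT_SPEED = 0.2;     // client has 20% of server compute
    static final double UPLINK_MS = 25.0;       // shipping the raw input uplink

    // Latency if the client runs the first `fraction` of the network,
    // transfers the intermediate tensor, and the server runs the rest.
    static double splitLatency(double fraction, double transferMs) {
        double clientPart = fraction * SERVER_INFER_MS / CLIENT_SPEED;
        double serverPart = (1.0 - fraction) * SERVER_INFER_MS;
        return clientPart + transferMs + serverPart;
    }

    public static void main(String[] args) {
        System.out.printf("on-client only: %.1f ms%n", SERVER_INFER_MS / CLIENT_SPEED);
        System.out.printf("on-server only: %.1f ms%n", SERVER_INFER_MS + UPLINK_MS);
        // Early layers often shrink the data, so assume the transfer cost
        // falls as more of the network runs on the client (a crude assumption).
        for (double f = 0.0; f <= 0.4; f += 0.1) {
            double transfer = UPLINK_MS * (1.0 - 1.5 * f);
            System.out.printf("%.0f%% on client: %.1f ms%n",
                    f * 100, splitLatency(f, transfer));
        }
    }
}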


Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2514
Author(s):  
Tharindu Kaluarachchi ◽  
Andrew Reis ◽  
Suranga Nanayakkara

Since Deep Learning (DL) recently regained popularity, the Artificial Intelligence (AI) or Machine Learning (ML) field has been undergoing rapid growth in both research and real-world application development. Deep Learning has made algorithms more complex, and researchers and users have raised concerns regarding the usability and adoptability of Deep Learning systems. These concerns, coupled with increasing human-AI interactions, have created the emerging field of Human-Centered Machine Learning (HCML). We present this review paper as an overview and analysis of existing work in HCML related to DL. Firstly, we collaborated with field domain experts to develop a working definition of HCML. Secondly, through a systematic literature review, we analyze and classify 162 publications that fall within HCML. Our classification is based on aspects including contribution type, application area, and the human categories in focus. Finally, we analyze the topology of the HCML landscape by identifying research gaps, highlighting conflicting interpretations, addressing current challenges, and presenting future HCML research opportunities.


2021 ◽  
Vol 11 (13) ◽  
pp. 6006
Author(s):  
Huy Le ◽  
Minh Nguyen ◽  
Wei Qi Yan ◽  
Hoa Nguyen

Augmented reality is one of the fastest-growing fields, receiving increased funding over the last few years as people realise the potential benefits of rendering virtual information in the real world. Most of today’s marker-based augmented reality applications use local feature detection and tracking techniques. The disadvantage of applying these techniques is that the markers must be modified to match the particular classification algorithm, or they suffer from low detection accuracy. Machine learning is an ideal solution for overcoming the current drawbacks of image processing in augmented reality applications. However, traditional data annotation requires extensive time and labour, as it is usually done manually. This study incorporates machine learning to detect and track augmented reality marker targets in an application using deep neural networks. We first implement an auto-generated dataset tool, which is used to prepare the machine learning dataset. The final iOS prototype application incorporates object detection, object tracking and augmented reality. The machine learning model is trained to recognise the differences between targets using YOLO, one of the most well-known object detection methods. The final product makes use of ARKit, a valuable toolkit for developing augmented reality applications.
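The paper's auto-generated dataset tool itself is not shown; a core piece of any such tool is emitting training labels. The Java sketch below writes one annotation in the standard YOLO text format (class index plus normalised centre coordinates and box size); the file paths and the composited-marker geometry are invented for illustration:

// Sketch of auto-generating a YOLO-format annotation: one line per object,
// "class x_center y_center width height", coordinates normalised to [0,1].
// Paths and the pasted-marker geometry are illustrative assumptions.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class YoloAnnotationWriter {
    static String yoloLine(int classId, int imgW, int imgH,
                           int boxX, int boxY, int boxW, int boxH) {
        double xc = (boxX + boxW / 2.0) / imgW;
        double yc = (boxY + boxH / 2.0) / imgH;
        return String.format("%d %.6f %.6f %.6f %.6f",
                classId, xc, yc, (double) boxW / imgW, (double) boxH / imgH);
    }

    public static void main(String[] args) throws IOException {
        // Suppose a 100x100 px marker was composited at (270, 190)
        // onto a 640x480 background during dataset generation.
        String label = yoloLine(0, 640, 480, 270, 190, 100, 100);
        Files.createDirectories(Path.of("dataset/labels"));
        Files.writeString(Path.of("dataset/labels/synthetic_0001.txt"), label);
        System.out.println(label); // e.g. "0 0.500000 0.500000 0.156250 0.208333"
    }
}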

