Applications of deep-learning approaches in horticultural research: a review

2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Biyun Yang ◽  
Yong Xu

Deep learning is known as a promising multifunctional tool for processing images and other big data. By assimilating large amounts of heterogeneous data, deep-learning technology provides reliable prediction results for complex and uncertain phenomena. Recently, it has been increasingly used by horticultural researchers to make sense of the large datasets produced during planting and postharvest processes. In this paper, we provide a brief introduction to deep-learning approaches and review 71 recent research works in which deep-learning technologies were applied in the horticultural domain for variety recognition, yield estimation, quality detection, stress phenotyping detection, growth monitoring, and other tasks. We describe in detail the application scenarios reported in the relevant literature, along with the applied models and frameworks, the data used, and the overall performance achieved. Finally, we discuss the current challenges and future trends of deep learning in horticultural research. The aim of this review is to assist researchers and guide them in fully understanding the strengths and possible weaknesses of applying deep learning in the horticultural sector. We also hope that this review will encourage researchers to explore significant examples of deep learning in horticultural science and will promote the advancement of intelligent horticulture.

2021 ◽  
Vol 21 (1) ◽  
pp. 19
Author(s):  
Asri Rizki Yuliani ◽  
M. Faizal Amri ◽  
Endang Suryawati ◽  
Ade Ramdan ◽  
Hilman Ferdinandus Pardede

Speech enhancement, which aims to recover clean speech from a corrupted signal, plays an important role in digital speech signal processing. Approaches to speech enhancement vary according to the type of degradation and noise in the speech signal. Thus, the research topic remains challenging in practice, especially when dealing with highly non-stationary noise and reverberation. Recent advances in deep-learning technologies have provided great support for progress in the speech enhancement research field. Deep learning has been shown to outperform the statistical models used in conventional speech enhancement, and it therefore deserves a dedicated survey. In this review, we describe the advantages and disadvantages of recent deep-learning approaches and discuss the challenges and trends of the field. From the reviewed works, we conclude that the trend in deep-learning architectures has shifted from the standard deep neural network (DNN) to the convolutional neural network (CNN), which can efficiently learn temporal information from the speech signal, and the generative adversarial network (GAN), which trains two networks against each other.
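As a minimal illustrative sketch of the CNN-based direction described above (not a specific model from the review), the snippet below estimates a time-frequency mask from a noisy magnitude spectrogram; the network shape and STFT parameters are assumptions chosen for illustration.

```python
# Sketch: CNN mask estimation for speech enhancement on a magnitude spectrogram (assumed setup).
import torch
import torch.nn as nn

class MaskingCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Treat the spectrogram as a 1-channel image: (batch, 1, freq, time).
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),  # mask values in [0, 1]
        )

    def forward(self, noisy_mag: torch.Tensor) -> torch.Tensor:
        mask = self.net(noisy_mag.unsqueeze(1)).squeeze(1)
        return mask * noisy_mag  # estimated clean magnitude

# Toy usage: 1 second of 16 kHz "noisy" audio -> magnitude spectrogram -> enhanced magnitude.
noisy = torch.randn(1, 16000)
spec = torch.stft(noisy, n_fft=512, hop_length=256, return_complex=True)
noisy_mag = spec.abs()                 # (1, 257, frames)
enhanced_mag = MaskingCNN()(noisy_mag)
print(enhanced_mag.shape)              # same shape as the input spectrogram
```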


2019 ◽  
Vol 21 (5) ◽  
pp. 1609-1627 ◽  
Author(s):  
Tianlin Zhang ◽  
Jiaxu Leng ◽  
Ying Liu

Drug–drug interactions (DDIs) are crucial for drug research and pharmacovigilance. These interactions may cause adverse drug effects that threaten public health and patient safety. Therefore, the extraction of DDIs from the biomedical literature has been widely studied and emphasized in modern biomedical research. Previous rule-based and machine learning approaches rely on tedious feature engineering, which is laborious, time-consuming and often unsatisfactory. With the development of deep-learning technologies, this problem is alleviated by learning feature representations automatically. Here, we review the recent deep-learning methods that have been applied to the extraction of DDIs from the biomedical literature. We describe each method briefly and systematically compare their performance on the DDI corpus. Next, we summarize the advantages and disadvantages of these deep-learning models for this task. Furthermore, we discuss some challenges and future perspectives of DDI extraction via deep-learning methods. This review aims to serve as a useful guide for interested researchers seeking to further advance bioinformatics algorithms for DDI extraction from the literature.
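For readers unfamiliar with the task framing, the following sketch (toy vocabulary, random weights; not one of the reviewed systems) shows the common formulation of DDI extraction as sentence-level relation classification, with the two candidate drugs replaced by marker tokens.

```python
# Sketch: BiLSTM relation classifier over an entity-marked sentence (assumed toy setup).
import torch
import torch.nn as nn

LABELS = ["negative", "advise", "effect", "mechanism", "int"]  # DDIExtraction-2013-style classes

class BiLSTMRelationClassifier(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 64, hidden: int = 64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, len(LABELS))

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        h, _ = self.lstm(self.emb(token_ids))  # (batch, seq, 2*hidden)
        return self.out(h.mean(dim=1))         # pool over tokens -> class logits

# Toy usage with a hand-built vocabulary; real systems use pre-trained word or BERT embeddings.
vocab = {w: i for i, w in enumerate(["<pad>", "DRUG1", "should", "not", "be", "taken", "with", "DRUG2"])}
sentence = ["DRUG1", "should", "not", "be", "taken", "with", "DRUG2"]
ids = torch.tensor([[vocab[w] for w in sentence]])
logits = BiLSTMRelationClassifier(len(vocab))(ids)
print(LABELS[int(logits.argmax(dim=-1))])  # untrained, so the prediction is arbitrary
```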


2021 ◽  
Vol 3 (3) ◽  
pp. 190-207
Author(s):  
S. K. B. Sangeetha

In recent years, deep-learning systems have made great progress, particularly in the disciplines of computer vision and pattern recognition. Deep-learning technology can be used to enable inference models to perform real-time object detection and recognition. Using deep-learning-based designs, eye tracking systems can determine the position of the eyes or pupils, regardless of whether visible-light or near-infrared image sensors are used. For emerging electronic vehicle systems, such as driver monitoring systems and new touch screens, accurate and reliable eye gaze estimation is critical. Such systems must operate efficiently and at a reasonable cost in demanding, unconstrained, low-power situations. A thorough examination of the different deep-learning approaches is required to take into account all of the limitations and opportunities of eye gaze tracking. The goal of this research is to learn more about the history of eye gaze tracking, as well as how deep learning has contributed to computer-vision-based tracking. Finally, this research presents a generalized system model for deep-learning-driven eye gaze direction diagnostics, as well as a comparison of several approaches.
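A minimal sketch of the appearance-based formulation discussed above (not the paper's own model): a small CNN regresses a 2D gaze direction from an eye-region crop; the input size and output convention are assumptions.

```python
# Sketch: CNN gaze regression from an eye patch to (yaw, pitch), an assumed minimal design.
import torch
import torch.nn as nn

class GazeCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),   # grayscale eye patch, e.g. from an NIR camera
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)  # gaze direction as (yaw, pitch) in radians

    def forward(self, eye_patch: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(eye_patch).flatten(1))

# Toy usage: a batch of 60x36 eye crops (a size used by common gaze datasets such as MPIIGaze).
eyes = torch.randn(8, 1, 36, 60)
print(GazeCNN()(eyes).shape)  # torch.Size([8, 2])
```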


Author(s):  
Rajasekaran Thangaraj ◽  
Sivaramakrishnan Rajendar ◽  
Vidhya Kandasamy

Healthcare monitoring has become a popular research area in recent years. The evolution of electronic devices has produced numerous wearable devices that can be used in a variety of healthcare monitoring systems. These devices measure a patient's health parameters and transmit them for further processing, where the acquired data are analyzed. The analysis provides patients or their relatives with the required medical support or with predictions based on the acquired data. Cloud computing, deep learning, and machine learning technologies play a prominent role in processing and analyzing these data. This chapter aims to provide a detailed study of IoT-based healthcare systems, the variety of sensors used to measure health parameters, and the various deep learning and machine learning approaches introduced for the diagnosis of different diseases. The chapter also highlights the challenges, open issues, and performance considerations for future IoT-based healthcare research.
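As a hedged sketch of the kind of analysis step such pipelines typically apply after data collection (synthetic data, hypothetical classes; not from the chapter itself), a 1D CNN can classify a short window of wearable-sensor readings:

```python
# Sketch: 1D CNN over a window of wearable-sensor channels (assumed channels and classes).
import torch
import torch.nn as nn

class SensorWindowClassifier(nn.Module):
    def __init__(self, channels: int = 4, num_classes: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, num_classes),   # e.g. normal / abnormal / sensor-fault (hypothetical labels)
        )

    def forward(self, window: torch.Tensor) -> torch.Tensor:
        return self.net(window)

# Toy usage: a 10-second window sampled at 25 Hz from 4 channels (heart rate + 3-axis accelerometer).
window = torch.randn(1, 4, 250)
print(SensorWindowClassifier()(window).softmax(dim=1))
```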


2019 ◽  
Vol 35 (4) ◽  
pp. 328-337
Author(s):  
Byeongseop Kim ◽  
Seunghyeok Son ◽  
Cheolho Ryu ◽  
Jong Gye Shin

Curved hull plate forming, the process of forming a flat plate into a curved surface that fits the outer shell of a ship's hull, can be achieved through either cold or thermal forming processes, with the latter further subcategorized into line heating and triangle heating. The appropriate forming process is determined from the plate shape and surface classification, which must be determined in advance to establish a precise production plan. In this study, an algorithm to extract two-dimensional features of constant size from three-dimensional design information was developed to enable the application of machine and deep learning technologies to hull plates with arbitrary polygonal shapes. Several candidate classifiers were implemented by applying learning algorithms to datasets comprising the calculated features and labels corresponding to various hull plate types, with the performance of each classifier evaluated using cross-validation. A classifier applying a convolutional neural network as a deep learning technology was found to have the highest prediction accuracy, exceeding the accuracies obtained in previous hull plate classification studies. The results of this study demonstrate that hull plates can be classified automatically with high accuracy using deep learning technologies and that a perfect level of classification accuracy can be approached by obtaining further plate data.
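The cross-validated comparison of candidate classifiers can be sketched as follows; the feature grid size, plate-type labels, and the particular classifiers are assumptions standing in for the study's actual feature extraction and models.

```python
# Sketch: comparing candidate classifiers with 5-fold cross-validation on fixed-size
# 2D feature maps extracted from plate geometry (synthetic data, assumed 8x8 grid).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_plates, grid = 300, (8, 8)                 # hypothetical: 8x8 grid of curvature-like features
X = rng.normal(size=(n_plates, grid[0] * grid[1]))
y = rng.integers(0, 4, size=n_plates)        # e.g. flat / convex / saddle / twisted plate types

for name, clf in [
    ("SVM", SVC()),
    ("Random forest", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("MLP", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)),
]:
    scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validation accuracy
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```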


2020 ◽  
Vol 6 (3) ◽  
pp. 27-32
Author(s):  
Artur S. Ter-Levonian ◽  
Konstantin A. Koshechkin

Introduction: Nowadays, the increasing amount of information creates the need to replace and update data-processing technologies. One of the tasks of clinical pharmacology is to create the right combination of drugs for the treatment of a particular disease, and it takes months or even years to create a treatment regimen. Using machine learning (in silico) makes it possible to predict the right combination of drugs and to skip experimental steps that consume considerable time and money. Deep learning of drug synergy requires gradual preparation, starting with building a database of drugs, their characteristics, and their modes of interaction. Aim: Our review aims to draw attention to the prospect of introducing deep-learning technology to predict possible combinations of drugs for the treatment of various diseases. Materials and methods: A literature review of articles based on the PUBMED project and related bibliographic resources over the past 5 years (2015–2019). Results and discussion: In the analyzed articles, machine or deep learning completed the assigned tasks: it was able to determine the most appropriate combinations for the treatment of certain diseases and to select the necessary regimens and doses. In addition, using this technology, new combinations have been identified that may be further investigated in preclinical studies. Conclusions: From the analysis of the articles, we obtained evidence of the positive effect of deep learning in selecting "key" combinations for further stages of preclinical research.
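A common deep-learning formulation of drug-synergy prediction, sketched here with random features and untrained weights (an assumed minimal design, not a model from the reviewed articles): feature vectors of two drugs and a cell line are concatenated and regressed to a synergy score.

```python
# Sketch: feed-forward synergy-score regression from concatenated drug and cell-line features.
import torch
import torch.nn as nn

class SynergyNet(nn.Module):
    def __init__(self, drug_dim: int = 256, cell_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * drug_dim + cell_dim, 512), nn.ReLU(),
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, 1),  # predicted synergy score (e.g. a Loewe/Bliss-style value)
        )

    def forward(self, drug_a, drug_b, cell_line):
        return self.net(torch.cat([drug_a, drug_b, cell_line], dim=-1)).squeeze(-1)

# Toy usage: a batch of 4 drug-pair/cell-line combinations with random descriptors.
drug_a, drug_b = torch.randn(4, 256), torch.randn(4, 256)
cell_line = torch.randn(4, 128)
print(SynergyNet()(drug_a, drug_b, cell_line))  # 4 scores, meaningless until trained
```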


2021 ◽  
Vol 11 (17) ◽  
pp. 8227 ◽  
Author(s):  
Andrea Loddo ◽  
Fabio Pili ◽  
Cecilia Di Ruberto

COVID-19, an infectious coronavirus disease, caused a pandemic with countless deaths. From the outset, clinical institutes have explored computed tomography as an effective and complementary screening tool alongside the reverse transcriptase-polymerase chain reaction. Deep-learning techniques have shown promising results in similar medical tasks and, hence, may provide solutions to COVID-19 based on medical images of patients. We aim to contribute to the research in this field by: (i) comparing different architectures on a public and extended reference dataset to find the most suitable; (ii) proposing a patient-oriented investigation of the best-performing networks; and (iii) evaluating their robustness in a real-world scenario, represented by cross-dataset experiments. We exploited ten well-known convolutional neural networks on two public datasets. The results show that, on the reference dataset, the most suitable architecture is VGG19, which (i) achieved 98.87% accuracy in the network comparison; (ii) obtained 95.91% accuracy on patient status classification, even though it misclassifies some patients that other networks classify correctly; and (iii) reached only 70.15% accuracy in the cross-dataset experiments, exposing the limitations of deep-learning approaches in a real-world scenario and the need for further work to improve robustness. Thus, the VGG19 architecture showed promising performance in the classification of COVID-19 cases. Nonetheless, this architecture leaves room for extensive improvement through modification or the addition of a preprocessing step. Finally, the cross-dataset experiments exposed the critical weakness of classifying images from heterogeneous data sources, a setting representative of real-world use.
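A minimal sketch of how a VGG19 backbone is adapted to the two-class CT setting (weights left untrained here; in practice the ImageNet-pretrained weights would be loaded and fine-tuned on the CT dataset, and this is not the authors' exact training pipeline):

```python
# Sketch: torchvision VGG19 adapted to COVID-19 / non-COVID CT classification (assumed setup).
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg19()                        # load pretrained ImageNet weights in real use
model.classifier[6] = nn.Linear(4096, 2)      # replace the 1000-class head with 2 classes

ct_batch = torch.randn(4, 3, 224, 224)        # CT slices resized and replicated to 3 channels
logits = model(ct_batch)
print(logits.shape, logits.softmax(dim=1)[0]) # torch.Size([4, 2]) and per-class probabilities
```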


Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 650
Author(s):  
Minki Kim ◽  
Sunwon Kang ◽  
Byoung-Dai Lee

Recently, deep learning has been employed in medical image analysis for several clinical imaging methods, such as X-ray, computed tomography, magnetic resonance imaging, and pathological tissue imaging, and excellent performance has been reported. With the development of these methods, deep-learning technologies have rapidly evolved in the hair-loss segment of the healthcare industry. Hair density measurement (HDM) is a process used to assess the severity of hair loss by counting the number of hairs present in the occipital donor region for transplantation. HDM is a typical object detection and classification problem that could benefit from deep learning. This study analyzed the accuracy of HDM by applying deep-learning technology for object detection and reports on the feasibility of automating HDM. The dataset for training and evaluation comprised 4492 enlarged scalp RGB images obtained from male hair-loss patients, together with annotation data containing the locations of the hair follicles present in each image and the follicle type according to the number of hairs. EfficientDet, YOLOv4, and DetectoRS were used as object detection algorithms for performance comparison. The experimental results indicated that YOLOv4 had the best performance, with a mean average precision of 58.67.
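The detection framing can be sketched as follows. The study compares EfficientDet, YOLOv4, and DetectoRS; here a torchvision Faster R-CNN is used purely as a readily available stand-in detector, and the class list (follicles with 1, 2, or 3 hairs) is an assumption for illustration.

```python
# Sketch: HDM as object detection with follicle classes by hair count (stand-in detector).
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 1 + 3  # background + assumed follicle types (1-hair, 2-hair, 3-hair)
model = fasterrcnn_resnet50_fpn()
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

model.eval()
scalp_image = torch.rand(3, 512, 512)         # one enlarged scalp RGB image, values in [0, 1]
with torch.no_grad():
    pred = model([scalp_image])[0]            # dict with 'boxes', 'labels', 'scores'
print(pred["boxes"].shape, pred["labels"][:5], pred["scores"][:5])
```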


Sensors ◽  
2020 ◽  
Vol 20 (15) ◽  
pp. 4325
Author(s):  
Tiange Wang ◽  
Fangfang Yang ◽  
Kwok-Leung Tsui

Railway inspection has always been a critical task for guaranteeing the safety of railway transportation. The development of deep-learning technologies brings new breakthroughs in the accuracy and speed of image-based railway inspection applications. In this work, a series of one-stage deep-learning approaches, which are both fast and accurate, is proposed to inspect the key components of the railway track, including the rail, bolts, and clips. The inspection results show that the enhanced model, the second version of You Only Look Once (YOLOv2), presents the best component detection performance, with 93% mean average precision (mAP) at 35 images per second (IPS), whereas the feature pyramid network (FPN) based model yields a lower mAP and a much longer inference time. In addition, the detection performance of further deep-learning approaches is evaluated under varying input sizes, where a larger input size usually improves detection accuracy but results in a longer inference time. Overall, the YOLO-series models achieve higher speed at the same detection accuracy.
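The speed/accuracy trade-off across input sizes can be probed with a simple timing loop, sketched below on random images (torchvision's one-stage RetinaNet is used as a stand-in for YOLOv2, and the candidate input sizes are assumptions):

```python
# Sketch: measuring images per second (IPS) of a one-stage detector at different input sizes.
import time
import torch
from torchvision.models.detection import retinanet_resnet50_fpn

model = retinanet_resnet50_fpn().eval()

for side in (320, 416, 608):                    # candidate input sizes, YOLO-style
    image = torch.rand(3, side, side)
    with torch.no_grad():
        model([image])                          # warm-up run
        start = time.perf_counter()
        for _ in range(3):
            model([image])
        ips = 3 / (time.perf_counter() - start)  # images per second at this input size
    print(f"input {side}x{side}: {ips:.1f} IPS")
```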


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 1951
Author(s):  
Fahad Jibrin Abdu ◽  
Yixiong Zhang ◽  
Maozhong Fu ◽  
Yuhan Li ◽  
Zhenmiao Deng

The progress brought by deep-learning technology over the last decade has inspired many research domains, such as radar signal processing and speech and audio recognition, to apply it to their respective problems. Most of the prominent deep-learning models exploit data representations acquired with either Lidar or camera sensors, leaving automotive radars rarely used. This is despite the vital potential of radars in adverse weather conditions, as well as their ability to measure an object's range and radial velocity simultaneously. Because radar signals have not been exploited very much so far, there is a lack of available benchmark data. Recently, however, there has been growing interest in applying radar data as input to various deep-learning algorithms, as more datasets become available. To this end, this paper presents a survey of deep-learning approaches that process radar signals to accomplish significant tasks in autonomous driving, such as detection and classification. We organize the review by radar signal representation, as this is one of the critical aspects of using radar data with deep-learning models. Furthermore, we give an extensive review of recent deep-learning-based multi-sensor fusion models that exploit radar signals and camera images for object detection tasks. We then provide a summary of the available datasets containing radar data. Finally, we discuss the gaps and important innovations in the reviewed papers and highlight some possible future research prospects.
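As a minimal sketch of one of the radar signal representations such surveys organize around, the snippet below builds a range-Doppler map from a synthetic FMCW radar cube with a 2D FFT; the cube dimensions and the single-target signal are assumptions, and the resulting log-magnitude map is the kind of "image" typically fed to a CNN.

```python
# Sketch: range-Doppler map from a synthetic radar ADC cube (chirps x samples) via 2D FFT.
import numpy as np

num_chirps, num_samples = 128, 256
t = np.arange(num_samples) / num_samples
# Synthetic beat signal for a single target (range tone + Doppler phase progression) plus noise.
target = np.exp(2j * np.pi * (40 * t[None, :] + 0.1 * np.arange(num_chirps)[:, None]))
adc_cube = target + 0.1 * (np.random.randn(num_chirps, num_samples)
                           + 1j * np.random.randn(num_chirps, num_samples))

range_fft = np.fft.fft(adc_cube, axis=1)                                # range dimension
range_doppler = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)  # Doppler dimension
rd_map_db = 20 * np.log10(np.abs(range_doppler) + 1e-12)                # log-magnitude CNN input

peak = np.unravel_index(rd_map_db.argmax(), rd_map_db.shape)
print(rd_map_db.shape, "peak at (doppler bin, range bin):", peak)
```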

