Deep learning and citizen science enable automated plant trait predictions from photographs

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Christopher Schiller ◽  
Sebastian Schmidtlein ◽  
Coline Boonman ◽  
Alvaro Moreno-Martínez ◽  
Teja Kattenborn

Abstract: Plant functional traits (‘traits’) are essential for assessing biodiversity and ecosystem processes, but cumbersome to measure. To facilitate trait measurements, we test whether traits can be predicted from visible morphological features by coupling heterogeneous photographs from citizen science (iNaturalist) with trait observations (TRY database) through convolutional neural networks (CNNs). Our results show that image features suffice to predict several traits representing the main axes of plant functioning. Accuracy is enhanced when using CNN ensembles and incorporating prior knowledge on trait plasticity and climate. Our results suggest that these models generalise across growth forms, taxa and biomes around the globe. We highlight the applicability of this approach by producing global trait maps that reflect known macroecological patterns. These findings demonstrate the potential of big data derived from professional and citizen science, in concert with CNNs, as powerful tools for an efficient and automated assessment of Earth’s plant functional diversity.
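The ensemble-plus-prior idea in the abstract can be sketched in a few lines. Everything below (function names, trait values, the blending weight) is hypothetical for illustration and not taken from the paper's actual pipeline:

```python
def ensemble_predict(member_predictions):
    """Average the trait predictions of several CNN ensemble members."""
    return sum(member_predictions) / len(member_predictions)

def blend_with_prior(prediction, climate_prior, weight=0.2):
    """Shrink the image-based prediction toward a climate-derived prior,
    mimicking the use of prior knowledge on trait plasticity and climate."""
    return (1 - weight) * prediction + weight * climate_prior

# Three hypothetical CNN predictions of specific leaf area (mm^2/mg)
# for one iNaturalist photograph:
members = [14.2, 15.8, 15.0]
sla = ensemble_predict(members)                           # 15.0
sla_adjusted = blend_with_prior(sla, climate_prior=13.0)  # 14.6
```

The blending weight controls how strongly the climate prior pulls the image-based estimate; the paper reports that incorporating such priors improves accuracy, but the exact mechanism shown here is a simplification.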

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Dipendra Jha ◽  
Vishu Gupta ◽  
Logan Ward ◽  
Zijiang Yang ◽  
Christopher Wolverton ◽  
...  

Abstract: The application of machine learning (ML) techniques in materials science has attracted significant attention in recent years, owing to their impressive ability to efficiently extract data-driven linkages from various input materials representations to output properties. While traditional ML techniques have become quite ubiquitous, applications of more advanced deep learning (DL) techniques remain limited, primarily because big materials datasets are relatively rare. Given the demonstrated potential and advantages of DL and the increasing availability of big materials datasets, it is attractive to use deeper neural networks to boost model performance; in practice, however, simply stacking more layers degrades performance due to the vanishing gradient problem. In this paper, we address the question of how to enable deeper learning for cases where big materials data are available. We present a general deep learning framework based on individual residual learning (IRNet), composed of very deep neural networks that can work with any vector-based materials representation as input to build accurate property prediction models. We find that the proposed IRNet models not only successfully alleviate the vanishing gradient problem and enable deeper learning, but also deliver significantly (up to 47%) better model accuracy than plain deep neural networks and traditional ML techniques for a given input materials representation in the presence of big data.
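A minimal sketch of the "individual residual" idea, where a shortcut skips a single layer so each layer only has to learn a residual F(x), with the block output F(x) + x. The weights below are hand-picked for illustration; the real IRNet is a deep stack of such blocks trained on big materials data:

```python
def dense(x, W, b):
    """Fully connected layer: y[j] = sum_i x[i] * W[i][j] + b[j]."""
    return [sum(xi * W[i][j] for i, xi in enumerate(x)) + b[j]
            for j in range(len(b))]

def relu(v):
    return [max(0.0, u) for u in v]

def individual_residual_block(x, W, b):
    """One IRNet-style block: the shortcut skips a SINGLE layer,
    so the layer learns a residual F(x); the output is F(x) + x.
    This identity path is what keeps gradients from vanishing
    as more blocks are stacked."""
    fx = relu(dense(x, W, b))
    return [f + xi for f, xi in zip(fx, x)]

# Toy 2-d "materials representation" passed through one block:
x = [1.0, 2.0]
W = [[1.0, 0.0], [0.0, 1.0]]   # identity weights, illustrative only
out = individual_residual_block(x, W, [0.0, 0.0])  # [2.0, 4.0]
```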


2021 ◽  
Vol 5 (9) ◽  
Author(s):  
Dora Kaufman

The use of artificial intelligence (AI) technologies is proliferating in society. Most current AI implementations are based on machine learning, a subfield of AI, specifically deep learning neural networks (DLNNs), whose algorithms “learn” from examples extracted from big data. As decision-making becomes automated, the debate over AI ethics intensifies, yet it remains concentrated on general principles of limited applicability that do not translate into good practices to guide the AI ecosystem. Moreover, some of these principles, such as justice and dignity, are not universal, and it is not known how to encode them in mathematical terms. The article weighs some solutions for mitigating these negative externalities suggested by Luciano Floridi, Mark Coeckelbergh and Cédric Villani.


Author(s):  
Priti Srinivas Sajja ◽  
Rajendra Akerkar

Traditional approaches such as shallow artificial neural networks, despite their ability to learn from large amounts of data, fall short for big data analytics for several reasons. This chapter discusses the difficulties of analyzing big data, introduces deep learning as a solution, and surveys deep learning techniques and models for big data analytics. It presents the necessary fundamentals of artificial neural networks, deep learning, and big data analytics. Different deep models, such as autoencoders, deep belief nets, convolutional neural networks, recurrent neural networks, reinforcement learning neural networks, multi-model approaches, parallelization, and cognitive computing, are discussed here, together with the latest research and applications. The chapter concludes with a discussion of future research and application areas.
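Of the deep models the chapter lists, the autoencoder is the simplest to show in code: an encoder compresses the input to a low-dimensional code and a decoder reconstructs it. The weights below are hand-picked to make the example self-checking; a trained autoencoder would learn them by minimising reconstruction error on the data:

```python
def encode(x, W_enc):
    """Linear encoder: compress the input vector to a shorter code."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W_enc]

def decode(h, W_dec):
    """Linear decoder: reconstruct the input from the code."""
    return [sum(w * hi for w, hi in zip(row, h)) for row in W_dec]

# Illustrative weights that keep the first two of four input dimensions:
W_enc = [[1, 0, 0, 0], [0, 1, 0, 0]]      # 4-d input  -> 2-d code
W_dec = [[1, 0], [0, 1], [0, 0], [0, 0]]  # 2-d code   -> 4-d reconstruction

x = [0.5, -1.0, 0.0, 0.0]
code = encode(x, W_enc)       # [0.5, -1.0]
x_hat = decode(code, W_dec)   # [0.5, -1.0, 0.0, 0.0] -- perfect here
```

Inputs that use the discarded dimensions would reconstruct imperfectly; training drives the weights toward a code that preserves the directions of greatest variation in the data.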


2019 ◽  
Author(s):  
Yu Li ◽  
Chao Huang ◽  
Lizhong Ding ◽  
Zhongxiao Li ◽  
Yijie Pan ◽  
...  

Abstract: Deep learning, which is especially formidable in handling big data, has achieved great success in various fields, including bioinformatics. With the advent of the big data era in biology, it is foreseeable that deep learning will become increasingly important in the field and will be incorporated into the vast majority of analysis pipelines. In this review, we provide both an accessible introduction to deep learning and concrete examples and implementations of its representative applications in bioinformatics. We start from the recent achievements of deep learning in the bioinformatics field, pointing out the problems for which deep learning is well suited. We then introduce deep learning in an easy-to-understand fashion, from shallow neural networks to convolutional neural networks, recurrent neural networks, graph neural networks, generative adversarial networks, variational autoencoders, and the most recent state-of-the-art architectures. After that, we provide eight examples, covering five bioinformatics research directions and all four data types, with implementations written in TensorFlow and Keras. Finally, we discuss the common issues, such as overfitting and interpretability, that users will encounter when adopting deep learning methods, and provide corresponding suggestions. The implementations are freely available at https://github.com/lykaust15/Deep_learning_examples.
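The review's own implementations live in TensorFlow and Keras at the linked repository; as a dependency-free stand-in for the "shallow" end of the spectrum it walks through, here is a single-neuron network on a toy bioinformatics task (classifying a DNA sequence as GC-rich). The GC-content feature and the hand-picked weights are assumptions for illustration, not the review's examples:

```python
import math

def gc_content(seq):
    """Fraction of G/C bases in a DNA sequence -- one simple feature."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def shallow_net(seq, w=10.0, b=-5.0):
    """A one-neuron 'network': sigmoid(w * gc_content + b).
    Weights are hand-picked so the decision boundary sits at
    50% GC content; a real model would learn them from data."""
    z = w * gc_content(seq) + b
    return 1.0 / (1.0 + math.exp(-z))

print(shallow_net("GGCCGC"))  # close to 1: GC-rich
print(shallow_net("AATTAT"))  # close to 0: AT-rich
```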


2019 ◽  
Vol 2019 ◽  
pp. 1-16 ◽  
Author(s):  
Yu Zheng ◽  
Xiaolong Xu ◽  
Lianyong Qi

At present, to improve the accuracy and performance of personalized recommendation in mobile wireless networks, deep learning has attracted wide attention and has been employed with social and mobile-trajectory big data. However, it remains challenging to implement increasingly complex personalized recommendation applications over big data. In view of this challenge, a hybrid recommendation framework, deep-CNN-assisted personalized recommendation, named DCAPR, is proposed for mobile users. Technically, DCAPR integrates multisource heterogeneous data through a convolutional neural network, taking various features as input, including image features, text semantic features, and mobile social user trajectories, to construct a deep prediction model. Specifically, we first acquire the location information and moving-trajectory sequences in the mobile wireless network. Then, the similarity of users is calculated according to their moving-trajectory sequences to pick neighboring users. Furthermore, we recommend potential visiting locations for mobile users through the deep CNN with the social and mobile-trajectory big data. Finally, a real-world large-scale dataset, collected from Gowalla, is leveraged to verify the accuracy and effectiveness of our proposed DCAPR model.
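One concrete reading of "similarity of users calculated according to moving-trajectory sequences" is a normalised longest-common-subsequence score over check-in sequences. The function names, location IDs, and the choice of LCS are illustrative assumptions, not DCAPR's published formulation:

```python
def lcs_length(a, b):
    """Longest common subsequence of two location-ID sequences
    (order-aware, tolerant of detours)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

def trajectory_similarity(a, b):
    """Normalised LCS: 1.0 means one trajectory is a subsequence of the other."""
    return lcs_length(a, b) / min(len(a), len(b))

def nearest_neighbours(user_traj, others, k=2):
    """Pick the k users with the most similar check-in sequences."""
    ranked = sorted(others,
                    key=lambda u: trajectory_similarity(user_traj, others[u]),
                    reverse=True)
    return ranked[:k]

user = ["cafe", "park", "museum"]
others = {"u1": ["cafe", "park", "gym"], "u2": ["gym", "mall"]}
print(nearest_neighbours(user, others, k=1))  # ['u1']
```

Locations visited by these neighbours but not yet by the user would then be candidates for recommendation, refined by the CNN over image, text, and social features.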


With the evolution of artificial intelligence toward deep learning, an age of perceptive machines has dawned that can even mimic humans. A conversational software agent, commonly known as a chatbot and driven by natural language processing, is one of the best examples of such machines. The paper lists some existing popular chatbots along with their details, technical specifications, and functionalities. Research shows that many customers have experienced poor service from them, and generating meaningful, informative responses remains a demanding task, as most deployed chatbots rely on templates and hand-written rules. Current chatbot models often fail to generate adequate responses and thus compromise conversation quality; introducing deep learning into these models can fill this gap with deep neural networks. Deep neural networks applied to this task so far include stacked autoencoders, sparse autoencoders, and predictive sparse and denoising autoencoders. However, these DNNs cannot handle big data involving large amounts of heterogeneous data, while the tensor autoencoder, which overcomes this drawback, is time-consuming. This paper proposes a chatbot that handles big data in manageable time.


2018 ◽  
Author(s):  
Anisha Keshavan ◽  
Jason D. Yeatman ◽  
Ariel Rokem

Abstract: Research in many fields has become increasingly reliant on large and complex datasets. “Big Data” holds untold promise to rapidly advance science by tackling new questions that cannot be answered with smaller datasets. While powerful, research with Big Data poses unique challenges, as many standard lab protocols rely on experts examining each one of the samples. This is not feasible for large-scale datasets because manual approaches are time-consuming and hence difficult to scale. Meanwhile, automated approaches lack the accuracy of examination by highly trained scientists, which may introduce major errors, sources of noise, and unforeseen biases into these large and complex datasets. Our proposed solution is to 1) start with a small, expertly labelled dataset, 2) amplify labels through web-based tools that engage citizen scientists, and 3) train machine learning on the amplified labels to emulate expert decision making. As a proof of concept, we developed a system to quality control a large dataset of three-dimensional magnetic resonance images (MRI) of human brains. An initial dataset of 200 brain images labeled by experts was amplified by citizen scientists to label 722 brains, with over 80,000 ratings collected through a simple web interface. A deep learning algorithm was then trained to predict data quality, based on a combination of the citizen scientist labels that accounts for differences in the quality of classification by different citizen scientists. In an ROC analysis (on held-out test data), the deep learning network performed as well as a state-of-the-art, specialized algorithm (MRIQC) for quality control of T1-weighted images, each with an area under the curve of 0.99. Finally, as a specific practical application of the method, we explore how brain image quality relates to the replicability of a well-established relationship between brain volume and age over development.
Combining citizen science and deep learning can generalize and scale expert decision making; this is particularly important in emerging disciplines where specialized, automated tools do not already exist.
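The "combination of citizen scientist labels that accounts for differences in classification quality" can be illustrated with a simple agreement-based weighting: weight each rater by how often they matched the expert seed set, then take a weighted vote. This is a simplified stand-in (all names and ratings below are hypothetical); the paper's actual model learns rater quality jointly with the deep network:

```python
def rater_weight(rater_labels, expert_labels):
    """Weight a citizen scientist by their agreement with expert
    pass/fail labels on the small, expertly labelled seed set."""
    agree = sum(r == e for r, e in zip(rater_labels, expert_labels))
    return agree / len(expert_labels)

def aggregate(ratings, weights):
    """Weighted vote over pass/fail quality ratings (1 = pass, 0 = fail).
    Returns a probability-like weighted mean for the image."""
    total = sum(weights[rater] * vote for rater, vote in ratings)
    return total / sum(weights[rater] for rater, _ in ratings)

# Hypothetical seed set of 4 expert-labelled brain images:
expert = [1, 1, 0, 0]
weights = {
    "ann": rater_weight([1, 1, 0, 1], expert),  # 0.75
    "bob": rater_weight([1, 0, 0, 1], expert),  # 0.5
}
# Two citizen ratings of a new image; ann's vote counts more:
score = aggregate([("ann", 1), ("bob", 0)], weights)  # 0.6
```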


Author(s):  
Sindhu P. Menon

In the last couple of years, artificial neural networks have gained considerable momentum, and their results can be enhanced by making the networks deeper. Lately, vast amounts of data have been generated, giving rise to big data, which brings many challenges of its own; quality is among the most important. Deep learning models can improve the quality of data. This chapter reviews deep supervised and deep unsupervised learning algorithms and the various activation functions used, and discusses the challenges in deep learning.
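The activation functions such a review typically covers are standard; their definitions can be written out directly (the docstring characterisations are textbook facts, not claims from this chapter):

```python
import math

def relu(x):
    """Rectified linear unit; the common default in deep supervised models."""
    return max(0.0, x)

def leaky_relu(x, alpha=0.01):
    """Keeps a small slope for x < 0, avoiding 'dead' ReLU neurons."""
    return x if x > 0 else alpha * x

def sigmoid(x):
    """Squashes to (0, 1); prone to vanishing gradients in deep stacks."""
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    """Zero-centred squashing to (-1, 1)."""
    return math.tanh(x)
```

The choice matters for depth: saturating functions like the sigmoid shrink gradients layer by layer, which is one reason ReLU variants dominate in deeper networks.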


Author(s):  
Murad Khan ◽  
Bhagya Nathali Silva ◽  
Kijun Han

Big Data and deep computation are among the buzzwords of today's sophisticated digital world. Big Data has emerged with the expeditious growth of digital data. This chapter addresses the problem of employing deep learning algorithms in Big Data analytics. Unlike the traditional algorithms, this chapter offers various ways to employ advanced deep learning mechanisms with less complexity, and finally presents a generic solution. Deep learning algorithms take less time to process large amounts of data across different contexts; however, extracting accurate features and classifying contexts into patterns with neural network algorithms demands considerable time and complexity. Integrating deep learning algorithms with neural networks can therefore yield optimized solutions. Consequently, the aim of this chapter is to provide an overview of how advanced deep learning algorithms can be used to solve various existing challenges in Big Data analytics.

