Artificial intelligence with deep learning in nuclear medicine and radiology

2021, Vol 8 (1)
Author(s): Milan Decuyper, Jens Maebe, Roel Van Holen, Stefaan Vandenberghe

Abstract: The use of deep learning in medical imaging has increased rapidly over the past few years, finding applications throughout the entire radiology pipeline, from improved scanner performance to automatic disease detection and diagnosis. These advancements have resulted in a wide variety of deep learning approaches being developed, solving unique challenges for various imaging modalities. This paper provides a review of these developments from a technical point of view, categorizing the different methodologies and summarizing their implementation. We provide an introduction to the design of neural networks and their training procedure, after which we take an extended look at their uses in medical imaging. We cover the different sections of the radiology pipeline, highlighting some influential works and discussing the merits and limitations of deep learning approaches compared to other traditional methods. As such, this review is intended to provide a broad yet concise overview for the interested reader, facilitating adoption and interdisciplinary research of deep learning in the field of medical imaging.
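
To give a flavour of the network design and training procedure such a review introduces, here is a minimal, generic PyTorch sketch; the toy architecture, input size, and two-class setup are illustrative assumptions, not anything taken from the paper.

```python
# Minimal sketch (not from the paper): a small CNN and one training step
# for a toy single-channel image classification task.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a random batch standing in for medical images.
images = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```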

Drones, 2021, Vol 5 (2), pp. 52
Author(s): Thomas Lee, Susan Mckeever, Jane Courtney

With the rise of Deep Learning approaches in computer vision applications, significant strides have been made towards vehicular autonomy. Research activity in autonomous drone navigation has increased rapidly in the past five years, and drones are moving fast towards the ultimate goal of near-complete autonomy. However, while much work in the area focuses on specific tasks in drone navigation, the contribution to the overall goal of autonomy is often not assessed, and a comprehensive overview is needed. In this work, a taxonomy of drone navigation autonomy is established by mapping the definitions of vehicular autonomy levels, as defined by the Society of Automotive Engineers, to specific drone tasks in order to create a clear definition of autonomy when applied to drones. A top–down examination of research work in the area is conducted, focusing on drone navigation tasks, in order to understand the extent of research activity in each area. Autonomy levels are cross-checked against the drone navigation tasks addressed in each work to provide a framework for understanding the trajectory of current research. This work serves as a guide to research in drone autonomy with a particular focus on Deep Learning-based solutions, indicating key works and areas of opportunity for development of this area in the future.
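
Purely as a hypothetical illustration of such a taxonomy (the SAE J3016 level names are standard, but the drone-task associations below are assumptions, not the paper's mapping), the idea can be expressed as a simple lookup:

```python
# Hypothetical sketch only: the paper's actual level-to-task mapping is not
# reproduced here. The SAE level names are real; the drone tasks listed
# against them are illustrative assumptions.
from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0
    ASSISTANCE = 1
    PARTIAL = 2
    CONDITIONAL = 3
    HIGH = 4
    FULL = 5

# Example (assumed) association between autonomy levels and drone navigation tasks.
example_task_levels = {
    "manual piloting with stabilisation": SAELevel.ASSISTANCE,
    "GPS waypoint following": SAELevel.PARTIAL,
    "obstacle avoidance with pilot fallback": SAELevel.CONDITIONAL,
    "fully autonomous navigation in known environments": SAELevel.HIGH,
    "autonomous navigation anywhere, without a pilot": SAELevel.FULL,
}
```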


Sensors, 2019, Vol 20 (1), pp. 214
Author(s): Itzik Klein

One of the approaches for indoor positioning using smartphones is pedestrian dead reckoning, in which the user's step length is estimated using empirical or biomechanical formulas. Such calculation was shown to be very sensitive to the smartphone's location on the user. In addition, knowledge of the smartphone location can also help with direct step-length estimation and heading determination. From a wider perspective, smartphone location recognition is part of human activity recognition, which is employed in many fields and applications, such as health monitoring. In this paper, we propose deep learning approaches to classify the smartphone location on the user while walking, requiring robustness in terms of the ability to cope with recordings that differ (in sampling rate, user dynamics, sensor type, and more) from those available in the training dataset. The contributions of the paper are: (1) definition of the smartphone location recognition framework using accelerometers, gyroscopes, and deep learning; (2) examination of the proposed approach on 107 people and 31 h of recorded data obtained from eight different datasets; and (3) enhanced algorithms for using only accelerometers in the classification process. The experimental results show that the smartphone location can be classified with high accuracy using only the smartphone's accelerometers.
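
As an illustration only (the authors' network is not reproduced here), the sketch below shows how accelerometer-only classification of smartphone location might look; the window length, channel layout, layer sizes, and the four carrying-position labels are assumptions.

```python
# Minimal sketch (assumptions: 128-sample windows, three accelerometer axes,
# four example carrying positions); an illustration of accelerometer-only
# classification, not the paper's architecture.
import torch
import torch.nn as nn

CLASSES = ["pocket", "texting", "swing", "bag"]  # assumed label set

class SmartphoneLocationNet(nn.Module):
    def __init__(self, n_classes: int = len(CLASSES)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(3, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, n_classes),
        )

    def forward(self, accel_window):  # shape: (batch, 3 axes, 128 samples)
        return self.net(accel_window)

logits = SmartphoneLocationNet()(torch.randn(4, 3, 128))
print(logits.shape)  # torch.Size([4, 4]): one score per carrying position
```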


2020, Vol 6 (10), pp. 110
Author(s): Francesco Lombardi, Simone Marinai

Nowadays, deep learning methods are employed in a broad range of research fields. The analysis and recognition of historical documents, as we survey in this work, is no exception. Our study analyzes the papers published in the last few years on this topic from different perspectives: we first provide a pragmatic definition of historical documents from the point of view of research in the area, then we look at the various sub-tasks addressed in this research. Guided by these tasks, we go through the different input-output relations expected from the deep learning approaches used and accordingly describe the most common models. We also discuss research datasets published in the field and their applications. This analysis shows that the latest research represents a leap forward: it is not simply the application of recently proposed algorithms to old problems; novel tasks and novel applications of state-of-the-art methods are now being considered. Rather than just providing a conclusive picture of current research on the topic, we lastly suggest some potential future trends that could stimulate innovative research directions.


2021, Vol 40, pp. 03030
Author(s): Mehdi Surani, Ramchandra Mangrulkar

Over the past years, the exponential growth of social media usage has given every individual the power to share their opinions freely. This has also led to numerous threats, with users exploiting their freedom of speech to spread hateful comments, use abusive language, carry out personal attacks, and sometimes even engage in cyberbullying. Determining such directly abusive content is not a difficult task, and many social media platforms already have solutions in place; at the same time, many are searching for more efficient ways to overcome this issue. Traditional approaches use machine learning models to identify negative content posted on social media: shaming categories are defined and content is labelled accordingly. Such content is easy to detect because the contextual language used is direct. However, the use of irony to mock or convey contempt is also a part of public shaming and must be considered when assigning the shaming labels. In this research paper, various shaming types, namely toxic, severe toxic, obscene, threat, insult, identity hate, and sarcasm, are predicted using deep learning approaches such as CNN and LSTM. These models have been studied along with traditional models to determine which gives the most accurate results.
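
As a hedged illustration of the deep learning side (not the paper's actual configuration: vocabulary size, embedding and hidden dimensions are assumptions), a multi-label LSTM classifier over the seven categories above could look like this:

```python
# Illustrative multi-label LSTM classifier for the seven shaming categories.
# All sizes are assumptions; this is a sketch, not the paper's model.
import torch
import torch.nn as nn

LABELS = ["toxic", "severe_toxic", "obscene", "threat",
          "insult", "identity_hate", "sarcasm"]

class ShamingLSTM(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, len(LABELS))

    def forward(self, token_ids):          # (batch, seq_len) of word indices
        emb = self.embed(token_ids)
        _, (h_n, _) = self.lstm(emb)       # final hidden state
        return self.out(h_n[-1])           # one logit per label

model = ShamingLSTM()
logits = model(torch.randint(1, 20000, (2, 50)))
probs = torch.sigmoid(logits)              # independent per-label probabilities
loss = nn.BCEWithLogitsLoss()(logits, torch.zeros_like(logits))
```

A sigmoid per label (rather than a softmax) is used because a single comment can carry several shaming types at once.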


Author(s): Patrick Omoumi, Alexis Ducarouge, Antoine Tournier, Hugh Harvey, Charles E. Kahn, ...

Abstract: Artificial intelligence (AI) has made impressive progress over the past few years, including many applications in medical imaging. Numerous commercial solutions based on AI techniques are now available for sale, forcing radiology practices to learn how to properly assess these tools. While several guidelines describing good practices for conducting and reporting AI-based research in medicine and radiology have been published, fewer efforts have focused on recommendations addressing the key questions to consider when critically assessing AI solutions before purchase. Commercial AI solutions are typically complicated software products, and many factors must be considered in their evaluation. In this work, authors from academia and industry have joined efforts to propose a practical framework that will help stakeholders evaluate commercial AI solutions in radiology (the ECLAIR guidelines) and reach an informed decision. Topics to consider in the evaluation include the relevance of the solution from the point of view of each stakeholder, issues regarding performance and validation, usability and integration, regulatory and legal aspects, and financial and support services.
Key Points:
• Numerous commercial solutions based on artificial intelligence techniques are now available for sale, and radiology practices have to learn how to properly assess these tools.
• We propose a framework focusing on practical points to consider when assessing an AI solution in medical imaging, allowing all stakeholders to conduct relevant discussions with manufacturers and reach an informed decision as to whether to purchase an AI commercial solution for imaging applications.
• Topics to consider in the evaluation include the relevance of the solution from the point of view of each stakeholder, issues regarding performance and validation, usability and integration, regulatory and legal aspects, and financial and support services.


2021, Vol 6 (1), pp. 1-5
Author(s): Zobeir Raisi, Mohamed A. Naiel, Paul Fieguth, Steven Wardell, John Zelek

The reported accuracy of recent state-of-the-art text detection methods, mostly deep learning approaches, is in the order of 80% to 90% on standard benchmark datasets. These methods have relaxed some of the restrictions on structured text and environment (i.e., they operate "in the wild") that are usually required for classical OCR to function properly. Even with this relaxation, there are still circumstances where these state-of-the-art methods fail. Several remaining challenges in wild images, such as in-plane rotation, illumination reflection, partial occlusion, complex font styles, and perspective distortion, cause existing methods to perform poorly. In order to evaluate current approaches in a formal way, we standardize the datasets and metrics used for comparison; their inconsistency had made comparison between these methods difficult in the past. We use three benchmark datasets for our evaluations: ICDAR13, ICDAR15, and COCO-Text V2.0. The objective of the paper is to quantify the current shortcomings and to identify the challenges for future text detection research.
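
For a concrete sense of what standardizing the metrics involves, here is a minimal sketch of an IoU-based matching evaluation; the 0.5 threshold, axis-aligned boxes, and greedy matching are simplifying assumptions, and the actual benchmark protocols differ in details such as the handling of "don't care" regions.

```python
# Sketch of an IoU-based precision/recall evaluation for text detection.
# Assumptions: axis-aligned (x1, y1, x2, y2) boxes and a 0.5 IoU threshold.
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def precision_recall(detections, ground_truths, thresh=0.5):
    """Greedy one-to-one matching of detections to ground-truth boxes."""
    unmatched = list(ground_truths)
    tp = 0
    for det in detections:
        best = max(unmatched, key=lambda gt: iou(det, gt), default=None)
        if best is not None and iou(det, best) >= thresh:
            tp += 1
            unmatched.remove(best)
    precision = tp / len(detections) if detections else 0.0
    recall = tp / len(ground_truths) if ground_truths else 0.0
    return precision, recall
```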


Android OS, the most prevalent mobile operating system (OS), has enjoyed immense popularity on smartphones over the past few years. Seizing this opportunity, cybercrime occurs in the form of piracy and malware. Traditional detection does not suffice to combat newly created advanced malware, so there is a need for smart malware detection systems to reduce the risk of malicious activities. In recent years, machine learning approaches have shown promising results in classifying malware, where most of the methods are shallow learners such as Random Forest (RF). In this paper, we propose Deep-Droid, a deep learning framework for detecting Android malware. Our Deep-Droid model is a deep learner that outperforms existing cutting-edge machine learning approaches. All experiments were performed on two datasets (Drebin-215 & Malgenome-215) to assess our Deep-Droid model. The results of the experiments show the effectiveness and robustness of Deep-Droid, which achieved an accuracy of over 98.5%.
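
As an illustrative sketch only (the layer sizes are assumptions, not the Deep-Droid configuration), a deep fully connected classifier over binary static app features (215 features assumed, matching the dataset names) could be set up as follows:

```python
# Minimal sketch of a deep fully connected malware classifier.
# Layer sizes and dropout rate are assumptions, not Deep-Droid's settings.
import torch
import torch.nn as nn

class MalwareMLP(nn.Module):
    def __init__(self, n_features: int = 215):  # assumed feature count
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 1),               # benign vs. malware logit
        )

    def forward(self, features):
        return self.net(features)

apps = torch.randint(0, 2, (16, 215)).float()  # 16 apps, binary features
scores = torch.sigmoid(MalwareMLP()(apps))     # probability of being malware
```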


Author(s): Zhenguang Liu, Peng Qian, Xiang Wang, Lei Zhu, Qinming He, ...

Smart contracts hold digital coins worth billions of dollars, and their security issues have drawn extensive attention in the past years. For smart contract vulnerability detection, conventional methods heavily rely on fixed expert rules, leading to low accuracy and poor scalability. Recent deep learning approaches alleviate this issue but fail to encode useful expert knowledge. In this paper, we explore combining deep learning with expert patterns in an explainable fashion. Specifically, we develop automatic tools to extract expert patterns from the source code. We then cast the code into a semantic graph to extract deep graph features. Thereafter, the global graph feature and the local expert patterns are fused to produce the final prediction, while yielding interpretable weights for each. Experiments are conducted on all available smart contracts with source code on two platforms, Ethereum and VNT Chain. Empirically, our system significantly outperforms state-of-the-art methods. Our code is released.
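
To make the fusion idea concrete, here is a minimal sketch under stated assumptions (the feature dimensions, projection layers, and softmax weighting are illustrative choices, not the authors' published architecture): two branches are combined through normalized weights that can be read out for interpretation.

```python
# Sketch of fusing a global graph feature with local expert-pattern features
# via interpretable branch weights. All dimensions are assumptions.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, graph_dim=64, expert_dim=16, hidden=32):
        super().__init__()
        self.graph_proj = nn.Linear(graph_dim, hidden)
        self.expert_proj = nn.Linear(expert_dim, hidden)
        self.weight_scorer = nn.Linear(hidden, 1)
        self.classifier = nn.Linear(hidden, 1)   # vulnerable vs. safe logit

    def forward(self, graph_feat, expert_feat):
        branches = torch.stack([torch.relu(self.graph_proj(graph_feat)),
                                torch.relu(self.expert_proj(expert_feat))], dim=1)
        weights = torch.softmax(self.weight_scorer(branches), dim=1)  # interpretable
        fused = (weights * branches).sum(dim=1)
        return self.classifier(fused), weights.squeeze(-1)

logit, branch_weights = FusionClassifier()(torch.randn(4, 64), torch.randn(4, 16))
# branch_weights shows, per contract, how much the graph feature versus the
# expert patterns contributed to the prediction.
```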


2021
Author(s): Albert Ji, Wai Lok Woo, Eugene Wai Leong Wong, Yang Thee Quek

Rail track is a critical component of rail systems. Accidents or interruptions caused by rail track anomalies usually have severe consequences. Therefore, rail track condition monitoring is an important task. Over the past decade, deep learning techniques have been rapidly developed and deployed. In this paper, we review the existing literature on applying deep learning to rail track condition monitoring. Potential challenges and opportunities are discussed for the research community to decide on possible directions. Two application cases are presented to illustrate the implementation of deep learning for rail track condition monitoring in practice before we conclude the paper.


2020, Vol 31 (7-8)
Author(s): Marina Paolanti, Rocco Pietrini, Adriano Mancini, Emanuele Frontoni, Primo Zingaretti

Abstract: In retail environments, understanding how shoppers move about in a store's spaces and interact with products is very valuable. While the retail environment has several characteristics favourable to computer vision, such as reasonable lighting, the large number and diversity of products sold, as well as the potential ambiguity of shoppers' movements, mean that accurately measuring shopper behaviour is still challenging. Over the past years, machine-learning and feature-based tools for people counting, interaction analytics and re-identification were developed with the aim of learning shopper skills based on occlusion-free RGB-D cameras in a top-view configuration. However, after moving into the era of multimedia big data, machine-learning approaches evolved into deep learning approaches, which are a more powerful and efficient way of dealing with the complexities of human behaviour. In this paper, a novel VRAI deep learning application is introduced that uses three convolutional neural networks to count the number of people passing or stopping in the camera area, perform top-view re-identification and measure shopper-shelf interactions from a single RGB-D video stream with near real-time performance. The framework is evaluated on three new, publicly available datasets: TVHeads for people counting, HaDa for shopper-shelf interactions and TVPR2 for people re-identification. The experimental results show that the proposed methods significantly outperform all competitive state-of-the-art methods (accuracy of 99.5% on people counting, 92.6% on interaction classification and 74.5% on re-id), providing different and significant insights for implicit and extensive shopper behaviour analysis for marketing applications.
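
As an illustration only (the actual VRAI networks, input resolution, and class sets are not reproduced; everything below is an assumed sketch), three separate convolutional models consuming the same top-view RGB-D frame, one per task, might be wired up like this:

```python
# Illustrative sketch of three task-specific CNNs over a top-view RGB-D frame.
# Architectures, resolution, and interaction classes are assumptions.
import torch
import torch.nn as nn

def small_cnn(out_dim):
    return nn.Sequential(
        nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, out_dim),
    )

counting_net = small_cnn(1)        # scalar people-count estimate
interaction_net = small_cnn(3)     # e.g. no-touch / pick-up / put-back (assumed classes)
reid_net = small_cnn(128)          # embedding vector for re-identification

frame = torch.randn(1, 4, 120, 160)   # RGB + depth channels, assumed resolution
count = counting_net(frame)
interaction_logits = interaction_net(frame)
embedding = reid_net(frame)
```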

