Is deep learning better than traditional approaches in tag recommendation for software information sites?

2019 ◽  
Vol 109 ◽  
pp. 1-13 ◽  
Author(s):  
Pingyi Zhou ◽  
Jin Liu ◽  
Xiao Liu ◽  
Zijiang Yang ◽  
John Grundy


2021 ◽  
Vol 49 (1) ◽  
pp. 030006052098284
Author(s):  
Tingting Qiao ◽  
Simin Liu ◽  
Zhijun Cui ◽  
Xiaqing Yu ◽  
Haidong Cai ◽  
...  

Objective To construct deep learning (DL) models to improve the accuracy and efficiency of thyroid disease diagnosis by thyroid scintigraphy. Methods We constructed DL models with AlexNet, VGGNet, and ResNet. The models were trained separately with transfer learning. We measured each model’s performance with six indicators: recall, precision, negative predictive value (NPV), specificity, accuracy, and F1-score. We also compared the diagnostic performances of first- and third-year nuclear medicine (NM) residents with assistance from the best-performing DL-based model. The Kappa coefficient and average classification time of each model were compared with those of two NM residents. Results The recall, precision, NPV, specificity, accuracy, and F1-score of the three models ranged from 73.33% to 97.00%. The Kappa coefficient of all three models was >0.710. All models performed better than the first-year NM resident but not as well as the third-year NM resident in terms of diagnostic ability. However, the ResNet model provided “diagnostic assistance” to the NM residents. The models provided results at speeds 400 to 600 times faster than the NM residents. Conclusion DL-based models perform well in diagnostic assessment by thyroid scintigraphy. These models may serve as tools for NM residents in the diagnosis of Graves’ disease and subacute thyroiditis.
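The six indicators reported above are all derived from a confusion matrix. As a minimal sketch (not the paper's code, and with made-up counts), here is how each is computed for a binary case:

```python
# Hypothetical confusion-matrix counts for one model on one disease class.
tp, fp, fn, tn = 85, 10, 5, 100

recall      = tp / (tp + fn)                  # sensitivity
precision   = tp / (tp + fp)                  # positive predictive value
npv         = tn / (tn + fn)                  # negative predictive value
specificity = tn / (tn + fp)
accuracy    = (tp + tn) / (tp + fp + fn + tn)
f1_score    = 2 * precision * recall / (precision + recall)

print(f"recall={recall:.3f} precision={precision:.3f} NPV={npv:.3f}")
print(f"specificity={specificity:.3f} accuracy={accuracy:.3f} F1={f1_score:.3f}")
```

For the multi-class setting in the paper, these would be computed per class and then aggregated; the counts here are purely illustrative.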


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Christian Crouzet ◽  
Gwangjin Jeong ◽  
Rachel H. Chae ◽  
Krystal T. LoPresti ◽  
Cody E. Dunn ◽  
...  

Cerebral microhemorrhages (CMHs) are associated with cerebrovascular disease, cognitive impairment, and normal aging. One method to study CMHs is to analyze histological sections (5–40 μm) stained with Prussian blue. Currently, users manually and subjectively identify and quantify Prussian blue-stained regions of interest, which is prone to inter-individual variability and can lead to significant delays in data analysis. To improve this labor-intensive process, we developed and compared three digital pathology approaches to identify and quantify CMHs from Prussian blue-stained brain sections: (1) ratiometric analysis of RGB pixel values, (2) phasor analysis of RGB images, and (3) deep learning using a mask region-based convolutional neural network. We applied these approaches to a preclinical mouse model of inflammation-induced CMHs. One hundred CMHs were imaged using a 20× objective and RGB color camera. To determine the ground truth, four users independently annotated Prussian blue-labeled CMHs. The deep learning and ratiometric approaches performed better than the phasor analysis approach compared to the ground truth. The deep learning approach had the highest precision of the three methods. The ratiometric approach had the most versatility and maintained accuracy, albeit with less precision. Our data suggest that implementing these methods to analyze CMH images can drastically increase the processing speed while maintaining precision and accuracy.
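The ratiometric idea (approach 1) can be sketched in a few lines. This is a toy illustration, not the authors' pipeline: it flags a pixel as Prussian blue-stained when the blue channel dominates the RGB sum by more than a chosen threshold, and the threshold value here is hypothetical.

```python
def is_stained(r: int, g: int, b: int, threshold: float = 0.5) -> bool:
    """Flag a pixel as Prussian blue-stained if blue dominates the RGB sum."""
    total = r + g + b
    if total == 0:          # avoid dividing by zero on pure-black pixels
        return False
    return b / total > threshold

# Made-up pixels: a pale background pixel and two increasingly blue ones.
pixels = [(200, 180, 190), (40, 60, 160), (10, 20, 230)]
mask = [is_stained(*p) for p in pixels]
stained_fraction = sum(mask) / len(mask)
```

Applied over a whole section image, the resulting mask area would give the stained-region quantification that users currently estimate by hand.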


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3046
Author(s):  
Shervin Minaee ◽  
Mehdi Minaei ◽  
Amirali Abdolrashidi

Facial expression recognition has been an active area of research over the past few decades, and it is still challenging due to the high intra-class variation. Traditional approaches for this problem rely on hand-crafted features such as SIFT, HOG, and LBP, followed by a classifier trained on a database of images or videos. Most of these works perform reasonably well on datasets of images captured under controlled conditions but fail to perform as well on more challenging datasets with more image variation and partial faces. In recent years, several works proposed an end-to-end framework for facial expression recognition using deep learning models. Despite the better performance of these works, there is still much room for improvement. In this work, we propose a deep learning approach based on an attentional convolutional network that is able to focus on important parts of the face and achieves significant improvement over previous models on multiple datasets, including FER-2013, CK+, FERG, and JAFFE. We also use a visualization technique that is able to find important facial regions for detecting different emotions based on the classifier’s output. Through experimental results, we show that different emotions are sensitive to different parts of the face.


Electronics ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 495
Author(s):  
Imayanmosha Wahlang ◽  
Arnab Kumar Maji ◽  
Goutam Saha ◽  
Prasun Chakrabarti ◽  
Michal Jasinski ◽  
...  

This article experiments with deep learning methodologies for echocardiogram (echo) analysis, a promising and actively researched technique in the field. The paper involves two different kinds of classification of echo data. First, classification into normal (absence of abnormalities) or abnormal (presence of abnormalities) is performed using 2D echo images, 3D Doppler images, and videographic images. Second, different types of regurgitation, namely Mitral Regurgitation (MR), Aortic Regurgitation (AR), Tricuspid Regurgitation (TR), and a combination of the three, are classified using videographic echo images. Two deep-learning methodologies are used for these purposes: a Recurrent Neural Network (RNN) based methodology (Long Short Term Memory (LSTM)) and an Autoencoder based methodology (Variational AutoEncoder (VAE)). The use of videographic images distinguishes this work from existing work using Support Vector Machines (SVM), and the application of deep-learning methodologies to this task is among the first in this particular field. It was found that deep-learning methodologies perform better than the SVM methodology in normal/abnormal classification. Overall, VAE performs better on 2D and 3D Doppler images (static images), while LSTM performs better on videographic images.


1997 ◽  
Vol 6 (3) ◽  
pp. 347-355 ◽  
Author(s):  
Erich H. Loewy

Virtue ethics attempts to identify certain commonly agreed-upon dispositions to act in certain ways, dispositions that would be accepted as ‘good’ by those affected, and to locate the goodness or badness of an act internal to the agent. Basically, virtue ethics is said to date back to Aristotle, but as Alasdair MacIntyre has pointed out, the whole idea of ‘virtue ethics’ would have been unintelligible in Greek philosophy, for “a virtue (arete) was an excellence and ethics concerned excellence of character; all ethics was virtue ethics.” Virtue ethics as a method to approach problems in medical ethics is said by some to lend itself to working through cases at the bedside or, at least, to be better than the conventional method of handling ethical problems. In this paper I want to explore some of the shortcomings of this approach, examine other traditional approaches, indicate some of their limitations, and suggest a different conceptualization of the approach.


2020 ◽  
Vol 14 (4) ◽  
pp. 471-484
Author(s):  
Suraj Shetiya ◽  
Saravanan Thirumuruganathan ◽  
Nick Koudas ◽  
Gautam Das

Accurate selectivity estimation for string predicates is a long-standing research challenge in databases. Supporting pattern matching on strings (such as prefix, substring, and suffix) makes this problem much more challenging, thereby necessitating a dedicated study. Traditional approaches often build pruned summary data structures such as tries, followed by selectivity estimation using statistical correlations. However, this produces insufficiently accurate cardinality estimates, resulting in the selection of sub-optimal plans by the query optimizer. Recently proposed deep learning based approaches leverage techniques from natural language processing, such as embeddings, to encode the strings and use them to train a model. While this is an improvement over traditional approaches, there remains significant room for improvement. We propose Astrid, a framework for string selectivity estimation that synthesizes ideas from traditional and deep learning based approaches. We make two complementary contributions. First, we propose an embedding algorithm that is query-type (prefix, substring, and suffix) and selectivity aware. Consider three strings 'ab', 'abc' and 'abd' whose prefix frequencies are 1000, 800 and 100, respectively. Our approach would ensure that the embedding for 'ab' is closer to that of 'abc' than to that of 'abd'. Second, we describe how neural language models could be used for selectivity estimation. While they work well for prefix queries, their performance for substring queries is sub-optimal. We modify the objective function of the neural language model so that it can be used for estimating selectivities of pattern matching queries. We also propose a novel and efficient algorithm for optimizing the new objective function. We conduct extensive experiments over benchmark datasets and show that our proposed approaches achieve state-of-the-art results.
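The 'ab'/'abc'/'abd' example can be illustrated numerically. This is a deliberately simplified sketch, not Astrid's training procedure: it uses a one-dimensional "embedding" (the log of each string's prefix frequency) to show how a selectivity-aware embedding orders distances the way the abstract describes.

```python
import math

# Prefix frequencies from the abstract's example.
prefix_freq = {"ab": 1000, "abc": 800, "abd": 100}

def embed(s: str) -> float:
    # Toy 1-D embedding: log prefix frequency. A real embedding would be a
    # learned vector; log-frequency merely reproduces the intended ordering.
    return math.log(prefix_freq[s])

def dist(a: str, b: str) -> float:
    return abs(embed(a) - embed(b))

# 'ab' (1000) is far closer in selectivity to 'abc' (800) than to 'abd' (100),
# so its embedding should also be closer to 'abc'.
assert dist("ab", "abc") < dist("ab", "abd")
```

A selectivity-aware training objective pushes a learned, higher-dimensional embedding toward exactly this ordering, so that the downstream estimator sees similar inputs for strings with similar cardinalities.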


2018 ◽  
Vol 14 (10) ◽  
pp. 155014771880671 ◽  
Author(s):  
Tao Li ◽  
Hai Wang ◽  
Yuan Shao ◽  
Qiang Niu

With the rapid growth of device-free indoor positioning requirements and the ease of channel state information acquisition, research on indoor fingerprint positioning based on channel state information is increasingly valued. In this article, a multi-level fingerprinting approach is proposed, composed of two levels: the first layer is achieved by deep learning and the second layer is implemented by an optimal subcarriers filtering method. This method using channel state information is termed multi-level fingerprinting with deep learning. Deep neural networks are applied in the first layer, which includes two phases: an offline training phase and an online localization phase. In the offline training phase, deep neural networks are used to train the optimal weights. In the online localization phase, the five candidate positions closest to the target position are obtained through forward propagation. The second layer refines the results of the first layer through the optimal subcarriers filtering method. Under an accuracy threshold of 0.6 m, the positioning accuracy in two common environments reached 96% and 93.9%, respectively. The evaluation results show that this method outperforms methods based on received signal strength as well as the support vector machine method, and is slightly improved compared with a plain deep learning method.
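The first layer's output stage can be sketched as follows. This is a toy illustration, not the authors' code: it takes per-reference-point scores from a forward pass, keeps the five highest-scoring candidates, and averages their coordinates as a coarse estimate (the paper's second layer would instead refine these candidates by optimal subcarriers filtering). All scores and coordinates below are made up.

```python
scores = {  # reference point -> softmax-like score from the DNN forward pass
    "P1": 0.05, "P2": 0.30, "P3": 0.25, "P4": 0.02,
    "P5": 0.15, "P6": 0.10, "P7": 0.08, "P8": 0.05,
}
coords = {  # reference point -> (x, y) position in metres
    "P1": (0.0, 0.0), "P2": (1.2, 0.6), "P3": (1.0, 0.8), "P4": (5.0, 5.0),
    "P5": (1.4, 0.7), "P6": (0.9, 0.5), "P7": (1.1, 0.9), "P8": (4.0, 0.2),
}

# Keep the five highest-scoring candidate reference points.
top5 = sorted(scores, key=scores.get, reverse=True)[:5]

# Coarse estimate: the centroid of the top-5 candidates.
x = sum(coords[p][0] for p in top5) / 5
y = sum(coords[p][1] for p in top5) / 5
```

Note how the low-scoring outlier P4 at (5.0, 5.0) never enters the estimate, which is the point of keeping only the top candidates.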


Author(s):  
Réka Hollandi ◽  
Ákos Diósdi ◽  
Gábor Hollandi ◽  
Nikita Moshkov ◽  
Péter Horváth

AnnotatorJ combines single-cell identification with deep learning and manual annotation. Cellular analysis quality depends on accurate and reliable detection and segmentation of cells so that the subsequent steps of analyses, e.g. expression measurements, may be carried out precisely and without bias. Deep learning has recently become a popular way of segmenting cells, performing markedly better than conventional methods. However, such deep learning applications must be trained on a large amount of annotated data to be able to match the highest expectations. High-quality annotations are unfortunately expensive as they require field experts to create them, and often cannot be shared outside the lab due to medical regulations. We propose AnnotatorJ, an ImageJ plugin for the semi-automatic annotation of cells (or generally, objects of interest) on (not only) microscopy images in 2D that helps find the true contour of individual objects by applying U-Net-based pre-segmentation. The manual labour of hand-annotating cells can be significantly accelerated by using our tool. Thus, it enables users to create datasets that could potentially increase the accuracy of state-of-the-art solutions, deep learning or otherwise, when used as training data.


2021 ◽  
Vol 3 (1) ◽  
pp. 1-16
Author(s):  
Saeed Roshani ◽  
Hossein Heshmati ◽  
Sobhan Roshani ◽  
...  

In this paper, a lowpass–bandpass dual-band microwave filter is designed using deep learning and artificial intelligence. The designed filter has a compact size and desirable passbands. In the proposed filter, resonators with Z-shaped and T-shaped lines are used to design the lowpass channel, while coupled lines, stepped-impedance resonators, and open-ended stubs are utilized to design the bandpass channel. Artificial neural network (ANN) and deep learning (DL) techniques have been utilized to extract the proposed filter's transfer function, so that the transmission zeros can be located at the desired frequencies. This technique can also be applied to other electrical devices. The lowpass channel cut-off frequency is 1 GHz, with better than 0.2 dB insertion loss. The bandpass channel's main frequency is designed at 2.4 GHz with 0.5 dB insertion loss in the passband.
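What "locating a transmission zero" means can be shown with a toy transfer function. This is a numerical illustration only, not the paper's ANN-extracted transfer function: a rational function whose numerator vanishes at 2.4 GHz has a transmission zero there (|H| = 0, i.e. total rejection), while low frequencies still pass. The denominator constant is made up.

```python
f_zero = 2.4e9  # desired transmission-zero frequency in Hz

def H(f: float) -> float:
    # Numerator vanishes at f = f_zero, creating the transmission zero;
    # the (hypothetical) denominator keeps |H| finite at other frequencies.
    return (f**2 - f_zero**2) / (f**2 + (4e9)**2)

assert H(f_zero) == 0.0          # total rejection at the transmission zero
assert abs(H(1e8)) > 0.0         # low frequencies are still transmitted
```

In the paper's flow, the ANN/DL model would supply the transfer-function coefficients so that zeros like this land at the chosen rejection frequencies.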

