Improving Matrix Factorization Based Expert Recommendation for Manuscript Editing Services by Refining User Opinions with Binary Ratings

2020 ◽  
Vol 10 (10) ◽  
pp. 3395 ◽  
Author(s):  
Yeonbin Son ◽  
Yerim Choi

As language editing has become an essential process for enhancing the quality of research manuscripts, several companies now provide manuscript editing services. In such companies, a manuscript submitted for proofreading is matched with an editing expert through a manual process, which is costly and often subjective. The major drawback of the manual process is that it is almost impossible to consider the inherent characteristics of a manuscript, such as writing style and paragraph composition. To this end, we propose an expert recommendation method for manuscript editing services based on matrix factorization, a well-known collaborative filtering approach for learning latent information from ordinal ratings given by users. Specifically, binary ratings are substituted for ordinal ratings when users express negative opinions, since negative opinions are expressed more accurately by binary ratings than by ordinal ratings. In experiments on a real-world dataset, the proposed method outperformed all compared methods with an RMSE (root mean squared error) of 0.1. Moreover, the effectiveness of substituting binary ratings for ordinal ratings was validated by conducting sentiment analysis on text reviews.
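To make the rating-refinement idea concrete, here is a minimal sketch (not the authors' code): ordinal ratings on an assumed 1-5 scale are kept for positive opinions, while negative opinions (at or below an assumed threshold of 3) are collapsed to the scale minimum before a plain SGD matrix factorization.

```python
# A minimal sketch of the idea under stated assumptions: 1-5 ordinal scale,
# threshold of 3 separating negative from positive opinions. Not the paper's
# actual preprocessing or model.
import numpy as np

NEG_THRESHOLD = 3            # assumed boundary for "negative opinion"
SCALE_MIN = 1

def refine_ratings(R):
    """Substitute a binary 'negative' signal for low ordinal ratings."""
    R = R.copy()
    mask = (R > 0) & (R <= NEG_THRESHOLD)    # 0 marks missing entries
    R[mask] = SCALE_MIN                      # negative opinions -> scale minimum
    return R

def factorize(R, k=8, lr=0.01, reg=0.05, epochs=200, seed=0):
    """Plain SGD matrix factorization; returns user/item latent factors."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = rng.normal(scale=0.1, size=(n_users, k))
    Q = rng.normal(scale=0.1, size=(n_items, k))
    users, items = np.nonzero(R)
    for _ in range(epochs):
        for u, i in zip(users, items):
            err = R[u, i] - P[u] @ Q[i]
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

R = np.array([[5, 2, 0], [4, 0, 1], [0, 5, 2]], dtype=float)  # toy user-expert ratings
R_refined = refine_ratings(R)
P, Q = factorize(R_refined)
rated = R_refined > 0
rmse = np.sqrt(np.mean((R_refined[rated] - (P @ Q.T)[rated]) ** 2))
```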

Author(s):  
Mohammed Erritali ◽  
Badr Hssina ◽  
Abdelkader Grota

Recommendation systems are used successfully to provide items (for example: movies, music, books, news, images) tailored to user preferences. Among the proposed approaches, we use the collaborative filtering approach, which finds the information that satisfies the user by using the reviews of other users. These ratings are stored in matrices whose sizes increase exponentially, making it costly to predict whether an item is interesting or not. The problem is that these systems overlook the fact that an assessment may have been influenced by other factors, which we call the cold-start factor. Our objective is to apply a hybrid recommendation approach to improve the quality of the recommendation. The advantage of this approach is that it does not require a new algorithm for calculating the predictions. We apply two collaborative filtering algorithms: k-nearest neighbours and matrix factorization based on singular value decomposition.
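A hedged sketch of the two building blocks named above, on toy data; this illustrates k-nearest-neighbour collaborative filtering and an SVD-based factorization step, not the authors' hybrid implementation.

```python
# Toy illustration of the two components: user-based kNN with cosine
# similarity, and a rank-k truncated SVD as the matrix-factorization step.
import numpy as np

R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)     # rows: users, cols: items; 0 = unrated

# --- k-nearest neighbours on users (cosine similarity) ---
norms = np.linalg.norm(R, axis=1, keepdims=True)
sim = (R @ R.T) / (norms * norms.T + 1e-9)

def knn_predict(u, i, k=2):
    neighbours = np.argsort(-sim[u])          # most similar users first
    neighbours = [v for v in neighbours if v != u and R[v, i] > 0][:k]
    if not neighbours:
        return R[R > 0].mean()                # fall back to the global mean
    w = sim[u, neighbours]
    return float(w @ R[neighbours, i] / (w.sum() + 1e-9))

# --- rank-k truncated SVD as the matrix-factorization step ---
mean = R[R > 0].mean()
filled = np.where(R > 0, R, mean)             # naive mean imputation for missing cells
U, s, Vt = np.linalg.svd(filled, full_matrices=False)
k = 2
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]    # low-rank rating estimate

print(knn_predict(0, 2), R_hat[0, 2])
```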


2014 ◽  
Vol 2 (2) ◽  
pp. 47-58
Author(s):  
Ismail Sh. Baqer

A two-level image quality enhancement is proposed in this paper. In the first level, the Dualistic Sub-Image Histogram Equalization (DSIHE) method decomposes the original image into two sub-images based on the median of the original image. The second level deals with the spike-shaped noise that may appear in the image after processing. We present three methods of image enhancement, GHE, LHE and the proposed DSIHE, that improve the visual quality of images. Comparative calculations are carried out on the above-mentioned techniques to examine objective and subjective image quality parameters, e.g., Peak Signal-to-Noise Ratio (PSNR) values, entropy H and mean squared error (MSE), to measure the quality of grey-scale enhanced images. For handling grey-level images, conventional Histogram Equalization methods, e.g., GHE and LHE, tend to shift the mean brightness of an image to the middle of the grey-level range, limiting their appropriateness for contrast enhancement in consumer electronics such as TV monitors. The DSIHE method seems to overcome this disadvantage, as it tends to preserve both brightness and contrast enhancement. Experimental results show that the proposed technique gives better results in terms of discrete entropy, signal-to-noise ratio and mean squared error values than the global and local histogram-based equalization methods.
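A compact sketch of the DSIHE idea under stated assumptions: the image is split at its median grey level, each sub-image is histogram-equalized within its own range, and the halves are recombined. This is toy code illustrating the technique, not the paper's implementation (and it omits the second-level spike-noise treatment).

```python
# DSIHE sketch: median split, per-sub-image equalization, recombination.
import numpy as np

def equalize(values, lo, hi):
    """Histogram-equalize `values` onto the grey range [lo, hi]."""
    hist, bins = np.histogram(values, bins=256, range=(0, 256))
    cdf = hist.cumsum() / max(hist.sum(), 1)
    mapped = lo + cdf * (hi - lo)
    return np.interp(values, bins[:-1], mapped)

def dsihe(img):
    med = int(np.median(img))
    out = img.astype(float).copy()
    low, high = img <= med, img > med
    out[low] = equalize(img[low], 0, med)           # lower sub-image -> [0, median]
    out[high] = equalize(img[high], med + 1, 255)   # upper sub-image -> (median, 255]
    return out.astype(np.uint8)

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
enhanced = dsihe(img)
```

Because each sub-image is equalized only within its own half of the grey range, the output's median stays near the input's, which is the mechanism behind the brightness preservation claimed above.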


2020 ◽  
Vol 10 (24) ◽  
pp. 8904
Author(s):  
Ana Isabel Montoya-Munoz ◽  
Oscar Mauricio Caicedo Rendon

Reliability in data collection is essential in Smart Farming supported by the Internet of Things (IoT). Several IoT- and Fog-based works consider the reliability concept, but they fall short in providing mechanisms at the network's edge for detecting and replacing outliers. Making decisions based on inaccurate data can diminish the quality of crops and, consequently, lose money. This paper proposes an approach for providing reliable data collection, which focuses on outlier detection and treatment in IoT-based Smart Farming. Our proposal includes an architecture based on the IoT-Fog-Cloud continuum, which incorporates a mechanism based on Machine Learning to detect outliers and another based on interpolation for inferring data intended to replace outliers. We locate the data cleaning at the Fog so that Smart Farming applications running on the farm operate with reliable data. We evaluate our approach by carrying out a case study in a network based on the proposed architecture and deployed at a Colombian coffee Smart Farm. Results show that our mechanisms achieve high Accuracy, Precision, and Recall, as well as a low False Alarm Rate and Root Mean Squared Error, when detecting and replacing outliers with inferred data. Considering the obtained results, we conclude that our approach provides reliable data collection in Smart Farming.
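A hedged sketch of the two fog-side mechanisms described above, on a simulated sensor stream: an ML outlier detector (IsolationForest is an assumption here, not necessarily the paper's model) followed by interpolation to replace the flagged readings.

```python
# Detect-then-replace sketch: ML flags outliers, interpolation infers substitutes.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
temps = 22 + rng.normal(0, 0.3, 200)        # simulated coffee-farm temperature stream
temps[[40, 41, 120]] = [35.0, -5.0, 48.0]   # injected outliers

detector = IsolationForest(contamination=0.02, random_state=0)
flags = detector.fit_predict(temps.reshape(-1, 1)) == -1   # -1 marks outliers

series = pd.Series(temps)
series[flags] = np.nan                                     # drop the outliers...
cleaned = series.interpolate(limit_direction="both")       # ...and infer replacements
```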


Proceedings ◽  
2020 ◽  
Vol 59 (1) ◽  
pp. 2
Author(s):  
Benoit Figuet ◽  
Raphael Monstein ◽  
Michael Felux

In this paper, we present an aircraft localization solution developed in the context of the Aircraft Localization Competition and applied to real-world ADS-B data from the OpenSky Network. The developed solution is based on a combination of machine learning and multilateration using data provided by time-synchronized ground receivers. A gradient boosting regression technique is used to obtain an estimate of the geometric altitude of the aircraft, as well as a first guess of the 2D aircraft position. Then, triplet-wise and all-in-view multilateration techniques are implemented to obtain an accurate estimate of the aircraft latitude and longitude. A sensitivity analysis of the accuracy as a function of the number of receivers is conducted and used to optimize the proposed solution. The obtained predictions have an accuracy below 25 m in terms of 2D root mean squared error and below 35 m for the geometric altitude.
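For illustration, a simplified 2D all-in-view multilateration sketch on synthetic receivers with ideal, synchronized times of arrival; scipy's least-squares solver stands in for the paper's estimator, and all positions and constants here are made up.

```python
# TDOA multilateration sketch: minimize range-difference residuals over 2D position.
import numpy as np
from scipy.optimize import least_squares

receivers = np.array([[0, 0], [50e3, 0], [0, 50e3], [50e3, 50e3]], float)  # metres
truth = np.array([21e3, 34e3])                       # unknown aircraft position
C = 299_792_458.0                                    # speed of light, m/s
toa = np.linalg.norm(receivers - truth, axis=1) / C  # ideal times of arrival

def residuals(pos):
    # TDOA residuals relative to the first receiver, expressed in metres
    d = np.linalg.norm(receivers - pos, axis=1)
    return (d - d[0]) - (toa - toa[0]) * C

fit = least_squares(residuals, x0=np.array([25e3, 25e3]))
error_m = np.linalg.norm(fit.x - truth)              # ~0 with noiseless timestamps
```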


2007 ◽  
Vol 89 (3) ◽  
pp. 135-153 ◽  
Author(s):  
JINLIANG WANG

Summary: Knowledge of the genetic relatedness among individuals is essential in diverse research areas such as behavioural ecology, conservation biology, quantitative genetics and forensics. How to estimate relatedness accurately from genetic marker information has been explored recently by many methodological studies. In this investigation I propose a new likelihood method that uses the genotypes of a triad of individuals in estimating pairwise relatedness (r). The idea is to use a third individual as a control (reference) in estimating the r between two other individuals, thus reducing the chance of genes identical in state being mistakenly inferred as identical by descent. The new method allows for inbreeding and accounts for genotype errors in data. Analyses of both simulated and human microsatellite and SNP datasets show that the quality of r estimates (measured by the root mean squared error, RMSE) is generally improved substantially by the new triadic likelihood method (TL) over the dyadic likelihood method and five moment estimators. Simulations also show that genotyping errors/mutations, when ignored, result in underestimates of r for related dyads, and that incorporating a model of typing errors in the TL method improves r estimates for highly related dyads but impairs those for loosely related or unrelated dyads. The effects of inbreeding were also investigated through simulations. It is concluded that, because most dyads in a natural population are unrelated or only loosely related, the overall performance of the new triadic likelihood method is the best, offering r estimates with an RMSE that is substantially smaller than those of the five commonly used moment estimators and the dyadic likelihood method.
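For reference, the RMSE criterion used to compare the estimators throughout this abstract has the standard form (a general definition, not anything specific to this study):

```latex
\mathrm{RMSE}(\hat{r}) = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{r}_i - r_i\right)^2}
```

where, for each of the $n$ dyads, $\hat{r}_i$ is the estimated and $r_i$ the true relatedness.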


2021 ◽  
pp. 202-208
Author(s):  
Daniel Theodorus ◽  
Sarjon Defit ◽  
Gunadi Widi Nurcahyo

Industry 4.0 is pushing many companies to transform to digital systems. Machine Learning is one solution for data analysis, and data analysis is a key point in providing the best service (user experience) to customers. The site studied in this research is PT. Sentral Tukang Indonesia, a company selling building materials and carpentry tools such as paint, plywood, aluminium, ceramics, and HPL. With the large amount of data available, the company has difficulty giving product recommendations to customers. A recommender system emerges as a solution for recommending products based on the interactions between customers recorded in the sales history data. The aims of this research are to help the company give product recommendations so as to increase sales, to make it easier for customers to find the products they need, and to improve the service given to customers. The data used are the sales history data for one period (Q1 2021), the customer data, and the product data of PT. Sentral Tukang Indonesia. The sales history data are split into 80% for the training dataset and 20% for the testing dataset. The item-based Collaborative Filtering method in this research uses the Cosine Similarity algorithm to compute the degree of similarity between products. Score prediction uses the Weighted Sum formula, and the error rate is computed with the Root Mean Squared Error formula. The results of this research show a top-10 product recommendation per customer, where the products displayed are those with the highest scores for that customer. This research can serve as a reference for the company in recommending the products customers need.
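A minimal sketch of the pipeline described above, with toy ratings in place of the company's sales history: item-item cosine similarity, weighted-sum score prediction, and an RMSE over the known entries.

```python
# Item-based CF sketch: cosine similarity + weighted-sum prediction + RMSE.
import numpy as np

R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)    # rows: customers, cols: products

def cosine_item_sim(R):
    norms = np.linalg.norm(R, axis=0, keepdims=True)
    return (R.T @ R) / (norms.T @ norms + 1e-9)

S = cosine_item_sim(R)

def weighted_sum(u, i):
    """Predicted score: similarity-weighted average of the user's other ratings."""
    rated = np.nonzero(R[u])[0]
    rated = rated[rated != i]                # leave the target item out
    w = S[i, rated]
    return float(w @ R[u, rated] / (np.abs(w).sum() + 1e-9))

def top_n(u, n=10):
    """Rank unrated products for customer u by predicted score."""
    unrated = np.nonzero(R[u] == 0)[0]
    return sorted(unrated, key=lambda i: -weighted_sum(u, i))[:n]

rmse = np.sqrt(np.mean([(weighted_sum(u, i) - R[u, i]) ** 2
                        for u, i in zip(*np.nonzero(R))]))
print(top_n(0), rmse)
```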


Author(s):  
Guibing Guo ◽  
Enneng Yang ◽  
Li Shen ◽  
Xiaochun Yang ◽  
Xiaodong He

Trust-aware recommender systems have received much attention recently for their ability to capture the influence among connected users. However, they suffer from an efficiency issue due to the large amount of data and time-consuming real-valued operations. Although existing discrete collaborative filtering may alleviate this issue to some extent, it is unable to accommodate social influence. In this paper, we propose a discrete trust-aware matrix factorization (DTMF) model to take dual advantage of both social relations and the discrete technique for fast recommendation. Specifically, we map the latent representations of users and items into a joint Hamming space by recovering the rating and trust interactions between users and items. We adopt a sophisticated discrete coordinate descent (DCD) approach to optimize the proposed model. Finally, experiments on two real-world datasets demonstrate the superiority of our approach over other state-of-the-art approaches in terms of ranking accuracy and efficiency.
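Not the DTMF optimizer itself (the DCD step is involved), but a sketch of why discrete codes speed up scoring: once users and items live in a joint Hamming space, preference reduces to a popcount instead of a floating-point dot product. The random codes below merely stand in for learned ones.

```python
# Hamming-space scoring sketch: rank items for a user by bitwise similarity.
import numpy as np

BITS = 64
rng = np.random.default_rng(0)
user_codes = rng.integers(0, 2**63, size=1000, dtype=np.uint64)  # stand-in for learned codes
item_codes = rng.integers(0, 2**63, size=5000, dtype=np.uint64)

def hamming_scores(u_code, item_codes):
    """Higher score = fewer differing bits = closer in the joint Hamming space."""
    x = np.bitwise_xor(u_code, item_codes)
    diff = np.array([bin(v).count("1") for v in x])   # popcount per item
    return BITS - diff

top10 = np.argsort(-hamming_scores(user_codes[0], item_codes))[:10]
```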


2021 ◽  
Vol 8 ◽  
Author(s):  
A. Christoper Tamilmathi ◽  
P. L. Chithra

This paper introduces a novel deep-learned quantization-based coding for 3D airborne LiDAR (light detection and ranging) point cloud (pcd) images (DLQCPCD). The raw pcd signals are sampled and transformed by applying Nyquist signal sampling and min-max signal transformation techniques, respectively, to improve the efficiency of the training process. Then, the transformed signals are fed into the deep-learned quantization module for compressing the data. To the best of our knowledge, the proposed DLQCPCD is the first deep-learning-based model for 3D airborne LiDAR pcd compression. The Mean Squared Error loss and Stochastic Gradient Descent optimization functions enhance the quality of the decompressed image by 67.01 percent on average compared to other functions. The model's efficiency has been validated against established, well-known compression techniques such as 7-Zip, WinRAR, and the tensor Tucker decomposition algorithm on three inconsistent airborne datasets. The experimental results show that the proposed model compresses every pcd image into a constant 16 neurons of data and decompresses the image with approximately 160 dB of PSNR value and 174.46 s execution time at 0.6 s execution speed per instruction, proving that it outperforms the other existing algorithms in terms of space and time.
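A hedged sketch of the pre-processing named above: a min-max transformation of raw point-cloud coordinates followed by a simple uniform quantizer standing in for the learned quantization module (which the paper implements as a neural network; the 16-level choice here is illustrative only).

```python
# Min-max transform + uniform quantization stand-in, with MSE/PSNR check.
import numpy as np

pcd = np.random.uniform(-50, 120, size=(10000, 3))   # synthetic LiDAR x, y, z

def min_max_transform(x):
    lo, hi = x.min(axis=0), x.max(axis=0)
    return (x - lo) / (hi - lo), lo, hi               # scaled per-axis to [0, 1]

def quantize(x, levels=16):                           # uniform quantizer, not the learned one
    return np.round(x * (levels - 1)).astype(np.uint8)

scaled, lo, hi = min_max_transform(pcd)
codes = quantize(scaled)                              # compressed representation
recon = codes / 15 * (hi - lo) + lo                   # decompression
mse = np.mean((pcd - recon) ** 2)
psnr = 10 * np.log10((hi - lo).max() ** 2 / mse)
```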


Author(s):  
Calvin Omind Munna

Currently, there is a growing amount of data produced and stored in clinical domains. Therefore, for effective handling of massive data sets, a fusion methodology needs to be analyzed by considering the algorithmic complexities. To effectively minimize the redundancy of image content, and hence the capacity needed to store and communicate data in optimal form, image processing methodology has to be involved. In this research, two compression methodologies, lossy compression and lossless compression, were therefore utilized to compress images while maintaining image quality. In addition, a number of sophisticated approaches to enhance the quality of the fused images have been applied. The methodologies have been assessed and various fusion findings are presented. Lastly, performance parameters were obtained and evaluated with respect to the sophisticated approaches. The Structural Similarity Index Metric (SSIM), Mean Squared Error (MSE), and Peak Signal-to-Noise Ratio (PSNR) are the metrics utilized for the sample clinical images. Critical analysis of the measured parameters shows higher efficiency compared to numerous image processing methods. This research provides an understanding of these approaches and enables scientists to choose effective methodologies for a particular application.
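For reference, the three quality metrics listed above can be computed with scikit-image; below is a short sketch on a synthetic image pair (any fused/reference clinical images could be substituted).

```python
# SSIM, MSE and PSNR on a toy reference/fused pair using scikit-image.
import numpy as np
from skimage.metrics import (structural_similarity,
                             peak_signal_noise_ratio,
                             mean_squared_error)

ref = np.random.randint(0, 256, (128, 128), dtype=np.uint8)             # reference image
fused = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)

mse = mean_squared_error(ref, fused)
psnr = peak_signal_noise_ratio(ref, fused)    # data range inferred from the uint8 dtype
ssim = structural_similarity(ref, fused)
print(mse, psnr, ssim)
```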


2020 ◽  
Vol 10 (14) ◽  
pp. 4926 ◽  
Author(s):  
Raúl Lara-Cabrera ◽  
Ángel González-Prieto ◽  
Fernando Ortega

Providing useful information to users by recommending highly demanded products and services is a fundamental part of the business of many top-tier companies. Recommender Systems make use of many sources of information to provide users with accurate predictions and novel recommendations of items. Here we propose DeepMF, a novel collaborative filtering method that combines the Deep Learning paradigm with Matrix Factorization (MF) to improve the quality of both predictions and recommendations made to the user. Specifically, DeepMF performs successive refinements of an MF model with a layered architecture that uses the knowledge acquired in one layer as input for subsequent layers. Experimental results show that the quality of both the predictions and the recommendations of DeepMF surpasses the baselines.
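One plausible reading of the layered-refinement idea, sketched below under the assumption that each layer factorizes the residual left by the previous layers; this is an illustration, not the authors' DeepMF architecture.

```python
# Layered MF sketch: each layer fits the residual of the layers before it.
import numpy as np

def mf_layer(R, mask, k=2, lr=0.01, reg=0.02, epochs=300, seed=0):
    """One SGD matrix-factorization pass over the observed entries of R."""
    rng = np.random.default_rng(seed)
    P = rng.normal(scale=0.1, size=(R.shape[0], k))
    Q = rng.normal(scale=0.1, size=(R.shape[1], k))
    us, its = np.nonzero(mask)
    for _ in range(epochs):
        for u, i in zip(us, its):
            err = R[u, i] - P[u] @ Q[i]
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P @ Q.T

R = np.array([[5, 3, 0], [4, 0, 1], [1, 5, 4]], dtype=float)
mask = R > 0
pred = np.zeros_like(R)
for layer in range(3):                        # deeper layers refine the residual
    pred += mf_layer(R - pred, mask, seed=layer)
rmse = np.sqrt(np.mean((R[mask] - pred[mask]) ** 2))
```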

