Inkjet Quality Ruler Experiments and Print Uniformity Predictor

2020 ◽  
Vol 2020 (9) ◽  
pp. 373-1-373-8
Author(s):  
Yi Yang ◽  
Utpal Sarkar ◽  
Isabel Borrell ◽  
Jan P. Allebach

Macro-uniformity is an important factor in the overall quality of prints from inkjet printers. The International Committee for Information Technology Standards (INCITS) defined macro-uniformity for prints, which encompasses several printing defects such as banding, streaks, and mottle. Although any one kind of defect can be analyzed quantitatively, it is difficult to assess the overall perceptual quality when multiple defects appear simultaneously in a print. We used the macro-uniformity quality rulers designed by INCITS W1.1 as experimental references to conduct a psychophysical experiment that pooled subjects' perceptual assessments of our print samples. We then computed features that describe the severity of the defects in a test sample and trained a predictive model on these data. The predictor automatically estimates the macro-uniformity score that a human judge would assign. Our results show that the predictor works accurately: the predicted scores are close to the subjective visual scores (ground truth). We also used 6-fold cross-validation to confirm the efficacy of our predictor.
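The training-and-validation loop described above can be sketched as follows. This is a minimal illustration, not the authors' model: the three defect-severity features and the synthetic subjective scores are hypothetical stand-ins.

```python
# Sketch: predict subjective macro-uniformity scores from defect-severity
# features, evaluated with 6-fold cross-validation as in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
# Hypothetical features: banding, streak, and mottle severities per sample.
X = rng.uniform(0, 1, size=(60, 3))
# Hypothetical ground truth: subjective score roughly driven by the defects.
y = 100 - 40 * X[:, 0] - 30 * X[:, 1] - 20 * X[:, 2] + rng.normal(0, 2, 60)

model = RandomForestRegressor(n_estimators=100, random_state=0)
cv = KFold(n_splits=6, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
print(scores.mean())
```

The cross-validated R² indicates how well the learned mapping from defect features to perceptual scores generalizes to unseen prints.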

Land ◽  
2020 ◽  
Vol 9 (6) ◽  
pp. 174
Author(s):  
Desheng Wang ◽  
A-Xing Zhu

Digital soil mapping (DSM) is currently the primary framework for predicting the spatial variation of soil information (soil type or soil properties). Random forests and similarity-based methods have been used widely in DSM. However, the accuracy of the similarity-based approach is limited, and the performance of random forests is affected by the quality of the feature set. The objective of this study was to present a method for soil mapping by integrating the similarity-based approach and the random forests method. The Heshan area (Heilongjiang province, China) was selected as the case study for mapping soil subgroups. The results on the regular validation samples showed that the overall accuracy of the integrated method (71.79%) is higher than that of the similarity-based approach (58.97%) and random forests (66.67%). The results of the 5-fold cross-validation showed that the overall accuracies of the integrated method, the similarity-based approach, and random forests range from 55% to 72.73%, 43.48% to 69.57%, and 54.17% to 70.83%, with average accuracies of 66.61%, 57.39%, and 59.62%, respectively. These results suggest that the proposed method can produce a high-quality covariate set and achieve better performance than either random forests or the similarity-based approach alone.
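One way to realize the integration idea is to append similarity-based class-membership values to the environmental covariates before training the random forest. The sketch below assumes this interpretation; the covariates, reference profiles, and labels are synthetic, not the study's Heshan data.

```python
# Sketch: enrich the covariate set with similarity-based membership values,
# then train a random forest on the enriched set (assumed integration scheme).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 5))            # environmental covariates
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # hypothetical soil subgroup labels

# Similarity to reference profiles (one typical sample per class).
refs = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])
similarity = rbf_kernel(X, refs)          # membership-like similarity values
X_enriched = np.hstack([X, similarity])

rf = RandomForestClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(rf, X_enriched, y, cv=5).mean()
print(round(acc, 3))
```

The similarity columns act as extra covariates, which is one plausible mechanism for the "high-quality covariate set" the abstract credits to the integrated method.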


2020 ◽  
Vol 8 (2) ◽  
Author(s):  
Yohana Tri Utami ◽  
Dewi Asiah Shofiana ◽  
Yunda Heningtyas

Telecommunication companies face substantial problems with customer migration because of the large number of competitors, dynamic market circumstances, and the presence of many innovative and attractive offerings. This situation has produced a high level of customer churn, decreasing company revenue. Under these conditions, customer churn prediction is a well-known approach that can help protect a company's revenue and reputation. To predict the reasons behind customer migration, this study applied a data mining classification technique based on the C4.5 algorithm. The model was evaluated using 10-fold cross-validation, yielding an accuracy of 87%, a precision of 87.5%, and a recall of 97%. Given this good performance, we conclude that the C4.5 algorithm succeeded in discovering several causes of churn among telecommunication users, with price in first place as the primary reason.
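A close sklearn analogue of the approach above uses a decision tree with the entropy criterion (the information-gain heuristic underlying C4.5) under 10-fold cross-validation. The churn features and the churn rule below are hypothetical, not the study's dataset.

```python
# Sketch: C4.5-style decision tree (entropy criterion) with 10-fold CV.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
n = 200
price = rng.uniform(10, 100, n)          # hypothetical monthly price
service_calls = rng.integers(0, 10, n)   # hypothetical support-call count
X = np.column_stack([price, service_calls])
# Hypothetical rule matching the abstract's finding: price drives churn.
churn = ((price > 60) | (service_calls > 7)).astype(int)

tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
acc = cross_val_score(tree, X, churn, cv=cv).mean()
print(round(acc, 3))
```

Inspecting the fitted tree's top split (e.g. via `sklearn.tree.export_text`) is one way to recover "price" as the primary churn driver, mirroring the study's conclusion.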


Author(s):  
M. Veera Kumari et al.

Many airline services around the world offer different facilities to their customers, and those customers may or may not be satisfied. Because customers cannot always express their opinions immediately, airline services provide Twitter channels to collect feedback on their services; Twitter has increasingly been used to improve service quality [4]. This paper develops several classification techniques to improve the accuracy of sentiment analysis. Tweets about services are classified into three polarities: positive, negative, and neutral. The classification methods are Random Forest (RF), Logistic Regression (LR), K-Nearest Neighbors (KNN), Naïve Bayes (NB), Decision Tree (DTC), Extreme Gradient Boosting (XGB), combinations of two, three, and four classifiers under a majority Voting Classifier, and AdaBoost; the accuracy of each method was measured using 20-fold and 30-fold cross-validation in the validation phase. The paper also proposes a new ensemble bagging approach for the different classifiers [10]. The sentiment-analysis metrics precision, recall, F1-score, micro average, macro average, and accuracy are reported for all of the classification techniques mentioned above. In addition, the averaged predictions of the classifiers, and the accuracy of those averaged predictions, were computed to assess service quality. The results show that bagging classifiers achieve better accuracy than non-bagging classifiers.
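The two ensembling strategies named above, majority voting over heterogeneous classifiers and bagging over a single base learner, can be sketched with sklearn. The dataset here is synthetic; the paper's tweet features and fold counts (20 and 30) are replaced by a small 5-fold illustration.

```python
# Sketch: majority-vote ensemble vs. a bagged decision tree, both under CV.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Hard voting: each base classifier casts one vote per sample.
voting = VotingClassifier([
    ("lr", LogisticRegression(max_iter=1000)),
    ("knn", KNeighborsClassifier()),
    ("dt", DecisionTreeClassifier(random_state=0)),
], voting="hard")

# Bagging: many trees trained on bootstrap resamples of the data.
bagged = BaggingClassifier(DecisionTreeClassifier(random_state=0),
                           n_estimators=50, random_state=0)

vote_acc = cross_val_score(voting, X, y, cv=5).mean()
bag_acc = cross_val_score(bagged, X, y, cv=5).mean()
print(round(vote_acc, 3), round(bag_acc, 3))
```

Bagging reduces the variance of an unstable base learner such as a decision tree, which is consistent with the paper's finding that bagged classifiers outperform their non-bagged counterparts.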


2020 ◽  
Vol 25 (6) ◽  
pp. 4805-4830
Author(s):  
Davide Falessi ◽  
Jacky Huang ◽  
Likhita Narayana ◽  
Jennifer Fong Thai ◽  
Burak Turhan

Abstract We are in the shoes of a practitioner who uses previous project releases' data to predict which classes of the current release are defect-prone. In this scenario, the practitioner would like to use the most accurate classifier among the many available ones. A validation technique, hereinafter "technique", defines how to measure the prediction accuracy of a classifier. Several previous research efforts analyzed several techniques. However, no previous study compared validation techniques in the within-project across-release class-level context or considered techniques that preserve the order of data. In this paper, we investigate which technique recommends the most accurate classifier. We use the last release of a project as the ground truth to evaluate the classifier's accuracy and hence the ability of a technique to recommend an accurate classifier. We consider nine classifiers, two industry and 13 open-source projects, and three validation techniques: 10-fold cross-validation (the most used technique), bootstrap (the recommended technique), and walk-forward (a technique preserving the order of data). Our results show that: 1) classifiers differ in accuracy in all datasets regardless of their entity per value, 2) walk-forward statistically outperforms both 10-fold cross-validation and bootstrap in all three accuracy metrics: AUC of the selected classifier, bias, and absolute bias, 3) surprisingly, all techniques turned out to be more prone to overestimating than to underestimating the performance of classifiers, and 4) the defect rate changed between the first and second half of the data in both industry projects and 83% of the open-source datasets. 
Given these empirical results, and because walk-forward is by nature simpler, less expensive, and more stable than the other two techniques, this study recommends techniques that preserve the order of data, such as walk-forward, over 10-fold cross-validation and bootstrap in the within-project across-release class-level context.
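Walk-forward validation as described above trains only on releases that precede the test release, never shuffling across time. A minimal sketch, with hypothetical per-release features and defect labels:

```python
# Sketch of walk-forward validation over ordered releases: train on all
# releases up to r, test on release r+1, preserving temporal order
# (unlike k-fold cross-validation or bootstrap, which mix across releases).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
# Hypothetical (class features, defect labels) per release, in release order.
releases = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50))
            for _ in range(5)]

accuracies = []
for i in range(1, len(releases)):
    X_train = np.vstack([X for X, _ in releases[:i]])
    y_train = np.concatenate([y for _, y in releases[:i]])
    X_test, y_test = releases[i]
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    accuracies.append(clf.score(X_test, y_test))
print(len(accuracies))
```

Because each test set lies strictly in the "future" of its training data, walk-forward cannot leak later-release information into training, which is the property the paper's recommendation rests on.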


Cybersecurity ◽  
2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Jonah Burgess ◽  
Philip O’Kane ◽  
Sakir Sezer ◽  
Domhnall Carlin

Abstract While consumers use the web to perform routine activities, they are under constant threat of attack from malicious websites. Even when visiting 'trusted' sites, there is always a risk that the site has been compromised and is hosting a malicious script. In this scenario, the injected script would typically force the victim's browser to undergo a series of redirects before reaching an attacker-controlled domain, which delivers the actual malware. Although these malicious redirection chains aim to frustrate detection and analysis efforts, they can also help identify web-based attacks. Building upon previous work, this paper presents the first known application of a Long Short-Term Memory (LSTM) network to detect Exploit Kit (EK) traffic, utilising the structure of HTTP redirects. Samples are processed as sequences, where each timestep represents a redirect and contains a unique combination of 48 features. The experiment is conducted using a ground-truth dataset of 1279 EK and 5910 benign redirection chains. Hyper-parameters are tuned via 5-fold cross-validation, with the optimal configuration achieving an F1 score of 0.9878 against the unseen test set. Furthermore, we compare the results of isolated feature categories to assess their importance.
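Before an LSTM can consume a batch of redirection chains, the variable-length chains must be shaped into a common (timesteps, 48) layout. The padding scheme below is an assumed preprocessing step, not the authors' code:

```python
# Sketch: pad variable-length redirect chains (48 features per redirect)
# into a dense batch plus a validity mask, ready for an LSTM.
import numpy as np

def pad_chains(chains, n_features=48):
    """Zero-pad chains to (batch, max_len, n_features) with a boolean mask."""
    max_len = max(len(c) for c in chains)
    batch = np.zeros((len(chains), max_len, n_features))
    mask = np.zeros((len(chains), max_len), dtype=bool)
    for i, chain in enumerate(chains):
        batch[i, :len(chain)] = chain
        mask[i, :len(chain)] = True
    return batch, mask

rng = np.random.default_rng(0)
# Hypothetical chains of 2-6 redirects, 48 features per redirect.
chains = [rng.uniform(size=(rng.integers(2, 7), 48)) for _ in range(8)]
batch, mask = pad_chains(chains)
print(batch.shape)
```

The mask lets the downstream network (e.g. via masking layers) ignore padded timesteps, so short chains do not contribute spurious zero-feature redirects.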


2020 ◽  
Vol 2020 (9) ◽  
pp. 66-1-66-9
Author(s):  
Muhammad Irshad ◽  
Alessandro R. Silva ◽  
Sana Alamgeer ◽  
Mylène C.Q. Farias

In this work, we present a psychophysical study in which we analyzed the perceptual quality of images enhanced with several types of enhancement algorithms, including color, sharpness, histogram, and contrast enhancements. To estimate and compare the quality of the enhanced images, we performed a psychophysical experiment with 35 source images obtained from publicly available databases. More specifically, we used images from the Challenge Database, the CSIQ database, and the TID2013 database. To generate the test sequences, we used 12 different image enhancement algorithms, producing a dataset with a total of 455 images. We used a Double Stimulus Continuous Quality Scale (DSCQS) experimental methodology with a between-subjects approach, in which each subject scored a subset of the total database to avoid fatigue. Given the high number of test images, we designed a crowd-sourcing interface to perform an online psychophysical experiment. This type of interface has the advantage of making it possible to collect data from many participants. We also performed an experiment in a controlled laboratory environment and compared its results with the crowd-sourcing results. Since there are very few quality-enhancement databases available in the literature, this work represents a contribution to the area of image quality.


Author(s):  
Jung Soo Nam ◽  
Cho Rok Na ◽  
Hyoung Han Jo ◽  
Jun Yeob Song ◽  
Tae Ho Ha ◽  
...  

This article discusses the development of lens form error prediction models using in-process cavity pressure and temperature signals based on a k-fold cross-validation method. In a series of lens injection moulding experiments, a built-in-sensor mould is used, the in-process cavity pressure and temperature signals are captured, and the lens form errors are measured. Then, three features (maximum pressure, holding pressure, and maximum temperature) are identified from the measured cavity pressure and temperature profiles, and the lens form error prediction models are formulated based on a response surface methodology. In particular, the k-fold cross-validation approach is adopted in order to improve the prediction accuracy. It is demonstrated that the lens form error prediction models can be practically used for diagnosing the quality of injection-moulded lenses at an industrial site.
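A response surface model is typically a second-order polynomial fit over the process features. The sketch below assumes that form, with hypothetical pressure/temperature values and a hypothetical quadratic response; it is not the article's fitted model.

```python
# Sketch: second-order response surface for lens form error from the three
# features (max pressure, holding pressure, max temperature), with k-fold CV.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(4)
n = 80
max_pressure = rng.uniform(50, 100, n)    # hypothetical units: MPa
hold_pressure = rng.uniform(30, 60, n)
max_temp = rng.uniform(180, 220, n)       # hypothetical units: deg C
X = np.column_stack([max_pressure, hold_pressure, max_temp])
# Hypothetical quadratic response for lens form error (micrometres).
y = (0.01 * (max_pressure - 75) ** 2 + 0.02 * hold_pressure
     + 0.005 * max_temp + rng.normal(0, 0.1, n))

rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
r2 = cross_val_score(rsm, X, y, cv=KFold(5, shuffle=True, random_state=0),
                     scoring="r2").mean()
print(round(r2, 3))
```

Cross-validating the polynomial fit, rather than scoring it on the training runs alone, is what guards the response surface against overfitting the limited number of moulding experiments.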


2018 ◽  
Vol 10 (12) ◽  
pp. 1968 ◽  
Author(s):  
Nathaniel Levitan ◽  
Barry Gross

Due to its worldwide coverage and high revisit time, satellite-based remote sensing provides the ability to monitor in-season crop state variables and yields globally. In this study, we presented a novel approach to training agronomic satellite retrieval algorithms by utilizing collocated crop growth model simulations and solar-reflective satellite measurements. Specifically, we showed that bidirectional long short-term memory networks (BLSTMs) can be trained to predict the in-season state variables and yields of Agricultural Production Systems sIMulator (APSIM) maize crop growth model simulations from collocated Moderate Resolution Imaging Spectroradiometer (MODIS) 500-m satellite measurements over the United States Corn Belt at a regional scale. We evaluated the performance of the BLSTMs through both k-fold cross-validation and comparison to regional-scale ground-truth yields and phenology. Using k-fold cross-validation, we showed that three distinct in-season maize state variables (leaf area index, aboveground biomass, and specific leaf area) can be retrieved with cross-validated R2 values ranging from 0.4 to 0.8 for significant portions of the season. Several other plant, soil, and phenological in-season state variables were also evaluated in the study for their retrievability via k-fold cross-validation. In addition, by comparing to survey-based United States Department of Agriculture (USDA) ground-truth data, we showed that the BLSTMs are able to predict actual county-level yields with R2 values between 0.45 and 0.6 and actual state-level phenological dates (emergence, silking, and maturity) with R2 values between 0.75 and 0.85. 
We believe that a potential application of this methodology is to develop satellite products to monitor in-season field-scale crop growth on a global scale by reproducing the methodology with field-scale crop growth model simulations (utilizing farmer-recorded field-scale agromanagement data) and collocated high-resolution satellite data (fused with moderate-resolution satellite data).
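The k-fold evaluation pattern used above, scoring a retrieval model's cross-validated R² for an in-season state variable, can be sketched with a simple stand-in regressor (the paper itself uses BLSTMs on MODIS time series; the reflectance features and LAI target below are synthetic):

```python
# Sketch of the k-fold R-squared evaluation pattern for a state-variable
# retrieval, with a gradient-boosting stand-in for the paper's BLSTM.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(5)
# Hypothetical: 100 pixels x 8 compositing periods of one reflectance band.
X = rng.uniform(0, 1, size=(100, 8))
# Hypothetical leaf area index target driven by the seasonal mean signal.
lai = 3 * X.mean(axis=1) + rng.normal(0, 0.1, 100)

model = GradientBoostingRegressor(random_state=0)
r2 = cross_val_score(model, X, lai,
                     cv=KFold(5, shuffle=True, random_state=0),
                     scoring="r2")
print(round(r2.mean(), 3))
```

Reporting the per-fold R² distribution, rather than a single train-set score, is what allows the paper's "0.4 to 0.8" retrievability claims to be stated per state variable.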


2021 ◽  
Vol 13 (16) ◽  
pp. 3301
Author(s):  
Yeonju Choi ◽  
Sanghyuck Han ◽  
Yongwoo Kim

In recent years, research on increasing the spatial resolution and enhancing the quality of satellite images using deep learning-based super-resolution (SR) methods has been actively conducted. In the remote sensing field, conventional SR methods require high-quality satellite images as the ground truth. However, in most cases, high-quality satellite images are difficult to acquire because many image distortions occur owing to various imaging conditions. To address this problem, we propose an adaptive image quality modification method to improve SR image quality for the KOrea Multi-Purpose Satellite-3 (KOMPSAT-3). The KOMPSAT-3 is a high-performance optical satellite, which provides 0.7-m ground sampling distance (GSD) panchromatic and 2.8-m GSD multi-spectral images for various applications. We propose an SR method with a scale factor of 2 for the panchromatic and pan-sharpened images of KOMPSAT-3. The proposed SR method comprises a degradation model that generates a low-quality image for training, and a method for improving the quality of the raw satellite image. The proposed degradation model for low-resolution input image generation is based on Gaussian noise and a blur kernel. In addition, top-hat and bottom-hat transformations are applied to the original satellite image to generate an enhanced satellite image with improved edge sharpness and image clarity. Using this enhanced satellite image as the ground truth, an SR network is then trained. The performance of the proposed method was evaluated by comparing it with other SR methods in multiple ways, such as edge extraction, visual inspection, qualitative analysis, and object detection performance. Experimental results show that the proposed SR method achieves improved reconstruction results and perceptual quality compared to conventional SR methods.
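The two image operations described, the blur-plus-noise degradation model and the top-hat/bottom-hat enhancement, can be sketched with SciPy. Kernel size, sigma, and noise level below are assumptions for illustration, not the paper's parameters.

```python
# Sketch: (1) degradation model = Gaussian blur + noise + 2x decimation;
# (2) morphological top-hat/bottom-hat sharpening of the ground-truth image.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(6)
img = rng.uniform(0, 1, size=(64, 64))   # stand-in for a satellite image

# Degradation model: blur kernel, then scale-factor-2 decimation, then noise.
blurred = ndimage.gaussian_filter(img, sigma=1.0)
low_res = blurred[::2, ::2] + rng.normal(0, 0.01, (32, 32))

# Enhancement: add the top-hat (bright detail) and subtract the bottom-hat
# (dark detail) to sharpen edges in the training target.
size = (5, 5)
top_hat = img - ndimage.grey_opening(img, size=size)
bottom_hat = ndimage.grey_closing(img, size=size) - img
enhanced = img + top_hat - bottom_hat
print(low_res.shape, enhanced.shape)
```

Pairing `low_res` inputs with `enhanced` (rather than raw) targets is the core idea: the SR network learns to recover a cleaner, sharper image than the sensor actually delivered.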


Hepatitis is a common worldwide public health problem that affects populations in many countries. Machine learning has been widely used to classify various diseases, including hepatitis. In this research, the Random Forest algorithm is applied to a dataset of hepatitis patients to classify whether a patient will live or die. The dataset contains missing values and a class imbalance between the samples of healthy and sick patients, as often occurs in disease datasets. We replace missing values using the mean and median, and we handle the class imbalance with cost-sensitive methods that penalize misclassification. A manual feature-selection process is also carried out to find features that can be removed while maintaining classification accuracy. The validation method used is 10-fold cross-validation, with the Random Forest parameters tuned to find the best classification result. This research prioritizes classification quality given the small amount of data and the class imbalance, so that the condition of hepatitis patients can be classified more successfully and accurately. The accuracy obtained is 85.80%.
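The pipeline described, imputation, a cost-sensitive random forest, and 10-fold cross-validation, can be sketched as follows. The dataset is synthetic and the specific choices (median imputation, `class_weight="balanced"` as the cost-sensitive mechanism) are assumptions standing in for the paper's exact settings.

```python
# Sketch: median imputation + cost-sensitive random forest + 10-fold CV
# on a hypothetical imbalanced dataset with injected missing values.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
X = rng.normal(size=(150, 6))
y = (rng.uniform(size=150) < 0.2).astype(int)   # ~20% minority ("die") class
X[rng.uniform(size=X.shape) < 0.05] = np.nan    # inject missing values

pipe = make_pipeline(
    SimpleImputer(strategy="median"),
    RandomForestClassifier(n_estimators=200, class_weight="balanced",
                           random_state=0),
)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
acc = cross_val_score(pipe, X, y, cv=cv).mean()
print(round(acc, 3))
```

Putting the imputer inside the cross-validated pipeline matters: imputing before splitting would leak test-fold statistics into training, inflating the reported accuracy.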

