iris data
Recently Published Documents


TOTAL DOCUMENTS

101
(FIVE YEARS 25)

H-INDEX

13
(FIVE YEARS 2)

Significance ◽  
2021 ◽  
Vol 18 (6) ◽  
pp. 26-29
Author(s):  
Antony Unwin ◽  
Kim Kleinman
Keyword(s):  
Data Set ◽  

2021 ◽  
Vol 15 ◽  
Author(s):  
Usama Riaz ◽  
Fuleah A. Razzaq ◽  
Shiang Hu ◽  
Pedro A. Valdés-Sosa

Finding the common principal component (CPC) for ultra-high dimensional data is a multivariate technique used to discover the latent structure of covariance matrices of shared variables measured under two or more (k) conditions. Common eigenvectors are assumed for the covariance matrices of all conditions, only the eigenvalues being specific to each condition. Stepwise CPC computes a limited number of these CPCs, as the name indicates, sequentially, and is therefore less time-consuming. This method becomes unfeasible when the number of variables p is ultra-high, since storing k covariance matrices requires O(kp²) memory. Many dimensionality reduction algorithms have been improved to avoid explicit covariance calculation and storage (covariance-free). Here we propose a covariance-free stepwise CPC, which requires only O(kn) memory, where n is the total number of examples. Thus, for n ≪ p, the new algorithm shows clear advantages: it computes components quickly, with low consumption of machine resources. We validate our method, CFCPC, on the classical Iris data. We then show that CFCPC can extract the shared anatomical structure of EEG and MEG source spectra across a frequency range of 0.01–40 Hz.
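
As a rough illustration of the covariance-free trick described above (not the authors' stepwise CPC algorithm), the sketch below finds common components by power iteration with deflation, evaluating covariance-vector products as X.T @ (X @ q) so that no p × p matrix is ever formed. The function name covariance_free_cpc and the use of NumPy are assumptions of this sketch.

```python
import numpy as np

def covariance_free_cpc(X_list, n_components=2, n_iter=200, tol=1e-8):
    """Hypothetical covariance-free CPC sketch.

    X_list : list of (n_i, p) centered data matrices, one per condition.
    Covariance-vector products S_i @ q are computed implicitly as
    X_i.T @ (X_i @ q) / n_i, so no p x p covariance matrix is ever stored.
    """
    p = X_list[0].shape[1]
    Q = np.zeros((p, n_components))          # common components found so far
    for j in range(n_components):
        q = np.random.default_rng(j).standard_normal(p)
        q /= np.linalg.norm(q)
        for _ in range(n_iter):
            # accumulate sum_i S_i q without forming any covariance matrix
            v = np.zeros(p)
            for X in X_list:
                v += X.T @ (X @ q) / X.shape[0]
            # deflate: keep q orthogonal to the components already found
            v -= Q[:, :j] @ (Q[:, :j].T @ v)
            v_norm = np.linalg.norm(v)
            if v_norm < tol:
                break
            q_new = v / v_norm
            converged = np.abs(1.0 - np.abs(q_new @ q)) < tol
            q = q_new
            if converged:
                break
        Q[:, j] = q
    # condition-specific eigenvalues along each common component
    eigvals = np.array([[q @ (X.T @ (X @ q)) / X.shape[0] for X in X_list]
                        for q in Q.T])
    return Q, eigvals
```

On the Iris data, for example, X_list could hold the centered measurements of each of the three species, giving k = 3 conditions.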


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Jiasen Liu ◽  
Chao Wang ◽  
Zheng Tu ◽  
Xu An Wang ◽  
Chuan Lin ◽  
...  

With the advent of the intelligent era, more and more artificial intelligence algorithms are widely used and large amounts of user data are collected in cloud servers for sharing and analysis, but the security risks of private data breaches are increasing at the same time. CKKS homomorphic encryption has become a research focal point in cryptography because it supports homomorphic encryption of floating-point numbers with comparable computational efficiency. Based on CKKS homomorphic encryption, this paper implements a secure KNN classification scheme in cloud servers for cyberspace (CKKSKNNC) that supports batch calculation. The CKKS homomorphic encryption scheme is used to encrypt user data samples, and Euclidean distance, Pearson similarity, and cosine similarity are then computed between the ciphertext data samples. Finally, secure classification of the samples is realized by voting rules. The IRIS data set, a classification data set commonly used in machine learning, is selected for the experiments. The experimental results show that the similarity measures other than the Pearson correlation coefficient reach around 97% accuracy on the IRIS data, almost the same as in plaintext, which demonstrates the effectiveness of the scheme. Comparative experiments further demonstrate its efficiency.
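
The similarity-plus-voting step can be sketched in plaintext Python; in the paper the same computations are carried out on CKKS ciphertexts, which is omitted here. The function name knn_classify and the use of scikit-learn's bundled Iris data are assumptions of this sketch.

```python
import numpy as np
from collections import Counter
from sklearn.datasets import load_iris

def knn_classify(query, X_train, y_train, k=5, metric="euclidean"):
    """Plaintext sketch of the KNN step; the paper evaluates these
    similarities homomorphically on encrypted samples."""
    if metric == "euclidean":
        scores = -np.linalg.norm(X_train - query, axis=1)        # larger = closer
    elif metric == "cosine":
        scores = (X_train @ query) / (
            np.linalg.norm(X_train, axis=1) * np.linalg.norm(query))
    elif metric == "pearson":
        xc = X_train - X_train.mean(axis=1, keepdims=True)
        qc = query - query.mean()
        scores = (xc @ qc) / (np.linalg.norm(xc, axis=1) * np.linalg.norm(qc))
    else:
        raise ValueError(metric)
    neighbours = y_train[np.argsort(scores)[-k:]]
    return Counter(neighbours).most_common(1)[0][0]              # majority vote

X, y = load_iris(return_X_y=True)
pred = knn_classify(X[0], X[1:], y[1:], k=5, metric="cosine")
```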


Electronics ◽  
2021 ◽  
Vol 10 (20) ◽  
pp. 2482
Author(s):  
Soronzonbold Otgonbaatar ◽  
Mihai Datcu

Satellite instruments monitor the Earth’s surface day and night, and, as a result, the size of Earth observation (EO) data is increasing dramatically. Machine Learning (ML) techniques are employed routinely to analyze and process these big EO data, and one well-known ML technique is the Support Vector Machine (SVM). An SVM poses a quadratic programming problem, and quantum computers, including quantum annealers (QA) as well as gate-based quantum computers, promise to solve an SVM more efficiently than a conventional computer; training the SVM on a quantum computer or a conventional computer yields a quantum SVM (qSVM) or a classical SVM (cSVM), respectively. However, quantum computers cannot tackle many practical EO problems with a qSVM due to their very low number of input qubits. Hence, we assembled a coreset (“core of a dataset”) of given EO data for training a weighted SVM on a small quantum computer, a D-Wave quantum annealer with around 5000 input quantum bits. The coreset is a small, representative weighted subset of an original dataset, and its performance can be compared with that of the original dataset by using the proposed weighted SVM on a small quantum computer. As practical data, we use synthetic data, Iris data, a Hyperspectral Image (HSI) of Indian Pine, and a Polarimetric Synthetic Aperture Radar (PolSAR) image of San Francisco. We measured the closeness between an original dataset and its coreset with a Kullback–Leibler (KL) divergence test and, in addition, trained a weighted SVM on our coreset data using both a D-Wave quantum annealer (D-Wave QA) and a conventional computer. Our findings show that the coreset approximates the original dataset with very small KL divergence (smaller is better), and the weighted qSVM even outperforms the weighted cSVM on the coresets for a few instances of our experiments. As a by-product, we also present our KL divergence findings demonstrating the closeness between our original data (i.e., our synthetic data, Iris data, hyperspectral image, and PolSAR image) and the assembled coreset.
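
A minimal classical stand-in for this workflow on the Iris data, assuming a crude uniform-subsample coreset rather than the paper's construction, and scikit-learn's weighted SVC in place of the D-Wave qSVM; the KL check compares per-feature histograms.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC
from scipy.stats import entropy

# Toy coreset: a uniform subsample with uniform importance weights. A proper
# coreset would assign non-uniform weights; this only illustrates the workflow.
X, y = load_iris(return_X_y=True)
mask = y < 2                                     # binary problem for the SVM
X, y = X[mask], y[mask]
rng = np.random.default_rng(0)
idx = rng.choice(len(X), size=20, replace=False)
X_core, y_core = X[idx], y[idx]
weights = np.full(len(idx), len(X) / len(idx))   # each point stands in for several

# Classical weighted SVM (plays the role of the cSVM baseline; the qSVM would
# instead be trained on a quantum annealer).
svm = SVC(kernel="rbf").fit(X_core, y_core, sample_weight=weights)
print("accuracy on full data:", svm.score(X, y))

# KL divergence between per-feature histograms of the original data and coreset.
def mean_kl(A, B, bins=10):
    kls = []
    for j in range(A.shape[1]):
        lo, hi = A[:, j].min(), A[:, j].max()
        p, _ = np.histogram(A[:, j], bins=bins, range=(lo, hi), density=True)
        q, _ = np.histogram(B[:, j], bins=bins, range=(lo, hi), density=True)
        kls.append(entropy(p + 1e-9, q + 1e-9))
    return np.mean(kls)

print("mean KL(original || coreset):", mean_kl(X, X_core))
```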


2021 ◽  
Vol 2068 (1) ◽  
pp. 012004
Author(s):  
Chiang Ling Feng

Abstract The data from the Iris flower database are studied. The Iris database is among the most commonly used databases for machine learning algorithms; it was developed by Ronald Aylmer Fisher in 1936 and contains 150 records in three categories: Iris Setosa, Iris Versicolor, and Iris Virginica. Each record has four attributes: sepal length, sepal width, petal length, and petal width. For the machine learning algorithm, all 150 Iris records are used; 80% form the training set and the remaining 20% the test set. In machine learning, classification and discrimination are complicated and difficult tasks. In this study, the grey relational grade is used to extract the main features of the Iris flower and a Binary Tree [1] is used to classify the Irises. The results show that, for the same specific attributes, the grey relational grade extracts the main attributes and can be used in combination with a binary tree for classification.
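
A minimal sketch of grey relational grades used for attribute ranking on the Iris data, assuming the customary min-max normalisation and a distinguishing coefficient of 0.5; the paper's exact formulation and the subsequent binary-tree classifier are not reproduced here.

```python
import numpy as np
from sklearn.datasets import load_iris

def grey_relational_grades(X, reference, zeta=0.5):
    """Grey relational grade of each column of X against a reference sequence.
    Sequences are min-max normalised; zeta is the distinguishing coefficient."""
    def norm(v):
        return (v - v.min()) / (v.max() - v.min())
    ref = norm(reference.astype(float))
    grades = []
    for j in range(X.shape[1]):
        delta = np.abs(norm(X[:, j].astype(float)) - ref)   # |x0(k) - xi(k)|
        coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
        grades.append(coeff.mean())                          # grade = mean coefficient
    return np.array(grades)

iris = load_iris()
grades = grey_relational_grades(iris.data, iris.target)
ranking = np.argsort(grades)[::-1]        # attributes ranked by relational grade
print([iris.feature_names[i] for i in ranking])
```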


BIBECHANA ◽  
2021 ◽  
Vol 18 (2) ◽  
pp. 154-163
Author(s):  
Devendra Raj Upadhyay ◽  
Trishna Subedi

Interstellar dust properties derived from far-infrared bands reveal the nature of the regions around asymptotic giant branch (AGB) stars and stellar objects. Here, we present the physical properties of the cavity region around the AGB star IRAS 04427+4951, using IRIS and AKARI maps from the Sky View observatory together with SIMBAD, Aladin v2.5, and the Gaia Archive. The average color temperature and dust mass are 23.48 ± 0.009 K and 3.55×10²⁷ kg (1.79×10⁻³ Mʘ) from the IRIS data, and 14.89 ± 0.004 K and 5.34×10²⁸ kg (2.69×10⁻² Mʘ) from the AKARI data. The sizes of the isolated cavity-like structure around the AGB star are 45.67 pc × 17.02 pc and 42.25 pc × 17.76 pc, respectively. The visual extinction is found to lie in the range 3.2×10⁻⁴ to 4.3×10⁻⁴ mag and 4.5×10⁻³ to 7.4×10⁻³ mag, and the inclination angles are 86.15° and 93.92°. The method and results we present can be developed further for the study of the astrochemistry of the interstellar medium.


2021 ◽  
Author(s):  
Yuan Hu ◽  
Xiaoyong Si

Abstract The aim is to further improve the efficiency of iris detection and ensure real-time iris data acquisition. Here, a light field refocusing algorithm collects the data in real time on top of an existing iris data acquisition and detection system, and a deep learning (DL) convolutional neural network (CNN) is introduced. An iris image acquisition and real-time detection system based on the CNN is therefore proposed, and the system for image acquisition, processing, and display is built on an FPGA (Field Programmable Gate Array). A spatial filtering algorithm is used to compare the performance of the proposed bilateral filters with common filters. The results indicate that the proposed bilateral filters can pick out qualified iris images in real time, greatly improving the accuracy of the iris image recognition system. The average time for real-time quality assessment of each frame is less than 0.05 seconds. The classification accuracy of the DL-based iris image quality assessment algorithm is 96.38%, higher than that of the other two algorithms, and its average classification error rate of 3.69% is lower than the average error rates of the other algorithms. The results can provide a reference for real-time iris image detection and data acquisition.
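
A small sketch of the bilateral-versus-common-filter comparison, assuming OpenCV and a placeholder image path (iris_frame.png); the paper's FPGA pipeline and CNN quality scorer are replaced here by a crude Laplacian-variance sharpness score.

```python
import cv2

# Load a grayscale iris frame (the path is a placeholder for illustration).
img = cv2.imread("iris_frame.png", cv2.IMREAD_GRAYSCALE)
assert img is not None, "replace iris_frame.png with a real image path"

# Common smoothing filter: blurs edges along with the noise.
gaussian = cv2.GaussianBlur(img, ksize=(5, 5), sigmaX=1.5)

# Bilateral filter: smooths noise while keeping the pupil/iris boundaries sharp,
# which is what the downstream quality assessment relies on.
bilateral = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)

# Crude sharpness score (variance of the Laplacian) to compare the two outputs;
# the paper's CNN-based quality assessment would replace this heuristic.
for name, out in [("gaussian", gaussian), ("bilateral", bilateral)]:
    print(name, cv2.Laplacian(out, cv2.CV_64F).var())
```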


A Multilayer Perceptron Neural Network (MLPNN) consists of an input layer, at least one hidden layer, and an output layer. The number of neurons in the hidden layer affects the network's performance, and choosing it is a difficult task. This research aims to examine the performance of seven heuristic methods that have been used to estimate the number of neurons in the hidden layer. The effectiveness of these methods was verified on six benchmark data sets: the number of hidden-layer neurons selected by each heuristic method for every data set was used to train an MLP. The results demonstrate that the number of hidden neurons selected by each method yields different accuracy and stability compared with the other methods. The number of neurons selected by the Hush method for the Wine data set was 26, which achieved the best accuracy at 99.90%, while the lowest accuracy, 67.51%, was achieved by the Sheela method with 4 neurons. Using 22 neurons with 97.97% accuracy, the Ke, J method gave the best result for the Ionosphere data set, while the lowest accuracy was 96.95% with 5 neurons, achieved by the Kayama method. For the Iris data set, the Hush method achieved the best accuracy of 97.19% with 8 neurons; for the same data set the lowest result was 92.33% with 3 neurons, obtained by the Kayama method. For the WBC data set, the best accuracy of 96.40% was achieved by the Sheela and Kaastra methods using 4 and 7 neurons, while the Kanellopoulos method achieved the lowest accuracy of 94.18% with 7 neurons. For the Glass data set, 87.15% was the best accuracy, obtained with 18 neurons by the Hush method, and 82.27% with 6 neurons, obtained by the Wang method, was the lowest. Finally, for PID, an accuracy of 75.31% was achieved by the Kayama method with 3 neurons, whereas the Kanellopoulos method obtained 72.17% using 24 neurons.
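
A minimal sketch of the underlying experiment for the Iris data, assuming scikit-learn's MLPClassifier and an 80/20 split (the abstract does not name an implementation); the hidden-layer sizes 3 and 8 are the Kayama and Hush counts quoted above, and the remaining sizes are arbitrary extras for comparison.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Evaluate a single-hidden-layer MLP for each candidate neuron count.
for n_hidden in (3, 5, 8, 12, 20):
    clf = MLPClassifier(hidden_layer_sizes=(n_hidden,), max_iter=2000,
                        random_state=0).fit(X_tr, y_tr)
    print(f"{n_hidden:2d} hidden neurons -> test accuracy {clf.score(X_te, y_te):.4f}")
```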


Author(s):  
Dr. Kalaivazhi Vijayaragavan ◽  
S. Prakathi ◽  
S. Rajalakshmi ◽  
M Sandhiya

Machine learning is a subfield of artificial intelligence in which learning algorithms make decisions based on data and try to behave like a human being. Classification is one of the most fundamental concepts in machine learning: it is the process of recognizing, understanding, and grouping ideas and objects into pre-set categories or sub-populations. Using pre-categorized training data sets, machine learning applies a variety of algorithms to classify future data sets into those categories. Classification algorithms use input training data to predict which of the predetermined categories subsequent data fall into. To improve classification accuracy, careful design of the neural network is regarded as an effective way to obtain a better model; the design usually considers a scaling layer, perceptron layers, and a probabilistic layer. In this paper, an enhanced model selection is evaluated with a training and testing strategy, and the classification accuracy is then predicted. Finally, the prediction of classification accuracy is compared using two popular machine learning frameworks, PyTorch and TensorFlow. Results demonstrate that the proposed method can predict with higher accuracy. After deployment of the machine learning model, its performance has been evaluated with the help of the iris data set.
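
A minimal sketch of the kind of iris classifier being compared, written here in PyTorch with standardisation playing the role of the scaling layer and the probabilistic (softmax) layer applied implicitly through the loss; the paper's actual architectures and the TensorFlow counterpart are not reproduced.

```python
import torch
import torch.nn as nn
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
mean, std = X_tr.mean(axis=0), X_tr.std(axis=0)     # "scaling layer" statistics

# Perceptron layers; CrossEntropyLoss applies the softmax implicitly.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

xt = torch.tensor((X_tr - mean) / std, dtype=torch.float32)
yt = torch.tensor(y_tr, dtype=torch.long)
for _ in range(300):                                # simple full-batch training
    opt.zero_grad()
    loss = loss_fn(model(xt), yt)
    loss.backward()
    opt.step()

with torch.no_grad():
    xe = torch.tensor((X_te - mean) / std, dtype=torch.float32)
    acc = (model(xe).argmax(dim=1).numpy() == y_te).mean()
print("test accuracy:", acc)
```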


2020 ◽  
Vol 7 (6) ◽  
pp. 1213
Author(s):  
Ketut Agus Seputra ◽  
I Nyoman Saputra Wahyu Wijaya
Keyword(s):  
Data Set ◽  

K-Means is an algorithm used for clustering data. However, k-means suffers from sensitivity to the determination of the initial partition and number of clusters. Related research states that the k-means algorithm depends on the determination of the initial cluster centers, and randomly selecting the initial cluster centers tends to produce different clusters, so the best clustering must be determined by looking for the smallest Sum of Squared Errors (SSE). To overcome this problem, cluster determination is carried out using the Pillar algorithm. The Pillar algorithm determines cluster centers by selecting the data point with the largest Euclidean distance from the current cluster centers, while still taking possible outliers into account. Testing was carried out by setting one initial cluster as the initialization and, at the same time, as the comparison cluster for determining the quality of the subsequent clusters. This study uses the Ruspini and Iris data sets; the Ruspini data consist of 76 records, while the Iris data consist of 150 records. On the Ruspini data set, the Pillar clustering has smaller Sum of Squared Errors, cluster variance, and Davies values than the dynamic clustering; for the Pillar algorithm these values are, in order, 0.28, 0.11, 7.30, 5.88. For the Iris data set, the Sum of Squared Errors is higher than for the dynamic clustering, namely 0.34 versus 0.32 for the dynamic clustering algorithm. This is caused by inaccurate outlier detection in the Iris data set; the inaccuracy stems from the multivariate nature of the data, which allows an outlier to become an initial cluster centroid. Thus, judged by the SSE validity value, the dynamic-cluster Pillar k-means algorithm still performs less optimally than the dynamic-cluster k-means algorithm.
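
A sketch of a farthest-point ("pillar"-style) centroid initialisation followed by k-means on the Iris data, assuming scikit-learn; the outlier guard via a distance quantile is a simplified reading of the Pillar idea, not the authors' exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

def pillar_style_init(X, k, outlier_quantile=0.95):
    """Each new centroid is the point farthest from the centroids chosen so far,
    skipping points beyond an outlier distance quantile (simplified sketch)."""
    centroids = [X[np.argmin(np.linalg.norm(X - X.mean(axis=0), axis=1))]]
    while len(centroids) < k:
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        cutoff = np.quantile(d, outlier_quantile)      # crude outlier guard
        candidates = np.where(d <= cutoff)[0]
        centroids.append(X[candidates[np.argmax(d[candidates])]])
    return np.array(centroids)

X, _ = load_iris(return_X_y=True)
init = pillar_style_init(X, k=3)
km = KMeans(n_clusters=3, init=init, n_init=1).fit(X)
print("SSE (inertia):", km.inertia_)
```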

