Performance of Algorithm
Recently Published Documents

TOTAL DOCUMENTS: 44 (FIVE YEARS: 18)
H-INDEX: 3 (FIVE YEARS: 1)

2022 ◽  
Vol 14 (2) ◽  
pp. 407
Author(s):  
Jongjin Seo ◽  
Haklim Choi ◽  
Young-Suk Oh

Aerosols in the atmosphere play an essential role in the radiative transfer process through their scattering, absorption, and emission, and they interfere with the retrieval of atmospheric properties from ground-based and satellite remote sensing. Accurate aerosol information therefore needs to be obtained. Herein, we developed an optimal-estimation-based aerosol optical depth (AOD) retrieval algorithm using the hyperspectral infrared downwelling emitted radiance of the Atmospheric Emitted Radiance Interferometer (AERI). The proposed algorithm exploits the fact that the thermal infrared radiance measured by a ground-based remote sensor is sensitive to both the thermodynamic profile and the aerosol loading of the atmosphere. To assess the performance of the algorithm, AERI observations measured throughout the day on 21 October 2010 at Anmyeon, South Korea, were used. The derived thermodynamic profiles and AODs were compared with those of the European Centre for Medium-Range Weather Forecasts reanalysis version 5 (ERA5) and the Global Atmosphere Watch precision-filter radiometer (GAW-PFR), respectively. The radiances simulated with aerosol information matched the AERI-observed radiance better than those simulated without aerosol (i.e., clear sky). The temporal variation of the retrieved AOD tracked that of the GAW-PFR well, although small discrepancies were present at high aerosol concentrations. This demonstrates the potential of the method for retrieving nighttime AOD.
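The retrieval machinery behind algorithms of this kind is typically a Gauss-Newton iteration of Rodgers-style optimal estimation. The sketch below is a minimal, hypothetical illustration of that update step; the paper's actual forward model, state vector, and covariances are not given in the abstract, so every name here is a placeholder.

```python
import numpy as np

def oe_step(x, x_a, y, F, K, S_a_inv, S_e_inv):
    """One Gauss-Newton step of a Rodgers-style optimal estimation retrieval.

    x        -- current state estimate (e.g., AOD plus thermodynamic profile)
    x_a      -- a priori state
    y        -- observed radiances (e.g., an AERI spectrum)
    F        -- forward model: state -> simulated radiances
    K        -- Jacobian of F at x (n_obs x n_state)
    S_a_inv  -- inverse a priori covariance
    S_e_inv  -- inverse observation-error covariance
    """
    # Approximate Hessian of the cost function (prior + data terms)
    A = S_a_inv + K.T @ S_e_inv @ K
    # Gradient term: data misfit balanced against the pull toward the prior
    b = K.T @ S_e_inv @ (y - F(x)) - S_a_inv @ (x - x_a)
    return x + np.linalg.solve(A, b)
```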


Author(s):  
Amar A. Mahawish ◽  
Hassan J. Hassan

Congestion on the Internet is the main issue affecting the performance of data transmission over a network. A congestion-control algorithm is required to keep any network efficient and reliable for carrying users' traffic. Many algorithms have been suggested over the years to improve congestion control, such as drop-tail queueing, and recently many algorithms have been developed to overcome the drawbacks of the drop-tail approach. One important development is active queue management (AQM), which provides efficient congestion control by reducing packet drops; this technique serves as the basis for many other congestion-control schemes. AQM operates in the network core (the router), dropping or marking packets in the router's buffer before congestion sets in. In this study, a comprehensive survey is made of proposed AQM schemes and their refinements, classifying AQM algorithms by whether they act on queue length, queue delay, or both. The advantages and limitations of each algorithm are discussed, as is the combination of intelligent techniques with AQM to optimize algorithm performance. Finally, the algorithms are compared on different metrics to identify the strengths and weaknesses of each.
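As a concrete instance of the queue-length-based class surveyed here, the sketch below implements the classic Random Early Detection (RED) drop test. It is a minimal illustration rather than any specific variant from the survey; the thresholds and averaging weight are illustrative defaults.

```python
import random

class RED:
    """Minimal Random Early Detection (RED), a queue-length-based AQM."""

    def __init__(self, min_th=5, max_th=15, max_p=0.1, w=0.002):
        self.min_th, self.max_th, self.max_p, self.w = min_th, max_th, max_p, w
        self.avg = 0.0  # exponentially weighted average of queue length

    def should_drop(self, queue_len):
        # Smooth the instantaneous queue length so short bursts are tolerated
        self.avg = (1 - self.w) * self.avg + self.w * queue_len
        if self.avg < self.min_th:
            return False                      # no congestion: accept packet
        if self.avg >= self.max_th:
            return True                       # severe congestion: drop/mark
        # Between thresholds: drop/mark probabilistically, before the
        # buffer overflows (the "before congestion inception" idea)
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p
```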


2021 ◽  
Vol 263 (1) ◽  
pp. 5310-5313
Author(s):  
Youngbeen Chung ◽  
Narae Kim ◽  
Donggeun Lee ◽  
Sang-Heon Kim ◽  
Junhong Park

Pneumonia is often accompanied by serious complications and can sometimes lead to death; early diagnosis and continuous monitoring can greatly reduce the danger. Moreover, the COVID-19 pandemic has demonstrated the need for new diagnostic tools that minimize medical personnel engagement while avoiding exposure of equipment to afflicted patients. In this study, we developed a cough-monitoring algorithm that detects vibrations of the human body. The acceleration response at each part of the body was measured to determine how vibration propagates when a cough occurs, and it was confirmed that monitoring accuracy improved when the vibration signal was used together with the acoustic signal rather than the acoustic signal alone. The perceived cough was then analyzed in psychoacoustic and sound-energy terms. Characteristic features were derived by quantifying these analyses, a data augmentation process was applied, and finally an AI-based pneumonia diagnosis algorithm was constructed. To estimate the performance of the algorithm, its accuracy in identifying pneumonia from new cough cases was verified; it exceeded the accuracy of pulmonologists given only cough sounds. The developed algorithm, which performs continuous cough monitoring and reliable pneumonia diagnosis, can therefore serve as an effective supplementary tool for early diagnosis and prognosis of pneumonia.
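The abstract does not specify the paper's exact feature set or classifier, so the sketch below only illustrates the general pattern it describes: fusing acoustic (sound-energy and spectral) features with body-vibration features before classification. Every feature choice here is hypothetical.

```python
import numpy as np

def fused_features(audio, accel, fs_audio):
    """Hypothetical acoustic + vibration feature fusion for cough analysis.

    These features are illustrative stand-ins, not the paper's actual set.
    """
    # Acoustic: sound energy and a crude spectral descriptor
    spec = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(len(audio), 1 / fs_audio)
    rms = np.sqrt(np.mean(audio ** 2))              # sound-energy feature
    centroid = np.sum(freqs * spec) / np.sum(spec)  # spectral centroid
    # Vibration: body-acceleration energy and peak level
    accel_rms = np.sqrt(np.mean(accel ** 2))
    accel_peak = np.max(np.abs(accel))
    # Fused feature vector handed to a downstream classifier
    return np.array([rms, centroid, accel_rms, accel_peak])
```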


2021 ◽  
Vol 263 (2) ◽  
pp. 4683-4691
Author(s):  
Lei Wang ◽  
Kean Chen ◽  
Jian Xu ◽  
Wang Qi

In recent years, more attention has been paid to the performance of algorithms in active noise control (ANC). Compared with the filtered-x LMS (FxLMS) algorithm, which is based on stochastic gradient descent, the filtered-x RLS (FxRLS) algorithm has a faster convergence speed and better tracking performance at the cost of high computational complexity. To reduce the computation, the fast transversal filter (FTF) algorithm can be used in an ANC system. In this paper, simplified multichannel FxFTF algorithms are presented; the convergence speed and noise-reduction performance of different multichannel algorithms are simulated and compared, and the numerical stability of the algorithms is analyzed.
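For reference, the stochastic-gradient baseline these algorithms are measured against is FxLMS. Below is a minimal single-channel sketch (the paper's multichannel FxFTF is considerably more involved); for brevity the secondary path is applied only to the reference signal, and all signal names are placeholders.

```python
import numpy as np

def fxlms(x, d, s_hat, L=64, mu=1e-3):
    """Single-channel FxLMS sketch, simplified for illustration.

    x     -- reference signal
    d     -- disturbance measured at the error microphone
    s_hat -- estimated secondary-path impulse response
    """
    w = np.zeros(L)                       # adaptive control filter
    xf = np.convolve(x, s_hat)[:len(x)]   # reference filtered through s_hat
    e = np.zeros(len(x))
    for n in range(L, len(x)):
        xbuf = x[n - L:n][::-1]           # most recent L reference samples
        y = w @ xbuf                      # anti-noise output (secondary path
                                          # on y omitted here for brevity)
        e[n] = d[n] - y                   # residual error
        xfbuf = xf[n - L:n][::-1]
        w += mu * e[n] * xfbuf            # filtered-x gradient update
    return w, e
```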


Author(s):  
Xiangjun Li ◽  
Shuili Zhang ◽  
Haibo Zhao

As multimedia has become widely popular, the conflict between massive data volumes and finite storage devices has continuously intensified, demanding more convenient, efficient, and high-quality transmission and storage technology; what researchers pursue is highly efficient compression and fast image transmission. This paper further studies wavelet analysis and fractal compression coding, proposes a fast image-compression coding method based on wavelet transform and fractal theory, and provides the theoretical basis and concrete procedures for the algorithm. The method exploits the smoothness of wavelets, the high compression ratio of fractal coding, and the high quality of the reconstructed image. The image is first processed by a wavelet transform. Fractal features are then introduced, and the image is classified according to the features of its sub-blocks, with proper features selected for each class. In this way, for any sub-block, only the best-matched block within the corresponding class needs to be searched, which effectively narrows the search to speed up coding, and an inequality relation is established between the sub-block and the matching mean-square error. Wavelet transform is thus effectively combined with fractal theory, further improving the quality of the reconstructed image. Simulation experiments objectively analyze the performance of the algorithm and show that the proposed algorithm has higher efficiency.
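The key speed-up described above is restricting the fractal search to domain blocks in the same class as the range block. The sketch below illustrates that idea in isolation, with made-up variance thresholds and without the wavelet stage or the brightness/contrast transform a full fractal coder would apply.

```python
import numpy as np

def classify_block(block):
    """Crude class label from block statistics (thresholds are illustrative)."""
    v = block.var()
    return 0 if v < 10 else 1 if v < 100 else 2  # smooth / edge / texture

def best_match(range_block, domain_blocks):
    """Search for the best-matched domain block only within the same class."""
    cls = classify_block(range_block)
    best, best_err = None, np.inf
    for idx, d in enumerate(domain_blocks):
        if classify_block(d) != cls:
            continue                            # skip other classes entirely
        err = np.mean((range_block - d) ** 2)   # matching mean-square error
        if err < best_err:
            best, best_err = idx, err
    return best, best_err
```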


Author(s):  
Sainan Xiao ◽  
Wangdong Yang ◽  
Buwen Cao ◽  
Honglie Zhou ◽  
Chenjun He

Finding an effective license plate localization (LPL) method is challenging owing to the varying conditions of the image acquisition phase. Most existing methods do not consider the various low-quality image conditions that occur in real-world situations: an image can have low resolution, plate imperfections, variable illumination, or background objects similar to the license plate (LP). To improve the anti-interference ability and the speed of the algorithm, this study develops a parallel partial enhancement method based on color differences that improves localization performance for blue-white LP images under low-quality conditions. A novel color difference model is exploited to enhance LP areas and filter non-LP areas, and blue-white color ratio and projection analysis are performed to select the exact LP area from the candidates. Moreover, a parallel version based on a multicore CPU is developed for real-time processing in industrial applications. An image database of 395 low-quality car images captured from various scenes under different conditions was used for the performance evaluation. Extensive experiments show the effectiveness and efficiency of the proposed approach.
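The paper's color difference model is not spelled out in this abstract; the sketch below shows one plausible form of such an enhancement for blue-white plates, emphasizing pixels whose blue channel dominates red and green. Both the formula and the threshold are assumptions made for illustration.

```python
import numpy as np

def enhance_blue_white(img_rgb, thresh=40):
    """Hypothetical color-difference enhancement for blue-white plates.

    Keeps pixels whose blue channel clearly dominates red and green;
    the paper's actual color difference model may differ.
    """
    r = img_rgb[..., 0].astype(np.int16)   # widen dtype to avoid overflow
    g = img_rgb[..., 1].astype(np.int16)
    b = img_rgb[..., 2].astype(np.int16)
    diff = b - np.maximum(r, g)            # large on a blue plate background
    mask = diff > thresh                   # candidate LP pixels
    return mask.astype(np.uint8) * 255     # binary enhancement map
```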


Author(s):  
V. NishaJenipher, et al.

Owing to the increasing number of cases around the world, lung cancer has been a favored research topic for a long time. Many researchers have therefore used prediction or classification algorithms to identify the factors that contribute to the spread of this deadly disease. Two models were built, named WRF and RF. The RF model reports results for features selected by a predominant feature-selection method, whereas the WRF model reports results for all features without any selection process. A comparison is made to show the importance of feature selection for classification and prediction algorithms. The accuracy of the WRF model is higher than that of the RF model, which highlights the importance of feature selection for classification algorithms.
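The abstract does not define the WRF/RF models or the selection method in detail, so the sketch below only reproduces the experimental pattern being described: a random forest trained on all features versus one trained on a filtered subset, compared by cross-validation. The synthetic data and the k=8 choice are stand-ins, not the paper's setup.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for a lung-cancer dataset with noisy features
X, y = make_classification(n_samples=300, n_features=40, n_informative=8,
                           random_state=0)

all_feats = RandomForestClassifier(random_state=0)           # no selection
selected = make_pipeline(SelectKBest(f_classif, k=8),        # select first,
                         RandomForestClassifier(random_state=0))  # then fit

print("all features :", cross_val_score(all_feats, X, y, cv=5).mean())
print("selected     :", cross_val_score(selected, X, y, cv=5).mean())
```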


Nowadays, cancer has become a deadly disease due to abnormal cell growth, and many researchers are working on its early prediction. Proper classification of cancer data demands identifying a proper set of genes by analyzing genomic data. Most researchers use microarrays to identify cancerous genomes; however, such data are high-dimensional, with far more genes than samples, and contain many irrelevant features and noise. How a classification technique deals with such data influences the performance of the algorithm. A popular classification algorithm, logistic regression, is considered in this work for gene classification. Regularization techniques such as Lasso with an L1 penalty, Ridge with an L2 penalty, and a hybrid Lasso with an L1/2+2 penalty are used to minimize irrelevant features and avoid overfitting. However, these methods are sparse parametric methods limited to linear data, and they have not produced promising performance when applied to high-dimensional genomic data. To solve these problems, this paper presents an Additive Sparse Logistic Regression with Additive Regularization (ASLR) method to discriminate linear and non-linear variables in gene classification. The results show that the proposed method is the best-regularized method for classifying microarray data compared with standard methods.
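The ASLR method itself has no off-the-shelf implementation, and the L1/2+2 penalty is not available in standard libraries, so the sketch below shows only the conventional Lasso (L1) and Ridge (L2) logistic-regression baselines that such methods are compared against, on a synthetic more-genes-than-samples dataset.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# High-dimensional stand-in: far more "genes" (features) than samples
X, y = make_classification(n_samples=80, n_features=2000, n_informative=20,
                           random_state=0)

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)  # sparse
ridge = LogisticRegression(penalty="l2", C=0.5)                      # shrinkage

for name, model in [("L1 (Lasso)", lasso), ("L2 (Ridge)", ridge)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```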


Author(s):  
N. A. Likhoded ◽  
A. A. Tolstsikau

Locality is an algorithm characteristic describing the usage level of fast-access memory; in the case of distributed-memory computers, for example, the focus is on the memory of each computational node. To achieve high performance in an algorithm implementation, one should choose the best possible locality option. Studying the locality of a parallel algorithm amounts to estimating the number and volume of data communications. In this work, we formulate and prove statements for computers with distributed memory that allow us to estimate the asymptotic volume of data-communication operations. These estimates are useful when comparing alternative versions of parallel algorithms during communication-cost analysis.
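As a worked illustration of the kind of asymptotic communication-volume estimate meant here (not one taken from the paper), consider the classical per-process bound for 2D-blocked matrix multiplication on distributed memory:

```python
import math

def matmul_comm_volume(n, p):
    """Asymptotic per-process communication volume (in matrix elements) for
    an n x n matrix multiplication on p processes with a 2D block layout.

    Each process owns an (n/sqrt(p)) x (n/sqrt(p)) block and must receive a
    block row of A and a block column of B: about 2 * n^2 / sqrt(p) elements
    (SUMMA-style). This classical estimate only illustrates the kind of bound
    the paper derives.
    """
    return 2 * n ** 2 / math.sqrt(p)

# Communication volume per process shrinks as sqrt(p) grows
for p in (4, 16, 64):
    print(p, f"{matmul_comm_volume(4096, p):,.0f} elements/process")
```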


2020 ◽  
Vol 9 (10) ◽  
pp. 3144
Author(s):  
Abinaya Priya Venkataraman ◽  
Delila Sirak ◽  
Rune Brautaset ◽  
Alberto Dominguez-Vicent

Objective: To evaluate the performance of two subjective refraction measurement algorithms by comparing the refraction values, visual acuity, and time taken by the algorithms against standard subjective refraction (SSR). Methods: SSR and two semi-automated, algorithm-based subjective refraction procedures (SR1 and SR2) built into the Vision-R 800 phoropter were performed on 68 subjects. In SR1 and SR2, the subject's responses were fed into the algorithm, which continuously modified the spherical and cylindrical components accordingly; the main difference between SR1 and SR2 is an initial fogging step in SR1. Results: The average difference and the limits-of-agreement interval for the spherical equivalent between each pair of refraction methods were smaller than 0.25 D and 2.00 D, respectively. For the cylindrical components, the average difference was almost zero and the limits-of-agreement interval was less than 0.50 D. Visual acuities were not significantly different among the methods. The times taken for SR1 and SR2 were significantly shorter, with SR2 on average three times faster than SSR. Conclusions: The refraction values and visual acuity obtained with the standard subjective refraction and the algorithm-based methods were similar on average, and the algorithm-based methods were significantly faster than the standard method.

