A Fast Method for Defogging of Outdoor Visual Images

Author(s):  
Tannistha Pal

Images captured in severe atmospheric conditions, especially fog, suffer critically degraded quality and reduced visibility, which in turn affects several computer vision applications such as visual surveillance, intelligent vehicles and remote sensing. Acquiring a clear view is thus a prime requirement for any image. In the last few years, many approaches have been proposed to solve this problem. In this article, a comparative analysis is made of existing image defogging algorithms, and a defogging technique based on the dark channel prior strategy is then proposed. Experimental results show that the proposed method significantly improves the visual quality of images captured in foggy weather. The high computational cost of existing techniques is also overcome by the proposed method. A qualitative assessment is performed on both benchmark and real-time data sets to determine the efficacy of the technique. Finally, the whole work is concluded with its relative advantages and shortcomings.
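The abstract names the dark channel prior as the core strategy. A minimal numpy sketch of the standard dark-channel-prior pipeline (dark channel, atmospheric light, transmission, radiance recovery) is below; the function names, patch size and constants (`omega`, `t0`) are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def dark_channel(img, patch=3):
    # per-pixel minimum over colour channels, then a patch-wise minimum filter
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def defog(img, omega=0.95, t0=0.1, patch=3):
    # img: float RGB array scaled to [0, 1]
    dark = dark_channel(img, patch)
    # atmospheric light A: mean colour of the brightest dark-channel pixels
    n = max(1, dark.size // 1000)
    idx = np.argsort(dark.ravel())[-n:]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    # transmission estimate; omega keeps a little haze, t0 floors the divisor
    t = 1.0 - omega * dark_channel(img / A, patch)
    t = np.maximum(t, t0)[..., None]
    # scene radiance recovery
    return np.clip((img - A) / t + A, 0.0, 1.0)
```

The nested-loop minimum filter is the slow part; fast variants replace it with a separable or guided filter, which is where the speed-ups discussed in the abstract typically come from.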

Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5204
Author(s):  
Anastasija Nikiforova

Nowadays, governments launch open government data (OGD) portals that provide data which can be accessed and used by everyone for their own needs. Although the potential economic value of open (government) data is assessed in millions and billions, not all open data are reused. Moreover, the open (government) data initiative, as well as users' intent for open (government) data, is changing continuously; today, in line with IoT and smart city trends, real-time and sensor-generated data are of greater interest to users. These "smarter" open (government) data are also considered one of the crucial drivers of a sustainable economy, and might affect information and communication technology (ICT) innovation and become a creativity bridge in developing a new ecosystem for Industry 4.0 and Society 5.0. The paper inspects the OGD portals of 60 countries in order to understand how well their content corresponds to Society 5.0 expectations. It reports on the extent to which countries provide these data, focusing on open (government) data success factors both for the portal in general and for data sets of particular interest. The presence of "smarter" data, their level of accessibility, availability, currency and timeliness, as well as support for users, are analyzed, and the most competitive countries in each data category are listed. This makes it possible to understand which OGD portals react to users' needs and to the Industry 4.0 and Society 5.0 requests for opening and updating data for further potential reuse, which is essential in the digital data-driven world.


Author(s):  
A Salman Avestimehr ◽  
Seyed Mohammadreza Mousavi Kalan ◽  
Mahdi Soltanolkotabi

Abstract Dealing with the sheer size and complexity of today's massive data sets requires computational platforms that can analyze data in a parallelized and distributed fashion. A major bottleneck that arises in such modern distributed computing environments is that some of the worker nodes may run slowly. These nodes, a.k.a. stragglers, can significantly slow down computation, as the slowest node may dictate the overall computational time. A recent computational framework, called encoded optimization, creates redundancy in the data to mitigate the effect of stragglers. In this paper, we develop a novel mathematical understanding of this framework, demonstrating its effectiveness in much broader settings than was previously understood. We also analyze the convergence behavior of iterative encoded optimization algorithms, allowing us to characterize fundamental trade-offs between convergence rate, size of data set, accuracy, computational load (or data redundancy) and straggler tolerance in this framework.
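A toy numpy illustration of the encoded-optimization idea: a least-squares problem is encoded with a redundant random matrix, so the solution survives even when a fraction of the encoded rows (the stragglers' shares) never arrives. The sketch matrix, redundancy level and straggler fraction here are assumptions for illustration, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true                      # noiseless least-squares problem

# Encode the data with redundancy: m encoded rows, many more than d
m = 60
S = rng.normal(size=(m, n)) / np.sqrt(m)
Ae, be = S @ A, S @ b               # encoded problem min ||Ae x - be||^2

# Simulate stragglers: ~20% of encoded rows are dropped before solving
keep = rng.random(m) > 0.2
x_hat = np.linalg.lstsq(Ae[keep], be[keep], rcond=None)[0]
```

Because the encoded system remains consistent and overdetermined after the drop, the surviving rows still pin down the solution; the paper's analysis quantifies how much redundancy buys how much straggler tolerance.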


2019 ◽  
Vol 75 (1) ◽  
pp. 14-24 ◽  
Author(s):  
Joseph A. M. Paddison

Diffuse scattering is a rich source of information about disorder in crystalline materials, which can be modelled using atomistic techniques such as Monte Carlo and molecular dynamics simulations. Modern X-ray and neutron scattering instruments can rapidly measure large volumes of diffuse-scattering data. Unfortunately, current algorithms for atomistic diffuse-scattering calculations are too slow to model large data sets completely, because the fast Fourier transform (FFT) algorithm has long been considered unsuitable for such calculations [Butler & Welberry (1992). J. Appl. Cryst. 25, 391–399]. Here, a new approach is presented for ultrafast calculation of atomistic diffuse-scattering patterns. It is shown that the FFT can actually be used to perform such calculations rapidly, and that a fast method based on sampling theory can be used to reduce high-frequency noise in the calculations. These algorithms are benchmarked using realistic examples of compositional, magnetic and displacive disorder. They accelerate the calculations by a factor of at least 10², making refinement of atomistic models to large diffuse-scattering volumes practical.
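A minimal one-dimensional sketch of the FFT route to diffuse scattering from compositional disorder: the diffuse intensity is the squared modulus of the Fourier transform of the *fluctuation* part of the scattering density, and the O(N log N) FFT matches the direct O(N²) lattice sum. The lattice size and scattering lengths are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
# binary compositional disorder on a 1D lattice of N sites
occ = rng.random(N) < 0.5
b_A, b_B = 1.0, -0.5                      # hypothetical scattering lengths
rho = np.where(occ, b_A, b_B)

# diffuse scattering comes from the fluctuation part of the density
delta = rho - rho.mean()
I_fft = np.abs(np.fft.fft(delta)) ** 2 / N          # O(N log N)

# reference: direct O(N^2)-style lattice sum at one wavevector Q_k = 2*pi*k/N
k = 5
phase = np.exp(-2j * np.pi * k * np.arange(N) / N)
I_direct = np.abs(np.sum(delta * phase)) ** 2 / N
```

Subtracting the mean removes the Bragg (average-lattice) contribution, so the zero-frequency component vanishes and only the disorder signal remains; in 3D the same idea applies with a 3D FFT over the supercell.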


Author(s):  
Perikles Simon

Abstract During a pandemic, robust estimation of case fatality rates (CFRs) is essential to plan and control suppression and mitigation strategies. At present, estimates of the CFR of COVID-19 caused by SARS-CoV-2 infection vary considerably. The expert consensus of 0.1–1% covers, in practical terms, a range from normal seasonal influenza to Spanish influenza. In the following, I deduce a formula for an adjusted infection fatality rate (IFR) to assess mortality in a period following a positive test, adjusted for selection bias. Official data sets on cases and deaths were combined with data sets on numbers of tests. After data curation and quality control, a total of n=819 IFR values were calculated for 21 countries, for periods of up to 26 days between registration of a case and death. Estimates of the IFR increased with the length of the period but levelled off at >9 days, with a median over all 21 countries of 0.11 (95%-CI: 0.073–0.15). An epidemiologically derived IFR of 0.040% (95%-CI: 0.029%–0.055%) was determined for Iceland and was very close to the calculated IFR of 0.057% (95%-CI: 0.042–0.078), but 2.7–6-fold lower than the CFRs. IFRs, but not CFRs, were positively associated with increased proportions of elderly people in age cohorts (n=21, Spearman's ρ=0.73, p=0.02). Real-time data on molecular and serological testing may further displace classical diagnosis of disease and its related death. I critically discuss why, how and under which conditions the IFR provides a more solid early estimate of the global burden of a pandemic than the CFR.
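The abstract's period-adjusted rate pairs each day's registered cases with the deaths occurring a fixed number of days later. A schematic of that kind of lagged ratio is below; the function, field names and the simple summation are assumptions for illustration, and the paper's actual selection-bias adjustment is not reproduced:

```python
def period_fatality_rate(cases_by_day, deaths_by_day, lag):
    """Deaths registered `lag` days after case registration, divided by cases.
    A schematic period-adjusted fatality rate, returned as a percentage."""
    paired = [
        (c, deaths_by_day[i + lag])
        for i, c in enumerate(cases_by_day)
        if i + lag < len(deaths_by_day)
    ]
    total_cases = sum(c for c, _ in paired)
    total_deaths = sum(d for _, d in paired)
    return 100.0 * total_deaths / total_cases

# e.g. a 10-day period between registration of a case and death
rate = period_fatality_rate([1000] * 30, [1] * 30, lag=10)
```

The abstract's observation that estimates level off for lags beyond 9 days corresponds to sweeping `lag` upward until the ratio stabilizes.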


This framework includes two novel approaches for choosing outliers from various data sets. The first is the Relative Cosine-based Outlier Score (RCOS), proposed to measure the deviation score of objects: the deviation of each single attribute is calculated, and these are multiplied to obtain the deviation of the entire object. A threshold is set initially; if the calculated score is greater than the threshold, the instance is considered an outlier. Such instances are identified and removed, since outliers are not required for classification, and the remaining normal objects are subjected to different classification methods. The second method is the Hybrid Isolation Forest (HiForest), an enhanced version of the isolation forest; as in the first method, outliers are identified and removed. An experimental analysis is performed on synthetic and real-time data sets taken from the Weka and UCI repositories. Classification models are built, the generated results are tabulated, accuracy is recorded, and the results obtained by the two methods are compared and plotted for visualization.
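The first approach, multiplying per-attribute deviations into a single object score and thresholding it, can be sketched as follows. The per-attribute measure used here (z-score magnitude) is an assumption; the paper's relative cosine-based measure is not reproduced:

```python
import numpy as np

def deviation_scores(X, eps=1e-9):
    """One deviation score per object: per-attribute deviations multiplied
    together, as in the RCOS idea (attribute measure here is hypothetical)."""
    z = np.abs(X - X.mean(axis=0)) / (X.std(axis=0) + eps)
    # +1 keeps one near-zero attribute from zeroing the whole product
    return np.prod(1.0 + z, axis=1)

def remove_outliers(X, threshold):
    """Split objects into normal instances and outliers by score threshold."""
    scores = deviation_scores(X)
    return X[scores <= threshold], X[scores > threshold]
```

After the split, the normal partition is what would be fed into the classifiers; the HiForest variant performs the same identify-and-remove step with an isolation-forest-based score instead.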


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Bing Tang ◽  
Linyao Kang ◽  
Li Zhang ◽  
Feiyan Guo ◽  
Haiwu He

Nonnegative matrix factorization (NMF) is an efficient way to reduce the complexity of data and is capable of extracting highly interpretable parts from data sets; it has been applied to various fields such as recommendation, image analysis and text clustering. However, as the size of the matrix increases, nonnegative matrix factorization becomes very slow. To solve this problem, this paper proposes a GPU-based parallel NMF algorithm for the Spark platform, which makes full use of the advantages of the in-memory computation mode and GPU acceleration. The new GPU-accelerated NMF on the Spark platform is evaluated in a 4-node heterogeneous Spark cluster on Google Compute Engine, with each node equipped with an NVIDIA K80 CUDA device; experimental results indicate that it is competitive in computational time against existing solutions over a variety of matrix orders. Furthermore, a GPU-accelerated NMF-based parallel collaborative filtering (CF) algorithm is also proposed, utilizing the dimensionality-reduction and feature-extraction advantages of NMF as well as the multicore parallel computing mode of CUDA. Using real MovieLens data sets, experimental results show that the parallelization of NMF-based collaborative filtering on the Spark platform effectively outperforms traditional user-based and item-based CF, with higher processing speed and higher recommendation accuracy.
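For reference, the computation being parallelized is the classic NMF factorization V ≈ WH with nonnegative factors. A CPU-only numpy sketch using the standard Lee–Seung multiplicative updates is below; it is these dense matrix products that the paper's approach distributes across Spark workers and offloads to CUDA devices (the update rule is the textbook one, not necessarily the paper's exact variant):

```python
import numpy as np

def nmf(V, r, iters=500, eps=1e-9):
    """Factor nonnegative V (m x n) into W (m x r) @ H (r x n) via
    Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        # multiplicative updates keep every entry nonnegative by construction
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ (H @ H.T) + eps)
    return W, H
```

Each iteration is dominated by a handful of matrix multiplications, which is why the method maps naturally onto GPU BLAS kernels and why per-block parallelism on Spark pays off as the matrix order grows.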


2014 ◽  
Vol 571-572 ◽  
pp. 497-501 ◽  
Author(s):  
Qi Lv ◽  
Wei Xie

Real-time log analysis on large-scale data is important for many applications; here, real-time refers to UI latency within 100 ms. Techniques that efficiently support real-time analysis over large log data sets are therefore desired. MongoDB provides good query performance, an aggregation framework and a distributed architecture, making it suitable for real-time data query and massive log analysis. In this paper, a novel implementation approach for an event-driven file log analyzer is presented, and the performance of query, scan and aggregation operations over MongoDB, HBase and MySQL is compared. Our experimental results show that HBase delivers the most balanced performance across all operations, while MongoDB provides query speeds below 10 ms in some operations, which is most suitable for real-time applications.
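The kind of aggregation the comparison measures can be expressed as a MongoDB pipeline, sketched below with hypothetical document fields (`ts`, `level`); the pure-Python `Counter` line mirrors what the `$group` stage computes, so the example runs without a live server:

```python
from collections import Counter

# hypothetical log documents as they would sit in a MongoDB collection
logs = [
    {"ts": "2014-01-01T00:00:01Z", "level": "ERROR"},
    {"ts": "2014-01-01T00:00:02Z", "level": "INFO"},
    {"ts": "2014-01-01T00:00:03Z", "level": "INFO"},
]

# aggregation pipeline: filter by time window, count events per severity
# level, sort by count (with a live server: db.logs.aggregate(pipeline))
pipeline = [
    {"$match": {"ts": {"$gte": "2014-01-01T00:00:00Z"}}},
    {"$group": {"_id": "$level", "count": {"$sum": 1}}},
    {"$sort": {"count": -1}},
]

# pure-Python equivalent of the $group stage, for illustration
counts = Counter(doc["level"] for doc in logs)
```

Pushing the grouping into the database like this, rather than scanning documents client-side, is what lets the aggregation framework stay within the sub-100 ms latency budget the paper targets.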

