random dataset
Recently Published Documents


TOTAL DOCUMENTS

13
(FIVE YEARS 6)

H-INDEX

4
(FIVE YEARS 2)

2020 ◽  
pp. JCM.01987-20
Author(s):  
Hauke Tönnies ◽  
Karola Prior ◽  
Dag Harmsen ◽  
Alexander Mellmann

The environmental bacterium Pseudomonas aeruginosa, in particular its multidrug-resistant clones, is often associated with nosocomial infections and outbreaks. Today, core genome multilocus sequence typing (cgMLST) is frequently applied to delineate sporadic cases from nosocomial transmissions. However, until recently, no cgMLST scheme for standardized typing of P. aeruginosa was available. To establish a novel cgMLST scheme for P. aeruginosa, we initially determined the breadth of the P. aeruginosa population based on MLST data with a Bayesian approach (BAPS). Using genomic data of isolates representative of the whole population and of all 12 serogroups, we extracted target genes and further refined them using a random dataset of 1,000 P. aeruginosa genomes. Subsequently, we investigated reproducibility and discriminatory ability with repeatedly sequenced isolates and isolates from well-defined outbreak scenarios, respectively, and compared the resulting clustering with that of two recently published cgMLST schemes. BAPS generated seven P. aeruginosa groups. To cover these and all serogroups, 15 reference strains were used to determine genes common to all strains. After refinement with the dataset of 1,000 genomes, the cgMLST scheme consisted of 3,867 target genes, which are representative of the P. aeruginosa population and highly reproducible across biological replicates. We finally evaluated the scheme by reanalyzing two published outbreaks in which the authors had used single nucleotide polymorphism (SNP) typing. In both cases, cgMLST was concordant with the previous SNP results and with the results of the two other cgMLST schemes. In conclusion, the highly reproducible novel P. aeruginosa cgMLST scheme facilitates outbreak investigations thanks to its publicly available cgMLST nomenclature.
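The core idea of cgMLST-based outbreak delineation can be sketched as comparing allele profiles over the scheme's target genes and counting differing loci. The sketch below is illustrative, not the authors' implementation; the locus names and the cluster threshold are hypothetical (real thresholds are scheme- and species-specific).

```python
# Illustrative sketch: cgMLST compares isolates by counting differing
# alleles across the scheme's target genes. A profile maps each locus
# name to an allele number; loci missing in either profile are ignored.

def allele_distance(profile_a, profile_b):
    """Number of loci called in both profiles that carry different alleles."""
    shared = profile_a.keys() & profile_b.keys()
    return sum(1 for locus in shared if profile_a[locus] != profile_b[locus])

def possibly_related(profile_a, profile_b, threshold=12):
    # Hypothetical cluster threshold for flagging a possible transmission.
    return allele_distance(profile_a, profile_b) <= threshold
```

With 3,867 target genes, a pair of outbreak isolates would typically differ at only a handful of loci, while unrelated sporadic cases differ at many.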


Diagnostics ◽  
2020 ◽  
Vol 10 (11) ◽  
pp. 969
Author(s):  
Maximiliano Lucius ◽  
Jorge De All ◽  
José Antonio De All ◽  
Martín Belvisi ◽  
Luciana Radizza ◽  
...  

This study evaluated whether deep learning frameworks trained on large datasets can help non-dermatologist physicians improve their accuracy in categorizing the seven most common pigmented skin lesions. Open-source skin images were downloaded from the International Skin Imaging Collaboration (ISIC) archive. Different deep neural networks (DNNs) (n = 8) were trained on a random dataset of 8015 images. A test set of 2003 images was used to assess the classifiers’ performance at low (300 × 224 RGB) and high (600 × 450 RGB) image resolution, with and without aggregated clinical data (age, sex and lesion localization). We also organized two different contests to compare the DNN performance to that of general practitioners by means of unassisted image observation. Both at low and high image resolution, the DNN framework differentiated dermatological images with appreciable performance. In all cases, the accuracy improved when clinical data were added to the framework. Finally, the least accurate DNN outperformed general practitioners. The physicians’ accuracy was statistically improved when they were allowed to use the output of this algorithmic framework as guidance. DNNs are proven to be high performers as skin lesion classifiers and can improve general practitioner diagnostic accuracy in a routine clinical scenario.
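The "random dataset" split described above (8,015 training and 2,003 test images from a pool of 10,018) can be sketched as a seeded random partition. The filenames below are hypothetical placeholders, not actual ISIC identifiers.

```python
# Minimal sketch of a random train/test split of the kind described
# above. A fixed seed makes the split reproducible across runs.
import random

def random_split(items, n_train, seed=0):
    """Shuffle a copy of `items` and split it into train/test lists."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    return shuffled[:n_train], shuffled[n_train:]

# Hypothetical image identifiers standing in for the ISIC archive files.
images = [f"img_{i:05d}.jpg" for i in range(10018)]
train, test = random_split(images, 8015)
```

Seeding the shuffle is what allows the same partition to be reused when training all eight DNNs, so that their accuracies are comparable.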


2020 ◽  
Author(s):  
Maximiliano Lucius ◽  
Jorge De All ◽  
José Antonio De All ◽  
Martín Belvisi ◽  
Luciana Radizza ◽  
...  

Artificial intelligence can be a key tool for assisting in the diagnosis of dermatological conditions, particularly when performed by general practitioners with limited or no access to high-resolution optical equipment. This study evaluates the performance of deep convolutional neural networks (DNNs) in the classification of seven pigmented skin lesions. Additionally, it assesses the improvement in classification performance when the networks are used by general practitioners. Open-source skin images were downloaded from the ISIC archive. Different DNNs (n=8) were trained on a random dataset of 8,015 images. A test set of 2,003 images was used to assess the classifiers’ performance at low (300 × 224 RGB) and high (600 × 450 RGB) image resolution, with and without aggregated clinical data (age, sex and lesion localization). We also organized two different contests to compare the DNNs’ performance to that of general practitioners by means of unassisted image observation. Both at low and high image resolution, the trained DNN framework differentiated dermatological images with appreciable performance. In all cases, accuracy improved when clinical data were added to the framework. Finally, the least accurate DNN outperformed general practitioners. The physicians’ accuracy was statistically improved when they were allowed to use the output of this algorithmic framework as guidance. DNNs are proven to be high performers as skin lesion classifiers. The aim is to provide general practitioners with these AI tools to improve their diagnostic accuracy in routine clinical scenarios where high-resolution equipment is not accessible.


2019 ◽  
Vol 2019 ◽  
pp. 1-18 ◽  
Author(s):  
Tran Khanh Dang ◽  
Khanh T. K. Tran

Wireless sensor networks consist of a large number of distributed sensor nodes, making potential risks increasingly unpredictable. New entrants pose potential risks when they move into the secure zone. To build a barrier that provides safety and security for the system, many recent research works apply an initial authentication process. However, the majority of previous articles rely on a Central Authority (CA), which increases computation cost and energy consumption in specific Internet of Things (IoT) scenarios. Hence, in this article, we lessen the reliance on such third parties by proposing an enhanced authentication mechanism that includes key management and evaluation based on past interactions, allowing objects to join a secured area without any nearby CA. We use a mobility dataset from CRAWDAD collected at the University Politehnica of Bucharest and rebuild it into a new, larger random dataset. This new dataset serves as input for a simulated authentication algorithm, from which we observe the communication cost and resource usage of devices. Our proposal makes authentication flexible while remaining strict toward unknown devices entering the secured zone. The maximum-friends threshold can be adjusted based on the optimization of the symmetric-key algorithm, reducing communication costs (in our experiments, less than 2,000 bits compared to previous schemes) and raising flexibility in resource-constrained environments.
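A symmetric-key authentication step of the kind the scheme relies on can be sketched as a nonce-based challenge-response using an HMAC. This is only a generic building block under assumed parameters; the paper's full protocol (key management, friend thresholds, trust derived from past interactions) is not reproduced here.

```python
# Hedged sketch of a symmetric-key challenge-response exchange: the
# verifier sends a fresh random nonce, and the prover returns an HMAC
# of that nonce under the pre-shared key. Fast and cheap, which is why
# symmetric primitives suit resource-constrained IoT nodes.
import hmac, hashlib, os

def issue_challenge():
    return os.urandom(16)                       # verifier's fresh nonce

def respond(shared_key, challenge):
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def verify(shared_key, challenge, response):
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    # Constant-time comparison avoids leaking where the MACs differ.
    return hmac.compare_digest(expected, response)
```

A device holding the wrong key cannot produce a valid response, and the fresh nonce prevents replay of old responses.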


2019 ◽  
pp. 483-506
Author(s):  
Varunya Attasena ◽  
Nouria Harbi ◽  
Jérôme Darmont

Cloud computing helps reduce costs, increase business agility and deploy solutions with a high return on investment for many types of applications, including data warehouses and on-line analytical processing. However, storing and transferring sensitive data into the cloud raises legitimate security concerns. In this paper, the authors propose a new multi-secret sharing approach for deploying data warehouses in the cloud and allowing on-line analytical processing, while enforcing data privacy, integrity and availability. The authors first validate the relevance of their approach theoretically and then experimentally, with both a simple random dataset and the Star Schema Benchmark. The authors also demonstrate its superiority to related methods.
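The multi-secret sharing approach builds on the classic secret sharing idea: data are split into shares distributed across providers, any threshold of which suffices for reconstruction. The sketch below shows only the textbook (k, n) Shamir construction over a prime field, not the authors' multi-secret scheme, which extends this idea to share many values efficiently.

```python
# Illustrative classic (k, n) Shamir secret sharing over GF(P): the
# secret is the constant term of a random degree-(k-1) polynomial, and
# each share is one point on it. Any k shares reconstruct the secret
# via Lagrange interpolation at x = 0; fewer reveal nothing.
import random

P = 2**61 - 1  # a Mersenne prime, large enough for small secrets

def split(secret, k, n, rng=random.Random(42)):
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation evaluated at x = 0.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Availability follows because any k of the n cloud providers can serve a query, and privacy because fewer than k shares are statistically independent of the secret.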


Fractals ◽  
2018 ◽  
Vol 26 (05) ◽  
pp. 1850075
Author(s):  
DAH-CHIN LUOR

In this paper we consider the expectation, the autocovariance, and increments of the deviation of a fractal interpolation function [Formula: see text] corresponding to a random dataset [Formula: see text]. We show that the covariance of [Formula: see text] and [Formula: see text] is a fractal interpolation function on [Formula: see text] for each fixed [Formula: see text], where [Formula: see text]. We also prove that, for a fixed [Formula: see text], the covariance of [Formula: see text] and [Formula: see text] is a fractal interpolation function on [Formula: see text]. A special type of increments of the deviation of [Formula: see text] is also investigated.
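A fractal interpolation function through data points (x_i, y_i) is the attractor of an affine iterated function system whose maps w_n send the whole data span to the n-th subinterval. The sketch below constructs those maps from the standard endpoint conditions; the data values and vertical scaling factors d_n (with |d_n| < 1) are illustrative, not taken from the paper.

```python
# Sketch of the affine IFS behind a fractal interpolation function:
# w_n(x, y) = (a_n x + e_n, c_n x + d_n y + f_n), with coefficients
# chosen so that w_n maps (x_0, y_0) to (x_{n-1}, y_{n-1}) and
# (x_N, y_N) to (x_n, y_n). The d_n are free vertical scaling factors.

def fif_maps(xs, ys, d):
    x0, xN, y0, yN = xs[0], xs[-1], ys[0], ys[-1]
    maps = []
    for n in range(1, len(xs)):
        a = (xs[n] - xs[n - 1]) / (xN - x0)
        e = (xN * xs[n - 1] - x0 * xs[n]) / (xN - x0)
        c = (ys[n] - ys[n - 1] - d[n - 1] * (yN - y0)) / (xN - x0)
        f = ys[n - 1] - c * x0 - d[n - 1] * y0
        maps.append(lambda x, y, a=a, e=e, c=c, dn=d[n - 1], f=f:
                    (a * x + e, c * x + dn * y + f))
    return maps
```

Because every map interpolates the endpoints, the attractor passes through all data points; with a random dataset, each d_n and y_i being random is what makes the expectation and autocovariance of the resulting FIF worth studying.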


2018 ◽  
Vol 2 (1) ◽  
pp. 29-35
Author(s):  
Naimah Mohd Hussin ◽  
Ammar Azlan

Every semester, a new batch of final year students needs to find a topic and a supervisor to complete their final year project requirement. The problem with the current approach is that it is based on first come, first served, so the pairing between students and supervisors is not optimal, i.e. some students may not get their preferred topic or supervisor. It is also time-consuming for both students and supervisors. The researchers are motivated to solve this long-overdue problem by applying the stable marriage model introduced by Gale and Shapley, hence the name Gale–Shapley algorithm. To determine the functionality of this approach, a system prototype was constructed and a random dataset was used. As a result, 60% of the students got their first-choice topics, while the remaining students got their second or third choice. This is a remarkable outcome considering the time and effort saved compared to the current process. Therefore, the stable marriage model is applicable to solving student–topic pairing.
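The Gale–Shapley deferred-acceptance algorithm referenced above can be sketched as follows; the student names, topic names, and preference lists are hypothetical, and the sketch assumes equal numbers of students and topics with complete preference lists.

```python
# Minimal sketch of the Gale–Shapley algorithm for student–topic
# pairing: free students propose down their preference lists, and each
# topic tentatively holds its most preferred proposer so far. The
# result is a stable matching (no student and topic both prefer each
# other to their assigned partners).

def gale_shapley(student_prefs, topic_prefs):
    """Students propose; returns a dict mapping topic -> student."""
    free = list(student_prefs)
    next_choice = {s: 0 for s in student_prefs}   # index of next proposal
    match = {}                                    # topic -> current student
    rank = {t: {s: i for i, s in enumerate(p)}    # lower rank = preferred
            for t, p in topic_prefs.items()}
    while free:
        s = free.pop()
        t = student_prefs[s][next_choice[s]]
        next_choice[s] += 1
        if t not in match:
            match[t] = s                          # topic was unassigned
        elif rank[t][s] < rank[t][match[t]]:
            free.append(match[t])                 # topic trades up
            match[t] = s
        else:
            free.append(s)                        # proposal rejected
    return match
```

With students proposing, the matching is student-optimal among all stable matchings, which fits the goal of maximizing how many students receive their first-choice topic.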


2015 ◽  
Vol 11 (2) ◽  
pp. 22-43 ◽  
Author(s):  
Varunya Attasena ◽  
Nouria Harbi ◽  
Jérôme Darmont

Cloud computing helps reduce costs, increase business agility and deploy solutions with a high return on investment for many types of applications, including data warehouses and on-line analytical processing. However, storing and transferring sensitive data into the cloud raises legitimate security concerns. In this paper, the authors propose a new multi-secret sharing approach for deploying data warehouses in the cloud and allowing on-line analytical processing, while enforcing data privacy, integrity and availability. The authors first validate the relevance of their approach theoretically and then experimentally, with both a simple random dataset and the Star Schema Benchmark. The authors also demonstrate its superiority to related methods.

