source transformation
Recently Published Documents

TOTAL DOCUMENTS: 65 (five years: 3)
H-INDEX: 10 (five years: 0)

2021
Author(s): Wentong Liao, Cuiling Lan, Michael Ying Yang, Wenjun Zeng, Bodo Rosenhahn


2021, pp. 20-32
Author(s): admin admin

Recently, the security of heterogeneous multimedia data has become a critical issue, driven by the proliferation of multimedia data and applications. Cloud computing is the hidden back-end for storing heterogeneous multimedia data. Although cloud storage is indispensable, remote storage servers are untrusted; one of the most critical challenges is therefore securing multimedia data storage and retrieval on untrusted cloud servers. This paper applies a Shamir secret-sharing scheme, integrated with cloud computing, to guarantee efficient and secure storage and retrieval of sensitive multimedia data. The proposed scheme fully supports the comprehensive, multilevel security-control requirements of cloud-hosted multimedia data and applications. In addition, the scheme is based on a source transformation that provides strong mutual interdependence in the encrypted representation: the Share Generator slices and encrypts the multimedia data before sending it to cloud storage. An extensive experimental evaluation across various configurations confirmed the effectiveness and efficiency of the scheme, which showed excellent performance and compatibility with several implementation strategies.
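The core primitive the abstract names is Shamir's (k, n) threshold secret sharing: a secret becomes the constant term of a random degree-(k-1) polynomial over a prime field, shares are evaluations of that polynomial, and any k shares recover the secret by Lagrange interpolation at x = 0. A minimal sketch of that primitive (not the paper's Share Generator, whose slicing and encryption details are not given here):

```python
import random

PRIME = 2**127 - 1  # Mersenne prime defining the finite field

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):  # evaluate polynomial via Horner's rule
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

With k = 3 and n = 5, any three shares yield the secret while any two reveal nothing, which is what makes the individual cloud-stored slices useless to an untrusted server.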



2020
Author(s): Sean K. Martin, John P. Aggleton, Shane M. O’Mara

Abstract
Large-scale simultaneous in vivo recordings of neurons in multiple brain regions raise the question of the probability of recording direct interactions of neurons within, and between, those regions. In turn, identifying inter-regional communication rules between neurons during behavioural tasks might be possible, assuming conjoint activity between neurons in connected brain regions can be detected. Using the hypergeometric distribution, and employing anatomically tractable connection mapping between regions, we derive a method to calculate the probability distribution of ‘recordable’ connections between groups of neurons. This mathematically derived distribution is validated by Monte Carlo simulations of directed graphs representing the underlying anatomical connectivity structure. We apply this method to simulated graphs with multiple neurons, based on counts in rat brain regions, and to connection matrices from the Blue Brain model of the mouse neocortex connectome. Overall, we find low probabilities of simultaneously recording directly interacting neurons in vivo in anatomically connected regions with standard (tetrode-based) approaches. We suggest alternative approaches, including new recording technologies and summing neuronal activity over larger scales, offer promise for testing hypothesised interregional communication and source transformation rules.
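The hypergeometric building block behind the derivation can be sketched directly: if K of N source-region neurons project to the target region and n are sampled at random by an electrode, the count of sampled projecting neurons follows a hypergeometric distribution. The paper's full method works over directed connectivity graphs; this toy (with hypothetical neuron counts) only illustrates the basic distribution:

```python
from math import comb

def recordable_pmf(N, K, n):
    """P(X = k) for X ~ Hypergeometric(N, K, n): among N source
    neurons, K project to the target region; recording n of them
    at random, X counts how many recorded neurons are projecting."""
    lo, hi = max(0, n - (N - K)), min(n, K)
    return {k: comb(K, k) * comb(N - K, n - k) / comb(N, n)
            for k in range(lo, hi + 1)}

# Hypothetical numbers: 10,000 neurons, 500 projecting, 40 recorded.
pmf = recordable_pmf(10_000, 500, 40)
p_none = pmf[0]                              # chance of recording none
mean = sum(k * p for k, p in pmf.items())    # equals n * K / N = 2.0
```

Even with 5% of neurons projecting, a 40-neuron sample expects only two projecting cells, consistent with the abstract's conclusion that recording directly interacting pairs with tetrode-scale yields is improbable.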



Author(s): Jannek Squar, Tim Jammer, Michael Blesel, Michael Kuhn, Thomas Ludwig


2020, Vol 29 (03n04), pp. 2060012
Author(s): Viktor Schuppan

We introduce an enhanced notion of unsatisfiable cores for QBF in prenex CNF that allows universal quantifiers to be weakened to existential quantifiers, in addition to the traditional removal of clauses. The resulting unsatisfiable cores can differ from those of the traditional notion in terms of syntax, standard semantics, and proof-based semantics. This not only gives rise to explanations of unsatisfiability but, via duality, also leads to diagnoses and repairs of unsatisfiability that are not obtained with traditional unsatisfiable cores. We use a source-to-source transformation on QBF in PCNF such that the weakening of universal quantifiers to existential quantifiers in the original formula corresponds to the removal of clauses in the transformed formula. This makes any tool or method for computing unsatisfiable cores of the traditional notion available for computing unsatisfiable cores of our enhanced notion. We implement our approach as an extension to the QBF solver DepQBF, and we perform an extensive experimental evaluation on a subset of QBFLIB. We illustrate with several case studies that unsatisfiable cores of our enhanced notion can provide helpful information.
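The "weakening" at the heart of the enhanced core notion is easy to see on a toy prenex-CNF QBF: turning a ∀ into an ∃ can only make a formula easier to satisfy, so quantifiers whose weakening flips the result are essential to unsatisfiability. A minimal illustration (a brute-force semantic evaluator, not the paper's transformation or DepQBF):

```python
def eval_qbf(prefix, clauses, assign=None):
    """Evaluate a prenex-CNF QBF. prefix: list of ('A'|'E', var);
    clauses: lists of DIMACS-style signed ints over those vars."""
    assign = assign or {}
    if not prefix:  # matrix: every clause needs one true literal
        return all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses)
    (q, v), rest = prefix[0], prefix[1:]
    branches = [eval_qbf(rest, clauses, {**assign, v: b}) for b in (False, True)]
    return all(branches) if q == 'A' else any(branches)

def weaken(prefix, var):
    """Weaken 'forall var' to 'exists var' in the prefix."""
    return [('E', v) if v == var and q == 'A' else (q, v) for q, v in prefix]

# forall x. (x) is false, but after weakening, exists x. (x) is true,
# so the universal quantification of x is part of the "core":
prefix, clauses = [('A', 1)], [[1]]
assert eval_qbf(prefix, clauses) is False
assert eval_qbf(weaken(prefix, 1), clauses) is True
```

The paper's contribution is precisely that this quantifier weakening can be re-encoded as clause removal in a transformed formula, so off-the-shelf clause-core extractors compute the enhanced cores.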



Author(s): Omar M. Alhawi, Herbert Rocha, Mikhail R. Gadelha, Lucas C. Cordeiro, Eddie Batista

Abstract
DepthK is a source-to-source transformation tool that employs bounded model checking (BMC) to verify and falsify safety properties in single- and multi-threaded C programs, without manual annotation of loop invariants. Here, we describe and evaluate a proof-by-induction algorithm that combines k-induction with invariant inference to prove and refute safety properties. We apply two invariant generators to produce program invariants and feed these into a k-induction-based verification algorithm implemented in DepthK, which uses the efficient SMT-based context-bounded model checker (ESBMC) as its sequential verification back-end. A set of C benchmarks from the International Competition on Software Verification (SV-COMP) and embedded-system applications extracted from the available literature are used to evaluate the effectiveness of the proposed approach. Experimental results show that k-induction with invariants can handle a wide variety of safety properties in typical programs with loops and in embedded software applications from the telecommunications, control systems, and medical domains. The results of our comparative evaluation extend the knowledge about approaches that rely on both BMC and k-induction for software verification in the following ways. (1) The proposed method outperforms existing implementations that use k-induction with an interval-invariant generator (e.g., 2LS and ESBMC) in the ConcurrencySafety category and, in other categories such as SoftwareSystems, surpasses software verifiers that use plain BMC (e.g., CBMC). (2) It is more precise than other verifiers based on the property-directed reachability (PDR) algorithm (i.e., SeaHorn, Vvt and CPAchecker-CTIGAR). Our methodology thus demonstrates an improvement over existing BMC- and k-induction-based approaches.
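The interplay of k-induction and invariant inference that DepthK automates on C programs can be sketched on a toy transition system. Plain k-induction often fails because unreachable states break the inductive step; a strengthening invariant (here supplied by hand, where DepthK would infer one) rules those states out. Everything below is an illustrative toy, not DepthK's algorithm:

```python
def k_induction(T, prop, init, domain, k, inv=lambda s: True):
    """Toy k-induction for a deterministic transition function T.
    Returns True if prop (strengthened by inv) is proven inductive."""
    # Base case: prop and inv hold on the first k+1 states from init.
    s = init
    for _ in range(k + 1):
        if not (prop(s) and inv(s)):
            return False
        s = T(s)
    # Inductive step: any k consecutive states satisfying prop and inv
    # must be followed by a state that satisfies them too.
    for s in domain:
        seq = [s]
        for _ in range(k):
            seq.append(T(seq[-1]))
        if all(prop(x) and inv(x) for x in seq[:-1]):
            if not (prop(seq[-1]) and inv(seq[-1])):
                return False
    return True

T = lambda x: (x + 2) % 10   # counter stepping by 2 from 0
prop = lambda x: x != 5      # safety property: state 5 is never reached
# Plain 1-induction fails: unreachable state 3 satisfies prop yet steps to 5.
assert not k_induction(T, prop, init=0, domain=range(10), k=1)
# An inferred invariant ("x is even") excludes 3 and closes the proof.
assert k_induction(T, prop, init=0, domain=range(10), k=1,
                   inv=lambda x: x % 2 == 0)
```

This is exactly the failure mode interval-invariant generators address: the invariant prunes spurious counterexamples-to-induction so a small k suffices.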



2020, Vol 8 (5), pp. 4896-4899

ETL stands for extraction, transformation and loading: extraction retrieves active data from the source; transformation involves data cleansing, data filtering, data validation, and finally the application of certain rules; and loading stores the data back in the destination repository where it will finally reside. Pig is one of the most important tools that can be applied in the Extract, Transform and Load (ETL) process, as it helps apply the ETL approach to large data sets. Initially, Pig loads the data; it can then perform predictions, repetitions, expected conversions, and further transformations. UDFs can be used to perform more complex algorithms during the transformation phase, and the huge volumes of data processed by Pig can be stored back in HDFS. In this paper we demonstrate the ETL process using Pig in Hadoop: files in HDFS are extracted, transformed, and loaded back to HDFS using Pig, and we extend the functionality of Pig Latin with Python UDFs to perform transformations.
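A Python UDF module of the kind the paper describes might look like the sketch below: small cleansing and validation functions that Pig calls during the transformation phase. The field names and the registration line are hypothetical; inside Pig the module would typically be registered with something like `REGISTER 'udfs.py' USING streaming_python AS udfs;` and invoked as `GENERATE udfs.clean_email(email);`.

```python
# udfs.py -- Python UDFs for Pig's transformation phase (hypothetical fields).
try:
    from pig_util import outputSchema   # provided by the Pig runtime
except ImportError:                     # fallback so the module runs standalone
    def outputSchema(schema):
        def deco(f):
            return f
        return deco

@outputSchema("email:chararray")
def clean_email(raw):
    """Cleansing step: trim, lowercase, and lightly validate an email field."""
    if raw is None:
        return None
    email = raw.strip().lower()
    return email if "@" in email and "." in email.rsplit("@", 1)[-1] else None

@outputSchema("valid:int")
def is_valid_amount(amount):
    """Validation step: flag non-negative numeric amounts with 1, else 0."""
    try:
        return 1 if float(amount) >= 0 else 0
    except (TypeError, ValueError):
        return 0
```

Keeping UDFs as plain functions returning `None` or a flag (rather than raising) fits Pig's row-at-a-time model: bad records flow through and can be filtered with `FILTER rows BY valid == 1;` in a later step.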



2020, Vol 2 (1)
Author(s): Cyrille Ahmed Midingoyi, Christophe Pradal, Ioannis N Athanasiadis, Marcello Donatelli, Andreas Enders, ...

Abstract
The diversity of plant and crop process-based modelling platforms in terms of implementation language, software design and architectural constraints limits the reusability of model components outside the platform in which they were originally developed, making model reuse a persistent issue. To facilitate the intercomparison and improvement of process-based models and the exchange of model components, several groups in the field joined to create the Agricultural Model Exchange Initiative (AMEI). AMEI proposes a centralized framework for exchanging and reusing model components. It provides a modular and declarative approach to describing the specification of unit models and their composition. A model algorithm is associated with each model specification, which implements its mathematical behaviour. This paper focuses on expressing the model algorithm independently of platform specificities, and on how the model algorithm can be seamlessly integrated into different platforms. We define CyML, a Cython-derived language with minimal specifications for implementing model component algorithms. We also propose CyMLT, an extensible source-to-source transformation system that transforms CyML source code into different target languages such as Fortran, C#, C++, Java and Python, and into different programming paradigms. CyMLT is also able to generate model components for target modelling platforms such as DSSAT, BioMA, Record, SIMPLACE and OpenAlea. We demonstrate our reuse approach with a simple unit model and show the capacity to extend CyMLT with other languages and platforms. The approach we present here will help to improve the reproducibility, exchange and reuse of process-based models.
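The source-to-source principle behind CyMLT can be illustrated with a toy transpiler: parse a restricted function into an AST and emit the same computation in a C-like target. CyMLT itself handles types, control flow, several target languages and whole platforms; this sketch (with a hypothetical crop-model function) only shows the AST-walking idea:

```python
import ast

def py_to_c(src):
    """Translate a tiny Python subset (one function, float parameters,
    simple assignments, arithmetic, return) into a C function."""
    fn = ast.parse(src).body[0]
    args = ", ".join(f"double {a.arg}" for a in fn.args.args)
    lines = [f"double {fn.name}({args}) {{"]
    for stmt in fn.body:
        if isinstance(stmt, ast.Assign):      # local: declare as double
            lines.append(f"    double {stmt.targets[0].id} = "
                         f"{ast.unparse(stmt.value)};")
        elif isinstance(stmt, ast.Return):
            lines.append(f"    return {ast.unparse(stmt.value)};")
    lines.append("}")
    return "\n".join(lines)

src = """
def water_stress(supply, demand):
    ratio = supply / demand
    return 1.0 - ratio
"""
print(py_to_c(src))
```

Because the arithmetic subset of Python's expression syntax is also valid C, `ast.unparse` can serve as the expression emitter; a real transpiler like CyMLT instead maintains per-target emitters so the same unit model lands idiomatically in Fortran, Java or a DSSAT component.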


