Based on DNA OTP Key Generation and Management Research

2013 ◽  
Vol 427-429 ◽  
pp. 2470-2472
Author(s):  
Yun Peng Zhang ◽  
Feng Ying Tian ◽  
Man Hui Sun ◽  
Ding Yu ◽  
Fei Xiang Fan ◽  
...  

With the development of molecular biotechnology, the capacity of DNA molecules for ultra-large-scale data storage has opened a new approach to storing data. This paper presents a way to strengthen key transport security. Through recombinant DNA technology, restriction enzymes known only to the sender and receiver are used to combine the key DNA with a T vector, forming a recombinant plasmid that biologically hides the key DNA; the recombinant plasmid is then implanted into bacteria.
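The informational side of such a scheme is one-time-pad encryption with the key material represented as a DNA sequence. Below is a minimal sketch under the common assumption of a 2-bit base encoding (A=00, C=01, G=10, T=11); the biological steps (T-vector ligation, transformation into bacteria) have no software analogue and are not modeled.

```python
# One-time-pad key transport with the key material encoded as DNA bases.
# Assumption (not from the paper): the standard 2-bit mapping A=00, C=01, G=10, T=11.
import os

BASES = "ACGT"

def bytes_to_dna(data: bytes) -> str:
    """Encode each byte as four DNA bases, most significant bit pair first."""
    return "".join(BASES[(b >> s) & 0b11] for b in data for s in (6, 4, 2, 0))

def dna_to_bytes(seq: str) -> bytes:
    """Inverse of bytes_to_dna: four bases back into one byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        b = 0
        for base in seq[i:i + 4]:
            b = (b << 2) | BASES.index(base)
        out.append(b)
    return bytes(out)

def otp(data: bytes, key: bytes) -> bytes:
    """XOR one-time pad; encryption and decryption are the same operation."""
    assert len(key) >= len(data), "OTP key must be at least as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"secret key transport"
key = os.urandom(len(message))        # the OTP key that would be bio-hidden
key_dna = bytes_to_dna(key)           # sequence handed to DNA synthesis
ciphertext = otp(message, key)
# Receiver sequences the plasmid insert, recovers the key, and decrypts:
recovered = otp(ciphertext, dna_to_bytes(key_dna))
assert recovered == message
```

Because XOR is its own inverse, a single `otp` function serves both directions; the security of the scheme then rests entirely on hiding and transporting the key, which is the paper's contribution.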

2020 ◽  
Vol 17 (1) ◽  
pp. 43-63
Author(s):  
A. Sathish ◽  
S. Ravimaran ◽  
S. Jerald Nirmal Kumar

With the rapid developments in cloud computing and services, there has been a growing trend of using the cloud for large-scale data storage, which has raised major security concerns over data handling. These can be addressed by an efficient shielded access on key propagation (ESAKP) technique together with an adaptive optimization algorithm for password generation and a double permutation. Password generation is performed by adaptive ant lion optimization (AALO), which tackles the problem of inefficiency; this construction achieves stronger security through an efficient selection property that eliminates the worst fit in each iteration. The optimized password is used by an adaptive Vigenère cipher for efficient key generation, where the adaptiveness avoids the dilemma of choosing the first letter of the alphabet (a key letter that leaves the plaintext unchanged), which in turn reduces computation time and improves security. Additionally, the symmetric key must be encrypted asymmetrically with the Elliptic Curve Diffie-Hellman (ECDH) algorithm, with a double-stage permutation that scrambles the data, adding further security.
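The cipher at the core of the scheme is the classical Vigenère. The abstract does not specify the AALO-generated password or the exact adaptive rule, so the sketch below uses a fixed illustrative key and the plain (non-adaptive) cipher over the A-Z alphabet; it also demonstrates the weakness the adaptive variant targets, namely that the key letter 'A' applies a zero shift.

```python
# Classical Vigenère cipher over A-Z. The AALO-optimized password and the
# "adaptive" rule from the paper are not given in the abstract; the key
# "CLOUD" here is purely illustrative.

def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    """Shift each letter of `text` by the corresponding key letter (mod 26)."""
    sign = -1 if decrypt else 1
    out = []
    for i, ch in enumerate(text):
        shift = ord(key[i % len(key)]) - ord('A')
        out.append(chr((ord(ch) - ord('A') + sign * shift) % 26 + ord('A')))
    return "".join(out)

cipher = vigenere("LARGESCALEDATA", "CLOUD")
plain = vigenere(cipher, "CLOUD", decrypt=True)
assert plain == "LARGESCALEDATA"

# The dilemma the adaptive variant avoids: a key letter 'A' is a zero shift,
# so it maps every plaintext letter to itself.
assert vigenere("HELLO", "A") == "HELLO"
```

In the full scheme the symmetric key produced this way would then be wrapped asymmetrically via ECDH, which this sketch does not cover.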


2008 ◽  
Vol 59 (11) ◽  
Author(s):  
Iulia Lupan ◽  
Sergiu Chira ◽  
Maria Chiriac ◽  
Nicolae Palibroda ◽  
Octavian Popescu

Amino acids are obtained by bacterial fermentation, extraction from natural protein, or enzymatic synthesis from specific substrates. With the introduction of recombinant DNA technology, it has become possible to apply more rational approaches to the enzymatic synthesis of amino acids. Aspartase (L-aspartate ammonia-lyase) catalyzes the reversible deamination of L-aspartic acid to yield fumaric acid and ammonia. It is one of the most important industrial enzymes, used to produce L-aspartic acid on a large scale. Here we describe a novel method for [15N] L-aspartic acid synthesis from fumarate and ammonia (15NH4Cl) using a recombinant aspartase.
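Run in the synthetic direction, the aspartase reaction described above can be written compactly as follows (ammonia supplied as 15NH4Cl provides the labeled nitrogen):

```latex
\mathrm{fumarate} \;+\; {}^{15}\mathrm{NH_3}
  \;\overset{\text{aspartase}}{\rightleftharpoons}\;
  [{}^{15}\mathrm{N}]\,\text{L-aspartate}
```

Because the deamination is reversible, driving the equilibrium toward the amino acid with excess labeled ammonium is what makes the enzymatic route attractive for isotope labeling.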


2021 ◽  
Author(s):  
Ashley Sousa

Cellulosic ethanol has shown promise as a feasible alternative fuel, especially if the hydrolysis of lignocellulosic biomass is carried out in a single-step process known as consolidated bioprocessing (CBP). A major challenge for CBP, especially in large-scale industrial applications, is the inhibition of cellulolytic microorganisms by ethanol. While recombinant DNA technology and microbial acclimatization by exposure have yielded some increase in ethanol tolerance, the search continues for robust bacteria that can proliferate under industrially relevant conditions. This study applied an anaerobic gradient system to provide a continuous spatial pathway for the selection of cellulolytic consortia with increased tolerance to ethanol. DGGE analysis showed that increasing concentrations of ethanol alter the community profile. Biofilm formation by cellulose-degrading communities has been found to be influenced by species diversity. Environmental gradients show promise for the selective enrichment of cellulolytic consortia under the conditions required for industrial application.


2019 ◽  
Author(s):  
Yasset Perez-Riverol ◽  
Pablo Moreno

The recent improvements in mass spectrometry instruments and new analytical methods are increasing the intersection between proteomics and big data science. In addition, bioinformatics analysis is becoming an increasingly complex process involving multiple algorithms and tools. A wide variety of methods and software tools have been developed for computational proteomics and metabolomics in recent years, and this trend is likely to continue. However, most computational proteomics and metabolomics tools are designed as single desktop applications, limiting the scalability and reproducibility of data analysis. In this paper we review the key steps of metabolomics and proteomics data processing, including the main tools and software used to perform the analysis. We discuss the combination of software containers with workflow environments for large-scale metabolomics and proteomics analysis. Finally, we introduce to the proteomics and metabolomics communities a new approach for reproducible and large-scale data analysis based on BioContainers and two of the most popular workflow environments: Galaxy and Nextflow.


The discovery of two naturally occurring biological molecules with remarkable properties, plasmid DNA and restriction enzymes, has made possible the development of methods to isolate and manipulate specific DNA fragments. Through this technology, a DNA fragment, even an entire gene and its controlling elements, can be isolated and rejoined with a plasmid or phage DNA, and the hybrid DNA molecule can be inserted into a bacterium. The foreign DNA insert can be multiplied inside the bacterial host and induced to express or synthesize the protein product of the foreign DNA. The entire process through which this is achieved is called recombinant DNA technology, or genetic engineering. Recombinant DNA technology has been extended to animal and plant cells. In this chapter, methods for the isolation, modification, rejoining, and replication of genomic DNA, and for the production of new or enhanced protein products within a host cell, are described.


Author(s):  
Oshin Sharma ◽  
Anusha S.

The emerging trends in fog computing have increased interest and focus in both industry and academia. Fog computing extends cloud computing facilities such as storage, networking, and computation toward the edge of the network, offloading cloud data centres and reducing the latency of providing services to users. This paradigm resembles the cloud in terms of data, storage, application, and computation services, with one fundamental difference: it is decentralized. Furthermore, fog systems can process large amounts of data locally and can be installed on hardware of different types. These characteristics make fog suitable for time- and location-sensitive applications, such as internet of things (IoT) devices that generate large amounts of data. In this chapter, the authors present fog data streaming, its architecture, and various applications.

