Research on TCP Performance Model and Transport Agent Architecture in Broadband Wireless Network

2021 ◽  
Vol 22 (2) ◽  
Author(s):  
Lintao Li ◽  
Parv Sharma ◽  
Mehdi Gheisari ◽  
Amit Sharma

This article addresses the problems of Internet stability, heterogeneity, fairness of bandwidth sharing among streams, efficiency of use, and congestion control. It proposes an improved TCP proxy acknowledgement scheme based on Automatic Repeat reQuest (ARQ), which improves throughput, reduces delay, saves uplink bandwidth on the wireless link, and is better suited to future asymmetric networks. Experiments show a substantial improvement in protocol processing efficiency: the overall processing time per packet is approximately one quarter of that of the standard Transmission Control Protocol, and resource usage is reduced by 59%. The protocol also incorporates several simple loss-recovery techniques to improve throughput in noisy wireless conditions. The results show that adopting average diversity combining improves throughput and effective-factor performance, and reduces the required maximum number of radio link protocol (RLP) retransmissions. Because nearly 90% of uplink acknowledgement frames are filtered, uplink bandwidth utilization improves significantly. Decomposing large data frames into smaller ones also helps improve system performance.
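The acknowledgement-filtering idea can be illustrated with a short sketch. The following Python fragment is a minimal illustration, not the authors' implementation: on an asymmetric uplink, a proxy keeps only the newest cumulative ACK per flow in its send queue, since a later cumulative ACK subsumes every earlier one.

```python
from collections import OrderedDict

class AckFilterQueue:
    """Uplink queue keeping only the newest cumulative ACK per flow.

    Because TCP ACKs are cumulative, an ACK with a higher number makes
    every earlier pure ACK of the same flow redundant; dropping the
    older ones frees scarce uplink bandwidth.
    """

    def __init__(self):
        # flow id (e.g. src/dst addresses and ports) -> highest ACK seen
        self._latest = OrderedDict()
        self.filtered = 0  # count of suppressed ACKs

    def enqueue(self, flow_id, ack_no):
        if flow_id in self._latest:
            # Overwrite the queued, now-redundant ACK in place.
            self.filtered += 1
            self._latest[flow_id] = max(self._latest[flow_id], ack_no)
        else:
            self._latest[flow_id] = ack_no

    def dequeue(self):
        # Transmit the oldest flow's newest ACK, or None if empty.
        return self._latest.popitem(last=False) if self._latest else None

q = AckFilterQueue()
for ack in (1000, 2000, 3000):      # three ACKs for the same flow
    q.enqueue(("10.0.0.1", "10.0.0.2", 5000, 80), ack)
print(q.dequeue(), q.filtered)      # only the 3000 ACK is sent; 2 filtered
```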

2021 ◽  
Vol 14 (2) ◽  
pp. 268-277
Author(s):  
Etza nofarita

Security is a factor that must be considered in the operation of information systems, in order to prevent threats to the system and to detect and correct any damage to it. Distributed Denial of Service (DDoS) is a form of attack carried out by individuals or groups to damage data; it can target a server or arrive as malware packets that damage the network system in use. Security is mandatory in a network to avoid damage to the data system or loss of data to malicious actors (hackers). The packets sent take the form of attacking malware, hitting the available bandwidth continuously. Network security is therefore a factor that must be maintained and considered in an information system. Common DDoS forms include Ping of Death, flooding, remote-controlled attacks, UDP flood, and Smurf attacks. The goal is to understand DDoS in order to protect against or prevent threats to the system and to repair damaged systems. Computer network security is very important in maintaining the security of data, whether small or large, used by the user.
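As a hedged illustration of one attack form listed above (UDP flood), the sketch below shows the simple per-source rate-threshold idea many detection tools use; the window and threshold values are arbitrary assumptions, not taken from the article.

```python
import time
from collections import defaultdict

WINDOW_SECONDS = 1.0     # measurement window (assumed value)
PACKET_THRESHOLD = 1000  # packets per window before flagging (assumed)

counters = defaultdict(int)   # source IP -> packets in current window
window_start = time.monotonic()

def observe_packet(src_ip):
    """Count one packet; return True if src_ip looks like a flood source."""
    global window_start
    now = time.monotonic()
    if now - window_start >= WINDOW_SECONDS:
        counters.clear()      # start a fresh window
        window_start = now
    counters[src_ip] += 1
    return counters[src_ip] > PACKET_THRESHOLD
```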


2018 ◽  
Author(s):  
Hamid Bagher ◽  
Usha Muppiral ◽  
Andrew J Severin ◽  
Hridesh Rajan

Abstract. Background: Creating a computational infrastructure that scales well for analyzing the wealth of information contained in data repositories is difficult, due to significant barriers in organizing, extracting, and analyzing the relevant data. Shared data science infrastructures like Boa can be used to process and parse data contained in large repositories more efficiently. The main features of Boa are inspired by existing languages for data-intensive computing, and Boa can easily integrate data from biological data repositories. Results: Here, we present an implementation of Boa for genomic research (BoaG) on a relatively small data repository: RefSeq's 97,716 annotation (GFF) and assembly (FASTA) files and their metadata. We used BoaG to query the entire RefSeq dataset, gained insight into RefSeq genome assemblies and gene model annotations, and show that assembly quality using the same assembler varies by species. Conclusions: In order to keep pace with our ability to produce biological data, innovative methods are required. The shared data science infrastructure BoaG can give researchers the means to explore data efficiently in ways previously possible only for the most well-funded research groups. We demonstrate BoaG's efficiency by exploring the RefSeq database of genome assemblies and annotations to identify interesting features of gene annotation, as a proof of concept for much larger datasets.
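BoaG's query syntax is not shown in the abstract; as a rough stand-in, the following Python sketch performs the kind of aggregation one would express as a BoaG query, counting gene features per assembly across GFF annotation files. The directory layout and file names are hypothetical.

```python
import gzip
from collections import Counter
from pathlib import Path

def count_genes(gff_path):
    """Count 'gene' features in one GFF3 annotation file (plain or gzipped)."""
    genes = 0
    opener = gzip.open if gff_path.suffix == ".gz" else open
    with opener(gff_path, "rt") as fh:
        for line in fh:
            if line.startswith("#"):
                continue                      # skip GFF3 comment/header lines
            fields = line.rstrip("\n").split("\t")
            if len(fields) >= 3 and fields[2] == "gene":
                genes += 1
    return genes

# Hypothetical local mirror of RefSeq annotation files.
repo = Path("refseq/annotations")
per_assembly = Counter(
    {gff.stem: count_genes(gff) for gff in repo.glob("*.gff*")}
)
for assembly, n in per_assembly.most_common(10):
    print(assembly, n)
```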


2011 ◽  
Vol 19 (2-3) ◽  
pp. 133-145
Author(s):  
Gabriela Turcu ◽  
Ian Foster ◽  
Svetlozar Nestorov

Text analysis tools are nowadays required to process increasingly large corpora, which are often organized as many small files (abstracts, news articles, etc.). Cloud computing offers a convenient, on-demand, pay-as-you-go environment for solving such problems. We investigate provisioning on the Amazon EC2 cloud from the user's perspective, aiming to provide a scheduling strategy that is both timely and cost-effective. We derive an execution plan using an empirically determined application performance model. A first goal of our performance measurements is to determine the optimal file size for our application to consume. Using the subset-sum first-fit heuristic, we reshape the input data by merging files so as to match the desired file size as closely as possible. This also speeds up retrieval of the application's results, since the output is less segmented. Using predictions of our application's performance based on measurements on small data sets, we devise an execution plan that meets a user-specified deadline while minimizing cost.
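A minimal sketch of the first-fit merging step described above, assuming a target chunk size already determined by the performance model; all names and sizes are illustrative, not the authors' code.

```python
def first_fit_merge(file_sizes, target):
    """Group files into merge bins of at most `target` bytes (first fit).

    `file_sizes` maps file name -> size in bytes. Sorting by decreasing
    size (first-fit decreasing) usually packs tighter than plain first fit.
    Returns a list of bins, each a list of file names to concatenate.
    """
    bins = []  # each bin is [remaining_capacity, [file names]]
    for name, size in sorted(file_sizes.items(),
                             key=lambda kv: kv[1], reverse=True):
        for b in bins:
            if b[0] >= size:          # first bin with enough room
                b[1].append(name)
                b[0] -= size
                break
        else:
            # No existing bin fits: open a new one.
            bins.append([target - size, [name]])
    return [files for _, files in bins]

# Example: merge small input files toward an assumed 64 MB optimum.
sizes = {"a.txt": 10_000_000, "b.txt": 30_000_000, "c.txt": 40_000_000}
print(first_fit_merge(sizes, target=64_000_000))
```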


2013 ◽  
Vol 5 (1) ◽  
pp. 66-83 ◽  
Author(s):  
Iman Rahimi ◽  
Reza Behmanesh ◽  
Rosnah Mohd. Yusuff

The objective of this article is to evaluate and assess the efficiency of poultry meat farms, as a case study, using a new method. The poultry farming industry is one of the most important sub-sectors in comparison to others. The purpose of this study is to predict and assess the efficiency of poultry farms treated as decision-making units (DMUs). Although several methods have been proposed for this problem, a methodology with strong discriminating power is needed. The methodology comprises data envelopment analysis (DEA) and data mining techniques such as artificial neural networks (ANN), decision trees (DT), and cluster analysis (CA). As a case study, data were collected from 22 poultry companies in Iran. Moreover, because the data set is small and data mining techniques generally require large data sets, k-fold cross-validation was employed to validate the model. After assessing the efficiency of each DMU, clustering the DMUs, applying the model, and deriving decision rules, the result is a precise and accurate optimization technique.
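The k-fold validation step can be sketched as follows; the feature matrix, labels, and choice of k = 5 are placeholders, and scikit-learn's decision tree stands in for the DT model mentioned above.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Placeholder data: 22 farms (DMUs) with a few input/output features each,
# and a binary efficient/inefficient label from the DEA stage.
rng = np.random.default_rng(0)
X = rng.normal(size=(22, 4))
y = rng.integers(0, 2, size=22)

# With only 22 DMUs, k-fold CV reuses every observation for both training
# and validation, which is why it suits such a small data set.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
scores = cross_val_score(model, X, y, cv=5)
print("fold accuracies:", scores, "mean:", scores.mean())
```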


2013 ◽  
Vol 2013 ◽  
pp. 1-11
Author(s):  
Dewang Chen ◽  
Long Chen

In order to obtain a decent trade-off between low-cost, low-accuracy Global Positioning System (GPS) receivers and the requirements of high-precision digital maps for modern railways, we use the concept of constraint K-segment principal curves (CKPCS) together with expert knowledge on railways to propose three practical CKPCS generation algorithms with reduced computational complexity, and therefore better suited to engineering applications. The three algorithms are named ALLopt, MPMopt, and DCopt: ALLopt exploits global optimization, while MPMopt and DCopt apply local optimization with different initial solutions. We compare the three algorithms on average projection error, stability, and fitness for simple and complex simulated trajectories with noisy data. We find that ALLopt works well only for simple curves and small data sets; the other two algorithms work better for complex curves and large data sets. Moreover, MPMopt runs faster than DCopt, but DCopt works better for some curves with cross points. The three algorithms are also applied to generating GPS digital maps for two railway GPS data sets measured on the Qinghai-Tibet Railway (QTR), with results similar to those on the synthetic data. Because a railway's trajectory is relatively simple and straight, we conclude that MPMopt works best when the speed of computation and the quality of the generated CKPCS are considered together. MPMopt can be used to obtain a set of key points that represent a large amount of GPS data, greatly reducing data storage requirements and increasing positioning speed for real-time digital map applications.
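The average projection error metric used to compare the three algorithms can be sketched generically: project each GPS point onto its nearest segment of the K-segment curve and average the distances. This is a standard polyline-projection computation, not the authors' code.

```python
import numpy as np

def point_to_segment(p, a, b):
    """Distance from point p to segment ab (all 2-D numpy arrays)."""
    ab = b - a
    denom = float(ab @ ab)
    t = 0.0 if denom == 0.0 else float(np.clip((p - a) @ ab / denom, 0.0, 1.0))
    return float(np.linalg.norm(p - (a + t * ab)))

def average_projection_error(points, vertices):
    """Mean distance from GPS points to a polyline (the K-segment curve)."""
    segs = list(zip(vertices[:-1], vertices[1:]))
    return float(np.mean([
        min(point_to_segment(p, a, b) for a, b in segs) for p in points
    ]))

# Toy example: noisy points around a nearly straight track.
track = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.1]])
gps = np.array([[0.5, 0.02], [1.5, 0.08], [1.9, 0.05]])
print(average_projection_error(gps, track))
```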


2020 ◽  
pp. 1-11
Author(s):  
Erjia Yan ◽  
Zheng Chen ◽  
Kai Li

Citation sentiment plays an important role in citation analysis and scholarly communication research, but prior citation sentiment studies have used small data sets and relied largely on manual annotation. This paper uses a large data set of PubMed Central (PMC) full-text publications and analyzes citation sentiment in more than 32 million citances within PMC, revealing citation sentiment patterns at the journal and discipline levels. This paper finds a weak relationship between a journal's citation impact (as measured by CiteScore) and the average sentiment score of citances to its publications. When journals are aggregated into quartiles based on citation impact, we find that journals in higher quartiles are cited more favorably than those in the lower quartiles. Further, social science journals are found to be cited with the highest sentiment, followed by engineering and natural science journals, and then biomedical journals. This result may be attributed to disciplinary discourse patterns, in which social science researchers tend to use more subjective terms to describe others' work than do natural science or biomedical researchers.
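The quartile-level comparison can be sketched with pandas; the journal-level sentiment scores below are invented placeholders, not data from the study.

```python
import pandas as pd

# Placeholder journal-level data: CiteScore and mean citance sentiment.
df = pd.DataFrame({
    "journal":   ["J1", "J2", "J3", "J4", "J5", "J6", "J7", "J8"],
    "citescore": [1.2, 2.5, 3.1, 4.8, 6.0, 7.7, 9.3, 11.0],
    "sentiment": [0.02, 0.04, 0.03, 0.05, 0.06, 0.05, 0.08, 0.07],
})

# Bin journals into CiteScore quartiles (Q1 = lowest impact) and
# compare the average citance sentiment across quartiles.
df["quartile"] = pd.qcut(df["citescore"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
print(df.groupby("quartile", observed=True)["sentiment"].mean())
```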


2018 ◽  
Vol 20 (6) ◽  
pp. 1997-2008 ◽  
Author(s):  
Clare Horscroft ◽  
Sarah Ennis ◽  
Reuben J Pengelly ◽  
Timothy J Sluckin ◽  
Andrew Collins

Insights into genetic loci that are under selection, and into their functional roles, contribute to an increased understanding of the patterns of phenotypic variation we observe today. The availability of whole-genome sequence data, for humans and other species, provides opportunities to investigate adaptation and evolution at unprecedented resolution. Many analytical methods have been developed to interrogate these large data sets and characterize signatures of selection in the genome. We review recently developed methods and consider the impact of increased computing power and data availability on the detection of selection signatures. Consideration of demography, recombination, and other confounding factors is important, and using a range of methods in combination is a powerful route to resolving different forms of selection in genome sequence data. Overall, a substantial improvement in methods applicable to whole-genome sequencing is evident, although further work is required to develop robust and computationally efficient approaches that may increase reproducibility across studies.


2014 ◽  
Vol 556-562 ◽  
pp. 5321-5327
Author(s):  
Hui Qun Zhao ◽  
Hai Gang Yang

TransactionEvent is one of the five events defined in the EPCglobal standard. Because a TransactionEvent lasts for a long period and involves large amounts of data, it places high demands on real-time processing, and processing TransactionEvents in the Internet of Things is complex. To overcome these disadvantages, this paper proposes a non-integrated program that ensures TransactionEvent processing efficiency, reliability, and real-time performance. Finally, a prototype system for a commercial IoT application is implemented to verify the method.
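For readers unfamiliar with the event type, the sketch below shows a minimal subset of the fields an EPCIS TransactionEvent carries. It is an illustrative data structure only, not the paper's non-integrated program, and the example identifier values are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TransactionEvent:
    """Minimal EPCIS-style TransactionEvent (illustrative subset of fields)."""
    event_time: datetime
    # Business transactions (e.g. a purchase order) this event belongs to.
    biz_transaction_list: list[str]
    epc_list: list[str]          # EPCs of the objects involved
    action: str                  # "ADD", "OBSERVE", or "DELETE"
    biz_step: str = ""           # e.g. a shipping step
    read_point: str = ""         # where the event was observed

event = TransactionEvent(
    event_time=datetime.utcnow(),
    biz_transaction_list=["urn:epcglobal:cbv:bt:po:12345"],
    epc_list=["urn:epc:id:sgtin:0614141.107346.2018"],
    action="ADD",
    biz_step="urn:epcglobal:cbv:bizstep:shipping",
)
print(event)
```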


Author(s):  
Atika Dwi Hanun Amalia ◽  
Herry Suprajitno ◽  
Asri Bekti Pratiwi

The purpose of this research is to solve the Close-Open Mixed Vehicle Routing Problem (COMVRP) using the bat algorithm. COMVRP, a combination of the closed vehicle routing problem, commonly known simply as the Vehicle Routing Problem (VRP), with the Open Vehicle Routing Problem (OVRP), is the problem of determining vehicle routes that minimize the total distance needed to serve customers without exceeding vehicle capacity. COMVRP arises when a company already owns private vehicles whose capacity cannot fulfill all customer demands, so the company must rent vehicles from other companies to complete the distribution process. In this case, a private vehicle returns to the depot after serving its last customer, while a rental vehicle does not need to return. The bat algorithm is inspired by the echolocation-based hunting of microbats. The implementation was written in Java using NetBeans IDE 8.2 and tested on three cases: small data with 18 customers, medium data with 75 customers, and large data with 100 customers. Based on the results, it can be concluded that more iterations yield smaller total costs, while the pulse rate and the number of bats tend not to affect the total cost obtained.
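The continuous core of the bat algorithm (frequency, velocity, and position updates with pulse rate and loudness) can be sketched as below. Adapting it to COMVRP requires a route encoding and capacity checks that this sketch omits, and the parameter values are typical defaults, not those of the article.

```python
import numpy as np

def bat_minimize(cost, dim, n_bats=20, iters=200,
                 f_min=0.0, f_max=2.0, alpha=0.9, gamma=0.9, seed=0):
    """Minimize `cost` over R^dim with the basic bat algorithm."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_bats, dim))   # bat positions
    v = np.zeros((n_bats, dim))             # velocities
    loud = np.ones(n_bats)                  # loudness A_i
    r0 = rng.uniform(0, 1, n_bats)          # initial pulse rates
    fit = np.array([cost(b) for b in x])
    best = x[fit.argmin()].copy()

    for t in range(1, iters + 1):
        r = r0 * (1 - np.exp(-gamma * t))   # pulse rate grows over time
        for i in range(n_bats):
            freq = f_min + (f_max - f_min) * rng.random()
            v[i] += (x[i] - best) * freq
            cand = x[i] + v[i]
            if rng.random() > r[i]:
                # Local random walk around the current best solution.
                cand = best + 0.01 * rng.normal(size=dim) * loud.mean()
            f_cand = cost(cand)
            if f_cand <= fit[i] and rng.random() < loud[i]:
                x[i], fit[i] = cand, f_cand
                loud[i] *= alpha            # quieter: more exploitation
            if f_cand <= fit.min():
                best = cand.copy()
    return best, cost(best)

# Toy usage: minimize the sphere function in 5 dimensions.
best, val = bat_minimize(lambda z: float(z @ z), dim=5)
print(val)
```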


Author(s):  
Kim Wallin

The standard Master Curve (MC) deals only with materials assumed to be homogeneous, but MC analysis methods for inhomogeneous materials have also been developed. The bi-modal and multi-modal analysis methods, especially, are becoming standard. Their drawback is that they are generally reliable only with sufficiently large data sets (number of valid tests, r ≥ 15–20). Here, the possibility of using the multi-modal analysis method with smaller data sets is assessed, and a new procedure to conservatively account for possible inhomogeneities is proposed.
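For context, the standard (homogeneous) MC models fracture toughness with a three-parameter Weibull distribution with fixed threshold K_min = 20 MPa√m and fixed shape 4; the sketch below shows that baseline form with illustrative numbers. The bi-modal and multi-modal extensions discussed above mix several such populations and are not reproduced here.

```python
import numpy as np

K_MIN = 20.0  # MPa*sqrt(m), fixed threshold in the standard Master Curve
SHAPE = 4.0   # fixed Weibull shape parameter

def failure_probability(k_jc, k0):
    """Cumulative failure probability P_f at toughness k_jc, given scale k0."""
    k_jc = np.asarray(k_jc, dtype=float)
    z = np.clip((k_jc - K_MIN) / (k0 - K_MIN), 0.0, None)
    return 1.0 - np.exp(-z ** SHAPE)

# Example: probability that a specimen has failed by K_Jc = 100 MPa*sqrt(m)
# when the scale parameter K_0 is 110 MPa*sqrt(m) (illustrative numbers).
print(failure_probability(100.0, 110.0))
```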

