software frameworks
Recently Published Documents


TOTAL DOCUMENTS

86
(FIVE YEARS 28)

H-INDEX

7
(FIVE YEARS 1)

2021 ◽  
Vol 18 (4) ◽  
pp. 1-26
Author(s):  
Aninda Manocha ◽  
Tyler Sorensen ◽  
Esin Tureci ◽  
Opeoluwa Matthews ◽  
Juan L. Aragón ◽  
...  

Graph structures are a natural representation of important and pervasive data. While graph applications have significant parallelism, their characteristic pointer indirect loads to neighbor data hinder scalability to large datasets on multicore systems. A scalable and efficient system must tolerate latency while leveraging data parallelism across millions of vertices. Modern Out-of-Order (OoO) cores inherently tolerate a fraction of long latencies, but become clogged when running severely memory-bound applications. Combined with large power/area footprints, this limits their parallel scaling potential and, consequently, the gains that existing software frameworks can achieve. Conversely, accelerator and memory hierarchy designs provide performant hardware specializations, but cannot support diverse application demands. To address these shortcomings, we present GraphAttack, a hardware-software data supply approach that accelerates graph applications on in-order multicore architectures. GraphAttack proposes compiler passes to (1) identify idiomatic long-latency loads and (2) slice programs along these loads into data Producer/Consumer threads to map onto pairs of parallel cores. Each pair shares a communication queue; the Producer asynchronously issues long-latency loads, whose results are buffered in the queue and used by the Consumer. This scheme drastically increases memory-level parallelism (MLP) to mitigate latency bottlenecks. In equal-area comparisons, GraphAttack outperforms OoO cores, do-all parallelism, prefetching, and prior decoupling approaches, achieving a 2.87× speedup and 8.61× gain in energy efficiency across a range of graph applications. These improvements scale; GraphAttack achieves a 3× speedup over 64 parallel cores. Lastly, it has pragmatic design principles; it enhances in-order architectures that are gaining increasing open-source support.
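To make the slicing scheme concrete, here is a minimal software-level sketch (not GraphAttack's compiler output; names such as csr_offsets and csr_neighbors are illustrative) of a Producer thread that issues the indirect neighbor loads and buffers the results in a bounded queue, while a Consumer thread computes on the buffered values. In the actual system the slicing is done by compiler passes and the queue is a hardware structure; this sketch only mirrors the dataflow.

```python
# Producer/Consumer slicing of a neighbor-sum kernel over a CSR graph.
import threading
from queue import Queue

def producer(csr_offsets, csr_neighbors, values, q):
    """Issue the pointer-indirect 'long-latency' loads and enqueue neighbor values."""
    for v in range(len(csr_offsets) - 1):
        start, end = csr_offsets[v], csr_offsets[v + 1]
        neigh_vals = [values[u] for u in csr_neighbors[start:end]]  # indirect loads
        q.put((v, neigh_vals))
    q.put(None)  # sentinel: no more work

def consumer(q, out):
    """Consume buffered neighbor values and compute per-vertex sums."""
    while True:
        item = q.get()
        if item is None:
            break
        v, neigh_vals = item
        out[v] = sum(neigh_vals)

# Tiny example graph in CSR form with edges 0-1, 0-2, 1-2.
csr_offsets   = [0, 2, 4, 6]
csr_neighbors = [1, 2, 0, 2, 0, 1]
values        = [1.0, 2.0, 3.0]

q = Queue(maxsize=64)          # the bounded communication queue shared by the pair
out = [0.0] * (len(csr_offsets) - 1)
t_prod = threading.Thread(target=producer, args=(csr_offsets, csr_neighbors, values, q))
t_cons = threading.Thread(target=consumer, args=(q, out))
t_prod.start(); t_cons.start()
t_prod.join(); t_cons.join()
print(out)                     # [5.0, 4.0, 3.0]: sum of neighbor values per vertex
```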


Author(s):  
Karthiga B ◽  
Rekha M

Virtual brain research is accelerating the development of inexpensive real-time Brain-Computer Interfaces (BCI). Hardware improvements that increase the capability of virtual brain analysis and wearable brain-computer sensors have made possible several new software frameworks that developers can use to create applications combining BCI and IoT. In this paper, we present a survey of BCI in IoT from various perspectives, including Electroencephalogram (EEG) based BCI models, machine learning, and currently active platforms. Based on our investigation, the main findings of this survey highlight three major development trends of BCI: EEG, IoT, and cloud computing. This approach is useful for determining whether the brain is alive or dead; if it is alive, the brain's activity is monitored and stored. From this record one can conclude whether an action performed was legal or illegal, which is advantageous in two scenarios: for people affected by autism, and for detecting forgery in asset documents. If the status of the brain changes, a notification is sent to a designated relative via SMS and email.
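As a rough illustration of the alerting scenario described above (not the authors' implementation; the threshold, SMTP host, and addresses are placeholders), the sketch below classifies an EEG-derived activity level with a simple threshold and emails a registered contact when the state changes.

```python
# Threshold-based brain-state monitoring with an email notification on change.
import smtplib
from email.message import EmailMessage

ACTIVITY_THRESHOLD = 0.2   # hypothetical normalized activity level

def classify_state(activity_level):
    return "active" if activity_level >= ACTIVITY_THRESHOLD else "inactive"

def notify(contact_email, old_state, new_state):
    msg = EmailMessage()
    msg["Subject"] = f"Brain-state change: {old_state} -> {new_state}"
    msg["From"] = "bci-monitor@example.org"          # placeholder sender
    msg["To"] = contact_email
    msg.set_content("The monitored EEG activity state has changed.")
    with smtplib.SMTP("smtp.example.org") as server:  # placeholder SMTP host
        server.send_message(msg)

def monitor(samples, contact_email):
    """samples: iterable of (timestamp, activity_level) pairs from the EEG pipeline."""
    last_state = None
    for ts, level in samples:
        state = classify_state(level)
        if last_state is not None and state != last_state:
            notify(contact_email, last_state, state)
        last_state = state
```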


2021 ◽  
Vol 64 (9) ◽  
pp. 108-116
Author(s):  
Emma Tosch ◽  
Eytan Bakshy ◽  
Emery D. Berger ◽  
David D. Jensen ◽  
J. Eliot B. Moss

Online experiments are an integral part of the design and evaluation of software infrastructure at Internet firms. To handle the growing scale and complexity of these experiments, firms have developed software frameworks for their design and deployment. Ensuring that the results of experiments in these frameworks are trustworthy---referred to as internal validity---can be difficult. Currently, verifying internal validity requires manual inspection by someone with substantial expertise in experimental design. We present the first approach for checking the internal validity of online experiments statically, that is, from code alone. We identify well-known problems that arise in experimental design and causal inference, which can take on unusual forms when expressed as computer programs: failures of randomization and treatment assignment, and causal sufficiency errors. Our analyses target PLANOUT, a popular framework that features a domain-specific language (DSL) to specify and run complex experiments. We have built PLANALYZER, a tool that checks PLANOUT programs for threats to internal validity, before automatically generating important data for the statistical analyses of a large class of experimental designs. We demonstrate PLANALYZER's utility on a corpus of PLANOUT scripts deployed in production at Facebook, and we evaluate its ability to identify threats on a mutated subset of this corpus. PLANALYZER has both precision and recall of 92% on the mutated corpus, and 82% of the contrasts it generates match hand-specified data.
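As a toy illustration of one class of check the paper describes (this is not PLANALYZER, and the dictionary-based spec below is only a simplified stand-in for a PLANOUT-style program), the sketch flags treatment variables that are assigned constantly or randomized without a unit of randomization.

```python
# Toy static check for randomization failures in a simplified experiment spec.
def check_randomization(spec):
    """spec maps variable names to assignment descriptors."""
    problems = []
    for var, assign in spec.items():
        if assign.get("op") != "uniformChoice":
            problems.append(f"{var}: not randomized (op={assign.get('op')})")
        elif not assign.get("unit"):
            problems.append(f"{var}: randomized but has no unit of randomization")
    return problems

experiment = {
    "button_color": {"op": "uniformChoice", "choices": ["red", "blue"], "unit": "userid"},
    "button_text":  {"op": "literal", "value": "Sign up"},   # constant: no treatment variation
}
for p in check_randomization(experiment):
    print("threat to internal validity:", p)
```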


Author(s):  
Karwan Jameel Merceedi ◽  
Nareen Abdulla Sabry

In recent years, data and internet usage have grown rapidly, giving rise to big data. To address the resulting problems, many software frameworks are used to increase the performance of distributed systems and to provide storage for very large volumes of data. One of the most useful software frameworks for processing data in distributed systems is Hadoop. Hadoop clusters machines together and coordinates the work between them. It consists of two major components: the Hadoop Distributed File System (HDFS) and MapReduce (MR). With Hadoop, we can process a large file in a distributed fashion, for example counting how often each word occurs in it. HDFS is designed to store colossal data sets effectively and to stream them at high bandwidth to user applications. The differences between HDFS and other file systems are significant: HDFS is intended for low-cost hardware and is highly fault-tolerant. Thousands of computers in a vast cluster each provide directly attached storage and execute user application tasks. By distributing storage and computation across many servers, the resource scales with demand while remaining cost-effective at every size. Owing to these characteristics of HDFS, many researchers have worked in this field, trying to enhance the performance and efficiency of the file system so that it becomes one of the most active cloud systems. This paper offers a study that reviews the essential investigations as a trend beneficial for researchers wishing to work on such a system. The basic ideas and features of the investigated works were taken into account to produce a robust comparison, which simplifies the selection for future researchers in this subject. Drawing on many authors, this paper explains what Hadoop is, its architecture, how it works, and its performance analysis in distributed systems. In addition, each reviewed work is assessed and compared with the others.
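To make the word-count example concrete, below is a conventional mapper/reducer pair for Hadoop Streaming written as a single Python sketch; the script name and the streaming-jar path in the comment are illustrative.

```python
# Word count for Hadoop Streaming; both roles live in one script for brevity.
# Run roughly as:
#   hadoop jar hadoop-streaming.jar -files wc.py \
#       -mapper "python wc.py map" -reducer "python wc.py reduce" -input in/ -output out/
import sys

def mapper():
    # Emit "<word>\t1" for every word read from stdin.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    # Hadoop delivers mapper output sorted by key; sum the counts per word.
    current_word, count = None, 0
    for line in sys.stdin:
        word, value = line.rstrip("\n").split("\t", 1)
        if word != current_word:
            if current_word is not None:
                print(f"{current_word}\t{count}")
            current_word, count = word, 0
        count += int(value)
    if current_word is not None:
        print(f"{current_word}\t{count}")

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```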


Author(s):  
Chavi Ralhan ◽  
Rakesh Bishnoi ◽  
Ankit ◽  
Anjali ◽  
Hitesh Kumar

Copied code, or code clones, is a kind of code that negatively affects the development and maintenance of software systems. Software clone research in the past mostly focused on the detection and analysis of code clones, while more recent research extends to the whole spectrum of clone management. In the last decade, three surveys appeared in the literature covering the detection, analysis, and evolutionary characteristics of code clones. This paper presents a comprehensive survey on the state of the art in clone management, with an in-depth analysis of clone management activities (e.g., tracking, refactoring, cost-benefit analysis) beyond detection and analysis. It is the first survey on clone management, in which we highlight the achievements so far and reveal avenues for further research necessary to move towards an integrated clone management system. We believe that we have thoroughly surveyed the area of clone management and that this work may serve as a roadmap for future research in the area.
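As a minimal illustration of the detection step on which clone management builds (a sketch assuming simple textual, Type-1 style clones; it is not one of the surveyed tools), the code below hashes normalized, fixed-size windows of lines and reports windows that occur in more than one location.

```python
# Simple textual clone detection by hashing normalized line windows.
import hashlib
from collections import defaultdict

def normalize(line):
    return " ".join(line.strip().split())      # collapse whitespace

def find_clones(files, window=5):
    """files: dict of {filename: source_text}. Returns {hash: [(file, start_line), ...]}."""
    index = defaultdict(list)
    for name, text in files.items():
        lines = [normalize(l) for l in text.splitlines() if l.strip()]
        for i in range(len(lines) - window + 1):
            digest = hashlib.sha1("\n".join(lines[i:i + window]).encode()).hexdigest()
            index[digest].append((name, i + 1))
    # Keep only windows that appear in more than one place: candidate clones.
    return {h: locs for h, locs in index.items() if len(locs) > 1}
```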


Author(s):  
Pooja Prafulchandra Panchal

Software reliability is the probability of failure-free software operation for a specified period of time in a specified environment. Software reliability is also a significant factor affecting overall system reliability. Various approaches can be used to improve the reliability of component-based software systems; however, it is difficult to balance development time and budget against software reliability. We compute the reliability of various component-based software systems using a fuzzy logic approach. The reliability of component-based software systems can also be designed and computed using neural networks, neuro-fuzzy systems, genetic algorithms, and similar techniques. There are various attributes of reliability to consider when designing and analyzing component-based software systems.
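As a minimal sketch of the fuzzy logic idea (the input attributes, membership ranges, and rules below are assumptions for illustration, not the author's model), the code fuzzifies two component attributes, applies two rules, and defuzzifies to a reliability estimate.

```python
# Toy fuzzy inference for the reliability of one software component.
def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def component_reliability(defect_density, test_coverage):
    # Fuzzification (ranges are illustrative, inputs normalized to [0, 1]).
    low_defects   = tri(defect_density, -0.1, 0.0, 0.5)
    high_defects  = tri(defect_density, 0.3, 1.0, 1.1)
    high_coverage = tri(test_coverage, 0.5, 1.0, 1.1)
    low_coverage  = tri(test_coverage, -0.1, 0.0, 0.5)

    # Rule strengths (min as fuzzy AND).
    rel_high = min(low_defects, high_coverage)   # IF defects low AND coverage high THEN reliability high
    rel_low  = min(high_defects, low_coverage)   # IF defects high AND coverage low THEN reliability low

    # Weighted-average defuzzification toward representative reliability values.
    if rel_high + rel_low == 0:
        return 0.5                               # no rule fires: neutral estimate
    return (rel_high * 0.95 + rel_low * 0.40) / (rel_high + rel_low)

print(component_reliability(defect_density=0.1, test_coverage=0.9))  # ~0.95
```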


2021 ◽  
Author(s):  
Thomas Seidler ◽  
Norbert Schultz ◽  
Dr. Markus Quade ◽  
Christian Autermann ◽  
Dr. Benedikt Gräler ◽  
...  

Earth system modeling is virtually impossible without dedicated data analysis. Typically, the data are big and, due to the complexity of the system, adequate tools for the analysis lie in the domain of machine learning or artificial intelligence. However, earth system specialists have expertise other than developing and deploying the state-of-the-art programming code needed to use modern software frameworks and computing resources efficiently. In addition, cloud and HPC infrastructure are frequently needed to run analyses with data beyond Tera- or even Petascale volume, with corresponding requirements on available RAM, GPU, and CPU sizes.

Inside the KI:STE project (www.kiste-project.de), we extend the concepts of an existing project, the Mantik platform (www.mantik.ai), so that handling of data and algorithms is facilitated for earth system analyses, while technical challenges such as scheduling and monitoring of training jobs and platform-specific configurations are abstracted away from the user.

The design principles are collaboration and reproducibility of algorithms, from the first data load to the deployment of a model to a cluster infrastructure. In addition to the executive part, where code is developed and deployed, the KI:STE project develops a learning platform where dedicated topics related to earth system science are presented systematically and pedagogically.

In this presentation, we show the architecture and interfaces of the KI:STE platform together with a simple example.


2021 ◽  
Vol 171 ◽  
pp. 110846
Author(s):  
Cristiano Politowski ◽  
Fabio Petrillo ◽  
João Eduardo Montandon ◽  
Marco Tulio Valente ◽  
Yann-Gaël Guéhéneuc

2021 ◽  
Vol 55 ◽  
pp. 312-318
Author(s):  
Sri Sudha Vijay Keshav Kolla ◽  
Andre Sanchez ◽  
Peter Plapper

2021 ◽  
Vol 251 ◽  
pp. 03007
Author(s):  
Marilena Bandieramonte ◽  
Riccardo Maria Bianchi ◽  
Joseph Boudreau ◽  
Andrea Dell’Acqua ◽  
Vakhtang Tsulaia

The GeoModel class library for detector description has recently been released as an open-source package and extended with a set of tools to allow much of the detector modeling to be carried out in a lightweight development environment, outside of large and complex software frameworks. These tools include the mechanisms for creating a persistent representation of the geometry, an interactive 3D visualization tool, various command-line tools, a plugin system, and XML and JSON parsers. The overall goal of the tool suite is a fast geometry development cycle with quick visual feedback. The tool suite can be built on both Linux and Macintosh systems with minimal external dependencies. It includes useful command-line utilities: gmclash, which runs clash detection; gmgeantino, which generates geantino maps; and fullSimLight, which runs GEANT4 simulation on geometry imported from a GeoModel description. The GeoModel tool suite is presently in use in both the ATLAS and FASER experiments. In ATLAS it will be the basis of the LHC Run 4 geometry description.

