2019 ◽  
Vol 214 ◽  
pp. 00001
Alessandra Forti ◽  
Latchezar Betev ◽  
Maarten Litmaath ◽  
Oxana Smirnova ◽  
Petya Vasileva ◽  

The 23rd International Conference on Computing in High Energy and Nuclear Physics (CHEP) took place in the National Palace of Culture, Sofia, Bulgaria, from 9 to 13 July 2018. 575 participants joined the plenary and the eight parallel sessions dedicated to: online computing; offline computing; distributed computing; data handling; software development; machine learning and physics analysis; clouds, virtualisation and containers; networks and facilities. The conference hosted 35 plenary presentations, 323 parallel presentations and 188 posters.

Virendra Tiwari ◽  
Balendra Garg ◽  
Uday Prakash Sharma

Machine learning algorithms are capable of managing multi-dimensional data in dynamic environments. Despite these vital features, several challenges remain. Machine learning algorithms still require additional mechanisms or procedures for predicting a large number of new classes while preserving privacy. These deficiencies show that the reliable use of a machine learning algorithm depends on human experts, because raw data may complicate the learning process and generate inaccurate results. The interpretation of outcomes therefore requires expertise in machine learning mechanisms, which is itself a significant challenge. Machine learning techniques also suffer from issues of high dimensionality, adaptability, distributed computing, scalability, streaming data, and duplicity. A central issue is their vulnerability to errors; furthermore, machine learning techniques are found to lack variability. This paper studies how the computational complexity of machine learning algorithms can be reduced by investigating how to make predictions using an improved algorithm.
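The abstract does not specify the improved algorithm. As a hedged illustration of one common route to lower computational complexity under high dimensionality, the plain-Python sketch below (all names and data hypothetical) drops near-constant features before running a simple nearest-centroid predictor, so prediction cost scales with the reduced dimension rather than the original one.

```python
from statistics import pvariance, mean

def select_features(rows, min_variance=0.01):
    """Return indices of columns whose variance exceeds the threshold."""
    n_cols = len(rows[0])
    return [j for j in range(n_cols)
            if pvariance([r[j] for r in rows]) > min_variance]

def nearest_centroid_predict(train, labels, x, keep):
    """Predict the label of x from class centroids over the kept features."""
    centroids = {}
    for lbl in set(labels):
        members = [r for r, l in zip(train, labels) if l == lbl]
        centroids[lbl] = [mean(m[j] for m in members) for j in keep]
    xr = [x[j] for j in keep]
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2
                                   for a, b in zip(centroids[lbl], xr)))

# Toy data: the third feature is constant, so it is discarded.
train = [[0.0, 0.1, 5.0], [0.2, 0.0, 5.0], [1.0, 0.9, 5.0], [0.9, 1.1, 5.0]]
labels = ["low", "low", "high", "high"]
keep = select_features(train)  # only the two informative columns remain
print(nearest_centroid_predict(train, labels, [0.1, 0.05, 5.0], keep))
```

The same idea generalizes to any predictor whose cost grows with the number of features: filtering uninformative dimensions first reduces both training and prediction time.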

With the immense growth in the field of mobile communication, a good number of complex applications are now available for mobile devices. These complex applications meet client demand for higher-performing, more capable applications that are accessible from any location and any device. Application developers have therefore attempted to build highly scaled applications for deployment on mobile communication devices. Larger applications place higher demands on memory and processing capability, so making comparable infrastructure available in mobile computing environments has always been a challenge. With the availability of distributed computing architectures, the bottleneck in computing capability for these complex applications can be handled. Nonetheless, the memory requirements of the applications must be addressed in a more sophisticated manner through distribution of memory and sharing of data; hence, distributed caching came into existence. A distributed cache is an extension of the traditional concept of a cache used at a single location. A distributed cache may span multiple servers so that it can grow in size and in transactional capacity. It is mostly used to store application data residing in databases and web session data. One of the most popular techniques for making the cache available is to perform cache discovery operations in the network, and a number of parallel research attempts have been made to identify the most suitable place in the network at which to build the distributed cache. However, most of these attempts are criticised for considering only a single dimension for cache discovery: some of the work focuses on distance, some on density, and some on page replacement policies applicable to mobile computing environments such as MANETs or WSNs.
Hence, the demand of this research is to consider multiple parameters for cache discovery and to build a framework that automatically defines the cache distribution. This work proposes a novel architecture, or framework, to determine the cache distribution based on distance, a stale page reduction mechanism, and, finally, energy optimization. The outcome of the research is to automate the recommendation of cache discovery and to increase the network lifetime by 90% compared with existing methods for cache discovery. In order to handle the complex processing of the proposed algorithms, this work deploys machine learning methods to reduce the time complexity.
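The abstract does not give the exact scoring scheme. The plain-Python sketch below (node attributes and weights are illustrative assumptions, not the paper's values) shows the general shape of such a multi-parameter approach: combine normalized distance, stale-page ratio, and residual energy into one score and recommend the best-scoring node for cache placement.

```python
def cache_score(node, w_dist=0.4, w_stale=0.3, w_energy=0.3):
    """Lower distance and staleness, higher residual energy -> better score.

    All attributes are assumed normalized to [0, 1]; the weights are
    illustrative only, not taken from the paper.
    """
    return (w_dist * (1.0 - node["distance"])
            + w_stale * (1.0 - node["stale_ratio"])
            + w_energy * node["energy"])

def recommend_cache_node(nodes):
    """Pick the node with the best combined score for hosting the cache."""
    return max(nodes, key=cache_score)

nodes = [
    {"id": "n1", "distance": 0.8, "stale_ratio": 0.1, "energy": 0.9},
    {"id": "n2", "distance": 0.2, "stale_ratio": 0.3, "energy": 0.7},
    {"id": "n3", "distance": 0.5, "stale_ratio": 0.6, "energy": 0.4},
]
print(recommend_cache_node(nodes)["id"])  # prints "n2"
```

In a real deployment the weights themselves could be tuned by the machine learning component the abstract mentions, rather than fixed by hand.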

In present times, due to urbanization and pollution, it has become necessary to monitor and assess the quality of water reaching our homes. Ensuring a safe supply of drinking water has become a major challenge for modern civilization. In this paper, we present the design and development of a low-cost system for continuous monitoring of water quality (WQ) using the IoT (Internet of Things). The system consists of several sensors used to estimate physical and chemical parameters of the water; parameters such as temperature, pH, turbidity, conductivity, and dissolved oxygen can be measured. The measured values from the sensors are processed by a core controller, for which a Raspberry Pi B+ model is used. Finally, the sensor data can be viewed on the web using distributed computing. The data are processed using a machine learning algorithm that senses the water condition: if the WQ is good, it opens the gate, otherwise it closes the gate. This entire procedure happens automatically, without human intervention, thus saving the time needed to deal with the situation manually. The uniqueness of our proposed research is a water monitoring system with high frequency, high mobility, and low power consumption.
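The abstract does not state the decision thresholds or control logic. This plain-Python sketch (all limits are hypothetical, chosen only to resemble typical drinking-water ranges) shows the kind of rule such a controller could apply to one set of sensor readings to decide whether the gate opens or closes.

```python
# Hypothetical acceptable ranges for each sensed parameter.
LIMITS = {
    "temperature_c": (0.0, 35.0),
    "ph": (6.5, 8.5),
    "turbidity_ntu": (0.0, 5.0),
    "conductivity_us": (0.0, 1500.0),
    "dissolved_o2_mgl": (5.0, 14.0),
}

def water_quality_ok(reading):
    """True when every sensed parameter lies inside its acceptable range."""
    return all(lo <= reading[k] <= hi for k, (lo, hi) in LIMITS.items())

def gate_action(reading):
    """Decide the actuator command from one set of sensor readings."""
    return "OPEN" if water_quality_ok(reading) else "CLOSE"

sample = {"temperature_c": 24.0, "ph": 7.2, "turbidity_ntu": 1.1,
          "conductivity_us": 420.0, "dissolved_o2_mgl": 7.8}
print(gate_action(sample))  # prints "OPEN"
```

On the actual device this decision would run on the Raspberry Pi after each sensor sweep, with the reading also pushed to the cloud dashboard.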

2021 ◽  
Vol 51 (4) ◽  
pp. 75-81
Ahad Mirza Baig ◽  
Alkida Balliu ◽  
Peter Davies ◽  
Michal Dory

Rachid Guerraoui was the first keynote speaker, and he got things off to a great start by discussing the broad relevance of the research done in our community relative to both industry and academia. He first argued that, in some sense, the fact that distributed computing is so pervasive nowadays could end up stifling progress in our community by inducing people to work on marginal problems, and becoming isolated. His first suggestion was to try to understand and incorporate new ideas coming from applied fields into our research, and argued that this has been historically very successful. He illustrated this point via the distributed payment problem, which appears in the context of blockchains, in particular Bitcoin, but then turned out to be very theoretically interesting; furthermore, the theoretical understanding of the problem inspired new practical protocols. He then went further to discuss new directions in distributed computing, such as the COVID tracing problem, and new challenges in Byzantine-resilient distributed machine learning. Another source of innovation Rachid suggested was hardware innovations, which he illustrated with work studying the impact of RDMA-based primitives on fundamental problems in distributed computing. The talk concluded with a very lively discussion.

Loı̈c M. Roch ◽  
Florian Häse ◽  
Christoph Kreisbeck ◽  
Teresa Tamayo-Mendoza ◽  
Lars P. E. Yunker ◽  

Autonomous or “self-driving” laboratories combine robotic platforms with artificial intelligence to increase the rate of scientific discovery. They have the potential to transform our traditional approaches to experimentation. Although autonomous laboratories have recently gained increased attention, the engineering requirements imposed by their software packages often prevent their development. Indeed, autonomous laboratories require considerable effort in designing and writing advanced and robust software packages to control, orchestrate and synchronize automated instrumentation, cope with databases, and interact with various artificial intelligence algorithms. To overcome this limitation, we introduce ChemOS, a portable, modular and versatile software package, which supplies the structured layers indispensable for operating autonomous laboratories. Additionally, it enables remote control of laboratories, provides access to distributed computing resources, and comprises state-of-the-art machine learning methods. We believe that ChemOS will reduce the time-to-deployment from automated to autonomous discovery, and will provide the scientific community with an easy-to-use package to facilitate novel discovery, at a faster pace.

2020 ◽  
Andrew Kamal

With the emergence of regressional mathematics and algebraic topology come advancements in the field of artificial intelligence and machine learning. Such advancements, when applied to problems such as nuclear fusion and entropy, can be utilized to analyze unsolved abnormalities in the area of fusion-related research. Proof theory is utilized throughout this paper. For the logical mathematical proofs: n represents an unknown number, e represents a point of entropy, m represents a maximum point, and f represents fusion. This paper analyzes the topic of nuclear fusion and its unsolved problems as hardness problems and attempts to formulate computational proofs relating to entropy, fusion maximum, heat transfer, and entropy transfer mechanisms. The paper is centered not only on logical proofs but also on computational mechanisms such as distributed computing and its potential role in analyzing computational hardness in relation to fusion-related problems. We summarize a proposal for experimentation utilizing further logical proof formalities and the decentralized-internet SDK as a computational pipeline to solve fusion-related hardness problems.

2008 ◽  
pp. 1696-1705
George Tzanis ◽  
Christos Berberidis ◽  
Ioannis Vlahavas

At the end of the 1980s, a new discipline named data mining emerged. The introduction of new technologies such as computers, satellites, new mass storage media, and many others has led to an exponential growth of collected data. Traditional data analysis techniques often fail to process large amounts of often noisy data efficiently in an exploratory fashion. The scope of data mining is the extraction of knowledge from large amounts of data with the help of computers. It is an interdisciplinary area of research that has its roots in databases, machine learning, and statistics, and has contributions from many other areas such as information retrieval, pattern recognition, visualization, and parallel and distributed computing. There are many applications of data mining in the real world. Customer relationship management, fraud detection, market and industry characterization, stock management, medicine, pharmacology, and biology are some examples (Two Crows Corporation, 1999).
