High Performance Working Practices: The New Framework for Nurturing Sustainability?

2011 ◽  
Author(s):  
Ana Martins ◽  
Kevin Brown ◽  
Orlando Pereira ◽  
Isabel Martins


2003 ◽  
Vol 12 (06) ◽  
pp. 769-781 ◽  
Author(s):  
ÁKOS ZARÁNDY ◽  
CSABA REKECZKY ◽  
PÉTER FÖLDESY ◽  
ISTVÁN SZATMÁRI

The first CNN technology-based high-performance industrial visual computer, called Aladdin, is reported. The device marks the world premiere of the ACE4k Cellular Visual Microprocessor (CVM) chip powering an industrial visual computer. One of the most important features of the Aladdin system is its image processing library, which reduces algorithm development time and provides efficient code, error-free operation in binary mode and accurate operation in grayscale mode. Moreover, the library makes the Aladdin system easy to use for those who are not familiar with CNN technology.


2014 ◽  
Vol 551 ◽  
pp. 470-477 ◽  
Author(s):  
Yang Liu ◽  
Yan Bo Zhu

Navigation signals must be tracked steadily so that navigation information can be extracted and a navigation solution computed. Extraction of navigation information is generally performed by GNSS tracking loops, which must accomplish two critical tasks: (1) tracking parameters must be estimated precisely so that the navigation message can be decoded reliably, and (2) tracking must be maintained continuously so that navigation information keeps flowing. This paper proposes a novel tracking framework using a dynamic FLL-assisted PLL strategy together with a loop state detection monitor. The new framework extends the traditional phase and delay locked loop (PLL/DLL) tracking framework, and its contributions lie mainly in two parts. A dynamic parameter-adjustment scheme is proposed to avoid tracking failure caused by environmental changes, implemented within an FLL-PLL cooperative framework. Experimental results demonstrate the advantages of the proposed algorithm over the standard PLL/DLL framework and show that it is better suited to tracking under signal attenuation while maintaining high performance.
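As a rough illustration of the FLL-assisting-PLL idea (not the authors' algorithm; the single-sample "prompt", discriminators and loop gains below are simplifications invented here), a scalar carrier-tracking loop can be sketched as:

```python
import numpy as np

def fll_assisted_pll(samples, dt, k_pll=0.4, k_fll=0.1):
    """Track carrier phase with a PLL whose frequency state is steered
    by an FLL discriminator (toy FLL-assisted-PLL loop)."""
    phase, freq = 0.0, 0.0
    prev = None
    for s in samples:
        # FLL discriminator: instantaneous frequency from the rotation
        # between consecutive complex samples
        if prev is not None:
            f_meas = np.angle(s * np.conj(prev)) / (2.0 * np.pi * dt)
            freq += k_fll * (f_meas - freq)   # FLL assists the frequency state
        prev = s
        # advance the NCO, then apply the PLL phase correction
        phase += 2.0 * np.pi * freq * dt
        phase_err = np.angle(s * np.exp(-1j * phase))
        phase += k_pll * phase_err
    return freq, phase_err

# toy signal: a clean carrier with a 3 Hz frequency offset
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
freq, err = fll_assisted_pll(np.exp(2j * np.pi * 3.0 * t), dt)
```

Here the FLL pulls the frequency estimate toward the true offset so the PLL only has to close a small residual phase error, which is the division of labour the abstract describes.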


2010 ◽  
Vol 07 (01) ◽  
pp. 41-50
Author(s):  
P. AROCKIA JANSI RANI ◽  
V. SADASIVAM

A statistical approach for modeling the codevectors designed using a supervised learning neural network is proposed in this paper. Since wavelet-based compression is more robust to transmission and decoding errors, the proposed work is implemented in the wavelet domain. Two crucial issues in compression methods are the coding efficiency and the psychovisual quality achieved while modeling different image regions. This paper proposes a high-performance wavelet coder that provides a new framework for handling these issues in a simple and effective manner. First, the input image is subjected to a wavelet transform. The transformed coefficients are then quantized and passed to the well-known Huffman encoder. In the quantization step, a codebook is initially designed using a Learning Vector Quantizer. Since the codebook is essential to the reconstructed image quality, and to exploit the spatial energy compaction of the codevectors, the codebook is further modeled using a Savitzky–Golay polynomial. Experimental results show that the proposed work gives better PSNR results, competitive with state-of-the-art coders in the literature.
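The quantization stage can be sketched in a simplified form. The snippet below uses plain k-means (Lloyd's algorithm) in place of the paper's supervised Learning Vector Quantizer, and a hand-rolled Savitzky–Golay smoother applied to a codevector; all names and parameters are illustrative:

```python
import numpy as np

def design_codebook(vectors, k, iters=20):
    """Plain Lloyd (k-means) codebook design, standing in here for the
    paper's supervised LVQ stage."""
    vectors = np.asarray(vectors, dtype=float)
    # deterministic spread-out initialisation
    codebook = vectors[:: max(1, len(vectors) // k)][:k].copy()
    for _ in range(iters):
        # assign every training vector to its nearest codevector
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            members = vectors[labels == j]
            if len(members):
                codebook[j] = members.mean(0)
    return codebook

def savgol_smooth(vec, window=5, order=2):
    """Savitzky-Golay smoothing via local least-squares polynomial fits."""
    half = window // 2
    padded = np.pad(np.asarray(vec, dtype=float), half, mode="edge")
    x = np.arange(window) - half
    out = np.empty(len(vec), dtype=float)
    for i in range(len(vec)):
        coeffs = np.polyfit(x, padded[i:i + window], order)
        out[i] = np.polyval(coeffs, 0)   # value of the local fit at the centre
    return out

# toy demonstration: two well-separated clusters of 4-dimensional vectors
rng = np.random.default_rng(1)
train = np.vstack([rng.normal(0.0, 0.1, (20, 4)),
                   rng.normal(10.0, 0.1, (20, 4))])
codebook = design_codebook(train, k=2)
smoothed = savgol_smooth(codebook[0])
```

Smoothing each codevector with a local polynomial is one plausible reading of "modeling the codebook with a Savitzky–Golay polynomial"; the paper's exact formulation may differ.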


2020 ◽  
Vol 245 ◽  
pp. 07052
Author(s):  
Maxim Storetvedt ◽  
Latchezar Betev ◽  
Håvard Helstrup ◽  
Kristin Fanebust Hetland ◽  
Bjarte Kileng

The new JAliEn (Java ALICE Environment) middleware is a Grid framework designed to satisfy the needs of the ALICE experiment for LHC Run 3, such as providing a high-performance, high-scalability service to cope with the increased volumes of collected data. This new framework also introduces a split, two-layered job pilot, creating a new approach to how jobs are handled and executed within the Grid. Each layer runs on a separate JVM with a separate authentication token, allowing finer control of permissions and improved isolation of the payload. The separate layers also allow job payloads to be executed within containers, further strengthening isolation and creating a cohesive environment across computing sites while avoiding the resource overhead associated with traditional virtualisation. This contribution presents the architecture of the new split job pilot found in JAliEn and the methods used to execute Grid jobs while maintaining reliable communication between layers, even when a layer runs in a separate container, while retaining isolation and mitigating possible security risks. Furthermore, we discuss how the implementation remains agnostic to the choice of container platform, allowing it to run within popular platforms such as Singularity and Docker.
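The container-agnostic payload launch described above might look roughly as follows. This is an illustrative Python analogue, not JAliEn's Java implementation; the wrapper commands and the token-over-stdin scheme are assumptions made for the sketch:

```python
import json
import shutil
import subprocess
import sys

def container_command(runtime, image, payload_cmd):
    """Wrap a payload command for the chosen container platform; with no
    runtime available, the payload simply runs on the bare host."""
    if runtime == "singularity":
        return ["singularity", "exec", image] + payload_cmd
    if runtime == "docker":
        return ["docker", "run", "--rm", image] + payload_cmd
    return list(payload_cmd)

def detect_runtime():
    """Pick whichever supported container runtime is on PATH."""
    for rt in ("singularity", "docker"):
        if shutil.which(rt):
            return rt
    return None

def run_payload(runtime, image, payload_cmd, job_token):
    # The outer layer keeps its own credentials; the inner layer only
    # receives the restricted per-job token, passed here over stdin.
    proc = subprocess.run(
        container_command(runtime, image, payload_cmd),
        input=json.dumps({"token": job_token}),
        capture_output=True,
        text=True,
    )
    return proc.returncode, proc.stdout
```

Keeping the wrapping logic in one small function is what makes the pilot indifferent to the container platform: the inner layer only ever sees a command line and a restricted token, regardless of how it was launched.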


2018 ◽  
Vol 184 ◽  
pp. 269-278 ◽  
Author(s):  
Anne Reinarz ◽  
Tim Dodwell ◽  
Tim Fletcher ◽  
Linus Seelinger ◽  
Richard Butler ◽  
...  

Entropy ◽  
2021 ◽  
Vol 23 (12) ◽  
pp. 1651
Author(s):  
Prabal Datta Barua ◽  
Wai Yee Chan ◽  
Sengul Dogan ◽  
Mehmet Baygin ◽  
Turker Tuncer ◽  
...  

Many learning techniques coupled with optical coherence tomography (OCT) images have been developed to diagnose retinal disorders. This work aims to develop a novel framework that extracts deep features from 18 pre-trained convolutional neural networks (CNNs) and attains high performance on OCT images. We have developed a new framework for automated detection of retinal disorders using transfer learning. The model consists of three phases: deep fused and multilevel feature extraction using the 18 pre-trained networks and tent maximal pooling; feature selection with ReliefF; and classification using an optimized classifier. The novelty of the proposed framework lies in generating features with widely used CNNs and selecting the features most suitable for classification. The features extracted by the proposed intelligent feature extractor are fed to iterative ReliefF (IRF), which automatically selects the best feature vector, and a quadratic support vector machine (QSVM) is used as the classifier. We developed the model on two public OCT image datasets, named database 1 (DB1) and database 2 (DB2). The proposed framework attains classification accuracies of 97.40% and 100% on DB1 and DB2, respectively. These results illustrate the success of our model.
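The feature-selection step can be sketched in simplified form. The snippet below implements a basic two-class ReliefF weighting and top-k column selection, not the paper's iterative ReliefF (IRF), and all parameters are illustrative:

```python
import numpy as np

def relieff_weights(X, y, n_neighbors=3):
    """Basic ReliefF: reward features that agree with near hits of the
    same class and differ on near misses of the other class."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    n, d = X.shape
    span = X.max(0) - X.min(0) + 1e-12     # normalise feature ranges
    w = np.zeros(d)
    for i in range(n):
        diff = np.abs(X - X[i]) / span     # per-feature distance to all points
        dist = diff.sum(1)
        dist[i] = np.inf                   # never pick the point itself
        same, other = y == y[i], y != y[i]
        hits = np.argsort(np.where(same, dist, np.inf))[:n_neighbors]
        misses = np.argsort(np.where(other, dist, np.inf))[:n_neighbors]
        w += diff[misses].mean(0) - diff[hits].mean(0)
    return w / n

def select_top(X, weights, k):
    """Keep the k highest-weighted feature columns."""
    return X[:, np.argsort(weights)[::-1][:k]]

# toy data: feature 0 tracks the class label, feature 1 is pure noise
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 25)
X = np.column_stack([y + rng.normal(0, 0.05, 50),
                     rng.normal(0, 1.0, 50)])
w = relieff_weights(X, y)
```

The selected columns would then be handed to the classifier (a quadratic-kernel SVM in the paper); the weighting stage is where the discriminative deep features are separated from the rest.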


2013 ◽  
Vol 69 (7) ◽  
pp. 1283-1288 ◽  
Author(s):  
Tadeusz Skarzynski

While the majority of macromolecular X-ray data are currently collected using highly efficient beamlines at an ever-increasing number of synchrotrons, there is still a need for high-performance, reliable systems for in-house experiments. In addition to crystal screening and optimization of data-collection parameters before a synchrotron trip, a home system allows data to be collected as soon as crystals are produced, enabling the solution of novel structures, especially by the molecular-replacement method, and is invaluable in achieving the quick turnover often required for ligand-binding studies in the pharmaceutical industry. X-ray sources, detectors and software developed for in-house use have evolved continuously in recent years, and a diverse range of tools for structural biology laboratories is now available. This paper presents an overview of the main directions of these developments and examples of specific solutions available to the macromolecular crystallography community, showing that data collection 'at home' is still an attractive proposition complementing the use of synchrotron beamlines.


2021 ◽  
Author(s):  
Jin Wang ◽  
Jiacheng Wu ◽  
Mingda Li ◽  
Jiaqi Gu ◽  
Ariyam Das ◽  
...  

With an escalating arms race to adopt machine learning (ML) in diverse application domains, there is an urgent need to support declarative machine learning over distributed data platforms. Toward this goal, a new framework is needed where users can specify ML tasks in a manner where programming is decoupled from the underlying algorithmic and system concerns. In this paper, we argue that declarative abstractions based on Datalog are natural fits for machine learning and propose a purely declarative ML framework with a Datalog query interface. We show that using aggregates in recursive Datalog programs entails a concise expression of ML applications, while providing a strictly declarative formal semantics. This is achieved by introducing simple conditions under which the semantics of recursive programs is guaranteed to be equivalent to that of aggregate-stratified ones. We further provide specialized compilation and planning techniques for semi-naive fixpoint computation in the presence of aggregates and optimization strategies that are effective on diverse recursive programs and distributed data platforms. To test and demonstrate these research advances, we have developed a powerful and user-friendly system on top of Apache Spark. Extensive evaluations on large-scale datasets illustrate that this approach will achieve promising performance gains while improving both programming flexibility and ease of development and deployment for ML applications.
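The aggregates-in-recursion idea can be illustrated, in plain Python rather than the paper's Spark-based system, with the classic shortest-paths program whose min aggregate is evaluated by semi-naive fixpoint:

```python
from math import inf

def shortest_paths(edges, source):
    """Semi-naive fixpoint evaluation of the recursive min-aggregate program
        sp(source, 0).
        sp(Y, min(D + C)) <- sp(X, D), edge(X, Y, C).
    `edges` is a list of (src, dst, cost) facts."""
    dist = {source: 0.0}
    delta = {source: 0.0}                 # facts derived in the last round
    while delta:
        new_delta = {}
        for x, d in delta.items():        # semi-naive: join only the new facts
            for u, y, c in edges:
                if u != x:
                    continue
                cand = d + c
                if cand < dist.get(y, inf):   # min aggregate keeps improvements
                    dist[y] = cand
                    new_delta[y] = cand
        delta = new_delta
    return dist

g = [("a", "b", 1), ("b", "c", 2), ("a", "c", 5)]
paths = shortest_paths(g, "a")
```

Joining only the delta of newly improved facts each round is the semi-naive optimization the abstract refers to; the min comparison is what makes the aggregate safe to push inside the recursion.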

