Retooling for the Future: Launching the Advanced Light Source at Lawrence's Laboratory, 1980–1986

2008 ◽  
Vol 38 (4) ◽  
pp. 569-609 ◽  
Author(s):  
Catherine Westfall

In the early 1980s, David Shirley tried to launch a new synchrotron light source for materials science at Lawrence Berkeley Laboratory (LBL). Building accelerators was LBL's stock-in-trade. Yet with the Advanced Light Source (ALS) nothing proceeded as in the past. Whereas nuclear and high energy physicists had been happy when funding was procured for new machines, materials scientists were irritated to learn that Shirley had brokered a deal with Presidential Science Advisor George Keyworth to fund the ALS. Materials scientists valued accelerators less because materials science had benefited less from large-scale devices, which were therefore uncommon in their field. The project also faced competition and the criticism that LBL managers wanted it only to help their laboratory weather the threatening times that came with Ronald Reagan and his promise to cut the size of government (and in fact that was part of the rationale). The ALS also suffered because Shirley's deal was ill-suited to Washington in the 1980s: scientists were less influential than in previous decades, and a more robust federal bureaucracy controlled funding. Other ALS advocates eventually crafted a convincing scientific justification, recruited potential users, and guided the proposal through materials science reviews and the proper Washington channels. Although one-on-one deal making à la Ernest Lawrence was a relic of the past, Shirley did bargain collectively with other directors, paving the way for ALS funding and a retooling of the national laboratories and materials science: in the 1990s and 2000s the largest Department of Energy accelerators were devoted to materials science, not nuclear or high-energy physics.

2021 ◽  
Vol 36 (10) ◽  
pp. 2150070
Author(s):  
Maria Grigorieva ◽  
Dmitry Grin

Large-scale distributed computing infrastructures ensure the operation and maintenance of scientific experiments at the LHC: more than 160 computing centers all over the world execute tens of millions of computing jobs per day. ATLAS, the largest experiment at the LHC, creates an enormous flow of data which has to be recorded and analyzed by a complex, heterogeneous, and distributed computing environment. Statistically, about 10–12% of computing jobs end in failure: network faults, service failures, authorization failures, and other error conditions trigger error messages that provide detailed information about the issue and can be used for diagnosis and proactive fault handling. This analysis is complicated, however, by the sheer scale of the textual log data, and it is often exacerbated by the lack of a well-defined structure: human experts have to interpret the detected messages and create parsing rules manually, which is time-consuming and leaves previously unknown error conditions undetected until a human intervenes. This paper describes a pipeline of methods for the unsupervised clustering of multi-source error messages. The pipeline is data-driven, based on machine learning algorithms, and executed fully automatically, categorizing error messages by textual pattern and meaning.
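As a rough illustration of this kind of approach, the sketch below normalizes a handful of invented error messages by masking variable tokens, vectorizes the residual text, and groups it with density-based clustering. The sample messages, the regular expressions, and the TF-IDF/DBSCAN settings are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of unsupervised error-message clustering; all messages and
# parameters below are invented for illustration.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import DBSCAN

messages = [
    "Transfer to site SITE-ONE failed: connection timed out after 120 s",
    "Transfer to site SITE-TWO failed: connection timed out after 300 s",
    "Authorization failure: proxy certificate expired",
    "Authorization failure: proxy certificate not found",
    "Service unavailable: server returned HTTP 503",
]

def normalize(msg):
    """Mask variable tokens (numbers, upper-case identifiers) so that messages
    differing only in parameters collapse onto one textual pattern."""
    msg = re.sub(r"\d+", "<NUM>", msg)
    msg = re.sub(r"\b[A-Z][A-Z0-9-]{3,}\b", "<ID>", msg)
    return msg.lower()

# Vectorize the normalized messages and group them by textual similarity;
# messages that match no dense pattern are labeled -1 (noise).
vectors = TfidfVectorizer().fit_transform(normalize(m) for m in messages)
labels = DBSCAN(eps=0.5, min_samples=2, metric="cosine").fit_predict(vectors)

for label, msg in sorted(zip(labels, messages)):
    print(label, msg)
```

In this toy run the two transfer failures and the two authorization failures fall into two clusters, while the lone HTTP 503 message is flagged as noise; a production pipeline would add richer normalization and a second pass to label clusters by meaning.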


2020 ◽  
Vol 245 ◽  
pp. 07036
Author(s):  
Christoph Beyer ◽  
Stefan Bujack ◽  
Stefan Dietrich ◽  
Thomas Finnern ◽  
Martin Flemming ◽  
...  

DESY is one of the largest accelerator laboratories in Europe. It develops and operates state-of-the-art accelerators for fundamental science in the areas of high energy physics, photon science, and accelerator development. While for decades high energy physics (HEP) was the most prominent user of the DESY compute, storage, and network infrastructure, other scientific areas such as photon science and accelerator development have caught up and now dominate the demands on the DESY infrastructure resources, with significant consequences for IT resource provisioning. In this contribution, we will present an overview of the computational, storage, and network resources covering the various physics communities on site, ranging from high-throughput computing (HTC) batch-style offline processing on the Grid and interactive user analysis resources in the National Analysis Factory (NAF) for the HEP community to the computing needs of accelerator development and of photon science facilities such as PETRA III and the European XFEL. Since DESY is involved in these experiments and their data taking, the requirements include fast, low-latency online processing for data taking and calibration as well as offline processing, that is, high-performance computing (HPC) workloads, which run on the dedicated Maxwell HPC cluster. As all communities face significant challenges from changing environments and increasing data rates in the coming years, we will discuss how these demands will be reflected in necessary changes to the computing and storage infrastructures. We will present DESY's compute cloud and container orchestration plans as a basis for infrastructure and platform services. We will show examples of Jupyter notebooks for small-scale interactive analysis, as well as their integration into large-scale resources such as batch systems or Spark clusters (a sketch of this pattern follows below). To overcome the fragmentation of the various resources across the scientific communities at DESY, we explore how to integrate them into a seamless user experience in an Interdisciplinary Data Analysis Facility.
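As a minimal sketch of the notebook-to-cluster integration mentioned above: a Jupyter user requests a SparkSession backed by a remote cluster rather than local cores, after which dataframe operations run distributed. The master URL, resource settings, and dataset path are placeholders, not DESY's actual configuration.

```python
# Sketch of driving a remote Spark cluster from a Jupyter notebook cell.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("notebook-analysis")                     # visible in the cluster UI
    .master("spark://spark-master.example.org:7077")  # hypothetical cluster endpoint
    .config("spark.executor.memory", "4g")
    .config("spark.executor.cores", "2")
    .getOrCreate()
)

# From here on the notebook operates on distributed data transparently.
df = spark.read.parquet("/data/example/run/events.parquet")  # hypothetical path
print(df.filter(df.energy > 100.0).count())

spark.stop()
```

The same pattern generalizes to batch back ends: the notebook stays the lightweight front end while the heavy lifting is scheduled on shared resources.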


2004 ◽  
Vol 13 (03) ◽  
pp. 391-502 ◽  
Author(s):  
MASSIMO GIOVANNINI

Cosmology, high-energy physics, and astrophysics are today converging on the study of large-scale magnetic fields. While the experimental evidence for the existence of large-scale magnetization in galaxies, clusters, and superclusters is rather compelling, the origin of the phenomenon remains puzzling, especially in light of the most recent observations. The purpose of the present review is to describe the physical motivations and the open theoretical problems related to the existence of large-scale magnetic fields.


2019 ◽  
Vol 34 (Supplement 1) ◽ 
pp. i46-i57
Author(s):  
Robert Crease ◽  
Elyse Graham ◽  
Jamie Folsom

Over the past few years, research carried out at large-scale materials science facilities in the USA and elsewhere has undergone a phase transition that has affected its character and culture. Research cultures at these facilities now resemble ecosystems, comprising complex and evolving interactions between individuals, institutions, and the overall research environment. The outcome of this phase transition, which has been gradual and building since the 1980s, is known as the New (or Ecologic) Big Science [Crease, R. and Westfall, C. (2016). The new big science. Physics Today, 69: 30–6]. In this article, we describe this phase transition, review the practical challenges that it poses for historians, review some potential digital tools that might respond to these challenges, and then assess the theoretical implications posed by "database history".


1993 ◽  
Vol 5 (4) ◽  
pp. 505-549 ◽  
Author(s):  
Bruce Denby

In the past few years a wide variety of applications of neural networks to pattern recognition in experimental high-energy physics has appeared. The neural network solutions are in general of high quality, and, in a number of cases, are superior to those obtained using "traditional" methods. But neural networks are of particular interest in high-energy physics for another reason as well: much of the pattern recognition must be performed online, that is, in a few microseconds or less. The inherent parallelism of neural network algorithms, and the ability to implement them as very fast hardware devices, may make them an ideal technology for this application.
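To make the parallelism argument concrete, the toy sketch below classifies a batch of detector patterns with a small feed-forward network: the entire decision is a fixed sequence of matrix products, so latency is data-independent and the arithmetic maps directly onto parallel hardware. The layer sizes and random weights are arbitrary stand-ins, not any experiment's actual trigger network.

```python
# Toy feed-forward classifier: the whole decision is two matrix products,
# which is why such networks suit fast, fixed-latency online triggers.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 8)), np.zeros(8)  # detector inputs -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden layer -> track/no-track score

def classify(hits):
    """One fixed-depth pass: two matrix products and two nonlinearities.
    The operation count, and hence the latency, never depends on the inputs."""
    h = np.tanh(hits @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid score in (0, 1)

# A batch of 1000 candidate patterns is classified in one vectorized call.
scores = classify(rng.normal(size=(1000, 16)))
print(scores.shape)  # (1000, 1)
```

In a hardware implementation each row of the weight matrices becomes an independent multiply-accumulate unit, so all neurons in a layer evaluate simultaneously.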


2005 ◽  
Vol 20 (14) ◽  
pp. 3021-3032
Author(s):  
Ian M. Fisk

In this review, the computing challenges facing the current and next generation of high energy physics experiments will be discussed. High energy physics computing presents an interesting infrastructure challenge as the use of large-scale commodity computing clusters has increased. The causes and ramifications of these infrastructure challenges will be outlined. Increasing requirements, limited physical infrastructure at computing facilities, and limited budgets have driven many experiments to deploy distributed computing solutions to meet their growing needs for analysis, reconstruction, and simulation. The current generation of experiments has developed and integrated a number of solutions to facilitate distributed computing, and the work of these running experiments gives insight into the challenges that will be faced by the next generation of experiments and the infrastructure that will be needed.


2013 ◽  
Vol 46 (1) ◽  
pp. 1-13 ◽  
Author(s):  
Scott Classen ◽  
Greg L. Hura ◽  
James M. Holton ◽  
Robert P. Rambo ◽  
Ivan Rodic ◽  
...  

The SIBYLS beamline (12.3.1) of the Advanced Light Source at Lawrence Berkeley National Laboratory, supported by the US Department of Energy and the National Institutes of Health, is optimized for both small-angle X-ray scattering (SAXS) and macromolecular crystallography (MX), making it unique among the world's beamlines, which are mostly dedicated to either SAXS or MX. Since SIBYLS was commissioned, assessments of the limitations and advantages of a combined SAXS and MX beamline have suggested new strategies for integration and optimal data collection methods and have led to additional hardware and software enhancements. Features described include a dual-mode monochromator [containing both Si(111) crystals and Mo/B4C multilayer elements], rapid beamline optics conversion between SAXS and MX modes, active beam stabilization, sample-loading robotics, and mail-in and remote data collection. These features allow users to gain valuable insights from both dynamic solution scattering and high-resolution atomic diffraction experiments performed at a single synchrotron beamline. Key practical issues considered for data collection and analysis include radiation damage, structural ensembles, alternative conformers, and flexibility. SIBYLS develops and applies efficient combined MX and SAXS methods that deliver high-impact results by providing robust, cost-effective routes to connect structures to biology and by performing experiments that aid beamline designs for next-generation light sources.


1997 ◽  
Vol 84 (1-3) ◽  
pp. 85-98 ◽  
Author(s):  
Tony Warwick ◽  
Harald Ade ◽  
Adam P Hitchcock ◽  
Howard Padmore ◽  
E. G. Rightor ◽ 
...  
