Particle Physics and the Nuclear Force

2017 ◽  
pp. 113-124
2019 ◽  
Vol 32 (3) ◽  
pp. 318-322
Author(s):  
Elia A. Sinaiko

Gravity has been shown in the theories of relativity to be the curving of space around massive bodies: objects in orbit are following straight lines through curved space. Why massive bodies curve space is not explained, and we continue to ask, “What is gravity?” Quantum mechanics unites the theories of electromagnetism (QED), the weak nuclear force (EWT), and the strong nuclear force (QCD) in the Standard Model of particle physics, and a grand unified theory (GUT) is sought for these three fundamental forces. As yet there is no empirically verified quantum theory of gravity unified with them. If gravity is the curving of space, then gravity supervenes on the properties of space itself. In this short paper, we will attempt to define one of these spatial properties. We will not attempt to define the properties of time, though time appears to be part of a complete model of gravity; at least in this regard, and likely in many others, our model will be incomplete. We will build a case that the interactions of particles in massive bodies cause a massive collapse of probability density waves (PDWs) in the surrounding space. The collapse of these probabilities, of each particle’s possible superposition somewhere in the surrounding space, causes the apparent “curving” of space. We will conclude that space is not the absence of things; space is a thing in itself, and among its properties is the potential to contain and transmit PDWs. This potential is suggested both by the theories of relativity and by the experimental observations of quantum mechanics. In the presence of massive bodies, particle superposition and the probability of existence in the surrounding space are, to varying degrees, lost, and space appears to curve as a consequence.


2021 ◽  
pp. 2130009
Author(s):  
B. A. Robson

This paper presents a critical historical review of the two main approaches to understanding the nature of the weak nuclear force: the Standard Model and the Generation Model of particle physics. The Standard Model is generally considered incomplete in the sense that it provides little understanding of several empirical observations; the Generation Model was developed to overcome several dubious assumptions made during the development of the Standard Model. This paper indicates that the Generation Model provides a more consistent understanding of the weak nuclear force than the earlier Standard Model.


Author(s):  
E.D. Wolf

Most microelectronic devices and circuits operate faster, consume less power, execute more functions, and cost less per circuit function when the feature sizes internal to the devices and circuits are made smaller. This is part of the stimulus for the Very High Speed Integrated Circuits (VHSIC) program. There is also a need for smaller, more sensitive sensors in a wide range of disciplines, including electrochemistry, neurophysiology, and ultra-high-pressure solid-state research. Fundamental new science (and sometimes new technology) is often revealed (and used) when a basic parameter such as size is extended to new dimensions, as is evident at the two extremes of smallness and largeness: high-energy particle physics and cosmology, respectively. However, there is also a very important intermediate domain of size, spanning from the diameter of a small cluster of atoms up to about one micrometer, that may have effects on society just as profound as those of “big” physics.


Author(s):  
Sterling P. Newberry

At the 1958 meeting of our society, then known as EMSA, the author introduced the concept of microspace and suggested its use to provide adequate information storage space, with electron microscope techniques providing storage and retrieval access. At this current meeting of MSA, he wishes to suggest an additional use of the power of the electron microscope. The author has been contemplating this new use for some time and would have suggested it in the EMSA fiftieth-year commemorative volume but for page limitations. There is compelling reason to put forth this suggestion today because problems have arisen in the “Standard Model” of particle physics, and funds are being greatly reduced just as we need higher-energy machines to resolve these problems. Therefore, any technique that complements or augments what we can accomplish during this austerity period with the machines at hand is worth exploring.


2013 ◽  
Vol 221 (3) ◽  
pp. 190-200 ◽  
Author(s):  
Jörg-Tobias Kuhn ◽  
Thomas Kiefer

Several techniques have been developed in recent years to generate optimal large-scale assessments (LSAs) of student achievement. These techniques often represent a blend of procedures from such diverse fields as experimental design, combinatorial optimization, particle physics, and neural networks. However, despite the theoretical advances in the field, well-documented test designs in which all factors guiding the design decisions are explicitly and clearly communicated remain surprisingly scarce. This paper therefore has two goals. First, a brief summary of relevant key terms, experimental designs, and automated test assembly routines in LSA is given. Second, the conceptual and methodological steps in designing the assessment of the Austrian educational standards in mathematics are described in detail. The test design was generated using a two-step procedure, starting at the item block level and continuing at the item level. Initially, a partially balanced incomplete item block design was generated using simulated annealing; in a second step, items were assigned to the item blocks using mixed-integer linear optimization in combination with a shadow-test approach.
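As a rough illustration of the first of the two steps described above (the simulated-annealing search for a balanced block design), the following Python sketch assigns item blocks to test booklets so that each block is used about equally often and block pairs co-occur as evenly as possible. This is not the authors' implementation: the problem sizes, penalty weights, and cooling schedule are invented for the example, and the second step (mixed-integer assignment of items with a shadow-test approach) is omitted entirely.

```python
# Minimal sketch, assuming illustrative sizes and penalties (not from the paper):
# simulated annealing toward a partially balanced incomplete block design,
# i.e., booklets of item blocks with balanced block usage and pair co-occurrence.
import itertools
import math
import random


def cost(booklets, n_blocks):
    """Imbalance of block usage plus imbalance of pairwise co-occurrence."""
    use = [0] * n_blocks
    pair = {p: 0 for p in itertools.combinations(range(n_blocks), 2)}
    for bk in booklets:
        for b in bk:
            use[b] += 1
        for p in itertools.combinations(sorted(bk), 2):
            pair[p] += 1
    # range penalties: smaller means more balanced
    use_pen = max(use) - min(use)
    pair_pen = max(pair.values()) - min(pair.values())
    return 10 * use_pen + pair_pen


def anneal(n_blocks=12, n_booklets=24, blocks_per_booklet=4,
           t0=5.0, cooling=0.995, steps=20000, seed=0):
    rng = random.Random(seed)
    # start from random booklets (sets of distinct blocks)
    booklets = [rng.sample(range(n_blocks), blocks_per_booklet)
                for _ in range(n_booklets)]
    current = cost(booklets, n_blocks)
    t = t0
    for _ in range(steps):
        # propose: swap one block in one booklet for a block not yet in it
        i = rng.randrange(n_booklets)
        j = rng.randrange(blocks_per_booklet)
        candidates = [b for b in range(n_blocks) if b not in booklets[i]]
        new_block = rng.choice(candidates)
        old_block = booklets[i][j]
        booklets[i][j] = new_block
        proposed = cost(booklets, n_blocks)
        delta = proposed - current
        # always accept improvements; accept worsenings with Boltzmann probability
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current = proposed
        else:
            booklets[i][j] = old_block  # revert the move
        t *= cooling
    return booklets, current


if __name__ == "__main__":
    design, final_cost = anneal()
    print("final imbalance penalty:", final_cost)
    for k, bk in enumerate(design[:5]):
        print(f"booklet {k}: blocks {sorted(bk)}")
```

The single penalty function trades off equal block usage against even pair co-occurrence; in an operational design, the subsequent item-to-block assignment would typically be handed to a mixed-integer linear programming solver rather than to the annealer.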

