MASS, MATTER, MATERIALIZATION, MATTERGENESIS AND CONSERVATION OF CHARGE

2013 ◽  
Vol 22 (05) ◽  
pp. 1350027
Author(s):  
UNG CHAN TSAN

Conservation of mass in classical physics and in chemistry is considered equivalent to conservation of matter and, together with the other universal conservation laws, is a necessary condition for accounting for observed experiments. Indeed, matter conservation is associated with conservation of building blocks (molecules, atoms, nucleons, quarks and leptons). Matter is massive, but mass and matter are two distinct concepts, even if conservation of mass and conservation of matter represent the same reality in classical physics and chemistry. Conservation of mass is a consequence of conservation of atoms; it is valid in these cases because it is a very good approximation, the variation of mass being tiny and undetectable by weighing. However, nuclear physics and particle physics clearly show that conservation of mass is not a valid expression of conservation of matter. Mass is one form of energy, is a positive quantity, and plays a fundamental role in dynamics, allowing particles to be accelerated. The origin of mass may be linked to the recently discovered Higgs boson. Matter conservation means conservation of baryonic number A and leptonic number L, A and L being algebraic numbers: positive A and L are associated with matter particles, negative A and L with antimatter particles. All known interactions conserve matter and thus cannot generate, from pure energy, a number of matter particles different from the number of antimatter particles. But our universe is material and neutral, and this double message has to be deciphered simultaneously. The asymmetry of our universe demands an interaction that violates matter conservation but obeys all universal conservation laws, in particular conservation of electric charge Q. The expression of Q shows that conservation of (A–L) and of total flavor TF is necessary and sufficient to conserve Q. Conservation of A and L separately is a trivial case of conservation of (A–L) and holds for all known interactions of the standard model. A novel interaction MC conserving (A–L) but violating A and L simultaneously (the non-trivial case) would allow energy to be transformed into a baryon-lepton pair, or an antibaryon-antilepton pair, of opposite charges. This model could explain an asymmetric but nevertheless electrically neutral Universe, though it could not account for the numerical value of the tiny excess of matter over antimatter. The concept of an anti-Universe would be superfluous. Observation of matter-nonconservation processes would be of great interest as a means to falsify this speculation.
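
To make the bookkeeping concrete, the following minimal Python sketch (illustrative only, not from the article) treats Q, A and L as additive quantum numbers and reports which of them a given process conserves. The particle table and the two example processes are assumptions chosen to mirror the abstract's argument.

```python
# Minimal additive-quantum-number bookkeeping: electric charge Q, baryonic
# number A and leptonic number L, as used in the abstract above.
# Particle values and example processes are illustrative assumptions.

PARTICLES = {
    "p":     {"Q": +1, "A": +1, "L": 0},   # proton (matter baryon)
    "n":     {"Q":  0, "A": +1, "L": 0},   # neutron
    "e-":    {"Q": -1, "A":  0, "L": +1},  # electron (matter lepton)
    "nu_e~": {"Q":  0, "A":  0, "L": -1},  # electron antineutrino
}

def totals(state):
    """Sum each additive quantum number over a list of particle names."""
    return {q: sum(PARTICLES[name][q] for name in state) for q in ("Q", "A", "L")}

def check(initial, final):
    """Report which of Q, A, L and (A - L) are conserved by a process."""
    ti, tf = totals(initial), totals(final)
    report = {q: ti[q] == tf[q] for q in ("Q", "A", "L")}
    report["A-L"] = (ti["A"] - ti["L"]) == (tf["A"] - tf["L"])
    return report

# Beta decay (standard-model weak interaction): Q, A and L all conserved.
print(check(["n"], ["p", "e-", "nu_e~"]))
# -> {'Q': True, 'A': True, 'L': True, 'A-L': True}

# Hypothetical MC process: pure energy -> baryon + lepton of opposite charges.
# Q and (A - L) are conserved while A and L are individually violated,
# exactly the non-trivial case the abstract describes.
print(check([], ["p", "e-"]))
# -> {'Q': True, 'A': False, 'L': False, 'A-L': True}
```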

2007 ◽  
Vol 12 (1) ◽  
pp. 27-48 ◽  
Author(s):  
Diego Hurtado de Mendoza ◽  
Ana María Vara

As a historiographical category, 'big science' was elaborated from the point of view of advanced countries. However, some developing countries decided to invest a significant part of their rather modest science budgets in building many-million-dollar facilities. A comparative study of the first stages of the Argentine TANDAR heavy-ion accelerator and the Brazilian National Synchrotron Light Laboratory (LNLS) projects may help in understanding the specific patterns of organisation of big science in peripheral contexts. An oversimplification of the decision-making processes, linked to authoritarian political contexts that made it possible to overcome both the lack of consensus within the physics community and financial uncertainties, seems to have been a necessary condition for TANDAR and the LNLS, and differentiates them from big science in developed countries. Additionally, the different ways in which the nuclear area was institutionalised in Argentina and Brazil seem to have been responsible for the different paths followed by experimental physics between the 1960s and 1980s: nuclear physics in Argentina and particle physics in Brazil.


Universe ◽  
2021 ◽  
Vol 7 (3) ◽  
pp. 72
Author(s):  
Clementina Agodi ◽  
Antonio D. Russo ◽  
Luciano Calabretta ◽  
Grazia D’Agostino ◽  
Francesco Cappuzzello ◽  
...  

The search for neutrinoless double-beta (0νββ) decay is currently a key topic in physics, due to its possible wide implications for nuclear physics, particle physics, and cosmology. The NUMEN project aims to provide experimental information on the nuclear matrix elements (NMEs) that enter the expression of the 0νββ decay half-life by measuring the cross sections of nuclear double-charge exchange (DCE) reactions. NUMEN has already demonstrated the feasibility of measuring these tiny cross sections for some nuclei of interest for 0νββ decay using the superconducting cyclotron (CS) and the MAGNEX spectrometer at the Laboratori Nazionali del Sud (LNS), Catania, Italy. However, since the DCE cross sections are very small and need to be measured with high sensitivity, the systematic exploration of all nuclei of interest requires a major upgrade of the facility. The R&D for the technological tools has been completed. The realization of new radiation-tolerant detectors capable of sustaining high rates while preserving the required resolution and sensitivity is underway, as is the upgrade of the CS to deliver beams of higher intensity. Strategies for carrying out DCE cross-section measurements with high-intensity beams have been developed in order to achieve the challenging sensitivity needed to provide experimental constraints on the 0νββ NMEs.
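
As a reminder of why the NMEs matter, the standard mass-mechanism half-life factorizes as 1/T½ = G⁰ᵛ |M⁰ᵛ|² (m_ββ/m_e)², so the spread in calculated NMEs propagates directly into the Majorana mass inferred from any measured half-life. The short Python sketch below evaluates this relation with purely illustrative inputs (the phase-space factor, NME range and mass value are assumptions, not NUMEN results).

```python
# Illustrative evaluation of the 0nubb half-life formula
#   1 / T_half = G0nu * |M0nu|**2 * (m_bb / m_e)**2
# All numerical inputs below are order-of-magnitude placeholders.

M_E_EV = 0.511e6  # electron mass in eV

def half_life_years(G0nu_per_yr, M0nu, m_bb_eV):
    """0nubb half-life for a given phase-space factor, NME and Majorana mass."""
    return 1.0 / (G0nu_per_yr * M0nu**2 * (m_bb_eV / M_E_EV) ** 2)

# Assumed phase-space factor of order 1e-14 / yr; NME swept over the typical
# factor-of-a-few spread quoted across nuclear-structure models.
for nme in (2.0, 3.5, 5.0):
    t = half_life_years(1e-14, nme, m_bb_eV=0.05)
    print(f"NME = {nme:.1f} -> T1/2 ~ {t:.2e} yr")
```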


Symmetry ◽  
2021 ◽  
Vol 13 (6) ◽  
pp. 1077
Author(s):  
Yarema A. Prykarpatskyy

Dubrovin’s work on the classification of perturbed KdV-type equations is reanalyzed in detail via the gradient-holonomic integrability scheme, devised and developed some time ago jointly with Maxim Pavlov and collaborators. As a consequence of the reanalysis, one can show that Dubrovin’s criterion inherits important properties of the gradient-holonomic scheme, especially the necessary condition of suitably ordered reduction expansions with certain types of polynomial coefficients. In addition, we analyze a special case of a new infinite hierarchy of Riemann-type hydrodynamical systems using the gradient-holonomic approach suggested jointly with M. Pavlov and collaborators. An infinite hierarchy of conservation laws, a bi-Hamiltonian structure and the corresponding Lax-type representation are constructed for these systems.
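
The first members of the KdV conservation-law hierarchy can be checked numerically. The sketch below (an illustration under stated assumptions, not code from the paper) integrates u_t + 6uu_x + u_xxx = 0 on a periodic domain with a standard Fourier pseudo-spectral method and verifies that the mass ∫u dx and momentum ∫u² dx of a one-soliton initial condition are preserved up to time-stepping error.

```python
# Numerical check of the first two KdV conserved quantities on a periodic
# domain, via an integrating-factor pseudo-spectral method with RK4 in time.
import numpy as np

N, Lx = 256, 50.0
x = np.linspace(0.0, Lx, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=Lx / N)   # angular wavenumbers

c = 4.0                                          # soliton speed (assumed)
u0 = 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - 0.3 * Lx)) ** 2

def rhs(v, t):
    """d/dt of v = exp(-i k^3 t) * FFT(u) under u_t = -6 u u_x - u_xxx."""
    u = np.real(np.fft.ifft(np.exp(1j * k**3 * t) * v))
    return -3j * k * np.exp(-1j * k**3 * t) * np.fft.fft(u * u)

v, t, dt = np.fft.fft(u0), 0.0, 1e-4
for _ in range(20000):                           # integrate to t = 2
    s1 = rhs(v, t)
    s2 = rhs(v + 0.5 * dt * s1, t + 0.5 * dt)
    s3 = rhs(v + 0.5 * dt * s2, t + 0.5 * dt)
    s4 = rhs(v + dt * s3, t + dt)
    v += dt * (s1 + 2 * s2 + 2 * s3 + s4) / 6.0
    t += dt

u = np.real(np.fft.ifft(np.exp(1j * k**3 * t) * v))
dx = Lx / N
for name, f0, f1 in (("mass", u0.sum() * dx, u.sum() * dx),
                     ("momentum", (u0**2).sum() * dx, (u**2).sum() * dx)):
    print(f"{name}: initial {f0:.6f}, final {f1:.6f}")
```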


2011 ◽  
Vol 18 (5) ◽  
pp. 563-572 ◽  
Author(s):  
G. Balasis ◽  
C. Papadimitriou ◽  
I. A. Daglis ◽  
A. Anastasiadis ◽  
I. Sandberg ◽  
...  

Abstract. The dynamics of complex systems are founded on universal principles that can be used to describe disparate problems, ranging from particle physics to the economies of societies. A corollary is that transferring ideas and results between investigators in hitherto disparate areas will cross-fertilize both and lead to important new results. In this contribution, we investigate the existence of universal behavior, if any, in solar flares, magnetic storms, earthquakes and pre-seismic electromagnetic (EM) emissions, extending the work recently published by Balasis et al. (2011a). A common characteristic in the dynamics of the above-mentioned phenomena is that their energy release is basically fragmentary, i.e. the associated events are composed of elementary building blocks. By analogy with earthquakes, magnitudes can be appropriately defined for magnetic storms, solar flares and pre-seismic EM emissions. The key question we can then ask in the framework of complexity is whether the magnitude distributions of earthquakes, magnetic storms, solar flares and pre-fracture EM emissions obey the same law. We show that these apparently different extreme events, which occur in the solar-terrestrial system, follow the same energy distribution function, originally derived for earthquake dynamics in the framework of nonextensive Tsallis statistics.
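
The nonextensive building block behind such distributions is the Tsallis q-exponential, which reduces to the ordinary (Boltzmann-Gibbs) exponential as q → 1 and develops a heavy tail for q > 1. The sketch below is a minimal illustration of that functional form; the entropic index and magnitude axis are assumed values, not the fitted results of the studies cited above.

```python
# Minimal sketch of the Tsallis q-exponential, exp_q(x) = [1 + (1-q) x]_+^(1/(1-q)).
import numpy as np

def q_exp(x, q):
    """Tsallis q-exponential; reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = np.maximum(1.0 + (1.0 - q) * x, 0.0)
    with np.errstate(divide="ignore"):
        return np.where(base > 0.0, base ** (1.0 / (1.0 - q)), 0.0)

# Normalized-magnitude axis; q ~ 1.8 is of the order reported in this line
# of work for earthquakes and magnetic storms (used here only as an example).
m = np.linspace(0.0, 6.0, 7)
for q in (1.0, 1.8):
    print(q, np.round(q_exp(-m, q), 4))   # heavy tail for q > 1
```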


Author(s):  
Alexandros Ioannidis-Pantopikos ◽  
Donat Agosti

In the landscape of general-purpose repositories, Zenodo was built at the data center of the European Laboratory for Particle Physics (CERN) to facilitate the sharing and preservation of the long tail of research across all disciplines and scientific domains. Despite Zenodo’s long tradition of making research artifacts FAIR (Findable, Accessible, Interoperable, and Reusable), challenges remain in applying these principles effectively when serving the needs of specific research domains. Plazi’s biodiversity taxonomic literature processing pipeline liberates data from publications, making it FAIR via extensive metadata, the minting of a DataCite Digital Object Identifier (DOI), a licence, and both human- and machine-readable output provided by Zenodo, accessible via the Biodiversity Literature Repository community on Zenodo. The deposits (e.g., taxonomic treatments, figures) are an example of how local networks of information can be formally linked to explicit resources in the broader context of other platforms such as GBIF (Global Biodiversity Information Facility). In the context of biodiversity taxonomic literature data workflows, a general-purpose repository’s traditional submission approach is not enough to preserve rich metadata and to capture highly interlinked objects such as taxonomic treatments and digital specimens. As a prerequisite for serving these use cases and ensuring that the artifacts remain FAIR, Zenodo introduced the concept of custom metadata, which allows submissions such as figures or taxonomic treatments (see, as an example, the treatment of Eurygyrus peloponnesius) to be enhanced with custom keywords based on terms from common biodiversity vocabularies like Darwin Core and Audubon Core, each with an explicit link to the respective vocabulary term. The aforementioned pipelines and features are designed to be served first and foremost through public Representational State Transfer Application Programming Interfaces (REST APIs) and open web technologies like webhooks. This approach allows researchers and platforms to integrate existing and new automated workflows into Zenodo and thus empowers research communities to create self-sustained, cross-platform ecosystems. The BiCIKL project (Biodiversity Community Integrated Knowledge Library) exemplifies how repositories and tools can become building blocks for a broader adoption of the FAIR principles. Starting with the above literature processing pipeline, we will explain the concepts behind the resulting FAIR data, with a focus on the custom metadata used to enhance the deposits.
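
A minimal sketch of the kind of API-first workflow described above follows, using Zenodo's REST deposit API via the Python requests library. The access token is a placeholder, and the exact shape of the custom-metadata block (a "custom" object keyed by Darwin Core terms) is an assumption for illustration; consult the live Zenodo API documentation before relying on any field name used here.

```python
# Sketch: create a draft Zenodo deposit and attach metadata, including
# hypothetical Darwin Core custom keywords. Token and custom-field names
# are illustrative assumptions, not verified API contracts.
import requests

ZENODO = "https://zenodo.org/api"
TOKEN = "YOUR-ACCESS-TOKEN"  # placeholder

# 1. Create an empty deposit (draft).
r = requests.post(f"{ZENODO}/deposit/depositions",
                  params={"access_token": TOKEN}, json={})
r.raise_for_status()
deposition = r.json()

# 2. Attach descriptive metadata, with domain keywords drawn from Darwin Core
#    expressed via an assumed "custom" block.
metadata = {
    "metadata": {
        "title": "Taxonomic treatment of Eurygyrus peloponnesius",
        "upload_type": "publication",
        "publication_type": "taxonomictreatment",
        "description": "Treatment extracted by the Plazi pipeline.",
        "creators": [{"name": "Example, Author"}],  # hypothetical
        "custom": {
            "dwc:genus": ["Eurygyrus"],
            "dwc:specificEpithet": ["peloponnesius"],
        },
    }
}
r = requests.put(f"{ZENODO}/deposit/depositions/{deposition['id']}",
                 params={"access_token": TOKEN}, json=metadata)
r.raise_for_status()
print("Draft deposit updated:", r.json()["links"]["html"])
```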


Although the discussion today is intended to be technical, it may be useful to start with a few words about the nature of the problem. What is parity and what is its violation? Basically, this means going back to the principle that the laws of physics are indistinguishable if one changes from a right-handed to a left-handed co-ordinate system, or in other words, that any physical apparatus observed in a mirror will appear to obey the same laws of physics as the original. We are well aware of the fact that this symmetry holds wherever we have checked it throughout classical physics; it holds throughout atomic physics, and it seems to hold to a very good degree of accuracy in nuclear physics as well. In physics we always proceed by attempting generalizations, being prepared to give these up when evidence is in contradiction with them; and therefore in general when we see a symmetry hold good to a very high degree of accuracy, we are inclined to assume that it will hold absolutely.


2012 ◽  
Vol 2012 ◽  
pp. 1-38 ◽  
Author(s):  
Andrea Giuliani ◽  
Alfredo Poves

This paper introduces neutrinoless double-beta decay (the rarest nuclear weak process) and describes the status of the research on this transition, both from the point of view of theoretical nuclear physics and in terms of the present and future experimental scenarios. Implications of this phenomenon for crucial aspects of particle physics are briefly discussed. The calculations of the nuclear matrix elements for the mass mechanism are reviewed, and a range for these quantities is proposed for the most appealing candidates. After introducing general experimental concepts, such as the choice of the best candidates, the different proposed technological approaches, and the sensitivity, we survey the experimental situation. Searches that are running or in preparation are described in an organic presentation that brings out their similarities and differences. A critical comparison of the adopted technologies and of their physics reach (in terms of sensitivity to the effective Majorana neutrino mass) is performed. As a conclusion, we try to envisage what we expect around the corner and on a longer time scale.
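
The notion of sensitivity mentioned above has a simple scaling behind it: in the background-limited regime the half-life reach grows only as the square root of exposure over (background index × energy resolution), so the effective-mass reach improves as the fourth root of the exposure, while a background-free experiment gains as the square root. The sketch below illustrates this scaling with arbitrary units and assumed inputs (constants are dropped; these are not the numbers of any specific experiment).

```python
# Illustrative sensitivity scaling for 0nubb experiments (arbitrary units).

def t_half_reach(exposure_kg_yr, bkg_per_keV_kg_yr, delta_e_keV,
                 background_free=False):
    """Relative half-life sensitivity; overall constants dropped."""
    if background_free:
        return exposure_kg_yr                     # scales linearly with M*t
    return (exposure_kg_yr / (bkg_per_keV_kg_yr * delta_e_keV)) ** 0.5

def m_bb_reach(t_half):
    """Effective-mass reach scales as the inverse square root of T1/2."""
    return t_half ** -0.5

for exposure in (100.0, 1000.0, 10000.0):         # kg yr (assumed values)
    t_bkg = t_half_reach(exposure, 1e-3, 5.0)     # assumed b and resolution
    t_free = t_half_reach(exposure, 0.0, 5.0, background_free=True)
    print(f"{exposure:8.0f} kg yr: m_bb (bkg-limited) {m_bb_reach(t_bkg):.4f}, "
          f"(bkg-free) {m_bb_reach(t_free):.4f}")
```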


Physics ◽  
2019 ◽  
Vol 1 (3) ◽  
pp. 375-391 ◽  
Author(s):  
Robin Smith ◽  
Jack Bishop

We present an open-source kinematic fitting routine designed for low-energy nuclear physics applications. Although kinematic fitting is commonly used in high-energy particle physics, it is rarely used in low-energy nuclear physics, despite its effectiveness. FORTRAN and ROOT C++ versions of the FUNKI_FIT kinematic fitting code have been developed and published open access. The FUNKI_FIT code is universal in the sense that the constraint equations can easily be modified to suit different experimental set-ups and reactions. Two case studies for the use of this code, utilising experimental and Monte Carlo data, are presented: (1) charged-particle spectroscopy using silicon-strip detectors; (2) charged-particle spectroscopy using active-target detectors. The kinematic fitting routine provides an improvement in resolution in both cases, demonstrating, for the first time, the applicability of kinematic fitting across a range of nuclear physics applications. The ROOT macro has been developed so that this technique can easily be applied in the standard data analysis routines used by the nuclear physics community.
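
At the heart of any kinematic fit is a constrained least-squares step: measured quantities y with covariance V are pulled onto the constraint surface f(η) = 0 via Lagrange multipliers. The Python sketch below (a generic illustration, not the FUNKI_FIT implementation) solves the single linear constraint that three measured energies must sum to a known total, an illustrative stand-in for the reaction-specific constraint equations mentioned in the abstract.

```python
# Generic constrained least-squares (kinematic fitting) step:
# minimize (y - eta)^T V^-1 (y - eta) subject to a linear constraint on eta.
import numpy as np

def kinematic_fit(y, V, D, f_y):
    """One linearized fit step for constraint f(eta) = D @ eta - target.

    f_y = f(y) evaluated at the measurements. For a linear constraint the
    solution is exact in one step; nonlinear constraints would iterate.
    """
    S = D @ V @ D.T                      # covariance of the constraint
    lam = np.linalg.solve(S, f_y)        # Lagrange multipliers
    eta = y - V @ D.T @ lam              # fitted (adjusted) measurements
    chi2 = float(f_y @ lam)              # chi-squared of the fit
    return eta, chi2

# Assumed example: measured energies (MeV) with independent resolutions,
# constrained so that their sum equals a known 30 MeV total.
y = np.array([10.4, 9.1, 11.2])
V = np.diag([0.5**2, 0.3**2, 0.4**2])
D = np.array([[1.0, 1.0, 1.0]])
f_y = D @ y - 30.0

eta, chi2 = kinematic_fit(y, V, D, f_y)
print("fitted:", eta, "sum:", eta.sum(), "chi2:", chi2)
```

Note how the adjustment is shared among the measurements in proportion to their variances, which is the mechanism behind the resolution improvement reported in the abstract.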

