Addition of Unsaturated Monomers to Rubber and Similar Polymers

1963 ◽  
Vol 36 (1) ◽  
pp. 282-295 ◽  
Author(s):  
C. Pinazzi ◽  
J-C. Danjard ◽  
R. Pautrat

Abstract The addition of different ethylenic monomers to polyisoprene was studied in order to shed light on the reaction mechanisms and on the structures of the reagents. During the planned transformations, an attempt was made to preserve as far as possible the shapes and sizes of the initial macromolecules, on which the property of high elasticity rests. For this purpose, it was decided to avoid grafting reactions, as well as any reactions affecting the nature of the polyisoprene chain, e.g., scission, cross-linking, and cyclization. Following the completion of work carried out by one of the authors on the addition of maleic anhydride, the existence of two mechanisms was brought to light: one is of a radical type, produced by adding an unsaturated reagent to a methylene group adjacent to a chain double bond; the other is of a thermal nature, triggered not by a catalyst but by a rather high temperature. It is clear that the latter process involves isomerization of some of the chain double bonds. The model to which maleic anhydride belongs was deduced by examining the reaction aptitudes of a series of monomers. Most highly polymerizable monomers, with the exception of acrylonitrile, were eliminated a priori, in order to avoid both homopolymerization reactions and grafting. Monomers in which the double bonds are depleted in π electrons are more apt to give the desired reactions. The most favorable effect is obtained with a carbonyl group in the α position (maleic anhydride and γ-crotonolactone). Other factors were also taken into account. The work reviewed here enabled us to assess how the reactions evolve according to the mechanisms considered and produce new macromolecular materials. The resulting compounds retain high rubberlike elasticity and show high chemical reactivity, due to their anhydride or lactone side groups.

2021 ◽  
Vol 2021 (4) ◽  
Author(s):  
Luke Corcoran ◽  
Florian Loebbert ◽  
Julian Miczajka ◽  
Matthias Staudacher

Abstract We extend the recently developed Yangian bootstrap for Feynman integrals to Minkowski space, focusing on the case of the one-loop box integral. The space of Yangian invariants is spanned by the Bloch-Wigner function and its discontinuities. Using only input from symmetries, we constrain the functional form of the box integral in all 64 kinematic regions up to twelve (out of a priori 256) undetermined constants. These need to be fixed by other means. We do this explicitly, employing two alternative methods. This results in a novel compact formula for the box integral valid in all kinematic regions of Minkowski space.
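For reference, the Bloch-Wigner function mentioned above is the standard single-valued completion of the dilogarithm; in the usual conventions (the paper's normalization may differ) it reads

    D(z) = \operatorname{Im}\big(\operatorname{Li}_2(z)\big) + \arg(1-z)\,\log|z| ,

which is real-analytic on the complex plane away from z = 0 and z = 1.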


2020 ◽  
Vol 2020 (8) ◽  
Author(s):  
I. L. Buchbinder ◽  
E. A. Ivanov ◽  
B. S. Merzlikin ◽  
K. V. Stepanyantz

Abstract We apply the harmonic superspace approach to calculate the divergent part of the one-loop effective action of renormalizable 6D, $$ \mathcal{N} $$ = (1, 0) supersymmetric higher-derivative gauge theory with a dimensionless coupling constant. Our consideration uses the background superfield method, which allows the analysis of the effective action to be carried out in a manifestly gauge-covariant and $$ \mathcal{N} $$ = (1, 0) supersymmetric way. We employ regularization by dimensional reduction, in which the divergences are absorbed into a renormalization of the coupling constant. Having the expression for the one-loop divergences, we calculate the relevant β-function. Its sign is determined by the overall sign of the classical action, which in higher-derivative theories is not fixed a priori. The result agrees with earlier calculations in the component approach. The superfield calculation is simpler and opens possibilities for various generalizations.


Author(s):  
Srinath Satyanarayana ◽  
Daniel T. McCormick ◽  
Arun Majumdar

In recent years several surface stress sensors based on microcantilevers have been developed for biosensing [1–4]. Since these sensors are made using standard microfabrication processes, they can easily be produced in an array format, making them suitable for high-throughput multiplexed analysis. Specific reactions occurring on one surface of the sensor element (enabled by selective modification of that surface a priori) change the surface stress, which in turn causes the sensor to deflect. The magnitude and rate of deflection are then used to study the reaction. The microcantilevers in these sensors are usually fabricated from materials such as silicon and its oxides or nitrides. The high elastic modulus of these materials places limitations on the sensitivity and on the sensor geometry. Polymers, in contrast, have a much lower elastic modulus than silicon and its derivatives and therefore offer greater design flexibility, i.e., they allow the exploration of innovative sensor configurations that can have higher sensitivity and at the same time are suitable for integration with microfluidics and electrical detection systems.
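As a rough illustration of why a lower elastic modulus helps, the sketch below estimates tip deflection from a differential surface stress using a Stoney-type relation. The relation and the material numbers (nitride-like vs. polymer-like moduli, the 5 mN/m stress change) are assumptions for illustration and are not taken from the text above.

    # Minimal sketch: Stoney-type estimate of cantilever tip deflection from a
    # differential surface stress. All numbers below are illustrative assumptions.
    def tip_deflection(surface_stress, length, thickness, youngs_modulus, poisson=0.25):
        """Tip deflection (m) for a differential surface stress (N/m) on a rectangular cantilever."""
        return 3.0 * surface_stress * (1.0 - poisson) * length**2 / (youngs_modulus * thickness**2)

    L, t, stress = 200e-6, 1e-6, 5e-3            # 200 um long, 1 um thick, 5 mN/m stress change
    print(tip_deflection(stress, L, t, 300e9))   # ~1.5 nm for a stiff nitride-like beam (E ~ 300 GPa)
    print(tip_deflection(stress, L, t, 3e9))     # ~150 nm for a compliant polymer-like beam (E ~ 3 GPa)

For the same geometry and surface stress, the hundredfold lower modulus translates directly into a hundredfold larger deflection, which is the design freedom the passage refers to.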


Author(s):  
Robert Audi

Abstract Kant influentially distinguished analytic from synthetic a priori propositions, and he took certain propositions in the latter category to be of immense philosophical importance. His distinction between the analytic and the synthetic has been accepted by many and attacked by others; but despite its importance, a number of discussions of it since at least W. V. Quine’s have paid insufficient attention to some of the passages in which Kant draws the distinction. This paper seeks to clarify what appear to be three distinct conceptions of the analytic (and implicitly of the synthetic) that are presented in Kant’s Critique of Pure Reason and in some other Kantian texts. The conceptions are important in themselves, and their differences are significant even if they are extensionally equivalent. The paper is also aimed at showing how the proposed understanding of these conceptions—and especially the one that has received insufficient attention from philosophers—may bear on how we should conceive the synthetic a priori, in and beyond Kant’s own writings.


Author(s):  
Chengguang Zhu ◽  
Zhongpai Gao ◽  
Jiankang Zhao ◽  
Haihui Long ◽  
Chuanqi Liu

Abstract The relative pose estimation of a space noncooperative target is an attractive yet challenging task due to the complexity of the target background and illumination and the lack of a priori knowledge. Unfortunately, these negative factors severely affect the estimation accuracy and the robustness of filter algorithms. In response, this paper proposes a novel filter algorithm, based on a stereovision system, to estimate the relative pose with improved robustness. First, to obtain a coarse relative pose, the weighted total least squares (WTLS) algorithm is adopted to estimate the relative pose from several feature points. The resulting relative pose is fed into the subsequent filter scheme as the observation. Second, the classic Bayes filter is exploited to estimate the relative state, except for the moment-of-inertia ratios. Additionally, the one-step prediction results are fed back to initialize the WTLS. The proposed algorithm thus eliminates the dependency on continuous tracking of several fixed points. Finally, comparison experiments demonstrate that the proposed algorithm achieves better robustness and a shorter convergence time.
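A minimal sketch of how such a pipeline can be organized is given below. It assumes a constant-velocity model on a 6-component pose vector and uses a plain linear Kalman filter in place of the paper's Bayes filter; wtls_coarse_pose() is a hypothetical stand-in for the stereo WTLS step and simply returns a noisy pose observation.

    # Minimal sketch: a coarse pose from a (stand-in) WTLS step feeds a Kalman-style
    # filter; the one-step prediction is available to re-initialize the next WTLS fit.
    import numpy as np

    def wtls_coarse_pose(true_pose, rng, noise=0.05):
        # Hypothetical stand-in for the stereo WTLS estimate over feature points.
        return true_pose + rng.normal(0.0, noise, size=true_pose.shape)

    def run_filter(n_steps=100, dt=0.1, seed=0):
        rng = np.random.default_rng(seed)
        F = np.eye(12)
        F[:6, 6:] = dt * np.eye(6)                     # constant-velocity transition
        H = np.hstack([np.eye(6), np.zeros((6, 6))])   # WTLS observes the pose only
        Q = 1e-4 * np.eye(12)                          # process noise (assumed)
        R = (0.05 ** 2) * np.eye(6)                    # WTLS observation noise (assumed)
        x, P = np.zeros(12), np.eye(12)
        true_pose, true_rate = np.zeros(6), 0.01 * np.ones(6)
        for _ in range(n_steps):
            true_pose = true_pose + dt * true_rate
            x, P = F @ x, F @ P @ F.T + Q              # predict; x[:6] could seed the next WTLS fit
            z = wtls_coarse_pose(true_pose, rng)       # coarse pose used as the observation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)                    # update
            P = (np.eye(12) - K @ H) @ P
        return np.linalg.norm(x[:6] - true_pose)

    print(f"final pose error: {run_filter():.4f}")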


2019 ◽  
Author(s):  
Jennifer M Rodd

This chapter focuses on the process by which stored knowledge about a word’s form (orthographic or phonological) maps onto stored knowledge about its meaning. This mapping is made challenging by the ambiguity that is ubiquitous in natural language: most familiar words can refer to multiple different concepts. This one-to-many mapping from form to meaning within the lexicon is a core feature of word-meaning access. Fluent, accurate word-meaning access requires that comprehenders integrate multiple cues in order to determine which of a word’s possible semantic features are relevant in the current context. Specifically, word-meaning access is guided by (i) distributional information about the a priori relative likelihoods of different word meanings and (ii) a wide range of contextual cues that indicate which meanings are most likely in the current context.
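One simple way to make the combination of (i) and (ii) concrete is a naive Bayesian weighting of a meaning's prior probability by how well it fits the current context. The two-meaning example and the numbers below are invented for illustration; the chapter does not commit to this particular model.

    # Minimal sketch (illustrative numbers): combining a distributional prior over the
    # meanings of "bank" with a contextual cue such as the word "fishing" nearby.
    prior = {"financial_institution": 0.7, "river_edge": 0.3}     # (i) a priori meaning frequencies
    cue_fit = {"financial_institution": 0.1, "river_edge": 0.8}   # (ii) fit to the current context
    unnorm = {m: prior[m] * cue_fit[m] for m in prior}
    total = sum(unnorm.values())
    posterior = {m: round(p / total, 2) for m, p in unnorm.items()}
    print(posterior)   # {'financial_institution': 0.23, 'river_edge': 0.77}

Even with a lower prior, the contextually supported meaning dominates once the cue is taken into account.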


2018 ◽  
pp. 303-313
Author(s):  
Christopher P. Guzelian

Two years ago, Bob Mulligan and I empirically tested whether the Bank of Amsterdam, a prototypical central bank, had caused a boom-bust cycle in the Amsterdam commodities markets in the 1780s owing to the bank's sudden initiation of low-fractional-reserve banking (Guzelian & Mulligan 2015). Widespread criticism came quickly after we presented our data findings at that year's Austrian Economic Research Conference. Walter Block representatively responded: «as an Austrian, I maintain you cannot 'test' apodictic theories, you can only illustrate them». Non-Austrian, so-called «empirical» economists typically have no problem with data-driven, inductive research. But Austrians have always objected strenuously, on ontological and epistemological grounds, that such studies do not produce real knowledge (Mises 1998, 113-115; Mises 2007). Camps of economists are talking past each other in their respective uses of the words «testing» and «economic theory». There is a vital distinction between testing (1) an economic proposition, praxeologically derived, and (2) the relevance of an economic proposition, praxeologically derived. The former is nonsensical; the latter may be necessary to acquire economic theory and knowledge. Clearing up this confusion is this note's goal. Rothbard (1951) represents praxeology as the indispensable method for gaining economic knowledge. Starting with an Aristotelian/Misesian axiom «humans act» or a Hayekian axiom «humans think», a voluminous collection of logico-deductive economic propositions («theorems») follows, including theorems as sophisticated and perhaps unintuitive as the one Mulligan and I examined: low-fractional-reserve banking causes economic cycles. There is an ontological and epistemological analogy between Austrian praxeology and mathematics. Much like praxeology, we «know» mathematics to be «true» because it is axiomatic and deductive. Starting with the Peano axioms, mathematicians are able, by a long process of creative deduction, to establish the real number system, or to prove that for the equation a^n + b^n = c^n there are no positive integers a, b, c that satisfy the equation for any integer value of n greater than 2 (Fermat's Last Theorem). But what do mathematicians mean when they then say they have mathematical knowledge, or that they have proven something «true»? Is there an infinite set of rational numbers floating somewhere in the physical universe? Naturally not. Mathematicians mean that they have discovered an apodictic truth: something unchangeably true without reference to physical reality, because that truth is a priori.


Author(s):  
José Carlos Bermejo

Journals are basically the only channel through which scientists can make the results of their research known to their colleagues. Scientific journals select the information they publish and guarantee its quality by means of a double-blind procedure of censorship by peers. If on the one hand this procedure seems logical as a method for including a study within a consolidated scientific field, it is also true that it can function as a mechanism of censorship. The idea that works not included in a standard publication a priori lack practically any value is the basis of the academic career. Starting from this principle, a hierarchical system of scientific ranking has been built among researchers. The basis of this scientific curriculum is the metric of vanity.
Key words: Scientific journals, curriculum, censorship.


Author(s):  
Сергей Александрович Лебедев ◽  
Сергей Николаевич Коськов

The article examines the content of two basic conceptions of non-classical philosophy and methodology of science: the conventionalist and the consensual theories of the nature of scientific knowledge and scientific truth. Each of them is an alternative to the two main paradigms of classical philosophy and methodology of science: empiricism (positivism) and rationalism. From the point of view of conventionalism, scientific knowledge is neither a description of pure experience nor a generalization of it. But neither is it the result of some a priori intuition and pure reason. According to conventionalism, scientific knowledge is a system of evidence-based information, the initial principles of which have the character of conditional, conventional truths. It follows that any truth in science is not categorical but conditional and has the form «if, then». The consensual conception of the nature of scientific knowledge emerged in the philosophy of science of the second half of the twentieth century. It was, on the one hand, a generalization of conventionalism and, on the other, a negation of it. If in conventionalism the main subject of scientific knowledge is the individual scientist, then in consensual epistemology that subject is a social one: the scientific community. Scientific knowledge has a fundamentally collective character, both in terms of its acquisition, owing to the division of scientific labor, and in terms of its legitimation and evaluation. The latter operations are always the result of a consensus of the scientific community.


2012 ◽  
Vol 5 (4) ◽  
pp. 831-841 ◽  
Author(s):  
B. Funke ◽  
T. von Clarmann

Abstract. Calculation of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, is prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal-to-noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb, we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal-to-noise ratios, while for small local natural variability logarithmic averaging is often superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case, because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly because in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found for which kind of averaging is superior; instead of suggesting simple recipes, we cannot do much more than create awareness of the traps associated with averaging mixing ratios obtained from logarithmic retrievals.
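The core effect can be reproduced in a few lines: under multiplicative (lognormal) retrieval noise, averaging the retrieved abundances and averaging their logarithms give systematically different means. The noise model and the numbers below are an illustrative assumption, not the paper's system simulator.

    # Minimal sketch: linear vs. logarithmic averaging under assumed lognormal noise.
    import numpy as np

    rng = np.random.default_rng(1)
    true_vmr = 1.0                    # true (constant) mixing ratio
    sigma = 0.5                       # retrieval noise in log space (assumed)
    retrieved = np.exp(np.log(true_vmr) + rng.normal(0.0, sigma, 10000))

    linear_mean = retrieved.mean()                 # average of abundances
    log_mean = np.exp(np.log(retrieved).mean())    # exponentiated average of logarithms

    print(f"linear mean: {linear_mean:.3f}")       # biased high, roughly exp(sigma**2 / 2) ~ 1.13
    print(f"log mean:    {log_mean:.3f}")          # close to 1.0 under this noise model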

