normal branch
Recently Published Documents


TOTAL DOCUMENTS: 34 (FIVE YEARS: 13)

H-INDEX: 8 (FIVE YEARS: 0)

2021 ◽  
Vol 2021 (12) ◽  
pp. 011
Author(s):  
Antonio De Felice ◽  
Shinji Mukohyama ◽  
Masroor C. Pookkillath

Abstract The Minimal Theory of Massive Gravity (MTMG) is endowed non-linearly with only two tensor modes in the gravity sector, which acquire a non-zero mass. On a homogeneous and isotropic background the theory is known to possess two branches: the self-accelerating branch, whose cosmological phenomenology exactly matches that of ΛCDM except for the mass of the tensor modes; and the normal branch, which instead shows deviations from General Relativity in both the background and the linear-perturbation dynamics. For the latter branch we use several early- and late-time data sets to constrain today's value of the graviton mass μ_0, finding (μ_0/H_0)^2 = 0.119^{+0.12}_{-0.098} at 68% CL, which in turn gives the upper bound μ_0 < 8.4 × 10^{-34} eV at 95% CL. This corresponds to the strongest bound on the graviton mass for the normal branch of MTMG.
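As a rough consistency check (a minimal sketch, not taken from the paper): assuming a Hubble constant of H_0 ≈ 67.4 km s^-1 Mpc^-1 and an illustrative 95% CL upper limit on (μ_0/H_0)^2, a bound of the quoted order in eV follows from straightforward unit conversion. Both numerical inputs below are assumptions for illustration.

```python
# Hedged arithmetic sketch: convert a dimensionless bound on (mu0/H0)^2
# into a graviton-mass bound in eV. H0 and the 95% CL limit used here are
# illustrative assumptions, not values quoted in the abstract.
hbar_eV_s = 6.582e-16               # reduced Planck constant [eV s]
H0_km_s_Mpc = 67.4                  # assumed Hubble constant [km/s/Mpc]
Mpc_in_km = 3.086e19                # kilometres per megaparsec

H0_inv_s = H0_km_s_Mpc / Mpc_in_km  # H0 in 1/s (~2.2e-18)
H0_eV = hbar_eV_s * H0_inv_s        # H0 expressed as an energy (~1.4e-33 eV)

ratio_sq_95 = 0.34                  # illustrative 95% CL upper limit on (mu0/H0)^2
mu0_eV = ratio_sq_95 ** 0.5 * H0_eV
print(f"mu0 < {mu0_eV:.1e} eV")     # ~8e-34 eV, of the order of the quoted bound
```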


Author(s):  
A D’Aí ◽  
C Pinto ◽  
M Del Santo ◽  
F Pintore ◽  
R Soria ◽  
...  

Abstract Soft ultraluminous X-ray sources (ULXs) are a sub-class of ULXs that can switch from a supersoft spectral state, where most of the luminosity is emitted below 1 keV, to a soft spectral state with significant emission above 1 keV. In a few systems, dips have been observed. The mechanism behind this state transition and the nature of the dips are still debated. To investigate these issues, we obtained a long XMM-Newton monitoring campaign of a member of this class, NGC 247 ULX-1. We computed the hardness-intensity diagram for the whole data set and identified two different branches, the normal branch and the dipping branch, which we study with four and three hardness-intensity resolved spectra, respectively. All seven spectra are well described by two thermal components: a colder (kT_bb ∼ 0.1-0.2 keV) blackbody, interpreted as emission from the photosphere of a radiatively driven wind, and a hotter (kT_disk ∼ 0.6 keV) multicolour disk blackbody, likely due to reprocessing of radiation emitted from the innermost regions. In addition, a complex pattern of emission and absorption lines is taken into account based on previous high-resolution spectroscopic results. We study the evolution of the spectral parameters and flux of the two thermal components along the two branches and discuss two scenarios possibly connecting the state transition and the dipping phenomenon: one based on geometrical occultation of the emitting regions, the other invoking the onset of a propeller effect.


Author(s):  
Myles A Mitchell ◽  
Christian Arnold ◽  
César Hernández-Aguayo ◽  
Baojiu Li

Abstract We study the effects of two popular modified gravity theories, which incorporate very different screening mechanisms, on the angular power spectra of the thermal (tSZ) and kinematic (kSZ) components of the Sunyaev-Zel’dovich effect. Using the first cosmological simulations that simultaneously incorporate both screened modified gravity and a complete galaxy formation model, we find that the tSZ and kSZ power spectra are significantly enhanced by the strengthened gravitational forces in Hu-Sawicki f(R) gravity and the normal-branch Dvali-Gabadadze-Porrati model. Employing a combination of non-radiative and full-physics simulations, we find that the extra baryonic physics present in the latter acts to suppress the tSZ power on angular scales l ≳ 3000 and the kSZ power on all tested scales, and this is found to have a substantial effect on the model differences. Our results indicate that the tSZ and kSZ power can be used as powerful probes of gravity on large scales, using data from current and upcoming surveys, provided sufficient work is conducted to understand the sensitivity of the constraints to baryonic processes that are currently not fully understood.


Author(s):  
A. Ravanpak ◽  
G. F. Fadakar

In this paper, we consider the normal branch of the DGP cosmological model with a quintessence scalar field on the brane as the dark energy component. Using the dynamical system approach, we study the stability properties of the model. We find that [Formula: see text], one of our new dimensionless variables, defined in terms of the quintessence potential, plays a crucial role in the history of the universe. We divide our discussion into two parts: a constant [Formula: see text] and a varying [Formula: see text]. In the case of a constant [Formula: see text] we calculate all the critical points of the model, even those at infinity, and then treat all of them as instantaneous critical points in the varying [Formula: see text] situation, which is the main part of this paper. We find that the effect of the extra dimension in such a model is independent of the value of [Formula: see text]. Then, we consider a Gaussian potential, for which [Formula: see text] is not constant but varies from zero to infinity. We discuss the evolution of the dynamical variables of the model and conclude that their asymptotic behaviour follows the trajectories of the moving critical points. We also find two different possible fates for the universe. In one, it could experience an accelerated expansion, but then enter a decelerating phase and finally reach a stable matter-dominated solution. In the other scenario, the universe could approach the matter-dominated critical point without experiencing any accelerated expansion. We argue that the first scenario is more compatible with observations.


2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
S Mehta ◽  
S Niklitschek ◽  
F Fernandez ◽  
C Villagran ◽  
J Avila ◽  
...  

Abstract

Background: EKG interpretation is slowly transitioning to a physician-free, Artificial Intelligence (AI)-driven endeavor. Our continued efforts to innovate follow a carefully laid stepwise approach: 1) create an AI algorithm that accurately identifies STEMI against non-STEMI using a 12-lead EKG; 2) challenge said algorithm by adding different EKG diagnoses to the previous experiment; and now 3) further validate the accuracy and reliability of our algorithm while also improving performance in prehospital and hospital settings.

Purpose: To provide an accurate, reliable, and cost-effective tool for STEMI detection with the potential to redirect human resources into other clinically relevant tasks and reduce the need for human resources.

Methods: Database: EKG records obtained from the Latin America Telemedicine Infarct Network (Mexico, Colombia, Argentina, and Brazil) from April 2014 to December 2019. Dataset: A total of 11,567 12-lead EKG records of 10-second length with a sampling frequency of 500 Hz, including the following balanced classes: unconfirmed and angiographically confirmed STEMI, branch blocks, non-specific ST-T abnormalities, normal, and abnormal (200+ CPT codes, excluding those included in other classes). The label of each record was manually checked by cardiologists to ensure precision (ground truth). Pre-processing: The first and last 250 samples were discarded as they may contain a standardization pulse. An order-5 digital low-pass filter with a 35 Hz cut-off was applied. For each record, the mean was subtracted from each individual lead. Classification: The determined classes were STEMI (STEMI in different locations of the myocardium: anterior, inferior, and lateral) and Not-STEMI (a combination of randomly sampled normal, branch block, non-specific ST-T abnormality, and abnormal records; 25% of each subclass). Training & Testing: A 1-D Convolutional Neural Network was trained and tested with a dataset proportion of 90/10, respectively. The last dense layer outputs a probability for each record of being STEMI or Not-STEMI. Additional testing was performed with a subset of the original dataset of angiographically confirmed STEMI.

Results: See attached figure. Preliminary STEMI dataset: accuracy 96.4%, sensitivity 95.3%, specificity 97.4%. Confirmed STEMI dataset: accuracy 97.6%, sensitivity 98.1%, specificity 97.2%.

Conclusions: Our results remain consistent with our previous experience. By further increasing the amount and complexity of the data, the performance of the model improves. Future implementations of this technology in clinical settings look promising, not only for performing swift screening and diagnostic steps but also for partaking in complex STEMI management triage.

Funding Acknowledgement: Type of funding source: None
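The pre-processing described in the Methods above (trim 250 samples at each end, order-5 low-pass filter at 35 Hz, per-lead mean subtraction) can be sketched roughly as follows. This is a minimal illustration under assumptions, not the authors' code: the Butterworth filter type, zero-phase filtering, and the (samples × leads) array layout are all assumed.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 500      # sampling frequency [Hz], as stated in the abstract
CUTOFF = 35   # low-pass cut-off [Hz]
ORDER = 5     # filter order
TRIM = 250    # samples discarded at each end (may contain a standardization pulse)

def preprocess_ekg(record: np.ndarray) -> np.ndarray:
    """Pre-process one 12-lead EKG record shaped (n_samples, 12)."""
    trimmed = record[TRIM:-TRIM, :]                    # drop first/last 250 samples
    b, a = butter(ORDER, CUTOFF, btype="low", fs=FS)   # order-5 low-pass at 35 Hz (Butterworth assumed)
    filtered = filtfilt(b, a, trimmed, axis=0)         # filter along the time axis
    return filtered - filtered.mean(axis=0)            # subtract each lead's mean

# Example on a synthetic 10-second, 12-lead record
dummy = np.random.randn(10 * FS, 12)
print(preprocess_ekg(dummy).shape)   # (4500, 12)
```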


2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
S Mehta ◽  
J Avila ◽  
S Niklitschek ◽  
F Fernandez ◽  
C Villagran ◽  
...  

Abstract

Background: As EKG interpretation paradigms shift to a physician-free milieu, accumulating massive quantities of distilled, pre-processed data becomes a must for machine learning techniques. In our pursuit of reducing ischemic times in STEMI management, we have improved our Artificial Intelligence (AI)-guided diagnostic tool by following a three-step approach: 1) increase accuracy by adding larger clusters of data; 2) increase the breadth of EKG classifications to provide more precise feedback and further refine the inputs, which is ultimately reflected in better and more accurate outputs; 3) improve the algorithm's ability to discern between cardiovascular entities reflected in the EKG records.

Purpose: To bolster our algorithm's accuracy and reliability for electrocardiographic STEMI recognition.

Methods: Dataset: A total of 7,286 12-lead EKG records of 10-second length with a sampling frequency of 500 Hz, obtained from the Latin America Telemedicine Infarct Network from April 2014 to December 2019. This included the following balanced classes: angiographically confirmed STEMI, branch blocks, non-specific ST-T abnormalities, normal, and abnormal (200+ CPT codes, excluding those included in other classes). The labels of each record were manually checked by cardiologists to ensure precision (ground truth). Pre-processing: The first and last 250 samples were discarded to avoid a standardization pulse. An order-5 digital low-pass filter with a 35 Hz cut-off was applied. For each record, the mean was subtracted from each individual lead. Classification: The determined classes were "STEMI" and "Not-STEMI" (a combination of randomly sampled normal, branch block, non-specific ST-T abnormality, and abnormal records; 25% of each subclass). Training & Testing: A 1-D Convolutional Neural Network was trained and tested with a dataset proportion of 90/10, respectively. The last dense layer outputs a probability for each record of being STEMI or Not-STEMI. Additional testing was performed with a subset of the original complete dataset of unconfirmed STEMI. Performance indicators (accuracy, sensitivity, and specificity) were calculated for each model and results were compared with our previous findings from past experiments.

Results: Complete STEMI data: accuracy 95.9%, sensitivity 95.7%, specificity 96.5%. Confirmed STEMI: accuracy 98.1%, sensitivity 98.1%, specificity 98.1%. Data obtained in our previous experiments are shown below for comparison.

Conclusion(s): After the addition of clustered pre-processed data, all performance indicators for STEMI detection increased considerably between both confirmed STEMI datasets. On the other hand, the complete STEMI dataset kept a strong and steady set of performance metrics when compared with past results. These findings not only validate the consistency and reliability of our algorithm but also underscore the importance of creating a pristine dataset for this and any other AI-derived medical tool.

Funding Acknowledgement: Type of funding source: None
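For orientation only, a 1-D convolutional classifier of the general kind described (a binary STEMI vs. Not-STEMI probability from a final dense layer) might be sketched as below. The Keras framework, layer sizes, and input shape are assumptions for illustration; the abstract does not specify the architecture.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

N_SAMPLES, N_LEADS = 4500, 12   # assumed pre-processed record shape (samples x leads)

# Minimal 1-D CNN sketch: the final dense layer outputs a STEMI probability.
model = keras.Sequential([
    layers.Input(shape=(N_SAMPLES, N_LEADS)),
    layers.Conv1D(32, kernel_size=15, activation="relu"),
    layers.MaxPooling1D(4),
    layers.Conv1D(64, kernel_size=15, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # P(STEMI) for each record
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# 90/10 train/test split as in the abstract, on random placeholder data
X = np.random.randn(100, N_SAMPLES, N_LEADS).astype("float32")
y = np.random.randint(0, 2, size=100)
split = int(0.9 * len(X))
model.fit(X[:split], y[:split], epochs=1, verbose=0)
print(model.evaluate(X[split:], y[split:], verbose=0))
```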


2020 ◽  
Vol 500 (1) ◽  
pp. 772-785
Author(s):  
G Q Ding ◽  
T T Chen ◽  
J L Qu

ABSTRACT Using all the data of the High Energy X-ray Timing Experiment (HEXTE) on board the Rossi X-ray Timing Explorer for Scorpius X-1 from 1996 February to 2012 January, we systematically search for hard X-ray tails in the 20–220 keV X-ray spectra and, together with the data of the Proportional Counter Array (PCA), investigate the evolution of the detected hard X-ray tails along the Z-track on its hardness-intensity diagram (HID). The hard X-ray tails are detected in 30 observations and their presence is not confined to a specific position on the HID. Our analysis suggests that from the horizontal branch (HB), through the normal branch (NB), to the flaring branch (FB) on the HID, the hard X-ray tail becomes harder and its flux decreases. Jointly fitting the PCA+HEXTE spectra in 3–220 keV, we find that Bulk-Motion Comptonization (BMC) could be an alternative mechanism for producing the hard X-ray tails on the HB and the NB of this source. The temperature of the seed photons for the BMC spans the range ∼2.4–2.6 keV, indicating that the seed photons might come from the surface of the neutron star (NS) or the boundary layer and, therefore, that the BMC process could take place around the NS or in the boundary layer. Some possible mechanisms for producing the hard X-ray tails on the FB are given.


2020 ◽  
Vol 499 (2) ◽  
pp. 2214-2228
Author(s):  
S Malu ◽  
K Sriram ◽  
V K Agrawal

ABSTRACT We performed spectro-temporal analysis in the 0.8–50 keV energy band of the neutron star Z source GX 17+2 using AstroSat Soft X-ray Telescope (SXT) and Large Area X-ray Proportional Counter (LAXPC) data. The source was found to vary in the normal branch (NB) of the hardness–intensity diagram. Cross-correlation studies of LAXPC light curves in the soft and hard X-ray bands unveiled anticorrelated lags of the order of a few hundred seconds. For the first time, cross-correlation studies were performed using SXT soft and LAXPC hard light curves, and they exhibited correlated and anticorrelated lags of the order of a hundred seconds. The power density spectrum displayed normal branch oscillations (NBOs) at 6.7–7.8 Hz (quality factor 1.5–4.0). Spectral modelling resulted in an inner disc radius of ∼12–16 km with Γ ∼ 2.31–2.44, indicating that the disc is close to the innermost stable circular orbit; a similar value of the disc radius was obtained with the reflection model. Different methods were used to constrain the corona size in GX 17+2. Using the detected lags, the corona size was found to be 27–46 km (β = 0.1, where β = v_corona/v_disc) and 138–231 km (β = 0.5). Assuming the X-ray emission to arise from the boundary layer (BL), its size was determined to be 57–71 km. Assuming that the BL is ionizing the disc's inner region, its size was constrained to ∼19–86 km. Using the NBO frequency, the transition-shell radius was found to be around 32 km. The observed lags and the lack of movement of the inner disc front strongly indicate that a varying corona structure is causing the X-ray variation in the NB of the Z source GX 17+2.


2020 ◽  
Vol 498 (4) ◽  
pp. 5299-5316
Author(s):  
D Munshi ◽  
J D McEwen

ABSTRACT We compute the low-ℓ limit of the family of higher order spectra for projected (2D) weak lensing convergence maps. In this limit these spectra are computed to an arbitrary order using tree-level perturbative calculations. We use the flat-sky approximation and Eulerian perturbative results based on a generating function approach. We test these results for the lower order members of this family, i.e. the skew- and kurt-spectra, against state-of-the-art simulated all-sky weak lensing convergence maps and find our results to be in very good agreement. We also show how these spectra can be computed in the presence of a realistic sky-mask and Gaussian noise. We generalize these results to 3D and compute the equal-time higher order spectra. These results will be valuable in analysing higher order statistics at low-ℓ modes from future all-sky weak lensing surveys such as the Euclid survey. As illustrative examples, we compute these statistics in the context of the Horndeski and beyond-Horndeski theories of modified gravity. They will be especially useful in constraining theories such as the Gleyzes–Langlois–Piazza–Vernizzi (GLPV) theories and degenerate higher order scalar-tensor theories, as well as the commonly used normal branch of the Dvali–Gabadadze–Porrati model, clustering quintessence models, and scenarios with massive neutrinos.

