Accelerating ATM Simulations Using Dynamic Component Substitution (DCS)

SIMULATION
2006
Vol 82 (4)
pp. 235-253
Author(s): Dhananjai M. Rao, Philip A. Wilsey


2021
Vol 22 (1)
Author(s): Robin-Lee Troskie, Yohaann Jafrani, Tim R. Mercer, Adam D. Ewing, Geoffrey J. Faulkner, ...

Abstract: Pseudogenes are gene copies presumed to be mainly functionless relics of evolution due to acquired deleterious mutations or transcriptional silencing. Using deep full-length PacBio cDNA sequencing of normal human tissues and cancer cell lines, we identify here hundreds of novel transcribed pseudogenes expressed in tissue-specific patterns. Some pseudogene transcripts have intact open reading frames and are translated in cultured cells, representing unannotated protein-coding genes. To assess the biological impact of noncoding pseudogenes, we use CRISPR-Cas9 to delete the nucleus-enriched pseudogene PDCL3P4 and observe hundreds of perturbed genes. This study highlights pseudogenes as a complex and dynamic component of the human transcriptional landscape.


2021
Vol 217 (2)
Author(s): Alexander G. Hayes, P. Corlies, C. Tate, M. Barrington, J. F. Bell, ...

Abstract: The NASA Perseverance rover Mast Camera Zoom (Mastcam-Z) system is a pair of zoomable, focusable, multispectral, color charge-coupled device (CCD) cameras mounted on top of a 1.7 m Remote Sensing Mast, along with associated electronics and two calibration targets. The cameras contain identical optical assemblies whose focal length ranges from 26 mm (25.5° × 19.1° FOV) to 110 mm (6.2° × 4.2° FOV) and will acquire data at pixel scales of 148-540 μm at a range of 2 m and 7.4-27 cm at 1 km. The cameras are mounted on the rover's mast with a stereo baseline of 24.3 ± 0.1 cm and a toe-in angle of 1.17 ± 0.03° (per camera). Each camera uses a Kodak KAI-2020 CCD with 1600 × 1200 active pixels and an 8-position filter wheel that contains an IR-cutoff filter for color imaging through the detectors' Bayer-pattern filters, a neutral density (ND) solar filter for imaging the Sun, and 6 narrow-band geology filters (16 filters in total). An associated Digital Electronics Assembly provides command and data interfaces to the rover, 11-to-8 bit companding, and JPEG compression capabilities. Herein, we describe pre-flight calibration of the Mastcam-Z instrument and characterize its radiometric and geometric behavior. Between April 26 and May 9, 2019, ∼45,000 images were acquired during stand-alone calibration at Malin Space Science Systems (MSSS) in San Diego, CA. Additional data were acquired during Assembly, Test, and Launch Operations (ATLO) at the Jet Propulsion Laboratory and the Kennedy Space Center. Results of the radiometric calibration validate a 5% absolute radiometric accuracy when using camera state parameters investigated during testing. When observing with camera state parameters not interrogated during calibration (e.g., non-canonical zoom positions), we conservatively estimate the absolute uncertainty to be <10%. Image quality, measured via the amplitude of the Modulation Transfer Function (MTF) at Nyquist sampling (0.35 line pairs per pixel), shows MTF_Nyquist = 0.26-0.50 across all zoom, focus, and filter positions, exceeding the >0.2 design requirement. We discuss lessons learned from calibration and suggest tactical strategies that will optimize the quality of science data acquired during operations at Mars. While most results matched expectations, some surprises were discovered, such as a strong wavelength and temperature dependence of the radiometric coefficients and a scene-dependent dynamic component in the zero-exposure bias frames. Calibration results and derived accuracies were validated using a Geoboard target consisting of well-characterized geologic samples.
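The quoted pixel scales follow from small-angle geometry: one detector pixel subtends roughly (pixel pitch / focal length) radians. A minimal sketch, assuming the KAI-2020's 7.4 μm pixel pitch (not stated in the abstract); it reproduces the quoted 148-540 μm at 2 m and 7.4-27 cm at 1 km ranges only approximately, since the exact optical prescription differs slightly from the nominal 26-110 mm focal lengths:

```python
# Small-angle pixel-scale estimate: one pixel subtends
# (pixel_pitch / focal_length) radians, so at a given range it
# covers about pixel_pitch * range / focal_length on the target.

PIXEL_PITCH_M = 7.4e-6  # assumed KAI-2020 pixel pitch (7.4 um)

def pixel_scale(focal_length_m: float, range_m: float) -> float:
    """Approximate footprint (in meters) of one pixel at the given range."""
    return PIXEL_PITCH_M * range_m / focal_length_m

if __name__ == "__main__":
    for f_mm in (26, 110):
        for r_m in (2, 1000):
            s = pixel_scale(f_mm / 1000, r_m)
            print(f"f = {f_mm:3d} mm, range = {r_m:4d} m -> {s * 1e6:9.0f} um")
```

At f = 110 mm and 2 m this gives ~135 μm and at f = 26 mm and 1 km ~28 cm, in the same ballpark as the abstract's figures.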


Open Theology
2020
Vol 6 (1)
pp. 547-556
Author(s): Martin Nitsche

Abstract: This study focuses on various phenomenological conceptions of the invisible in order to consider to what extent, and in what way, they involve moments of hiddenness. The relationship among phenomenality, invisibility, and hiddenness is examined in the works of Husserl, Heidegger, Henry, and Merleau-Ponty. The study explains why phenomenologists prefer speaking about the invisible over a discourse of the hidden. It shows that the phenomenological method does not present invisibility as a limit of experience but rather as a dynamic component of the relational nature of any experience, including religious experience. Special attention is paid to topological moments of the relationship between the visible and the invisible.


2021
Vol 10 (6)
pp. 227
Author(s): Yago Martín, Zhenlong Li, Yue Ge, Xiao Huang

The study of migrations and mobility has historically been severely limited by the absence of reliable data or by the temporal sparsity of the data that are available. Geospatial digital trace data allow population movements to be measured far more precisely and dynamically. Our research seeks to develop a near real-time (one-day lag) Twitter census that gives a more temporally granular picture of local and non-local population at the county level. Internal validation reveals over 80% accuracy when compared with users' self-reported home locations. External validation suggests these stocks correlate with available statistics on residents/non-residents at the county level and can accurately reflect both regular events (seasonal tourism) and non-regular events such as the Great American Solar Eclipse of 2017. The findings demonstrate that Twitter holds the potential to introduce the dynamic component often lacking in population estimates. This study could benefit fields such as demography, tourism, emergency management, and public health, and create new opportunities for large-scale mobility analyses.


Vision
2021
Vol 5 (2)
pp. 17
Author(s): Maria Elisa Della-Torre, Daniele Zavagno, Rossana Actis-Grosso

E-motions are defined as those affective states whose expressions, conveyed either by static faces or by body posture, embody a dynamic component and consequently convey a higher sense of dynamicity than other emotional expressions. An experiment is presented that tests whether e-motions are perceived as such by individuals with autism spectrum disorders (ASDs), a condition associated with impairments in emotion recognition and in motion perception. To this aim, we replicated with ASD individuals a study originally conducted with typically developed individuals (TDs), showing both ASD and TD participants 14 bodiless heads and 14 headless bodies taken from eleven static artworks and four drawings. The experiment was divided into two sessions. In Session 1, participants were asked to freely associate each stimulus with an emotion or an affective state (Task 1, option A); if they were unable to find a specific emotion, the experimenter showed them a list of eight possible emotions (words) and asked them to choose the one that best described the affective state portrayed in the image (Task 1, option B). After their choice, they rated the intensity of the perceived emotion on a seven-point Likert scale (Task 2). In Session 2, participants evaluated the degree of dynamicity conveyed by each stimulus, again on a seven-point Likert scale. Results showed that ASDs and TDs shared a similar range of verbal expressions defining emotions; however, ASDs (i) showed an impairment in the ability to spontaneously assign an emotion to a headless body, and (ii) more frequently used terms denoting negative emotions (for both faces and bodies) than neutral emotions, which in turn were more frequently used by TDs. No difference emerged between the two groups for positive emotions, with happiness being the emotion best recognized in both faces and bodies.
Although overall there were no significant differences between the two groups in the emotions assigned to the images or in the degree of perceived dynamicity, the Artwork × Group interaction showed that for some images ASDs assigned a different dynamicity value than TDs. Moreover, two images were interpreted by ASDs as conveying completely different emotions from those perceived by TDs. Results are discussed in light of the ability of ASDs to resolve ambiguity and of possible different cognitive styles characterizing the aesthetic/emotional experience.


2020
pp. 27-31
Author(s): T.P. Shiryaeva, A.V. Gribanov, D.M. Fedotov, O.A. Rumyantseva
