The NStED Exoplanet Transit Survey Service

2008 ◽  
Vol 4 (S253) ◽  
pp. 478-481 ◽  
Author(s):  
K. von Braun ◽  
M. Abajian ◽  
B. Ali ◽  
R. Baker ◽  
G. B. Berriman ◽  
...  

Abstract
The NASA Star and Exoplanet Database (NStED) is a general-purpose stellar archive with the aim of providing support for NASA's planet finding and characterization goals, stellar astrophysics, and the planning of NASA and other space missions. There are two principal components of NStED: a database of (currently) 140,000 nearby stars and exoplanet-hosting stars, and an archive dedicated to high-precision photometric surveys for transiting exoplanets. We present a summary of the latter component: the NStED Exoplanet Transit Survey Service (NStED-ETSS), along with its content, functionality, tools, and user interface. NStED-ETSS currently serves data from the TrES Survey of the Kepler Field as well as dedicated photometric surveys of four stellar clusters. NStED-ETSS aims to serve both the surveys and the broader astronomical community by archiving these data and making them available in a homogeneous format. Examples of ETSS usability include investigation of time-variable phenomena in data sets not studied by the original survey team, application of different techniques or algorithms for planet transit detection, combination of data from different surveys for given objects, and statistical studies. NStED-ETSS can be accessed at http://nsted.ipac.caltech.edu.

2008 ◽  
Vol 4 (S253) ◽  
pp. 474-477
Author(s):  
S. Ramirez ◽  
B. Ali ◽  
R. Baker ◽  
G. B. Berriman ◽  
K. von Braun ◽  
...  

Abstract
The NASA Star and Exoplanet Database (NStED) is a general-purpose stellar archive with the aim of providing support for NASA's planet finding and characterization goals, stellar astrophysics, and the planning of NASA and other space missions. There are two principal components of NStED: a database of (currently) 140,000 nearby stars and exoplanet-hosting stars, and an archive dedicated to high-precision photometric surveys for transiting exoplanets. We present a summary of the NStED stellar database, functionality, tools, and user interface. NStED currently serves the following kinds of data for 140,000 stars (where available): coordinates, multiplicity, proper motion, parallax, spectral type, multiband photometry, radial velocity, metallicity, chromospheric and coronal activity index, and rotation velocity/period. Furthermore, the following derived quantities are given wherever possible: distance, effective temperature, mass, radius, luminosity, space motions, and physical/angular dimensions of the habitable zone. Queries to NStED can be made using constraints on any combination of the above parameters. In addition, NStED provides tools to derive specific inferred quantities for the stars in the database, cross-referenced with available extra-solar planetary data for those host stars. NStED can be accessed at http://nsted.ipac.caltech.edu.
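The query model described here, constraints on any combination of stellar parameters, can be illustrated with a small sketch. The table, field names, and values below are invented for illustration and are not the actual NStED schema or API.

```python
# Hypothetical sketch of constraint-based querying over a stellar table,
# in the spirit of an NStED-style parameter search. Field names and
# values are illustrative only.

stars = [
    {"name": "Star A", "distance_pc": 12.4, "teff_K": 5600, "spectral_type": "G5V"},
    {"name": "Star B", "distance_pc": 48.0, "teff_K": 6400, "spectral_type": "F8V"},
    {"name": "Star C", "distance_pc": 8.1,  "teff_K": 3900, "spectral_type": "M0V"},
]

def query(table, **constraints):
    """Return rows satisfying every (field, predicate) constraint."""
    return [row for row in table
            if all(pred(row[field]) for field, pred in constraints.items())]

# Combine constraints freely: all stars within 25 pc cooler than 6000 K.
nearby_cool = query(stars,
                    distance_pc=lambda d: d < 25,
                    teff_K=lambda t: t < 6000)
```

Any subset of fields can be constrained at once, which mirrors the "any combination of the above parameters" behavior the abstract describes.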


1965 ◽  
Vol 04 (03) ◽  
pp. 136-140
Author(s):  
Cl Jeanty

A method is described in an attempt to make medical records suitable for epidemiological purposes. Every case of a disease is recorded on an appropriate punched card, with the object of working towards a general description of a disease through the collation of several cases with the same diagnosis. This punched card represents a very great condensation of the original record. Special care has been taken to state the time variable as precisely as possible, particularly with regard to its origin and unit of measure, in order to demonstrate the existence of causal relations between diseases. Such cards are also intended to facilitate statistical studies in clinical pathology, in the evaluation of new laboratory techniques, and in therapeutic trials.


2016 ◽  
Vol 3 (1) ◽  
Author(s):  
LAL SINGH ◽  
PARMEET SINGH ◽  
RAIHANA HABIB KANTH ◽  
PURUSHOTAM SINGH ◽  
SABIA AKHTER ◽  
...  

WOFOST version 7.1.3 is a computer model that simulates the growth and production of annual field crops. All run options are operated through a graphical user interface named WOFOST Control Center version 1.8 (WCC). WCC facilitates selection of the production level, of input data sets on crop, soil, weather, crop calendar, hydrological field conditions, and soil fertility parameters, and of the output options. The files with crop, soil, and weather data are explained, as well as the run files and the output files. A general overview is given of the development and applications of the model, and its underlying concepts are discussed briefly.


Author(s):  
Gediminas Adomavicius ◽  
Yaqiong Wang

Numerical predictive modeling is widely used in different application domains. Although many modeling techniques have been proposed, and a number of different aggregate accuracy metrics exist for evaluating the overall performance of predictive models, other important aspects, such as the reliability (or confidence and uncertainty) of individual predictions, have been underexplored. We propose to use estimated absolute prediction error as the indicator of individual prediction reliability, which has the benefits of being intuitive and providing highly interpretable information to decision makers, as well as allowing for more precise evaluation of reliability estimation quality. As importantly, the proposed reliability indicator allows the reframing of reliability estimation itself as a canonical numeric prediction problem, which makes the proposed approach general-purpose (i.e., it can work in conjunction with any outcome prediction model), alleviates the need for distributional assumptions, and enables the use of advanced, state-of-the-art machine learning techniques to learn individual prediction reliability patterns directly from data. Extensive experimental results on multiple real-world data sets show that the proposed machine learning-based approach can significantly improve individual prediction reliability estimation as compared with a number of baselines from prior work, especially in more complex predictive scenarios.
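The two-stage idea described above, fitting an outcome model and then treating the absolute prediction error itself as the target of a second regression, can be sketched as follows. This is a minimal illustration on synthetic heteroscedastic data, not the authors' implementation; any outcome model and any error-regression model could be substituted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 2x + heteroscedastic noise (noise grows with x),
# so prediction reliability genuinely varies across inputs.
X = rng.uniform(0, 10, size=(200, 1))
y = 2 * X[:, 0] + rng.normal(0, 0.1 + 0.3 * X[:, 0])

# Stage 1: any outcome model; here, ordinary least squares with intercept.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef

# Stage 2: reframe reliability estimation as a canonical regression
# problem -- predict the absolute prediction error from the features.
abs_err = np.abs(y - y_hat)
err_coef, *_ = np.linalg.lstsq(A, abs_err, rcond=None)

def estimated_abs_error(x):
    """Smaller estimated absolute error => more reliable prediction."""
    return float(np.array([x, 1.0]) @ err_coef)
```

Because stage 2 is an ordinary regression, the linear model here could be replaced by any state-of-the-art learner, which is exactly the generality the abstract emphasizes.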


1985 ◽  
Vol 33 (2) ◽  
pp. 141-150
Author(s):  
D.A. Stellingwerf ◽  
S. Lwin

Comparative estimates were made for a 12 841-ha area of Upper Austria comprising areas of pure or mixed Norway spruce and beech, young stands, and non-forest. The Landsat data, classified by principal components analysis, gave very inaccurate differentiation of species, age classes, and smaller non-forest areas, although the total forest area was reasonably accurate. Stand volume of spruce was estimated by two-stage sampling of both data sets, followed by field work on sample plots. The Landsat method required 53% more primary (first-stage) sampling units, 23% more man-days, and higher extra costs than the orthophoto method for the same accuracy. (Abstract retrieved from CAB Abstracts by CABI's permission)


mSystems ◽  
2021 ◽  
Vol 6 (1) ◽  
Author(s):  
Gongchao Jing ◽  
Lu Liu ◽  
Zengbin Wang ◽  
Yufeng Zhang ◽  
Li Qian ◽  
...  

ABSTRACT
Metagenomic data sets from diverse environments have been growing rapidly. To ensure accessibility and reusability, tools that quickly and informatively correlate new microbiomes with existing ones are in demand. Here, we introduce Microbiome Search Engine 2 (MSE 2), a microbiome database platform for searching query microbiomes in the global metagenome data space based on the taxonomic or functional similarity of a whole microbiome to those in the database. MSE 2 consists of (i) a well-organized and regularly updated microbiome database that currently contains over 250,000 metagenomic shotgun and 16S rRNA gene amplicon samples associated with unified metadata collected from 798 studies, (ii) an enhanced search engine that enables real-time and fast (<0.5 s per query) searches against the entire database for best-matched microbiomes using overall taxonomic or functional profiles, and (iii) a Web-based graphical user interface for user-friendly searching, data browsing, and tutoring. MSE 2 is freely accessible via http://mse.ac.cn. For standalone searches of customized microbiome databases, the kernel of the MSE 2 search engine is provided at GitHub (https://github.com/qibebt-bioinfo/meta-storms).
IMPORTANCE
A search-based strategy is useful for large-scale mining of microbiome data sets, such as a bird's-eye view of the microbiome data space and disease diagnosis via microbiome big data. Here, we introduce Microbiome Search Engine 2 (MSE 2), a microbiome database platform for searching query microbiomes against the existing microbiome data sets on the basis of their similarity in taxonomic structure or functional profile. Key improvements include database extension, data compatibility, a search engine kernel, and a user interface. The new ability to search the microbiome space via functional similarity greatly expands the scope of search-based mining of the microbiome big data.
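The search pattern itself, ranking database samples by whole-profile similarity to a query, can be shown with a minimal stand-in: cosine similarity over flat relative-abundance vectors. The actual MSE 2 kernel (Meta-Storms) uses phylogeny-aware scoring, and the sample profiles below are invented for illustration.

```python
import math

def cosine(p, q):
    """Cosine similarity between two abundance profiles."""
    dot = sum(a * b for a, b in zip(p, q))
    norm = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q))
    return dot / norm if norm else 0.0

# Toy database of relative-abundance profiles (rows sum to ~1).
database = {
    "gut_sample_1":  [0.70, 0.20, 0.10, 0.00],
    "soil_sample_1": [0.05, 0.10, 0.25, 0.60],
    "gut_sample_2":  [0.65, 0.25, 0.05, 0.05],
}

def search(query_profile, db, top_n=2):
    """Return the top_n best-matched sample IDs, most similar first."""
    ranked = sorted(db, key=lambda s: cosine(query_profile, db[s]), reverse=True)
    return ranked[:top_n]

best = search([0.72, 0.18, 0.08, 0.02], database)
```

A gut-like query profile retrieves the gut samples ahead of the soil sample; a production engine differs mainly in its similarity metric and indexing, not in this overall pattern.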


Author(s):  
Seamus M. McGovern ◽  
Surendra M. Gupta

NP-complete combinatorial problems often necessitate the use of near-optimal solution techniques including heuristics and metaheuristics. The addition of multiple optimization criteria can further complicate comparison of these solution techniques due to the decision-maker’s weighting schema potentially masking search limitations. In addition, many contemporary problems lack quantitative assessment tools, including benchmark data sets. This chapter proposes the use of lexicographic goal programming for use in comparing combinatorial search techniques. These techniques are implemented here using a recently formulated problem from the area of production analysis. The development of a benchmark data set and other assessment tools is demonstrated, and these are then used to compare the performance of a genetic algorithm and an H-K general-purpose heuristic as applied to the production-related application.
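Lexicographic evaluation, in which criteria are compared strictly in priority order rather than blended through a decision-maker's weights, can be sketched as follows. The criteria names and values are illustrative, not the chapter's benchmark data set.

```python
# Sketch of lexicographic comparison of multi-criteria solutions.
# A later criterion matters only as a tie-breaker for all earlier
# ones, so no weighting schema can mask a deficit in a higher-
# priority criterion. All criteria here are minimized.

def lexicographic_better(a, b):
    """True if solution a strictly beats b in priority order.

    Each solution is a tuple of criterion values, highest
    priority first, all to be minimized."""
    for ca, cb in zip(a, b):
        if ca < cb:
            return True
        if ca > cb:
            return False
    return False  # identical on every criterion

# Hypothetical scores (num_stations, idle_time, hazard_measure)
# for two search techniques on the same benchmark instance:
ga_solution = (5, 12, 3)
hk_solution = (5, 12, 4)
```

Here the two techniques tie on the first two criteria, and the genetic algorithm's solution wins on the third; a weighted-sum comparison could instead hide such a difference behind the chosen weights.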


1997 ◽  
Vol 3 (S2) ◽  
pp. 1131-1132
Author(s):  
P.L. Jansma ◽  
M.A. Landis ◽  
L.C. Hansen ◽  
N.C. Merchant ◽  
N.J. Vickers ◽  
...  

We are using Data Explorer (DX), a general-purpose, interactive visualization program developed by IBM, to perform three-dimensional reconstructions of neural structures from microscopic or optical sections. We use the program on a Silicon Graphics workstation; it also can run on Sun, IBM RS/6000, and Hewlett Packard workstations. DX comprises modular building blocks that the user assembles into data-flow networks for specific uses. Many modules come with the program, but others, written by users (including ourselves), are continually being added and are available at the DX ftp site, http://www.tc.cornell.edu/DX.

Initially, our efforts were aimed at developing methods for isosurface- and volume-rendering of structures visible in three-dimensional stacks of optical sections of insect brains gathered on our Bio-Rad MRC-600 laser scanning confocal microscope. We also wanted to be able to merge two 3-D data sets (collected on two different photomultiplier channels) and to display them at various angles of view.


1975 ◽  
Vol 10 (1) ◽  
pp. 109-117 ◽  
Author(s):  
M. L. Dudzinski ◽  
J. M. Norris ◽  
J. T. Chmura ◽  
C. B. H. Edwards

2018 ◽  
Vol 17 ◽  
pp. 117693511877108 ◽  
Author(s):  
Min Wang ◽  
Steven M Kornblau ◽  
Kevin R Coombes

Principal component analysis (PCA) is one of the most common techniques in the analysis of biological data sets, but applying PCA raises 2 challenges. First, one must determine the number of significant principal components (PCs). Second, because each PC is a linear combination of genes, it rarely has a biological interpretation. Existing methods to determine the number of PCs are either subjective or computationally extensive. We review several methods and describe a new R package, PCDimension, that implements additional methods, the most important being an algorithm that extends and automates a graphical Bayesian method. Using simulations, we compared the methods. Our newly automated procedure is competitive with the best methods when considering both accuracy and speed and is the most accurate when the number of objects is small compared with the number of attributes. We applied the method to a proteomics data set from patients with acute myeloid leukemia. Proteins in the apoptosis pathway could be explained using 6 PCs. By clustering the proteins in PC space, we were able to replace the PCs by 6 “biological components,” 3 of which could be immediately interpreted from the current literature. We expect this approach combining PCA with clustering to be widely applicable.
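One classical, fully automatic criterion for the number of significant PCs is the broken-stick rule, which keeps a component only while its variance share exceeds the share expected from a random partition of the total variance. The sketch below illustrates that rule; it is not the PCDimension algorithm itself, and the eigenvalues are invented.

```python
import numpy as np

def broken_stick_pcs(eigenvalues):
    """Number of significant PCs by the broken-stick rule."""
    lam = np.sort(np.asarray(eigenvalues, dtype=float))[::-1]
    p = len(lam)
    share = lam / lam.sum()
    # Expected share of the k-th largest piece when a stick of
    # length 1 is broken into p random pieces: (1/p) * sum_{j=k}^p 1/j
    expected = np.array([sum(1.0 / j for j in range(k, p + 1)) / p
                         for k in range(1, p + 1)])
    keep = 0
    while keep < p and share[keep] > expected[keep]:
        keep += 1
    return keep

# Spectrum with two dominant components and a flat tail:
n_pcs = broken_stick_pcs([5.0, 3.0, 0.5, 0.3, 0.2])
```

Objective rules of this kind remove the subjectivity of eyeballing a scree plot, which is the same motivation the abstract gives for automating its graphical Bayesian method.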

