Selected Papers from i-SAIRAS 2010

Author(s):  
Keiki Takadama

This special issue features selected papers from i-SAIRAS 2010 (the 10th International Symposium on Artificial Intelligence, Robotics and Automation in Space), held in Sapporo, Japan, on August 29 - September 1, 2010, which explored the technology of Artificial Intelligence (AI), Automation and Robotics and its application in space. In the AI domain in particular, i-SAIRAS focuses on the following issues: (1) spacecraft autonomy (e.g., onboard software for mission planning and execution, resource management, fault protection, science data analysis, guidance, navigation and control, smart sensors, testing and validation, architectures); (2) mission operations automation (e.g., decision support tools for mission planning and scheduling, anomaly detection and fault analysis, innovative operations concepts, data visualization, secure commanding and networking); (3) design tools and optimization methods, electronic documentation; and (4) AI methods (e.g., automated planning and scheduling, agents, model-based reasoning, machine learning and data mining).

In the selection process for JACIII (Journal of Advanced Computational Intelligence and Intelligent Informatics), 13 papers were first nominated from the 133 oral presentations as outstanding AI-related papers by the i-SAIRAS International Committee, and 6 papers were finally accepted through a two-stage peer review. All papers were reviewed by three reviewers.

As a brief introduction to these papers: the paper by Mark Johnston and Mark Giuliano presents an architecture called MUSE (Multi-User Scheduling Environment) that integrates multi-objective evolutionary algorithms with existing domain planning and scheduling tools. The second paper, by Amedeo Cesta et al., discusses general lessons learned from a series of deployed planning and scheduling systems. The third paper, by Alessandro Donati et al., spotlights specific achievements and trends in the area of spacecraft diagnosis and mission planning and scheduling. The fourth paper, by Cedric Cocaud and Takashi Kubota, proposes a system that provides position and attitude information to a spacecraft during its approach, descent, and landing phase toward the surface of an asteroid. The fifth paper, by Tomohiro Harada et al., studies an on-board computer that evolves computer programs through bit inversion and analyzes its robustness to bit inversion. Finally, the last paper, by Masayuki Otani et al., explores the distributed control of multiple robots, some of which may break down, in the assembly of a space solar power satellite. The editor hopes that these papers will help readers capture the state of the art of AI technology in space.

Author(s):  
Amedeo Cesta,
Gabriella Cortellessa,
Simone Fratini,
Angelo Oddi,
...

This article contains a retrospective overview of connected work performed for the European Space Agency (ESA) over a span of 10 years. We have been creating and refining an AI approach to problem solving and have infused it into a series of deployed planning and scheduling systems that have innovated the agency's mission planning practice. The goal of this paper is to identify the strong features of this experience, comment on general lessons learned, and offer guidelines for future work practice. Specifically, the work considers some key points that have contributed to strengthening the effectiveness of our approach in developing an end-to-end methodology for fielding applications: the attention to domain modeling, constraint-based algorithm synthesis, and the relevance of user interaction services.
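To give a flavor of the constraint-based scheduling style mentioned in this abstract, the following is a minimal, hypothetical Python sketch, not the ESA systems described in the paper; the Activity fields and the greedy earliest-deadline strategy are illustrative assumptions.

# Minimal illustration of constraint-based scheduling: place activities on a
# single (unary) resource so that release times, deadlines, and non-overlap
# constraints are all respected.  Purely illustrative.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Activity:
    name: str
    duration: int      # time units
    release: int       # earliest allowed start
    deadline: int      # latest allowed end

def schedule_earliest(activities: List[Activity]) -> Optional[dict]:
    """Greedy earliest-start scheduling; returns start times or None if infeasible."""
    schedule = {}
    resource_free_at = 0
    # Order by deadline so the most time-constrained activities are placed first.
    for act in sorted(activities, key=lambda a: a.deadline):
        start = max(act.release, resource_free_at)
        end = start + act.duration
        if end > act.deadline:          # temporal constraint violated
            return None
        schedule[act.name] = start
        resource_free_at = end          # unary-resource (non-overlap) constraint
    return schedule

if __name__ == "__main__":
    acts = [Activity("downlink", 3, 0, 10),
            Activity("maneuver", 2, 1, 5),
            Activity("observation", 4, 2, 12)]
    print(schedule_earliest(acts))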


2020, Vol. 54 (12), pp. 942-947
Author(s):  
Pol Mac Aonghusa,
Susan Michie

Abstract
Background: Artificial Intelligence (AI) is transforming the process of scientific research. AI, coupled with the availability of large datasets and increasing computational power, is accelerating progress in areas such as genetics, climate change, and astronomy [NeurIPS 2019 Workshop Tackling Climate Change with Machine Learning, Vancouver, Canada; Hausen R, Robertson BE. Morpheus: A deep learning framework for the pixel-level analysis of astronomical image data. Astrophys J Suppl Ser. 2020;248:20; Dias R, Torkamani A. AI in clinical and genomic diagnostics. Genome Med. 2019;11:70.]. The application of AI in behavioral science is still in its infancy, and realizing the promise of AI requires adapting current practices.
Purpose: By using AI to synthesize and interpret findings from behavior change intervention evaluation reports at a scale beyond human capability, the Human Behaviour-Change Project (HBCP) seeks to improve the efficiency and effectiveness of research activities. We explore the challenges facing AI adoption in behavioral science through the lens of lessons learned during the HBCP.
Methods: The project used an iterative cycle of development and testing of AI algorithms. Using a corpus of published reports of randomized controlled trials of behavioral interventions, behavioral science experts annotated occurrences of interventions and outcomes. AI algorithms were trained to recognize the natural language patterns associated with interventions and outcomes from these expert annotations. Once trained, the AI algorithms were used to predict outcomes for interventions, and the predictions were checked by behavioral scientists.
Results: Intervention reports contain many items of information that need to be extracted, and the hugely variable and idiosyncratic language used in research reports to convey this information makes it impractical to develop algorithms that extract all of it with near-perfect accuracy. However, statistical matching algorithms combined with advanced machine learning approaches produced reasonably accurate outcome predictions from incomplete data.
Conclusions: AI holds promise for achieving the goal of predicting the outcomes of behavior change interventions based on information automatically extracted from intervention evaluation reports. This information can be used to train knowledge systems using machine learning and reasoning algorithms.
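As a rough illustration of the kind of pipeline described here, training an algorithm on expert-annotated text to recognize intervention and outcome language, the following is a minimal sketch using scikit-learn; the toy sentences, labels, and model choice are invented for illustration and are not HBCP data or the project's actual algorithms.

# Minimal sketch: learn to flag sentences that describe an intervention outcome,
# mimicking (in a toy way) training on expert-annotated report text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy annotated sentences (1 = describes an outcome, 0 = does not).
sentences = [
    "Smoking cessation at 6 months was 23% in the intervention group.",
    "Participants received weekly text-message support.",
    "Abstinence was biochemically verified at follow-up.",
    "The trial recruited adults from primary care clinics.",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(sentences, labels)

# Prediction for a new, outcome-like sentence from an unseen report.
new = ["Quit rates at 12 months were 18% versus 11% in controls."]
print(model.predict(new))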


2021, Vol. 217 (2)
Author(s):  
Alexander G. Hayes,
P. Corlies,
C. Tate,
M. Barrington,
J. F. Bell,
...

Abstract: The NASA Perseverance rover Mast Camera Zoom (Mastcam-Z) system is a pair of zoomable, focusable, multi-spectral, and color charge-coupled device (CCD) cameras mounted on top of a 1.7 m Remote Sensing Mast, along with associated electronics and two calibration targets. The cameras contain identical optical assemblies that can range in focal length from 26 mm (25.5° × 19.1° FOV) to 110 mm (6.2° × 4.2° FOV) and will acquire data at pixel scales of 148-540 μm at a range of 2 m and 7.4-27 cm at 1 km. The cameras are mounted on the rover's mast with a stereo baseline of 24.3 ± 0.1 cm and a toe-in angle of 1.17 ± 0.03° (per camera). Each camera uses a Kodak KAI-2020 CCD with 1600 × 1200 active pixels and an 8-position filter wheel that contains an IR-cutoff filter for color imaging through the detectors' Bayer-pattern filters, a neutral density (ND) solar filter for imaging the Sun, and 6 narrow-band geology filters (16 filters in total). An associated Digital Electronics Assembly provides command and data interfaces to the rover, 11-to-8 bit companding, and JPEG compression capabilities. Herein, we describe the pre-flight calibration of the Mastcam-Z instrument and characterize its radiometric and geometric behavior. Between April 26 and May 9, 2019, ∼45,000 images were acquired during stand-alone calibration at Malin Space Science Systems (MSSS) in San Diego, CA. Additional data were acquired during Assembly, Test and Launch Operations (ATLO) at the Jet Propulsion Laboratory and Kennedy Space Center. Results of the radiometric calibration validate a 5% absolute radiometric accuracy when using camera state parameters investigated during testing. When observing with camera state parameters not interrogated during calibration (e.g., non-canonical zoom positions), we conservatively estimate the absolute uncertainty to be <10%. Image quality, measured via the amplitude of the Modulation Transfer Function (MTF) at Nyquist sampling (0.35 line pairs per pixel), shows MTF_Nyquist = 0.26-0.50 across all zoom, focus, and filter positions, exceeding the >0.2 design requirement. We discuss lessons learned from calibration and suggest tactical strategies that will optimize the quality of science data acquired during operation at Mars. While most results matched expectations, some surprises were discovered, such as a strong wavelength and temperature dependence of the radiometric coefficients and a scene-dependent dynamic component to the zero-exposure bias frames. Calibration results and derived accuracies were validated using a Geoboard target consisting of well-characterized geologic samples.
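For context on where the quoted fields of view and pixel scales come from, the following is a small worked sketch of the standard pinhole-camera relations, assuming the nominal KAI-2020 pixel pitch of 7.4 μm; the outputs are approximate and will not exactly reproduce the flight values, which depend on the as-built optical prescription.

# Approximate pinhole-camera relations behind the quoted Mastcam-Z numbers.
# Values are illustrative; the as-built optics differ slightly.
import math

PIXEL_PITCH_M = 7.4e-6          # nominal KAI-2020 pixel pitch
ACTIVE_PIXELS = (1600, 1200)    # active array size

def fov_deg(focal_length_m: float) -> tuple:
    """Full horizontal and vertical field of view in degrees."""
    w = ACTIVE_PIXELS[0] * PIXEL_PITCH_M
    h = ACTIVE_PIXELS[1] * PIXEL_PITCH_M
    return (2 * math.degrees(math.atan(w / (2 * focal_length_m))),
            2 * math.degrees(math.atan(h / (2 * focal_length_m))))

def pixel_scale_m(focal_length_m: float, range_m: float) -> float:
    """Footprint of one pixel at the given range (small-angle approximation)."""
    return PIXEL_PITCH_M / focal_length_m * range_m

for f in (0.026, 0.110):        # 26 mm (wide) and 110 mm (tele) focal lengths
    h_fov, v_fov = fov_deg(f)
    print(f"f = {f*1000:.0f} mm: FOV ~ {h_fov:.1f} x {v_fov:.1f} deg, "
          f"pixel ~ {pixel_scale_m(f, 2)*1e6:.0f} um at 2 m, "
          f"{pixel_scale_m(f, 1000)*100:.1f} cm at 1 km")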


Author(s):  
Sina Shaffiee Haghshenas,
Behrouz Pirouz,
Sami Shaffiee Haghshenas,
Behzad Pirouz,
Patrizia Piro,
...

Nowadays, an infectious disease outbreak is considered one of the most destructive threats to the sustainable development process. The outbreak of the novel coronavirus (COVID-19) as an infectious disease has had undesirable social, environmental, and economic impacts and has led to serious challenges and threats. Additionally, investigating the prioritization of parameters is of vital importance for reducing the negative impacts of this global crisis. Hence, the main aim of this study is to prioritize and analyze the role of certain environmental parameters. For this purpose, four cities in Italy were selected as a case study, and some notable climate parameters, such as daily average temperature, relative humidity, and wind speed, together with an urban parameter, population density, were considered as the input dataset, with confirmed cases of COVID-19 as the output dataset. Two artificial intelligence techniques, an artificial neural network (ANN) trained with the particle swarm optimization (PSO) algorithm and an ANN trained with the differential evolution (DE) algorithm, were used to prioritize the climate and urban parameters. The analysis is based on a feature selection process, and the results obtained from the two models were then compared to select the better one. The difference in cost function between the two models was only about 0.0001, so the two methods were essentially equivalent in terms of cost function; however, ANN-PSO was found to be better because it reached the desired precision level in fewer iterations than ANN-DE. In addition, the urban parameter (population density) and relative humidity had the highest priority for predicting the confirmed cases of COVID-19.
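As a rough illustration of the ANN-PSO idea, a neural network whose weights are searched by particle swarm optimization rather than by gradient descent, here is a minimal NumPy sketch on synthetic data; the network size, PSO parameters, and data are invented for illustration and are not those of the study.

# Minimal sketch of ANN-PSO: particle swarm optimization searches the weights
# of a tiny one-hidden-layer network.  Synthetic data, illustrative parameters.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))                 # 4 synthetic "climate/urban" features
y = (X @ np.array([0.5, -0.2, 0.1, 0.8]) > 0).astype(float)  # synthetic target

N_HIDDEN = 5
N_WEIGHTS = 4 * N_HIDDEN + N_HIDDEN           # input->hidden plus hidden->output

def predict(w, X):
    W1 = w[:4 * N_HIDDEN].reshape(4, N_HIDDEN)
    W2 = w[4 * N_HIDDEN:]
    h = np.tanh(X @ W1)
    return 1 / (1 + np.exp(-(h @ W2)))        # sigmoid output

def cost(w):
    return np.mean((predict(w, X) - y) ** 2)  # mean squared error

# Plain PSO loop over candidate weight vectors ("particles").
n_particles, n_iter = 30, 200
pos = rng.normal(size=(n_particles, N_WEIGHTS))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, N_WEIGHTS))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("best cost:", pbest_cost.min())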


Author(s):  
Indar Sugiarto,
Doddy Prayogo,
Henry Palit,
Felix Pasila,
Resmana Lim,
...

This paper describes a prototype computing platform dedicated to artificial intelligence exploration. The platform, dubbed PakCarik, is essentially a high-throughput computing platform with GPU (graphics processing unit) acceleration. PakCarik is an Indonesian acronym for Platform Komputasi Cerdas Ramah Industri Kreatif, which can be translated as "Creative Industry-friendly Intelligence Computing Platform". The platform aims to provide a complete development and production environment for AI-based projects, especially those that rely on machine learning and multiobjective optimization paradigms. PakCarik was constructed using a hardware assembly approach based on commercial off-the-shelf components and was tested on several AI-related application scenarios. The testing methods in this experiment include High-Performance Linpack (HPL) benchmarking, Message Passing Interface (MPI) benchmarking, and TensorFlow (TF) benchmarking. From the experiments, the authors observe that PakCarik's performance is quite similar to that of commonly used cloud computing services such as Google Compute Engine and Amazon EC2, even though it falls somewhat behind the dedicated AI platform, the Nvidia DGX-1, used in the benchmarking experiment. Its maximum computing performance was measured at 326 Gflops. The authors conclude that PakCarik is ready to be deployed in real-world applications and can be made even more powerful by adding more GPU cards.
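To give a sense of what TensorFlow benchmarking on such a platform can involve, here is a minimal, generic matrix-multiplication throughput probe; it is an illustrative micro-benchmark, not the actual test suite used for PakCarik, and the matrix size is an arbitrary choice.

# Generic GPU throughput probe: time a large matrix multiplication in TensorFlow
# and convert the elapsed time to GFLOP/s.  Illustrative only.
import time
import tensorflow as tf

N = 4096
a = tf.random.normal((N, N))
b = tf.random.normal((N, N))

# Warm-up run so kernel compilation and data transfer are not timed.
_ = tf.matmul(a, b)

start = time.perf_counter()
c = tf.matmul(a, b)
_ = c.numpy()                      # force the computation to complete
elapsed = time.perf_counter() - start

flops = 2 * N ** 3                 # multiply-adds for an N x N x N matmul
print(f"{flops / elapsed / 1e9:.1f} GFLOP/s")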


The AI-based interview chatbot used in the system could be the future of any recruitment process, as it effectively saves time and effort and improves the efficiency of recruitment. The selection process conducted through the bot will be unbiased and also speedy, since multiple interviews can be conducted at the same time. Chatbots will also make interviewees comfortable, as they can be integrated on mobile platforms. To summarize, the future of the interviewing process can be made simpler through the use of chatbots.
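To make the idea of a scripted interview chatbot slightly more concrete, here is a minimal, hypothetical command-line sketch; the questions and keyword screen are invented, and a production system would replace them with an AI/NLP backend and a mobile front end.

# Toy scripted interview chatbot: asks fixed questions and applies a naive
# keyword screen.  Purely illustrative of the interaction flow.
QUESTIONS = [
    ("Tell us briefly about your experience with Python.", {"python", "pandas"}),
    ("Describe a project where you used machine learning.", {"model", "training"}),
]

def run_interview():
    score = 0
    transcript = []
    for question, keywords in QUESTIONS:
        answer = input(f"Bot: {question}\nYou: ")
        transcript.append((question, answer))
        if any(k in answer.lower() for k in keywords):
            score += 1
    print(f"Bot: Thank you. Screening score: {score}/{len(QUESTIONS)}")
    return transcript

if __name__ == "__main__":
    run_interview()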

