target environment
Recently Published Documents

TOTAL DOCUMENTS: 141 (FIVE YEARS: 56)
H-INDEX: 12 (FIVE YEARS: 3)

2022 · Vol 9 (1) · pp. 8-19
Author(s): Sultan Saud Alanazi, Adwan Alowine Alanazi

There are several ways to improve an organization's cybersecurity protection against intruders. One of them is to proactively hunt for threats, i.e., threat hunting. Threat hunting empowers organizations to detect the presence of intruders in their environment by identifying and searching for the tactics, techniques, and procedures (TTPs) of attackers. To know what to look for in the collected data and the environment, the attackers' TTPs must be known and understood. Information about an attacker's TTPs usually comes from signatures, indicators, and behavior observed in threat intelligence sources. Traditionally, threat hunting involves analyzing collected logs for Indicators of Compromise (IOCs) with different tools. However, network and security infrastructure devices generate large volumes of logs that are challenging to analyze, leaving gaps in the detection process. Similarly, identifying the required IOCs can be very difficult, which sometimes makes it hard to hunt a threat; this is one of the major drawbacks of traditional threat hunting processes and frameworks. To address this issue, intelligent automated processes using machine learning can improve the threat hunting process and plug those gaps before an attacker can exploit them. This paper proposes a machine learning-based threat hunting model that fills the gaps in the threat detection process and effectively detects unknown adversaries by training machine learning algorithms on extensive datasets of TTPs and the normal behavior of the system and target environment. The model comprises five main stages: Hypotheses Development, Equip, Hunt, Respond, and Feedback. This threat hunting model goes a step beyond traditional models and frameworks by employing machine learning algorithms.
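As a loose illustration of the kind of automated step that could sit inside the Hunt stage, the sketch below trains an anomaly detector on features derived from normal system behavior and scores new log records against it. The feature layout, library choice (scikit-learn's IsolationForest), and values are assumptions for the example, not the model proposed in the paper.

```python
# Minimal sketch, assuming scikit-learn is available and that log records have been
# reduced to numeric features; all feature names and values here are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Toy "normal behavior" features per record: [bytes_out, failed_logins, distinct_ports]
normal_logs = rng.normal(loc=[500.0, 1.0, 3.0], scale=[50.0, 1.0, 1.0], size=(1000, 3))

# A few records mimicking side effects of attacker TTPs
suspicious_logs = np.array([
    [5000.0, 0.0, 2.0],   # unusually large outbound volume (exfiltration-like)
    [480.0, 40.0, 3.0],   # burst of failed logins (brute-force-like)
    [520.0, 1.0, 60.0],   # contact with many distinct ports (scanning-like)
])

# Train on normal behavior only, then score new events during the hunt
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_logs)
labels = detector.predict(suspicious_logs)            # -1 = anomaly, 1 = normal
scores = detector.decision_function(suspicious_logs)  # lower = more anomalous

for record, label, score in zip(suspicious_logs, labels, scores):
    print(record, "anomaly" if label == -1 else "normal", round(float(score), 3))
```

Records flagged as anomalous would then feed the Respond and Feedback stages, for example by raising an alert and, once triaged, updating the training data.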


Author(s): Jan E. Neuweiler, Johannes Trini, Hans Peter Maurer, Tobias Würschum

Abstract. Key message: The comparatively low genotype-by-nitrogen-level interaction suggests that selection in early generations can be done under high-input conditions, followed by selection under different nitrogen levels to identify genotypes ideally suited for the target environment. Breeding high-yielding, nitrogen-efficient crops is of utmost importance for achieving greater agricultural sustainability. The aim of this study was to evaluate the nitrogen use efficiency (NUE) of triticale, investigate long-term genetic trends and the genetic architecture, and develop strategies for improving NUE by breeding. For this, we evaluated 450 different triticale genotypes under four nitrogen fertilization levels in multi-environment field trials for grain yield, protein content, starch content, and derived indices. Analysis of temporal trends revealed that modern cultivars are better at exploiting the available nitrogen. Genome-wide association mapping revealed a complex genetic architecture with many small-effect QTL and a high level of pleiotropy for NUE-related traits, in line with the phenotypic correlations. Furthermore, the effect of some QTL depended on the nitrogen fertilization level. High correlations of each trait between N levels and the rather low genotype-by-N-level interaction variance showed that generally the same genotypes perform well across different N levels. Nevertheless, the best performing genotype was always a different one. Thus, selection in early generations can be done under high nitrogen fertilizer conditions, as these provide stronger differentiation, but the final selection in later generations should be conducted under nitrogen fertilization matching the target environment.
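As context for the genotype-by-N-level interaction argument above, a minimal sketch of the kind of linear mixed model such an analysis typically rests on (the abstract does not state the authors' exact model) is

$$ y_{ijk} = \mu + G_i + N_j + (GN)_{ij} + E_k + \varepsilon_{ijk}, \qquad G_i \sim \mathcal{N}(0,\sigma^2_G), \quad (GN)_{ij} \sim \mathcal{N}(0,\sigma^2_{GN}), $$

where $G_i$ is the genotype effect, $N_j$ the nitrogen level, $(GN)_{ij}$ their interaction, and $E_k$ the environment. Under a simple compound-symmetry view, the genetic correlation of performance between two N levels is

$$ r_g = \frac{\sigma^2_G}{\sigma^2_G + \sigma^2_{GN}}, $$

so a small interaction variance $\sigma^2_{GN}$ relative to $\sigma^2_G$ implies $r_g \approx 1$, which is why the same genotypes tend to rank well across N levels and early-generation selection under high input can be effective.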


2021 · Vol 53 (1)
Author(s): Jack C. M. Dekkers

Abstract. Background: Genotype-by-environment interactions for a trait can be modeled using multiple-trait (i.e. character-state) models, which consider the phenotype as a different trait in each environment, or using reaction norm models based on a functional relationship, usually linear, between the phenotype and a quantitative measure of the quality of the environment. The equivalence between character-state and reaction norm models has been demonstrated for a single trait. The objectives of this study were to extend the equivalence of the reaction norm and character-state models to a multiple-trait setting and to both genetic and environmental effects, and to illustrate the application of this equivalence to the design and optimization of breeding programs for disease resilience. Methods: Equivalencies between reaction norm and character-state models for multiple-trait phenotypes were derived at the genetic and environmental levels, which demonstrates how multiple-trait reaction norm parameters can be derived from multiple-trait character-state parameters. These methods were applied to optimize selection for a multiple-trait breeding goal in a target environment based on phenotypes collected in a healthy and a disease-challenged environment, and to optimize the environment in which disease-challenge phenotypes should be collected. Results and conclusions: The equivalence between multiple-trait reaction norm and multiple-trait character-state parameters allows genetic improvement for a multiple-trait breeding goal in a target environment to be optimized without recording phenotypes and estimating parameters for the target environment.
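For orientation, the previously demonstrated single-trait equivalence that this study extends can be sketched as follows (the notation here is illustrative, not the paper's). With a linear reaction norm $g_j = a + b\,x_j$, where $x_j$ quantifies environment $j$ and $(a, b)$ have variances $\sigma^2_a$, $\sigma^2_b$ and covariance $\sigma_{ab}$, the character-state (co)variances for two environments follow directly:

$$ \sigma^2_{g_1} = \sigma^2_a + 2x_1\sigma_{ab} + x_1^2\sigma^2_b, \qquad \sigma_{g_1 g_2} = \sigma^2_a + (x_1 + x_2)\sigma_{ab} + x_1 x_2\,\sigma^2_b. $$

Conversely, character-state parameters observed in at least two environments determine the reaction norm parameters. The paper derives the analogous translation for the case where the phenotype is itself multivariate, at both the genetic and environmental levels.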


2021 · Vol 7 (Special) · pp. 12-12
Author(s): Leonid Andreev, Alexander Kozlov

The article discusses the prerequisites for using machine vision in electric deratization devices, considers possible methods of implementation, describes the design principles and composition of security and deratization systems, outlines a basic approach to the mathematical modeling of the thermal image of the background target environment, and also discusses the installation and maintenance of this system. Keywords: MACHINE VISION, AUTOMATION, CAMERAS, HIGH-SPEED SHOOTING, VIDEO SURVEILLANCE, TECHNICAL VISION, NAVIGATION, DERATIZATION


2021
Author(s): Muhammad Ghifary

Machine learning has achieved great successes in the area of computer vision, especially in object recognition or classification. One of the core factors behind these successes is the availability of massive labeled image or video data for training, collected manually by humans. Labeling source training data, however, can be expensive and time-consuming. Furthermore, a large amount of labeled source data does not always guarantee that traditional machine learning techniques will generalize well; there is a potential bias or mismatch in the data, i.e., the training data do not represent the target environment.

To mitigate this dataset bias/mismatch, one can consider domain adaptation: utilizing labeled source data and unlabeled target data to develop a classifier that performs well on the target environment. In some cases, however, unlabeled target data are nonexistent, but multiple labeled sources of data exist. Such situations can be addressed by domain generalization: using multiple source training sets to produce a classifier that generalizes to an unseen target domain. Although several domain adaptation and generalization approaches have been proposed, the domain mismatch in object recognition remains a challenging, open problem: model performance has not yet reached a satisfactory level in real-world applications.

The overall goal of this thesis is to progress towards solving dataset bias in visual object recognition through representation learning in the context of domain adaptation and domain generalization. Representation learning is concerned with finding proper data representations or features via learning rather than via engineering by human experts. This thesis proposes several representation learning solutions based on deep learning and kernel methods.

This thesis introduces a robust-to-noise deep neural network for handwritten digit classification trained on "clean" images only, which we name the Deep Hybrid Network (DHN). DHNs are based on a particular combination of sparse autoencoders and restricted Boltzmann machines. The results show that the DHN performs better than a standard deep neural network in recognizing digits with Gaussian and impulse noise and with block and border occlusions.

This thesis proposes the Domain Adaptive Neural Network (DaNN), a neural-network-based domain adaptation algorithm that minimizes both the classification error and the domain discrepancy between the source and target data representations. The experiments show the competitiveness of DaNN against several state-of-the-art methods on a benchmark object dataset.

This thesis develops the Multi-task Autoencoder (MTAE), a domain generalization algorithm based on autoencoders trained via multi-task learning. MTAE learns to transform the original image into its analogs in multiple related domains simultaneously. The results show that MTAE's representations provide better classification performance than alternative autoencoder-based models as well as the current state-of-the-art domain generalization algorithms.

This thesis proposes a fast kernel-based representation learning algorithm for both domain adaptation and domain generalization, Scatter Component Analysis (SCA). SCA finds a data representation that trades off between maximizing the separability of classes, minimizing the mismatch between domains, and maximizing the separability of all data points. The results show that SCA runs much faster than some competitive algorithms while providing state-of-the-art accuracy in both domain adaptation and domain generalization.

Finally, this thesis presents the Deep Reconstruction-Classification Network (DRCN), a deep convolutional network for domain adaptation. DRCN learns to classify labeled source data and also to reconstruct unlabeled target data via a shared encoding representation. The results show that DRCN provides competitive or better performance than the prior state-of-the-art models on several cross-domain object datasets.
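The DaNN described above combines a classification loss with a measure of the discrepancy between source and target representations; a commonly used discrepancy of this kind is the maximum mean discrepancy (MMD). The snippet below is a minimal NumPy sketch of a (biased) kernel MMD estimate between two feature batches. It illustrates the general idea only; function names, kernel choice, and bandwidths are assumptions, not the thesis's implementation.

```python
# Minimal sketch: biased empirical estimate of squared MMD with an RBF kernel,
# the kind of discrepancy term a DaNN-style objective can penalize.
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """Pairwise RBF kernel matrix k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dists = (np.sum(a**2, axis=1)[:, None]
                + np.sum(b**2, axis=1)[None, :]
                - 2.0 * a @ b.T)
    return np.exp(-gamma * sq_dists)

def mmd2(source, target, gamma=1.0):
    """Biased estimate of the squared maximum mean discrepancy between two samples."""
    k_ss = rbf_kernel(source, source, gamma)
    k_tt = rbf_kernel(target, target, gamma)
    k_st = rbf_kernel(source, target, gamma)
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()

rng = np.random.default_rng(0)
source_feats = rng.normal(0.0, 1.0, size=(200, 16))   # stand-in source-domain features
target_feats = rng.normal(0.5, 1.2, size=(200, 16))   # shifted stand-in target-domain features

print("MMD^2 (source vs. target):", round(float(mmd2(source_feats, target_feats)), 4))
print("MMD^2 (source vs. source):", round(float(mmd2(source_feats[:100], source_feats[100:])), 4))
```

In a domain adaptation setting, a term like this would be added to the classification loss so that minimizing the total objective pulls the source and target feature distributions together.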


2021 · Vol 911 (1) · pp. 012024
Author(s): Suwarti, Munif Ghulamahdi, Muhammad Azrai, Didy Sopandi, Trikoesoemaningtyas, ...

Abstract. Development of maize hybrids for tidal swampland was initiated by selecting and combining superior line genotypes that tolerate the constraints of the target environment. This study aimed to evaluate ten maize lines resulting from selection on tidal swamp acid sulphate soils, in order to obtain GCA, SCA, and heterosis values based on parental yield averages. The experiment consisted of ten line genotypes that had been selected in 2019 at the tidal swamp. Each parent was crossed in a half-diallel combination, resulting in 46 entries, including the inbred parents. The entries were planted in a randomized complete block design with three replications. The research was conducted at the Bajeng Research Station (5°18'S, 119°30'E) from September 2020 to January 2021. The results show that GCA and SCA were significant for grain yield (at 15% moisture content), number of ears per plot, ten-ear weight, ten-corncob weight, harvested ear weight, 1000-seed weight, plant height, ear length, and number of seeds per ear. The grain yield of the W6 x W9 cross obtained the highest value of 9.36 t ha-1, not significantly different from the check hybrid P35 (9.35 t ha-1). The highest GCA value for grain yield was obtained for the W9 parental line (0.64**). The highest SCA was obtained for the W7 x W8 cross (2.61). The highest heterosis value was revealed in the W5 x W10 hybrid (4.80), whereas the heterosis of the W7 x W8 cross was 2.34, indicating that a high SCA effect does not always generate high heterosis. W10 performed well as a female parent for achieving high heterosis.
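For reference, the quantities estimated in such a half-diallel analysis are conventionally defined along the following lines (the abstract does not state which Griffing method or heterosis definition was used, so this is only a generic sketch):

$$ y_{ij} = \mu + g_i + g_j + s_{ij} + \bar{e}_{ij}, $$

where $g_i$ and $g_j$ are the general combining ability (GCA) effects of the two parents and $s_{ij}$ is the specific combining ability (SCA) effect of the cross, and mid-parent heterosis is

$$ \mathrm{MPH} = \frac{\bar{F}_{1,ij} - \tfrac{1}{2}(\bar{P}_i + \bar{P}_j)}{\tfrac{1}{2}(\bar{P}_i + \bar{P}_j)} \times 100\%. $$

Because $s_{ij}$ is measured as a deviation from the parents' GCA effects while heterosis is measured against the parental means themselves, a cross can combine a high SCA effect with only moderate heterosis, as observed for W7 x W8.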


2021 · Vol 12
Author(s): Ariel Ferrante, Brian R. Cullis, Alison B. Smith, Jason A. Able

Low temperatures during the flowering period of cereals can lead to floret sterility, yield reduction, and economic losses in Australian crops. In order to breed for reduced frost susceptibility, selection methods are urgently required to identify novel sources of frost-tolerant germplasm. However, the presence of genotype-by-environment interactions (i.e. variety responses to a change in environment) is a major constraint on selecting the most appropriate varieties for any given target environment. An advanced method of analysis for multi-environment trials that includes factor analytic selection tools to summarize overall performance and stability for a specific trait across environments could deliver useful information to guide growers and plant breeding programs towards the most appropriate decision-making strategy. In this study, the updated selection tools applied in this multi-environment trial (MET) analysis allowed comparisons between varieties with similar frost susceptibility but different responses to changes in the environment, or vice versa. The MET analysis included a wide range of sowing dates grown at multiple locations from 2010 to 2019. These results, as far as we are aware, show for the first time genotypic differences in frost damage through a MET analysis, based on phenotyping a vast number of accurate empirical measurements, in excess of 557,000 spikes. This resulted in a substantial number of experimental units (10,317 and 5,563 in wheat and barley, respectively) across a wide range of sowing times grown at multiple locations from 2010 to 2019. Varieties with low frost overall performance (OP) and a low root mean square deviation (RMSD, the stability measure) were less frost susceptible, with performance more consistent across all environments, while varieties with low OP and high RMSD were adapted to specific environmental conditions.
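As a rough sketch of how the overall performance (OP) and stability (RMSD) indices from factor analytic selection tools are typically defined (the authors' exact formulation is not given in the abstract), the genetic effect of variety $i$ in environment $j$ is modeled as

$$ u_{ij} = \lambda_{j1} f_{i1} + \cdots + \lambda_{jk} f_{ik} + \delta_{ij}, $$

with environment loadings $\lambda$ and variety scores $f$. Overall performance is then commonly taken as the variety's predicted effect from the first factor averaged over environments, $\mathrm{OP}_i = \bar{\lambda}_1 f_{i1}$, and stability as the root mean square deviation of the variety's effects about that first-factor regression,

$$ \mathrm{RMSD}_i = \sqrt{\frac{1}{p}\sum_{j=1}^{p}\bigl(u_{ij} - \lambda_{j1} f_{i1}\bigr)^2}, $$

so a low RMSD indicates a response that is consistent across environments and a high RMSD indicates environment-specific adaptation.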


Author(s): V. A. Riazantceva, K. N. Steshenko, D. D. Nikeev, E. V. Gavrilov

The developed software modeling environment makes it possible to estimate the power characteristics of background radiation. The method by which the model is implemented allows solar radiation indicatrices to be obtained that account for radiation scattered from the water surface. The high versatility of the modeling environment enables calculations for any aircraft, for different states of the water surface, in several spectral ranges.

